entry_id
stringlengths 33
33
| published
stringlengths 14
14
| title
stringlengths 14
193
| authors
sequencelengths 1
1.14k
| primary_category
stringclasses 125
values | categories
sequencelengths 1
6
| text
stringlengths 12
495k
|
---|---|---|---|---|---|---|
http://arxiv.org/abs/2409.03559v1 | 20240905141848 | Nonlinear identifiability of directed acyclic graphs with partial excitation and measurement | [
"Renato Vizuete",
"Julien M. Hendrickx"
] | math.OC | [
"math.OC",
"cs.SY",
"eess.SY"
] |
Enabling Practical and Privacy-Preserving Image Processing
Chao Wang
[email protected]
Worcester Polytechnic Institute
Shubing Yang
[email protected]
University of Washington
Xiaoyan Sun
[email protected]
Worcester Polytechnic Institute
Jun Dai
[email protected]
Worcester Polytechnic Institute
Dongfang Zhao
[email protected]
University of Washington
=============================================================================================================================================================================================================================================================================================
empty
§ ABSTRACT
We analyze the identifiability of directed acyclic graphs in the case of partial excitation and measurement. We consider an additive model where the nonlinear functions located in the edges depend only on a past input, and we analyze the identifiability problem in the class of pure nonlinear functions satisfying f(0)=0. We show that any identification pattern (set of measured nodes and set of excited nodes) requires the excitation of sources, measurement of sinks and the excitation or measurement of the other nodes. Then, we show that a directed acyclic graph (DAG) is identifiable with a given identification pattern if and only if it is identifiable with the measurement of all the nodes. Next, we analyze the case of trees where we prove that any identification pattern guarantees the identifiability of the network. Finally, by introducing the notion of a generic nonlinear network matrix, we provide sufficient conditions for the identifiability of DAGs based on the notion of vertex-disjoint paths.
§ INTRODUCTION
Systems composed by single entities or units that interact in a network to generate a more complex global behavior are ubiquitous <cit.>. Many of these interactions are characterized by dynamics located at the level of edges, and the identification of these dynamics is essential to study the evolution, analyze the stability, and design control actions. Since the size of the networks can be really large and the design of experiments generally involves an economic cost, it is important to derive identifiability conditions that specify which nodes must be excited and measured to guarantee the identification of the network.
While the identifiability of linear interactions modeled by transfer functions have been extensively analyzed in recent years <cit.>, few works have focused in nonlinear networks due to the complexity of the dynamics. The case of nonlinear identifiability where all the nodes can be excited (full excitation) has been analyzed in <cit.>, where surprisingly, the identifiability conditions are weaker than in the linear case when pure nonlinear functions are considered. These differences between the linear and nonlinear cases can be explained by the consequences of the superposition principle that allows the mixing of certain information only in the linear case.
In many scenarios, due to physical limitations, some nodes might not be available to be excited, or the cost associated with the excitation of a node (actuator) might be greater than the cost associated with the measurement of a node (sensor). For instance, in an electrical network, the excitation of a node might require a power source while the measurement of a node might require only a sensor.
Therefore, it is important to relax the full excitation assumption and guarantee at the same time the identifiability of all the network for more complex identification patterns (combination of excited nodes and measured nodes).
The identifiability of linear networks in the case of partial excitation and measurement has been well analyzed in several works <cit.>, where the symmetry excitation/measurement plays an important role. Nevertheless, even in the linear case, many problems are still open such as the minimum number of nodes that must be excited and measured, a characterization of the identification patterns that guarantee identifiability of the network, among others.
In this work, we analyze the identifiability of nonlinear networks in the case of partial excitation and measurement. We consider a model where the dynamics depend only on a past input, and we restrict the analysis to the case of directed acyclic graphs (DAGs) and pure nonlinear surjective functions.
We show that any identification pattern requires the excitation of sources, the measurement of sinks, and the excitation or measurement of the other nodes of the network. Then, we show that a DAG is identifiable with an identification pattern if and only if it is identifiable with the measurement of all the nodes, which is different from the linear case. In the case of trees, we show that there is a symmetry between the identifiability conditions for the full excitation and full measurement cases and that any identification pattern guarantees identifiability. Unlike the linear case, for more general DAGs, we show that the symmetry does not hold, and we derive sufficient conditions for identifiability based on the notion of vertex-disjoint paths and a generic nonlinear network matrix.
§ PROBLEM FORMULATION
§.§ Model class
We consider a network characterized by a weakly connected digraph G=(V,E) composed by a set of nodes V={ 1,…,n} and a set of edges E⊆ V× V. The output of each node i in the network is given by:
y_i^k=∑_j∈𝒩_if_i,j(y_j^k-m_i,j)+u_i^k-1, for all i∈ V,
where the superscripts of the inputs and outputs denote the corresponding values at the specific time instants, the delay m_i,j∈ℤ^+ is finite, f_i,j is a nonlinear function, 𝒩_i is the set of in-neighbors of node i, and u_i is an arbitrary external excitation signal. If a node is not excited, its corresponding excitation signal is set to zero. Fig. <ref> presents an illustration of the model considered in this work.
Our goal is to analyze a general nonlinear dynamics, but in this preliminary work, we will focus on the effect of the nonlinearities. The model (<ref>) corresponds to a generalized version of the nonlinear static model in <cit.>, where m_i,j can take a value different from 1.
The nonzero functions f_i,j between the nodes define the topology of the network G, forming the set of edges E. We do not consider multiple edges between two nodes since they would be indistinguishable and hence unidentifiable.
The topology of the network is known, where the presence of an edge implies a nonzero function.
Assumption <ref> implies that we know which nodes are connected by nonzero functions.
The objective of this work is to determine which nodes need to be excited and/or measured to identify all the nonlinear functions in the network. Our aim is to determine the possibility of identification and not to develop an algorithm or verify the accuracy of other identification methods.
Similarly to <cit.>, for the identification process we assume that the relations between the signals of excited nodes and the outputs of measured nodes have been perfectly identified. In addition, we restrict our attention to networks that do not contain any cycle (i.e., directed acyclic graphs). This implies that when we measure a node i, we obtain an identification of the function F_i:
y_i^k=u_i^k-1+F_i(u_1^k-2,…,u_1^k-M_1,…,u_n_i^k-2,…,u_n_i^k-M_n_i),
1,…,n_i∈𝒩^e→ i,
where 𝒩^e→ i is the set of excited nodes with a path to the node i. The function F_i determines the output of the node i based on the excitation signals and only depends on a finite number of inputs (i.e., M_1,…,M_n_i are finite) due to the finite delays m_i,j and the absence of cycles.
With a slight abuse of notation, we use the superscript in the function F_i^(s) to denote that all the inputs in (<ref>) are delayed by s:
F_i^(s)=F_i(u_1^k-2-s,…,u_1^k-M_1-s,…,u_n_i^k-2-s,…,u_n_i^k-M_n_i-s).
§.§ Identifiability
We define an identification pattern as a pair (^e,^m) where ^e⊆ V is the set of excited nodes and ^m⊆ V is the set of measured nodes.
We define the number of identification actions as N^m,e=^e+^m.
Clearly, if a node i is excited and measured at the same time (i.e., i∈^e and i∈^m), this increases N^m,e by 2.
Next, we define the relationships between the measurements of the nodes and the functions f_i,j.
Given a set of measured nodes 𝒩^m, the set of measured functions F(𝒩^m) associated with 𝒩^m is given by:
F(𝒩^m):={F_i | i∈𝒩^m}.
We say that a function f_i,j associated with an edge satisfies F(𝒩^m) if f_i,j can lead to F(𝒩^m) through (<ref>).
Since the identifiability problem can be hard or unrealistic for general functions, we restrict the problem to a certain class of functions : the functions associated with the edges belong to and that the identifiability is considered only among the functions belonging to .
Given a set of functions { f }={f_i,j∈ | (i,j)∈ E} that generates F(^m) and another set of functions {f̃}={f̃_i,j∈ | (i,j)∈ E} that generates F̃(^m). An edge f_i,j is identifiable in a class if F(^m)=F̃(^m), implies that f_i,j=f̃_i,j.
A network G is identifiable in a class if
F(^m)=F̃(^m),
implies that
{ f }={f̃}.
In this work, we will consider analytic functions with a Taylor series that converges to the function for all x∈ (possibly with a finite radius of convergence).
In the full excitation case <cit.>, it has been proved that a static component of a function could make the identifiability problem unsolvable. Also, when linear functions are allowed in a class , the identifiability conditions are similar to the linear case. However, it has been observed that the presence of nonlinearities change the identifiability conditions <cit.>. For this reason, we will focus on the class of pure nonlinear functions <cit.>.
Let be the class of functions f:→ with the following properties:
* f is analytic in .
* f(0)=0.
* The associated Taylor series f(x)=∑_n=1^∞ a_nx^n contains at least one coefficient a_n≠ 0 with n>1.
* The range of f is .
Therefore, in the rest of this work we will assume that f_i,j∈.
§ NECESSARY CONDITIONS FOR IDENTIFIABILITY
Since the sources and sinks play an important role in directed acyclic graphs, we first provide necessary conditions that these nodes must satisfy to guarantee identifiability.
For identifiability of a DAG, it is necessary to measure all the sinks and excite all the sources.
The necessity of the measurement of all the sinks has been proved in <cit.>. For the necessity of the excitation of the sources, let us consider an arbitrary source i.
The measurement of an out-neighbor j of the source i provides the output:
y_j^k =∑_ℓ∈_j f_j,ℓ(y_ℓ^k-m_j,ℓ)+u_j^k-1
=f_j,i(u_i^k-m_j,i-1)+∑_ℓ∈_j∖{i} f_j,ℓ(y_ℓ^k-m_j,ℓ)+u_j^k-1.
Since the source i is not excited, (<ref>) becomes:
y_j^k =f_j,i(0)+∑_ℓ∈_j∖{i} f_j,ℓ(y_ℓ^k-m_j,ℓ)+u_j^k-1
=∑_ℓ∈_j∖{i} f_j,ℓ(y_ℓ^k-m_j,ℓ)+u_j^k-1.
Notice that any other function f̃_j,i∈ satisfies (<ref>), which implies that it is not possible to identify the edge f_j,i.
Thus, the excitation of all the sources is necessary for identifiability of the network.
Next, we are interested in determining conditions that other nodes of the network must satisfy to guarantee identifiability.
For identifiability of a DAG, it is necessary to either excite or measure each node.
By Lemma <ref>, the excitation of sources and the measurement of sinks is necessary.
Let us consider an arbitrary node i in a network which is neither a source nor a sink. Let us assume that we can excite or measure the other nodes and all the edges have been identified except the incoming and outgoing edges of i. The measurement of an out-neighbor j of the node i is given by:
y_j^k =u_j^k-1+f_j,i(y_i^k-m_j,i)+∑_p∈_j∖{i} f_j,p(y_p^k-m_j,p)
=u_j^k-1+f_j,i(∑_ℓ∈_i f_i,ℓ(y_ℓ^k-m_j,i-m_i,ℓ))+∑_p∈_j∖{i}f_j,p(y_p^k-m_j,p).
Notice that the same output (<ref>) can be obtained with the functions f̃_i,ℓ(x)=γ f_i,ℓ(x) and f̃_j,i(x)=f_j,i(x/γ), with γ≠ 0, which implies that the incoming edges and outgoing edges of i cannot be identified. Thus, all the nodes must be either excited or measured.
Similarly to the linear case <cit.>, Proposition <ref> shows that we need at least N^m,e=n to guarantee the identifiability of DAGs in . Nevertheless, unlike the linear case, where N^m,e>n could be necessary for some DAGs <cit.>, in the nonlinear case, we have a stronger result.
There exists an identification pattern with =n that guarantees the identifiability of a DAG.
It is a direct consequence of the case of full excitation where we can guarantee identifiability by measuring all the sinks and exciting all the other nodes <cit.>.
However, the full excitation case could be restrictive since in many situations, the excitation of all the nodes could not be possible, or the cost of exciting a node could be significantly higher than the cost of measuring a node. For this reason, we are interested in other identification patterns (^e,^m) that guarantee identifiability of a DAG with only =n.
In the rest of this work, we will consider that in any identification pattern, all the sources are excited, all the sinks are measured, and the other nodes are either excited or measured.
§ FULL MEASUREMENT EQUIVALENCE
In the next proposition, we will prove a link between any identification pattern and the full measurement case.
A DAG is identifiable in the class with the identification pattern (^e,^m) if and only if it is identifiable with the identification pattern (^e,V).
Before presenting the proof of Proposition <ref>, we introduce two lemmas.
In the full excitation case, given a DAG G and a node j with only one path to its out-neighbor i. Let us assume that j has been measured and the edge f_i,j is identifiable. The edges f_i,ℓ and functions F_ℓ for ℓ∈_i, are identifiable in G if they are identifiable in the induced subgraph [An induced subgraph G_S of a digraph G is a subgraph formed by a subset S of the vertices of G and all the edges of G that have both endpoints in S.] G_V∖{ j} with the measurement of i.
Given three nonzero analytic functions f:→ and g,g̃:^p→. Let us assume that f is not linear
and they satisfy:
f(0)= g(0)= g̃(0)=0;
f(x+g(y))=
f(x+g̃(y))+h,
where h is an arbitrary function that does not depend on x.
Then g=g̃.
Proposition <ref>
Clearly, if the DAG is identifiable with (^e,^m), is also identifiable with (^e,V) since the full measurement case also includes the information of ^m used for the identifiability.
Now, let us consider an arbitrary DAG with a set of measured nodes ^m and a set of excited nodes ^e where all the sources are excited and all the sinks are measured. Since any DAG can be sorted in topological order [A topological ordering of a digraph is a linear ordering of its nodes such that for every edge (i,j), j comes before i in the ordering.] <cit.>, let us consider a particular ordering where all the nodes located between two consecutive measured nodes p and q, have a path to q.
Notice that if there is a topological ordering with a node w between p and q that does not have a path to q, the node w could be shifted to the right of q in this topological ordering.
(For example, see Fig. <ref>). The measurement of the node q provides an output of the type:
y_q^k =F_q
=∑_j∈_q^ef_q,j(u_j^k-m_q,j-1+F_j^(m_q,j))+∑_ℓ∈_q^mf_q,ℓ(F_ℓ^(m_q,ℓ)),
where _q^e is the set of excited in-neighbors of q and _q^m is the set of measured in-neighbors of q. Since F_q∈ F(^m), let us assume that there exists a set {f̃}≠{ f} such that F_q=F̃_q. Then, we obtain:
∑_j∈_q^ef_q,j(u_j^k-m_q,j-1+F_j^(m_q,j))+∑_ℓ∈_q^mf_q,ℓ(F_ℓ^(m_q,ℓ))=
∑_j∈_q^ef̃_q,j(u_j^k-m_q,j-1+F̃_j^(m_q,j))+∑_ℓ∈_q^mf̃_q,ℓ(F̃_ℓ^(m_q,ℓ)).
Clearly, the node before q denoted by q-1 has only one path to q. Then, we can set to zero all the inputs except by u_q-1^k-m_q,q-1-1, to have for all u_q-1^k-m_q,q-1-1∈:
f_q,q-1(u_q-1^k-m_q,q-1-1)=f̃_q,q-1(u_q-1^k-m_q,q-1-1),
which implies f_q,q-1=f̃_q,q-1. Then, by Lemma <ref>, we guarantee that F_q-1=F̃_q-1, which is equivalent to the measurement of the node q-1. Now, the identifiability problem becomes:
∑_j∈_q^e∖{q-1}f_q,j(u_j^k-m_q,j-1+F_j^(m_q,j))+∑_ℓ∈_q^mf_q,ℓ(F_ℓ^(m_q,ℓ))=
∑_j∈_q^e∖{q-1}f̃_q,j(u_j^k-m_q,j-1+F̃_j^(m_q,j))+∑_ℓ∈_q^mf̃_q,ℓ(F̃_ℓ^(m_q,ℓ)).
By Lemma <ref>, this new identifiability problem can be solved if the induced subgraph G_V∖{ q-1} obtained with the removal of q-1 is identifiable. In this subgraph G_V∖{ q-1}, the node q-2 has only one path to q, and we can use a similar approach to prove that f_q,q-2=f̃_q,q-2 and F_q-2=F̃_q-2. By following the same procedure, we can guarantee that F_v=F̃_v for all v between p and q, which corresponds to the mathematical constraints obtained with the measurement of these nodes.
In the case of the first measured node in the topological ordering, we apply the procedure until the first node in the topological ordering. Notice that this procedure can be used for any measured node such that all the nodes of the network can be covered. This implies that if the DAG is identifiable with (^e,V), it is also identifiable with (^e,^m).
Since any identification pattern guarantees the access to the information corresponding to the measurement of all the nodes in the network, the measurement of an excited node does not change the identifiability of a DAG.
In an identification pattern (^e,^m) that satisfies N^m,e>n, there must be at least a node which is excited and measured at the same time. If a DAG is identifiable with (^e,^m), it is also identifiable with another identification pattern (^e,^m) given by ^e= ^e and ^m=^m∖{ℓ}, where ℓ∈^e and ℓ∈^m. In this way, the identification pattern (^e,^m) can generate a family of subpatterns by removing excited nodes from the set of measured nodes.
Proposition <ref> illustrates that the information that can be obtained with the excitation of a node (extra input) is in some way more useful than the information that can be obtained with the measurement of a node.
This notion will be also corroborated with the absence of symmetry full excitation/full measurement in the case of general DAGs.
§ IDENTIFIABILITY CONDITIONS FOR PARTIAL EXCITATION AND MEASUREMENT
§.§ Identifiability of trees
We start by deriving identifiability conditions for trees in the case of partial excitation and measurement.
A tree is identifiable in the class with =n, if and only if all the sources are excited, all the sinks are measured and the other nodes are either excited or measured.
By Lemma <ref>, the excitation of all the sources and the measurement of all the sinks is necessary, and by Proposition <ref>, the other nodes must be either excited or measured. We will prove the sufficiency by induction. According to Proposition <ref>, the DAG is identifiable if (and only if) it is identifiable with ^m=V.
Let us assume that there exists a set { f }={f̃} such that F(^m)=F̃(^m).
Let us consider an arbitrary path of the tree that begins in a source and ends in a sink, and let us set to zero all the inputs except by the excitation signal corresponding to the source (node 1). For any node i in the path that is not a source, the output is given by:
y_i^k
=f_i,i-1(Θ_i-1(u_1^k-M_i,1)),
where u_1 is the excitation signal of the source with the corresponding delay M_i,1, and Θ_i-1 is a composition of nonlinear functions associated with the edges. Let us assume that all the edges until the node i-1 have been identified. This implies that we know the function Θ_i-1. Then, since F_i∈ F(^m), we have the identifiability problem:
f_i,i-1(Θ_i-1(u_1^k-M_i,1))=f̃_i,i-1(Θ_i-1(u_1^k-M_i,1)).
Since the range of all the nonlinear functions is , we have that the range of Θ_i-1 is also . Therefore, we obtain that f_i,i-1(x)=f̃_i,i-1(x) for all x∈, which implies that f_i,i-1=f̃_i,i-1.
Now, notice that when i=2, the function Θ_1 is the identity function, and clearly the edge f_2,1 can be identified. Then, by induction, all the edges of the path can be identified.
Finally, by considering other paths, we can identify all the tree.
This shows that similarly to the linear case <cit.>, any identification pattern with N^m,e=n, guarantees the identifiability of a tree. Notice that this also holds for path graphs and arborescences, which, unlike trees, necessarily have only one source.
According to Proposition <ref>, an identification pattern where all the sources are excited and the other nodes are measured, provides the identification of a tree. This corresponds to the symmetric result of the identifiability conditions for trees in the full excitation case where the measurement of the sinks is necessary and sufficient <cit.>. Therefore, symmetry holds for trees.
§.§ Generic nonlinear network matrix
One of the main challenges in the identification of networks is to distinguish the information that arrives to a node i through different paths. For instance, let us consider again the graph in Fig. <ref> with functions f_2,1(y_1^k-m_2,1), f_3,2(y_2^k-m_3,2) and f_3,1(y_1^k-m_3,1).
The measurement of node 3 provides the output:
y_3^k =f_3,1(y_1^k-m_3,1)+f_3,2(y_2^k-m_3,2)
=f_3,1(u_1^k-m_3,1-1)+f_3,2(f_2,1(u_1^k-m_2,1-m_3,2-1)),
If m_3,1=m_2,1+m_3,2, the output (<ref>) becomes:
y_3^k=f_3,1(u_1^k-m_2,1-m_3,2-1)+f_3,2(f_2,1(u_1^k-m_2,1-m_3,2-1))
which implies that the information coming through f_3,1 and f_3,2 cannot be distinguished for this particular choice of delays m_i,j.
Notice that for many networks,
there are particular choices of the delays associated with the nonlinear functions such that for each node i in the network, the excitation signals corresponding to the same excited node in (<ref>) have the same delays.
For instance, this is the case for multipartite digraphs like the one in Fig. <ref>, when the delays of all the nonlinear functions are the same. This particular setting is important for the generalization of results to more general dynamical models where the nonlinear functions can depend on the outputs of nodes with several delays and the mixing of information is possible <cit.>. Furthermore, the identification of networks with this particular choice of delays is more challenging since the number of excitation variables that can be used is smaller. For this reason, we will analyze this worst-case scenario and we will consider that (<ref>) is given by:
y_i^k=u_i^k-1+F_i(u_1^k-T_i,1,…,u_n_i^k-T_i,n_i), for
1,…,n_i∈𝒩^e→ i.
Regarding the measurement of the node 3 in (<ref>), the function F_3 will be of the form:
y_3^k=F_3(u_1^k-T_3,1),
where T_3,1=m_2,1+m_3,2+1.
In a more general DAG, a node can have more than one in-neighbor that shares several excitation signals.
Unlike the full excitation case, where each node has its own excitation signal, and identifiability conditions are valid for any function in a specific class, in the case of partial excitation and measurement, particular choices of functions could make a network unidentifiable.
Let us consider the DAG in Fig. <ref>. Let us assume that the functions f_3,1, f_3,2, f_4,1, f_4,2 and f_5,3 have been identified.
The measurement of the node 6 provides the output:
y_6^k=f_6,5(f_5,3(f_3,1(u_1^k-T_5,1)+f_3,2(u_2^k-T_5,2)))
+f_6,4(f_4,1(u_1^k-T_5,1)+f_4,2(u_2^k-T_5,2)).
Notice that if f_3,1=γ f_4,1 and f_3,2=γ f_4,2 with γ≠ 0, the functions f_6,5 and f_6,4 cannot be identified since f̃_6,5(x)=f_6,4(f_5,3^-1(x/γ)) and f̃_6,4(x)=f_6,5(f_5,3(γ x)) also generates the output (<ref>).
Example <ref> shows that a DAG could be identifiable for most of the functions, except for particular choices
(e.g., two in-neighbors p and q of a node with outputs of the form y_p=γ y_q for γ≠ 0).
This is similar to the concept of generic identifiability in the linear case, where a network can be identifiable except for a particular choice of transfer functions that remains in a set of measure zero <cit.>. In the context of nonlinear identifiability, we will introduce a notion similar to the linear case by defining a generic matrix.
For a network G, let us define the matrix J_∈^n× n, where [J_]_i,j=f'_i,j(y_j^k-m_i,j). This matrix can be considered as the adjacency matrix of a linear network G_, that preserves the same topology of G but with edges of the form f'_i,j(y_j^k-m_i,j). Since the output of any node can be expressed as a function of the excitation signals (see (<ref>)), the matrix J_ can also be expressed as a function of the excitation signals with the corresponding delays. Now, let us consider a modification of the matrix J_ where all the excitation signals corresponding to a node i, including different delays, are considered as the same input v_i. We call this new matrix the nonlinear network matrix and we denote it by J_G(), where ∈^^e.
Since J_G() is upper triangular in a DAG, we can guarantee that the matrix T_G():=(I-J_G())^-1=∑_n=0^∞ (J_G())^n is well defined <cit.>. Notice that an entry [T_G]_j,i corresponds to the sum of the product of the edges corresponding to all the walks from i to j in G_.
If we set free variables for the nonzero entries (i.e., edges) of the matrix J_G(), we say that for A,B⊆ V, the rank of a submatrix T_G^A,B() of T_G()=(I-J_G())^-1 is maximal if it is maximal with respect to the free variablesof J_G() <cit.>.
We say that the nonlinear network matrix J_G() associated to the network is generic if there exists a point ^*∈^^e such that any submatrix T_G^A,B(^*) of T_G(^*)=(I-J_G(^*))^-1 has maximal rank.
This generic nonlinear network matrix will be used in the derivation of identifiability conditions for general DAGs.
§.§ Identifiability of DAGs
Unlike trees, first, we will show that for more general DAGs, the identifiability conditions in the full excitation case do not have a symmetric equivalence in the full measurement case since the excitation of sources is not sufficient to guarantee identifiability.
Let us consider the DAG in Fig. <ref> with the functions f_2,1(x)=a_2,1x^3, f_3,1(x)=a_3,1x^3, f_4,2(x)=a_4,2x^3 and f_4,3(x)=a_4,3x^3, that belong to . If the symmetry full excitation/full measurement holds, the excitation of the source 1 should be sufficient to guarantee the identifiability of the network in the full measurement case.
The measurement of the node 2 provides the identification of f_2,1 and the measurement of the node 3 provides the identification of f_3,1. The measurement of the node 4 provides the output:
y_4^k
=f_4,2(f_2,1(u_1^k-T_4,1))+f_4,3(f_3,1(u_1^k-T_4,1))
=(a_4,2a_2,1^3+a_4,3a_3,1^3)(u_1^k-T_4,1)^9.
Notice that the output (<ref>) can also be generated with the functions f̃_4,2(x)=(a_4,2+γa_3,1^3/a_2,1^3)x^3 and f̃_4,3(x)=(a_4,3-γ)x^3 with γ≠ 0, which implies that it is not possible to identify the functions f_4,2 and f_4,3.
Unlike the DAG in Fig. <ref>, the DAG in Fig. <ref> is unidentifiable for any choice of the parameters of the cubic functions, which shows that the excitation of the sources is not sufficient to guarantee identifiability of DAGs in the class in the full measurement case. In order to discard cases where the identifiability is not possible only due to a particular choice of functions, we make the following assumption.
The nonlinear network matrix J_G() associated with the network is generic.
Based on the notions of vertex-disjoint paths introduced in the linear case, we will provide sufficient conditions that guarantee identifiability of a DAG with a specific identification pattern.
A group of paths are mutually vertex disjoint if no two paths of this group contain the same vertex.
Under Assumption <ref>, a DAG is identifiable in the class with =n, if:
* All the sources are excited, all the sinks are measured and the other nodes are either excited or measured.
* There are vertex-disjoint paths from excited nodes to the in-neighbors of each node.
Before presenting the proof of Theorem <ref>, we recall a technical result that will be used in the proof.
Let H⊂^p and let f:H→^q, where p≥ q. If f is continuously differentiable at the point a∈int H and the linear mapping f'(a):^p→^q is surjective, then the range of f contains a neighborhood of f(a).
Theorem <ref>
We will prove it by induction.
Let us consider an arbitrary DAG and an arbitrary topological ordering of the nodes.
According to Proposition <ref>, the DAG is identifiable if (and only if) it is identifiable with ^m=V.
Let us consider an arbitrary node i. Without loss of generality, let us denote the set of in-neighbors of the node i as _i={1,…,m}. If we set to zero the possible excitation signal u_i of the node i, the measurement of i is given by:
y_i^k =∑_j=1^m f_i,j(ϕ_i,j(u_1^k-T_i,1,…,u_n_i^k-T_i,n_i))
=F_i(ϕ_i,1,…,ϕ_i,m),
where we use the same notation as in (<ref>).
Let us assume that there exists a set { f }={f̃} such that F(^m)=F̃(^m). Since F_i∈ F(^m), we have the identifiability problem:
F_i(ϕ_i,1,…,ϕ_i,m)=F̃_i(ϕ̃_i,1,…,ϕ̃_i,m).
Let us assume that all the incoming edges of the nodes at the left of the node i in the topological ordering have been identified.
Then, the identifiability problem (<ref>) becomes:
F_i(ϕ_i,1,…,ϕ_i,m)=F̃_i(ϕ_i,1,…,ϕ_i,m).
Now, let us consider the mapping:
Φ_i:^n_i→^m
Φ_i(w_1,…,w_n_i)=(ϕ_i,1(w_1,…,w_n_i),…,ϕ_i,m(w_1,…,w_n_i)),
and the Jacobian matrix of Φ_i denoted by J_Φ_i() where =(w_1,…,w_n_i).
Notice that J_Φ_i() corresponds to the submatrix of the nonlinear network matrix T_G() where the rows are the in-neighbors of i and the columns are the nodes with the excitation signals. Under Assumption <ref>, there exists a point ^* such that T_G(^*) has maximal rank, which also determines a point ^* for J_Φ_i(^*).
According to <cit.>, since there are vertex-disjoint paths from excited nodes to the in-neighbors of i, the generic rank of J_Φ_i(^*) is m, which implies that J_Φ_i(^*) is surjective.
By virtue of Lemma <ref>, the range of Φ_i must contain a neighborhood of Φ_i(^*), which implies that (<ref>) holds in a set of positive measure. Then, by the Identifiy Theorem of analytic functions <cit.>, we guarantee that
F_i(z_1,…,z_m)=F̃_i(z_1,…,z_m) for all z_1,…,z_m∈,
since the range of all the functions ϕ_i,j is . From (<ref>), it follows that
f_i,j=f̃_i,j for all j=1…,m,
so that all the incoming edges of i can be identified.
Now, notice that for i=2 in the topological ordering, if there is an edge f_2,1, the function ϕ_2,1 is the identity function and the edge f_2,1 can be clearly identified. Then, by induction, the identifiablity analysis is valid for any node in the DAG. Thus, all the DAG is identifiable.
Theorem <ref> implies that a DAG satisfies symmetry full excitation/full measurement if there are vertex-disjoint paths from the sources to the in-neighbors of each node of the network.
This is also consistent with the case of trees since from all the sources we have vertex disjoint paths that reach the in-neighbors of all the nodes of a tree.
When the functions are linear, the space of functions that do not satisfy Assumption <ref> has a dimension smaller than the dimension of the system (i.e., set of measure zero) <cit.>. In the case of nonlinear functions, to the best of our knowledge, the dimension of the space of functions that do not satisfy Assumption <ref> is still not known and its full characterization is left for future work.
Theorem <ref> provides sufficient conditions for the identifiability of DAGs that are weaker than in the linear case.
For instance, let us consider the DAG in Fig. <ref>. In the linear case, the DAG requires N^m,e>n since the node 1 is a dource and a dink <cit.>, and must be measured and excited. However, in the nonlinear case, we only need N^m,e=n to guarantee identifiability. The node 3 has a vertex-disjoint path from the node 1 and the node 4 has a vertex-disjoint path from 2, so that the DAG satisfies the conditions of Theorem <ref> and it is identifiable with this identification pattern.
§ CONCLUSIONS
We analyzed the identifiability of DAGs with partial excitation and measurement. We showed that in the nonlinear case, a DAG is identifiable with an identification pattern if and only if it is identifiable with the full measurement of the nodes. For trees, we showed that the symmetry full excitation/full measurement holds and that any identification pattern guarantees identifiability.
Unlike the linear case, in the case of more general DAGs, we showed that symmetry does not hold. Finally, we introduced the notion of a generic nonlinear network matrix, and we derived identifiability conditions based on vertex-disjoint paths from excited nodes to the in-neighbors of each node in the
network.
For future work, it would be important to characterize the space of functions that do not satisfy Assumption <ref>.
Also, it would be interesting to analyze the case when the range of the functions is not all . Finally, a generalization of the analysis to more general digraphs where cycles might exist (F_i depends on an infinite number of inputs), and more general models where the nonlinear function can depend on inputs with several delays <cit.> is definitely an interesting area of research.
IEEEtran
|
http://arxiv.org/abs/2409.03525v1 | 20240905133650 | FrozenSeg: Harmonizing Frozen Foundation Models for Open-Vocabulary Segmentation | [
"Xi Chen",
"Haosen Yang",
"Sheng Jin",
"Xiatian Zhu",
"Hongxun Yao"
] | cs.CV | [
"cs.CV"
] |
FrozenSeg: Harmonizing Frozen Foundation Models for Open-Vocabulary Segmentation
Xi Chen 1 Haosen Yang 2 Sheng Jin 3 Xiatian Zhu 2 Hongxun Yao 1
1Harbin Institute of Technology 2 University of Surrey 3 Nanyang Technological University
==========================================================================================================================================================================
§ ABSTRACT
Open-vocabulary segmentation poses significant challenges, as it requires segmenting and recognizing objects across an open set of categories in unconstrained environments. Building on the success of powerful vision-language (ViL) foundation models, such as CLIP, recent efforts sought to harness their zero-short capabilities to recognize unseen categories.
Despite notable performance improvements, these models still encounter the critical issue of generating precise mask proposals for unseen categories and scenarios,
resulting in inferior segmentation performance eventually.
To address this challenge, we introduce a novel approach, , designed to integrate spatial knowledge from a localization foundation model (SAM) and semantic knowledge extracted from a ViL model (CLIP), in a synergistic framework.
Taking the ViL model's visual encoder as the feature backbone, we inject the space-aware feature into the learnable queries and CLIP features within the transformer decoder.
In addition, we devise a mask proposal ensemble strategy for further improving the recall rate and mask quality.
To fully exploit pre-trained knowledge while minimizing training overhead, we freeze both foundation models, focusing optimization efforts solely on
a lightweight transformer decoder for mask proposal generation – the performance bottleneck.
Extensive experiments demonstrate that advances state-of-the-art
results across various segmentation benchmarks, trained exclusively on COCO panoptic data, and tested in a zero-shot manner. Code is available at <https://github.com/chenxi52/FrozenSeg>.
§ INTRODUCTION
Image segmentation is a fundamental task in computer vision, enabling a wide range of applications
such as object recognition <cit.>, scene understanding <cit.>, and image manipulation <cit.>. However, traditional techniques are often tailored to specific datasets and segmentation tasks, resulting in a significant gap compared to human visual intelligence, which can perceive diverse visual concepts in the open world. To bridge this disparity, the concept of open-vocabulary segmentation has emerged. In this task, the segmenter is trained to recognize and segment instances and scene elements from any category, mirroring the broad capabilities of human perception.
Parallel to these efforts, significant advancements have been made in the field of purpose-generic image-level large-dataset pretrained Vision Language (ViL) representation learning, exemplified by foundational models such as CLIP <cit.> and ALIGN <cit.>.
These models are pivotal in understanding open scenes, as they leverage rich, descriptive language cues to enhance models’ ability to generalize across a wide array of unseen categories.
However, the absence of sufficient pixel-level annotations often leads to challenges in dense-level image-text alignment.
Recent studies have utilized these pre-trained ViL models for region classification <cit.>,
necessitating further training of a segmentation model <cit.> for precise pixel-level alignment, often resulting in inefficiencies and reduced effectiveness.
Alternatively, mask proposals generated with the CLIP visual encoder <cit.> are still suboptimal due to their limited fine-grained pixel-level understanding, which becomes a performance bottleneck as the mask proposal generation may overfit to the training classes, undermining the model’s generalizability to unseen classes. As shown in Fig. <ref>, existing methods such as FC-CLIP <cit.> struggle to generalize to unseen categories under different IoU thresholds, significantly limiting their practical utility.
In this paper, to overcome the above limitation, we introduce , a system that harnesses the capabilities of localization foundation model SAM to synergistically and efficiently enhance the coarse semantic features extracted from CLIP by incorporating generalized fine space-aware features.
has three key modules: (1) , which aggregate local space-aware features from SAM to serve as the spatial query for the corresponding mask region, enhancing the learnability of queries in a transformer decoder.
(2) ,
designed to enrich each pixel's CLIP feature by incorporating comprehensive global spatial information from SAM.
(3) , designed to further boost the quality of mask predictions based on the spatial information injection of SAM during training by ensembling with zero-shot mask proposals from SAM.
Building upon these modules, as shown in Fig. <ref>, the recall metrics of unseen categories on the challenging CityScapes dataset <cit.> showed significant improvement, consequently boosting PQ from 44.3 to 45.8. This upward trend is further supported by the results in PC-459 <cit.>, with mIoU increase from 17.3 to 19.7, validating the observed enhancement.
Our contributions can be summarized as follows:
(1) Addressing an acknowledged limitation in mask proposal quality, we introduce , a framework that incorporates foundational models to tackle the open-vocabulary segmentation task effectively.
(2) We propose three critical components: the , the , and the . These components are designed to enhance the integration of SAM features into the transformer decoder, facilitating generalized mask generation.
(3) Extensive experiments on various segmentation tasks demonstrate the superiority of our in generating mask proposals and achieving enhanced final performance, surpassing previous approaches.
§ RELATED WORKS
§.§ Open-vocabulary Segmentation
Open-vocabulary segmentation aims to segment objects even without seeing those classes during training. Previous approaches <cit.> typically employ a two-stage process, where an additional segmentation model generates class-agnostic mask proposals, which are then interacted with CLIP features.
In the context of open-vocabulary panoptic segmentation, which necessitates instance segmentation and interaction with multiple mask proposals <cit.>, methods such as OPSNet <cit.> combine query embeddings with the last-layer CLIP embeddings and applies an IoU branch to filter out less informative proposals. MaskCLIP <cit.> integrates learnable mask tokens with CLIP embeddings and class-agnostic masks.
Despite these advancements, challenges remain in effectively aligning segmenters with CLIP.
Alternatively, one-stage open-vocabulary segmentation faces challenges in extending vision-language models without dedicated segmentation models and addressing overfitting in an end-to-end format. CLIP’s pre-training on image-text pairs necessitates reconciling the region-level biases of the vision-language model. Research such as FC-CLIP and F-VLM <cit.> indicates that convolutional CLIP models generally exhibit superior generalization capabilities compared to ViT-based <cit.> counterparts, primarily due to their capability to handle larger input resolutions effectively. This finding highlights a promising direction for adapting CLIP for improved performance in segmentation tasks.
Despite these advancements, a fundamental issue persists: accurately generating mask proposals for unseen categories and scenarios. This challenge is compounded by the methods' dependence on a static Vision and ViL model, which is not equipped to discern intricate pixel-level details, thereby limiting its effectiveness in mask proposal generation.
§.§ Large-scale Foundation Models
Recent advances in large-scale foundation models, pre-trained on extensive datasets, have showcased exceptional zero-shot capabilities. Multi-modal foundation models, such as CLIP and ALIGN, exhibit strong generalization across various downstream tasks. Although these models are trained on image-level data with inherent noise, they can be effectively fine-tuned for various applications. Common strategies include prompt learning <cit.> and the use of adapters <cit.>, with CLIP often remaining frozen to preserve its broad generalization.
In the realm of the segmentation foundation models, significant progress is exemplified by the SAM model <cit.>, which leverages the extensive SA-1B dataset to achieve notable zero-shot generalization. SAM can adapt to new datasets without additional training by using input prompts.
Subsequent models, such as HQ-SAM <cit.> and GenSAM <cit.> have built upon this foundation by optimizing output tokens and integrating textual semantic reasoning, respectively. Despite these advancements, these methods often rely on manually crafted prompts, which constrains their wider applicability and scalability.
Recent research <cit.> has explored the use of bounding boxes generated through open-vocabulary detection methods as prompts, combining SAM and CLIP to exploit their complementary strengths in open-vocabulary segmentation. These approaches aim to combine SAM’s zero-shot generalization capabilities with CLIP’s robust feature representations. Despite these efforts, significant challenges remain in achieving fully automatic open-vocabulary segmentation and transitioning from instance segmentation to semantic and panoptic segmentation.
§ METHOD
Our objective is to achieve efficient open-vocabulary segmentation using frozen foundation models. In this section, we start by defining the problem. Subsequently, we present our method, , which integrates frozen foundation models for open-vocabulary segmentation through two key components: the and the , as illustrated in Fig. <ref>. Finally, we detail our inference strategy, the .
§.§ Problem Definition
Open-vocabulary segmentation involves training with ground-truth masks corresponding to a predefined set of class labels, 𝐂_train.
During testing, the model encounters a different set of class labels, 𝐂_test, which includes novel classes not seen during training. This process requires segmenting images in an open-world context, where the model must categorize pixels into semantic classes for semantic segmentation, identify individual instances for instance segmentation, or combine both for panoptic segmentation. The notation 𝐂 represents either 𝐂_train or 𝐂_test, depending on whether the phase is training or testing.
§.§ Our Approach
Overall Architecture
Following the approach of <cit.>, we adopt Mask2Former <cit.> as our framework. A set of N learnable queries that represent all things and stuff in the input image is processed through the transformer decoder to get mask predictions m.
To adapt the framework for open vocabulary segmentation, we replace the original classification layer with the text embeddings derived from the CLIP text encoder, resulting in class prediction p_d, where d denotes the mask detector.
Post-training, the embeddings for each mask and its corresponding category text are projected into a shared embedding space, facilitating effective categorization within the open-vocabulary framework.
In line with <cit.>, we adopt the convolution-based CLIP visual encoder as our image feature extractor, leveraging its pre-trained, frozen weights to obtain high-resolution semantic information.
To address the limitations of CLIP’s coarse features, we introduce two key modules: the and the . These modules integrate the spatial features from SAM into the mask proposal generation process, as depicted in Fig. <ref>.
Unlike <cit.>, which incorporates multi-level spatial information into the vision transformer, our injectors focus on infusing spatial information directly into mask queries.
Additionally, we propose the to further enhance segmentation performance during inference.
We detail our approach below:
Query Injector To improve local spatial understanding, we introduce the Query Injector, which enhances the learnable query with space-aware features derived from SAM.
The transformer decoder uses masked multi-head attention to bolster cross-attention between the image's foreground region and the learnable queries.
This mechanism facilitates the integration of both content and spatial information within the query, a concept supported by prior studies such as <cit.>.
However, capturing detailed spatial information remains challenging when the backbone is frozen.
To address this challenge, we devise the Query Injector, which leverages newly generated masks at each decoder layer to pool and transform SAM visual features into a spatial query. The process for generating the spatial query is defined as follows:
x_l = f(pool(M_l, ℱ_sam))
Here, l represents the layer index in the transformer decoder, f denotes a linear projection function, and pool refers to the mask pooling operation. ℱ_sam represents the SAM-derived image features.
This spatial query is specifically designed to concentrate on a region encompassing the mask region. Subsequently, the spatial query is integrated with the learnable query via element-wise addition.
Feature Injector
To refine the CLIP features for mask generation on a global scale, we introduce the Feature Injector, which uses the multi-head cross-attention mechanism (MHCA) as detailed in <cit.>. This mechanism is renowned for its effectiveness in amalgamating diverse information. In our approach, we extend MHCA to enhance the coarse semantic features from CLIP.
Specifically, the Feature Injector integrates semantic content from CLIP with pixel-level spatial awareness from SAM, providing a more nuanced understanding at the pixel scale.
The mathematical formulation of this feature integration is as follows:
ℱ = SoftMax( f_q (ℱ_clip) · f_k( ℱ_sam)/√(D)) · f_v(ℱ_sam)
Here, f_q, f_k, and f_v are linear projection functions in MHCA. ℱ_clip and ℱ_sam represent the features extracted from CLIP and SAM, respectively, while D denotes the dimensionality of the projected features.
Inference Strategy
Previous works such as <cit.> have validated the efficacy of mask pooling on CLIP features within class ensemble methodologies to enhance open-vocabulary segmentation capabilities. Building on these techniques, our approach introduces a novel mask ensemble strategy. As illustrated in Fig. <ref>, our OpenSeg Ensemble Module initiates with the class ensemble process:
p_i(j) =
(p_i,d(j))^(1-α)· (p_i,cl(j))^α, if j∈𝐂_train
(p_i,d(j))^(1-β)· (p_i,cl(j))^β, else
Here, p_i(j) denotes the combined probability for class j in proposal i, integrating inputs from both the detector (p_i,d) and CLIP (p_i,cl).
The mask predictions r for N queries are then generated by aggregating the products of these probability-mask pairs:
∑_i=1^N p_i(c)· m_i[x,y] = r ∈ℝ^C× HW.
Drawing inspiration from the class ensemble, we utilize zero-shot mask predictions from SAM to perform a mask ensemble on r. The SAM masks, denoted as M_sam={m̂_i}_i=1^N, are generated by uniformly sampling points prompts across the image.
These masks are used to pool CLIP features and derive classification scores P_sam = {p̂_i}_i=1^N by aligning with CLIP text features. A threshold ξ=0.5 is applied to filter these masks based on the maximum probability, resulting in the selected probability-mask pairs {(m̂_i,p̂_i) |argmax_cp̂_i > ξ}_i=1^N'.
In the context of semantic segmentation, the SAM mask predictions, denoted as r̂, are computed similarly as follows: r̂ = ∑_i=1^N'p̂_i(c)·m̂_i[x,y]. The final mask prediction, r^', is obtained by integrating the predictions r and r̂ through a mask ensemble approach:
r'[x,y](j) =
r[x,y](j), if j∈𝐂_train
(1-ϵ)*r[x,y](j) + ϵ * r̂[x,y](j), else
Subsequently, the final semantic segmentation results are determined by assigning each pixel [x,y] a class based on
argmax_c∈{1,...,|𝐂|} r'[x,y].
In the context of panoptic segmentation, the efficacy of the results significantly depends on the performance of individual queries. This dependency reduces the effectiveness of integrating class-agnostic mask predictions. Therefore, the final results are determined by assigning each pixel to one of the N predicted probability pairs. This assignment is performed through the following expression: argmax_i:c_i ≠∅p_i(c_i)· m_i[x,y]. In this expression, c_i represents the most likely class label, which is determined by c_i=argmax_c∈{1,...,|𝐂|,∅} p_i(c). Here, ∅ denotes the class of 'no object'.
§ EXPERIMENTS
§.§ Datasets and Evaluation Protocal
For training, we use the COCO panoptic <cit.> dataset, which includes 133 classes. Our evaluation covers open-vocabulary panoptic, semantic, and instance segmentation tasks in a zero-shot setting spanning several test datasets. In semantic segmentation, we access performance on ADE20K dataset <cit.>, which includes both a subset with 150 classes (A-150) and a full version with 847 classes (A-847). Additionally, we evaluate on PASCAL VOC <cit.>(PAS-21), which has 20 object classes and one background class, and PASCAL-Context <cit.>, an extension of PASCAL VOC with 459 classes (PC-459). For panoptic segmentation, the datasets used are ADE20K <cit.>, Cityscapes <cit.>, Mapillary Vistas <cit.>, and BDD100K <cit.>, alongside the closed-set COCO validation dataset.
For instance segmentation, we choose to evaluate LVIS v1.0 <cit.>, which features 337 rare categories.
The evaluation metrics include mean intersection-over-union (mIoU) and frequency-weighted IoU (FWIoU) that offer a comprehensive evaluation of overall performance for semantic segmentation, panoptic quality (PQ), average precision (AP), and mIoU for panoptic segmentation, as well as AP for instance segmentation.
§.§ Baselines
We compare with multiple state-of-art approaches as follows: OPSNet <cit.>,
MaskCLIP <cit.>, MasQCLIP <cit.>, ODISE <cit.>, CLIPSelf <cit.>, FC-CLIP <cit.>, Ovseg <cit.>, SAN <cit.> , RegionSpot <cit.> and Open-Vocabulary SAM <cit.>.
§.§ Implement Details
We use 250 queries for both training and testing, with CLIP serving as the backbone for open-vocabulary text-image alignment. Specifically, we employ the RN50x64 and ConvNext-Large <cit.> versions of CLIP. Additionally, we validate our approach using the ViT-Base <cit.> model from SAM, with the selection rationale detailed in the .
To obtain multi-level semantic features, we apply feature pyramid networks (FPN) after CLIP. SAM processes input images 𝐈∈ℝ^H, W, where H = W = 1024. As demonstrated by PlainViT <cit.>, the deepest feature of ViT contains sufficient information for multi-scale object recognition, and given that SAM is frozen, we do not use FPN for SAM. Instead, we utilize a single convolution layer to project the features to the necessary resolution and then feed them into a single-scale deformable attention transformer <cit.> as the pixel decoder in the SAM branch.
Our transformer decoder comprises L=9 layers. Feature maps with resolutions of 1/8, 1/16, and 1/32 are processed by successive decoder layers in a round-robin fashion. During training, we follow the strategy and losses outlined in FC-CLIP, selecting the model from the final iteration for our primary results. Training is conducted on 4 Tesla A100 GPUs with a batch size of 16.
§.§ Inference Details
During inference, we adhere to the FC-CLIP by resizing images such that the shortest side is 800 pixels for general datasets and 1024 for the Cityscapes and Mapillary Vistas datasets.
We employ a 32x32 grid of points research to generate masks from the SAM ViT-Huge model. The default parameters are set as follows: α=0.4 and β=0.8 in Eq.(<ref>), and a mask ensemble parameter ϵ=0.2 in Eq. (<ref>).
§.§ Evaluation on Open-vocabulary Segmentation
§.§.§ Open-vocabulary Panoptic Segmentation
Tab. <ref> presents a comparison of with leading methods in zero-shot open-vocabulary panoptic segmentation. Our approach, with RN50x64, notably surpasses other works and the baseline FC-CLIP, achieving improvements of +1.8 PQ, +0.3 AP, and +2.0 mIoU on ADE20K; +2.6 PQ, +1.6 AP, and +0.9 mIoU on Cityscapes; and +4.8 mIoU on BDD100K.
Additionally, the configuration with ConvNeXt-L delivers enhanced performance on open-set datasets without compromising results on the closed-set COCO validation dataset. Significant improvements include +0.8 PQ and +1.6 mIoU on ADE20K, +1.5 PQ, +0.5 AP, and +0.8 mIoU on Cityscapes, +0.4 PQ on Mapillary Vistas, and +1.4 PQ and +2.9 mIoU on BDD10K.
Qualitative results of panoptic segmentation on Cityscapes, depicted in Fig. <ref>, show improvements in segmentation, particularly for small objects, entity recognition, and novel class recognition. Additional details and additional results are available in the .
§.§.§ Open-vocabulary Semantic Segmentation
Tab. <ref> presents a comparative analysis of in cross-dataset open-vocabulary semantic segmentation. Using the RN50x64 backbone, significantly outperforms the baselines. Compared to FC-CLIP, achieves gains of +3.1 mIoU and +10.1 FWIoU on PC-459, +1.0 mIoU and +8.2 FWIoU on A-847, and +2.0 mIoU and +4.1 FWIoU on A-150. These improvements are also reflected in the ConvNeXt-L configuration. Overall, sets a new benchmark in performance across the datasets PC-459, A-847, and A-150. It is important to note that PAS-21’s categories fully overlap with those of the training dataset, which suggests that FC-CLIP may overfit to base classes and thus limit its generalization.
For qualitative insights, refer to Fig. <ref>, where delivers segmentations that are contextually more accurate compared to both the baseline and ground truth annotations, demonstrating its effective handling of complex scenes. Additional results are available in the .
§.§.§ Open-vocabulary Instance Segmentation
Tab. <ref> presents results for rare categories in the LVIS dataset. We compare with approaches that integrate SAM with CLIP for open-vocabulary segmentation tasks, specifically RegionSpot and Open-Vocabulary SAM. Both of these methods rely on proposals as location prompts and are trained on datasets beyond the COCO panoptic to align the models. Our method achieves the highest performance, with an improvement of +0.6 AP over FC-CLIP.
§.§ Ablation Studies
We perform a series of ablation studies on our method. All findings are presented using the ConvNext-L version of CLIP and the ViT-B version of SAM.
§.§.§ The Effectiveness of Each Component
We perform ablation studies to assess the effectiveness of each component of our method. Tab. <ref> presents the results of these ablations on three challenging out-of-vocabulary datasets, with rows 2-6 highlighting the contribution of each component to overall performance.
Specifically, row 1 illustrates the scenario where only proposals from SAM are utilized. In this setup, SAM masks are used to pool CLIP features, providing basic semantic understanding without explicit semantic guidance. This configuration achieves approximately 6.5 mIoU on PC-459 and A-847, and 25.4 mIoU on ADE20K, demonstrating the fundamental generalization capability of SAM masks.
Therefore, we integrate to address the limitation of unseen mask proposals. This enhancement is evident in the comparison between ablation cases (2) and (3), and also between cases (5) and (6). Fig. <ref> provides a clearer visual comparison, showing improved segmentation accuracy for objects such as ‘people’ and ‘fences’ in the PC-459(2) example, particularly in columns 3 and 4.
§.§.§ Where to Inject
Tab. <ref> presents an ablation examining the impact of layer insertion for the within the transformer decoder, which consists of a total of 9 layers. Since SAM Vision Transformers provide the final layer features as the most relevant feature maps, we explore the optimal layer for query injection based on its interaction with corresponding CLIP feature maps. The results indicate that injecting SAM query features at layers l=3, 6, 9 yields the most significant improvement, with layer l+1 leveraging the newly introduced queries for further refinement.
For the , due to the exponential increase in computational complexity associated with cross-attention computations as feature size expands, we restrict the application of the Feature Injector to 1/32-sized features, specifically at layers l=1, 4, 7.
§.§.§ Speed and Model Size
As shown in Tab. <ref>, incorporating SAM along with two custom injectors results in a slight reduction in inference speed, manifesting as a 0.56 and 0.09 decrement in frames per second (FPS) during single-image processing. Despite this, the adjustment leads to a notable improvement of 1.8 PQ on the Cityscapes datasets, with minimal impact on COCO. This reflects a well-balanced trade-off between enhanced performance and computational efficiency. Compared to FC-CLIP, our model requires a modest increase of only 5.5M training parameters and 93.5M frozen parameters, demonstrating its effectiveness.
§ CONCLUSION
In this study, we introduced , a method designed to enhance mask proposal quality in open-vocabulary segmentation by leveraging SAM’s dense-prediction capabilities. employs the and modules to integrate SAM visual features with learned queries and CLIP visual features, thereby refining mask proposals through multiple transformer decoder layers. Additionally, we introduce the for inference, which aggregates zero-shot SAM masks to improve out-vocabulary predictions further. Our experiments demonstrated that significantly enhances the mask proposal quality in open-vocabulary scenarios, highlighting its versatility.
§ APPENDIX
Our supplementary material begins with the in-depth analysis of the FC-CLIP baseline's performance. Next, we explore more numerical results across various datasets, including evaluating mask recalls and an ablation study focusing on the co-training size of SAM. Finally, we present further qualitative visualization findings, featuring mask attention maps and segmentation results for two challenging datasets, A-847 <cit.> and PC-459 <cit.>.
§.§ Further discussion of FC-CLIP baseline
FC-CLIP adopts a checkpoint selection strategy based on the PQ accuracy within the ADE20K benchmark <cit.>, a dataset known for its complexity with 150 diverse classes. Upon executing the FC-CLIP code and analyzing the final-round results, marked by * in Tab. <ref>, we observed tendencies towards overfitting and a subsequent decline in generalizability, This was accompanied by reduced effectiveness across various other open-vocabulary evaluation datasets, although there was improved performance on the COCO validation dataset.
Despite FC-CLIP's strategies to mitigate overfitting, the method's effectiveness in open-vocabulary scenarios, especially in the context of ADE20K's datasets, remains questionable. This raises concerns about the transparency of its model selection methodology. In contrast, our proposed framework, , which leverages the last iteration's checkpoints for inference, performs comparably on both the ADE20K and A-847 datasets. It demonstrates consistent and robust performance across all tested scenarios, thus eliminating the need for selective model evaluation.
§.§ More numerical results
§.§.§ Comparative recall across datasets
In Tab. <ref>, we present the recall rates for our method and FC-CLIP across four datasets. This comparison is between the predicted mask proposals and class-agnostic semantic ground truth. We detail recall rates at IoU thresholds of 0.5, 0.75, and 0.9. The results demonstrate that generally outperforms in generalizing to mask proposals for unseen classes.
§.§.§ Comparison with different SAMs
In Tab. <ref>, we provide detailed results of (w/o. mask ensemble), using ConvNeXt-L CLIP <cit.> alongside different size of co-trained SAM <cit.>: ViT-T (Tiny) <cit.>, ViT-B (Base), ViT-L (Large) and ViT-H (Huge). Across the board, the ViT-B configuration stands out, delivering better performance in our evaluations. We also provide visualizations of the k-means clustering results for feature embeddings from the SAM image encoders. The visualization demonstrates that ViT-B balances segmentation accuracy and connectivity, offering precise segmentation with good instance connectivity. ViT-T provides coarse boundaries, while ViT-L and ViT-H, though more precise, have reduced instance connectivity and may be less effective for panoptic segmentation with CLIP. Thus, ViT-B’s balanced performance makes it a robust choice.
§.§ More qualitative visualizations
§.§.§ Attention maps
To illustrate the refinement of query features facilitated by injectors, we identify the query with the highest confidence and present its corresponding attention map from the final cross-attention layer within the transformer decoder. We map the attention map back to the original image for visualization purposes. The results are depicted in Fig. <ref>. It is evident that our queries exhibit heightened attention towards both the object boundaries and intra-content regions, indicative of the effectiveness of our approach in mask proposal generation.
§.§.§ More results
We have expanded our visual comparisons in PC-459 dataset shown in Fig. <ref>, and the A-847 dataset, depicted in Fig. <ref>. In Fig. <ref>, it can be seen that our method generates more precise masks which are highlighted by red boxes.
Notably, we draw attention to the areas enclosed by white boxes, which exhibit coarse or imprecise annotations. For instance, the 'door' is overlooked in the first column, and the 'chair' annotations fail to precisely demarcate the chair legs. Meanwhile, in the second column, although the ground truth predominantly annotates the background as 'grass', a closer inspection reveals a composite of 'soil' and 'grass', with 'sidewalks' situated in the lower left quadrant.
Fig. <ref> exemplifies the efficacy of our proposed method in producing high-quality masks, extending across a diverse array of novel classes. These include but are not limited to, 'toys', 'painted pictures', 'baptismal fonts', 'altars', 'decorative elements', 'columns', 'pipes', and 'fluorescent lighting', among others.
ieee_fullname
|
http://arxiv.org/abs/2409.03386v1 | 20240905094521 | Movable Antennas: Channel Measurement, Modeling, and Performance Evaluation | [
"Yiqin Wang",
"Heyin Shen",
"Chong Han",
"Meixia Tao"
] | cs.IT | [
"cs.IT",
"eess.SP",
"math.IT"
] |
Movable Antennas: Channel Measurement, Modeling, and Performance Evaluation
Yiqin Wang,
Heyin Shen,
Chong Han, Senior Member, IEEE,
and Meixia Tao, Fellow, IEEE
Yiqin Wang and Heyin Shen are with Terahertz Wireless Communications (TWC) Laboratory, Shanghai Jiao Tong University, China (Email: {wangyiqin, heyin.shen}@sjtu.edu.cn).
Chong Han is with Terahertz Wireless Communications (TWC) Laboratory, also with Department of Electronic Engineering and the Cooperative Medianet Innovation Center (CMIC), Shanghai Jiao Tong University, China (Email: [email protected]).
Meixia Tao is with the Department of Electronic Engineering, Shanghai Jiao Tong University, Shanghai, China (E-mail: [email protected]).
September 9, 2024
==============================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
§ ABSTRACT
Since decades ago, multi-antenna has become a key enabling technology in the evolution of wireless communication systems. In contrast to conventional multi-antenna systems that contain antennas at fixed positions, position-flexible antenna systems have been proposed to fully utilize the spatial variation of wireless channels. In this paper, movable antenna (MA) systems are analyzed from channel measurement, modeling, position optimization to performance evaluation.
First, a broadband channel measurement system with physical MAs is developed, for which the extremely high movable resolution reaches 0.02 mm. A practical two-ray model is constructed based on the channel measurement for a two-dimensional movable antenna system across 32×32 planar port positions at 300 GHz.
In light of the measurement results, spatial-correlated channel models for the two-dimensional MA system are proposed,
which are statistically parameterized by the covariance matrix of measured channels.
Finally, the signal-to-interference-and-noise ratio (SINR)-maximized position selection algorithm is proposed, which achieves 99% of the optimal performance. The performance of different MA systems in terms of spectral efficiency are evaluated and compared for both planar and linear MA systems. Extensive results demonstrate the advantage of MAs over fixed-position antennas in coping with the multi-path fading and improving the spectral efficiency by 10% in a 300 GHz measured channel.
Terahertz communications, Movable Antennas, Channel measurement, Channel modeling.
§ INTRODUCTION
Over the past few decades, multi-antenna has become a key enabling technology in the evolution of wireless communication systems. From multiple-input multiple-output (MIMO) <cit.> in microwave systems to massive MIMO <cit.> in millimeter-wave (mmWave) systems and even ultra-massive MIMO (UM-MIMO) <cit.> in terahertz (THz) systems, multi-antenna technologies, by leveraging the degrees of freedom (DoFs) in the spatial domain, have been pursuing higher data rates and reliability to meet the demand of the future-generation wireless communications.
In contrast to conventional multi-antenna systems that contain antennas at fixed positions, the concept of position-flexible antennas is proposed to further utilize the spatial variation of wireless channels in a given region without increasing the number of antennas <cit.>. The flexibility in antenna position inside a spatial region improves the communication performance by adjusting the antenna to where is less impacted by the channel fading. Specifically, fluid antenna system (FAS) <cit.> and movable antennas (MAs) <cit.> are two implementations of position-flexible antennas. On one hand, the fluid antenna technology, also regarded as liquid antennas, applies software-controllable liquid materials and features the flexibility and reconfigurability in its shape and position <cit.>. On the other hand, MAs refer to antennas with the ability of physical motion in local, assigned regions <cit.>.
Till date, analytical channel models for position-flexible antenna systems have been developed <cit.>. In <cit.>, a deterministic channel model is proposed based on the superposition of multi-path components. The large amount of model parameters supports various analytical evaluation of the MA system, whereas the high complexity makes it difficult for the implementation. By contrast, authors in <cit.> proposed a spatially-correlated Rayleigh fading channel model, based on parameters that represent the analytical correlation between channels. The assumption of Rayleigh fading channels results in the real-valued covariance, which however is invalid for the complex-valued covariance of channel coefficients characterized from the measurement.
Moreover, in order to improve the performance and unlock the potential of position-flexible antenna systems, algorithms have been studied for antenna position selection <cit.>, beamforming design <cit.>, and joint optimization <cit.>.
Nevertheless, experimental evaluation of movable antenna systems with antenna position selection and beamforming is still absent.
To fill this research gap, in this paper, we provide a complete and practical assessment of movable antenna systems in the real-world environment, including the channel measurement, the channel model parameterized from measurement results, the port selection algorithm, and the beamforming design.
Specifically, we first develop a wideband channel measurement platform with a 0.2 mm-precision two-dimensional movable antenna system. Extensive channel measurement is carried out across 32×32 ports at 300 GHz. The variation of practical wireless channels across the different ports is elaborated in detail, by analyzing the line-of-sight (LoS) and the reflected transmission, as well as the multi-path fading attributed to their superposition. The practical two-ray model for each port is constructed and verified with the analytical result.
In light of the measurement results, spatial-correlated channel models for the two-dimensional MA system are proposed, statistically parameterized by the complex covariance matrix of measured channels.
Channel models are fundamentals for system design and performance evaluation. To this end, by distributing the given ports into uniform regions for each MA and maximizing the signal-to-interference-and-noise ratio (SINR), the antenna position selection algorithm is proposed, which is applicable to both planar and linear MA systems.
Finally, the beamforming algorithm is conducted and the performance, in terms of spectral efficiency, of different MA systems is evaluated and compared.
By coping with the multi-path fading, MA systems with the proposed uniform-region SINR-optimized position selection algorithm can improve the spectral efficiency by 11.48% across the 32×32 mm^2 area in the THz wireless channel. Besides, in view of the channel characteristic and the position selection algorithm, N× 1-type MA, better than 1× N or N× N types, can reach 99.57% of the optimal spectral efficiency obtained by traversing all candidate ports.
The remainder of this paper is organized as follows.
The wideband channel measurement with the MA system is introduced in Section <ref>. The measurement results and the practical two-ray model are also elaborated in this section. Then, in light of the measurement result, we propose the parameterized spatial-correlated channel model based on channel characterization of the complex covariance matrix in Section <ref>. To evaluate and compare the performance of MA systems, the uniform-region SINR-optimized antenna selection algorithm and the beamforming algorithm is performed for the MA system in Section <ref>. Finally, the paper is concluded in Section <ref>.
§ CHANNEL MEASUREMENT CAMPAIGN
In this section, we first introduce the vector network analyzer (VNA)-based channel measurement system with the movable antenna system at Tx. Then we describe the measurement conducted in a small anechoic chamber at 300 GHz. Finally, measurement results are discussed and the two-ray channel model is constructed for 1024 ports based on properties of the LoS ray and the reflected ray.
§.§ Measurement Deployment
§.§.§ Measurement platform
The channel measurement platform is composed of three parts, as shown in Fig. <ref>(a), including the computer (PC) as the control system, the displacement system, and the measuring system.
The measuring system consists of transmitter (Tx) and receiver (Rx) modules and the Ceyear 3672C VNA.
The VNA generates radio frequency (RF) and local oscillator (LO) sources. RF and LO signals are multiplied by 27 and 24, respectively in the THz module.
The transmit and received RF signals are mixed by the LO signal and down-converted to the reference intermediate frequency (IF) signal at Tx and the test IF signal at Rx, respectively.
Two IF signals are sent back to the VNA, and the transfer function of the device under test (DUT), including the wireless channel and the system hardware is calculated as the ratio of the two frequency responses. To eliminate the influence of the system hardware, the calibration is conducted, which is described in detail in our previous works <cit.>.
On one side, the VNA-based channel measurement system features extremely large bandwidth up to tens of GHz, which results in a high temporal resolution as large as tens of ps. On the other side, the measurement of MA systems requires the physical movement of antennas with a high spatial resolution. In the platform, the Tx is installed on a displacement system, which supports two-dimensional antenna movements in the x-z plane perpendicular to the Tx-Rx line-of-sight at the precision of 0.02 mm. The x and z axes correspond to the horizontal and vertical movement, respectively.
The PC alternately controls the movement of Tx through the displacement system and the measuring process through the VNA.
The measurement starts from the element at the left bottom corner, and all ports are scanned first horizontally (in the x-axis) and then vertically (in the z-axis). The measuring time for each position is about 2.3 s.
§.§.§ Measurement setup
In this measurement, we investigate the THz frequency band ranging from 260 GHz to 320 GHz, which covers a substantially large bandwidth of 60 GHz. As a result, the resolution in time is 16.7 ps, corresponding to the resolution of propagation path length equal to 5 mm, which can distinguish MPCs that are close in arrival time.
The frequency sweeping interval is 60 MHz, resulting in 1001 sample points at each Tx-Rx position and equivalently, the maximum detectable path length of 5 m, which is sufficient for the small-scale measurement and saves the measuring time in return.
Key parameters of the measurement are summarized in Table <ref>.
As shown in Fig. <ref>(b), the wireless propagation channel is confined in a 0.69 m long, 0.61 m wide, and 1.00 m deep anechoic chamber. A metal surface is vertically attached to one side surface of the chamber. Other inner surfaces are wrapped by absorbing materials to restrain the multi-path effect.
Tx and Rx are placed at two ends of the anechoic chamber.
In this measurement, 32×32 ports for the Tx antenna are measured with the spacing of 1 mm.
The Rx is static and aligned with the center of selectable ports at the Tx, whose distance is 0.86 m.
The measuring time is subject to the motion driven by the displacement platform, which takes about 1 hour to traverse 32×32 ports.
§.§ Measurement Results
Measurement results are discussed next, including the gain and the phase of the LoS ray and the metal surface-reflected ray across the 32×32 ports.
Besides, the multi-path fading caused by the superposition of the two rays is elaborated.
§.§.§ The LoS and the reflected ray
In the measurement, for each channel from one port at Tx to the Rx, the frequency-domain measurement derives 1001 samples in channel transfer function (CTF), which is transferred into the channel impulse response (CIR) by inverse Fourier transform (IFT).
The CIR result from the center of the Tx antenna to the Rx is shown in Fig. <ref>. Due to the high resolution of 16.7 ps, we can distinguish multi-path components in the time domain from the CIR result.
The LoS ray arrives at 2.8638 ns, corresponding to the traveling distance of 0.8591 m, and has the greatest path gain of -79.6 dB.
Right after the LoS rays comes the major reflected ray from the metal surface at 3.54645 ns, which travels 1.0639 m and has a second-greatest path gain of -89.95 dB.
Besides, since the two ends of the chamber are not covered by absorbing materials, we can also observe higher-order reflected rays. These rays are separated by 6 ns, corresponding to a back-and-forth distance between the Tx and the Rx along the y-axis inside the chamber.
The change of properties of the LoS ray across all port positions is shown in Fig. <ref>.
Starting from the center, properties of the LoS ray change evenly along all directions as the position becomes away from the center. The path gain decreases by 0.6686 dB and the phase changes by π from the center to the farthest corner.
The change of properties of the major reflection ray across all port positions is shown in Fig. <ref>.
The properties of the reflected ray vary differently along two axes. Specifically, the change along the x-axis is much more rapid than the counterparts along the z-axis. This is attributed to the scenario where the metal reflection surface is vertically placed on one side of the channel.
Specifically, the path gain changes periodically between -95 dB and -88 dB among horizontal positions along the x-axis. The period between two adjacent maxima is 6 mm.
By contrast, the path gain barely changes among vertical positions, as the path gain difference is less than 1 dB along the z-axis.
The phase of the reflected ray changes linearly along both axes, whereas the change rate is much larger between horizontal positions than between vertical positions.
§.§.§ Multi-path fading and the two-ray model
The magnitude and phase of the LoS ray and the major reflected ray are extracted, and then we construct a two-ray channel model as follows,
h(τ,f) = h_ LoS(τ,f)+h_ reflect(τ,f)
=α_ LoS(f)δ(τ-τ_ LoS) + α_ reflect(f)δ(τ-τ_ reflect),
where α_ LoS and α_ reflect denotes the complex gain of the LoS and the major reflected ray, respectively. τ_ LoS and τ_ reflect represents the arrival time.
The result is illustrated in Fig. <ref> and discussed as follows. Clearly, a sequence of power maxima and minima is observed along the x-axis among horizontal positions.
The classic two-ray model considers the wireless channel that contains the LoS ray and the ground-reflected ray. As shown in Fig. <ref>, by regarding the metal surface as the “ground”, the constructed model derived in the chamber can be translated into the classic two-ray model. As a result, the “horizontal” separation of the Tx-Rx antennas is d=√(d_0^2+Δ z^2), the receiver “height” is h_ r=0.324 m, and the transmitter “height” is h_ t=Δ x+0.324 m.
According to the simulation result of the classic two-ray model, multi-path fading occurs as h_ t<d<4h_ th_ r/λ, which accords with the measurement result. To be concrete, in this measurement scenario, d is dominant by d_0=0.86 m, as the horizontal and vertical movements Δ x and Δ z are mm-level. Besides, d<4h_th_r/λ is valid since λ is as small as 1 mm at 300 GHz. Therefore, a sequence of power maxima and minima can be observed in the constructed two-ray model derived from the measurement result.
§ SPATIAL-CORRELATED CHANNEL MODELS FOR TWO-DIMENSIONAL MOVABLE ANTENNA SYSTEMS
In this section, we propose the spatial-correlated channel model for the two-dimensional MA system. Specifically, the model is parameterized by the covariance matrix of measured channels. We start by the characterization of the complex-valued covariance matrix of the channel coefficient and the real-valued covariance matrix of the path gain. The simulation results of path gain and channel coefficient obtained from the channel models are compared with the measurement results, respectively.
Note that the real and the imaginary parts of the channel coefficient are generated independently in the Rayleigh fading model <cit.>, which results in the real-valued covariance. However, we characterize the complex-valued covariance of channel coefficient from the measurement, and therefore, the proposed channel model directly generates complex, spatial-correlated channel coefficients.
§.§ Characterization of Spatial Covariance Matrix
On account of port positions that are closely packed within several wavelengths in the MA system, channel coefficients at these ports are correlated.
As depicted in Fig. <ref>, the multi-path fading happens between ports along horizontal positions, while the change of channels in vertical positions is comparably insignificant. Therefore, we denote complex channel coefficients at horizontal positions in n rows as 𝐇=[H_1, H_2, ..., H_n]^T. Channel coefficients at different vertical positions in the i-th horizontal position, as column elements in the i-th row, are regarded as samples of H_i.
We characterize their correlation by the complex covariance matrix Σ_𝐇 as
Σ_𝐇 =
[ Var{H_1} Cov{H_1,H_2} ⋯ Cov{H_1,H_n}; Cov{H_2,H_1} Var{H_2} ⋯ Cov{H_2,H_n}; ⋮ ⋮ ⋱ ⋮; Cov{H_n,H_1} Cov{H_n,H_2} ⋯ Var{H_n} ],
where
Cov{H_i, H_j} = E{(H_i-E{H_i})(H_j-E{H_j})^*}
= E{H_iH_j^*} - E{H_i}E{H_j}^*,
is the covariance between complex channel coefficients H_i and H_j. For i=j, the variance is expressed as
Var{H_i} = E{(H_i-E{H_i})(H_i-E{H_i})^*}
= E{|H_i-E{H_i}|^2}.
Since 𝐇 is complex, the operations are generalized for complex values, and the covariance matrix Σ_𝐇 is complex-valued.
§.§ Spatial-Correlated Model for Path Gain
To start with, we can alternatively characterize the path gain, i.e. the magnitude of channel response |𝐇|=[|H_1|, |H_2|, ..., |H_n|]^T by its real-valued and symmetric covariance matrix
Σ_|𝐇| =
[ Var{|H_1|} ⋯ Cov{|H_1|,|H_n|}; ⋮ ⋱ ⋮; Cov{|H_n|,|H_1|} ⋯ Var{|H_n|} ].
The correlated real-valued random variables |𝐇| can be modeled by
|𝐇|=μ_|𝐇|+𝐂𝐗,
where μ_|𝐇| = [E{|H_1|},E{|H_2|},...,E{|H_n|}]^T is the mean vector.
For a symmetric positive definite covariance matrix Σ, the matrix 𝐂 is given by the Cholesky decomposition such that
Σ_|𝐇|=𝐂𝐂^T.
For a non-positive definite matrix which is symmetric, the decomposition is given by Σ_|𝐇|=𝐋𝐃𝐋^T where 𝐋 is a lower triangular matrix whose diagonal elements are 1 and 𝐃 is a positive diagonal matrix. In this case, the matrix 𝐂 is given by
𝐂 = 𝐋√(𝐃).
The set of random variables 𝐗 = [X_1,X_2,...,X_n]^T contains uncorrelated random variables X_i (i=1,2,...,n) that have zero mean and unit variance. The distribution function of X_i determines the samples of |H_i|. In our measurement, the magnitude of channel response, |g_i,k|, is uniformly distributed in each row (for each i=1,2,...,32). Therefore, we apply
X_i∼𝕌[-√(3),√(3)].
The modeling result of the channel response magnitude at 32×32 ports is shown and compared with the measurement result in Fig. <ref>.
First, the uniform distribution X_i (<ref>) only captures the distribution of |g_i,k| for each i=1,2,...,32, but cannot reveal the spatial correlation at positions in adjacent columns. Therefore, in Fig. <ref>(d), the result given by the model is not continuous between columns in each row. Second, the covariance matrix of the measured channel gain Σ_|𝐆| is well reproduced by the channel model (<ref>) in Fig. <ref>(f). For the two-ray channel, the covariance of |H_i| and |H_j| can be modeled by
Cov{|H_i|,|H_j|} = Cov{|H_p|,|H_q|}
if [d_ip,d_jq] ∈{[6λ,6λ],[0,12λ]}
Cov{|H_1|,|H_j|} = a_1 sin(b_1*(d_1j/λ+1)+c_1)
+ a_2 sin(b_2*(d_1j/λ+1)+c_2)
where d_ij denotes the distance between the i-th and the j-th row. Specifically, the covariance of magnitude between two rows varies periodically. The value is identical if one position is fixed and the other changes by 12λ, or if two positions both change by 6λ. Moreover, inside the first period, Cov{|H_1|,|H_j|}, can be modeled by the superposition of two sinusoidal functions as shown in Fig. <ref>. The parameters are obtained by fitting results, as a_1 = 4.116×10^-5, b_1 = 0.5468, c_1 = 0.004135, a_2 = 4.149×10^-5, b_2 = 1.6160, and c_2 = 0.5212.
§.§ Spatial-Correlated Model for Channel Coefficients
The method in (<ref>) and (<ref>) is also valid for the modeling of complex-valued channel coefficients, since the covariance matrix Σ_𝐇 is complex-valued and symmetric. Specifically for two random variables, i.e., H_i and H_j (i,j∈[1,n], i≠ j), (<ref>) is solved by
𝐋 =
[ 1 0; Cov{H_i,H_j}/Var{H_i} 1 ]
=
[ 1 0; ρ_i,jσ_j/σ_i 1 ],
𝐃 =
[ Var{H_i} 0; 0 Var{H_j}-Cov{H_i,H_j}/Var{H_j} ]
=
[ σ_i^2 0; 0 σ_j^2(1-ρ_i,j^2) ],
𝐂 =
[ σ_i 0; σ_jρ_i,j σ_j√(1-ρ_i,j^2) ],
denoting the complex variance by σ_i^2 = Var{H_i} as expressed in (<ref>) and ρ_i,j=Cov{H_i,H_j}/(σ_iσ_j) where the complex covariance is calculated in (<ref>).
Therefore, the correlated channel coefficient can be generated by
H_i = μ_i + σ_i Y_i, i∈[1,n],
H_j = μ_j + σ_jρ_i,j Y_i + σ_j√(1-ρ_i,j^2) Y_j, j≠ i,
where μ_i=E{H_i} is the complex mean of H_i. Uncorrelated random variables Y_1,Y_2,...,Y_n generates samples with zero mean and unit variance.
Note that if the real and the imaginary parts of the channel coefficient are independently modeled <cit.>, the covariance of complex channel coefficients would degrade into real values, and satisfies
Cov{H_i,H_j} = Cov{Real(H_i), Real(H_j)}
+ Cov{Imag(H_i), Imag(H_j)}.
However, for channel characterization of the spatial covariance from measured samples, the covariance matrix, as one input parameter of the channel model, remains complex in most cases. Therefore, we take the complex channel coefficient as one individual variable, instead of modeling its real and the imaginary parts separately.
To reveal the simulated channel coefficients in both rows and columns, we generate 2D channel coefficients h_i,k by regarding the k-th sample of H_i as the channel coefficient at the i-row and the k-th column, i.e.,
h_i,k = μ_i + σ_i y_i,k, i∈[1,n], k∈[1,m],
h_j,k = μ_j + σ_jρ_i,j y_i,k + σ_j√(1-ρ_i,j^2) y_j,k, j≠ i, k∈[1,m],
where
y_i,k = g_i,k - 1/m∑_k=1^mg_i,k/√(1/m∑_k=1^m|g_i,k-1/m∑_k=1^mg_i,k|^2),
are derived from measured channel coefficients in the i-th row and normalized to zero mean and unit variance. In the generalized channel model, Y_i reveals the distribution of channel coefficients in the i-th row, and its samples can be regarded as channel coefficients in different columns at this row. As long as the distribution has zero mean and unit variance, the covariance of channel coefficients between rows holds.
The modeling result of the complex channel coefficient at 32×32 ports is shown and compared with the measurement result in Fig. <ref>. The simulation result reproduces the complex channel coefficients across the 2D ports, and the characteristics in terms of covariance matrix is reserved.
§ PERFORMANCE ANALYSIS OF THE MA SYSTEMS
In light of the multi-path fading observed in channels across 32×32 ports for Tx antennas to the fixed Rx, we employ the concept of a movable antenna system, which selects the physical positions of antenna elements with the best channel condition and thus improves the system performance without introducing a large number of antennas <cit.>. In this section, we first introduce the setup of antenna arrays at Tx and how we distribute the given selectable ports into movable regions and select positions for antenna elements. Then, we employ a beamforming algorithm and evaluate the performance, in terms of spectral efficiency, of MA systems.
§.§ Movable Antenna Position Selection
§.§.§ Uniform-Region SINR-Optimized Position Selection
We simulate the Tx equipped with m× n planar MAs and m×1 or 1× n linear MAs. We elaborate the antenna position selection scheme in the given area 𝐂 with M× N ports as follows. The set of candidate ports are represented by 𝐂={(x,y) | x∈[1,N], y∈[1,M]}. The coordinates of the transmit MAs are denoted by 𝐭=(𝐭_1, 𝐭_2, ..., 𝐭_N_t)^T, where 𝐭_u = (x_u, y_u) ∈𝐂 for each u=1,2,...,N_t.
For the Tx equipped with m× n MAs, M× N ports from the measurement are uniformly divided into m× n movable regions for MAs, as
𝐂 = ∪𝐂_i,j, i = 1,2,...,N, j = 1,2,...,M,
𝐭_i,j = (x_i,y_j) ∈𝐂_i,j,
where
x_i ∈[(i-1)·⌊ N/n⌋+1, i·⌊ N/n⌋],
y_j ∈[(j-1)·⌊ M/m⌋+1, j·⌊ M/m⌋].
The position of each MA is determined as the port where the SINR of the channel is maximized in its local region.
For 1× n MAs, we scan the rows of the ports and equally distribute movable regions inside each row for each MA, as
𝐭_u = (x_u,y),∀ u∈[1,n]
x_u ∈[(u-1)·⌊ N/n⌋+1, u·⌊ N/m⌋],
y∈[1,M],
Similarly, for m×1 MAs, movable regions are defined as
𝐭_u = (x,y_u),∀ u∈[1,m]
x∈[1,N],
y_u ∈[(u-1)·⌊ M/m⌋+1, u·⌊ M/m⌋],
from which the best positions are selected with the highest SINR inside movable regions.
Fig. <ref> shows three examples of two-dimensional MAs, i.e., 2×2 planar MAs (m=n=2), 2×1 linear MAs (m=2, n=1), and 1×4 linear MAs (m=1, n=4), selected from 32×32 candidate ports (M=N=32) in the given area.
§.§.§ Greedy Selection
The greedy selection scheme does not distribute the selectable area into local movable regions for each MA, and directly selects the best N_t channels, in terms of SINR, among all selectable ports in the given area.
In Section <ref>, we also evaluate the optimal performance of MAs with the greedy selection of antenna positions, as the upper bound. Since the implementation of MA systems barely enables unrestricted movement of all antennas in the given area, the goal of the simulation with the greedy selection is to provide an upper bound for the comparision with the proposed uniform-region SINR-optimized position selection algorithm.
§.§ Beamforming Algorithm
After the position selection, we perform the beamforming scheme where the Tx is equipped with N_t antennas and one RF chain to communicate with a single-antenna Rx. Denote the channel vector as 𝐡∈ℂ^1 × N_t, then the received signal of the multi-input-single-output system can be written as
𝐲 = 𝐡𝐟_ RFs + n,
where s ∈ℂ represents the transmit symbol, and 𝐟_ RF∈ℂ^N_t × 1 is the analog precoder. n ∼𝒞𝒩(0,σ_n^2) is the additive white Gaussian noise. Denoting the total transmit power as p, then the spectral efficiency can be represented as
SE = log_2(1+p/σ_n^2|𝐡𝐟_RF|^2).
To maximize the spectral efficiency, the optimal solution of the analog precoder is given by 𝐟^ opt = 𝐡/||𝐡||_2, where ||·||_2 is the L2-norm. However, since the analog precoder is implemented by the phase shifters, it should follow the constant modulus constraint, i.e., |𝐟_RF(i)| = 1/√(N),∀ i.
Therefore, the optimization problem becomes
min_𝐟_RF ||𝐟^ opt-𝐟_RF||_2^2,
s. t. |𝐟_RF(i)|=1/√(N), ∀ i.
The solution is given by <cit.>
𝐟_RF(i) = 1/√(N) e ^i ∠𝐟^opt(i),
where ∠(·) denotes the phase of the element.
§.§ Evaluation Results
In this part, we evaluate the performance of linear and planar MA systems in terms of spectral efficiency in (<ref>). The standard deviation of the noise Gaussian distribution is σ_n^2=4.89×10^-6. The transmit power varies from 0 dBm to 20 dBm. The discussion is divided into three parts. First, we evaluate and compare the performance of different MA types. Second, the performance achieved by the proposed position selection algorithm is compared with the optimal performance. Third, the MA-enabled improvement of communication performance in terms of spectral efficiency is summarized.
§.§.§ Comparision among different MA types
First, we examine the performance of 1×4 and 4×1 linear MAs selected from 32×32, 16×16 and 8×8 ports around the same center.
The result is shown in Fig. <ref> and elaborated as follows. First, since the change of the two-ray channel shows a sequence of power maxima and minima along horizontal ports, the performance of 1×4 linear MAs is dependent on the range of selectable ports. Specifically, when candidate ports are more concentrated around the center, where the power is higher at the maxima, the performance of the 1×4 MA system is better. For instance, when the transmit power is 0 dBm, the spectral efficiency of the 1×4 MA system is 18.6829, 18.7017, and 19.0179 bps/Hz when the selection area is 32×32, 16×16, and 8×8 ports around the center.
In contrast, the performance of the 4×1 MA system is barely dependent on the range of selectable ports, as the change of channel gain along vertical ports is less significant. Moreover, as long as a column of maxima at the x-axis is found, the performance of 4×1 linear MAs on this column becomes 99% close to the optimal performance derived by the greedy selection.
Furthermore, we compare the performance of different MA systems under the same N_t. To be concrete, 4×1, 2×2, and 1×4 MAs at Tx are simulated. As shown in Fig. <ref>, the 4×1 linear MA system performs the best, which is 99% close to the optimal performance with the greedy selection of channels. The 2×2 planar MA system and the 4×1 linear MA system have similar performance that is 95% of the optimal performance.
To conclude, on account of the measurement scenario, the n×1 linear MA system not only performs better than the 1× n linear MA system, but also approaches the optimal performance under the same number of Tx antennas (N_t) in the two-ray wireless channel.
§.§.§ Comparision with the optimal performance
As shown in Fig. <ref>, the performance of n× 1 linear MAs is compared with the optimal performance, which is derived by greedy selction, i.e., selecting N_t=n best channels out of all candidate channels from 32×32 ports to the Rx. The observation is summarized as follows.
First, the spectral efficiency increases as N_t increases. Second, the proposed port selection scheme in Section <ref>, which equally distributes the given area into movable regions based on the antenna array type, can achieve 99.07%, 99.05%, 99.17%, and 99.57% of the optimal performance for 2×1, 4×1, 8×1, and 16×1 MAs selected from 32×32 ports. Hence, the proposed uniform-region position selection scheme not only reduces the complexity by distributing the given area into local regions for each MA, but also is adequate for the n× 1 linear MA system to approximate the optimal performance.
§.§.§ Performance improvement of MA systems
We further summarize the advantage of MA systems over
fixed-position antennas in improving the spectral efficiency by analyzing the improvement of performance with n× 1 linear MAs selected from N× N candidate ports, compared with the worst case.
The result is summarized in Table <ref> and partially shown in Fig. <ref>. On one hand, when the number of antennas decreases in the fixed port area, or the candidate port area expands for the same MA type, the freedom of MA positions and thus the degree of performance improvement increase. For 2×1 linear MAs selected from 32×32 ports, which maximizes the freedom of choice of antenna positions, SE is increased by 11.48% at most.
On the other hand, when the number of antennas and the selectable port area both become large, e.g., n≥8 and N≥16, the ports around the center, which have the best channel condition, are allocated to the movable region of one or several single MAs. In this case, the performance improvement is restrained.
§ CONCLUSION
In this paper, we provided a complete assessment of movable antenna systems in the real-world environment.
First, with a 0.02 mm-resolution two-dimensional movable antenna system, a 60-GHz wideband channel measurement is carried out across 32×32 ports at 300 GHz. The multi-path fading, caused by the superposition of LoS and surface-reflected rays, is modeled by the practical two-ray model.
In light of the measurement results, spatial-correlated channel models for the two-dimensional MA system are proposed, statistically parameterized by the complex covariance matrix of measured channels.
Moreover, the low-complexity and near-optimal uniform-region SINR-maximized antenna position selection is proposed. The beamforming algorithm is applied to evaluate the performance of planar and linear MA systems in terms of spectral efficiency.
The results demonstrate the advantage of MAs over fixed-position antennas, by coping with the multi-path fading, in improving channel gains and thus the spectral efficiency by 11.48%, which reaches 99% of the optimal performance, across 32×32 millimeter-interval ports in the THz wireless channel.
IEEEtran
|
http://arxiv.org/abs/2409.02302v1 | 20240903212845 | Speech Foundation Model Ensembles for the Controlled Singing Voice Deepfake Detection (CtrSVDD) Challenge 2024 | [
"Anmol Guragain",
"Tianchi Liu",
"Zihan Pan",
"Hardik B. Sailor",
"Qiongqiong Wang"
] | eess.AS | [
"eess.AS",
"cs.AI",
"cs.SD"
] |
A Lesion-aware Edge-based Graph Neural Network for Predicting Language Ability in Patients with Post-stroke Aphasia
Zijian Chen1, Maria Varkanitsa2, Prakash Ishwar1, Janusz Konrad1,
Margrit Betke3, Swathi Kiran2 Archana Venkataraman1
September 9, 2024 – Version 1.0
===========================================================================================================================
§ ABSTRACT
This work details our approach to achieving a leading system with a 1.79% pooled equal error rate (EER) on the evaluation set of the Controlled Singing Voice Deepfake Detection (CtrSVDD). The rapid advancement of generative AI models presents significant challenges for detecting AI-generated deepfake singing voices, attracting increased research attention. The Singing Voice Deepfake Detection (SVDD) Challenge 2024 aims to address this complex task. In this work, we explore the ensemble methods, utilizing speech foundation models to develop robust singing voice anti-spoofing systems. We also introduce a novel Squeeze-and-Excitation Aggregation (SEA) method, which efficiently and effectively integrates representation features from the speech foundation models, surpassing the performance of our other individual systems. Evaluation results confirm the efficacy of our approach in detecting deepfake singing voices.
The codes can be accessed at <https://github.com/Anmol2059/SVDD2024>.
Singing voice, deepfake detection, anti-spoofing, SVDD, SSL, SEA
§ INTRODUCTION
With the rapid development of generative AI technology, the quality of audio synthesis has significantly improved, making it increasingly difficult to distinguish between bona fide and spoofed audio. However, this progress also poses significant risks to human voice biometrics and can deceive both automatic speaker verification systems and their users <cit.>. Additionally, the proliferation of spoofed speech presents a serious threat to cybersecurity, as it can be used to manipulate information, conduct fraud, and bypass security measures that rely on voice authentication. Finding effective ways to detect spoofing attacks and protect users from the threat of spoofed speech is becoming increasingly important. Therefore, speech anti-spoofing, also known as speech deepfake detection, has emerged <cit.>. It is dedicated to developing reliable automatic spoofing countermeasures (CMs), which is of utmost importance to society and the ethical applications of generative models.
Unlike speech spoofing, creating deepfakes of singing voices introduces distinct challenges. This complexity arises from the inherently musical aspects of singing, such as varying pitch, tempo, and emotion, as well as the frequent presence of loud and intricate background music <cit.>. These factors make it more difficult to detect deepfakes in singing compared to regular speech, which typically features a more consistent and predictable sound pattern.
Recently, the speech anti-spoofing research community has been increasingly focusing on this challenging issue, resulting in the development of related datasets <cit.>, challenges <cit.>, and models <cit.>.
The Singing Voice Deepfake Detection (SVDD) Challenge 2024 aims to address these challenges by fostering the development of robust detection systems <cit.>.
Speech foundation models are large, pre-trained models designed to serve as the backbone for various speech-related tasks, including speaker verification, speech recognition, and more <cit.>. Many of these models rely on self-supervised learning (SSL) to develop robust speech representations, such as WavLM <cit.> and wav2vec2 <cit.>. These models excel in learning high-quality representations that can be fine-tuned for specific downstream tasks. Recently, many studies on speech anti-spoofing have adopted this approach and achieved state-of-the-art performance <cit.>. The progress of these studies and their promising performance motivate us to continue exploring along this particular line.
This work details our participation in the CtrSVDD track of the SVDD Challenge 2024.
We detect singing voice deepfakes by ensembling models developed using speech foundation models, data augmentation techniques, and various layer aggregation methods. Specifically, the default Weighted Sum aggregation method fixes weights after training, limiting adaptability to new data. The recently proposed Attentive Merging (AttM) method <cit.>, while powerful, can lead to overfitting on small datasets. To address these issues, inspired by Squeeze-and-Excitation Networks (SENet) <cit.>, we propose the SE Aggregation (SEA) method. This method dynamically assigns weights and mitigates overfitting issues, enabling our best individual model to achieve an EER of 2.70% on the CtrSVDD evaluation set. Further investigations show that ensembling systems enhances robustness and performance, achieving our best result of 1.79% EER.
§ METHODOLOGY
§.§ Data Augmentation
We employ the RawBoost augmentation <cit.>, which introduces various types of noise to the audio data to simulate real-world acoustic variations. These augmentation types include:
* (1) Linear and non-linear convolutive noise (LnLconvolutive noise). This involves applying a convolutive distortion to the feature set by filtering the input signal with notch filter coefficients, iterating N_f times,, and raising the signal to higher powers to simulate real-world distortions.
* (2) Impulsive signal-dependent noise (ISD additive noise). This is introduced by adding noise to a random percentage of the signal points, scaled by the original signal's amplitude.
* (3) Stationary signal independent noise (SSI additive noise). This represents stationary signal-independent noise, which is added uniformly across the signal.
§.§.§ Parallel Noise Addition
We adopt a parallel noise addition strategy to independently incorporate multiple noise characteristics. We process the input feature through both LnL Convolutive Noise and ISD Additive Noise algorithms simultaneously, resulting in two separate noisy signals. These signals are then combined by summing and normalizing to maintain consistent amplitude levels. This parallel approach allows each noise type to influence the signal independently, effectively capturing the combined effects of convolutive and impulsive noise, and providing a robust simulation of complex noise conditions. This method is referred to as the `parallel: (1)+(2)' approach described in RawBoost <cit.>.
§.§.§ Sequential Noise Addition
We use a sequential noise addition process to enhance the robustness of our features, incorporating the aforementioned three types of noise. This sequential approach ensures comprehensive noise simulation and results in various combinations such as `series: (1)+(2)', `series: (1)+(3)', and `series: (2)+(3)', following those in RawBoost <cit.>.
§.§ Individual Models Description
§.§.§ Frontend
In this subsection, we provide a detailed overview of the frontends used in our individual models, emphasizing their ability to efficiently process raw audio data.
Raw waveform. Following the baseline system described in the SVDD challenge 2024 <cit.>, we employ RawNet2 <cit.>-style learnable SincConv layers with 70 filters as the frontend. These SincConv layers are designed to effectively capture essential features from raw audio signals, enhancing the model's ability to process and analyze audio data for subsequent tasks.
wav2vec2.
The wav2vec2 model offers significant advantages in effectively capturing a wide range of audio features directly from raw audio inputs <cit.>. This model excels in extracting detailed and nuanced information from audio data, which can then be utilized for various downstream tasks such as speaker verification, speech recognition, and speech anti-spoofing. By processing the raw audio waveforms without requiring extensive pre-processing, wav2vec2 enhances the ability to perform complex audio-related tasks with improved accuracy and efficiency. This direct approach not only simplifies the workflow but also improves the overall performance of the subsequent processing and classification tasks <cit.>.
WavLM.
The WavLM <cit.> is a large-scale pre-trained speech foundation model for addressing the multifaceted nature of speech signals, including speaker identity, paralinguistics, and spoken content. Its robust performance on the SUPERB benchmark <cit.> underscores its potential versatility across diverse speech processing applications. Given its advanced capabilities in modeling and understanding complex speech patterns, WavLM holds promise for use in specialized area of singing voice deepfake detection. The model's ability to capture intricate vocal nuances and sequence ordering could be instrumental in identifying synthetic patterns in singing voices, thereby contributing to the SVDD task.
§.§.§ Layer Aggregation Strategy
The layer aggregation strategy in speech foundation models refers to the technique of combining information from multiple layers to enhance the model's performance in speech-related downstream tasks like speaker verification, emotion recognition, and anti-spoofing. Each layer in a speech foundation model captures distinct aspects and features of the input waveform. By aggregating these layers, the model can leverage a richer set of features, combining low-level acoustic information from early layers with higher-level semantic and contextual information from later layers. This process typically involves techniques such as concatenation, weighted sum, or attention mechanisms to effectively aggregate the multi-layer representations <cit.>. These learned weights allow the model to emphasize more relevant features and reduce noise or less important information. In this work, we explore weighted sum and attentive merging (AttM) <cit.>. Inspired by SE <cit.>, we propose SE Aggregation. These three methods are illustrated in Fig. <ref>, and the details are as follows:
Weighted Sum. The weighted sum method combines outputs from multiple neural network layers using adjustable parameters. Each layer's output receives a unique weight, enabling the model to determine the optimal contribution of each layer to the final representation. These weights are adjusted during the training process to enhance the model's performance and remain fixed during inference.
Attentive Merging (AttM). The AttM <cit.> approach emphasizes the most relevant features for anti-spoofing by averaging the embeddings across the time dimension and applying a fully connected layer to squeeze the hidden dimensions. Attentive weights are computed using a sigmoid activation function, which are then applied to the stack of embeddings. Finally, a linear projection network merges these re-weighted embeddings, retaining global spatial-temporal information while emphasizing the most relevant transformer layers for anti-spoofing. This method not only achieves state-of-the-art performance but also improves computational efficiency by utilizing only a subset of the transformer layers <cit.>.
Proposed SE Aggregation (SEA). The weighted sum method is simple yet effective. However, its weights are fixed after training, limiting its adaptability to new data. The AttM method, though powerful, requires a large number of parameters, which can lead to overfitting on small datasets.
Most of these parameters are concentrated in the final linear layer.
To address this, we introduce a new method called SE Aggregation (SEA), inspired by SENet <cit.>, which eliminates the need for the final linear layer. SEA enables a lightweight, cross-layer attention-based aggregation.
The SE module is well-knwon for its ability to adaptively recalibrate channel-wise feature responses by explicitly modeling interdependencies between channels <cit.>. This recalibration enhances the representational capability of the network by focusing on the most informative features and suppressing less useful ones, which is crucial for tasks requiring high precision and robustness <cit.>. This method has been widely applied and validated in speech tasks, such as anti-spoofing <cit.> and speaker verification <cit.>. Instead of using this approach to re-weight channels, we employ it to compute layer attention, dynamically emphasizing important channels for each sample. The proposed SEA method operates by initially compressing temporal and channel information through a global average pooling (GAP) operation, creating a layer-wise descriptor. This descriptor is then used to selectively emphasize informative features, as illustrated in Fig. <ref> (c).
Notably, the layer aggregation technique is only applied to the speech foundation model-based systems in this work. The RawNet2-based system does not require the layer aggregation strategy.
§.§.§ Backend
The audio anti-spoofing using integrated spectro-temporal graph attention networks (AASIST) functions as the model, leveraging graph-based attention mechanisms to capture spectral and temporal audio features <cit.>. It includes several key components <cit.>:
* The Graph Attention Layer (GAT) computes attention maps between nodes and projects them using attention mechanisms. This layer consists of linear layers, batch normalization, dropout, and Scaled Exponential Linear Unit (SELU) activation. Separate GAT layers are used for spectral and temporal features.
* The Heterogeneous Graph Attention Layer (HtrgGAT) processes both spectral and temporal feature nodes. It projects each type of node, generates attention maps, and updates a master node that represents the aggregated features. Sequential layers are used to refine these features further.
* The graph pooling layer reduces the number of nodes by selecting the top-k nodes based on attention scores. This process uses sigmoid activation and linear projection to compute the scores, with separate pooling layers for spectral and temporal features.
* The residual blocks apply convolutional layers, batch normalization, and SELU activation, similar to ResNet blocks, within the encoder to process input features.
* The attention mechanism derives spectral and temporal features from the encoded features, incorporating convolutional layers and SELU activation.
§.§.§ Classifier
The classifier outputs the final predictions by utilizing the refined features extracted from the backend model, subsequently performing the classification task. In this work, the input comprises a concatenation of maximum and average temporal features, maximum and average spectral features, and master node features from the ASSIST backend. To enhance generalization, dropout is applied to this concatenated feature vector. The output is generated through a linear layer, which produces logits, representing the raw scores.
§.§ Model Ensembling
Model ensembling is a strategy where multiple models are combined to improve the overall performance and robustness of predictions. The rationale behind this approach is that different models may capture various aspects of the data, and combining them can result in better generalization on unseen data. This method is widely adopted in many works in the anti-spoofing task <cit.>. In this work, we ensemble the individual models by averaging their output scores.
§ EXPERIMENTAL SETUP
§.§ Data Set
We utilized the official training and development datasets provided for the CtrSVDD track, available at Zenodo[<https://zenodo.org/records/10467648>]. Additionally, we incorporated other public datasets including JVS <cit.>, Kiritan <cit.>, Ofutan-P[<https://sites.google.com/view/oftn-utagoedb>], and Oniku[<https://onikuru.info/db-download/>] following the guidelines and scripts provided by the challenge organizers <cit.>. The combined dataset included a diverse range of singing voice recordings, both authentic and deepfake, segmented and processed[<https://github.com/SVDDChallenge/CtrSVDD_Utils>] to ensure consistency in training and evaluation. The details of the dataset partitions, along with the evaluation set statistics from <cit.>, are provided in Table <ref>.
§.§ Training Strategy
We use the equal error rate (EER) as the evaluation metric. To ensure reproducibility, we consistently apply a fixed random seed of 42 across all systems. Our training process employs the AdamW optimizer with a batch size of 48, an initial learning rate of 1 × 10^-6, and a weight decay of 1 × 10^-4. The learning rate is scheduled using cosine annealing with a cycle to a minimum of 1 × 10^-9. For the loss function, we utilize a binary focal loss, a generalized form of binary cross-entropy loss, with a focusing parameter (γ) of 2 and a positive example weight (α) of 0.25. To standardize input length, each sample is randomly cropped or padded to 4 seconds during the training. Our model is trained for 30 epochs, and the model checkpoint with the lowest EER on the validation set is selected for evaluation.
All experiments are performed on a single NVIDIA A100 GPU.
For certain experiments marked in Table <ref>, we employ the Rawboost data augmentation strategy as introduced in Section <ref>.
The RawBoost augmentation is sourced from the official implementation[<https://github.com/TakHemlata/SSL_Anti-spoofing>] and follows the default settings <cit.>. Our utilization of wav2vec2 also references this implementation. The wav2vec2 <cit.> model used in this work is the cross-lingual speech representations (XLSR) model[<https://github.com/facebookresearch/fairseq/tree/main/examples/wav2vec/xlsr>]. The implementation of WavLM is derived from S3PRL[<https://github.com/s3prl/s3prl>].
§ RESULTS
§.§ Baselines
The organizers of the CtrSVDD Challenge 2024 provide two baseline systems, referred to as B01 and B02 in Table <ref> <cit.>. B01, based on linear frequency cepstral coefficients (LFCCs), achieved a pooled EER of 11.37%, while B02, based on raw waveform, achieved a pooled EER of 10.39%. We re-implement B02 and obtain an improved performance of 9.45%, slightly better than the official implementation.
§.§ Frontend
As indicated in Table <ref>, when comparing wav2vec2-based models to WavLM-based models with the same type of augmentation (M2 vs. M4 for RawBoost `series: (1)+(2)', and M3 vs. M5 for `parallel: (1)+(2)'), we observe that the WavLM-based models consistently perform better. Therefore, in this work, we focus more on experimenting with WavLM-based models.
§.§ Data Augmentation
By comparing the wav2vec2-based models trained with and without `parallel: (1)+(2)' RawBoost augmentation <cit.> (M1 vs. M3), we observe a significant improvement in performance when the augmentation is applied. Further analysis based on various models and layer aggregation techniques reveals that the `parallel: (1)+(2)' configuration consistently provides better results compared to the `series: (1)+(2)' configuration (M2 vs. M3, M4 vs. M5, M6 vs. M7, M8 vs. M9), with an average relative performance improvement of 26.7%.
On the other hand, our experiments show that using type (3) of RawBoost (SSI additive noise) <cit.> does not yield more benefits (M11 and M12). Overall, RawBoost generally enhances system performance on the CtrSVDD dataset. Notably, benefiting from `parallel: (1)+(2)', the WavLM-based model with our proposed SEA (M9) achieves the best individual performance on the evaluation set, as shown in Table <ref>.
[8]We report the overall system performance according to the settings in the SVDD Challenge 2024 <cit.>, which calculates the pooled Equal Error Rate (EER) for attack types A09 to A13, excluding A14. Additionally, for the benefit of interested readers, we also include the pooled EER results for all attack types (A09 to A14).
§.§ Layer Aggregation Strategies
As shown in Table <ref>,when comparing different layer aggregation methods, we observe that the AttM strategy performs similarly to the weighted sum method in terms of pooled EER. Additionally, the AttM model (M7) achieves the best performance in the most sub-trials.
In this work, we simply utilize all WavLM layers, while the strength of AttM method lies in using fewer encoder layers. This not only lowers inference costs but also boosts performance <cit.>. This aspect is valuable for exploring in the SVDD task.
Given that the weighted sum method lacks a cross-layer attention mechanism, which may limit the representation features extracted by the speech foundation model in complex musical scenarios, and that AttM's higher number of training parameters could lead to overfitting on small datasets, we propose the SEA method.
Our proposed SEA aggregation method, based on the WavLM model, consistently outperforms both the Weighted Sum and AttM across different RawBoost augmentation scenarios, achieving an average relative reduction in EER by 16.7% and 19.1%, respectively.
With this proposed SEA, we achieve the best individual model performance of 2.70%, validating its superior performance and suitability for the task of singing voice deepfake detection.
§.§ Model Ensembling
We explore ensembling models to enhance robustness and performance. The ensembled models and their corresponding evaluation EER are shown in Table <ref>. Specifically,
we explore the model ensembling strategy by initially ensembling the top 5 individual models based on their performance on A09-A14 pooled EER. The E1 system, composed of M5, M7, M8, M9, and M10, achieves a 2.50% EER, outperforming all individual systems. Further investigation includes incorporating a wav2vec2-based model to enhance system diversity and robustness improvement. Consequently, we include the best wav2vec2 system, M3, and remove the weakest individual model, M8, from E1, resulting in E3, which performs at 2.13%. During post-evaluation, we further improve the ensemble performance by adding M2 and removing M5, achieving the best performance of 1.79%.
We note that although the pooled EER of the M2 model is not as good as other models in Table 2, it significantly contributes to ensemble performance. Since the evaluation labels have not yet been released, further analysis is not possible in this study. However, future investigations will help in understanding this improvement.
In Fig. <ref>, we provide a detailed comparison of the best individual model, the WavLM-based model with our proposed SEA (M9), and the best ensemble system (E5). The radar chart clearly illustrates that E5 consistently outperforms M9 in every sub-trial.
This demonstrates the superiority and robustness of ensemble systems by combining the strengths of multiple models, reducing the impact of individual model errors, and increasing overall prediction accuracy.
§ CONCLUSION
In this work, we present ensembled systems utilizing speech foundation models, demonstrating significant promise in the task of singing voice deepfake detection (SVDD).
Our novel layer aggregation strategy, SE Aggregation (SEA), enables the WavLM-based model to achieve the best performance with a 2.70% EER on the CtrSVDD evaluation set, outperforming all individual models. By implementing data augmentation techniques, such as RawBoost, our ensembled system further achieves a remarkable 1.79% pooled EER on the CtrSVDD evaluation set.
Further analysis validates that model ensembling effectively combines the strengths of different models, enhancing both robustness and accuracy. These findings contribute to advancing the field of audio anti-spoofing, particularly in SVDD. Future work can explore further optimization of layer aggregation techniques and broader applications to improve detection systems.
§ ACKNOWLEDGEMENTS
This work is supported
by the National Research Foundation, Prime Minister’s Office, Singapore, and the Ministry of Digital Development and Information, under its Online Trust and Safety (OTS) Research Programme (MCI-OTS-001). Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of National Research Foundation, Prime Minister’s Office, Singapore, or the Ministry of Digital Development and Information.
IEEEbib_limit6
|
http://arxiv.org/abs/2409.02320v1 | 20240903222522 | Demystified: double robustness with nuisance parameters estimated at rate n-to-the-1/4 | [
"Judith J. Lok"
] | math.ST | [
"math.ST",
"stat.TH"
] |
Demystified: double robustness with nuisance parameters estimated at rate n-to-the-1/4
Judith J. Lok
Department of Mathematics and Statistics, Boston University
[email protected]
============================================================================================
§ ABSTRACT
Have you also been wondering what is this thing with double robustness and nuisance parameters estimated at rate n^1/4? It turns out that to understand this phenomenon one just needs the Middle Value Theorem (or a Taylor expansion) and some smoothness conditions. This note explains why under some fairly simple conditions, as long as the nuisance parameter θ∈ℝ^k is estimated at rate n^1/4 or faster, 1. the resulting variance of the estimator of the parameter of interest ψ∈ℝ^d does not depend on how the nuisance parameter θ is estimated, and 2. the sandwich estimator of the variance of ψ̂ ignoring estimation of θ is consistent.
§ INTRODUCTION
It is not uncommon that an estimator for a parameter ψ depends on nuisance parameters θ. In such settings, θ is often estimated in a first step. Some estimators for ψ are doubly robust: they depend on two nuisance parameters θ_1 and θ_2, and are consistent if one of the nuisance parameters θ_1 or θ_2 is consistently estimated, but not necessarily both.
Double robustness has been shown to often improve precision, and several efficient estimators that depend on more than one nuisance parameter have been shown to be doubly robust. Examples of this include doubly robust estimation of means from observational data (<cit.>), doubly robust estimation of (coarse) Structural Nested Mean Models (<cit.>), and multiply robust estimation of indirect and direct effects (<cit.>). The orthogonal moment functions from <cit.> are locally doubly robust (see their equation (2.4)), but beyond the scope of this note.
In order to obtain the efficiency gain from double robustness, it is advantageous to use flexible models to estimate θ. Flexible methods do not always estimate θ at rate √(n) (e.g., <cit.>). Fortunately, it often suffices to estimate θ at rate n^1/4 in order to obtain the efficiency gain, and if this is achieved, the variance of the resulting estimator ψ̂ does not depend on how the nuisance parameter θ is estimated.
It does not take much more than the Mean Value Theorem (or a Taylor expansion) to understand this phenomenon. This note shows how this works for estimators ψ̂ based on smooth unbiased estimating equations.
§ SETTING AND NOTATION
Henceforth, ψ^*∈ℝ^d is the true parameter of interest and θ^*∈ℝ^k is the true nuisance parameter. ψ̂ solves
ℙ_n U(ψ,θ̂)=0,
where ℙ_n denotes the empirical average over i=1,…,n independent identically distributed observations, with
EU(ψ^*,θ^*)=0
and U of dimension k, the dimension of ψ. Examples include Maximum Likelihood Estimation settings where ψ̂ solves the score equations, but this so-called Z-estimation is much more general; see e.g. <cit.>.
Such ψ̂ is doubly robust if with θ=(θ_1,θ_2), ψ̂ solves unbiased estimating equations if θ_1 is consistently estimated and if θ_2 is consistently estimated, and not necessarily both; that is,
EU(ψ^*,θ^*_1,θ_2)=0 and EU(ψ^*,θ_1,θ^*_2)=0
for every θ_1 and θ_2.
This note assumes that θ is estimated at rate n^1/4 or faster:
n^1/4(θ̂-θ^*)=O_P(1).
§ REGULARITY CONDITIONS
Throughout, this note assumes that regularity conditions hold so that U(ψ,θ) and EU(ψ,θ) depend smoothly enough on (ψ,θ). It also assumes that the order of differentiation and integration with respect to θ can be changed, so it is assumed that the support of the distribution of the observations does not depend on θ.
It is also assumed that ψ is uniquely identified by equation (<ref>), so that
.∂/∂ψ|_ψ^*EU(ψ,θ^*)
has an inverse. To simplify the exposition, it is assumed that it has already been proven that ψ̂ converges in probability to ψ^*.
§ DERIVATIONS BASED ON TAYLOR EXPANSIONS
Double robustness implies that
E.∂/∂θ_p|_θ^* U_q(ψ^*,θ)=0,
where U_q is the qth component of U, 1≤ q≤ k∈ℕ.
This follows for the derivative with respect to θ_1 by taking the derivative with respect to θ_1 of EU(ψ^*,θ_1,θ_2^*), which equals zero because of equation (<ref>). Notice that this assumes that the support of the observations does not depend on θ, so that differentiation with respect to θ and integration can be interchanged. The same reasoning works for the derivative with respect to θ_2.
After estimating θ resulting in θ̂, ψ̂ solves equation (<ref>):
0 = ℙ_n U(ψ̂,θ̂)
= ℙ_n U(ψ^*,θ^*)
+
(.∂/∂(ψ,θ)|_(ψ̃,θ̃)ℙ_n U(ψ,θ))
([ ψ̂-ψ^*; θ̂-θ^* ])
= ℙ_n U(ψ^*,θ^*)
+
(.∂/∂ψ|_(ψ̃,θ̃)ℙ_n U(ψ,θ))
(ψ̂-ψ^*)
+(.∂/∂θ|_(ψ̃,θ̃)ℙ_n U(ψ,θ))(θ̂-θ^*)
= ℙ_n U(ψ^*,θ^*)
+
(.∂/∂ψ|_ψ̃ℙ_n U(ψ,θ̃))
(ψ̂-ψ^*)
+(.∂/∂θ|_θ̃ℙ_n U(ψ̃,θ))(θ̂-θ^*)
for some (ψ̃,θ̃) between (ψ̂,θ̂) and (ψ^*,θ^*), possibly different in each row (from the Middle Value Theorem applied to each entry in the vector separately). Equation (<ref>) implies that
(ψ̂-ψ^*)=
(.∂/∂ψ|_ψ̃ℙ_n U(ψ,θ̃))^-1(ℙ_n U(ψ^*,θ^*)-.∂/∂θ|_θ̃ℙ_n U(ψ̃,θ)(θ̂-θ^*)).
The derivations below show that if equation (<ref>) holds,
the last term in equation (<ref>) multiplied by √(n) converges in probability to zero.
First, we show that equation (<ref>) implies that
n^1/4(ψ̂-ψ^*)
→^P0.
Notice that as usual (see for example <cit.> Lemma A.6.1), under the usual regularity conditions (mainly differentiability conditions), since (ψ̃,θ̃)→^P (ψ^*,θ^*),
.∂/∂ψ|_ψ̃ℙ_n U(ψ,θ̃)→^P E.∂/∂ψ|_ψ^*U(ψ,θ^*)
and
.∂/∂θ|_θ̃ℙ_n U(ψ̃,θ)→^P E.∂/∂θ|_θ^* U(ψ^*,θ)=0,
where the equality follows from the double robustness equation (<ref>).
Combining with equation (<ref>), it follows that the last term in equation (<ref>) multiplied by n^1/4 converges in probability to zero. Combining with the Central Limit Theorem on ℙ_n U(ψ^*,θ^*), equation (<ref>) shows that equation (<ref>) implies equation (<ref>).
To show that the last term in equation (<ref>) multiplied by √(n) converges in probability to zero, we next consider each
n^1/4.∂/∂θ_p|_θ̃ℙ_n U_q(ψ̃,θ)
separately, were U_q is the qth component of U. We show that the quantity in equation (<ref>) converges in probability to zero when equation (<ref>) holds.
.∂/∂θ_p|_θ̃ℙ_n U_q(ψ̃,θ)
=.∂/∂θ_p|_θ^*ℙ_n U_q(ψ̃,θ)+.∂/∂θ|_θ̇∂/∂θ_pℙ_n U_q(ψ̃,θ)(θ̃-θ^*)
because of the Middle Value Theorem, for some θ̇ between θ̃ and θ^*.
As usual, under the usual regularity conditions, since (ψ̃,θ̇)→^P (ψ^*,θ^*),
.∂/∂θ|_θ̇∂/∂θ_pℙ_n U_q(ψ̃,θ)
→^P E.∂/∂θ|_θ^*∂/∂θ_p U_q(ψ^*,θ)=0,
where the equality follows from the same reasoning as equation (<ref>).
Combining equations (<ref>) and (<ref>) implies that n^1/4 times the last term in equation (<ref>) converges in probability to zero.
For the first term on the right hand side of equation (<ref>), because of the Middle Value Theorem,
.∂/∂θ_p|_θ^*ℙ_n U_q(ψ̃,θ)
= .∂/∂θ_p|_θ^*ℙ_n U_q(ψ^*,θ)+(.∂/∂ψ|_ψ̇.∂/∂θ_p|_θ^*ℙ_n U_q(ψ,θ))(ψ̃-ψ^*),
for some ψ̇ between ψ̃ and ψ, possibly different in each row.
As usual, under the usual regularity conditions,
.∂/∂ψ|_ψ̇.∂/∂θ_p|_θ^*ℙ_n U_q(ψ,θ)→^P E.∂/∂ψ|_ψ^*.∂/∂θ_p|_θ^* U_q(ψ,θ).
Moreover, from equation (<ref>), the Central Limit Theorem implies that
√(n).∂/∂θ_p|_θ^*ℙ_n U_q(ψ^*,θ)→^ D N(0,E((.∂/∂θ_p|_θ^*U_q(ψ^*,θ))^2)).
Combining equations (<ref>), (<ref>), (<ref>), and (<ref>) leads to
n^1/4.∂/∂θ_p|_θ^*ℙ_n U_q(ψ̃,θ)→^P0.
Combining with equations (<ref>), (<ref>), and (<ref>), it follows that
n^1/4.∂/∂θ_p|_θ̃ℙ_n U_q(ψ̃,θ)→^P 0.
Combining equations (<ref>), (<ref>), and (<ref>), it follows that √(n) times the last term in equation (<ref>) converges in probability to zero.
§ CONCLUSION
It follows that if equation (<ref>) and the usual regularity conditions from Section <ref> hold, for ψ̂ of the form of Section <ref> equation (<ref>),
√(n)(ψ̂-ψ^*)
= (.∂/∂ψ|_ψ̃ℙ_n U(ψ,θ̃))^-1√(n)ℙ_n U(ψ^*,θ^*)+o_P(1)
→^ D N(0,V(ψ^*,θ^*)),
from the Central Limit Theorem, equation (<ref>), and Slutsky's Theorem, with
V(ψ^*,θ^*)=(E.∂/∂ψ|_ψ^*U(ψ,θ^*))^-1
E(U(ψ^*,θ^*)^2)(E.∂/∂ψ|_ψ^*U(ψ,θ^*))^-1⊤.
That is, estimating θ leads to the same variance as plugging in the true but usually unknown θ, and the sandwich estimator for the variance of ψ̂ ignoring estimation of θ is consistent, all provided that ψ̂ is doubly robust and θ is estimated at rate n^1/4 or faster.
chicago
|
http://arxiv.org/abs/2409.03167v1 | 20240905015429 | InfraLib: Enabling Reinforcement Learning and Decision Making for Large Scale Infrastructure Management | [
"Pranay Thangeda",
"Trevor S. Betz",
"Michael N. Grussing",
"Melkior Ornik"
] | cs.AI | [
"cs.AI",
"cs.LG",
"cs.SY",
"eess.SY"
] |
Bi-capacity Choquet Integral for Sensor Fusion
with Label Uncertainty
This material is based upon work supported by the National Science
Foundation under Grant IIS-2153171-CRII: III: Explainable Multi-Source Data Integration with Uncertainty.
Hersh Vakharia
University of Michigan
Ann Arbor, MI
[email protected]
Xiaoxiao Du
University of Michigan
Ann Arbor, MI
[email protected]
Accepted XXX. Received YYY; in original form ZZZ
========================================================================================================================================================================================================================================================
§ ABSTRACT
Efficient management of infrastructure systems is crucial for economic stability, sustainability, and public safety. However, infrastructure management is challenging due to the vast scale of systems, stochastic deterioration of components, partial observability, and resource constraints. While data-driven approaches like reinforcement learning (RL) offer a promising avenue for optimizing management policies, their application to infrastructure has been limited by the lack of suitable simulation environments. We introduce InfraLib, a comprehensive framework for modeling and analyzing infrastructure management problems. InfraLib employs a hierarchical, stochastic approach to realistically model infrastructure systems and their deterioration. It supports practical functionality such as modeling component unavailability, cyclical budgets, and catastrophic failures. To facilitate research, InfraLib provides tools for expert data collection, simulation-driven analysis, and visualization. We demonstrate InfraLib's capabilities through case studies on a real-world road network and a synthetic benchmark with 100,000 components.
§ INTRODUCTION
Infrastructure systems are the backbone of modern society, encompassing a wide array of essential services including transportation networks, utility systems, and public facilities. Efficient infrastructure management is crucial for modern society's functioning, influencing economic stability <cit.>, <cit.>, environmental sustainability <cit.>, and public safety <cit.>. Managing modern infrastructure systems is a complex and multifaceted task, involving the maintenance, repair, and replacement of numerous components distributed across facilities and networks <cit.>, <cit.>. The challenges of infrastructure management are further compounded by their vast scale, the stochastic nature of component deterioration <cit.>, <cit.>, stringent operational constraints <cit.>, limited resources <cit.>, and extreme weather events due to climate change <cit.>, <cit.>, <cit.>.
some dicussion on how infra management is usually done and end with downsides of it
Traditional approaches to infrastructure management typically involve rule-based methodologies that rely on deterministic models. These methods, while useful in controlled environments, often struggle to capture the inherent uncertainties and dynamic variations present in real-world scenarios <cit.>. The complexity is further compounded by the need for strategic allocation of resources and budgetary considerations, which are critical yet challenging aspects of effective infrastructure management <cit.>, <cit.>.
talk about how decision making under uncertainty and RL, learning-based approaches work
In recent years, there has been a significant shift towards data-driven methodologies, particularly with the advent of machine learning techniques like reinforcement learning (RL) and imitation learning (IL) <cit.>, <cit.>. These approaches offer a promising avenue for decision-making under uncertainty, allowing for adaptive and proactive infrastructure management strategies <cit.>, <cit.>. RL, in particular, has shown remarkable success in various domains, thanks to its ability to learn optimal policies through interaction with an environment <cit.> ,<cit.>. However, the application of these techniques in infrastructure management is still in its infancy, primarily due to the lack of suitable simulation environments that can accurately model the complexity and scale of these systems <cit.>, <cit.>. There is a strong need for a unified framework modeling infrastructure management problems. Such a framework should provide a natural and intuitive way to represent the uncertainties and limitations in observability while ensuring scalability to handle large-scale, real-world problems. This tool would enable rapid progress in the field, providing a simulation environment for training and realistic benchmarks for comparing learning-based approaches, traditional optimization-based approaches, and rule-based methods.
Talk about what's needed for RL/IL/learning approaches. Also talk about how this need evolves in setting of infra management
Also talk here about existing work of RL and infra management a little bit.
talk about how infralib standardizes and provides unified framework - begin by talking about the need for one.
To address these challenges, we introduce InfraLib, a comprehensive and versatile simulation framework designed for modeling and analyzing large-scale infrastructure management problems. InfraLib provides a realistic and granular representation of infrastructure systems by integrating a hierarchical model that captures the intricate relationships between different components and facilities <cit.>. Moreover, InfraLib employs a stochastic approach to mimic the real-world uncertainties and partial observability inherent in infrastructure systems <cit.>, enabling the development and validation of infrastructure management strategies that are robust to the challenges faced in real-world scenarios.
Beyond serving as a simulation tool, InfraLib includes features for analysis, expert data collection, and modeling real-world budget schedules and failure modes. These features make it a valuable resource for both researchers looking to develop and test new management strategies and practitioners aiming to optimize operational efficiencies in real-world settings.
In this paper, we present a detailed overview of the architecture and capabilities of InfraLib, highlighting its capabilities and potential applications. We demonstrate the ability of InfraLib to create realistic scenarios for the deployment and evaluation of learning-based approaches, showcasing its ability to model the complexities and challenges encountered in real-world infrastructure systems. Furthermore, we provide a series of benchmarks and environments to illustrate the utility and scalability of InfraLib in facilitating the development and comparison of novel management strategies.
The rest of this paper is organized as follows. Section <ref> introduces key concepts and background information on infrastructure management, Partially Observable Markov Decision Processes, and data-driven approaches for decision making. Section <ref> formalizes the infrastructure management problem as a POMDP and discusses the various research challenges that arise in this context. We then present InfraLib in Section <ref> and Section <ref>, detailing its structure, component dynamics, and key functionalities. Section <ref> delves into the human interface aspects of InfraLib, including tools for expert data collection and analysis. Finally, in Section <ref>, we showcase example environments and benchmarks to demonstrate InfraLib's utility, versatility, and scalability.
§ PRELIMINARIES AND BACKGROUND
In this section, we provide background on the infrastructure management domain, including the hierarchical nature of infrastructure systems, the metrics used to quantify component health, the dynamics of component deterioration, and the budget constraints that govern infrastructure management. We also introduce the concepts of Partially Observable Markov Decision Processes and data-driven approaches for decision-making. We start by defining the notation used in the paper.
Given a finite set , || denotes its cardinality and Δ() denotes the set of all probability distributions over the set . ℕ_0 denotes the set of natural numbers including 0 i.e. ℕ_0 = {0,1,2, …}.
§.§ Infrastructure Hierarchy and Management
Infrastructure management is inherently hierarchical, comprising multiple layers of organization. At the base level, we have individual components which are the smallest units of individually managed infrastructure elements. These components are grouped into units, which are collections of components that are managed together. Multiple units collectively form a facility, which is the highest level of organization in the infrastructure hierarchy. This hierarchical structuring in inherent in real-world infrastructure systems and is crucial for systematic management and decision-making processes.
The condition of each component in the infrastructure is characterized by a Condition Index (CI) <cit.>, a metric that reflects its health status. The CI of a component quantitatively represents the health of a component and it deteriorates over time due to environmental factors, wear-and-tear, and in some cases catastrophically due to a manufacturing defect or an external event. This deterioration is typically stochastic, arising from unpredictable environmental interactions and the complex nature of infrastructure materials. Moreover, the CI is not always directly observable, necessitating periodic inspections to estimate its current state. These inspections, while essential, incur additional costs. Even when the CI is observed by inspection, the observation is subjective depending on the inspector and the inspection method, and can be noisy.
Management of infrastructure systems at the component level include inspection, repair, and replacement. Inspection provides an estimate of the CI at a cost, replacement involves completely substituting the component at a higher expense, and repair, a more cost-effective option, aims to improve the CI. These actions are fundamental to maintaining the overall health of the infrastructure system.
§.§ Partially Observable Markov Decision Process
A discrete-time finite-horizon POMDP M is specified by the tuple (, , , T, Z, R, H), where denotes a finite set of states, denotes a finite set of actions, and denotes a finite set of observations. T : × A →Δ() denotes the transition probability function, where Δ() is the space of probability distributions over . Furthermore, and Z : ××→Δ() denotes the observation probability function where Δ() is analogous to Δ(). Finally, R : ×→ [-R_min, R_max] denotes the reward function and H ∈ℕ_0 denotes the finite planning horizon.
For the above POMDP, at each time step, the environment is in some state s ∈ and the agent interacts with the environment by taking an action a ∈. Doing so results in the environment transitioning to a new state s̅∈ in the next time step with probability T(s,a,s̅). Simultaneously, the agent receives an observation o ∈ regarding the state of the environment with probability Z(o|s̅,a) which depends on the new state of the environment and the action taken by the agent. In a POMDP the agent doesn't have access to the true state of the environment. However, the agent can update it's belief about the true state of the environment using this observation. The agent also receives a reward R(s,a).
The problem of optimal policy synthesis for a finite-horizon POMDP is that of choosing a sequence of actions which maximizes the expected total reward.
[∑_t=0^Hr_t] where r_t is the reward earned at time instant t.
Hence the optimal behavior may often include actions which are taken simply because they improve the agent's belief about the true state. After reaching the state s^', the agent receives observation o ∈ with probabily Z(o|s^',a). Let the belief b be a probability distribution over . Then, b(s) denotes the belief state and the agent updates the belief state according to Bayes' rule.
§.§ Data-Driven Approaches for Decision Making
Data-driven approaches, such as reinforcement learning (RL), inverse reinforcement learning (IRL) <cit.>, and imitation learning (IL), have emerged as powerful tools for learning optimal decision-making policies in sequential decision-making problems. In RL, an agent learns to make decisions by interacting with an environment modeled as a (PO)MDP. The agent's goal is to learn a policy π: ×→ [0,1], which maps states to a probability distribution over actions, that maximizes the expected cumulative reward over a horizon H:
π^* = _π_π[∑_t=0^H R(s_t, a_t)]
where R(s_t, a_t) is the reward obtained by taking action a_t in state s_t at time step t, and the expectation is taken over the trajectories generated by following policy π. RL algorithms can be broadly classified into value-based methods <cit.>, which learn a value function that estimates the expected cumulative reward from each state or state-action pair, and policy-based methods <cit.>, which directly learn a parametrized policy. RL's success in various domains can be attributed to its ability to adaptively learn optimal strategies through trial and error.
Inverse Reinforcement Learning (IRL) addresses the problem of learning a reward function that explains the behavior of an expert demonstrator. Given a set of expert demonstrations 𝒟 = (s_0, a_0), (s_1, a_1), …, (s_T, a_T), where (s_t, a_t) represents the state-action pair at time step t, the goal of IRL is to find a reward function R(s,a) that rationalizes the expert's behavior:
R^* = _Rℒ(R |𝒟)
where ℒ(R |𝒟) is a likelihood function that measures how well the reward function R explains the expert demonstrations 𝒟. Common approaches to IRL include maximum entropy IRL <cit.> and Bayesian IRL <cit.>.
Imitation Learning (IL) focuses on learning a policy that mimics the behavior of an expert demonstrator. Given a set of expert demonstrations 𝒟, the goal of IL is to learn a policy π that generates behavior similar to the expert. IL can be approached through behavioral cloning <cit.>, which treats IL as a supervised learning problem and learns a mapping from states to actions by minimizing a loss function between the predicted actions and the expert actions:
π^* = _π∑_(s,a) ∈𝒟ℓ(π(s), a)
where ℓ(·, ·) is the chosen loss function. Alternatively, apprenticeship learning seeks to learn a policy that achieves a similar expected cumulative reward as the expert policy under some unknown reward function, often by iteratively solving an RL problem with a reward function learned via IRL based on the expert demonstrations.
The effectiveness of RL and IL is heavily contingent on the availability of accurate and comprehensive simulation environments and expert demonstrations, which presents unique challenges in the infrastructure domain. Benchmarks and baselines are essential in the realm of RL and IL as they provide standard metrics and methods for comparing different approaches. Benchmarks offer predefined problems with set parameters and goals, allowing for a consistent and fair evaluation of various strategies. Baselines, typically consisting of established methods or algorithms, serve as a reference point to gauge the performance of new approaches.
§ PROBLEM FORMULATION
Infrastructure management is a complex problem influenced by environmental, manufacturing, and operational factors. In real-world infrastructure systems, control over the environment and manufacturing is limited. Therefore, the space of possible decisions spans over the operational factors. We focus on optimizing management decisions under budget constraints, while ensuring that our model captures the stochastic nature of component deterioration and the partial observability of infrastructure condition.
In this section, we formalize the infrastructure management problem as a Partially Observable Markov Decision Process (POMDP) closely following the multi-component POMDP with shared budget formulation in <cit.> and proceed to discuss the challenges and research questions arising when optimizing infrastructure management decisions.
§.§ Modeling Infrastructure Systems as POMDPs
We model a large-scale, hierarchical infrastructure system as a single collection of components. Let n denote the total number of components in the infrastructure system. We model the condition index (CI) dynamics of each component as an independent POMDP with the assumption that the detoriation dynamics of individual components are not related. Specifically, let M^i denote a POMDP representing the dynamics of component i for i ∈{1,2,…,n}. For each component, the state space ⊂ℕ_0 is given by = {0,1,2, …, s_max}, where s_max∈ℕ_0. The state at any time step k denotes the CI of the component at that time step. The observation space is given by 𝒪 = ∪{e}, where e ∈ℕ_0 is a null observation that does not provide any information regarding the true state of the system.
The action space for each component is given by = {d,q,r,m} where (i) action d, the do nothing action, lets the component's CI transition to a new state following the deterioration dynamics, (ii) action q, the inspection action, follows similar state transition dynamics as action d and provides the true state as the observation, (iii) action r, the repair action, improves the CI of the component to a new state s' where s < s' ≤ s_max and provides the true state as the observation, and (iv) action m, the replace action, drives the component state to s_max, and also provides the true state as the observation.
The transition probability function for each component, governed by its deterioration dynamics D, is defined as
T(s,a,s̅) =
1, if s̅ = s_max and a = m,
1, if s̅ = s' and a = r,
D(s,s̅), if s̅≤ s and a ∈{d, q},
0, otherwise.
Similarly, the observation probability function for each component is defined as
Z(s̅,a,o) =
1, if o = s̅ and a ∈{q, m, r}
1, if o = e and a = d
0, otherwise.
The reward function for each component depends on the objective and constraints of the research problem considered. We discuss the reward formulation in detail in later sections.
In addition to the POMDP model M^i, each component is also associated with additional parameters and meta-data that capture the component's importance, maintenance costs, hierarchy, among other attributes. These parameters are essential for modeling the infrastructure system as a whole and are used to define the budget constraints, resource availability, and other operational considerations. Let Ω^i denote the set of additional parameters associated with component i including λ^i ∈ [0,1], the relative importance of component i in the infrastructure system, δ^i ∈ [0, s_max], the failure threshold of component i, and c^i_d, c^i_q, c^i_r, c^i_m, the costs associated with taking actions d, q, r and m respectively for component i.
We manage the collection of n components {(M^1,Ω^i), (M^2,Ω^2), …, (M^n,Ω^n)} with a shared budget B. The budget B is allocated across the components to perform maintenance, repair, and replacement actions. Assume that the number of d^i,q^i, m^i and r^i actions taken for component i for a horizon H are n^i_d,n^i_q, n^i_m and n^i_r respectively. Then, the total cost incurred for the all the components for the horizon H is given by:
C_H = ∑_i=1^|| (n^i_dc^i_d + n^i_qc^i_i + n^i_rc^i_r + n^i_mc^i_m).
§.§ Problem Statement
For an infrastructure system {(M^1,Ω^1), (M^2,Ω^2), …, (M^n,Ω^n)} with a shared budget B, we study a series of research problems that aim to find an optimal policyπ^* that maximizes the time before the components reach their failure thresholds while operating under the budget and other operational constraints. Formally, our goal is to find an optimal policy π^* under objective functions of the form
π^* = _π[∑_t=0^H∑_i=1^nλ^i ·𝕀(s^i_t > δ^i)]
while ensuring that, at minimum, the total cost incurred over the time horizon H does not exceed the total budget B i.e. C_H = ∑_i=1^n (n^i_dc^i_d + n^i_qc^i_q + n^i_mc^i_m + n^i_rc^i_r) ≤ B.
§.§ Research Problems in Infrastructure Management
In this section we present some of the interesting research questions that arise while solving the infrastructure management formulation presented in the previous section.
§.§.§ Hierarchical Decision Making
Infrastructure systems exhibit an inherent hierarchical structure, including components, units, and facilities. This hierarchy adds complexity to decision-making processes, as in many real-world applications the decisions are often made at different levels of the hierarchy. Further, the decisions made at lower levels can have cascading effects on higher levels.
§.§.§ Stochastic Component Deterioration
The deterioration of infrastructure components is inherently stochastic, influenced by environmental factors, wear-and-tear, and unexpected events. Accurately modeling this stochastic deterioration and integrating these models into decision-making processes is crucial, especially to ensure that the data-driven approaches trained in simulation environments work robustly in real-world deployments. Some additional challenges include how RL can be adapted to operate in environments with high levels of uncertainty and variability.
§.§.§ Budget Constraints
Management of infrastructure systems often involves operating under strict budget constraints. Key research questions involve optimal resource allocation for maintenance, repair, and replacement actions, and balancing short-term costs against long-term infrastructure health and functionality. Further, the budget is often not fixed and can vary over time, requiring adaptive policies and policies that plan over a long horizon to ensure optimal resource utilization.
§.§.§ Partial Observability
Most learning-based approaches including state-of-the-article reinforcement learning algorithms often assume a full observability of the environment. However, in infrastructure management, the condition of components is often only partially observable, requiring costly inspections to estimate the true state. Research is needed to develop algorithms that can effectively handle partial observability and make informed decisions based on uncertain or incomplete information.
§.§.§ Interpretability
Unlike other applications of learning-based decision making, interpretability and explainability are crucial for the adoption of intelligent approaches in real-world infrastructure management. Infrastructure management often involves critical decisions that impact public safety and economic stability and therefore it is essential to understand why learned-policies make certain decisions and how to ensure that these decisions align with domain-specific perceptions, requirements and constraints.
§.§.§ Sim2Real Gap
Simulation environments are essential for training and evaluating RL models. However, the gap between simulated environments and real-world dynamics can lead to ineffective policies when deployed in the real world. Research is required to develop simulation environments that accurately reflect the complexity and stochasticity of the real world, as well as algorithms that can bridge the Sim2Real gap.
§.§.§ Sparsity and Time Scales
Unlike typical environments modeled as (PO)MDPs, infrastructure management decisions are made over long time horizons and often involve sparse actions where the agent often has to stay idle and wait for the environment to evolve before taking an action. Further, the decisions often have long-term impacts, with rewards or consequences of actions not immediately observable. Research into approaches and reward shaping techniques capable of handling sparse rewards is crucial for effective infrastructure management.
§.§.§ Scalability and Computational Efficiency
Real-world infrastructure systems are large-scale, often involving millions of individual components distributed across facilities. The simulation environment and the decision-making algorithms need to be scalable, capable of handling large-scale problems efficiently. Further, policies trained using transfer-learning and meta-learning methods should be able to generalize across different infrastructure systems and scenarios while maintaining computational efficiency. Research into scalable solution methods and efficient computation techniques is vital for practical applicability.
§ INFRALIB
InfraLib is a comprehensive modeling, simulation, and analysis framework designed to enable research into data-driven, learning-based decision making for infrastructure management under uncertainty. It provides predefined, structured environments while also allowing users to flexibly define custom scenarios and constraints. The code, documentation, example environments, and tutorials are available at <https://infralib.github.io/>.
§.§ InfraLib Structure
InfraLib framework adopts a modular architecture, which enables separation of concerns and easy extensibility. The core infrastructure model is designed to be highly configurable, allowing users to define custom components, deterioration models, objectives, constraints, and management actions. The hierarchical structure of infrastructure systems is also configurable, enabling users to group components into units and facilities in domain-specific ways.
InfraLib is implemented as a Python library, leveraging popular scientific computing packages like NumPy and Numba for efficient computation. The framework is designed to be user-friendly, with a simple and intuitive API that abstracts the underlying complexity. This makes InfraLib accessible to a wide range of users, from researchers and practitioners to students and educators. The functionalities of InfraLib library are organized into different modules, with the Core module providing the foundational capabilities of modeling and simulating large-scale infrastructure systems. Additional modules, including the analysis module, visualization module, and expert data collection module, offer advanced tools for understanding infrastructure dynamics and assessing policy performance. The input-output module ensures that all data is stored and retrieved in a standardized format, facilitating seamless integration with external tools and libraries and enabling reproducibility and collaboration.
A key emphasis in InfraLib's design is scalability and computational efficiency. Through a scalable software architecture and efficient algorithms, the framework can simulate infrastructure systems comprising millions of components and spanning long time horizons. This massive scale is crucial for bridging the gap between research and the complexity of real-world infrastructure networks.
§.§ Component Condition and Cost Dynamics
In InfraLib, the Condition Index of each component, used to quantitatively represent the component's current state of degradation or functionality, takes values in the range [0, 100]. The CI evolves stochastically over time, and the dynamics of component deterioration are modeled as a Markov chain with transition probability function D(s,s'). Following the literature <cit.>, we model the CI dynamics as a Weibull distribution tailored to each component's deterioration pattern. The Weibull distribution is a flexible model that can capture a wide range of real-world deterioration behaviors, from early-life failures to wear-out failures. The Weibull distribution is parameterized by shape parameter k and scale parameter λ, with the CDF given as:
F(x; k, λ) = 1 - e^-(x/λ)^k.
For every component, based on real-world data, we assume access to the mean and variance of the shape and scale parameters its Weibull distribution. To generate the transition function D^i(s,s') for component i, we collect multiple samples of k and λ from their respective distributions:
k ∼ N(μ_k, σ_k^2)
λ∼ N(μ_λ, σ_λ^2)
and then compute the transition probabilities by estimating them from scaled Weibull CDF values where in each sample trajectory, the CI at time step k is given by:
CI(t) = ⌊ 100 × (1 - F(t; k, λ)) ⌋.
This equation ensures that when t is 0 (representing a new or fully functional component), CI(t) is 100 (least degraded), and as t increases towards the end of the component's expected lifecycle, CI(t) approaches 0 (most degraded).
The cost of repairing a component in InfraLib is designed to reflect the degree of degradation and the urgency of intervention, based on the condition index. The cost is dynamically calculated based on the state of the component at the time of repair and the effectiveness of the repair action, and is given as:
c_r^i = (100 - s^i/100 - δ^i)^α^i× c_m^i
where s^i is the current CI of component i, δ^i is the failure threshold, α^i is a parameter that adjusts the sensitivity of repair costs to damage, and c_m^i is replacement cost of component i. This formulation ensures that repairing severely damaged components is proportionally more expensive, aligning repair costs with the component's condition and the urgency of repairs.
§ INFRALIB FUNCTIONALITY
InfraLib supports resource allocation problems under several constraints and scenarios that are common in real-world infrastructure management. In this section, we highlight some of the key functionalities of InfraLib and discuss how they can be used to address critical research questions in infrastructure management.
§.§ Optimal Budget Allocation
InfraLib is fundamentally designed to enable optimal budget allocation for large-scale infrastructure systems comprising numerous components. As discussed in <ref>, the modeling framework allows users to optimize actions while considering budget constraints, importance scores, and component deterioration dynamics. Figure <ref> illustrates a simulation of this model through the visualization of condition indices for different components.
In a given instance, InfraLib can simulate the evolution of millions of component instances of tens of thousands of component types over a long time horizon. The simulation accepts the actions taken on each component at each time step as input, and transitions the components to new states after verifying that the actions are feasible under the budget constraints.
§.§ Intermittent Component Availability
All the components in an infrastructure systems may not always be available for management actions. This is especially relevant in the case of real-world infrastructure systems that are in remote or inaccessible locations, or critical infrastructure components that cannot be taken offline for maintanance. InfraLib supports modeling intermittent component availability, where components can be marked as unavailable for certain time periods. Figure <ref> illustrates the condition index of a component that is intermittently unavailable for inspect, repair, and replacement actions.
In addition to enabling users to simulate and analyze scenarios where components are only available for inspection, repair, or replacement during specific time windows, InfraLib also allows users to evaluate the impact of these constraints on their management policies.
§.§ Cyclic Budget
InfraLib can model scenarios with a cyclic budget, where the total budget allotted for infrastructure management is reset to a fixed amount periodically. For instance, the budget could be replenished annually to a predetermined value. Under a cyclic budget schedule, the user can specify either a fixed budget and cycle length or a budget profile with replenished Budget and cycle length that varies over time. Any resources that are not utilized within the current cycle are forfeited and do not carry over.
In addition to the cyclic budget schedules that are directly supported, users can also define custom budget profiles that reflect the budget allocation patterns in their system. Modeling such budget schedules allows testing management policies under real-world resource constraints where budget allocations tend to be more complex than simple cyclic schedules.
§.§ Catastrophic Failures
InfraLib enables modeling of unexpected catastrophic failure events that severely impact infrastructure components instantly. Users can specify failure events to occur at predefined time steps during a simulation or optionally let the library generate random failure events based on a built-in, predetermined distribution. The catastrophic failures can affect one or more components, and can be configured based on the component metadata such as the component type, location, or facility to introduce spatial and temporal dependencies. Figure <ref> illustrates how the condition indices of components change in the event of a catastrophic failure.
Simulating catastrophic failures provides a mechanism to stress-test infrastructure systems and management policies. It is particularly crucial for evaluating the resiliency of management policies to extreme weather events, natural disasters, or other unforeseen circumstances.
§.§ RL Environments
InfraLib generates standardized reinforcement learning environments that encapsulate the complexities of infrastructure management problems while maintaining compatibility with popular RL libraries. Given an infrastructure system modeled in InfraLib, we generate an RL environment E = (𝒮, 𝒜, 𝒫, ℛ, γ). The state space 𝒮 = ∏_i=1^n 𝒮_i ×ℝ^+ incorporates the condition indices of all components and the remaining budget. The action space 𝒜 = ∏_i=1^n 𝒜_i represents all possible combinations of actions across components. The transition probability function 𝒫(s' | s, a) = ∏_i=1^n T_i(s'_i | s_i, a_i) ·𝕀(b' = b - c(a)) is derived from component-level transition probabilities, where T_i is the transition function for component i, c(a) is the total action cost, and 𝕀 is the indicator function. The generic reward function ℛ(s, a, s') = ∑_i=1^n w_i · f_i(s_i, s'_i) - λ· c(a) balances the change in component conditions with action costs based on user specification.
To model partial observability, InfraLib can generate POMDP environments E' = (𝒮, 𝒜, 𝒫, ℛ, 𝒪, Z, γ), where the observation space 𝒪 = ∏_i=1^n (𝒮_i ∪u) ×ℝ^+ includes an unknown state u for uninspected components. The observation function Z(o | s, a) = ∏i=1^n Z_i(o_i | s_i, a_i) reflects the inspection history and recent actions. These environments provide standardized interfaces (, , ) compatible with popular RL libraries, facilitating the application and evaluation of RL algorithms to infrastructure management problems.
So far, we have discussed InfraLib from the perspective of modeling and simulating infrastructure systems for learning decision-making policies and evaluating them under various constraints. In the following section, we will delve into the tools provided by InfraLib for analysis and data collection from experts.
§ INFRALIB HUMAN INTERFACE
Analysis of existing infrastructure management policies in a unified framework is crucial for enabling decision-makers to compare and evaluate different policies and their impact on different aspects of the infrastructure system. In addition, the ability to collect expert data from human decision-makers is essential for training imitation learning approaches that can leverage these demonstrations without specific reward functions or extensive exploration. In this section, we discuss the tools provided by InfraLib for expert data collection and analysis.
At the core of InfraLib's human interface for analysis and data collection is an intuitive web-based dashboard interface. The interface is powered by a simulation process running in the background and provides experts with detailed information about the current state of a simulated infrastructure system, including component condition indices, recent observations, and historical management actions. The experts can inspect components and allocate resources for maintenance, repair, or replacement based on their domain knowledge. A snapshot of the dashboard with the options available for the user for analysis is shown in Figure <ref>.
Behind the scenes, InfraLib logs the full trajectory of expert actions, observations, and environment states, and the metadata of the scenario analyzed by the expert. These demonstrations can be used to train imitation learning algorithms to mimic expert behavior or to infer expert's preferences and priorities using inverse reinforcement learning. The expert data collection process is designed to be seamless and user-friendly, allowing experts to focus on demonstrating their management strategies without worrying about the technical details of the simulation environment. In addition to the expert demonstrations collected using the dashboard, InfraLib also supports batch uploads of expert demonstrations generated externally.
InfraLib allows injection of expert knowledge directly into the simulations. Experts can specify replacement thresholds, priority rules, or repair strategies for different components. This expert knowledge can help make the simulations more realistic and guide the agent's exploration in reinforcement learning approaches revealing areas that require tighter mimicking of experts versus allowances for more agent creativity. The modular design also enables the use of expert sub-policies for managing specific components alongside learning-based controllers.
§ EXAMPLE ENVIRONMENTS AND BENCHMARKS
This section provides some sample problems and scenarios that can be modeled using InfraLib. The goal of this section is two fold: (i) to provide a set of ready-to-start scenarios where other researchers can directly test their approaches and use similar templates to design their own custom environments, (ii) to provide baseline benchmarks that other researchers can use to compare the performance of their approach.
§.§ Champaign-Urbana Road Network Management
We model the road network in Champaign-Urbana, a metropolitan area in Illinois, United States, using InfraLib and simulate its deterioration without intervention. The road network data is sourced from OpenStreetMap (OSM), which provides detailed attributes for each road segment. These attributes are used to parameterize the deterioration dynamics in the simulation.
The key OSM attributes utilized in the model include the road type (specified by the highway tag), number of lanes, maximum speed limit, and surface material. The highway tag is particularly important as it classifies the road into categories such as motorway, trunk, primary, secondary, tertiary, residential, and service, which have distinct deterioration characteristics.
In the InfraLib model, each road segment is treated as a separate component with its own Weibull deterioration dynamics. The shape and scale parameters of the Weibull distribution for each segment are determined based on a combination of the OSM attributes. For instance, segments with a higher speed limits are assumed to have higher deterioration rates compared to lower-class roads <cit.>. Similarly, the surface material affects the deterioration, with asphalt and concrete roads having slower deterioration compared to gravel roads <cit.>.
The number of lanes and road width also influence the deterioration dynamics, as wider and multi-lane roads typically have higher construction standards and are more resilient to wear and tear. We note that the generated deterioration dynamics utilizing road attributes are illustrative, and serve to showcase InfraLib's capabilities in modeling and simulating infrastructure systems. However, it's important to emphasize that these generated dynamics are not intended to be an accurate representation or predictive model of real-world system behavior.
Figure <ref> illustrates the simulated deterioration of the Champaign-Urbana road network over a 50-year period using InfraLib. The condition index of each road segment is visualized on a color scale, with blue indicating good condition and red indicating poor condition. As seen in the figure, the road network progressively deteriorates over time, with different segments deteriorating at different rates based on their attributes.
This realistic simulation of the Champaign-Urbana road network showcases InfraLib's ability to model large-scale infrastructure systems with heterogeneous components having unique deterioration characteristics.
§.§ LargeSys-100K - Large-Scale infrastructure System Management
To demonstrate InfraLib's scalability and ability to handle large-scale infrastructure systems, we introduce the LargeSys-100K benchmark. This synthetic dataset consists of a massive network with 100,000 component instances spanning 1000 different component types. Each component type has 100 instances, resulting in a total of 100,000 components.
The deterioration dynamics and cost parameters for each component type in LargeSys-100K are synthesized based on realistic ranges observed in real-world infrastructure data. The Weibull distribution shape and scale parameters, inspection costs, repair parameters, and replacement costs for each component type are randomly generated while ensuring they fall within these practicable ranges.
LargeSys-100K serves as a standardized benchmark for comparing the performance and scalability of different infrastructure management approaches. By utilizing a large number of components and component types, LargeSys-100K aims to test the scalability and computational efficiency of infrastructure management algorithms. The vast scale of this benchmark poses challenges in terms of memory usage and computation time, pushing the boundaries of optimization and learning-based approaches.
Moreover, the diversity of component types in LargeSys-100K adds an additional layer of complexity. With varying deterioration dynamics and costs across component types, algorithms must be able to effectively prioritize and allocate resources considering the heterogeneity of the infrastructure system.
§ CONCLUSION AND FUTURE WORK
This paper introduced InfraLib, a comprehensive and versatile simulation framework designed to model and analyze large-scale infrastructure management problems. By providing a realistic and granular representation of infrastructure systems, InfraLib enables the application of reinforcement learning and other learning-based decision-making techniques to the complex domain of infrastructure management. The framework's hierarchical and stochastic approach accurately captures the nuances of real-world systems, including budget constraints, resource availability, and the geographical distribution of components. Through a variety of realistic scenarios and benchmarks, InfraLib demonstrates its potential to significantly impact the field, offering researchers and practitioners the means to develop, test, and refine strategies for efficient and effective infrastructure maintenance and allocation.
Looking ahead, there are several promising avenues for future work and enhancements to InfraLib. One direction is to expand the framework's capabilities to model finite crew allocation and scheduling problems, which go hand-in-hand with infrastructure management. Another direction is to integrate transfer learning and meta-learning algorithms to enable the rapid adaptation of learned policies to new infrastructure systems or changing environmental conditions. Furthermore, integrating InfraLib with other tools and platforms commonly used in infrastructure management, such as geographic information systems (GIS) and asset management software, would streamline the data exchange process and facilitate the adoption of learning-based approaches in practice. Finally, building a vibrant community around InfraLib is crucial for its long-term success and impact. Encouraging researchers and practitioners to contribute new components, deterioration models, and management strategies will ensure that the framework remains up-to-date and relevant.
unsrtnat
|
http://arxiv.org/abs/2409.02466v1 | 20240904062942 | CUEMPATHY: A Counseling Speech Dataset for Psychotherapy Research | [
"Dehua Tao",
"Harold Chui",
"Sarah Luk",
"Tan Lee"
] | eess.AS | [
"eess.AS",
"cs.SD"
] |
UAV-Mounted Movable Antenna: Joint Optimization of UAV Placement and Antenna Configuration
Xiao-Wei Tang, Yunmei Shi, Yi Huang, and Qingqing Wu
Xiao-Wei Tang, Yunmei Shi, Yi Huang ({xwtang, ymshi, and huangyi718b}@tongji.edu.cn) are with the Department of Information and Communication Engineering, Tongji University, Shanghai, China.
Qingqing Wu ([email protected]) is with the Department of Electronic Engineering, Shanghai Jiao Tong University, Shanghai 200240, China.
3 September 2024
======================================================================================================================================================================================================================================================================================================================================================================================================
§ ABSTRACT
Psychotherapy or counseling is typically conducted through spoken conversation between a therapist and a client. Analyzing the speech characteristics of psychotherapeutic interactions can help understand the factors associated with effective psychotherapy. This paper introduces CUEMPATHY, a large-scale speech dataset collected from actual counseling sessions. The dataset consists of 156 counseling sessions involving 39 therapist-client dyads. The process of speech data collection, subjective ratings (one observer and two client ratings), and transcription are described. An automatic speech and text processing system is developed to locate the time stamps of speaker turns in each session. Examining the relationships among the three subjective ratings suggests that observer and client ratings have no significant correlation, while the client-rated measures are significantly correlated. The intensity similarity between the therapist and the client, measured by the averaged absolute difference of speaker-turn-level intensities, is associated with the psychotherapy outcomes. Recent studies on the acoustic and linguistic characteristics of the CUEMPATHY are introduced.
Index Terms: speech dataset, counseling session, empathy rating, speech-text alignment
§ INTRODUCTION
Psychotherapy or counseling is an activity of conversational speaking involving a client and a therapist/counselor. “Through the verbal transactions of the therapy" <cit.>, clients are encouraged to express their thoughts and feelings, think more deeply about issues at hand, and make changes in life. The aims of psychotherapy are to lower clients' psychological distress, enhance their well-being, and reduce maladaptive behaviors that hinder their relationship and work functioning <cit.>. Better understanding about the factors associated with effective psychotherapy can be achieved by examining the speech characteristics of psychotherapeutic interactions in conjunction with traditional assessments of psychotherapy process and outcome. Previous studies investigated how language expression and style are correlated with psychotherapy outcomes <cit.>. Vocal cues related to the counseling quality were studied extensively in <cit.>. The findings of these studies are useful to the prediction of clinical outcomes and can guide what and how therapists should speak and express during counseling to maximize therapeutic effect.
One important feature of counseling and psychotherapy is confidentiality. Protecting clients' privacy does not only aim to meet ethical and professional standards <cit.>, but also help clients build up trust so that they can work on their issues freely and productively. To uphold the highest ethical practice, the standard of confidentiality extends beyond the counseling room to research activities. In the context of applying the latest data analytics technology, special attention must be paid when psychotherapy recordings need to be transmitted over the network for processing by any third-party software, e.g., cloud-based speech transcription, given the uncertainty around data storage and use. Psychotherapy researchers therefore choose to collect their own data and use these data to build tailor-made systems for data analysis.
This paper describes a recent study on constructing a large-scale database of counseling speech. The data are collected from actual counseling sessions at a counseling clinic of the Department of Educational Psychology in our university. In each session, a counseling trainee (master student) is arranged to see a help-seeking client under supervision. The speech dataset, named CUEMPATHY, contains hundreds of hours of audio recordings and text transcriptions of spoken conversation between therapists and clients. These data are intended to support the development of speech and language technologies that can be applied to analyze psychotherapy interactions efficiently and provide data evidences for counselor training.
Section <ref> gives the background and describes the process of data collection. Section <ref> details how manual transcription and empathy rating are done. Section <ref> presents what has been done on the processing of raw recordings. Section <ref> and <ref> describe our preliminary investigation and on-going work on counseling speech analysis with CUEMPATHY. Conclusion is given in Section <ref>.
§ DATA COLLECTION
§.§ Background
In the psychotherapy practicum, counseling trainees are required to take part in a training program of 20 weeks. The clients are adults from the general community who come to seek reduced-fee psychotherapy services. The clients' concerns are related to emotion, stress, career, relationship, personal growth, and self-esteem. All trainee therapists and clients speak Hong Kong Cantonese with occasional English code-mixing. An informed consent was obtained from each participating client or trainee therapist. The procedures followed the standard research ethics of the American Psychological Association and were approved by the university's Institutional Review Board.
Before formal counseling, potential clients were asked to go through a telephone screening. If they were deemed unsuitable for the service, they would be referred to other health professionals in the community. Reasons for referral included ongoing suicidal ideation, psychosis, active substance use, or any other concerns requiring intensive intervention. For the enrolled clients, trainee therapists discussed with their supervisors the counseling goals and strategies rather than following a specific treatment protocol. Counseling sessions were conducted weekly during the 20-week practicum, but not every client started in the first week, and not every client ended in the last week. As part of training, trainee therapists received individual and group supervision.
A total of 428 counseling sessions were recorded with 4 cohorts of practicum trainees (a total of 40 trainee therapists) during the period of November 2017 to October 2019. Each of the 40 therapists was paired with a designated client. Thus there were totally 40 unique therapist-client dyads. The number of counseling sessions conducted by these dyads varied from 6 to 16, with the mean of 10.7±2.83.
The CUEMPATHY dataset is made up of 156 selected sessions. They cover 39 therapist-client dyads (the excluded dyad has no audio recording) and 4 sessions per dyad. Table <ref> gives the gender and age information of therapists and clients.
§.§ Recording session setup
All counseling sessions were conducted in a sound-attenuated room. Only the therapist and the client were present in the room. Each session was about 50 minutes long. All sessions were video-taped by a camera mounted on the roof and audio-recorded by a digital recorder (TASCAM SD-20M). The therapist and client each wore a lavalier microphone clipped between the collar and chest. The microphones were connected to the recorder. Audio recording was done with sampling rate of 48 kHz and two channels (stereo). Before a session commenced, a research assistant helped to install the microphones for both speakers and tested the recording equipment. In a typical session, the therapist and the client took turn to speak. A speaker turn refers to the time period in which only one person speaks.
§ EMPATHY RATING AND SPEECH TRANSCRIPTION
§.§ Subjective measurement of empathy
Therapist empathy has long been hypothesized to be a primary indicator of counseling outcomes <cit.>. Empathy is described as “the therapist's sensitive ability and willingness to understand the client's thoughts, feelings, and struggles from the client's point of view" <cit.>. Two instruments are used to measure the therapist empathy in this research. They include Therapist Empathy Scale (TES) <cit.> rated by third-party observers, and Barrett-Lennard Relationship Inventory (BLRI) <cit.> rated by the clients. Another client-rated measure, Session Evaluation Scale (SES) <cit.>, is also adopted to assess clients' perception of session quality. The session quality measures the effectiveness of counseling session from client's perspective.
§.§.§ Therapist Empathy Scale (TES)
TES is an observer-rated measure of therapist empathy. It comprises nine items that cover the affective, cognitive, attitudinal, and attunement aspects of empathy. The observer gives a score on each item after watching the video recording of each session. The scores follow a 7-point scale: 1 = not at all to 7 = extremely. An example item is “Concern: A therapist conveys concern by showing a regard for and interest in the client. The therapist seems engaged and involved with the client and attentive to what the client has said. The therapist's voice has a soft resonance that supports and enhances the client's concerned expressions.” The total score (ranging from 9 to 63) is used in this research, with a higher value indicating higher therapist empathy.
§.§.§ Barrett-Lennard Relationship Inventory (BLRI)
Being a client-rated measure of therapist empathy, the BLRI is made up of sixteen items. Each item is rated on a 6-point Likert-type scale from -3 = strongly disagree to +3 = strongly agree. Zero value is not used in the scoring. An example item is: “My counselor tries to see things through my eyes.” The overall BLRI score ranges from -48 to +48, with a higher value indicating higher therapist empathy as perceived by clients.
§.§.§ Session Evaluation Scale (SES)
In our research, SES consists of five items, including four items from the original SES <cit.> rated on a 5-point Likert-type scale from 1 = strongly disagree to 5 = strongly agree, and one added item for assessing session effectiveness and increasing scale variance as suggested by Lent et al. <cit.>. An example item is: “I am glad I attended this session.” The value of SES ranges from 5 to 25. A higher value indicates better session quality as perceived by clients.
§.§ Transcription and empathy rating of CUEMPATHY
Speech transcription was done on a speaker-turn basis, with the speaker identity marked, i.e., therapist or client, and the speech content transcribed in the form of traditional Chinese characters with punctuation marks. The transcription was done by trained undergraduate research assistants at the counseling clinic. Identifiable information, such as names and locations, were removed from the transcriptions before analysis. Only research personnel involved in this study are allowed to access the audio data and text transcriptions.
Clients were asked to complete both BLRI and SES after each session. For observer-rated TES, 8 raters with at least master’s level training in counseling were recruited. They were trained first on performing TES rating with 8 video-taped counseling sessions that had been collected with the same setting. The intra-class coefficient (ICC) for the 8 raters on the 8 training sessions was 0.79. According to the Cicchetti’s guidelines <cit.>[The Cicchetti’s guidelines on inter-rater reliability: ICCs < 0.40 mean poor, 0.40 - 0.59 mean fair, 0.60 - 0.74 mean good, and > 0.75 mean excellent reliability.], the raters were considered to achieve excellent inter-rater reliability. The raters then proceed to rate the sessions in CUEMPATHY. As a reliability check, about 40% (62 sessions) of the sessions were rated by two raters. The ICC based on a mean-rating (k = 2), consistency, two-way random effects model was 0.90, indicating excellent inter-rater reliability beyond the training phase.
§ PROCESSING OF SPEECH RECORDINGS
In order to facilitate focused analysis of therapist and client speech, the time stamp information of each speaker turn in the audio recording needs to be known. Since the speaker-turn-based transcriptions are available, this problem can be formulated as (1) aligning long audio to corresponding text transcription <cit.>; (2) finding the beginning and ending time of all speaker turns.
Inspired by the work of <cit.>, a speech-text alignment system is developed to produce turn-level time alignment of the recorded counseling speech. Figure <ref> shows the system's block diagram. In the following sub-sections, we will explain the process of obtaining turn-level alignment of speech data and text transcription for a 50-minute recording.
§.§ Data pre-processing
The original stereo audio is first converted into mixed-mono audio and down-sampled to 16 kHz. The long audio is split into short voiced segments using a voice activity detection (VAD) tool, namely WebRTC VAD[https://github.com/wiseman/py-webrtcvad]. The Character2Syllable module converts the given transcription from Chinese characters to Cantonese syllables in the form of Jyutping symbols <cit.>. Code-mixing English words are also represented by Jyutping symbols that resemble the English pronunciations as closely as possible <cit.>.
§.§ Syllable recognition
A Cantonese syllable recognition module is built with the Kaldi toolkit <cit.>. A tri-gram language model is trained using the syllable transcription of the session. Such a session-specific language model is expected to effectively restrict the decoding search space by pruning irrelevant hypotheses. An initial acoustic model trained with other Cantonese databases is used. Each voiced segment detected by the VAD module is passed through the recognizer to produce a hypothesized sequence of syllables (in terms of Jyutping symbols). All segment-level hypotheses are concatenated chronologically to form a sequence of syllables for the whole recording.
§.§ Sequence alignment and anchor selection
The hypothesized syllable sequence obtained for the entire session is aligned with the manual transcription. The globally best alignment is determined by applying dynamic programming. The aligned sub-sequences of syllables are chosen as anchors. Each anchor contains a sequence of N or more consecutive syllables (N is set to 10 in our work) and is assumed to be correctly aligned. The anchors are used to partition the transcription and the audio recording into short segments. After obtaining speech segments and the corresponding text for all sessions, a new acoustic model is trained. The new model would replace the existing one for syllable recognition. The above steps are repeated with the in-domain acoustic model and speech-text segments updated iteratively.
§.§ Forced alignment and turn time locating
Based on segment-level alignment of speech and text, syllable-level time stamps, i.e., the beginning and ending time of each syllable, are obtained by the method of forced alignment. The time stamp of each speaker turn in the recording can be determined by locating the first and last syllables in the turn. So far, the speech data and text of each speaker turn in the recording are obtained.
After processing the recordings of all counseling sessions with the proposed system, the reliability of turn-level alignment of the speech data and text transcription is manually checked. We randomly select one session from each of 39 therapist-client dyads, i.e., a total of 39 sessions. Based on the number of speaker turns, a session is divided into three sections. In each section, at least 20 consecutive speaker turns are selected to check if the turn's speech data and corresponding text are consistent. The result indicates that the reliability of turn-level speech-text alignment is sufficient for further analysis. Table <ref> summarizes speech data in the CUEMPATHY.
§ PRELIMINARY INVESTIGATION ON CUEMPATHY
§.§ Relations among the three subjective ratings
It was found that some items of the BLRI and SES in some sessions were missing. If the amount of missing items for a measure was less than 20%, each missing item's value was replaced by the average of the available observed values to preserve session-level information <cit.>. Five sessions, each missing one item of the SES, were handled this way. All items of the BLRI and SES for one session were missing, so the session was discarded, resulting in a total of 155 sessions for the analysis of relationships among the three measures. Table <ref> shows the mean, standard deviation, and range of the three measures for the 155 sessions.
The Pearson's correlations (ρ) among session-level TES, BLRI, and SES scores are computed. Results show that there is no significant correlation between observer-rated TES and client-rated BLRI (ρ = -0.07) or SES (ρ = -0.07), while the BLRI and SES are strongly positively correlated (ρ = 0.72). The discrepancy between observer and client ratings indicates that different raters pay attention to different aspects of the counseling session in their rating decisions. For example, previous studies suggested that client ratings of therapist empathy may be based on relationship factors other than therapists' particular skills in empathic reflection <cit.>.
§.§ Prosodic similarity between therapist and client in counseling conversations
It is often observed that people interacting in a conversation exhibit similar speech patterns in terms of prosody, speech sounds, and lexicon <cit.>. This phenomenon has been described in a range of terms such as synchrony, entrainment, alignment, and accommodation. The similarity of interpersonal interaction has been found to be associated with empathy <cit.>. In particular, the relationship between similarity in speech prosody and therapist empathy has been extensively studied <cit.>. In this study, we investigate the relations between therapist-client similarity and observer-rated empathy ratings (i.e., TES) based on two prosodic parameters, i.e., pitch and intensity.
The method proposed in <cit.> is adopted in our analysis. The averaged absolute difference of the turn-level parameter between the therapist and the client is computed to measure the degree of entrainment. Then the correlation between session-wise difference and empathy ratings is calculated. Consider a counseling session that contains N speaker turns. Since our focus is to observe the therapist's response to the client's behavior, the client's turn is followed by the therapist's turn for computing difference. Let x(i) and x(i+1) be parameters of speaker turn i and i + 1 that belong to the client and the therapist, respectively. In order to remove the bias of the individual baseline, x(i) for each speaker is a zero-mean parameter by subtracting the mean of raw turn-wise parameters belonging to that speaker in the session. The averaged absolute difference D_x of the session is defined as in Eq. (<ref>).
D_x = 1/N/2∑_i=1^N/2 |x(2i) - x(2i - 1)|
The Pearson's correlation r between the session-wise D_x and empathy ratings is computed for each parameter. A significant correlation is found in the intensity with r = -0.17 and p< 0.05, which indicates that the higher entrainment of intensity between the client and the therapist is associated with higher empathy. The correlation coefficient of the pitch is also in a negative value, although not significant.
§ RECENT WORKS ON CUEMPATHY
Tao et al. proposed to characterize the therapist's speaking style related to empathy with the prosodic cues from therapist speech <cit.>. They found that “the empathy level tends to be low if therapists speak long utterances slowly or speak short utterances quickly, while high if therapists talk to clients with a steady tone and volume". The findings can guide how therapists should speak during counseling. In addition, another work <cit.> of Tao et al. introduced a hierarchical attention network with two-level attention mechanisms to evaluate therapist empathy from the acoustic features of conversational speech in counseling sessions. This study suggested that consecutive turns (2 to 6) may jointly contribute to determining the level of therapist empathy, and observer rating of empathy tended to take the whole counseling session into consideration.
Meanwhile, Lee et al. examined the relationships between durational patterning at discourse boundaries and client-rated therapist empathy in counseling <cit.>. The results suggested that “low-order temporal cues of prosodic phrasings, such as the duration of utterance-final syllables, silent pause, and speech rate, contribute to clients’ higher-order perceptual process of therapist empathy". In another work <cit.>, Lee et al. argued that the discrepancy in observer and client ratings of therapist empathy might be explained by therapists’ use of particles. The study showed that particle usage significantly affected observer-rated TES (the higher the particle usage, the lower the predicted TES) but not client-rated BLRI or SES.
§ CONCLUSIONS
This paper presents a speech dataset CUEMPATHY of manually transcribed and subjectively rated counseling sessions. Observer-rated TES and client-rated BLRI and TES measure the therapist empathy and the session quality. The speech data and text transcriptions of speaker turns in each session are obtained using an automatic speech-text alignment system. Preliminary investigation of the dataset suggests that (1) observer (TES) and client ratings (BLRI or TES) are not correlated, while client-rated measures are significantly correlated; (2) the degree of intensity entrainment between the therapist and the client, measured by the averaged absolute difference of speaker-turn-level intensities, is associated with empathy level. Recent findings on the CUEMPATHY motivate us to further analyze psychotherapy interactions by fusing acoustic and linguistic features in future works.
§ ACKNOWLEDGEMENTS
This research is partially supported by the Sustainable Research Fund of the Chinese University of Hong Kong (CUHK) and an ECS grant from the Hong Kong Research Grants Council (Ref.: 24604317).
IEEEtran
|
http://arxiv.org/abs/2409.03216v1 | 20240905032514 | Optimal Regularity for Fully Nonlinear Nonlocal Equations with Unbounded Source Terms | [
"Disson S. dos Prazeres",
"Makson S. Santos"
] | math.AP | [
"math.AP"
] |
Optimal Regularity for Fully Nonlinear Nonlocal Equations with Unbounded Source Terms
Disson S. dos Prazeres and Makson S. Santos
Received 16 July 2024; accepted 04 September 2024
=====================================================================================
§ ABSTRACT
We prove optimal regularity estimates for viscosity solutions to a class of fully nonlinear nonlocal equations with unbounded source terms. More precisely, depending on the integrability of the source term f ∈ L^p(B_1), we establish that solutions belong to classes ranging from C^σ-d/p to C^σ, at critical thresholds. We use approximation techniques and Liouville-type arguments. These results represent a novel contribution, providing the first such estimates in the context of not necessarily concave nonlocal equations.
Keywords: Nonlocal operators, Unbounded source, Hölder regularity.
AMS Subject Classifications: 35B65, 35D40, 35R11.
Optimal Regularity for Fully Nonlinear Nonlocal Equations with Unbounded Source Terms
Disson S. dos Prazeres and Makson S. Santos
Received 16 July 2024; accepted 04 September 2024
=====================================================================================
§ INTRODUCTION
In this article, we examine the regularity theory of viscosity solutions to a fully nonlinear nonlocal equation of the form
ℐ_σ(u, x) = C(σ)sup_α∈𝒜inf_β∈ℬ∫_ℝ^dδ(u, x, y)y^TA_α,β(x)y|y|^σ+d+2dy = f(x) B_1,
where σ∈ (0, 2), C(σ) > 0 is a normalizing constant, δ(u, x,y):= u(x+y) + u(x-y) -2u(x), 𝒜 and ℬ are indexes sets, and the source term f belongs to a suitable Lesbegue space. Moreover, we suppose the following ellipticity-type condition: for each α∈𝒜 and β∈ℬ, A_α, β is a symmetric d× d matrix (λ, Λ)-elliptic, i.e.,
λ I ≤ A ≤Λ I B_1,
for 0 < λ≤Λ. We establish Hölder regularity for solutions (or their gradient), depending on the range of p for which f ∈ L^p(B_1).
Regularity results for nonlocal operators have been extensively studied over the years. In <cit.>, the author presents an analytical proof of Hölder continuity and introduces more flexible assumptions on the operator than previous studies. But it is only in <cit.>, where such regularity estimates were given uniformly as the degree of the operator σ→ 2, seen as a natural extension of second-order operators. In that paper, the authors also prove properties such as comparison principle, a nonlocal Alexandrov-Bakelman-Pucci estimate (ABP for short), Harnack inequality, and regularity in C^1, α spaces for viscosity solutions to equations of type (<ref>).
Numerous results were uncovered based on the findings in <cit.>. In <cit.>, the authors extended their previous results using perturbative methods, including the C^1, α-regularity for a class of non translation-invariant equations. Around the same time, the authors in <cit.> also worked with non translation-invariant equations, establishing Hölder regularity of solutions, using different assumptions from <cit.>, allowing first-order terms and some degeneracy in the operators. We also mention the work <cit.>, where the author gives C^1, α-estimates for more general kernels than previous results.
Besides the C^1,α estimates, Evans-Krylov-type results have been explored by the community. In this direction, the authors in <cit.> proved that viscosity solutions for concave (or convex) equations of the form
ℐ_σ(u, x) := inf_α∈𝒜∫_ℝ^dδ(u, x, y)K_α(y)dy = 0 B_1,
are of class C^σ+β. They assume that, for each α∈𝒜, the kernel K_α belongs to the class ℒ_2, i.e., they are of class C^2 away from the origin, symmetric, satisfies the ellipticity condition
(2-σ)λ|y|^d+σ≤ K_α(y) ≤ (2-σ)Λ|y|^d+σ,
and
D^2K_α(y) ≤C|y|^d+2+σ.
In <cit.>, the author extended the result above to equations of type (<ref>) with rough kernels, i.e., where the kernels K_α satisfy (<ref>) but not necessarily (<ref>), for every α∈𝒜. Under the additional (and optimal) assumption of C^α exterior data for the solution, the author provides C^σ+α a priori estimates. Moreover, results involving non translation-invariant kernels are also given, provided the dependency on the variable x is of a C^α-fashion. We also mention the work <cit.>, where the author also provides C^σ+α for viscosity solutions to nonlocal equations of the type
F(D^σ u)= 0 B_1
provided the quantities u_L^∞(B_1) and u_L^1_σ(^d) are sufficiently small.
A natural question is whether one can prove optimal regularity estimates for fully nonlinear nonlocal equations with an unbounded right-hand side. However, such results are relatively scarce in the literature. One of the primary challenges in this context is the lack of compactness properties for viscosity solutions of (1.1) (or even the simpler equation (1.7) below) when f ∈ L^p(B_1). In addition, stability results are also crucial for implementing the now-classic strategy proposed in <cit.>. In fact, only a few works involving an unbounded norm on the right-hand side are available. Moreover, the existing results consider a class of kernels that differ from the broader ℒ_0 class discussed in <cit.>, which defines the extremal operator
ℳ^-_ℒ_0(u,x) = (2-σ)inf_λ≤ a(x,y) ≤Λ∫_^dδ(u,x,y)a(x,y)|y|^d+σ+2.
Instead, they work with kernels of the form
K_α(y) = y^TA_α y|y|^d+σ+2dy,
where A_α is a symmetric d× d matrix satisfying
λ I ≤1d+σ(σ A_α + Tr(A_α)I) ≤Λ I B_1,
for positive constants 0 < λ≤Λ. These kernels define a class of extremal operators as
ℳ^-(u,x) = inf_α∫_^dδ(u,x,y)K_α(y)dy.
It is important to note that, because they allow some degeneracy (conform (<ref>)), although this is a smaller class, it is not necessarily contained within the ℳ^-_ℒ_0.
In this direction, the authors in <cit.> established a quantitative ABP estimate for viscosity supersolutions to
{ℳ^-(u,x) ≤ f(x) B_1
u(x) ≥ 0 ^d∖ B_1.
.
They show that supersolutions to the equation above satisfy
-inf_B_1u ≤ C(d, λ)(f^+_L^∞(K_u))^(2-σ)/2(f^+_L^d(K_u))^σ/2,
where K_u is the coincidence set between u and a type of fractional convex envelope of u. In <cit.>, the author improved the previous result, by removing the dependency of the L^∞-norm in the estimate above, in case σ is sufficiently close to 2.
In <cit.>, the author removed the restriction on the degree of the operator present in <cit.>, by proving that supersolutions to (<ref>) with f ∈ L^p(B_1), for p ∈ (d-ε_0, ∞), satisfy
-inf_B_1u ≤ C(d, σ, λ, Λ)f^+_L^p(B_1).
Moreover, the author also gives W^σ, p-estimates for viscosity solutions to concave equations of the form
ℐ_σ(u, x) = C(σ)inf_α∈𝒜∫_ℝ^dδ(u, x, y)y^TA_α(x)y|y|^σ+d+2dy = f(x) B_1,
which, in particular, implies that solutions are of class C^σ-d/p. To our knowledge, this is the only result in the literature that gives this type of regularity for fully nonlinear nonlocal equations as in (<ref>), in the presence of an unbounded right-hand side.
The main purpose of this paper is to establish optimal regularity estimates for viscosity solutions to equation (<ref>), where the source term f ∈ L^p(B_1). We emphasize that such estimates have not been previously available for this type of equation. We employ the so-called half-relaxed method, originally introduced in <cit.>, to prove that a sequence (u_k)_k∈ of merely bounded viscosity solutions to (<ref>) converges uniformly to a function u_∞ which solves a suitable equation, see Lemma <ref> below. This method relies on the comparison principle of the operator and has been used previously in the context of Hamilton-Jacobi equations and nonlocal equations with Neumann boundary conditions in a half-space. We refer the reader to <cit.>. Once compactness and stability are available, standard approximation arguments can be applied to show the regularity properties of the viscosity solutions.
As discussed earlier, the regularity of solutions depends on the range of p for which f ∈ L^p(B_1), in the spirit of <cit.>. First, we prove that if p ∈ (d-ε_0, d/σ-1), then solutions are of class C^σ-d/p. The constant ε_0 is known as the Escauriaza's exponent. Notice that this type of regularity was previously known from <cit.> in the context of a concave equation as in (<ref>).
The borderline case f ∈ L^d/(σ-1) is particularly significant. In the local case, i.e., σ = 2, this value separates continuity estimates from differentiability properties in the regularity theory. Moreover, this quantity also appears in ABP and Harnack estimates. Meanwhile, in the nonlocal case, this is the first time such a threshold has been explicitly considered, as previous ABP estimates in <cit.> only consider the L^d-norm of the right-hand side. We believe this is the correct threshold for future general ABP estimates for the general class ℒ_0. Nevertheless, in this scenario, we show that viscosity solutions to (<ref>) are Log-Lipschitz, which is better than C^α for every α∈ (0,1).
For f ∈ L^p for p ∈ (d/σ-1, +∞), we prove that the solutions belong to the class C^1,α, where α is defined in (<ref>). Finally, for the borderline case where f ∈BMO(B_1), we show that viscosity solutions to (<ref>) are locally of class C^σ, which is the best regularity we can hope for without assuming further regularity for f. These results are detailed in Theorems <ref>, <ref>, <ref> and <ref> below.
The class of kernels in (<ref>) follows the form of (<ref>), as described in <cit.>, but with the additional requirement of uniform ellipticity, as in (<ref>). In particular, the class of kernels that we deal with is a subset of ℒ_0 and enjoys all its properties. Furthermore, because uniform ellipticity as in (<ref>) ensures that the condition (<ref>) is satisfied with the same ellipticity constants, our class of kernels is also included in those described in <cit.>. Developing a similar theory for more general kernels, such as those in ℒ_0, remains an open challenge, primarily due to the need for an appropriate ABP estimate, as discussed earlier.
The remainder of this paper is structured as follows: In section 2 we gather some auxiliary results and present our main results. The proof of the optimal Hölder regularity is the subject of Section 3. In Section 4 we put forward the Log-Lipschitz regularity of solutions. Section 5 is devoted to the proof of Hölder regularity for the gradient of solutions. Finally, in the last section, we investigate the borderline C^σ-regularity.
§ PRELIMINARIES
§.§ Notations and definitions
This section collects some definitions and notations used throughout the paper. The open ball of radius r and centered at x_0 in ^d is denoted by B_r(x_0). For α∈ (0, 1], the notation u ∈ C^α^-(B_1) means that u ∈ C^β(B_1), for every β < α. We proceed by defining the Log-Lipschitz space.
A function u belongs to C^ Log-Lip(B_r) if there exists a universal constant C > 0 such that
sup_B_r/2(x_0)|u(x) - u(x_0)| ≤ rlnr^-1.
We say that f ∈ BMO(B_1) if for all B_r(x_0) ⊂ B_1, we have
f_ BMO(B_1) := sup_0<r≤1_B_r(x_0)|f(x) - ⟨ f⟩_x_0,r |dx,
where ⟨ f⟩_x_0,r := _B_r(x_0)f(x)dx.
Since u is defined in the whole ^d, it can behave very widely as |x| →∞. Hence, we work within a class where we have a certain decay of the solutions as they approach infinity, see also <cit.>.
We say that a function u:ℝ^d →ℝ belongs to L^1_σ(^d), if
u_L^1_σ(^d):= ∫_^d |u(x)|11 +|x|^d+σdx < +∞.
In the next, we define viscosity solutions:
We say that an upper (lower) semicontinuous function u ∈ L_σ^1(^d) is a viscosity subsolution (supersolution) to (<ref>), if for any x_0 ∈ B_1 and φ∈ C^2(B_r(x)) such that u-φ has a local maximum (minimum) at x_0, then the function
v(x):=
{φ(x) B_r(x_0)
u(x) ^d∖ B_r(x_0),
.
satisfies
-ℐ_σ(v,x) ≤ (≥) f(x) B_1
We say a function u ∈ C(B_1)∩ L_σ^1(^d) is a viscosity solution to (<ref>) if it is simultaneously a viscosity subsolution and supersolution.
Throughout the manuscript, we assume certain smallness conditions on the norms of u and the source term f. We want to stress that such conditions are not restrictive. In fact, if u is a viscosity solution to (<ref>), then for ε >0 the function
v(x) := u(x)u_L^∞(B_1) + ε^-1f_L^p(B_1),
satisfy v_L^∞(B_1)≤ 1 and solves
ℐ_σ(v, x) = f̃(x) B_1,
where
f̃(x) := f(x)u_L^∞(B_1) + ε^-1f_L^p(B_1),
is such that f̃_L^∞(B_1)≤ε.
§.§ Main results
As mentioned earlier, the regularity of solutions depends on the range of p under consideration. The critical cases occur when f ∈ L^d/σ-1(B_1) for C^α-regularity and f ∈BMO for C^σ-regularity. Due to the nonlocal nature of the problem, achieving these critical cases requires f to possess higher regularity compared to the local case. For instance, in the local case, the first critical threshold is at p = d, leading to C^Log-Lip_loc-regularity. Similarly, when f ∈BMO in the local case, it yields C^1,Log-Lip_loc-regularity.
We now present the main results of this article, beginning with a result in Hölder spaces for the case where p is below d/σ-1.
Let u ∈ C(B_1)∩ L^1_σ(^d) be a viscosity solution to (<ref>), with f ∈ L^p(B_1), p ∈(d-ε_0, d/σ-1). Then, u ∈ C^α_loc(B_1) for any
α∈(0, σ p -d/p].
Moreover, there exists a positive constant C=C(p,d, σ, λ, Λ), such that
u_C^α(B_1/2)≤ C(u_L^∞(B_1) + u_L^1_σ(^d) + f_L^p(B_1)).
We observe that for σ = 2, we have p ∈ (d - ε_0, d) and α∈ (0, 2p-d/p), recovering the regularity result for the local case reported in <cit.>. Next, we consider the borderline case p = d/σ - 1. In this scenario, we show that solutions are Log-Lipschitz continuous, achieving the same level of regularity as in <cit.>, but with the requirement of higher regularity for the source term f.
Let u ∈ C(B_1)∩ L^1_σ(^d) be a viscosity solution to (<ref>), with f ∈ L^p(B_1), p = d/σ-1. Then, u ∈ C^Log-Lip_loc(B_1), and there exists a positive constant C=C(d, σ, λ, Λ), such that
u_C^Log-Lip(B_1/2)≤ C(u_L^∞(B_1) + u_L^1_σ(^d) + f_L^d/σ-1(B_1)).
In what follows, we present our third main theorem. As before, we recover the local regularity in the limit as σ→ 2, demonstrated in <cit.>. Recall that α_0 comes from the C^1, α_0-regularity of ℐ_σ-harmonic functions.
Let u ∈ C(B_1)∩ L^1_σ(^d) be a viscosity solution to (<ref>), with f ∈ L^p(B_1), p ∈(d/σ-1, +∞). Then, u ∈ C^1, α_loc(B_1) for any
α∈(0, min(σ - 1 - d/p, α_0^- )].
Moreover, there exists a positive constant C=C(p, d, σ, λ, Λ), such that
u_C^1, α(B_1/2)≤ C(u_L^∞(B_1) + u_L^1_σ(^d) + f_L^p(B_1)).
Observe that as p →∞, the corresponding Hölder exponent approaches σ - 1, indicating that we achieve C^σ-regularity in the case p = ∞, which is indeed the case as confirmed in <cit.>. However, we also show that this result holds under the weaker assumption that f ∈BMO(B_1), which is a proper subset of L^∞(B_1). This is the content of our last main result.
Let u ∈ C(B_1)∩ L^1_σ(^d) be a viscosity solution to (<ref>), with f ∈ BMO(B_1). Then, u ∈ C^σ_loc(B_1) and there exists a positive constant C=C(d, σ, λ, Λ), such that
u_C^σ(B_1/2)≤ C(u_L^∞(B_1) + u_L^1_σ(^d) + f_ BMO(B_1)).
The proof of Theorem <ref> differs from the strategies used for the previous theorems. This is primarily because scaling of the form x ↦ρ^-σu(ρ x), for ρ≪ 1, leads to a growth rate of |x|^-σ at infinity, which increases too rapidly to be integrable with respect to the tails of our kernel. Consequently, we employ techniques similar to those in <cit.>, where Liouville-type results are used to establish interior regularity of solutions through blow-up arguments.
§.§ Auxiliary results
In this subsection, we prove some results used throughout the paper. Since we did not find any references stating exactly what we needed, we start with a comparison principle for viscosity solutions of (<ref>) (and also (<ref>)), in the case where f ≡ 0. See also <cit.>.
Let u,v ∈ L^1_σ(^d), u upper semicontinuous and v lower semicontinuous, be respectively a viscosity subsolution and supersolution to the equation
ℐ_σ(w, x) = 0 B_1,
such that u ≤ v in ℝ^N∖ B_1. Then, u ≤ v in B_1.
Suppose by contradiction that
θ=sup_B_1(u-v)>0.
For ε>0 we define the auxiliary function
Φ(x,y)=u(x)-v(y)-|x-y|^2/2ε^2,
and consider (x_ε,y_ε)∈B_1×B_1 such that
Φ(x_ε,y_ε)=sup_x,y∈ B_1Φ(x,y).
Notice that Φ(x_ε,y_ε)≥sup_x∈ B_1Φ(x,x)=θ, which yields to
|x_ε-y_ε|/2ε^2≤ u(x_ε)-v(y_ε)-θ.
By compactness, we have that (x_ε,y_ε) → (x̅,y̅)∈B_1×B_1 as ε→ 0, and by using (<ref>) we obtain x̅=y̅. Therefore
0 ≤lim_ε→ 0|x_ε-y_ε|^2/2ε^2 = u(x̅) - v(x̅) - θ≤ 0,
which implies
u(x̅) - v(x̅) = θ > 0.
Moreover, since u≤ v on ^d∖ B_1, we have x̅∈ B_1.
We set φ_1(x):= v(y_ε) + |x-y_ε|/2ε^2 and φ_2(y):= u(x_ε) + |y-x_ε|/2ε^2, and observe that u-φ_1 has a local maximum at x_ε, while v-φ_2 has a local minimum at y_ε. Hence by using that u is a subsolution and v is a supersolution of (<ref>), we have the viscosity inequalities
sup_α∈𝒜inf_β∈ℬ(∫_B_rδ(φ_1, x_ε, y)y^TA_α,β(x_ε)y|y|^σ+d+2dy + ∫_^d∖ B_rδ(u, x_ε, y)y^TA_α,β(x_ε)y|y|^σ+d+2dy ) ≥ 0,
and
sup_α∈𝒜inf_β∈ℬ(∫_B_rδ(φ_2, y_ε, y)y^TA_α,β(y_ε)y|y|^σ+d+2dy + ∫_^d∖ B_rδ(v, y_ε, y)y^TA_α,β(y_ε)y|y|^σ+d+2dy ) ≤ 0,
for r small enough. Hence, from the definition of sup and inf, there exist α∈𝒜 and β∈ℬ such that
ε^-2 o_r(1) + ∫_^d∖ B_rδ(u, x_ε, y)y^TA_α,β(x_ε)y|y|^σ+d+2dy ≥ -γ/2,
and
ε^-2 o_r(1) + ∫_^d∖ B_rδ(v, y_ε, y)y^TA_α,β(y_ε)y|y|^σ+d+2dy ≤γ/2,
for γ> 0 sufficiently small. Subtracting the inequalities above yields to
ε^-2 o_r(1) + ∫_^d∖ B_rδ(u, x_ε, y)y^TA_α,β(x_ε)y|y|^σ+d+2dy - ∫_^d∖ B_rδ(v, y_ε, y)y^TA_α,β(y_ε)y|y|^σ+d+2dy ≥ -γ.
Notice that, When ε→ 0, by the contradiction hypotheses we have
∫_B_1∖ B_rδ(u, x̅, y)y^TA_α,β(x̅)y|y|^σ+d+2dy-∫_B_1∖ B_rδ(v, x̅, y)y^TA_α,β(x̅)y|y|^σ+d+2dy≤ 0.
Moreover,
∫_^d∖ B_1δ(u, x̅, y)y^TA_α,β(x̅)y|y|^σ+d+2dy-∫_^d∖ B_1δ(v, x̅, y)y^TA_α,β(x̅)y|y|^σ+d+2dy≤ - λθ∫_ℝ^d∖ B_1 |z|^-(d + σ)dz.
Therefore
-λθ∫_ℝ^d∖ B_1 |z|^-(d + σ)dz ≥ -γ,
which is a contradiction for γ small enough. This finishes the proof.
We now focus on one of the main contributions of this paper: the stability of solutions to (<ref>) when f ∈ L^p(B_1). To the best of our knowledge, this is the first time such a result has been established in the fully nonlinear nonlocal context. For comparison, see <cit.>.
Let u_k be a normalized viscosity solution to
ℐ_σ(u_k, x) = f_k B_1.
Suppose that there exists a positive constant M such that
|u_k(x)| ≤ M(1+|x|)^1+α x ∈^d.
Suppose further that
f_k_L^p(B_1)→ 0,
for p∈ (d-ε_0,+∞). Then there exists u_∞∈ C(B_1)∩ L_σ^1(^d) such that
u_k-u_∞_L^∞(B_4/5)→ 0.
Moreover, u_∞ solves
ℐ_σ(u_∞, x) = 0 B_1.
First, observe that given R > 0, we obtain from (<ref>)
|u_k(x)| ≤ C(R) x ∈ B_R.
Hence, Given any compact set Ω⊂^d, we have that the a.e. limits
u̅(x) := lim sup_k→∞, y_k→ x u_k(y_k), x Ω,
and
u(x) := lim inf_k→∞, y_k→ x u_k(y_k), x Ω,
are well-defined. Since the a.e. convergence holds for every compact set of ^d, we also have the a.e. convergence in the whole ^d. Using this fact and once again (<ref>), the Dominated Convergence Theorem ensures that
u̅-u_k_L^1_σ(^d)→ 0,
and
u-u_k_L^1_σ(^d)→ 0,
through the respective subsequences.
We are going to show that u̅ is a viscosity subsolution to (<ref>), and
u is a viscosity supersolution to (<ref>). We will prove the subsolution case since the supersolution case is analogous. Let x_0 ∈ B_1 and φ∈ C^2(B_r(x_0)) be such that u̅ - φ has a maximum at x_0. Without loss of generality, we can assume that φ is defined by
φ = {
P, B_r(x_0),
u̅, B_r(x_0)^c,
.
for some paraboloid P. We need to show that
ℐ_σ(φ, x) ≥ 0 B_1.
Suppose by contradiction that
ℐ_σ(φ, x) < -η,
for some η > 0. Now, let ψ_k be a viscosity solution to
{ℳ^+_λ^*, 1(ψ_k, x) = -|f_k(x)| B_r(x_0)
ψ_k = 0 ∂ B_r(x_0),
.
for some λ^* < 1 to be chosen later. Here, the maximal operator ℳ^+ is defined with respect to the class ℒ_0, as in <cit.> (and defined below). We have
ℐ_σ(φ + ψ_k,x) - ℐ_σ(φ, x) ≤ℳ^+_λ, Λ(ψ_k, x)
= Λ∫_ℝ^nδ^+(ψ_k, x, y)|y|^n+σdy - λ∫_ℝ^nδ^-(ψ_k, x, y)|y|^n+σdy
= Λ∫_ℝ^nδ^+(ψ_k, x, y)|y|^n+σdy - λλ^*(∫_ℝ^nδ^+(ψ_k, x, y)|y|^n+σdy + |f_k(x)|)
= (Λ - λλ^*)∫_ℝ^nδ^+(ψ_k, x, y)|y|^n+σdy - λλ^*|f_k(x)|,
where we have used (<ref>) to conclude
∫_ℝ^nδ^+(ψ_k, x, y)|y|^n+σdy - λ^*∫_ℝ^nδ^-(ψ_k, x, y)|y|^n+σdy = -|f_k(x)|.
Now, by choosing λ^* such that Λ - λλ^*≤ 0 and -λλ^*≤ -1, we obtain
ℐ_σ(φ + ψ_k,x) ≤ℐ_σ(φ, x) + f_k(x).
Let P_k be defined by
φ_k={
P, B_r(x_0),
u_k, B_r(x_0)^c,
.
and
P={
c|x-x_0|^2, B_r(x_0),
0, B_r(x_0)^c.
.
By using the ABP estimates in <cit.> we have ψ_k_∞→ 0, as k→∞. Then, there exists a x_k∈ B_r(x_0) such that φ_k+ψ_k+P touch u_k by above in B_r(x_0). Therefore, we have the viscosity inequality
ℐ_σ(φ_k+ψ_k+P)≥ f(x_k).
By ellipticity and using (<ref>) and (<ref>) we obtain that
ℐ_σ(φ_k+ψ_k+P) ≤ℐ_σ(φ_k+ψ_k)+ℳ^+_σ(P)
≤ℐ_σ(φ_k+ψ_k)-ℐ_σ(φ+ψ_k)-η+ f(x_k)+ℳ^+_σ(P).
We observe that
|∫_^Nδ(φ_k+ψ_k,x,y)dy -∫_^Nδ(φ+ψ_k,x,y)dy| ≤∫_^N∖ B_r(x_0)|δ(u_k,x,y)-δ(u̅,x,y)|dy
≤ C(r)u_k-u̅_L^1_σ(^d),
and by using (<ref>), we can conclude that for k sufficiently large
|ℐ_σ(φ_k+ψ_k)-ℐ_σ(φ+ψ_k)| ≤η4.
Hence, from (<ref>) and (<ref>), we obtain
ℐ_σ(φ_k+ψ_k+P)≤η4 -η +f(x_k)+ℳ^+_σ(P).
Now, choose c(Λ,N,σ) sufficiently small so that
ℳ^+_σ(P)≤η/4.
Finally, for k sufficiently large we get
ℐ_σ(φ_k+ψ_k+P,x_k)≤ -η/2 +f(x_k),
which is a contradiction with (<ref>). This finishes the proof of (<ref>), i.e., u̅ solves in the viscosity sense
ℐ_σ(u̅, x) ≥ 0 B_1,
We similarly show that
ℐ_σ(u, x) ≤ 0 B_1.
Now, from the definition of u̅, u and the viscosity inequalities above we can infer from Proposition <ref>, that in fact
u̅ = u = u_∞,
and hence up to a subsequence, u_k → u_∞ locally uniformly in B_1 (see for instance <cit.>). Moreover, from the viscosity inequalities satisfied by u̅ and u, we conclude that u_∞ solves
ℐ_σ(u_∞, x) = 0 B_1,
in the viscosity sense.
Using the stability result above, we can prove the following Approximation Lemma, which relates the solutions to our problem with ℐ_σ-harmonic functions.
Let u ∈ C(B_1)∩ L^1_σ(^d) be a normalized viscosity solution to (<ref>), with p ∈ (d-ε_0, +∞). Suppose that
|u(x)|≤ M(1+|x|)^1+α x∈^d.
Given δ >0 there exist ε >0, such that if
f_L^p(B_1)≤ε,
we can find a function h∈ C^1,α_0(B_4/5) satisfying
sup_B_3/4|u-h|≤δ.
Suppose not, then there exist δ_0 > 0 and sequences (u_k)_k∈ℕ, (f_k)_k∈ℕ such that
ℐ_σ(u_k, x) = f_k B_1,
f_k_L^p(B_1)≤1k,
and
|u_k(x)|≤ M(1+|x|)^1+α x∈^d,
but,
|u_k-h|>δ_0,
for all h∈ C^1,α_0(B_4/5). From the contradiction hypotheses (<ref>), (<ref>), (<ref>) and Proposition <ref>, we can guarantee the existence of a function u_∞∈ C(B_1)∩ L^1_σ(^d) such that u_k → u_∞ locally uniformly in B_1 satisfying
ℐ_σ(u_∞, x) = 0 B_1.
Now, the regularity available for (<ref>), see <cit.>, implies that u_∞∈ C^1,α_0(B_4/5). By taking h≡ u_∞, we reach a contradiction with (<ref>) for k sufficiently large.
§ HÖLDER REGULARITY
In this section, we detail the proof of Theorem <ref>, namely, the optimal C^α_loc-regularity, for
α∈(0, σ p -d/p],
where p ∈(d-ε_0, d/σ-1). We start by applying Lemma <ref> and showing the existence of a constant close to u in sufficiently small balls.
Let u ∈ C(B_1)∩ L^1_σ(^d) be a normalized a viscosity solution to (<ref>), with p ∈(d-ε_0, d/σ-1).
Assume that
|u(x)|≤ M(1+|x|)^1+α x∈^d.
If
f_L^p(B_1)≤ε,
then there exist constants 0 < ρ≪ 1/2 and A satisfying, |A| ≤ C and
sup_B_ρ|u-A| ≤ρ^α,
where C> 0 is a universal constant.
Fix δ > 0 (to be chosen later) and let h be the function from Lemma <ref>. Since h ∈ C^1, α_0(B_4/5), for ρ sufficiently small, we have
|h - h(0)| ≤ Cρ.
Now, from Lemma <ref> and the Triangular inequality we obtain
sup_B_ρ|u - h(0)| ≤sup_B_ρ|u - h| + sup_B_ρ|h - h(0)|
≤δ + Cρ.
Now, we make the universal choices
ρ = min[(1/2C)^1/1-α, (1/(1+C)100)^1/α_0-α] δ = ρ^α/2,
and by setting A = h(0), we conclude that
sup_B_ρ|u - A| ≤ρ^α.
Notice that the choice of δ determines the value of ε via Lemma <ref>.
In what follows, we iterate the previous proposition to find a sequence of constants that approaches u at the origin.
Let u ∈ C(B_1)∩ L^1_σ(^d) be a normalized a viscosity solution to (<ref>), with p ∈(d-ε_0, d/σ-1). Assume that
|u(x)|≤ M(1+|x|)^1+α x∈^d.
If
f_L^p(B_1)≤ε,
then we can find a sequence (A_k)_k ∈ℕ satisfying
sup_B_ρ^k|u(x)-A_k| ≤ρ^kα,
with
|A_k+1 - A_k| ≤ Cρ^kα.
We argue by an induction argument. By setting A_0=A_1 = 0, the case k=0 follows immediately. Suppose we have verified the statement for k = 1, …, n, and let us prove the case k=n+1. We introduce the auxiliary function v_k : ^d ⟶
v_k(x) := u(ρ^kx) - A_k/ρ^kα.
Notice that by (<ref>) we have |v_k(x)| ≤ 1 in B_1. In addition, v_k solves
ℐ_σ(v,x) = f̃(x) B_1,
where f̃(x) = ρ^k(σ-α)f(ρ^kx). Moreover, our choice of α in (<ref>) assures f̃_L^p(B_1)≤ε. Next, we are going to show that v_k satisfies
|v(x)| ≤ M(1+|x|^1+α_0) x ∈^d,
for some universal constant M. In fact, we resort again to an induction argument. For k=0, we have v_0=u and (<ref>) is verified. Now, assume that the case k=1, …, n is already verified. We shall prove the case k=n+1. Observe that
v_n+1 = v_n(ρ x) - Ã_nρ^α,
where Ã_n comes from Lemma (<ref>) applied to v_n. Now, for 2|x|ρ > 1 we estimate
|v_n+1(x)| ≤ρ^-α(|v_n(ρ x)| +|Ã_n|)
≤ρ^-(1+α)[(1+ ρ^1+α_0|x|^1+α_0) + C(1 + ρ|x|)]
≤ρ^(α_0 - α)(5 +9C)|x|^1 + α_0
≤ |x|^1+ α_0,
where in the last inequality we used (<ref>). On the other hand, if 2|x|ρ≤ 1, we obtain
|v_n+1(x)| ≤ρ^-α(|v_n(ρ x) - h̃(ρ x)| + |h̃(ρ x) -Ã_n|
≤ρ^-α(ρ^α2+ Cρ|x|)
≤1/2 + C/2ρ^α
≤ M(1+|x|^1+α_0),
where M:= 1/2 + C/(2ρ^α), and hence (<ref>) is proved. Finally, we now can apply Proposition <ref> to v_k and we obtain
sup_B_ρ|v_k - Ã_k| ≤ρ^α,
and rescaling back to u we conclude
sup_B_ρ^k+1|u - A_k+1| ≤ρ^(k+1)α,
where A_k+1 = A_k + ρ^kαÃ_k, which satisfies (<ref>). This finishes the proof.
We are now ready to prove Theorem <ref>.
Notice that from (<ref>) we have that (A_k)_k∈ is a Cauchy sequence, and hence there exists A_∞ such that A_k → A_∞, as k→∞. Moreover, we also have from (<ref>)
|A_k-A_∞| ≤ Cρ^kα.
Now, fix 0< r ≪ 1 and let k ∈ be such that ρ^k+1≤ r ≤ρ^k. We estimate,
sup_B_r|u(x)-A_∞| ≤sup_B_ρ^k|u(x)-A_k| + sup_B_ρ^k|A_k - A_∞|
≤ρ^kα + Cρ^kα
≤(C+1)/ρ^αρ^(k+1)α
≤ Cr^α.
By taking the limit as k →∞ in (<ref>), we obtain A_∞ = u(0). This finishes the proof.
§ LOG-LIPSCHITZ CONTINUITY
This section addresses the first critical case p = d/σ - 1, which yields the desired Log-Lipschitz regularity. In particular, solutions are of class C_loc^α for every α∈ (0, 1). As before, we begin by demonstrating the existence of a linear approximation of u within sufficiently small balls.
Let u ∈ C(B_1)∩ L^1_σ(^d) be a normalized viscosity solution to (<ref>), with p =d/σ-1. Suppose further that
|u(x)| ≤ M(1+|x|)^1+α_0 x∈^d.
If
f_L^p(B_1)≤ε,
then, there exist a constant 0 < ρ≪ 1/2 and an affine function ℓ of the form
ℓ(x) = A + B· x,
satisfying |A|, |B| ≤ C and
sup_B_ρ|u(x)-ℓ(x)| ≤ρ.
The proof is similar to Proposition <ref>. We fix δ >0 to be determined later. For ρ≪ 1/2, we have that
|h(x)-h(0)-Dh(0)· x| ≤ Cρ^1+α_0,
where h ∈ C^1,α_0(B_3/4) comes from Lemma <ref>. By setting ℓ(x) = h(0) + Dh(0)· x, we obtain from the Triangular inequality that
sup_B_ρ|u(x)-ℓ(x)| ≤sup_B_ρ|u(x)-h(x)| + sup_B_ρ|h(x)-ℓ(x)|
≤δ + Cρ^1+α_0.
As before, we make universal choices
ρ = (1/2C)^1/α_0 δ = ρ/2,
which determines the value of ε through Lemma <ref>. Therefore
sup_B_ρ|u(x)-ℓ(x)| ≤ρ.
Let u ∈ C(B_1)∩ L^1_σ(^d) be a normalized viscosity solution to (<ref>), with p=d/σ-1. Suppose further that
|u(x)| ≤ M(1+|x|)^1+α_0 x∈^d.
If
f_L^p(B_1)≤ε,
then, there exists a sequence of affine function (ℓ_k)_k ∈ of the form
ℓ_k(x) = A_k + B_k· x,
satisfying
|A_k+1-A_k|/ρ^k + |B_k+1-B_k| ≤ C,
and
sup_B_ρ|u(x)-ℓ(x)| ≤ρ^k.
As before, we resort to an induction argument. By considering ℓ_0=ℓ_1=0, the case k=0 follows trivially. Now, suppose that the cases k=1, …, n have been verified, and let us prove the case k=n+1. We define the auxiliary function v_k: ^d ⟶ by
v_k(x) := u(ρ^kx)-ℓ(ρ^kx)/ρ^k.
We have that v_k(x) ≤ 1 in B_1, and solves
ℐ_σ(x,v) = f̃(x) B_1,
where f̃(x) = ρ^k(σ-1)f(ρ^kx). Notice that as σ-1>0, we have f̃_L^p(B_1)≤ε. Arguing similarly as in Proposition <ref>, we can also show that
|v_k(x)| ≤ 1 + |x|^1+α_0.
Hence, we can apply Proposition <ref> to v_k to conclude that there exists ℓ̃_k=Ã+B̃x such that
sup_B_ρ|v_k(x)-ℓ̃_k(x)| ≤ρ.
Rescaling back to u, we obtain
sup_B_ρ|u(x)-ℓ_k+1(x)| ≤ρ^k,
where ℓ_k+1 = ℓ_k(x) + ρ^kℓ̃_k(ρ^-1x). Observe that
|A_k+1-A_k|=|Ãρ^k|≤ Cρ^k,
and
|B_k+1-B_k|=|B̃|≤ C
which prove the condition (<ref>).
We now present the proof of Theorem <ref>
Notice that, by (<ref>), (A_k)_k∈ is a Cauchy sequence, and hence, there exists A_∞ such that A_k → A_∞, as k→∞. Moreover, we have
|A_k-A_∞| ≤ Cρ^kα.
Now, fix 0< r ≪ 1 and let k ∈ be such that ρ^k+1≤ r ≤ρ^k. We have,
sup_B_r|u(x)-A_∞| ≤sup_B_ρ^k|u(x)-A_k-B_kx| + sup_B_ρ^k|B_kx|
≤ρ^k + Ckρ^k
≤1/ρ(ρ^k+1+Ckρ^k+1)
≤ Cr+lnrlnρCr
≤ -Crlnr.
Finally, by taking the limit as k →∞ in (<ref>), we obtain A_∞ = u(0). This finishes the proof.
§ HÖDER CONTINUITY OF THE GRADIENT
In this section, we give the proof of Theorem <ref>, in which we prove C_loc^1, α-regularity for
α∈(0, min(σ-1-d/p , α_0^-)].
The proof follows the general lines of the proof of Theorem <ref>, but now at the gradient level.
Let u ∈ C(B_1)∩ L^1_σ(^d) be a normalized viscosity solution to (<ref>), with p ∈(d/σ-1, +∞). Suppose further that
|u(x)| ≤ M(1+|x|)^1+α_0 x∈^d.
If
f_L^p(B_1)≤ε,
then, there exist a constant 0 < ρ≪ 1/2 and an affine function ℓ of the form
ℓ(x) = A + B· x,
satisfying |A|, |B| ≤ C and
sup_B_ρ|u(x)-ℓ(x)| ≤ρ^1+α.
We fix δ >0 to be determined later. For ρ≪ 1/2, we have that
|h(x)-h(0)-Dh(0)· x| ≤ Cρ^1+α_0,
where h ∈ C^1,α_0(B_3/4) comes from Lemma <ref>. By setting ℓ(x) = h(0) + Dh(0)· x, we obtain from the Triangular inequality that
sup_B_ρ|u(x)-ℓ(x)| ≤sup_B_ρ|u(x)-h(x)| + sup_B_ρ|h(x)-ℓ(x)|
≤δ + Cρ^1+α_0.
As before, we make universal choices
ρ = min[(1/2C)^1/α_0-α, (1/(1+C)100)^1/α_0-α] δ = ρ^1+α/2,
which determines the value of ε through Lemma <ref>. Therefore
sup_B_ρ|u(x)-ℓ(x)| ≤ρ^1+α.
Let u ∈ C(B_1)∩ L^1_σ(^d) be a normalized viscosity solution to (<ref>), with p ∈(d/σ-1, +∞). Suppose further that
|u(x)| ≤ M(1+|x|)^1+α_0 x∈^d.
If
f_L^p(B_1)≤ε,
then, there exists a sequence of affine function (ℓ_k)_k ∈ of the form
ℓ_k(x) = A_k + B_k· x,
satisfying
|A_k+1-A_k| + ρ^k|B_k+1-B_k| ≤ Cρ^k(1+α),
and
sup_B_ρ^k|u(x)-ℓ_k(x)| ≤ρ^k(1+α).
As before, we resort to an induction argument. By considering ℓ_0=ℓ_1=0, the case k=0 follows trivially. Now, suppose that the cases k=1, …, n have been verified, and let us prove the case k=n+1. We define the auxiliary function v_k: ^d ⟶ by
v_k(x) := u(ρ^kx)-ℓ(ρ^kx)/ρ^k(1+α).
We have that v_k(x) ≤ 1 in B_1, and solves
ℐ_σ(x,v) = f̃(x) B_1,
where f̃(x) = ρ^k(σ-α-1)f(ρ^kx). Notice that for our choice of α in (<ref>), we have f̃_L^p(B_1)≤ε. Arguing similarly as in Proposition <ref>, we can also show that
|v_k(x)| ≤ 1 + |x|^1+α_0.
Hence, we can apply Proposition <ref> to v_k to conclude that there exists ℓ̃_k such that
sup_B_ρ|v_k(x)-ℓ̃_k(x)| ≤ρ^1+α.
Rescaling back to u, we obtain
sup_B_ρ|u(x)-ℓ_k+1(x)| ≤ρ^1+α,
where ℓ_k+1 = ℓ_k(x) + ρ^k(1+α)ℓ̃_k(ρ^-1x). From the definition of ℓ_k+1, it is immediate that condition (<ref>) is also satisfied. This finishes the proof.
The proof follows the same lines as in Theorem 1.1 and Theorem 1.2. Notice that from (<ref>) we have that (A_k)_k∈ and (B_k)_k∈ are a Cauchy sequence, and hence, we can find A_∞ and B_∞ satisfying A_k → A_∞ and B_k → B_∞, as k→∞. Moreover, we have
|A_k-A_∞| ≤ Cρ^k(1+α) |B_k-B_∞| ≤ Cρ^kα.
Now, fix 0< r ≪ 1 and let k ∈ be such that ρ^k+1≤ r ≤ρ^k. We have,
sup_B_r|u(x)-A_∞-B_∞· x| ≤sup_B_ρ^k|u(x)-A_k-B_k· x| + sup_B_ρ^k|A_k - A_∞| + ρ^ksup_B_ρ^k|B_k - B_∞|
≤ρ^k(1+α) + Cρ^k(1+α) + Cρ^k(1+α)
≤C/ρ^1+αρ^(k+1)(1+α)
≤ Cr^1+α.
Finally, by taking the limit as k →∞ in (<ref>), we obtain A_∞ = u(0). We can also show that B_∞ = Du(0), see for instance <cit.>. This finishes the proof.
§ THE BORDERLINE CASE
This section deals with the C^σ-regularity for viscosity solutions of (<ref>). As previously discussed, since 1+α = σ, we can no longer follow the strategy employed above. In this case, we follow the ideas put forward in <cit.> (see also<cit.>). We begin with a technical lemma, that can be found in <cit.>.
Let 0<α<α<1 and u∈ C^1,α(B_1). If u(0)=|Du(0)|=0 and
sup_0<r<1/2r^α-α[u]_C^1,α(B_r)≤ A,
for a constant A>0, then
[u]_C^1+α(B_1/2)≤ 2A.
Let u ∈ C^1+α(B_1)∩ L^1_σ(^d), with α< σ-1, be a normalized viscosity solution to (<ref>) with f∈ BMO(B_1). Suppose further that
u_C^1+α(B_1)≤ M f_ BMO(B_1)≤ε,
then u∈ C^σ(B_1/2) and
u_C^σ(B_1/2)≤ C.
We argue by contradiction. Suppose that the result is false, then we can find sequences (u_k)_k∈ℕ, (f_k)_k∈ℕ, such that for every k∈ℕ,
[u_k]_C^1+α(B_1)≤ M
f_k_ BMO(B_1) < ε_k,
but
u_k_C^σ(B_1/2) > k,
where ε_k → 0, as k →∞. For k ∈, we define the quantity
θ_k(r')=sup_r'<r<1/2sup_z∈ B_1/2r^1+α - σ[u]_C^1+α(B_r(z))
and observe that
lim_r'→ 0θ_k(r')=sup_r'>0θ_k(r').
Moreover, if r_1 ≤ r_2, then θ_k(r_2) ≤θ_k(r_1). By the Lemma <ref> we have that
sup_r'>0θ_k(r')≥ k/2,
therefore there exists a r_k>1/k and z_k∈ B_1/2 such that
r_k^1+α - σ[u_k]_C^1+α(B_r_k(z_k))>θ_k(1/k)>θ_k(r_k)→∞.
Since from (<ref>) [u_k]_C^1+α(B_1)<M, we have r_k→ 0 as k →∞. Now, for R ∈[1, 1/2r_k], we define the blow-up v_k:B_R→ℝ
v_k(x)=1/θ_k(r_k)1/r_k^σu_k(r_kx+z_k).
Notice that
[v_k]_ C^1+α(B_R) =r_k^1+α-σ/θ_k(r_k)[u_k]_C^1+α(B_r_kR(z_k))
= (Rr_k)^1+α-σ[u_k]_B_r_kR(z_k)/θ_k(r_k)R^σ-1-α
≤ R^σ-1-α.
Consider the auxiliary function w_k: ^d → defined as
w_k(x) := (v_k-l_k)(x),
where
l_k=v_k(0)+Dv_k(0)x.
Since that [l_k]_ C^1+α(B_R)=0, it follows from (<ref>) that
[w_k]_ C^1+α(B_R)≤ R^σ-1-α.
In particular, for R=1 we have
[Dw_k]_ C^α(B_1)≤ 1,
therefore
|Dw_k(x)|=|Dw_k(x)-Dw_k(0)|≤ |x|^α, x∈ B_1,
which implies
|w_k(x)|=|w_k(x)-w_k(0)|≤Dw_k_L^∞(B_|x|)|x|≤ |x|^1+α,
for all x ∈ B_1. Let η be a smooth function such that η=1 in B_1/2 and η=0 outside B_1. For e∈𝕊^n we have
∫_B_1η· D_e w_k dx=∫_B_1D_e η· w_k dx≤ C(n).
Hence, there exists z∈ B_1 such that |Dw_k(z)|≤ C(n) and by (<ref>) we have
|D_ew_k(x)-D_ew_k(z)|≤ R^σ-1-α|x-z|^α.
Therefore, for x∈ B_R and 1≤ R ≤1/2r_k,
|D_ew_k(x)|≤ C(n)+ R^σ-1-α|x-z|^α≤ CR^σ-1.
Our goal now is to show that
[w_k]_C^β(B_R)≤ R^σ-β,
for all β∈ [0,1+α] and R ∈[1, 1/2r_k].
Case β =0: Notice that for 1≤ |x|≤ R, we have from <ref>
|w_k(x)| = |w_k(x)-w_k(0)|
≤D_ew_k(x)_L^∞(B_|x|)|x|
≤ CR^σ.
The estimate for x ∈ B_1 follows from (<ref>).
Case β∈ (0, 1): In this case, we estimate for x,x̅∈ B_R
|w_k(x)-w_k(x̅)| ≤Dw_k_L^∞(B_R)|x-x̅|
≤ R^σ-1|x-x̅|^1-β|x-x̅|^β
≤ CR^σ-β|x-x̅|^β,
which gives
[w_k]_ C^β(B_R)≤ CR^σ-β.
Case β∈ [1, 1+α]: Finally, we have
|Dw_k(x)-Dw_k(x̅)| ≤ [Dw_k]_C^α(R)|x-x̅|^α
≤ CR^σ-1-αR^α-β+1|x-x̅|^β-1,
which implies
[Dw_k]_ C^β-1(B_R)≤ CR^σ-β,
or equivalently
[w_k]_ C^β(B_R)≤ CR^σ-β.
Therefore, (<ref>) follows from (<ref>), (<ref>) and (<ref>). Thus, there exists a w∈ C^1+α(ℝ^d) such that w_k→ w locally uniformly in the C^1+α-norm. Now, from (<ref>), we have that
[w_k]_C^1+α(B_1)≥1/2
and therefore
[w]_C^1+α(B_1)≥1/2.
Moreover, from (<ref>),
[w]_C^β(^d)≤ R^σ-β,
for all β∈ [0,1+α] and R > 1. Notice that w_k solves
ℐ_σ(w_k, x) = f̃_k(x),
where f̃_k(x) = 1θ_k(r_k)f_k(r_kx+z_k),
for p≥ d, we have
f̃_k(x)_L^p(B_1 = ( ∫_B_1 |f̃_k(x)|^p dx )^1/p
= |B_1|^1/p ( B_r_k(z_k) |f_k(x)|^p dx )^1/p
≤ |B_1|^1/pf_k_ BMO(B_1)≤ |B_1|^1/pε_k.
Hence,
ℳ^+_ℒ_0(w_k(· + h) - w_k) ≥ℐ_σ(w_k, x+h) - ℐ_σ(w_k, x)
= 1θ_k(r_k)[ f_k(r_k(x+h) + z_k) - f_k(r_kx + z_k) ] .
Moreover,
|w_k(x + h) - w_k|≤ [w_k]_ C^α(B_R)|h|^α≤ C|x|^σ-α
and as [f_k]_ BMO(B_1)→ 0 (and therefore f_k_L^p(B_1)→ 0), we have from Proposition <ref> that
ℳ^+_ℒ_0(w(· + h) - w) ≥ 0 ^d.
Similarly, we can prove that
ℳ^-_ℒ_0(w(· + h) - w) ≤ 0 ^d.
Therefore,
ℳ^-_ℒ_0(w(· + h) - w) ≤ 0 ≤ℳ^+_ℒ_0(w(· + h) - w) ^d.
Finally, for
w̃_k= w_k(· + h)dμ(h) - w_k,
we have that
|w̃_k|≤ |w_k(· + h) - w_k|dμ(h)≤ C|x|^σ-1-α.
The concavity of ℐ_σ yields
ℳ^+_ℒ_0( w̃_k(· + h)dμ(h) - w_k , x) ≥ℐ_σ(w̃_k(· + h)dμ(h), x) - ℐ_σ(w_k, x)
≥ℐ_σ(w_k, x+h)dμ(h) - f̃_k(x)
= f̃_k(x+h) - f̃_k(x) dμ(h).
Hence, by passing the limit as k →∞ we get
ℳ^+_ℒ_0( w(· + h)dμ(h) - w , x ) ≥ 0.
Therefore, from (<ref>), (<ref>) and (<ref>), we can apply Theorem <cit.>, to conclude that w is a polynomial of degree 1, which is a contradiction with (<ref>).
Recall that by Remark <ref> we can assume f_BMO(B_1)≤ε, where ε>0 is the constant from the previous lemma. Since f_L^p(B_1)≤ Cf_ BMO(B_1), we can use Theorem <ref> to conclude
u_C^1, α(B_1/2)≤ C(u_L^∞(B_1) + u_L^1_σ(^d) + f_ BMO(B_1)).
Therefore from Proposition <ref> we have
u_C^σ(B_1/2)≤ C(u_L^∞(B_1) + u_L^1_σ(^d) + f_BMO(B_1)),
proving the result.
Acknowledgement: D. dos Prazeres was partially supported by CNPq and CAPES/Fapitec. M. Santos was partially supported by the Portuguese government through FCT-Fundação para a Ciência e a Tecnologia, I.P., under the projects UID/MAT/04459/2020, and PTDC/MAT-PUR/1788/2020. This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brazil (CAPES) - Finance Code 001.
plain
Makson S. Santos
Departamento de Matemática do Instituto Superior Técnico
Universidade de Lisboa
1049-001 Lisboa, Portugal
Disson dos Prazeres
Department of Mathematics
Universidade Federal de Sergipe - UFS,
49100-000, Jardim Rosa Elze, São Cristóvão - SE, Brazil
|
http://arxiv.org/abs/2409.02532v1 | 20240904084849 | Rough Functional Itô Formula | [
"Franziska Bielert"
] | math.PR | [
"math.PR",
"60L20, 60H20, 41A58, 60L10"
] |
§ ABSTRACT
We prove a rough Itô formula for path-dependent functionals of α-Hölder continuous paths for α∈(0,1). Our approach combines the sewing lemma and a Taylor approximation in terms of path-dependent derivatives.
: Leveraging Composability and Diversity to Design Fault and Intrusion Resilient Chips
Ahmad T. Sheikh, Ali Shoker, Suhaib A. Fahmy and Paulo Esteves-Verissimo
CEMSE Division, King Abdullah University of Science and Technology (KAUST)
Thuwal 23955-6900, Kingdom of Saudi Arabia
{ahmad.sheikh, ali.shoker, suhaib.fahmy, paulo.verissimo{@kaust.edu.sa}}
September 9, 2024
=============================================================================================================================================================================================================================================================================
§ INTRODUCTION
In dupireFunctionalItoCalculus2009 Dupire developed an Itô calculus for causal functionals F, i.e., functionals that depend at time t∈[0,T] on a path X [0,T] → up to time t. He introduced suitable notions of directional derivatives. A `time' derivative DF by a perturbation of time t+h with stopped paths X_t and a `space' derivative ∇ F by fixing the time and perturbing of the end point of the stopped path X_t + h_[t,T]. Similar results were established in a number of papers by R. Cont and D.A. Fournié using purely analytical arguments for paths X that have finite quadratic variation in a pathwise sense. A thorough treatment can be found in BallyVlad2016Sibp. In particular they proved a pathwise functional Itô formula using Föllmer type integrals introduced in Föllmer1981.
Afterwards the pathwise Itô formula for path-dependent functionals (as well as for standard functions) was extended to paths X with arbitrary regularity by R. Cont and N. Perkowski in contPathwiseIntegrationChange2019. For non path-dependent functions they also investigated the relation to rough path theory. By identifying a natural candidate for the reduced rough path induced by a multidimensional path X, it was shown that the Föllmer integral in the pathwise Itô formula coincides with a rough integral.
The main result of this paper is Theorem <ref>. It constructs a rough integral ∫∇ F(t, X) (t) for multidimensional α-Hölder continuous paths for α∈(0,1) and provides a rough functional Itô formula under suitable regularity assumptions on the causal functional F.
Similar results for less regular functionals F and cadlag paths X with finite p-variation have been obtained independently by Christa Cuchiero, Xin Guo and Francesca Primavera and are to be published in cuchieroFunctionalItoformulaTaylor. Their proof relies on a density argument, passing from linear functions of the
signature of the path to general path functionals.
This work was intended to give a simple proof that follows the standard approach in rough path theory. So we will allow for strong regularity assumptions on F and apply the sewing lemma.
Namely Corollary <ref> gives an error bound for higher order Taylor approximations of F(t, X) in terms of the causal derivatives. This is a generalization of Lemma 2.2 from A. Ananova and R. Cont in ananovaPathwiseIntegrationRespect2017. The higher order Taylor approximation allows to adapt the techniques in contPathwiseIntegrationChange2019 to the path-dependent setting.
§.§ Notation
Let T>0 and D denote the set of cadlag paths X [0, T] →^d equipped with the uniform norm |·|_∞. For such paths and t∈ [0,T] we denote by X(t) the value of the path at time t and by X_t the stopped path X_t = X(·∧ t). Let further X_t- denote the path X stopped right before t, namely for u∈[0,T], X_t-(u) = X(u)_[0,t)(u) + lim_r↑ tX(r) _[t, T](u) .
Let Δ_T:= { (s,t)∈ [0,T]× [0,T] 0≤ s≤ t≤ T}.
We call = { [t_k-1, t_k] k=1,…,n} with t_k∈[0,T] for all k=0,…,n, partition of [0,T] if 0=t_0< t_1<… < t_n = T. The mesh of a partition is defined as || = max_[s,t]∈| t-s|. interval@||
For α∈(0,1), a two-parameter path ΞΔ_T→^d is α-Hölder continuous if
|Ξ|_α := sup_(s,t)∈Δ_T|Ξ(s,t)|/| t-s|^α < ∞,
here |·| denotes the euclidean norm.
Then a path X [0,T]→^d is α-Hölder continuous if its increments (δ X)(s,t):= X(t) - X(s) are.
For two terms x,y we abbreviate the existence of some constant C>0 such that x≤ C y to x≲ y and by ≲_p we indicate a dependency C= C(p) on some parameter p.≲
§.§ Causal Derivatives
Following dupireFunctionalItoCalculus2009 and oberhauserExtensionFunctionalIto2012 (from where we took the present definitions), we consider for causal functionals F [0,T]× D →, i.e.,
F(t, X) = F(t, X_t), the following notions of differentiability:
[Causal Space Derivative]
If for all (t,X)∈ [0,T]× D the map
^d∋ h↦ F(t, X_t + h _[t,T])
is continuously differentiable at h = 0 we say that F has a causal space derivate. We denote it by ∇ F(t,X) = (∂_1 F(t,X), …, ∂_n F(t,X)). Recursively, we define for n∈ the nth causal space derivative and denote it by ∇^n F.
[Causal Time Derivative]
If for all (t,X)∈ [0,T]× D the map
[0,∞)∋ h ↦ F(t + h, X_t)
is continuous and right-differentiable at h=0 we denote this derivative by D F(t, X). If additionally t↦ D F(t, X) is Riemann integrable, then we say that F has a causal time derivative.
For n∈ we write F∈ℂ^1,n_b, if F has a causal time derivative and n causal space derivatives such that F, DF and for k=1, …, n, ∇^k F are continuous in [0,T]× D and bounded in the sense that sup_(t, X)∈[0,T]× D| F(t, X)| < ∞.
We refer to [Definition 19]oberhauserExtensionFunctionalIto2012 for weaker regularity notions. Since the purpose of this paper is to give a simple proof of a rough functional Itô formula, we follow the rough path tradition and keep the assumptions simple.
§ TAYLOR APPROXIMATION FOR CAUSAL FUNCTIONALS
To derive a Taylor formula for t ↦ F(t, X) we use the signature X of a paths that have bounded variation.
We briefly specify the (for us necessary) theory.
§.§ Symmetric Part of the Signature of a Path
Set T_0(^d) := 1 and for k∈ , T_k(^d) := (^d)^⊗ k the space of k-tensors and T(^d) = ⊕_k=0^∞ T_k(^d) the tensor algebra.
A word w in the alphabet := { 1, … d} of length k is a tuple (w_1, …, w_k) such that for j=1, …, k, w_j ∈.
Denote for i=1, …, d by e_i := (0, …, 0, 1, 0, …, 0) the ith unit vector and e_w := e_w_1⊗…, ⊗ e_w_k. Then the set { e_w w word in of length k} is a basis of T_k(^d).
We write ⟨·, ·⟩ for the natural inner product in T_k(^d).
Abusing the notation a bit, we also write for T∈ T_k(^d), S∈ T_m(^d) with m<k, T, S∈ T_m-k, where for h∈ T_k-m(^d), T, S (h) := T, S ⊗ h. Finally note that we can choose compatible norms |·| on T_k(^d), i.e. for v_1, …, v_k∈^d,
| v_1 ⊗…⊗ v_k|≤∏_j=1^k | v_j|.
Let _k denote the projection from T(^d) onto T_k(^d).
Let further X [0,T] →^d be continuous and of bounded variation, i.e. there exists finite signed measures μ^i([0,T]) →, such that for all t∈[0,T], μ^i([0,t]) = X^i(t).
Then the signature is a two-parameter path XΔ_T → T(^d), where for every (s,t)∈Δ_T, k∈,
_k Xs,t = ∑_w = (w_1, …, w_k),
w_j∈⟨Xs,t, e_w⟩ e_w,
with
⟨Xs,t, e_w⟩ := ∫_s^t ∫_s^s_k…∫_s^s_2 X^w_1(s_1) … X^w_k(s_k).
The symmetric part T of a k-tensor T is given via
⟨T, e_w⟩ = 1/k!∑_σ∈𝔖_k⟨ T, e_(w_σ1, …, w_σ k)⟩,
where 𝔖_k denote the permutation group of degree k.
We next define a commutative product on tensors indexed by words. Let m, k_1,…, k_m∈ and k= ∑_j=1^m k_j. The shuffles sh(k_1,…, k_m) of words of length k_1,…, k_m are those permutations σ∈𝔖_k such that σ 1 < … < σ k_1, σ (k_1+1) < … < σ (k_1+k_2) and so on. Then for a word w of length k, we define
e_(w_1, …, w_k_1)… e_(w_k - k_m + 1, …, w_k) = ∑_σ∈sh(k_1, …, k_m) e_(w_σ 1, …, w_σ k).
Note that for letters w_1, …, w_k∈, this reduces to
e_w_1… e_w_k = ∑_σ∈𝔖_k e_(w_σ 1,…, w_σ k).
It is easy to check that the signature has the remarkable property that for two words w and u it holds that
⟨Xs,t, e_w e_u ⟩ = ⟨Xs,t, e_w⟩⟨Xs,t, e_u⟩,
compare [Exercise 2.2]RoughBook. We deduce that the symmetric part of the kth level signature is
_k Xs,t = 1/k! (X(t) - X(s))^⊗ k,
since it follows from (<ref>), (<ref>), (<ref>) and (<ref>) for a word w of length k that
⟨_k Xs,t, e_w ⟩ =1/k!∑_σ∈𝔖_k⟨Xs,t, e_(w_σ1, …, w_σ k)⟩
= 1/k!⟨Xs,t, e_w_1… e_w_k⟩
= 1/k!∏_j=1^k ⟨Xs,t, e_w_j⟩ = 1/k!∏_j=1^k (X^w_j(t) - X^w_j(s)).
Finally we point out that [Proposition 3.5]frizUnifiedSignatureCumulants2021 shows that the symmetric part of the signature satisfies
_k Xs,t = ∫_s^t _k-1Xs,r⊗ X(r).
Proof basically just X = X⊗ X and so
∫_s^t _k-1Xs,r⊗ X(r) = ∫_s^t _k-1Xs,r⊗ X(r) = _kXs,t.
§.§ Taylor Formula for Causal Functionals
The first result establishes a Taylor formula in terms of path-dependent derivatives for paths of bounded variation. It is based on the Taylor expansion of one-dimensional and piecewise constant paths X that is used in BallyVlad2016Sibp, contPathwiseIntegrationChange2019 to prove the functional Itô formula with Föllmer integrals. It will prove very useful to have an explicit representation of the remainder.
[Taylor Formula for Functionals of Bounded Variation Paths]
Let n, d∈ and F∈ℂ^1,n_b, such that for k=1, …, n-1, ∇^k F∈ℂ^1,1_b. Then it holds for every path X [0,T]→^d that is continuous and of bounded variation and every (s,t)∈Δ_T, that
F(t, X) - F(s, X) = ∑_k=0^n-11/k!∫_s^t D∇^k F(u, X), (X(t) - X(u))^⊗ k u + ∑_k=1^n-11/k!∇^k F(s, X),(X(t) - X(s))^⊗ k
+ 1/(n-1)!∫_s^t ⟨∇^n F(u, X),(X(t) - X(u))^⊗ n-1⊗ X(u)⟩ .
The proof is by induction on n.
Note that in the case that d=1, the result follows for n=1 from [Theorem 1.10]contPathwiseIntegrationChange2019 applied with p=2 and X^2 = 0 with higher regularity assumption on F. For consistency we give the start of the induction with minor changes due to d≥ 1 and X more regular.
For n=1 let () be a sequence of partitions of [s,t] with ||→ 0. We consider the piecewise constant approximation of X on [s,t]:
X^(u) = X(u)_[0,s)(u) + ∑_[t_j, t_j+1]∈ X(t_j+1)_[t_j, t_j+1) + X(t)_[t, T].
Since X^→ X uniformly and X^_t- = X^_t, X^_s- = X_s, it holds
F(t, X) - F(s,X) = lim_||→ 0∑_[t_j, t_j+1]∈ F(t_j+1, X^_t_j+1-) - F(t_j,X^_t_j-).
Noting that X^_t_j = X^_t_j+1- on [0, t_j+1] and X^_t_j = X^_t_j- + (X(t_j+1) - X(t_j))_[t_j, T] we decompose the difference into the time and space perturbation,
F(t_j+1, X^_t_j+1-) - F(t_j,X^_t_j-)
= F(t_j+1, X^_t_j) - F(t_j, X^_t_j) + F(t_j, X^_t_j- + (X(t_j+1) - X(t_j))_[t_j, T]) - F(t_j,X^_t_j-).
By construction of the causal time derivative [t_j, t_j+1)∋ u ↦ F(u, X^_t_j) is right-differentiable with Riemann integrable derivatives, thus by the fundamental theorem of calculus, cf. botskoStrongerVersionsFundamental1986, it holds
∑_[t_j, t_j+1]∈ F(t_j+1, X^_t_j) - F(t_j, X^_t_j)
= ∑_[t_j, t_j+1]∈∫_t_j^t_j+1 D F(u, X^_t_j) u.
Since for every u∈[s,t], ∑_[t_j, t_j+1]∈ F(u, X^_t_j) _[t_j, t_j+1)(u) = F(u, X^_u) → F(u, X_u) as ||→ 0 and DF bounded, the last expression converges to ∫_s^t DF(u, X) u. Similarly it follows for the space perturbation in (<ref>) that
F(t_j, X^_t_j- + (X(t_j+1) - X(t_j))_[t_j, T]) - F(t_j,X^_t_j-)
= ∫_0^1 ⟨∇ F(t_j, X^_t_j- + λ(X(t_j+1) - X(t_j)_[t_j, T]), (X(t_j+1) - X(t_j))⟩λ
=: ⟨∇ F(t_j,X^_t_j-), (X(t_j+1) - X(t_j))⟩ + R_j.
For i=1,…, d let now μ^i denote measures of bounded variation related to component X^i. Since ∑_[t_j, t_j+1]∈∂_i F(t_j,X^_t_j-)_[t_j, t_j+1)(u) →∂_i F(u, X) as ||→ 0 and ∂_i F bounded, it follows that
∑_[t_j, t_j+1]∈∂_i F(t_j,X^_t_j-) (X^i(t_j+1) - X^i(t_j))
= ∫_s^t ∑_[t_j, t_j+1]∈∂_i F(t_j,X^_t_j-)_[t_j, t_j+1)(u) μ^i(u)
→∫_s^t ∂_i F(u, X) μ^i(u)
= ∫_s^t ∂_i F(u, X) X^i(u).
Moreover using that the images of (𝕀,X^) are compact in [0,T]× D, we may assume that ∇ F is compactly supported and therefore uniformly continuous.
For every n∈, K_n = {(t_j, X^_t_j-+ λ(X(t_j+1) - X(t_j))_[t_j, T]) j = 0, … ,n-1, λ∈[0,1]} is compact: Any sequence in K_n corresponds to a sequence t_m in the finite set {t_j j=0,…, n-1} and (λ_m)⊂[0,1]. Then there are note relabeled subsequences with t_m → t_j^* and λ_m →λ*∈[0,1]. Clearly for large m, t_m = t_j^*. Then trivially
(t_m, X^_t_m-+ λ_m(X(t_m+1) - X(t_m)_[t_m, T]) = (t_j^*, X^_t_j^*-+ λ_m(X(t_j+1^*) - X(t_j^*)_[t_j^*, T]) → (t_j^*, X^_t_j^*-+ λ^*(X(t_j+1^*) - X(t_j^*))_[t_j^*, T])∈ K_n.
Hence the remainders
R_j = ∫_0^1 ∇ F(t_j, X^_t_j- + λ(X(t_j+1) - X(t_j))_[t_j, T]) - ∇ F(t_j,X^_t_j-) λ· (X(t_j+1) - X(t_j))
satisfy
∑_[t_j, t_j+1]∈| R_j|≤ C(|∇ F|_∞, ) ∑_[t_j, t_j+1]∈| X(t_j+1) - X(t_j)|≤ C(|∇ F|_∞, )|μ|([s,t])
where |μ| denotes the total variation of μ and C(|∇ F|_∞, )→ 0 as ||→∞.
For n → n+1 we apply the previous result componentwise to get
∇^n F(u, X) - ∇^n F(s, X) = ∫_s^u D∇^n F(r, X) r + ∫_s^u ⟨∇^n+1 F(r, X), X(r) ⟩ (·).
Plugging that into the remainder (<ref>) and using Fubini, it follows
∫_s^t ⟨∇^n F(u, X), ( X(t) - X(u))^⊗ n-1⊗ X(u)⟩
= ∫_s^t D∇^n F(r,X), ∫_r^t (X(t) - X(u))^⊗ n-1⊗ X(u) r
+ ∫_s^t ∇^n+1 F(r,X),∫_r^t (X(t) - X(u))^⊗ n-1⊗ X(u) ⊗ X(r)
+ ∇^n F(s, X),∫_s^t( X(t) - X(u))^⊗ n-1⊗ X(u)
For every r∈[s,t] the function g(h):= F(r, X_r + h_[r,T]) is (n+1)-times continuously differentiable in zero by assumption. Thus Schwarz' lemma shows that the causal space derivative ∇^n+1 F(r, X) = ∇^n+1 g(0) is a symmetric tensor (i.e., ∇^n+1 F(r, X) = ∇^n+1 F(r, X)).
Consequently,
∇^n F(s, X),∫_s^t( X(t) - X(u))^⊗ n-1⊗ X(u)= ∇^n F(s, X), ∫_s^t( X(t) - X(u))^⊗ n-1⊗ X(u).
It holds that
∫_s^t( X(t) - X(u))^⊗ n-1⊗ X(u)
= ∑_l=0^n-1 (-1)^n-l-1n-1l(X(t)-X(s))^⊗ l⊗∫_s^t (X(u) - X(s))^⊗ n-l-1⊗ X(u).
Recalling (<ref>) and (<ref>) we deduce
1/(n-l-1)!∫_s^t (X(u) - X(s))^⊗ n-l-1⊗ X(u)
= 1/(n-l)! (X(t)- X(s))^⊗ n-l.
And plugging that into (<ref>) yields
∫_s^t( X(t) - X(u))^⊗ n-1⊗ X(u) = (X(t)-X(s))^⊗ n∑_l=0^n-1 (-1)^n-l-1n-1l(n-l-1)!/(n-l)!
=(n-1)!/n! (X(t)-X(s))^⊗ n,
since ∑_l=0^n-1 (-1)^n-l-1nl = 1. Using similar arguments for the inner product with D∇^n F and ∇^n+1 F in (<ref>) we conclude that
n!/(n-1)!∫_s^t ⟨∇^n F(u, X), ( X(t) - X(u))^⊗ n-1⊗ X(u)⟩
=∫_s^t D∇^n F(r,X), ( X(t) - X(r))^⊗ n r +∫_s^t ∇^n+1 F(r, X),(X(t) - X(r))^⊗ n⊗ X(r)
+ ∇^n F(s,X),(X(t)-X(s))^⊗ n.
Note that it is sufficient that the functional F and its causal derivatives are continuous. As seen in the proof the images (𝕀, X^) lie in a compact subset of [0,T]× D if X is continuous. Then any continuous functional restricted to this compact metric space is uniformly continuous and bounded.
The next corollary is a generalization of [Lemma 2.2]ananovaPathwiseIntegrationRespect2017.
It estimates the error of a lower order Taylor approximation of F composed with an α-Hölder continuous path X [0,T]→^m. The reader may notice that the previously mentioned result [Theorem 1.10]contPathwiseIntegrationChange2019 can be applied to less regular paths using Föllmer integrals. But since we want to estimate the remainder (<ref>) for the next result, we prefer to use integrals against paths of bounded variation.
Let X [0,T] →^d be α-Hölder continuous for some α∈(0,1) and F as in Theorem <ref>. Assume additionally that F and D F are Lipschitz continuous for fixed times with bounded Lipschitz constants. Then it holds for every (s,t)∈Δ_T with | t-s|≤ 1 and 0≤ l≤ n-1 that
| F(t, X) - ∫_s^t DF(r,X) r- ∑_k=0^l 1/k!∇^k F(s, X),(X(t) - X(s))^⊗ k|
≲| t-s|^α + (n-1)α^2 + | t-s|^1+α + | t-s|^(l+1)α,
with a constant depending on n, l, | F|_∞, | D∇^k F|_∞ and |∇^k+1F|_∞ for k=l+1,…, n-1 as well as sup_r∈[s,t]{ Lip(F(r, ·), DF(r, ·))} and | X|_α.
We point out the differences of (<ref>) to a typical Taylor approximation. As usual the exponent (l+1)α connected to the l space derivatives used in the approximation. The appearance of (1+α) is due to the path-dependent time derivatives. And finally α + (n-1)α^2 due to an approximation of X by piecewise constant paths.
Let be a partition of [s,t] whose subintervals are all of length ||.
Consider a piecewise linear approximation X^ of X on [s,t] such that X^_s = X_s, and for every [u,v]∈ it holds X^(u)= X(u) and X^(v) = X(v) and in between X^ is linearly interpolated. Then X^ is continuous and on [s,t] of bounded variation. Hence the previous theorem shows that
F(t, X^) - ∫_s^t DF(r,X^) r- ∑_k=0^l 1/k!∇^k F(s, X^),(X^(t) - X^(s))^⊗ k
= ∑_k=1^n-11/k!∫_s^t D∇^k F(r, X^), (X^(t) - X^(r))^⊗ k r+ ∑_k=l+1^n-11/k!∇^k F(s, X^),(X^(t) - X^(s))^⊗ k
+ 1/(n-1)!∫_s^t ⟨∇^n F(r, X^),(X^(t) - X^(r))^⊗ n-1⊗ X^(r)⟩.
Note that X^ is also α-Hölder continuous with | X^|_α≲| X|_α
u≤ v∈ [t_j, t_j+1] it holds X^(v) - X^(u)= v-u/t_j+1 - t_j (X(t_j+1) - X(t_j)), so
| X^(v) - X^(u)|/| v-u|^α≲_| X|_αv-u/t_j+1 - t_j^1-α≤ 1.
And for u∈[t_j, t_t_j+1], v∈[t_i, t_i+1] with j<i it follows that
| X^(v) - X^(u)|/| v-u|^α≲_| X|_α(t_j+1-u/v-u)^α + (t_i - t_j+1/v-u)^α + (v - t_i/v-u)^α≤ 3.
and that for every [u,v]∈ it holds on (u,v),
| X^/ r| = |X(v) - X(u)/v- u|≤| v-u |^α - 1 = ||^α - 1.
In the case l≤ n-2, it follows that (<ref>) is bounded by
∑_k=1^n-11/k!| D∇^k F|_∞| X^|_α^k| t - s|^kα| t-s| + ∑_k=l+1^n-11/k!|∇^k F|_∞| X|_α^k| t-s|^kα
+ 1/(n-1)!|∇^n F|_∞| X^|_α^(n-1)| t-s|^(n-1)α||^α - 1| t-s|
≲_n,l, | D∇^k F|_∞, | X|_α, |∇^k F|_∞| t-s|^1+α + | t-s|^(l+1)α + | t-s|^1+ (n-1)α||^α - 1.
Since F(s,X^) = F(s,X) and X^(t) -X^(s) = X(t) - X(s) it follows that
F(t, X) - ∫_s^t DF(r,X) r- ∑_k=0^l 1/k!∇^k F(s, X),(X(t) - X(s))^⊗ k
= F(t, X)-F(t, X^) - ∫_s^t DF(r, X) - DF(r, X^) r
+ F(t, X^) - ∫_s^t DF(r,X^) r- ∑_k=0^l 1/k!∇^k F(s, X^),(X^(t) - X^(s))^⊗ k.
Since F and DF are Lipschitz continuous for fixed times it holds that
| F(t, X)-F(t, X^) |≲_Lip(F(t,·))| X - X^|_∞≲_| X|_α||^α
and similar
|∫_s^t DF(r, X) - DF(r, X^) r |≲_sup_r∈[s,t] Lip(DF(r,·))| X - X^|_∞| t-s|≲_| X|_α| t-s|||^α.
Together with estimate (<ref>), we deduced that (<ref>) is bounded by a constant depending on n, l, Lip(F), sup_r∈[s,t] Lip(DF(r, ·), | X|_α, | D∇^k F|_∞, |∇^k F|_∞ times
||^α + | t-s|||^α + | t-s|^1+α +| t-s|^(l+1)α + | t-s|^1+ (n-1)α||^α - 1.
Optimizing the choice of ||, by balancing ||^α≈| t-s|^1+ (n-1)α||^α - 1, i.e. ||≈| t-s|^1+(n-1)α we obtain the assertion for 0≤ l≤ n-2. Finally note that for l=n-1, the second sum in (<ref>) is empty, so the RHS is simply | t-s|^1+α+| t-s|^1+ (n-1)α||^α - 1. Nevertheless there is nothing wrong in writing | t-s|^nα in the assertion (<ref>), since α + (n-1)α^2 < nα.
||≈| t-s|^1+(n-1)α, in the sense that the number of subintervals m=| t-s|/||∈ satisfies
| t-s|^-(n-1)α≤ m ≤ 2| t-s|^-(n-1)α,
possible because | t-s|^-(n-1)α≥ 1. Then
||^α≤| t-s|^α + (n-1)α^2, ||^α-1≤ 2^1-α| t-s|^-(1+(n-1)α) + α+(n-1)α^2.
§ ROUGH FUNCTIONAL ITÔ FORMULA
Throughout this section we denote the point evaluation of two-parameter paths Ξ by Ξ_s,t = Ξ(s,t).
We now define for an α-Hölder continuous path X and (s,t)∈Δ_T, ^0_s,t:=1 and for k≥1,
^k_s,t := 1/k!(X(t) - X(s))^⊗ k.
Further we write for their collection := (^0, ^1, …). It was shown in [Definition 4.6, Lemma 4.7]contPathwiseIntegrationChange2019 that for every k≥ 1, ^kΔ_T → T_k(^d) is a kα-Hölder continuous two-parameter path and a reduced Chen relation holds: For every (s,u), (u,t)∈Δ_T,
_s,t = _s,u⊗_u,t.
[Rough Functional Itô Formula]
Let X be α-Hölder continuous for α∈ (0,1) and n be the smallest natural number such that 2α + (n-1)α^2>1. Let further F be as in Corollary <ref> to parameter n+1. Assume additionally that for each k=1, …, n, ∇^k F and D∇^k F are also Lipschitz continuous for fixed times with bounded Lipschitz constants.
Then
∫_0^T ∇ F(u, X) (u) := lim_||→ 0∑_[s,t]∈∑_k=1^n⟨∇^k F(s, X),^k_s,t⟩,
is a well defined limit.
Moreover if F satisfies Corollary <ref> to parameter ñ such that α + (ñ-1)α^2>1, then
F(T, X) = F(0, X) + ∫_0^T DF(u, X) u + ∫_0^T ∇ F(u, X) (u).
We show existence of the rough integral by adapting the proof of [Proposition 4.10] contPathwiseIntegrationChange2019 to our path-dependent setting.
Set for (s,t)∈Δ_T, k=1,…, n,
Ξ_s,t^X := ∑_k=1^n ⟨∇^k F(s, X), _s,t^k ⟩,
R_s,t^X, k := ∇^k F(t, X) - ∑_l=k^n ⟨∇^l F(s, X), _s,t^l-k⟩.
As usual in rough path theory (<ref>) follows from the sewing lemma once we show that for every (s,u), (u,t)∈Δ_T (w.l.o.g | t -s|≤ 1),
|Ξ_s,t-Ξ_s,u-Ξ_u,t|≲| t-s|^θ
for some θ >1.
Recalling that ∇^k F(s, X) is symmetric, the reduced Chen relation (<ref>) implies that
∇^k F(s,X), _s,t^k
= ∇^k F(s,X), _k_s,u⊗_u,t
= ∑_l=0^k ∇^k F(s,X), _s,u^k-l⊗_u,t^l.
Plugging that into Ξ_s,t and interchanging the summation order, it follows that
Ξ_s,t - Ξ_s,u = ∑_k=1^n ∑_l=k^n ∇^l F(s,X), _s,u^l-k⊗_u,t^k .
Therefore
Ξ_s,t-Ξ_s,u-Ξ_u,t = - ∑_k=1^n R_s,u^X, k, _u,t^k.
For k=1, …, n, it holds by Corollary <ref> applied for ∇^k F∈ℂ_b^1,(n+1-k) with l= n-k, that
| R_s,u^X, k|≲| u-s|^α + (n-k)α^2.
The other terms in (<ref>) don't appear since α∈(0,1) and the minimality of n imply α + (n-k)α^2 < (n-k+1) α and 1> α + (n-k)α^2.
1≥α + (n-k)α^2, if not for some k, then 1 <α +(n-1)α^2. But then 2α + (n-2)α^2 > α + (n-1)α^2 > 1, which is a contradiction to n smallest natural number satisfying 2α + (n-1) α^2 > 1.
Recalling that ^k is kα-Hölder continuous, we deduced that
| R_s,u^X, k, _u,t^k|≲| u-s|^(k+1)α + (n-k)α^2.
Since
(k+1)α + (n-k)α^2 = nα^2 + α + k(α - α^2) > 2α + (n-1)α^2,
and by assumption 2α+(n-1)α^2>1, (<ref>) now follows from (<ref>).
It is left to show the functional Itô formula. Applying Corollary <ref> to F∈ℂ_b^1,ñ with l=n, shows that
| F(t, X) - F(s, X) - ∫_s^t DF(u, X) u - ∑_k=1^n⟨∇^k F(s, X),^k_s,t⟩|≲| t-s|^1 + α + | t-s|^(n+1)α + | t-s|^α+(ñ-1)α^2.
By assumption α+(ñ-1)α^2 and (n+1)α are both greater one.
Together with the estimate from the sewing lemma it follows that t↦ F(t, X) - F(0,X) - ∫_0^t DF(u, X) u - ∫_0^t ∇ F(u,X)(u) is θ̃-Hölder continuous for some θ̃>1. Consequently the map is constant zero.
Clearly ñ≥ n+1.
For Brownian sample paths the theorem can be applied with n=2 and ñ = 4. (Indeed 2α + α^2 <1 ⇔α > √(2) -1 ≈ 0,41 and α + 3α^2 >1 ⇔α > (√(13) - 1)/6≈ 0,43).
So for the existence of the integral it is sufficient that the functional F has 3-causal space derivatives, but for the Itô formula we need 4-causal space derivatives. This additional regularity is comparable to the regularity change in the standard setting [Lemma 4.1, Proposition 5.8]RoughBook. But there are regimes of α where the the change in regularity exceeds one. For example for α∈(√(2)-1, (√(16)-1)/6], it holds n+1=3 and ñ=5. This gap increases for α→ 0.
It remains an open question if the loss of regularity from α to α^2 in Corollary <ref> can be circumvented.
We can easily change to
^n_s,t := ^n_s,t - 1/n!μ((s,t]),
for a symmetric tensor-valued measure μ = ∑_wμ^w e_w (over words w of length n in the alphabet ), such that μ^w
are finite signed measures with no atoms. Then it is immediate that
F(T, X) = F(0, X) + ∫_0^T DF(u, X) u + ∫_0^T ∇ F(u, X) (u) - 1/n!∫_0^T ∇^nF(u, X), μ(u).
The measure μ could be for example a suitable notion of finite p-variation, cf. [Definition 4.1]contPathwiseIntegrationChange2019 or the stochastic quadratic variation if X is a sample path of a semimartingale.
§.§ Acknowledgement
The author would like to thank Christa Cuchiero, Xin Guo and Francesca Primavera for making available the slides of Christa Cuchiero's talk on cuchieroFunctionalItoformulaTaylor at TU Berlin, 2023. Moreover, Nicolas Perkowski for his helpful comments.
|
http://arxiv.org/abs/2409.02796v1 | 20240904151230 | Orientational properties of the HGO system in a slit geometry in two-dimensional and three-dimensional case from Monte Carlo simulations and Onsager theory revisited | [
"Agnieszka Chrzanowska"
] | cond-mat.soft | [
"cond-mat.soft"
] |
Department of Physics, Kraków University of Technology,
ul. Podchora̧żych 1, 30-084 Kraków, Poland.
e-mail: [email protected]
Orientational properties of the HGO system in a slit geometry in two-dimensional and three-dimensional case
from Monte Carlo simulations and Onsager theory revisited.
Agnieszka Chrzanowska
September 9, 2024
========================================================================================================================================================================
§ ABSTRACT
A problem of the orientational and density structure properties of a confined three-dimensional (3D) and two-dimensional (2D) Hard Gaussian Overlap (HGO) ellipsoids has been revisited
using the Onsager-type second virial approximation of Density Functional Theory (DFT) and constant-pressure Monte-Carlo (MC)
simulations.
At the walls the assumed particles in 3D are forced to exhibit planar alignment. In the nematic as well as in the smectic regime particles situated apart from the walls attain homeotropic arrangement. This unusual bistable rearrangement is named as the eigenvalue exchange problem of the order parameter tensor. At the same time a bistable arrangement is not observed in the two-dimensional case of the same system. Comparison of the DFT theory and MC simulation results has been given.
Whereas comparison of the orientational properties obtained from MC simulations and DFT theory is reasonable for a large range of densities, it does not concern
the density profiles. In denser systems differences become larger.
It occurred, however, that by manipulating degree of penetrability of the particles at the walls one can influence the surfacial density which improves comparison.
A discussion upon the problem what factors promote simultaneous existence of planar and homeotropic arrangement in a confinement has been provided.
§ INTRODUCTION
Due to application issues
interactions with surfaces is one of the most important factors that liquid crystalline community is interested in.
In the liquid crystalline cells used widely in technological applications molecules are anchored at surfaces and the state of the system
in the cell is the result of the interplay of this anchoring influence with the electric field interaction with the whole liquid crystal (LC) system.
Whereas tailoring this anchoring in practical applications has been already mastered, knowledge on its
molecular origin is still incomplete
and many attempts have been undertaken so far to understand
physics of this phenomenon and its influence on the orientational properties of the system.
It was already known more than 30 years ago <cit.> that
in the presence of substrates mean local densities as well as orientational properties become inhomogeneous, mostly within surfacial areas. If the walls separations are of a few molecules lengths this modulations may influence also middles of the samples.
So far extensive studies on confined LCs comprise purely computer simulations
<cit.>, as well as
available theoretical approaches <cit.> or both of them applied and compared for the same system <cit.>.
The most elaborated systems are ultra thin samples where the walls separations are less the 10 molecules lengths
and the anchoring conditions are assumed as symmetric (planar or homeotropic) or hybrid (planar on one side and homeotropic on the other side of the sample) conditions at the walls.
Theoretical approaches of such ultra thin systems started from the Landau–de Gennes descriptions <cit.>, yet, as the authors noticed, they suffered from many approximations.
More fruitful theories seem to be the ones of the
microscopic origin, like the density functional theory or the Onsager theory in the case of hard bodies.
Two types of rod like particles shape are usually studied: of ellipsoidal <cit.> or of cylindrical shape <cit.>.
One of the most popular hard core ellipsoidal particles are the objects defined by the Hard Gaussian Overlap rule <cit.> that stems out of the Gay-Berne potential <cit.>.
To consider LCs in confinement
this interaction must be supplemented with the wall-particle interactions.
The simplest interaction with the walls that is used is
the simple hard
needle-wall (HNW) surface potential.
This potential assumes that the interaction is governed by the behavior of a hard needle which goes throughout the particle
along its long axis <cit.> and interacts with a flat hard surface. This idea is very convenient to use in the studies also because of the fact that by changing the length of such a needle it is possible to moderate the degree of
of substrate penetrability.
Another idea to model surface interactions is to use a contact function that describes the walls as built from spheres or other HGO particles or to use special functions <cit.>.
In <cit.>, for instance, a special contact function has been developed
to characterize a surface potential capable of
exhibiting both homeotropic and planar anchoring alignments which promote appropriate alignment throughout the whole sample.
This surface potential allowed to observe the so-called
bistable anchoring - the existence of the planar and homeotropic alignment throughout the whole sample depending upon the parameters used in the contact function.
Application of the above idea has been further continued in <cit.>.
The word "bistable" is used here to show possibility to obtain different arrangements
with the use of the same surface contact function but with different parameters.
This theoretical attempt is aimed at describing
the display cells that possess two optically distinct surface stabilized arrangements.
It should be noted that the same word can be, however, used in different context –
when two types of arrangement, planar and homeotropic, exist at the same time in one sample.
In what follows we will focus on the effect found in the confined HGO system where the particles at the walls are planar and
the particles within the sample are perpendicular to the walls.
Section <ref> presents the model, Section <ref> introduces basic formulas of the Onsager DFT approach. Section <ref> and
Section <ref> provide the results for three- and two-dimensional cases, respectively.
Finally Section <ref> provides a discussion on the results.
§ MODEL
We consider ellipsoidal hard particles that interact through the potential
U_ij(a_i,a_j,R_ij ) =
{[ 0 if r_ij≥σ (a_i,a_j,r_ij ); ∞ if r_ij < σ (a_i,a_j,r_ij ),; ].
where a_i is the unit vector pointed along the particle i and describing its orientation and R_ij is the vector connecting centers of the particle i and j.
Vector r_ij is the unit vector along the direction R_ij.
σ (a_i,a_j,r_ij ) is the particle shape function used in the Gay-Berne potential <cit.> which serves here as the contact function. Its form follows
σ (a_i,a_j,r_ij ) = σ_0 (
1-1/2χ[
(r_ij·a_i+r_ij·a_j)^2 /1+χ (a_i·a_j)
+
(r_ij·a_i-r_ij·a_j)^2 /1-χ (a_i·a_j)]
)^-1/2
With respect to X,Y and Z coordinates this function is of the square type hence
the condition (<ref>) describes the surface of an ellipsoid in 3D and an ellipse in 2D.
Anisotropy of the ellipsoid is provided by the factor χ=κ^2-1/κ^2+1, where κ is the length to breadth ratio of the particle, κ=L/σ_0.
The above formulas are very convenient to use in the Monte Carlo simulation as well as in the Onsager theory.
To study arrangement of the particles confined between substrates one needs also to
impose conditions forcing orientational arrangement directly at the walls.
For homeotropic orientational arrangement it is sufficient to forbid the particles centers
to go beyond the walls. To induce planar arrangement the following condition has been proposed
U_i(θ_i,y_i)=
{[ 0 if |z_i-z_wall| ≥λcos(θ_i),; ∞ if |z_i-z_wall| > λcos(θ_i),; ].
where λ is the parameter responsible for embedment of the particle within the surface.
For λ = L/2 the condition (<ref>) is equivalent to the interaction of a hard needle of length L with a hard surface.
As it was shown, for instance in <cit.>, by changing λ one can influence indirectly the density at the surface.
§ THE ONSAGER THEORY
The Helmholtz free energy density functional in the second virial approximation is given by
β F_Helm[ρ ( r,a ) ]=∫ρ(r_1,a_1) log [ ρ(r_1,a_1) -1] dr_1da_1
-1/2∫( exp[-β U_12]-1
) ρ(r_1,a_1) ρ(r_2,a_2) d r_1 da_1 dr_2 da_2
-μβ∫ρ(r_1,a_1) dr_1da_1,
where U_12 stands for the interaction potential, ( exp[-β U_12]-1) is the Mayer function and
the reduced temperature is β=1/kT with k being the Boltzmann factor.
a in the case of uniaxial particles is the vector along main molecular axis.
ρ provides probability of finding a particle at a given spatial position r and oriented
according to the orientation of the vector a.
μ is the chemical potential.
The probability function ρ depends in 2D on the X and Y spatial coordinates that span the entire
surface and on one angular coordinate ϕ, whose values belong to the interval (0,2π).
In 3D ρ depends on three coordinates, X, Y and Z, and two angles θ (from the interval (0,π)) and ϕ (from the interval (0,2π)).
Routinely, ρ is normalized to the number of particles N.
In case of interacting walls this formula must be supplemented with the term
describing interactions of particles with the walls.
β F_walls[ ρ (r,a) ]= β∫ρ (r,a) ∑_i U^i_wall d r da,
where i is the number denoting the wall.
These forms, (<ref>) and (<ref>), have to be recalculated according to the system and particle geometry.
In the case of hard interactions the Mayer function is equal to -1, when the particles overlap and 0 otherwise, which, after integration over one set of spatial
variables, results in the excluded volume V_excl.
To obtain the distribution function ρ one has to minimize to whole system energy
with respect to the geometry of the outcome result, here Z dependence and θ and ϕ in 3D (or Y and θ in 2D).
δ (F_Helm+F_walls )/δρ=0.
This leads to the self-consistency equation, for 3D for instance as
logρ (a_1,z_1)= -d ∫
V_exclρ (a_2,z_2) dz_2 da_2
where d=N/V stands for the averaged density. Note that in the case of inhomogeneous systems the local density is not constant and the profiles of such local densities is the subject of the study whereas the averaged density plays the role of the parameter in the theory.
The equation (<ref>) has to be solved now numerically
in a self-consistency manner.
Numerical scheme chosen for performing integrals is based on the Gaussian quadratures applied at the Gaussian points.
The appropriate values, the excluded volume and distribution functions, are stored then in the form of a matrix and, subsequently, used to find equilibrium solutions to the problem.
The use of the Gaussian quadratures
allows for diminishing the number of the integral function evaluations, which is especially crucial in the case
of multidimensional integrations inherent to the DFT theories of liquid crystals.
Due to the hindrance imposed by the walls, integrations over the polar θ angle cannot span the whole interval (0-π) within the surface region.
The effective interval will depend here on the distance of the particle from the wall.
Due to the fact that we use matrix method in the calculations, where values of ρ are stored at the Gaussian points z_i,
we can also utilize the angular Gaussian points within the interval (θ_min(z_i)-θ_max(z_i)) parametrized by z_i values. In such a manner one can use always the whole set of the Gaussian points.
The numerical method with such z_i adjusted Gaussian points
has been presented in <cit.>.
Note also that the Gaussian method is also convenient
to resolve problems of liquid crystals
with higher symmetry than nematics
<cit.>.
§ RESULTS FOR 3D HGO SYSTEM.
In what follows we present properties of the above presented HGO system of particles.
Thickness of the particles is assumed here as σ_0=1 and the shape aspect ratio as κ=L/σ_0=5.
Molecules are placed between hard walls separated by their 4 lengths.
In such an ultra thin sample the walls significantly influence the state of the whole system.
Additionally particles at the walls are allowed to immerse slightly in the walls as to
tailor density at the surfaces which may decide, for instance, on the number of density peaks in the smectic regime.
In Figure (<ref>) three different liquid crystalline states of the investigated three-dimensional system are presented from the Monte Carlo simulations. For the low pressure equal to 0.3 a typical nematic arrangement is seen, where the particles inside the sample are ordered perpendicularly to the walls and these ellipsoids, which are at the walls, are planarly and chaotically arranged.
This is already worth paying attention since an initial arrangement of the particles was parallel to the walls. During MC simulation the molecules inside the sample must have reoriented about 90 degrees.
For the pressure equal to 0.6 an onset of a smectic layering with orientational disorder of the particles at surfaces is observed. Again one sees here coexistence of planar and homeotropic arrangement.
For the pressure equal to 4.0
a well orientational order of the particles, which are arranged in layers, is seen. At the surfaces one observes also
a two-dimensional orientational order.
By examining the Monte Carlo mobility of some chosen particles of the system at the pressure 4.0 in the form of MC trajectories,
Figure (<ref>),
one concludes that the system is of the smectic type, although at the first glance one may think of it as a solid.
Figure (<ref>a) shows the part of the system cut from the middle of the sample, so we see ends of the green ellipsoids.
Ellipsoids for which trajectories have been calculated are given in red. Trajectories themselves here are yellow.
From the panel (<ref>b), where only trajectories in the top view geometry are given, one observes that the particles can move from
site to site, hence they are not blocked in crystalline nodes. Perspective view at these trajectories in (<ref>c) reveals also that the ellipsoids can relatively easily move from one layer to another one. This convince us that the system exhibits smectic ordering.
In Figure (<ref>) an exemplary density profile for a system has been provided for d=0.32 .
A chosen value of pressure in the MC simulations is here 0.6. The average density d used
in the DFT calculation has been chosen in such a way that the resultant density profile
seemed to be the closest match to the MC result with respect to the heights of the peaks.
We do not use at this point any scalings like, for instance, Parsons scaling <cit.>, since it is not known to what extent it influences the surfacial regimes or whether it acts equally well in the bulk and in the surfacial areas. Additionally, we bear in mind that the arrangement is already of the smectic type (the onset).
The aim of our procedure was simply to find a match from the DFT calculations to the MC results and too see how density at the walls influences the particles arrangement.
We see already, what previously was also highlighted in <cit.>, that the second virial theory underestimates the numbers of layers. Here, DFT predicts strong peaks at the walls and three density peaks (layers) in the middle of the sample.
In the MC simulations the peaks at the walls are much lower with very steep peaks at the
distances from the walls where the particles regain complete rotational freedom.
The number of peaks is in the middle of the sample is two.
By changing the value of the penetrability parameter λ in the DFT application one observes that much better comparison can be obtained.
This can be seen in Figure (<ref>) on the right panel. Not only the number of peaks are correct but also their profiles are getting very close.
Orientational properties of the system are at best seen by studying the behaviour of the order parameter tensor defined as
Q= 1/2< 3 a a-δ >.
If the coordinate system is chosen to correspond to the symmetry of Q one needs only to look at the diagonal terms, since off diagonal terms are zero.
In our case Z axis is perpendicular to the walls surfaces and X and Y axes are situated within XY plane.
Figure (<ref>) shows eigenvalues of the order parameter tensor obtained from MC simulations for the case where pressure is equal to 0.6 and their matching profiles obtained from the DFT calculations. Off diagonal components are not presented since they are at the level of zero (with some fluctuations in the MC results).
Two different penetrability parameters have been here used. The meaning of these profiles is as follows.
If the particles are positioned along Z axis, Q_zz is positive (for perfect alignment will be equal to 1). At the walls, when particles
are perpendicular to Z axis, Q_zz is equal to -0.5. Since Tr Q=0 for uniaxial ordering two other eigenvalues must have opposite sign and be equal.
This is seen in the middle of the sample
in both cases, without and with renormalization of the surface density caused by the change of λ parameter.
In the left panel Figure (<ref>) the DFT theory predicts biaxial ordering close to the walls. All three eigenvalues are different. This is so since particles are (almost) confined to the plane and within this plane they are also ordered in one direction.
For the pressure equal 0.6
in Figure (<ref>) we do not see such an arrangement. Top view of the system shows chaotic arrangement in the surface plane.
By changing penetrability parameter (the right panel of (<ref>)), which influence the density at the walls,
the orientational profiles become more close to the MC results. Because of the visible modulation of the density profile in the middle of the sample this case is of the smectic type (onset of this ordering).
In Figure (<ref>) we present exemplary results for the nematic region.
Here again the profiles obtained from simulation and theory are quite similar. The height of the density peaks are almost the same, yet due to the use of different λ's the positions of them are different. On the other hand, the eigenvalues of Q seem very similar.
It also turns out that such comparisons for denser and well ordered smectic phases
(like in Figure (<ref>))
is also reasonable, although still some departures are seen. The density parameter in the DFT calculation was chosen as to obtain the similar heights of the peaks inside the sample. In this case however, the DFT results does not reproduce small second peaks at the walls and other peaks from the DFT close to the walls are larger.
In all the above examples we observe a simultaneous bistable ordering.
In the next section we consider a two-dimensional counterpart of the current system
in order to see, whether such a bistability is also present.
§ RESULTS FOR 2D HGO SYSTEM.
In this section we would like to check whether bistable simultaneous ordering will be also present in the two-dimensional case.
In Figure (<ref>) a low density case with isotropic phase in the middle of the sample is presented. Both types of profiles exhibit a good comparison of the simulation and theoretical results. Comparison of the orientational profiles are very good, yet in the case of the density profiles there are departures at the walls. No bistable ordering is seen.
Similar effects are seen in a denser system with the onset of the nematic phase presented in Figure (<ref>).
In Figure (<ref>) the profiles for a dense system is given. One sees several density
peaks denoting the structure is layered. The peak in the middle has similar height, yet the other peaks are different.
Also the number of peaks does not agree. The orientation is parallel to the walls and the orientational profiles in MC and in MD are similar, which is of no surprise, once the system is very ordered and the parameters are close to 1.
From the above presented results two important conclusions can be drawn. First conclusion is that no bistable orientational arrangement is observed in 2D cases. This indicates that the reason for such an arrangement observed in 3D cases is caused by the surface of ellipsoids, which are larger in the middle and narrower at the ends,
allowing for spaces and conditions, where
other ellipsoids tend to place their ends, which finally results in
their perpendicular to the walls position. It is interesting that such steric origin
tendency occurs even in the case of dilute systems (see Figure (<ref>)).
Another important conclusion is drawn while comparing pictures with particles arrangement
with the density and orientational profiles.
At the walls the Onsager DFT theory applied without any renormlization does not reproduce correctly the density profiles.
This is already seen in dilute systems (Figure (<ref>)).
These wall departures may change even the number of density peaks as shown in the 3D case
(Figure (<ref>)). This effect is rather on the side of the lack of third and higher order terms in the virial expansion of the free energy. For denser systems
one observes however that at the walls we see spatial arrangement (Figure (<ref>)) which requires
inclusion of all space coordinates instead of only one perpendicular to the walls,
but this influence is of smaller importance as compared to the inclusion of higher order terms in virial expansion.
§ DISCUSSION
Orientational and density profiles of the liquid crystalline system made from HGO particles placed in a slit geometry have been obtained and presented from the Monte Carlo simulations and DFT theories for 3D and 2D geometry. Both approaches provide similar results.
One observes, however, some structural departures that increase with the system averaged density. These departures are the result of the second virial approximation in the DFT approach and the lack of third and higher order terms in the virial expansion of the free energy.
We arrive at a conclusion that these terms play more important role in case of confined materials comparing to the bulk systems. Unfortunately, including such terms in the DFT calculation is not an easy task, mainly due to the computers performance limitations. To our knowledge, there has been no attempt so far made to tackle this problem in confined systems.
Nevertheless, even on the grounds of the second virial approximation still
a lot work can be done to understand mechanisms governing the physics of confined liquid crystals.
Work and interest of researchers focuses here on the issue of the
interactions with the walls and its influence. It seems, however, that the
most crucial resultant property is the local density at the walls.
It directly emerges from the surface interactions, yet there is no simply way
to predict its value just by changing the interactions parameters. One must perform
the whole procedure of solving theoretical equations or perform MC simulations.
In principle, we all are aware that density decides about liquid crystalline phase. If there is a strong density peak at the walls
one can expect then a strong orientational order of molecules. This order influences parts of LC adjacent to direct surfacial regions, and these adjacent parts influences the parts placed next to them and so on. As a result the number of the density (smectic) peaks can be changed (depending on the situation).
In the present paper we show that by changing penetrability of the molecules (in the DFT theory) we can change (or renormalize) surfacial density and, as a result, it is possible to find a solution that exhibits the correct number of the density peaks and the structural and orientational profiles are very close to the MC results. Examples were shown for the case of nematic, the onset of the smectic and well ordered smectic in 3D case.
One should however be cautious with such a renormalization of the surface density, since upon applying it the conditions used in the MC and DFT are not exactly the same, (although the results are close), so it is not fully legitimate or universal.
We present the above comparisons rather as an interesting result
that reveals the importance of the local density at the surfaces.
The current paper presentation clearly highlights then the role of the local surfacial density.
This local density, however, is the result of three different factors:
particles shape, the contact potential function and the averaged density.
Although we all realize that the contact potential functions are the most important
element that influences confined materials,
there are many aspects of these functions that may determine this influence.
Besides the strength and anisotropy of interactions at the confinement regions, the discussed in the present paper penetrability factor, it is also the
architecture of the surfaces themselves that is of vital importance.
In the last aspect the most popular idea used in practice is moderating surfaces with a thin layer of polymer liquid crystal, which, on the other hand, seems to be one of more sophisticated theoretical tasks.
It has occurred indeed that
changes of the surface density can be done by introducing the walls made from grafted
polymer chains.
In <cit.>
for a system made of ellipsoidal Gay-Berne particles
Lange and Schmid have presented the possibility
of an anchoring transition between tilted and homeotropic arrangements
upon manipulating the grafting density.
Besides the surface density it is the particles shape which determines the orientational behavior. It may also lead to new effects like presented here eigenvalue exchange problem
or bistable arrangement where particles directly at the walls are placed planarly and particles inside the sample attain perpendicular to the walls orientation.
This was the case for all densities used here and occurred in the MC simulations as well as in the DFT calculations for three-dimensional case. Interestingly, this effect does not occur in 2D.
The conclusion is that it is the surface of the particles that is responsible for such a bistable arrangement.
We suggest that it is the fact that ellipsoids become narrower at the ends which
entails occurrence of the force that reorients the particles. In contrast, in the case of spherocylinders this force may be absent.
This conclusion can be supported by the results of
the MC simulations of spherocylinders at the single wall performed by Dijkstra et al. <cit.>, where the hard wall promoting planar alignment
induces a thick layer of planar nematic. The spherocyllinders used in this paper were quite long, hence it is not fully certain that shorter particles will behave in the same manner.
It is very interesting that the bistable ordering has been also reported in the case of particles interacting through
a continuous potential.
In <cit.> Palermo et al. have shown by the use of the MC simulations of the Gay-Berne particles placed on the graphite surface, a discontinuous change in anchoring from planar to normal on going from the first
to the second adsorbed layer. This is exactly the same effect as discussed in the present paper. In view of the HGO properties discussed here we can claim then that this effect is
caused by the steric part of the Gay–Berne potential.
Since up to now this potential is the most realistic one, the question arises
about the physical mechanisms which promotes uniform planar (parallel to the walls) alignment in the confined geometries, of slit or cylindrical shape, which are met in reality.
It is an open question now whether the Gay-Berne potential is sufficient to describe the realistic LCs in confinement
or whether the surface potentials (contact functions) must be of much stronger influence to overcome the tendency that leads to simultaneous bistable configurations (like in <cit.>).
§ ACKNOWLEDGMENTS
This work was supported by Grant No. DEC-2021/43/B/ST3/03135 of the National Science Centre in Poland.
99
SluckinPoniewier Sluckin T. J., Poniewierski A. in Fluid Interfacial Phenomena, edited by C. Croxton,
Wiley, New York 1986; 215.
Jerome
Jerome B. Surface effects and anchoring in liquid crystals. Rep. Prog. Phys. 1991; 54: 391–452.
TelodaGama
Telo da Gama M. M., The Interfacial Properties of a Model of a
Nematic Liquid–Crystal. Mol. Phys. 2006; 52: 611–630.
Barmes Barmes F., Cleaver D. J.
Using particle shape to induce tilted and bistable liquid crystal anchoring
Phys. Rev. E 2005; 71: 021705–1–021705–11.
BarmesHardNeedle Barmes F., Cleaver D. J. Computer simulation of a liquid–crystal anchoring transition.
Phys. Rev. E 2004; 69: 061705–1–061705–12.
Cheung Cheung D. L., Schmid F.
Monte Carlo simulations of liquid crystals near rough walls. J. Chem. Phys. 2005; 122:
074902–1–074902–7.
Rene
van Roij R., Dijkstra M., Evans R. Orientational wetting and capillary nematization of hard–rod fluids. Europhys. Lett. 2000; 49:
350–356.
Marjolein Dijkstra M., van Roij R., Evans R. Wetting and capillary nematization of a hard-rod fluid: A simulation study. Phys. Rev. E 2001; 63:
051703–1–051703–7.
Palermo
V. Palermo V., Biscarini F., Zannoni C. Abrupt orientational changes for liquid crystals adsorbed on a graphite surface. Phys. Rev E 1998; 57: R2519–R2522.
Lange2002Comp
Lange H., Schmid F. Surface anchoring on liquid crystalline polymer brushes. Comput. Phys. Commun. 2002; 147: 276–281.
Lange2002JChemPhys
Lange H, Schmid F.
Surface anchoring on layers of grafted liquid-crystalline chain molecules: A computer simulation.
J. Chem. Phys. 2002; 117: 362–368.
Greschek2010
M. Greschek, Melle M., Schoen M.
Isotropic–nematic phase transitions in confined mesogenic fluids. The role of substrate anchoring. Soft Matter 2010; 6: 1898–1909.
Schoen Greschek M., Schoen M.
Frustration of nanoconfined liquid crystals due to hybrid substrate anchoring. Soft Matter 2010; 19:
4931–4941.
Lange2002Eur
Lange H., Schmid F. An anchoring transition at surfaces with grafted liquid–crystalline chain molecules.
Eur. Phys. J. E 2002; 7: 175–182.
Sluckin1
Teixeira P. I. C., Sluckin T. J. Microscopic theory of anchoring transitions at the surfaces of pure liquid crystals and their mixtures. I. The Fowler approximation. J. Chem. Phys. 1992; 97:1498–1509.
Sluckin2
Teixeira P. I. C., Sluckin T. J. Microscopic theory of anchoring transitions at the surfaces of pure liquid crystals and their mixtures. II. The effect of surface adsorption. J. Chem. Phys. 1992; 97: 1510–1519.
TeixeiraGayBerne
Teixeira P.I.C , Chrzanowska A., Wall G. D., Cleaver D. J.
Density functional theory of a Gay–Berne film between aligning walls.
Molecular Physics 2001; 99:889–897.
Cheung2004 Cheung D. L., Schmid F.
A density-functional theory study of the confined soft ellipsoid fluid. J. Chem. Phys. 2004;
120:9185–9191.
Malijewski2010 Malijewski A., Varga S.
Phase behaviour of parallel hard rods in confinement: An Onsager theory study. J. Phys. Cond. Mat. 2010; 22(17):175002–1–175002–12.
Moradi
Moradi M., Wheatley R. J., Avazpour A.
Density functional theory of liquid crystals and surface anchoring.
Physical Review E 2005;72:061706–1–061706–7.
Avazpour
Avazpour A., Avazpour L.
Density functional theory of liquid crystals and surface anchoring: Hard Gaussian overlap–sphere and hard Gaussian overlap–surface potentials. J. Chem. Phys. 2010;
133(24):244701–1–244701–8.
Allen Allen M. P.
Molecular simulation and theory of the isotropic–nematic interface.
J. Chem. Phys. 2000; 112: 5447–5453.
Chrzanowska2001 Chrzanowska A., Teixeira P. I. C., Eherentraut H., Cleaver D. J.
Ordering of hard particles between hard walls.
J. Phys.: Condens. Matter 2001; 13: 4715–4726.
Cleaver2001
Cleaver D.J., Teixeira P.I.C. Discontinuous structural
transition in a thin hybrid liquid crystal film. Chem. Phys. Lett. 2001; 338:1–6.
Deck Anquetil-Deck C., Cleaver D. J.,Teixeira P.I.C.
Ordering of Oblate Hard Particles between Hybrid Penetrable Walls.
J. Phys. Chem. 2020; 124:7709–7716.
TeixeiraBarmesDeck
Teixeira P.I.C., Barmes F., Anquetil-Deck C., Cleaver D. J. Simulation
and theory of hybrid aligned liquid crystal films. Phys Rev
E. 2009; 79:011709–1–011709–9.
Teixeira2004
Teixeira P.I.C., Cleaver D.J. Symmetric alignment of the nematic matrix between close penetrable colloidal particles. J. Phys. Cond. Matt. 2004; 16:S1969–S1980.
TeixeiraBarmes
Teixeira P.I.C, Barmes F., Cleaver D. J., Symmetric alignment of the nematic matrix between close penetrable colloidal particles. J. of Phys. Cond. Matter 2004; 16:S1969–S1980.
Teix2016
Teixeira P. I. C. Nematic Liquid Crystal Order Reconstruction in
Ultraconfinement. From Density-Functional Theory. Liq. Cryst. 2016;
43: 1526–1535.
Heras2005 de Las Heras D., Velasco E., Mederos L.
Capillary Smectization and Layering in a Confined Liquid Crystal. Phys. Rev. Lett. 2005;
94(1):017801–1–017801–4.
Heras2006 de Las Heras D., Velasco E., Mederos L.
Capillary effects in a confined smectic phase of hard spherocylinders: Influence of particle elongation. Phys. Rev. E 2006; 74:011709–1–011709–12.
Velasco
Velasco E., Mederos L. A theory for the liquid–crystalline
phase behavior of the Gay–Berne model. J Chem Phys. 1981; 74:3316–3319.
1998;109:2361–2370.
Padilla Padilla P., and Velasco E. J. The isotropic–nematic transition for the hard Gaussian overlap fluid: Testing the decoupling approximation. Chem. Phys. 1997; 106: 10299–10310.
Miguel de Miguel E, del Rio M. E. The isotropic–nematic transition
in hard Gaussian overlap fluids. J. Chem. Phys. 2001; 115:9072–9083.
GayBerne Gay J. G., Berne B. J. Modification of the overlap potential to mimic a linear site–site potential. J. Chem. Phys. 1981; 74:3316–3319.
Parsons
Parsons J.D. Nematic ordering in a system of rods. Phys Rev A. 1979; 19:1225–1230.
Lee
Lee S. D. A numerical investigation of nematic ordering
based on a simple hard–rod model. J Chem Phys.
1997; 78:4972–4874.
ChrzanowskaMetoda Chrzanowska A. Application of Gaussian quadratures to
density functional (df) theories of confined liquid crystals. J Comput Phys. 2003; 191:265–281.
AgnesferroMetoda1 Chrzanowska A. Possible mechanisms of polar order in 2D systems of banana type liquid crystals. Ferroelectrics 2016; 495:43–52.
AgnesSmekMetoda2 Chrzanowska A. Computational aspects of the smectization process in liquid crystals: An example study of a perfectly aligned two-dimensional hard-boomerang system. Phys. Rev E 2017; 95:063316–1–063316–9.
Brown
Bryan–Brown G. P., Wood E. L., Sage I. C. Weak Surface
Anchoring of Liquid Crystals. Nature 1999; 399: 338–340.
Vandenbrouck
Vandenbrouck, F. Valignat, M. P., Cazabat A. M. Thin Nematic
Films: Metastability and Spinodal Dewetting. Phys. Rev. Lett. 1999; 82:
2693–2696.
|
http://arxiv.org/abs/2409.03038v1 | 20240904191649 | Characterizing the negative triangularity reactor core operating space with integrated modeling | [
"H. S. Wilson",
"A. O. Nelson",
"J. McClenaghan",
"P. Rodriguez-Fernandez",
"J. Parisi",
"C. Paz-Soldan"
] | physics.plasm-ph | [
"physics.plasm-ph"
] |
[
\begin@twocolumnfalse
^1 Department of Applied Physics and Applied Mathematics, Columbia University, New York, NY 10027, USA
^2 General Atomics, San Diego, CA 92121, USA
^3 Plasma Science and Fusion Center, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
^4 Princeton Plasma Physics Laboratory, Princeton, NJ 08540, USA
Characterizing the negative triangularity reactor core operating space with integrated modeling
H. S. Wilson^1,
A. O. Nelson^1,
J. McClenaghan^2,
P. Rodriguez-Fernandez^3
J. Parisi^4
and
C. Paz-Soldan^1
================================================================================================================
§ ABSTRACT
Negative triangularity (NT) has received renewed interest as a fusion reactor regime due to its beneficial power-handling properties, including low scrape-off layer power and a larger divertor wetted area that facilitates simple divertor integration. NT experiments have also demonstrated core performance on par with positive triangularity (PT) H-mode without edge-localized modes (ELMs), encouraging further study of an NT reactor core. In this work, we use integrated modeling to scope the operating space around two NT reactor strategies. The first is the high-field, compact fusion pilot plant concept MANTA (The MANTA Collaboration et al 2024 Plasma Phys. Control. Fusion 66 105006) and the second is a low field, high aspect ratio concept based on work by Medvedev et al (Medvedev et al 2015 Nucl. Fusion 55 063013). By integrating equilibrium, core transport, and edge ballooning instability models, we establish a range of operating points with less than 50 MW scrape-off layer power and fusion power comparable to positive triangularity (PT) H-mode reactor concepts. Heating and seeded impurities are leveraged to accomplish the same fusion performance and scrape-off layer exhaust power for various pressure edge boundary conditions. Scans over these pressure edge conditions accommodate any current uncertainty of the properties of the NT edge and show that the performance of an NT reactor will be extremely dependent on the edge pressure. The high-field case is found to enable lower scrape-off layer power because it is capable of reaching high fusion powers at a relatively compact size, which allows increased separatrix density without exceeding the Greenwald density limit. Adjustments in NT shaping exhibit small changes in fusion power, with an increase in fusion power density seen at weaker NT. Infinite-n ballooning instability models indicate that an NT reactor core can reach fusion powers comparable to leading PT H-mode reactor concepts while remaining ballooning-stable. Seeded krypton is leveraged to further lower scrape-off layer power since NT does not have a requirement to remain in H-mode while still maintaining high confinement. We contextualize the NT reactor operating space by comparing to popular PT H-mode reactor concepts, and find that NT exhibits competitive ELM-free performance with these concepts for a variety of edge conditions while maintaining relatively low scrape-off layer power.
\end@twocolumnfalse
]
§ INTRODUCTION
While great progress has been made in the fusion energy field toward power-plant relevant plasma performance and divertor technology, a major outstanding challenge remains: the coupling of a high-performance core to a realistic exhaust solution. This challenge has shifted the focus of some plasma core modeling efforts away from maximizing power output and toward optimizing power handling potential. Fusion pilot plant (FPP) tokamak concepts have been dominated by positive triangularity (PT) plasmas operating in high confinement mode (H-mode). H-mode is accessed when a PT plasma is given sufficient heating and fueling and is characterized by the formation of an edge transport barrier with high pressure gradients called a pedestal <cit.>. Given its higher confinement, H-mode is generally seen as a desirable regime for an FPP. However, the power through the scrape-off-layer PSOL must be above the L-H mode transition power PLH estimated by scaling laws <cit.> to sustain H-mode. In a PT H-mode reactor-class device, this leads to heat fluxes that will be difficult for plasma facing components to sustain without an advanced divertor <cit.>. Even if we assume we can advance divertor and material technology to sustain reactor-level H-mode heat loads, H-mode bears yet another challenge: it is accompanied by edge localized modes (ELMs) <cit.>. ELMs are instabilities that can result in large energy fluences to plasma facing materials if not mitigated in some way <cit.>. As power, current, and magnetic field are increased to reactor-relevant levels, the machine damage from ELMs is likely to be intolerable <cit.>.
There are multiple regimes that exhibit higher confinement than the traditional low-confinement mode (L-mode) with smaller ELMs than H-mode or that avoid them all together. These include I-mode, QH-mode, EDA H-mode, and quasi-continuous exhaust (QCE), among others <cit.>. While PT no-ELM or small-ELM regimes are better than PT H-mode for device longevity, they often have sensitive access conditions depending on the heating and fueling scheme or do not yet provide sufficient fusion performance improvement over L-mode to support a realistic FPP design <cit.>.
Recently, the negative triangularity (NT) regime has resurfaced as a potential solution to the core-edge integration challenge in a reactor-class tokamak <cit.>. The “upper" and “lower" triangularities of a toroidal plasma are defined as δu,l = (Rgeo-Ru,l)/a, where Rgeo is the geometric major radius, Ru is the major radius at the highest point of the separatrix, Rl is the major radius at the lowest point of the separatrix, and a is the plasma minor radius. The average triangularity δ is the mean of the upper and lower triangularities. Because NT plasmas have Ru,l>Rgeo, NT x-points are located at a larger major radius than PT x-points. This allows for more space for a divertor and a larger divertor-wetted area, both of which are beneficial from a power-handling perspective <cit.>. Further, the NT regime has been shown to be ELM-free as long as the triangularity is sufficiently negative, even in cases where PSOL exceeds PLH by a significant margin <cit.>.
Importantly, experiments have shown that a plasma with NT shaping exhibits improved confinement over PT L-mode plasmas with otherwise similar parameters (current, magnetic field, density, and auxiliary heating) in both DIII-D <cit.> and TCV <cit.>. This improved confinement is attained without entering H-mode, so PSOL is not required to be greater than PLH as it would be in PT H-mode, which allows for the use of techniques like seeding noble gas impurities in NT to further lower PSOL while maintaining plasma performance <cit.>. Recent experiments on DIII-D have also extended the observed operating space in NT to reactor-relevant levels in non-dimensional parameters <cit.>. In a diverted configuration, DIII-D NT plasmas simultaneously exhibited βN > 3, fGr > 1, and q95 < 3 with H98y2 > 1 where βN is the normalized plasma beta, fGr is the Greenwald fraction <cit.>, q95 is the safety factor at ψN = 0.95, and H98y2 is normalized confinement time from the ITBP H98y2 energy confinement scaling law <cit.>. Additionally, recent gyrokinetic simulation work has found the reduction in turbulent transport in NT to be independent of machine size <cit.>, further encouraging the use of NT in a reactor-class device.
While a few NT FPP designs have been proposed at varying levels of fidelity <cit.>, these studies primarily focused on one design point and the performance trade-offs between various input parameters have not yet been fully established. To provide greater context for these trade-offs, we evaluate the performance of a reactor-class NT tokamak around two published operating points: the MANTA design <cit.> and the larger design from Medvedev et al <cit.>. We accomplish this by using the STEP code <cit.>, which facilitates predictive integrated modeling through easy and self-consistent data transfer between various equilibrium, stability, and transport codes. The two NT design points studied in this work differ most notably in size, magnetic field, and current, as described in section <ref>. Specifically, we investigate changes to Pfus and PSOL that result from changes in temperature and density profiles, auxiliary power Paux, seeded impurity fraction fimp, triangularity δ, and major radius Rmaj. The core effects from changes in toroidal magnetic field Bt, volume, and plasma current IP will be investigated through comparison of the smaller volume, higher magnetic field, lower current MANTA design and the larger volume, lower magnetic field, higher current Medvedev design. Of particular interest to us is the effect of the edge pressure boundary condition on fusion performance in NT, as a full characterization of the NT edge is currently absent from the literature.
In section <ref>, we introduce the high-field (MANTA-like <cit.>) and high-volume (Medvedev-like <cit.>) base cases around which we analyze NT reactor performance. We describe the integrated modeling workflow used with the STEP code <cit.> for the majority of simulations in this work. The density profile changes from a reactor-relevant particle source, i.e., a particle source localized toward the plasma edge, are investigated. We determine that the Angioni scaling <cit.> predicts a density peaking similar to that which would be evolved to in TGYRO from a near-edge particle source. Thus we implement a psuedo-evolving scheme for density for simplicity in subsequent scans, as described in section <ref>. In section <ref>, we discuss Pfus density changes from δ and Rmaj scans in a high-field core, and find that geometry is not as leveraging as the electron pressure at ρ = 0.8 pe,0.8. In section <ref>, we discuss the NT edge and its present uncertainty. Due to the lack of a physics-based predictive model of an NT edge, we scan various temperature and density edge boundary conditions (Te,0.8 and ne,0.8, respectively) and find that Pfus and PSOL are both highly dependent on both Te,0.8 and ne,0.8. We use infinite-n ballooning stability codes on the region beyond TGYRO evolution (ρ = 0.8 to ρ = 1.0) to determine that a H98y2≈ 1 (confinement is what would be expected from a PT H-mode plasma with similar parameters) and fGr≈ 1 high-field NT operating point is likely feasible from a ballooning stability standpoint for a variety of potential pedestal widths. In section <ref>, we evaluate the impact of Paux on Te,0.8, and find that a relatively high temperature boundary condition is required for sufficient Pfus and cannot be overcome by additional Paux. We compare the performance of NT reactor-like core scenarios at various T,e,0.8 and Paux to other published FPP concepts. In section <ref>, we determine that at levels of intrinsic impurities (helium, tungsten) assumed by other FPP designs, there is minimal effect on fusion power when compared to the effect of seeded impurities utilized in this work (krypton). Including krypton at various concentrations results in a nearly linear downward trend in PSOL from increased impurity fraction and a potential for using impurity fraction to optimize Pfus.
§ SIMULATION SETUP AND METHODS
§.§ Establishing a high-volume and a high-field NT reactor base case
There are currently two main strategies to reach reactor-relevant performance in tokamaks: the high-volume approach and the high-field approach.
For PT H-mode, two notable FPP designs utilizing the high-volume approach are ARIES-ACT2 <cit.> and EU-DEMO <cit.>. Meanwhile, the high-field approach is well-represented by ARC-class devices <cit.> and generally by the SPARC project <cit.>. For reference, basic parameters describing these concepts are displayed in table <ref>. In this work, we initialize a base case for both strategies applied to NT, using work by Medvedev et al <cit.> and the MANTA collaboration et al <cit.> as starting points for the high-volume and high-field strategies, respectively. This enables us to evaluate advantages and disadvantages of both strategies in the NT operating space.
MANTA (Modular, Adjustable, Negative Triangularity ARC) is a high-field (Bt≈ 11 T) NT FPP design. It is compliant with requirements laid out in the National Academy of Science, Engineering and Medicine’s (NASEM) report “Bringing Fusion to the U.S. Grid” <cit.>, made possible in part by utilizing a FLiBe liquid immersion blanket, demountable HTS magnet joints, seeded krypton, and an NT core that exhibits sufficiently high confinement without ELMs. The simple divertor design of MANTA requires that PSOL be below 40 MW for a separatrix density of 0.9 × 10^20/m^3 <cit.>. Fusion power is additionally constrained to be within 400-500 MW when using 40 MW of auxiliary heating to meet the NASEM net electric goal of ≥ 50 MWe <cit.>.
Compared to MANTA, the design outlined in Medvedev et al. is a lower field (B_t≈ 6 T) larger major radius (R_maj≈ 7 m) NT tokamak <cit.>. The primary focus of <cit.> was the MHD stability of a theoretical NT reactor-class design. As such, only the pressure profile is reported in <cit.>; the density and temperature profiles are not explicitly constrained. To facilitate a direct comparison of a high-volume NT case with a high-field NT case, we use the PRO-create module in OMFIT <cit.> to initialize similar profiles in the high-volume case to those in MANTA. Due to the high volume, the separatrix density had to be lowered significantly to avoid exceeding the Greenwald limit <cit.>, and consequently the ne,0.8 scans performed in section <ref> are over lower values than in the high-field case.
All scans in this work are done around either the high-volume or a high-field operating points with H98y2≈ 1 and fGr≈ 1 outlined in table <ref>. Parameters taken directly from references <cit.> and <cit.> are in bold. Figure <ref> shows the equilibrium cross sections of both reference cases in R and Z coordinates. As will be discussed in section <ref>, there is significant uncertainty in predicting the edge condition for an NT FPP. To partially accommodate this uncertainty, we restrict the edge of our high-volume and high-field base cases via upper bounds on the normalized parameters H98y2 and fGr because they are both largely affected by the edge pressure condition in any plasma. Reactor-class fusion devices may be able to operate at fGr>1 because they will exhibit higher power densities than current devices <cit.>. Supporting this idea, non-ELMing NT plasmas have accessed H98y2 > 1 simultaneously with fGr > 1 on DIII-D <cit.>. For these reasons, we chose to establish the high-volume and high-field base cases with normalized confinement time and density H_98y2≈ 1 and fGr≈ 1, respectively. Obtaining H98y2≈ 1 and fGr≈ 1 is accomplished by varying the temperature and density at ρ = 0.8 until a converged solution is found, where the method to find a converged solution is described below in subsection <ref>.
§.§ Integrated modeling with the STEP code
The integrated modeling in this work was performed with the STEP (Stability, Transport, Edge, Pedestal) code <cit.>. STEP enables self-consistent iteration between various OMFIT <cit.> modules. We utilize CHEASE <cit.> for equilibrium calculations, TGYRO <cit.> with TGLF <cit.> for transport, and BALOO <cit.> for infinite-n ballooning instability. It is of note that bootstrap current was not included in the simulations in this work except in the discussion of ballooning stability in the edge in subsection <ref>, but the inclusion of bootstrap current is not expected to alter the results of the core significantly. TORIC <cit.> was used on the MANTA core scenario to provide heat deposition profiles which were passed to the STEP workflow through the CHEF <cit.> module. A diagram of this workflow is shown in figure <ref>. TORIC was not used to solve for heating in the high-volume case. Heating for the high-volume case was instead copied from the heat deposition profiles solved for in MANTA and scaled as needed. TGYRO is a physics-based transport solver that uses NEO for neoclassical transport calculations and TGLF for turbulent transport calculations with moderate computational cost.
The modules mentioned thus far are primarily made for and/or trained on PT plasmas, and as such may not capture all NT effects. While the geometry change is taken into account in terms of surface area and volume, we note that TGLF is not a full gyrokinetic model, so there may be gyrokinetic effects of NT shaping that are not accounted for. However, gyrokinetic analysis done in other work has implied that the improved confinement of NT is likely due to gradients beyond ρ = 0.8 <cit.>, where TGYRO evolution does not extend to in this work. Gradients in this region are subject to stronger triangularity shaping, as toroidal plasmas become more circular farther from the edge. For example, in the high-field base case, triangularity increases from about -0.5 to -0.3 from ρ = 1.0 to ρ = 0.8. Even so, reference <cit.> found that even though the effect is stronger at high radii, NT exhibits reduced transport over PT at low radii as well. Additionally, TGYRO/TGLF has been shown to reliably recreate DIII-D NT shots with various saturation rules <cit.>, increasing the confidence with which we can apply these models to a reactor concept. However, we primarily use the SAT-2 saturation rule because it has been shown to better match experiment than SAT-0 at high powers and includes geometry effects <cit.>. Unless otherwise stated, all simulations in this work evolve the temperature profiles from ρ = 0.8 to ρ = 0 with TGYRO/TGLF, using the ICRH heating profiles that were simulated for MANTA in TORIC with CQL3D<cit.> (with scaling as needed) and holding all density profiles constant with electron density profiles at the Angioni peaking pkAngioni <cit.> and impurity density profiles scaled with electron density. Krypton is also included at a fraction of 0.001 as a seeded impurity with an otherwise 50/50 D/T fuel mix in all simulations unless otherwise noted. The use of impurities is elaborated on in section <ref>.
Only simulations fully converged in TGYRO are shown in this work. The definition of “full" convergence for purposes of this work is described in <ref>.
§.§ Density psuedo-evolution using density peaking scaling law
Most currently operating tokamaks are fueled using neutral beam injection (NBI). NBI allows on-axis particle sourcing, which can increase density peaking and overall aides the tokamak's performance <cit.>. However, due to the relatively large size and high-field of a reactor-class tokamak, fueling by NBI may not be practical in an FPP <cit.>. Instead, the particle sources in a reactor are likely to be confined to the edge region, outside of ρ = 0.8.
To assess the effect on density peaking pk of a particle source in this region, we scan a Gaussian electron source from ρ = 0.7 to ρ = 0.9 in a MANTA-like core scenario. In these scans, TGYRO/TGLF with SAT-0 converges the density profile to one that has no more than a 2% deviation from the peaking predicted by the Angioni scaling <cit.> pkAngioni, as shown in figure <ref>. SAT-0 was used in this case because it is known to be easier to converge particle flux and heat flux simultaneously. Given the results of our particle source scans we suspect that TGYRO/TGLF would predict a similar peaking as is expected from the Angioni scaling. Though it is a challenge to converge particle flux and heat flux simultaneously with SAT-2, we chose to use SAT-2 in the remainder of this work because it includes additional geometry effects that SAT-0 and SAT-1 do not. We therefore use the pkAngioni prediction for density and TGYRO/TGLF with SAT-2 for temperature profile prediction in all subsequent parameter scans for ease. This density peaking prediction is dependent on collisionality, NBI source, and normalized plasma pressure β. As both the collisionality and β are dependent on temperature, the peaking prediction changes as the temperature profile evolves in TGYRO. As a result, we pseudo-evolve the density between iterations by adjusting the core density after running TGYRO/TGLF and repeating until we find a converged solution with density peaking within the Angioni prediction. The reference “Angioni" profile in figure <ref> is that which was predicted using this procedure.
It is of note that the Angioni scaling only uses a dataset of AUG and JET H-mode observations. However, it was found that collisionality was the most statistically significant parameter in the analysis <cit.>. The normalized collisionality of MANTA is 0.4 at ρ = 0.9 and drops monotonically to 0.02 in the core. Note that the collisionality is high in the edge, which is not allowed in PT H-mode reactor core scenarios. However, NT does not have the same current-limiting pedestal demands as PT H-mode <cit.>.
§ IMPACT OF THE NT PRESSURE BOUNDARY CONDITION VARIATION ON FUSION PERFORMANCE
A significant remaining uncertainty in NT performance prediction is establishing the proper edge pressure condition. This is because the NT edge region is unique in that it is not a true “L-mode" or “H-mode" edge. Experiments have shown that NT can have steeper gradients than PT L-mode in the region outside of ψ_N = 0.9, forming a small “pedestal" while remaining ELM-free <cit.>. However, they are still often able to recover the same pressure as PT H-mode in the core <cit.>. Because the NT edge has yet to be characterized to the extent of the PT H-mode edge, which still itself retains significant uncertainty during extrapolations to an FPP, there is more uncertainty around what pressures and pressure gradients could potentially be obtained in an NT reactor edge.
To capture effects related to the edge boundary condition, we employ a “brute force" characterization of the edge in both the high-field and high-volume configurations. This characterization is performed by scanning over four electron temperature and four electron density boundary conditions at ρ = 0.8, for a total of 16 simulations. We use the psuedo-density evolving method described in section <ref> and evolve temperature from ρ = 0 to ρ = 0.8 with TGYRO/TGLF and SAT-2. The resultant electron density and converged temperature profiles are shown in figure <ref> with boundary condition values shown by the dotted gray lines. These values can also be seen from the x and y axis in figure <ref>. In figure <ref> and <ref>, H98y2 is given by the colorbar. The green region in figure <ref> is that which was evolved in TGYRO/TGLF. Note that as the edge value increases in both electron temperature and density, H98y2 increases as well. The highest pressure boundary condition for the high-field configuration tested in this work exhibited a confinement time 20% over that expected from the τ98y2 scaling <cit.>, while the highest for the high-volume configuration was 5% below. Attaining H98y2 > 1 for both cases is difficult at low pressure boundary conditions.
In both the high-field and high-volume cases, fusion performance is seen to depend heavily on the pressure boundary condition at ρ = 0.8. This can be seen in figures <ref> and <ref> for the high-field and high-volume cases, respectively. The same conclusion for high-field was drawn in reference <cit.> from high fidelity modeling of PT L-mode operation in SPARC. Investigation of the edge physics in tokamak plasmas requires further work in both cases.
In both figure <ref> and figure <ref>, the top plot shows Pfus increasing with increased electron density at ρ = 0.8 (ne,0.8) for three distinct temperature values at ρ = 0.8 (Te,0.8). Note that Pfus also increases with increased Te,0.8. The bottom plot in both figures shows the same trend for PSOL for both cases except for at the lowest temperature in the high-volume case (blue line in figure <ref>). All points shown are converged using the psuedo-evolving scheme for density and TYGRO/TGLF for temperature as described in section <ref>. The green circled points in figure <ref> and figure <ref> indicate simulations in which the Greenwald fraction (calculated with volume-averaged density) exceeded unity. The red circled points in figure <ref> are those in which the infinite-n ballooning stability limit was exceeded in the region from ρ = 0.8 to ρ = 1.0 for a pedestal width of 0.1 in ψN. The scans over ne,0.8 are at much smaller values for the high-volume case than for the high-field case due to exceeding the Greenwald limit at higher densities. This results in higher PSOL for the high-volume case as well, due to the lower separatrix density required. Therefore, a significant benefit of the high-field case from a power handling standpoint is the ability to employ higher separatrix density over higher volume cases without exceeding the Greenwald limit. This is even with the high-volume case employing higher IP, because the minor radius is more than twice that of the high-field case, which reduces the Greenwald limit significantly. Note also that the high-field case displays higher Pfus, but also converged with higher Te,0.8 than the high-volume case. This is once again likely attributable to the higher ne,0.8 employed in the high-field case. The mutual dependence of edge temperature and density and their significant effect on Pfus and PSOL motivates additional work in characterizing the NT edge boundary condition in future experiments and modeling.
§.§ Ballooning stability in the region outside of ρ = 0.8
In addition to the impact of pressure boundary condition on core fusion performance, another uncertainty related the edge in an NT FPP is the specific conditions under which an NT plasma remains ELM-free. The leading experimental explanation of ELM-free performance at significant negative triangularity is the closure of the second infinite-n ballooning stability region, which restricts pressure gradient growth <cit.>. In these experiments, where pressure gradients are confined to the first stability region, no ELMs are observed. However, it is of note that DIII-D NT experiments have also suggested that there is a gradient limiting mechanism that precedes ballooning instability <cit.>. Thus ballooning instability is likely only sufficient as an upper bound on the pressure gradient; it is possible that the pressure gradient will be even lower, which would lower global performance. This further justifies the scans of the edge pressure boundary condition performed earlier in this section. In this work, we use ballooning stability as a proxy for access to the ELM-free state, assuming that if the normalized pressure gradient remains in the first stability region the plasma will not generate ELMs, as suggested by reference <cit.>.
Gradients at the core-edge boundary (ρ = 0.8) in this work were maintained well below the infinite-n ballooning stability limit in all scans.
An example of the infinite-n ballooning stability calculated by BALOO is given in figure <ref> for the high-field case with fGr≈ 1 and H98y2≈ 1 and a pedestal width of Δped = 0.1. Here the red region indicates the ballooning unstable region while the normalized pressure gradient and the total pressure in the edge are shown by the blue profiles in figures <ref> and <ref>, respectively. Since edge profile prediction is outside the scope of this work, we extrapolate from the end of the TGYRO evolution region (ρ=0.8) to ρ = 1 with an H-mode-like pedestal shape. In figure <ref> the pressure gradient remains in the first stability region even at the steepest point. Similar analysis of the high-volume case reveals a significantly larger gap between the first stability limit and the normalized pressure gradient, as the edge is instead limited by the imposed f_Gr=1 constraint. Should Greenwald fractions greater than unity be achievable in an NT FPP, it may be possible to raise the edge pressure further in machines with larger Rmaj before destabilizing ideal ballooning modes <cit.>.
For ease of comparison with calculations presented in <cit.>, the bootstrap current is omitted from figure <ref>. However, we note that inclusion of this effect can impact the local magnetic shear in regions of strong pressure gradients, potentially leading to a reduction of the maximum ballooning-stable pressure gradient and adding uncertainty dependent on the choice of bootstrap model. As such, results presented in this work should be treated as a theoretical upper limit on the edge pressure gradient rather than a precise prediction, as mentioned above. The reduction of the maximum achievable edge pressure gradient with the inclusion of bootstrap current is less pronounced at larger Rmaj, again suggesting that an increase in aspect ratio could be used to recover edge performance that may otherwise be limited by ballooning stability <cit.>; the uncertainty in ballooning stability due to bootstrap current is more important at low aspect ratio.
Beyond the conventional presentation of ballooning stability presented in figure <ref>, it can be informative to examine a scan of the pedestal width and height for each given case. This is especially true in NT scenarios where the pedestal is not well-modeled using H-mode pedestal predictions like EPED <cit.> and a physics-based predictive model capable of describing the pedestal width and height has yet to be developed. In figure <ref>, we inspect the stability of potential NT edge pressure gradients resulting from various pedestal widths in the region from ρ = 0.8 to ρ = 1.0 by running gk_ped <cit.> on the high-field case. The gk_ped code is a linear gyrokinetic threshold model that generates self-consistent equilibria (including the bootstrap current, calculated with the analytical formulae from <cit.>) that vary in pedestal width, pedestal density, and pedestal temperature and then solves for the ballooning critical pedestal via the BALOO stability formalism at each point. Here we define the ballooning critical pedestal as the profile form that achieves the highest pressure gradient at a particular pedestal width before becoming ideal ballooning unstable. In figure <ref>, which shows the calculations without the inclusion of the bootstrap current, the ballooning critical pedestal is described by the relationship
Δped = 0.35βθ,ped^1.03,
where Δped is the pedestal width and βθ,ped is the normalized poloidal pedestal pressure. The color bar in figure <ref> represents the fraction of radial locations of the pedestal half-width that are ballooning unstable. We note that, for the high-volume cases presented, the modeled scenarios presented in this work feature pedestals that lie below the ballooning critical pedestal calculated by gk_ped.
Equation <ref> can be used to describe an upper ballooning-stable limit on the NT pedestal height as a function of pedestal width, as characterized by the traditional H-mode-like pedestal shape. However, as seen in the comparison to figure <ref>, which includes an analytic model for the bootstrap current in the edge region, this ballooning critical pedestal may over-predict the edge gradient in NT FPPs. This discrepancy is potentially compounded by observations on DIII-D that the actual edge gradient in NT plasmas lies somewhere below the infinite-n stability limit <cit.>, suggesting that an additional model is required for accurate prediction of the NT edge profile. However, we note that the location of high-field equilibrium point in the instability region in figure <ref> does not invalidate the value of pressure employed at ρ = 0.8 for the core profile modeling presented in the work. Indeed, it is possible to achieve a similar pedestal height as in figure <ref> while remaining stable to ballooning modes by going to a larger pedestal width. This highlights the need to develop a constraint for the NT pedestal width similar to the EPED model in H-mode scenarios, as the ballooning stability itself cannot fully constrain a pressure boundary condition at ρ = 0.8.
Because the region between ρ = 0.8 and ρ = 1.0 is beyond the TGYRO evolution in this work, any further analysis in this region is out of scope. However, figures <ref> and <ref> suggest that NT boundary condition at ρ = 0.8 may reach high pressures and pressure gradients, beyond those expected in a PT L-mode, without becoming ballooning unstable. This is supported by NT experiments on DIII-D that have observed increased edge pressure and pressure gradients in the NT edge without ELMs <cit.>, though we note that edge transport barrier mechanics and turbulence suppression remain an open area of research in NT. Figure <ref> illustrates that even in a high-field case with H98y2≈ 1 and fGr≈ 1, there are many ballooning-stable pedestal shapes that may be possible in NT. In particular, it is possible to achieve the same temperature and density boundary conditions at ρ=0.8 with reduced pedestal gradients by increasing the width of the NT pedestal, which is not constrained in this work. Thus the profile shown in <ref> is just one of many possibilities of an NT pedestal shape given the pressure at ρ = 0.8 and we encourage further experimental endeavours to characterize this region. In particular, physics-based constraints on the pedestal width or on the functional form of the pedestal would be valuable for improved predictive capabilities for NT FPPs, as they would enable a significant reduction in the parameter space available for stability codes like gk_ped.
§ RELATIVE EFFECT OF TRIANGULARITY AND MAJOR RADIUS ON FUSION PERFORMANCE
The importance of the boundary pressure at ρ = 0.8 on plasma performance is also seen when exploring changes in fusion performance due to geometry. While scanning triangularity and major radius, we found that changes in fusion power density due to Rmaj did not dominate over changes due to boundary electron pressure pe,0.8, as will be shown.
The motivation for studying a high-volume case is that Pfus increases with increasing volume <cit.>, so larger R_maj can be employed for lower field devices to achieve performance comparable with more compact high-field devices. In a high-field device like MANTA, compactness is prioritized because size is expected to be a major cost driver. However, there is an additional benefit of large R_maj in that it allows for a larger central solenoid, enabling increased flux swings and longer pulse lengths. Thus R_maj is an import parameter to optimize for FPP performance. In this work, we do not discuss the potential engineering benefits of increased Rmaj in an NT FPP, only the effect on transport and core performance.
Volume generally decreases with increasingly negative triangularity, which affects fusion power output. NT has shown robust avoidance of ELMs when δ < δ_crit in experiment, where δ_crit is a critical triangularity that is device-dependent, around δ_crit∼-0.15 on DIII-D <cit.>. This δ_crit is unknown experimentally in a MANTA-like device, so it is safer to assume stronger negative triangularity to ensure the plasma is robustly ELM-avoidant. However, plasmas with stronger NT are more vertically unstable, so δ is also a parameter ripe for optimization <cit.>.
Interestingly, volume does not directly correlate with negative triangularity in these scans, but peaks at around δ = -0.3. This is likely due to squareness not being held constant across all scans. We have plotted the last closed flux surface of the equilibrium shapes scanned in this section in figure <ref> for reference. Each color is at a different major radius while the transparency of the contours increases with increasing δ.
To illustrate the impact of Rmaj and δ on fusion performance in an NT scenario, the fusion power density is plotted against δ in figure <ref> with each distinctly colored line representing a different Rmaj. We do not focus on the obvious increase in Pfus with volume, instead investigating fusion power density to discover any underlying transport effects attributable to the change in shape. The dotted lines contain simulation points in which equilibria were initialized with the same Bt, IP, Paux, target aminor, temperature profiles, and density profiles, with only δ and Rmaj changing between each simulation. Note that this does lead to changes in magnetic shear and safety factor q as well, so there is potential performance optimization to be done at distinct δ and Rmaj owing to global stability considerations. For example, q_95 = 2 for the Rmaj = 6 m cases here, which is the lower stability limit. Additional scans that could increase IP to maintain q_95 constant at each Rmaj could expand the Rmaj optimization picture and are left to future work and should be combined with dedicated MHD stability modeling. Also note that scans of the shaping parameters lead to greater variation between the equilibrium in each simulation than in the Te,0.8 and ne,0.8 scans presented in section <ref>, leading to increased variation in equilibrium parameters that are targeted as inputs in CHEASE. In any case, an increase in the fusion power density with less negative triangularity is evident. This suggests that the optimal triangularity for operation of a NT FPP may be that which is just negative enough to avoid ELMs - further decrease in δ may result in a general decrease in the fusion power density.
In figure <ref>, the electron pressure at ρ = 0.8 is plotted against δ, highlighting some variation in edge pressure within each δ scan. In all TGYRO simulations in this work, the edge pressure at ρ = 0.9 was fixed while the pressure from ρ = 0 to ρ = 0.8 was allowed to evolve until convergence was met, resulting in some variation in edge pressure at ρ = 0.8. We tested the role of edge pressure on these results by performing two additional scans over δ at constant Rmaj = 5 m. The first is shown by the solid blue triangle markers, with Te,0.9 increased by 1 keV with all other parameters the same as the corresponding dashed Rmaj = 5 m line in figure <ref>(a). The second is shown by the solid blue square markers, with Te,0.9 decreased by 1 keV. When decreasing Te,0.9, it was more difficult to converge TGYRO in the edge for these cases. The Te,0.9 + 1 keV and Te,0.9 - 1 keV scans resulted in fusion power density changing more drastically than the change from Rmaj = 4 m to Rmaj = 6 m, but still displayed the same increase in fusion power density from increasing triangularity. Thus edge pressure has a significantly more profound effect on fusion power density than Rmaj, though there is still the benefit of increased Pfus from the increased volume at higher Rmaj.
Though there is increased vertical stability at weaker triangularity <cit.> and increased Pfus density at weaker triangularity, operating below a critical triangularity δcrit will be necessary for ELM avoidance <cit.>. For a MANTA-like device δcrit cannot yet be experimentally verified, but it is clear that it would be beneficial to operate as close to δcrit as possible while remaining ELM-free.
§ THE EFFECT OF THE TEMPERATURE EDGE CONDITION ON HEATING REQUIREMENTS
Given the sensitivity of core performance on the temperature and density edge condition, establishing a set of reliable actuators and controls for these parameters is paramount to the successful design of an NT FPP. One potential path to control the temperature edge is to leverage auxiliary heating power Paux. For MANTA, the full wave code TORIC was used with CQL3D<cit.> to determine 1D power deposition profiles for both ions and electrons from ICRH heating such that the total Paux≈ 40 MW<cit.>. In Paux scans in this work, heating was not calculated self-consistently but instead was simply scaled from the MANTA power deposition profiles for both the high-field and high-volume cases. In figure <ref>, scans of Te,0.8 and Paux on the high-field and high-volume core are shown. The colorbar gives Pfus in mega-watts, and selected tokamak FPP operating points are indicated by blue stars. Their corresponding parameters are given in table <ref>.
It is clear in both cases that Pfus increases with Te,0.8. Though Pfus also increases with Paux, it is not as pronounced as the change due to Te,0.8. Note that we cannot claim the level of Paux that will be required to maintain a certain Te,0.8, as the relationship between Paux and Te,0.8 is ultimately governed by edge physics. Instead, the two are varied independently in figure <ref> to scope potential Paux and Te,0.8 combinations and their corresponding Pfus. The relatively weak dependence of Pfus on Paux suggests the possibility of higher gain solutions at a given Te,0.8 by going to lower Paux. We ultimately find that Te,0.8 is more important than Paux in determining fusion performance, which once again motivates further investigation of the physics governing the NT edge.
Similar solutions can be found in both the high-field and high-volume cases at various Paux, but are harder to converge at Paux < 10 MW and Te,0.8 < 5.5 keV in the high-field case, requiring higher Te,0.8 than the high-volume case to reach the same Pfus. Note that ARC and EU-DEMO, representing the high-field and high-volume path for PT H-mode FPPs, respectively, have approximately the same Te,0.8 and Paux values. However, ARC has a fusion power of 525 MW while EU-DEMO has a fusion power of 1800 MW, due predominantly to its larger volume. If we compare the two points in grey squares on figure <ref>, they are also at approximately the same Te,0.8 and Paux value, but exhibit Pfus of 786 MW for the high-field case and 940 MW for the high-volume case. While there are likely other parameters at play constituting the difference between ARC and EU-DEMO performance, NT appears to approximately follow a similar trend to PT H-mode between the high-field and high-volume approaches in this case. Note also that the zone in which the converged simulations lie also differs between the high-field and high-volume case in NT. This is due to the density difference between cases, with the high-volume case able to converge at a lower ne,0.8. Referring to table <ref>, ne,0.8 for the high-field case is more than three times that of the high-volume case. Thus, a higher Te,0.8 is required for power balance in TGYRO. The high-field case also displays higher sensitivity to Te,0.8 than the high-volume case.
The expected Te,0.8 and Paux needed for comparable Pfus to leading PT H-mode FPPs is comparable to that seen in these devices. ARC is the lowest shown here at Pfus = 525 MW and ARIES-ACT2 the highest at Pfus = 2600 MW, but note that ARIES-ACT2 has significantly higher Te,0.8 which we've seen to be very influential on Pfus.
§ IMPURITY ANALYSIS
Given the primary benefit of operating in NT is enhanced confinement without ELMs, we are interested in robustly avoiding H-mode, contrary to most other FPP concept regimes. Discarding the requirement to maintain P_SOL above the L-H power threshold grants freedom to use seeded impurities to radiate heat in the edge, lowering PSOL and thereby reducing divertor heat loads. Noble gas impurity seeding in PT L-mode experiments on DIII-D has shown enhanced confinement with low P_SOL <cit.>, similarly to NT. Employing both NT shaping and seeded impurities allows more control over Pfus and PSOL and potentially easier divertor integration than with NT alone. In a reactor, high-Z noble gas impurities such as krypton and xenon are expected to be most useful given their ionization temperatures, because they primarily cause radiation in the edge, decreasing PSOL significantly with minimal effect on Pfus.
In all simulations mentioned thus far in this work, we included only krypton impurities at a fraction of 0.001 unless otherwise noted. Note that all impurity density profiles in this work are set to scale with the electron density profile, which is a limitation of this work. These results are likely to be affected by impurity transport causing impurity density profiles to differ significantly from the electron density profile. Tungsten is the leading candidate for plasma facing component in FPPs and so tungsten ions will likely also be in a reactor-class plasma along with helium ash, so its effect should be more closely studied in future work on the subject.
To better assess the impacts of impurities on fusion performance in an NT FPP, we plot power density profiles from line radiation in the high-field base case with three distinct impurity combinations in figure <ref>. In this figure, the power density profiles of the additional line radiation from including krypton (Kr) and tungsten (W) is plotted by the dashed lines in context of the alpha power density (solid lines) and auxiliary power density (dotted line). Note that the auxiliary power is the same for all three impurity combinations. The dilution effect of adding helium at a fraction of 0.02 in a D, T, and Kr mix results in a decrease in Pfus of only 121 MW, or about 9% of Pfus in the D, T, and Kr only mix. The power density profiles of a D, T, Kr, and He mix are not shown in figure <ref> because they overlap the D + T + Kr + He + W profiles. Including tungsten at a fraction of 1.5× 10^-5 does not significantly affect radiated power, and the Pfus difference between the D, T, and Kr profile and the D, T, Kr, He, and W profile in figure <ref> is only 127 MW, or ∼10%. Note that the presence of tungsten will be inescapable in any reactor class device using tungsten as a plasma facing material (the leading metal candidate) and that it elicits serious concern for radiative collapse due to its high atomic number. For example, the tungsten fraction used in EU-DEMO modeling is 10^-5 <cit.> and in any reactor-class tokamak the upper limit on tungsten fraction is likely to be on the order of several 10^-5 <cit.>. H-mode has the benefit that ELMs flush impurities out of the core <cit.>, but the tungsten fraction limit is also set by maintaining PSOL > PL-H, a limit that NT does not need to adhere to. Additionally, ELMs can cause sputtering, leading to increased tungsten influx <cit.>. While impurity transport in NT is an area of ongoing research, preliminary analysis on diverted DIII-D NT experiments suggests rapid impurity transport. This is indicated by hollow impurity profiles and lower Zeff in NT plasmas than PT plasmas with similar confinement factors <cit.>.
In figure <ref>, we plot Pfus against krypton impurity fraction fKr in the high-field case. The circles plotted are at 40 MW of auxiliary power while the triangles are at 20 MW of auxiliary power. The color bar gives PSOL. Figure <ref> shows that the same downward trend in PSOL versus fKr change is seen at Paux = 40 MW and when Paux =20 MW. This is promising for prospective direct PSOL control using impurity seeding. However, figure <ref> also shows that a small change in fKr results in a large change in PSOL, and research into whether this level of impurity control in a reactor will be possible is ongoing<cit.>. Additionally, figure <ref> shows a peaked trend in Pfus, indicating potential for Pfus optimization via impurity fraction. This improvement in Pfus from certain levels of additional impurity seeding while simultaneously decreasing PSOL has been observed in experiment in NT on DIII-D <cit.> and encourages further study of this phenomenon for implementation in reactor modeling scenarios. For a given fKr, we also see that Pfus is higher when Paux = 40 MW than when Paux=20 MW, indicating potential for Pfus control via input power. At each auxiliary power, H98y2 remained approximately constant over the scan of fKr, indicating that improvement in confinement from impurities may overcome the decrease in Pfus from dilution.
While the inclusion of radiative impurities could lead to benefits in the core performance and in the power handling properties of NT FPPs, we note that we do not include in this work any decrease in the edge temperature resulting from significant radiation just inside of the separatrix. The impurities studied above were chosen to consolidate radiation in the core region <cit.>, but any drop in the temperature boundary condition will lead to a decrease in fusion power, as discussed in section <ref>. As such, proper characterization of the role of impurities with a full impurity transport code that extends out into the SOL is needed for more robust NT FPP design, and should be the subject of future work.
§ CONCLUSION
These studies demonstrate the feasibility and flexibility of the NT reactor concept. Even with edge pressure conditions low enough to be infinite-n ballooning stable, a variety of operating points exist in which FPP-relevant fusion power is possible (∼400-500 MW for a MANTA-like device with 40 MW input power <cit.>), with opportunities for increased fusion gain by decreasing input power, increasing fueling, or optimizing seeded impurity fraction. Due to the lack of a requirement to maintain H-mode, NT also grants the freedom to increase the seeded impurity fraction. This can increase radiation in the edge to bring scrape-off layer power down to acceptable levels for simple divertor integration (<40 MW for a MANTA-like device with a separatrix density of 0.9 × 10^20/m^3 <cit.>).
The performance of an NT reactor will be heavily dependent on the edge condition. This dependency is not unique to NT but is perhaps of greater consequence in NT designs than in PT designs due to the present limitations in modeling the NT edge. Changing the edge temperature by 1 keV was found to have a more profound effect on the fusion power density than changing the major radius by 2 meters in a high-field case, though we note that this is without adjusting scans to account for changes in global stabilization parameters such as q95. Additionally, we found the temperature at ρ = 0.8 to have a significantly larger effect on fusion power than auxiliary heating in both the high-field and high-volume cases, though the high-field case exhibited stronger dependence than the high-volume case. We reiterate that the density and temperature chosen for scans over other parameters in this work were to satisfy H98y2≈ 1 and fGr≈ 1 while meeting temperature convergence in TGYRO with density at the Angioni peaking. We have not explored the extent to which these edge parameters can be accessed, and rely heavily on experimental observations to inform this work.
Though the effect of triangularity on fusion power density is minimal, fusion power density is found to consistently increase with less negative triangularity, suggesting that the optimal triangularity for an NT FPP may be that which is just negative enough to sustain ELM-free operation. Targeting this minimum negative triangularity will also have the added benefit of reducing vertical instability concerns <cit.>. In this work, the core was modeled from ρ = 0 to ρ = 0.8, so future work should prioritize high fidelity transport modeling in the edge where the triangularity is strongest to identify any benefits attributable directly to the negative triangularity geometry, like that seen in reference <cit.>.
For a high-field case with fGr≈ 1 and H98y2≈ 1, we demonstrated a possible model for extrapolation to ρ = 1.0 that relates pedestal width to height based on the stability found from infinite-n ballooning models. Thus for a given edge pressure boundary condition, there are a variety of pedestal shapes that remain ballooning stable. No scans of the high-volume case exceeded the infinite-n ballooning stability boundary, so an increase in Rmaj could enable an increase in fusion power on two fronts: from the increased volume as well as the increased pressure boundary condition <cit.>.
The main advantages of NT come from the improved pressure edge condition over PT L-mode supported by experiment and the absence of a requirement to remain in H-mode allowing flexibility in impurity and heating requirements. We found that the temperature boundary condition and auxiliary power needed to reach fusion powers between 500-900 MW is below or on the order of representative PT H-mode FPP concepts. Future work to develop a physics-based model for the behavior of an NT reactor edge must be prioritized to more accurately predict fusion performance, though we have shown in this work that feasible core operation can be attained at a variety of edge conditions. To further evaluate the feasibility of integrating a given operating point with simple divertor operation, whole-device modeling like that done in reference <cit.> could illuminate the engineering trade-offs attributable to the potential plasma core trade-offs described here, such as pressure boundary condition, impurity content, geometry, and heating. Ultimately, there are likely a variety of core operating points for a NT tokamak FPP with competitive fusion power to leading PT H-mode concepts even at relatively high radiation, at low input power, and with a ballooning-stable pressure boundary and significantly reduced scrape-off layer power.
§ ACKNOWLEDGEMENTS
Most calculations performed in this study were completed through the OMFIT framework <cit.>, and the generated equilibria are available upon request. This work was supported by the U.S. Department of Energy, Office of Science, Office of Fusion Energy Sciences under Award DE-SC0022272.
§ REFERENCES
[heading=none]
§ FULL TGYRO CONVERGENCE
We define “full" convergence to be met when the residual between the total flux calculated in the TGYRO/TGLF model (ftot) and the target flux from power balance (ftar) for each point between ρ = 0.35 and ρ = 0.8 is less than or equal to 0.02. Here the residual is defined as
(ftot-ftar)^2/ftot^2+ftar^2.
Converging from a residual of 0.02 to a residual of 0.00 resulted in less than a 5% change in Pfus in a few representative cases, so pushing convergence past this point is not expected to significantly change the qualitative results presented in this work. Flux matching between ρ = 0.0 and ρ = 0.35 is challenging but has ultimately been shown to have a marginal effect on output fusion power due in part to the relatively small plasma volume in the core compared to the edge <cit.>.
|
http://arxiv.org/abs/2409.02162v1 | 20240903180000 | Generalized Symmetry Resolution of Entanglement in CFT for Twisted and Anyonic sectors | [
"Arpit Das",
"Javier Molina-Vilaplana",
"Pablo Saura-Bastida"
] | hep-th | [
"hep-th",
"cond-mat.stat-mech",
"cond-mat.str-el"
] |
shapes,arrows
decorations.markings
|
http://arxiv.org/abs/2409.02427v1 | 20240904041447 | Students' Experience of Cultural Differences Between Mathematics and Physics | [
"Jeffrey M. Rabin",
"Andrew Meyertholen",
"Brian Shotwell"
] | physics.ed-ph | [
"physics.ed-ph"
] |
Transfer-based Adversarial Poisoning Attacks
for Online (MIMO-)Deep Receviers
Kunze Wu, Weiheng Jiang, Dusit Niyato, Yinghuan Li, and Chuang Luo
September 9, 2024
=========================================================================================
§ ABSTRACT How students use mathematics in their physics classes has been studied extensively in the physics education literature. In addition to specific mathematical methods in specific physics contexts, possible effects of more general “cultural" differences between the two disciplines have also been explored. However, there has been little examination of students' own awareness and interpretation of these differences. We explore the undergraduate student experience of these “cultural" contrasts, focusing on how they impact learning and problem-solving. Through a qualitative study, including surveys and interviews with students double-majoring in mathematics and physics (or majoring in one and minoring in the other), we investigate students' awareness of distinct pedagogical approaches, mathematical justifications, and organization of concepts in mathematics versus physics classes. We find that students do recognize and navigate these “cultural" differences, often employing specific coping strategies. We identify specific themes from our data and comment on how students feel that these themes impact their learning. We suggest that increased faculty and student awareness of the identified differences in educational practice could facilitate knowledge transfer between mathematics and physics.
§ INTRODUCTION
“Philosophy is written in this grand book, the universe, which lies continually open to our gaze. But the book cannot be understood unless one first learns to comprehend the language and read the letters in which it is composed. It is written in the language of mathematics, and its characters are triangles, circles, and other geometric figures, without which it is humanly impossible to comprehend a single word of it." – Galileo, The Assayer, 1623
“Before we consider Galileo's demonstrations, it seems necessary to prove how far from the truth are those who wish to prove natural facts by means of mathematical reasoning, among whom, if I am not mistaken, is Galileo ... Therefore anyone who thinks he can prove natural properties with mathematical reasoning is simply demented, for the two sciences are very different." – Vincenzo di Grazia, 1613.
Since at least the time of Galileo it has been commonplace to observe that mathematics is the language of science, particularly physics.
Mathematical proficiency is necessary for students to understand physics deeply, and the physics education research literature has extensively investigated how they use their mathematical knowledge in physics <cit.>.
Much of this literature is highly specific in terms of both mathematical methods and physical applications, studying for example students' use of integration in electrostatics <cit.>, partial derivatives in thermodynamics <cit.>, or linear algebra in quantum mechanics <cit.>.
These studies can be conceptualized in terms of mathematical modeling, transfer of knowledge, and even conceptual blending.
Certainly, differences in how “the same" material is presented and used in physics versus mathematics courses can affect students' ability to transfer their knowledge of one to the other.
But there have also been more general studies of the relationship between physics and mathematics, for example the differences between the “dialects" of mathematics spoken in each field.
Redish and Kuo (2015) <cit.> describe a number of differences between these dialects, for example that “loading physical meaning onto symbols does work for physicists and leads to differences in how physicists and mathematicians interpret equations."
An anecdotal example is “Corinne's shibboleth" (attributed to Corinne Manogue <cit.>) which allegedly discriminates well between mathematics and physics faculty: if the function f(x,y) is defined as x^2+y^2, then what is f(r,θ)?
Physicists are said to answer r^2, while mathematicians answer r^2 + θ^2, showing that they interpret symbols and use function notation quite differently.
This literature suggests ways in which this language barrier may impact students' learning to use mathematics in physics contexts.
Redish (2021) <cit.> is an introduction to seven subsequent short articles he wrote to disseminate these ideas to practicing physics teachers, to support their students to “think about physics with math instead of just calculating."
Our conception of such “cultural" differences between mathematics and physics is quite broad, including not only these distinct dialects, but also how and to what extent mathematical claims are justified in each field, how approximations are used, how new concepts are introduced, and differences in pedagogy that may be common to instructors within each field but differ between them.
All these features go beyond specific topics like integration or electrostatics, but may affect students' ability to “transfer" or apply their mathematical knowledge in the physics context.
They form a sort of background to the teaching and learning of individual topics.
In addition to the work of Redish and collaborators, we have drawn on other papers addressing the cultural aspects of mathematics as applied to physics.
Uhden et al (2012) <cit.> distinguish between the technical skills of mathematics that are used for computation or derivation, and the structural skills that “are related to the capacity of employing mathematical knowledge for structuring physical situations."
These authors provide a model for students' use of mathematics in physics that recognizes structural skills in addition to procedural ones and recognizes that physical problems are not simply “translated into mathematics" as a naive modeling approach would suggest but rather that the math and the physics are inseparable at every stage.
Palmgren and Rasa (2022) <cit.> give examples of the roles of mathematics in physics drawn from the history of quantum mechanics.
They argue that math is much more than a tool for problem solving, rather “mathematics contributes to representations of physical information that in turn serve as a basis for further reasoning and modification."
To our knowledge, however, no research has investigated whether students themselves are aware of such “cultural" differences between physics and mathematics, or of how such differences may affect their own learning.
We planned our study to include students who were double majors at our home institution, thinking that they would have enough advanced course background in each subject to have experienced a variety of these differences.
To increase the number of participants we also included those majoring in one subject and minoring in the other.
We distributed a questionnaire to all participants, and after an initial analysis of the responses we selected a subgroup for individual interviews.
Our questions were quite broad and included the following areas: the role that students think mathematics plays in physics, differences in their experiences in math versus physics courses, stereotypes of mathematicians and physicists, how proof or justification differs in the two subjects, and whether mathematics has been a help or a hindrance in their understanding of physics.
Based on our own experience and prior research literature, we asked about specific mathematical topics that we expected to differ between the fields or create learning difficulties for students, for example, Fourier transforms, infinitesimals, linear algebra, and the Dirac delta function.
We expected, for example, that the common use of infinitesimal quantities such as dx or dW in physics would be an obstacle for students in applying calculus as they learned it in mathematics courses.
We also included Corinne's shibboleth as one question, since we have not seen published data on it.
Our research questions were the following:
* Are undergraduate double majors aware of “cultural" differences between mathematics and physics, and differences in the presentation and usage of mathematical content between their physics and math courses?
* What do they cite as major differences in “common" mathematical content?
* How do they perceive these differences as affecting their learning?
* What coping strategies do they employ?
* What specific classroom experiences do they cite?
Some previous empirical studies have used questionnaires and interviews to try to explore students' experiences along similar lines, but not as broadly as we have.
de Ataíde and Greca (2013) <cit.> asked students about the role of mathematics in physics (for example, do you experience difficulty in applying the concept of differential in problems of thermodynamics?), but their results are limited to exploring correlations between students' problem-solving strategies (categorized as Operational Mathematics, Conceptualization, and Mathematical Reasoning) and their epistemic views of math in physics (as Tool, Translator, or Structure).
de Winter and Airey (2022) <cit.> asked preservice physics teachers in England about the role of mathematics in physics at both university and school level.
Responses did show awareness of different treatments of “the same" subjects in math versus physics courses, but were vague as to examples.
Kapucu (2014) <cit.> asked Turkish preservice elementary math/science teachers about their conceptions of mathematics, physics, their relationship, and the learning of each, but it appears that their participants had only majored in these subjects in high school.
Responses were vague (for example, “physics is primarily based on calculations, so requires math") and only coarsely classified as “fragmented" versus “cohesive" and “lower level" versus “higher level".
Schermerhorn et al (2019) <cit.> were narrowly focused on students in a spins-first quantum mechanics course at three universities and asked whether the mathematics helped students understand the physics concepts, whether the reverse was true, and which was more challenging.
Some student comments resonate with those of our subjects, for example, “The physics makes the math easier to visualize and the math provides a base for physical intuition. For example, the properties of wave functions helped me understand Fourier transforms" (ibid., page 534).
§ DESIGN OF STUDY
We strived to make the survey questions unbiased — we did not want to show any preferences for math vs. physics classes or instructors, and we tried to formulate questions that would allow students to discuss pros and cons of their experiences with both subjects. After agreeing on the survey questions,[We give the survey questions and summaries of responses in the next section.] the authors sent out invitations to all math/physics double majors at our institution, and also to those students majoring in one subject and minoring in the other. We sent out 36 emails, offering each student a gift card to complete the ∼1hr survey. 24 students responded and were interested in participating in the study.
At this point, we numbered students roughly in the order they responded (numbered 1-24). In addition, the students were given a letter code to indicate their major combination: “D" for double majors, “M" for math majors with physics minors, and “P" for physics majors with math minors.[Although we did not select on the basis of class standing, all 20 students who completed the survey had JR/SR standing, and all 11 interviewees had SR standing. These interviewees were a mixture of 3rd-year students (who had enough credits for SR standing) and 4th-year students nearing graduation.]
The authors were able to collect 20 completed written responses to the surveys. After analyzing the surveys individually, we met to discuss the responses via a thematic analysis <cit.> of the surveys, flagging significant quotes and determining potential emergent themes.
We extended an interview invitation to those 13 students whose responses included experiences and opinions that we wanted to follow up on. Of these 13 students, 11 agreed to participate in an interview with one of the authors (with audio recorded).
Interviewed students were compensated with another gift card.
Interviews were designed <cit.> to be semi-structured, based on a general framework of questions that we asked each student, but we also followed up on specific statements made during the interview. Also, we followed up on statements made in the student's survey responses. If a student mentioned a specific mathematical topic in their written survey (see Section <ref>, Question 6), then we asked them in the interview to describe their learning experience in math and physics classes involving the topic. We split up the students randomly among the three authors and each author interviewed 3-4 students.
After completing the interviews, the authors transcribed each of their interviews using the audio recording. We created and shared summary notes and memos. We then met to discuss each interview in depth, highlighting quotes that stood out to us, or ideas that seemed to come up multiple times.[We also reviewed the survey answers after the interviews; we didn't find anything mentioned in the “non-interviewee" survey responses that was not also represented in the interviews.] After this, we were able to categorize the main results into four principal themes, which we discuss in detail in Sections <ref>-<ref>.
The following table summarizes students' participation at various stages of the study:
Stage #D #M #P Total Students
Request for participation 13 8 15 36
Surveys sent out via email 10 3 11 24
Surveys received/analyzed 8 2 10 20
Interview invitations extended 4 2 7 13
Interviews conducted/analyzed 4 1 6 11
§ SURVEY QUESTIONS AND RESULTS
The survey consisted of the following questions. A quick summary of the results directly follows each question.
There were no questions where there was a clear distinction of answers by major (other than question #1, of course). That is, there were no questions for which double majors responded significantly differently from physics majors with a math minor (or from math majors with a physics minor). However, our sample size is probably too small for this to be meaningful.
* What is your major (and minor, if any)?
(All students were double majors or one major and one minor in math and physics, by design.)
* If you chose one major/minor first, and added the second later, what was the reason you chose to add the second?
Most students said that this was done out of interest and/or that the minor subject complements the major subject.
* Suppose that f(x,y)=x^2+y^2. What is f(r,θ)? Do you think that either a mathematician or a physicist might give a different answer than you did? What answer, and why?
Only one student (D4) was able to identify that the answer could be r^2+θ^2 (the student provided both the “physicist" and the “mathematician" response). 17 others only gave the physicist's response, that the answer is r^2. Two students said f(r,θ) = r, with one adding, “A physicist would probably not be concerned with the square on the radius."
* Have you been dissatisfied with an explanation, derivation, or proof given in a physics class because you felt it was not mathematically precise or rigorous enough? Can you give examples?
15 out of 20 students cited at least some dissatisfaction, especially with Fourier analysis, Hilbert spaces, and differential equations. In addition, students did not enjoy the “hand-waviness" of mathematical manipulations and/or approximations made in physics classes. Others were satisfied with physics classes not spending too much time on the math.
* Have you had difficulty understanding mathematical techniques presented in a physics class because the presentation or notation differed from what you had seen in previous math classes? Can you give examples?
A great majority of students either said “no" or cited the difference in which variable was used to represent the polar angle in spherical coordinates (θ vs. ϕ) in physics vs. math courses. Nearly all of the students who cited the polar angle notational difference said that they got used to this.
* Have you found any of the following specific topics challenging to learn about or use because of differences in how they are presented in math versus physics? Conversely, are there instances in which your understanding of a topic was improved by having both a physics and a math perspective on it?
* The Dirac delta function (quantum mechanics, electromagnetism)?
* Infinitesimal quantities or vectors such as dx and d⃗s⃗, used extensively in physics but not so much in math?
* Concepts and terminology from linear algebra and matrix theory, such as basis, linear operator, eigenvalues?
* Calculations involving derivatives or integrals of functions?
* Calculations involving infinite series?
* Methods for solving or understanding differential equations?
* Fourier series and Fourier transforms?
* Dimensions or units of physical quantities, emphasized in physics but often ignored in math?
* 3D vector calculus (div/grad/curl, Stokes’ theorem, etc.)?
Students cited the Dirac delta function and Fourier analysis as topics they never saw presented in math classes; they had some difficulties with the topics as they were presented in physics. Several also mentioned that infinitesimals were not treated rigorously in math courses, but that they were used enough in physics that it would have been beneficial to have seen them more in math. For the most part, students said their understanding was improved by having seen 3D vector calculus and differential equations in both subjects, and nearly everyone agreed that their ability to calculate derivatives/integrals was improved by exposure in both subjects. Responses about linear algebra were mixed, with some saying that the math exposure was helpful for upper-division quantum mechanics, while others said that their math exposure was not sufficient. Not much was said about infinite series or dimensions/units.
* Have the different standards of proof/explanation/justification/evidence accepted in physics versus math classes had effects (positive or negative) on your learning, or your understanding of the material, in those classes?
(Numbers below indicate the number of responses in each category.)
(7) Yes: proofs and justifications are more rigorous in math classes than in physics classes, which has had a negative effect on my understanding in physics classes due to lack of clarity; (6) Yes: the two complement one another nicely, which has had a positive effect on my learning; (5) No: I recognize there are differences, but I don't think this has affected my learning overall. Other answers were either not clear or a blend of these.
* Along with the definitions, equations, laws, theorems, and so forth taught in physics and math courses, instructors and textbooks also try to convey intuitions about these topics (“how you should think about them or visualize them”) that are appropriate to each subject. Have you noticed differences between the intuitions taught or expected in math versus physics, and have any of these differences had effects (positive or negative) on your learning?
A few students said that they didn't notice a difference, but most said that intuition is more emphasized in physics classes. Students varied in which they preferred (math providing a better framework, but physics providing more concrete examples to visualize). A few mentioned that math and physics have different patterns of reasoning: math classes start fully general and only later provide examples or specific applications, while physics classes start with specific cases to build intuition, and then try to generalize.
* Have you experienced explicit comments from either physics or mathematics instructors that are critical of how some topics are presented in the other discipline? Can you give examples?
(8) Minor or only playful joking or minor comments; (12) No. Most examples involved physics instructors saying something to the effect of “mathematicians wouldn't like me doing this, but..." (not explicitly critical of the other discipline).
* Have you asked questions of your mathematics or physics instructors about topics from the other field, and been unsatisfied with their responses? Can you give examples?
(0) Yes, from math instructors; (7) Yes, from physics instructors; (13) No. Topics where the physics professor's response was unsatisfying: math behind special relativity (for example, Lorentz/Poincaré groups), various group and set theory concepts, expectation of an operator, special cases of integrals, bracket notation in quantum mechanics or probability, divergence and curl in electromagnetism.
* What suggestions would you give to physics or mathematics faculty about how to help students better understand material that overlaps or is common to both subject areas?
Suggestions to math professors: give real-world and physics applications.
Suggestions to physics professors: be more organized with the presentation of material, be more clear/precise/rigorous with the math.
Suggestions to both: be clear about differences in notation and conventions (for example, factors of 2π in Fourier transforms), provide resources to students to better understand math/physics connections.
* When the presentation of a topic in a lecture or a textbook is different between your math and physics classes, do you have strategies for reconciling them or coping with the differences?
(6) Use online resources; (5) Try to understand the differences; (5) Ask a professor; (3) Keep them separate; (3) Hasn’t been an issue;
(2) Give the math understanding priority.
§ SUMMARY OF FINDINGS IN SPECIFIC MATH TOPIC AREAS
Following is a brief summary of student comments about specific mathematical subject areas that we asked about. We will revisit many of them in the subsequent discussion of our major themes, providing interview quotes for support.
§.§ Infinitesimals
We anticipated that the common use of infinitesimal quantities in physics would conflict with students' prior understandings of calculus from their mathematics courses. Physicists happily work with infinitesimal amounts of charge dq or displacement dx whereas mathematics courses treat calculus using limits and often explicitly deny that infinitesimal or differential quantities are individually meaningful. However, our students did not identify this as a major obstacle. Some of them recognized physics infinitesimals as an indirect way to reason about limits or small finite increments, which could be rigorously reformulated in those terms if desired. Although they were not bothered by the supposed infinitesimal nature of these quantities, they did express some confusion over the rules for manipulating them, which they included in their catalog of mysterious physics “tricks". For example, one student realized that canceling two infinitesimals would solve their homework problem, but struggled to justify this manipulation. Others were unsure when differentials could be separated from derivatives and manipulated independently, or when it was permissible to change partial derivatives to total derivatives and vice-versa. Cancellations that look intuitive in single-variable calculus, for example in the Chain Rule, appear less so when partial derivatives are involved.
“I know that if I cancel them then [infinitesimals], then I would be able to get an answer pretty easily, but there's a reason that I was stuck on this problem for an hour and it wasn't that I didn't see that I could cancel them, it was that I kept looking at it and saying, well why can I do that?" (P1)
§.§ Fourier Analysis
Another topic that we expected the students to have observations on was Fourier analysis. This subject is not generally discussed in the introductory calculus/linear algebra/differential equations sequence but is crucial to an introductory physics curriculum. Fourier analysis may be encountered later in (non-required) mathematics courses such as partial differential equations or numerical methods. Our physics majors are introduced to the Fourier transform in the optics portion of the advanced first-year physics sequence but they find the treatment of it there too brief. Some were unfamiliar with the complex exponential function e^ikx and cited the differences between this notation and the alternative sines and cosines as a challenge. Some pointed out that the Fourier transform is defined as an improper integral but is rarely evaluated directly from that definition. Rather, “tricks" such as completing the square are employed. Many students mentioned their physics math methods course as being the first place they were given enough time to fully learn and appreciate Fourier analysis.
§.§ Delta functions
Another topic we specifically asked students about was the Dirac delta function. Like Fourier analysis the delta function is not discussed in the first-year math sequences (or perhaps anywhere in their math coursework). Most students were willing to accept it in the context of physics applications despite feeling that they did not understand it deeply as a mathematical object. They knew the rules for manipulating it and could do so despite wondering how these were justified. One student wondered how a δ(x) can differ from δ(x) if both are infinite at x=0 and zero elsewhere. A mathematics major who had seen the delta function in quantum mechanics took the view that since quantum mechanics violates our macroscopic intuitions anyway, it makes sense that its mathematical tools such as the delta function would also violate them. Several students mentioned the integral form of the delta function, δ(x) = 1/2π∫^∞_-∞dk e^ikx, specifically as something that seemed “like magic."
§.§ Corinne’s shibboleth
Dray & Manogue (2002) <cit.> propose that the following question will distinguish physicists from mathematicians:
0.85
One of your colleagues is measuring the temperature of a plate of metal placed above an outlet pipe that emits cool air. The result can be well described in Cartesian coordinates by the function
T(x, y)=k(x^2+y^2)
where k is a constant. If you were asked to give the following function,
what would you write?
T(r, θ)= ?
The physicist, considering T(x,y) to be a function representing the physical temperature at a point in space, would infer that T(r,θ) is the same temperature function but expressed in 2D polar coordinates instead of 2D Cartesian coordinates. The mathematician, instead, would consider the function T: ^2 → as a mathematical object independent of the dummy variables used to represent the two real number inputs. As a result, the physicist will answer “T(r,θ) = kr^2 ", and the mathematican will answer “T(r,θ) = k(r^2+θ^2)".
In our version of the prompt, below, we omitted the physical context for the function, hypothesizing that this would remove potential bias toward the “physicist" response. In addition, we explicitly suggested that there might be other possible answers given by a physicist or mathematician, to encourage students to apply multiple perspectives to the problem:
0.85
Suppose that f(x,y)=x^2+y^2. What is f(r,θ)? Do you think that either a mathematician or a physicist might give a different answer than you did? What answer, and why?
Surprisingly, all students gave the physicist's response (or a variant of it[Two students said that f(r,θ) = r, which we consider closer to the physicist's answer than the mathematician's answer.]), and only one recognized that a mathematician might possibly give a different answer because of an assumption made in the physicist's response:
“It’s possible someone who doesn’t study physics would be confused by/not understand the abuse of notation implied here, where the actual function f is a different mapping depending on what symbols (coordinates) we use. They might instead treat the question “What is f(r,θ)” as a request to directly substitute the symbols “r” and “θ” into the original function." (D4)
Student D4 mentioned that the mathematician's response is the more “rigorous" one and clarified that in his view the physics interpretation is the abuse of notation. Also, this student mentioned that they had thought about this issue before via discussions with their math-major close friend.
Redish and Kuo <cit.> identify that physicists load meaning onto variables in a way that mathematicians normally do not. We confirmed a strong version of this in our study with our version of Corinne's shibboleth — someone giving the physicist's response is clearly assigning meaning to the coordinates (x,y) and (r,θ), even though the identification of these variables as Cartesian and polar coordinates was never given in the problem statement. In the interviews, we tried to probe the students' answers a bit more, asking if their answer would have changed if we instead asked about the function f(u,v) = u^2 + v^2 or f(p,q) = p^2 + q^2 (and then asked what f(r,θ) would be in that case). It seemed the interviewees had already decided the question had to do with changing coordinates, and still gave the answer of f(r,θ) = r^2. Either this, or they said that they wouldn't know how to answer without knowing the meaning of the new variables.
We do not find it surprising that undergraduate students associate definite meanings with variables in this way. What is surprising is that even mathematics majors do not suggest the mathematician's interpretation of function notation when prompted for an alternative.
§.§ Role of Math in Physics
We asked our interviewees for their views on the role that mathematics plays within physics. The most common response was that mathematics is a tool for solving problems and understanding the physical world. The term “tool" includes the implication that physicists do not always need to delve into or justify how the tool works; for their purposes it is enough that it does work and the underlying reasons can be left to mathematicians. A related view, echoing Galileo, was that math is a language for describing the world. Physics needs a language that is more precise and more adapted to logical reasoning and calculation than English, and mathematics fills this need.
More detailed responses suggested that mathematics provides a general framework that physicists can then fill in with boundary conditions, physical parameters, or other “limitations" that describe the specific laws of physics and material properties applicable to a given problem.
“Physics is just applied math with concrete, like, limitations... like oh, the ball is not going to go outside of the room... we’ll never, when doing physics, ... you’ll never consider something outside of a third dimensional realm." (P20)
“In cosmology, the Einstein equations and the Friedmann equations are all just math, the physics is when you then say, okay, well this is the, this is our idea of what the dark matter content is, and this is our idea of the equation of state of the matter, and we plug that into the stuff that’s been mathed out for us and then we get, you know, we get the scale factor throughout the expansion history of the universe." (P1)
§ MAJOR THEME 1: THE NATURE OF THE SUBJECTS
A major theme in our data is that students perceive mathematics to be deductive, standardized, and highly structured, whereas physics is inductive, eclectic, and flexible.
These observations apply to the nature of proofs or arguments presented in each subject, the presentation of other material in the classroom and in textbooks, the homework and exam problems posed in each class, and the ways in which instructors respond to questions from students.
The presentation of mathematics is often structured by the Definition-Theorem-Proof format, even when “Motivation" or “Application" are added to this template.
Physics is more likely to proceed inductively from examples of phenomena to generalization and theory.
When a physics class requires a new mathematical concept or technique it is likely to be presented briefly in the form of an equation or rules and then applied without a deep justification or motivation.
Students recognize that mathematics is abstract and not necessarily connected to the physical world, whereas physics must be so connected even at its most abstract.
Physics is about explaining or predicting real phenomena, while mathematics is “about" itself.
In that sense physics is a blend of theory and experiment, while mathematics is “all theory".
§.§ The Nature of Proofs/Arguments in Each Subject
Students perceive mathematical proofs as highly structured, with the hypotheses and conclusion announced at the start and all the logic spelled out explicitly.
Specific named methods such as Mathematical Induction may form the proof structure, and named theorems may be quoted in support of particular steps.
A physics “proof" is more free-form and can contain jumps and gaps; it looks more like a calculation than a piece of reasoning, and the result may appear suddenly as a surprise.
(One student gave the example of the speed of light emerging suddenly from a calculation based on Maxwell's equations.)
The steps typically follow by standard computational tools from algebra and calculus without further overt justification.
Mathematical proofs are more wordy, the words serving to explain the reasoning.
In contrast, when words appear in a physics proof they may interrupt the flow of the calculation/proof to introduce a physically motivated approximation, which does not have a rigorous justification, or perhaps to dismiss a case as “unphysical" rather than mathematically disproved.
Unlike a mathematical proof, a physics proof may not be self-contained, and students feel that they have to ask additional questions after class if they want to understand the transitions between steps.
Students also mentioned that mathematical proofs may be more rigid in terms of how terminology or symbols must be used, or how the logic must be sequenced, while physics proofs are more flexible.
“Once I go through a mathematical proof, ... as long as I understand the intermediate steps, I end up with, like, I know why this result works inside and out. And, whereas physics proofs sometimes require a little bit more, in some ways, trust." (D22)
“Whereas in physics sometimes if you look at the notes for your proofs there's not – they have to make a jump but I'm not entirely sure how they got there, then I have to go talk to them, like how do you get from here to here?" (M9)
“Proofs in math class are more like a written paragraph type of thing versus in physics ... it is all just math like algebra and calculus ... and a lot of the proof in physics is in the math rather than in the spoken words." (P20)
These comments are consistent with the view of a student in <cit.> , who said (page 579), “[In math] it just all makes sense to me, because there's a reason it works, and it's just one reason. It's not like in physics really where there's so many different cases like I said before. In math, if I understand the proofs of why it's that way, and then, I'm comfortable using that equation."
10pt
§.§ How Topics are Presented in Class
Students felt that topics in mathematics classes are developed thoroughly through the definition/theorem/proof structure, so that their properties are built up gradually and fully justified.
In physics classes math is a means to an end, and new techniques may be simply presented and then exemplified through applications.
Students recognized that the main focus is on the physics, while the math is simply a tool which can be accepted without rigorous justification if it works.
The rules for calculating with it take precedence over complete definitions or derivations.
“My math classes have been, they have, like a familiar structure ... they have some motivation, they have some definitions, a theorem, and a proof. ... Whereas in physics I feel like it's kind if been more all over the place. Like the transitions between when somebody's giving like a heuristic argument or an actual proof of what they're trying to show, it's a bit blurry." (D3)
“It's also a lot easier to take notes in a math class for that reason. ... I can see what each thing is defined as, what the main theorem is, and how it's proved. Whereas in physics I think things are a bit at least, in my experience, a bit more, like haphazard." (D3)
“In physics the words that you would be using in that are generally the physical explanation of the phenomena where you are kind of saying like this is roughly how the equation matches up to the phenomena, while in math you're not trying necessarily to match up the equation to a phenomenon, you're simply trying to explain the equation and how it works." (M9)
“When they present you math in physics they don't tend to present words alongside it." (M9)
“[Math topics are presented as definitions, theorems, proofs, but] in the physics class I feel like often we're given an equation maybe and then we jump right into uses of it, like we're given the Fourier series and then we jump right into applying it." (P1)
“In a physics class math is presented as a means to an end rather than something to study itself ... one example is in classical mechanics we kind of did a crash course into calculus of variations for Lagrangian mechanics." (P20)
§.§ Nature of Homework and Exams in Each Subject
We will have more to say about assignments in each subject when we discuss student perceptions of “tricks" in physics.
However, students find math assignments more predictable than those in physics, particularly the expectations for what correct or complete solutions should include.
The stricter standards of justification in mathematics and the presence of physically motivated approximations or shortcuts in physics contribute to this.
“If I have my math homework and my physics homework on my desk, and someone comes by and looks at it [and asks] `Oh, what class is this?' I'm like, ... if it looks like an essay, then it's math. If it's math, then it's physics." (D5)
“I can get full credit on the assignment even though I don't know why the thing I just applied works in this instance ... you can BS the physics assignment and still get 100 on it, even though you don't understand why the stuff you're writing works ... I can explain the steps and still not understand why they work, versus in a math problem, in a math homework assignment, if you BS the homework assignment you're not getting 100 on it ... because you're going to be missing some crucial steps in your proof, and you know you're missing those crucial steps, and when a proof is complete you feel it." (P1)
“The midterms and the finals in math classes are so much more standardized, like I feel like I know exactly what I'm getting into in a math exam even if it's with a whole new professor. Like with all the math exams of the [calculus] series, they kind of felt the exact same, and then now that I took a couple proof-based math classes, those kind of exams all felt the exact same too. Versus the physics exams it kind of feels like there's like a massive variance, like a swing between them ... my grades in math exams are very consistent like I feel like I get pretty much the same score on every math exam even in different classes, versus in physics classes, like I've gotten anything from above a 100 to below a 20 in physics classes and it's not because I'm studying any differently, just because the exams vary so much." (P1)
§.§ Responses to Student Questions
Some students felt that their mathematics professors were more likely to give complete or satisfactory answers to “why" questions.
Physics professors might not know the answers to technical mathematical questions or might de-emphasize the importance of such issues relative to developing physical understanding.
“In physics, generally, ... we use it because it works, not because it's like, we never, like, prove anything ... If I ask some question, or if I hear somebody else ask a question, like related to the math, ... like `How are we able to use this? What does it mean?' If it's, like too math, like `jargony', in general the professor seemed to be like, `It doesn't really matter, like don't think about it ... you just have to know that it works.'" (P11)
§ MAJOR THEME 2: PHYSICS “TRICKS"
Another major theme from our data focuses on categorizing ways in which mathematics, as it's used in physics classes, can appear unclear, sloppy, handwavy, etc. In general, we call such sources of confusion physics “tricks."
Physics classes can gloss over mathematical details. Often, this is due to lack of lecture time, with physics instructors choosing to spend their limited time focusing on physical phenomena rather than mathematical manipulations. Sometimes students are fine with this, but in other cases the mathematical manipulations performed (or omitted) in a physics class are a big source of confusion. Such sources of confusion fall into two categories: “techniques” and “approximations.” Roughly speaking, the former are mathematical objects, structures, or manipulations that can be mathematically justified or further explained, although sometimes this is not clear to students and may be omitted by their instructors. It is mostly students’ unfamiliarity with the techniques or mathematical subject matter that leads to their confusion. The latter, “approximations,” involves approximating exact expressions, “erasing” information, and reducing general expressions to specific cases, sometimes as a “shortcut” to get around difficult math. These often require physical arguments specific to the physics problem at hand.
It is worth noting that students’ lack of conceptual understanding of the material, and lack of intuition about what the math is representing, can contribute to their confusion for both types of “tricks.” Also, we note that students became more comfortable with the use of many “tricks" over time, after initial discomfort when first encountering them.
§.§ Mathematical Techniques
Some of the mathematical “tricks” that students mentioned can be classified as mathematical techniques, manipulations, or entirely new topics. Generally, these are things that could be encountered in a math class, as they don’t rely on physical arguments or approximations.
For the most part, students were okay with an abuse of notation or somewhat “sloppy” mathematical methods, so long as they had some exposure to the mathematical topic. For example, the authors anticipated more complaints about the interpretation of differentials in physics classes (e.g., treating dq in electromagnetism as a small amount of charge), but largely this was okay with students. A few students said that they would have liked to have learned a bit more about differentials in math courses, but that they picked up what they needed in physics (especially upper-division electromagnetism). Another topic was the order of taking integrals or derivatives – a few students who had a bit more math background mentioned that physics instructors tended not to check the conditions justifying the interchange of the order of integrals/derivatives/summations, but that largely this didn’t affect their ability to follow along in the courses. When asked a followup question about some steps of proofs not being justified in physics classes, one double-major said:
“Uhh, they went like slow enough, whereas, like, if they did something, I could just kind of see why it worked out. Like, even if he/she didn't justify it like, I can just like, think, `Oh, this, works,' so I didn't really... there's nothing that really stood out to me." (D3)
One topic that confused several interviewees was Fourier analysis (Fourier series and transforms):[Most of the interviewed students took a 5-quarter honors sequence of introductory physics, which goes into a little more detail than the standard sequence intended for engineering majors. In this honors sequence, Fourier transforms are brought up in either the 4th or 5th quarter, first with optics and later with quantum mechanics.]
“I would say the Fourier transform... It was probably, I mean, it's definitely like the most foreign piece of math." (D5)
There seemed to be a lot of factors contributing to this: the topic was new when they were introduced to it in physics, it was introduced for a specific purpose (to discuss optics) and not discussed in full generality, and homework exercises tended to focus on particular manipulations that were important in performing calculations, but didn’t focus on conceptual understanding of Fourier analysis in general. It was these calculations in particular that were almost uniformly called “tricks” by the students — they didn’t know why they were doing what they were doing, didn’t know the “rules,” in effect, and didn’t know what “tricks” were allowed or justified. This was especially pronounced when the students reflected on their homework assignments in these courses:
“The homework for [upper-division math methods course in the physics department] was, was really complex and, and very difficult and it required some creative manipulations of the Fourier transform and of complex numbers that like, like we had to manipulate it in interesting ways, in order to do that you need to understand exactly what the, the Fourier transform is and what these operations are doing, and without really understanding exactly what they're doing it, it just felt like these problems were kind of magic and it's like, oh look, there's this trick to solve it but you know tricks aren't helpful if you don't understand the operation that they’re, that they're getting around." (P1)
“I personally felt like there was a lot of like tricks that are used in the Fourier transforms that are just kind of well-known and you just have to look up. But I remember like I didn't like know how to like… I didn't know they were tricks, so I would like try to actually solve them. And I it would take me so long, and I would like get together with my friends, try to like, solve these problems, and it would take me like so long. And then we just look it up, and then it's actually quite easy and it's just like a well-known fact, or whatever. So that was kind of tough." (P11)
A similar sentiment was expressed at the upper-division level with linear algebra in quantum mechanics. Almost uniformly, students mentioned that the one quarter of lower-division linear algebra that they had in a math class was insufficient preparation for upper-division quantum mechanics. The lower-division course never focused on abstract (or infinite-dimensional) vector spaces, nor used the field of complex numbers, nor discussed hermitian/unitary operators. As a result, there was a similar feeling of being overwhelmed in quantum mechanics as with Fourier analysis, as the students were trying to learn the math along with the particular physics applications, with new notation on top of it all:
“...in linear algebra you have a matrix and eigenvalues and corresponding eigenvectors but I feel like that sort of calculation, or like I guess that calculation algorithm type thing that you would do in linear algebra is not at all how it was done in quantum mechanics. And that's definitely more of a notation thing in physics where you use the Dirac notation, you know, like operators and all that stuff, you hear a lot about like, and then so it's like that what steps in the calculation are done I feel like in physics they assumed like, `oh they learned this in, like linear algebra,' but obviously we didn't learn Dirac notation in linear algebra." (P20)
There were a few other mathematical topics mentioned, but not as often as the above two. These include symmetry arguments (for evaluating integrals or determining the direction of vector fields) and solutions to differential equations. The authors suspect that these issues were not as universally cited because the students in general knew the physical context and/or they had some familiarity with the mathematical techniques required.
§.§ Approximations
Students brought up approximations in a few different contexts, with Taylor series being the most common. In general, students are uncomfortable continuing with an approximate expression if they’re aware (or believe) that an exact expression exists, if they don’t see how the approximation will help solve the problem at hand, or if they don’t think they’re developing tools/skills that will generalize and help them tackle other problems on their own. A related issue was mathematical “shortcuts” taken during lecture: going from general expressions to specific cases, which is a different kind of erasing information.
For example, regarding back-of-the-envelope calculations, one student expressed discomfort when a professor would plug in numbers, but not their exact values. An example might be canceling out π in the numerator with 3 in the denominator of an expression when an instructor is estimating numerical values. Students felt a disconnect between doing “real” (“precise” or “exact,” in their eyes), difficult physics problems, and being cavalier when getting a final result:
“[The professor] always liked to mention, like back-of-the-envelope calculations. And [laughs] I remember a lot of people really were kind of upset by that. He/she liked to just approximate a lot of stuff very quickly, and I remember people were kind of like... people didn't really like that at first... You're like, `why can't we just, why don't we just use like, the most exact answer?' But then later on, I realize it's like, it's really like, negligible. It's like there's no point in including all the extra stuff." (P20)
Many students mentioned Taylor series when the topic of approximations came up. One student, a double major who had already taken several advanced mathematics courses, gave the following ancedote:
“An example that I've thought about where it's like, physics kind of like sweeps a bit of stuff under the rug. I think for like, you know like the pendulum swinging? Umm, they make an approximation where the angle is small. So normally when like you Taylor expand it you have a bunch of trig terms, but they just do like sinθ is approximately θ, and it seems like, you know, as somebody coming from math, there's a whole lot of other information in those Taylor expansion terms… hypothetically, if I were to ask a professor like, `What do you do about those other terms, like isn't that part of the physics?' And they say like `no, it only matters that the angle is small, and we can make this approximation.' It's like, well that's not satisfying to somebody like, likes the full solution, I guess." (D3)
This student actually mentioned that their math background was a hindrance in physics problems – they are not used to discarding information in a seemingly arbitrary manner. What are the rules behind when students are allowed to linearize equations? When should we keep the second-order terms? Why are we not trying to quantify the error when throwing away these terms? Shouldn’t the error accumulate over time in a problem like the pendulum? And, if so, why are we ignoring this?
When asked later if there was an exact answer to that problem, the student replied:
“I think, well I'm pretty sure there is an exact answer. I think I knew that, because like the differential equation isn't that hard, like x” = sin x.” (D3)
This confusion is probably not too surprising, as a 5-10 second quick remark in lecture to the effect of “this equation is not solvable analytically” or “a complete solution involves elliptic integrals” can easily be missed. Instructors may say that advanced physics classes cover things in more detail, implicitly promising that a more exact treatment exists and awaits them in a more advanced course. Understandably, students might imagine that we’re always “sweeping things under the rug.”
Another student mentioned that if such approximations were to be made in a math class, then there would be more rigorous analysis of error bounds:
“I've taken real analysis, when you do real analysis and you're giving the approximation functions for equations in that class like the Taylor expansions, they of course give you the error and they prove why the error works and they are like, this is how we minimize the error to make sure that there is as little error as possible. We weren’t given anything like that [in the physics course]... I personally find it a little bit confusing to have that stuff and then not have any explanation for why these work the way they do and like what the margin of error is and how we can minimize that margin of error, especially since it feels like it would be important." (M9)
When students mentioned Taylor expansions in interviews, they cited negative experiences of homework problems where they were told explicitly to Taylor expand some expression to a certain order. Many said that Taylor series appeared often and seemed important, and that they wished they had more introduction to it in their classes. However, only one gave the sentiment that Taylor series/expansions can be a useful tool to analyze physics problems.
§ MAJOR THEME 3: WHEN ARE PREPARATORY MATH COURSES HELPFUL?
Our next theme centers around how physics students perceive their preparatory math courses. In general, the student feedback that we collected seems to support the claim that these math courses are always at least somewhat helpful. However, some appear to be more helpful than others.
For most physics majors, their first college math courses are the introductory calculus sequence. At our institution, this sequence is a series of five one-quarter courses. Based on our interviews, students taking introductory physics seem to universally appreciate introductory calculus and feel it prepares them well for introductory physics. For example:
“And so learning, being able to learn, like, something in [vector calculus] like a path integral or a surface integral, and then like a week and a half later it comes up for the first time in [electromagnetism], it made it a lot easier, or it’s like, for one, it’s your second time, like our professor did a good job of explaining like, oh, this is what a path integral is, that's like a review thing it helped to hear it a second time in both contexts like that and then seeing it applied to a physical system and so that helped, so then obviously learning it first in math made it easier to apply in physics but seeing a very concrete physical solution and in a way having a whole class dedicated to concrete physical solutions made [vector calculus] a bit easier to kind of digest.” (P20)
In the quote above, vector calculus seems to be especially helpful for electricity and magnetism, as one might expect. In math classes in particular, there does seem to be a tension between physical application and theoretical exposition (often roughly measured by the time spent introducing and developing proofs.) Whereas the introductory calculus series seems primarily interested in applications and building mathematical skills, a good math program is invested in fostering proof writing ability and thus some courses end up spending time in this pursuit. A sub-theme that we noted was how the balance of time spent by a math instructor on physical applications vs. more pure mathematical theory affected student learning. There seems to be student consensus that a more “proofy" course is less helpful when it comes to the learning of physics. For example, we have a quote comparing the complex analysis and the “Introduction to Proof" courses in the mathematics department:
“Like when you multiply, the angle just kind of goes around. And when you divide, the angle goes back the other way. That kind of stuff. When you multiply, it just increases. And then, the visualization of umm, the, what was it, the branch cut and stuff, where you know, it becomes like a spiral, and it keeps going around. I was like, “Wow, this is, this is really cool.” It was just really interesting to me. But then [Intro to Proof] was like a lot of these symbols, symbols. You have to write a proof in this way, this way, and I was like, it wasn’t as interesting.” (P12)
For some physics majors, it would seem that connecting math to real life and/or being able to visualize math is part of the allure of being a physics major. Furthermore, student quotes express similar feelings regarding other courses such as real analysis, typically a very proof-heavy course. In a further example, a student comments that upper-level math courses that are taught with physical applications in mind or in a way that balances the proof-based approach with more applications can be very beneficial, in particular heaping praise on complex analysis which has many applications useful to upper-level physics and, while being proof-based, is often taught with an eye towards applications.
“So, I think that math for us really helped me understand the complex numbers that are used in physics. Similarly, as I mentioned before, many physics courses just give you this, and you have to apply them immediately after you know this fact. Sometimes you don’t fully understand the mathematical foundation of it. So, I think [complex analysis] really provided you a good foundation for understanding those complex numbers and how to use them.” (P23)
We also found student comments suggesting that there are some cases where the content of a particular math course does not line up well with what physics students say they need, even if it is more application based. As an example, the Taylor series is a standard component of an introductory calculus series and an essential piece of a physics curriculum. However, as crucial as this is in physics some students feel that it plays more a minor role in introductory calculus.
“... Oh, another one would be Taylor series for sure, because my [calculus course] for some reason we got behind schedule and so Taylor series we spent only week 10 on it with like the understanding that it would play a very minor role on the final and so I didn't really care much about, about Taylor series, not realizing that Taylor series would be one of the most important things I would like, return to time and time again in physics…” (P1)
Another place where there seems to be misalignment is in quantum mechanics. A solid upper-division course on quantum mechanics involves many topics from linear algebra and most physics programs will require a course on linear algebra from the mathematics department. However, these prerequisite courses are often not taught with the goals of quantum mechanics in mind. Quantum mechanics uses complex vector spaces, Hermitian operators, and commutators; topics a typical introductory linear algebra course will not discuss. Many students expressed frustration with this mismatch and how difficult it can be to fully comprehend these topics upon encountering them for the first time.
Further complicating things is that, according to students, some physics faculty are unaware of this misalignment. Several students noted that instructors would sometimes make comments like, “as you probably learned in your linear algebra course,” regarding advanced topics that most linear algebra courses do not cover.
Additionally, there were several student comments that suggest that a more advanced linear algebra course could be a better match with the expectations of a quantum mechanics course.
“Obviously the number one useful thing was eigenvalues, eigenvectors, eigenfunctions and, and I feel like that was, that was maybe something that was more reinforced in [advanced linear algebra] then, and we, we kind of talked about eigenstuff in [basic linear algebra] but, but maybe in [basic linear algebra] it felt a little more arbitrary like, like I, I told my Dad that we were doing something called like, eigenvalues and he was like, oh yeah I use eigenvalues all the time in my work and, and he's, he's not a physicist, he's a totally different field but I was like, huh, well apparently these things are pretty important but I know, I had no idea how and, and my feeling is that [advanced linear algebra] reinforced that a lot more …" (P1)
Another sub-theme was the utility of so-called “trading zones”, an anthropological term for spaces where members of different cultures meet to trade goods and ideas that was adapted by Peter Galison <cit.> to describe physicists interacting with other “cultures", such as mathematicians and engineers, in the development of new technologies. Trading zones often develop their own languages that blend those of the cultures which interact there. Most physics departments, including ours, offer a mathematical methods course, in which mathematical topics are presented via more physical language and applications. We view such courses as trading zones that use blended math/physics language and offer an opportunity to really focus on the physical applications and visualizations of the math topics and their connection to real life systems. Furthermore, many math topics essential to physics are not discussed in the typical introductory math courses. Examples include partial differential equations (PDEs), Fourier analysis, complex analysis, and advanced linear algebra. From student comments, it seems having this space, this trading zone, to more fully explore these mathematical topics is valuable and appreciated.
“... I recently took mathematical physics [math methods] last quarter and I very much enjoyed that class cuz it was a very like nice blend, I think, between, you know, physics and actual solving math stuff and I think, you know, we covered Fourier transforms, you know, maybe not as rigorously as you might in a math class but I think it was relatively satisfying. We covered Fourier transforms and complex analysis and we did use a little bit of linear algebra, you know, but you know find out eigenvalues of like a linear, linear system and I thought that was very satisfying so had I taken that earlier I think I wouldn't really have these sorts of, you know, problems I think, but on the other hand like of course you know they need to wait for people to have a certain level of maturity to take that class so I understand why it's not, you know, first quarter kind of thing.” (D4)
As discussed, one topic rarely covered in introductory math courses that seemed problematic for our students is the Fourier transform. Even though this is typically included in a mathematical methods course it is so important that it is often introduced earlier in the physics curriculum. When this happens, it seems that many students feel that there is not enough time to get a good handle on it.
“Yes, and honestly I still think that a math course that introduces Fourier transforms and builds them up would be good for me, and good for maybe a lot of physics majors, but yes my feeling when I first got to [quantum physics] was like we introduced it and like right after we took the second midterm, so it was hard to like feel like it would be that important just because it was like the final two weeks of the course and we were really rushing through it and, and it kind of got introduced as like, like along with the idea of like Hilbert spaces and like, and like wave functions rather than or like, like wave vectors rather than wave functions and, and, and so at that point it was like, like if it all just kind of got thrown on us feeling a bit like magic and then it was like and look there's this thing called a Fourier transform that, that fixes this or that, that solves these problems for us and, and right now you don't really need to know the specifics of it, you just need to know that it can, it can switch you between position space and, and like momentum space and, and when you hit it on a cosine you just get the, the frequency of the sine wave and it was like it was like, okay I mean this is clearly very mathy but it's just being introduced kind of as, as magic to solve the few problems that we can even apply it to, we can't even really apply it to many problems right now, we can only really apply it to." (P1)
§ MAJOR THEME 4: WHEN DO PHYSICAL CONTEXTS FOR MATH HELP UNDERSTANDING?
Physical examples can often be extremely helpful in bridging the gap between mathematics and physics. The final main theme that arose was a question of timing: when do physical contexts for math help understanding? Which is more helpful, seeing the math before, after, or at the same time as the physical context? The students had much to say about this. We even had several cases of students referencing a single class example, involving matrices, as being extremely valuable.
Many student comments promoted the idea that learning math first, in a math class, and then applying it in a physics class works well, when the class is aligned well with the needs of the physics topic, as with the introductory calculus sequence and first-year physics. There were also several comments suggesting that when this does not occur, and the math topic is introduced in a physics class right before its application, it can lead to problems. There was consensus that this way of presenting the material often doesn’t give enough time for students to fully process the new math topic, making a discussion of the physical application difficult. For example, here is a quote relating to the introduction of Lagrangians and functionals in a junior-level classical mechanics course.
“I think I wrote the other one was the Lagrangian. 'Cause then we were... it was in [classical mechanics], we were presented with the Euler-Lagrange equation, and that, that seemed... it seemed a little fast to me how it was like functionals. I didn't know that was a thing before that, and then we just have this thing like the action. Minimizing action. I was like, okay, I guess, I'll... I mean, I didn't really understand, you know, the exact derivations, and in...intuitions behind everything. But then you could still use the Lagrangian, and use all the equations, and then, you know, get the equation of motion. But then, I kind of just start with that. And I'm not sure... I wasn't exactly sure how the... how that even tied into, you know, getting those equations of motion.” (P12)
Furthermore, a student directly discusses this situation dealing with the introduction of Fourier transforms in a first-year quantum theory course. As mentioned, Fourier transforms are not generally presented in the typical math courses that physics majors are required to take. Here again we see a student describing simply not having enough time to digest the new mathematics enough to apply it.
“I think the ideal way to do it would be learn it from math professor, and then learn it from a physics professor. (I: Why is that?) 'Cause, I mean, I think like things are always gonna be more complicated when you don't have, like, context around them. But at the same time, at least from my experience with both, like having at least some sort of baseline foundation of just the math part of it, and then being able to apply it." (D5)
There also seems to be evidence that providing physical examples along with the math theory can be an effective way to teach, if students are given enough time to begin to master the material. In the upper-level complex analysis course a mixture of theory and applications seemed to work well.
“For [complex analysis] I took the class taught by Professor X, and he gave us a lot of examples in physics. I think that bridges like well from complex numbers in math to complex applications in physics. I feel like I took [complex analysis] because of my major, so I think most physics students won't take that course. I think the complex numbers are only told by that course. No other lower physics courses teach you thoroughly about complex numbers. So, I think that math for us really helped me understand the complex numbers that are used in physics. Similarly, as I mentioned before, many physics courses just give you this, and you have to apply them immediately after you know this fact. Sometimes you don’t fully understand the mathematical foundation of it. So, I think [complex analysis] really provided you a good foundation for understanding those complex numbers and how to use them.” (P23)
It seems that that the balance and sequencing of time spent on new mathematical tools and physical applications is pedagogically important. There were several comments stating that little or no physical application was also not helpful.
Student: “From what I... what I remember feeling was like, I remember it was just like a lot of these like counting all these row reductions and all this stuff. And I was like, `what am I doing, doing all this stuff?' Yeah, I was like, `okay, I'll do all this.' Yeah."
Interviewer: Do you remember applying it to any physical system? Or was it always just math for...?
Student: “I think it was just math.” (P12)
Student comments would seem to support the idea that what works best for the learning of mathematics needed for physics is a mathematical style presentation with plenty of physical applications included. It doesn't seem to be ideal when the inverse occurs, a physics presentation with mathematics topics sprinkled in. However, what seems to be the most successful approach, according to student comments, is when the math is presented first in a mathematics course and then physics uses this topic in a subsequent physics course.
Finally, well-timed and key physical applications can prove very effective. There was one key example, regarding a ring of masses and springs solved with linear algebra techniques, that really resonated with many of our students. It seemed this type of `aha’ moment could be very powerful.
“[Professor Y] really did a really good job, I thought, with the whole, the linear algebra part of it, where umm — I don't remember how we got into it — it was like, basically just starting with like a couple… coupled oscillators and then extending it to like, n of them, like n of them in a ring, and then as n goes to infinity, and we were just talking about how you can write out like the force matrix or you can write out the symmetry matrix, which is equivalent, and then you can like find the eigenvalues of that. And like the way that they kind of presented eigenvalue and eigenvectors, and some of the linear algebra, I thought, was like really helpful, because it was like right in the context. They like, put it all into context, because everyone's taking [basic linear algebra]. Everyone kind of understands the ideas, but putting it in the practice was really helpful..." (D5)
§ DISCUSSION
Our analysis of the students' surveys and interviews is summarized by our four major themes:
* Students perceive physics classes as inductive, eclectic, and flexible, whereas they perceive mathematics classes as deductive, standardized, and highly structured.
* Students were frustrated by so-called physics “tricks" — mathematical techniques or approximations used in physics classes that appear unclear, unjustified, unmotivated, handwavy, etc.
* Students cite the introductory calculus sequence as being helpful for physics, and more formal, upper-division, proof-based math classes as less so. “Trading-zone" courses, which might include upper-division applied math classes (in the math department) as well as math-methods physics classes (in the physics department), helped clear up several topics.
* Students thought they benefit more from initial exposure to a mathematical topic in the mathematical “style" (theorem-definition-proof), and only later applying it to a physical context after a general introduction. Learning new mathematical ideas simultaneously with new physical applications (e.g., quantum mechanics using Hilbert spaces) can easily be overwhelming.
Our analysis of Corinne's shibboleth confirmed Redish and Kuo's claim <cit.> that physicists and undergraduates load meaning onto symbols in a way that mathematicians do not. One finding that resonated with the authors was that students were uncomfortable with “erasing information". In our data this meant making approximations in physics (where an exact answer is not required, relevant, or possible). However, we have seen similar student reactions in math classes when manipulating inequalities in analysis: they tend to want to find the optimal bounds and are uncomfortable settling for weaker ones.
This may be due to the fact that it is difficult to anticipate the goal of the instructors' seemingly arbitrary manipulations; students are more focused on individual manipulations rather than the overall big picture objective. It is also a general communication norm to make the most informative statements that you are able to.
Some of the results surprised us as well. First, students by-and-large were not very bothered by manipulations of infinitesimals in physics classes (for example, treating dy/dx as a ratio of two small but physically meaningful quantities). In addition, the authors did not anticipate how often Fourier analysis came up as an unsettling and confusing topic. However, perhaps this has to do with the specific presentation of this topic at our university, and is not a universal experience.
Our study has several limitations.
Although we contacted every student at our institution having a double major, or major plus minor, in mathematics and physics, our sample size was small: 20 surveys and 11 interviews.
We only interviewed one student with a math major and physics minor.
Our study could be repeated at a larger university, or at several, making a more quantitative analysis possible.
Although we felt that double majors would be best able to compare the cultures of mathematics and physics, one could conduct a similar study with physics majors exclusively inasmuch as they are a larger population and physics education research is particularly concerned with improving the curriculum for them.
The students we interviewed all had senior standing at our institution and therefore might have taken many of the same classes with the same instructors.
Some of our findings may reflect characteristics of those instructors rather than the courses or the curriculum.
Finally, we are on the quarter system, which may imply a faster pace for covering material than semesters provide.
A one-quarter linear algebra course taken by physics students may not reach topics relevant for quantum mechanics that could be included in a semester.
Our first quarter of complex variables usually cannot reach the Residue Theorem.
§.§ Pedagogical Recommendations
Our study involved a small number of students at a single institution, so we should be cautious in making pedagogical recommendations based on our limited data. We venture to suggest some which might be supported by further investigation. First, and perhaps most importantly, we suggest making efforts to raise awareness of these cultural differences for both faculty and students. For example, it would be helpful for physics faculty to be more familiar with what particular mathematical topics students seem to have trouble with (e.g. Fourier transforms, Hilbert spaces.) Being more conscious of these issues could lead instructors to be more prepared and proactive when dealing with these topics, for example looking more closely at what content the typical linear algebra course at their institution covers or taking more time to introduce the Fourier transform. Furthermore, these types of conversations might reveal blind spots in the curriculum that could be fixed. For instance, knowing that the Taylor series is taught at the end of the semester at your institution could lead one to spend a little longer on the mathematical background when first discussing this topic.
In much the same way that an instructor might explain the rationale behind new research-based teaching techniques they are using, we feel it might be beneficial to explicitly discuss these cultural differences, especially in a physics class, as the students seem to be less satisfied with mathematical explanations given by physics instructors. As our interviews indicate that students are cognizant of the differences between math and physics classes, discussion of why each discipline has developed its characteristic culture could be enlightening. These discussions could perhaps even help to start a dialogue. There seem to be times when a more mathematically inclined student is turned off by aspects of the physics teaching culture. We feel the first step in easing this tension is talking about it openly. There is even the opportunity to make light of some of these differences.
Several students suggested that it would be beneficial for physics instructors to offer extra resources, perhaps textbooks, articles, or web-based materials, for students who want more details or proofs about new or more advanced mathematical topics. We feel that this is an excellent suggestion. Perhaps the physics education community could create such resources specific to the needs of physics students.
There also seem to be some common issues that we imagine many physics departments would share. For example, being aware that approximations and “tricks" can prove troubling for students could lead instructors to rethink how they approach these topics, such as spending more time on why an approximation is needed/helpful or even trying to take a more mathematical approach and provide rigorous error bounds for an approximation. We were surprised by some of the resistance we found to what we felt were standard physical approximations, such as the small-angle approximation for a pendulum. We imagine that, even if math majors are more troubled by them than physics majors, most students would benefit from a more thorough analysis of some of these approximations and “tricks".
Finally, it's possible that physics departments might want to re-evaluate their curriculum, perhaps even surveying majors in a similar manner as we did in this study. For example, perhaps you'll find that modern quantum mechanics has reached a point where the traditional linear algebra course is no longer sufficient. Maybe there are small changes that could be made to alleviate some of the tension that arises when a new math topic is introduced. Perhaps better coordination between required sequences of math and physics courses is needed. Again, we believe just being more mindful would be a great first step. It is difficult to remember when one was a student and how foreign some of these topics may have seemed at a first encounter.
abbrv
§ INTERVIEW QUESTIONS
We used the following working script of interview questions during our interviews:
* How would you describe the role that mathematics plays within physics, in general? How would you describe the role it has played in your own physics courses? Describe instances where math has helped your understanding of physics concepts. Describe instances where it has been an obstacle to your understanding.
* How does a physics class “feel” different from a math class? Particularly in the ways that math is presented and used?
* What do you mean by calling a math or physics proof or problem solution “handwavey”? Can you give examples? How do you react to such proofs/solutions? [If they used this word in the survey responses]
* Think of some examples where the same mathematics topic has been presented or used in both your math and your physics classes. Describe cases in which it was helpful for your learning to have both perspectives on that topic. Describe cases in which the different perspectives made the topic harder to learn. [This can lead to the specific subject-area questions given below, if students mention that topic.]
* What kinds of questions (from students, or by the instructor) seem expected, encouraged, or discouraged in math and in physics classes?
* Mathematicians have been teased for refusing to solve an equation until they can prove it has a solution, or not using an infinite series until they have proved that it converges. On the other hand, physicists have been teased for not caring whether a series converges when they make use of it. Is this consistent with your experience of math and physics professors?
* In the survey question about the function f(x,y) = x^2+y^2 asking for f(r,θ) instread, would you have thought about the problem differently if it said f(u,v) = u^2 + v^2 instead? Would your answer for f(r,θ) have changed? Are variables like x,y,u,v used differently in mathematics than in physics?
* Can you expand more on (...)? [student-specific survey questions]
* Please illustrate (...) with an example? [student-specific survey questions]
We also drew on the following subject-specific questions:
* (Fourier Transform): How were you introduced to the Fourier transform in your mathematics classes? How were you introduced to it in your physics classes? In what ways was it useful to have two different perspectives on it? In what ways was it confusing? Has the use of the F.T. in later physics or math courses improved your understanding? Has it raised further questions for you? What differences have you noticed between the math and physics versions and uses of the F.T.?
* (Linear Algebra and Quantum Mechanics): Was your understanding of linear algebra (matrices, eigenvectors) from math classes helpful when those ideas were applied in your quantum mechanics classes? In what ways? In what ways were those aspects of quantum mechanics different from your prior mathematical understanding? How did this affect your learning? [If they limit their answers to particles moving in potentials, which involve infinite-dimensional vector spaces, suggest that they think instead about spin systems, which are closer to linear algebra as presented in math courses.]
* (Infinitesimals): Applications of calculus in physics often involve reasoning with infinitesimal quantities (dx, dQ, d⃗s⃗). Did your math calculus courses prepare you for this? Are you comfortable with both the mathematical and physical styles of doing calculus? How do you see them as related, or different?
* (Dirac Delta Function): Have you seen the Dirac delta function in physics, mathematics, or both? If both, how was it presented differently in each subject? In what ways do you feel that you understand it? In what ways is it confusing?
* (Proof): What differences have you seen between the types of proof (or justification) provided in your math courses versus your physics courses? Do you feel that either (or both) styles of proof give you a solid sense that you understand why the claims are true? Or do they leave you unconvinced to some extent?
* (Units): Another student told us that quantities with units attached (meters, grams, newtons, etc.) are very important in physics and less so in math. Have you experienced that as an important difference?
|
http://arxiv.org/abs/2409.02470v1 | 20240904064042 | Certifying Quantum Temporal Correlation via Randomized Measurements: Theory and Experiment | [
"Hongfeng Liu",
"Zhenhuan Liu",
"Shu Chen",
"Xinfang Nie",
"Xiangjing Liu",
"Dawei Lu"
] | quant-ph | [
"quant-ph"
] |
These authors contributed equally to this work.
These authors contributed equally to this work.
Center for Quantum Information, Institute for Interdisciplinary Information Sciences, Tsinghua University, Beijing 100084, China
Center for Quantum Information, Institute for Interdisciplinary Information Sciences, Tsinghua University, Beijing 100084, China
[email protected]
Quantum Science Center of Guangdong-Hong Kong-Macao Greater Bay Area, Shenzhen 518045, China
[email protected]
CNRS@CREATE, 1 Create Way, 08-01 Create Tower, Singapore 138602, Singapore
MajuLab, CNRS-UCA-SU-NUS-NTU International Joint Research Unit, Singapore
Centre for Quantum Technologies, National University of Singapore, Singapore 117543, Singapore
[email protected]
Quantum Science Center of Guangdong-Hong Kong-Macao Greater Bay Area, Shenzhen 518045, China
International Quantum Academy, Shenzhen 518055, China
§ ABSTRACT
We consider the certification of temporal quantum correlations using the pseudo-density matrix (PDM), an extension of the density matrix to the time domain, where negative eigenvalues are key indicators of temporal correlations.
Conventional methods for detecting these correlations rely on PDM tomography, which often involves excessive redundant information and requires exponential resources.
In this work, we develop an efficient protocol for temporal correlation detection by virtually preparing the PDM within a single time slice and estimating its second-order moments using randomized measurements.
Through sample complexity analysis, we demonstrate that our protocol requires only a constant number of measurement bases, making it particularly advantageous for systems utilizing ensemble average measurements, as it maintains constant runtime complexity regardless of the number of qubits.
We experimentally validate our protocol on a nuclear magnetic resonance platform, a typical thermodynamic quantum system, where the experimental results closely align with theoretical predictions, confirming the effectiveness of our protocol.
Certifying Quantum Temporal Correlation via Randomized Measurements: Theory and Experiment
Dawei Lu
==========================================================================================
Introduction.—Quantum correlations, both spatial and temporal, are distinguishing features of quantum mechanics.
Over the past few decades, the utilization of quantum spatial correlations, particularly entanglement, has significantly shaped quantum information science <cit.>.
Detecting and quantifying entanglement are also essential methods for benchmarking the capabilities of quantum devices <cit.>.
Recently, the focus on certifying quantum correlations has been generalized to include temporal correlations <cit.>.
Quantum temporal correlations, which emerge from sequential measurements on quantum systems, are crucial not only for deepening our understanding of the foundational aspects of quantum physics but also for a wide range of sequential information processing tasks.
For example, the Leggett–Garg inequalities, derived under the assumptions of macrorealism and non-invasive measurability, can be violated by quantum mechanical predictions <cit.>.
Furthermore, temporal correlations have been employed to witness quantum dimensionality <cit.> and have proven to be central in the performance of time-keeping devices <cit.>.
Among various methodologies for certifying temporal correlations, the pseudo-density matrix (PDM) is a prominent tool due to its rich physical implications and concise mathematical form <cit.>.
As illustrated in Fig. <ref>(a), the PDM is an extension of the density matrix to the time domain.
Notably, it permits negative eigenvalues <cit.>, which serve as indicators of quantum temporal correlations, as density matrices constructed from a single time point cannot exhibit such negativity.
Additionally, compared to the violation of Leggett–Garg inequalities, negativity in the PDM functions as a subprotocol for inferring quantum causal structures <cit.> and can also be used to bound quantum channel capacity <cit.>.
The conventional method for detecting negativity involves a process known as PDM tomography <cit.>.
However, as the number of qubits increases, PDM tomography becomes impractical due to the exponential consumption of both quantum and classical resources.
Moreover, tomography can lead to redundancy when the objective is to estimate specific target information <cit.>, such as negativity.
In this work, we integrate the techniques of quasi-probability decomposition <cit.> and randomized measurements <cit.> to propose a protocol for detecting quantum temporal correlations without requiring full system characterization.
Specifically, we design a quantum circuit that virtually prepares the PDM within a single time slice and employs randomized measurements to infer its negativity, as illustrated in Fig. <ref>(b).
Remarkably, our protocol requires only a constant number of measurement bases, making it particularly suitable for quantum platforms utilizing ensemble-average measurements, as depicted in Fig.<ref>(c).
In such platforms, measurements in a fixed basis are performed collectively across a thermodynamically large number of copies, yielding the expectation value of an observable in a single run.
This feature allows the runtime complexity of our protocol to be exponentially reduced compared to systems employing projective measurements.
Given the ensemble nature of nuclear spins and the high-fidelity controls achievable under ambient conditions, nuclear magnetic resonance (NMR) is an ideal candidate for testing our protocol.
Accordingly, we conduct a proof-of-principle experiment using NMR, demonstrating that quantum temporal correlations can be efficiently detected through randomized measurements on ensemble quantum systems.
Pseudo-density matrix.—The PDM generalizes the concept of the density matrix to cases involving multiple time slices <cit.>.
In this work, we focus specifically on the 2-time PDM.
Without loss of generality, we consider scenarios where both the input and output of a quantum process are n-qubit states.
Consider an n-qubit quantum system that is first measured by a Pauli observable σ_i ∈{𝕀,σ_x,σ_y,σ_z}^⊗ n, then transmitted through a quantum channel, followed by a second Pauli measurement σ_j.
Let ⟨σ_iσ_j ⟩ denote the product of the expectation values of these Pauli observables.
Given all the expectation values ⟨σ_iσ_j ⟩, the 2-time PDM is defined as
R =1/4^n∑^4^n-1_i,j=0σ_iσ_jσ_i⊗σ_j.
When performing the coarse-grained Pauli measurement <cit.>, the closed-formed expression of PDM coincides with the so-called canonical quantum state over time discussed in Refs. <cit.>.
The PDM is Hermitian with unit trace but may have negative eigenvalues, which serve as indicators of quantum temporal correlations <cit.>.
Intuitively, one could construct the PDM tomographically and then check for negative eigenvalues.
However, the complete information obtained from tomography is excessive for this purpose and requires an impractical amount of quantum and classical resources.
Quantitatively, if both the input and output states contain n qubits, Pauli-based tomography necessitates a total of 3^2n different measurement bases.
Furthermore, since the measurements needed to construct the PDM are more restricted than those for normal density matrices, the lower bound on the complexity of normal state tomography would still apply to PDM tomography.
Thus, even if highly joint operations among multiple copies of the system are permitted, PDM tomography still requires Ω(2^4n) experimental runs <cit.>. If only incoherent measurements are allowed, the sample complexity increases to Ω(2^6n) <cit.>.
Protocol.—To circumvent the need for PDM tomography, our core idea is to measure experimentally accessible quantities to certify temporal correlations.
Inspired by methodologies in entanglement detection <cit.>, the moments of R, such as the purity (R^2), can be used to infer the negativity of R.
Given a PDM R, if (R^2) > 1, then R is not positive semi-definite.
To prove this, note that (R^2) = ∑_i λ_i^2, where λ_i are the eigenvalues of R, satisfying ∑_i λ_i = 1 due to (R) = 1. When R ≥ 0, i.e., λ_i ≥ 0 for all i, (R^2) is upper bounded by 1.
Therefore, if (R^2) > 1, R must have at least one negative eigenvalue.
Additionally, we demonstrate in Appendix <ref> that (R^2) can attain a maximum value of 1/2(2^n + 1) when the input is an n-qubit pure state and the channel is unitary.
It can be similarly proved that the value of √((R^2)) provides a lower bound for (|R|) = ∑_i |λ_i|, which often serves as a monotone for quantum causality <cit.>.
Instead of PDM tomography, we employ the randomized measurement protocol to estimate (R^2).
It is known that the state purity, (ρ^2), can be efficiently estimated by evolving ρ with a random unitary and performing computational basis measurements <cit.>.
However, since the PDM spans multiple time slices, estimating its moments using the randomized measurement technique is not straightforward.
To address this challenge, we design a quantum circuit that virtually prepares the PDM within a single time slice, as shown in Fig. <ref>(b).
In the circuit depicted in the blue box, we denote the measurement results of the control qubit as 0 and 1, their corresponding probabilities as p_0 and p_1, and the collapsing states of the other two registers as ρ_0 and ρ_1.
We show in Appendix <ref> that
2^n(p_0ρ_0 - p_1ρ_1) = R,
where R is the PDM defined by the input state ρ and the channel 𝒞.
Note that, by setting the channel to be an identity channel, our circuit provides a new way to realize the virtual broadcasting map <cit.>.
Since the PDM virtually exists within a single time slice, we can apply the randomized measurement toolbox to estimate (R^2).
As illustrated in the green box of Fig.<ref>(b), one begins by independently selecting N_U random unitaries from a unitary ensemble ℰ_U and then applying each unitary in N_M independent experiments.
For each unitary U, experimental data {s_a^i, s⃗^i}_i=1^N_M is obtained, where s_a and s⃗ denote the measurement results of the control qubit and the other two registers, respectively.
This data is then used to construct an estimator M̂_2^U according to Eq. (<ref>).
The final estimator M̂_2 is calculated by averaging over the N_U independently constructed M̂_2^U.
In Appendix <ref> and Appendix <ref>, we analyze the estimator construction and the sample complexity and show that:
When the unitary ensemble ℰ_U is at least a unitary 2-design, and given the measurement results {s_a^i,s⃗^i}_i=1^N_M obtained from the circuit in Fig. <ref>(b) using the same random unitary sampled from ℰ_U, the expression
M̂_2^U = 2^2n/N_M(N_M-1)∑_i ≠ j (-1)^s_a^i + s_a^j X(s⃗^i, s⃗^j)
is an unbiased estimator for (R^2), where X(s⃗^i, s⃗^j) = -(-2^2n)^δs⃗^i, s⃗^j.
Assume that M̂_2 is constructed by averaging N_U independent estimators M̂_2^U.
When the unitary ensemble ℰ_U is at least a unitary 4-design, to ensure that M̂_2 - (R^2)≤ϵ with probability at least 1 - δ, it is required that N_U = 𝒪(1/ϵ^2 δ) and N_M = 𝒪(2^3n).
Consequently, the total sample complexity is N_U × N_M = 𝒪(2^3n/ϵ^2 δ).
Although the sample complexity for measuring (R^2) remains exponential with respect to the system size, 𝒪(2^3n) is significantly smaller than the sample complexity required for PDM tomography based on joint operations.
It is important to note that an exponential sample complexity for detecting temporal correlations is unavoidable, as the lower bounds for sample complexity in certain channel distinguishing tasks, which is achievable by measuring (R^2), have been proven to be exponential <cit.>.
A particularly advantageous aspect of our protocol is that the number of different unitaries, i.e., the measurement bases, remains constant and independent of the system size, as indicated by N_U = 𝒪(1/ϵ^2δ).
This feature makes our protocol especially suitable for platforms that utilize ensemble-average measurements, such as NMR, cold atomic systems, and nitrogen vacancy centers in diamonds.
In these platforms, the exponential projective measurements performed in a single measurement basis can be accomplished much more efficiently.
We also numerically demonstrate that this property holds even when the unitary ensemble is not a unitary 4-design.
In Fig.<ref>(c), we set the channel to be a fully depolarizing channel, the input state to be ρ = |0⟩⟨0|, and the unitary ensemble to be the Clifford group, which is only a unitary 3-design <cit.>.
For the line labeled “Ensemble", we set N_U = 10 and N_M = ∞, observing that the statistical error does not increase with the qubit number.
For the line labeled “Projection", we set N_M = 100, and it is evident that the error scales exponentially with the qubit number.
NMR experimental scheme.—We experimentally demonstrate our temporal correlation detection protocol on an NMR platform.
The experiments are conducted on a Bruker 300 MHz spectrometer at room temperature using a four-qubit nuclear spin system composed of ^13C-labeled trans-crotonic acid dissolved in d_6-acetone, with its molecular structure shown in Fig. <ref>(a).
The four carbon nuclear spins, labeled as C_1-4, form a four-qubit quantum processor <cit.>, characterized by the internal Hamiltonian:
ℋ_NMR=∑^4_i=1πν_iσ^i_z+∑^4_i<j,=1π/2J_ijσ^i_zσ^j_z,
where ν_i represents the chemical shift of the i-th spin, and J_ij denotes the scalar coupling between the i-th and j-th spins.
Specific values for ν_i and J_ij can be found in Appendix <ref>.
In the NMR system, single-qubit rotations are implemented using transverse radio-frequency pulses, while two-qubit interactions are achieved through free evolution. The accuracy of experimental control can be further enhanced using the gradient ascent pulse engineering algorithm <cit.>.
The quantum circuit used is shown in Fig.<ref>(b).
The first three spins, C_1,2,3, correspond to the control qubit, the ancillary qubit, and the system qubit, respectively.
The fourth spin, C_4, serves as the environment, which interacts with C_3 to realize the channel 𝒞 and is not measured at the end of the circuit.
The quantum circuit can be divided into three stages: state initialization, the virtual creation of the PDM, and the randomized measurement, as depicted in Fig.<ref>(b).
(i) Initialization. Starting from the |0000⟩ state, we first apply a Hadamard gate to C_2, followed by a gradient-field pulse in the z-direction applied to all spins.
This sequence transforms C_2 into the maximally mixed state 𝕀_2/2 while leaving the other three qubits unchanged.
Next, we apply another Hadamard gate to C_1 to prepare it in the |+⟩ state and perform a rotation R_y on C_3 to initialize the system qubit as a parameterized pure state |ψ(p)⟩ = √(p)|0⟩ + √(1-p)|1⟩.
(ii) Virtual creation of the PDM. First, we perform a controlled-SWAP gate on the first three qubits, C_1,2,3, as depicted in Fig. <ref>(b).
Next, we evolve the system spin C_3 and the environment spin C_4 using a partial SWAP operation with a parameterized time θ.
This unitary evolution induces a partial replacement channel 𝒞 acting on C_3.
Note that when θ = π/2, 𝒞 becomes a fully replacement channel that replaces any arbitrary input state with |0⟩⟨0|, thereby eliminating causal influence.
Finally, by applying a Hadamard gate to the control qubit and measuring it in the computational basis |0⟩, |1⟩, the residual system collapses to ρ_0 and ρ_1 with probabilities p_0 and p_1, respectively.
At this stage, the PDM is virtually prepared, as suggested by Eq. (<ref>).
(iii) Randomized measurements. The randomized measurements are performed by applying a set of N_U = 200 random unitaries independently sampled from the Clifford group, followed by computational basis measurements.
As shown in Theorem <ref>, systems with projective measurements require N_M = 𝒪(2^3n) measurements for each unitary to accurately estimate (R^2).
At the same time, an NMR platform can extract all the diagonal elements of an n-qubit density matrix with only n measurements, as discussed in Appendix <ref>.
In our experiment, after the random unitary evolution, we use just three measurements to extract eight diagonal elements of the density matrix of C_1, C_2, and C_3.
As shown in Fig.<ref>(c)-(e), each integral of a resonant peak at a specific frequency represents the expectation value of an observable that is diagonal in the computational basis.
Subsequently, the eight diagonal elements can be computed by linearly combining these expectation values.
Denoting the diagonal elements as Pr(s_a, s⃗), the estimator in Eq. (<ref>) reduces to
M̂_2^U = 2^2n∑_s_a, s_a', s⃗, s⃗' (-1)^s_a + s_a' X(s⃗, s⃗') Pr(s_a, s⃗) Pr(s_a', s⃗'),
which corresponds to the case of N_M = ∞ for projective measurements.
Therefore, the total runtime of our experiment scales only polynomially with the qubit number n.
Note that even with the ensemble-average measurements, performing PDM tomography remains challenging for the NMR system, as an exponential number of measurement bases is required <cit.>.
Results.—We first demonstrate the effectiveness of the virtual creation method.
Before implementing the randomized measurement protocol, we experimentally performed quantum state tomography on qubits C_1,2,3 and then processed the experimental data to construct R, according to Eq. (<ref>).
The processing steps and results are depicted in Fig. <ref>(a), using the case θ = π/6 and p = 0.6 as an example.
Both the experimental predictions and theoretical results of ρ_0,1 and the PDM R are presented side by side for comparison.
The close agreement between them demonstrates the effectiveness of the virtual creation method.
Additional tomography results can be found in Appendix <ref>.
We also present the eigenvalues of PDMs, which include two sets of experiments:
(i) Setting p = 1 and varying the channel parameter θ from 0 to π/2, we experimentally obtain the corresponding PDM and calculate the associated eigenvalues E_i of R, with results shown in Fig. <ref>(b).
(ii) Setting θ = π/6 and varying p from 0 to 1, the results are presented in Fig. <ref>(c).
The experimental results are in good agreement with the theoretical predictions, where the negative eigenvalues illustrate the temporal quantum correlation.
We then applied the randomized measurement protocol to estimate (R^2) and use it to certify the temporal quantum correlation.
First, we fixed the state parameter at p = 1 and varied θ from 0 to π/2.
The results, shown in Fig. <ref>(d), indicate temporal correlation for all parameters except θ = π/2, as the estimated values are greater than one.
The partial swap channel e^-iθ S reduces to the identity operation when θ = 0.
At this point, (R^2) reaches its maximum value of 1.5, indicating that the final state of the system is entirely determined by its initial state.
As θ increases, the SWAP component in the e^-iθ S operation becomes more pronounced, leading to a decrease in the correlation between the input and output states.
When θ = π/2, the channel acts as a full replacement channel, eliminating causality between the input and output states, resulting in (R^2) being around 1.
Moreover, we fixed θ at π/6 while varying the parameter p from 0 to 1 to quantify how the strength of causation depends on the initial state, with experimental results depicted in Fig. <ref>(e).
As the probability amplitude p increases, the measured (R^2) also grows.
All these experimental results align with theoretical expectations, demonstrating the effectiveness of our protocol.
By comparing Fig. <ref>(c) and Fig. <ref>(e), we observe that R has negative eigenvalues at the point where p = 0, while (R^2) fails to detect this negativity.
This indicates the limitation of using only the second-order moment to detect quantum temporal correlations.
It is worth exploring the use of higher-order moments to enhance the capabilities of our protocol <cit.>.
Discussion.—Since negativity in the PDM is a key signal for quantum temporal correlations, we employed quasi-probability decomposition and randomized measurements to estimate the second moment of the PDM, thereby assessing the negativity and certifying temporal quantum correlations.
Our results naturally inspire more efficient means in temporal quantum correlation detection <cit.>, quantum channel capacity <cit.> and quantum causal inference <cit.>.
Moreover, a key finding of our study is the proficiency of NMR systems in measuring the diagonal elements of density matrices.
It is therefore valuable to investigate other applications that could benefit from and leverage this unique capability of NMR systems.
We appreciate the valuable discussions with Andreas Elben, Richard Kueng, Oscar Dahlsten, and Mile Gu.
HL, XN, and DL acknowledge the support from the National Key Research and Development Program of China (2019YFA0308100), National Natural Science Foundation of China (12104213,12075110,12204230), Science, Technology and Innovation Commission of Shenzhen Municipality (JCYJ20200109140803865, KQTD20190929173815000), Guangdong Innovative and Entrepreneurial Research Team Program (2019ZT08C044), and Guangdong Provincial Key Laboratory (2019B121203002).
ZL acknowledges the support from the National Natural Science Foundation of China Grant No. 12174216 and the Innovation Program for Quantum Science and Technology Grant No. 2021ZD0300804 and No. 2021ZD0300702.
XL is supported by the National Research Foundation, Prime Minister’s Office, Singapore under its Campus for Research Excellence and Technological Enterprise (CREATE) programme.
64
fxundefined [1]
ifx#1
fnum [1]
#1firstoftwo
secondoftwo
fx [1]
#1firstoftwo
secondoftwo
noop [0]secondoftwo
ref[1]@startlink#1@href
href[1]#1@endlink
anitize@url [0]`
12`$12`&12`#12`1̂2`_12`%12
startlink[1]
endlink[0]
rl [1]href #1
@bib@innerbibempty
[Horodecki et al.(2009)Horodecki, Horodecki, Horodecki, and Horodecki]Horodecki2009entanglement
author author R. Horodecki, author P. Horodecki, author M. Horodecki, and author K. Horodecki, title title Quantum entanglement, https://doi.org/10.1103/RevModPhys.81.865 journal
journal Rev. Mod. Phys. volume 81, pages 865 (year 2009)NoStop
[Gühne and Tóth(2009)]gunhe2009entanglement
author author O. Gühne and author G. Tóth, title title Entanglement detection, https://doi.org/https://doi.org/10.1016/j.physrep.2009.02.004 journal journal Physics Reports volume
474, pages 1 (year 2009)NoStop
[Häffner et al.(2005)Häffner, Hänsel, Roos,
Benhelm, Chek-al kar, Chwalla, Körber, Rapol, Riebe, Schmidt, Becher, Gühne, Dür, and Blatt]Haffner2005ion
author author H. Häffner, author W. Hänsel, author C. F. Roos, author J. Benhelm,
author D. Chek-al kar, author M. Chwalla, author
T. Körber, author
U. D. Rapol, author
M. Riebe, author P. O. Schmidt, author C. Becher, author O. Gühne, author W. Dür, and author R. Blatt, title title Scalable
multiparticle entanglement of trapped ions, https://doi.org/10.1038/nature04279 journal journal Nature volume 438, pages
643 (year 2005)NoStop
[Leibfried et al.(2005)Leibfried, Knill, Seidelin, Britton, Blakestad, Chiaverini,
Hume, Itano, Jost,
Langer, Ozeri, Reichle, and Wineland]Leibfried2005cat
author author D. Leibfried, author E. Knill,
author S. Seidelin, author J. Britton, author
R. B. Blakestad, author
J. Chiaverini, author
D. B. Hume, author
W. M. Itano, author
J. D. Jost, author
C. Langer, author R. Ozeri, author R. Reichle, and author D. J. Wineland, title title Creation of
a six-atom `schrödinger cat' state, https://doi.org/10.1038/nature04251 journal journal Nature volume 438, pages
639 (year 2005)NoStop
[Monz et al.(2011)Monz,
Schindler, Barreiro, Chwalla,
Nigg, Coish, Harlander,
Hänsel, Hennrich, and Blatt]monz2011fourteen
author author T. Monz, author P. Schindler,
author J. T. Barreiro, author M. Chwalla, author
D. Nigg, author W. A. Coish, author M. Harlander, author W. Hänsel, author M. Hennrich, and author R. Blatt, title title 14-qubit
entanglement: Creation and coherence, https://link.aps.org/doi/10.1103/PhysRevLett.106.130506 journal journal Phys. Rev. Lett. volume 106, pages 130506 (year
2011)NoStop
[Pan et al.(2012)Pan,
Chen, Lu, Weinfurter,
Zeilinger, and ŻŻukowski]pan2012multiphoton
author author J.-W. Pan, author Z.-B. Chen,
author C.-Y. Lu, author H. Weinfurter, author
A. Zeilinger, and author
M. ŻŻukowski, title title Multiphoton entanglement and
interferometry, https://doi.org/10.1103/RevModPhys.84.777
journal journal Rev. Mod. Phys. volume 84, pages 777 (year
2012)NoStop
[Omran et al.(2019)Omran,
Levine, Keesling, Semeghini,
Wang, Ebadi, Bernien,
Zibrov, Pichler, Choi,
Cui, Rossignolo, Rembold,
Montangero, Calarco, Endres,
Greiner, Vuletić, and Lukin]omran2019cat
author author A. Omran, author H. Levine,
author A. Keesling, author G. Semeghini, author
T. T. Wang, author
S. Ebadi, author H. Bernien, author A. S. Zibrov, author H. Pichler, author S. Choi,
author J. Cui, author
M. Rossignolo, author
P. Rembold, author S. Montangero, author T. Calarco, author M. Endres, author M. Greiner, author V. Vuletić, and author M. D. Lukin, title title
Generation and manipulation of schrödinger cat states in rydberg atom
arrays, https://www.science.org/doi/abs/10.1126/science.aax9743
journal journal Science volume 365, pages 570 (year
2019)NoStop
[Song et al.(2019)Song,
Xu, Li, Zhang, Zhang, Liu, Guo, Wang,
Ren, Hao, Feng, Fan, Zheng, Wang, Wang, and Zhu]chao2019twenty
author author C. Song, author K. Xu, author H. Li, author
Y.-R. Zhang, author
X. Zhang, author W. Liu, author Q. Guo, author Z. Wang, author W. Ren, author
J. Hao, author H. Feng, author H. Fan, author D. Zheng, author D.-W. Wang, author
H. Wang, and author
S.-Y. Zhu, title title Generation of multicomponent atomic schrödinger cat states of up to
20 qubits, https://www.science.org/doi/abs/10.1126/science.aay0600
journal journal Science volume 365, pages 574 (year
2019)NoStop
[Brydges et al.(2019a)Brydges, Elben,
Jurcevic, Vermersch, Maier,
Lanyon, Zoller, Blatt, and Roos]brydges2019renyi
author author T. Brydges, author A. Elben,
author P. Jurcevic, author B. Vermersch, author
C. Maier, author B. P. Lanyon, author P. Zoller, author R. Blatt, and author C. F. Roos, title title Probing rényi
entanglement entropy via randomized measurements, https://doi.org/10.1126/science.aau4963 journal journal Science volume 364, pages
260 (year 2019a)NoStop
[Cao et al.(2023)Cao,
Wu, Chen, Gong, Wu, Ye, Zha, Qian,
Ying, Guo, Zhu, Huang, Zhao, Li, Wang,
Yu, Fan, Wu, Su, Deng, Rong, Li,
Zhang, Chung, Liang,
Lin, Xu, Sun, Guo, Li, Huo, Peng,
Lu, Yuan, Zhu, and Pan]Cao2023super
author author S. Cao, author B. Wu, author F. Chen, author
M. Gong, author Y. Wu, author Y. Ye, author C. Zha, author H. Qian, author
C. Ying, author S. Guo, author Q. Zhu, author H.-L. Huang,
author Y. Zhao, author
S. Li, author S. Wang, author J. Yu, author D. Fan, author D. Wu, author
H. Su, author H. Deng, author H. Rong, author Y. Li, author K. Zhang, author
T.-H. Chung, author
F. Liang, author J. Lin, author Y. Xu, author L. Sun, author C. Guo, author
N. Li, author Y.-H. Huo, author C.-Z. Peng, author C.-Y. Lu, author X. Yuan, author X. Zhu, and author
J.-W. Pan, title title Generation of genuine entanglement up to 51 superconducting
qubits, https://doi.org/10.1038/s41586-023-06195-1 journal journal Nature volume 619, pages 738 (year 2023)NoStop
[Leggett and Garg(1985)]Leggett1985quantum
author author A. J. Leggett and author A. Garg, title title Quantum mechanics versus macroscopic
realism: Is the flux there when nobody looks?, https://doi.org/10.1103/PhysRevLett.54.857 journal journal Phys. Rev. Lett. volume 54, pages 857 (year 1985)NoStop
[Emary et al.(2013)Emary,
Lambert, and Nori]emary2013leggett
author author C. Emary, author N. Lambert, and author F. Nori, title title Leggett–garg inequalities, https://iopscience.iop.org/article/10.1088/0034-4885/77/1/016001/meta
journal journal Reports on Progress in Physics volume 77, pages 016001 (year 2013)NoStop
[Budroni et al.(2022)Budroni, Cabello, Gühne, Kleinmann, and Larsson]budroni2022KS
author author C. Budroni, author A. Cabello,
author O. Gühne, author M. Kleinmann, and author J.-A. Larsson, title title Kochen-specker contextuality, https://doi.org/10.1103/RevModPhys.94.045007 journal
journal Rev. Mod. Phys. volume 94, pages 045007 (year 2022)NoStop
[Vitagliano and Budroni(2023)]vitagliano2023leggett
author author G. Vitagliano and author C. Budroni, title title Leggett-garg macrorealism
and temporal correlations, https://doi.org/10.1103/PhysRevA.107.040101 journal journal Phys. Rev. A volume 107, pages 040101 (year 2023)NoStop
[Chen and Eisert(2024)]chen2024semi
author author S.-L. Chen and author J. Eisert, title title Semi-device-independently
characterizing quantum temporal correlations, https://link.aps.org/doi/10.1103/PhysRevLett.132.220201 journal journal Phys. Rev. Lett. volume 132, pages 220201 (year
2024)NoStop
[Fritz(2010)]fritz2010quantum
author author T. Fritz, title title Quantum correlations in the
temporal clauser–horne–shimony–holt (chsh) scenario, https://iopscience.iop.org/article/10.1088/1367-2630/12/8/083055/meta
journal journal New J. Phys. volume 12, pages 083055 (year
2010)NoStop
[Budroni et al.(2013)Budroni, Moroder, Kleinmann, and Gühne]costantino2013bounding
author author C. Budroni, author T. Moroder,
author M. Kleinmann, and author O. Gühne, title title Bounding temporal quantum correlations, https://doi.org/10.1103/PhysRevLett.111.020403 journal
journal Phys. Rev. Lett. volume 111, pages 020403 (year 2013)NoStop
[Wolf and Perez-Garcia(2009)]wolf2009assessing
author author M. M. Wolf and author D. Perez-Garcia, title title Assessing quantum
dimensionality from observable dynamics, https://doi.org/10.1103/PhysRevLett.102.190504 journal
journal Phys. Rev. Lett. volume 102, pages 190504 (year 2009)NoStop
[Spee et al.(2020)Spee,
Siebeneich, Gloger, Kaufmann,
Johanning, Kleinmann, Wunderlich, and Gühne]spee2020genuine
author author C. Spee, author H. Siebeneich,
author T. F. Gloger, author P. Kaufmann, author
M. Johanning, author
M. Kleinmann, author
C. Wunderlich, and author
O. Gühne, title title Genuine temporal correlations can certify the quantum dimension, https://iopscience.iop.org/article/10.1088/1367-2630/ab6d42/meta
journal journal New J. Phys. volume 22, pages 023028 (year
2020)NoStop
[Vieira et al.(2024)Vieira,
Milz, Vitagliano, and Budroni]vieira2024witnessing
author author L. B. Vieira, author S. Milz,
author G. Vitagliano, and author C. Budroni, title title Witnessing environment dimension through temporal
correlations, https://doi.org/10.22331/q-2024-01-10-1224
journal journal Quantum volume 8, pages 1224 (year 2024)NoStop
[Erker et al.(2017)Erker,
Mitchison, Silva, Woods,
Brunner, and Huber]Paul2017autonomous
author author P. Erker, author M. T. Mitchison, author R. Silva,
author M. P. Woods, author N. Brunner, and author
M. Huber, title title Autonomous quantum clocks: Does thermodynamics limit our ability to
measure time?, https://doi.org/10.1103/PhysRevX.7.031022
journal journal Phys. Rev. X volume 7, pages 031022 (year
2017)NoStop
[Budroni et al.(2021)Budroni, Vitagliano, and Woods]costantino2021clock
author author C. Budroni, author G. Vitagliano, and author M. P. Woods, title title Ticking-clock performance
enhanced by nonclassical temporal correlations, https://doi.org/10.1103/PhysRevResearch.3.033051 journal
journal Phys. Rev. Res. volume 3, pages 033051 (year 2021)NoStop
[Woods et al.(2022)Woods,
Silva, Pütz, Stupar, and Renner]woods2022quantum
author author M. P. Woods, author R. Silva,
author G. Pütz, author S. Stupar, and author
R. Renner, title title Quantum clocks are more accurate than classical ones, https://doi.org/10.1103/PRXQuantum.3.010319 journal journal PRX Quantum volume 3, pages
010319 (year 2022)NoStop
[Fitzsimons et al.(2015)Fitzsimons, Jones, and Vedral]fitzsimons2015quantum
author author J. F. Fitzsimons, author J. A. Jones, and author V. Vedral, title title Quantum correlations which
imply causation, https://doi.org/10.1038/srep18281 journal journal Sci. Rep. volume
5, pages 18281 (year 2015)NoStop
[Shrotriya et al.(2022)Shrotriya, Kwek, and Bharti]shrotriya2022certifying
author author H. Shrotriya, author L.-C. Kwek, and author K. Bharti, title title Certifying temporal correlations, https://doi.org/10.48550/arXiv.2206.06092 journal
journal arXiv preprint arXiv:2206.06092 (year
2022)NoStop
[Pusuluk et al.(2022)Pusuluk, Gedik, and Vedral]pusuluk2022witnessing
author author O. Pusuluk, author Z. Gedik, and author V. Vedral, title title Witnessing superpositions of causal
orders by weak measurements at a given time, https://doi.org/10.48550/arXiv.2209.09172 journal journal arXiv:2209.09172 (year 2022)NoStop
[Liu et al.(2024a)Liu, Chen, and Dahlsten]liu2024inferring
author author X. Liu, author Q. Chen, and author O. Dahlsten, title title Inferring the arrow of time in quantum
spatiotemporal correlations, https://doi.org/10.1103/PhysRevA.109.032219 journal journal Phys. Rev. A volume 109, pages 032219 (year 2024a)NoStop
[Liu et al.(2023)Liu,
Qiu, Dahlsten, and Vedral]liu2023quantum
author author X. Liu, author Y. Qiu, author O. Dahlsten, and author V. Vedral, title
title Quantum causal inference with extremely light touch, https://doi.org/10.48550/arXiv.2303.10544 journal
journal arXiv:2303.10544 (year
2023)NoStop
[Marletto et al.(2019)Marletto, Vedral, Virzì, Rebufello, Avella, Piacentini,
Gramegna, Degiovanni, and Genovese]marletto2019theoretical
author author C. Marletto, author V. Vedral,
author S. Virzì, author E. Rebufello, author
A. Avella, author F. Piacentini, author M. Gramegna, author I. P. Degiovanni, and author M. Genovese, title title
Theoretical description and experimental simulation of quantum entanglement
near open time-like curves via pseudo-density operators, https://www.nature.com/articles/s41467-018-08100-1 journal
journal Nature Commun. volume 10, pages 182 (year 2019)NoStop
[Jia et al.(2023)Jia,
Song, and Kaszlikowski]jia2023quantum
author author Z. Jia, author M. Song, and author D. Kaszlikowski, title title Quantum space-time marginal problem:
global causal structure from local causal information, https://doi.org/10.1088/1367-2630/ad1416 journal journal New J. Phys. volume 25, pages 123038 (year 2023)NoStop
[Song et al.(2023)Song,
Narasimhachar, Regula, Elliott, and Gu]song2023causal
author author M. Song, author V. Narasimhachar,
author B. Regula, author T. J. Elliott, and author M. Gu, title
title Causal classification of spatiotemporal quantum
correlations, https://doi.org/10.48550/arXiv.2306.09336 journal journal arXiv:2306.09336 (year
2023)NoStop
[Zhao et al.(2018)Zhao,
Pisarczyk, Thompson, Gu,
Vedral, and Fitzsimons]zhao2018geometry
author author Z. Zhao, author R. Pisarczyk,
author J. Thompson, author M. Gu, author
V. Vedral, and author
J. F. Fitzsimons, title
title Geometry of quantum correlations in space-time, https://doi.org/10.1103/PhysRevA.98.052312 journal journal Phys. Rev. A volume 98, pages 052312 (year 2018)NoStop
[Zhang et al.(2020)Zhang,
Dahlsten, and Vedral]zhang2020different
author author T. Zhang, author O. Dahlsten, and author V. Vedral, title title Different instances of time as
different quantum modes: quantum states across space-time for continuous
variables, https://iopscience.iop.org/article/10.1088/1367-2630/ab6b9f/meta journal journal New J. Phys. volume
22, pages 023029 (year 2020)NoStop
[Marletto et al.(2021)Marletto, Vedral, Virzì, Avella, Piacentini, Gramegna, Degiovanni, and Genovese]marletto2021temporal
author author C. Marletto, author V. Vedral,
author S. Virzì, author A. Avella, author
F. Piacentini, author
M. Gramegna, author
I. P. Degiovanni, and author
M. Genovese, title title Temporal teleportation with pseudo-density operators: How dynamics
emerges from temporal entanglement, https://www.science.org/doi/full/10.1126/sciadv.abe4742 journal journal Sci. Adv. volume
7, pages eabe4742 (year 2021)NoStop
[Liu et al.(2024b)Liu, Jia, Qiu, Li, and Dahlsten]liu2024unification
author author X. Liu, author Z. Jia, author Y. Qiu, author
F. Li, and author
O. Dahlsten, title title Unification of spatiotemporal quantum formalisms: mapping between
process and pseudo-density matrices via multiple-time states, https://doi.org/10.1088/1367-2630/ad264c journal journal New J. Phys. volume 26, pages 033008 (year 2024b)NoStop
[Pisarczyk et al.(2019)Pisarczyk, Zhao, Ouyang, Vedral, and Fitzsimons]Pisarczyk2019causal
author author R. Pisarczyk, author Z. Zhao,
author Y. Ouyang, author V. Vedral, and author
J. F. Fitzsimons, title
title Causal limit on quantum communication, https://doi.org/10.1103/PhysRevLett.123.150502 journal
journal Phys. Rev. Lett. volume 123, pages 150502 (year 2019)NoStop
[Aaronson(2018)]aaronson2018shadow
author author S. Aaronson, title title Shadow tomography of
quantum states, in https://doi.org/10.1145/3188745.3188802 booktitle Proceedings of the 50th Annual ACM SIGACT Symposium on
Theory of Computing, series and number STOC 2018 (publisher Association for Computing Machinery, address New
York, NY, USA, year 2018) p. pages
325–338NoStop
[Huang et al.(2020)Huang,
Kueng, and Preskill]huang2020predicting
author author H.-Y. Huang, author R. Kueng, and author J. Preskill, title title Predicting many properties of a quantum system
from very few measurements, https://doi.org/10.1038/s41567-020-0932-7 journal journal Nature Phys. volume 16, pages 1050 (year 2020)NoStop
[Temme et al.(2017)Temme,
Bravyi, and Gambetta]temme2017pec
author author K. Temme, author S. Bravyi, and author J. M. Gambetta, title title Error mitigation for short-depth
quantum circuits, https://doi.org/10.1103/PhysRevLett.119.180509
journal journal Phys. Rev. Lett. volume 119, pages 180509 (year
2017)NoStop
[Endo et al.(2018)Endo,
Benjamin, and Li]endo2018pec
author author S. Endo, author S. C. Benjamin, and author Y. Li, title title Practical quantum error mitigation for near-future
applications, https://doi.org/10.1103/PhysRevX.8.031027 journal journal Phys. Rev. X volume
8, pages 031027 (year 2018)NoStop
[Elben et al.(2019)Elben,
Vermersch, Roos, and Zoller]Elben2019toolbox
author author A. Elben, author B. Vermersch,
author C. F. Roos, and author P. Zoller, title title Statistical correlations between locally
randomized measurements: A toolbox for probing entanglement in many-body
quantum states, https://doi.org/10.1103/PhysRevA.99.052323
journal journal Phys. Rev. A volume 99, pages 052323 (year
2019)NoStop
[Brydges et al.(2019b)Brydges, Elben,
Jurcevic, Vermersch, Maier,
Lanyon, Zoller, Blatt, and Roos]brydges2019probing
author author T. Brydges, author A. Elben,
author P. Jurcevic, author B. Vermersch, author
C. Maier, author B. P. Lanyon, author P. Zoller, author R. Blatt, and author C. F. Roos, title title Probing rényi
entanglement entropy via randomized measurements, https://doi.org/10.1126/science.aau4963 journal journal Science volume 364, pages
260 (year 2019b)NoStop
[Elben et al.(2023)Elben,
Flammia, Huang, Kueng,
Preskill, Vermersch, and Zoller]Elben2023toolbox
author author A. Elben, author S. T. Flammia,
author H.-Y. Huang, author R. Kueng, author
J. Preskill, author
B. Vermersch, and author
P. Zoller, title title The randomized measurement toolbox, https://doi.org/10.1038/s42254-022-00535-2 journal journal Nat. Rev. Phys. volume 5, pages 9 (year 2023)NoStop
[Fullwood and Parzygnat(2022)]fullwood2022quantum
author author J. Fullwood and author A. J. Parzygnat, title title On quantum states over
time, https://royalsocietypublishing.org/doi/10.1098/rspa.2022.0104 journal journal Proceedings of the Royal Society A volume 478, pages 20220104 (year
2022)NoStop
[Parzygnat and Fullwood(2023)]arthur2023from
author author A. J. Parzygnat and author J. Fullwood, title title From time-reversal
symmetry to quantum bayes' rules, https://doi.org/10.1103/PRXQuantum.4.020334 journal journal PRX Quantum volume 4, pages
020334 (year 2023)NoStop
[Lie and Ng(2024)]lie2024qsot
author author S. H. Lie and author N. H. Y. Ng, title title Quantum state over time is
unique, https://doi.org/10.1103/PhysRevResearch.6.033144
journal journal Phys. Rev. Res. volume 6, pages 033144 (year
2024)NoStop
[Haah et al.(2016)Haah,
Harrow, Ji, Wu, and Yu]haah2016tomo
author author J. Haah, author A. W. Harrow,
author Z. Ji, author
X. Wu, and author
N. Yu, title title Sample-optimal tomography of quantum states, in https://doi.org/10.1145/2897518.2897585 booktitle
Proceedings of the Forty-Eighth Annual ACM Symposium on Theory of
Computing, series and number STOC '16 (publisher Association for Computing Machinery, address New
York, NY, USA, year 2016) p. pages
913–925NoStop
[Chen et al.(2023)Chen,
Huang, Li, Liu, and Sellke]chen2023adaptivity
author author S. Chen, author B. Huang,
author J. Li, author
A. Liu, and author
M. Sellke, title title When does adaptivity help for quantum state learning?, in https://doi.org/10.1109/FOCS57990.2023.00029 booktitle
2023 IEEE 64th Annual Symposium on Foundations of Computer Science
(FOCS) (year 2023) pp. pages
391–404NoStop
[Carteret(2005)]carteret2005peres
author author H. A. Carteret, title title Noiseless quantum
circuits for the peres separability criterion, https://doi.org/10.1103/PhysRevLett.94.040502 journal
journal Phys. Rev. Lett. volume 94, pages 040502 (year 2005)NoStop
[Cai and Song(2008)]cai2008novel
author author J. Cai and author W. Song, title title Novel schemes for directly measuring
entanglement of general states, https://doi.org/10.1103/PhysRevLett.101.190503 journal
journal Phys. Rev. Lett. volume 101, pages 190503 (year 2008)NoStop
[Gray et al.(2018)Gray,
Banchi, Bayat, and Bose]gray2018ml
author author J. Gray, author L. Banchi,
author A. Bayat, and author S. Bose, title
title Machine-learning-assisted many-body entanglement
measurement, https://doi.org/10.1103/PhysRevLett.121.150503
journal journal Phys. Rev. Lett. volume 121, pages 150503 (year
2018)NoStop
[Elben et al.(2020)Elben,
Kueng, Huang, van Bijnen,
Kokail, Dalmonte, Calabrese,
Kraus, Preskill, Zoller, and Vermersch]elben2020mixed
author author A. Elben, author R. Kueng,
author H.-Y. R. Huang, author R. van Bijnen, author
C. Kokail, author M. Dalmonte, author P. Calabrese, author B. Kraus, author J. Preskill, author P. Zoller, and author B. Vermersch, title title
Mixed-state entanglement from local randomized measurements, https://doi.org/10.1103/PhysRevLett.125.200501 journal
journal Phys. Rev. Lett. volume 125, pages 200501 (year 2020)NoStop
[Zhou et al.(2020)Zhou,
Zeng, and Liu]zhou2020single
author author Y. Zhou, author P. Zeng, and author Z. Liu, title title Single-copies estimation of entanglement
negativity, https://doi.org/10.1103/PhysRevLett.125.200502
journal journal Phys. Rev. Lett. volume 125, pages 200502 (year
2020)NoStop
[Neven et al.(2021)Neven,
Carrasco, Vitale, Kokail,
Elben, Dalmonte, Calabrese,
Zoller, Vermersch, Kueng, and Kraus]Neven2021resolved
author author A. Neven, author J. Carrasco,
author V. Vitale, author C. Kokail, author
A. Elben, author M. Dalmonte, author P. Calabrese, author P. Zoller, author B. Vermersch, author R. Kueng, and author B. Kraus, title title
Symmetry-resolved entanglement detection using partial transpose moments, https://doi.org/10.1038/s41534-021-00487-y journal
journal npj Quantum Inf. volume 7, pages 152 (year 2021)NoStop
[Yu et al.(2021)Yu,
Imai, and Gühne]yu2021optimal
author author X.-D. Yu, author S. Imai, and author O. Gühne, title title Optimal entanglement certification from moments of
the partial transpose, https://doi.org/10.1103/PhysRevLett.127.060504 journal
journal Phys. Rev. Lett. volume 127, pages 060504 (year 2021)NoStop
[Liu et al.(2022)Liu,
Tang, Dai, Liu, Chen, and Ma]liu2022detecting
author author Z. Liu, author Y. Tang, author H. Dai, author
P. Liu, author S. Chen, and author X. Ma, title title Detecting
entanglement in quantum many-body systems via permutation moments, https://doi.org/10.1103/PhysRevLett.129.260501 journal
journal Phys. Rev. Lett. volume 129, pages 260501 (year 2022)NoStop
[Parzygnat et al.(2024)Parzygnat, Fullwood, Buscemi, and Chiribella]arthur2024virtual
author author A. J. Parzygnat, author J. Fullwood,
author F. Buscemi, and author G. Chiribella, title
title Virtual quantum broadcasting, https://doi.org/10.1103/PhysRevLett.132.110203 journal
journal Phys. Rev. Lett. volume 132, pages 110203 (year 2024)NoStop
[Chen et al.(2022)Chen,
Cotler, Huang, and Li]chen2022separation
author author S. Chen, author J. Cotler,
author H.-Y. Huang, and author J. Li, title title Exponential separations between learning with and
without quantum memory, in https://doi.org/10.1109/FOCS52979.2021.00063 booktitle
2021 IEEE 62nd Annual Symposium on Foundations of Computer Science
(FOCS) (year 2022) pp. pages
574–585NoStop
[Chen et al.(2024)Chen,
Oh, Zhou, Huang, and Jiang]chen2024tight
author author S. Chen, author C. Oh, author S. Zhou, author
H.-Y. Huang, and author
L. Jiang, title title Tight bounds on pauli channel learning without entanglement, https://doi.org/10.1103/PhysRevLett.132.180805 journal
journal Phys. Rev. Lett. volume 132, pages 180805 (year 2024)NoStop
[Zhu(2017)]zhu2017clifford
author author H. Zhu, title title Multiqubit clifford groups
are unitary 3-designs, https://doi.org/10.1103/PhysRevA.96.062336
journal journal Phys. Rev. A volume 96, pages 062336 (year
2017)NoStop
[Long et al.(2022)Long,
He, Zhang, Tang,
Lin, Liu, Nie, Feng, Li, Xin, Ai, and Lu]PhysRevLett.129.070502
author author X. Long, author W.-T. He,
author N.-N. Zhang, author K. Tang, author
Z. Lin, author H. Liu, author X. Nie, author G. Feng, author J. Li, author
T. Xin, author Q. Ai, and author D. Lu, title title
Entanglement-enhanced quantum metrology in colored noise by quantum zeno
effect, https://doi.org/10.1103/PhysRevLett.129.070502 journal journal Phys. Rev. Lett. volume 129, pages 070502 (year
2022)NoStop
[Huang et al.(2024)Huang,
Xi, Long, Liu, Fan, Wang, Zheng, Feng,
Nie, and Lu]PhysRevLett.132.210403
author author K. Huang, author C. Xi, author X. Long, author
H. Liu, author Y.-a. Fan, author X. Wang, author Y. Zheng, author Y. Feng,
author X. Nie, and author D. Lu, title
title Experimental realization of self-contained quantum
refrigeration, https://doi.org/10.1103/PhysRevLett.132.210403
journal journal Phys. Rev. Lett. volume 132, pages 210403 (year
2024)NoStop
[Khaneja et al.(2005)Khaneja, Reiss, Kehlet, Schulte-Herbrüggen, and Glaser]khaneja2005optimal
author author N. Khaneja, author T. Reiss,
author C. Kehlet, author T. Schulte-Herbrüggen, and author S. J. Glaser, title title Optimal control of coupled spin dynamics: design
of nmr pulse sequences by gradient ascent algorithms, @noop
journal journal Journal of magnetic resonance volume 172, pages 296 (year
2005)NoStop
[Li et al.(2017)Li,
Huang, Luo, Li, Lu, and Zeng]li2017NMRmeasurement
author author J. Li, author S. Huang, author Z. Luo, author
K. Li, author D. Lu, and author B. Zeng, title title Optimal
design of measurement settings for quantum-state-tomography experiments, https://doi.org/10.1103/PhysRevA.96.032307 journal
journal Phys. Rev. A volume 96, pages 032307 (year 2017)NoStop
§ RANDOM UNITARIES
Here we provide some preliminaries of random unitaries.
Intuitively, the Haar measure is the uniform distribution over a unitary group that satisfies
∫_Haarf(U)dU = ∫_Haarf(UV)dU =∫_Haarf(VU)dU
for arbitrary unitary V and function f(·).
According to the Schur-Wely duality, we have
Φ_t(X)=∫_HaarU^⊗ tX U^†⊗ tdU=∑_π,τ∈S_tC_π,τ(Ŵ_π X)Ŵ_τ,
where π and τ are elements in the t-th order permutation group S_t, Ŵ_π is the permutation operator associating with π, C_π,τ is the element of Weingarten matrix.
In addition to the Haar measure distribution, averaging over some other unitary sets can give us the same result.
Specifically, if a unitary set ℰ_t is a unitary t-design, then
∫_HaarU^⊗ t^'X U^†⊗ t^'dU=1/ℰ_t∑_U∈ℰ_tU^⊗ t^'X U^†⊗ t^'
holds for all t^'≤ t and X.
When t=2, which is the case that relates to our protocol, we have
Φ_2(X)=1/d^2-1((X)I-1/d(X)S-1/d(SX)I+(SX)S),
where X is a d^2× d^2 matrix, and S and I are the SWAP and identity operators, which correspond to the two elements of S_2. When setting X=∑_s,s^'X(s,s^')|s,s^'⟩⟨s,s^'|, where X(s,s^')=-(-d)^δ_s,s^', we have Φ_2(X)=S.
§ CIRCUIT ANALYSIS
In this section, we review a closed form of the 2-time pseudo density matrix (PDM) and give a detailed analysis of the tensor network presentation for preparing the virtual PDM.
The definition of the 2-time PDM is
R =1/4^n∑^4^n-1_i,j=0σ_iσ_jσ_i⊗σ_j.
The key to constructing the PDM is to obtain the 2-time correlators ⟨σ_iσ_j⟩. The measurement scheme is crucial since measurements at the earlier time influence the quantum system at a later time. We call measurements that project the quantum state to the ± 1 eigenspace of a n-qubit Pauli observable σ_i the coarse-grained measurements. If we implement the coarse-grained measurements at each time, then a closed form of the PDM can be written as <cit.>
R=1/2[ Λ_C (ρ⊗I) + (ρ⊗I) Λ_C],
where Λ_C denotes Choi–Jamiołkowski (CJ) isomorphism of the channel C, which is defined as
Λ_C=∑_i,j|i⟩⟨j|⊗C( |j⟩⟨i|).
We call Λ_C the CJ matrix of the channel C. The action of a channel C on a quantum state ρ can be equivalently given by
𝒞(ρ)=_1[Λ_𝒞(ρ⊗𝕀)].
To affiliate our analysis, we re-express the PDM and Λ_C in the tensor network representation by Fig. <ref>.
The negativity of the PDM is the key indicator for temporal correlation.
Based on our Observable 1, one can certify negativity in PDM via Tr(R^2). Here we combine the randomized measurement technique and the circuit shown in Fig. <ref> to measure the moments of R, like Tr(R^2).
In addition to the channel C and the state ρ which consist of the PDM, this circuit also contains an ancilla qubit, a maximally mixed state, the controlled SWAP operation, the random unitary evolution, and the computational basis measurements at the end.
To show that this circuit can be used to measure the moment, we first need to analyze how the whole state evolves under this circuit.
At the first dashed line, the whole state evolves into
1/2d[ |0⟩⟨0|⊗ (𝕀⊗ρ) + |0⟩⟨1|⊗ (𝕀⊗ρ) S + |1⟩⟨0|⊗ S (𝕀⊗ρ) + |1⟩⟨1|⊗ S (𝕀⊗ρ) S ]
= 1/2d(
|0⟩⟨0|⊗
< g r a p h i c s >
+
|0⟩⟨1|⊗
< g r a p h i c s >
+
|1⟩⟨0|⊗
< g r a p h i c s >
+
|1⟩⟨1|⊗
< g r a p h i c s >
)
After applying the channel C, the whole state evolves into
1/2d(
|0⟩⟨0|⊗
< g r a p h i c s >
+
|0⟩⟨1|⊗
< g r a p h i c s >
+
|1⟩⟨0|⊗
< g r a p h i c s >
+
|1⟩⟨1|⊗
< g r a p h i c s >
)
= 1/2d(
|0⟩⟨0|⊗
< g r a p h i c s >
+
|0⟩⟨1|⊗
< g r a p h i c s >
+
|1⟩⟨0|⊗
< g r a p h i c s >
+
|1⟩⟨1|⊗
< g r a p h i c s >
),
where the gray dashed lines represent the trace functions.
At the second dashed line in Fig. <ref>, the control qubit is measured in the Pauli-X basis.
Define a new matrix Q=1/2[I_d⊗𝒞(ρ)+ρ⊗𝒞(I_d)].
Thus, the joint measurement probability distribution at the end of this circuit is
Pr(s_a=0,s⃗|U)=1/2d⟨s⃗|U(Q+R)U^†|s⃗⟩
Pr(s_a=1,s⃗|U)=1/2d⟨s⃗|U(Q-R)U^†|s⃗⟩,
where U is a global random unitary acting on two systems, s_a and s⃗ represent the measurement results of control qubit and the other two systems.
Here, s_a=0 corresponds to the control qubit collapsing to |+⟩ and vice versa.
Thus, one can see that the probability distribution of the measurement results contains the information of PDM R.
By taking the difference between these two probabilities, we can equivalently perform randomized measurement on the PDM R and get ⟨s⃗|URU^†|s⃗⟩.
§ RANDOMIZED MEASUREMENTS
We summarize our algorithm below:
Combining Eq. (<ref>) and Eq. (<ref>), we can now construct the estimator for (R^2).
Essentially, one applies N_U independent unitaries in the circuit of Fig. <ref> and measures N_M times under a single unitary.
After collecting all the data, the measurement results acquired with different unitaries will be processed independently.
Assume that the data {(s_a^i,s⃗^i)}_i=1^N_M is collected with the same random unitary, the estimator is
M̂_2^U = d^2/N_M(N_M-1)∑_i≠ j(-1)^s_a^i+s_a^jX(s⃗^i,s⃗^j),
where X(s,s^')=-(-d^2)^δ_s,s^'.
Here we choose d^2 instead of d because the dimensions of PDM and the system being measured are all d^2.
Then, the final estimator is obtained by averaging over all estimators obtained from different unitaries.
The unbiasedness of this estimator can be verified using random unitary theory.
Considering the independence of summation terms in Eq. <ref>, we have
E[M̂_2^U] = d^2E_U,s_a,s_a^',s⃗,s⃗^'(-1)^s_a+s_a^'X(s⃗,s⃗^')
= d^2E_U∑_s_a,s_a^',s⃗,s⃗^'(-1)^s_a+s_a^'X(s⃗,s⃗^') ×Pr(s_a,s⃗|U)×Pr(s_a^',s⃗^'|U)
= d^2E_U∑_s⃗,s⃗^'X(s⃗,s⃗^') ×[Pr(s_a=0,s⃗|U)-Pr(s_a=1,s⃗|U)] ×[Pr(s_a^'=0,s⃗^'|U)-Pr(s_a^'=1,s⃗^'|U)]
= E_U∑_s⃗,s⃗^'X(s⃗,s⃗^') ×⟨s⃗|URU^†|s⃗⟩×⟨s⃗^'|URU^†|s⃗^'⟩
= E_U[R^⊗ 2U^†⊗ 2XU^⊗ 2]
= (R^2),
where X=∑_s⃗,s⃗^'-(-d^2)^δ_s⃗,s⃗^'|s⃗,s⃗^'⟩⟨s⃗,s⃗^'|, the last equality is because E_UU^⊗ 2XU^†⊗ 2=S and (Sσ^⊗ 2)=(σ^2).
§ PROPERTIES OF PDM
To benefit our derivation for sample complexity, we need to have some properties of PDM.
For arbitrary quantum channel 𝒞:ℋ_d→ℋ_d and input state ρ∈ D(ℋ_d), we have (R^2)≤𝒪(d), where the equal sign is reached by the unitary channel and pure input state.
It is easy to prove that (R^2)=[(R^T_1)^2] where
R^T_1=1/2(Λ_𝒞^T_1(ρ⊗𝕀)+(ρ⊗𝕀)Λ_𝒞^T_1)
and T_1 represents partial transposition of indices that contract with indices from ρ.
According to the Choi–Jamiołkowski isomorphism, Λ_𝒞^T_1 is now a positive semi-definite matrix.
We then take a spectral decomposition of Λ_𝒞^T_1=U_1∑_1U_1^† and ρ⊗𝕀=U_2∑_2 U_2^†, where ∑_1 and ∑_2 are positive semi-definite diagonal matrices.
Then we have
(R^2)=1/2[(U_1Σ_1U_1^† U_2Σ_2U_2^† U_1Σ_1U_1^† U_2Σ_2U_2^†)+(U_1Σ_1^2U_1^† U_2Σ_2^2U_2^†)].
Defining V=U_1^† U_2, we have
(R^2)=1/2[ (Σ_1VΣ_2V^†Σ_1VΣ_2V^†)+(Σ_1^2VΣ_2^2V^†) ].
Defining B=VΣ_2V^†, we can simplify the above expression into
(R^2)=1/2[(Σ_1BΣ_1B)+(Σ_1^2B^2)].
Next we are going to prove that (R^2) is a convex function of Σ_1.
We first expand matrices Σ=∑_iλ_i|i⟩⟨i| and B=∑_i,jb_i,j|i⟩⟨j|.
Then
f(Σ_1)=(R^2)=1/2(∑_i,jλ_ib_i,jλ_jb_j,i+∑_i,jλ_i^2b_i,jb_j,i)=1/2∑_i,jb_i,jb_j,i(λ_i^2+λ_iλ_j).
Defining X=∑_ix_i|i⟩⟨i| and Y=∑_jx_j|j⟩⟨j| with x_i,y_i≥0, we have
f(θ X+(1-θ)Y)=1/2∑_i,jb_i,jb_j,i[(θ x_i+(1-θ)y_i)^2+(θ x_i+(1-θ)y_i)(θ x_j+(1-θ)y_j)]
and
θ f(X)+(1-θ) f(Y)=1/2∑_i,jb_i,jb_j,i(θ x_i^2+θ x_ix_j+(1-θ) y_i^2+(1-θ)y_iy_j).
The difference between them is
1/2θ(1-θ)∑_i,jb_i,jb_j,i[(x_i^2+x_ix_j+y_i^2+y_iy_j)-(2x_iy_i+x_iy_j+x_jy_i)]
= 1/2θ(1-θ)∑_i,jb_i,jb_j,i[(x_i-y_i)^2+(x_i-y_i)(x_j-y_j)]
= 1/4θ(1-θ)∑_i,jb_i,jb_j,i[(x_i-y_i)^2+2(x_i-y_i)(x_j-y_j)+(x_j-y_j)^2]
= 1/4θ(1-θ)∑_i,jb_i,jb_j,i[(x_i-y_i)+(x_j-y_j)]^2≥ 0,
which shows that (R^2) is a convex function of Σ_1.
Following the same logic, one can similarly prove that (R^2) is also a convex function of Σ_2.
As Σ_1 and Σ_2 represent the eigenvalues of Λ_𝒞^T_1 and ρ⊗𝕀, they should contain at least one and d positive nonzero elements, respectively.
Therefore, the convexity means that the maximal value of (R^2) can be obtained when 𝒞 is a unitary channel and ρ is a pure state.
When 𝒞=𝒰 is a unitary channel and ρ is a pure state, R^T_1 can be represented as
R^T_1=1/2(|Ψ_𝒰⟩⟨Ψ_𝒰|(ψ⊗𝕀)+(ψ⊗𝕀)|Ψ_𝒰⟩⟨Ψ_𝒰|),
where |Ψ_𝒰⟩ is a d^2-dimensional unnormalized pure state satisfying ⟨Ψ_𝒰|Ψ_𝒰⟩=d.
Then, we have
(R^2)= 1/2[d⟨Ψ_𝒰|(ψ⊗𝕀)|Ψ_𝒰⟩+⟨Ψ_𝒰|(ψ⊗𝕀)|Ψ_𝒰⟩^2]
= 1/2[d(U^†ψ U)+(U^†ψ U)^2]
= 1/2(d+1)=𝒪(d)
Note that this proof cannot be directly adopted to bound (R^2k) as [(R^T_1)^2k]=(R^2t) only holds when k=1.
§ VARIANCE ANALYSIS
Now we need to derive the sample complexity of this protocol.
Specifically, if we want to measure (R^2) to ϵ accuracy, how many experiments we need to perform?
We consider the case that d,N_M≫ 1. By definition,
Var(M̂_2^U)=E[(M̂_2^U)^2]-E(M̂_2^U)^2.
Substituting the estimator Eq. (<ref>), we have
Var(M̂_2^U)=E[d^4/N_M^2(N_M-1)^2∑_i≠ j∑_i^'≠ j^'(-1)^s_a^i+s_a^j+s_a^i^'+s_a^j^'X(s⃗^i,s⃗^j)X(s⃗^i^',s⃗^j^')]-(R^2)^2.
Expanding the summation according to the relation between (i,j) and (i^',j^'), we have
E[∑_i≠ j∑_i^'≠ j^'(-1)^s_a^i+s_a^j+s_a^i^'+s_a^j^'X(s⃗^i,s⃗^j)X(s⃗^i^',s⃗^j^')]
= E[∑_i≠ jX(s⃗^i,s⃗^j)^2+∑_i≠ j≠ k(-1)^s_a^i+s_a^kX(s⃗^i,s⃗^j)X(s⃗^j,s⃗^k)+∑_i≠ j≠ k≠ l(-1)^s_a^i+s_a^j+s_a^k+s_a^lX(s⃗^i,s⃗^j)X(s⃗^k,s⃗^l)]
= 2N_M(N_M-1)E[X(s⃗,s⃗^')^2]+4N_M(N_M-1)(N_M-2)E[(-1)^s_a+s_a^''X(s⃗,s⃗^')X(s⃗^',s⃗^'')]
+N_M(N_M-1)(N_M-2)(N_M-3)E[(-1)^s_a+s_a^'+s_a^''+s_a^'''X(s⃗,s⃗^')X(s⃗^'',s⃗^''')].
We now need to specify the three terms one by one. Firstly,
E[X(s⃗,s⃗^')^2]= E_U{∑_s⃗,s⃗^'[Pr(s_a=0,s⃗|U)+Pr(s_a=1,s⃗|U)][Pr(s_a^'=0,s⃗^'|U)+Pr(s_a^'=1,s⃗^'|U)]X(s⃗,s⃗^')^2}
= 1/d^2E_U{∑_s⃗,s⃗^'⟨s⃗|UQU^†|s⃗⟩⟨s⃗^'|UQU^†|s⃗^'⟩X(s⃗,s⃗^')^2}
= 1/d^2E_U[U^†⊗ 2X^2U^⊗ 2Q^⊗ 2]
= 1/d^2{[d^2I+(d^2-1)S]Q^⊗ 2}
= (Q)^2+d^2-1/d^2(Q^2),
where the fourth equal sign is because E_U(U^†⊗ 2X^2U^⊗ 2)=d^2I+(d^2-1)S, which can be verified using the random unitary theory.
By definition, Q=1/2[I_d⊗𝒞(ρ)+ρ⊗𝒞(I_d)].
Thus, we have (Q)=d and (Q^2)=d^2/4[𝒞(𝕀_d/d)^2](ρ^2)+d/4[C(ρ)^2]+d/2[𝒞(ρ)𝒞(𝕀_d/d)].
Substituting this into the calculation of the expectation, we get the first term
E[X(s⃗,s⃗^')^2]=d^2+d^2-1/d^2{d^2/4[𝒞(𝕀_d/d)^2](ρ^2)+d/4[C(ρ)^2]+d/2[𝒞(ρ)𝒞(𝕀_d/d)]}≤5/4d^2+O(d).
Similarly, for the second term, we have
E[(-1)^s_a+s_a^''X(s⃗,s⃗^')X(s⃗^',s⃗^'')]
= 1/d^3E_U{∑_s⃗,s⃗^',s⃗^''⟨s⃗|URU^†|s⃗⟩⟨s⃗^'|UQU^†|s⃗^'⟩⟨s⃗^''|UQU^†|s⃗^''⟩X(s⃗,s⃗^')X(s⃗^',s⃗^'')}
= 1/d^3E_U[U^†⊗ 3X_3U^⊗ 3(R⊗ Q⊗ R)]
= -1/d^3(d^2+2)[(Q)(R)^2+2(R)(QR)]+d^2+1/d^3(d^2+2)[(Q)(R^2)+2(QR^2)],
where X_3=∑_s⃗,s⃗^',s⃗^''X(s⃗,s⃗^')X(s⃗^',s⃗^'')|s⃗,s⃗^',s⃗^''⟩⟨s⃗,s⃗^',s⃗^''| and the proof of the last equal sign can be found in Ref. <cit.>.
Using facts including (Q)=d, (R)=1, (Q^2)≤O(d^2), (R^4)≤(R^2)^2≤ d^2, (QR)≤√((Q^2)(R^2))≤O(d^3/2), and (QR^2)≤(Q)(R^2)≤O(d^2), we have
E[(-1)^s_a+s_a^''X(s⃗,s⃗^')X(s⃗^',s⃗^'')]≤O(1/d).
Combining the conclusions derived in Ref. <cit.> and (R), (R^2), (R^4)>0, we can prove that the next term is
E[(-1)^s_a+s_a^'+s_a^''+s_a^'''X(s⃗,s⃗^')X(s⃗^'',s⃗^''')]
= 1/d^4E_U[U^†⊗ 4X^⊗ 2U^⊗ 4R^⊗ 4]
≤ 2/d^6(d^2+2)(d^2+3)[1+2(R^2)]+8(d^2+1)/d^6(d^2+2)(d^2+3)(R^3)
+d^2+1/d^6(d^2+3)[2(R^2)^2+2(R^4)]+d^2(d^2+2)(d^2+3)+2/d^6(d^2+2)(d^2+3)(R^2)^2.
As (R^3)≤√((R^2)(R^4))≤𝒪(d^3/2), we have
E[(-1)^s_a+s_a^'+s_a^''+s_a^'''X(s⃗,s⃗^')X(s⃗^'',s⃗^''')]≤𝒪(1/d^4)+(R^2)^2/d^4
Substituting Eq. (<ref>), Eq. (<ref>), and Eq. (<ref>) into Eq. (<ref>), we have
Var(M̂_2^U)≤𝒪(d^6/N_M^2+d^3/N_M+1).
As the final estimator is obtained by averaging over data collected in N_U different unitaries, the total variance is
𝒪[1/N_U(d^6/N_M^2+d^3/N_M+1)].
Thus, according to Chebyshev's inequality, to make sure that M̂_2-(R^2)≤ϵ with probability at least 1-δ, the experiment complexity should satisfy N_M=𝒪(d^3) and N_U=𝒪(1/ϵ^2δ).
The total sample complexity is N_M× N_U=𝒪(d^3/ϵ^2δ)
As discussed in the main context, for NMR platform, it is equivalent to the case of N_M=∞ as every computational basis measurement is performed on an ensemble with particle number of the thermodynamic limit.
Then, the total sample complexity is equivalent to the number of different unitaries N_U=𝒪(1/ϵ^2δ), which is independent of the system size.
§ THEORETICAL MODEL IN OUR EXPERIMENT
We introduce the physical process and the corresponding PDM of our experiment. In the main body, the physical process involves a quantum state |ψ⟩=√(p)|0⟩+√(1-p)|1⟩
undergoing a partial swap interaction with the environment qubit γ_E= |0⟩⟨0|. To simplify our analysis, we take p=1 in the following.
Consider a system qubit ρ_S= |0⟩⟨0| interacts with an environment qubit γ_E= |0⟩⟨0| via the partial swap interaction
V=e^-i θ S= cos(θ) 𝕀 + i sin(θ) S := c 𝕀 + i s S,
where S denotes the 2-qubit swap operator and θ∈ [0,π/2].
The effective dynamics of the system can be modeled as a partial replacement channel N with the set of Kraus operators given by
{K_1 = c 𝕀 + i s |0⟩⟨0|, K_2= is |0⟩⟨1|}.
One sees that when c=0, the initial state of the system cannot influence its final state, i.e., no temporal correlation from the input to the output. However, the influence from its input to output increases when c increases.
Given the Kraus operators of the channel N, one can calculate the CJ matrix Λ_N. Therefore, the corresponding PDM of the system across two times is given by
R= 1/2[ (ρ_s ⊗𝕀) Λ_N + Λ_N (ρ⊗𝕀) ] =
[ 1 0 0 0; 0 0 c(c + i s)/2 0; 0 c(c - i s)/2 0 0; 0 0 0 0; ].
We proceed to obtain
(R^2)-1 = 1/2c^2.
The quantity (R^2)-1 is equal to 0 when c=0, and monotonically increases as c increase. Hence, (R^2) correctly characterizes the temporal correlation (causal influence) from the system's input to its output.
§ EXPERIMENTAL DETAILS
Our experiments were conducted using a nuclear magnetic resonance (NMR) quantum processor, which utilizes the nuclear spins within a molecule to encode qubits. Initially, we provide a detailed characterization of the NMR system, covering aspects such as sample preparation, control mechanisms, and measurement techniques. Subsequently, we will elaborate on the method for creating a pseudo-pure state (PPS). Lastly, we describe the process of quantum state tomography in NMR systems.
Characterization–In this experiment, the ^13C-labeled trans-crotonic acid dissolved in d_6 acetone is used as a 4-qubit quantum processor. The molecular structure and the relevant parameters is shown in Fig. <ref>, where ^13C_1 to ^13C_4 correspond to qubits Q1 to Q4. The methyl group M and all hydrogen atoms were decoupled throughout all experiments. The total Hamiltonian ℋ_tot of this system includes the internal Hamiltonian ℋ_int and the control Hamiltonian ℋ_con is
ℋ_tot = ℋ_int +ℋ_con
= ∑_i = 1^4 πν_iσ _z^i + ∑_1 ≤ i < j ≤ 4^4 π/2J_ijσ _z^iσ _z^j
-B_1∑_i = 1^4 γ_i[cos(ω_rft+ ϕ)σ_x^i+sin(ω_rft+ ϕ)σ_y^i],
where the ν_i is the chemical shift of the ith spin and J_ij is the scalar coupling strength between spins i-th and j-th nuclei. Here, B_1, ω_rf and ϕ denote the amplitude, frequency and phase of the control pulse, respectively.
Pseudo-pure state preparation.–At room temperature, the thermal equilibrium state of the four-qubit NMR system is a highly mixed state that described by
ρ_eq = 𝕀/16 + ϵ∑^4_i=1σ_z^i,
where 𝕀 is the 16 × 16 identity matrix, and ϵ, representing polarization, is approximately 10^-5. This state is unsuitable for use as the initial state in quantum computing.
Various initialization methods are available, including the spatial averaging method, line-selective transition method, time averaging method, and cat-state method. In our experiments, we employed the spatial averaging method to initialize the NMR system, applying the pulse sequence illustrated in Fig.<ref>(a). In the circuit diagram, colored rectangles indicate single-qubit rotations performed using rf pulses. The two-qubit gates are achieved through the scalar coupling among different spins combined with shaped pulses. The pulse sequence transforms the equilibrium state described in Eq. <ref> into a pseudo-pure state (PPS), as shown by the equation
ρ_PPS = 1-ϵ'/16𝕀 + ϵ'00000000.
The dominant component, the identity matrix 𝕀, remains constant under any unitary transformation and is undetectable in NMR experiments. This characteristic enables the quantum system to be effectively treated as the pure state 00000000, despite its actual mixed nature. In our experimental setup, we combined each segment of the quantum circuit, separated by three gradient pulses, into one unitary operation. We then utilized the optimal-control algorithm to search for the corresponding rf pulse. The shaped pulses used in the experiments had lengths of 3 ms, 20 ms, 15 ms and 15 ms, respectively. All the pulses exhibited fidelities over 99.5%.
Measurement.–In the NMR quantum processor, the experimental sample consists not of a single molecule, but rather of a system comprising a large number of identical molecules. Consequently, the measurements performed by the NMR system are ensemble averages. After the operation, the nuclear spins precess around the B_0 direction and gradually return to thermal equilibrium. The precessing nuclear spins induce an electrical signal in the x,y-plane. Thus, the NMR system can only measure the transverse magnetization vectors, specifically the expectation values of σ_x and σ_y. In a four-qubit NMR quantum processor, the signal of each spin is usually split into 8 peaks due to the couplings between different nuclei.
According to the spin dynamics in NMR, the signal of each peak includes both real and imaginary components. These components encode the expectation values of the Pauli matrices σ_x and σ_y for the observable spin, respectively.
Consequently, NMR can measure the expectation values of the single-quantum coherence operators consisted of σ_x or σ_y in the target qubit and σ_z or I in the rest qubits. In our protocol, the focus is on measuring longitudinal magnetization observables such as σ_zIII. To facilitate this, readout pulses are applied to convert these observables into their transverse counterparts. Specifically, the readout pulse R_y^1(π/2) is used to measure σ_zIII by transferring it to σ_xIII.
Quantum state tomography–To measuring the Tr(R^2) using randomization measurements, only the diagonal elements of the density matrix of final state are required. Here, we illustrate the process of performing tomography on the diagonal elements of a density matrix. The diagonal elements can be decomposed via the Pauli basis ∏^n_i⊗σ^i_0,z, where the Pauli matrices σ_0=I and σ_z are used. Hence, using the above readout method, the diagonal elements of an unknown quantum state ρ can be determined.
In our experiment, we focus exclusively on the final state of the first three qubits and perform direct measurements on them to obtain the expectation values of the 8 Pauli operators σ_0,z⊗σ_0,z⊗σ_0,z⊗ I. As shown in Fig. <ref>(b), three readout operations are employed to realize the tomography of the reduced 3-qubit diagonal density matrix. Specifically, the figure only shows a subset of the measurable observables that we are concerned with. In reality, the number of observables that can be measured after each readout pulse far exceeds this subset.
§ EXPERIMENTAL TOMOGRAPHY RESULT FOR R
The results of the PDM R are shown in the Fig .<ref> and Fig .<ref>. The solid lines represent the theoretical predication while the color bars with dashed lines indicate the experimentally results.
§ EXPERIMENTAL RESULT OF EACH UNITARY U
In our experiment, we selected 200 Clifford Unitary operators U to conduct the randomized measurements. Each point of Fig. <ref> is derived from 200 experimental results. Here we present the comparison between these experimental results and simulation results behind each data point, with the comparative data displayed in Fig.<ref> and Fig.<ref>.
|
http://arxiv.org/abs/2409.03340v2 | 20240905083427 | Observation of Shapiro steps in an ultracold atomic Josephson junction | [
"Erik Bernhart",
"Marvin Röhrle",
"Vijay Pal Singh",
"Ludwig Mathey",
"Luigi Amico",
"Herwig Ott"
] | cond-mat.quant-gas | [
"cond-mat.quant-gas"
] |
APS/123-QED
Department of Physics and Research Center OPTIMAS, Rheinland-Pfälzische Technische Universität Kaiserslautern-Landau, 67663 Kaiserslautern, Germany
Department of Physics and Research Center OPTIMAS, Rheinland-Pfälzische Technische Universität Kaiserslautern-Landau, 67663 Kaiserslautern, Germany
Quantum Research Centre, Technology Innovation Institute, Abu Dhabi, UAE
Zentrum für Optische Quantentechnologien and Institut für Quantenphysik, Universität Hamburg, 22761 Hamburg, Germany
The Hamburg Centre for Ultrafast Imaging, Luruper Chaussee 149, Hamburg 22761, Germany
Luigi [email protected]
Quantum Research Centre, Technology Innovation Institute, Abu Dhabi, UAE
Dipartimento di Fisica e Astronomia, Università di Catania, Via S. Sofia 64, 95123 Catania, Italy
INFN-Sezione di Catania, Via S. Sofia 64, 95127 Catania, Italy
Herwig [email protected]
Department of Physics and Research Center OPTIMAS, Rheinland-Pfälzische Technische Universität Kaiserslautern-Landau, 67663 Kaiserslautern, Germany
§ ABSTRACT
The current-voltage characteristic of a driven superconducting Josephson junction displays discrete steps. This phenomenon, discovered by Sydney Shapiro, forms today's voltage standard.
Here, we report the observation of Shapiro steps in a driven Josephson junction in a gas of ultracold atoms.
We demonstrate that the steps exhibit universal features, and provide key insight into the microscopic dissipative dynamics that we directly observe in the experiment.
Most importantly, the steps are directly connected to phonon emission and soliton nucleation.
The experimental results are underpinned by extensive numerical simulations based on classical-field dynamics and represent the transfer of the voltage standard to the realm of ultracold quantum gases.
Observation of Shapiro steps in an ultracold atomic Josephson junction
Ludwig Mathey
September 9, 2024
======================================================================
§ INTRODUCTION
The Josephson effect is one of the most fundamental phenomena in quantum science and technology:
originally discovered for superconductors, the effect features a dissipation-less electrical current through a tunnel barrier by virtue of the phase difference of the traversing Cooper pairs' wave function. Above a critical value of the current, a finite voltage occurs across the junction as a result of the formation of superconducting quasi-particles <cit.>. The Josephson effect and Josephson junctions have played a prominent role for understanding the notion of macroscopic quantum coherence, which led to important technological applications such as SQUIDs <cit.> and are the core units of superconducting qubits <cit.>. If an additional microwave field is applied across the Josephson junction, a staircase current-voltage characteristic emerges, referred to as Shapiro steps <cit.>.
The origin of this effect is photon-assisted tunneling of Cooper pairs through the junction <cit.>.
Since the height of the steps depends only on the frequency of the microwave radiation, Planck's constant, and the electric charge, Shapiro steps are nowadays used as the voltage standard - see e.g. <cit.>. Shapiro steps in superfluid ^3He have been investigated in Ref. <cit.>.
The Josephson effect can also be observed in ultracold quantum gases <cit.>.
On the one hand, this enables the fabrication of atomtronic circuits, which have developed into their own field of research in recent years <cit.>. On the other hand, the high degree of control and flexibility over ultracold quantum gases provides unprecedented access to the microscopic physics underlying the system. In particular, Josephson physics has been recently implemented and extensively studied in ultracold atomic systems, both theoretically <cit.> as well as experimentally in bosonic <cit.> and fermionic <cit.> systems.
In implementations with ultracold atoms, the Josephson current-phase relation is realized by moving the barrier through an ultracold gas which is otherwise at rest <cit.>.
Shapiro steps in ultracold gases are predicted to appear if the linear translation of the barrier (corresponding to a dc particle current) is combined with a periodic modulation of the position of the barrier (corresponding to an ac particle current), thus emulating the external microwave radiation of the superconducting Josephson junction <cit.>.
Here, we demonstrate the emergence of Shapiro steps at an atomic Josephson junction and quantify their global and local properties. The steps occur in the chemical potential difference, in conjunction with a density imbalance across the junction. We confirm that the step heights in the chemical potential difference Δμ are quantized by the external driving frequency f_m, i.e., Δμ = nh f_m, where n is the step index and h is Planck's constant.
By spatially resolving the atomic density, we study the microscopic dynamics and observe the propagation of phonons and quasi-particles emerging from the barrier, which we identify as solitons.
This paves the way for studying the microscopic dissipative dynamics of Shapiro steps and other related Josephson transport effects.
Furthermore, since the magnitude of the step height relates the chemical potential difference to a tunable frequency, and since the density imbalance across the barrier is directly measurable, our results suggest an additional insight into the equation of state of strongly correlated superfluids, thus advancing the study of superfluids with a novel approach.
§ EXPERIMENTAL SYSTEM
The experimental setup and the Shapiro protocol are sketched in Fig. <ref>.
We prepare a ^87Rb Bose-Einstein condensate in an elongated optical dipole trap. The condensate has a tube-like cylindrical geometry with harmonic transverse confinement, see Fig. <ref> A.
The chemical potential is μ = h× 1900 s^-1.
A movable repulsive barrier with height V_B = 0.45 ×μ separates the condensate into two parts and thus realizes a weak link <cit.>.
We probe the system by absorption imaging. To achieve a high accuracy, we perform a matter wave imaging scheme <cit.> in order to reduce the optical density <cit.>.
We implement the dc and ac driving protocol following the proposal in Ref. <cit.>. To this end, we move the barrier with constant velocity v and an additional periodic modulation,
x(t) = v t + x_msin(2 π f_m t) ,
where x_m is the modulation amplitude, and f_m the modulation frequency.
We fix the driving time to 33ms, which means that the final position of the barrier depends on v.
At the end, we measure the atom number imbalance z = (N_R - N_L)/N,
where N is the total atom number and N_L (N_R) is the atom number on the left (right) hand side of the barrier.
As a reference z_ref , we also measure the corresponding imbalance without the barrier.
Using a simulation of the Gross-Pitaevskii equation we map Δ z = z -z_ref to the chemical potential difference Δμ <cit.>.
The barrier motion induces an atomic current relative to the barrier of value I, with I = v I_c/v_c, where I_c is the critical current and v_c is the critical velocity <cit.>.
The external current then reads I_ext = I + I_ac = I + I_msin(2 π f_m t), with the bias current I and the modulation current I_m.
The parameters which we modify in our protocol are I, I_ and f_.
In Fig. <ref> C we show the measurements of the current-chemical potential characteristics
that are obtained without and with ac driving.
From the undriven response we characterize the critical velocity and the critical current of the Josephson junction <cit.>.
For the periodically driven case, the system features plateaus of constant chemical potential difference occurring at integer multiple of the driving frequency.
Thereby, the steps occur at lower I for stronger I_. We model our system with extensive classical-field simulations <cit.>,
which capture the feature of our driven Josephson junction well, see Fig. <ref> D, E.
In a superconducting Josephson junction where the Shapiro steps arise from photon-assisted tunneling of the Cooper pairs <cit.>, the voltage V_n of the n-th Shapiro step is directly given by the energy quantization, i.e.,
V_n = n Φ_0 f_m = n f_m h/(2e),
where Φ_0 is the flux quantum and e the elementary charge <cit.>.
For our neutral atom implementation this results in the quantization condition
Δμ = nh f_m .
To confirm this fundamental relation we vary f_ and determine the height Δμ of the first step, see Fig. <ref> A. For the modulation frequencies explored here, we indeed find the behavior Δμ= h f_, see Fig. <ref> B.
This result establishes the transfer of the voltage standard to the realm of ultracold quantum gases <cit.>.
This is particularly useful in situations, where the relation between the atomic density and the chemical potential is not known, as the protocol allows for the generation of a predetermined chemical potential difference.
Next, we analyze the width of the Shapiro steps by varying the modulation amplitude I_m/I_c at a fixed modulation frequency of f_m = 90Hz <cit.>.
In Fig. <ref> C, D, the widths of the zeroth and first steps display a Bessel-function behavior as a function of I_m/I_c, which is in agreement with the results of numerical simulations <cit.>.
To support this observation we use the analytical prediction of an ac voltage driven Josephson junction,
which yields the step width I_n = I_c |J_n(V_/V_n)|,
where J_n is the Bessel function of the n-th order and V_ the modulation voltage <cit.>.
For an ac current driven junction this can be mathematically transformed into
I_n = I_c | J_n (α_th I_m/I_c) | ,
with α_th=R I_c/f_m.
The resistance R and I_c are determined independently, such that there is no free fitting parameter <cit.>.
In Fig. <ref> C, D the results of Eq. <ref> support the overall trend of our measurements and simulations.
§ MICROSCOPIC DYNAMICS
In contrast to solid-state physics the time and length scales in quantum gas experiments are much larger, such that the in situ study of excitations, quasi-particles and related microscopic phenomena becomes possible.
To do so, we follow the time evolution of the real space atomic density.
Using only a constant barrier velocity, i.e., no ac particle current, we measure the dc and ac Josephson regime, see Fig. <ref> A.
Below the critical current no imbalance is created and only a single phononic density wave is emitted from the initial acceleration of the barrier (I and II).
The group velocity of the emitted phonons coincides with a reference measurement, where a single perturbation creates a phonon wave packet <cit.>.
In the ac Josephson regime above the critical current, atoms are pushed away by the barrier and a density depletion remains in its wake (III).
In the Shapiro regime instead, we see that the phase dynamics is characterized by two distinct collective excitations: phonons and localized density depletions (see Fig. <ref> B and C).
Phonons are emitted at distinct times in both directions due to the oscillatory motion of the barrier. This is particularly visible on the zeroth plateau (I).
Increasing the constant current I beyond the zeroth plateau yields a chemical potential difference and atom number imbalance; this phenomenon is characterized by the formation of 'depletion waves' that move backwards compared to the barrier motion (blue lines propagating to the left in II and III).
These excitations are observed to have a smaller group velocity, i.e., a steeper slope and, by using the phase information of the simulation in Fig. <ref> A, we can conclude that such defects are accompanied by well resolved phase slippages with a phase jump of less than π at the density minimum. Therefore we identify these excitations as grey solitons.
These solitons are responsible for most of the reduction in density, ultimately causing the particle imbalance across the junction- see Fig. <ref> B and C.
Specifically, the simulation suggests that for the n-th plateau n solitons are on-average emitted backwards during one driving cycle, see panels II and III of Fig. <ref> C and <cit.>. This creates the reduced density in the wake of the moving barrier, and therefore gives rise to the resistive regime.
On the other hand, being also defects of the phase field, they play a similar role as quasi-particles of the driven superconducting Josephson junctions: as single-particle excitations are responsible for the resistive transport and for the actual phase dynamics across the superconducting junction, soliton emission provides the specific mechanism for the evolution of the phase coherence in our system, thus allowing for phase coherent dynamics even in presence of dissipation.
In this sense, soliton emission in our system plays a similar role to that of single-particle excitations in superconducting Josephson junctions.
This statement is corroborated by the fact that the rate of emitted solitons indeed displays a step-like behavior in close parallel with the staircase displayed by the chemical potential, see Fig. S11 of <cit.>.
We note that such a phenomenon resembles the interplay between single-particle excitations and photon-assisted tunneling occurring in superconducting junctions.
Depending on the geometry the collective excitations can be different. For instance, in the theoretical proposal <cit.>, a larger 2D system was used and vortex-antivortex pairs are generated instead of solitons.
In earlier studies of the dissipation at an atomic Josephson junction, the creation of vortex ring excitations at the barrier was observed <cit.>.
§ DISCUSSION AND OUTLOOK
We have experimentally demonstrated the emergence of Shapiro steps in an ultracold quantum gas by emulating the external microwave field and an alternating particle current across the barrier with a periodic modulation of the barrier position. Exploiting the know-how of the cold atoms field, we could monitor the coherent dynamics of the system with unprecedented accuracy.
This way, our work opens up new directions both in basic and applied physical science.
On the fundamental side, different dimensions, geometries, and particle statistics are direct extensions, as well as superfluid mixtures, dipolar superfluids, and superfluids in optical lattices, which can give rise to new physical processes, e.g. by generating other types of solitons, such as bright solitons, or vortices, or defects related to an underlying lattice or competing orders.
Indeed, Eq. <ref> provides the possibility to measure the equation of state of strongly correlated many-body systems.
In the context of superconducting Josephson junctions, our work motivates to expand the understanding of the microscopic dynamics of the Shapiro effect as well, including the role of superconducting quasi-particles, solitons and vortices.
On the application side, we remark that our system transfers the voltage standard to the field of ultracold quantum gases.
Towards the development of atomtronic technology, Shapiro steps can be used to fine tune quantum transport, as they combine both superfluid and resistive transport.
Stacks of Shapiro steps can be used to create even larger differences in chemical potential over a whole system, or as a source for predetermined chemical potential differences.
We note that Shapiro steps have also been observed recently in an experiment with strongly correlated ultracold fermions in the unitary regime <cit.>.
§ ACKNOWLEDGMENTS
We thank Giacomo Roati and Giulia Del Pace for useful discussions.
Funding:
We gratefully acknowledge financial support by the DFG within the SFB OSCAR (project number 277625399).
L.M. acknowledges support by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation), namely the Cluster of Excellence ‘Advanced Imaging of Matter’ (EXC 2056), Project No. 390715994.
The project is co-financed by ERDF of the European Union and by ’Fonds of the Hamburg Ministry of Science, Research, Equalities and Districts (BWFGB)’.
Author contributions:
E.B., M.R. and H.O. conceived the study. E.B. and M.R. performed the experiment and analyzed the data.
E.B. performed the GPE simulations and V.P.S. modeled classical-field simulations.
H.O. supervised the experiment. L.M. and L.A. supervised the theoretical part of the project.
E.B. prepared the initial version of the manuscript.
All authors contributed to the data interpretation and the writing of the manuscript.
Competing interests:
The authors declare no competing interests.
Data and materials availability:
All data presented in this paper will be deposited at Zenodo.
39
fxundefined [1]
ifx#1
fnum [1]
#1firstoftwo
secondoftwo
fx [1]
#1firstoftwo
secondoftwo
noop [0]secondoftwo
ref[1]@startlink#1@href
href[1]#1@endlink
anitize@url [0]`
12`$12`&12`#12`1̂2`_12`%12
startlink[1]
endlink[0]
rl [1]href #1
@bib@innerbibempty
[Josephson(1962)]JOSEPHSON1962251
author author B. Josephson, title title Possible new effects in superconductive tunnelling, https://doi.org/https://doi.org/10.1016/0031-9163(62)91369-0 journal journal Physics Letters volume 1, pages 251 (year 1962)NoStop
[Tinkham(2004)]tinkham2004introduction
author author M. Tinkham, @noop title Introduction to superconductivity, Vol. volume 1 (publisher Courier Corporation, year 2004)NoStop
[Fagaly(2006)]10.1063/1.2354545
author author R. L. Fagaly, title title Superconducting quantum interference device instruments and applications, https://doi.org/10.1063/1.2354545 journal journal Review of Scientific Instruments volume 77, pages 101101 (year 2006), https://arxiv.org/abs/https://pubs.aip.org/aip/rsi/article-pdf/doi/10.1063/1.2354545/11171638/101101_1_online.pdf https://pubs.aip.org/aip/rsi/article-pdf/doi/10.1063/1.2354545/11171638/101101_1_online.pdf NoStop
[Koelle et al.(1999)Koelle, Kleiner, Ludwig, Dantsker, and Clarke]RevModPhys.71.631
author author D. Koelle, author R. Kleiner, author F. Ludwig, author E. Dantsker, and author J. Clarke, title title High-transition-temperature superconducting quantum interference devices, https://doi.org/10.1103/RevModPhys.71.631 journal journal Rev. Mod. Phys. volume 71, pages 631 (year 1999)NoStop
[Barone and Paterno(1982)]Barone
author author A. Barone and author G. Paterno, @noop title Physics and applications of the josephson effect (year 1982)NoStop
[Leggett(1987)]leggett1987macroscopic
author author A. J. Leggett, title title Macroscopic quantum tunnelling and related matters, @noop journal journal Japanese Journal of Applied Physics volume 26, pages 1986 (year 1987)NoStop
[Clarke and Wilhelm(2008)]Clarke2008
author author J. Clarke and author F. K. Wilhelm, title title Superconducting quantum bits, https://doi.org/10.1038/nature07128 journal journal Nature volume 453, pages 1031 (year 2008)NoStop
[Koch et al.(2007)Koch, Yu, Gambetta, Houck, Schuster, Majer, Blais, Devoret, Girvin, and Schoelkopf]PhysRevA.76.042319
author author J. Koch, author T. M. Yu, author J. Gambetta, author A. A. Houck, author D. I. Schuster, author J. Majer, author A. Blais, author M. H. Devoret, author S. M. Girvin, and author R. J. Schoelkopf, title title Charge-insensitive qubit design derived from the cooper pair box, https://doi.org/10.1103/PhysRevA.76.042319 journal journal Phys. Rev. A volume 76, pages 042319 (year 2007)NoStop
[Johnson et al.(2011)Johnson, Amin, Gildert, Lanting, Hamze, Dickson, Harris, Berkley, Johansson, Bunyk, Chapple, Enderud, Hilton, Karimi, Ladizinsky, Ladizinsky, Oh, Perminov, Rich, Thom, Tolkacheva, Truncik, Uchaikin, Wang, Wilson, and Rose]Johnson2011
author author M. W. Johnson, author M. H. S. Amin, author S. Gildert, author T. Lanting, author F. Hamze, author N. Dickson, author R. Harris, author A. J. Berkley, author J. Johansson, author P. Bunyk, author E. M. Chapple, author C. Enderud, author J. P. Hilton, author K. Karimi, author E. Ladizinsky, author N. Ladizinsky, author T. Oh, author I. Perminov, author C. Rich, author M. C. Thom, author E. Tolkacheva, author C. J. S. Truncik, author S. Uchaikin, author J. Wang, author B. Wilson, and author G. Rose, title title Quantum annealing with manufactured spins, https://doi.org/10.1038/nature10012 journal journal Nature volume 473, pages 194 (year 2011)NoStop
[Arute et al.(2019)Arute, Arya, Babbush, Bacon, Bardin, Barends, Biswas, Boixo, Brandao, Buell, Burkett, Chen, Chen, Chiaro, Collins, Courtney, Dunsworth, Farhi, Foxen, Fowler, Gidney, Giustina, Graff, Guerin, Habegger, Harrigan, Hartmann, Ho, Hoffmann, Huang, Humble, Isakov, Jeffrey, Jiang, Kafri, Kechedzhi, Kelly, Klimov, Knysh, Korotkov,
Kostritsa, Landhuis, Lindmark, Lucero, Lyakh, Mandrà, McClean, McEwen, Megrant, Mi, Michielsen, Mohseni, Mutus, Naaman, Neeley, Neill, Niu, Ostby, Petukhov, Platt, Quintana, Rieffel, Roushan, Rubin, Sank, Satzinger, Smelyanskiy, Sung, Trevithick, Vainsencher, Villalonga, White, Yao, Yeh, Zalcman, Neven, and Martinis]Arute2019
author author F. Arute, author K. Arya, author R. Babbush, author D. Bacon, author J. C. Bardin, author R. Barends, author R. Biswas, author S. Boixo, author F. G. S. L. Brandao, author D. A. Buell, author B. Burkett, author Y. Chen, author Z. Chen, author B. Chiaro, author R. Collins, author W. Courtney, author A. Dunsworth, author E. Farhi, author B. Foxen, author A. Fowler, author C. Gidney, author M. Giustina, author R. Graff, author K. Guerin, author S. Habegger, author M. P. Harrigan, author M. J. Hartmann, author A. Ho, author M. Hoffmann, author T. Huang, author T. S. Humble, author S. V. Isakov, author E. Jeffrey, author Z. Jiang, author D. Kafri, author K. Kechedzhi, author J. Kelly, author P. V. Klimov, author S. Knysh, author A. Korotkov, author F. Kostritsa, author D. Landhuis, author M. Lindmark, author E. Lucero, author D. Lyakh, author S. Mandrà, author J. R. McClean, author M. McEwen, author A. Megrant, author X. Mi, author K. Michielsen, author M. Mohseni, author J. Mutus, author O. Naaman, author M. Neeley, author C. Neill, author M. Y. Niu, author E. Ostby, author A. Petukhov, author J. C. Platt, author C. Quintana, author E. G. Rieffel, author P. Roushan, author N. C. Rubin,
author D. Sank, author K. J. Satzinger, author V. Smelyanskiy, author K. J. Sung, author M. D. Trevithick, author A. Vainsencher, author B. Villalonga, author T. White, author Z. J. Yao, author P. Yeh, author A. Zalcman, author H. Neven, and author J. M. Martinis, title title Quantum supremacy using a programmable superconducting processor, https://doi.org/10.1038/s41586-019-1666-5 journal journal Nature volume 574, pages 505 (year 2019)NoStop
[Shapiro(1963)]ShapiroOriginal
author author S. Shapiro, title title Josephson currents in superconducting tunneling: The effect of microwaves and other observations, https://doi.org/10.1103/PhysRevLett.11.80 journal journal Phys. Rev. Lett. volume 11, pages 80 (year 1963)NoStop
[Pöpel(1992)]popel1992josephson
author author R. Pöpel, title title The josephson effect and voltage standards, https://doi.org/10.1088/0026-1394/29/2/005 journal journal Metrologia volume 29, pages 153 (year 1992)NoStop
[Simmonds et al.(2001)Simmonds, Marchenkov, Davis, and Packard]Shapiro_superfluid
author author R. W. Simmonds, author A. Marchenkov, author J. C. Davis, and author R. E. Packard, title title Observation of the superfluid shapiro effect in a ^3he weak link, https://doi.org/10.1103/PhysRevLett.87.035301 journal journal Phys. Rev. Lett. volume 87, pages 035301 (year 2001)NoStop
[Albiez et al.(2005)Albiez, Gati, Fölling, Hunsmann, Cristiani, and Oberthaler]Oberthaler_2005
author author M. Albiez, author R. Gati, author J. Fölling, author S. Hunsmann, author M. Cristiani, and author M. K. Oberthaler, title title Direct observation of tunneling and nonlinear self-trapping in a single bosonic josephson junction, https://doi.org/10.1103/PhysRevLett.95.010402 journal journal Phys. Rev. Lett. volume 95, pages 010402 (year 2005)NoStop
[Ryu et al.(2013)Ryu, Blackburn, Blinova, and Boshier]Atom_squid
author author C. Ryu, author P. W. Blackburn, author A. A. Blinova, and author M. G. Boshier, title title Experimental realization of josephson junctions for an atom squid, https://doi.org/10.1103/PhysRevLett.111.205301 journal journal Phys. Rev. Lett. volume 111, pages 205301 (year 2013)NoStop
[Amico et al.(2021)Amico, Boshier, Birkl, Minguzzi, Miniatura, Kwek, Aghamalyan, Ahufinger, Anderson, Andrei, Arnold, Baker, Bell, Bland, Brantut, Cassettari, Chetcuti, Chevy, Citro, De Palo, Dumke, Edwards, Folman, Fortagh, Gardiner, Garraway, Gauthier, Günther, Haug, Hufnagel, Keil, Ireland, Lebrat, Li, Longchambon, Mompart, Morsch, Naldesi, Neely,
Olshanii, Orignac, Pandey, Pérez-Obiol, Perrin, Piroli, Polo, Pritchard, Proukakis, Rylands, Rubinsztein-Dunlop, Scazza, Stringari, Tosto, Trombettoni, Victorin, Klitzing, Wilkowski, Xhani, and Yakimenko]roadmap_atomtronic
author author L. Amico, author M. Boshier, author G. Birkl, author A. Minguzzi, author C. Miniatura, author L.-C. Kwek, author D. Aghamalyan, author V. Ahufinger, author D. Anderson, author N. Andrei, author A. S. Arnold, author M. Baker, author T. A. Bell, author T. Bland, author J. P. Brantut, author D. Cassettari, author W. J. Chetcuti, author F. Chevy, author R. Citro, author S. De Palo, author R. Dumke, author M. Edwards, author R. Folman, author J. Fortagh, author S. A. Gardiner, author B. M. Garraway, author G. Gauthier, author A. Günther, author T. Haug, author C. Hufnagel, author M. Keil, author P. Ireland, author M. Lebrat, author W. Li, author L. Longchambon, author J. Mompart, author O. Morsch, author P. Naldesi, author T. W. Neely, author M. Olshanii, author E. Orignac, author S. Pandey, author A. Pérez-Obiol, author H. Perrin, author L. Piroli, author J. Polo, author A. L. Pritchard, author N. P. Proukakis, author C. Rylands, author H. Rubinsztein-Dunlop, author F. Scazza, author S. Stringari, author F. Tosto, author A. Trombettoni, author N. Victorin, author W. v. Klitzing, author D. Wilkowski, author K. Xhani, and author A. Yakimenko, title title Roadmap on Atomtronics: State of the art and perspective, https://doi.org/10.1116/5.0026178 journal journal AVS Quantum Science volume 3, pages 039201
(year 2021), https://arxiv.org/abs/https://pubs.aip.org/avs/aqs/article-pdf/doi/10.1116/5.0026178/19676772/039201_1_5.0026178.pdf https://pubs.aip.org/avs/aqs/article-pdf/doi/10.1116/5.0026178/19676772/039201_1_5.0026178.pdf NoStop
[Amico et al.(2022)Amico, Anderson, Boshier, Brantut, Kwek, Minguzzi, and von Klitzing]Colloquium_Atomtronics
author author L. Amico, author D. Anderson, author M. Boshier, author J.-P. Brantut, author L.-C. Kwek, author A. Minguzzi, and author W. von Klitzing, title title Colloquium: Atomtronic circuits: From many-body physics to quantum technologies, https://doi.org/10.1103/RevModPhys.94.041001 journal journal Rev. Mod. Phys. volume 94, pages 041001 (year 2022)NoStop
[Giovanazzi et al.(2000)Giovanazzi, Smerzi, and Fantoni]Giovanazzi_2000
author author S. Giovanazzi, author A. Smerzi, and author S. Fantoni, title title Josephson effects in dilute bose-einstein condensates, https://doi.org/10.1103/PhysRevLett.84.4521 journal journal Phys. Rev. Lett. volume 84, pages 4521 (year 2000)NoStop
[Meier and Zwerger(2001)]Meier_Zwerger_2001
author author F. Meier and author W. Zwerger, title title Josephson tunneling between weakly interacting bose-einstein condensates, https://doi.org/10.1103/PhysRevA.64.033610 journal journal Phys. Rev. A volume 64, pages 033610 (year 2001)NoStop
[Singh et al.(2020)Singh, Luick, Sobirey, and Mathey]Singh_2020
author author V. P. Singh, author N. Luick, author L. Sobirey, and author L. Mathey, title title Josephson junction dynamics in a two-dimensional ultracold bose gas, https://doi.org/10.1103/PhysRevResearch.2.033298 journal journal Phys. Rev. Res. volume 2, pages 033298 (year 2020)NoStop
[Saha and Dubessy(2021)]PhysRevA.104.023316
author author A. K. Saha and author R. Dubessy, title title Dynamical phase diagram of a one-dimensional bose gas in a box with a tunable weak link: From bose-josephson oscillations to shock waves, https://doi.org/10.1103/PhysRevA.104.023316 journal journal Phys. Rev. A volume 104, pages 023316 (year 2021)NoStop
[Grond et al.(2011)Grond, Betz, Hohenester, Mauser, Schmiedmayer, and Schumm]Grond_2011
author author J. Grond, author T. Betz, author U. Hohenester, author N. J. Mauser, author J. Schmiedmayer, and author T. Schumm, title title The shapiro effect in atomchip-based bosonic josephson junctions, https://doi.org/10.1088/1367-2630/13/6/065026 journal journal New Journal of Physics volume 13, pages 065026 (year 2011)NoStop
[LeBlanc et al.(2011)LeBlanc, Bardon, McKeever, Extavour, Jervis, Thywissen, Piazza, and Smerzi]LeBlanc_2011
author author L. J. LeBlanc, author A. B. Bardon, author J. McKeever, author M. H. T. Extavour, author D. Jervis, author J. H. Thywissen, author F. Piazza, and author A. Smerzi, title title Dynamics of a tunable superfluid junction, https://doi.org/10.1103/PhysRevLett.106.025302 journal journal Phys. Rev. Lett. volume 106, pages 025302 (year 2011)NoStop
[Eckel et al.(2016)Eckel, Lee, Jendrzejewski, Lobb, Campbell, and Hill]Campbell_2016
author author S. Eckel, author J. G. Lee, author F. Jendrzejewski, author C. J. Lobb, author G. K. Campbell, and author W. T. Hill, title title Contact resistance and phase slips in mesoscopic superfluid-atom transport, https://doi.org/10.1103/PhysRevA.93.063619 journal journal Phys. Rev. A volume 93, pages 063619 (year 2016)NoStop
[Ji et al.(2022)Ji, Schweigler, Tajik, Cataldini, Sabino, Møller, Erne, and Schmiedmayer]Ji_2022
author author S.-C. Ji, author T. Schweigler, author M. Tajik, author F. Cataldini, author J. a. Sabino, author F. S. Møller, author S. Erne, and author J. Schmiedmayer, title title Floquet engineering a bosonic josephson junction, https://doi.org/10.1103/PhysRevLett.129.080402 journal journal Phys. Rev. Lett. volume 129, pages 080402 (year 2022)NoStop
[Valtolina et al.(2015)Valtolina, Burchianti, Amico, Neri, Xhani, Seman, Trombettoni, Smerzi, Zaccanti, Inguscio, and Roati]Roati_dc_jj_scince_2015
author author G. Valtolina, author A. Burchianti, author A. Amico, author E. Neri, author K. Xhani, author J. A. Seman, author A. Trombettoni, author A. Smerzi, author M. Zaccanti, author M. Inguscio, and author G. Roati, title title Josephson effect in fermionic superfluids across the bec-bcs crossover, https://doi.org/10.1126/science.aac9725 journal journal Science volume 350, pages 1505 (year
2015), https://arxiv.org/abs/https://www.science.org/doi/pdf/10.1126/science.aac9725 https://www.science.org/doi/pdf/10.1126/science.aac9725 NoStop
[Luick et al.(2020)Luick, Sobirey, Bohlen, Singh, Mathey, Lompe, and Moritz]Moritz_jj_science
author author N. Luick, author L. Sobirey, author M. Bohlen, author V. P. Singh, author L. Mathey, author T. Lompe, and author H. Moritz, title title An ideal josephson junction in an ultracold two-dimensional fermi gas, https://doi.org/10.1126/science.aaz2342 journal journal Science volume 369, pages 89 (year 2020), https://arxiv.org/abs/https://www.science.org/doi/pdf/10.1126/science.aaz2342 https://www.science.org/doi/pdf/10.1126/science.aaz2342 NoStop
[Kwon et al.(2020)Kwon, Pace, Panza, Inguscio, Zwerger, Zaccanti, Scazza, and Roati]Roati_acdc_jj_2020
author author W. J. Kwon, author G. D. Pace, author R. Panza, author M. Inguscio, author W. Zwerger, author M. Zaccanti, author F. Scazza, and author G. Roati, title title Strongly correlated superfluid order parameters from dc josephson supercurrents, https://doi.org/10.1126/science.aaz2463 journal journal Science volume 369, pages 84 (year 2020), https://arxiv.org/abs/https://www.science.org/doi/pdf/10.1126/science.aaz2463 https://www.science.org/doi/pdf/10.1126/science.aaz2463 NoStop
[Del Pace et al.(2021)Del Pace, Kwon, Zaccanti, Roati, and Scazza]Roati_acdc_jj_2021
author author G. Del Pace, author W. J. Kwon, author M. Zaccanti, author G. Roati, and author F. Scazza, title title Tunneling transport of unitary fermions across the superfluid transition, https://doi.org/10.1103/PhysRevLett.126.055301 journal journal Phys. Rev. Lett. volume 126, pages 055301 (year 2021)NoStop
[Singh et al.(2024)Singh, Polo, Mathey, and Amico]singh2023shapiro
author author V. P. Singh, author J. Polo, author L. Mathey, and author L. Amico, title title Shapiro Steps in Driven Atomic Josephson Junctions, https://doi.org/10.1103/PhysRevLett.133.093401 journal journal Phys. Rev. Lett. volume 133, pages 093401 (year 2024)NoStop
[Asteria et al.(2021)Asteria, Zahn, Kosch, Sengstock, and Weitenberg]Asteria_2021
author author L. Asteria, author H. P. Zahn, author M. N. Kosch, author K. Sengstock, and author C. Weitenberg, title title Quantum gas magnifier for sub-lattice-resolved imaging of 3d quantum systems, https://doi.org/10.1038/s41586-021-04011-2 journal journal Nature volume 599, pages 571 (year 2021)NoStop
[sup()]supp
@noop note See supplementaryNoStop
[Dayem and Martin(1962)]DayemPhotonAssTunel
author author A. H. Dayem and author R. J. Martin, title title Quantum interaction of microwave radiation with tunneling between superconductors, https://doi.org/10.1103/PhysRevLett.8.246 journal journal Phys. Rev. Lett. volume 8, pages 246 (year 1962)NoStop
[Hamilton(1972)]PhysRevB.5.912
author author C. A. Hamilton, title title Frequency dependence of the josephson current, https://doi.org/10.1103/PhysRevB.5.912 journal journal Phys. Rev. B volume 5, pages 912 (year 1972)NoStop
[Hamilton(2000)]HamiltonVoltStandard
author author C. A. Hamilton, title title Josephson voltage standards, https://doi.org/10.1063/1.1289507 journal journal Review of Scientific Instruments volume 71, pages 3611 (year 2000), https://arxiv.org/abs/https://pubs.aip.org/aip/rsi/article-pdf/71/10/3611/19100354/3611_1_online.pdf https://pubs.aip.org/aip/rsi/article-pdf/71/10/3611/19100354/3611_1_online.pdf NoStop
[Hamilton and Shapiro(1971)]PhysRevLett.26.426
author author C. A. Hamilton and author S. Shapiro, title title Experimental demonstration of the riedel peak, https://doi.org/10.1103/PhysRevLett.26.426 journal journal Phys. Rev. Lett. volume 26, pages 426 (year 1971)NoStop
[Burchianti et al.(2018)Burchianti, Scazza, Amico, Valtolina, Seman, Fort, Zaccanti, Inguscio, and Roati]Roati_dc_jj_2018_phaseslips
author author A. Burchianti, author F. Scazza, author A. Amico, author G. Valtolina, author J. A. Seman, author C. Fort, author M. Zaccanti, author M. Inguscio, and author G. Roati, title title Connecting dissipation and phase slips in a josephson junction between fermionic superfluids, https://doi.org/10.1103/PhysRevLett.120.025302 journal journal Phys. Rev. Lett. volume 120, pages 025302 (year 2018)NoStop
[Xhani et al.(2020)Xhani, Neri, Galantucci, Scazza, Burchianti, Lee, Barenghi, Trombettoni, Inguscio, Zaccanti, Roati, and Proukakis]Xhani_2020
author author K. Xhani, author E. Neri, author L. Galantucci, author F. Scazza, author A. Burchianti, author K.-L. Lee, author C. F. Barenghi, author A. Trombettoni, author M. Inguscio, author M. Zaccanti, author G. Roati, and author N. P. Proukakis, title title Critical transport and vortex dynamics in a thin atomic josephson junction, https://doi.org/10.1103/PhysRevLett.124.045301 journal journal Phys.
Rev. Lett. volume 124, pages 045301 (year 2020)NoStop
[Del Pace et al.(2024)Del Pace, Hernández-Rajkov, Singh, Grani, Frómeta Fernández, Nesti, Seman, Inguscio, Amico, and Roati]DelPace2024
author author G. Del Pace, author D. Hernández-Rajkov, author V. Singh, author N. Grani, author M. Frómeta Fernández, author G. Nesti, author J. Seman, author M. Inguscio, author L. Amico, and author G. Roati, @noop title Shapiro steps in strongly-interacting fermi gases (year 2024), note to be publishedNoStop
μm
ref
p
k
k^'
r
r^'
v
e
q
j
B
s
m
sc
pin
ex
phon
free
Hz
nK
ms
H
D
E
O
ext
B
s
m
μm
dc
ac
Hz
nK
ms
Supplementary Information for
Observation of Shapiro steps in an ultracold atomic Josephson junction
Ludwig Mathey
September 9, 2024
=====================================================================================================
§ SUPPLEMENTARY MATERIAL
§ EXPERIMENTAL DETAILS
§.§.§ Experimental procedure
We start by preparing a Bose-Einstein condensate (BEC) of around 180e3 ^87Rb atoms in a crossed optical dipole trap, consisting of a High Power (HP) beam and a Low Power (LP) beam.
To reach the required tube like geometry we ramp the HP linearly in 400ms from 20mW up to 150mW and the LP within the same time down to 0mW.
This results in a trapping geometry with trapping frequencies ω = 2 π× [1.6; 252; 250] Hz.
Because the atoms can expand in the dipole trap during this procedure and to set a defined length of the sample we use two repulsive barriers at positions x = ±37.5μ m from the center of the initial position of the BEC.
The barriers are ramped linearly within 40ms up to a intensity resulting in a potential much higher than the chemical potential of the cloud (≈ 10 μ).
When the ramp of the dipole trap is finished we wait 200ms, letting the system equilibrate, before starting the experiment.
After this wait time we ramp up the center barrier in 45ms to its intended value with an initial position which is 20μ m off center. Subsequently, the Shapiro protocol starts, see main text.
§.§.§ Barrier generation
The realization of the optical barriers are sketched in the following.
We use a 532nm laser to create a repulsive potential and an AOM to stabilize its intensity.
To generate the barriers we guide the beam through a two axis acousto-optical deflector (AOD).
By changing the AODs RF driving frequency we control the axial position and the transverse extent of the barriers.
After the AOD we use a f = 100mm scan lens to convert the beams angular displacement in a lateral displacement.
The light is then collected by a f = 750mm tube lens and guided to the NA=0.3 in-vacuum objective, which focus the beam down to the atoms.
The tube lens is chosen in a way that we get a diffraction limited spot at the atoms.
We estimate the size of a single spot by a measurement with a test target in the chamber. The Gaussian beam radius is round 1.1μ m.
To realize in the transverse direction a homogeneously extended and in the axial direction movable barrier, we drive the AOD by multiple RF frequencies <cit.>.
By applying a multitone RF signal of the form
S(t) = ∑_i S_i sin(ω_i t + ϕ_i),
to each of the AOD axis, we create three individual barriers, which are independently movable, when using time dependent ϕ_i(t).
For the experiments described in the main text we use 40mW laser power in total.
§.§.§ Calibration of the barrier height
To estimate the height of the barrier used in the experiment we measure the equation of state n(μ). The method is close to the one used in <cit.>.
We generate a block of 5 × 9 barrier spots, which corresponds to 5 times the experiments barrier size and measure their mean intensity separately on a camera.
This barrier block is projected onto the center of the BEC consisting of 180e3 atoms. We count the atoms at the position of the block by absorption imaging in the y-direction, applying the matter wave imaging scheme.
The result is shown in fig. S<ref>.
The initial decrease is linearly fitted and the zero crossing is set as the barrier height corresponding to the chemical potential V_0 = μ.
§.§.§ Imaging system
The atom cloud is probed via absorption imaging, using the 780nm Rubidium D2 transition.
Our experiment uses three different imaging systems.
First we have a standard time of flight imaging along the y-axis (horizontal direction) with a magnification of M_hor = 3.77, which is used to measure the atom number of the BEC.
Second we use the same imaging setup in a different mode, known as matter wave imaging <cit.>.
This procedure enables us to image the density distribution of the cloud in position space after a certain time of flight with a magnification in the axial direction of M_mwi = M_mwi' · M_hor = 32.8, were M_mwi' = 8.7.
the benefit of the matter wave imaging system is a strongly reduced optical density, which allows for a precise atom number determination.
Last, we can perform in-situ absorption imaging through the NA = 0.3 objective along the z-axis (vertical direction).
With this imaging scheme we can get a theoretical optical resolution of d_vert = 1.6μ m and a magnification of M_vert = 19.5.
However, the high optical density of the sample in the trap prevents us from correctly determining the density with in-situ imaging.
§ CHARACTERIZATION OF THE WEAK LINK
To effectively describe a Josephson junction in the undriven case, one usually applies the so-called resistively capacitively shunted junction (RCSJ) model.
There, an additional resistance R and capacity C is assigned to the junction and a simple circuit, see fig. S<ref>, is used to quantitatively model the junctions behaviour.
With Kirchoff's law we get the basic equation of the circuit
I_ext = I_c sin(ϕ) -1/RΔμ - C Δμ̇.
Here, the first Josephson equation I(ϕ) = I_csin(ϕ) is used to describe the supercurrent across the junction.
In case of ultracold atoms, where the chemical potential difference Δμ plays the role of the voltage, the second Josephson equation reads <cit.>
Δμ = - ħϕ̇.
To be able to fully describe the junction we use these equations to experimentally determine the systems parameters R, C, and I_c.
To this end, we measure Josephson plasma oscillations <cit.>. We first prepare an initial atom number imbalance across the junction and let the system evolve freely in time.
We then measure both, the atom number difference N = N_L - N_R and the relative phase ϕ = ϕ_L - ϕ_R between the two superfluids.
Applying a Gaussian filter to the experimental data and numerical differentiation of N gives dN/dt, and by fitting I_c to I(t) = dN/dt = I_csin(ϕ(t)) (see fig. S<ref>), we find I_c = 192e31/s.
In the same way, we can determine the capacity C.
Using the same measurement, we numerically differentiate ϕ(t) and together with Eq. <ref> we get - V(t) = - N(t)/C = dϕ/dt.
We find C = 59.6s/h.
The resistance R can be determined by measuring the BEC critical velocity v_c. Thereby, we run a dc Josephson protocol, by moving the barrier with constant velocity through the condensate.
The result is shown in fig. S<ref>. Below the critical velocity v_c, we find Δμ = 0, indicating the absence of any ”voltage” drop. Above v_c, a finite chemical potential difference Δμ builds up, signaling the onset of the ac Josephson branch.
Fitting Δ z = R √(v^2 - v^2_c), gives v_c = 0.42mm/s.
With I = v · I_c/v_c and Eq. <ref> we can rescale the measurement data and use again the fit function Δμ = R √(I^2 - I^2_c), and obtain R=0.9e-3h.
§.§ Current phase relation
The Josephson junction in our experiments has a barrier height which is significantly lower than the chemical potential. It is necessary to independently measure the current phase relation in order to verify, whether higher order terms in the Josephson relation are relevant <cit.>.
We repeat the experiment for the critical velocity and measure the phase between the two condensates at the end of the motion.
The phase is determined via interference of both condensates in time of flight.
We find the best contrast for 5ms time of flight, for which a substantial part of the cloud has already interfered with each other.
After subtracting a reference image without barrier we fit a sinusoidal function to extract the phase.
The resulting current phase relation is shown in fig. S<ref>.
Following Ref. <cit.> one can describe the current phase relation of a Josephson junction with I(φ) = ∑_n> 0 I_n sin(nφ) if the system exhibits time reversal symmetry.
In the limit of small coupling the above expression reduces to the first Josephson equation, I(φ) = I_csin(φ).
We fit our data up to the second order term and find I_1=0.995I_c and I_2=0.054I_c. The Josephson junction is therefore close to the ideal Josephson junction limit.
§.§ Speed of sound
We measure the speed of sound following the original protocol from Ref. <cit.>.
We prepare the same sample as in the Shapiro step measurements. We then instantaneously switch on a barrier with height V_B≈μ in the center of the condensate.
We follow the resulting phonon propagation via matter wave imaging, subtracting a reference image without barrier (see fig. S<ref>). We extract from a linear fit the speed of sound.
The density wave traveling to the right has a speed of c_s = 1.63 ± 0.03 mm/s, and the one traveling to the left has a speed of c_s = -1.38 ± 0.02 mm/s. The small difference probably rises from a residual motion of the condensate in the shallow trap.
§.§ Shapiro steps for different barrier height
To see the influence of the barrier height on the occurrence of the Shapiro steps, we repeat the same protocol for a large range of different barrier heights as shown in fig. S<ref>.
Shapiro steps are visible over the whole parameter range. However, the steps tend to smear out for very high barriers (corresponding to a smaller critical current), while for lowest barrier height (V_B =0.25 μ), the step height is reduced. We attribute the latter to the fact that the “weak link regime” is no longer valid for such low barriers.
§ MEASUREMENT EVALUATION
§.§ Chemical potential
To map the measured atom number differences Δ z to a chemical potential difference Δμ, we calculate μ for different total atom numbers N, see fig. S<ref>, using imaginary time evolution of the GPE <cit.>.
We then extract the chemical potential from the calculated ground state wave function, and use a phenomenological fit to get a function μ (N).
With
Δμ (z) = μ(N·(1+z)) - μ(N·(1-z))
we calculate Δμ(z).
§.§.§ Step height
To evaluate the step height, we rescale the data to chemical potential differences using Eq. <ref> and fit a logistic function
f(x) = ∑_i S/1+exp(-k_i(x-x_i)) + b,
where S is the step height, k_i the steepness of the steps and i-1 the number of visible steps.
§.§.§ Step width
We extract the width of the steps as illustrated in fig. S<ref>.
We first a apply a Gaussian filter to the data to reduce high frequency noise before we numerically calculate the first derivative ∂Δ z/∂ v of the recorded atom number imbalance Δ z.
For each step n we search the maxima of the derivative in the area where Δμ = n f_m±ϵ, with ϵ = 45Hz.
We define the width of a step I_n as the distance between the two maxima.
For the 0-th step we calculate the distance with respect to zero.
I_n is normalized by I_0, which is the width of 0-th step in the dc Josephson protocol.
§ CLASSICAL-FIELD SIMULATION METHOD
We simulate the dynamics of a driven atomic Josephson junction using a classical-field method within the truncated Wigner approximation <cit.>.
We consider a three-dimensional (3D) condensate of ^87Rb atoms confined in a cigar-shaped geometry.
The system is described by the Hamiltonian
Ĥ = ∫d r[ ħ^2/2m∇ψ̂^†( r) ·∇ψ̂( r) + V( r) ψ̂^†( r)ψ̂( r)
+ g/2ψ̂^†( r)ψ̂^†( r)ψ̂( r)ψ̂( r)],
where ψ̂( r) and ψ̂^†( r) are the bosonic annihilation and creation field operators, respectively.
The 3D interaction parameter is given by g=4π a_s ħ^2/m, where a_s is the s-wave scattering length and m is the mass.
For ^87Rb atoms a_s is 5.3 nm.
The external potential V( r) represents the harmonic trap V_trap( r)=m(ω_x^2x^2+ ω_y^2 y^2 + ω_z^2 z^2)/2, where the trap frequencies are chosen according to the experiment, i.e., (ω_x, ω_y, ω_z)= 2π×(1.6, 252, 250).
Within the classical-field approximation we replace the operators ψ̂ in Eq. <ref> and in the equations of motion by complex numbers ψ. We map real space on a lattice system of 760 × 25 × 25 sites with the lattice discretization length l= 0.1 μ.
We note that the continuum limit is satisfied by choosing l to be comparable or smaller than the healing length ξ = ħ/√(2mgn) and the de Broglie wavelength, where n is the density <cit.>.
We generate the initial states ψ(t=0) in a grand canonical ensemble of temperature T and chemical potential μ via a classical Metropolis algorithm. We use T= 35 and adjust μ such that the total atom number N is close to the experimental one.
We propagate each initial state using the classical equations of motion.
To create a Josephson junction and excite Shapiro steps we add a perturbation term ℋ_ex = ∫d r V(x,t) n( r, t), where n( r, t) = |ψ( r , t)|^2 is the local density and V(x, t) is the Gaussian barrier potential of the form
V(x,t) = V_0 (t) exp[- 2( x-x_0- x(t) )^2/w^2].
V_0 is the time-dependent strength and w is the width. x_0 is the initial location of the barrier and x(t) is the dc and ac driving term.
We choose w=1 and x_0= 19.
We linearly ramp up V_0(t) to the value V_0/μ=0.45 over 200 and then wait for 50.
This creates a weak link by suppressing the tunneling at location x_0.
While we move the barrier at a constant velocity v, we also periodically modulate its position as
x(t) = v t + x_sin(2π f_ t),
where x_ is the driving amplitude and f_ is the driving frequency <cit.>.
This induces an atom current I relative to the barrier motion, i.e., I = v I_c/v_c,
where I_c is the critical current and v_c is the critical velocity.
Similarly, the amplitude of the ac current is given by I_= 2π f_ x_ I_c/v_c.
For the calculation of the atom imbalance we fix the driving time to 33.
The atom imbalance z is determined by z=(N_R - N_L)/N,
where N_L (N_R) is the atom number in the left (right) reservoir, and N= N_L +N_R is the total atom number.
For various values of I, I_, and f_ we determine Δ z = z - z_ and convert it to the chemical potential difference Δμ using Eq. <ref>.
z_ is the imbalance determined at the final location without the barrier.
Without the ac driving the onset of a nonzero Δμ occurs above a certain v_c, which we determine using the prediction of the RCSJ circuit model, see Fig. <ref> A. We obtain v_c=0.42 mm/s in excellent agreement with the measurement of v_c.
In the presence of ac drive we find the creation of Shapiro steps whose height depends on the driving frequency f_. Using sigmoid fits we determine the height of the first Shapiro step, see Fig. <ref> B.
We determine the step height for f_ in the range between 50 and 185 by analyzing the driven response after 3 to 6 driving periods, and also average these results over a few values of I_.
We determine the width of zeroth and first step following the same procedure as in the experiment,
which involves calculating the differential resistance d Δμ/dI and
then determining the step width from the maximum of d Δμ /dI, see Fig. S<ref>.
The simulation results of the step width for f_=90 and varying I_/I_c are
presented in the main text of the paper.
For the phase evolution we calculate the local phase δϕ ( r)= ϕ ( r) - ϕ( r_) from the time evolution of a single trajectory,
where ϕ( r_) is the reference phase.
To count total number of solitons we analyze the phase evolution δϕ (x, t)= ϕ (x+l, t) - ϕ(x, t) of a single line along the x direction of a single trajectory of the driven system,
which is because the phase profile in the yz plane is almost uniform.
The oscillating motion of the barrier results in the creation of solitons, which we identify by a phase jump of less than π.
We count total number of such phase jumps to determine the soliton number N_s.
In Fig. S<ref>B we show N_s, averaged over few samples and total number of driving cycles, as a function of I/I_c, which features a step-like behavior, coinciding with the one observed in the I- Δμ characteristic in Fig. S<ref>A. There are on-average one, two and three solitons at the first, second and third Shapiro steps, respectively.
12
fxundefined [1]
ifx#1
fnum [1]
#1firstoftwo
secondoftwo
fx [1]
#1firstoftwo
secondoftwo
noop [0]secondoftwo
ref[1]@startlink#1@href
href[1]#1@endlink
anitize@url [0]`
12`$12`&12`#12`1̂2`_12`%12
startlink[1]
endlink[0]
rl [1]href #1
@bib@innerbibempty
[Endres et al.(2016)Endres,
Bernien, Keesling, Levine,
Anschuetz, Krajenbrink, Senko, Vuletic, Greiner, and Lukin]EndresTweezer
author author M. Endres, author H. Bernien,
author A. Keesling, author H. Levine, author
E. R. Anschuetz, author
A. Krajenbrink, author
C. Senko, author V. Vuletic, author M. Greiner, and author M. D. Lukin, title title Atom-by-atom
assembly of defect-free one-dimensional cold atom arrays, https://doi.org/10.1126/science.aah3752 journal journal Science volume 354, pages
1024 (year 2016), https://arxiv.org/abs/https://www.science.org/doi/pdf/10.1126/science.aah3752
https://www.science.org/doi/pdf/10.1126/science.aah3752 NoStop
[Kwon et al.(2020)Kwon,
Pace, Panza, Inguscio,
Zwerger, Zaccanti, Scazza, and Roati]Roati_acdc_jj_2020
author author W. J. Kwon, author G. D. Pace,
author R. Panza, author M. Inguscio, author
W. Zwerger, author M. Zaccanti, author F. Scazza, and author G. Roati, title title Strongly
correlated superfluid order parameters from dc josephson supercurrents, https://doi.org/10.1126/science.aaz2463 journal
journal Science volume 369, pages 84 (year 2020), https://arxiv.org/abs/https://www.science.org/doi/pdf/10.1126/science.aaz2463
https://www.science.org/doi/pdf/10.1126/science.aaz2463 NoStop
[Asteria et al.(2021)Asteria, Zahn, Kosch, Sengstock, and Weitenberg]Asteria_2021
author author L. Asteria, author H. P. Zahn,
author M. N. Kosch, author K. Sengstock, and author C. Weitenberg, title
title Quantum gas magnifier for sub-lattice-resolved imaging of
3d quantum systems, https://doi.org/10.1038/s41586-021-04011-2
journal journal Nature volume 599, pages 571 (year
2021)NoStop
[Meier and Zwerger(2001)]Meier_Zwerger_2001
author author F. Meier and author W. Zwerger, title title Josephson tunneling
between weakly interacting bose-einstein condensates, https://doi.org/10.1103/PhysRevA.64.033610 journal journal Phys. Rev. A volume 64, pages 033610 (year 2001)NoStop
[Valtolina et al.(2015)Valtolina, Burchianti, Amico, Neri, Xhani, Seman, Trombettoni, Smerzi, Zaccanti,
Inguscio, and Roati]Roati_dc_jj_scince_2015
author author G. Valtolina, author A. Burchianti, author A. Amico,
author E. Neri, author
K. Xhani, author J. A. Seman, author A. Trombettoni, author A. Smerzi, author M. Zaccanti, author M. Inguscio, and author G. Roati, title title Josephson
effect in fermionic superfluids across the bec-bcs crossover, https://doi.org/10.1126/science.aac9725 journal journal Science volume 350, pages
1505 (year 2015), https://arxiv.org/abs/https://www.science.org/doi/pdf/10.1126/science.aac9725
https://www.science.org/doi/pdf/10.1126/science.aac9725 NoStop
[Golubov et al.(2004)Golubov, Kupriyanov, and Il'ichev]RevModPhys.76.411
author author A. A. Golubov, author M. Y. Kupriyanov, and author E. Il'ichev, title title The current-phase
relation in josephson junctions, https://doi.org/10.1103/RevModPhys.76.411 journal journal Rev. Mod. Phys. volume 76, pages 411 (year 2004)NoStop
[Andrews et al.(1997)Andrews, Kurn, Miesner, Durfee, Townsend, Inouye, and Ketterle]Andrews_1997
author author M. R. Andrews, author D. M. Kurn,
author H.-J. Miesner, author D. S. Durfee, author
C. G. Townsend, author
S. Inouye, and author
W. Ketterle, title title Propagation of sound in a bose-einstein condensate, https://doi.org/10.1103/PhysRevLett.79.553 journal journal Phys. Rev. Lett. volume 79, pages 553 (year 1997)NoStop
[Baals et al.(2021)Baals,
Moreno, Jiang, Benary, and Ott]BaalsStability
author author C. Baals, author A. G. Moreno,
author J. Jiang, author J. Benary, and author
H. Ott, title title Stability analysis and attractor dynamics of three-dimensional dark
solitons with localized dissipation, https://doi.org/10.1103/PhysRevA.103.043304 journal journal Phys. Rev. A volume 103, pages 043304 (year 2021)NoStop
[Singh et al.(2016)Singh,
Weimer, Morgener, Siegl,
Hueck, Luick, Moritz, and Mathey]Singh2016
author author V. P. Singh, author W. Weimer,
author K. Morgener, author J. Siegl, author
K. Hueck, author N. Luick, author H. Moritz, and author L. Mathey, title title Probing
superfluidity of Bose-Einstein condensates via laser stirring, https://doi.org/10.1103/PhysRevA.93.023634 journal journal Phys. Rev. A volume 93, pages 023634 (year 2016)NoStop
[Kiehn et al.(2022)Kiehn,
Singh, and Mathey]Kiehn2022
author author H. Kiehn, author V. P. Singh, and author L. Mathey, title title Superfluidity of a laser-stirred
Bose-Einstein condensate, https://doi.org/10.1103/PhysRevA.105.043317 journal journal Phys. Rev. A volume 105, pages 043317 (year 2022)NoStop
[Mora and Castin(2003)]Mora2003
author author C. Mora and author Y. Castin, title title Extension of Bogoliubov theory to
quasicondensates, https://doi.org/10.1103/PhysRevA.67.053615
journal journal Phys. Rev. A volume 67, pages 053615 (year
2003)NoStop
[Singh et al.(2024)Singh,
Polo, Mathey, and Amico]singh2023shapiro
author author V. P. Singh, author J. Polo,
author L. Mathey, and author L. Amico, title
title Shapiro Steps in Driven Atomic Josephson Junctions, https://doi.org/10.1103/PhysRevLett.133.093401 journal
journal Phys. Rev. Lett. volume 133, pages 093401 (year 2024)NoStop
|
http://arxiv.org/abs/2409.02339v1 | 20240904000115 | Data-driven 2D stationary quantum droplets and wave propagations in the amended GP equation with two potentials via deep neural networks learning | [
"Jin Song",
"Zhenya Yan"
] | cs.LG | [
"cs.LG",
"math-ph",
"math.MP",
"nlin.PS",
"physics.comp-ph",
"physics.optics"
] |
language=python
numbers=left,
numberstyle= ,
basicstyle=,
keywordstyle= ,
commentstyle= ,
frame=shadowbox,
rulesepcolor= ,
escapeinside=“,
breaklines,
showspaces=false,
xleftmargin=2em, aboveskip=1em,
framexleftmargin=2em
corollaryCorollarylemmaLemmapropositionPropositionremarkRemark=-0.6in =-0.80in
=-0.3in =0.00in
=225mm =175mm
=0.1in
[4]
=13pt plain
=14pt plain
=16pt
Data-driven 2D stationary quantum droplets and wave propagations in the amended GP equation with two potentials via deep neural networks learning
Jin Song^1,2 and Zhenya Yan^1,2,*
[^*Corresponding author. Email address:
[email protected]]
^1KLMM, Academy of Mathematics and Systems Science,
Chinese Academy of Sciences, Beijing 100190, China
^2School of Mathematical Sciences, University of Chinese Academy of
Sciences, Beijing 100049, China
Abstract: In this paper, we develop a systematic deep learning approach to solve two-dimensional (2D) stationary quantum droplets (QDs) and investigate their wave propagation in the 2D amended Gross–Pitaevskii equation with Lee-Huang-Yang correction and two kinds of potentials. Firstly, we use the initial-value iterative neural network (IINN) algorithm for 2D stationary quantum droplets of stationary equations. Then the learned stationary QDs are used as the initial value conditions for physics-informed neural networks (PINNs) to explore their evolutions in the some space-time region. Especially, we consider two types of potentials, one is the 2D quadruple-well Gaussian potential and the other is the -symmetric HO-Gaussian potential, which lead to spontaneous symmetry breaking and the generation of multi-component QDs. The used deep learning method can also be applied to study wave propagations of other nonlinear physical models.
=13pt
§ INTRODUCTION
Recent intensive researches have focused on quantum droplets (QDs), a new state of liquid matter <cit.>. QDs are characterized by a delicate balance between mutual attraction and repulsion, leading to their unique properties. QDs have potential applications in ultracold atoms and superfluids and have been studied widely <cit.>. As the ultra-dilute liquid matter, QDs are nearly incompressible, self-sustained liquid droplets, with distinctive properties such as extremely low densities and temperatures <cit.>. The Lee-Huang-Yang (LHY) effect <cit.>, driven by quantum fluctuations, has been introduced to prevent QDs from collapsing due to mean-field approximation, enabling the prediction of stable QDs in weakly interacting Bose-Einstein condensates (BECs) <cit.>.
Experimental realizations of QDs have been achieved in various systems, including single-component dipolar bosonic gases, binary Bose-Bose mixtures of different atomic states in ^39K, and in the heteronuclear mixture of ^41K and ^87Rb atoms <cit.>. The accurate description of QDs has been made possible by the amended Gross-Pitaevskii (GP) equation with Lee-Huang-Yang (LHY) correction, which has been shown to agree with experimental observations <cit.>.
The reduction of dimensionality from 3D to 2D has a significant impact on the form of the LHY term. In this case, the repulsive quartic nonlinearity is replaced by a cubic nonlinearity with an additional logarithmic factor <cit.>
such that the 2D amended GP equation in the binary BECs with two mutually symmetric components trapped in a potential can be written as the following dimensionless form after scaling
iψ_t=-12∇_^2ψ +2ln(2|ψ|^2)|ψ|^2ψ+U()ψ,
where the complex wave function ψ=ψ(, t),
=(x,y) stands for the 2D rescaled coordinates and t∈ℝ, ∇_^2=∂^2/∂ x^2+∂^2/∂ y^2, U() is an external potential, which can be real or complex.
A variety of trapping configurations in BECs have allowed for the direct observation of fundamental manifestations of QDs. For instance, stable 2D anisotropic vortex QDs have been predicted in effectively 2D dipolar BECs <cit.>. More importantly, vortical QDs have been found to be stable without the help of any potential by a systematic numerical investigation and analytical estimates <cit.>. Additionally, vortex-carrying QDs can be experimentally generated in systems with attractive inter-species and repulsive intra-species interactions, confined in a shallow harmonic trap with an additional repulsive Gaussian potential at the center <cit.>.
Furthermore, the exploration of QDs trapped in -symmetric potentials has also been pursued <cit.>.
Recently, there has been a surge in the development of deep neural networks for studying partial differential equations (PDEs). Various approaches, such as physics-informed neural networks (PINNs) <cit.>, deep Ritz method <cit.>, and PDE-net <cit.>, have been proposed to effectively handle PDE problems. Among them, the PINNs method incorporates the physical constraints into the loss functions, allowing the models to learn and represent the underlying physics more accurately <cit.>.
Moreover, these deep learning methods have been extended to solve a wide range of PDEs in various fields <cit.>.
For the general 2D stationary QDs in the form Ψ(x,y,t)=ϕ(x,y)e^-iμ t, solving for ϕ(x,y) is an important problem because ϕ serves as an initial-value condition of PINNs.
In general, the traditional numerical methods were developed thus far to compute solitary waves, including the Petviashvili method, accelerated imaginary-time evolution (AITEM)
method, squared-operator iteration (SOM) method, and Newton-conjugate-gradient (NCG) method <cit.>. More recently,
we proposed a new deep learning approach called the initial-value iterative neural network (IINN) for solitary wave computations of many types of nonlinear wave euqations <cit.>, which offers a mesh-free approach by taking advantage of automatic differentiation, and could overcomes the curse of dimensionality.
Motivated by the aforementioned discussions, the main objective of this paper is to develop a systematic deep learning approach to solve 2D stationary QDs, and investigate their evolutions in the amended Gross–Pitaevskii equation with potentials. Especially, we consider two types of potentials, one is 2D quadruple-well Gaussian potential and the other is -symmetric HO-Gaussian potential, which lead to spontaneous symmetry breaking and the generation of multi-component QDs.
The remainder of this paper is arranged as follows. Firstly, we introduce the PINNs deep learning
framework for the evolution of QDs. And then the IINN framework for stationary QDs is presented in Sec 2.
In Sec. 3, data-driven 2D QDs in the amended GP equation with two types of potential are exhibited, respectively. Finally, we give some conclusions and discussions in Sec. 4.
§ THE FRAMEWORK OF DEEP LEARNING METHOD
In the following, we focus on the trapped stationary QDs to Eq. (<ref>) in the form ψ(, t) = ϕ()e^-iμ t, where μ stands for the chemical potential <cit.>, and lim_||→∞ϕ()=0 for ϕ()∈ℝ[]. Substituting the stationary solution into Eq. (<ref>) yields the following nonlinear stationary equation obeyed by the nonlinear localized eigenmode ϕ():
μϕ=-1/2∇^2_ϕ+2ln(2|ϕ|^2)|ϕ|^2ϕ+U()ϕ.
In general, it is difficult to get the explicit, exact solutions of Eq. (<ref>) with the potentials. For general parametric conditions, one can usually use the numerical iterative methods to solve Eq. (<ref>) with zero-boundary conditions by choosing the proper initial value, such as Newton-conjugate-gradient (NCG) method <cit.>, the spectral renormalization method <cit.>, and the squared-operator iteration method <cit.>.
In this paper, we extend the deep learning IINN method for the computations of stationary QDs, and then we use the stationary QDs as the initial data to analyze the evolutions of QDs with the aid of PINNs.
§.§ The IINN framework for stationary QDs
Based on traditional numerical iterative methods and physics-informed neural networks (PINNs), recently we proposed the initial value iterative neural network (IINN) algorithm for solitary wave computations <cit.>.
In the following, we will introduce the main idea of IINN method.
Two identical fully connected neural networks are employed to learn the desired solution ϕ^* of Eq. (<ref>).
For the first network, we choose an appropriate initial value ϕ_0 such that it is sufficiently close to ϕ^*. Then we randomly select N training points
{_i}_i=1^N within the region and train the network parameters θ by minimizing the mean squared error loss ℒ_1, aiming to make the output of network ϕ̅ sufficiently close to initial value ϕ_0, where loss function ℒ_1 is defined as follows
ℒ_1:=1/N∑_i=1^N|ϕ̅(_i)-ϕ_0(_i)|^2.
For the second network, we initialize the network parameters θ={W,B} with the learned weights and biases from the first network,
that is
θ_0=argmin ℒ_1(θ).
For the network output ϕ̂,
we define the loss function ℒ_2 as follows and utilize SGD or Adam optimizer to minimize it.
ℒ_2:=1/N∑_i=1^N|Lϕ̂(_i)|^2/max_i(|ϕ̂(_i)|).
It should be noted that ℒ_2 is different from the loss function ℒ_0 defined in PINNs. Here we are not taking boundaries into consideration, instead we incorporate max(|ϕ̂|) to ensure that ϕ̂ does not converge to trivial solution.
§.§ The PINNs framework for the evolution of QDs
Base on the obtained the stationary QDs in Sec. 3.1, we utilize the PINNs deep learning framework <cit.> to address the data-driven solutions of Eq. (<ref>). The core concept of PINNs involves training a deep neural network to satisfy the physical laws and accurately represent the solutions for various nonlinear partial differential equations. In the case of the 2D amended GP equation (<ref>), we incorporate initial-boundary value conditions.
{[ iψ_t+12∇_^2ψ -2ln(2|ψ|^2)|ψ|^2ψ-U()ψ=0,
(, t)∈Ω× (0, T),ψ(,0)=ϕ(), ∈Ω, ψ(,t)|_∈∂Ω=ϕ_b(t), t∈ [0,T], ].
where ϕ() is the solution of stationary equation (<ref>), and we take ϕ_b(t)≡ 0, which is solved by the IINN method in Sec. 3.1.
We rewrite the wave-function as ψ(,t)=p(,t)+iq(,t) with p(,t) and q(,t) being its real and imaginary parts, respectively. Then the complex-valued PINNs ℱ(, t)=ℱ_p(, t)+iℱ_q(, t) with ℱ_p(, t) and ℱ_q(, t) being its real and imaginary parts, respectively can be defined as
[ ℱ(, t):= iψ_t+12∇_^2ψ -2ln(2|ψ|^2)|ψ|^2ψ-U()ψ,ℱ_p(, t):= -q_t+12∇^2_p-2ln[2(p^2+q^2)](p^2+q^2)p-real(U)p+imag(U)q,ℱ_q(, t):= p_t+12∇^2_q-2ln[2(p^2+q^2)](p^2+q^2)q-real(U)q-imag(U)p, ]
where real(U) and imag(U) represent the real and imaginary parts of the external potential U(), respectively.
Therefore, a fully-connected neural network NN(, t; W, B) with i hidden Layers and n neurons in every hidden layer can be constructed, where initialized parameters W = {w_j}_1^i+1 and B = {b_j}_1^i+1 being the weights and bias. Then, by the given activation function σ, one can obtain the expression in the form
A_j=σ(w_j· A_j-1+b_j),
where the w_j is a dim(A_j)×dim(A_j-1) matrix, A_0, A_i+1, b_i+1∈ℝ^2 and A_j, b_j∈ℝ^n.
Furthermore, a Python library for PINNs, DeepXDE, was designed to serve a research tool for solving problems in computational science and engineering <cit.>. Using DeepXDE, we can conveniently define the physics-informed neural network ℱ(, z) as
import deepxde as dde
def pde(x, psi):
p = psi[:, 0:1]
q = psi[:, 1:2]
p_xx = dde.grad.hessian(psi, x, component=0, i=0, j=0)
q_xx = dde.grad.hessian(psi, x, component=1, i=0, j=0)
p_yy = dde.grad.hessian(psi, x, component=0, i=1, j=1)
q_yy = dde.grad.hessian(psi, x, component=1, i=1, j=1)
p_t = dde.grad.jacobian(psi, x, i=0, j=2)
q_t = dde.grad.jacobian(psi, x, i=1, j=2)
F_p = -q_t + 0.5*(p_xx+p_yy) - 2*tf.log(2*(p**2+q**2))*(p**2+q**2)*p - (V*p - W*q)
F_q = p_t + 0.5*(q_xx+q_yy) - 2*tf.log(2*(p**2+q**2))*(p**2+q**2)*q - (V*q + W*p)
return [F_p, F_q]
In order to train the neural network to fit the solutions of Eq. (<ref>), the total mean squared error (MSE) is defined as the following loss function containing three parts
ℒ_0=MSE_F+MSE_I+MSE_B,
with
[ MSE_F=1/N_f∑_ℓ=1^N_f(|ℱ_p(_f^ℓ,t_f^ℓ)|^2
+|ℱ_q(_f^ℓ,t_f^ℓ)|^2), MSE_I=1/N_I∑_ℓ=1^N_I(|p(_I^ℓ,0)-p_0^ℓ|^2
+|q(_I^ℓ,0)-q_0^ℓ|^2), MSE_B=1/N_B∑_ℓ=1^N_B(|p(_B^ℓ,t_B^ℓ)|^2
+|q(_B^ℓ,t_B^ℓ)|^2), ]
where {_f^ℓ,t_f^ℓ}_ℓ^N_f are connected with the marked points in Ω×[0,T] for the PINNs ℱ(, t)=ℱ_p(, t)+iℱ_q(, t),
{_I^ℓ,p_0^ℓ,q_0^ℓ}_ℓ^N_I represent the initial data with ϕ(_I^ℓ)=p_0^ℓ+iq_0^ℓ, and {_B^ℓ,t_B^ℓ}_ℓ^N_B are linked with the randomly selected boundary training points in domain ∂Ω×[0,T].
And then, we choose a hyperbolic tangent function tanh(·) as the activation function (of course one can also choose other nonlinear functions as the activation functions), and use Glorot normal to initialize variate. Therefore, the fully connected neural network can be written in Python as follows
data = dde.data.TimePDE(
geomtime, pde,
initial-boundary value conditions,
N_f, N_B, N_I,
)
net = dde.maps.FNN(layer_size, "tanh", Glorot normal)
model = dde.Model(data, net)
And then, with the aid of some optimization approaches (e.g., Adam & L-BFGS) <cit.>, we minimize the whole MSE ℒ_0 to make the approximated solution satisfy Eq. (<ref>) and initial-boundary value conditions.
model.compile("adam", lr=1.0e-3)
model.train(epochs)
model.compile("L-BFGS")
model.train()
Therefore, for the given initial condition ϕ() solved by the IINNs, we can use PINNs to obtain solutions for the whole space-time region.
Therefore, the main steps by the combination of the IINN and PINNs deep learning methods to solve the amended GP equation (<ref>) with potentials and the initial-boundary value conditions are presented as follows:
1) Given an initial value that is sufficiently close to the stationary QDs we want to obtain. And a fully connected network NN_1 is trained to fit it.
Then the IINN method is used to solve Eq. (<ref>).
2) We initialize the network parameters of the second network NN_2 with the learned weights and biases from
the first network NN_1, that is θ_0=argmin ℒ_1(θ). And train the NN_2 by minimizing the loss function ℒ_2 in terms of the optimization algorithm.
3) Construct a fully-connected neural network NN(, t; θ) with randomly initialized parameters, and the PINNs ℱ(, t) is given by Eq. (<ref>).
4) Generate the training data sets for the initial value condition given by IINN method, and considered model respectively from the initial boundary and within the region.
5) Construct a training loss function ℒ_0 given by Eq. (<ref>) by summing the MSE of both the ℱ(, t) and initial-boundary value residuals.
And train the NN to optimize the parameters θ={W, B} by minimizing the loss function in terms of
the Adam & L-BFGS optimization algorithm.
In what follows, the deep learning scheme is used to investigate the data-driven QDs of the 2D amended GP equation (<ref>) with two types of potential (quadruple-well Gaussian potential and -symmetric HOG potential).
§ DATA-DRIVEN 2D QDS IN AMENDED GP EQUATION WITH POTENTIALS
§.§ Data-driven QDs in amended GP equation with quadruple-well Gaussian potential
Firstly, we consider the 2D quadruple-well Gaussian potential <cit.>
U()= V_0∑_j=1^4exp[-k(-_j)^2 ], V_0<0, k>0,
where _j=(± x_0,± y_0), j=1,2,3,4 control the locations of these four potential wells, and |V_0| and k regulate the depths and widths of potential wells, respectively. Recently, based on the usual numerical methods, the spontaneous symmetry breaking (SSB) of 2D QDs was considered for the amended GP equation with the potential (<ref>) <cit.>, in which the complete pitchfork symmetry breaking bifurcation diagrams were presented for the possible stationary states with four modes, which involve twelve different real solution branches and one complex solution branches (for the complex one, the norm N=∫_ℝ^2|ϕ|^2d^2 r is the same as for one real branch),
see Fig. <ref> for diagrams about the norm as a function of μ, and stable/unstable modes.
In the following, we use the deep learning method to consider the four branches, that is, branches A0, A1, A3 and A4 (see Fig. <ref>).
It should be noted that for the same potential parameters and chemical potential μ, Eq. (<ref>) can admit different solutions, which cannot be solved by general deep learning methods.
Here we take potential parameters as V_0 =-0.5 and k=0.1, and consider Ω = [-12, 12]×[-12,12], T = 5 and μ=-0.5.
If not otherwise specified, we choose a 4-hidden-layer deep neural network with 100 neurons per layer, and set learning rate α = 0.001.
Čǎšě ̌1̌.̌—In branch A0, we firstly obtain the stationary QDs by the IINN method. We set N = 20000, and take the initial value as
ϕ_0=∑_j=1^4a_jexp[-k(-_j)^2 ],
where a_j=0.46 (j=1,2,3,4) and k=0.1.
Through the IINN method, the learned QDs can be obtained at μ=-0.5, whose intensity diagram |ϕ()| and 3D profile
are shown in Figs. <ref>(a1, a2), after 20000 steps of iterations with NN_1 and 3000 steps of iterations with NN_2. The relative L_2 error is 8.255472e-03 compared to the exact solution (numerically obtained). And the module of absolute error is exhibited in Fig. <ref>(a3). The loss-iteration plot of NN_1 is displayed in Fig. <ref>(a1).
Then according to PINNs method, we take random sample points N_f=20000, N_B=150 and N_I=1000, respectively. Then, by using 40000 steps Adam and 10000 steps L-BFGS optimizations, we obtain the learned QDs solution ψ̂(,t) in the whole space-time region.
Figs. <ref>(b1, b2, b3) exhibit the magnitude of the predicted solution at different time t = 0, 2.5, and 5.0, respectively. And the initial state (ϕ()=ψ(,t=0)) of the learned solution by IINN method and PINNs method is shown in Figs. <ref>(c1, c2), respectively. Furthermore, nonlinear propagation simulation of the learned 2D QDs is displayed by the isosurface of learned soliton at values 0.1, 0.5 and 0.9 hereinafter (see Fig. <ref>(c3)). The relative L_2 norm errors of ψ(, t), p(, t) and q(, t), respectively, are 1.952e-02, 1.396e-02 and 1.061e-02. And the loss-iteration plot is displayed in Fig. <ref>(a2).
We should mention that the training stops in each step of the L-BFGS optimization when
L_k-L_k+1/max{|L_k|,|L_k+1|,1}≤ np.finfo(float).eps,
where L_k denotes loss in the n-th step L-BFGS optimization, and np.finfo(float).eps represent Machine Epsilon. Here we always set the default float type to `float64'. When the relative error between L_k and L_k+1 is less than Machine Epsilon, the iteration stops.
Čǎšě ̌2̌.̌—In branch A1, similarly we get the stationary QDs by IINN method. We set N = 20000, and take the initial value as
ϕ_0=∑_j=1^4a_jexp[-k(-_j)^2 ],
where a_1=0.46, a_2=a_3=a_4=0 and k=0.1.
According to the IINN method, the learned QDs can be obtained at μ=-0.5, whose intensity diagram |ϕ()| and 3D profile
are shown in Figs. <ref>(a1, a2), after 10000 steps of iterations with NN_1 and 5000 steps of iterations with NN_2. The relative L_2 error is 8.821019e-03 compared to the exact solution. And the module of absolute error is exhibited in Fig. <ref>(a3). The loss-iteration plot of NN_1 is displayed in Fig. <ref>(b1).
Then through the PINNs method, we take random sample points N_f=20000, N_B=150 and N_I=1000, respectively. Then, by using 40000 steps Adam and 10000 steps L-BFGS optimizations, we obtain the learned QDs solution ψ̂(,t) in the whole space-time region.
Figs. <ref>(b1, b2, b3) exhibit the magnitude of the predicted solution at different time t = 0, 2.5, and 5.0, respectively. And the initial state (ϕ()=ψ(,t=0)) of the learned solution by IINN method and PINNs method is shown in Figs. <ref>(c1, c2), respectively. Besides, nonlinear propagation simulation of the learned 2D QDs is displayed by the isosurface of learned soliton (see Fig. <ref>(c3)). The relative L_2 norm errors of ψ(, t), p(, t) and q(, t), respectively, are 3.364e-02, 3.767e-02 and 4.309e-02. And the loss-iteration plot is displayed in Fig. <ref>(b2).
Čǎšě ̌3̌.̌—In branch A3, we firstly obtain the stationary QDs by IINN method. We set N = 20000, and take the initial value as
ϕ_0=∑_j=1^4a_jexp[-k(-_j)^2 ],
where a_1=a_3=0.3, a_2=a_4=0 and k=0.1.
Through the IINN method, the learned QDs can be obtained at μ=-0.5, whose intensity diagram |ϕ()| and 3D profile
are shown in Figs. <ref>(a1, a2), after 15000 steps of iterations with NN_1 and 5000 steps of iterations with NN_2. The relative L_2 error is 7.201800e-03 compared to the exact solution. And the module of absolute error is exhibited in Fig. <ref>(a3).
Then according to PINNs method, we take random sample points N_f=20000, N_B=150 and N_I=1000, respectively. Then, by using 40000 steps Adam and 10000 steps L-BFGS optimizations, we obtain the learned QDs solution ψ̂(,t) in the whole space-time region.
Figs. <ref>(b1, b2, b3) exhibit the magnitude of the predicted solution at different time t = 0, 2.5, and 5.0, respectively. And the initial state (ϕ()=ψ(,t=0)) of the learned solution by IINN method and PINNs method is shown in Figs. <ref>(c1, c2), respectively. Furthermore, nonlinear propagation simulation of the learned 2D QDs is displayed by the isosurface of learned soliton (see Fig. <ref>(c3)). The relative L_2 norm errors of ψ(, t), p(, t) and q(, t), respectively, are 2.002e-02, 2.458e-02 and 2.356e-02.
Čǎšě ̌4̌.̌—In branch A4, we get the stationary QDs by IINN method. We set N = 20000, and take the initial value as
ϕ_0=∑_j=1^4a_jexp[-k(-_j)^2 ],
where a_1=a_2=a_3=0.46, a_4=0 and k=0.1.
Through the IINN method, the learned QDs can be obtained at μ=-0.5, whose intensity diagram |ϕ()| and 3D profile
are shown in Figs. <ref>(a1, a2), after 20000 steps of iterations with NN_1 and 3000 steps of iterations with NN_2. The relative L_2 error is 3.380430e-03 compared to the exact solution (numerically obtained). And the module of absolute error is exhibited in Fig. <ref>(a3).
Then according to PINNs method, we take random sample points N_f=20000, N_B=150 and N_I=1000, respectively. Then, by using 40000 steps Adam and 10000 steps L-BFGS optimizations, we obtain the learned QDs solution ψ̂(,t) in the whole space-time region.
Figs. <ref>(b1, b2, b3) exhibit the magnitude of the predicted solution at different time t = 0, 2.5, and 5.0, respectively. And the initial state (ϕ()=ψ(,t=0)) of the learned solution by IINN method and PINNs method is shown in Figs. <ref>(c1, c2), respectively. Furthermore, nonlinear propagation simulation of the learned 2D QDs is displayed by the isosurface of learned soliton at values 0.1, 0.5 and 0.9 (see Fig. <ref>(c3)). The relative L_2 norm errors of ψ(, t), p(, t) and q(, t), respectively, are 1.256e-02, 1.899e-02 and 1.499e-02.
§.§ Data-driven QDs in amended GP equation with -symmetric HOG potential
In this subsection, we consider the following -symmetric HO-Gaussian (HOG) potential with the real and imaginary parts being <cit.>
V()=r^2( 1+e^-r^2) +V_0 (e^-2x^2+e^-2y^2),
W()=W_0( xe^-x^2+ye^-y^2) ,
where r^2=x^2+y^2, the coefficient in front of the HO potential is set to be 1,
the real parameter V_0 modulates the profile of the external potential V(), and real W_0 is the strength of gain-loss distribution W(). The vortex solitons were produced for a variety of 2D spinning QDs in the -symmetric potential, modeled by the amended GP equation with Lee–Huang–Yang corrections <cit.>, where the dependence of norm N on chemical potential μ was illustrated for different families of droplet modes in -symmetric HOG potential (see Fig. <ref>).
In the following, we use the deep learning method to consider the multi-component QDs under different chemical potential.
Here we take potential parameters as V_0 =-1/16 and W_0=1, and consider Ω = [-8, 8]×[-8,8], T = 3.
Considering that the solution ϕ() of Eq. (<ref>) is a complex-valued function, similarly we set the network's output ϕ()=p()+iq() and then separate Eq. (<ref>) into its real and imaginary parts.
[ ℱ_p():= -12∇^2_p+2ln[2(p^2+q^2)](p^2+q^2)p+real(U)p-imag(U)q-μ p,ℱ_q():= -12∇^2_q+2ln[2(p^2+q^2)](p^2+q^2)q+real(U)q+imag(U)p-μ q. ]
Then the loss function ℒ_2 becomes
ℒ_2:=1/N∑_i=1^N(|ℱ_p(_i)|^2+|ℱ_q(_i)|^2)/max_i(√((p(_i)^2+q(_i)^2)).
Čǎšě ̌1̌.̌ ̌ ̌Q̌Ďš ̌w̌ǐťȟ ̌ťȟě ̌ǒňě-̌čǒm̌p̌ǒňěňť ̌šťřǔčťǔřě.̌—Firstly, we consider the -symmetric droplets with the simplest structure.
We can obtain the initial conditions by computing the spectra and eigenmodes in the linear regime, which can be given as follows
ℋΦ (𝐫)=λΦ (𝐫), ℋ
=-∇ _𝐫^2+U(),
where λ and Φ (𝐫) are the eigenvalue and localized
eigenfunction, respectively. The linear spectral problem (<ref>) can be solved
numerically by dint of the Fourier spectral method <cit.>.
We take the initial value as the linear mode Φ at ground state and N=10000.
Through the IINN method, the learned QDs can be obtained at μ=2, after 10000 steps of iterations with NN_1 and 10000 steps of iterations with NN_2.
Figs. <ref>(a1, a2, a3) exhibit the intensity diagram of real part, imaginary part and |ϕ()|. The module of absolute error is shown in Fig. <ref>(a4).
The relative L_2 errors of ϕ(), p() and q(), respectively, are 1.992564e-02, 5.547692e-02 and 2.075972e-02.
Then according to PINNs method, we take random sample points N_f=20000, N_B=150 and N_I=1000, respectively. Then, by using 30000 steps Adam and 10000 steps L-BFGS optimizations, we obtain the learned QDs solution ψ̂(,t) in the whole space-time region.
Figs. <ref>(b1, b2, b3) exhibit the magnitude of the predicted solution at different time t = 0, 1.5, and 3.0, respectively. And, nonlinear propagation simulation of the learned 2D QDs is displayed by the isosurface of learned soliton at values 0.1, 0.5 and 0.9 hereinafter (see Fig. <ref>(b4)). The relative L_2 norm errors of ψ(, t), p(, t) and q(, t), respectively, are 2.312e-02, 1.995e-02 and 2.331e-02.
Čǎšě ̌2̌.̌ ̌ ̌Q̌Ďš ̌w̌ǐťȟ ̌ťȟě ̌ťw̌ǒ-̌čǒm̌p̌ǒňěňť ̌šťřǔčťǔřě—Second, we consider the -symmetric droplets with the two-component structure.
We take the initial value as the linear mode Φ at the first excited state and N=10000.
Through the IINN method, the learned QDs can be obtained at μ=2.8, after 10000 steps of iterations with NN_1 and 10000 steps of iterations with NN_2.
Figs. <ref>(a1, a2, a3) exhibit the intensity diagram of real part, imaginary part and |ϕ()|. The module of absolute error is shown in Fig. <ref>(a4).
The relative L_2 errors of ϕ(), p() and q(), respectively, are 5.231775e-02, 1.546516e-02 and 6.117475e-02.
Then according to PINNs method, we take random sample points N_f=20000, N_B=150 and N_I=1000, respectively. Then, by using 30000 steps Adam and 15000 steps L-BFGS optimizations, we obtain the learned QDs solution ψ̂(,t) in the whole space-time region.
Figs. <ref>(b1, b2, b3) exhibit the magnitude of the predicted solution at different time t = 0, 1.5, and 3.0, respectively. And, nonlinear propagation simulation of the learned 2D QDs is displayed by the isosurface of learned soliton at values 0.1, 0.5 and 0.9 hereinafter (see Fig. <ref>(b4)). The relative L_2 norm errors of ψ(, t), p(, t) and q(, t), respectively, are 4.326e-02, 5.158e-02 and 5.182e-02.
Čǎšě ̌3̌.̌ ̌ ̌Q̌Ďš ̌w̌ǐťȟ ̌ťȟě ̌ťȟřěě-̌čǒm̌p̌ǒňěňť ̌šťřǔčťǔřě—Then, we consider the -symmetric droplets with the three-component structure.
We take the initial value as the linear mode Φ at the second excited state and N=10000.
Through the IINN method, the learned QDs can be obtained at μ=4.3, after 10000 steps of iterations with NN_1 and 10000 steps of iterations with NN_2.
Figs. <ref>(a1, a2, a3) exhibit the intensity diagram of real part, imaginary part and |ϕ()|. The module of absolute error is shown in Fig. <ref>(a4).
The relative L_2 errors of ϕ(), p() and q(), respectively, are 3.606203e-02, 5.148407e-02 and 4.234037e-02.
Then according to PINNs method, we take random sample points N_f=20000, N_B=150 and N_I=1000, respectively. Then, by using 30000 steps Adam and 15000 steps L-BFGS optimizations, we obtain the learned QDs solution ψ̂(,t) in the whole space-time region.
Figs. <ref>(b1, b2, b3) exhibit the magnitude of the predicted solution at different time t = 0, 1.5, and 3.0, respectively. And, nonlinear propagation simulation of the learned 2D QDs is displayed by the isosurface of learned soliton at values 0.1, 0.5 and 0.9 hereinafter (see Fig. <ref>(b4)). The relative L_2 norm errors of ψ(, t), p(, t) and q(, t), respectively, are 5.410e-02, 6.771e-02 and 6.189e-02.
Čǎšě ̌4̌.̌ ̌ ̌Q̌Ďš ̌ ̌w̌ǐťȟ ̌ťȟě ̌f̌ǒǔř-̌čǒm̌p̌ǒňěňť ̌šťřǔčťǔřě—Finally, we consider the -symmetric droplets with the four-component structure.
We take the initial value as the linear mode Φ at the three excited state and N=10000.
Through the IINN method, the learned QDs can be obtained at μ=4.2, after 10000 steps of iterations with NN_1 and 20000 steps of iterations with NN_2.
Figs. <ref>(a1, a2, a3) exhibit the intensity diagram of real part, imaginary part and |ϕ()|. The module of absolute error is shown in Fig. <ref>(a4).
The relative L_2 errors of ϕ(), p() and q(), respectively, are 1.186940e-02, 4.332541e-02 and 1.185871e-02.
Then according to PINNs method, we take random sample points N_f=20000, N_B=150 and N_I=1000, respectively. Then, by using 30000 steps Adam and 15000 steps L-BFGS optimizations, we obtain the learned QDs solution ψ̂(,t) in the whole space-time region.
Figs. <ref>(b1, b2, b3) exhibit the magnitude of the predicted solution at different time t = 0, 1.5, and 3.0, respectively. And, nonlinear propagation simulation of the learned 2D QDs is displayed by the isosurface of learned soliton at values 0.1, 0.5 and 0.9 hereinafter (see Fig. <ref>(b4)). The relative L_2 norm errors of ψ(, t), p(, t) and q(, t), respectively, are 5.128e-02, 5.841e-02 and 5.669e-02.
Řěm̌ǎřǩ.̌ It should be noted that the systematic results shown in Figs. <ref>and <ref> have been investigated by numerical methods in Refs. <cit.>. In this paper, we mainly consider partial solutions and their short evolutions by using the machine learning method. For their stability, in general it can be solved by linear eigenvalue problems and long time evolution. However, solving the eigenvalue problem via machine learning methods may be more difficult because it also involves multi-solution problems. This will be our future work to consider.
Furthermore, many methods already exist to return stable and accurate predictions across long temporal horizons by training multiple individual networks in different temporal sub-domains <cit.>. These approaches inevitably lead to a large computational cost and more complex network structure. We use the parallel PINNs to investigate the longer-time evolutions for both solutions via the domain decomposition (see Fig. <ref>). We can see that the solutions are still stable after the longer-time evolutions, compared with Fig. <ref>(c3) and Fig. <ref>(b4), respectively.
§ CONCLUSIONS AND DISCUSSIONS
In conclusion, we have investigated the 2D stationary QDs and their evolutions in amended Gross–Pitaevskii equation with potentials via deep learning neural networks. Firstly, we use the IINN method for learning 2D stationary QDs. Then the learned 2D stationary QDs are used as the initial-value conditions for PINNs to display their evolutions in the some space-time regions. Especially, we consider two types of potentials, one is 2D quadruple-well Gaussian potential and the other is -symmetric HO-Gaussian potential, which lead to spontaneous symmetry breaking and the generation of multi-component QDs.
On the other hand, in order to study the stability of the QDs, we can use deep learning methods to study the interactions between droplets. Furthermore, we can investigate the spinning QDs in terms of spinning coordinates, x^'=xcos (ω t)+ysin (ω t), y^'=ycos (ω t)-xsin (ω t) with angular velocity ω. We will investigate these issues in future.
̌̌ AcknowledgementŤhe work was supported by the National Natural Science Foundation of China under Grant No. 11925108.
991 D. S. Petrov, Quantum mechanical stabilization of a collapsing Bose-Bose mixture, Phys. Rev. Lett. 115 (2015) 155302.
4 G. E. Astrakharchik, B. A. Malomed, Dynamics of one-dimensional quantum droplets, Phys. Rev. A 98 (2018) 013631.
3 L. Chomaz, S. Baier, D. Petter, M. J. Mark, F. Wachtler, L. Santos, F. Ferlaino, Quantum-fluctuation-driven crossover from a dilute Bose–Einstein
condensate to a macrodroplet in a dipolar quantum fluid, Phys. Rev. X 6 (2016) 041039.
6 E. Shamriz, Z. Chen, B. A. Malomed, Suppression of the quasitwo-dimensional quantum collapse in the attraction field by the
Lee-Huang-Yang effect, Phys. Rev. A 101 (2020) 063628.
5 Y. V. Kartashov, B. A. Malomed, L. Torner, Metastability of quantum droplet clusters, Phys. Rev. Lett. 122 (2019) 193902.
8 Z. Luo, W. Pang, B. Liu, Y. Li, B. A. Malomed, A new form of liquid matter: Quantum droplets, Front. Phys. 16 (2021) 32201.
7 M. Tylutki, G. E. Astrakharchik, B. A. Malomed, D. S. Petrov, Collective excitations of a one-dimensional quantum droplet, Phys. Rev. A 101 (2020)
051601(R).
10 Y. V. Kartashov, B. A. Malomed, L. Torner, Structured heterosymmetric quantum droplets, Phys. Rev. Res. 2 (2020) 033522.
9 Y. V. Kartashov, V. V. Konotop, D. A. Zezyulin, L. Torner, Bloch oscillations in optical and Zeeman lattices in the presence of spin–orbit coupling, Phys.
Rev. Lett. 117 (2016) 215301.
11 C. Cabrera, L. Tanzi, J. Sanz, B. Naylor, P. Thomas, P. Cheiney, L. Tarruell,
Quantum liquid droplets in a mixture of Bose–Einstein condensates, Science 359 (2018) 301.
12 P. Cheiney, C. R. Cabrera, J. Sanz, B. Naylor, L. Tanzi, L. Tarruell, Bright
soliton to quantum droplet transition in a mixture of Bose–Einstein condensates, Phys. Rev. Lett. 120 (2018) 135301.
13 I. Ferrier-Barbut, H. Kadau, M. Schmitt, M. Wenzel, T. Pfau, Observation
of quantum droplets in a strongly dipolar Bose gas, Phys. Rev. Lett. 116 (2016) 215301.
14 D. Edler, C. Mishra, F. Wächtler, R. Nath, S. Sinha, L. Santos, Quantum
fuctuations in quasi-one-dimensional dipolar Bose–Einstein condensates, Phys. Rev. Lett. 119 (2017) 050403.
15 T. D. Lee, K. Huang, C. N. Yang, Eigenvalues and eigenfunctions of a Bose
system of hard spheres and its lowtemperature properties, Phys. Rev. 106 (1957) 1135.
ob1 H. Kadau, M. Schmitt, M. Wentzel, C. Wink, T. Maier, I. Ferrier-Barbut, and T. Pfau, Observing the Rosenzweig instability of a quantum ferrofluid, Nature 530,
194 (2016).
ob2 M. Schmitt, M. Wenzel, F. Büttcher, I. Ferrier-Barbut, and T. Pfau, Self-bound droplets of a dilute magnetic quantum liquid, Nature 539, 259 (2016).
22 V. Cikojević, L. V. Markić, G. E. Astrakharchik, J. Boronat, Universality in ultradilute liquid Bose-Bose mixtures, Phys. Rev. A 99 (2019) 023618.
23 V. Cikojević, K. Dzelalija, P. Stipanovic, L. V. Markić, J. Boronat, Ultradilute quantum liquid drops, Phys. Rev. B 97 (2018) 140502(R).
Li G. Li, X. Jiang, B. Liu, Z. Chen, B. A. Malomed, and Y. Li, Two-dimensional anisotropic vortex quantum droplets in dipolar Bose-Einstein condensates. Front. Phys., 19 (2024) 22202.
liyy Y. Li, Z. Chen, Z. Luo, C. Huang, H. Tan, W. Pang, B.A. Malomed, Two-dimensional vortex quantum droplets, Phys. Rev. A 98 (2018) 063602.
25 M.N. Tengstrand, P. Stürmer, E.Ö. Karabulut, S.M. Reimann, Rotating binary
Bose–Einstein condensates and vortex clusters in quantum droplets, Phys. Rev. Lett. 123 (2019) 160405.
26 Z. Zhou, X. Yu, Y. Zou, H. Zhong, Dynamics of quantum droplets in a
one-dimensional optical lattice, Commun. Nonlinear Sci. Numer. Simul. 78 (2019) 104881.
27 B. Liu, H. Zhang, R. Zhong, X. Zhang, X. Qin, C. Huang, Y. Li, B.A. Malomed,
Symmetry breaking of quantum droplets in a dual-core trap, Phys. Rev. A 99 (2019) 053602.
28 Z. Zhou, B. Zhu, H. Wang, H. Zhong, Stability and collisions of quantum
droplets in -symmetric dual-core couplers, Commun. Nonlinear Sci. Numer. Simul. 91 (2020) 105424.
29 J. Song, Z. Yan, Dynamics of 1D and 3D quantum droplets in
parity-time-symmetric harmonic-Gaussian potentials with two competing nonlinearities, Physica D 442 (2022) 133527.
pinn M. Raissi, P. Perdikaris, G. E. Karniadakis. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations, J. Comput. Phys. 378 (2019) 686.
pinn1 S. Goswami, C. Anitescu, S. Chakraborty, T. Rabczuk, Transfer learning enhanced
physics informed neural network for phase-field modeling of fracture, Theor. Appl. Fract. Mech. 106 (2020) 102447.
pinn2 A. D. Jagtap, K. Kawaguchi, G. E. Karniadakis, Adaptive activation functions accelerate convergence in deep and physics-informed neural networks, J. Comput. Phys.
404 (2020) 109136.
pinn3 X. Meng, Z. Li, D. Zhang, G. E. Karniadakis, PPINN: parareal physics-informed neural network for time-dependent PDEs, J. Comput. Phys. 370 (2020) 1132.
deepxde L. Lu, X. Meng, Z. Mao, G.E. Karniadakis, DeepXDE: a deep learning library for solving differential equations, SIAM Rev. 63 (2021) 208–228.
deepritz W. E, B. Yu, The deep Ritz method: a deep learning-based numerical algorithm for
solving variational problems, Commun. Math. Stat. 6 (2018) 1–12.
pnet Z. Long, Y. Lu, X. Ma, B. Dong, PDE-net: learning PDEs from data, in: Proceedings
of the 35th International Conference on Machine Learning, in: PMLR, vol. 80, 2018,
pp. 3208–3216.
pnet1 Z. Long, Y. Lu, B. Dong, PDE-net 2.0: learning PDEs from data with a numericsymbolic hybrid deep network, J. Comput. Phys. 399 (2019) 108925.
twostage S. Lin, Y. Chen, A two-stage physics-informed neural network method based on
conserved quantities and applications in localized wave solutions, J. Comput. Phys. 457 (2022) 111053.
pinndq J. Pu, Y. Chen, Complex dynamics on the one-dimensional quantum droplets via
time piecewise PINNs, Physica D 454 (2023) 133851.
yan1 L. Wang, Z. Yan, Data-driven peakon and periodic peakon solutions and parameter discovery of some nonlinear dispersive equations via deep learning, Physica D 428 (2021) 133037.
yan2 J. Song, Z. Yan, Deep learning soliton dynamics and complex potentials recognition for 1D and 2D -symmetric saturable nonlinear Schrödinger equations, Physica D 448 (2023) 133729.
yan3 Z. Zhou, Z. Yan, Solving forward and inverse problems of the logarithmic nonlinear Schrödinger equation with -symmetric harmonic potential via deep learning,
Phys. Lett. A 387 (2021) 127010.
yan4 L. Wang, Z. Yan, Data-driven rogue waves and parameter discovery in the defocusing
NLS equation with a potential using the PINN deep learning, Phys. Lett. A 404 (2021) 127408.
yan5 M. Zhong, S. Gong, S.-F. Tian, Z. Yan, Data-driven rogue waves and parameters discovery in nearly
integrable PT-symmetric Gross–Pitaevskii equations via PINNs deep learning, Physica D 439 (2022) 133430.
yan6 Z. Zhou, Z. Yan, Is the neural tangent kernel of PINNs deep learning general partial
differential equations always convergent? Physica D 457 (2024) 133987.
chen1 H. Zhou, J. Pu, Y. Chen, Data-driven forward–inverse problems for the variable
coefficients Hirota equation using deep learning method, Nonlinear Dyn. 111 (2023) 14667–14693.
li1 J.H. Li, B. Li, Mix-training physics-informed neural networks for the rogue waves of nonlinear Schrödinger equation, Chaos, Solitons and Fractals 164 (2022) 112712.
pet T. I. Lakoba and J. Yang, A generalized Petviashvili iteration method forscalar and vector Hamiltonian equations with arbitrary form of
nonlinearity, J. Comp. Phys. 226 (2007) 1668-1692.
it J. Yang and T. I. Lakoba, Accelerated imaginary-time evolution methods for the computation of solitary waves, Stud. Appl. Math. 120 (2008)
265-292.
49 J. Yang, Newton-conjugate-gradient methods for solitary wave computations,
J. Comput. Phys. 228 (2009) 7007–7024.
51 J. Yang and T. I. Lakoba, Universally-convergent squared-operator iteration
methods for solitary waves in general nonlinear wave equations, Stud. Appl. Math.
118 (2007) 153–197.
52 J. Yang, Nonlinear Waves in Integrable and
Nonintegrable Systems (SIAM, 2010).
IINN J. Song, M. Zhong, G. E. Karniadakis, and Z. Yan, Two-stage initial-value iterative physics-informed neural networks
for simulating solitary waves of nonlinear wave equations, J. Comput. Phys. 505 (2024) 112917.
fourwell J. Song, H. Dong, D. Mihalache, Z. Yan, Spontaneous symmetry breaking, stability and adiabatic changes of 2D
quantum droplets in amended Gross–Pitaevskii equation with multi-well potential, Physica D 448 (2023) 133732.
spin J. Song, Z. Yan, B. A. Malomed, Formations and dynamics of two-dimensional spinning asymmetric quantum
droplets controlled by a -symmetric potential, Chaos 33 (2023) 033141.
50 M. J. Ablowitz, Z. H. Musslimani, Spectral renormalization method for computing self-localized solutions to nonlinear systems, Opt. Lett. 30 (2005) 2140–2142.
adam D. Kingma, J. Ba, Adam: a method for stochastic optimization, 2014, arXiv:1412.6980.
bfgs D.C. Liu, J. Nocedal, On the limited memory BFGS method for large scale optimization, Math. Program. 45 (1989) 503–528.
xPPINN K. Shukla, A. Jagtap, G. E. Karniadakis, Parallel physics-informed neural networks via domain decomposition. J Comput Phys. 447 (2021) 110683.
PPINN X. Meng, Z. Li, D. Zhang, G. E. Karniadakis, PPINN: Parareal physics-informed neural
network for time-dependent PDEs, Comput. Methods Appl. Mech. Eng. 370 (2020) 113250.
ednn Y. Du, T. Zaki, Evolutional deep neural network, Phys. Rev. E, 104 (2021) 045303.
longdeeponet S. Wang, P. Perdikaris, Long-time integration of parametric evolution equations with
physics-informed DeepONets, J. Comput. Phys. 475 (2023) 111855.
|
http://arxiv.org/abs/2409.02965v1 | 20240904021732 | Do We Trust What They Say or What They Do? A Multimodal User Embedding Provides Personalized Explanations | [
"Zhicheng Ren",
"Zhiping Xiao",
"Yizhou Sun"
] | cs.SI | [
"cs.SI",
"cs.IR",
"cs.LG"
] |
2022
Copyright for this paper by its authors.
Use permitted under Creative Commons License Attribution 4.0
International (CC BY 4.0).
CIKM MMSR'24: 1st Workshop on Multimodal Search and Recommendations at 33rd ACM International Conference on
Information and Knowledge Management, October 25, 2024, Boise, Idaho, USA
1]Zhicheng Ren[
[email protected],
]
[1]Aurora Innovation, 280 N Bernardo Ave, Mountain View, CA 94043
[1]
2]Zhiping Xiao[
[email protected],
]
[2]University of Washington,
1410 NE Campus Pkwy, Seattle, WA 98195
[1]
3]Yizhou Sun[
[email protected],
]
[3]University of California, Los Angeles, Los Angeles, CA 90095
[1] All work done at University of California, Los Angeles
§ ABSTRACT
With the rapid development of social media, the importance of analyzing social network user data has also been put on the agenda.
User representation learning in social media is a critical area of research, based on which we can conduct personalized content delivery, or detect malicious actors.
Being more complicated than many other types of data, social network user data has inherent multimodal nature.
Various multimodal approaches have been proposed to harness both text (i.e. post content) and relation (i.e. inter-user interaction) information to learn user embeddings of higher quality. The advent of Graph Neural Network models enables more end-to-end integration of user text embeddings and user interaction graphs in social networks.
However, most of those approaches do not adequately elucidate which aspects of the data – text or graph structure information – are more helpful for predicting each specific user under a particular task, putting some burden on personalized downstream analysis and untrustworthy information filtering.
We propose a simple yet effective framework called Contribution-Aware Multimodal User Embedding () for social networks.
We have demonstrated with empirical evidence, that
our approach can provide personalized explainable predictions, automatically mitigating the impact of unreliable information.
We also conducted case studies to show how reasonable our results are. We observe that for most users, graph structure information is more trustworthy than text information, but there are some reasonable cases where text helps more. Our work paves the way for more explainable, reliable, and effective social media user embedding which allows for better personalized content delivery.
Multi-modal representation learning, Social network analysis, User embeddings
Do We Trust What They Say or What They Do? A Multimodal User Embedding Provides Personalized Explanations
[
September 9, 2024
=========================================================================================================
§ INTRODUCTION
The advancement of social networks has placed the analysis and study of social network data at the forefront of priorities. User-representation learning is a powerful tool to solve many critical problems in social media studies. Reasonable user representations in vector space could help build a recommendation system <cit.>, conduct social analysis <cit.>, detect bot accounts <cit.>, and so on. To obtain user-embeddings of higher quality, many multimodal methods are proposed to fully utilize all types of available information from the social networks, including interactive graphs, user profiles, images, and texts from their posts <cit.>. Compared with models using single modality data, multimodal methods utilize more information from the social-media platforms, and hence usually achieve better results in downstream tasks.
Among all modalities in social networks, user-interactive graphs (i.e., what they do) and text content (i.e., what they say) are the two most frequently-used ones, due to their good availability across different datasets and large amount of observations. The graph-neural-network (GNN) models <cit.> makes it more convenient to fuse both the text information and graph-structure information of social-network users, where text-embeddings from language-models such as GloVe <cit.> or BERT <cit.> are usually directly incorporated into GNNs as node attributes. Although those approaches have achieved great performance in a bunch of downstream tasks <cit.>, the text information and graph-structure information are fully-entangled with each other, which makes it hard to illustrate the two modalities' respective contributions to learning each user's representation.
It is already found by researchers that different groups of users can behave very differently on social media <cit.>. If such differences are not correctly captured, it might cause significant bias in the user attribute prediction (e.g., political stance prediction) <cit.>. Hence, when learning multi-modal user-representation, it is not only important to ask what the prediction results are, but also important to ask why we are making such predictions for different users (e.g. Are those predictions due to the same reason?).
Only in that way, we could provide more insights into the user modelings, and potentially enable unbiased and personalized downstream analysis for different user groups.
On the other hand, under a multi-modality setting, if one aspect of a user's data is not trustworthy and misleading, it might still be fused into the model and make the performance lower than single-modality models <cit.>.
Consider the case when we want to make a political ideology prediction for Elon Musk based on his Twitter content before 2020 U.S presidential election (Figure <ref>), when he has not revealed his clear Republican political-stance yet. If we trust the follower-followee graph structure information, we can see that he is likely to be a Republican since he follows more Republicans than Democrats, and has more frequent interactions with the verified Republicans accounts. However, in his tweet content, his word choice also shows some Democratic traits. Due to the existence of such conflicting information, being able to automatically identify which modality is more trustworthy for each individual becomes essential in building an accurate social media user embedding for different groups of users.
To address the above two shortcomings of text-graph fusion in social networks, we propose a simple yet effective framework called Contribution-Aware Multimodal User Embedding (), which can identify and remove misleading modality from specific social network users during text-graph fusion, in an explainable way. CAMUE uses a learnable attention module to decide whether we should trust the text information or the graph structure information when predicting individual user attributes, such as political stance. Then, the framework outputs a clear contribution map for each modality on each user, allowing personalized explanations for downstream analysis and recommendations. For ambiguous users whose text and graph structure information disagree, our framework could successfully mitigate the unreliable information among different modalities by automatically adjusting the weight of different information accordingly.
We conduct experiments on the TIMME dataset <cit.> used for a Twitter political ideology prediction task. We observed that our contribution map can give us some interesting new insights. A quantitative analysis of different Twitter user sub-groups shows that link information (i.e., interaction graph) contributes more than text information for most users. This provides insights that political advertising agencies should gather more interaction graph information of Twitter users in the future when creating personalized advertisement content, instead of relying too much on their text data. We also observe that when the graph and text backbone are set to R-GCN and GloVe respectively, our approach successfully ignores the unreliable GloVe embedding and achieves better prediction results. When the text modality is switched to a more accurate BERT embedding, our framework can assign graph/text weights for different users accordingly and achieve comparable performance to existing R-GCN-based fusion methods. We pick 9 celebrities among the 50 most-followed Twitter accounts [<https://socialblade.com/twitter/top/100>], such as Elon Musk.
A detailed qualitative analysis of their specific Twitter behaviors shows that our contribution map models their online behaviors well. Finally, we run experiments on the TwiBot-20-Sub dataset <cit.> used for a Twitter human/bot classification task, showing that our framework could be generalized to other user attribute prediction tasks. By creating social media user embeddings that are more explainable, reliable, and effective, our framework enables improved customized content delivery.
§ PRELIMINARIES AND RELATED WORK
§.§ Multimodal Social Network User Embedding
Social network user embedding is a popular research field that aims to build accurate user representations. A desirable user embedding model should accurately map sparse user-related features in high-dimensional spaces to dense representations in low-dimensional spaces. Multimodal social network user embedding models utilize user different types of user data to boost their performance. Commonly-seen modality combinations include graph-structure (i.e. link) data and text data <cit.>, graph-structure data and tabular data <cit.>, and graph-structure data, text data and image data altogether <cit.> <cit.>, etc.
Among those multi-modality methods, the fusion of graph-structure data and text data has always been one of the mainstream approaches for user embedding. At an earlier stage, without much help from the GNN models, most works trained the network-embedding and text-embedding separately and fused them using a joint loss <cit.>. With help of the GNN models, a new type of fusion method gained popularity, where the users' text-embeddings are directly incorporated into GNNs as node attributes <cit.>.
Despite their good performance, all existing models did not explain how much the graph structure or text information of a particular user contributed to its final prediction result, make it difficult to give customized modality weight for downstream analysis or recommendations. Also, if one modality is very poorly learned, it can be counter-effective to the user embedding quality, make it even worse than their single-modality counterparts <cit.>. How to address this problem in a universally-learned way instead of heuristic-based information filtering, has largely gone under-explored. Hence, we propose a framework that would not only utilize both text and graph-structure information together but also reveal their relative importance along with our prediction result.
§.§ Graph Neural Network
Graph Neural Network (GNN) refers to a collection of deep learning models that learn node embedding through iterative aggregation of information from neighboring nodes, using a convolutional operator. The majority of the GNN architectures include a graph convolution layer in a form that can be characterized as message-passing and aggregation. A general formula for such convolution layers is:
H^(l) = σ(AH^(l-1)W^(l)) ,
where H^(l) represents the hidden node representation of all nodes at layer l, operator σ is a non-linear activation function, and the graph-convolutional filter A is a matrix that usually takes the form of a transformed (e.g., normalized) adjacency matrix A, and the layer-l's weight W^(l) is learnable.
In the past few years, GNN models have reached SOTA performances in various graph-related tasks, and are widely regarded as a very promising technique to generate node embedding for users in social-network graphs. <cit.>.
§.§ Neural Network-based Language Models
The field of natural language processing has undergone a significant transformation with the advent of neural-network-based language models. Word2Vec <cit.> introduced two architectures: Continuous Bag-of-Words (CBOW) and Skip-Gram. CBOW predicts a target word given its context, while Skip-Gram predicts context words given a target word. GloVe <cit.> model went beyond by incorporating global corpus statistics into the learning process. ELMo <cit.> was another significant step forward, as it introduced context-dependent word representations, making it possible for the same word to have different embeddings if the context is different. BERT <cit.> is a highly influential model that is built on the transformer architecture <cit.>, pre-trained on large text corpora using, for example, masked language modeling and next-sentence prediction tasks. Recently, large language models like GPT-3 <cit.>, InstructGPT <cit.>, and ChatGPT have achieved significant breakthroughs in natural-language-generation tasks. All of those large language models (LLMs) are frequently used to generate text embedding for social network users.
Our framework does not rely on any specific language model, and we do not have to use LLMs. Instead, we use language-models as a replaceable component, making it possible for either simpler ones like GloVe or more complicated ones like BERT to fit in. We will explore some different options in the experimental section.
§.§ Multimodal Explanation Methods
In the past, several methods have been proposed to improve the interpretability and explainability of multimodal fusions <cit.>. Commonly used strategies include attention-based methods <cit.>, counterfactual-based methods <cit.>, scene graph-based methods <cit.> and knowledge graph-based methods <cit.>. Unfortunately, most of them focus on the fusion of image modality and text modality, primarily the VQA task, while to the best of our knowledge, no work focuses on improving the explainability between the network structure data and text data in social-network user embedding.
§ PROBLEM DEFINITION
Our general goal is to propose a social network user embedding fusion framework that could answer: 1. which modality (i.e. text or graph structure, saying or doing) contributes more to our user attribute prediction, hence allowing more customized downstream user behavior analysis and 2. which modality should be given more trust for each user, and automatically filter out the untrustworthy information when necessary, in order to achieve higher-quality multi-modal user-embedding.
§.§ Problem Formulation
A general framework of our problem could be formulated as follows: given a social media interaction graph 𝒢 = (𝒱,ℰ) with node set 𝒱 representing users and edge set ℰ representing links between users. Let X = [x_1, x_2, x_3, ⋯, x_n] be the text content of n = |𝒱| users, Y = [y_1, y_2, y_3, ⋯, y_n] be the labels of those users, A = [ A^1, A^2, ⋯, A^m] be the adjacency matrices of 𝒢, m be the number of link types and A^i ∈ℝ^n × n, our training objective is:
min 𝔼[ ℒ(f (𝒢, X), Y) ]
Here, ℒ is the loss of our specific downstream task, and f is some function that combines the graph structure information and text information, producing a joint user embedding.
§.§ Preliminary Experiment
To investigate the effectiveness of the existing GNN-based multimodal fusion methods in filtering the unreliable modality when the graph structure and text contradict, we run experiments using a common fusion method that feeds the fine-tuned BERT features into the R-GCN backbone, similar to the approaches in <cit.> and <cit.>. We observe that this conventional fusion method fails to filter the unreliable information for some of those ambiguous users. Table <ref> show two politicians whose Twitter data contains misleading information, either in the graph structure or text data. While the single-modality backbones which are trained without misleading information give the correct predictions, the multi-modality fusion method is fooled by the misleading information and is not able to make correct predictions.
These insights revealed the importance of having a more flexible and explainable framework for learning multimodal user embedding.
§ METHODOLOGY
We propose a framework of Contribution-Aware
Multimodal User Embedding (), a fusion method for text data and graph structure data when learning user embedding in social networks. The key ingredient of this framework is an attention gate-based selection module which is learned together with the link and text data, and decide which information we want to trust more for each particular user.
Our framework has three main parts: a text encoder, a graph encoder, and an attention-gate learner. The text content of each user passes through the text encoder and generates a text embedding for that user. The embedding is then passed through a three-layer MLP for fine-tuning. The adjacency matrix of the users passes through the graph encoder and generates a node embedding for that user. At the same time, both the text embedding and the graph adjacency matrix pass through our attention gate learner. The output of this module is two attention weights, α and β, which control the proportion of our graph structure information and text information. Without loss of generality, if we make R-GCN our graph encoder and BERT our text encoder, our model will be trained in the following way (Equation 3-6, also illustrated in Figure <ref>):
H^(1) = σ(concat(A^1 + A^2 + ⋯ + A^m, BERTemb(X)W^(1))
H^(2) = σ(H^(1)W^(2))
[e_α, e_β] = H^(2)W^(3)
α = softmax(e_α), β = softmax(e_β)
Where H and W are hidden layers and weights of our attention gate learner, X = [x_1, x_2, x_3, ⋯, x_n] is the text content, BERTemb is the BERT encoding module, A = [ A^1, A^2, ⋯, A^m] is the adjacency matrices of 𝒢 and m is the number of link types.
Then, our overall training objective becomes:
min 𝔼 [ ℒ ((α + λ) R-GCNemb(𝒢)
+ (β +λ) BERTemb(X), Y
) ]
Here, λ acts as a regularizer to ensure our model is not overly dependent on a single modality.
Our methods offer two levels of separation. First, we separate the text encoder and graph encoder to allow better disentanglement on which data contributes more to our final prediction results. Second, we separate the learning of the downstream tasks and the learning of which data modality (i.e. text or graph structure) we can rely on more. This makes our framework adaptable to different downstream social media user prediction tasks. The learned trustworthiness of different modalities allows for auto-adjustment of the weight between graph structure and text modalities, hence filtering any unreliable information once they are discovered.
Figure <ref> shows the overall architecture of our framework, note that the graph structure encoder and text encoder could be replaced by any other models which serve the same purposes.
We give a short complexity analysis of our architecture for the case of R-GCN + BERT: Since we are using sparse adjacency matrix for R-GCN, the graph encoder part has a complexity of 𝒪(L_graphEF_graph + L_graphNF_graph^2) (according to <cit.>), where L is the number of layers, E is the number of edges, N is the number of nodes, and F is the feature dimension. Since we fixed the maximum text length to be a constant for the text encoder, it has a complexity of 𝒪(F_text^2) (based on <cit.>). Since F_text and F_graph are about comparable size, our fusion module has the complexity of 𝒪(F_graph^2 + F_text^2), so the overall complexity is 𝒪(L_graphEF_graph + L_graphNF_graph^2 + F_text^2), hence we are not adding extra time complexity.
§ EXPERIMENTS
§.§ Tasks and Datasets
We run experiments on two Twitter user prediction tasks: 1. Predicting the political ideology of Twitter users (Democrat vs Republican) and 2. Predicting whether a Twitter user account is a human or a bot.
§.§.§ TIMME
TIMME <cit.> introduced a multi-modality Twitter user dataset as a benchmark of political ideology prediction task for Twitter users. TIMME contains 21,015 Twitter users and 6,496,112 Twitter interaction links. Those links include follows, retweets, replies, mentions, and likes, together they form a large heterogeneous social network graph. TIMME also contains 6,996,310 raw Twitter content from those users. Hence, it will be a good dataset to study different fusion methods of text features and graph structure features. In TIMME, there are 586 labeled politicians and 2,976 randomly sampled users with a known political affiliation. Some of them are ambiguous users we investigated before. Labeled nodes belong to either Democrats or Republicans. Note that the dataset cut-off time is 2020, so the political polarity of many public figures (e.g. Elon Musk) have not been reviewed at that time.
§.§.§ TwiBot-20-Sub
TwiBot-20 <cit.> is an extensive benchmark for Twitter bot detection, comprising 229,573 Twitter accounts, of which 11,826 are labeled as human users or bots. The dataset also contains 33,716,171 Twitter interaction links and 33,488,192 raw Twitter content. The links in TwiBot-20 include follows, retweets, and mentions. To further examine the generalizability of our method, we run experiments for Twitter bot account detection on the TwiBot-20 dataset. To reduce the computation cost of generating node features and text features, we randomly subsample 3,000 labeled users and 27,000 unlabeled users from the TwiBot-20 dataset, and form a new dataset called TwiBot-20-Sub. In this way, the size and label sparsity of the TwiBot-20-Sub dataset becomes comparable with the TIMME dataset.
§.§.§ Train-test Split
We split the users of both datasets into an 80%:10%:10% ratio for the training set, validation set, and test set respectively.
§.§ Implementation Detail
To test the effectiveness of our framework across different models, we choose two single-modality text encoders, GloVe and BERT, and two single-modality graph encoders, MLP and R-GCN.
The GloVe embedding refers to the Wikipedia
2014 + Gigaword 5 (300d) pre-trained version. [glove.6B.zip from <https://nlp.stanford.edu/projects/glove/>] The BERT embedding refers to the sentence level ([CLS] token) embedding of BERT-base model <cit.> after fine-tuning the pre-trained model's parameters on the tweets from our training set consisting of 80% of the users. We chose a max sequence length of 32. After the encoding, we have a 300-dimension text embedding for GloVe and a 768-dimension text embedding for BERT.
We choose a modified version of R-GCN from TIMME <cit.> as an R-GCN graph encoder. R-GCN <cit.> is a GNN model specifically designed for heterogeneous graphs with multiple relations. In the TIMME paper, it is discovered that assigning different attention weights to the relation heads of the R-GCN model could improve its performance. Hence, we adopt their idea and use the modified version of R-GCN. We did not use the complete TIMME model since it is designed for multiple tasks which is outside our research scope, and will overly complicate our model.
We also choose a 3-layer MLP as another graph encoder for comparison, the adjacency list for each user is passed to the MLP.
Large language models (LLMs) like ChatGPT are powerful in understanding texts, but they usually have a great number of parameters, making traditional supervised fine-tuning a hard and costly task <cit.>. Instead, less resource-intensive methods like few-shot learning, prompt tuning, instruction tuning, and chain-of-thought are more frequently used to adapt LLMs on specific tasks <cit.>. We do not use large language models as one of the options for the text encoder since those methods are not compatible with our framework – they do not provide a well-defined gradient to train our attention gate learner.
We run experiments on a single NVIDIA Tesla A100 GPU. We used the same set of hyper-parameters as in the TIMME paper, with the learning rate being 0.01, the number of GCN hidden units being 100, and the dropout rate being 0.1, on a PyTorch platform. For a fair comparison, we run over 10 random seeds for each algorithm on each task.
§ RESULTS AND ANALYSIS
§.§ Contribution Map
To show that our framework could essentially provide personalized explanations during the fuse of modalities, we draw the contribution map based on α (graph weight) and β (text weight) attention for users from each dataset. The darker the color is, the weight of the corresponding modality is closer to 1. In the contribution map, pure white indicates a zero contribution (0) from a modality, while pure dark blue indicates a full contribution (1).
The top figure of Figure <ref> shows the contribution map output when the text encoder is BERT and the graph encoder is R-GCN, on a subgroup of the TIMME dataset consisting of some politicians and some random Twitter users. To avoid any misuse of personal data information, we hide the names of random Twitter users and only include politicians whose Twitter accounts are publically available at [<https://tweeterid.com/>]. As we can see, there is a clear cut between the percentage of contributions from different modalities to the final prediction. It is notable that for the two ambiguous politician users we mentioned earlier (Ryan Costello and Sheldon Whitehouse), could give correct attention, where we should trust more text data from Mr. Costello while trusting more graph structure data from Mr. Whitehouse.
The bottom figure of Figure <ref> shows the contribution map output when the text encoder is GloVe and the graph encoder is R-GCN, on the same subgroup of the TIMME dataset. Note that for all shown users text information does not contribute to the final prediction. This could be attributed to the fact that GloVe is not very powerful for sentence embedding, especially when the text is long. This contribution map shows us our framework filters out the text modality almost completely when it is not helpful for our user embedding learning. As we can see from table <ref>, the traditional fusion method for GloVe+R-GCN only yields an accuracy of 0.840, which is much lower than the single graph structure modality prediction (0.953) using R-GCN, due to unreliable GloVe embedding. In contrast, our CAMUE method obtains a higher accuracy (0.954) than the single modality models by disregarding the unreliable information.
Figure <ref> shows the contribution map output for the same set of encoders on a subgroup of the Twibot-20-Sub dataset. There is also a clear cut between the percentage of contributions from different modalities, for both the human Twitter accounts and bot accounts.
Hence, we verify that our framework could both provide personalized modality contribution and drop low-quality information during the fuse of modalities. Some quantitative analysis of how this low-quality information filtering could benefit the general model performance could be found in the next section, and some qualitative analysis about what new insights we could gain from the output of our framework could be found in the case study section.
§.§ General Performance
Table <ref> shows the performance of on different combinations of encoders. The traditional fusion method in Figure <ref> is denoted as “simple fusion”. For MLP, we do not have such a natural fusion method. We also add “, fixed params” as an ablation experiment to prove the effectiveness of our attention gate-based selection module.
We observe that within those combinations, sometimes simple fusion methods are significantly worse than single-modality methods (e.g. GloVe+R-GCN vs R-GCN only) due to some untrustworthiness in one of the modalities. However, any fusion under our framework always performs better than their respective single modality methods. That suggests that our algorithm can benefit from attending to the more reliable modality between text and graph structure, if one particular modality is not trustworthy (e.g. GloVe embedding), and learning not to consider it when making predictions (as we can see in Figure <ref>, bottom).
It is also notable that our method outperforms “, fixed params”. These results suggest that adjusting the weight of different modalities dynamically yields better performance than fixed weights of modalities. Finally, when the text modality is
switched to a more accurate BERT embedding, our framework still gives comparable performance to its corresponding simple fusion methods.
§.§ Case Studies
User Sub-groups Table <ref> gives a quantitative analysis when the text encoder is BERT and the graph encoder is R-GCN, for different sub-groups of Twitter users we are interested in. In general, graph structure information contributes the most when it comes to bot accounts. One possible explanation for this is the variety of bot accounts on Twitter, such as those for business advertising, political outreach, and sports marketing <cit.>. Bots with different usage might talk very differently, however, they may share some common rule-based policies when trying to interact with humans on Twitter <cit.>.
Graph structure information contributes the second highest when it comes to politicians. This is also not surprising since politicians are generally more inclined to retweet or mention events related to their political parties <cit.>. It is also notable that the weight of text information for Republicans is slightly less than it is for Democrats. This aligns with the findings in <cit.> that Democrats have a slightly more politically polarized word choice than Republicans.
For random users, the weight of text information is the largest, although still not as large as the weight of graph structure information. This could be attributed to the pattern that many random users interact frequently with their non-celebrity families and friends on Twitter, who are more likely to be politically neutral.
Table <ref> shows some predicted political stances and the main contributing modalities of a group of news agencies. We could see that the majority of them have more reliable graph structure information than text information. This is not surprising since most news agency tends to use neutral words to increase their credibility, hence it is hard to gather strong political stances from their text embedding, except for some of them like Fox News and Guardian which are known to use political polarized terms more often <cit.>. Our framework is able to capture this unique behavior pattern for Fox News and Guardian, meanwhile giving mostly accurate political polarity predictions aligning with results in <cit.> and [<https://www.allsides.com/media-bias/ratings>].
To conclude, we are able to obtain customized user behavior patterns through our multi-modality fusion. Those patterns could provide insights on which modality we should focus on more for different type of users, for downstream tasks such as personalized recommendations, social science analysis, or malicious user detection.
Selected Celebrities from TIMME Dataset Since we are not allowed to disclose regular Twitter users' information, we instead selected 9 celebrities among the top 50 most followed Twitter accounts from [<https://socialblade.com/twitter/top/100>], whose Twitter accounts appear in the TIMME dataset, as a case study to show our frameworks' capability to give personalized explanations. We run the political polarity prediction task on those people and obtained some predictions. We also record the percentage of text information and percentage of graph structure information that contributes to their political polarity prediction (See Figure <ref>).
* Elon Musk: Before 2020 (dataset cut-off), Elon Musk's political views in his tweet text content are often complex. He has claimed several times not to take the viewpoints in his tweets too seriously [<https://twitter.com/elonmusk/status/1007780580396683267>]. This aligns with the low contribution weight of his texts on his political stand prediction. However, on the graph level, 66.67% of the politicians Elon Musk liked have liked Trump at least once, which is significantly larger than the average number in the TIMME dataset (23.67%). This could be a strong reason why our graph structure weight is so high and why we predict Elon Musk to be Republican-leaning. Our prediction is proved correct when in 2022 (which is beyond our dataset cut-off time, 2020 <cit.>), Elon Musk claimed that he would vote for Republicans in his tweet [<https://twitter.com/elonmusk/status/1526997132858822658>]. This is a strong indicator that our framework is using correct information.
* LeBron James: In his tweets, LeBron James frequently shows his love and respect to Democratic President Obama [<https://twitter.com/KingJames/status/1290774046964101123>, <https://twitter.com/KingJames/status/1531837452591042561>]. Our prediction for him to be Democrat-leaning with a strong text contribution aligns with this observation.
* Lady Gaga: Similarly to James, Lady Gaga also expresses explicitly in her tweets about her support of Democratic candidates [<https://twitter.com/ladygaga/status/1325120729130528768>]. Our graph weight becomes 0 for her case, meaning that using the text alone we could make sure she is Democrat-learning.
* Bill Gates: He usually avoids making explicit statements about whether he supports Democrats or Republicans in his tweets. Although our model predicts him as the Republican, the probability edge is very marginal (11%).
* Oprah Winfrey: During the 2016 presidential campaign, she retweeted and mentioned her support for Democratic candidate Hillary Clinton frequently [<https://twitter.com/Oprah/status/780588770726993920>], making the graph structure information a strong indicator of her Democratic stance.
* Jimmy Fallon: Jimmy Fallon has managed to maintain a sense of political neutrality in his tweets. His text contribution to the final prediction is 0. Even though the Twitter graph structure information indicates that he is Democrat-leaning, we still do not know for sure in real life whether he is a Democrat or Republican.
* Katy Perry: Just like Oprah Winfrey, Katy Perry also interacted with and supported Hillary Clinton during the 2016 election, a reason why we predict her as Democrat-leaning from the graph structure. Although she supports some republican politicians in 2022 [<https://twitter.com/katyperry/status/1533246681910628352>], that is beyond the dataset cutoff.
* Justin Timberlake: Justin Timberlake has frequent positive interactions with President Obama [<https://twitter.com/jtimberlake/status/1025867320407846912>] and firmly supports Hillary Clinton in his tweets [<https://twitter.com/jtimberlake/status/768191007036891136>], both suggesting that he is Democrat-learning. Our model assigns a similar weight to text and graph structure, suggesting that both of them are effective in making that prediction.
* Taylor Swift: In the case of Taylor Swift, the model fails to give the correct prediction. Her tweets show that she voted for Biden during 2020 [<https://twitter.com/taylorswift13/status/1266392274549776387>], but the prediction is Republican. One reason is that at the graph structure level, the majority of Taylor Swift's followers are classified as Republican (67.09 %) in the dataset, which could mislead the graph encoder.
Overall, we find out that for those celebrities, graph structure information is usually more useful when making political polarity predictions. That aligns with the quantitative results in table <ref>. As we see, different celebrities could have very different behavior patterns, and those patterns could be correctly captured and explained by our contribution weight. That confirms the effectiveness of our framework.
§ CONCLUSION
In this paper, we investigate some potential limitations of existing fusion methods for text information and graph structure information in user representation learning from social networks. We then propose a contribution aware multimodal social-media user-embedding with a learnable attention module. Our framework can automatically determine the reliability of text and graph-structure information when learning user-embeddings. It filters out unreliable modalities for specific users across various downstream tasks. Since our framework is not bound to any specific model, it has great potential to be adapted to any graph-structure-embedding component and text-embedding component, if affordable. More importantly, our models can give a score on the reliability of different information modalities for each user. That gives our framework great capability for personalized downstream analysis and recommendation. Our work can bring research attention to identifying and removing misleading information modality due to differences in social network user
behavior, and paves the way for more explainable, reliable, and effective social media user representation learning.
Some possible future extensions include adding more modalities other than text and graphs (e.g., image and video data from user's posts). Also, we consider the user identities to be static throughout our analysis, which might not be the case in many scenarios. We can bring time as a factor to produce a multi-modality dynamic social media user embedding. For example, it is possible to observe that a user's text content is more trustworthy in the first few months, and then that user's interactive graph structure information or interaction becomes more reliable in longer-terms.
|
http://arxiv.org/abs/2409.02886v1 | 20240904172033 | Exploring cosmological gravitational wave backgrounds through the synergy of LISA and ET | [
"Alisha Marriott-Best",
"Debika Chowdhury",
"Anish Ghoshal",
"Gianmassimo Tasinato"
] | astro-ph.CO | [
"astro-ph.CO",
"hep-ph",
"hep-th"
] |
=1
plots/
equationsection
3pt0pt∼
1pt<
4pt1pt∼
1pt>
1.1
#1 .3ex#1-.75em1ex∼
|
http://arxiv.org/abs/2409.03446v1 | 20240905115241 | Light-curve analysis and shape models of NEAs 7335, 7822, 154244 and 159402 | [
"Javier Rodríguez Rodríguez",
"Enrique Díez Alonso",
"Santiago Iglesias Álvarez",
"Saúl Pérez Fernández",
"Alejandro Buendia Roca",
"Julia Fernández Díaz",
"Javier Licandro",
"Miguel R. Alarcon",
"Miquel Serra-Ricart",
"Noemi Pinilla-Alonso",
"Francisco Javier de Cos Juez"
] | astro-ph.EP | [
"astro-ph.EP"
] |
firstpage–lastpage
Automatic occlusion removal from 3D maps for maritime situational awareness
Felix Sattler1, Borja Carrillo Perez1, Maurice Stephan1, Sarah Barnes1
1German Aerospace Center (DLR), Institute for the Protection of Maritime Infrastructures,
Bremerhaven, Germany
Received Month dd, yyyy; accepted Month dd, yyyy
===============================================================================================================================================================================================
§ ABSTRACT
In an attempt to further characterise the near-Earth asteroid (NEA) population we present 38 new light-curves acquired between September 2020 and November 2023 for NEAs (7335) 1989 JA, (7822) 1991 CS, (154244) 2002 KL6 and (159402) 1999 AP10, obtained from observations taken at the Teide Observatory (Tenerife, Spain). With these new observations along with archival data, we computed their first shape models and spin solutions by applying the light curve inversion method. The obtained rotation periods are in good agreement with those reported in previous works, with improved uncertainties. Additionally, besides the constant period models for (7335) 1989 JA, (7822) 1991 CS and (159402) 1999 AP10, our results for (154244) 2002 KL6 suggest that it could be affected by a Yarkovsky–O’Keefe–Radzievskii–Paddack acceleration with a value of υ≃ -7×10^-9 rad d^-2. This would be one of the first detections of this effect slowing down an asteroid.
asteroids: general – minor planets, asteroids: individual: (7335) 1989 JA – minor planets, asteroids: individual: (7822) 1991 CS – minor planets, asteroids: individual: (154244) 2002 KL6 – minor planets, asteroids: individual: (159402) 1999 AP10 – techniques: photometric
§ INTRODUCTION
The known near-Earth asteroid (NEA) population is growing at a pace of ∼ 200 new asteroids discovered every month, with a total of 34467 NEAs discovered as of 8 March of 2024, according to the Center for Near Earth Object Studies (CNEOS)[<https://cneos.jpl.nasa.gov/stats/totals.html>], with surveys such as the Asteroid Terrestrial-impact Last Alert System (ATLAS; ), the Panoramic Survey Telescope and Rapid Response System (Pan-STARRS; ), the Catalina Sky Survey (CSS: ) or the Lowell Observatory Near-Earth-object Search (LONEOS; ) among others being the responsible of this number of discoveries[See <https://www.minorplanetcenter.net/iau/lists/MPDiscsNum.html> for a full list of objects discovered per survey.]. Since its number is high and constantly growing, most of them barely have estimations of their rotation period and diameter. Among this group lies another important subgroup know as Potentially Hazardous Asteroids (PHAs), which is even more interesting because its Minimum Orbit Intersection Distance (MOID) is less than 0.05 AU. This makes this subgroup dangerous to Earth because of a possible collision. In addition, from a future resource exploitation perspective, NEAs are extremely important due to their potential as resource sources. Their periodic close approaches to Earth make them ideal targets for extraction missions.
The characterization of the physical properties of the NEA population is one of the hot topics in asteroid research. Particularly important is the determination of their rotational properties (rotation period and pole) and shape. A widely used technique for determining the rotational properties and shapes of asteroids is light-curve inversion as we did in <cit.>.
In this work we present the light-curves and derive the rotational properties and shape for NEAs (7335) 1989 JA, (7822) 1991 CS and (159402) 1999 AP10. The observations were obtained in the framework of the Visible NEAs Observations Survey (ViNOS; ). Targets were selected because they have also observations done using radar techniques with the aim of providing complementary data. Among the asteroids studied in this work, 7335 and 7822 are PHAs, their MOID and absolute magnitude (H) are 0.022203 and 17.8, and 0.021713 and 17.292 respectively, values are below the threshold for PHAs according to CNEOS[<https://cneos.jpl.nasa.gov/about/neo_groups.html>] of MOID ≤ 0.05 AU and H ≤ 22.
For computing the shape models, we opted for the Convex Inversion Method as detailed in <cit.> and <cit.>. This method generates convex models with its spin parameters from a set of light-curves, which can be either dense, sparse or a suitable or appropriate set of both. Dense light-curves is data collected from high cadence observations, taken in spans of hours during a single night; this kind of data is typically acquired in the frame of follow-up programmes, as in ViNOS. The other kind, sparse light-curves, is data collected from observations taken during several nights, in a low cadence with a temporal span from months to years, typically extracted from sky patrol programmes such as ATLAS, the Near-Earth Object Wide-field Infrared Survey Explorer (NEOWISE; ) or Pan-STARRS.
For this work, all data consisted of dense light-curves, as our new observations were conducted by the ViNOS project. Obtaining several light-curves of an asteroid at different epochs is crucial for the proper application of the inversion methods. At different epochs the viewing geometry, in particular the aspect angle (the angle between the observer's line of sight and the asteroid's rotation axis) varies, providing diverse perspectives of the asteroid's shape and surface features, which is reflected in significant differences in the light-curve shapes, especially in their amplitude. Single-epoch observations, or observations done at similar aspect angles can only provide limited and potentially misleading information about the object's shape and rotational characteristics. Unlike main belt asteroids, the viewing geometry of NEAs varies over a short period of time as they pass close to Earth, which in some cases allows for obtaining reasonable shape models with data collected during a single close approach.
Interestingly, NEAs can be affected by the Yarkovsky () and the YORP <cit.> effects. The Yarkovsky effect is caused by the thermal re-emission of the incident solar radiation over the asteroid´s surface, which is then re-emitted, changing the orbit's semi major axis, increasing it for prograde rotation asteroids and decreasing it for retrograde ones, while the YORP effect produce changes in the asteroid spin state due to the anisotropic character of the thermal re-emission. Determining asteroid rotational variations due to YORP provide information on its thermal properties. As of 18 April 2024 there are only 8 asteroids confirmed to be affected by YORP and another candidate: (6489) Golevka <cit.>, (1862) Apollo <cit.>, (54509) 2000 PH5 <cit.>, (1620) Geographos <cit.>, (25143) Itokawa <cit.>, (1685) Toro, (3103) Eger and (161989) Cacus <cit.>, while the candidate is (2100) Ra-Shalom <cit.>.
The paper is organized as follows: Sec. <ref> presents the observations and data reduction, Sec. <ref> describes the methodology used to study the light-curves and derive information on the asteroid rotation period and shape, in Sec. <ref> we present the results and discuss them for each asteroid and finally the conclusions are presented in Sec.<ref>.
§ OBSERVATIONS, DATA REDUCTIONS AND LIGHT-CURVES
Photometric observations were obtained using five telescopes located at Teide Observatory (TO, Tenerife, Canary Islands, Spain), the IAC80, the TTT1 and TTT2 (Two Meter Twin Telescope), and the TAR2 and TAR4 telescopes. The observational circumstances are shown in Table <ref>.
The Two-meter Twin Telescope facility (TTT) is located at the Teide Observatory (latitude: 28 18' 01.8" N; longitude: +16 30' 39.2" W; altitude: 2386.75 m), on the island of Tenerife (Canary Islands, Spain). Currently, it includes two 0.8m telescopes (TTT1 and TTT2) on altazimuth mounts. Each telescope has two Nasmyth ports with focal ratios of f/4.4 and f/6.8, respectively. The observations were made using QHY411M[<https://www.qhyccd.com/>] CMOS cameras <cit.> installed on the f/4.4 focus of each telescope. The QHY411M are equipped with scientific Complementary Metal–Oxide–Semiconductor (sCMOS) image sensors with 14K x 10K 3.76 μm pixel^-1 pixels. This setup provide an effective FoV of 51.4^'×38.3^' (with an angular resolution of 0.22" pixel^-1). Images were taken using the Luminance filter, that covers the 0.4 to 0.7 μm wavelength range, and the exposure time was dynamically set between to ensure a signal-to-noise ration (S/N) higher than 50. Images were bias and flat-field corrected in the standard way.
The IAC80 is a 82 cm telescope with an f/D = 11.3 in the Cassegrain focus. It is equipped with the CAMELOT-2 camera, a back-illuminated e2v 4K x 4K pixels CCD of 15 µm^2 pixels. This setupt provide a plate scale of 0.32 arcsec/pixel, and a field of view of 21.98 x 22.06 arcmin^2. Images were obtained using the Sloan r filter with the telescope in sidereal tracking, so the individual exposure time of the images was selected such the asteroid trail were smaller than the typical FWHM of the IAC80 images (∼ 1.0 ). The images were bias and flat-field corrected in the standard way.
TAR2 is a 46 cm f/D = 2.8 in the prime focus robotic telescope. Until July 2022 TAR2 was equipped with a SBIG STL 11000 CCD camera since then replaced by a FLI-Kepler KL400 camera. The SBIG STL 11000 has a front illuminated 4008 x 2672 píxeles CCD with 9μm pixel size. The FLI-Kepler KL400 camera has a back illuminated 2K x 2K pixels GPixel GSense400 CMOS with a pixel size of 11 μ m^2.
TAR4 is a 40cm MEADE 16" f/D = 2.8 in the Cassegrain focus robotic telescope, equipped with a FLI-Kepler KL400 camera. Images of both TAR telescopes were bias, dark and flat-field corrected in the standard way.
To obtain the light-curves we did aperture photometry of the final images using the Photometry Pipeline[<https://photometrypipeline.readthedocs.io/en/latest/>] (PP) software <cit.>, as we did in <cit.>. The images obtained with the L-filter were calibrated to the r SLOAN band using the Pan-STARRS catalogue while the other images were calibrated to the corresponding bands for the filters used.
§ LIGHT-CURVE INVERSION METHOD
In this study, we follow the same methodology as outlined in <cit.>, to derive pole solutions and morphology of the asteroids. Two codes were used for this purpose, the first is the code publicly available at the Database of Asteroid Models from Inversion Techniques (DAMIT; ) which generates models with constant rotation period (P) This code was previously used in studies by <cit.> among others. From now on, we will refer to this code as "No YORP" code. The second code is a modification of the former (No YORP code), allowing the rotation period to constantly change over time. This modification enables the code to take into account the YORP effect that may be affecting the asteroid. This code, previously used in studies such as <cit.>, will be referred as "YORP code". In contrast with the No YORP code, the YORP code is not publicly available and was provided by Josef Ďurech.
Next, the key parameters for the model creation are discussed. One crucial parameter needed to generate accurate models that fit the data is the rotation period (P), representing the time taken for an asteroid to complete a spin around its rotation axis considering the background stars as a reference frame. The ecliptic coordinates to which the spin axis points to, are Lambda (λ) and Beta (β) (its ecliptic longitude and latitude respectively), being their ranges 0^∘≤λ≤ 360^∘ and -90^∘≤β≤ 90^∘. It is possible to obtain a solution with a duality in λ, that is, the code could offer two solutions with 180^∘ of separation in λ values, while having almost identical value for β.
By computing the values of λ and β from the best fitting model, and the asteroid's inclination (i), longitude of ascending node (Ω) and the argument of pericenter (ω), obtained from the Horizons System <cit.> the obliquity (ϵ) can be calculated. If 0^∘≤ϵ≤ 90^∘, the asteroid would be a prograde rotator, whereas otherwise (90^∘ < ϵ≤ 180^∘) it would be retrograde.
The data used for computing the models was acquired from two sources: the new light-curves presented in Section <ref> and Table <ref>, and the light-curves hosted on the Asteroid Lightcurve Data Exchange Format (ALCDEF; ) database. Observational circumstances of the archival data from ALCDEF for asteroids 7822, 154244 and 159402 are detailed in Tables <ref>, <ref> and <ref>, respectively.
The initial step in the model generation process is to determine a P to adopt as initial value in the code. To accomplish this, we used the tool provided with both codes for this purpose. This tool identifies the best fitting period to the light-curves. The code, along other default parameters, requires the interval of periods to perform the search, a coefficient p of the period step and the convexity regularization weight (d), used to maintain the dark facet area below 1%. For each asteroid, an initial search was made around the synodic period of the light-curves to ensure the global minimum in terms of χ^2. Once the global minimum is found, another search is performed around it to refine the P that will be used as initial P for the subsequent models.
It is worth noting that the P values obtained for the asteroids in this study were in line with those already published and available on the ALCDEF database.
Upon defining the initial value for P for the asteroid, the following procedure is followed independently of the code used (No YORP or YORP code). Both codes, as the period search tool, have several parameters that were left as default, only modifying the λ, β, P, d value if the model presents a value > 1% in the dark facet area, and setting the YORP value to υ=1×10^-8 in the YORP code.
Initially, a medium resolution search is made across the entire sphere (0^∘ < λ≤ 360^∘, -90^∘≤β≤ 90^∘) with 5^∘ steps (∼ 2700 poles) and the obtained value of P. This medium search yields an initial solution in terms of χ^2. These initial poles are then reduced to the number of observations used for each asteroid, resulting in a solution in terms of χ_red^2 as follows:
χ_red^2=χ^2/ν
In Equation <ref>, χ_red^2 is the reduced χ^2 to the number of degrees of freedom ν, which in this case is the number of measures used in the model for each asteroid minus the number of parameters (∼ 100) <cit.>. When reducing the χ^2 to the data, the value of χ_red^2 ≃ 1 means that that the model matches the data almost perfectly, but this is usually not possible to be achieved since the observations are not perfect and contain some uncertainties.
Subsequently, a fine search is performed, narrowing down the area searched to a 30^∘ x 30^∘ square centered on the lowest χ_red^2 solution, with 2^∘ steps (∼ 250 poles) and the value for the initial period provided with that solution. Again, as in the previous search the poles are too in terms of χ^2, so the same process of reduction is done obtaining a final solution in terms of χ_red^2.
As neither the No YORP nor the YORP code compute the uncertainties of the best-fitting solution, a bootstrapping approach is adopted to fix it. This approach involves creating 100 subsets of light-curves from the initial set, randomly removing 25% of the measurements in it, since the data sets were large enough (∼ 3000 measurements). For each of these 100 subsets, a fine search was applied around the best solution from the medium search in terms of χ_red^2, yielding 100 solutions. Then the mean (which is almost identical to the best solution from the fine search with the main set of measurements) and the standard deviation (3σ level) were calculated, with the latter adopted as the uncertainty of the solution.
In an attempt to further validate the obtained results for asteroids affected by YORP effect, the method proposed in <cit.> was followed. This method yields a value for υ and its uncertainty at 3σ level. To apply this method, the YORP code is iterated, with all the values fixed on those obtained in the best fine solution, except the υ value, adopting the lowest value of υ in terms of χ_red^2 obtained by this method as the final solution (again the final value is almost identical to the one obtained in the best solution).
§ RESULTS & DISCUSSION
In the next subsections, the methods discussed in Section <ref> are applied for each asteroid individually and the results for each of them are discussed (see Table <ref> for a summary of the results).
§.§ (7335) 1989 JA
This asteroid belongs to the Apollo groupneo_groups, the asteroids in this group have a semi-major axis (a) > 1.0 AU and a perihelion distance (q) < 1.017 AU. There are two diameter values reported using WISE data: 0.932±0.153 km <cit.> and 0.73±0.02 km <cit.>.
Radar observations using Arecibo and Goldstone between May 4 and June 9, 1999, were reported in <cit.> and conclude that it has an effective diameter within a factor of two of 1 km and a rotation period less than half a day.
It is remarkable that this asteroid had also a close approach to Earth during May 2022, in which Goldstone Radar, besides confirming the estimated diameter by WISE (0.7 km), discovered a small satellite with a diameter between 100 and 200 meters with an orbital period of about 17 hours[<https://echo.jpl.nasa.gov/asteroids/1989JA/1989JA.2022.goldstone.planning.html>].
Asteroid 7335 has neither any light-curves published on the ALCDEF database nor any shape model published. Therefore, in this work, we present its first shape model determination, for which we used only the light-curves acquired by ViNOS. These 14 light-curves cover a temporal span of 39 days from 12 April 2022 to 21 May 2022. Other light-curves obtained during the same 2022 close approach were obtained and used to determine its rotation period: P=2.58988 ± 0.00005 h (Pravec 2022web[<https://www.asu.cas.cz/ asteroid/07335_2022a_p1.png>]), P=2.5900 ± 0.0002 h <cit.>, P=2.588 ± 0.001 h <cit.>, P=2.592 ± 0.006 h <cit.> and P=2.588 ± 0.001 h <cit.>.
Taking into account the previously obtained periods, we performed several searches in their proximity, finding in most cases a minimum in P=2.590536 h, as shown in Figure <ref>. This value was adopted as the initial period to the No YORP code. Since the temporal span is so small, it is not a candidate to detect whether it is affected by YORP or not; nevertheless, the YORP code was run without success.
With the initial period, we ran the code obtaining a medium solution with P=2.590421 h, λ = 250^∘, β = -60^∘, χ_red^2=1.05 (see Figure <ref> for a representation of the distribution of the best fitting solutions obtained in this medium search), and then a fine solution around that one of P=2.590543 h, λ = 243^∘, β = -61^∘, χ_red^2=1.04 and ϵ≃ 147^∘, which implies a retrograde rotation. The shape model presented in Figure <ref> is the best fitting to the data (see Figure <ref> and Figure <ref> is a graphical representation of the fit between light-curves and shape model).
In order to obtain the uncertainties for the solution, 100 subsets from the main data set (∼ 3300 measures) were created randomly removing 25% of the measurements. After running the code in a fine search around the best medium solution the following result were obtained: P=2.590432 ± 0.000391 h λ = 243^∘± 17^∘, β = -61^∘± 6^∘ and ϵ = 147^∘± 8^∘. It is worth mentioning that more observations covering a wider range of viewing geometries are needed to compute a more consistent model.
Finally we looked for mutual events between 7335 and its satellite. To do that we plotted the observed intensity minus the model intensity versus the Julian date (JD) of the observations. Results are shown in Figures <ref> and <ref>.
We find four possible mutual events in the light-curves obtained on 27/4/2022, 3/5/2022, 5/5/2022 and 21/5/2022, where drops in the intensity are visible, only one (3/5/2022) is complete and lasted ∼ 2 hr, while the other three would be partially recorded. More data is needed to try to obtain reliable values for the orbital elements of the satellite and the relative sizes of both asteroids. The object will be observable again between September and November 2029 with medium-size telescopes.
§.§ (7822) 1991 CS
In previous studies this asteroid has been classified as an S-type <cit.>. Dynamically is a member of the Apollo group. Radar observations of 7822 were obtained on August 1996 at Goldstone <cit.> concluded that the objects do not present a very elongated pole-on silhouette as the determine that "the hulls have a mean elongation and rms dispersion of 1.18 ± 0.02 and place a
lower bound on the maximum pole-on dimension of 1.3 km/cos(δ), where δ is the angle between the radar line-of-sight and the asteroid’s apparent equator".
Other reported diameter are 0.83 km <cit.>; 1.602±0.012 km <cit.> and 0.712±0.179 km <cit.>.
For this asteroid, there are 18 light-curves published on ALCDEF with a temporal span from 6 August 2015 to 23 August 2021, which were added to the new 6 presented in this work taken from 7 February 2022 to 5 March 2022. As for the previous object, there are five published periods: P=2.391 ± 0.001 h <cit.>, P=2.389 ± 0.001 h <cit.>, P=2.392 ± 0.002 h <cit.>, P=2.3896 ± 0.0005 h (Pravec 2021web[<https://www.asu.cas.cz/ asteroid/1991cs_c1.png>]) and P=2.388 ± 0.002 h <cit.>, but not a shape model, so the one presented in this work is the first shape model for this asteroid.
With both archival data and the new light-curves (24 light-curves in total) the period search tool was run in an interval between 2.38 and 2.40 h, finding as the best fitting period P=2.390159 h. As shown in Figure <ref>, there are more periods that falls under the 10% threshold from the lowest χ^2, which implies that more data with different viewing geometries is needed from future observations to obtain a more precise value. It is worth noting that all the values under this threshold are among the accepted periods already published (2.388-2.391 h).
With this initial value of P=2.390159 h, the medium search was made with the No YORP code, obtaining the following initial solution: P=2.390159 h, λ = 230^∘, β = -60^∘, χ_red^2=1.12 (in Figure <ref> a representation of the best fitting solutions is shown). With this initial solution, a fine search was performed, obtaining: P=2.390157 h, λ = 240^∘, β = -55^∘, ϵ≃ 175^∘, which implies again retrograde rotation, and with χ_red^2=1.10 as the best fitting solution (see Figure <ref> for a shape model of the best solution and Figures <ref> and <ref> for a fit between the best solution and the data). Is worth mentioning that since there are several periods under the threshold, models were created with the most significant of them as the initial period, but all of them resulted in worse χ_red^2 than the presented model. Although the temporal span is not sufficiently large (approximately 8 years), we still applied the YORP code to this asteroid. As expected, the YORP effect was not detected.
For the uncertainties the subsets were created from the main one (∼ 3000 measures), removing randomly 25% of the measurements, then run the code around the best fitting medium solution, obtaining: P=2.390157 ± 0.000002 h λ = 242^∘± 9^∘, β = -57^∘± 7^∘ and ϵ = 175^∘± 7^∘.
§.§ (154244) 2002 KL6
This asteroid belongs to the Amor groupneo_groups, which has a > 1 AU and 1.017 < q < 1.3 AU. This asteroid has been reported as belonging to the taxonomic groups Q <cit.> or Sq <cit.>, and its diameter has been estimated to be within a factor of two of 1 km (between 0.5 and 2 km) by its optical albedo of 0.18 [<https://echo.jpl.nasa.gov/asteroids/2002KL6/2002KL6_planning.html>]. Radar observations of 154244 were obtained using Arecibo and Goldstone radiotelescopes during July 2016[<http://mel.epss.ucla.edu/radar/object/info.php?id=a0154244>], but results are not published yet.
Regarding 154244 rotation period, it was already studied in previous works, yielding the following results: P=4.6063 ± 0.0002 h <cit.>, P=4.607 ± 0.001 h, P=4.6081 ± 0.0003 h, P=4.610 ± 0.002 h and P=4.605 ± 0.002 h <cit.>, P=4.60869 ± 0.00005 h <cit.>; P=4.609 ± 0.005 h <cit.>, P=4.607 ± 0.001 h <cit.>, P=4.608 ± 0.001 h, P=4.6052 ± 0.0003 h, P=4.6060 ± 0.0004 h and P=4.6096 ± 0.0006 h <cit.>, P=4.610 ± 0.001 h <cit.>. It is also worth noting that <cit.> published a shape model and a pole solution with P=4.610233 ± 0.000002 h, λ = 129^∘± 10^∘, β = -89^∘± 10^∘.
In this work, the archival data available on ALCDEF (53 light-curves) covering a temporal span from 18 June 2009 to 5 August 2023, was used along with our 9 new light-curves (observations from 20 July 2023 to 11 October 2023). With this set of light-curves, a period search was made in a interval between 4.605 and 4.611 h, obtaining a best fitting period of P=4.610235 h as shown in Figure <ref>.
With this initial period, the medium search was made with the No YORP code, obtaining two paired solutions: P=4.610235 h, λ = 330^∘, β = -90^∘, χ_red^2=1.16 and P=4.610235 h, λ = 150^∘, β = -90^∘, χ_red^2=1.16 (see Figure <ref> for a graphical representation of the medium pole search). These paired solutions are 180^∘ away from each other in terms of λ, and the value of β means that its uncertainty may be causing this change in λ due to its position in the sphere, which could imply that it is the same solution. After these medium results, one fine search was made around each of them, obtaining the following results: P=4.610235 h, λ = 150^∘, β = -90^∘, ϵ∼ 178^∘, χ_red^2=1.12 and P=4.610235 h, λ = 330^∘, β = -89^∘, ϵ∼ 177^∘, χ_red^2=1.12 (see Figures <ref> and <ref> for shape models of both solutions and Figures <ref>, <ref> and <ref>, <ref> for the fit to the data respectively). In both solutions, ϵ implies a retrograde rotation.
To calculate the uncertainties, a random 25% of the main data were removed (∼ 5000 measurements) creating the 100 subsets to be iterated around the 2 obtained medium solutions, yielding the following results for each of them: P=4.610235 ± 0.000001 h, λ = 334^∘± 28^∘, β = -90^∘± 4^∘ and ϵ = 176^∘± 3^∘ (iteration around λ=330^∘, β=-90^∘, P=4.610235 h) and P=4.610235 ± 0.000001 h, λ = 153^∘± 21^∘, β = -90^∘± 4^∘ and ϵ = 177^∘± 3^∘ (iteration around λ=150^∘, β=-90^∘, P=4.610235 h), again both solutions imply retrograde rotation. One of the two obtained results is pretty close to the one already mentioned from <cit.>, the difference in λ may be related to our new data from 2023.
Since the data set time span is relatively large (∼ 14 years), it was worth an attempt to check if the the asteroid is affected by YORP. As the obtained period (P=4.610235) fitted good enough the data, it was used as the initial period to the YORP code. The medium search obtained again two solutions: P=4.610232 h corresponding to 18 June 2009, λ = 335^∘, β = -90^∘, υ = -7.48×10^-9 rad d^-2, χ_red^2=1.12 and P=4.610232 h corresponding to the same date as the previous model, λ = 155^∘, β = -90^∘, υ = -7.48×10^-9 rad d^-2, χ_red^2=1.12 (see Figure <ref> for a graphical representation of the medium pole search). As in the No YORP code medium search, the solutions are 180^∘ away from each other. A fine search around each of those solutions was made with the following results: P=4.610232 h, λ = 151^∘, β = -90^∘, υ = -7.07×10^-9 rad d^-2, ϵ∼ 178^∘, χ_red^2=1.09 and P=4.610232 h, λ = 330^∘, β = -89^∘, υ = -6.95×10^-9 rad d^-2, ϵ∼ 178^∘, χ_red^2=1.08 (see Figures <ref> and <ref> for shape models of both solutions and Figures <ref>, <ref> and <ref>, <ref> for the fit to the data respectively).
Again the uncertainties were calculated in the same way as previously, obtaining: P=4.610232 ± 0.000001 h, λ = 333^∘± 18^∘, β = -89^∘± 2^∘, υ = (-7.14±1.93)×10^-9 rad d^-2 and ϵ = 177^∘± 2^∘ (iteration around λ=335^∘, β=-90^∘, P=4.610233 h) and P=4.610232 ± 0.000001 h, λ = 152^∘± 15^∘, β = -90^∘± 2^∘, υ = (-7.12±1.65)×10^-9 rad d^-2 and ϵ = 177^∘± 2^∘ (iteration around λ=155^∘, β=-90^∘, P=4.610233 h).
As an alternate way of estimating the uncertainty of the YORP effect, the 3σ method explained in Section <ref> was applied, iterating the values of υ around the best solutions ((P=4.610232 h, λ = 151^∘, β = -90^∘) and (P=4.610232 h, λ = 330^∘, β = -89^∘)) between -1.4×10^-9 and 0 with steps of 0.05×10^-9. The solutions obtained with this method were υ = (-6.83±2.70)×10^-9 rad d^-2 and υ = (-7.02±2.70)×10^-9 rad d^-2 respectively (See Figure <ref>) which is in agreement with the obtained solutions.
We present in this work four solutions, two with constant rotation period and the other two with linearly increasing rotation period, all of which fit the data well, but slightly better if the YORP effect is taken into account, but more future observations are needed to confirm these results. One hint of the YORP effect being present is the value of ϵ, in this case of ϵ∼ 178^∘, which is pretty close to the extreme value of 180^∘, which is a known consequence of this effect taking place <cit.>. If this asteroid is indeed affected by YORP, it would be another addition to the short list of asteroids affected by it. What is even more important is the negative value of υ, which implies that unlike all the other asteroids known to be under YORP effect, (154244) 2002 KL6 is being decelerated rather than accelerated. This could be the second occasion an asteroid is reported to be decelerating, as it happened with (25143) Itokawa in <cit.>, where it was reported a υ = (-8.95±0.15)×10^-8 rad d^-2, but later revised in <cit.> where, with a wider temporal span an acceleration of υ = (3.54±0.38)×10^-8 rad d^-2 was deemed as the best fitting. Moreover, Itokawa was again studied in <cit.>, where a positive value of υ in the order of 10^-7 rad d^-2 was found as the best fitting. The Itokawa case is specially hard to study since that asteroid is a contact binary, which is not the situation for our asteroid.
We also followed the method proposed in <cit.> where the modulus of the YORP effect can be estimated following the equation |dω/dt| = 1.20^+1.66_-0.86× 10^-2 (a^2√(1 - e^2)D^2)^-1, where a is the semi major axis in AU, while e is the eccentricity and D is the asteroid's diameter in km. We computed that equation for the smallest and the biggest diameters (0.5 km and 2 km as explained in Section <ref> introduction), being a=2.307249 AU, e=0.548644, obtaining ν=8.1^+11.2_-5.8× 10^-8 rad d^-2 for D=0.5 km and ν=5.1^+7.0_-3.6× 10^-9 rad d^-2 for D=2.0 km. We find that a diameter > 1 km makes our obtained result to be in agreement with this estimation, while a value of ∼ 1.7 km makes it the mean estimated value.
To support this claim regarding the negative value, our method to calculate the uncertainties can be useful, since for the 100 computed models created with light-curves with random measures removed the value of υ was consistently negative.
§.§ (159402) 1999 AP10
This asteroid also belongs to the Amor group and its diameter has been reported to be 1.20 km <cit.> and 1.20±0.29 km <cit.>, while its spectral class is reported as: Sq <cit.>, Sw <cit.> and in the S complex <cit.>.
Radar observations were obtained with Arecibo on October 2009 and with Goldstone on October 2020 but there is no report of the results published published yet[<http://mel.epss.ucla.edu/radar/object/info.php?search=159402>].
As the other asteroids this one's period was already studied, with the following results: P=7.908 ± 0.001 h <cit.>, P=7.911 ± 0.001 h <cit.>, P=7.9219 ± 0.0003 h <cit.>, P=7.9186 ± 0.0004 h and P=7.9219 ± 0.0003 h <cit.> and P=7.92 ± 0.01 h, P=7.922 ± 0.004 h and P=7.919 ± 0.005 h <cit.>. As for the shape model, the one presented on this work is the first.
For finding the best fitting period to our data set of light-curves (46 light-curves obtained from ALCDEF and 3 new presented in this work), which cover a temporal span from 21 September 2009 to 22 January 2021, the period search tool was applied in an interval between 7.918 to 7.926 h, being the best fitting period P=7.921915 h (see Figure <ref> for a graphical representation of the obtained periods).
Adopting this period, a medium search was conducted, resulting in: P=7.921919 h, λ = 50^∘, β = -60^∘, χ_red^2=1.85 (see Figure <ref> for a graphical representation of the solutions) as the best fit. The fine search around the best fitting medium search solution was: P=7.921917 h, λ = 49^∘, β = -60^∘, χ_red^2=1.80 and ϵ = 155^∘, which again implies retrograde rotation. The best fitting computed shape model is shown in Figure <ref>, with Figures <ref> and <ref> showing a graphical representation of the fit between the light-curves and the shape model.
The uncertainties for the solution were obtained randomly removing 25% of the measures from the main data set (∼ 8000 measures) to create the 100 subsets, repeating the fine for each dataset around the best fitting medium search. The results obtained were: P=7.921917 ± 0.000005 h λ = 49^∘± 2^∘, β = -60^∘± 3^∘ and ϵ = 155^∘± 2^∘.
Since the temporal span is relatively large enough (∼ 14 years), as for (154244) 2002 KL6, we conducted a search with the YORP code to see if it could be detected with our data, but the attempt was not conclusive, so with the data used in this work we rule out this possibility for this asteroid.
§ CONCLUSIONS
In this work we present 38 new light-curves obtained with five different telescopes located at Teide Observatory (Tenerife, Spain), along with new the derived shape models and rotation state parameters for NEAs (7335) 1989 JA, (7822) 1991 CS, (154244) 2002 KL6 and (159402) 1999 AP10.
For (7335) 1989 JA, a rotation period of P=2.590432 ± 0.000391 h is found, which is in agreement with previous results. Additionally, a pole solution of λ = 243^∘± 17^∘, β = -61^∘± 6^∘ and ϵ = 147^∘± 8^∘ is found to be the best fitting. Additionaly, at least four mutual events of this binary system may have been identified in our dataset.
For (7822) 1991 CS, the period found as the best fitting was P=2.390157 ± 0.000002 h, again, in agreement with previous results. The best fitting pole solution obtained from the data is λ = 242^∘± 9^∘, β = -57^∘± 7^∘ and ϵ = 175^∘± 7^∘.
For (159402) 1999 AP10, we found a period of P=7.921917 ± 0.000005 h, in agreement with previously reported results, the best fitting pole solution is: λ = 49^∘± 2^∘, β = -60^∘± 3^∘ and ϵ = 155^∘± 2^∘.
For (154244) 2002 KL6, a period of P=4.610235±0.000001 h is found, with 2 pole solutions yielding the same fit to the data, λ = 334^∘± 28^∘, β = -90^∘± 4^∘, ϵ = 176^∘± 3^∘ and λ = 153^∘± 21^∘, β = -90^∘± 4^∘, ϵ = 177^∘± 3^∘. Since the time span is large enough, a search for detecting the YORP effect was made, and another 2 solutions were found to fit the data slightly better than the constant period solutions. The initial period obtained taking into account the YORP effect was P=4.610232±0.000001 h, obtaining: λ = 333^∘± 18^∘, β = -89^∘± 2^∘, υ = (-7.14±1.93)×10^-9 rad d^-2, ϵ = 177^∘± 2^∘ and λ = 152^∘± 15^∘, β = -90^∘± 2^∘, υ = (-7.12±1.65)×10^-9 rad d^-2 and ϵ = 177^∘± 2^∘. The YORP detection can not be ruled out since, as previously mentioned, the fit is slightly better, but also the uncertainties are lower than with a constant period. It is worth mentioning that in the 100 models computed to obtain the uncertainties, in all of them, the value of υ was negative. If so, (154244) 2002 KL6 would be the very first detection of YORP decelerating an asteroid.
§ ACKNOWLEDGEMENTS
We thank Dr. Josef Ďurech for providing us the inversion code that includes the Yarkovsky–O’Keefe–Radzievskii–Paddack (YORP) acceleration and for his advises in using the inversion codes.
The work has been funded by HUNOSA through the collaboration agreement with reference SV-21-HUNOSA-2.
JL, MRA and MS-R acknowledge support from the Agencia Estatal de Investigacion del Ministerio de Ciencia e Innovacion (AEI-MCINN) under grant "Hydrated Minerals and Organic Compounds in Primitive Asteroids" with reference PID2020-120464GB-100.
This article includes observations made in the Two-meter Twin Telescope (TTT) sited at the Teide Observatory of the IAC, that Light Bridges operates in the Island of Tenerife, Canary Islands (Spain). The
Observing Time Rights (DTO) used for this research were provided by IAC. This article also includes observations made with the Telescopio IAC80 and TAR2 telescopes operated on the island of Tenerife by the Instituto de Astrofísica de Canarias in the Spanish Observatorio del Teide. This work uses data obtained from the Asteroid Lightcurve Data Exchange Format (ALCDEF) data base, which is supported by funding from NASA grant 80NSSC18K0851.
This work uses software MPO LC Invert from Brian Warner to plot the asteroids shapes as seen in Figures <ref>, <ref>, <ref>, <ref>, <ref>, <ref>, <ref>.
§ DATA AVAILABILITY
The data underlying this article will be shared on reasonable request to the corresponding author.
mnras
§ SUMMARY OF ARCHIVAL LIGHT-CURVES USED IN THIS WORK
§ STATISTICAL PLOT OF POLE SOLUTIONS
§ MODEL AND DATA FIT
§ 7335 SATELLITE DETECTION
|
http://arxiv.org/abs/2409.02992v1 | 20240904180003 | Ephemeral Superconductivity Atop the False Vacuum | [
"Gal Shavit",
"Stevan Nadj-Perge",
"Gil Refael"
] | cond-mat.supr-con | [
"cond-mat.supr-con",
"cond-mat.mes-hall",
"cond-mat.str-el"
] |
Department of Physics and Institute for Quantum Information and Matter, California Institute of Technology,
Pasadena, California 91125, USA
Walter Burke Institute of Theoretical Physics, California Institute of Technology, Pasadena, California 91125, USA
Department of Physics and Institute for Quantum Information and Matter, California Institute of Technology,
Pasadena, California 91125, USA
T. J. Watson Laboratory of Applied Physics, California Institute of
Technology, 1200 East California Boulevard, Pasadena, California 91125, USA
Department of Physics and Institute for Quantum Information and Matter, California Institute of Technology,
Pasadena, California 91125, USA
§ ABSTRACT
A many body system in the vicinity of a first-order phase transition may get trapped in a local minimum of the free energy landscape.
These so-called false-vacuum states may survive for exceedingly long times if the barrier for their decay is high enough.
The rich phase diagram obtained in graphene multilayer devices presents a unique opportunity to explore transient superconductivity on top of a correlated false vacuum.
Specifically, we consider superconductors which are terminated by an apparent first-order phase transition to a correlated phase with different symmetry.
We propose that quenching across this transition leads to a non-equilibrium ephemeral superconductor, readily detectable using straightforward transport measurements.
Besides enabling a simple detection scheme, the transient superconductor also generically enhances the false vacuum lifetime, potentially by orders of magnitude.
In several scenarios, the complimentary effect takes place as well: superconductivity is temporarily emboldened in the false vacuum, albeit ultimately decaying.
We demonstrate the applicability of these claims for two different instances of superconductivity terminated by a first order transition in rhombohedral graphene.
The obtained decay timescales position this class of materials as a promising playground to unambiguously realize and measure non-equilibrium superconductivity.
Ephemeral Superconductivity Atop the False Vacuum
Gil Refael
September 9, 2024
=================================================
§ INTRODUCTION
Phase transitions in many-body correlated systems may often be succinctly described by an appropriate classical or quantum field theory <cit.>.
The equilibrium many-body ground state is identified by the global minimum of the free-energy associated with this description.
However, other locally-stable minima may exist, albeit with a higher energy density.
These minima act as the “false-vacuum” (FV) of the system, and may be long-lived due to their metastable nature.
A system can generically get trapped in the FV state when it is quenched through a first-order phase transition.
Supercooling and superheating of water are well-known classical examples of this phenomenon <cit.>, yet quantum systems such as spin chains <cit.>, superconducting wires <cit.>, and atomic superfluids <cit.>
have been show to exhibit metastable phases and FV decay.
Further, the FV concept itself has originated in the context of cosmology, where it may have truly dire implications <cit.>.
In recent years, graphene multilayers have emerged as an exciting platform with a high degree of tunability to study correlated electron phenomena, topological phases, unusual superconductivity, and their interplay <cit.>.
A recurring theme in these systems is the peculiar vicinity of the superconducting phases to symmetry-breaking phase transitions.
In several cases, the superconducting dome itself is terminated by an abrupt transition where the Fermi surface undergoes significant reconstruction <cit.>, which in some instances is strongly indicative of a first-order phase transition <cit.>.
Recently, similar phenomenology was observed in twisted bilayer WSe_2, where hysteretic behavior was observed at the boundary between superconductivity an a correlated phase <cit.>.
In this work, we propose these materials as a platform for realizing and exploring out-of-equilibrium superconductivity, which exists as a metastable phase on top of the FV manifold of the symmetry-broken phase.
This extraordinary non-equilibrium metastable state arises in the vicinity of the true vacuum symmetry-broken phase.
A useful heuristic of the sort of scenarios we discuss is presented in Fig. <ref>.
In this generic phase diagram, superconductivity and a correlated phase are in close proximity, separated by a first-order transition line.
Specifically, we are interested in cases where the correlated phase preempts superconductivity and
overtakes it.
Thus, after a sudden quench from the superconducting phase across the transition (black arrow in Fig. <ref>), there exists a possibility of a long-lived transient superconductor, realized on top of the FV.
Alternatively, we also consider quenching from a parent normal phase directly into the suppressed superconductor through a first-order transition (green arrow).
Such a protocol may allow one to
reveal buried underlying
superconductivity in such systems, masked by competing phases.
We underline the regimes where FV superconductivity are most relevant and experimentally accessible.
This is accomplished by combining microscopic calculations for two candidate materials, rhombohedral trilayer graphene (RTG) and Bernal-stacked bilayer graphene (BBG), and a field theoretical description of the FV decay phenomena.
We estimate the expected lifetimes of the ephemeral superconductors to be of the order of ∼ 100 nanoseconds, enabling straightforward detection methods, relying on time-resolved transport measurements.
Remarkably, the unusual presence of superconductivity in the FV state is what enables such simplified detection schemes in a solid-state setting.
Superconductivity provides an unambiguous transport signal – a delay between a current driven through the system and the appearance of a voltage drop.
Furthermore, we show that the incompatibility between the correlated symmetry-broken phase and the superconductor non-trivially enhances the stability of the FV and its lifetime.
As we show, this is a generic feature in scenarios where a subordinate phase develops on top of a “primary” false vacuum.
This may be understood as a result of magnification of the surface tension between the true and false vacuum states of the system.
The strength of surface tension plays a major role in determining the energetics of the FV decay.
Interestingly, the FV superconductivity may actually survive at higher temperatures compared to its equilibrium counterpart on the other side of the transition.
This transient enhancement of superconductivity comes at a cost of incurring a finite lifetime.
Generically, what drives the symmetry-breaking transition which terminates the superconductor is the density of states (DOS) near the Fermi level ν.
Clearly, it is also an important factor in determination of the superconducting properties.
For example, conventionally the superconducting transition temperature T_c∝exp(-1/uν) (u is the pairing strength).
For weak-coupling superconductors, uν≪ 1, T_c is especially sensitive to ν.
In the FV, the superconductor temporarily experiences a higher DOS while the correlated phase is suppressed.
This facilitates favorable superconducting properties, potentially beyond those available at equilibrium under similar conditions.
The rest of the paper is organized as follows.
In Sec. <ref> we present the general theoretical scheme which we use to estimate the decay of the FV, and how its lifetime depends on properties of the relevant incompatible phases.
We then apply these phenomenological tools to microscopic calculations performed for the graphene devices in Sec. <ref>, considering different symmetry-breaking possibilities, and different transition scenarios.
We propose a simple experimental transport-based scheme to characterize the transient superconductivity in Sec. <ref>.
Finally, we summarize our results and conclude the discussion in Sec. <ref>.
§ FALSE VACUUM DECAY
We consider scenarios where as a function of some tuning parameter, r (which can be a magnetic field, an electric displacement field, pressure, etc.), a system undergoes a first-order phase transition with an order parameter ϕ̂.
Within this ϕ̂-ordered phase, it is further assumed that the disordered (e.g., paramagnetic) phase
remains a metastable local minimum of the effective free energy.
Let us also examine the consequences of an additional phase, with order parameter Ψ̂, which (i) also condenses in the vicinity of the ϕ̂ phase transition, and (ii) is incompatible with the ϕ̂ phase.
Generally, we will be interested in the case where the energy scales associated with ϕ̂ dominate over those of Ψ̂, and the ϕ̂ order is the equilibrium ground state, and the so-called “true vacuum”.
The scenario we describe above may be captured by the following Ginzburg-Landau free energy functional,
F =∫ d^2x[σ/2|∇ϕ|^2+16gϕ^2(ϕ-1)^2-Bϕ^2]
+∫ d^2x[κ/2|∇Ψ|^2-a(r)/2|Ψ|^2+b/4|Ψ|^4]
+λ/2∫ d^2x|Ψ|^2ϕ^2.
Clearly, the first-order transition is governed by the parameter denoted as B.
When B<0, ϕ=0 (the paramagnetic phase where the spontaneous ordering of ϕ̂ does not yet occur)
minimizes the free energy in the absence of Ψ̂.
At B=0, the minima at ϕ=0 , 1 are degenerate, and the ordered phase takes over at B>0, where ϕ≈ 1 becomes the global minimum.
In the metastable regime B<B_c=16g, there exists a finite energy barrier between the two minima, whose strength is g right at the transition.
The stiffness σ gives rise to a surface tension between domains of ϕ=0 and ϕ≈ 1,
J_ϕ = ∫_0^ϕ_>0 dϕ√(2σ V(ϕ)),
where V(ϕ)=16gϕ^2(ϕ-1)^2-Bϕ^2, and ϕ_>0 is the maximal ϕ for which V(ϕ)>0.
The correlation length is calculated by optimizing the energy of a domain-wall for B=0 (Appendix <ref>), ξ_ϕ=√(σ/8g).
In the second part of F, a(r) is associated with the second-order transition.
When a>0, the decoupled uniform solution is |Ψ|=Ψ_0√(a/b).
Here the appropriate correlation (coherence) length is ξ_Ψ=√(κ/(2a)), and the surface tension is
J_Ψ = 8/3ξ_Ψa^2/4b.
Finally, λ>0 couples the two sectors, and precludes ϕ̂-Ψ̂ order coexistence when it is sufficiently large as compared to the energy densities g and a.
§.§ Critical droplet analysis
We consider the system described by Eq. (<ref>), initialized to the ordered (superconducting) Ψ_0 phase, yet tuned to metastability, a>0 and 0<B<B_c.
The true vacuum is ϕ≈1, yet the initial state of the system is a locally stable minimum of the free energy.
We schematically evaluate the characteristic time necessary for the system to decay to the true vacuum, adapting the methods developed by Langer <cit.> and Coleman <cit.>.
Neglecting quantum fluctuations (whose role we later discuss),
relaxation is facilitated by thermal fluctuations of ϕ̂ droplets (or “bounces” <cit.>) within the Ψ̂ bulk, see illustration in Fig. <ref>a.
The energy of a droplet with radius R may be evaluated in the so-called thin-wall limit (R≫ξ_ϕ) by <cit.>
E_ droplet =π R^2 f_ true
-π(R+δ R)^2 f_ meta
+2π R J_ϕ + 2π(R+δ R) J_Ψ,
where f_ true=-B, f_ meta=-a^2/4b, and δ R ≈(ξ_ϕ+ξ_Ψ)/2, due to the necessary suppression of the Ψ̂ preceding the ϕ̂ droplet formation.
The droplet experiences an effective force ∝∂ E_ droplet/∂ R, pushing it towards either expansion or collapse, thus determining its fate.
Thus, the droplet energy threshold, i.e., the droplet energy at which it would tend to overtake the system, will be given by
E_ thresh.=E_ droplet(R_c),
and
R_c=J_ϕ+J_Ψ-δ Rf_ meta/f_ meta-f_ true,
determined by ∂ E_ droplet/∂ R |_R_c=0 (Fig. <ref>b).
The expression for R_c accounts for two effects, which are solely due to the ordered Ψ̂ phase, on top of the FV.
First, the denominator in Eq. (<ref>) is made smaller by the presence of the finite condensation energy of Ψ̂.
In our regime of interest, as we discuss below, one usually finds |f_ true|≫|f_ meta|, and this effect is of vanishing importance.
Second, the additional J_Ψ in the numerator, associated with the surface tension of the secondary (superconducting) phase, is reasonably expected to be rather small compared to J_ϕ, the surface tension contribution of the dominant correlated order, yet not negligible.
This is because the surface tension is roughly proportional to the product of the correlation length and the energy scale associated with the ordered phase.
While the latter might be much bigger for ϕ̂ as compared to Ψ̂, this difference is usually somewhat compensated by the ration of correlation lengths, ξ_Ψ≫ξ_ϕ.
In the regime where the denominator effect is negligible, one may approximate the threshold droplet radius
R_c≈ J_ϕ/B (1+c J_Ψ/J_ϕ), where c is an order-1 numerical factor which depends on microscopic details.
As we demonstrate for the particular systems we consider below, this mechanism indeed facilitates a more stable false-vacuum in the presence of the secondary Ψ̂ phase, which in our case is a superconductor.
§.§ Effects of quantum fluctuations
Our analysis utilizes the classical Ginzburg-Landau functional, Eq. (<ref>), producing a free-energy barrier towards nucleation and eventual FV decay.
Let us discuss the role of quantum fluctuations on the phenomenon described here.
In the zero temperature limit, β=1/T→∞, any finite energy barrier completely annihilates decay through thermal fluctuations.
However, fluctuations due to the quantum nature of the ϕ̂ field may overcome this limitation.
Setting aside Ψ̂ for a moment, one interprets the ϕ̂ part of Eq. (<ref>) as the classical limit of the imaginary-time action
S_ϕ=∫_0^β dτ∫ d^dx
[ρ/2(∂_τϕ)^2+σ/2|∇ϕ|^2+V(ϕ)].
This corresponds to the imaginary-time path integral partition function
Z=∫ Dϕ e^-S_ϕ.
In the ρ→∞ limit, variations along the τ dimension are suppressed, and one recovers the classical limit (with a prefactor of β replacing ∫ dτ integration).
Performing similar nucleation calculus in Euclidean space-time (see Appendix <ref>), one finds a temperature scale, T_Q, below which the quantum decay pathway is dominant,
T_Q=3/32Bξ_ϕ/J_ϕτ_ϕ^-1,
with τ_ϕ=√(ρ/8g).
We stress that below T_Q the FV phenomenon persists, yet the lifespan of the metastable phase saturates, and remains roughly the same as the temperature is lowered further.
Notably, at any finite temperature, a small enough bias B exists such that the quantum decay is much less efficient.
The reason for this dependence on the bias B stems from the effective higher dimension of the quantum problem, where the surface of the droplet is d-dimensional, whereas classically it is (d-1)-dimensional.
As a consequence (Appendix <ref>), the threshold energy scales as B^-d in the quantum case (B^1-d classically).
Thus, when the bias between the false and true vacuum is small enough, the quantum decay pathway is highly disfavored.
Without loss of generality, we henceforth assume this is the case for our detailed microscopic analysis below.
Another interesting possibility occurs in an intermediate temperature regime, where quantum behavior dominates the ϕ̂ sector, yet Ψ̂ (now introduced back in our discussion), remains effectively classical due to its much longer correlation time τ_Ψ.
The relevant temperature regime is
τ_ϕ/τ_Ψ<T/T_Q<1.
Heuristically, it takes considerably lower temperatures (as compared to T_Q) to saturate the effects of the Ψ̂ field redistribution energetics.
This regime thus enables further enhancement of the FV stability by the competing sub-leading order, as discussed in Appendix <ref>.
Once more, this enhancement takes place even if the discrepancy between
f_ true
and
f_ meta
is of orders of magnitude, due to combination of the correlation length effect ξ_Ψ≫ξ_ϕ (already discussed above), and a ∝ T_Q/T prefactor to the relative Ψ̂ contribution.
Thus, the lower the temperature in this regime, the larger the FV life-time enhancement becomes due to the ordering competition.
§ REALIZATION IN GRAPHENE MULTILAYERS
We consider first-order phase boundaries observed in both BBG and RTG separating correlated symmetry-broken phases and superconductivity, either incipient or fully formed.
In all cases, we follow the same methodology allowing us to extract the relevant energy densities and surface tensions.
We begin with an Hamiltonian of the form
H= H_0 + H_ int,
where H_0 describes the appropriate band structure, and
H_ int=U_c/Ω∑_ qρ_ qρ_ -q
is the electron-electron interaction, which we take as short-range for simplicity (ρ_ q is the momentum-q component of the density in the relevant band, U_c is the interaction strength, and Ω is the system area).
At a given density n_ tot, we compute the free energy as a function of the relevant order parameter ϕ̂,
F(ϕ) = ⟨ H⟩_ HF,(ϕ,n_ tot)
-
⟨ H⟩_ HF,(0,n_ tot),
where ⟨⟩_ HF,(ϕ,n) is the Hartree-Fock expectation value for ϕ̂=ϕ at fixed density n.
We use the zero-temperature expression, as the temperature is assumed to be far below the ϕ̂ phase transition.
Contributions to Eq. (<ref>) due to fluctuations around the mean-field solution, as well as due to other competing instabilities are beyond the current scope of this work.
As the energy scales associated with the order parameter jumps are comparable to the Fermi energy (see below) we will approximate the correlation length as the inter-particle separation, i.e., ξ_ϕ≈ n_ tot^-1/2.
The stiffness is thus approximated by σ≈ 8 ξ_ϕ^2 F^*, with the barrier height F^*=max F( ϕ∈[0,ϕ_0 ]).
ϕ_0 is the global minima of F, and we approximate B ≈ -F(ϕ_0).
In the normal state, we evaluate two crucial quantities regarding superconductivity.
Namely, the critical temperature T_c, and the superconducting condensation energy, playing the role of a^2/(4b) in our discussion above, Eq. (<ref>).
The superconducting coherence length is taken as a phenomenological parameters from experiments, and assumed to scale with T_c in the conventional manner.
The details of our superconducting calculations are presented in Appendix <ref>, where we use general considerations and avoid making assumptions on the origin of the pairing glue.
Moreover, we refrain from pinpointing the exact mechanisms by which the transition into the correlated phase extinguishes superconductivity, and keep our discussion as general as possible.
(In Appendix <ref>. we discuss a curious scenario where the transition suppresses the superconducting phase through a combination of substantial DOS reconstruction and retardation effects.)
Intervalley coherence in rhombohedral trilayer grpahene (RTG).—
A promising candidate to observe the phenomenon introduced here is the superconductor region denoted as SC1 in Ref. <cit.>.
Its boundary in the n_ tot–D plane (where D is the perpendicular displacement field) coincides with a transition to a symmetry-broken correlated phase, which appears to be first-order <cit.>.
Though the nature of the correlated phase terminating SC1 has not been confirmed, it is somewhat constrained.
The lack of spin and orbital ferromagnetism suggests either spin-valley locking, an intervalley-coherent phase (IVC), or combination thereof.
Theoretical studies have shown the IVC to be robust and ubiquitous throughout the phase diagram <cit.>, steering our focus to the IVC case for simplicity.
In Fig. <ref>a we plot the Hartree-Fock energy landscape as a function of the IVC order parameter and displacement field.
Moving from low to high fields, one clearly observes a region of metastability: the global minimum is at ∼ 3 meV, whilst the normal state remains locally stable.
At high enough values of D, the normal state finally becomes unstable.
Next, we compute the false-vacuum decay threshold, shown in Fig. <ref>b.
As expected, close to the transition point it diverges, due to a vanishing energy difference between the FV and the true vacuum, driving the critical droplet size increasingly larger [Eq. (<ref>)].
Notably, the threshold for decay becomes much smaller when the FV state is not superconducting, due to the surface tension contribution of the superconductor.
As shown in the inset, the relative strength of J_Ψ increases, signaling two effects.
The first is a decrease in the ordered phase surface tension J_ϕ [Eq. (<ref>)] as the bias between the true and false vacuum increases.
The second is a result of enhanced superconducting T_c, owing to an enhancement of DOS near the Fermi level as one moves deeper into the ordered state.
We now turn to estimate the life-time of this false vacuum,
τ_ decay∼τ_0 e^β E_ thresh.,
where τ_0 is the much debated <cit.> fluctuation time-scale prefactor.
For simplicity, we take the worst-case-scenario, and estimate it as the time two electrons separated by a correlation length can “know about eachother”,
τ_0≈ξ_ϕ / v_F ≈ 10^-13 sec.
Taking a reasonable β E_ thresh.∼ 15-20 leads to time scales
τ_ decay∼ O(100 nsec - 10 μ sec).
Stoner blockade in Bernal bilayer graphene (BBG).—
The phase diagram of BBG has been shown to hosts a multitude of sharp phase transitions, as well as superconductivity in the presence of either an in-plane magnetic field or proximity to WSe_2 <cit.>.
The phenomenology is suggestive of a vicinity of a superconductive phase in the absence of these two perturbations, where the formation of a competing correlated phase suppresses it <cit.>.
We consider an alternative route to circumvent the presumed correlated phase, by quenching across the transition.
We focus on the vicinity of the magnetic-field-induced superconducting regime, where phenomenology is consistent with a fourfold to twofold degenerate Stoner transition.
The Hartree-Fock free energy as a function of the polarization order parameter is shown in Fig. <ref>a.
A metastable regime is clearly developed near the transition.
The normal-state energetic are quite similar to those obtained for the RTG case, and lead to similar energy threshold, even when accounting for a lower T_c of the superconductor observed in experiments (see Appendix <ref>).
There is a notable difference in this case compared to the RTG scenario above.
Here, the system is in the normal state after the quench.
It decays to the correlated phase with the characteristic τ_ decay, yet superconductivity may form more rapidly, as it does not need to overcome an energy barrier.
We estimate the superconductor formation rate as γ_Ψ̂∼Δ_0, i.e., roughly proportional to the equilibrium superconducting gap <cit.>.
Combining the decay rate γ=τ_ decay^-1, γ_Ψ̂, and the decay rate in the absence of superconductivity γ_0, we may approximate the probability of the system being in the superconducting false vacuum (Appendix <ref>),
p_ false(t)=γ_Ψ̂/γ_Ψ̂+γ_0-γ[1-e^-(γ_Ψ̂+γ_0-γ)t]e^-γ t.
As demonstrated in Fig. <ref>b, at early times superconductivity builds up at a rate ∼γ_Ψ̂ to some finite fraction, and decays at long times at the rate γ.
Our calculations indicate superconductivity should remain visible up to O(100) nsec timescales.
In principle, for the specific scenario described in this section, where the system is quenched rapidly through a superconducting phase transition, one should take into account the formation of topological defects (vortices) due to the Kibble-Zurek mechanism <cit.>.
Physically, one expect that following the quench, independently coherent domains of size ∼ξ_Ψ form,
and coalesce at a time-scale
τ_GL≈π/(8|T_c-T|)^-1 <cit.>, the Ginzburg-Landau relaxation time <cit.>.
We note that τ_GL is of the same order of γ_Ψ̂^-1.
The relaxation dynamics of these defects, as well as oscillations in the order parameter magnitude <cit.> are presumed to play a secondary role, and are thus beyond the current scope of this work.
§ DETECTING EPHEMERAL SUPERCONDUCTIVITY
Thus far we have demonstrated the stabilization of the false vacuum by superconductivity, as well as the potential enhancement of the superconducting state in the metastable regime.
We now explore another intriguing consequence of the physics studied in this work.
Namely, the superconducting nature of the false vacuum make the phenomenon readily accessible to transport measurements.
Consider an experimental setup similar to the one pioneered in Ref. <cit.>, depicted in the inset of Fig. <ref>.
The tuning parameter in the system, e.g., displacement field, is quenched from time -t_ quench to t=0.
At t=0 a current pulse is fed to the device, and the transient voltage response is recorded.
At a time t=t_ delay voltage drop is eventually observed.
originally, the experiment was tailored to measure the relaxation time of equilibrium superconductors, and thus no signal would appear if the amplitude of the current pulse is less than the superconducting critical current I_c.
Here, the situation is made significantly richer by the decay of superconductivity itself.
Under some simple yet reasonable assumptions (Appendix <ref>), we find the delay as the time at which f(t_ delay)=0, with the initial conditions f(0)=1, where f obeys the time-evolution,
τ_GL∂_tf=-e^2t/τ_ decay(I/I_c)^24/27f^3+f(1-f^2).
In Fig. <ref> we demonstrate the dependence of the voltage signal delay time on the amplitude of the injected current.
If the false vacuum state is stable enough, a noticeable uptick in the delay is made clearly visible for currents slightly below the equilibrium critical values.
At relatively small values of the probe current, the expected delay is of the same order as the false-vacuum decay time τ_ decay.
§ CONCLUSIONS
Ephemeral superconductivity is a fascinating non-equilibrium state of matter, which is yet to be fully clarified <cit.>.
Here, we explore an unusual striking phenomenon: Superconductivity developing on top of a correlated false-vacuum manifold.
In contrast with previous scenarios where superconductivity is terminated by a first-order phase transition <cit.>, the transient superconductor is temporarily protected by the correlated FV.
Furthermore, the FV can facilitate the formation of a superconductor followed bu its decay (Fig. <ref>b), in otherwise superconductivity-excluded areas of the equilibrium phase diagram.
The proposed non-equilibrium superconductor is incompatible with the true many-body ground state of the system, and is thus ephemeral albeit metastable.
We elucidated how this FV superconductor can remarkably and significantly enhance the false-vacuum lifespan, possibly by a few orders of magnitude.
This enhancement occurs even if the superconducting condensation energy is negligible as compared to that of the correlated true vacuum, as we demonstrated by utilizing a general phenomenological framework [Eq. (<ref>)].
This unique effect is rooted in a notable surface tension contribution of the superconductor to the nucleation of critical threshold droplets, which facilitate the decay <cit.>.
The prospect of FV stabilization by a generic secondary order, which is adversarial to the corresponding true vacuum, is left for future work, as well as a broader view on quantum effects and the hybrid quantum-classical regime.
We demonstrated how the ephemeral superconducting state can be realized in multilayer graphene devices.
These systems ubiquitously show the coexistence of superconducting regions in the phase diagram, and apparent first-order phase transitions to correlated symmetry-broken phases <cit.>.
On occasions, these transitions actually terminate the superconducting dome, making such devices an ideal platform to explore the intertwined false-vacuum physics.
Combining Hartree-Fock calculations for the symmetry-broken transition energetics, theory of the superconductivity transition, and phenomenology obtained from experiments, we estimate the relevant FV lifetime for observing the FV decay to be ≳ O(100 nsec),
rendering the decay process as much slower as compared to recently observed transient superconducting phenomena <cit.>.
Another intriguing prospect in these graphene systems, is that competition with a correlated phase transition arguably undermines the potential equilibrium superconducting orders.
(This is reminiscent of the competing phases in high-T_C superconductors, e.g., <cit.>.)
The multilayer graphene superconductors were observed to be stabilized by some perturbations, e.g., magnetic fields or proximity to a non-trivial substrate.
However, we have pointed out a different route for bringing these low-lying superconductors to light: transiently suppressing the symmetry-broken state by quenching the system through the first-order transition.
The motivation for such a process to be feasible is simple.
The same factor that participates in driving the transitions, the DOS near the Fermi energy, is the one that is expected to enhance superconductivity when given the chance (see related discussion in Refs. <cit.>).
An important consequence of ephemeral superconductivity “dressing” the metastable false-vacuum is that direct experimental detection of its decay becomes simple and experimentally accessible.
The long time-scales for the superconducting life-times mentioned above are in fact conducive to straightforward transport measurements.
Resolution of ∼ 1 nsec in the voltage-delay experimental scheme we detail in Sec. <ref> can readily be obtained.
Tracking the temporal signal due to the decay of the superconductor, and as a consequence the establishment of the equilibrium ground-state, may be achieved in these graphene systems (as well as in another candidate system, twisted WSe_2 <cit.>), without the need of sophisticated light-based apparatus.
Moreover, conclusive transport signatures of the kind we discussed obviate many of the ambiguities associated with photo-measurements of non-equilibrium superconductors <cit.>.
These facts, along with the ability to electrically tune them across the phase diagram, make highly-correlated multilayer graphene devices ideal playgrounds for manipulating and probing the false-vacuum decay in the context of solid-state systems.
We thank John Curtis, Thomas Weitz, and Erez Berg for enlightening discussions.
S.N.-P. acknowledges support from the National Science Foundation (grant number DMR-1753306). G.R. and S.N.-P. also acknowledge the support of the Institute for
Quantum Information and Matter, an NSF Physics Frontiers
Center (PHY-2317110).
GS acknowledges support from the Walter Burke Institute for Theoretical Physics at Caltech, and from the Yad Hanadiv Foundation through the Rothschild fellowship.
Part of this work was done at the Aspen Center for Physics, which is supported by the NSF grant PHY-1607611.
aapmrev4-2
§ PHENOMENOLOGICAL MODEL
§.§ Surface tension calculations
Given the free energy
F_ϕ=∫ d^2x[σ/2|∇ϕ|^2+16gϕ^2(ϕ-1)^2],
we calculate the energy associated with a domain-wall solution of extent ξ, ϕ(x)=1/2( tanhx/ξ+1) per unit length along the domain wall direction L_y,
F_ϕ/L_y=[σ/8ξ+gξ]∫_-∞^∞dz1/cosh^4z.
Minimizing with respect to ξ, we obtain the correlation length
ξ_ϕ=√(σ/8g), and the surface tension
J_ϕ^0=8/3gξ_ϕ=√(8σ g/9).
When an additional bias is added to F_ϕ favoring the ordered phase this surface tension is only approximately correct.
The bias always tends to lower the surface tension, and can generally be calculated by integrating over the effective equations of motion, i.e.,
J_ϕ = ∫_0^ϕ_>0 dϕ√(2σ V(ϕ))
= 4gξ_ϕ∫_0^ϕ_>0 dϕ√(V(ϕ)/g),
where V(ϕ)=16gϕ^2(ϕ-1)^2-Bϕ^2, and ϕ_>0 is the maximal ϕ for which V(ϕ)>0.
In the right hand side of Eq. (<ref>) we have expressed the surface tension in terms of the parameters g and ξ_ϕ, which we can later input from either phenomenology or mean-field calculations.
One easily verifies that for B=0 one recovers exactly J_ϕ^0.
Repeating the first part of the calculation above, now for the Ψ̂ part, with the free energy
F_Ψ=∫ d^2x[κ/2|∇Ψ|^2-a/2|Ψ|^2+b/4|Ψ|^4],
we obtain the correlation length
ξ_Ψ = √(2κ/a), and the surface tension
J_Ψ = 8/3ξ_Ψa^2/4b.
§.§ Explicit results from the phenomenological model
In Fig. <ref> we demonstrate how the critical bubble radius and the value of the energy threshold depend on the metastability bias parameter B.
Clearly, as the ordered ϕ≈ 1 phase becomes increasingly favorable, true vacuum bubbles become easier to nucleate.
The impact of the competing Ψ̂ phase, and the enhancement of metastability due to its presence are quite clear – both R_c and E_ thresh. increase substantially as compared to the case of its absence.
§.§ Quantum regime
It is instructive to explore the false vacuum decay of the ϕ̂ phase in arbitrary dimensionality d, and with possibly important quantum fluctuations.
For that end, we introduce the imaginary time action
S_ϕ=∫ dτ∫ d^dx[4gτ_ϕ^2(∂_τϕ)^2+4gξ_ϕ^2|∇ϕ|^2+16gϕ^2(ϕ-1)^2-Bϕ^2],
where we have re-written the σ term from Eq. (<ref>) in terms of the correlation length, and τ_ϕ can be thought of as a correlation time characterizing quantum fluctuations.
For example, in the limit τ_ϕ→∞, fluctuations with respect to τ freeze out, and we may ignore the (∂_τϕ)^2 contribution entirely.
Thus one replaces the integration over the Lagrangian ∫ dτ L_ϕ, by a simple factor of the “length” of the system in imaginary-time, β L_ϕ, i.e., we recover the classical expression.
To simplify the analysis, we rescale the action S_ϕ as the following, x→ xξ_ϕ, τ→ττ_ϕ,
S_ϕ/τ_ϕξ_ϕ^d=∫ d^d+1x[4g|∇_d+1ϕ|^2+16gϕ^2(ϕ-1)^2-Bϕ^2].
With this isotropic form, one may readily employ the bubble formalism we use in the main text.
The action barrier of forming a true vacuum bubble is approximately given by
S_ bubble/τ_ϕξ_ϕ^d=π^d+1/2/Γ(d+1/2+1)[-R^d+1B+(d+1)R^dJ̃_̃ϕ̃],
where Γ is Euler's gamma function, and the rescaled surface tension is J̃_̃ϕ̃=16g∫_0^ϕ_>0dϕ√(ϕ^2(ϕ-1)^2-B/16gϕ^2).
The critical (dimensionless) R̃_̃c̃ is then found to simply be
R̃_̃c̃=dJ̃_̃ϕ̃/B.
We thus find the action barrier for the formation of the critical bubble,
S_ Q=π^d+1/2d^d/Γ(d+1/2+1)τ_ϕ(J̃_̃ϕ̃/B)^dξ_ϕ^dJ̃_̃ϕ̃.
Going through the same steps for the classical case (τ_ϕ→∞), we find the classical barrier under the same conditions,
S_ C=π^d/2(d-1)^d-1/Γ(d/2+1)β(J̃_̃ϕ̃/B)^d-1ξ_ϕ^dJ̃_̃ϕ̃.
One clearly sees that due to the different dependence on J̃_̃ϕ̃/B, the classical energy threshold is much lower close to the transition (B≪J̃_̃ϕ̃), and thus thermal fluctuations dominate.
At low enough temperatures however, the β prefactor can become significant, and quantum decay of the false vacuum “short-circuits” the classical path.
We may estimate the temperature below which this becomes important,
T_Q≈Γ(d+1/2+1)(d-1)^d-1/√(π)Γ(d/2+1)d^dB/J̃_̃ϕ̃τ_ϕ^-1→_d=23/32B/J̃_̃ϕ̃τ_ϕ^-1,
and on the right hand side we explicitly plugged in d=2, which is appropriate for our considerations in this work.
The upshot here is that close enough to the transition, one may find a regime where the classical decay paths are most important.
Let us make another comment on scenarios where τ_ϕ may be rather pathological.
In cases where the order parameter commutes with the Hamiltonian, on a mean-field level one expects to find τ_ϕ^-1=0, and the false vacuum can only decay through thermal fluctuations.
This is the case for example for conventional Stoner transitions, and for the transverse field Ising model with vanishing transverse field.
Realistically, due to corrections and phenomena beyond our simplified descriptions, τ_ϕ^-1 may be finite, yet one expects it to be rather suppressed as compared to other generic symmetry breaking transitions, e.g., an IVC order.
§.§ Hybrid regime
In the presence of both the ϕ̂ and Ψ̂ phases in the false vacuum decay problem, there exists another regime of interest.
Assume that the temperature is below T_Q, which is set by the properties of the ϕ̂ action, yet still significantly higher than the temperature where variations of Ψ in imaginary time τ are enabled.
Namely,
T_Qτ_ϕ/τ_Ψ<T<T_Q.
Let us simplify the analysis in this regime by limiting ourselves to the d=2 case.
The critical bubble will assume an anomalous shape here.
Its core would be comprised of an ellipsoid with radius ∼R̃_̃c̃ξ_ϕ and height ∼2R̃_̃c̃τ_ϕ (we have simplified even more, by assuming the Ψ̂ action has little effect on the value of the critical R̃_̃c̃).
However, this ellipsoid would be embedded inside a cylinder of height β and radius ∼R̃_̃c̃ξ_ϕ+δ R of the intermediate ϕ=0, Ψ=0 state, akin to the corona in Fig. <ref>a.
Since τ_Ψ is too large, the Ψ̂ sector still attempts to avoid variations in the imaginary time direction, leading to the cylindrical shape of the domain wall.
Under these assumptions, the action barrier S_Q^Ψ now reads
S_Q^Ψ=S_Q[1+8T_Q/T(1+ξ_Ψ/ξ_ϕB/4J̃_̃ϕ̃)^2a^2/4b/B].
Notice that even if the ratio a^2/4b/B is rather small (as in the scenarios we considered in the main text), a significant stabilization of the false vacuum decay may occur due to the combination of the large prefactors 8T_Q/T and ξ_Ψ/ξ_ϕ.
For example, taking the reasonable parameters B=0.4J̃_̃ϕ̃, ξ_Ψ=20ξ_ϕ, and T_Q=5T one finds
S_Q^Ψ=S_Q[1+360a^2/4b/B].
It thus becomes plausible in this regime that the Ψ̂ contribution to the metastability becomes comparable and possibly dominates over the ϕ̂ contribution, even if the discrepancy in energy scales is very large.
§ MICROSCOPIC CALCULATIONS
§.§ Crystalline graphene band structure
We begin by calculating the the non-interacting band structure of biased Bernal-stacked bilayer graphene.
Expanded around the valley K/K' points in momentum space, the Hamiltonian is <cit.>
H_BLG=∑_𝐤,τ,sc_τ s𝐤^†h_τ(𝐤)c_τ s𝐤,
with c_τ s𝐤=(A_1,τ s𝐤,B_1,τ s𝐤,A_2,τ s𝐤,B_2,τ s𝐤)^T,
where X_i,τ s𝐤 annihilates an electron on sub-lattice
X in layer i, with spin s, and momentum 𝐤 near
the valley τ. The matrix h_τ is given by
h_τ(𝐤)=[ U/2 v_0π_τ^* -v_4π_τ^* -v_3π_τ; v_0π_τ U/2+Δ' γ_1 -v_4π_τ^*; -v_4π_τ γ_1 -U/2+Δ' v_0π_τ^*; -v_3π_τ^* -v_4π_τ v_0π_τ -U/2 ],
with π_τ=τ k_x+ik_y, and the parameters v_i=√(3)/2aγ_i,
a=0.246 nm, γ_0=2.61 eV, γ_1=361 meV, γ_3=283
meV, γ_4=138 meV, and Δ'=15 meV <cit.>.
The interlayer potential difference U is approximately U≈-dD/ϵ,
where the interlayer distance is d≈0.33nm, ϵ≈4.3,
and D is the displacement field.
We diagonalize H_BLG at each momentum, and extract the dispersion relation of the lowest-lying valence band, which we denote by ϵ_τ,𝐤.
For the rhombohedral trilayer graphene calculations, we follow a similar method.
We diagonalize the Hamiltonian adapted from Refs. <cit.>,
h_τ( k)=[ Δ+δ_1+δ_2 γ_2/2 v_0π^† v_4π^† v_3π 0; γ_2/2 -Δ+δ_1+δ_2 0 v_3π^† v_4π v_0π; v_0π 0 Δ+δ_2 γ_1 v_4π^† 0; v_4π v_3π γ_1 -2δ_2 v_0π^† v_4π^†; v_3π^† v_4π^† v_4π v_0π -2δ_2 γ_1; 0 v_0π^† 0 v_4π γ_1 -Δ+δ_2 ],
where Δ is the interlayer potential difference (proportional to the displacement field D.
Notice we have written the Hamiltonian in the basis (A_1,B_3,B_1,A_2,B_2,A_3), where A_j/B_j correspond to different graphene sublattices in layer j.
Here, we use the adopt the parameters γ_0=3.1 eV, γ_1=0.38 eV, γ_2=-15 meV, γ_3=-0.29 eV, γ_4=-141 meV, δ_1=-10.5 meV, δ_2=-2.3 meV.
Frot the trilayer calculations in this work, we once again are only interested in the dispersion relation of the lowest lying valence band.
§.§ IVC phase transition
We consider the phase transition at a given total density n from a normal symmetric phase, where each of the four spin-valley flavors has occupation n/4, and between an intervalley coherent (IVC) phase.
The non-interacting spectrum of the relevant valence band at the τ=± valley is ϵ_τ( k), and depends on the value of the displacement field.
For simplicity, we assume the IVC is doubly degenerate, and no further symmetry breaking. Denoting the momentum-independent IVC order parameter Δ_ IVC, we can calculate the mean field free-energy with respect to the normal state,
F_ IVC(Δ_ IVC)=2[F_ kin(Δ_ IVC)-F_ kin(Δ_ IVC=0)+ΩΔ_ IVC^2/g],
with Ω the system volume, g the relevant coupling constant
in the IVC channel, the factor of 2 accounts for the two-fold degeneracy,
and
F_ kin(Δ_ IVC)=∑_𝐤,iE_𝐤,i Θ[μ(Δ_ IVC)-E_𝐤,i].
Here, the mean field energies are
E_𝐤,1/2=ϵ_++ϵ_-/2±√((ϵ_+-ϵ_-/2)^2+Δ_ IVC^2),
and the chemical potential μ(Δ_ IVC) is
determined self-consistently by the relation
2/Ω∑_𝐤,iΘ[μ(Δ_ IVC)-E_𝐤,i]=n.
§.§ Stoner transition
Here, we outline the calculation of the free energy in the vicinity of the first-order phase transitions we consider in this work.
We simplify, by assuming the relevant energetics do not change much in the sub-Kelvin temperature ranges (where superconductivity is measured in experiments).
This is justified if the transition temperature associated with these transitions is on the order of several Kelvin or higher, which appears to be experimentally consistent <cit.>.
Assuming a short-range density-density interaction of strength U, the system can lower its energy on a mean-field level by re-distributing the electronic densities between different flavors with density n_i
E_ int = U/2[(∑_i n_i)^2 - ∑_i n_i^2].
We will mostly be interested in transitions from a fully flavor-symmetric 2d-fold degenerate state, to a d-fold degenerate state.
Concretely, in graphene systems with valley degeneracy, d=2, and the transition we focus on corresponds to, e.g., spin polarization or spin-valley locking.
Thus, fixing the total density n_ tot, one has d flavors at density n_+=n_ tot/2d+δ, and d flavors with n_-=n_ tot/2d-δ.
Thus, the contribution from the spontaneous polarization to the free energy density is
F_ int/d = - U δ^2.
Whereas the interaction U favors the Stoner transition, the kinetic energy associated with the band disfavors it.
This can be quantified by the energy cost associated with populating (depopulating) states with higher (lower) energies as compared to the non-interacting Fermi level,
F_ kin/d=[∫_E_F^E_+ - ∫_E_-^E_F]
dϵ N(ϵ)
(ϵ-E_F),
where N is the non-interacting DOS, and the Fermi energies are obtained by the relations
∫_-∞^E_F dϵ N(ϵ) =n_ tot/2d,
∫_-∞^E_± dϵ N(ϵ) =n_ tot/2d±δ.
§.§ Superconductivity calculations
The nature of the superconducting order parameter, as well as the origin of the pairing glue, is not yet resolved in the graphene materials we discuss in the main text.
Further more, different materials and different superconducting regions in the phase diagram of the same device may have different properties and origins.
We thus refrain from an attempt to clarify such important questions regarding these superconductors, and keep our discussion as general as possible.
We assume some retardation energy scale, ω^*, at which an attractive interaction in the Cooper channel g_ att. is introduced, and restrict ourselves to a superconducting gap with trivial symmetry.
Accounting for the suppression of the initial repulsive interaction in the Cooper channel U_0 by the Anderson-Morel mechanism <cit.>, we solve the following equation for T_c,
(g_ att.-U_0/1+U_0ℓ) D=1,
where
ℓ = ∫_ω^*^∞ dξ N(ξ)/|ξ|,
D(T_c)= ∫_0^ω^*dξ N(ξ)tanhξ/2T_c/ξ,
and N(ξ) is the DOS of at a distance ξ away from the Fermi level.
We note that in the case of a four-fold degenerate normal state (spin and valley degenerate phase), the initial U_0 is actually U_0≈ U_ intra-U_ inter, with U_ intra (U_ inter) being the intra- (inter-) valley interaction strength.
This is a well-known consequence of multiband superconductivity enhanced by the interband interactions.
At a given inverse temperature β=1/T the gain in free energy due to condensation of the superconducting phase is given by
F_Δ =|Δ|^2/g_ eff.-4∫_0^ω^*dξ N(ξ)(√(ξ^2+|Δ|^2)-|ξ|)
-8/β∫_0^ω^*dξ N(ξ)log(1+e^-β√(ξ^2+|Δ|^2)/1+e^-β|ξ|),
where the effective coupling constant is precisely the one in Eq. (<ref>), i.e.,
g_ eff.=g_ att.-U_0/1+U_0ℓ.
Minimizing F_Δ with respect to Δ would produce the analog of a^2/(4b) from Eq. (<ref>).
We note that one may also expand Eq. (<ref>) to find suitable expressions for a and b, yet this expansion becomes less reliable for extracting the condensation energy in the regime T≪ T_c.
In order to estimate the behavior of the coherence length, we make use of a phenomenological parameter, ξ_0(T_c,0).
It is the zero-temperature coherence length expected for a superconductor with critical temperature T_c.
We assume here that the relation between the coherence length and the critical temperature remains the same within our regime of interest (although T_c may change considerably).
Employing conventional Ginzburg-Landau relations, we may approximate,
ξ_Δ(T_c,T)≈ξ_0(T_c,0)√(T_c,0^2/T_c(T_c-T)).
Therefore, calculation of T_c, in combination with the aforementioned phenomenological parameter, will give us ξ_Δ at any temperature.
§.§ Extended data
Let us put the first-order transitions we study in the main text in the context of a broader region of the experimental phase diagram, i.e., in a broad regime of total density and displacement field.
In Fig. <ref> we highlight the phase boundaries for the IVC transition in RTG (<ref>a), and for the Stoner transition in BBG (<ref>b), for the same parameters we explore the stability of the false vacuum in.
Additionally, we explore a different region of the RTG phase diagram where a Stoner-like transition may occur (lower density and displacement fields, Fig. <ref>c).
It is clear that on the level of our mean field analysis, the first-order nature of the transition is quite robust over a broad range of variables.
As stated in the main text, we do not specify how superconductivity is suppressed by the spontaneously formed order.
Here, however, we elaborate on such a possible mechanism near an IVC transition in RTG.
The mechanism can be most readily understood by examination of the DOS near the Fermi level, before or right after the IVC order condenses.
As shown in Fig. <ref>a, the DOS right at the Fermi energy remains mostly unchanged, so a dramatic effect would seem peculiar.
Moreover, it appears that a large portion of the DOS has actually moved closer to the Fermi level, which should naively only help superconductivity.
The answer to this puzzle is encoded in the calculation of ℓ, which determines how the initial Coulomb repulsion is screened and renormalized in the Cooper channel.
The renormalization ℓ is solely determined by the DOS outside the ω^* window around the Fermi level, where the retarded attraction becomes effective.
Right outside this retardation window, the prominent DOS feature in the IVC phase is a substantial “pseudogap” present between the IVC-split bands.
Thus, following the IVC transition the effective interaction
g_ eff.=g_ att.-U_0/1+U_0ℓ
plummets, making the superconductor phase virtually undetectable, see Fig.<ref>b.
Finally, for the transitions explored in the main text, as well as the RTG Stoner transition, we provide some extended results regarding the metastability diagrams (left column of Fig. <ref>), the superconducting parameters needed for extracting the energetics relevant to the superconducting phase (Fig. <ref>, center column), and the size of the free-energy barriers and the role of the secondary superconducting order in determining them (right column of Fig. <ref>).
Qualitatively, the results share great similarities.
This suggests that the sort of physical phenomena we discuss in this work is quite robust, and should appear quite ubiquitously in theses strongly correlated graphene systems.
§ HEURISTIC MASTER EQUATION
Let us consider a heuristic approach to the state of the system, where three points of interest exist in phase space.
The first, is the true ground state of the system, to which we assign the occupation probability p_ true.
The second, is the false vacuum state, with p_ false.
Lastly we have also the normal phase, where both of the phases, i.e., ϕ̂ and Ψ̂, are disordered, to which we assign p_0.
We may estimate the state of the system as a function of time by considering the time evolution
d/dt[ p_ true; p_0; p_ false ]=[ 0 γ_0 γ; 0 -γ_Ψ̂-γ_0 0; 0 γ_Ψ̂ -γ ][ p_ true; p_0; p_ false ].
Here, γ corresponds to τ_ decay^-1 which we compute in the presence of the secondary competing order (so one always expects γ_0>γ), γ_0 is the decay rate in the absence of this competition, and γ_Ψ̂ is the time it takes the Ψ̂ phase to form in the system starting from the normal state (assuming the absence of the true vacuum order).
Essentially, there are two scenarios of interest, depending on the initial conditions at the time of the quench.
In the first case, the system starts in the false vacuum state,
(p_ true(0),p_0(0),p_ false(0))=(0,0,1).
This is the scenario relevant for example in RTG next to the supposed IVC transition, when the system is quenched from the superconducting state into the IVC regime.
In that case, p_0 becomes irrelevant and we recover the expected decay,
p_ false(t) = e^-γ t = e^-t/τ_ decay.
In the second more complicated case, the initial conditions are
(p_ true(0),p_0(0),p_ false(0))=(0,1,0).
The system is quenched from a normal phase into a regime where formation of the Ψ̂ order competes with the false vacuum.
The solution for p_ false is less simple yet quite straightforward,
p_ false(t)=γ_Ψ̂/γ_Ψ̂+γ_0-γ[1-e^-(γ_Ψ̂+γ_0-γ)t]e^-γ t.
In the interesting scenario, where the superconducting-like phase is formed first and γ_Ψ̂ is the largest rate, the false vacuum first forms in a time-scale ∼γ_Ψ̂^-1, occupies a fraction determined by the ratio of the different rates, and then goes on to decay with the characteristic time-scale τ_ decay as before.
§ ORDER PARAMETER RELAXATION
Let us estimate the manner by which the superconducting order parameter relaxation would manifest in measurements of t_d, the delay time between a current driven through the system, and the appearance of a voltage signal.
Our starting point is the time-dependent Ginzburg-Landau equation <cit.> (neglecting the role of the sub-leading thermodynamic order parameter fluctuations),
-a/2τ_GL(∂_t+2ieφ)Ψ=δ F/δΨ^*.
Here, F is the functional introduced in Eq. (<ref>) (disregarding the coupling to the ϕ̂-sector, to be considered shortly), and φ is the scalar potential.
We rewrite the equation in terms of the normalized order parameter Ψ≡√(b/a)ψ (a>0 in the ordered phase), and find
τ_GL(∂_t+2ieφ)ψ=2ξ_Ψ^2∂_x^2ψ+ψ(1-|ψ|^2).
Notice we assumed the possibility of current flowing solely in the x̂ direction with density
j_s=ie/m^*a/b(ψ^*∂_xψ-ψ∂_xψ^*),
and m^* is the effective mass.
We employ the ansatz ψ(x)=f e^iθ(x), assuming that f remains fairly constants over length scales ≫ξ_Ψ.
The current may then be expressed as a function of ∂_x θ,
j_s = -2e/ma/b∂_xθ[1-2ξ_Ψ^2(∂_xθ)^2], and maximizing over this variable allows one to obtain the critical current
j_c=-e/m^*a/b4/3√(6)ξ_Ψ.
We now combine our ansatz, the definition of the current operator, Eq. (<ref>), and the real part of Eq. (<ref>), and find the following connection between the current density and the relaxation of the order parameter,
τ_GL∂_tf=-(j_s/j_c)^24/27f^3+f(1-f^2).
In the standard experiment demonstrated in Ref. <cit.>, one obtains the delay time by integrating Eq. (<ref>) over the change of f from unity to zero,
t_d^0/τ_GL = ∫^1_0 df/(I/I_c)^24/27f^3-f+f^3,
where I and I_c are the applied current and critical current.
The delay time t_d^0 diverges when approaching I_c from the I>I_c side, and is infinite (by definition) for I<I_c.
Next, we modify Eq. (<ref>) as to be appropriate for our considered scenario.
As a first order approximation, we introduce this modification via the critical current alone.
We assume an independent temporal decay of the superfluid density |Ψ_0|^2=a/b with a time scale τ_ decay≫τ_GL, facilitating a similar decay of the critical current.
As a result, the modified evolution we consider is
τ_GL∂_tf=-e^2t/τ_ decay(j_s/j_c)^24/27f^3+f(1-f^2).
The validity of Eq. (<ref>) becomes questionable around j_s→ 0, as one finds relaxation times much longer than τ_ decay, violating our working assumption.
This is due to the fact that Eq. (<ref>) implicitly assumes the dominance of the GL-induced relaxation over the decay.
Thus, it is valid strictly in the regime
j_s≫ j_c√(τ_GL/τ_ decay).
|
http://arxiv.org/abs/2409.03634v1 | 20240905154802 | Surface-Centric Modeling for High-Fidelity Generalizable Neural Surface Reconstruction | [
"Rui Peng",
"Shihe Shen",
"Kaiqiang Xiong",
"Huachen Gao",
"Jianbo Jiao",
"Xiaodong Gu",
"Ronggang Wang"
] | cs.CV | [
"cs.CV"
] |
Surface-Centric Modeling for High-Fidelity Surface Reconstruction
R. Peng et al.
1School of Electronic and Computer Engineering, Peking University
2Peng Cheng Laboratory 3University of Birmingham 4Alibaba
Surface-Centric Modeling for High-Fidelity Generalizable Neural Surface Reconstruction
Rui Peng1,2 Shihe Shen1 Kaiqiang Xiong1 Huachen Gao1
Jianbo Jiao3 Xiaodong Gu4 Ronggang Wang1,2
=====================================================================================================
§ ABSTRACT
Reconstructing the high-fidelity surface from multi-view images, especially sparse images, is a critical and practical task that has attracted widespread attention in recent years. However, existing methods are impeded by the memory constraint or the requirement of ground-truth depths and cannot recover satisfactory geometric details. To this end, we propose SuRF, a new Surface-centric framework that incorporates a new Region sparsification based on a matching Field, achieving good trade-offs between performance, efficiency and scalability. To our knowledge, this is the first unsupervised method achieving end-to-end sparsification powered by the introduced matching field, which leverages the weight distribution to efficiently locate the boundary regions containing surface. Instead of predicting an SDF value for each voxel, we present a new region sparsification approach to sparse the volume by judging whether the voxel is inside the surface region. In this way, our model can exploit higher frequency features around the surface with less memory and computational consumption. Extensive experiments on multiple benchmarks containing complex large-scale scenes show that our reconstructions exhibit high-quality details and achieve new state-of-the-art performance, i.e., 46% improvements with 80% less memory consumption. Code is available at <https://github.com/prstrive/SuRF>.
§ INTRODUCTION
Reconstructing surface from multi-view images is a fundamental and challenging task in computer vision with wide-ranging applications, including autonomous driving, robotics, virtual reality, and more. While many typical methods <cit.> have achieved satisfactory results through tedious multi-stage processes (, depth estimation, filtering and meshing), recent neural implicit methods <cit.> attract increasing attention due to their concise procedures and impressive reconstructions. They can directly extract the geometry through Marching Cube <cit.>, and avoid the accumulated errors. Despite their effectiveness, these methods are hampered by the cumbersome per-scene optimization and the requirement of a large number of input views, which makes them unsuitable for many applications. Even recent fast methods like <cit.> and 3D Gaussian Splatting methods <cit.> struggle to extract meshes in seconds and perform poorly under sparse input.
Recently, some generalizable neural surface methods <cit.> have been proposed to mitigate these problems by combining neural implicit representations with prior image information. However, as the pipeline comparisons shown in Fig. <ref>, they either rely on the non-end-to-end pipeline that leads to accumulated errors, or require constructing the dense volumes (or even separate volumes) for each view and consumes excessive memory and computation. We are interested in the question: why unsupervised end-to-end sparsification has not been achieved yet? To sparse the volume for the next fine model initialization, previous methods like SparseNeuS <cit.> require predicting SDF values for a large number of voxels and determining whether the SDF values are within a threshold. This is a time-consuming operation (about 10s), making it impossible to train the coarse and fine stages together.
We note that some concurrent methods <cit.> directly use a large reconstruction model to achieve sparse reconstruction, but these methods are computational expensive and can only generate the low-resolution 3D representation, thus limiting their reconstruction fidelity.
In this paper, we present SuRF, the first attempt, to our knowledge, towards simultaneously unsupervised, sparsified and end-to-end approach, which provides good trade-offs between performance, efficiency, and scalability. The main idea behind this is the surface-centric modeling we adopt, which focuses more attention on regions near the surface, called “surface regions”, a practice that improves both performance and efficiency. On the one hand, the projection feature in surface regions is more multi-view consistent and more useful for geometric reasoning. On the other hand, since the surface region only occupies a small proportion of the scene bounding box, this focusing strategy can obviously save memory and computational overhead, and enable the usage of high-resolution volumes. To achieve this, we design a module called Matching Field to locate surface regions, which poses two advantages: 1) it is the first to use the weight distribution along rays to represent the geometry, and enable the use of image warping loss to achieve unsupervised training; 2) it is highly efficient that only needs an additional single-channel volume and the very-fast trilinear interpolation. Concretely, at each scale, in addition to the n-channel feature volume used for final geometric inference, we construct another single-channel matching volume for predicting the matching field.
Based on the matching field, we propose a new strategy called Region Sparsification to generate sparse volumes for later high-resolution scales. Instead of predicting the SDF values for each voxel using MLPs like existing methods, we retain only voxels in surface regions visible from at least two views, which can circumvent the influence of occlusion. Thus, we can generate multi-scale and surface-centric feature volumes to remarkably improve the geometric details of the reconstruction with less memory and computational consumption, as shown in Fig. <ref>. Extensive experiments on DTU <cit.> BlendedMVS <cit.>, Tanks and Temples <cit.> and ETH3D <cit.> datasets validate the efficiency of the proposed model, surpassing the baseline model <cit.> by more than 46% and saving more than 80% memory consumption compared with previous state-of-the-art methods <cit.>. In summary, our main contributions are highlighted below:
* We make the first attempt to achieve unsupervised end-to-end sparsification in neural surface model for high-fidelity sparse reconstruction.
* We present a novel matching field to locate surface regions, which apply the weight distribution to represent the geometry and use image warping loss to achieve unsupervised training.
* We introduce a new region sparsification strategy based on the extracted surface region that is robust to occlusions.
* Extensive experiments on standard benchmarks validate the effectiveness of our approach from the perspectives of accuracy, efficiency and scalability.
§ RELATED WORKS
Multi-view stereo. Multi-view stereo (MVS) is a type of methods that take the stereo correspondence as the main cue to reconstruct geometry from multi-view images. Taking the scene representation as an axis of taxonomy, it can be broadly categorized into three types: voxel grids-based <cit.>, point clouds-based <cit.>, and depth map-based <cit.>. Among them, depth map-based methods decompose complex 3D reconstructions into explicit 2D depth map estimates, becoming the most common one due to convenience. In particular, many learning-based methods <cit.> have been proposed to improve the matching accuracy through a more robust cost volume. However, the surface reconstruction of these methods is based on a multi-stage pipeline, which is cumbersome and inevitably introduces accumulated errors.
Neural surface reconstruction. Although previous volumetric methods <cit.> have achieved high-quality reconstructions, neural implicit functions have recently revealed significant potential in 3D reconstruction <cit.> and appearance modeling <cit.>. Some work <cit.> apply surface rendering to reconstruct plausible geometry without 3D supervision, but they often require extra priors like object masks <cit.> or sparse points <cit.>. Inspired by the success of NeRF <cit.> in novel view synthesis, more and more methods integrate volume rendering into shape modeling. They treat the density of volume rendering as the function of different implicit representations, , <cit.> adopts the occupancy network to represent the geometry and <cit.> apply the signed distance function to replace the local transparency function. Nevertheless, such methods suffer from lengthy per-scene optimization, cannot generalize to new scenes and perform poorly with sparse inputs.
Generalizable neural surface reconstruction. Similar to the generalizable novel view synthesis methods <cit.>, several methods <cit.> are proposed to solve the generalization of neural surface reconstruction. By replacing the input from spatial coordinates with image features, these methods can achieve impressive cross-scene generalization. Method <cit.> is the first attempt to achieve this through a multi-stage pipeline, but still struggles to recover geometric details. Even recent methods have tried to improve this through the view-dependent representation <cit.>, transformer architecture <cit.> and even build a separate cost volume for each view <cit.>, still failing to balance performance, efficiency, and scalability. To be specific, these methods are restricted by the requirement of the ground-truth depth <cit.>, cannot use high-resolution feature volumes <cit.> and cannot scale to cases with more input views <cit.> due to memory constraints. In this paper, we propose SuRF, which can reconstruct more geometric details with limited memory consumption.
§ METHODOLOGY
In this paper, our goal is to reconstruct the finely detailed and globally smooth surface 𝒮 from an arbitrary number of inputs with limited memory and computational consumption, which is achieved through our surface-centric modeling. The overall framework of our model is illustrated in Fig. <ref>. We first introduce our overall pipeline in Sec. <ref>, including how to aggregate multi-view features and reason about geometry and appearance. Then we depict our matching field in Sec. <ref>, including the unsupervised training and surface regions localization, and detail how to construct the multi-scale surface-centric feature volumes based on our new sparsification strategy in Sec. <ref>. The combination of the final loss function is described in Sec. <ref>.
§.§ Overall Pipeline
Given a set of calibrated images {I_i ∈ℝ^3 × H × W}_i=1^N captured from N different viewpoints, we first extract the multi-scale features {F_i^j ∈ℝ^C × H × W}_i,j=1,1^N,L through a weight-shared FPN <cit.> network ℱ_img. To aggregate these multi-view features, we adopt an adaptive cross-scale fusion strategy, which can grasp both global and local features and is more robust to occlusion.
Cross-scale fusion. For a volume V with U number of voxels, we project each voxel 𝐯=(x,y,z) to the pixel position of corresponding viewpoint with camera intrinsics {K_i}_i=1^N and extrinsic {[R,𝐭]_i}_i=1^N:
𝐪_i=π(K_iR_i^T(𝐯-𝐭_i)),
where π([x,y,z]^T)=[x/z,y/z]^T. The corresponding multi-scale features {𝐟_i^j ∈ℝ^C}_i,j=1,1^N,L are then sampled from all image planes via bilinear interpolation. We treat the high-scale features as detail residuals of low-scale features, and sum them together as multi-view features {𝐟_i ∈ℝ^C}_i=1^N, which are then input to a fusion network ℱ_fus to generate view's fusion weights {w_i}_i=1^N. The final fused feature for each voxel is the concatenation of weighted mean and variance features [Mean(𝐯), Var(𝐯)]:
Mean(𝐯)=∑_i=1^Nw_i𝐟_i, Var(𝐯)=∑_i=1^Nw_i(𝐟_i-Mean(𝐯))^2.
Further regularizing above fused features through a 3D network ℱ_3d, we can get the final single-channel matching volume V_m ∈ℝ^1× U and n-channel feature volume V_f ∈ℝ^C'× U. Note that in our surface-centric modeling, we will generate the multi-scale feature volumes {V_f^j}_j=1^L and only voxels in the surface regions will be retained at high-resolution scales. The detailed procedure of surface region localization using our matching field will be stated in Sec. <ref>, and how to construct the multi-scale surface-centric feature volumes using our new region sparsification strategy will be elaborated in Sec. <ref>.
In this way, the surface can be reconstructed by the zero-level set of SDF values, which is estimated through a surface prediction network ℱ_sdf, which concatenate interpolations of multi-scale feature volumes as input:
𝒮={𝐩∈ℝ^3 | ℱ_sdf(𝐩,<{V_f^j(𝐩)}_j=1^L>)=0},
where <·> is a concatenation operator. Meanwhile, since the traditional sampling operator cannot interpolate from the sparse volume, we implement a sparse trilinear sampling algorithm to achieve interpolation efficiently. Inherited from <cit.>, most methods employ a similar blending strategy to predict the color of each point on a ray:
𝐜=∑_i=1^Nη_i𝐜̂_̂î,
where {𝐜̂_̂î}_i=1^N is the projected colors from source views, and {η}_i=1^N is the softmax-activated blending weights estimated through a color prediction network ℱ_color,
which takes projected image features and viewing direction differences as input.
Finally, alpha-composition of samples {𝐩(t_k)=𝐨+t_k𝐝|k=1,...,M} is performed to produce the color of a ray emitting from camera center 𝐨 in view direction 𝐝:
Ĉ=∑_k=1^M T_kα_k𝐜_k, T_k=∏_l=1^k-1(1-α_l),
where α is formulated in an unbiased and occlusion-aware conversion of SDF values:
α_k=max(Φ_s(ℱ_sdf(𝐩(t_k)))-Φ_s(ℱ_sdf(𝐩(t_k+1)))/Φ_s(ℱ_sdf(𝐩(t_k))),0),
where Φ is the sigmoid function and s is an anneal factor, please refer to <cit.> for more details.
§.§ Matching Field
Fig. <ref> illustrates the overall pipeline of our matching field, which is the cornerstone of surface-centric modeling. In this section, we will elaborate on it in twofold: how it achieves the surface region localization, and how it achieves unsupervised training. Note that these procedures are the same for all scales, and we omit the subscript of scales for convenience.
Surface region localization. For efficiency, we need this procedure to hold three important properties:
* It requires encoding the entire scene geometry with limited memory consumption
. Existing methods like <cit.> or <cit.> borrow the main idea of MVS methods <cit.> to construct a separate cost volume for each view, which is sometimes impractical for surface reconstruction, especially when there are many input views.
* It needs to rapidly locate the surface region with a small computational cost, which makes multi-stage training or the use of extra networks unworkable.
* It needs to be occlusion-aware and view-dependent, , those surfaces that are behind or not visible from the input views are unuseful and unsolvable.
Motivated by these properties, we implement the matching field as a weight distribution along the ray obtained from matching volume interpolation.
As shown in Fig. <ref>, instead of representing the geometry as the occupancy, density or SDF value, we employ the view-dependent weight distribution, where larger values represent closer proximity to the surface. Concretely, to extract the surface of a ray 𝐫=(𝐨,𝐝), we first uniformly sample M_s points {𝐩(t_k)=𝐨+t_k𝐝}_k=1^M_s within the current surface region (Note that M_s decreases as the scale increases). Next, we directly interpolate the corresponding value for each point from the matching volume V_m, and then go through a softmax operator to generate the weight distribution {γ_k}_k=1^M_s along this ray. In this way, we can infer the rough position of the surface point 𝐩_s=𝐨+t_s𝐝, where:
t_s=∑_k=1^M_sγ_kt_k.
Finally, the surface region that we need is defined as: sr=[t_s-ϵ, t_s+ϵ], and ϵ is a hyperparameter that gradually decreases as the scale increases. And the surface region is set to the length of scene bounds for the first scale.
Unsupervised training. With the surface points, we can conveniently leverage the image warping loss <cit.> to constrain the matching field. Supposing the reference image I_0 has a resolution of H× W, through the matching field, we can efficiently retrieve the surface point of all rays emitting through the pixels of I_0 to form the “surface map” E_0∈ℝ^3× H× W. Then we project these points to the pixel positions of source images {I_i}_i=1^N through Eq. (<ref>), and interpolate the colors to generate the warped images {I_i^0}_i=1^N. Theoretically, the projected colors of these surface points should remain consistent across multiple viewpoints. Therefore, we can generate the constraints through the difference between the ground-truth reference image and the warped images from source images. Furthermore, we combine the pixel-wise color loss with the patch-based SSIM <cit.>:
WL_i=0.8 ×1 - SSIM(I_0, I_i^0)/2+0.2 × |I_0-I_i^0|.
To avoid the influence of occlusions, we take the average of the K smallest warping losses as the final constraint to optimize our matching field:
L_wl=1/K∑_i=1^KWL_i.
In this way, we can optimize our matching field unsupervised to locate the surface region efficiently.
§.§ Feature Volume Construction based on Region Sparsification
As aforementioned and the pipeline comparisons shown in Fig. <ref>, previous methods <cit.> rely on the dense volume or multi-stage training <cit.>, and they either are limited by the memory constraints or introduce cumulative errors. To this end, based on the surface region located through our matching field, we propose a region sparsification strategy to construct the multi-scale and surface-centric feature volumes to mitigate these drawbacks.
Region sparsification. Taking a certain scale j as an example, assume that we have generated the matching volume V_m^j∈ℝ^1× U^j and feature volume V_f^j∈ℝ^C'× U^j according to the process in Sec. <ref>, and obtained the surface maps {E_i^j∈ℝ^3× H× W}_i=1^N of all views following the pipeline in Sec. <ref>. To prune those voxels away from the surface, we project all voxels Vox^j={𝐯_h}_h=1^U^j to the pixel position of all surface maps through Eq. (<ref>) and interpolate the corresponding surface points {Ê_i^j ∈ℝ^3× U^j}_i=1^N visible from each view through bilinear sampling. We then can determine whether the voxel is inside the surface region based on the distance between the voxel and the interpolated surface point:
H_i^j(𝐯)=𝚏𝚕𝚘𝚊𝚝(∥Ê_i^j(𝐯)-𝐯∥_2 < ϵ^j),
where 𝚏𝚕𝚘𝚊𝚝 is the operator that converts bool values to float values. Furthermore, to maintain the view consistency, we only retain voxels that simultaneously fall into the surface region of at least two views:
Vox^j+1={𝐯 | 𝚜𝚞𝚖(H^j(𝐯))≥ 2},
where 𝚜𝚞𝚖 is the summation operator. This is an important step to mitigate the impact of occlusion, as regions visible only from a small number of views are meaningless, and we depict some examples in Fig. <ref>. Then we can halve these surviving voxels to aggregate higher-frequency information for the next scale.
Repeating the region sparsification for each scale, we can generate the final multi-scale feature volumes {V_f^j}_j=1^L. While this multi-scale strategy is beneficial for the model to reconstruct surfaces with high-frequency detail and global smoothness like <cit.>, our volumes are surface-centric and can achieve higher resolution with less memory consumption. Meanwhile, since the surface region in the coarse stage is wide, using multi-scale features to predict the geometry makes the model more robust when the surface region location in the fine stage is wrong. Before employing the volume rendering to produce the color of a ray, we propose surface sampling to efficiently sample more points for surface regions.
Surface sampling. With the off-the-shelf surface regions {sr^j}_j=1^L provided by the matching field, it's natural to sample more points within these regions because voxels inside these regions contain the most valuable information about the surface. We uniformly sample a decreasing number of points within the surface region from low-resolution to high-resolution scales, which results in more sampling points near the surface as shown in Fig. <ref>. When interpolating from multi-scale feature volumes, we fill the feature of those sampling points that are outside the surface region of certain scales with zero. Therefore, we do not need other networks to resample more fine points like <cit.>.
§.§ Loss Function
Our overall loss function consists of three components:
L=L_surf + L_mf,
where L_surf is used to optimize surface network and L_mf is used to optimize multi-stage matching fields.
Following existing methods <cit.>, our surface loss L_surf is defined as:
L_surf=L_color + L_mfc +α L_ek + β L_pe,
where L_color is computed as the average color loss of all sampled pixels Q:
L_color=1/|Q|∑_q∈ Q |C(q)-Ĉ(q)|,
L_mfc is the feature consistency following <cit.>, and eikonal loss <cit.> is:
L_ek=1/|P|∑_p∈ P(||∇ℱ_sdf(p)||_2-1)^2,
where P is a set of sampled 3D points. Similar to <cit.>, we leverage the pseudo label generated from the unsupervised multi-view stereo method <cit.> to enhance and accelerate model convergence. We apply a very strict filtering strategy to obtain relatively accurate pseudo point clouds P̂. The pseudo loss is:
L_pe=1/|P̂|∑_p∈P̂|ℱ_sdf(p)|.
The loss of the matching field is defined as the weighted sum of all scales' warping loss:
L_mf=∑_j=1^Lμ^j L_wl^j,
where μ_j is the weight of stage j, and it increases from coarse to fine scale.
§ EXPERIMENTS
In this section, we first introduce our implementation details and compared datasets, then we enumerate extensive experiments and ablation studies. Note that the reported results here are based on our model without any finetuning. Please refer to Supp. Mat. for the finetuning results.
Implementation details.
We implement our model in PyTorch <cit.>, and build our surface-centric feature volume in L=4 scales. The range of surface regions for each scale is defined as ϵ^1:ϵ^2:ϵ^3:ϵ^4=1:0.3:0.1:0.01. In our matching field, we set the number of sampling points for each scale as 128, 64, 32, 16 to extract surface regions. To apply the final volume rendering, the total number of sampling points of each ray is set to M=120, which consists of 64, 32, 16 and 8 for our surface sampling. During training, we adopt Adam optimizer <cit.> to train our model for 10 epochs. The base learning rate is set to 1e-3 for feature networks and 5e-4 for MLPs. We set the number of source images as N=4 and resize the resolution to 640× 480. The volume resolution of the first low-resolution scale is set to R^1=64×64×64. The weight of each loss term is set to α=0.1, β=1.0, and the warping loss weights of each scale are set to 0.25, 0.5, 0.75 and 1.0. During testing, we take N=2 source images with a resolution of 800 × 576 as input, and set R^1=80×80×80. Our meshes are extracted using Marching Cubes <cit.>.
Datasets.
Following existing practices <cit.>, we train our model on the DTU dataset <cit.>, and we employ the same splitting strategies as <cit.>, , 75 scenes for training and two sets of images from 15 non-overlapping scenes for testing. To validate our generalization ability, we further conduct some qualitative comparisons on BlendedMVS <cit.>, Tanks and Temples <cit.> and ETH3D <cit.> datasets.
§.§ Results on DTU
For a fair comparison, we adopt the same evaluation strategy as previous methods <cit.>, , reconstruct the surface using only three input views and report the average chamfer distance of two image sets.
The quantitative results on the DTU dataset are summarized in Tab. <ref>, which indicate that our SuRF can bring a satisfactory improvement to the baseline, , more than 46% improvement compared with SparseNeuS <cit.>. Meanwhile, we can surpass those methods <cit.> that employ ground-truth depth for supervision. Even compared with the method that construct separate cost volumes for each input view <cit.>, our model still offers plausible advantages in efficiency as well as scalability, that is, we only need to construct a global volume and is computation and memory insensitive to the number of input views. For those classical MVS methods <cit.>, our method can win out in most metrics. Compared with the recent fast method <cit.> that converges in minutes, our model can still reconstruct finer details in seconds. Qualitative results in Fig. <ref> further show that our SuRF can reconstruct finer surfaces only through fast network inference. The results in Fig. <ref> indicate that the 3DGS-based method SuGaR <cit.> failed with sparse inputs and our approach shows much better performance even with sparse inputs.
Number of input views.
Fig. <ref> indicates that the reconstruction quality of our model gradually improves as the number of views increases, but it tends to stabilize when the input views are enough. Meanwhile, a larger number of inputs does not lead to a significant increase in consumption, which is an obvious advantage compared to C2F2NeuS <cit.>, as the comparison shown in Tab. <ref>.
§.§ Generalization
To verify the generalization capabilities of our method, we further test on BlendedMVS <cit.>, Tanks and Temples <cit.> and ETH3D <cit.> datasets using the model pre-trained on the DTU dataset <cit.>. Some qualitative comparisons with existing methods are shown in Fig. <ref> and Fig. <ref>. The results prove that our method exhibits great generalization ability even under these difficult scenes. The volume-based method struggles on large scenes since most voxels are empty, but ours can still reconstruct meshes with fine details even without any finetuning.
§.§ Ablation Studies
Our ablation experiments are performed on the first image set of the DTU dataset, which is the same as <cit.>.
To investigate the effectiveness of our end-to-end sparsification, we compare it with existing solutions, , the multi-stage training strategy in SparseNeuS <cit.> and the multi-scale dense structure in GenS <cit.>. The results in (a) of Tab. <ref> show that all these solutions bring improvements to the baseline model, and our solution performs the best mainly because our features are surface-centric and can be continuously optimized. We perform another study to understand how the resolution of volumes and images affects the model. The results in (b) of Tab. <ref> show that the higher resolution favors the model but stabilizes when a certain resolution is reached.
Efficiency and scalability. Compared with existing methods, our SuRF exhibits advantages in terms of efficiency and scalability as shown in Fig. <ref>. Our model can leverage volumes of higher resolution with less memory and computational consumption, , only 3G memory for the model with the highest 256^3 resolution volumes while at least 20G for previous methods <cit.>. Compared with methods that require predicting depth maps <cit.> or constructing cost volumes <cit.> for each view, the running time and memory consumption of our model is relatively insensitive to the number of input views as shown in Tab. <ref>, making it scalable to handle different input numbers.
Meanwhile, the capability of using high-resolution volumes gives our method the potential to reconstruct very large-scale scenes as the results shown in Fig. <ref>.
§ CONCLUSION
In this paper, we proposed a new generalizable neural surface model, SuRF, to accomplish high-fidelity reconstruction even from sparse inputs with satisfactory trade-offs between performance, efficiency and scalability. To the best of our knowledge, it is the first unsupervised method to achieve end-to-end sparsification based on our surface-centric modeling, which consists of a novel matching field module and a new region sparsification strategy. The proposed matching field adopts the weight distribution to represent geometry and introduces the image warping loss to achieve unsupervised training, which can efficiently locate the surface region. Then we adopted the region sparsification strategy to prune voxels outside the surface regions and generated the multi-scale surface-centric feature volumes. Extensive experiments on multiple public benchmarks demonstrate that our model exhibits great generalization ability in diverse scenes and can reconstruct higher-frequency details with less memory and computational consumption.
§ ACKNOWLEDGEMENTS
This work is financially supported by Outstanding Talents Training Fund in Shenzhen, Shenzhen Science and Technology Program-Shenzhen Cultivation of Excellent Scientific and Technological Innovation Talents project(Grant No. RCJ-C20200714114435057), Shenzhen Science and Technology Program-Shenzhen Ho-ng Kong joint funding project (Grant No. SGDX20211123144400001), National Natural Science Foundation of China U21B2012, R24115SG MIGU-PKU META VISION TECHNOLOGY INNOVATION LAB, Guangdong Provincial Key Laboratory of Ultra High Definition Immersive Media Technology.
Jianbo Jiao is supported by the Royal Society Short Industry Fellowship (SIF∖R1∖231009).
In addition, we sincerely thank all assigned anonymous reviewers in ECCV 2024, whose comments were constructive and very helpful to our writing and experiments.
splncs04
Surface-Centric Modeling for High-Fidelity Surface Reconstruction
R. Peng et al.
Supplementary Materials
=======================
§ RESULTS OF FINE-TUNING
As the qualitative and quantitative comparisons shown in our main paper, the reconstructions of our model without fine-tuning exhabit finest geometric details even compared with some fine-tuned models like SparseNeuS-ft <cit.>. Here, we illustrate some results of our model after fast fine-tuning. Different with methods <cit.> that require reconstructing separate cost volume for each view, our model only builds the global volume, which makes our model easily fine-tuned (only 2.5k iterations, about 10 minutes). The quantitative results in Tab. <ref> show that our model still ranks the first in most scenes and has the best mean chamfer distance. Meanwhile, it is worth noting that our volume is sparse and more memory and computationally efficient. And the qualitative results of some scenes are visulized in Fig. <ref>. Note that there are only three input views during fine-tuning.
§ MORE COMPARISONS WITH RC-MVSNET
We show some visual and metric comparisons with the TSDF fusion result of RC-MVSNet <cit.> in Fig. <ref>. The results show that the reconstruction of our model is smoother and more complete, especially in low-texture regions, leading to better results in the chamfer distance metric. To further verify the effectiveness of our surface-centric modeling, we compare with two baselines which directly use the surface point of RC-MVSNet to prune voxels: Baseline1 directly replaces the surface region of our trained model with that of RC-MVSNet; Baseline2 uses the surface region of RC-MVSNet to train a new model. Results in Tab. <ref> show that even simply using the surface region of RC-MVSNet can achieve superior results. And our full model, trained together with the surface location module (Our matching field), achieves the best performance. This is reasonable because the surface region of these two baselines was not optimized or corrected with the model when directly using the results of RC-MVSNet.
§ DETAILED RESULTS OF VOLRECON AND RETR
As mentioned in our main paper, we report the reproduced results of VolRecon <cit.> and ReTR <cit.> on two image sets using their official repositories and released model checkpoints. The detailed reproduction results of all scenes at two image sets are illustrated in Tab. <ref>, which are slightly different from the results reported in their papers. We speculate that there are something inconsistent in the experimental configurations, but this inconsistency doesn't affect the valuable of their contributions.
§ MORE ABLATION RESULTS
Here, we report more ablation results of our model, and we set the training time to a quarter of the overall process (different from our main paper to save time) and only test on the first image set for convenience.
r0.5
Ablation results on DTU dataset.
1.0!
Number of scales Surface sampling Cross-scale fusion Mean
1 scales 51 51 1.38
2 scales 51 51 1.22
3 scales 51 51 1.15
4 scales 51 51 1.11
5 scales 51 51 1.13
4 scales 51 55 1.15
4 scales 55 51 1.13
Number of scales. We conduct some ablations to evaluate the effect of the number of scales. We set the resolution of the finest stage of each model to be similar. The results in Tab. <ref> show that the overall quality first remarkably increases and then slightly decreses, reaching the optimum in 4 scales. We illustrate some visual results of the model with differnet scales in Fig. <ref>. The single-scale model performs the worst, with reconstructions that are noisy and lack geometric detail, while the four-scales model can reconstruct smooth geometry and restore more geometric details.
r0.5
Ablation results of loss weight on DTU dataset.
1.0!
Method β μ^1 : μ^2 : μ^3 : μ^4 Mean
A 0.0 0.25:0.50:0.75:1.00 1.17
B 1.0 0.25:0.50:0.75:1.00 1.11
C 1.0 1.00:1.00:1.00:1.00 1.16
D 1.0 1.00:0.75:0.50:0.25 1.25
Ablations on loss weight. We further conduct some experiments to verify the effect of the weight of each loss term on model performance. Concretely, we change the weight of the pseudo loss β and the weight combination of different stages of matching field loss μ^j. The ablation model is based on the 4-scales model and the results are shown in Tab. <ref>. Through the comparison between model A and model B, we can see that the pseudo point clouds generated from the unsupervised multi-view stereo method <cit.> can guide the model towards better convergence. To avoid the influence of erroneous pseudo points, we apply a very strict filtering strategy, , Only point clouds whose projection distance from at least 3 viewpoints does not exceed 0.2 pixels and whose relative depth error does not exceed 0.001 can be left. From the results of the model (B, C, D) adopt different weight combinations of the matching field loss, we can see that model B which has μ^1 : μ^2 : μ^3 : μ^4=0.25:0.50:0.75:1.00 performs the best and model D performs the worst. This indicates that applying greater weight to the high-resolution scale is beneficial to model convergence. Because there is no need to obtain very accurate predictions in the low-resolution scale, and the gradient of the high-resolution scale will be transmitted back to the low-resolution scale, it is reasonable to have a lower weight in the low-resolution scale. And we show some visual comparisons of these models in Fig. <ref>.
r0.5
Ablation results of the range of surface regions.
1.0!
Method ϵ^1 : ϵ^2 : ϵ^3 : ϵ^4 Mean
B 1.00:0.40:0.10:0.01 1.11
E 1.00:0.30:0.10:0.01 1.10
F 1.00:0.30:0.05:0.01 1.12
Ablations on the range of surface region. Here, we employ an additional ablation experiment to study the sensitivity of the range of surface regions. ϵ^0 is the range of the first scale, and its value is fixed at 1, which represents covering the entire near and far area. And the value of later scales means the percentage of coverage. As the results shown in Tab. <ref>, the differences of these three groups of experiments are not large, as long as the surface region is gradually tightened, and model F which has a range combination of ϵ^1 : ϵ^2 : ϵ^3 : ϵ^4=1.00:0.30:0.10:0.01 performs the best.
§ MORE RESULTS
Because C2F2NeuS <cit.> doesn't release the code, the memory of C2F2NeuS in Tab. <ref> is refer to the implementation of CasMVSNet <cit.>. Fig. <ref> shows additional comparisons with COLMAP <cit.>, NeuS <cit.>, SparseNeuS <cit.>, SparseNeuS-ft <cit.>, VolRecon <cit.> and ReTR <cit.> on DTU dataset. We can see that our method can stably achieve superior results and exhibit finer geometry details. We further show some visual comparisons with the fast per-scene overfitting method Voxurf <cit.> in Fig. <ref>. While Voxurf still requires more than 30 minutes of training time per scene, it struggles to reconstruct smooth and accurate surface from sparse inputs.
§ VISUALIZATION OF THE SURFACE REGION
To understand how the surface region changes as scale increases, we show some visualization results of the surface region at different scales in Fig. <ref>. For convenience, we show the depth of the middle position of the surface region. We can see that the located surface region at the higher resolution scale is indeed sharper, which proves the effectiveness of our design.
§ LIMITATIONS AND FUTURE WORK
Despite exhibiting efficiency over existing methods, our model still struggled to extract the surface in real time due to the inherent drawback of MLP-based implicit methods. In the future, we will be focusing on addressing this deficiency issue, and we have constructed a lite-version model, which will be released latter. Furthermore, we plane to train our model on more large-scale dataset like Objaverse <cit.> and expand the scale of the model like <cit.>.
|
http://arxiv.org/abs/2409.03337v1 | 20240905082929 | Global prescribed-time control of a class of uncertain nonholonomic systems by smooth time-varying feedback | [
"Kang-Kang Zhang",
"Bin Zhou",
"Chenchen Fan",
"James Lam"
] | math.OC | [
"math.OC"
] |
1.0
Global prescribed-time control of a class of uncertain nonholonomic systems by smooth
time-varying feedback
Kang-Kang Zhang, Bin Zhou, Chenchen Fan, James Lam, Fellow, IEEE
This work was supported by the National Science Found for
Distinguished Young Scholars (62125303), the Science Center Program of
National Natural Science Foundation of China (62188101), the
Fundamental Research Funds for the Central Universities
(HIT.BRET.2021008), and HKU CRCG (2302101740). (Corresponding authors: Bin Zhou)
Kang-Kang Zhang is with the Department of Mechanical Engineering, University of Hong Kong,
Hong Kong, China, and the Department of Computer Science, KU Leuven, B-3001 Heverlee, Belgium; James Lam is with the Department of Mechanical Engineering, University of Hong Kong,
Hong Kong, China;
Bin Zhou is with the Center for Control Theory and Guidance Technology, Harbin Institute of Technology, Harbin, 150001, China; Chenchen Fan is with the Department of Rehabilitation Sciences, The Hong Kong Polytechnic University, Hong Kong, China (email: [email protected], [email protected], [email protected], [email protected]).
Received ?? ; accepted ??
================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
§ ABSTRACT
This paper investigates the prescribed-time smooth control problem for a class of
uncertain nonholonomic systems. With a novel smooth time-varying state transformation, the uncertain chained nonholonomic system is reformulated as an uncertain linear time-varying system.
By fully utilizing the properties of a class of parametric Lyapunov equations and constructing time-varying Lyapunov-like functions,
smooth time-varying high-gain state and output feedback controllers are designed. The states and controllers are proven to converge to zero at any prescribed time. The proposed smooth time-varying method combines the advantage of a
time-varying high-gain function, which enhances control performance, and a
smooth time-varying function that can drive the states to zero at the prescribed time. The
effectiveness of the proposed methods is verified by a numerical example.
Shell
et al.: Bare Demo of IEEEtran.cls for IEEE Journals
Nonholonomic systems;
Uncertain nonholonomic systems;
Prescribed-time control;
Smooth time-varying high-gain feedback.
§ INTRODUCTION
Since nonholonomic systems have a wide range of applications, such as wheeled
mobile robots and surface vessels, the control problem of nonholonomic systems
has received widespread attention
<cit.>. However, nonholonomic systems
do not satisfy the necessary condition for the existence of a smooth
time-invariant feedback for a nonlinear system, called Brockett's necessary
conditions <cit.>. Therefore researchers can only turn their
attention to discontinuous time-invariant control/hybrid control and smooth
time-varying control. For discontinuous time-invariant control, a general strategy for designing discontinuous control laws that can achieve asymptotic convergence for a class of nonholonomic control systems was proposed in <cit.>. For smooth
time-varying control, a control law which globally asymptotically stabilizes
nonholonomic system in chained/power form was presented in <cit.>. In order to improve the convergence rate,
by using a time-varying state transformation, a smooth time-varying feedback
control law that can achieve exponential stability was proposed in
<cit.>. However, the above control laws can only ensure
that the states converge to zero asymptotically (exponentially), which
sometimes cannot meet the requirement in practical engineering.
Compared with traditional asymptotic stability (or exponential stability),
finite-/fixed-time control has higher convergence speed, faster
convergence accuracy and stronger robustness to external disturbances
<cit.>. However, for both finite-time control and
fixed-time control, the actual convergence time (rather than the upper bound)
depends on the initial condition <cit.>. Recently, a so-called prescribed-time control method, whose convergence time does
not depend on the initial state, have received renewed attention
<cit.>. By using
time-varying high-gain feedback, the prescribed-time control problem of
normal-form nonlinear systems was solved in <cit.>. Such time-varying high-gain methods have been utilized to address various
problems, such as prescribed-time stabilization <cit.>,
prescribed-time observer <cit.>, adaptive prescribed-time
control <cit.>, prescribed-time output feedback
<cit.>, and prescribed-time tracking
control <cit.>. With the aid of properties of a class of
parametric Lyapunov equations (PLEs) and the scalarization technique
<cit.>, the finite-time and prescribed-time control problems for
linear and a class of nonlinear systems were addressed in
<cit.> based on time-varying
high-gain feedback. Very recently, a new approach called time-space
deformation methods has been proposed in <cit.> to address
numerical singular problems that may arise in time-varying high-gain feedback.
To the best of the authors' knowledge, there are no smooth time-varying
feedback results for the prescribed-time control of nonholonomic systems.
In this paper, the smooth prescribed-time control problem for a class of
uncertain nonholonomic systems will be investigated. Firstly, with a novel smooth time-varying state
transformation, uncertain chained nonholonomic systems are reformulated as
uncertain linear time-varying systems. Secondly, by fully utilizing the
properties of a class of parametric Lyapunov equations and constructing
time-varying Lyapunov-like functions, smooth time-varying high-gain feedback
controllers are designed. It is proved that the states and controllers
converge to zero at any prescribed time. Both state feedback and
observer-based output feedback are considered.
The contributions of this paper are twofold. On the one hand, different from
the traditional discontinuous control methods
<cit.>
, which may cause singularity or chattering problems, the proposed methods in
this paper are smooth. On the other hand, different from
<cit.>, where only the state
feedback controllers were constructed to ensure that the state converges to zero
exponentially, the proposed controllers are not only based on state feedback
but also on output feedback, and can guarantee that the state converges to zero
at any prescribed time.
Notation: For a (symmetric) matrix M, let M^T,
‖ M‖, λ_max(M), and λ_min(M)
denote its transpose, 2-norm, maximal eigenvalue, and minimal eigenvalue,
respectively. In addition, diag{A_1,…, A_n} denotes a
diagonal matrix whose ith diagonal element is A_i .
§ PROBLEM INTRODUCTION AND PRELIMINARIES
§.§ Problem Introduction
Consider the following uncertain nonholonomic system
{[ ẋ_0 =u_0,; ẋ_1 =u_0x_2+ϕ_1( t,u,x) ,; ⋮; ẋ_n-1 =u_0x_n+ϕ_n-1( t,u,x) ,; ẋ_n =u+ϕ_n( t,u,x) ,; y =[x_0,x_1]^T, ].
where [x_0,x^T]^T=[x_0,x_1,…,x_n
]^T∈𝐑^n+1 is the state vector, u_0∈𝐑
and u∈𝐑 are two control inputs, y∈𝐑^2 is the
output vector, and ϕ_i(t,u,x), i=1,2,…,n are some unknown continuous
functions satisfying the following assumption.
The nonlinear functions ϕ_i(t,u,x), i=1,2,…,n, satisfy
|ϕ_i( t,u,x) |≤ c_i1|
x_1| +c_i2| x_2| +⋯+c_ii|
x_i|,
where c_ij, j=1,2,…,i are some known constants.
In this paper, regarding system (<ref>), we are interested in the
following problem.
Suppose that Assumption <ref> holds true. Let T>0 be a
prescribed time. Design a smooth time-varying state feedback controllers
u_0(t) and u(t) such that, for any initial condition, the states and
controls converge to zero at the prescribed time T.
Since not all states are actually measurable, it is necessary to study
observer-based output feedback control strategies.
Suppose that Assumption <ref> holds true. Let T>0 be a
prescribed time. Design a smooth time-varying output feedback
{[ ξ̇( t) =Fξ( t) +Gu( t)
+L( t) y( t) ,; u( t) =H( t) ξ( t) , ].
such that, for any initial condition, the states and controls converge to zero
at the prescribed time T.
§.§ Preliminaries
§.§.§ Some properties of ϕ(t,u,x)
Consider the time-varying state transformation
z( t) =L_n( 1/T-t) x( t)
, t∈0,T),
where
L_n( γ) =diag{γ^n-1,γ
^n-2,…,1} .
For later use, denote, for i=1,2,…,n,
φ_i(t,u,x) =( T-t) ^i-nϕ_i(t,u,x),
ψ_i(t,u,x) =φ_i(t,u,x)+n-i/T-tz_i.
Therefore, by using Assumption <ref> and (<ref>), it can be
obtained that, for i=1,2,…,n,
|φ_i(t,u,x)| = ( T-t)
^i-n|ϕ_i(t,u,x)|
≤ ( T-t) ^i-n∑_j=1^ic_ij|
x_j|
= ( T-t) ^i-n∑_j=1^ic_ij|(
T-t) ^n-jz_j|
= ∑_j=1^ic_ij|( T-t) ^i-jz_j| ,
and further
|ψ_i(t,u,z)|≤ ∑_j=1^ic_ij|(T-t)^i-jz_j|+n-i/T-t|z_i|
= 1/T-t∑_j=1^ic_ij|(T-t)^i-j+1z_j|+n-i/T-t|z_i|
≤ 1/T-t∑_j=1^ig_ij| z_j| ,
where g_ij=c_ijT^i+1-j, j=1,2,…,i-1, i=2,3,…,n,
andg_ii=c_iiT+n-i, i=1,2,…,n.
With above preparations, we can state the following result by referring Lemma
1 in <cit.>.
Suppose that Assumption <ref> holds true. Then, for any
z∈𝐑^n and any t∈0,T), there holds
Φ^2( t,u,z) ≜( L_nψ)
^T( L_nψ) ≤ d^2γ^2(
L_nz) ^T( L_nz) ,
where γ=1/T-t, γ_0=1/T, L_n=L_n(γ),
and
d^2=max{∑_i=1^ng_i1^2i/γ_0^2( i-1) ,⋯,
∑_i=n^ng_in^2i/γ_0^2( i-n) } .
It follows from (<ref>) that
ψ_i^2( t,u,z) ≤1/( T-t) ^2(
∑_j=1^i
g_ij| z_j|) ^2
≤i/( T-t) ^2∑_j=1^i
g_ij^2| z_j| ^2.
Then in view of (<ref>) and (<ref>), we have
Φ^2( t,u,z) =( L_nψ)
^T( L_nψ)
=
∑_i=1^nγ^2( n-i) ψ_i^2( t,u,z)
≤1/( T-t) ^2∑_i=1^nγ^2( n-i) i
∑_j=1^i
g_ij^2| z_j| ^2
≤ d^2γ^2( |γ^n-1z_1|
^2+|γ^n-2z_2| ^2+⋯+|γ
^0z_n| ^2) .
The proof is finished.
§.§.§ Some Properties of the PLE
We now introduce some properties of the parametric Lyapunov equations (PLEs)
that will be used in this paper. Denote
A=[
[ 0 1 ; ⋮ ⋱ ; 0 1 ; 0 0 ⋯ 0 ]] , b=[
[ 0; ⋮; 0; 1 ]] , c=[
[ 1; 0; ⋮; 0 ]] ^T.
Consider the following PLE <cit.>
A^TP+PA-Pbb^TP=-γ P,
and its dual form
AQ+QA^T-Qc^TcQ=-γ Q,
where γ>0 is a parameter to be designed.
<cit.> Let (A,b)∈(𝐑^n× n
,𝐑^n×1) be given by (<ref>) and γ>0.
* The PLE (<ref>) has a unique positive definite solution
P(γ)=γ L_nP_nL_n,
where P_n=P(1) and L_n=L_n(γ).
* The solution P(γ) satisfies
dP(γ)/dγ>0, b^T
P(γ)b=nγ.
* There holds
P(γ)/nγ≤dP(γ)/dγ≤δ_cP(γ)/nγ, ∀γ>0,
where δ_c=n(1+λ_max(E_n+P_nE_nP_n^-1))
with E_n=diag{n-1,n-2,…,1,0}.
* There holds
A^TPA≤3n^2γ^2P.
<cit.> Let ( A,c) ∈( 𝐑^n× n
,𝐑^1× n) be given by (<ref>) and γ>0.
* The unique positive definite solution Q(γ) is given by
Q(γ)=γ^2n-1L_n^-1Q_nL_n^-1,
where Q_n=Q(1) and L_n=L_n(γ).
* There holds
Q( γ) /nγ≤dQ( γ) /dγ≤δ_oQ( γ) /nγ, ∀γ>0,
with δ_o is a constant.
* The matrices P_n,Q_n,P_n^-1, and Q_n^-1 are similar to
each other and thus
λ_max(P_n) =λ_max(Q_n)=λ_min^-1
(P_n)=λ_max(Q_n^-1)
=λ_min^-1(Q_n)=λ_min^-1(Q_n^-1)≜Λ.
* There holds
cQ(γ)c^T=nγ.
§ STATE FEEDBACK
A solution to Problem <ref> can be stated as follows.
Let T>0 be a prescribed time, β=2n+δ
_c/n+2dΛ, and γ_0=1/T. Design the
smooth time-varying state feedback
{[ u_0( t) =-3/T-tx_0( t)-β/2(T-t),; u( t) =-β b^TP( γ (t)
) L_n( 1/T-t) x( t),; γ ( t) =T/T-tγ_0. ].
Then, for any initial condition and all t∈0,T), the states and
controls of the closed-loop system consisting of (<ref>) and
(<ref>) satisfy
| x_0(t)| ≤ ( T-t) ^2(
T-t/T^3| x_0(0)| +β t/2T)
,
| u_0(t)| ≤(T-t)( 3( T-t)
( | x_0(0) |/T^3+β/2T)
+β) ,
‖ x( t) ‖ ≤υ_1(
T-t) ^3/2e^υ_10| x_0
(0)| t‖ x( 0) ‖ ,
‖ u(t) ‖ ≤υ_2(
T-t) ^1/2e^υ_20| x_0
(0)| t‖ x( 0) ‖ ,
where υ_1, υ_10, υ_2 and υ_20 are
some positive constants, namely, Problem <ref> is solved.
Consider the x_0-subsystem in system (<ref>). The closed-loop system
consisting of (<ref>) and the first equation in (<ref>) can be
written as
ẋ_0=-3/T-tx_0-β/2(T-t),
whose solution can be given as
x_0(t)= exp( -∫_0^t3/T-sd
s) x_0(0)
-β/2∫_0^texp( -∫_s^t3/T-τdτ) (T-s)ds
= ( T-t/T) ^3x_0(0)-β/2
(T-t)^2+β/2T(T-t)^3
= (T-t)^2( T-t/T^3x_0(0)-β t/2T) ,
which proves (<ref>). Then the control u_0 can be written as the
open-loop form
u_0(t)= -3/T-tx_0-β/2(T-t)
= -3/T-t( ( T-t) ^2( T-t/T^3x_0(0)-β t/2T) )
-β/2(T-t)
= (T-t)( 3( T-t) ( -x_0(0)/T^3
-β/2T) +β)
≜ (T-t)( θ(t)+β) ,
which proves (<ref>).
Consider the x-subsystem in system (<ref>). By using (<ref>),
system (<ref>) can be written as
{[ ż_i=n-i/T-tz_i+( β+θ(t))
z_i+1+( T-t) ^i-nϕ_i,; ż_n=u+ϕ_n, ].
where i=1,2,…,n-1, which can be re-expressed as
ż=1/T-tA_0z+θ(t)Az+β Az+φ+bu,
where φ=φ(t,u,x)=[φ_1(t,u,x),φ_2(t,u,x),…
,φ_n (t,u,x)]^T, and
A_0=diag{n-1,n-2,…,2,1}.
Such a system can be further expressed as
ż=β( A-bb^TP) z+θ(t)Az+ψ
,
where ψ=ψ(t,u,z)=[ψ_1(t,u,z),ψ_2(t,u,z),…,ψ_n
(t,u,z)]^T. Choose the Lyapunov-like function
V( t,z(t)) =γ(t) z^T(t)P(γ(t))z(t),
whose time-derivative along the trajectory of the closed-loop system
(<ref>) can be written as
V̇( t,z) = γ̇z^TPz+γγ̇z^TdP/dγz
+βγ z^T( A^TP+PA-2Pbb^TP) z
+2γθ z^TPAz+2γ z^TPψ
≤ γ̇z^TPz+δ_c/nγ̇z^TPz-βγ^2z^TPz
+2γθ z^TPAz+2γ z^TPψ,
where we have used (<ref>) and (<ref>). By using the Young's
inequalities with k_0>0 and k_1>0, we have
2z^TPAz ≤ k_0γ z^TPz+1/k_0γz^TA^TPAz,
2z^TPψ ≤ k_1γ z^TPz+1/k_1γψ^TPψ.
In addition, it can be obtained from Lemma <ref>, (<ref>),
and (<ref>) that
ψ^TPψ =ψ^Tγ L_nP_nL_nψ≤Λγ( L_nψ) ^T(
L_nψ)
=d^2Λγ^3( L_nz) ^T(
L_nz) ≤ d^2Λ^2γ^2z^TPz.
With this and (<ref>) in Lemma <ref>, (<ref>) can be continued as
V̇( t,z) ≤ γ̇z^TPz+δ_c/nγ̇z^TPz-βγ
^2z^TPz
+k_0|θ|γ^2z^TPz+|θ|/k_0z^TA^TPAz
+k_1γ^2z^TPz+1/k_1ψ^TPψ
≤ γ̇z^TPz+δ_c/nγ̇z^TPz-βγ^2z^TPz
+k_0|θ|γ^2z^TPz+3n^2|θ|/k_0γ^2z^TPz
+k_1γ^2z^TPz+1/k_1d^2Λ
^2γ^2z^TPz
= ( γ̇+δ_c/nγ̇
-βγ^2+k_1γ^2+1/k_1d^2Λ
^2γ^2) z^TPz
+( k_0|θ|γ^2+3n^2/k_0|θ|γ^2) z^TPz
= ( 1+δ_c/n) μ_zz^TPz+2|θ|√(3)nγ^2z^TPz
= -γ^2z^TPz+θ_0γ z^TPz,
where
θ_0 =6( | x_0(0)|/T^3
+β/2T) √(3)n,
μ_z =γ̇-n/n+δ_cβγ^2
+2ndΛ/n+δ_cγ^2=-n/n+δ_cγ^2,
and we have taken k_0=√(3)n and k_1=dΛ. By using
the comparison lemma in <cit.>, V(t,z(t)) satisfies
V(t,z(t)) ≤exp( -∫_0^tγ(s)ds)
e^θ_0tV(0,z(0))
=( T-t) e^θ_0tV(0,z(0)), t∈0,T).
It follows from (<ref>) that
V(t,z(t))=γ z^TPz≥γ^2λ_min(L_n(γ
_0)P_nL_n(γ_0))‖ z(t)‖^2.
Therefore, we have
‖ z( t) ‖ ^2≤( T-t)
^3e^θ_0tV( 0,z( 0) ) /λ_min( L_n( γ_0) P_nL_n(
γ_0) ) ,
which, together with (<ref>), indicates that
‖ x( t) ‖ ≤‖ L_n(
T-t) ‖‖ z( t) ‖
≤‖ L_n( T) ‖( T-t)
^3/2e^1/2θ_0t√(V( 0,z(
0) ) )/√(λ_min( L_n( γ
_0) P_nL_n( γ_0) ) ).
In addition, it can be obtained from (<ref>) and (<ref>) that
V( 0,z( 0) ) =γ_0z^T(0)P(
γ_0) z(0)
≤γ_0^2λ_max( L_n( γ_0)
P_nL_n( γ_0) ) ‖ z(0)‖
^2
≤γ_0^2λ_max( L_n( γ_0)
P_nL_n( γ_0) ) ‖ L_n( 1/T) ‖ ^2‖ x(0)‖ ^2,
combining which with (<ref>), indicates that
‖ x( t) ‖ ≤‖ L_n(
T) ‖( T-t) ^3/2e
^1/2θ_0t√(V( 0,z( 0) ) )/√(λ_min( L_n( γ_0) P_nL_n(
γ_0) ) )
≤ γ_0√(ω_1)‖ L_n( T)
‖( T-t) ^3/2e^1/2θ_0t‖ L_n( 1/T) ‖‖
x(0)‖ ,
where ω_1=λ_max(L_n(γ_0)P_nL_n(γ
_0))/λ_min(L_n(γ_0)P_nL_n (γ_0)). Thus we
have proven (<ref>).
Finally, it follows from (<ref>), (<ref>), (<ref>) that
‖ u(t)‖^2 =β^2z^TP(γ(t))bb^TP(γ(t))z
≤β^2b^TP(γ(t))bz^TP(γ(t))z
=β^2nγ z^TP(γ(t))z
=β^2nV(t,z(t))
≤β^2n(T-t)e^θ_0tV(0,z(0))
=β^2n(T-t)e^θ_0tλ_max(L_n(γ
_0)P_nL_n(γ_0))
×‖ L_n( 1/T) ‖ ^2‖
x(0)‖^2,
which proves (<ref>). The proof is finished.
If the x_0-subsystem in system (<ref>) is of the following form
<cit.>
ẋ_0=u_0+c_0x_0,
where c_0 is a known constant, we can get the following result.
Let T>0 be a prescribed time, β=(2n+δ_c)e
^| c_0| T/n+2dΛe^|
c_0| T, and γ_0=1/T. Then, for any initial condition
and all t∈0,T), the state and control of the closed-loop system
consisting of (<ref>) with x_0-subsystem be defined in (<ref>) and
the following smooth time-varying feedback
{[ u_0( t) =-3/T-tx_0-β/2e
^c_0t(T-t),; u( t) =-βe^c_0tb^TP(
γ( t) ) L_n( 1/T-t) x,; γ( t) =T/T-tγ_0, ].
satisfy
| x_0(t)| ≤( T-t) ^2e
^c_0t( ( T-t) | x_0(0)|/T^3+β t/2T) ,
| u_0(t)| ≤( T-t) e^c_0
t( 3( T-t) ( | x_0(0)|/T^3+β/2T) +β) ,
‖ x( t) ‖ ≤υ_11(
T-t) ^3/2e^υ_12| x_0
(0)| t‖ x( 0) ‖ ,
‖ u( t) ‖ ≤υ_21(T-t)^1/2e^υ_22| x_0(0)| t‖
x( 0) ‖ ,
where υ_11, υ_12, υ_21 and υ_22
are some positive constants.
The closed-loop system consisting of (<ref>) and u_0( t)
in the first equation in (<ref>) can be written as
ẋ_0(t)=-3/T-tx_0(t)+c_0x_0(t)-β/2e^c_0
t(T-t),
whose solution can be given as
x_0(t)= e^c_0texp( -∫_0^t3/T-sds) x_0(0)
-β/2∫_0^te^c_0texp( -∫_s
^t3/T-τdτ) (T-s)ds
= ( T-t/T) ^3x_0(0)e^c_0t
-β/2e^c_0t( T-t) ^2
+βe^c_0t( T-t) ^3/2T
= ( T-t) ^2( ( T-t) x_0
(0)/T^3e^c_0t-tβ/2Te^c_0t) ,
which proves the expression of | x_0(t)| in this
proposition. Then the control u_0 can be written as the open-loop form
u_0(t)=-3/T-tx_0-β/2e^c_0t(T-t)
= -3/T-t( ( T-t) ^2( (
T-t) x_0(0)/T^3e^c_0t-tβ/2Te^c_0t) )
-β/2e^c_0t(T-t)
= ( T-t) ( -3( T-t) ( x_0
(0)/T^3e^c_0t+βe^c_0t/2T)
+βe^c_0t)
≜(T-t)( θ(t)+βe^c_0t) ,
which proves the expression of | u_0(t)| in this
proposition. The rest of the proof is similar as the proof of Theorem
<ref>, and is omitted for brevity.
When c_0=0, the above theorem reduces to Theorem <ref>. In addition,
similar to Remark 4 in <cit.>, the proposed method can also
handle the nonholonomic system (<ref>) with the nonlinear functions
satisfying
|ϕ_i( t,u,x) |≤φ
(x_0)(| x_1| +| x_2|
+⋯+| x_i| )
in which φ(x_0) are some known positive continuous functions. Due to
space limitations, the details are not given here.
§ OBSERVED-BASED OUTPUT FEEDBACK
To state a solution to Problem <ref>, we first denote the parameter
β as
β=max{β_1,β_2} ,
where
β_1 =8√(2)ndΛcQ_nP_nQ_nc^T+2( 2+δ_c/n) ,
β_2 =4√(2)dΛ+2( 2n+2-1/n) ,
in which d is defined in (<ref>).
Let T>0 be a prescribed time and γ_0=1/T. Design the smooth
time-varying observed-based output feedback
{[ ξ̇(t)=β Aξ(t)+bu(t)+β Q(γ)c^T(
γ^n-1y_2-cξ(t)) ,; u(t)=-β b^TP(γ)ξ(t),; γ=γ(t)=T/T-tγ_0. ].
Then, for any initial condition and all t∈0,T), the states
x_0(t), x(t), ξ( t), and controls u_0(t),
u( t) of the closed-loop system consisting of (<ref>) and
(<ref>) satisfy, (<ref>), (<ref>) and
‖ x( t) ‖ ≤υ_3e
^υ_30| x_0| t( T-t) ^3/2( ‖ x( 0) ‖ +‖ξ(
0) ‖) ,
‖ξ( t) ‖ ≤υ_4e^υ_40| x_0| t( T-t)
^3/2( ‖ x( 0) ‖ +‖ξ( 0) ‖) ,
‖ u( t) ‖ ≤υ_5e
^υ_50| x_0| t( T-t) ^1/2( ‖ x( 0) ‖ +‖ξ(
0) ‖) ,
where υ_3, υ_30, υ_4, υ_40,
υ_5 and υ_50 are some positive constants, namely,
Problem <ref> is solved.
In this proof, if not specified, we drop the dependence of variables on t,
for example, we use P=P(γ(t)) and Q=Q(γ(t)). Consider the
x-subsystem in (<ref>). Denote e=z-ξ. By using (<ref>), the
closed-loop system can be written as
ξ̇ =β ( A-bb^TP) ξ+β Qc^Tce,
ė =β Ae+θ Az+ψ-β Qc^Tce.
Choose the Lyapunov-like function
V_ξ( t,ξ(t)) =γ(t) ξ^T(t)P(γ(t))ξ(t),
whose time-derivative along system (<ref>) can be written as
V̇_ξ( t,ξ ) = γ̇ξ^T
Pξ+γγ̇ξ^TdP/dγξ+2βγξ^TPQc^Tce
+βγξ^T( A^TP+PA-2Pbb^TP) ξ
≤ γ̇ξ^TPξ+δ_cγ̇/nξ^TPξ
-βγ^2ξ^TPξ+2βγξ^TPQc^Tce,
where we have used (<ref>) and (<ref>). By using Young's
inequality with k_2>0, it follows that
2βξ^TPQc^Tce≤ k_2βγξ^TPξ+β/k_2γe^Tc^TcQPQc^Tce.
In addition, according to Theorem 2 in <cit.>, we have
c^TcQPQc^Tc≤γ^2n+2nk_3Q^-1,
where k_3=cQ_nP_n Q_nc^T.
Therefore, (<ref>) can be continued as
V̇_ξ( t,ξ) ≤ γ̇ξ^T
Pξ+δ_cγ̇/nξ^TPξ-βγ^2ξ^TPξ+k_2βγ^2ξ^T
Pξ
+β/k_2γ^2n+2nk_3e^TQ^-1e
= ( γ̇/γ+δ_cγ̇/nγ-βγ+k_2βγ) γξ^TPξ
+β/k_2nk_3γ^2n+2e^TQ^-1e.
Take another Lyapunov-like function
V_e( t,e(t)) =γ^2n+1(t)e^T(t)Q^-1(γ(t))e(t),
whose time-derivative along system (<ref>) can be written as
V̇_e(t,e)= (2n+1)γ^2nγ̇e^T
Q^-1e+γ^2n+1γ̇e^TdQ^-1/dγe
+γ^2n+1(β Ae+θ Az+ψ-β Qc^Tce)^TQ^-1e
+γ^2n+1e^TQ^-1(β Ae+θ Az+ψ-Qc^Tce)
≤ (2n+1)γ^2nγ̇e^TQ^-1e-γ̇/nγ^2ne^TQ^-1e
+γ^2n+1(β Ae+θ Az+ψ-β Qc^Tce)^TQ^-1e
+γ^2n+1e^TQ^-1(β Ae+θ Az+ψ-β
Qc^Tce)
≤ ( 2n+1-1/n) γ^2nγ̇e^TQ^-1e+2γ^2n+1e^TQ^-1ψ
+γ^2n+1β e^TQ^-1(QA^T+AQ-2Qc^TcQ)Q^-1e
+2θγ^2n+1e^TQ^-1Az,
where we have used (<ref>). By using Young's inequalities with k_4>0
and k_5>0, we can get
2z^TA^TQ^-1e ≤k_4/γ
z^TA^TQ^-1Az+γ/k_4e^T
Q^-1e,
2e^TQ^-1ψ ≤k_5/γψ^T
Q^-1ψ+γ/k_5e^TQ^-1e.
On the one hand, it follows from Lemma <ref>, (<ref>) and (<ref>)
that
ψ^TQ^-1ψ =ψ^T( γ^2n-1
L_n^-1Q_nL_n^-1) ^-1ψ
≤Λ/γ^2n-1( L_nψ)
^T( L_nψ)
≤d^2Λ/γ^2n-3( L_nz)
^T( L_nz) .
On the other hand, in light of z=ξ+e, (<ref>), (<ref>) and
(<ref>), we can get
( L_nz) ^T( L_nz) =(
L_n( ξ+e) ) ^T( L_n(
ξ+e) )
≤2( L_nξ) ^T( L_nξ)
+2( L_ne) ^T( L_ne)
≤2Λ( ξ^TPξ/γ
+γ^2n-1e^TQ^-1e) .
Therefore, we have
ψ^TQ^-1ψ ≤2d^2Λ^2/γ^2n-3( ξ^TPξ/γ+γ
^2n-1e^TQ^-1e)
=2d^2Λ^2/γ^2n-2ξ^T
Pξ+2d^2Λ^2γ^2e^TQ^-1e.
In view of L_nA=AL_nγ, (<ref>) and (<ref>), it yields that
A^TQ^-1A =A^T1/γ^2n-1L_n
Q_n^^-1L_nA
=1/γ^2n-3L_nA^TQ_n^^-1AL_n
≤λ_max( A^TQ_n^^-1A) 1/γ^2n-3L_nL_n
=λ_max( A^TQ_n^^-1A) Λ1/γ^2n-3L_nΛ^-1L_n
≤λ_max( A^TQ_n^^-1A)
Λ1/γ^2n-3L_nQ_n^^-1L_n
=λ_max( A^TQ_n^^-1A) Λγ^2Q^-1≜ k_8γ^2Q^-1.
Again in view of z=ξ+e, (<ref>), and (<ref>), it follows that
z^TQ^-1z =( ξ+e) ^TQ^-1(
ξ+e)
≤2ξ^TQ^-1ξ+2e^TQ^-1e
≤2/γ^2n-1ξ^TL_nQ_n^^-1L_nξ+2e^TQ^-1e
≤2Λ/γ^2n-1ξ^TL_nL_nξ+2e^TQ^-1e
≤2Λ^2/γ^2n-1ξ^TL_n
P_nL_nξ+2e^TQ^-1e
=2Λ^2/γ^2nξ^TPξ
+2e^TQ^-1e.
Then we have
z^TA^TQ^-1Az ≤ k_8γ^2z^TQ^-1z
≤2Λ^2k_8/γ^2n-2ξ^T
Pξ+2k_8γ^2e^TQ^-1e.
With the above results and (<ref>), (<ref>) can be written as
V̇_e( t,e) ≤ ( 2n+1-1/n)
γ^2nγ̇e^TQ^-1e
-γ^2n+2β e^TQ^-1e+|θ|γ^2nk_4z^TA^TQ^-1Az
+|θ|1/k_4γ^2n+2e^TQ^-1e+γ^2nk_5ψ^TQ^-1ψ
+1/k_5γ^2n+2e^TQ^-1e
≤ ( ( 2n+1-1/n) γ̇/γ-βγ) γ^2n+1e^TQ^-1e
+( 2k_5d^2Λ^2+1/k_5)
γ^2n+2e^TQ^-1e
+2|θ|Λ^2k_4k_8γ^2ξ^TPξ
+|θ|( 2k_4k_8+1/k_4)
γ^2n+2e^TQ^-1e
+2k_5d^2Λ^2γ^2ξ^TPξ.
Take the total Lyapunov-like function
𝒱=𝒱( t,ξ,e) =V_ξ( t,ξ)
+k_7V_e( t,e) ,
where k_7 is a positive constant to be designed later. It can be calculated from
(<ref>) and (<ref>) that
𝒱̇= V̇_ξ( t,ξ) +k_7V̇
_e( t,e)
≤ ( γ̇+δ_cγ̇/n
-βγ^2+k_2βγ^2) ξ^TPξ
+β/k_2γ^2n+2nk_3e^TQ^-1e
+k_7( ( 2n+1-1/n) γ̇-βγ
^2) γ^2ne^TQ^-1e
+k_7( 2d^2Λ^2k_5+1/k_5)
γ^2n+2e^TQ^-1e
+2|θ|Λ^2k_4k_7k_8γ^2ξ^TPξ
+|θ|( 2k_4k_8+1/k_4)
k_7γ^2n+2e^TQ^-1e
+2d^2Λ^2k_5k_7γ^2ξ^TPξ
= μ_ξξ^TPξ+μ_ek_7γ^2ne^TQ^-1e+2|θ|Λ^2k_7k_4
k_8γ^2ξ^TPξ
+|θ|/k_4k_7γ^2n+2
e^TQ^-1e+2k_4k_8|θ| k_7γ^2n+2e^TQ^-1e,
where
μ_ξ= γ̇+δ_cγ̇/n
-βγ^2+k_2βγ^2+2d^2Λ^2k_5
k_7γ^2
μ_e= ( 2n+1-1/n) γ̇-βγ
^2+1/k_5γ^2
+2d^2Λ^2k_5γ^2+β nk_3/k_2k_7γ^2.
Take the parameters as k_2=1/2, k_5=1/(√(2)dΛ) and
k_7=4nk_3. Then it follows from (<ref>) that
μ_ξ= ( 1+δ_c/n) γ̇-( 1-k_2) βγ^2+2d^2Λ^2k_5
k_7γ^2
≤ ( 1+δ_c/n) γ̇-1/2β_1γ^2+4√(2)dΛcQ_nP_nQ_n
c^Tγ^2
= ( 1+δ_c/n) γ̇-(
1+δ_c/n) γ^2-γ^2 = -γ^2,
μ_e= ( 2n+1-1/n) γ̇-(
1-nk_3/k_2k_7) βγ^2+1/k_5γ
^2
+2d^2Λ^2k_5γ^2
≤ ( 2n+1-1/n) γ̇-1/2β
_2γ^2+2√(2)dΛγ^2
= ( 2n+1-1/n) γ̇-( 2n+1-1/n) γ^2-γ^2
= -γ^2.
Therefore, we have
𝒱̇( t,ξ,e) ≤ -γ𝒱(
t,ξ,e) +2|θ|Λ^2k_7
k_4k_8γ^2ξ^TPξ
+|θ|1/k_4k_7γ^2n+2
e^TQ^-1e
+2|θ| k_4k_8k_7γ^2n+2e^TQ^-1e
= -γ𝒱( t,ξ,e) +6|
x_0(0)|/T^2Λ^2k_7k_4k_8γξ^TPξ
+3βΛ^2k_7k_4k_8γξ^TPξ
+3( | x_0(0)|/T^2+β/2) 1/k_4k_7γ^2n+1e^TQ^-1e
+3( 2| x_0(0)|/T^2+β)
k_4k_8k_7γ^2n+1e^TQ^-1e
≤ -γ𝒱( t,ξ,e) +λ𝒱(
t,ξ,e) ,
where
λ= 3( 2| x_0(0)|/T^2
+β) Λ^2k_7k_4k_8+3( |
x_0(0)|/T^2+β/2) 1/k_4
+3( 2| x_0(0)|/T^2+β)
k_4k_8,
which, together with the comparison lemma in <cit.>, indicates
𝒱(t,ξ(t),e(t))≤(T-t)e^λ t𝒱(0,ξ
(0),e(0)), t∈0,T).
It can be obtained from (<ref>) and (<ref>) that
V_ξ( t,ξ ) =γξ^TP(
γ) ξ≥γ^2ξ^TL_n( γ_0) P_n
L_n( γ_0) ξ
≥γ^2λ_min( L_n( γ_0)
P_nL_n( γ_0) ) ‖ξ‖
^2,
V_e( t,e) =γ^2n+1e^TQ^-1(
γ) e
≥γ^2e^TL_n( γ_0) Q_n^^-1L_n( γ_0) e
≥γ^2λ_min( L_n( γ_0)
Q_n^^-1L_n( γ_0) ) ‖ e‖
^2,
which indicates that
𝒱( t,ξ,e) =V_ξ( t,ξ) +k_7
V_e( t,e) ≥γ^2ω_1( γ_0)
‖χ‖ ^2,
where χ=[ξ^T,e^T]^T and ω
_1(γ_0)=min{λ_min(L_n(γ_0)P_n L_n
(γ_0)),λ_min(L_n(γ_0)Q_n^^-1L_n(γ
_0))}. Therefore, it yields that
‖χ ( t) ‖ ^2 ≤𝒱
( t,ξ(t),e(t)) /γ^2( t) ω_1(
γ_0) ≤( T-t) ^3e^λ t𝒱(
0,ξ(0),e(0)) /ω_1( γ_0)
≤( T-t) ^3ω_2( γ_0)
/T^2ω_1( γ_0) e^λ t‖χ( 0) ‖ ^2,
where ω_2(γ_0)=max{λ_max(L_n(γ_0
)P_nL_n(γ_0)),λ_max(L_n (γ_0)Q_n^^-1
L_n(γ_0))}. Thus (<ref>) is proven. Moreover, according to
(<ref>) and e=z-ξ, it follows that
‖ x(t) ‖ ≤‖ L_n(
T-t) ‖‖ z(t) ‖
≤‖ L_n( T-t) ‖( ‖
e( 0) ‖ +‖ξ (0) ‖)
≤2‖ L_n(T) ‖‖χ(
t) ‖
≤2‖ L_n( T) ‖( T-t)
^3/2√(ω_2( γ_0) )/T√(ω_1( γ_0) )e^λ t/2‖χ( 0) ‖
≤2‖ L_n( T) ‖( T-t)
^3/2√(ω_2( γ_0) )/T√(ω_1( γ_0) )
×e^λ t/2( ‖ L_n(
1/T) ‖‖ x( 0) ‖
+2‖ξ( 0) ‖) ,
which proves (<ref>).
Finally, by using (<ref>), (<ref>) and (<ref>), we can obtain
that
‖ u( t) ‖ ^2 =β^2ξ^TP( γ ) bb^TP( γ) ξ
≤β^2b^TP( γ ) bξ^TP( γ ) ξ
=β^2nγξ^TP( γ) ξ
=β^2nV_ξ( t,ξ)
≤β^2n𝒱( t,ξ,e)
≤β^2n( T-t) e^λ t𝒱(
0,ξ(0),e(0))
≤β^2nγ_0^2ω_2( γ_0) (
T-t) e^λ t‖χ ( 0) ‖
^2
≤β^2nγ_0^2ω_2( γ_0) (
T-t) e^λ t
×( ‖ L_n( 1/T) ‖‖ x( 0) ‖ +2‖ξ( 0)
‖) ,
which prove (<ref>). The proof is finished.
For nonholonomic systems, compared to time-invariant discontinuous feedback,
such as <cit.> and their citations,
there are few related results on smooth time-varying feedback. To the best of
the authors' knowledge, there are no smooth time-varying feedback results for
prescribed-time control of nonholonomic systems available in the literature.
In this paper, on the one hand, different from the traditional discontinuous
control methods <cit.>, the
proposed methods are smooth, time-varying and even linear. On the other hand,
different from smooth time-varying methods
<cit.>, where only the state
feedback controllers were considered and drive the state to converge to zero
exponentially, the proposed controllers are not only based on state feedback
but also on output feedback, and can drive the state to converge to zero
globally at any prescribed time. In addition, different from
<cit.>, the parameters in our controllers do not depend
on the initial state of the x_0-subsystem.
§ A NUMERICAL EXAMPLE
In this part, we consider the uncertain bilinear model
<cit.>
ẋ_0=( 1-ε^2/2) v, ż_1
=z_2v, ż_2=u+φ( z_1) ,
where ε is the bias in orientation and is assumed to be much
smaller than one, v and u are the two controls, and φ(
z_1) =z( 1+θ_1^2) with θ_1=θ
_1(t) being an uncertain function.
Denote the output y=[x_0,z_1]^T and
u_0 =( 1-ε^2/2) v, x_1
=z_1, x_2=2/2-ε^2z_2,
u_1 =2/2-ε^2u, ϕ ( x_1)
=2/2-ε^2φ ( x_1) ,
the system (<ref>) can be expressed as
ẋ_0=u_0, ẋ_1=x_2u_0, ẋ_2=u_1+ϕ (
x_1) , y=[x_0,x_1]^T.
For this system, the existing results are all based on the σ process (see
<cit.>), that is, x_0(t) appears on the denominator (see
<cit.>). When x_0(t) is close to zero, this may cause
singular problems. For the situation when x_0(0)=0, most of the existing
methods rely on using switching controls (see
<cit.>), and the control strategy proposed in this paper
is workable for all initial states of x_0(t), thus having greater potential for application.
In the simulation, we take ε=0.1, [x_0(0),z_1(0),z_2
(0)]=[0,-1,1], [ξ_1(0),ξ_2(0)]=[0,0], θ_1=sin(t),
T=2.5s and β=100. The states and controllers are plotted in Fig.
<ref>, from which it can be observed that the proposed method can
indeed drive the states and controllers to converge to zero at the prescribed
time T=2.5s.
§ CONCLUSION
The smooth prescribed-time control problem for uncertain chained nonholonomic
systems was solved in this paper. With a novel smooth time-varying state
transformation, uncertain chained nonholonomic systems were reformulated as
uncertain linear time-varying systems. By fully utilizing the properties of a
class of parametric Lyapunov equations and constructing time-varying
Lyapunov-like functions, smooth time-varying high-gain controllers were
constructed. It was proved that the states and controllers can converge to zero at
the prescribed time. Both state feedback and observer-based output feedback
were considered. The effectiveness of the proposed methods was verified by a
numerical example.
99
Astolfi96sclA. Astolfi. Discontinuous control of nonholonomic
systems. Systems & Control Letters, 27(1), 37-45, 1996.
BBsiam00S. P. Bhat and D. S. Bernstein. Finite-time stability of
continuous autonomous systems. SIAM Journal on Control and
Optimization, 38(3), 751-766, 2000.
Brockett83R. W. Brockett. Asymptotic stability and feedback
stabilization, in: Differential Geometric Control Theory, 27(1),
181-191, 1983.
Ding23autoT. F. Ding, M. F. Ge, C. Xiong, Z. W. Liu, and G. Ling.
Prescribed-time formation tracking of second-order multi-agent networks with
directed graphs. Automatica, 152, 110997, 2023.
Du15autoH. Du, G. Wen, X. Yu, S. Li, and M. Z. Chen. Finite-time
consensus of multiple nonholonomic chained-form systems based on recursive
distributed observer. Automatica, 62, 236-242, 2015.
esp22autoN. Espitia, D. Steeves, W. Perruquetti, and M. Krstic,
Sensor delay-compensated prescribed-time observer for LTI systems,
Automatica, 135, 110005, 2022.
Ge03autoS. S. Ge, Z. Wang, and T. H. Lee. Adaptive stabilization of
uncertain nonholonomic systems by state and output feedback.
Automatica, 39(8), 1451-1460, 2023.
Holl19tacJ. Holloway and M. Krstic, Prescribed-time observers for
linear systems in observer canonical form, IEEE Transactions on
Automatic Control, 64(9), 3905-3912, 2019.
Holl19autoJ. Holloway and M. Krstic, Prescribed-time output
feedback for linear systems in controllable canonical form,
Automatica, 107, 77-85, 2019.
Hongtac05Y. Hong, J. Wang, and Z. Xi. Stabilization of uncertain
chained form systems within finite settling time. IEEE Transactions on
Automatic Control, 50(9), 1379-1384, 2005.
Hou21astD. Hou, F. Gao, J. Huang, and Y. Wu. A switching-based
state-scaling design for prescribed-time stabilization of nonholonomic systems
with actuator dead-zones. Aerospace Science and Technology, 118,
106986, 2021.
Huatac19C. C. Hua, P. Ning, and K. Li. Adaptive prescribed-time
control for a class of uncertain nonlinear systems. IEEE Transactions
on Automatic Control. 67(11), 6159-6166, 2021.
Jiang00autoZ. P. Jiang, Robust exponential regulation of
nonholonomic systems with uncertainties. Automatica, 36(2), 189-209, 2000.
KhalilH. K. Khalil, Nonlinear Systems (3rd ed.). Englewood
Cliffs, USA: Prentice-Hal, 2002.
Kri20autoP. Krishnamurthy, F. Khorrami, and M. Krstic, A dynamic
high-gain design for prescribed-time regulation of nonlinear systems,
Automatica, 115, 108860, 2020.
Kri20ejcP. Krishnamurthy, F. Khorrami, and M. Krstic, Robust
adaptive prescribed-time stabilization via output feedback for uncertain
nonlinear strict-feedback-like systems, European Journal of Control,
55, 14-23, 2020.
Li21tacW. Li and M. Krstic, Stochastic nonlinear prescribed-time
stabilization and inverse optimality, IEEE Transactions on Automatic
Control, 67(3), 1179-1193, 2021.
Li23tacW. Li and M. Krstic, Prescribed-time output-feedback control
of stochastic nonlinear systems, IEEE Transactions on Automatic
Control, 68(3), 1431-1446, 2023.
LQtac02W. Lin, R. Pongvuthithum, and C. Qian. Control of high-order
nonholonomic systems in power chained form using discontinuous feedback.
IEEE Transactions on Automatic Control, 47(1), 108-115, 2002.
Murry93tacR. M. Murray and S. S. Sastry. Nonholonomic motion
planning: Steering using sinusoids. IEEE Transactions on Automatic
Control, 38(5), 700-716, 1993.
Han18tacB. Ning and Q. L. Han. Prescribed finite-time consensus
tracking for multiagent systems with nonholonomic chained-form dynamics.
IEEE Transactions on Automatic Control, 64(4), 1686-1693, 2018.
Orlov22autoY. Orlov, Time space deformation approach to
prescribed-time stabilization: Synergy of time-varying and non-Lipschitz
feedback designs, Automatica, 144, 110485, 2022.
Song17autoY. Song, Y. Wang, J. Holloway, and M. Krstic.
Time-varying feedback for regulation of normal-form nonlinear systems in
prescribed finite time. Automatica, 83, 243-251, 2017.
Sun19autoZ. Y. Sun, Y. Shao, and C. C. Chen. Fast finite-time
stability and its application in adaptive control of high-order nonlinear
system. Automatica, 106, 339-348, 2019.
Teel95ijcA. R. Teel, R. M. Murray, and G. C. Walsh. Non-holonomic
control systems: from steering to stabilization with sinusoids.
International Journal of Control, 62(4), 849-870, 1995.
Tian00ieeeY. P. Tian and S. Li. Smooth exponential stabilization of
nonholonomic systems via time-varying feedback. In Proceedings of the
39th IEEE Conference on Decision and Control (Cat. No. 00CH37187). 2000.
Tian02autoY. P. Tian and S. Li. Exponential stabilization of
nonholonomic dynamic systems by smooth time-varying control.
Automatica, 38(7), 1139-1146, 2002.
Ye22tacH. Ye and Y. Song. Prescribed-time tracking control of MIMO nonlinear systems under non-vanishing uncertainties. IEEE Transactions on Automatic Control, 68(6), 3664-3671, 2022.
Ye23sclH. Ye and Y. Song. Prescribed-time control for time-varying nonlinear systems: A temporal scaling based robust adaptive approach. Systems & Control Letters, 181, 105602, 2023.
Wang18autoH. Wang and Q. Zhu. Adaptive output feedback control of
stochastic nonholonomic systems with nonlinear parameterization.
Automatica, 98, 247-255, 2018.
Zhang22sclK. K. Zhang, B. Zhou, M. Hou, and G. R. Duan.
Prescribed-time control of high-order nonholonomic systems in chained form by
time-varying feedback. Systems & Control Letters, 166, 105307, 2022.
zhang2024autoK. K. Zhang, B. Zhou, and G. R. Duan. Global
prescribed-time output feedback control of a class of uncertain nonlinear
systems by linear time-varying feedback. Automatica, Accept, 2024.
Zhang20tacP. Zhang, T. Liu, and Z. P. Jiang. Systematic design of robust event-triggered state and output feedback controllers for uncertain nonholonomic systems. IEEE Transactions on Automatic Control, 66(1), 213-228, 2020.
zhou14bookB. Zhou. Truncated Predictor Feedback for
Time-Delay Systems. Springer Berlin Heidelberg, 2014.
Zhou20autoaB. Zhou. Finite-time stabilization of linear systems by
bounded linear time-varying feedback. Automatica, 113, 108760, 2020.
zhou21tacB. Zhou and Y. Shi. Prescribed-time stabilization of a
class of nonlinear systems by linear time-varying feedback. IEEE
Transactions on Automatic Control, 66(12), 6123-6130, 2021.
zhou24gncB. Zhou and K. K. Zhang. Smooth time-varying feedback control of nonholonomic systems with applications to the attitude control of underactuated spacecraft. Guidance, Navigation and Control, 04(01), 2450004, 2024.
zhou23autoB. Zhou, K. K. Zhang, and H. Jiang. Prescribed-time
control of perturbed nonholonomic systems by time-varying feedback.
Automatica, 155, 111125, 2023.
|
http://arxiv.org/abs/2409.02601v1 | 20240904103337 | ChatGPT vs Social Surveys: Probing the Objective and Subjective Human Society | [
"Muzhi Zhou",
"Lu Yu",
"Xiaomin Geng",
"Lan Luo"
] | cs.CY | [
"cs.CY"
] |
A Software Visualization Approach for
Multiple Visual Output Devices
Malte Hansen
Department of Computer Science
Kiel University
Kiel, Germany
[email protected]
Heiko Bielfeldt
Department of Computer Science
Kiel University
Kiel, Germany
[email protected]
Armin Bernstetter
GEOMAR Helmholtz Centre for Ocean Research Kiel
Kiel, Germany
[email protected]
Tom Kwasnitschka
GEOMAR Helmholtz Centre for Ocean Research Kiel
Kiel, Germany
[email protected]
Wilhelm Hasselbring
Department of Computer Science
Kiel University
Kiel, Germany
[email protected]
September 9, 2024
===============================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
§ ABSTRACT
The extent to which Large Language Models (LLMs) can simulate the data-generating process for social surveys remains unclear. Current research has not thoroughly assessed potential biases in the sociodemographic population represented within the language model's framework. Additionally, the subjective worlds of LLMs often show inconsistencies in how closely their responses match those of groups of human respondents. In this paper, we used ChatGPT-3.5 to simulate the sampling process and generated six socioeconomic characteristics from the 2020 US population. We also analyzed responses to questions about income inequality and gender roles to explore GPT's subjective attitudes. By using repeated random sampling, we created a sampling distribution to identify the parameters of the GPT-generated population and compared these with Census data. Our findings show some alignment in gender and age means with the actual 2020 US population, but we also found mismatches in the distributions of racial and educational groups. Furthermore, there were significant differences between the distribution of GPT's responses and human self-reported attitudes. While the overall point estimates of GPT's income attitudinal responses seem to align with the mean of the population occasionally, their response distributions follow a normal distribution that diverges from human responses. In terms of gender relations, GPT's answers tend to cluster around a single category, demonstrating a deterministic pattern. We conclude by emphasizing the distinct design philosophies of LLMs and social surveys: LLMs aim to predict the most suitable answers, while social surveys seek to reveal the heterogeneity among social groups.
§ INTRODUCTION
sections/1_introduction.tex
§ RELATED WORK
sections/2_relatedwk.tex
§ RESEARCH DESIGN
sections/3_methodology.tex
§ STUDY 1: CHATGPT AND PROBABILITY SAMPLING: THE OBJECTIVE WORLD
sections/4_study1.tex
§ STUDY 2: CHATGPT AND ATTITUDES TOWARDS INCOME AND GENDER: THE SUBJECTIVE WORLD
sections/5_0study2_Newprompt.tex
sections/5_1study2_income.tex
sections/5_2study2_gender.tex
§ DISCUSSION AND CONCLUSION
sections/6_discussion.tex
§ ETHICAL CONSIDERATIONS
sections/7_ethics.tex
§ APPENDIX
sections/appendix1.tex
sections/appendix2.tex
|
http://arxiv.org/abs/2409.03482v1 | 20240905124557 | Generating arbitrary superpositions of nonclassical quantum harmonic oscillator states | [
"S. Saner",
"O. Băzăvan",
"D. J. Webb",
"G. Araneda",
"D. M. Lucas",
"C. J. Ballance",
"R. Srinivas"
] | quant-ph | [
"quant-ph",
"physics.atom-ph"
] |
§ ABSTRACT
Full coherent control and generation of superpositions of the quantum harmonic oscillator are not only of fundamental interest but are crucial for applications in quantum simulations, quantum-enhanced metrology and continuous-variable quantum computation. The extension of such superpositions to nonclassical states increases their power as a resource for such applications. Here, we create arbitrary superpositions of nonclassical and non-Gaussian states of a quantum harmonic oscillator using the motion of a trapped ion coupled to its internal spin states. We interleave spin-dependent nonlinear bosonic interactions and mid-circuit measurements of the spin that preserve the coherence of the oscillator. These techniques enable the creation of superpositions between squeezed, trisqueezed, and quadsqueezed states, which have never been demonstrated before, with independent control over the complex-valued squeezing parameter and the probability amplitude of each constituent, as well as their spatial separation. We directly observe the nonclassical nature of these states in the form of Wigner negativity following a full state reconstruction. Our methods apply to any system where a quantum harmonic oscillator is coupled to a spin.
Generating arbitrary superpositions of nonclassical quantum harmonic oscillator states
S. Saner, O. Băzăvan, D. J. Webb, G. Araneda, D. M. Lucas, C. J. Ballance, R. Srinivas
Department of Physics, University of Oxford, Clarendon Laboratory,
Parks Road, Oxford OX1 3PU, United Kingdom
Email: [email protected]
September 9, 2024
=====================================================================================================================================================================================================================================================
One of the most fundamental and remarkable aspects of quantum mechanics is superposition, where quantum objects can exist in multiple states simultaneously. This phenomenon was famously described by Schrödinger using the example of a cat that is both dead and alive <cit.>. Superpositions are not just a curiosity of nature but are central to any application of quantum mechanics. In sensing <cit.>, it is the superposition of two states that respond differently to a given signal, such as a shift in frequency, that enables their use in atomic clocks <cit.>.
In quantum computing, the ability to be in multiple states at once increases the amount of information that can be stored and processed simultaneously and is the basis of quantum advantage <cit.>.
Superpositions of two-level systems can be controlled to almost arbitrary precision in many systems <cit.>. However, they ultimately only have two degrees of freedom: the relative amplitude and phase of their complex probabilities, which limits the amount of quantum information and how robustly it can be stored. Creating superpositions of a quantum harmonic oscillator, which is, in contrast, infinite-dimensional, has already led to a plethora of resource-efficient error correction protocols <cit.>, metrological applications <cit.>, and quantum simulations <cit.>. Nonetheless, constituents of those superpositions have been limited to two <cit.> or three <cit.> Fock states, classical states such as coherent states leading to cat states <cit.>, or Gaussian states such as displaced squeezed states leading to Gottesman-Kitaev-Preskill (GKP) states <cit.>.
Instead, superpositions between arbitrary squeezed states <cit.>, or between non-Gaussian states such as trisqueezed states, exhibit increased Wigner negativities <cit.> and discrete rotational symmetries which can be a resource for new sensing <cit.> and error correction schemes <cit.>.
However, the lack of any strong, native interactions that create such superpositions has precluded any experimental realisation.
Here, we use a hybrid oscillator-spin system to create such superposition states <cit.> of the harmonic oscillator; we use our recently demonstrated strong spin-dependent nonlinear interactions of the oscillator <cit.> together with mid-circuit measurements of the spin. We create an arbitrary combination of generalised squeezed states <cit.> such as squeezed, trisqueezed, and quadsqueezed harmonic oscillator states.
These techniques not only enable arbitrary control of the complex probability amplitudes of the constituents forming the superposition, but also over the interactions that generate them. We can create constituents from interactions of the same type with different strengths and phases, or from entirely different nonlinear interactions.
The hybrid system is central to our demonstration.
Applying the spin-dependent nonlinear interaction to a superposition of spin states leads to
an entangled state of the spin and the nonclassical motional states.
A mid-circuit measurement of the spin subsequently disentangles the spin from the motion, leaving the harmonic oscillator in a superposition state.
This method enables the repeated application of the nonlinear interaction within a single sequence, as the interaction is unitary and can act on any starting state without destroying it.
We create the spin-dependent nonlinear interaction
Ĥ_k =ħΩ_k/2σ̂_z (â^k e^-iϕ + (â^†)^k e^iϕ),
by combining two noncommuting spin-dependent forces (SDF) <cit.> (see Supplement <cit.>).
The spin Pauli operator is defined as σ̂_z = |1_s⟩⟨1_s| - |0_s⟩⟨0_s|, while â^† (â) describes the creation (annihilation) operator of a quantum harmonic oscillator. The order of the generalised squeezing interaction <cit.> is k; setting k=2, 3, 4 generates the squeezing, trisqueezing, and quadsqueezing interactions respectively. Squeezing interactions have been used extensively in metrology <cit.>, while the higher-order trisqueezing <cit.> and quadsqueezing <cit.> create non-Gaussian states of the harmonic oscillator <cit.>.
The coupling strength of the interaction is Ω_k, and ϕ is the phase relative to the oscillator, which also defines the generalised squeezing axis.
When applying this interaction to the initial state of the hybrid system |0_osc,0_s⟩, where |0_osc⟩ represents the harmonic oscillator in its ground state and |0_s⟩ denotes the spin configuration prepared in an eigenstate of σ̂_z, we obtain a generalised squeezed state |ζ_k, 0_s⟩:=exp(-iĤ_k t/ħ)|0_osc, 0_s⟩ with squeezing magnitude ζ_k = Ω_k e^iϕ t.
A superposition between two generalised squeezed states |ζ_k⟩ and |-ζ_k⟩, with orthogonal squeezing axes, can be created by applying the sequence shown in Fig. <ref>A. Initially, a R_y(π/2) rotation brings the spin part of the ion into a superposition state |+_s⟩= (|0_s⟩+|1_s⟩)/√(2) before applying the σ̂_z conditioned nonlinear coupling. After the nonlinear coupling and another R_y(π/2) rotation have been applied, the resulting entangled state of the hybrid system takes the form
|ψ⟩ = N_k,+|ψ_k,+⟩|1_s⟩ + N_k,-|ψ_k,-⟩|0_s⟩,
where N_k,± denote the probability amplitudes of the even (|ψ_k,+⟩) and odd (|ψ_k,-⟩) superpositions of the oscillator, which arise directly from the normalisation coefficient of |ψ_k,±⟩ = 1/(2N_k,±)· (|ζ_k⟩±|-ζ_k⟩). The normalisation coefficients depend on the overlap between |ζ_k⟩ and |-ζ_k⟩ and, consequently, on the magnitude and order of the applied nonlinear interaction.
To project into the desired oscillator superposition state, we perform a mid-circuit measurement detecting |0_s⟩ or |1_s⟩, which occurs with a probability of |N_k,±|^2. This measurement collapses the state into the corresponding superposition of the oscillator, disentangled from the spin state. After the collapse, the created superposition can be employed in further experiments, or characterised through full tomography.
Experimentally, we can select the outcomes with the desired superposition by postselecting based on the mid-circuit measurement result.
We experimentally demonstrate the creation of these superpositions on a trapped ^88Sr^+ ion in a 3D Paul trap <cit.>. The harmonic oscillator is formed by the axial motion of the confined ion with frequency 1.2. In practice, we cannot initialise a pure vacuum state of the oscillator |0_osc⟩ but rather a thermal state near its ground state with average occupation n̅=0.1. The coherence time of this oscillator is limited by its heating rate ṅ̅̇=300 quanta/s.
The internal structure forms the spin system, with the states |0_s⟩≡|5S_1/2, m_j = -1/2⟩, and |1_s⟩≡|4D_5/2, m_j = -3/2⟩. The electronic state |2_s⟩≡|5S_1/2, m_j = 1/2⟩ is used as an auxiliary state.
The |0_s⟩↔|1_s⟩ quadrupole transition is driven by a 674 nm laser; the two noncommuting spin-dependent forces required to generate the nonlinear interaction are created by two 674 nm beams. The |2_s⟩ state is isolated from the interaction, and an RF antenna drives the |0_s⟩↔|2_s⟩ transition.
The projective measurement is achieved by driving the 5S_1/2↔ 5P_1/2 transition
which distinguishes bright states (|0_s⟩ or |2_s⟩), which scatter photons, from the dark state (|1_s⟩), which does not. This measurement is only non-destructive to the motional state in the case where |1_s⟩ is detected, and hence no photons are actually scattered. Thus, we design our pulse sequences such that the desired oscillator state is heralded upon |1_s⟩, i.e., the desired harmonic superposition is correlated with |1_s⟩ just before the measurement. For example, for the state described in Eq. (<ref>), we would select the even superposition |ψ_k,+⟩.
The projective measurement is used mid-circuit for selecting the desired superposition as well as for the final detection step. In both cases, we collect photons for 200, yielding a combined dark state preparation and measurement fidelity of 0.993 <cit.>.
We create both even and odd superpositions between the state |ζ_k⟩ and |-ζ_k⟩ as illustrated in Eq. (<ref>) and Fig. <ref>. By measuring the qubit immediately after this creation step, we obtain the probability of collapsing in |0_s⟩ and |1_s⟩ respectively and, hence, the probability of generating the even and odd superposition. For example, for k=2 in Eq. (<ref>), (<ref>) we obtain the superposition between two oscillator states squeezed about orthogonal axes. The corresponding probabilities for the even (|ψ_2,+⟩) and odd (|ψ_2,-⟩) states are given by |N_2,±|^2 = (2± 2/√(cosh(2|ζ_2|)))/4, which depend on the magnitude of the squeezing parameter |ζ_2| (see Supplement <cit.>).
In the limit of small squeezing parameter |ζ_2|, the generation of the odd superposition vanishes, which indicates that the overlap between |ζ_k⟩ and |-ζ_k⟩ tends to 1 as the oscillator state is unchanged from the initial state.
In the limit |ζ_2| →∞, the odd and even state have equal generation probability of 1/2 which indicates that |ζ_2⟩ and |-ζ_2⟩ are orthogonal and their overlap is 0. We confirm this behaviour experimentally; the data in Fig. <ref>A agree well with the theoretical prediction from an independent estimate of |ζ_2|.
Extending this protocol to higher-order nonlinear interactions, we reconstruct the Wigner quasi-probability distribution of even and odd superpositions for squeezing (k=2), trisqueezing (k=3), quadsqueezing (k=4) (Fig. <ref> B-G). To do so, we measure the characteristic function χ(β) = ⟨D̂(β)|_⟩osc of the oscillator state and infer the Wigner distribution via a Fourier transform <cit.>. The characteristic function χ(β) is measured via the spin using a spin-dependent displacement D̂(σ̂_yβ/2) as shown in the tomography step in Fig. <ref>A. By varying the complex-valued displacement parameter β, we can sample different values of χ(β) (see Supplement <cit.>).
We find good agreement between the experimental results and numerical simulations using independently measured parameters (Fig. <ref> H-M). In the Supplement <cit.>, we show the experimental data and numeric predictions of |N_k,±|^2 as a function of |ζ_k| for k ∈{3,4}.
The superpositions created, especially the odd ones, have a large amount of Wigner negativity. In continuous variable quantum computation, large Wigner negativities are a requirement for quantum states that yield an advantage over classical computation <cit.> and hence highly sought after.
We quantify the resourcefulness of these states by evaluating their Wigner logarithmic negativity (WLN) <cit.>, neglecting decoherence effects, and benchmark them against Fock and cat states.
We find that for a fixed average Fock state occupation, the superpositions created in this work exhibit larger WLN than Fock or cat states <cit.> (see Supplement <cit.>).
Another important feature is that the non-vanishing Fock state occupations of the superpositions are spaced by 2k, where k is the order of the interaction (see Supplement <cit.>). As k increases, so does the spacing, and the states have increased rotational symmetries as seen in Fig. <ref>. These rotational symmetries can be used to encode more robust logical qubits <cit.>. For example, a single phonon loss in a (two-component) cat-qubit encoding results in an irreparable bit-flip error <cit.>. However, when the same loss channel acts on a qubit encoding formed by a superposition of squeezed states, it projects outside the code space, making the error correctable.
Thus far, the superpositions have only involved constituents generated from the same nonlinear interactions. To create arbitrary superpositions with nonclassical constituents in the most general form
|ψ⟩∝ a |ζ_k⟩ + b |ζ_k'⟩,
we require individual control of a,b,ζ_k, ζ_k'∈ℂ. To create this state, we extend the Hilbert space of the spin to a qutrit (three-level system), {|0_s⟩,|1_s⟩,|2_s⟩} (see Fig. <ref>A, B).
We create the constituent states via the same interaction described in (<ref>), which only couples to |0_s⟩ when applied to the subspace {|0_s⟩, |2_s⟩}. Thus, we utilize this feature to temporarily hide one of the constituents |ζ_k,2_s⟩ while creating the other using a separate interaction <cit.>.
The pulse sequence for creating the arbitrary superpositions is described in Fig. <ref>A; for simplicity, we first focus on the case a=b.
After the first nonlinear interaction is applied, the hybrid system is in a state |0_osc,2_s⟩ + |ζ_k, 0_s⟩. A subsequent R_y(π)_02 rotation, where the indices indicate the two spin sublevels |0_s⟩↔|2_s⟩ coupled, then swaps the spin states, hiding the state |ζ_k, 2_s⟩ from the effect of the second nonlinear interaction. The second nonlinear interaction then creates the state |ζ_k,2_s⟩ + |ζ_k'', 0_s⟩. One final R_y(π/2)_02 pulse followed by the shelving R_y(π)_01 pulse creates a state
described by Eq. (<ref>), with |ψ_k,±⟩∝|ζ_k⟩±|ζ_k''⟩ where ζ_k'' is no longer -ζ_k.
We then characterise the state using the same techniques as before. As mentioned previously, the unitarity of the nonlinear interaction is essential to prevent the destruction of the intermediate state by subsequent applications of the nonlinear interaction.
Splitting the sequence into two parts allows us to change the second nonlinear interaction relative to the first one <cit.>.
In Fig. <ref>C, we show that we can create a superposition of squeezed states with variable orientation of the respective squeezing axis, i.e. ζ_2'=e^i 2ϕζ_2, by adjusting the phase ϕ of the nonlinear interaction in the second part of the sequence relative to the first part. Hence, the created state takes the form |ζ_2⟩+|e^i 2ϕζ_2⟩. We measure the value of the characteristic function at a fixed magnitude |β| but vary its complex phase (β). At ϕ=π/2 we create a superposition of states
squeezed about orthogonal axes (equal superposition of squeezed states shown in Fig. <ref>B, H), which gives rise to four equally spaced maxima in χ when scanning (β). By scanning the degree of freedom ϕ, the two orthogonal squeezing axes gradually merge into a single one at ϕ={0, π} (regular squeezed state), giving rise to two equally spaced maxima. This behaviour results in the sawtooth pattern displayed in Fig. <ref>C.
In Fig. <ref>D, we show that we can control the magnitude of squeezing of each constituent independently <cit.>.
We keep the squeezing parameter ζ_2 of the first constituent fixed while adjusting the magnitude of the second constituent by a factor c ∈ℝ which results in a state |ζ_2⟩+|c·ζ_2⟩ i.e. ζ_2'=cζ_2. For c=0, this corresponds to a superposition of a squeezed state with the initial ground state, whereas for c=-1, this corresponds to the equal superposition shown in Fig. <ref>B, H.
We measure the variance of the state along the squeezing axis of the first constituent and an axis orthogonal to it (i.e., two orthogonal quadratures). We compute the variances using the reconstructed Wigner functions for different settings of c. As shown in Fig. <ref>D, the variance closely follows the simulated prediction.
The variance of each constituent does not vary independently. In fact, altering the magnitude of the second constituent affects the variance of the resulting state in both quadratures: the quadrature dominated by the second constituent (blue) and the quadrature dominated by the first constituent (green), even though the first constituent's squeezing magnitude remains fixed. This behaviour is a direct consequence of the interference between both constituents.
To demonstrate that we can change the order k of the nonlinear interaction for each constituent, we combine a squeezed state (k=2) as the first constituent with a trisqueezed state (k=3) as the second constituent. Fig. <ref>E shows the resulting Wigner function of this superposition state.
While we do not include the data in the main text, we discuss in the Supplement <cit.> that the sequence can be amended to vary the relative amplitude and phase of the probability coefficients a and b. By replacing the first R_y(π/2) rotation by R_y(θ) to vary the probability amplitude ratio between the two constituents cos(θ/2)|ζ_k⟩ + sin(θ/2)|-ζ_k⟩. Further, it is also possible to replace the R_y(π) rotation with a rotation about an axis σ̂_γ = cos(γ)σ̂_x + sin(γ)σ̂_y to introduce a complex phase factor between the probability amplitude of the two constituents |ζ_k⟩ + |-ζ_k⟩ e^i2γ. Thus, we demonstrate complete control over a, b, ζ_k and ζ_k'' in Eq. (<ref>).
Finally, we show that the sequence depicted in Fig. <ref>A can be extended further to construct spatially separated superposition states. Aside from a foundational quantum optics interest, such spatially-separated superpositions are essential in various applications, including continuous variable quantum error correction codes (e.g., cat <cit.> or GKP <cit.> states), and metrology <cit.>. We create these states using an additional spin-dependent displacement followed by a mid-circuit measurement as shown in Fig. <ref>. The second mid-circuit detection disentangles the spin from the harmonic oscillator, projecting the latter into a cat-like state, where the two spatially displaced states are themselves nonclassical and non-Gaussian. We use this protocol to create a spatially separated superposition of squeezed states
|ψ⟩∝ D(α)|ψ_2,+⟩+D(-α)|ψ_2,+⟩,
where |ψ_2,+⟩∝|ζ_2⟩+ |-ζ_2⟩.
Thus, we show that we can create an even broader class of oscillator superpositions by concatenating additional spin-dependent interactions and mid-circuit measurements.
Our work demonstrates the versatility of hybrid oscillator-spin systems in generating superpositions of oscillator states. By integrating nonlinear spin-dependent interactions with mid-circuit measurements, we have established arbitrary control of the phase, amplitude, and interaction type for the constituents of the superpositions. Using these techniques, we have experimentally demonstrated many nonclassical states that have until now only been explored theoretically <cit.>. Expanding complex harmonic oscillator states from cat <cit.> and GKP <cit.> states to superpositions of squeezed or superpositions of non-Gaussian states is not only of foundational interest but opens new avenues for error correction, sensing protocols, and continuous variable or hybrid oscillator-spin quantum computing <cit.>. For example, these superposition states have increased rotational symmetries, which enable robust error correction <cit.>. Further, their large Wigner negativities indicate that they generally cannot be efficiently simulated classically <cit.>. Their utility applies not only to quantum computation but also to quantum-enhanced metrology. For example, the squeezed superposition state is suitable as a displacement sensor <cit.>, which, in the case of trapped ions, can be used for sensing small electric fields <cit.>.
The demonstrated techniques can be extended to multiple oscillators by coupling to additional motional modes of the ion, for example, to create two-mode squeezed superpositions <cit.> or in general to increase the size of the hybrid quantum system. Aside from trapped ions, the tools developed here can be applied to any physical system with a quantum harmonic oscillator coupled to a spin such as superconducting circuits <cit.>, nanoparticles <cit.>, or atoms coupled to a cavity <cit.> or in optical tweezers <cit.>. For systems with more massive oscillators, these superpositions could be used to test not only the boundaries of the classical and quantum world <cit.>, but how quantum physics interacts with gravity <cit.>.
§ ACKNOWLEDGEMENTS
We would like to thank Alejandro Bermudez, Alexander Lvovsky, Andrew Daley, Joshua Combes and his research group, Mattia Walschaers, and Scott Parkins for very insightful discussions and comments on the manuscript.
This work was supported by the US Army Research Office (W911NF-20-1-0038) and the UK EPSRC Hub in Quantum Computing and Simulation (EP/T001062/1). GA acknowledges support from Wolfson College, Oxford. CJB acknowledges support from a UKRI FL Fellowship. RS acknowledges funding from the EPSRC Fellowship EP/W028026/1 and Balliol College, Oxford.
§ AUTHOR CONTRIBUTIONS
SS and OB led the experiments and analysed the results
with assistance from DJW, GA, and RS; SS, OB, DJW, and GA updated and maintained the experimental apparatus; SS, OB, and RS conceived the experiments and wrote the manuscript with input from all authors; RS supervised the work with support from DML and CJB; DML and CJB secured funding for the work.
§ COMPETING INTERESTS
RS is partially employed by Oxford Ionics Ltd. CJB is a director of Oxford Ionics Ltd. All other authors declare no competing interests.
§ DATA AVAILABILITY
Source data for all plots and analysis code that support the plots are available from the corresponding authors upon reasonable request.
apsrev4-2
Supplemental Material for:
Generating arbitrary superpositions of nonclassical quantum harmonic oscillator states
§ THEORY
§.§ Generating the nonlinear spin-dependent interaction
While the method employed to generate the nonlinear interaction is described in detail in Refs. . We summarise the relevant details below.
We combine two spin-dependent forces (SDF) with non-commuting spin-conditionings [σ̂_α, σ̂_α'] ≠ 0 and ϕ the oscillator phase of the second SDF which results in a Hamiltonian
Ĥ = ħΩ_α/2σ̂_α (â e^-iΔ t+ â^† e^iΔ t)
+ħΩ_α'/2σ̂_α' (â e^-i (mΔ t+ϕ) + â^† e^i(mΔ t+ ϕ)).
The two forces are detuned from resonance with the oscillator by Δ and mΔ, respectively. By choosing k = 1-m, we can resonantly drive effective generalised squeezing interactions
Ĥ_k =ħΩ_k/2σ̂_β (â^k e^-iϕ + (â^†)^k e^iϕ),
which for σ̂_β = σ̂_z corresponds to Eq. (<ref>) in the main text, where
σ̂_β ∝
[σ̂_α, σ̂_α'] if k mod 2=0
σ̂_α' otherwise.
While we can choose our spin bases such that σ̂_β = σ̂_z, and do so for the majority of the data,
it was convenient for Fig. <ref>DE to choose σ̂_β = σ̂_x which maximises the strength and avoids two single qubit rotations. We have indicated the native spin-conditioning for every sequence in Sec. <ref>.
The coupling strength Ω_k
Ω_2,3,4 = {Ω_α'Ω_α/Δ, Ω_α'Ω_α^2/2 Δ^2,Ω_α'Ω_α^3/8 Δ^3}·sin(θ_α,α'),
which is proportional to sin(θ_α, α') where θ_α,α' denotes the angle between the spin bases α and α' and is maximised if the two spin bases are orthogonal. We utilise this angle to adjust the squeezing magnitude for the constituent |c·ζ_2⟩ in the superposition of Fig. <ref>D, while keeping the optical power and the duration of the interaction fixed.
For the quantum circuits presented in the main text, we adjust Δ, Ω_α, Ω_α', and σ̂_β. The settings are tabulated in Sec. <ref>.
§.§ Normalisation coefficient
We require the squeezed state superposition to be normalised i.e. ⟨ψ_± |ψ_±|=⟩ 1, and find a normalisation coefficient
|N_2,±|^2 = 1/4(⟨ζ_2|±⟨ζ_2|)(|ζ_2⟩±|ζ_2⟩)
=1/4⟨ζ_2|ζ_2|+⟩⟨-ζ_2|-ζ_2|±⟩2 Re(⟨-ζ_2|ζ_2|)⟩
= (2 ±2/√(cosh(2|ζ_2|)))/4,
where we assume that each squeezed state is normalised. The inner product ⟨-ζ_2|ζ_2|$⟩ can be determined from the Fock state expression of a vacuum squeezed state <cit.>:
|ζ_2⟩ = 1/√(cosh(|ζ_2|))∑_n=0^∞ (- e^i (ζ_2)tanh(|ζ_2|))^n √((2n)!)/2^n n!|2n⟩,
⟨-ζ_2|ζ_2|=⟩1/cosh(|ζ_2|)∑_n=0^∞ e^i π ntanh(|ζ_2|)^2n(2n)!/2^2n (n!)^2
⟨-ζ_2|ζ_2|⟩= 1/cosh(|ζ_2|)1/√(tanh(|ζ_2|)^2 +1 ) = 1/√(cosh(2|ζ_2|)).
For the superpositions involving non-Gaussian (i.e. trisqueezed or quadsqueezed) constituents, it is unclear if closed-form solutions exist <cit.>.
§.§ Native gate sequences
In this section, we give a more detailed description of the pulse sequence used to generate the superposition states. For simplicity, we omitted the spin-echoR(π)pulses described below from the main text and Fig. <ref>.
All nonlinear interactionsÛ_NLexcept for trisqueezing in Fig. <ref> are conditioned onσ̂_z= |1_s⟩⟨1_s| - |0_s⟩⟨0_s|.
For squeezing and quadsqueezing, we setσ̂_α = σ_xandσ_α'= σ̂_ywhich results inσ̂_β= σ̂_z.
For the trisqueezing superposition shown in Fig. <ref>, we keep this setting which results in the spin basis of the nonlinear interactionσ̂_β= σ̂_x.
For trisqueezing shown in Fig. <ref>, we change the trisqueezing spin basis compared to Fig. <ref>. We set the spin conditioning of the SDF toσ̂_α'=σ̂_zsuch that the resulting nonlinear interaction is conditioned onσ̂_β=σ̂_z.
|0_osc,0_s⟩ R_y(π/2)→|0_osc, 0_s⟩ +|0_osc, 1_s⟩/√(2)Û_ NL→|ζ/2, 0_s⟩ + |-ζ/2, 1_s⟩/√(2)
R_y(π)→-|ζ/2, 1_s⟩ + |-ζ/2, 0_s⟩/√(2)Û_ NL|_π→-|ζ, 1_s⟩ + |-ζ, 0_s⟩/√(2)
R_y(π/2)→ - N_+ (|ζ⟩ +|-ζ⟩) |0_s⟩ + N_- (|ζ⟩ -|-ζ⟩) |1_s⟩.
We can adjust the phases of the qubit rotations to adjust which harmonic oscillator superposition is heralded by|1_s⟩. For example, we can change the axis of theR_y(π)pulse toR_x(π). The sequence continued from Eq. (<ref>) then looks like
R_x(π)→-i|ζ/2, 1_s⟩ -i|-ζ/2, 0_s⟩/√(2)Û_ NL|_π→-i|ζ, 1_s⟩ - i|-ζ, 0_s⟩/√(2)
R_y(π/2)→ i N_- (|ζ⟩ -|-ζ⟩) |0_s⟩ - i N_+ (|ζ⟩ +|-ζ⟩) |1_s⟩.
For the trisqueezing superposition shown in Fig. <ref>, we omit theR(π/2)andR(π)pulses.
To herald the even (odd) superposition upon detecting|1_s⟩, we initialise the starting state in|1_s⟩(|0_s⟩) before applying the interaction.
For the sequence in Fig. <ref>, we require all of the applied nonlinear interaction to be conditioned onσ̂_zsuch that we can treat its impact on the subsystem{|0_s⟩, |2_s⟩}as a closed system.
§.§ Adjusting the probability amplitude coefficient of the superpositions
We can arbitrarily set the probability amplitudes of each constituent instead of only balanced superpositions.
To adjust the relative amplitudes, we can replace the firstR_y(π/2)rotation byR_y(θ)rotation such that the initial spin superposition is the form
|ψ⟩ = cos(θ/2)|0_osc, 0_s⟩ + sin(θ/2)|0_osc, 1_s⟩.
The rest of the sequence remains unchanged. This results in an amplitude ratiocos(θ/2)/sin(θ/2)between the constituents of the superposition. For example, forθ=π/2, the amplitudes are equal and the sequence is the same as in Eq. <ref>.
As a proof of principle, we produce a squeezed state superposition withθ=π/4i.e.a |ζ_2⟩ + b|-ζ_2⟩with an amplitude ratioa/b = √(2)+1(see Fig. <ref>).
To adjust the relative phase of the complex probability amplitudes, we vary the axisσ̂_γ= cos(γ)σ̂_x + sin(γ)σ̂_yof theR_γ(π)rotation. After the full sequence, we obtain
|ψ⟩ = -N_-^(γ)(|ζ⟩i e^iγ -|-ζ⟩ie^-iγ)|0_s⟩
+N_+^(γ)(|ζ⟩i e^iγ +|-ζ⟩ie^-iγ)|1_s⟩.
After rearranging, we obtain
|ψ⟩ = -ie^iγ N_-^(γ)(|ζ⟩ -|-ζ⟩e^-i2γ)|0_s⟩
+ ie^iγ N_+^(γ)(|ζ⟩ +|-ζ⟩e^-i2γ)|1_s⟩.
In Fig. <ref>, we show the herald probability|N_+^(γ)|^2and the resulting Wigner functions when changing the relative phase of the probability amplitudeγ.
§.§ Measuring the variance of squeezed superpositions
To estimate the variance in Fig. <ref>D based on the experimentally reconstructed Wigner functionW, we estimate the probability distributionP_xandP_pwhich can be found by integrating the Wigner function along the orthogonal quadraturespandxrespectively:
P_x(x) = ∫ W(x,p) dp,
P_p(p) = ∫ W(x,p) dx.
The variances are then
Var(x̂) = ⟨x̂^2|-⟩⟨x̂|^⟩2,
Var(p̂) = ⟨p̂^2|-⟩⟨p̂|^⟩2,
where the moments ofxandpare given by
⟨x̂^k| ⟩= ∫ x^k P_x(x) dx,
⟨p̂^k| ⟩= ∫ p^k P_p(p) dp.
For the experimental data, we rotate the matrix representing the discretised Wigner function such that the principle axes are aligned withxandp. For the simulated data, we extract a discretised Wigner function before performing the same analysis. We calculate the probability distributions and the moments by approximating the integration via the trapezoidal rule.
§ EXTENDED EXPERIMENTAL DATA AND PARAMETERS
§.§ Experimental parameters for the results in the main text
For Figs. <ref>, <ref>, and <ref> we used the experimental parameters tabulated in Tab. <ref>.
For Fig. <ref> we use the same parameter configuration as for Fig. <ref>B and adjust the number of shots such that we have more than300successful heralds per point.
For Fig. <ref> we use the same parameter configuration as for Fig. <ref>C and adjust the number of shots such that we have more than300successful heralds per point.
§.§ Mid-circuit detection error
The fidelity of the mid-circuit detection directly determines the degree of certainty to which we know the heralded state. We consider the imperfect detection in the case of heralding the odd superposition|ψ_k,-⟩. Just before detection, the wave function is
|ψ⟩ = N_k,-|ψ_k,-⟩|1_s⟩ + N_k,+|ψ_k,+⟩|0_s⟩.
With perfect detection, we could obtain the true description of|ψ_k,-⟩. With detection errors, if we herald on|1_s⟩(dark) the state reads
ρ_ detect ∝ P( dark| dark) |N_k, -|^2 |ψ_k,-⟩⟨ψ_k,-|
+ P( dark| bright) |N_k, +|^2 |ψ_k,+⟩⟨ψ_k,+|,
whereP(dark|dark)is the detection probability of dark given dark, which is the complementary event ofP(bright|dark)(false bright) andP(dark|bright)the probability of dark given bright (false dark).
Hence, the ratio of the mixturemof making the desired state, to the false dark state is given by
m = P( dark| dark) |N_k, -|^2/P( dark| bright)|N_k, +|^2 ≈|N_k, -|^2/P( dark| bright)|N_k, +|^2 .
In Fig. <ref>, we measure and fit the histograms of our detection sequence. From the fits (see Fig. <ref>), we estimate the false dark probability to beP(dark|bright) = 4.9e-5.
For the interactions considered the true herald ratio|N_k,-|^2/|N_k,+|^2 ≥0.1. Hence, given our readout errors, the resultingm > 10^3. If the true herald ratio is even lower, the experimental threshold can be adjusted at the expense of a small loss in overall detection events.
We can also measure a cumulative state-preparation and readout (SPAM) error by calculating the dark and bright probability based on the actual histogram data. We findP(dark) = 0.993. Hence, the probability of preparing and measuring dark is dominated by the state preparation process.
Another error contribution in our experiment was leakage of the 1033 deshelving laser.
This results in a decay of the excited|1_s⟩state that is faster than the lifetime of the transition. This leakage is observed in the baseline measurement of Fig. <ref>A (green crosses). At 880, we observe a contrast loss of∼5%. This could be resolved by improving the extinction of the 1033 when it is switched off.
§.§ Effect of mid-circuit detection on ion motion
We verify that performing a mid-circuit detection on a dark state (|1_s⟩) leaves the motional state unperturbed. We first prepare a regular squeezed state|ζ_2, 1_s⟩, followed by the mid-circuit readout before finally measuring the characteristic function along the principle axes of the squeezed state.
We investigate three different versions of the mid-circuit measurement: regular measurement (i.e. collect fluorescence for 200), adding 200 delay but not applying the fluorescence laser, and omitting the mid-circuit measurement altogether. The measurement of the characteristic function for the three different settings is shown in Fig. <ref>. We conclude that the mid-circuit measurement does not perturb the motional state beyond increasing the sequence length by the mid-circuit measurement duration.
§.§ Detection probabilities for equal superpositions shown in Fig. <ref>.
We repeat the detection probability measurements shown in Fig. <ref>A for the trisqueezed and quadsqueezed superpositions (see Fig. <ref>). A closed-form analytic solution does not exist for these superpositions <cit.>; we replace the theory line with a numerically simulated curve.
We compare the detection probabilities of these superpositions to the probabilities obtained when no interaction is applied (i.e. system left idle).
Especially for the quadsqueezed state, the change in probability from leaving the system idle becomes comparable to the detection probability. This explains why the quadsqueezed superposition substantially deviates from the numerical prediction.
§ CHARACTERISING THE SUPERPOSITIONS
§.§ Wigner negativity of different ideal superpositions
There are several measures to quantify the utility, or more precisely, the resourcefulness of the oscillator state superpositions. Here, we choose the Wigner logarithmic negativity (WLN) <cit.>
𝐖(ρ) = log(∫ dx dp|W_ρ(x,p)|),
whereWis the Wigner-function of stateρ.
WLN is a resource monotone, but it is still an active area of research under which circumstances it adequately quantifies the difficulty of simulating the state classically.
We calculate the expected WLN𝐖of the ideal version of the equal superposition states studied in this work (coherent, squeezed and trisqueezed superpositions) and pure Fock states using a discrete version of Eq.(<ref>). We measure the negativity as a function of the mean phonon numbern̅ = ⟨N̂|_⟩oscfor the specific oscillator state.
§.§ Comparing the realistic and ideal states
While we have good agreement between experiment and numerical simulation with realistic system parameters i.e. finite initial oscillator occupation ofn̅=0.1and non-negligible heating rate ofṅ̅̇= 300 quanta/s, there is a difference compared to an ideal system with perfect initial groundstate and no heating rate.
The macroscopic properties, such as the overall shape of the Wigner function, stay the same, but certain details or features become more or less accentuated. We qualitatively compare the realistic and ideal states in Fig. <ref>. The Wigner functions with finite temperature and heating effects are identical to the plots shown for Fig. <ref>HI.
We extract the WLN described in the previous section for the numerically simulated and experimentally reconstructed states.
The WLN for the experimental states is close to the value of the realistic simulation but reduced compared to the ideal states due to experimental imperfections. We present the extracted values in Table <ref>. However, we emphasise that these values should give a qualitative estimation but do not represent a quantitative characterisation.
We would need to improve the Wigner function reconstruction protocol to deal with spurious ripples in the Wigner function for largerx, pvalues, which arise due to shot noise and other experimental imperfections during the tomography. We have windowed the Wigner function around the origin to avoid biasing the results. The window size is set such that(∫_window W(x,p) dx dp)/(∫W(x,p) dx dp) =0.95. For reference, we also extract the minimum value of the Wigner functionmin(W), which is occasionally stated in the literature. However, we stress that this measure is bounded tomin(W)≥-1/πand only confirms that the state has Wigner negativity but does not measure resourcefulness. For example, the ideal Fock state|1⟩has a minimum value ofmin(W) = -1/π.
§.§ Reconstruction of the Wigner function
We obtain the Wigner function via Fourier transform of the characteristic functionχ(β) ∈ℂ, following Refs. .
As an example, we provide the characteristic function of the even (Fig. <ref>B) and odd squeezed superposition states (Fig. <ref>C).
In Fig. <ref>, we show the experimentally measured characteristic function.
In the experimentally measured characteristic function, and consequently, in the reconstructed Wigner function, we find a tilt relative to thex,paxes of∼0.44, with daily variations on the order of0.06. This tilt is the result of a constant motional phase offset between the SDF used for reconstruction and the axis of the squeezing interaction. To make the comparison to the numerically simulated data easier, we correct the shift by digitally rotating the reconstructed Wigner function by this amount.
§.§ Density matrices of equal superpositions in Fig. <ref>
To provide more intuition about the odd and even superposition states, we show the density matrices in the Fock basis (see Fig. <ref>). As expected only the Fock states|k·2n⟩for the even superpositions and|k·(2n + 1)⟩for the odd superpositions, wheren∈ℕ_0andkthe order of the nonlinear interaction, are populated. |
http://arxiv.org/abs/2409.03136v1 | 20240905001215 | A New Forward Discriminant Analysis Framework Based On Pillai's Trace and ULDA | [
"Siyu Wang"
] | stat.ME | [
"stat.ME",
"stat.CO",
"stat.ML"
] |
Article Title]A New Forward Discriminant Analysis Framework Based On Pillai's Trace and ULDA
[1]Siyu [email protected]
*[1]Department of Statistics, University of Wisconsin-Madison, 1300 University Ave, Madison, 53706, WI, USA
Linear discriminant analysis (LDA), a traditional classification tool, suffers from limitations such as sensitivity to noise and computational challenges when dealing with non-invertible within-class scatter matrices. Traditional stepwise LDA frameworks, which iteratively select the most informative features, often exacerbate these issues by relying heavily on Wilks' Λ, potentially causing premature stopping of the selection process. This paper introduces a novel forward discriminant analysis framework that integrates Pillai's trace with Uncorrelated Linear Discriminant Analysis (ULDA) to address these challenges, and offers a unified and stand-alone classifier. Through simulations and real-world datasets, the new framework demonstrates effective control of Type I error rates and improved classification accuracy, particularly in cases involving perfect group separations. The results highlight the potential of this approach as a robust alternative to the traditional stepwise LDA framework.
[
*
Received: 20 June 2024 / Accepted: 05 July 2024
===================================================
§ INTRODUCTION
LDA seeks to find linear combinations of features that can best separate groups by maximizing the ratio of between-group variance to within-group variance. However, LDA is sensitive to noise variables and prone to overfitting. To address these issues, stepwise LDA is introduced, which iteratively adds or removes variables based on predefined inclusion and exclusion criteria. Various versions of stepwise LDA have been developed, ranging from stand-alone programs like DIRCRIM <cit.> and ALLOC-1 <cit.>, to options within statistical packages such as BMDP <cit.>, SPSS^® <cit.>, and SAS^® <cit.>. While specific implementations may differ in variable selection criteria, most follow a common framework discussed in <cit.>. Nonetheless, the heavy reliance on Wilks' Λ presents several challenges, some of which can be mitigated by substituting it with Pillai's trace.
Traditional LDA relies on the inverse of the within-class scatter matrix, leading to computational issues when the matrix is non-invertible. In contrast, ULDA <cit.> uses a different loss function based on the trace to solve this problem. Since both ULDA and Pillai's trace use trace-based criteria, it is logical to integrate them to develop a more robust stepwise LDA framework.
This paper is organized as follows: Section <ref> discusses the limitations of Wilks' lambda in the traditional stepwise LDA framework and highlights the computational and statistical challenges that arise. Section <ref> introduces our proposed forward selection framework based on Pillai's trace and ULDA, focusing on the algorithmic advancements and theoretical properties developed to resolve the challenges discussed. In Section <ref>, we present empirical analyses, including simulations and real data analyses, to demonstrate the effectiveness of the proposed method in controlling Type I error rates and improving classification accuracy when Wilks' Λ fails. We conclude in Section <ref>.
§ PROBLEMS WITH WILKS' Λ
First, we briefly introduce the most widely used stepwise LDA framework. Suppose we have a data matrix 𝐗∈ℝ^N × M with N observations and M features. Our response 𝐲∈ℝ^N is a factor vector containing J classes. Let 𝐱_ji∈ℝ^M represent the ith observation from class j, 𝐱̅_j ∈ℝ^M be the mean vector for class j derived from its n_j instances, and 𝐱̅∈ℝ^M denote the overall mean vector across all samples. 𝐇_B ∈ℝ^J × M and 𝐇_W ∈ℝ^N × M are defined as:
𝐇_B = [√(n_1)(𝐱̅_1-𝐱̅), √(n_2)(𝐱̅_2-𝐱̅), …, √(n_J)(𝐱̅_J-𝐱̅)]^T,
𝐇_W = [(𝐱_11 - 𝐱̅_1), …, (𝐱_1n_1 - 𝐱̅_1), (𝐱_21 - 𝐱̅_2), …, (𝐱_2n_2 - 𝐱̅_1), …, (𝐱_Jn_J - 𝐱̅_J)]^T.
Then, the between-class scatter matrix 𝐒_B, within-class scatter matrix 𝐒_W, and total scatter matrix 𝐒_T can be defined as:
𝐒_B = ∑_j=1^J n_j(𝐱̅_j-𝐱̅)(𝐱̅_j-𝐱̅)^' = 𝐇_B^T𝐇_B
𝐒_W = ∑_j=1^J ∑_i=1^n_j(𝐱_ji-𝐱̅_j)(𝐱_ji-𝐱̅_j)^' = 𝐇_W^T𝐇_W
𝐒_T = ∑_j=1^J ∑_i=1^n_j (𝐱_ji-𝐱̅)(𝐱_ji-𝐱̅)^' = 𝐒_B + 𝐒_W.
Let 𝐒_T(1,2,…,p) and 𝐒_W(1,2,…,p) be the total and within-class scatter matrix with p variables {𝐱^(1), 𝐱^(2), …, 𝐱^(p)} added. Then the Wilks' Λ is defined as:
Λ(1,2,…,p) = (𝐒_W(1,2,…,p))/(𝐒_T(1,2,…,p)).
After adding 𝐱^(p+1), we use partial Wilks' Λ to evaluate its marginal effect:
Λ(p+1)=Λ(1,2, …, p, p+1)/Λ(1,2, …, p).
The null hypothesis H_0 states that the variables {𝐱^(1), 𝐱^(2), …, 𝐱^(p+1)} are from a multivariate normal distribution and are independent of the response 𝐲. Unless otherwise specified, this H_0 will be assumed as the null hypothesis throughout the remainder of this paper. Under H_0, the partial F-statistic follows an F-distribution:
F=N-J-p/J-11-Λ(p+1)/Λ(p+1)∼ F_J-1, N-J-p.
In the (p+1)-th step, partial F-statistics are calculated for the remaining M-p variables, and the variable with the largest F-statistic is selected. It will be included in the model if it meets specific inclusion criteria, such as F ≥ 4, or if the corresponding p-value is below α.
Following the addition of a variable, the deletion phase begins. With p+1 variables now in the model, p+1 new pairs of scatter matrices (𝐒_W_i, 𝐒_T_i) are computed, each excluding one variable 𝐱^(i). The partial F-statistics are then calculated for each pair, and the variable associated with the smallest F-statistic is considered for removal if the exclusion criterion is satisfied (e.g., F < 3.996 in BMDP). This stepwise process continues until all variables have been added, or no further variables can be added or removed.
Next, we introduce three major drawbacks of using Wilks' lambda in the current stepwise LDA framework.
§.§ Premature Stopping
When perfect linear dependency exists in the data matrix, we would expect 0/0 on the right-hand side of equation (<ref>), causing errors in some stepwise LDA programs, such as in R. Wilks' Λ is not well-defined under perfect linear dependency, and to allow the stepwise selection to continue, a quick fix is to manually set it to 1, indicating no discrimination power.
We know from equation (<ref>) that the partial Λ is the ratio of two Wilks' Λ. Most programs will stop the stepwise LDA process when Λ = 0, as all subsequent partial Λ calculations become 0/0 and are thus ill-defined. However, stopping at Λ = 0 isn't always appropriate, as it indicates that one group of classes is perfectly separable from another, but it doesn't necessarily imply perfect separation of all classes in non-binary classifications, as shown in Figure <ref>. After selecting X_2, Wilks' Λ = 0 since the within-class variance is zero on X_2, causing the stepwise selection to stop. It successfully separates class A from classes B and C but cannot distinguish class B from class C. Additionally, when multiple variables result in Λ = 0, only one is selected, leading to the potential waste of useful information contained in the remaining variables.
§.§ Partial F's Distribution Under Stepwise Selection
Here, we use an example to demonstrate that the distribution of the partial F-statistic does not follow an F-distribution under the stepwise selection framework, casting doubt on the validity of the associated hypothesis testing. Under H_0, the partial F-statistic follows an F-distribution <cit.>. However, the original proof assumes that variables are ordered randomly, rather than selected through a stepwise process. Intuitively, if we maximize the F-statistic at each step, the result will be biased, as noted in <cit.>.
The simulation setup is as follows: in each round, we simulate N = 150 observations from J = 3 classes, with each class having the same sampling probability of 1/3. We simulate X_1 from a standard normal distribution in the one-variable scenario and simulate X_1 and X_2 from independent standard normal distributions in the two-variable scenario. We simulate 10,000 rounds and record the partial F-statistic from each round at each step. We then compare the simulated F-statistic with the theoretical distribution, as summarized in Figure <ref>.
The upper plot corresponds to the one-variable scenario. Since we have only one variable, X_1, no selection occurs, and the simulated distribution matches the theoretical distribution closely. The lower plot shows the two-variable scenario. Here, the stepwise selection first chooses X^(1), which has a larger partial F-statistic (and a smaller partial Λ-statistic) compared to the other variable, resulting in an upward bias in the first partial F-statistic. The second partial Λ is the ratio of the overall Wilks' Λ (with two variables) to the first (partial) Wilks' Λ, so its partial Λ-statistic is biased upwards and its partial F-statistic is biased downwards. Note that the theoretical distributions for the first and second partial F are different (F_2,147 and F_2,146), but the difference is negligible, so we assign the same color to both distributions in the plot.
§.§ Inflated Type I Error Rate
In most programs, a fixed threshold of 4 is applied to the partial F-statistic. Another possible criterion is comparing the p-value of the partial F-statistic with the predefined α. Here, we simulate two scenarios to demonstrate that the type I error is inflated since both methods fail to account for the number of variables screened. For simplicity, forward selection is used instead of stepwise selection throughout this paper unless otherwise stated.
We use the iris dataset for our first simulation. It contains N = 150 flowers from J = 3 species (50 setosa, 50 versicolor, and 50 virginica), along with four features that characterize the flowers. We then add M mutually independent standard normal noise variables to it. Stepwise selections with both types of thresholds are performed, and we conclude that a type I error is made if at least one noise variable is selected. We let M = 1, 2, 4, 8, 16, 32, 64, 128, and for each M we repeat the simulation 2,000 times to obtain a confidence band. α is set to 0.05 throughout this paper unless otherwise specified.
In the second scenario, we simulate the null case where no variables are informative. We reuse the setup from the first scenario but remove all four original features from the iris dataset, leaving all remaining features independent of the species. The results from both scenarios are summarized in Figure <ref>. Both methods fail to control the type I error in either scenario. Due to the issue with multiple testing, the type I error rate increases as the number of noise variables grows. The fixed threshold of 4 performs slightly better than using the p-value, partly because a threshold of 4 corresponds to a p-value of approximately 0.02 in this setting, based on F_2,147(4) ≈ F_2,143(4) ≈ 0.02.
§ THE PROPOSED ALGORITHM
In this section, we first introduce ULDA and compare it to classical LDA. Next, we present our enhancements to ULDA and make it a stand-alone classifier. We then introduce the forward ULDA framework and derive the distributions of the test statistics used. Finally, we demonstrate that the type I error rate is well controlled within the new framework and summarize the algorithm.
§.§ ULDA: Overview and Enhancements
Uncorrelated LDA (ULDA) is an extension of LDA that addresses scenarios where the within-class scatter matrix, 𝐒_W, is not invertible. Fisher's criterion aims to find transformation vectors 𝐰∈ℝ^M that maximizes the ratio:
max _𝐰𝐰^T 𝐒_B 𝐰/𝐰^T 𝐒_W 𝐰
The optimal 𝐖 is derived by solving an eigenvalue decomposition on 𝐒_W^-1𝐒_B. The resulting eigen vectors 𝐖 = [𝐰_1, 𝐰_2, …] projects the original data 𝐗 into orthogonal linear discriminant scores 𝐗𝐰_i, ranked in descending order of their signal-to-noise ratios (eigenvalues). However, challenges arise when 𝐒_W is not invertible, such as when there are more variables than observations or when variables are linearly dependent. On the other hand, ULDA uses a different criterion:
𝐖=max _𝐖^T 𝐒_T 𝐖=Itrace((𝐖^T 𝐒_T 𝐖)^+(𝐖^T 𝐒_B 𝐖)),
where 𝐀^+ denotes the Moore-Penrose inverse of matrix 𝐀. This ensures that the ULDA solution always exists, and <cit.> shows that ULDA is equivalent to classical LDA when 𝐒_T is nonsingular. Several insights can be drawn from equation (<ref>):
* 𝐖^T 𝐒_T 𝐖 = I, meaning we deliberately discard the null space of 𝐒_T. This approach is reasonable because the null space of 𝐒_T is the intersection of the null spaces of 𝐒_W and 𝐒_B, and equation (<ref>) is not affected by vectors from the null space of 𝐒_B.
* If we change the constraint from 𝐖^T 𝐒_T 𝐖=I to 𝐖^T𝐖=I, it becomes Orthogonal LDA (OLDA) <cit.>. However, the constraint 𝐖^T 𝐒_T 𝐖=I is more beneficial when constructing an LDA classifier. To solve equation (<ref>), <cit.> presents an algorithm based on Generalized Singular Value Decomposition (GSVD), where 𝐒_B, 𝐒_W, and 𝐒_T are diagonalized simultaneously. Suppose the rank of 𝐒_T is M, then
𝐖^T 𝐒_B 𝐖 = diag(α_1^2, α_2^2, …, α_M^2)
𝐖^T 𝐒_W 𝐖 = diag(β_1^2, β_2^2, …, β_M^2)
𝐖^T 𝐒_T 𝐖 = diag(α_1^2 + β_1^2, α_2^2 + β_2^2, …, α_M^2 + β_M^2)
= diag(1, 1, …, 1)
Since the likelihood-based LDA classifier depends on the inverse of 𝐒_W, computational resources are saved if it has already been diagonalized.
* Some commonly used test statistics from MANOVA are related to equation (<ref>). Pillai's trace is defined as V = trace(𝐒_T^-1𝐒_B), which equals ∑_i=1^M α_i^2 after transformation. In other words, ULDA can be viewed as maximizing the generalized Pillai's trace under certain constraints. On the other hand, Wilks' Λ = ∏_i=1^M β_i^2. We discussed the issue with Λ = 0 in Section <ref>. Λ = 0 means that β_i = 0 and α_i = 1 for some i. In the stepwise selection framework, since Wilks' Λ is a product, once it becomes zero, it remains zero. On the other hand, Pillai's trace is a summation, and adding another 1 has no side effect. Pillai's trace is also superior to Wilks' Λ in other aspects <cit.>.
Next, we introduce our speed enhancement for the ULDA algorithm when N > M. <cit.> presents a ULDA algorithm that diagonalizes 𝐒_T and 𝐒_B separately. Based on our experience, it is slower by a constant factor compared to the GSVD-based version <cit.>, which is described in Algorithm <ref> (rewritten to suit our needs). However, when the sample size N is large, the SVD decomposition on 𝐊∈ℝ^(J+N) × M in the line <ref> of Algorithm <ref> creates a runtime bottleneck. This can be resolved by reducing the dimension of 𝐊 before performing SVD (or complete orthogonal decomposition). Since 𝐇_W contributes most of the dimensionality, and the SVD depends on 𝐇_W^T𝐇_W, one possible solution is to replace 𝐇_W with 𝐆_W ∈ℝ^M × M, where 𝐇_W^T𝐇_W = 𝐒_W = 𝐆_W^T𝐆_W. We suggest performing a reduced QR decomposition 𝐇_W = 𝐐_W𝐑_W and replacing 𝐇_W with 𝐑_W, so that we have
𝐇_W^T𝐇_W = 𝐑_W^T𝐐_W^T𝐐_W𝐑_W = 𝐑_W^T𝐑_W.
<cit.> follows a similar approach, using a Cholesky decomposition 𝐒_W = 𝐂_W^T𝐂_W and replacing 𝐇_W with 𝐂_W. We now use a simulation to evaluate the performance of these two variants and the original ULDA.
The simulation setup is as follows: in each round, we simulate N = 10000 observations from J = 10 classes, with each class having the same sampling probability of 1/10. The features are M mutually independent standard normal noise variables. We then use the three algorithms to calculate the transformation matrix 𝐖. We let M = 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, and for each M we repeat the simulation 30 times to obtain a confidence band. The results are summarized in Figure <ref>. Their differences in runtime become larger as the number of features increases. For a data matrix of dimension 10000 × 1024, ULDA with QR decomposition is 38.6% faster than the original GSVD-based ULDA implementation (6.2 seconds vs. 10.1 seconds). Consequently, we added this additional QR decomposition step into the ULDA pipeline.
Another enhancement is our integration of the likelihood structure. The original ULDA method primarily serves as a dimension reduction tool and leaves the classification task to K-nearest-neighbors or other classifiers. However, LDA assumes Gaussian density for the features, and posterior probabilities naturally serve as a powerful classifier, as implemented in the R package and the Python package . One difficulty lies in the invertibility of 𝐒_W. Occasionally, we find a highly discriminative direction such that α^2 = 1 (or Wilks' Λ = 0), meaning the ratio of the between-class variance to the total variance is 1, and one group of classes is perfectly separable from another. We address this by manually setting the within-class variance to 10^-5, which has proven effective in our experience. To understand why it works, note that the discriminant function in LDA is:
δ_j(𝐱)=𝐱^T Σ_W^-1μ_j-1/2μ_j^T Σ_W^-1μ_j+logπ_j,
where π_j is the prior for class j, and we have μ_j = 𝐱̅_j and Σ_W = 𝐒_W / (N - J). 𝐱 will be classified to class k if k = max_j δ_j(𝐱). When perfect separation occurs, we aim to capture this through the discriminant function. By setting the within-class variance to 10^-5, the Mahalanobis norm associated with that direction is magnified 10^5 times, which is large enough to dominate the effects from all other directions.
For missing values, we impute them with the median for numerical variables and assign a new level for categorical variables. Additionally, we include missing value indicators for numerical variables. For categorical variables, we use one-hot encoding and transform them into dummy variables. We also accommodate unequal misclassification costs. Let 𝐂 represent the misclassification costs, where C_ij = C(i | j) is the cost of classifying an observation into class i given that it belongs to class j. Suppose the predicted posterior probability for an observation is 𝐩 = (p̂_1, p̂_2, …, p̂_j); then the cost of predicting it to class i is
C_i = ∑_j = 1^J C(i | j) * p̂_j,
and 𝐱 will be classified to class k if k = max_i C_i(𝐱).
§.§ Forward ULDA: Distribution and Threshold
We now derive the distribution of the test statistics used in the forward ULDA framework. Without loss of generality, we assume 𝐒_T is invertible throughout this section, as redundant columns that cause 𝐒_T to be non-invertible have no discriminative power and can always be removed.
Pillai's trace is non-decreasing when new variables are added to the model.
Suppose 𝐗∈ℝ^N × K have been included in the model, and the new variable to be added is 𝐳∈ℝ^N. The new between-class and total scatter matrices with (K+1) variables can be written as:
𝐒_B =
[ 𝐁_x 𝐛_x; 𝐛_x^' b_z ] 𝐒_T =
[ 𝐓_x 𝐭_x; 𝐭_x^' t_z ],
where 𝐁_x and 𝐓_x are the previous between-class and total scatter matrices for 𝐗. If K = 0, the difference in Pillai's trace will be b_z/t_z. This value is non-negative since the between-class scatter matrix is positive semi-definite (b_z ≥ 0) and the total scatter matrix is positive definite (t_z > 0). For K ≥ 1, we aim to show
trace(𝐒_T^-1𝐒_B) - trace(𝐓_x^-1𝐁_x) ≥ 0.
Since 𝐒_T and 𝐓_x are invertible, we have
!𝐒_T^-1 =
[ 𝐓_x^-1 + 𝐓_x^-1𝐭_x𝐭_x^'𝐓_x^-1(t_z - 𝐭_x^'𝐓_x^-1𝐭_x)^-1 -𝐓_x^-1𝐭_x(t_z - 𝐭_x^'𝐓_x^-1𝐭_x)^-1; -(t_z - 𝐭_x^'𝐓_x^-1𝐭_x)^-1𝐭_x^'𝐓_x^-1 (t_z - 𝐭_x^'𝐓_x^-1𝐭_x)^-1 ].
According to the block matrix multiplication and the properties of the trace, we have
trace(𝐒_T^-1𝐒_B) = trace(𝐓_x^-1𝐁_x)
+ (𝐭_x^'𝐓_x^-1, -1)𝐒_B(𝐭_x^'𝐓_x^-1, -1)^'
× (t_z - 𝐭_x^'𝐓_x^-1𝐭_x)^-1.
(t_z - 𝐭_x^'𝐓_x^-1𝐭_x)^-1 is the Schur complement of the block 𝐓_x of the matrix 𝐒_T. Since 𝐓_x and 𝐒_T are both positive definite, we have (t_z - 𝐭_x^'𝐓_x^-1𝐭_x)^-1 > 0. (𝐭_x^'𝐓_x^-1, -1)𝐒_B(𝐭_x^'𝐓_x^-1, -1)^' is a quadratic form, and since the middle matrix is positive semi-definite, it is non-negative.
At its core, ULDA seeks to maximize Pillai's trace V = trace(𝐒_T^-1𝐒_B). According to Theorem <ref>, with each variable added, the current Pillai's trace increases (or remains the same). Let V^(M) denote the Pillai's trace with all M variables included. The goal of forward selection is to approximate V^(M) using V^(K), where K ≪ M.
Suppose the variable set {𝐱_1, 𝐱_2, …, 𝐱_K-1} has been selected after the first (K-1) steps, and the Pillai's trace of that variable set is V^(K-1)_max. Here, the subscript indicates that this Pillai's trace is not of (K-1) randomly selected variables but is instead maximized at each step through forward selection. At step K, we calculate V^(K)_(1), V^(K)_2, …, V^(K)_(M-K+1), where V^(K)_(i) denotes the Pillai's trace of the variable set {𝐱_1, 𝐱_2, …, 𝐱_K-1, 𝐱_(i)}. Let k = max_i V^(K)_(i). We then select 𝐱_(k) as the best candidate at step K, and V^(K)_max = V^(K)_(k). To establish an inclusion criterion, we must measure the marginal effect of the added variable 𝐱_(k), which corresponds to V^(K)_max - V^(K-1)_max.
At step K, let t_K be the (1 - α)^1/(M-K+1) quantile of B(J-1/2, N-J/2). Then, P(V^(K)_max - V^(K-1)_max≥ t_K) ≤α as N →∞ under H_0, where the newly added variable 𝐳 is normally distributed and independent of both 𝐗 and 𝐲.
When K = 1, V^(1) under H_0 follows a beta distribution B(J-1/2, N-J/2) <cit.>. Since V^(1)_max - V^(0)_max = V^(1)_max is the maximum of V^(1)_1, V^(1)_2, …, V^(1)_M, which are M i.i.d. random variables from the beta distribution, its CDF can be written as I_x^M(J-1/2, N-J/2) where I_x(J-1/2, N-J/2) is the CDF of B(J-1/2, N-J/2). To control the type I error below α, the threshold t must satisfy I_t^M(J-1/2, N-J/2) ≤ 1 - α, which is equivalent to I_t(J-1/2, N-J/2) ≤ (1 - α)^1/M. Then t is the (1 - α)^1/M quantile of B(J-1/2, N-J/2).
When K > 1, from equation (<ref>) we know that
V^(K)_(i) - V^(K-1)_max = (𝐭_x^'𝐓_x^-1, -1)𝐒_B(𝐭_x^'𝐓_x^-1, -1)^'
× (t_z - 𝐭_x^'𝐓_x^-1𝐭_x)^-1.
This equation still holds if we replace 𝐒_B and 𝐒_T with 𝐒_B/(N-J) and 𝐒_T/(N-J), which are the least squares estimators of the between-class and total covariance matrices. Since 𝐗 and 𝐳 are independent, their covariance 𝐭_x →0 as N →∞. Note that 𝐒_B/(N-J) and 𝐒_T/(N-J) are finite as N →∞. Substituting 𝐭_x = 0 into equation (<ref>), we get
V^(K)_(i) - V^(K-1)_max = (0, -1)𝐒_B(0, -1)^'(t_z - 0)^-1
= b_z/t_z
b_z/t_z is Pillai's trace for 𝐳. Therefore, the distribution of V^(K)_(i) - V^(K-1)_max can be approximated by V^(1), and the rest follows the scenario where K = 1.
Based on our experience, this asymptotic approximation sometimes leads to a very conservative threshold, with the type I error falling well below the predefined α. Therefore, we introduce an empirical approximation to mitigate this problem and achieve higher power. Suppose we have already added K-1 variables and the current Pillai's trace is V^(K-1)_max. Since the maximum Pillai's trace for J classes is J-1, the maximum Pillai's trace that can be added is bounded by J - 1 - V^(K-1)_max, which can be viewed as the maximum Pillai's trace for a classification problem with J - V^(K-1)_max classes. Thus, at the k-th step, the threshold becomes the quantile from B(J^'-1/2, N-J^'/2) instead of B(J-1/2, N-J/2), where J^' = J - V^(K-1)_max.
§.§ Type I Error: Analysis and Control
Here, we analyze the type I error under the forward ULDA framework and demonstrate that the family-wise error rate is controlled at the nominal level α. Suppose we have M variables in total, some of which are noise variables (𝐱∈ S_n) and some are informative (𝐱∈ S_i). At each step, there are three possible outcomes: a noise variable is selected, the selection stops, or an informative variable is selected. The entire process is illustrated in Figure <ref>.
Suppose 𝐱 is the variable with the largest Pillai's trace and is selected at the K-th step. Conditional on whether 𝐱∈ S_n or 𝐱∈ S_i, there are four possible outcomes:
p_K1 = P(𝐱 is added|𝐱∈ S_n)
p_K2 = P(𝐱 is not added|𝐱∈ S_n)
p_K3 = P(𝐱 is added|𝐱∈ S_i)
p_K4 = P(𝐱 is not added|𝐱∈ S_i)
In situations with p_K2 and p_K4, 𝐱 is not added, and forward selection stops. Therefore, no type I error is made or will be made. This corresponds to the green regions in Figure <ref>. For the situation with p_K3, since an informative variable is added, no type I error is made at the current step, corresponding to the yellow regions in Figure <ref>. The purple region in Figure <ref> reflects scenarios where a type I error is made, with p_K1 being the only situation that results in such an error. Theorem <ref> shows that under H_0, p_K1≤α, meaning that at each step, the probability of branching into the purple region is controlled at α. Now, we aim to show that, overall, the probability of ending up in any purple region is controlled at α.
The probability of reaching node 2 is p_11≤α. The probability of reaching node 5 is p_13× p_21≤ p_21≤α. The reason we can use the product of p_13 and p_21 to calculate this probability is that under H_0, the variable selected in the first step is assumed to be independent of the variable selected in the second step. For nodes like node 2 and node 5, where the first noise variable is added in the current step, the probability of reaching them can be written as
p_K1∏_k = 1^K-1p_k3≤ p_K1≤α.
Meanwhile, the probability of reaching their child nodes is also controlled at α, because reaching these nodes requires first reaching their parent node. All purple nodes fall into one of these two scenarios, so the family-wise type I error rate is controlled at α. This means that if the forward ULDA selects K variables {𝐱_(1), 𝐱_(1), …, 𝐱_(K)}, then the probability that at least one 𝐱_(i) is a noise variable is controlled at α.
The forward selection framework is summarized in Algorithm <ref>.
§ EMPIRICAL ANALYSIS
In this section, we use simulation and real data to showcase the performance of three forward LDA variants:
* : the proposed variant using Pillai's trace (see Algorithm <ref>).
* : the original variant based on Wilks' Λ. The inclusion criterion is based on p-value, with a variable included if the p-value is below the predefined α.
* : This variant applies an additional Bonferroni correction to the p-value compared to the second variant. If there are (M-K+1) variables to choose from at the K-th step, the p-value is multiplied by (M-K+1) to adjust for the multiple testing.
Note that these variants are for selection alone. They help check the type I error and power (whether the desired variables are included). To further compare the testing accuracy, we apply ULDA (see Section <ref>) to the selected variables.
§.§ Type I Error Evaluation on Iris: Pure Noise and Mixed Cases
We use the same simulation settings from Section <ref> to compare the three forward LDA variants, and the results are summarized in Figure <ref>. and successfully control the type I error in both scenarios. In contrast, suffers from an inflated type I error rate due to multiple testing. These results validate Theorem <ref>, demonstrating that the type I error rate is well-controlled under H_0, where the noise variables are normally distributed and independent of both the informative variables and the response.
§.§ Handling Λ=0: Analysis on Simulated and Real Data
In this section, we illustrate the primary advantage of our proposed method over the original framework: its ability to handle scenarios where Wilks' Λ = 0.
First, we use a simulated dataset, which contains 2,000 observations. The response variable is randomly selected from 10 classes, each with an equal probability of 1/10. We then create a dummy matrix (one-hot encoding) of the response, resulting in 10 columns, each consisting of 1s or 0s. These 10 columns are used as our features. Ideally, these 10 features can perfectly predict the response, and a robust forward selection method should select them all. However, stops after selecting only one feature, I_Class One, the indicator of the first class. Here, I_Class One = 1 for observations from class one and I_Class One = 0 for the other classes. Therefore, the within-class variance is 0, leading to Wilks' Λ = 0. Meanwhile, with , the feature I_Class One contributes a value of 1 to the overall Pillai's trace, which does not trigger a stop. continues adding features, with each feature I_Class i contributing a value of 1 to the overall Pillai's trace. It selects 9 features and then stops, as the maximum Pillai's trace of J - 1 = 9 is reached. With these 9 features, we have enough information to perfectly predict the response.
Next, we use real data from the National Highway Traffic Safety Administration (NHTSA) and its Vehicle Crash Test Database. This database contains data from various crash tests, including those conducted for research, the New Car Assessment Program (NCAP), and compliance purposes. The dataset captures a wide range of vehicle attributes, crash conditions, and safety outcomes. Our focus here is to predict the type of engine used in the tested vehicles. This dataset is challenging to analyze for several reasons:
* Cyclic variables: Variables like impact angle are cyclic, where 359^∘ and 1^∘ should be very similar in real life but are very different in their numerical representation. We address this by transforming all angles to their cos and sin values.
* Missing values: For missing values, we impute them with the median for numerical variables and assign a new level for categorical variables. Additionally, we include missing value indicators for numerical variables.
* Multicollinearity: Variables BX1 to BX21 are measurements of different parts of the car, and some of them are highly correlated, which affects the performance of classical LDA, making forward ULDA more promising.
After data preprocessing, there are 3,273 crash tests and 173 (982 if all categorical variables are transformed into dummy variables) predictor variables. Our response variable, engine type, has 18 different types, with an imbalanced distribution: the most frequent class, 4CTF, has 1,250 occurrences, while the least frequent class, NAPP, has only one occurrence. We then apply both forward LDA variants, and , to this dataset.
stops after selecting variables I_Model=RX and I_Model=RX-8. The reason is that only these two Mazda models (RX and RX-8) use the ROTR engine type. Therefore, the within-class variance becomes zero, allowing it to perfectly identify the ROTR engine. However, this premature stopping prevents it from identifying all the other engine types. The order of the variables selected also reveals its lack of statistical power. The first selected variable should be the most discriminative. However, the top two variables selected by can only correctly identify 3 instances out of 3,273, clearly not the most discriminative variables. This bias towards the perfect separation of certain classes, regardless of how few the instances are, dominates the forward selection process via smaller Wilks Λ, indicating that Wilks Λ is not an ideal test statistic and fails to capture the most discriminative information across all classes.
In contrast, ends up selecting 175 variables. Using ULDA as the classifier and a 10-fold cross-validation, the prediction accuracies of and are 0.38 and 0.65, respectively. Results from simulation and real data demonstrate that outperforms when Wilks' Λ = 0.
§ CONCLUSION
In this paper, we present a new forward discriminant analysis framework based on Pillai's trace, which demonstrates superiority in situations involving perfect group separations (Wilks' Λ = 0) compared to traditional methods. Our approach effectively controls the type I error rate and integrates seamlessly with ULDA, providing a unified classifier.
Despite these advancements, there are some limitations to consider. The criterion used in forward selection, whether based on Wilks' Λ or Pillai's trace, serves only as a measure of goodness of fit within the MANOVA framework, as mentioned in <cit.>. It does not directly reflect the model's training or testing accuracy, although these two typically align well. For those primarily concerned with accuracy, we recommend using the forward selection framework to rank variables, followed by methods such as cross-validation to choose the best subset of variables.
Additionally, our derivation of the distribution of the test statistic in Section <ref> is asymptotic and includes an empirical approximation. There is potential to develop a more precise distribution for finite sample size under alternative assumptions, which could be a direction for future research.
In this paper, we focus primarily on forward selection rather than stepwise selection. The main reason is the difficulty in theoretically justifying a valid exclusion criterion. Moreover, based on our experience, the addition of a variable deletion step typically results in minimal improvement in classification accuracy while significantly increasing runtime.
The related R package is available on CRAN.
Acknowledgements
The author gratefully acknowledges Prof. Wei-Yin Loh from UW-Madison for his unwavering guidance throughout the author's PhD journey.
§ DECLARATIONS
Conflict of interest: The author declares no conflict of interest.
plainnat
|
http://arxiv.org/abs/2409.02464v1 | 20240904062633 | Nonlinear Precoding in the RIS-Aided MIMO Broadcast Channel | [
"Dominik Semmler",
"Michael Joham",
"Wolfgang Utschick"
] | eess.SP | [
"eess.SP"
] |
AoDangle of departure
AoAangle of arrival
ULAuniform linear array
CSIchannel state information
LOSline of sight
EVDeigenvalue decomposition
BSbase station
MSmobile station
mmWavemillimeter wave
DPCdirty paper coding
IRSintelligent reflecting surface
AWGNadditive white gaussian noise
MIMOmultiple-input multiple-output
ULuplink
DLdownlink
OFDMorthogonal frequency-division multiplexing
TDDtime-division duplex
LSleast squares
MMSEminimum mean square error
SINRsignal to interference plus noise ratio
OBPoptimal bilinear precoder
LMMSElinear minimum mean square error
MRTmaximum ratio transmitting
M-OBPmulti-cell optimal bilinear precoder
S-OBPsingle-cell optimal bilinear precoder
SNRsignal to noise ratio
THPTomlinson-Harashima Precoding
dTHPdistributed THP
cTHPcentralized THP
RISreconfigurable intelligent surface
SEspectral efficiency
MSEmean squared error
ASDangular standard deviation
ZF-THPzero-forcing THP
Nonlinear Precoding
in the RIS-Aided MIMO Broadcast Channel
Dominik Semmler, Michael Joham, and Wolfgang Utschick
School of Computation, Information and Technology, Technical University of Munich, 80333 Munich, Germany
email: {dominik.semmler,joham,utschick}@tum.de
3 September 2024
====================================================================================================================================================================================================================
§ ABSTRACT
We propose to use THP for the RIS-aided MIMO broadcast channel where we assume a LOS connection between the BS and the RIS.
In this scenario, nonlinear precoding, like THP or DPC, has certain advantages compared to linear precoding as it is more robust in case the BS-RIS channel is not orthogonal to the direct channel.
Additionally, THP and DPC allow a simple phase shift optimization which is in strong contrast to linear precoding for which the solution is quite intricate.
Besides being difficult to optimize, it can be shown that linear precoding has fundamental limitations for statistical and random phase shifts which do not hold for nonlinear precoding.
Moreover, we show that the advantages of THP/DPC are especially pronounced for discrete phase shifts.
THP, LOS, DPC, vector perturbation, lattice
§ INTRODUCTION
RIS are viewed as a key technology for future communications systems (see <cit.>).
These surfaces consist of many passive elements which allow to reconfigure the channel resulting in an improved system performance.
While this technology gives new degrees of freedom for system optimization, the high number of reflecting elements also drastically increase the complexity.
The channel estimation as well as the joint optimization of the precoders and the reflecting elements in a downlink scenario pose two major challenges for the technology.
We consider perfect CSI in this article and the main focus will be on the second point, where we give a new algorithm for the joint optimization of the precoders and the reflecting elements in order to maximize the sum SE.
For sum SE maximization under perfect CSI, there is already significant literature available, see, e.g., <cit.>.
Additionally, we assume that the BS-RIS channel is dominated by a LOS channel which is motivated by the fact that both the BS and the RIS are both deployed in considerable height and in LOS conditions by purpose (see <cit.>).
This assumption has already been considered in the literature (see, e.g. <cit.>).
Recently, it has been shown in <cit.> that especially nonlinear precoding techniques such as, e.g., DPC, are particularly suited for such a scenario.
Following <cit.>, it has been pointed out that it is crucial for the performance of the system that the BS-RIS channel is orthogonal to the direct channels.
This condition is not met in a practical scenario and DPC is considerably more robust in comparison to linear precoding if this condition is not completely fulfilled.
Furthermore, linear precoding suffers from fundamental limitations when considering random or statistical phase shifts (see <cit.>).
As an example, in a rank improvement scenario under Rayleigh fading, the SE of linear precoding for random phase shifts is upper bounded by a constant whereas for DPC it is increasing monotonically with the number of reflecting elements.
Because of the above mentioned benefits of DPC, we propose an efficent algorithm in this arcticle based on THP (see <cit.>).
THP is a vector perturbation technique (see <cit.>) with reduced complexity which we show to share the same benefits as DPC.
There exist fundamental advantages over linear precoding that are especially pronounced for random or statistical phase shifts.
We focus on random phase shifts and instantanteous phase shifts in this article, whereas statistical phase shifts will be analyzed in future work.
Particularly, for discrete phase shifts, we observe that also for instantanous phase shifts, THP has a significant advantage.
In comparison to DPC, THP is simple to implement.
However, it also comes with a certain performance loss.
Specifically, it suffers from a shaping loss and we analyze its performance within this article.
Our contributions are:
* We derive the high-SNR optimal solution for DPC in case where there are more users than base station antennas.
Additionally, a low-complexity phase shift optimization for DPC is derived which can be shown to be optimal for specific scenarios.
* We propose THP for the RIS-aided scenario and give a low-complexity algorithm based on that method.
In particular, we use dTHP which appears to be better suited in comparison to cTHP in a RIS-aided scenario.
* We compare DPC, linear precoding, and THP (including the shaping loss) in this article and show that THP has the same benefits as DPC shown in <cit.>,
specifically, we show discrete phase shifts to be an interesting aspect in favor of THP/DPC.
It has to be noted that throughout the article, we consider the conventional phase shift model which is the most popular model in the literature.
However, a more accurate model has been given in <cit.>.
The algorithms and results in this article can be directly extended to the model in <cit.> and, additionally, to mutual coupling by considering decoupling networks (see <cit.>).
§ SYSTEM MODEL
A scenario with one BS serving K single-antenna users is considered.
The system is supported by one RIS with reflecting elements.
Furthermore, we assume the BS-RIS channel to be rank-one, i.e., _ = ab^, where w.l.o.g. we assume b_2=1.
Hence, the channel of user k reads as
_k^ = ^_ + _^ab^ ∈1 ×
with _^∈1 × being the direct channel from the BS to the k-th user, _^∈1 × being the reflecting channel from the RIS
to user k, and = () ∈× with ∈{z∈: z_n=1, ∀ n} being the phase manipulation at the RIS.
By defining _,k^ = _,k^(a) we arrive at the equivalent expression
_k^ = ^_ + _,k^b^ ∈1 ×.
Stacking the user channels into the composite channel matrices H_d =[_,1,…,_,K]^ and H_ =[_,1,…,_,K]^, we obtain
H = H_d + H_b^∈ K ×
where H=[_1,…,_K]^. Due to b_2=1 and by defining
C = H_dP_b^⊥H_d^, = [ H_c, H_db ],
= [ ^ 1 ]^
the Gram channel matrix can be written (similar to <cit.> and <cit.>) as
^ = + ^^ ∈K × K.
§ DIRTY PAPER CODING
As performance metric, we consider the sum SE.
In the high-SNR regime a scaled identity transmit covariance matrix in the dual uplink is optimal for DPC (see <cit.>).
Together with (<ref>) and p̅=P_Tx/K, where P_Tx is the transmit power, the SE is given by
SE_DPC = log_2( + p̅^)
=log_2( +p̅) + log_2( 1+ ^^(/p̅ +)^).
The asymptotic expression for p̅→∞ is therefore
SE_DPC = log_2(p̅) + log_2( ^^^)
in case is full rank.
If = K one eigenvalue of is exactly zero (see next section) and we obtain the asymptotic expression (p̅→∞)
SE_DPC = k=1K-1log_2( λ_k p̅) + log_2( p̅ _K^^2)
where λ_k are the eigenvalues of in decreasing order and _k are the corresponding eigenvectors.
§.§ Optimal Phase Shift Solution
From (<ref>), we can infer that if one eigenvalue of is zero, we can obtain the optimal solution of the phases by alignment as
= exp( (∠(_^_K)-1∠(b^_^_K))).
*Quadratic System (N_ B =K)
The matrix has a zero eigenvalue if b∈range(_^).
For specific scenarios, this is approximately fulfilled.
However, in case =K, one eigenvalue of is always exactly zero due to the orthogonal projector P_b^⊥.
For b∈range(_^), we have
_K = _^+,b/_^+,b_2
where for = K the pseudoinverse _^+ is equal to _^.
*Rank Improvement Scenario
Additionally, the matrix also has a zero eigenvalue if _ has a zero singular value.
This happens if the direct channels of some users are colinear or when we have a rank-improvement scenario where some users are blocked and, hence, have zero direct channel.
For a rank-one BS-RIS channel only one of the blocked users is allocated (see <cit.>) and we have
_K = e_l
where l is the user with negligible direct channel.
In this case, (<ref>) is actually a channel gain maximization of user l (see <cit.>).
*Optimal solution for high-SNR in case K ≥ N_ B
From the discussion above, we can directly give the optimal solution for a high-SNR scenario where we have more users than BS antennas.
At high-SNR, users will be allocated and the solution for the transmit covariance matrix is given by p̅.
According to above, the optimal phase shifts at the RIS are given by (<ref>) with (<ref>) as we have a quadratic system with =K,
due to the user allocation.
§.§ Phase Shift Heuristic
For = K, the optimal phase shift vector is given by (<ref>) with (<ref>).
In case > K, we choose
_opt = exp( (∠()-1∠(w))).
where w̅^ = [^,w] is the eigenvector corresponding to the maximum eigenvalue of ^(/p̅ + )^ to maximize SE_DPC in (<ref>).
To avoid the computation of an eigenvector in the dimension of the reflecting elements,
we first compute the principal eigenvector w̅^'∈ℂ^K of the matrix ( /p̅ + )^^ and then obtain the desired eigenvector (scaled version) w̅= w̅^'.
The complexity of the multiplication ^ is only linear in .
This also has to be calculated only once and for each user allocation (see Section <ref>) the specific submatrix can be deduced.
It is important to note, that the above heuristic phase shift configuration is optimal in case one eigenvalue of is zero.
The solution can be further refined with a local algorithm (e.g. element-wise) by using (<ref>) as an initial guess.
§ THOMLINSON HARASHIMA PRECODING
To avoid the implementation difficulties of DPC, we opt for vector perturbation precoding (see <cit.>).
One suboptimal method is THP which avoids the computationally complex lattice search and can be employed with simple modulo transmitters/receivers.
Specifically, the sent symbol in case of THP can be expressed as
x =Pv∈
with the transmit filter P and the symbol
= Mod( s + ( - B) ) ∈K
after the modulo chain, where s∈K is the uniformly distributed data symbol
and B is the unit lower-triangular feedback filter.
In particular, the real and imaginary part of the data symbol s_i is chosen to be uniformly distributed between -0.5 and 0.5.
We use the popular reformulation (see, e.g., <cit.>) of the modulo operation which can be equivalently written by adding an integer vector a
resulting in = s + ( - B) + a and, hence, in the transmitted symbol
x = PB^(s + a).
The receiver also employs a modulo operator and the signal
y = Mod(FHx + Fn)
= FHx + Fn +ã
is received. Here, n∼0, F is the receive filter and ã the integer vector corresponding to the modulo operator.
§.§ Zero-Forcing dTHP
Throughout this article, we consider ZF-THP.
Combining ZF-THP with a user allocation leads to good results also for moderate SNR values.
There are two possibilities for ZF-THP (see, e.g., <cit.>):
dTHP as well as cTHP.
We observed that dTHP is particularly suited for a RIS-aided scenario and we will especially focus on this version within this article.
A comment on cTHP is given in Section <ref>.
In this subsection, we give a short overview over THP, specifically, ZF-THP.
For THP as well as other approaches like MMSE-THP see <cit.>.
Particularly, we select the transmit filters
B = (l)^L , P = βQ^
where L and Q are from the LQ-decomposition = LQ and l = [L_11,…, L_KK]^.
Because B is unit lower-triangular, 1/L_iij=1i-1L_ij v_j from v_i = Mod( s_i - 1/L_iij=1i-1L_ij v_j ) only depends on s_j, j<i and, hence, is independent of s_i.
As, additionally, the real and imaginary parts of s_i are uniformly distributed between -0.5 and 0.5, it follows that the real and imaginary parts of v_i are uniformly distributed between -0.5 and 0.5 as well.
A proof for this well-known result is given in the Appendix.
Hence, with the variance of a uniform distribution, we get
v_i^2 = 1/6.
Since is unitary, the signal power of the sent symbol x can be written as x^2 = β^2 K/6.
Hence, we obtain the scaling
β = √(P_Tx 6/K)
to match the power constraint x^2≤ P_Tx with equality.
Choosing
F = ^(l)1/β
as the receive filter, the received symbol is given by
y = Mod(FPB^(s + a) + Fn)
= Mod(s + a + 1/β^(l)n)
where a vanishes due to the modulo operation.
We arrive with K complex scalar modulo channels where for each of them, we can get for the SE (see, e.g., <cit.>) as
SE_k = -h( Mod(√(K/P_Tx 6 L_kk^2)n))
with the high-SNR asymptote (by using p̅ = P_Tx /K)
SE_k = log_2(6/π ep̅ L_kk^2 ).
Therefore, at high SNR the sum SE reads as
SE = log_2 (p̅^) - Klog_2(π e/6)
since (^) = L^2. This is exactly the sum SE of DPC at high-SNR, however, we additionally have the shaping loss of approximately Klog_2(π e/6) ≈K/2 bits for THP that is non-negligible.
§.§ Phase Shift Optimization
As the THP high-SNR sum SE is just the high-SNR DPC sum SE with an additional offset, the optimization of THP at high SNR can be directly transferred from Section <ref>.
The only difference is that ^ ^ [see (<ref>)] is considered in comparison to ^( /p̅ + )^ for > K.
In case one eigenvalue of is zero, e.g., when =K, we arrive at (<ref>).
§.§ Decoding Order
The high SNR sum SE only depends on the determinant of the Gram channel matrix and, hence, is independent of the decoding order.
However, for lower SNR values this changes and we optimize the order based on the MSE.
This metric was e.g. used in <cit.> and is more tractable than the SE in (<ref>).
Following <cit.>, by defining d = x + a at the transmitter and d̂= y-a at the receiver, we can give the MSE as
d̂ - d^2 = Fn^2 = K /P_Tx 6k=1K1/L_kk^2.
To avoid a computationally complex search, the decoding order is determined by a successive approach.
As in <cit.>, we start with the user who is decoded lastly and then successively add users according to the MSE criterion.
It is important that we avoid a joint optimization of the phase shifts and the decoding order.
Hence, for a certain user allocation, we first determine the high-SNR phase shift according to Section <ref> and then optimize the order for these phases.
§.§ User Allocation
In case, we have more users than BS antennas or if we only have medium SNR, it is not optimal to allocate all users for ZF-THP.
Thus, we opt for a user allocation scheme.
Specifically, we use a greedy scheme.
This is often done in the literature for similar problems and has been applied for vector perturbation precoding in <cit.>.
Accordingly, we start with the first user and succesively add users one by one in a greedy manner as long as the sum SE increases.
For the sum SE, we use the lower bound max(0,SE_k) from (<ref>) as a metric.
Note that the phase shifts as well as the decoding order have to be optimized for each allocation.
This complexity can be significantly reduced by exploiting the eigenvalue result in <cit.> as has been done in <cit.> for linear precoding.
In this case, only two allocations have to be considered which clearly reduces the complexity.
Additionally, one can follow <cit.> and use a two-norm relaxation for evaluating a user allocation by replacing ^^^ with just λ_max(^^)=λ_max( ^^).
§ CENTRALIZED ZF-THP
We briefly comment on cTHP which is the alternative to dTHP (see, e.g., <cit.>).
Here, the receive filter F at the users is an identity matrix and, hence, the receivers just consist of a simple modulo receiver.
While this reduces the receiver complexity, for cTHP, a joint optimization of the decoding order and the phases is important.
Additionally, the high-SNR asymptote is not just the determinant and no simple solution is available as in case of dTHP and DPC.
We also observed a significantly worse performance of cTHP and in summary it appears that it is not particularly suited for a RIS-aided scenario.
§ DISCRETE PHASE SHIFTS
The simplicity of the objective for non-linear methods [see (<ref>) and (<ref>)] is especially beneficial for discrete phase shifts and we also evaluate the algorithms for a binary RIS, i.e., θ_n ∈{-1,1}.
The user allocation is still determined with continous phase shifts.
However, afterwards, the argument in the logarithm in (<ref>) and (<ref>) is then optimized by applying an element-wise algorithm where the continous phase shift is chosen as an initial guess.
Note that this initial guess is not feasible and only after the first iteration a valid discrete vector is obtained.
Additionally, we extend the algorithm in <cit.> by replacing the element-wise algorithm based on continous phase shifts by an element-wise algorithm on discrete phase shifts.
§ SIMULATIONS
We consider an =6 antenna BS at (0, 0) m serving K=6 single-antenna users which are uniformly distributed in a circle with radius 5 m at (75, 10) m.
For all simulations, 3 of the users have an extra 60 dB pathloss of the direct channel.
Hence, we simulate a rank-improvemnt scenario where at maximum 4 users will be served when including the RIS (see <cit.>).
The position of the RIS is at (100, 0) m and the noise power is given by σ^2=-110dBm in all plots.
We assume Rayleigh fading for the direct channels, Rician fading with a Rician factor of 0 dB for the channels between the RIS and the users and a pure LOS channel between the BS and the RIS.
This pure LOS channel is defined by an outer product of two half-wavelength ULA vectors with both angles given by π/2.
For Rayleigh/Rician fading, we assume the covariance matrices to follow a Laplacian angle density (see <cit.>) with an ASD of σ_ASD.
Additionally, we choose the logarithmic pathloss model L_dB = α + β 10log_10(d) where d is the distance in meter.
We consider three different sets of pathloss values for different channel strengths.
Specifically, we use L_dB,weak = 35.1 + 36.7log_10(d/m), L_dB,strong = 37.51 + 22log_10(d/m), and L_dB,LOS = 30 + 22log_10(d/m).
We are considering linear precoding as well as THP without the RIS (LISAwoRIS <cit.>/THPwoRIS), random phase shifts (LISARandom/THPRandom) as well as optimized versions with continous (THP/AddOne-RIS-LISA) as well as discrete phase shifts (THP-Discrete/AddOne-RIS-LISA-Discrete).
Furthermore, we use DPC-AO <cit.> as an upper bound with 10 initial phase shifts where the best is taken.
Additionally, we include VP-Bound, which is the vector perturbation upper bound in <cit.> with the same user allocation and phase optimization method as in this article.
Note that all THP results include the shaping loss [see (<ref>)].
We first analyze the scenario w.r.t. the conditioning of the channel via the ASD.
Here, we choose the direct, the RIS-user, as well as the BS-RIS channel to have an equally strong pathloss with L_dB strong.
In Fig. <ref>, we can see that THP is able to handle worse conditioning better than linear precoding.
Specifically, we can see that THP allocates a new user (via the RIS) for a lower ASD in comparison to linear precoding.
This has significant consequences as there are scenarios where the RIS has strong impact in case of THP but not for linear precoding.
This is further investigated in Fig. <ref> based on σ_ASD = 15.
Here, linear precoding has severe problems in the sense that even for a higher number of elements no or only a very slight increase in the SE is possible.
In comparison, the SE of THP is significantly increased, also for a smaller number of elements.
We choose now a scenario where the channels are better conditioned σ_ASD = 30 and the RIS has more impact, which is beneficial for the linear methods.
Specifically, the users are assumed to be closer to the RIS by shifting the circle from (75, 10) m to (95, 10) m.
Additionally, we assume L_dB, weak for the direct channels and L_dB, LOS for the BS-RIS channel.
From Fig. <ref> we can infer that the linear methods achieve now also a good performance.
However, a higher number of reflecting elements is needed, which is especially pronounced for discrete phase shifts.
In Fig. <ref> we set an additional pathloss of 20 dB for the 3 stronger users and, hence, the RIS has an even stronger impact.
Here, we can see the fundamental limits for random and statistical phase shifts according to <cit.> and linear precoding with random phase shifts is severly limited in comparison to THP with random phase shifts.
For instantanous phase shifts on the other hand, the linear methods result in a comparable performance, even though THP still leads to better results.
The discrete phases in case of linear precoding are still limited and there exists a major gap to the THP methods.
§ CONCLUSION
Similar to DPC, THP, taking into account the shaping loss, leads to fundamental advantages in comparison to linear precoding in a RIS assisted scenario.
Especially, when the BS-RIS channel is not orthogonal to the direct channel or for random/statistical phase shifts, THP has its advantages which are even more pronounced for discrete phases.
Future work will consider non-ZF THP, vector perturbation precoding and the important case of statistical phase shifts.
§ MODULO OPERATOR & UNIFORM DISTRIBUTION
For two independent random variables X and N where X ∼𝒰(-0.5,0.5), the distribution
Y = Mod(X+N) is again uniformly distributed between -0.5 and 0.5.
To see this, we first give the distribution for W = X+N which is
f_W(w) = ∫_-∞^∞ f_N(τ) f_X(w-τ)dτ.
As f_X(x) = 1 for -0.5≤ x ≤ 0.5 we have
f_W(w) = ∫_w-0.5^w + 0.5 f_N(τ) dτ.
The distribution of Y = Mod(X+N) = Mod(W) is given by
f_Y(y) =
k ∈ℤ f_W(y + k) -0.5 ≤ y ≤ 0.5
0 else.
And as, additionally,
k ∈ℤ f_W(y + k) = k ∈ℤ∫_w+k-0.5^w+k + 0.5 f_N(τ) dτ
=∫_-∞^∞ f_N(τ) dτ =1
holds, we have Y ∼𝒰(-0.5,0.5).
IEEEtran
|
http://arxiv.org/abs/2409.03165v1 | 20240905014933 | Search for singly charmed dibaryons in baryon-baryon scattering | [
"Yao Cui",
"Xinmei Zhu",
"Yuheng Wu",
"Hongxia Huang",
"Jialun Ping"
] | hep-ph | [
"hep-ph",
"nucl-th"
] |
[email protected]
[email protected]
[email protected]
[email protected](Corresponding author)
[email protected]
^1School of Physics and Technology, Nanjing Normal University, Nanjing 210097, People's Republic of China
^2Department of Physics, Yangzhou University, Yangzhou 225009, People's Republic of China
^3Department of Physics, Yancheng Institute of Technology, Yancheng 224000, People's Republic of China
§ ABSTRACT
We perform a systematical investigation of the singly charmed dibaryon system with strangeness numbers S=-1, -3 and -5 in the framework of the chiral quark model.
Two resonance states with strangeness numbers S=-1 are obtained in the baryon-baryon scattering process. In the ΛΛ_c scattering phase shifts, the ΣΣ_c appears as a resonance state with the mass and width 3591 MeV and 11.1 MeV, respectively. In the NΞ_c and NΞ^'_c scattering phase shifts, the ΣΣ^∗_c exhibits as a resonance state with the mass and width 3621-3624 MeV and 14.9 MeV, respectively. All these heavy-flavor dibaryons are worth searching for in experiments. Besides, we would like to emphasize that the coupling calculation
between the bound channels and open channels is indispensable. The study of the scattering process maybe an effective way to look for
the genuine resonances.
Search for singly charmed dibaryons in baryon-baryon scattering
Jialun Ping^1
School of Physics, Nankai University, Tianjin, 300071, China
===================================================================
§ INTRODUCTION
In the last two decades, a growing number of exotic particles have been discovered in experiment. A series of XYZ states, P_c states, and charm T_cc^+ state were reported in experiment, which has led to extensive research into exotic hadrons <cit.>. Understanding hadron-hadron interactions and searching for exotic hadron states are important topics in hadron physics, among which questing for dibaryons is a long-standing challenge. The well-known dibaryon is deuteron discovered in 1932 <cit.>. In 2014, the Wide Angle Shower Apparatus (WASA) detector at the Cooler Synchrotron (COSY) <cit.> collaboration established the narrow resonance state d^∗ with I(J^P)=0(3^+), and given the first clear-cut experimental evidence for the existence of a true dibaryon resonance <cit.>. The d^∗ (2380) may be a ΔΔ dibaryon state or a six-quark state, and extensively investigated within various theoretical approaches <cit.>.
For the strange dibaryon, the progress of the NΩ searches in the experiment attracted more and more attention for this state, which was observed in Au+Au collisions by STAR experiment at the Relativistic Heavy Ion Collider (RHIC) <cit.>. And before that, the dibaryon NΩ was investigated by different theoretical methods such as quark models <cit.>, and the lattice QCD <cit.>.
The research of charmed dibaryon is further inspired by the experimental discovery of the doubly charmed baryon Ξ_cc by the Large Hadron Collider beauty (LHCb) Collaboration <cit.>. For the dibaryons with heavy quarks, the NΛ_c system with one heavy quark was both studied on the hadron level <cit.> and on the quark level <cit.>. The dibaryon systems with two heavy quarks were researched in the one-pion-exchange model <cit.> and one-boson-exchange model <cit.>. Besides, the dibaryon systems with three heavy quarks were also investigated from the lattice QCD <cit.>, the QCD sum rule <cit.>, one-boson-exchange <cit.> and the quark model <cit.>. Recently, Junnarkar and Mathur reported the first lattice QCD study of the heavy quark flavor deuteron-like dibaryons <cit.>, and suggested that the dibaryons Ω_cΩ_cc(sscscc), Ω_bΩ_bb(ssbsbb) and Ω_ccbΩ_cbb(ccbcbb) were stable under strong and electromagnetic interactions. They also found that the binding of these dibaryons became stronger as they became heavier in mass. In addition, there are many other investigations on deuteron-like states. In Ref.<cit.>, they perform a systematic study of the possible loosely bound states composed of two charmed baryons or a charmed baryon and an anticharmed baryon within the framework of the the one-boson exchange model. And in Ref.<cit.>, they also adopted the one-boson-exchange model to perform a systematic investigation of interactions between a doubly charmed baryon (Ξ_cc) and a S-wave charmed baryon (Λ_c, Σ_c^∗ and Ξ_c^',∗), which can be easily bound together to form shallow molecular hexaquarks. Taking inspiration from the research on the dibaryon states containing heavy quarks, it is meaningful to use various methods to study and search for these heavy dibaryons.
Quantum chromodynamics (QCD) is a theory describing strong interactions based on regular field theory. The equivalent degrees of freedom are quarks and gluons, and QCD is asymptotically free at high energies and can be solved precisely by perturbation theory. Generally, hadronic structure and hadron interactions belong to the low-energy physics of QCD, which are much harder to calculate directly from QCD because of the nonperturbative nature of QCD. One must rely on effective theories or models inspired by QCD to gain insight into the phenomena of the hadronic world. The constituent quark model is one of them, which transforms the complicated interactions between current quarks into dynamic properties of constituent quarks. The chiral quark model (ChQM) is a typical one of the constituent quark model. The ChQM was successfully used to calculate mesons <cit.>, baryons, tetraquarks <cit.>, pentaquarks <cit.> and dibaryons <cit.>. In particular, for dibaryon systems, the ChQM is able to calculate the dibaryon systems from light to heavy quarks very well, such as nucleon-nucleon interaction <cit.>, NΩ <cit.> and the fully heavy dibaryon systems <cit.>, which is consistent with the results of the lattice QCD.
In the present work, we systematically investigate the singly charmed dibaryons in the ChQM, where the effective potential between two baryons are evaluated, and the search of possible bound states are performed with the coupled channel effects. Moreover, based on the conservation of the quantum numbers and the limitation of the phase space, we also study the baryon-baryon scattering process to look for the existence of any resonance states in the singly charmed dibaryon systems.
The structure of this paper is as follows. A brief introduction
of the quark model and calculation methods are given in Section II. Section III is devoted
to the numerical results and discussions. Section 4 is a
summary and the last section is Appendix, which shows the way of constructing wave functions.
§ QUARK MODEL AND CALCULATION METHODS
Phenomenological model is an important tool to analyze the nature of multi-quark states. Here, the chiral quark model(ChQM) is used to study the singly charmed dibaryon systems with IJ=01. In addition, the six-body problem is transformed into a two-body problem by using the resonance group method(RGM) for simplified calculations.
§.§ The chiral quark model
The model has become one of the most common approaches to describe hadron spectra, hadron-hadron interactions and multiquark states <cit.>. The construction of the ChQM is based on the breaking of chiral symmetry dynamics <cit.>. The model mainly uses one-gluon-exchange potential to describe the short-range interactions, a σ meson exchange (only between u, d quarks) potential to provide the mid-range attractions, and Goldstone boson exchange potential for the long-range effects <cit.>. In addition to the Goldstone bosons exchange, there are additional D meson that can be exchanged between u/d and c quarks, D_s meson that can be exchanged between s and c quarks, and η_c that can be exchanged between any two quarks of the u, d, s and c quarks. In order to incorporate the charm quark well and study the effect of the D, D_s, η_c meson exchange interaction, we extend the model to SU(4), and add the interaction of these heavy mesons interactions. The extension is made in the spirit of the phenomenological approach of Refs. <cit.>.
The detail of ChQM used in the present work can be found in the references <cit.>. In the following, only the Hamiltonian and parameters are given.
H = ∑_i=1^6 (m_i+p_i^2/2m_i) -T_c
+∑_i<j[ V^CON(r_ij)+V^OGE(r_ij) + V^σ(r_ij)+ V^OBE(r_ij)
],
V^CON(r_ij) = -a_c λ_i ·λ_j [r_ij^2+V_0],
V^OGE(r_ij) = 1/4α_sλ_i ·λ_j
[ 1/r_ij-π/2(1/m_i^2
+1/m_j^2+4σ_i·σ_j/3m_im_j)
δ(r_ij)-3/4m_im_jr^3_ijS_ij],
V^σ (r_ij ) = -g_ch^2/4πΛ _σ^2m_σ/Λ _σ^2-m_σ^2 [ Y ( m_σr_ij )
-Λ _σ/m_σY ( Λ _σr_ij ) ]
V^OBE(r_ij) = v^π(r_ij) ∑_a=1^3λ_i^a·λ_j^a+v^K(r_ij)∑_a=4^7λ_i^a·λ _j^a +v^η(r_ij)[(λ _i^8·λ _j^8)cosθ_P-(λ _i^0·λ_j^0) sinθ_P]
+v^D(r_ij) ∑_a=9^12λ_i^a·λ_j^a+v^D_s(r_ij) ∑_a=13^14λ_i^a·λ_j^a
+v^η_c(r_ij)
λ_i^15·λ_j^15
v^χ(r_ij) = -g_ch^2/4πm_χ^2/12m_im_jΛ^2/Λ^2-m_χ^2m_χ{[Y(m_χ r_ij)-Λ^3/m_χ^3Y(Λ r_ij)
] σ_i ·σ_j .
.+ [ H(m_χ r_ij)-Λ^3/m_χ^3
H(Λ r_ij) ] S_ij}λ^F_i ·λ^F_j, χ=π,K,η,D,D_s,η_c
S_ij = (σ_i · r_ij)
(σ_j ·
r_ij)/r_ij^2-1/3 σ_i ·σ_j.
Where T_c is the kinetic energy of the center of mass; S_ij is quark tensor operator. We only consider
the S-wave systems at present, so the tensor force dose not work here; Y(x) and H(x) are
standard Yukawa functions <cit.>; α_ch is the chiral coupling constant, determined as usual from the π-nucleon coupling constant; α_s is the quark-gluon coupling constant <cit.>. Here m_χ is the mass of the mesons, which are experimental value; Λ_χ is the cut-off parameters of different mesons, which can refer to Ref <cit.>. The coupling constant g_ch for scalar chiral field is determined from the NNπ coupling constant through
g_ch^2/4π =(3/5 )^2g_π NN^2/4πm_u,d^2/m_N^2
All other symbols have their usual meanings.
All parameters were determined by fitting the masses of the baryons of light and heavy flavors. The model parameters and the fitting masses of baryons are shown in Table <ref> and Table <ref>, respectively.
§.§ Calculation methods
In this work, RGM <cit.> is used to carry out a dynamical calculation. In the framework of RGM, which split the dibaryon system into two clusters,
the main feature of RGM is that for a system consisting of two clusters, it can assume that the two clusters are frozen inside, and only consider the relative motion between the two clusters, so the conventional ansatz for the two-cluster wave function is:
ψ_6q = A [[ϕ_B_1ϕ_B_2]^[σ]IS⊗χ_L(R)]^J,
where the symbol A is the anti-symmetrization operator. With the SU(4) extension, both the light and heavy quarks are considered as identical particles. So A = 1-9P_36. [σ]=[222] gives the total color symmetry and all other symbols have their usual meanings. ϕ_B_i is the 3-quark cluster wave function. From the variational principle, after variation with respect to the relative motion wave function χ(𝐑)=∑_Lχ_L(𝐑), one obtains the RGM equation
∫ H(𝐑,𝐑')χ(𝐑')d𝐑'=E∫ N(𝐑,𝐑')χ(𝐑')d𝐑'
where H(𝐑,𝐑') and N(𝐑,𝐑') are Hamiltonian and norm kernels.
The RGM can be written as
∫ L(𝐑,𝐑')χ(𝐑')d𝐑'=0
where
L(𝐑,𝐑') = H(𝐑,𝐑')
-EN(𝐑,𝐑')
= [ -▽ _𝐑'^2/2μ
+V_rel^D(𝐑')-E_rel]δ (𝐑-𝐑')
+ H^EX(𝐑,𝐑')
-EN^EX(𝐑,𝐑')
where μ is the approximate mass between the two quark clusters; E_rel=E-E_int is the relative motion energy; V_rel^D is the direct term in the interaction potential.
By solving the RGM equation, we can get the energies E and the wave functions. In fact, it is not convenient to work with the RGM expressions. Then, we expand the relative motion wave function χ(𝐑) by using a set of gaussians with different centers,
χ_L(R) = 1/√(4π)(3/2π b^2)^3/4∑_i=1^n C_i
×∫exp[-3/4b^2(R-S_i)^2] Y_LM(Ŝ_̂î)dŜ_̂î,
where L is the orbital angular momentum between two clusters. Since the system we studied are all S-waves, L=0 in this work, and S_i, i=1,2,...,n are the generator coordinates, which are introduced to expand the relative motion wave function. By including the center of mass motion:
ϕ_C (R_C) = (6/π b^2)^3/4e^-3R^2_C/b^2,
the ansatz Eq.(<ref>) can be rewritten as
ψ_6q = A∑_i=1^n C_i∫dŜ_̂î/√(4π)∏_α=1^3ϕ_α(S_i) ∏_β=4^6ϕ_β(-S_i)
×[[χ_I_1S_1(B_1)χ_I_2S_2(B_2)]^ISY_LM(Ŝ_̂î)]^J
× [χ_c(B_1)χ_c(B_2)]^[σ],
where χ_I_1S_1 and χ_I_2S_2 are the product of the flavor and spin wave functions, and χ_c is the color wave function. The flavor, spin, and color wave functions are constructed in two steps. First, constructing the wave functions for the baryon and baryon clusters; then, coupling the two wave functions of two clusters to form the wave function for the dibaryon system. The detail of constructing the wave functions are presented in Appendix. For the orbital wave functions, ϕ_α(S_i) and ϕ_β(-S_i) are the single-particle orbital wave functions with different
reference centers:
ϕ_α(S_i)=(1/π
b^2)^3/4e^ -(r_α-S_i/2)^2/2b^2,
ϕ_β(-S_i)=(1/π
b^2)^3/4e^ -(r_β+S_i/2)^2/2b^2 .
By expanding the relative motion wave function between two clusters in the RGM equation by gaussians, the integro-differential equation of RGM can be reduced to an algebraic equation, which is the generalized eigen-equation.
With the reformulated ansatz, the RGM equation Eq.(<ref>) becomes an algebraic eigenvalue equation:
∑_j C_jH_i,j= E ∑_j C_jN_i,j.
where H_i,j and N_i,j are the Hamiltonian matrix elements and overlaps, respectively. Besides, to keep the matrix dimension manageably small, the baryon-baryon separation is taken to be less than 6 fm in the calculation. By solving the generalized energy problem, we can obtain the energy and the corresponding wave functions of the dibaryon system. On the basis of RGM, we can further calculate scattering problems to find resonance states.
For a scattering problem, the relative wave function of the baryon-baryon is expanded as
χ_L( R) =∑_i=1^nC_iũ_L(R,S_i)/RY_L,M ( R̂ )
with
ũ_L ( R,S_i )
={ α _iu_L ( R,S_i ), R≤R_C
[ h_L^- ( k,R )-s_ih_L^+ ( k,R ) ]R, R≥R_C.
where
u_L ( R )=√(4π) ( 3/2π b^2 )e^-3/4b^2 ( R^2+r_i^2 )j_L ( -i3/2b^2Rr_i )
C_i are the expansion coefficients, and C_i satisfy ∑_i=1^nC_i=1. n is the number of Gaussion bases (which is determined by the stability of the results), and j_L is the Lth spherical Bessel function. h_L^± are the Lth spherical Hankel functions, k is the momentum of the relative motion with k =√(2μ E_cm), μ is the reduced mass of two baryons of the open channel, E_cm is the incident energy of the relevant open channels, and R_C is a cutoff radius beyond which all of the strong interactions can be disregarded. α_i and s_i are complex parameters that determined in terms of continuity conditions at R=R_C. After performing the variational procedure by the Kohn-Hulthén-Kato(KHK) variational method <cit.>, a Lth partial-wave equation for the scattering problem can be reduced as
∑_j^nℒ_ij^L C_j =ℳ_ij^L (i=0,1,...,n-1),
with
ℒ_ij^L=𝒦_ij^L-𝒦_i0^L-𝒦_0j^L+𝒦_00^L
ℳ_i^L=ℳ_ij^L-𝒦_i0^L
and
𝒦_ij^L=⟨ϕ̂_̂Âϕ̂_̂B̂ũ_L(R',S_i )/R'Y_L,M(R')| H-E | .
.·𝒜 [ ϕ̂_̂Âϕ̂_̂B̂ũ_L(R,S_j )/RY_L,M(R) ] ⟩
By solving Eq.(<ref>) we obtain the expansion coefficients C_i. Then, the S matrix element S_L and the phase shifts δ_L are given by
S_L≡ e^2iδ_L=∑_i=1^nC_iS_i
Through the scattering process, not only can we better study the interaction between hadrons, but it can also help us research resonance states. The general scattering phase shift diagram should be a smooth curve, that is, the phase shift will change gently as the incident energy increases. But in some cases, the phase shift will be abrupt, the change will be more than 90 degrees, which is the resonance phenomena. The rapid phase change is a general feature of resonance phenomena, see Fig.<ref>. The center of mass energy with phase shift π/2 gives the mass of the resonance (M^' in Fig.<ref>), and the difference of the energies with phase shift 3π/4 and π/4 gives the partial decay width of the resonance (Γ in Fig.<ref>).
§ THE RESULTS AND DISCUSSIONS
In this work, we perform a systematical investigation of the S-wave singly charmed dibaryon systems with strange S=-1,-3,-5, isospin I=0, and the angular momentum J=1. To study the interaction between two hadrons, we calculate the effective potential of the system. Then, a dynamic calculation are carried out to search for bound states. Besides, the scattering process is also investigated to look for the existence of any resonance states.
§.§ Effective potentials
The effective potential between two baryons is shown as
V(S_i)=E(S_i)-E(∞)
where S_i stands for the distance between two clusters and E(∞) stands for a sufficient large distance of two clusters, and the expression of E(S_i) is as follow.
E(S_i)=⟨Ψ _6q(S_i) | H | Ψ _6q(S_i) ⟩/⟨Ψ _6q(S_i)|Ψ _6q(S_i) ⟩
Ψ _6q(S_i) represents the wave function of a certain channel. Besides, ⟨Ψ _6q(S_i) | H | Ψ _6q(S_i) ⟩ and ⟨Ψ _6q(S_i)|Ψ _6q(S_i) ⟩ are the Hamiltonian matrix and the overlap of the states.
The effective potentials of all channels with different strange numbers are shown in Fig.<ref>, Fig.<ref> and Fig.<ref> respectively.
For S=-1 system, as shown in Fig.<ref>, all of the seven channels are attractive, the potentials for the four channels ΣΣ_c,ΣΣ_c^*,Σ^*Σ_c and Σ^*Σ_c^* are deeper than the other three channels ΛΛ_c,NΞ_c and NΞ_c^', which indicates that the ΣΣ_c,ΣΣ_c^*,Σ^*Σ_c and Σ^*Σ_c^* are more likely to form bound states or resonance states.
For S=-3 system, from Fig.<ref> we can see that the potentials of the ΞΞ_c^*,Ξ^*Ξ_c^* and ΛΩ_c^* are attractive, while the potentials for the other six channels are repulsive. The attraction of ΞΞ_c^* and Ξ^*Ξ_c^* is much stronger than that of ΛΩ_c^*, which implies that it is more possible for ΞΞ_c^* and Ξ^*Ξ_c^* to form bound states or resonance states. However, compared to S=-1, the attraction is much weaker.
For S=-5 system, see Fig.<ref>, there are only two channels in this system, one of which is ΩΩ_c, a purely repulsive state; and the other is ΩΩ_c^*, which is weakly attractive. Therefore, it is difficult for these channels to form any bound state. However, we still need to confirm the existence of bound states or resonance states by performing the dynamic calculations.
§.§ Bound state calculation
In order to see whether there is any bound state, a dynamic calculation based on RGM <cit.> has been performed. The energies of each channel as well as the one with channel coupling calculation are listed in Table <ref>, Table <ref> and Table <ref>. The first column is the state of every channel; the second column E_th denotes the theoretical threshold of each corresponding state; the third column E_sc represents the energy of every single channel; the fourth column B_sc stands for the binding energy of every single channel, which is B_sc=E_sc-E_th; the fifth column E_cc denotes the lowest energy of the system by channel coupling calculation; and the last column B_cc represents the binding energy with all channels coupling, which is B_cc= E_cc-E_th. Here, we should notice that when the state is unbound, we label it as “ub".
S=-1:
The single channel calculation shows that the channels ΣΣ_c, ΣΣ_c^*, Σ^*Σ_c and Σ^*Σ_c^* are bound states with the binding energies -23 MeV, -25 MeV, -32 MeV and -25 MeV, respectively (see Table <ref>). This conclusion is consistent with the property that there is a strong effective attraction of these channels.
However, for the ΛΛ_c,NΞ_c and NΞ_c^' channels, which are unbound, the energies obtained by single channel calculations are above their corresponding thresholds due to the weak attraction of these channels. For the calculation of the channel coupling, the lowest energy is still above the lowest threshold (ΛΛ_c). Therefore, for this system, no bound states below the lowest threshold were found. For higher-energy single-channel bound states, they can be coupled to the open channels and the scattering process is needed to determine the existence of resonance states.
S=-3:
From Table <ref>, the single channel calculation shows that all these nine channels are unbound. After the channel coupling calculation, the lowest energy of this system is 3790 MeV (still higher than the threshold of the lowest channel ΞΞ_c ), which indicates that the singly charmed dibaryon system with IJ=01, S=-3 is unbound. This is reasonable. The attractions of the channels ΞΞ_c^*,Ξ^*Ξ_c^* and ΛΩ_c^* are not strong enough to form any bound state, and the interaction of the other channels are repulsive, as shown in Fig.<ref>.
S=-5: The situation is similar to that of the S=-3 system. As shown in Table <ref>, both of the channels ΩΩ_c and ΩΩ_c^* are unbound. The lowest energy of the system is higher than the threshold of the ΩΩ_c by the channel coupling calculation. So the system with S=-5 is unbound.
§.§ Resonance states
As mentioned above, some channels are bound due to the strong attractions of the system. However, these states will decay to the corresponding open channels by coupling with them and become resonance states. Besides, some states will become scattering state by the effect of coupling to both the open and closed channels. To further check the existence of the resonance states, we studied the scattering phase shifts of all possible open channels. Since no resonance states are obtained in the S=-3 and S=-5 systems, we only show the scattering phase shifts of the S=-1 system here.
In the S=-1 system, four singly bound states are obtained, which are ΣΣ_c, ΣΣ_c^*, Σ^*Σ_c and Σ^*Σ_c^*, and there are three open channels, which are ΛΛ_c,NΞ_c and NΞ_c^'. We analyze two types of channel coupling in this
work. The first is the two-channel coupling with a singly bound state and a related open channel, while the other is the five-channel coupling with four bound states and a
corresponding open channel. The general features of the calculated results are as follows.
Here, we should note that the horizontal axis E_c.m. in Fig.<ref> is the incident energy without the theoretical threshold of the corresponding open channel. So the resonance mass M^' is obtained by adding E_c.m. and the theoretical threshold of the corresponding open channel. In order to minimize the theoretical errors and compare our predictions with future experimental data, we shift the resonance mass by M=M^'-E_th+E_exp, where E_th and E_exp are the theoretical and experimental thresholds of the resonance state, respectively. Taking the resonance state ΛΛ_c in the ΣΣ_c channel as an example, the resonance mass shown in Fig.<ref>(a) is M^'=3597 MeV, the theoretical threshold is M_th=3595 MeV, and the experimental threshold is M_exp=3618 MeV. Then the final resonance mass M=3597-3595+3618=3620 MeV. The estimated masses and widths of the resonances in different channels are listed in Table<ref>, where M is the resonance mass, Γ_i is the partial decay width of the resonance state decaying to different open channels, and Γ_total is the total decay width of the resonance state.
For the case of the two-channel coupling, in ΛΛ_c scattering process, it is obvious that ΣΣ_c and Σ^*Σ_c^* appear as resonance states, as shown in Fig.<ref>(a) and Fig.<ref>(d), respectively. The resonance mass and decay width of every resonance state are obtained from the ΛΛ_c scattering phase shifts. At the same time, ΣΣ_c^* and Σ^*Σ_c do not behave as resonance states in ΛΛ_c scattering process, as shown in Fig.<ref>(b) and Fig.<ref>(c), respectively. There may be two reasons: the one is that stronger coupling between the two channels causes the bound state to be pushed above the threshold and become a scattering state; the other one is that the coupling between the two channels is so weak that the resonance state does not manifest during the scattering process. To clarify this issue, we calculate the cross matrix elements between the two channels (ΛΛ_c and ΣΣ_c^*/Σ^*Σ_c), they are all close to zero, which means that the coupling between ΛΛ_c and ΣΣ_c^*/Σ^*Σ_c is very weak. Therefore, neither ΣΣ_c^* nor Σ^*Σ_c behaves as a resonance state in the ΛΛ_c scattering phase shifts.
However, in the NΞ_c scattering process, the situation reversed. From Fig.<ref>, both ΣΣ_c^* and Σ^*Σ_c appear as resonance states, while the other two channels ΣΣ_c and Σ^*Σ_c^* do not. The cross matrix elements between NΞ_c and ΣΣ_c/Σ^*Σ_c^* show that the coupling between them is very weak, which results in the absence of resonance state ΣΣ_c/Σ^*Σ_c^* in the NΞ_c scattering phase shift. In the NΞ_c^' scattering process, as shown in Fig.<ref>, the conclusion is similar to the one in the NΞ_c scattering process. All the resonance masses and decay width are shown in Table<ref>.
For the case of five-channel coupling, the scattering phase shifts are shown in Fig.<ref>, and the resonance masses and decay widths are listed in Table<ref>.
There is only one resonance state ΣΣ_c appears in the ΛΛ_c phase shifts, as shown in Fig.<ref>(a). From Table<ref>, the resonance mass of the ΣΣ_c in the five-channel coupling case is 3591 MeV, which is lower than the one in the two-channel coupling case (3620 MeV). This is because the coupling between closed channels will push down the channels with lower energy. At the same time, the channel coupling can also raises the energy of the higher state, even pushes the higher state above the threshold. Therefore, the resonance state Σ^*Σ_c^* in two-channel coupling disappears in the five-channel coupling. Similarly, there is only one resonance state ΣΣ_c^* appears in the NΞ_c phase shifts, which is shown in Fig.<ref>(b). In the NΞ_c^' phase shifts, the situation is slightly different. Two resonance states ΣΣ_c^* and Σ^*Σ_c are shown in Fig.<ref>(c). By comparing with the results in the two-channel coupling, the resonance mass of ΣΣ_c^* is 30 MeV lower, while the one of Σ^*Σ_c is 9 MeV higher.
However, since the resonance Σ^*Σ_c disappears in the NΞ_c scattering phase shifts, it will decay through the NΞ_c open channel. So the Σ^*Σ_c cannot be identified as a resonance state.
All these results show that the existence of the resonance states and the resonance energy are both affected by the multi-channel coupling. So the effect of the channel coupling cannot be ignored in the multi-quark system.
§ SUMMARY
The S-wave singly charmed dibaryon systems with strangeness numbers S=-1, -3 and -5 are systemically investigated by using the RGM in the framework of ChQM. Our goal is to search for any bound state or resonance state of singly charmed dibaryon systems. Herein, the effective potentials are calculated to explore the interactions of between two baryons. Both the single-channel and the coupled-channel dynamic bound-state calculations are carried out to search for possible
states. Meanwhile, the study of the scattering process of the open channels is carried out to confirm possible resonance states.
According to the numerical results, in the S=-1 system, the attractions between Σ/Σ^* and Σ_c/Σ_c^* are large enough to form singly bound states ΣΣ_c, ΣΣ^*_c, Σ^*Σ_c and Σ^*Σ_c^*. However, these states can couple with the corresponding open channels, and become resonance states or scattering states. By including the effect of channel-coupling, two resonance states with strangeness numbers S=-1 are obtained. The one is the ΣΣ_c state with the mass and width 3591 MeV and 11.1 MeV, respectively, and the decay channel is ΛΛ_c. The other is the ΣΣ^∗_c state with the mass and width 3621-3624 MeV and 14.9 MeV, respectively, and the decay channels are NΞ_c and NΞ^'_c. All these dibaryons are worth searching for in experiments, although it will be a challenging subject.
In the past two decades, numerous heavy-flavor hadrons have been discovered in experiments, which are considered as promising candidates for tetraquarks and pentaquarks. In Ref. <cit.>, the authors claimed that the existence of molecular states in DD^*, DD̅^*, and Σ_cD̅^(*) systems leads to the emergence of a large number of deuteronlike hexaquarks in the heavy flavor sectors. The systems composed of charmed baryons and hyperons are predicted by the mass spectra calculation. In Ref. <cit.>, the charmed-strange molecular dibaryons are investigated in a quasipotential Bethe-Salpeter approach together with the one-boson-exchange model. The results suggested that attractions widely exist in charmed-strange system, and the S-wave bound states can be produced
from most of the channels. In this work, fewer charmed dibaryon resonance states are obtained, since the coupling with the open channels are considered.
The study of the scattering process is an effective way to look for the genuine resonances. However, to distinguish the various
explanations and confirm the existence of the exotic hadron states is still very difficult, and requires the joint
efforts of both theorists and experimentalists.
This work is supported partly by the National Natural Science Foundation of China under Contracts Nos. 11675080, 11775118 and 11535005.
§ APPENDIX
Here, we only list the wave functions we used in this work. The spin wave function of a q^3 cluster is labeled as χ_s,s_z^σ, where s and s_z are the spin quantum number and the third component, respectively. For wave functions with the same quantum number but different symmetries, we distinguish them with different numbers. For example, χ_1/2,1/2^σ1 and χ_1/2,1/2^σ2 represent respectively the symmetric and antisymmetric spin wave functions with spin quantum number 1/2.
χ_3/2, 3/2^σ = ααα
χ_3/2, 1/2^σ = 1/√(3)(ααβ+αβα+βαα)
χ_3/2,-1/2^σ = 1/√(3)(αββ+βαβ+ββα)
χ_3/2,-3/2^σ = βββ
χ_1/2,1/2^σ1 = √(1/6)(2 ααβ-αβα-βαα)
χ_1/2,1/2^σ2 = √(1/2)(αβα-βαα)
χ_1/2,-1/2^σ1 = √(1/6)(αββ+βαβ-2 ββα)
χ_1/2,-1/2^σ2 = √(1/2)(αββ-βαβ)
The flavor wave functions of the q^3 cluster χ_I, I_z^f (I and I_z are the isospin quantum number and the third component, respectively) are as follows. Here, both the light and heavy quarks are considered as identical particles with the SU(4) extension.
χ_0,0^f1 = 1/2(usd+sud-sdu-dsu)
χ_0,0^f2 = √(1/12)(2uds-2dsu+sdu+usd-sud-dsu)
χ_0,0^f3 = 1/2(ucd+cud-cdu-dcu)
χ_0,0^f4 = √(1/12)(2ucd-2dcu+cdu+ucd-cud-dcu)
χ_0,0^f5 = √(1/6)(2ssc-scs-css)
χ_0,0^f6 = √(1/2)(scs-css)
χ_0,0^f7 = √(1/3)(ssc+scs+css)
χ_0,0^f8 = sss
χ_1/2,-1/2^f1 = 1/2(dcs+cds-csd-scd)
χ_1/2,-1/2^f2 = √(1/12)(2dsc-2sdc+csd+dcs-cds-scd)
χ_1/2,-1/2^f3 = √(1/6)(udd+dud-2ddu)
χ_1/2,-1/2^f4 = √(1/2)(udd-dud)
χ_1/2,-1/2^f5 = √(1/12)(2dsc+2sdc-csd-dcs-cds-scd)
χ_1/2,-1/2^f6 = 1/2(dcs+scd-csd-cds)
χ_1/2,-1/2^f7 = √(1/6)(dss+sds-2ssd)
χ_1/2,-1/2^f8 = √(1/2)(dss-sds)
χ_1/2,-1/2^f9 = √(1/6)(dsc+sdc+csd+dcs+cds+scd)
χ_1/2,-1/2^f10 = √(1/3)(dss+sds+ssd)
χ_1/2,1/2^f1 = √(1/6)(2uud-udu-duu)
χ_1/2,1/2^f2 = √(1/2)(udu-duu)
χ_1/2,1/2^f3 = 1/2(ucs+cus-csu-scu)
χ_1/2,1/2^f4 = √(1/12)(2usc-2suc+csu+ucs-cus-scu)
χ_1/2,1/2^f5 = √(1/12)(2usc+2suc-csu-ucs-cus-scu)
χ_1/2,1/2^f6 = 1/2(ucs+scu-csu-cus)
χ_1/2,1/2^f7 = √(1/6)(uss+sus-2ssu)
χ_1/2,1/2^f8 = √(1/2)(uss-sus)
χ_1/2,1/2^f9 = √(1/6)(usc+suc+csu+ucs+cus+scu)
χ_1/2,1/2^f10 = √(1/3)(uss+sus+ssu)
χ_1,-1^f1 = √(1/6)(2ddc-dcd-cdd)
χ_1,-1^f2 = √(1/2)(dcd-cdd)
χ_1,-1^f3 = √(1/6)(2dds-dsd-sdd)
χ_1,-1^f4 = √(1/2)(dsd-sdd)
χ_1,-1^f5 = √(1/3)(ddc+dcd+cdd)
χ_1,-1^f6 = √(1/3)(dds+dsd+sdd)
χ_1,0^f1 = √(1/12)(2uds+2dus-sdu-usd-sud-dsu)
χ_1,0^f2 = 1/2(usd+dsu-sdu-sud)
χ_1,0^f3 = √(1/12)(2udc+2duc-cdu-ucd-cud-dcu)
χ_1,0^f4 = 1/2(ucd+dcu-cdu-cud)
χ_1,0^f5 = √(1/6)(udc+duc+cdu+ucd+cud+dcu)
χ_1,0^f6 = √(1/6)(uds+dus+sdu+usd+sud+dsu)
χ_1,1^f1 = √(1/6)(2uus-usu-suu)
χ_1,1^f2 = √(1/2)(usu-suu)
χ_1,1^f3 = √(1/6)(2uuc-ucu-cuu)
χ_1,1^f4 = √(1/2)(ucu-cuu)
χ_1,1^f5 = √(1/3)(uuc+ucu+cuu)
χ_1,1^f6 = √(1/3)(uus+usu+suu)
The color wave function of a color-singlet q^3 cluster is:
χ^c = √(1/6)(r g b-r b g+g b r-g r b+b r g-b g r)
The total flavor-spin-color wave function of the dibaryon system can be acquired by substituting the wave functions of the flavor, the spin, and the color parts according to the given quantum number of the system, and the total flavor-spin-color wave function for each channel is shown as follows. ϕ_I_z,s_z^B represents the wave function of the q^3 cluster (I_z and s_z are the third component of the isospin and spin quantum numbers, B is the corresponding baryon). Then we couple the two baryon wave functions by Clebsch-Gordan coefficients according to the total quantum number requirement, and we can obtain the total wave functions.
There are seven channels for the C=1,S=-1 system:
| ΛΛ _c⟩ = ϕ _0,1/2^Λϕ _0,1/2^Λ _c
| NΞ _c⟩ = √(1/2) [ ϕ _1/2,1/2^pϕ _-1/2 ,1/2^Ξ _c -ϕ _-1/2,1/2^nϕ _1/2 ,1/2^Ξ _c ]
| NΞ _c^'⟩ = √(1/2) [ ϕ _1/2,1/2^pϕ _-1/2 ,1/2^Ξ _c ^' -ϕ _-1/2,1/2^nϕ _1/2 ,1/2^Ξ _c^' ]
| ΣΣ _c⟩ = √(1/3) [ ϕ _1,1/2^Σϕ _-1,1/2^Σ _c-ϕ _0,1/2^Σϕ _0,1/2^Σ _c +ϕ _-1,1/2^Σϕ _1,1/2^Σ _c ]
|ΣΣ _c^*⟩= 1/2 [ ϕ _0,-1/2^Σϕ _0,3/2^Σ _c^*-ϕ _1,-1/2^Σϕ _-1,3/2^Σ _c^*-ϕ _-1,-1/2^Σϕ _1,3/2^Σ _c^* ]
-√(1/12) [ ϕ _0,1/2^Σϕ _0,1/2^Σ _c^*- ϕ _1,1/2^Σϕ _-1,1/2^Σ _c^*-ϕ _-1,1/2^Σϕ _1,1/2^Σ _c^* ]
|Σ^*Σ _c⟩= 1/2 [ ϕ _1,3/2^Σ^*ϕ _-1,-1/2^Σ _c-ϕ _0,3/2^Σ^*ϕ _0,-1/2^Σ _c+ϕ _-1,3/2^Σ^*ϕ _1,-1/2^Σ _c ]
-√(1/12) [ ϕ _1,1/2^Σ ^*ϕ _-1,1/2^Σ _c- ϕ _0,1/2^Σ ^*ϕ _0,1/2^Σ _c-ϕ _-1,1/2^Σ^*ϕ _1,1/2^Σ _c ]
|Σ^*Σ _c^*⟩= √(1/10) [ ϕ _1,3/2^Σ^*ϕ _-1,-1/2^Σ _c^*-ϕ _0,3/2^Σ^*ϕ _0,-1/2^Σ _c^*+ϕ _-1,3/2^Σ^*ϕ _1,-1/2^Σ _c^*.
.+ϕ _1,-1/2^Σ^*ϕ _-1,3/2^Σ _c^*-ϕ _0,-1/2^Σ^*ϕ _0,3/2^Σ _c^*+ϕ _-1,-1/2^Σ^*ϕ _1,3/2^Σ _c^* ]
-√(2/15) [ ϕ _1,1/2^Σ ^*ϕ _-1,1/2^Σ _c^*- ϕ _0,1/2^Σ ^*ϕ _0,1/2^Σ _c^*+ϕ _-1,1/2^Σ^*ϕ _1,1/2^Σ _c^* ]
nine channels for the C=1,S=-3 system:
| ΛΩ _c⟩= ϕ _0,1/2^Λϕ _0,1/2^Ω _c
| ΛΩ _c^*⟩ = 1/2ϕ _0,1/2^Λϕ _0,1/2^Ω _c^*-√(3/4)ϕ _0,-1/2^Λϕ _0,3/2^Ω _c^*
| Λ _cΩ⟩ = 1/2ϕ _0,1/2^Λ _cϕ _0,1/2^Ω-√(3/4)ϕ _0,-1/2^Λ _cϕ _0,3/2^Ω
|ΞΞ _c⟩ = √(1/2) [ ϕ _1/2,1/2^Ξϕ _-1/2,1/2^Ξ _c - ϕ _-1/2,1/2^Ξϕ _1/2,1/2^Ξ _c ]
|ΞΞ _c^'⟩ = √(1/2) [ ϕ _1/2,1/2^Ξϕ _-1/2,1/2^Ξ _c^' - ϕ _-1/2,1/2^Ξϕ _1/2,1/2^Ξ _c^' ]
|ΞΞ _c^*⟩ = √(3/8) [ ϕ _-1/2,-1/2^Ξϕ _1/2,3/2^Ξ _c^* - ϕ _1/2,-1/2^Ξϕ _-1/2,3/2^Ξ _c^* ]
-√(1/8) [ϕ _-1/2,1/2^Ξϕ _1/2,1/2^Ξ _c^* - ϕ _1/2,1/2^Ξϕ _-1/2,1/2^Ξ _c^* ]
|Ξ^*Ξ _c⟩ = √(3/8) [ ϕ _1/2,3/2^Ξ ^* ϕ _-1/2,-1/2^Ξ _c - ϕ _-1/2,3/2^Ξ ^* ϕ _1/2,-1/2^Ξ _c ]
-√(1/8) [ϕ _1/2,1/2^Ξ^* ϕ _-1/2,1/2^Ξ _c - ϕ _-1/2,1/2^Ξ ^* ϕ _1/2,1/2^Ξ _c ]
|Ξ^*Ξ _c^'⟩ = √(3/8) [ ϕ _1/2,3/2^Ξ ^* ϕ _-1/2,-1/2^Ξ _c^' - ϕ _-1/2,3/2^Ξ ^* ϕ _1/2,-1/2^Ξ _c^' ]
-√(1/8) [ϕ _1/2,1/2^Ξ^* ϕ _-1/2,1/2^Ξ _c^' - ϕ _-1/2,1/2^Ξ ^* ϕ _1/2,1/2^Ξ _c^' ]
|Ξ^*Ξ _c^*⟩ = √(3/20) [ ϕ _1/2,3/2^Ξ ^* ϕ _-1/2,-1/2^Ξ _c^* - ϕ _-1/2,3/2^Ξ ^* ϕ _1/2,-1/2^Ξ _c^*.
.+ϕ _1/2,-1/2^Ξ ^* ϕ _-1/2,3/2^Ξ _c^*-ϕ _-1/2,-1/2^Ξ ^* ϕ _1/2,3/2^Ξ _c^* ]
-√(3/10) [ϕ _1/2,1/2^Ξ ^* ϕ _-1/2,1/2^Ξ _c^*-ϕ _-1/2,1/2^Ξ ^* ϕ _1/2,1/2^Ξ _c^* ]
and two channels for the C=1,S=-5 system:
| ΩΩ _c⟩= √(-3/4)ϕ _0,-1/2^Ω_cϕ _0,3/2^Ω+√(1/4)ϕ _0,1/2^Ω_cϕ _0,1/2^Ω
| ΩΩ _c^*⟩ = √(3/10) [ ϕ _0,3/2^Ωϕ _0,-1/2^Ω_c^* +ϕ _0,-1/2^Ωϕ _0,3/2^Ω_c^* ]
-√(2/5)ϕ _0,1/2^Ωϕ _0,1/2^Ω _c ^*
where the expression of ϕ_I_z,s_z^B is shown as follows:
ϕ _0,1/2^Λ= √(1/2) ( χ _0,0^f1χ _1/2,1/2^σ 1 +χ _0,0^f2χ _1/2,1/2^σ 2 )χ ^c
ϕ _0,1/2^Λ_c= √(1/2) ( χ _0,0^f3χ _1/2,1/2^σ 1 +χ _0,0^f4χ _1/2,1/2^σ 2 )χ ^c
ϕ _1/2,1/2^p= √(1/2) ( χ _1/2,1/2^f1χ _1/2,1/2^σ 1 +χ _1/2,1/2^f2χ _1/2,1/2^σ 2 )χ ^c
ϕ _-1/2,1/2^Ξ_c= √(1/2) ( χ _1/2,-1/2^f1χ _1/2,1/2^σ 1 +χ _1/2,-1/2^f2χ _1/2,1/2^σ 2 )χ ^c
ϕ _-1/2,1/2^n= √(1/2) ( χ _1/2,-1/2^f3χ _1/2,1/2^σ 1 +χ _1/2,-1/2^f4χ _1/2,1/2^σ 2 )χ ^c
ϕ _1/2,1/2^Ξ_c= √(1/2) ( χ _1/2,1/2^f3χ _1/2,1/2^σ 1 +χ _1/2,1/2^f4χ _1/2,1/2^σ 2 )χ ^c
ϕ _-1/2,1/2^Ξ_c^'= √(1/2) ( χ _1/2,-1/2^f5χ _1/2,1/2^σ 1 +χ _1/2,-1/2^f6χ _1/2,1/2^σ 2 )χ ^c
ϕ _1/2,1/2^Ξ_c^'= √(1/2) ( χ _1/2,1/2^f5χ _1/2,1/2^σ 1 +χ _1/2,1/2^f6χ _1/2,1/2^σ 2 )χ ^c
ϕ _1,1/2^Σ= √(1/2) ( χ _1,1^f1χ _1/2,1/2^σ 1 +χ _1,1^f2χ _1/2,1/2^σ 2 )χ ^c
ϕ _-1,1/2^Σ_c= √(1/2) ( χ _1,-1^f1χ _1/2,1/2^σ 1 +χ _1,-1^f2χ _1/2,1/2^σ 2 )χ ^c
ϕ _0,1/2^Σ= √(1/2) ( χ _1,0^f1χ _1/2,1/2^σ 1 +χ _1,0^f2χ _1/2,1/2^σ 2 )χ ^c
ϕ _0,1/2^Σ_c= √(1/2) ( χ _1,0^f3χ _1/2,1/2^σ 1 +χ _1,0^f4χ _1/2,1/2^σ 2 )χ ^c
ϕ _-1,1/2^Σ= √(1/2) ( χ _1,-1^f3χ _1/2,1/2^σ 1 +χ _1,-1^f4χ _1/2,1/2^σ 2 )χ ^c
ϕ _1,1/2^Σ_c= √(1/2) ( χ _1,1^f3χ _1/2,1/2^σ 1 +χ _1,1^f4χ _1/2,1/2^σ 2 )χ ^c
ϕ _0,-1/2^Σ= √(1/2) ( χ _1,0^f1χ _1/2,-1/2^σ 1 +χ _1,0^f2χ _1/2,-1/2^σ 2 )χ ^c
ϕ _1,-1/2^Σ= √(1/2) ( χ _1,1^f1χ _1/2,-1/2^σ 1 +χ _1,1^f2χ _1/2,-1/2^σ 2 )χ ^c
ϕ _-1,-1/2^Σ= √(1/2) ( χ _1,-1^f3χ _1/2,-1/2^σ 1 +χ _1,-1^f4χ _1/2,-1/2^σ 2 )χ ^c
ϕ _0,1/2^Ω_c= √(1/2) ( χ _0,0^f5χ _1/2,1/2^σ 1 +χ _0,0^f6χ _1/2,1/2^σ 2 )χ ^c
ϕ _0,-1/2^Λ= √(1/2) ( χ _0,0^f1χ _1/2,-1/2^σ 1 +χ _0,0^f2χ _1/2,-1/2^σ 2 )χ ^c
ϕ _1/2,1/2^Ξ= √(1/2) ( χ _1/2,1/2^f7χ _1/2,1/2^σ 1 +χ _1/2,1/2^f8χ _1/2,1/2^σ 2 )χ ^c
ϕ _-1/2,1/2^Ξ= √(1/2) ( χ _1/2,-1/2^f7χ _1/2,1/2^σ 1 +χ _1/2,-1/2^f8χ _1/2,1/2^σ 2 )χ ^c
ϕ _-1/2,-1/2^Ξ= √(1/2) ( χ _1/2,-1/2^f7χ _1/2,-1/2^σ 1 +χ _1/2,-1/2^f8χ _1/2,-1/2^σ 2 )χ ^c
ϕ _1/2,-1/2^Ξ= √(1/2) ( χ _1/2,1/2^f7χ _1/2,-1/2^σ 1 +χ _1/2,1/2^f8χ _1/2,-1/2^σ 2 )χ ^c
ϕ _-1/2,-1/2^Ξ_c^'= √(1/2) ( χ _1/2,-1/2^f5χ _1/2,-1/2^σ 1 +χ _1/2,-1/2^f6χ _1/2,-1/2^σ 2 )χ ^c
ϕ _1/2,-1/2^Ξ_c^'= √(1/2) ( χ _1/2,1/2^f5χ _1/2,-1/2^σ 1 +χ _1/2,1/2^f6χ _1/2,-1/2^σ 2 )χ ^c
ϕ _0,-1/2^Ω_c= √(1/2) ( χ _0,0^f5χ _1/2,-1/2^σ 1 +χ _0,0^f6χ _1/2,-1/2^σ 2 )χ ^c
ϕ_0,3/2^Σ_c^*= χ_1,0^f5χ_
3/2,3/2^σχ^c, ϕ_-1,3/2^Σ_c^*=χ_1,-1^f5χ_3/2,3/2^σχ^c
ϕ_1,3/2^Σ_c^*= χ_1,1^f5χ_
3/2,3/2^σχ^c, ϕ_0,1/2^Σ_c^*=χ_1,0^f5χ_3/2,1/2^σχ^c
ϕ_-1,1/2^Σ_c^*= χ_1,-1^f5χ_
3/2,1/2^σχ^c, ϕ_1,1/2^Σ_c^*=χ_1,1^f5χ_3/2,1/2^σχ^c
ϕ_1,3/2^Σ^*= χ_1,1^f6χ_
3/2,3/2^σχ^c, ϕ_0,3/2^Σ^*=χ_1,0^f6χ_3/2,3/2^σχ^c
ϕ_-1,3/2^Σ^*= χ_1,-1^f6χ_
3/2,3/2^σχ^c, ϕ_1,1/2^Σ^*=χ_1,1^f6χ_3/2,1/2^σχ^c
ϕ_0,1/2^Σ^*= χ_1,0^f6χ_
3/2,1/2^σχ^c, ϕ_-1,1/2^Σ^*=χ_1,-1^f6χ_3/2,1/2^σχ^c
ϕ_-1,-1/2^Σ_c^*= χ_1,-1^f5χ_
3/2,-1/2^σχ^c, ϕ_0,-1/2^Σ_c^*=χ_1,0^f5χ_3/2,-1/2^σχ^c
ϕ_1,-1/2^Σ_c^*= χ_1,1^f5χ_
3/2,-1/2^σχ^c, ϕ_1,-1/2^Σ^*=χ_1,1^f6χ_3/2,-1/2^σχ^c
ϕ_0,-1/2^Σ_c^*= χ_1,0^f6χ_
3/2,-1/2^σχ^c, ϕ_-1,-1/2^Σ^*=χ_1,-1^f6χ_3/2,-1/2^σχ^c
ϕ_0,3/2^Ω_c^*= χ_0,0^f7χ_
3/2,3/2^σχ^c, ϕ_0,1/2^Ω_c^*=χ_0,0^f7χ_3/2,1/2^σχ^c
ϕ_0,1/2^Ω= χ_0,0^f8χ_
3/2,1/2^σχ^c, ϕ_0,3/2^Ω=χ_0,0^f8χ_3/2,3/2^σχ^c
ϕ_1/2,3/2^Ξ_c^*= χ_
1/2,1/2^f9χ_3/2,
3/2^σχ^c, ϕ_-1/2,3/2^Ξ_c^*=χ_
1/2,-1/2^f9χ_3/2,
3/2^σχ^c
ϕ_1/2,1/2^Ξ_c^*= χ_
1/2,1/2^f9χ_3/2,
1/2^σχ^c, ϕ_-1/2,1/2^Ξ_c^*=χ_
1/2,-1/2^f9χ_3/2,
1/2^σχ^c
ϕ_1/2,3/2^Ξ^*= χ_
1/2,1/2^f10χ_3/2,
3/2^σχ^c, ϕ_-1/2,3/2^Ξ^*=χ_
1/2,-1/2^f10χ_3/2,
3/2^σχ^c
ϕ_1/2,1/2^Ξ^*= χ_
1/2,1/2^f10χ_3/2,
1/2^σχ^c, ϕ_-1/2,1/2^Ξ^*=χ_
1/2,-1/2^f10χ_3/2,
1/2^σχ^c
ϕ_-1/2,-1/2^Ξ_c^*= χ_
1/2,-1/2^f9χ_3/2,
-1/2^σχ^c, ϕ_1/2,-1/2^Ξ_c^*=χ_
1/2,1/2^f9χ_3/2,
-1/2^σχ^c
ϕ_1/2,-1/2^Ξ^*= χ_
1/2,1/2^f10χ_3/2,
-1/2^σχ^c, ϕ_-1/2,-1/2^Ξ^*=χ_
1/2,-1/2^f10χ_3/2,
-1/2^σχ^c
ϕ_0,-1/2^Ω= χ_0,0^f8χ_
3/2,-1/2^σχ^c, ϕ_0,3/2^Ω_c^*=χ_0,0^f7χ_3/2,-1/2^σχ^c
70
XYZ1 H. X. Chen, W. Chen, X. Liu and S. L. Zhu, Phys. Rept. 639, 1-121 (2016)
XYZ2 Y. R. Liu, H. X. Chen, W. Chen, X. Liu and S. L. Zhu, Prog. Part. Nucl. Phys. 107, 237-320 (2019)
XYZ3 N. Brambilla, S. Eidelman, C. Hanhart, A. Nefediev, C. P. Shen, C. E. Thomas, A. Vairo and C. Z. Yuan, Phys. Rept. 873, 1-154 (2020)
XYZ4 H. X. Chen, W. Chen, X. Liu, Y. R. Liu and S. L. Zhu, Rept. Prog. Phys. 86, no.2, 026201 (2023)
deutron Harold C. Urey, F. G. Brickwedde, and G. M. Murphy, A Hydrogen Isotope of Mass 2, Phys. Rev. D 39, 164 (1932).
WASA1 M. Bashkanov, C. Bargholtz, M. Berlowski, D. Bogoslawsky, H. Calen, H. Clement, L. Demiroers, E. Doroshkevich, D. Duniec and C. Ekstrom, et al. Phys. Rev. Lett. 102, 052301 (2009)
WASA2 P. Adlarson et al. [WASA-at-COSY], Phys. Rev. Lett. 106, 242302 (2011)
WASA3 P. Adlarson et al. [WASA-at-COSY], Phys. Rev. C 90, no.3, 035204 (2014)
dstar1 A. Gal and H. Garcilazo, Phys. Rev. Lett. 111, 172301 (2013)
dstar2 M. Bashkanov, S. J. Brodsky and H. Clement, Phys. Lett. B 727, 438-442 (2013)
dstar3 J. L. Ping, H. X. Huang, H. R. Pang, F. Wang and C. W. Wong, Phys. Rev. C 79, 024001 (2009)
dstar4 H. Huang, J. Ping and F. Wang, Phys. Rev. C 89, no.3, 034001 (2014)
nomega1 J. Adam et al. [STAR], Phys. Lett. B 790, 490-497 (2019)
nomega2 M. Oka, Phys. Rev. D 38, 298 (1988)
nomega3 H. r. Pang, J. l. Ping, F. Wang, J. T. Goldman and E. g. Zhao, Phys. Rev. C 69, 065207 (2004)
nomega4 M. Chen, H. Huang, J. Ping and F. Wang, Phys. Rev. C 83, 015202 (2011)
nomega5 H. Huang, J. Ping and F. Wang, Phys. Rev. C 92, 065202 (2015)
nomega6 Q. B. Li and P. N. Shen, Eur. Phys. J. A 8, 417-421 (2000)
nomega7 F. Etminan et al. [HAL QCD], Nucl. Phys. A 928, 89-98 (2014)
nomega8 T. Iritani et al. [HAL QCD], Phys. Lett. B 792, 284-289 (2019)
LHCb R. Aaij et al. (LHCb Collaboration), Phys. Rev. Lett. 119, 112001 (2017)
c1 Y. R. Liu and M. Oka, Phys. Rev. D 85, 014015 (2012).
c2 H. X. Huang, J. L. Ping, and F. Wang, Phys. Rev. C 87, 034002 (2013).
cc1 W. Meguro, Y. R. Liu, and M. Oka, Phys. Lett. B 704, 547 (2011).
cc2 N. Lee, Z. G. Luo, X. L. Chen, and S. L. Zhu, Phys. Rev. D 84, 014031 (2011).
cc3 N. Li and S. L. Zhu, Phys. Rev. D 86, 014020 (2012).
ccc1 P. Junnarkar and N. Mathur, Phys. Rev. Lett. 123, no.16, 162003 (2019)
ccc2 Z. G. Wang, Phys. Rev. D 102, no.3, 034008 (2020)
ccc3 Y. W. Pan, M. Z. Liu and L. S. Geng, Phys. Rev. D 102, no.5, 054025 (2020)
ccc5 R. Chen, F. L. Wang, A. Hosaka and X. Liu, Phys. Rev. D 97, no.11, 114011 (2018)
ccc4 H. Huang, J. Ping and F. Wang, Phys. Rev. C 101, no.1, 015204 (2020)
2 J. Vijande, F. Fernandez and A. Valcarce, J. Phys. G 31, 481 (2005)
meson J. Ping, C. Deng, H. Huang, F. F. Dong and F. Wang, EPJ Web Conf. 20, 01007 (2012)
4 X. Chen and J. Ping, Eur. Phys. J. C 76, no.6, 351 (2016)
chqm2 D. R. Entem, F. Fernandez and A. Valcarce, Phys. Rev. C 62, 034002 (2000)
5 X. Hu and J. Ping, Eur. Phys. J. C 82, no.2, 118 (2022)
nn P. Xu, H. X. Huang, J. L. Ping and F. Wang, Chin. Phys. Lett. 28, 031301 (2011)
full H. Huang, J. Ping, X. Zhu and F. Wang, Eur. Phys. J. C 82, no.9, 805 (2022)
ChQM(RPP) A. Valcarce, H. Garcilazo, F. Fernandez and P. Gonzalez, Rept. Prog. Phys. 68, 965-1042 (2005)
chqm1 I. T. Obukhovsky and A. M. Kusainov, Phys. Lett. B 238, 142-148 (1990)
chqm3 F. Fernandez, A. Valcarce, U. Straub and A. Faessler, J. Phys. G 19, 2013-2026 (1993)
Glozman L.Y. Glozman, D.O. Riska, Nucl. Phys. A 603 326 (1996).
Stancu F. Stancu, Eur. Phys. J. C 79, 957 (2019).
ChQM1 J. Vijande, F. Fernandez and A. Valcarce, J. Phys. G 31, 481 (2005)
ChQM2 Y. Tan, W. Lu and J. Ping, Eur. Phys. J. Plus 135, no.9, 716 (2020)
ChQM3 X. Hu and J. Ping, Eur. Phys. J. C 82, no.2, 118 (2022)
D Z. Xia, S. Fan, X. Zhu, H. Huang and J. Ping, Phys. Rev. C 105, no.2, 025201 (2022)
PDG J. Beringer, et al., Particle Data Group, Phys. Rev. D 86, 010001 (2012).
RGM1 J. A. Wheeler, Phys. Rev. 32, 1083 (1937).
RGM2 M. Kamimura, Supp. Prog. Theo. Phys. 62, 236 (1977).
KHK Supplement of the Progress of Theoretical Physics, No. 62, 1977
Wang:2024riuB. Wang, K. Chen, L. Meng and S. L. Zhu,
Phys. Rev. D 110, no.1, 014038 (2024)
Kong:2022rvd S. Y. Kong, J. T. Zhu and J. He,
Eur. Phys. J. C 82, no.9, 834 (2022)
|
http://arxiv.org/abs/2409.02066v2 | 20240903171355 | Robust Clustering on High-Dimensional Data with Stochastic Quantization | [
"Anton Kozyriev",
"Vladimir Norkin"
] | cs.LG | [
"cs.LG",
"math.OC",
"90C15"
] |
Anton Kozyriev and Vladimir Norkin
National Technical University of Ukraine ”Igor Sikorsky Kyiv Polytechnic Institute”, Kyiv, 03056, Ukraine [email protected] V.M.Glushkov Institute of Cybernetics, National Academy of Sciences of Ukraine, Kyiv, 03178, Ukraine [email protected]
Robust Clustering on High-Dimensional Data with Stochastic Quantization
Anton Kozyriev1 Vladimir Norkin1,2
September 9, 2024
=======================================================================
§ ABSTRACT
This paper addresses the limitations of traditional vector quantization (clustering) algorithms, particularly K-Means and its variant K-Means++, and explores the Stochastic Quantization (SQ) algorithm as a scalable alternative for high-dimensional unsupervised and semi-supervised learning problems. Some traditional clustering algorithms suffer from inefficient memory utilization during computation, necessitating the loading of all data samples into memory, which becomes impractical for large-scale datasets. While variants such as Mini-Batch K-Means partially mitigate this issue by reducing memory usage, they lack robust theoretical convergence guarantees due to the non-convex nature of clustering problems. In contrast, the Stochastic Quantization algorithm provides strong theoretical convergence guarantees, making it a robust alternative for clustering tasks. We demonstrate the computational efficiency and rapid convergence of the algorithm on an image classification problem with partially labeled data, comparing model accuracy across various ratios of labeled to unlabeled data. To address the challenge of high dimensionality, we trained Triplet Network to encode images into low-dimensional representations in a latent space, which serve as a basis for comparing the efficiency of both the Stochastic Quantization algorithm and traditional quantization algorithms. Furthermore, we enhance the algorithm's convergence speed by introducing modifications with an adaptive learning rate.
§ INTRODUCTION
Quantization and clustering are fundamental encoding techniques that provide compact representations of original data <cit.>. Clustering algorithms have emerged as prominent tools for unsupervised learning, with recent applications spanning diverse domains such as location-allocation problems <cit.>, document classification <cit.>, and data compression <cit.>.
In this context, we consider a random variable ξ with values in Euclidean space ℝ^n and distribution P(dξ), representing the original distribution. The encoded discrete distribution is parameterized by a set of atoms {y_1, …, y_K} with corresponding probabilities {q_1, …, q_K}. The optimal quantization problem aims to find the encoded distribution that minimizes the distance to the original distribution. This mathematical structure is analogous to the optimal clustering problem, where the objective is to determine the positions of K cluster centers {y_1, …, y_K} such that the sum of distances from each element ξ to the nearest cluster center is minimized.
The K-Means algorithm, proposed by Lloyd <cit.>, has been widely used for solving quantization and clustering problems, with numerous extensions <cit.>. Bottou and Bengio <cit.> interpreted the K-Means algorithm as an analogue of Newton's method and proposed several stochastic gradient descent algorithms for solving optimal quantization and clustering problems. However, traditional clustering algorithms are limited by the requirement to load all training data into memory, rendering them non-scalable for large datasets. To address this limitation, Sculley <cit.> introduced the Mini-Batch K-Means algorithm, which utilizes only a small subset of possible ξ values at each iteration.
The Stochastic Quantization algorithm reframes the clustering problem as a stochastic transportation problem <cit.> by minimizing the distance between elements of the original distribution {ξ} and atoms of the encoded discrete distribution {y_k}. This approach employs Stochastic Gradient Descent (SGD) <cit.> to search for an optimal minimum, leveraging its computational efficiency in large-scale machine learning problems <cit.>. The use of stochastic approximation allows the algorithm to update parameters with only one element ξ per iteration, ensuring memory efficiency without compromising convergence to the minimum <cit.>.
This paper explores advanced modifications of the Stochastic Quantization algorithm, incorporating accelerated variants of SGD <cit.> and adaptive learning rate techniques <cit.> to enhance convergence speed. Norkin et al. <cit.> provide a comprehensive comparison of various SGD variants, highlighting their respective advantages and limitations, while also offering convergence speed estimations.
Given that the optimal quantization problem is non-smooth and non-convex, specialized methods are required <cit.>. To validate its convergence, we apply the general theory of non-smooth, non-convex stochastic optimization <cit.>. While traditional clustering algorithms lack theoretical foundations for convergence guarantees and rely primarily on seeding techniques <cit.>, stochastic optimization theory provides specific conditions for the local convergence of the Stochastic Quantization algorithm, which we supplement in this research.
This paper introduces a novel approach to address semi-supervised learning challenges on high-dimensional data by integrating the Stochastic Quantization algorithm with a deep learning model based on the Triplet Network architecture <cit.>. The proposed method encodes images into low-dimensional representations in the ℝ^3 latent space, generating meaningful encoded features for the Stochastic Quantization algorithm. By employing the Triplet Network, this approach overcomes the limitations of quantization and clustering algorithms in high-dimensional spaces, such as visualization difficulties and decreased precision as the number of dimensions increases <cit.>.
To illustrate the efficiency and scalability of the Stochastic Quantization algorithm, we conducted experiments on a semi-supervised image classification problem using partially labeled data from the MNIST dataset <cit.>. The Triplet Network is initially trained on the labeled portion of the dataset as a supervised learning model. Subsequently, the trained network is utilized to project the remaining unlabeled data onto the latent space, which serves as input for training the Stochastic Quantization algorithm. The performance of the proposed solution is evaluated using the F1-score metric <cit.> for multi-label classification across various ratios of labeled to unlabeled data.
§ STOCHASTIC QUANTIZATION
Unlike traditional clustering methods that minimize the distance between each element of Ξ = {ξ_i, i = 1, …, I} and the nearest center Y = {y_k, k = 1, …, K}, Stochastic Quantization conceptualizes the feature set Ξ and cluster centers Y as discrete probability distributions. The Wasserstein (or Kantorovich–Rubinstein) distance is employed to minimize distortion between these distributions when representing a continuous distribution by a discrete one <cit.>. Subsequent research <cit.> has explored the application of quantization algorithms to solve optimal allocation problems for service centers, where each atom of the discrete distribution represents the location of facilities and customers, respectively.
<cit.>. Optimal quantization minimizes the weighted sum of distances between elements of the feature set {ξ_i}⊂ℝ^n and centers {y_k}⊂ℝ^n:
min_y = { y_1, …, y_K }∈ Y^K ⊂ℝ^nKmin_q = { q_1, …, q_K }∈ℝ^K_+min_x = { x_ij≥ 0 }∑_i=1^I ∑_k=1^K d(ξ_i, y_k)^r x_ik
subject to constraints:
∑_k=1^K x_ik = p_i, ∑_k=1^K q_k = 1, i = 1, …, I
where p_i > 0, ∑_i=1^I p_i = 1 are normalized supply volumes, x_ik are transportation volumes, d(ξ_i, y_k)_p = ξ_i - y_k _p = (∑_j=1^n | ξ_ij - y_kj |^p)^1/p is the l_p norm defining the distance between elements in the objective function (<ref>), Y ⊂ℝ^n is a common constraint set for variables {y_k, k = 1, …, K}, and n, I, K ∈ℕ.
In this research, we employ the Euclidean norm (p = 2) as the distance metric, defined as d(ξ_i, y_k)_2 = √(∑_j=1^n | ξ_ij - y_kj |^2). The choice of distance metric may vary depending on the problem domain. For instance, the cosine similarity function d(ξ_i, y_j)_cos = cos(ξ_i, y_j) = ξ_i · y_j/ξ_i · y_j is utilized in text similarity tasks <cit.>, while Kolmogorov and Levy metrics are employed for probability and risk theory problems <cit.>.
It is evident that in the optimal plan, all mass at point ξ_i is transported to the nearest point y_k. Consequently, problem (<ref>)-(<ref>) can be reduced to the following non-convex, non-smooth global stochastic optimization problem, with the objective function defined as:
min_y = { y_1, …, y_K }∈ Y^K ⊂ℝ^nK F(y_1, …, y_k)
where
F(y) = F(y_1, …, y_k) = ∑_i=1^I p_i min_1 ≤ k ≤ K d(ξ_i, y_k)^r = 𝔼_i ∼ pmin_1 ≤ k ≤ K d(ξ_i, y_k)^r
Here, 𝔼_i ∼ p denotes the expected value over the random index i that takes values {1, …, I} with probabilities {p_1, …, p_I}, respectively.
In the global optimum y^* = (y_1^*, …, y_K^*) of (<ref>), all {y_1^*, …, y_K^*} belong to the convex hull of elements {ξ_1, …, ξ_I} in the feature set.
Assume, by contradiction, that there exists some y_k^*^* ∉conv{ξ_1, …, ξ_I}. Consider the projection y̅_k^*^* of y_k^*^* onto conv{ξ_1, …, ξ_I} and points y_k^*^*(t) = (1 - t)y_k^*^* + ty̅_k^*^*, t ∈ [0, 1]. We observe that ∀ξ_i, t ∈ (0, 1]: y_k^*^*(t) - ξ_i < y_k^*^* - ξ_i. If y_k^*^* - ξ_i^* = min_1 ≤ k ≤ Ky_k^* - ξ_i^* for some i^*, then
min{ y_k^*^*(t) - ξ_i^*, min_k ∉ k^* y_k^* - ξ_i^*} < min_k y_k^* - ξ_i^*
Thus, y^* = (y_1^*, …, y_K^*) is not a local minimum of the objective function (<ref>). Now, consider the case where y_k^*^* - ξ_i > min_k y_k^* - ξ_i for all i. By assumption, min_k y_k^* - ξ_i' for some i'. The vector y' = (y_1^*, …, y_k^* - 1^*, ξ_i', y_k^* + 1^*, …, y_K^*) satisfies F(y') < F(y^*), contradicting the assumption that y^* is a minimum. This completes the proof.
For a continuous probability distribution P(dξ), we can interpret the objective function (<ref>) as a mathematical expectation in a stochastic optimization problem <cit.>:
min_y = { y_1, …, y_K }∈ Y^K ⊂ℝ^nK[F(y_1, …, y_k) = 𝔼 f(y, ξ) = ∫_ξ∈Ξ f(y, ξ) P(d ξ)]
with
f(y, ξ) = min_1 ≤ k ≤ K d(ξ, y_k)^r,
where the random variable ξ may have a multimodal continuous distribution. The empirical approximation of F(y) in (<ref>) is:
F_N(y) = 1/N∑_i=1^N min_1 ≤ k ≤ K d(ξ_i, y_k)^r
where {ξ_i, i = 1, …, N} are independent, identically distributed initial samples of the random variable ξ. If K = 1, Y is convex, and r≥ 1, then problem (<ref>) is unimodal and reduces to a convex stochastic optimization problem:
min_y ∈ Y [ F(y) = 𝔼_ĩ∼ p d(ξ_ĩ, y)^r ]
However, for K ≥ 2, the function f(ξ, y) = min_1 ≤ k ≤ K d(ξ, y_k)^r, y = (y_1, …, y_K) is non-smooth and non-convex. In terms of <cit.>, f(ξ, y) is a random generalized differentiable function, its generalized gradient set can be calculated by the chain rule:
∂ f(ξ, y) = conv{ (0, …, 0, g_k^*, 0, …, 0), k^* ∈ S(ξ, y), 0 ∈ℝ^n }
S(ξ, y) = { k^*: ξ - y_k^* = min_1 ≤ k ≤ Kξ - y_k }
g_k^* = r ξ - y_k^*^r - 2 (y_k^* - ξ)
The expected value function (<ref>) is also generalized differentiable, and the set 𝔼_ξ∂ f(ξ, y) is a generalized gradient set of the function F <cit.>. Vectors g(ξ) = (0, …, 0, g_k, 0, …, 0), k ∈ S(ξ, y), 0 ∈ℝ^n, are stochastic generalized gradients of the function F(y_1, …, y_K).
These gradients can be utilized to find the optimal element y_k^* in a feature set Ξ using Stochastic Gradient Descent (SGD) <cit.>:
y_k+1 = π_Y (y_k - ρ_k g_k^*), π_Y (x) = _y ∈ Y x - y, y^0 ∈ Y, k ∈ℕ,
where ρ_k > 0 is a learning rate parameter, and π_Y is the projection operator onto the set Y. The iterative process (<ref>)-(<ref>) for finding the optimal element is summarized in Algorithm <ref>. While SGD is an efficient local optimization algorithm, the ultimate task is to find global minima of (<ref>). The research in <cit.> proposes a stochastic branch and bound method applicable to the optimization algorithm (<ref>). The idea is to sequentially partition the initial problem into regions (with constraint set Y_1 ×…× Y_K) and use upper and lower bounds to refine partitions with the so-called interchanges relaxation to obtain lower bounds:
min_{ y_k ∈ Y_k } F(y_1, …, y_K)
≥ ∑_i=1^I p_i min_y ∈ Y_1×…× Y_Kmin_1 ≤ k ≤ K d(ξ_i, y_k)^r
≥ ∑_i=1^I p_i min_1 ≤ k ≤ K d(ξ_i, π_k(ξ_i)^r.
The local convergence conditions of the stochastic generalized gradient method for solving problem (<ref>) are determined in Theorem <ref>, with the proof provided in <cit.>.
<cit.>. Consider the iterative sequence { y^(t) = (y_1^(t), …, y_K^(t)) }:
y_k^(t) := π_Y (y_k^(t) - ρ_t g_k^(t)) k^(t) = S(ξ̃^(t), y^(t)) t = 0, 1, 2, …
g_k^(t) = r ξ̃^(t) - y_k^(t)^r - 2 (y_k^(t) - ξ̃^(t)) k ∈{ 1, …, K }
Assume that {ξ̃^(t) = ξ̃_k^(t)} are independent sample points from the set {ξ_i, i = 1, …, I } taken with probabilities { p_i, i = 1, …, I }:
ρ_t > 0, ∑_t=0^∞ρ_t = ∞, ∑_t=0^∞ρ_t^2 < ∞
Let F(Y^*) denote the set of values of F on critical (stationary) points Y^* of problem (<ref>), where Y^* = { y = (y_1, …, y_K): ∂ F(y) ∈ N_Y (y_1) ×…× N_Y (y_K) } and N_Y (y_k) represents the normal cone to the set Y at point y_k. If F(Y^*) does not contain intervals and the sequence { y^(t)} is bounded, then { y^(t)} converges to a connected component of Y^*, and the sequence { F(y^(t)) } has a limit.
§.§ Adaptive Stochastic Quantization
The minimization of the objective function (<ref>) is a non-smooth, non-convex, multiextremal, large-scale stochastic optimization problem. Although the parameter update recurrent sequence based on SGD (<ref>) can converge under conditions (<ref>), Qian et al. <cit.> demonstrated that the variance of gradient oscillations increases proportionally to the size of training samples:
𝕍 (g_ℬ_k) ∝I^2/b𝕍 (g_k)
where 𝕍 represents the variance over a set, g_ℬ_k = 1/b∑_i=1^b g_i (ξ_ℬ_i) is the averaged gradient value over a subset ξ_ℬ_i⊂Ξ, and b = | ξ_ℬ_i |. These gradient oscillations reduce the algorithm's stability and slow down the convergence speed. While strategies such as manually tuned learning rate ρ > 0, annealing schedules <cit.>, or averaged gradient over a subset can improve convergence stability, the slow convergence speed in high-dimensional models <cit.> remains a significant drawback of the SGD algorithm.
Polyak <cit.> proposed the Momentum Gradient Descent (or the ”Heavy Ball Method”) as an alternative modification to the SGD by introducing an acceleration multiplier 0 < γ < 1 to the recurrent sequence (<ref>), using a physical analogy of the motion of a body under the force of friction:
y_k+1 = y_k + γ (y_k - y_k-1) - ρ_k g_k^*
Nesterov <cit.> further improved the modified recurrent sequence (<ref>) by introducing an extrapolation step for parameter estimation (Nesterov Accelerated Gradient or NAG):
ỹ_k = y_k - ρ_k g_k^*, y_k+1 = ỹ_k + γ (ỹ_k - ỹ_k-1)
Although modifications (<ref>) and (<ref>) can improve convergence speed, they often encounter the vanishing gradient problem on sparse data <cit.>. The root cause is the fixed learning rate value, which performs equal updates for both significant and insignificant model parameters. Duchi et al. <cit.> address this issue by introducing an adaptive learning rate ρ̃_k = ρ_k / √(G_k + ε), where the hyperparameter value is normalized over the accumulated gradient value to increase the update for more significant parameters (AdaGrad):
y_k+1 = y_k - ρ_k/√(G_k + ε) g_k^*
where G_k = G_k-1 + g_k^*^2 is a linear combination of accumulated gradients from previous iterations, and ε≪ 10^-8 is a denominator smoothing term. While approach (<ref>) solves the convergence issue on sparse data, it introduces the problem of uncontrollable vanishing of the learning rate with each iteration, i.e., lim_k →∞ | ρ̃_k | = 0. Tieleman et al. <cit.> proposed another approach (RMSProp) for accumulated gradient normalization using a moving average G_k = β G_k-1 + (1 - β) g_k^*^2, which substitutes the denominator G_k with a stochastic approximation of the expected value 𝔼 G_k to control learning rate vanishing with an averaging multiplier 0 < β < 1.
Kingma et al. <cit.> introduced a further modification to (<ref>) by adding adaptive estimation of the gradient value g_k^* (ADAM):
m_k = β_1 m_k-1 + (1 - β_1) g_k
v_k = β_2 v_k-1 + (1 - β_2) g_k^2
y_k+1 = y_k - ρ_k/√(v_k + ε) m_k
where m_k is the adaptive first moment (expected value) estimation, v_k is the adaptive second moment (variance) estimation, and 0 < β_1 < 1, 0 < β_2 < 1 are averaging multipliers. It is important to note that the values m_i and v_i may be biased (i.e., the expected value of the parameter does not equal the value itself), which can cause unexpected behavior in the oscillation's variance. The authors proposed corrected estimations for (<ref>) as:
m̅_k = m_k/1 - β_1, v̅_k = v_k/1 - β_2
Norkin et al. <cit.> provide an overview of these adaptive parameter update strategies and present a detailed comparison of their convergence speed in various problem settings.
§ TRADITIONAL CLUSTERING MODEL
The optimal clustering problem seeks to find K cluster centers { y_1, ..., y_K } that minimize the sum of distances from each point ξ to the nearest center. Lloyd's K-Means algorithm <cit.> is a prominent method for solving quantization and clustering problems, with numerous extensions <cit.>. Bottou and Bengio <cit.> interpreted the K-Means algorithm as an analogue of Newton's method and proposed several stochastic gradient descent algorithms for optimal clustering. These stochastic K-Means algorithms use only a subset of possible ξ values or even a single element at each iteration.
<cit.>. K-Means iterative algorithm starts with the set of current cluster centers { y_k^t, k = 1, ..., K }, the feature set {ξ_i, i = 1, ..., I } is subdivided into K non-intersecting groups { I_1^t, ..., I_K^t }: point ξ_i belongs to group I_s^t if
ξ_i - y_s^t ^2 = min_1 ≤ k ≤ K{ξ_i - y_k^t ^2 },
r>0.
Denote N_k^t the number of points in group I_k^t, N_k^t = 0 if I_k^t = ∅, ∑_k=1^K N_k^t = I. Remark that I_k^t and N_k^t depend on y^t. K-Means iteratively evaluates next cluster centers y^t+1 with the estimation:
y_k^t + 1 = 1/N_k^t∑_i ∈ I_k^tξ_i, k = 1, ..., K; t = 0, 1, ...
We can represent these vectors as
y_k^t + 1 = y_k^t - 1/N_k^t∑_i ∈ I_k^t (y_k^t - ξ_i) = y_k^t - 1/N_k^t∑_i ∈ I_k^t y_k^t + 1/N_k^t∑_i ∈ I_k^tξ_i
= 1/N_k^t∑_i ∈ I_k^tξ_i, k = 1, ..., K; t = 0, 1, ...
In <cit.> the form of K-Means algorithm (<ref>) was connected to Newton's step for solving at each iteration the smooth quadratic problem
min_y_1,…,y_K[F^t(y) = 1/2∑_k=1^K ∑_i ∈ I_k^t || ξ_i - y_k ||^2].
with block diagonal Hessian and with diagonal number 1/N_k^t in block k. Moreover, it is easy to see that (<ref>) is the exact analytical solution of the unconstrained quadratic minimization problem (<ref>) under fixed partition {I_1^t,…,I_K^t} of the index set {1,…,I}. In that paper it was also considered stochastic batch and online version of the stochastic gradient methods with learning rate 1/t + 1 for solving a sequence of problems (<ref>) but without rigorous convergence analysis.
The initial positions of cluster centers { y_k^0, k = 1, ..., K } are set either at random among {ξ_i, i = 1, ..., I } or according the algorithm K-Means++ <cit.>. With K-Means++ initialization strategy, the rate of convergence to local optimum is estimated to be 𝔼 [F] ≤ 8(ln k + 2 ) F^*, where F^* - an optimal solution <cit.>. Assume { y_1^0, ..., y_k^0 } (k<K) initial cluster centers have already been chosen. The next center y_k+1^0 (k+1<K) is sampled from the set {ξ_i, i = 1, ..., I } with probabilities:
q_j = min_1 ≤ s ≤ k || ξ_j - y_s^0 ||^2/∑_i=1^I min_1 ≤ s ≤ k || ξ_i - y_s^0 ||^2
The next positions of the cluster centers { y_1^t, ..., y_K^t } for t > 0 are calculated as in the original Lloyd algorithm <cit.>, the expectation-maximization (EM) approach to K-Means algorithm. Consider problem for r = 2:
F(y) = F(y_1, ..., y_K) = ∑_i=1^I p_i min_1 ≤ k ≤ K || y_k - ξ_i ||^2 = 𝔼_i ∼ pmin_1 ≤ k ≤ K || y_k - ξ_i ||^2
Given objective function (<ref>) is a generalized differentiable function <cit.>, and to find its optima we utilize a generalized gradient set is calculated by the chain rule:
∂ F(y) = ∑_i=1^I p_i ·conv_k ∈ K_i{ 2 (y_k - ξ_i) }, K_i = _1 ≤ k ≤ K || y_k - ξ_i ||
And its some generalized gradient is the compound vector g(y) = (g_k(y), k = 1, ..., K):
g_k(y) = ∑_i=1^I 2 p_i (y_k_i - ξ_i), k_i ∈ K_i
Sculley <cit.> addresses the limitation of Lloyd algorithm with update rule (<ref>), highlighting that objective function F(y) calculation is expensive for large datasets, due to O(K · I · d) time complexity for a given feature set {ξ_i }, where d - dimensionality of each sample ξ_i. The author proposed a solution by introducing a Mini-Batch K-Means modification to the Lloyd algorithm, where the set of points {ξ_i, i = 1, ..., I } is subdivided into non-intersecting subsets I_k, k = 1, ..., K, such that i ∈ I_k if y_k - ξ_i = min_k^'∈{ 1, ..., K } y_k^' - ξ_i. Some I_k may occur empty, for example, if
max_1 ≤ i ≤ Imin_1 ≤ k ≤ Ky_k - ξ_i < min_1 ≤ i ≤ Iy_k - ξ_i.
In this modifications the generalized gradients of function F(y) is g(y) = (g_1(y), ..., g_K(y)), where:
g_k(y) = ∑_i ∈ I_k 2 p_i (y_k_i - ξ_i), I_k ≠∅
0, I_k = ∅
The standard generalized gradient method for solving problem (<ref>) takes on the form (for p_i = 1/I, k = 1, ..., K, t = 0, 1, ...):
y_k^t+1 = y_k^t - ρ_t g_k(y^t) =
y_k^t - ρ_t 2/I∑_i ∈ I_k (y_k^t - ξ_i), I_k ≠∅
0, I_k = ∅
Recent studies <cit.> have examined stochastic K-Means algorithms as methods for solving corresponding non-convex, non-smooth stochastic optimization problems. However, the convergence properties of these algorithms lack rigorous validation. Let N_k^t is the number of elements in I_k ≠∅ at iteration t. If we chose ρ_t dependent on k, namely, ρ_tk = 0.5 I/N_k^t, then process (<ref>) becomes identical to K-Means one (<ref>). Here ρ_tk≤ 0.5 I and can be rather large that does not guarantee convergence of (<ref>). A more general choice can be ρ_tk = ρ_t I/N_k^t with ρ_t ≥ 0 satisfying conditions
(<ref>) and thus lim_t →∞max_k ρ_tk = 0.
§.§ Modifications of K-Means algorithm
Robust clustering model assumes solving problem (<ref>), (<ref>) with parameter r<2. Such choice of parameter r makes the quantization and clustering model more robust to outliers. However, then stochastic generalized gradients of the objective function should be calculated by formula (<ref>) and the stochastic clustering algorithm takes form (<ref>).
One also can consider a complement to the sequence (<ref>) by the Cesàro trajectory averaging:
y̅_k^t+1 = (1 - σ_t+1) y̅_k^t + σ_t+1 y_k^t+1, σ_t+1 = ρ_t+1/∑_s=1^t+1ρ_s, k = 1, ..., K
Conditions of convergence for this averaged sequence were studied in <cit.>, in particular, they admit learning rate ρ_t proportional to 1 / √(t+1). A similar approach for K-Means generated sequences (<ref>) aims to average sequence by the feature set size N_k:
ỹ_k^t+1 = (1 - σ̃_t+1) ỹ_k^t + σ̃_t+1 y_k^t+1, σ̃_k, t+1 = 1/N_k^t+1 / ∑_s=1^t+11/N_k^s, k = 1, ..., K
The standard K-Means algorithm requires finding the nearest cluster center _1 ≤ k ≤ K y_k - ξ_i to each point {ξ_i, i = 1, ..., I }. This can be a time consuming operation in case of very large number I. Moreover, points ξ_i may be sampled sequentially from some continuous distribution and thus the sample {ξ_i, i = 1, 2, ... } can be potentially arbitrary large. The Stochastic Quantization algorithm (<ref>) uses only one sampled point ξ̃^t at each iteration and thus only one problem _1 ≤ k ≤ K y_k - ξ̃^t is solved at iteration t. But one can use a batch of m such points {ξ̃^t_i_1^t, ..., ξ̃^t_i_m^t} instead of the whole sample {ξ_i }, m < I.
§ DATA ENCODING WITH TRIPLET NETWORK
Our proposed semi-supervised classification approach addresses a critical challenge in clustering high-dimensional data by integrating Contrastive Learning techniques <cit.> with Stochastic Quantization. While the Stochastic Quantization algorithm (<ref>) effectively mitigates scalability issues for large datasets, it remains susceptible to the ”curse of dimensionality,” a phenomenon common to clustering methods that rely on distance minimization (e.g., K-Means and K-Means++). Kriegel et al. <cit.> elucidated this phenomenon, demonstrating that the concept of distance loses its discriminative power in high-dimensional spaces. Specifically, the distinction between the nearest and farthest points becomes increasingly negligible:
lim_n →∞max d(ξ_i, y_k) - min d(ξ_i, y_k)/min d(ξ_i, y_k) = 0.
Our study focuses on high-dimensional data in the form of partially labeled handwritten digit images <cit.>. However, it is important to note that this approach is not limited to image data and can be applied to other high-dimensional data types, such as text documents <cit.>. While efficient dimensionality reduction algorithms like Principal Component Analysis (PCA) <cit.> exist, they are primarily applicable to mapping data between two continuous spaces. In contrast, our objective necessitates an algorithm that learns similarity features from discrete datasets and projects them onto a metric space where similar elements are grouped into clusters, a process known as similarity learning.
Recent research <cit.> has employed a Triplet Network architecture to learn features from high-dimensional discrete image data and encode them into low-dimensional representations in the latent space. The authors proposed a semi-supervised learning approach where the Triplet Network is trained on a labeled subset of data to encode them into latent representations in ℝ^n, and subsequently used to project the remaining unlabeled fraction onto the same latent space. This approach significantly reduces the time and labor required for data annotation without compromising accuracy.
The Triplet Network, introduced by <cit.>, is a modification of the Contrastive Learning framework <cit.>. Its core idea is to train the model using triplets of samples:
* An anchor sample ξ_i: a randomly sampled element from the feature set Ξ
* A positive sample ξ^+_i: an element with a label similar to the anchor ξ_i
* A negative sample ξ^-_i: an element with a label different from the anchor ξ_i
Unlike traditional Contrastive Learning, which compares only positive ξ^+_i and negative ξ^-_i samples, the Triplet Network learns to minimize the distance between the anchor and positive samples while maximizing the distance between the anchor and negative samples. This is achieved using the triplet loss objective function (see Fig. <ref>):
ℒ_triplet(θ) = max (0, d(f_θ(ξ_i), f_θ(ξ^+_i)) - d(f_θ(ξ_i), f_θ(ξ^-_i)) + α)
where f_θ: Ξ→ℝ^n is a parameterized abstract operator mapping discrete elements Ξ into latent representations (in our case, a Triplet Network with weights θ), d: [ℝ^n, ℝ^n] →ℝ is a distance metric between samples, and α is a margin hyperparameter enforcing a minimum separation between positive and negative pairs. Analogous to the Stochastic Quantization distance metric (<ref>), we employed the Euclidean norm l_2 for d(ξ_i, ξ_j) in (<ref>).
In our research, we utilized a Convolutional Network architecture as f_θ, as proposed by <cit.>. The detailed overview of the architecture, its training using the Backpropagation algorithm, and accuracy evaluation are beyond the scope of this paper; <cit.>, <cit.>, and <cit.> provide extensive coverage of these topics. However, notice that function (<ref>) is non-smooth and non-convex and the standard Backpropagation technique is not validated for such case. The extension of this technique to the non-smooth non-convex case was made in <cit.>.
Regarding triplet mining strategies for (ξ_i, ξ^+_i, ξ^-_i), it is crucial to select an approach that produces the most informative gradients for the objective function (<ref>). Paper <cit.> discusses various online triplet mining strategies, which select triplets within each batch of a training set on each iteration. We employed the semi-hard triplet mining strategy, which chooses an anchor-negative pair that is farther than the anchor-positive pair but within the margin α:
ξ^-_i = _ξ: C(ξ) ≠ C(ξ_i)
d(f_θ(ξ_i), f_θ(ξ)) > d(f_θ(ξ_i), f_θ(ξ^+_i)) d(f_θ(ξ_i), f_θ(ξ))
where C(ξ) denotes the label of an element ξ.
By applying ideas from <cit.>, we can utilize the encoded latent representations of the Triplet Network to train a Stochastic Quantization (<ref>) algorithm. This novel approach enables us to solve supervised or semi-supervised learning problems of classification on high-dimensional data. The semi-supervised learning process using the combined algorithm, assuming we have a labeled subset ξ⊂Ξ and remaining unlabeled data ξ̅⊂Ξ, ξ̅∩ξ = ∅, can be summarized as follows:
* Train a Triplet Network f_θ on labeled data ξ and produce encoded latent representations space Ξ̅⊂ℝ^n
* Utilize the trained Triplet Network f_θ to project the remaining unlabeled data ξ̅ onto the same latent representation space Ξ̅
* Employ both labeled and unlabeled latent representations to train a Stochastic Quantization (<ref>) algorithm
§ NUMERICAL EXPERIMENTS
To implement and train the Triplet Network, we utilized PyTorch 2.0 <cit.>, a framework designed for high-performance parallel computations on accelerated hardware. The Stochastic Quantization algorithm was implemented using the high-level API of Scikit-learn <cit.>, ensuring compatibility with other package components (e.g., model training and evaluation), while leveraging NumPy <cit.> for efficient tensor computations on CPU. All figures presented in this study were generated using Matplotlib <cit.>. The source code and experimental results are publicly available in our GitHub repository <cit.>.
For our experiments, we employed the original MNIST handwritten digit dataset <cit.> (see Fig. <ref>), comprising 60,000 grayscale images of handwritten digits with a resolution of 28x28 pixels, each associated with a class label from 0 to 9. Additionally, the dataset includes a corresponding set of 10,000 test images with their respective labels. It is noteworthy that we did not apply any data augmentation or preprocessing techniques to either the training or test datasets.
We approached the image classification task as a semi-supervised learning problem, training models on varying fractions of labeled training data (10%, 30%, 60%, 80%, and 100%). The training dataset was split using uniform sampling into labeled and unlabeled portions according to the specified percentages. For the Triplet Network, we employed a Convolutional Neural Network architecture consisting of two Convolutional Layers with 3x3 filters (feature map dimensions of 32 and 64, respectively, with stride=1 and padding=1), followed by 2x2 Max-Pooling layers, and two Dense Layers. ReLU (non-differentiable) activation functions were used throughout the network. Together with non-smooth Triplet loss function this makes the learning problem highly non-smooth and non-convex (see discussion of this issue in <cit.>). We trained separate Triplet Network models for each labeled data fraction with the following hyperparameters: 50 epochs, batch size of 1000 ×fraction, learning rate ρ = 10^-3, and l_2 regularization rate λ = 10^-5. For the triplet loss (<ref>) and triplet mining (<ref>), we set the margin hyperparameter α = 1.0. To facilitate meaningful feature capture while enabling visualization, we chose a latent space dimensionality of ℝ^3 (see Fig. <ref>).
The Triplet Network was used to project latent representations onto ℝ^3 space from both labeled and unlabeled training data. These representations were then used to train the Stochastic Quantization algorithm as an unsupervised learning model. For each set of latent representations corresponding to different labeled data fractions, we trained the Stochastic Quantization algorithm and its adaptive variants (<ref>)–(<ref>) from subsection <ref>. We employed the K-Means++ initialization strategy (<ref>) for all variants, with a rank hyperparameter r = 3. To ensure convergence, we used different learning rates for each variant: ρ = 0.001 for SGD, Momentum, and NAG; ρ = 0.9 for AdaGrad; and ρ = 0.01 for RMSProp and ADAM. With these hyperparameters, all Stochastic Quantization variants converged to the global optima (see Fig. <ref>), with most converging on the first iteration (see Fig. <ref>).
The accuracy of the trained classification models, combining Triplet Network and Stochastic Quantization, was evaluated using the F1-score metric <cit.> for weighted multi-label classification. Our experiments demonstrated that our approach achieved results comparable to state-of-the-art performance with Triplet Network and Siamese Network, as reported in <cit.>, even with limited labeled data:
§ CONCLUSIONS
In this paper, we introduced a novel approach to solving semi-supervised learning problems by combining Contrastive Learning with the Stochastic Quantization algorithm. Our robust solution addresses the challenge of scalability in large datasets for clustering problems, while also mitigating the ”curse of dimensionality” phenomenon through the integration of the Triplet Network. Although we introduced modifications to the Stochastic Quantization algorithm by incorporating an adaptive learning rate, there is potential for further enhancement by developing an alternative modification using the finite-difference algorithm proposed in <cit.>, which will be the focus of future research. The semi-supervised nature of the problems addressed in this paper arises from the necessity of using labeled data to train the Triplet Network. By employing other Deep Neural Network architectures to produce low-dimensional representations in the latent space, the semi-supervised approach of Stochastic Quantization with Encoding could be extended to unsupervised learning. Additionally, alternative objective functions for contrastive learning, such as N-pair loss <cit.> or center loss <cit.>, offer further avenues for exploration in future studies. The results obtained in this paper demonstrate that the proposed approach enables researchers to train classification models on high-dimensional, partially labeled data, significantly reducing the time and labor required for data annotation without substantially compromising accuracy.
splncs04
|
http://arxiv.org/abs/2409.03544v1 | 20240905140657 | CTMBIDS: Convolutional Tsetlin Machine Based Intrusion Detection System for DDoS attacks in an SDN environment | [
"Rasoul Jafari Gohari",
"Laya Aliahmadipour",
"Marjan Kuchaki Rafsanjani"
] | cs.CR | [
"cs.CR"
] |
Article Title]CTMBIDS: Convolutional Tsetlin Machine Based Intrusion Detection System for DDoS attacks in an SDN environment
1]Rasoul Jafari [email protected]
2*]Laya [email protected]
3]Marjan Kuchaki [email protected]
[1,2*,3]Department of Computer Science, Shahid Bahonar University of Kerman, Paghohesh Square, Kerman, 7616913439, Kerman, Iran
Software Defined Networks (SDN) face many security challenges today. A great deal of research has been done within the field of Intrusion Detection Systems (IDS) in these networks. Yet, numerous approaches still rely on deep learning algorithms, but these algorithms suffer from complexity in implementation, the need for high processing power, and high memory consumption. In addition to security issues, firstly, the number of datasets that are based on SDN protocols are very small. Secondly, the ones that are available encompass a variety of attacks in the network and don't focus on a single attack. For this reason, to introduce an SDN-based IDS with a focus on Distributed Denial of Service (DDoS) attacks, it is necessary to generate a DDoS-oriented dataset whose features can train a high-quality IDS. In this work, in order to address two important challenges in SDNs, in the first step, we generate three DDoS attack datasets based on three common and different network topologies. Then, in the second step, using the Convolutional Tsetlin Machine (CTM) algorithm, we introduce a lightweight IDS for DDoS attack dubbed "CTMBIDS", with which we implement an anomaly-based IDS. The lightweight nature of the CTMBIDS stems from its low memory consumption and also its interpretability compared to the existing complex deep learning models. The low usage of system resources for the CTMBIDS makes it an ideal choice for an optimal software that consumes the SDN controller's least amount of memory. Also, in order to ascertain the quality of the generated datasets, we compare the empirical results of our work with the DDoS attacks of the KDDCup99 benchmark dataset as well. Since the main focus of this work is on a lightweight IDS, the results of this work show that the CTMBIDS performs much more efficiently than traditional and deep learning based machine learning algorithms. Furthermore, the results also show that in most datasets, the proposed method has relatively equal or better accuracy and also consumes much less memory than the existing methods.
[
[
=====
§ INTRODUCTION
The shortcomings of conventional networks have recently made room for the creation of Software Defined Network (SDN). This evolution today is considered a milestone and has diversified itself into other important fields such as SD-Wide Area Networks (SD-WAN) <cit.>, SDN-Internet of Things (SDNIoT) <cit.>, SD5g and SD6g <cit.>. This is due to the fact that SDN architecture allows for the separation of data layer and control layer in the network <cit.>, which as a result yields a very functional environment, in which network administrators can work very efficiently. This efficiency stems from the flexibility, programmability and manageability of SDN architecture that has roots in the decoupling of data layer and control layer and centralizing the network management in an SDN controller. However, with all the benefits that SDN architecture puts forward, SDN is still susceptible to a wide range of network attacks. Therefore, network attacks and their detection is still considered an urgent need. Numerous methods have been proposed to combat this challenge, one of which is the implementation of Intrusion Detection System (IDS) in the network. The idea of an IDS in the computer networks becomes even more functional when combined with the programmability side of SDN architecture. In this way, not only network administrators can easily implement an IDS in the control layer part of the SDN architecture to have a comprehensive view of the network, but also they lower the costs substantially via removing all the vendor-based IDSs from the network <cit.>. Moreover, an IDS in conventional networks can only detect attacks as far as its access and observability is concerned while SDN flexibility gives room to an IDS to be far more advanced in terms of access and comprehensive view of the entire network.
Network Intrusion Detection Systems (NIDS) are considered the ideal approach to perform intrusion detection across the network <cit.>. The fundamental utility of a NIDS is to monitor the traffic throughout the network and detect any intrusion that occurs so it can report it to network administrators. NIDSs are normally categorized into signature-based and anomaly-based intrusion detection systems. Signature-based NIDS is capable of detecting intrusions based on a body of knowledge stored in a database as signatures of previous attacks. This means the slightest shift in the behavior of the intrusion will allow the attacker to bypass the signature-based IDS. Furthermore, keeping the database of the attack signatures updated is another major hurdle that can become an arduous task when the network administrators have to take the storage of the signatures into account. Anomaly-based IDSs on the other hand are more suited for unknown attacks <cit.>. The advantage of such IDS is substantially increasing the scope of intrusion detections such that newly implemented attacks that deviate from the signatures of previously-implemented intrusions are far more easily detectable. Machine Learning (ML) algorithms enable IDS to detect specific patterns and anomalies in the network. The main focus of the research community is primarily on anomaly-based IDS since the complex patterns and new methodologies in different types of attacks can be easily discovered.
An applicable ML algorithm that normally induces better results in terms of accuracy and low cost is Deep Learning (DL) algorithm. Not only DL-based models are extremely efficient for high-dimensional datasets, they are less labor-intensive when it comes to feature engineering. This is due to the fact that feature-engineering is automatically carried out during the training process, which as a result makes them an ideal approach among researchers. DL-based models applications in the ML research community are extremely vast, some of which include image processing, Natural Language Processing (NLP), anomaly detection and fraud detection. However, the downsides of DL-based algorithms may bring about major hurdles that may prevent the ML models from learning efficiently <cit.>. In ML projects, the interpretability of algorithms plays a significant part in the quality assurance process of the project. This as a result calls for ML algorithms that can be more easily analyzed. Although DL-based algorithms have shown a great deal of applicability in regards to pattern recognition tasks that are nonlinear, their behavior is still considered extremely difficult to interpret <cit.>. Hence, the interest among researchers in regards to ML algorithms that can be both interpretable and capable of solving non-linear problems has increased. Moreover, complex DL algorithms suffer from another flaw that may prevent the deployment of ML algorithms when it comes to computing resources. RAM usage in complex DL algorithms is exponential and in some cases, may even cause the server to crash during model training. Therefore, alternative approaches are considered a necessity when abundance of computer RAM is not at our disposal. Tsetlin Machine (TM) is an exceptional ML algorithm that has proven to have near accuracy and in some cases better accuracy compared to DL algorithms in pattern recognition tasks <cit.>. Berge et al. for instance implemented a TM-based model for text-categorization that yielded more accurate results compared to other DL-based methods <cit.>. One of the variants of TM that can result in equal or even better accuracy is the Convolutional Tsetlin Machine (CTM) <cit.>. The proposed method by Tunheim et al. in <cit.> implemented an image-classification model that yields promising results compared to DL-based models. Furthermore, the performance of their model in comparison is more efficient and less intensive.
In this paper we propose an IDS called CTMBIDS (Convolutional Tsetlin Machine Based Intrusion Detection System) in the SDN environment for Distributed Denial of Service (DDoS) attacks. The key advantages of our approach compared to other complex implementations of IDSs are the interpretability of the CTM algorithm in our model as well as less memory usage that result in efficient learning and maintaining similar or better accuracy. Additionally, the SDN architecture functions as the fundamental foundation of our proposed model. Therefore, compared to traditional networks, the implementation of the proposed model can be easily done in an SDN controller. Since SDN networks have grown substantially in recent years <cit.>, the dire need for SDN-based datasets is considered a necessity. Most IDSs that are ML-driven are trained on datasets whose data is generated in traditional networks <cit.>. Consequently, training ML-driven IDSs on datasets that are generated in an SDN environment is still an open challenge <cit.>. In addition to proposing CTMBIDS, we also generate three different datasets whose data stem from three distinct SDN topologies in the network. In order to have a better understanding of the CTMBIDS approach, we also provide performance evaluations compared to other common ML-based and DL-based algorithms. In summary, the main contributions of our work are as follows:
* We propose a computationally less intensive IDS called CTMBIDS, which consumes much less memory and maintains relatively better accuracy in most datasets compared to DL-based IDSs.
* We generate three different datasets that are based on the SDN environment in order to provide our model with real SDN data so that the training of the CTMBIDS model can be as close to real scenarios as possible. We also discuss the data-generation algorithm that we used in the SDN controller in order to understand the process of obtaining the data.
* We also use the KDDCup99 dataset to compare the quality of our generated datasets with a well-known benchmark in the research community.
* In this paper, we also provide performance evaluation to compare the accuracy of our model with well-known ML-based and DL-based algorithms. These algorithms include: K-nearest Neighbor (KNN), Logistic Regression (LR), Support Vector Machine (SVM), Random Forest, Naive Bayes, Tsetlin Machine and Convolutional Neural Networks-Long Short Term Memory (CNN-LSTM).
The remaining sections of the paper are structured as follows: Section 2 discusses the necessary topics such as SDN architecture, the TM and CTM algorithm. After that, the third section will discuss the related works that proposed similar methods for implementing IDSs in a network, whether a traditional or an SDN network. Section 4 will provide a detailed explanation of our proposed method. Section 5 will talk about the empirical results in detail. Section 5 also provides a discussion of the proposed method. Finally, in section 6 a comprehensive conclusion and future work that the authors may pursue will be provided.
§ PRELIMINARIES
In this section, we provide a comprehensive background for SDN architecture, TM architecture, CTM architecture, and the classifiers that are applied in our work for the purpose of comparison and model evaluation.
§.§ SDN architecture
Most of the existing traditional networks suffer from various shortcomings. These flaws include scalability, cost, security and manageability. Network devices are extremely costly and demand high-maintenance <cit.>, which as a result prevents networks from expanding. This problem becomes even more difficult when vendor-based devices are added to the network, which as a result forces the network administrators to deal with different types of manual configurations. This type of arduous network management is prone to human error and may ultimately lead to unwanted security loopholes that can cause irreparable damage to the network <cit.>.
SDNs on the other hand provide a flexible architecture that eliminates the shortcomings of traditional networks via decoupling the data layer and the control layer, allowing the network to be substantially cheaper and yet more scalable and more manageable in a vendor-less environment. This as a result gives room to the research community to more easily implement innovative approaches as softwares in the SDN controller <cit.>. As figure <ref> shows, after decoupling the data and control layer, main network functions can be easily implemented as applications or softwares in the application layer. The three layers are in communication with each other via Application Programming Interface (API). Therefore, applications and network devices have no direct communication with each other.
Although SDN's scalability brings about numerous advantages, the downside of security threats are still regarded as one of the most important parts of an SDN architecture that needs to be addressed. The work of Chica et al. in <cit.> covered the attack surface of each layer in SDN architecture, and as it can be perceived, maintaining the reliability and security of SDN is of high priority. The focus of this paper is on DDoS intrusions in the SDN environment, hence the implementation of a novel Intrusion Detection System (IDS) for DDoS attack with lowest memory consumption and highest accuracy.
§.§ DoS and DDoS Attack in SDN
DDoS and DoS attacks are among the most harmful types of network intrusions that can cripple the entire network via resource exhaustion or service incapacitation. Since services in the SDN environment are mounted on SDN controllers in the form of virtual applications, SDN controllers become the primary target for network attackers. In this way, the legitimate clients will be deprived of network services and are not able to communicate with network applications. As it is demonstrated in figure <ref>, the attack in the SDN environment can generate flows in an SDN environment that uses spoofed IP addresses using a single device (DoS) or attackers who do the same exploiting multiple devices (DDoS). Normally, requests made from spoofed IP addresses do not exist in the flow table of the OpenFlow switch to be matched. This, as a result, ends up with a table miss condition that ultimately forces the switch to bombard the controller with flow messages that exhaust the bandwidth, CPU and memory in the data and the control layer <cit.>.
Bearing the above mentioned situation and the architecture of the SDN in mind, the most optimal solution is the deployment of an anomaly-based IDS that can act as an application in the application layer of the SDN environment. In this way, DDoS and DoS attack detection can be carried out more efficiently in comparison to traditional networks. There are publicly available datasets that can be utilized in order to train a state-of-the-art IDS. In the next section we will compare and discuss these datasets.
§.§ DDoS Datasets
Depending on the type of IDS that needs to be trained, datasets can play a significant role in the efficiency of the IDS. The quality of the IDS in the real-world scenarios can drop drastically if the quality of the dataset features do not meet the required standard. Table <ref> discusses the main properties of 8 most important datasets that can be used to train an IDS. The testbed for all the datasets are traditional networks except for the InSDN dataset, which was curated with 46 features in 2016. Some of these datasets have restricted access while most of them are available for training an IDS. Moreover, KDDCup99 and DARPA DDoS datasets are regarded as legacy datasets. KDDCup99 is one of the most important datasets that have been used as a reference so that other datasets can have a benchmark to be compared with. Although the dataset suffers from imbalanced data and is created in a simulated environment, its features are suited to train an IDS with. Later on, the dataset was improved under the title of NSL-KDD in order to cover the drawbacks of the KDDCup99 dataset. As it can be seen from table <ref>, the number datasets in the SDN environment are heavily outnumbered against the datasets in traditional networks. Therefore, it becomes imperative to have high-quality datasets for the SDN environment. In section 3, we will discuss our proposed generated datasets and the classification mechanism for implementing an IDS against DDoS attacks.
§.§ Tsetlin Machine Architecture
The TM algorithm consists of many parts. However, the algorithm's core component is a two-action Tsetlin Automaton (TA) whose responsibility is to take actions and train the a TM model based on the input and the current state. This procedure follows the basic principle of reward and penalty for choosing the best action. Other than TAs, inputs of the TM algorithm also play a critical role during the training as well. A binary input and its negation form a literal in the TM algorithm. The literals are then used as inputs for automatons that decide whether to include or exclude the input during the training process. Finally, after deciding which literals to include or exclude, the TM uses an AND operator to calculate the literals in order to form a clause with either a positive or negative polarity <cit.>. Figure <ref> demonstrates the architecture of TM from input to clause calculation.
§.§ Convolutional Tsetlin Machine
As the conception of CNN had a great impact on DL approaches, CTM algorithm also made room for more advanced approaches using TA in order to gain better accuracy. The CTM algorithm utilizes the same recognition procedure as the TM algorithm <cit.>. However, there are distinct differences between the two. We will discuss them in this part to better understand the CTM algorithm. Just like the CNN algorithm, the CTM algorithm's biggest advantage mostly revolves around image processing tasks. Imagine an image size of X + Y, which also consists of Z layers. Therefore, in comparison to the classical TM algorithm, each clause of the CTM algorithm functions as a kernel. The kernel's dimensionality is W × W. Considering that each image has Z layers, each clauses will ultimately have W × W × Z × 2 literals. Another big advantage of CTM is its location-awareness functionality. Consequently, this allows the model to identify patterns and memorize their locations. The final evaluation of each CTM kernel is B = B_X × B_Y. In this equation, B_X = [X-W/d]+1 and B_Y = [Y-W/d]+1, in both of which d is the kernel step of the convolution.
In the end, once the convolution process is finished, the CTM algorithm produces B values per image, which is on the contrary to the classic TM algorithm that outputs only a single value. C_j^b,+ is denoted as the output of a positive clause j on kernel b. In order to convert multiple outputs C_j^1,+, ... , C_j^B,+ of clause j with positive polarity into one single output, which we denote as C_j^+, we need to utilize logical OR for all the outputs as shown in the equation (<ref>) below:
C_j^+ = ⋁_b=1^B C_j^b,+
§.§ CTM Classification
The classification in the CTM algorithm happens exactly the same way as the TM algorithm. It is based on two-class data, i.e., the subpatterns in each class should be detected. Therefore, the classes are divided into two distinct clauses; clauses with even indices whose role is to detect subpatterns associated with output 0 are given negative polarity (C_j^-), and clauses with odd indices whose role is to detect subpatterns associated with output 1 are given positive polarity (C_j^+). As soon as a subpattern is detected, the CTM algorithm moves forward with a voting procedure among all of the clauses based on their polarity. The voting procedure is the equation (<ref>):
v = ∑_j C_j^+ - ∑_j C_j^-
which consists of a summation of all of the clauses with negative polarity subtracted from summation of all of the clauses with positive polarity. Finally, the classification output y will be according to the condition below, which interestingly resembles the functionality of ReLU function:
y =
1 if v ≥ 0
0 if v < 0
§.§ CTM Computational Complexity
The computational complexity of the CTM algorithm increases linearly as the number of clauses m and kernel B increases. Yet, the computations can be parallelized due to the decentralized nature of the CTM algorithm. This complexity is somewhat different in the TM algorithm since the absence of a kernel in the nature of TM algorithm makes a distinction. Therefore the big O for the CTM algorithm is O(m B), where, as mentioned above, m represents clauses and B the kernels in the algorithm <cit.>.
§.§ CTM hyperparameters
After thorough examination of the CTM algorithm, we can discuss the hyperparameters of this algorithm. Table <ref> provides a description of hyperparameters that can be set in the CTM algorithm. As it is clearly explained, for each N clause, we need to double the states. Also, the learning sensitivity is the sensitivity of the algorithm during training to change its states in regards to its input. In other words, it is the equivalent of learning rate in deep learning models. We will discuss the hyperparameters of our proposed model CTMBIDS in the Section 4.
§.§ Clause Feedback Activation Function
CTM algorithm utilizes two types of feedback, both of which take advantage of clause feedback activation function. The use case of this function becomes prominent when a certain type of feedback can become problematic as soon as its frequency is left unchecked. Therefore, a new function is required to manipulate the frequency of feedback for a particular pattern, so that high-frequency and unnecessary feedback can be prevented in the model. Hence, to enhance the allocation of sparse pattern representation resources offered by the clauses, equation (<ref>) and (<ref>) are utilized for feedback type I and type II respectively, in order to decrease the frequency of each type of feedback as the number of clauses voting from that pattern approaches a threshold value T.
T - max(-T, min(T, ∑_I = 1^m C_j)/2T
T + max(-T, min(T, ∑_I = 1^m C_j)/2T
As it can be perceived from the probability of activation functions, the probability of activation decreases as the number of votes approaches the threshold T and ultimately as soon as T is reached, the probability equals 0. This indicates that Type I feedback will not be activated when enough clauses produce the correct number of votes, causing the affected clauses to become "frozen" as TAs will no longer change their state. Thus, other clauses are freed to seek other sub-patterns since the "frozen" pattern is no longer seeked by the TAs. This same reasoning applies to Type II feedback via adopting the clause feedback activation function, which in turn makes room for more effective resource allocation.
§ RELATED WORKS
In this section, we will investigate various famous works that are ML- and DL-based. We will also discuss their results and the algorithm that they utilized to achieve the desired output.
Tan et al. <cit.> proposed an IDS within wireless sensor networks that utilizes Random Forest algorithm combined with Synthetic Minority Oversampling Technique (SMOTE). Their method utilizes SMOTE for increasing the accuracy and oversampling the data. After the implementation of oversampling using SMOTE, the training set is constructed again in order to balance the imbalanced class of the data. Thereafter, they use Random Forest for training on the balanced data. Their method was implemented on KDDCup99 dataset <cit.>, whose end result after the use of SMOTE demonstrated that precision and accuracy on imbalanced classes of attacks in the dataset were improved compared to the precision and accuracy without using SMOTE. SMOTE is not the only methodology that can deal with imbalanced data. Wazirali in <cit.> introduced a semi-supervised IDS whose performance was improved using hyper parameter tuning on KNN algorithm. This approach utilized 5-fold cross validation strategy for model validation until the overall k is achieved. The model also utilizes principal Component Analysis (PCA) in order to select the most relevant features in the data. The proposed method achieves the highest accuracy compared to other proposed models.
Comparison of methods for intrusion detection in software-defined networks.
Methods Year Approach Datasets SDN based Big O Notation Advantages Disadvantages
Besharati et al. <cit.> 2019 Logistic Regression NSL-KDD No O(m × n × k), the number of training examples (m), features (n) and optimization iterations (k). New approach for feature selection, Improvement in accuracy. Lack of details in practical implementation.
Wazirali <cit.> 2020 KNN NSL-KDD No O(d × n ×log(k)), the dimensionality (d), the number of training examples (n) and the number of neighbors (k). Improvement of KNN detection rate, computationally efficient on resource constrained devices. Requires a large amount of data for high performance, Sensitive to the choice of hyperparameters.
Anton et al. <cit.> 2019 SVM Modbus-based gas pipeline control traffic, OPC UA based batch processing traffic. No O(m^2 × n), the number of training examples (m) and features (n). Capable of detecting a variety of industrial protocols, including Modbus and OPC UA, Capable of handling missing data. Vulnerability to false positives.
Tan et al. <cit.> 2019 Random Forest KDDCup99 No O(m × n × log(n)), the number of trees (m), features (n) and log(n) due to sorting operations. Capable of handling imbalance data. Vulnerability to false positives and dependency on hyperparameter values.
Wisanwanichthan and Thammawichai <cit.> 2021 Naive Bayes NSL-KDD No O(m × n), the number of training examples (m) and features (n). High detection rates and low false alarm rates, Robust to noise and outliers. Complex model which lacks interpretability, Sensitivity towards the parameters.
Abeyrathna et al. <cit.> 2020 Tsetlin Machine KDDCup99 No O(m),the number of clauses (m). Interpretability of the model, Competitive performance compared to recent IDS. Lack of details in preprocessing step.
Abdallah et al. <cit.> 2021 CNN-LSTM NSL-KDD Yes O(L × n), processing data through layers (L) with an input size of (N). Capability in capturing temporal features, Improved performance in terms of generalization. Demanding in computations, Complex compared to other IDSs.
Anton et al. <cit.> proposed an IDS that utilizes a hybrid model taking advantage of both SVM and Random Forest algorithms in an industrial environment. Their model utilizes Random Forest Importance Score for feature selection as well as SVM for the classification part of the model. In addition to using Random Forest for feature selection, their method also utilized a full-feature dataset, which in the end resulted in a very promising accuracy compared to the accuracy of the model when features were selected. The result also showed that, in spite of relatively good accuracy, the model training was extremely slow when using SVM algorithm. Naive Bayes is another approach that can yield relatively good results in classification tasks. Wisanwanichthan and Thammawichai <cit.> utilized Naive Bayes and SVM as a hybrid model on the KDDCup dataset. Their method obtains the highest accuracy among other approaches on two types of attacks in the KDDCup dataset. The proposed model utilizes Intersectional Correlated Feature Selected (ICFS) in order to drop the irrelevant features. However, their proposed method suffers from bias toward frequent attacks, which signifies that the proposed model underestimates the attacks that occur less frequently.
Although traditional machine learning models are applicable approaches, gaining high accuracy while dealing with large datasets and preprocessing the data can be frustrating <cit.>. Therefore, it becomes a necessity for IDSs to harness the power of DL-based models. However, the one biggest disadvantage of DL models is memory consumption that may be considered an obstacle when hardware resources are limited. Besharati et al. proposed a host-based IDS that utilizes Logistic Regression inside VMs in cloud environments <cit.>. Their framework takes advantage of the NSL-KDD dataset. However, their significant contribution is using the logistic regression algorithm for feature selection. Their work yields a %97.51 accuracy in the cloudsim environment. Their one big disadvantage, according to the authors, is the additional computational complexity that makes the proposed model somewhat more complex. Abdallah et al. proposed a hybrid model that uses a combination CNN and Long Short-Term Memory (LSTM) algorithms. This approach made an effort to utilize the capabilities of both algorithms, that is, the spatial feature extraction capability of CNN algorithm and the temporal feature extraction capability of LSTM algorithm. The evaluation of the proposed model was carried out on three datasets: CIC-IDS 2017, UNSW-NB15, and WSN-DS <cit.>. The criteria for evaluating the model's performance were accuracy, precision, detection rate, F1-score, and False Alarm Rate (FAR). Apart from deep learning approaches, TM algorithm can also be considered another alternative when maintaining the high accuracy of deep learning models is in play. In fact, the TM algorithm not only can maintain the relatively high accuracy, but it can also maintain the interpretability of the model. Abeyrathna et al. implemented an IDS using the TM algorithm that was trained on the KDDCup99 dataset. Their model outperformed numerous other algorithms including SVM, Decision Tree and Random Forests <cit.>.
Finally, It is also worth mentioning that, to the best of our knowledge, Mohsin et al.'s study in <cit.> employed a similarly generated dataset to address the security concerns posed by DDoS attacks. In spite of claiming to generate a new dataset, their work has the fundamental flaw of not providing any technical details of their data generation algorithm. Furthermore, their method fails to demonstrate the necessary details in regards to the generated data such as the type of generated features and the necessary preprocessing steps for cleaning the generated dataset.
Considering the above discussed methods, the works are introduced to address the threat of network intrusions either in SDN environments or in traditional networks using state-of-the-art approaches. We selected these works and presented a summary of them in table <ref> in order to later on compare the proposed CTMBIDS method with these approaches. As it is evident, each work has a unique approach, which allows us to have a comprehensive comparison. Moreover, the advantages and disadvantages of each work make room for a more in-depth understanding of this comparison so that it can be used as a benchmark for future works. As it can be seen, each work in the table is provided with its advantages and disadvantages as well as its algorithm, the dataset that was used for the proposed IDS and finally whether or not the work is implemented in an SDN environment.
§ PROPOSED METHOD
Our proposed method consists of 2 phases. Phase 1 consists of preparation of the SDN testbed for data generation. Phase 2 consists of data preprocessing and model training. In this phase, we propose a model called CTMBIDS for detecting DDoS attacks in an SDN environment. Figure <ref> shows the workflow demonstrating the processes of each phase in detail. Hence, in this section, we go over each phase and its details in order to discuss what each phase consists of.
§.§ Phase 1: Network Topologies
As we discussed earlier, one of the main contributions of our work is data generation so that the trained models can learn on real world SDN data. For this reason, the first step of our model starts with three main network topologies in an SDN environment. These topologies differ in size of their hosts and their network devices. This is to make sure that the generated data is not biased towards a specific network topology. Moreover, three topologies are an indication that there are three different datasets for model training, which can be extremely useful for evaluating the model's performance with other existing models and datasets. Table <ref> shows the details of our three SDN network topologies. As it can be seen, each topology consists of a number of Virtual Machines (VMs) that are connected to the SDN Open vSwitch. For example in the third topology, there are 10 SDN switches and for every switch, there are 10 VMs that are connected to it.
Figure <ref> shows the SDN testbed for topology 3, which consists of 10 switches and 100 VMs or nodes. In each Local Area Network (LAN), we have 9 VMs whose network traffic is normal across the entire network while 1 host in each LAN is randomly chosen to generate malicious traffic. We will discuss the data generation part of the method more in the next section. The network controller for all topologies is Ryu Controller. The reason for choosing this controller is its compatibility with the Python programming language, which is the same programming language that we used throughout the entire steps of the proposed CTMBIDS model.
As for the data layer of all the topologies, we utilized Open vSwitch alongside the Mininet simulator for creating all the nodes in the data layer. Other alternatives for creating the data layer existed but due to its compatibility with Ryu controller, we decided to use Open vSwitch. It is worth noting that version 1.3 of OpenFlow was used for all the Open vSwitches in all the topologies that we mentioned.
§.§ Phase 1: Data Generation Algorithm
After creating three different topologies, we used the proposed data generation algorithm to generate three distinct datasets. As their name suggests, Data18 is created based on the topology that has 18 hosts, Data50 is generated based on the topology that has 50 hosts and Data100 is generated based on the topology that has 100 hosts, respectively. The DataGenerationApp algorithm functions as a software in the application layer of the SDN architecture. Figure <ref> demonstrates the placement of the algorithm in the proposed SDN architecture. As it can be seen in figure <ref> , the SDN network devices are located in the data layer. They interact with the Ryu controller through the southbound interface. This interface is responsible for relaying the messages between the control layer and the data layer. In other words, the Ryu controller utilizes the southbound interface in order to configure, manage and collect data from the existing devices. The application layer on the other hand, utilizes northbound interface to retrieve information from the control layer in the form of events. This process uses protocols that take advantage of SDN controller event handlers. As a result all network devices are exposed to applications via the controller so that the end result can be information retrieval and device management in the SDN environment.
Algorithm <ref> explains how the proposed algorithm called DataGenerationApp functions in the application layer in order to perform data generation across the entire network. At first, all the available datapaths in the network are stored in a python dictionary, in which the datapath id is its key and the datapath information is its value. Afterwards, using an infinite loop, the DataGenerationApp asynchronously and continuously monitors the OpenFlow switches through a request that is generated from the Ryu controller event handler in order to request statistics from OpenFlow Switches. The controller acts as an intermediary between network switches and the DataGenerationApp. This process also includes a 10-second inactivity gap after each event handler request simply due to the fact that we wanted to make sure we write all the requested information. The main function FlowStatsReplyHandler receives a reply from the switch in order to get the flow statistics message. This function's input is the statistics message that was sent from the switch to the controller and function's main purpose is to return the statistics of the specified features. All the openflow flow statistic reply and request from the input message is implemented using Ryu event handler methods since the Ryu controller is the intermediary between switches and the DataGenerationApp. This indicates that the FlowStatsReplyHandler function acts as a "parser" that analyzes the flow statistics reply and then writes the statistics to a CSV file. Once the statistics of the message is parsed, the generated data is annotated as 0 if the parsed statistics are of normal traffic. Otherwise, the statistics are parsed as malicious traffic, which means that data should be annotated as 1.
§.§ Phase 1: Data Generation Tools
The creation of the datasets consisted of two major parts. The first part was gathering normal traffic across the network via the DataGenerationApp. This part relied only on the legitimate devices in the network. The second part however, relied on the randomly chosen hosts in each LAN. As it can be seen from figure <ref>, one host is randomly chosen inside each LAN to generate malicious traffic. In our proposed method, these malicious data consist of DDoS attacks. Table <ref> shows the tools we used to generate the attack traffic in our dataset. We used Micro Core Linux as the main OS for each device in the network. The reason for choosing this lightweight OS is simply due to the fact that the high number of hosts in network topologies require so much memory. Therefore, choosing an OS such as Kali Linux with too many redundant tools that may not be used during the data generation process as far as DDoS attacks are concerned. The only difference in DoS and DDoS attacks are the number of attacking hosts in the network that constitutes different types of DoS attack.
§.§ Phase 1: Dataset Description
As mentioned earlier, Data18, Data50 and Data100 were the three datasets that we created. Table <ref> shows the detailed information about the features of the three main datasets. As table <ref> demonstrates, we created 21 features, which include 7 categorical and 14 numerical features. Later in the next section, we will discuss the feature selection methods that we used to discuss what features were used for the CTMBIDS model training.
All the traffic generation processes were implemented in the Mininet simulator as well as OpenFlow and Open vSwitch. Mininet is a simulation tool that is used extremely in the research community and has gained popularity with Python developers. However, in order to make sure that the generated datasets were close to other standard datasets, we also used KDDCup99 as our reference dataset. Table <ref> shows the details of each generated dataset in terms of numbers of data and the time it took for the data generation process.
§.§ Phase 2: Data Preprocessing
After generating data using three different network topologies, the three datasets were created with 21 features. However, preprocessing is an essential prerequisite before model training. Therefore, in our proposed method, the preprocessing consists of 4 different steps, which are data analysis, data normalization, feature selection and data encoding. The following sections describe each step of preprocessing and how it results in four preprocessed datasets that are ready to be used for CTMBIDS training.
§.§ Phase 2: Data Analysis
In order to better understand the data that were created, we perform a comprehensive data analysis so that we could explore what data type each column feature contained. With the analysis of our data, not only we could better understand what type of encoding we must use later, but also how much correlation there is between one column feature and another. Figure <ref> shows the triangle correlation heatmap for one of our generated datasets, namely Data100. The correlation heatmap illustrates that the most correlated features are byte count and packet count both in terms of size and time duration in flow messages. Other dataset features that are fairly correlated with each other are source and destination port with source and destination IP addresses.
§.§ Phase 2: Data Normalization
An IDS with high accuracy in classification requires data that has been normalized and preprocessed. The reason for the utilization of data normalization is because the generated as well as the KDDCup99 datasets all have numerical values that have different ranges. Thus, rescaling and normalizing the data becomes essential. In our CTMBIDS method, for rescaling and normalizing the data, we used min-max normalization for each numerical column feature according to the equation (<ref>), where x is the value to be normalized, x_min is the smallest value in a feature column and x_max is the largest value in a feature column:
x_normalized = x - x_min/x_max-x_min
§.§ Phase 2: Feature Selection
All the three generated datasets have 21 features. Therefore, reducing the number of features can be beneficial to gain more accuracy for training the CTMBIDS model. In our proposed method, we used Recursive Feature Elimination (RFE) with a Random Forest classifier using gini index <cit.>. This algorithm recursively iterates over all the features. Once a feature is selected to be removed, the Random Forest classifier evaluates its performance on the remaining features. This process occurs iteratively until the desired number of features is reached. After implementing the RFE on both the three generated datasets and the reference KDDCup99 dataset, 16 features were chosen from each dataset. Table <ref> shows the selected features by RFE algorithm in the three generated datasets as well as in the KDDCup99 dataset. The selected features for the generated datasets in table <ref> are chosen from all the features that we demonstrated in table <ref>. The selected features for the KDDCup99 dataset are chosen from the KDDCup99 dataset.
Feature selection is extremely beneficial when computational resources are not abundant. However, in order to compare the quality of the generated datasets with our reference KDDCup99 dataset, and also to compare the performance of the feature selection algorithm, we will also train the CTMBIDS model with full feature datasets as well. Not only this can provide a benchmark for future reference, but also a baseline for the accuracy of the proposed CTMBIDS.
§.§ Phase 2: Data Encoding
Almost all the existing machine learning algorithms require an encoding scheme to prepare the data for model training. As stated above, both numerical and categorical features exist in all datasets. Therefore, it is a necessity to use the appropriate encoding scheme for each type of data type. It is worth mentioning that encoding in the proposed method was implemented after the data normalization. We used one-hot encoding to encode categorical features and thresholding method for binarizing the normalized numerical features <cit.>. Hence, the final datasets that were used to be fed to the CTMBIDS model were all binary. This is an inseparable part of preprocessing for the TM and CTM algorithm since they work with binary data. The reason that the TM and CTM algorithms consume less memory is simply due to utilization of binary data, which is the closest language to the machine language. Finally, it is also worth mentioning that the thresholding method was only used for our proposed CTMBIDS method. Other models that we implemented in order to compare the CTMBIDS model with, took one-hot encoding and data normalization steps without utilizing thresholding method.
§.§ Phase 2: Data Sampling
Any machine learning model requires a training and a test set so that the desired model can learn with the training set and it can be evaluated with the test set. However, due to the novelty of our generated datasets, the CTMBIDS model as a result requires a strategy to avoid overfitting. Therefore, for the evaluation criterion of all models as well as the CTMBIDS model, we used k-fold cross validation with 10 being the k value in the proposed CTMBIDS model <cit.>. In the end, the final average accuracy, precision and f1-score of all iterations are measured after 10-fold cross validation is completed.
§.§ Phase 2: CTMBIDS Learning
Once the train and test sets are preprocessed and then divided into 10 folds, the training of CTMBIDS model can commence. The main classification algorithm in the CTMBIDS model is the CTM algorithm, which uses automatons in its core. Figure <ref> represents the training workflow of the CTMBIDS model. First, the hyperparameters precision S, threshold T, clause m and kernel size k must be set. Table <ref> shows the hyperparameters values before the training. Once the hyperparameters are set, the CTMBIDS continues to create 2n clauses. Afterwards, a loop iterates over every clause m to calculate each clause output. The output for each clause m is calculated using equation (<ref>). The next step for the CTMBIDS method depends on two things, the clause output and the clause polarity. For instance for the j^th clause, if the clause output is 0 with negative polarity, the CTMBIDS uses feedback type I. Moreover, to control unnecessary and high-frequency feedback in the CTMBIDS, the model uses equations (<ref>) and (<ref>) for clause feedback activation function. In other words, when the activation probability decreases and the number of votes reaches the T hyperparameter that had already been set, the feedback does not activate. This indicates that the TAs do not change their state anymore. This process makes room for remaining clauses to search for other sub-patterns. Therefore, the resource allocation can be done far more efficiently. This main principle also holds true for feedback type II. After all the models are trained, we start the model evaluation process. In the next section, we will provide the empirical results of the CTMBIDS model as well as other models that we implemented for comparison.
As mentioned before, our proposed CTMBIDS method will be compared with 7 other models in order to have a benchmark of the proposed method. Table <ref> shows the details of these 7 models that we use to compare the CTMBIDS model with. In the following section, we will discuss the results of the proposed CTMBIDS method.
§ RESULTS EVALUATION AND DISCUSSION
As we said before, this paper contains two main phases. One phase created 3 main SDN-based dataset that were discussed in detail. In the second phase, the preprocessing data yielded 16 features using the RFE method. Our experiment was implemented in two phases as well. Phase 1 of the experiment includes performance of the proposed CTMBIDS model along with 7 other state-of-the-art models. Phase 2 includes clause variation for the CTMBIDS, which shows the performance of the CTMBIDS model utilizing different clauses for training. The implementation details and codes are available on the github account [https://github.com/russelljeffrey/CTMBIDSCTMBIDS github].
§.§ Phase 1: Experimental Configuration
The models were implemented using Python programming language as well as a number of libraries that are extensively used in Python programming language. Table <ref> demonstrates the experimental configuration of our proposed method along with all the libraries that were used to design and implement all the models.
§.§ Phase 1: Evaluation Metrics
In order to evaluate all models, we used three of the most common evaluation metrics such as accuracy, precision and f1-score. Furthermore, we used memory_profiler <cit.> in order to compare the CTMBIDS model with other models in terms of its memory usage. The following equations demonstrate how accuracy, precision and f1-score are calculated:
Accuracy = TP + TN/TP + TN + FP + FN
Precision = TP/TP + FP
F1-Score = 2 × TP/2 × TP + FP + FN
In above equations, True Positive (TP) and True Negative (TN) represent the values that were correctly classified and False Positive (FP) and False Negative (FN) represent the values that were incorrectly predicted. We provide the analysis of our results in great detail. These results include the CTMBIDS model and 7 other models for comparative analysis. These 7 models will be used as a benchmark to compare the performance of CTMBIDS with other state of the art IDSs. These models were compared in detail in table <ref>. The algorithms that are used for each model are according to table <ref>, i.e, Logistic Regression <cit.>, KNN <cit.>, SVM <cit.>, Random Forest <cit.>, Naive Bayes <cit.>, Tsetlin Machine <cit.> and CNN-LSTM <cit.>.
As we discussed in the feature selection section, our empirical results include both full features of the dataset as well as the 16 features that were chosen by the RFE algorithm. We will address the datasets with full features (Table <ref>) and selected features (Table <ref>) as full-feature datasets and sub-feature datasets, respectively. As we mentioned earlier in the Data Sampling section, we conducted our method using 10 fold cross validation. Each fold uses 10,000 samples of data, which indicates that 100,000 data were chosen from the dataset for training and testing. Table <ref> and table <ref> represent the accuracy, precision, f1-score and memory consumption of all models for sub-feature and full-feature datasets, respectively.
The comparison for each model is carried out based on individual datasets. Thus, the bold value in each dataset column represents the highest value for accuracy, precision and f1-score for that specific dataset. Moreover, the lowest value for memory consumption in each dataset column is in bold to indicate the least memory consumption for each model in regards to a dataset. In table <ref>, it can be seen that Random Forest has the highest accuracy, precision and f1-score for Data18. In spite of Random Forest accuracy, its memory consumption is considerably high. It is clearly evident that the proposed CTMBIDS does not have the highest accuracy for sub-feature Data18, Data50 and Data100 datasets. However, when we compare the proposed model accuracy with the Random Forest model, the memory consumption is far lower. It seems unreasonable for the Random Forest model to consume 4 times the memory of the proposed CTMBIDS model in order to gain less than 1 percent more accuracy. For the KDDCup99 dataset however, the proposed CTMBIDS model shows to have the highest accuracy among other models in spite of using the least memory.
Table <ref>, illustrates the accuracy, precision and f1-score of all the models for full-feature datasets. For Data18, Logistic Regression, SVM and CNN-LSTM models have the highest accuracy, precision and f1-score compared to other models. Once again, Logistic Regression, SVM and CNN-LSTM show that they require more memory compared to the CTMBIDS model. In the remaining datasets, CTMBIDS excels far better than the remaining models. What is more interesting in case of full-feature datasets is that the proposed CTMBIDS can reasonably keep the memory consumption low while handling a high number of features in the dataset. This can be easily comprehended when we take the number of KDDCup99 features into account. The more features there are for the model to be trained with, the more memory the model consumes. This makes the CTMBIDS model more appropriate in a resource-constrained environment. This issue is particularly important in an SDN environment where there is no abundance of memory available for the anomaly-based IDS in an SDN controller.
For a better understanding of models' memory consumption, we provided the memory consumption of all the models for both sub-feature and full-feature datasets in figure <ref> and <ref>, respectively. It is clear that the proposed CTMBIDS model consumes the least memory in all the datasets while CNN-LSTM and Logistic Regression have the highest memory consumption for sub-feature and full-feature datasets, respectively. This all indicates that the interpretability of the CTMBIDS model as well as its lightweight nature in memory consumption makes it a great approach for environments where resource abundance is not a reasonable option. These environments can be SDN environments, IoT and wireless networks.
§.§ Phase 2: Clause Variation
After implementation of all models, we implemented the CTMBIDS once more on Data18 and KDDCup datasets. This time we trained the model using a different number of clauses to see the model performance based on the number of clauses. Figure <ref> shows the accuracy of the CTMBIDS model on Data18 and KDDCup99 datasets both with sub-feature and full-feature datasets. As it can be seen, we used 10 different clauses for training the CTMBDS model. They are 10, 20, 50, 100, 200, 450, 800, 1200, 2000 and 4000. The steep curve in the figure shows that increasing the number of clauses from a certain point on does not increase the accuracy. In other words, the model overfits and the learning halts at a certain point. However, the dataset and the model architecture affect the model's learning curve and its accuracy as well. The more clauses are used in the model, the more memory the model consumes. Moreover, the more clauses are used in the model, the more time the model requires to complete the computations. Hence, it takes more time for the model to complete its training. Adjusting the "clause" hyperparameter is a necessity in order to maintain an ideal memory consumption and reasonable time complexity during the model training.
§ CONCLUSION AND FUTURE WORK
The SDN environment is extremely beneficial for turning innovative ideas into applicable softwares. Its flexibility and programmability facilitate the growth of the network. However, this growth is coupled with threats and security risks that if not handled, can wreak havoc throughout the entire network and halt the widespread development of the SDN architecture. Thus, development and enhancement of IDSs using novel approaches can both fortify the SDN architecture from security flaws in the network and help network administrators more carefully handle the network management. In this work, we provided a lightweight IDS for detecting DDoS and DoS attacks in the SDN environments, which utilizes the least amount of memory compared to other state-of-the-art proposed approaches. This efficiency saves memory and makes sure the IDS can still maintain its high accuracy. Another important hurdle of the SDN environments was the lack of data compared to data in traditional networks. This work also addressed this issue by generating three new datasets that can be used to train various types of anomaly based IDSs in the SDN environment.
This study has aimed to pave the way for lightweight models that can take advantage of Tsetlin and Convolutional Tsetlin Machine Algorithms. Not only it can be used for network security, but also for Image Processing tasks. In the future, we will focus on utilizing CTM algorithm on image processing tasks that can yield an acceptable accuracy compared to state-of-the-art models such as CNN. We also would like to enhance the performance of the CTM algorithm using evolutionary algorithms in order to obtain better accuracy. Finally, we would like to address the fact that wireless networks are in need to utilize low-power models where energy consumption is of top priority. Hence, our future work will also include models in wireless networks, in which we can take advantage of both TM and CTM algorithms' lightweight classification properties.
§ DECLARATIONS
§.§ Competing Interests
The authors have no financial or proprietary interests in any material discussed in this article.
§.§ Authors Contribution Statement
Author Rasoul Jafari Gohari collected the data in simulation in addition to conceptualizing and designing the IDS while all authors collaboratively structured the article, prepared the manuscript, analyzed data, reviewed and edited drafts, and finally approved the publication.
§.§ Code availability
All the codes and implementations for each phase of this publication are available and easily accessible via Github (https://github.com/russelljeffrey/CTMBIDSCTMBIDS Link).
§.§ Ethical and Informed Consent
This research utilizes the KDDCup99 dataset, which is publicly available for academic use. We adhere to ethical principles throughout our research. We prioritize data privacy, responsible use, and avoiding potential harm, focusing our methods on modeling based on KDDCup99 dataset without any unethical irresponsibility.
99
1 Yang Z, Cui Y, Li B, Liu Y, Xu Y (2019) Software-Defined Wide Area Network (SD-WAN): Architecture, Advances and Opportunities. 2019 28th International Conference on Computer Communication and Networks (ICCCN), Valencia, Spain, 1-9. doi: 10.1109/ICCCN.2019.8847124.
2 Jazaeri SS, Jabbehdari S, Asghari P et al (2021) Edge computing in SDN-IoT networks: a systematic review of issues, challenges and solutions. Cluster Comput 24(24):3187-3228. doi: 10.1007/s10586-021-03311-6.
3 Long Q, Chen Y, Zhang H et al (2022) Software Defined 5G and 6G Networks: a Survey. Mobile Netw Appl 27(27):1792-1812. doi: 10.1007/s11036-019-01397-2.
4 Jiménez MB, Fernández D, Rivadeneira JE et al (2021) A Survey of the Main Security Issues and Solutions for the SDN Architecture. IEEE Access 9:122016-122038. doi: 10.1109/ACCESS.2021.3109564.
5 Yang G, Shin C, Yoo Y et al (2021) A Case for SDN-based Network Virtualization. 2021 29th International Symposium on Modeling, Analysis, and Simulation of Computer and Telecommunication Systems (MASCOTS), Houston, TX, USA, 1-8. doi: 10.1109/MASCOTS53633.2021.9614291.
6 Jafari Gohari R, Aliahmadipour L, Kuchaki Rafsanjani M (2022) Deep learning-based intrusion detection systems: A comprehensive survey of four main fields of cyber security. Journal of Mahani Mathematical Research, November 2022.
7 Yurekten O, Demirci M (2021) SDN-based cyber defense: A survey. Future Generation Computer Systems 115:126-149. doi: 10.1016/j.future.2020.09.006.
8 Acun B, Murphy M, Wang X et al (2021) Understanding Training Efficiency of Deep Learning Recommendation Models at Scale. 2021 IEEE International Symposium on High-Performance Computer Architecture (HPCA), Seoul, Korea (South), 802-814. doi: 10.1109/HPCA51647.2021.00072.
9 Li X, Xiong H, Li X et al (2022) Interpretable deep learning: interpretation, interpretability, trustworthiness, and beyond. Knowl Inf Syst 64(64):3197-3234.
10 Granmo O-C (2019) The Tsetlin machine - a game theoretic bandit driven approach to optimal pattern recognition with propositional logic. <https://arxiv.org/abs/1804.01508>.
11 Berge GT, Granmo O-C, Tveit TO et al (2019) Using the Tsetlin Machine to Learn Human-Interpretable Rules for High-Accuracy Text Categorization With Medical Applications. IEEE Access 7:115134-115146. doi: 10.1109/ACCESS.2019.2935416.
12 Granmo O-C, Glimsdal S, Jiao L et al (2019) The Convolutional Tsetlin Machine. arXiv preprint arXiv:1905.09688.
13 Tunheim SA, Jiao L, Shafik R et al (2022) A Convolutional Tsetlin Machine-based Field Programmable Gate Array Accelerator for Image Classification. 2022 International Symposium on the Tsetlin Machine (ISTM), Grimstad, Norway, 21-28. doi: 10.1109/ISTM54910.2022.00013.
14 Ahmad A, Harjula E, Ylianttila M, Ahmad I (2020) Evaluation of Machine Learning Techniques for Security in SDN. 2020 IEEE Globecom Workshops (GC Wkshps), Taipei, Taiwan, 1-6. doi: 10.1109/GCWkshps50303.2020.9367477.
15 Anerousis N, Chemouil P, Lazar AA et al (2021) The Origin and Evolution of Open Programmable Networks and SDN. IEEE Communications Surveys & Tutorials 23(3):1956-1971. doi: 10.1109/COMST.2021.3060582.
16 Singh J, Behal S (2020) Detection and mitigation of DDoS attacks in SDN: A comprehensive review, research challenges and future directions. Comput Sci Rev 37:100279. doi: 10.1016/j.cosrev.2020.100279.
17 Facchini H, Perez S, Blanchet R et al (2021) Experimental performance contrast between SDN and traditional networks. 2021 IEEE CHILEAN Conference on Electrical, Electronics Engineering, Information and Communication Technologies (CHILECON), Valparaíso, Chile, 1-6. doi: 10.1109/CHILECON54041.2021.9702982.
18 Abdou A, van Oorschot PC, Wan T (2018) Comparative Analysis of Control Plane Security of SDN and Conventional Networks. IEEE Communications Surveys & Tutorials 20(4):3542-3559. doi: 10.1109/COMST.2018.2839348.
19 Ahmad S, Mir AH (2021) Scalability, Consistency, Reliability and Security in SDN Controllers: A Survey of Diverse SDN Controllers. J Netw Syst Manage 29(29):9.
20 Rawat DB, Reddy SR (2017) Software Defined Networking Architecture, Security and Energy Efficiency: A Survey. IEEE Communications Surveys & Tutorials 19:325-346.
21 Correa Chica JC, Cuatindioy Imbachi J, Botero Vega JF (2020) Security in SDN: A comprehensive survey. Journal of Network and Computer Applications 159:102595.
22 Eliyan LF, Pietro RD (2021) DoS and DDoS attacks in Software Defined Networks: A survey of existing solutions and research challenges. Future Generation Computer Systems 122:149-171.
23 University of California (1999) KDD Cup 1999 Data. <https://kdd.ics.uci.edu/databases/kddcup99/kddcup99.html>.
24 A Detailed Analysis of the KDD CUP 99 Data Set. (2009). Proceedings of the Second IEEE Symposium on Computational Intelligence for Security and Defense Applications (CISDA'09), Ottawa, ON, Canada, 1-6.
25 University of Southern California-Information Sciences Institute. <http://dx.doi.org/10.23721/109/1358116>.
26 CAIDA. (2007). The CAIDA D̈DoS Attack 2007D̈ataset, 2007. <https://www.caida.org/catalog/datasets/ddos-20070804_dataset/>.
27 Moustafa N, Slay J (2015) UNSW-NB15: a comprehensive data set for network intrusion detection systems (UNSW-NB15 network data set). 2015 Military Communications and Information Systems Conference (MilCIS).
28 Sharafaldin I, Lashkari AH, Ghorbani AA (2018) Toward generating a new intrusion detection dataset and intrusion traffic characterization. ICISSP 1 (2018), 108-116.
29 Koroniotis N, Moustafa N, Sitnikova E, Turnbull B (2019) Towards the development of a realistic botnet dataset in the Internet of Things for network forensic analytics: botiot dataset. Future Generat. Comput. Syst. 100, 779-796. doi: 10.1016/j.future.2019.05.041.
30 Elsayed MS, Le-Khac NA, Jurcut AD (2020) InSDN: A Novel SDN Intrusion Dataset. IEEE Access 8, 165263-165284. doi: 10.1109/ACCESS.2020.3022633.
31 Zhang X, Jiao L, Granmo O-C, Goodwin M (2022) On the Convergence of Tsetlin Machines for the IDENTITY- and NOT Operators. IEEE Transactions on Pattern Analysis and Machine Intelligence 44(10), 6345-6359. doi: 10.1109/TPAMI.2021.3085591.
32 Bhattarai B, Granmo O-C, Jiao L (2022) ConvTextTM: An explainable convolutional Tsetlin machine framework for text classification. In Proceedings of the Thirteenth Language Resources and Evaluation Conference (LREC), 3761-3770.
33 Tan X, Su S, Huang Z et al (2019) Wireless Sensor Networks Intrusion Detection Based on SMOTE and the Random Forest Algorithm. Sensors 19, 203. doi: 10.3390/s19010203.
34 Wazirali R (2020) An Improved Intrusion Detection System Based on KNN Hyperparameter Tuning and Cross-Validation. Arab J Sci Eng 45, 10859-10873. doi: 10.1007/s13369-020-04504-w.
35 Anton SDD, Sinha S, Schotten HD (2019) Anomaly-based Intrusion Detection in Industrial Data with SVM and Random Forests. 2019 International Conference on Software, Telecommunications and Computer Networks (SoftCOM), 1-6. doi: 10.23919/SOFTCOM.2019.8903672.
36 Wisanwanichthan T, Thammawichai M (2021) A Double-Layered Hybrid Approach for Network Intrusion Detection System Using Combined Naive Bayes and SVM. IEEE Access 9, 138432-138450. doi: 10.1109/ACCESS.2021.3118573.
37 Chen M, Fan C, Wang X et al (2021) A Review on Data Preprocessing Techniques Toward Efficient and Reliable Knowledge Discovery From Building Operational Data. Front. Energy Res. 9:652801. doi: 10.3389/fenrg.2021.652801.
38 Besharati E, Naderan M, Namjoo E (2019) LR-HIDS: logistic regression host-based intrusion detection system for cloud environments. J Ambient Intell Human Comput 10, 3669-3692. doi: 10.1007/s12652-018-1093-8.
39 Abdallah M, Khac NL, Jahromi H et al (2021) A Hybrid CNN-LSTM Based Approach for Anomaly Detection Systems in SDNs. Proceedings of the 16th International Conference on Availability, Reliability and Security (ARES '21), Vienna, Austria, 34-41. doi: 10.1145/3465481.3469190.
40 Abeyrathna KD, Pussewalage HSG, Ranasinghe SN et al (2020) Intrusion Detection with Interpretable Rules Generated Using the Tsetlin Machine. 2020 IEEE Symposium Series on Computational Intelligence (SSCI), Canberra, ACT, Australia, 1121-1130. doi: 10.1109/SSCI47803.2020.9308206.
41 Mohsin MA, Hamad AH (2022) Performance evaluation of SDN DDoS attack detection and mitigation based random forest and K-nearest neighbors machine learning algorithms. Revue d'Intelligence Artificielle 36(2), 233-240. doi: 10.18280/ria.360207.
42 Megantara AA, Ahmad T (2020) Feature Importance Ranking for Increasing Performance of Intrusion Detection System. 2020 3rd International Conference on Computer and Informatics Engineering (IC2IE), Yogyakarta, Indonesia, 37-42. doi: 10.1109/IC2IE50715.2020.9274570.
43 Abeyrathna KD, Granmo OC, Zhang X, Goodwin M (2019) A Scheme for Continuous Input to the Tsetlin Machine with Applications to Forecasting Disease Outbreaks. In: Advances and Trends in Artificial Intelligence. From Theory to Practice. IEAAIE 2019, 11606. Springer, Cham. doi: 10.1007/978-3-030-22999-3_49.
44 Pal K, Patel BV (2020) Data Classification with k-fold Cross Validation and Holdout Accuracy Estimation Methods with 5 Different Machine Learning Techniques. 2020 Fourth International Conference on Computing Methodologies and Communication (ICCMC), Erode, India, 83-87. doi: 10.1109/ICCMC48092.2020.ICCMC-00016.
45 Memory Profiler. <https://pypi.org/project/memory-profiler/>.
|
http://arxiv.org/abs/2409.03030v1 | 20240904184957 | JuliaQCD: Portable lattice QCD package in Julia language | [
"Yuki Nagai",
"Akio Tomiya"
] | hep-lat | [
"hep-lat",
"nucl-th"
] |
[email protected]
Information Technology Center, The University of Tokyo, 6-2-3 Kashiwanoha, Kashiwa, Chiba 277-0882, Japan
[email protected]
Department of Information and Mathematical Sciences, Tokyo Woman’s Christian University, Tokyo 167-8585, Japan
RIKEN Center for Computational Science, Kobe 650-0047, Japan
§ ABSTRACT
We develop a new lattice gauge theory code set JuliaQCD using the Julia language. Julia is well-suited for integrating machine learning techniques and enables rapid prototyping and execution of algorithms for four dimensional QCD and other non-Abelian gauge theories. The code leverages LLVM for high-performance execution and supports MPI for parallel computations. Julia's multiple dispatch provides a flexible and intuitive framework for development.
The code implements existing algorithms such as Hybrid Monte Carlo (HMC), many color and flavor, supports lattice fermions, smearing techniques, and full QCD simulations.
It is designed to run efficiently across various platforms, from laptops to supercomputers, allowing for seamless scalability. The code set is currently available on GitHub <https://github.com/JuliaQCD>.
JuliaQCD: Portable lattice QCD package in Julia language
Akio Tomiya
September 9, 2024
========================================================
§ INTRODUCTION
Quantum Chromo-dynamics (QCD) is the fundamental theory describing the interactions of quarks and gluons, the elementary particles that constitute the subatomic world.
Since they can be created and annihilated from the vacuum through relativistic effects, they must be handled as a quantum many-body system with uncertain particle number.
QCD is a quantum field theory and an accurate description of our universe.
However, perturbation theory fails to apply in QCD due to the strong coupling constant. Lattice QCD addresses this challenge by employing lattice regularization, introducing a discrete spacetime cutoff to regulate the path integral. This approach allows for the computation of quantum expectation values of physical observables.
Lattice QCD was first formulated by K. Wilson in 1974 <cit.>, and the numerical study of QCD using this method was initiated by M. Creutz <cit.>. This framework enables the detailed study of the QCD vacuum structure and provides precise insights into the Standard Model. Additionally, it facilitates investigations into phenomena such as the quark-gluon plasma and candidates for dark matter.
As we mentioned above,
to formulate lattice QCD, a spacetime lattice cutoff is introduced. This cutoff is not merely an approximation for numerical calculations; it fundamentally defines QCD itself. Traditionally, quantum field theory is formulated on a continuous spacetime, but in the presence of interactions, such theories suffer from divergences. Although these divergences can be renormalized, they are inherent in the continuum formulation.
In QCD, we aim to calculate the quantum expectation value of a gauge-invariant operator O [For certain purposes, we sometimes calculate the expectation value of a gauge variant operator. In such cases, gauge fixing is required due to Elizur's theorem <cit.>.]. In the path integral formalism, this is expressed as
⟨ O ⟩
=
1/Z∫𝒟U 𝒟q̅𝒟q e^-S_g[U] - S_f[U, q, q̅] O(U, q, q̅),
where q and q̅ are the quark and anti-quark fields, respectively, and U is the gauge field on the lattice. The term S_g[U] represents the gauge field action, and S_f[U, q, q̅] denotes the action for the quarks. The factor Z is a normalization constant ensuring that ⟨ 1 ⟩ = 1.
One can obtain observables in QCD evaluating this integral.
The standard algorithm used for simulating lattice QCD is the Hybrid Monte Carlo (HMC) algorithm <cit.>. HMC has become the de facto standard because of its efficiency and effectiveness in dealing with the large-dimensional integrals for lattice gauge fields with dynamical fermions. It has been successfully used in a wide range of studies, from calculating hadron spectrum <cit.> and form factors to determining the critical temperature (T_c) of QCD. The success of lattice QCD in these areas highlights the power of numerical methods in exploring non-perturbative aspects of quantum field theory.
Recent developments in machine learning (ML) have opened up new avenues for enhancing lattice QCD calculations. Many people are developing new algorithms that integrate machine learning techniques into lattice QCD simulations. The importance of prototyping in this context cannot be overstated, as we aim to create efficient algorithms for lattice QCD that leverage ML techniques. One of the central trade-offs in this process is between code simplicity and execution speed. While ease of implementation is important, a code that is too slow is unusable in practice. Therefore, we focus on optimizing the total time required for lattice QCD studies, which includes both the time spent developing the algorithm and the time taken for its execution.
Portability and parallelizability are also crucial considerations. The code must be able to run efficiently on a wide range of platforms, from laptops to supercomputers, and should be optimized to reduce the overall computational time. This allows researchers to take advantage of the available computational resources and scale their simulations as needed.
The Julia language is a new open-source scientific programming language using LLVM <cit.>.
Unlike interpreters, Julia is a compiled language with a just-in-time (JIT) compiler. This allows it to achieve performance comparable to Fortran and C <cit.>, while maintaining the simplicity of Python.
Julia's flexibility and speed make it an ideal choice for developing new lattice QCD algorithms, as it allows for rapid prototyping without sacrificing performance.
Thanks to these advantages, Julia is attractive language for high energy physics <cit.>.
In this paper we introduce a new lattice QCD code JuliaQCD written in Julia language.
The motivation behind the development of this code set is to minimize the combined time of algorithm implementation and execution. Existing languages such as Python are too slow for large-scale lattice QCD simulations, while C++/C/Fortran present challenges in terms of ease of use and development time. Julia strikes the right balance, offering both the speed required for intensive computations and the simplicity needed for fast prototyping. Our goal is to create a flexible and high-performance framework that can be easily adapted to new developments in both lattice QCD and machine learning.
§.§ JuliaQCD project
JuliaQCD is a suite of first open-source lattice QCD code by Julia programming language.
(Ultimate) High performance is not within our scope but we are focus on prototyping with good scale.
We developed this code with the following goals:
* Excellent portability
* Quick and easy setup
* Comprehensive suite
* Highly modifiable
* Competitive performance
The system offers excellent portability, as it runs smoothly on any machine with Julia installed, without the need for complicated setup processes. It is also quick and easy to set up, enabling users to start working in less than 5 minutes without requiring complex compilation steps. In addition, the system provides a comprehensive suite that supports both full-QCD and quenched configuration generation, along with a variety of measurement tools. Its highly modifiable structure makes it ideal for rapid prototyping and experimenting with different approaches. Finally, the system delivers competitive performance, comparable to the speed of traditional Fortran 90 codes, ensuring efficiency without sacrificing flexibility.
§.§ Related Works
In the broader landscape, traditional codes such as the MILC code and the Columbia Physics System (CPS) have also played significant roles. Despite their tradition, these codes remain functional and are used in various research contexts. MILC, in particular, has been a cornerstone for many researchers, supporting a wide range of computations, including those utilizing GPU architectures <cit.>. CPS continues to be a relevant reference point for its C-based implementation.
FermiQCD <cit.> has contributed to the field. FermiQCD, a collection of C++ classes and parallel algorithms, facilitates fast development of lattice applications and includes optimizations for cluster computing, such as SSE2 instructions. Its focus on distributed processing has allowed it to remain an important tool in parallel lattice QCD.
OpenQCD has established itself as a flexible and advanced package for lattice QCD simulations, particularly in high-performance environments. It supports a wide range of theories, including O(a)-improved Wilson quarks, and includes specialized algorithms such as HMC and SMD. openQCD's flexibility allows for large-scale simulations and efficient use of contemporary processors, making it a widely used tool for complex lattice QCD computations.
The development of lattice QCD codes has been significantly influenced by contributions from various international communities, including notable efforts from Japan. Two key projects, the Lattice Tool Kit in Fortran 90 (LTK) <cit.> and Bridge++ <cit.>, originate from this community. LTK, primarily written in Fortran, has served as a foundational tool, though it is not optimized for contemporary high-performance computing (HPC) environments. In contrast, Bridge++ was developed focusing on providing a comprehensive and efficient toolset for lattice QCD simulations on modern supercomputers. Both projects reflect Japan's strong engagement in advancing computational techniques for lattice QCD.
In recent years, there has been a trend towards utilizing C++, like Bridge++, for its advanced features and improved performance in complex simulations. Grid <cit.> is a prime example of this trend, offering robust support for both CPU and GPU computations. The Grid Python Toolkit (GPT) extends this functionality to Python, allowing for easy integration with data analysis and machine learning workflows. Another significant development is SIMULATeQCD <cit.>, which specializes in multi-GPU implementations, enabling high-performance lattice QCD computations. Additionally, QUDA (QCD on CUDA) <cit.> serves as a sub-library that provides specialized routines for running lattice QCD computations on NVIDIA GPUs. By leveraging CUDA architectures, QUDA offers substantial speedups for specific operations within larger QCD packages, rather than functioning as a standalone QCD software suite.
The increasing adoption of Python in scientific computing has led to the development of tools like Lyncs-API and pyQCD <cit.> and also GPT. These projects provide high-level interfaces for lattice QCD calculations, combining Python's user-friendly syntax with the computational efficiency of C++ backends. Lyncs-API, as discussed by <cit.>, offers a flexible API for a variety of QCD computations, while pyQCD similarly integrates Python and C++ for efficient lattice QCD research.
This landscape illustrates the evolution of lattice QCD software from traditional Fortran and C implementations to modern, multi-language frameworks designed for the latest HPC environments. LatticeQCD.jl contributes to this ongoing evolution by leveraging Julia's capabilities, providing a new, efficient, and accessible option for researchers.
§ SOFTWARE DESCRIPTION
We support lattice gauge theory in 4-dimensional Euclidean spacetime.
We cover most of standard technologies.
Also we implement a Wizard for parameter files.
Our code LatticeQCD.jl has following functionalities.
0.44
Gauge fields
* Optimized SU(2), SU(3)
* General SU(N_c)
Fermion action
* Wilson (2 flavor)
* Staggered fermion (1-8 tastes)
* Standard Domain-wall (2 flavor)
Measurements
* Plaquette
* Polyakov loop
* Chiral condensates (Wilson, staggered)
* Momentum projected pion correlator (Wilson, staggered)
* R× T Wilson loop
* Energy density
* Topological charge (plaquette, clover, and O(a^2) improved definition)
* Load & measurement mode (load and measure all configurations in a directory)
Smearing
* Stout
* Gradient flow for a generic action
0.44
Configuration generation algorithms
* Cold/Hot start for SU(N_c). One instanton configuration for SU(2)
* Heatbath for SU(N_c) & overrelaxation for a general gauge action
* Even-odd heatbath for the plaquette action
* Quenched HMC with SU(N_c) for a general gauge action
* HMC (2 flavor Wilson) with SU(N_c) with a general gauge action
* HMC (4 taste staggered fermions) with SU(N_c) with a general gauge action
* RHMC (any flavor staggered) with SU(N_c) for a general gauge action
* SU(N_c) stout smeared dynamical fermions
* Self-learning HMC with the plaquette action
I/O for gauge configurations
* ILDG format (Binary)
* JLD format (Default binary file for Julia, one of HDF5)
* Text file for Bridge++ (Bridgetext)
If one specified other than N_f=4, 8 with the staggered fermion HMC, RHMC is automatically used. For a machine with the Apple Silicon, N_f=1-8 is available.
Parallelization is supported by MPI, which can be used in
https://github.com/akio-tomiya/Gaugefields.jlGaugefields.jl and
https://github.com/akio-tomiya/LatticeDiracOperators.jlLatticeDiracOperators.jl (see Appendix <ref>).
We perform consistency checks with several references. See
Appendix <ref>.
§.§ Code Structure
JuliaQCD is a project name and it is constructed by several packages.
* https://github.com/akio-tomiya/LatticeQCD.jlLatticeQCD.jl : LatticeQCD.jl is a wrapper of the following packages. Interface of this package is user friendly.
* https://github.com/akio-tomiya/Wilsonloop.jlWilsonloop.jl: Wilsonloop.jl helps us to treat the Wilson loops and generic Wilson lines in any N_c and dimensions. Wilson lines can is defined symbolically.
* https://github.com/akio-tomiya/Gaugefields.jlGaugefields.jl: Gaugefields.jl is a package for lattice SU(N_c) gauge fields. It handles gauge fields (links), gauge actions with MPI, and autograd. This can generate quenched configurations.
* https://github.com/akio-tomiya/LatticeDiracOperators.jlLatticeDiracOperators.jl: LatticeDiracOperators.jl is a package for Dirac operators and fermions on the lattice. It handles pseudo-fermion fields with various lattice Dirac operators, fermion actions with MPI. This can generate configurations with dynamical fermions.
* https://github.com/akio-tomiya/QCDMeasurements.jlQCDMeasurements.jl: QCDMeasurements.jl is a package for measuring physical quantities. It includes measurements for basic quantities like chiral condensates, plaquettes, pion correlators, and topological charge with several definitions. It also includes the gradient flow with several actions.
§ USAGE OF LATTICEQCD.JL
We show the example code to perform lattice QCD calculations using LatticeQCD.jl.
One can start a lattice QCD calculation in 5 steps.
* Install Julia language from https://julialang.org/downloads/Julialang.org.
* In Julia REPL, press the key to enter the package mode and type,
[language=JuliaLocal,style=julia]
add LatticeQCD
and then press the “return” key. Press the “backspace” key (or “delete” key for Mac) to exit the package mode. One can get the latest version via . All dependencies will be solved automatically.
* Include the package with:
[language=JuliaLocal,style=julia]
using LatticeQCD
* Make a parameter file with the wizard:
[language=JuliaLocal,style=julia]
run_wizard()
Choose parameters.
* Start the simulation with the created parameter file:
[language=JuliaLocal,style=julia]
run_LQCD("my_parameters.toml")
One will get results.
One can write a parameter file.
§.§ User interfaces
We support the following two user interfaces for LatticeQCD.jl,
* Julia REPL interface (For beginners, just after the lattice QCD textbook)
* General interface (Experience with another code, for batch jobs, customized purpose)
Usage 1 is explained above.
For Usage 2, in Julia REPL, press the key to enter the package mode and type,
[language=JuliaLocal,style=julia]
add LatticeQCD
Then, LatticeQCD.jl will be installed.
The “PARAMETER_FILE” can be created through the wizard. To use the wizard on the shell, please write the following code and save it as ,
[language=JuliaLocal,style=julia]
using LatticeQCD
run_wizard()
Then, one can run the wizard,
[language=JuliaLocal,style=julia]
julia wizard.jl
Please write the following code and save it as ,
[language=JuliaLocal,style=julia]
using LatticeQCD
run_LQCD(ARGS[1])
Then, one can execute it like this,
[language=JuliaLocal,style=julia]
julia run.jl PARAMETER_FILE
§ CONCLUSION AND OUTLOOK
In this work, we introduce JuliaQCD, a new and flexible lattice QCD code set developed in the Julia programming language. By leveraging Julia’s high-performance just-in-time compilation and ease of use, this project aims to strike a balance between rapid prototyping and computational efficiency. JuliaQCD offers a range of functionalities, including support for full-QCD simulations and various smearing techniques, while maintaining scalability across different computational environments from personal systems to high-performance computing clusters.
Our approach is motivated by the need to reduce the time required for both algorithm implementation and execution, allowing researchers to experiment with and refine techniques without sacrificing performance. The portability and ease of setup provided by JuliaQCD make it an ideal tool for researchers working at the intersection of lattice QCD and machine learning, where quick adaptation to new algorithms and methodologies is critical.
While the performance of JuliaQCD is comparable to established Fortran, its focus remains on flexibility and usability, providing a framework that can be easily extended to incorporate modern computational techniques. Future work will explore further optimizations, including deeper integration with GPU architectures and advanced machine learning techniques to further enhance the capabilities of lattice QCD simulations.
JuliaQCD represents an important step in the evolution of lattice QCD software, contributing to the broader trend of adopting more modern, user-friendly languages without compromising on performance. We anticipate that this work will help advance both the development and application of lattice QCD, facilitating new discoveries in quantum chromodynamics and related fields.
The authors thank to the https://github.com/tsuchim/Lattice-Tool-KitLattice Tool Kit written in Fortran 90.
The authors thank to Taku Izubuchi, Issaku Kanamori, Okuto Morikawa, Satoshi Terasaki and Hiromasa Watanabe for comments and contributions.
The work of authors was partially supported by JSPS KAKENHI Grant Numbers 20K14479,
22K03539, 22H05112, and 22H05111, and MEXT as “Program for Promoting Researches on the Supercomputer
Fugaku” (Simulation for basic science: approaching the new quantum era; Grant Number JPMXP1020230411, and
Search for physics beyond the standard model using large-scale lattice QCD simulation and development of AI
technology toward next-generation lattice QCD; Grant Number JPMXP1020230409).
§ WILSONLOOP.JL
A gauge action is constructed by gauge invariant objects, Wilson loops, in discretized spacetime. Wilsonloop.jl helps us to treat with the Wilson loops and generic Wilson lines in any Nc and dimensions.
Wilsonloop.jl has the following functionalities:
* From a symbolic definition of Wilson lines, this returns SU(N_c)-valued Wilson lines as objects
* Constructing all staples from given symbolic Wilson lines
* Constructing derivatives of given symbolic Wilson lines (auto-grad for SU(N_c) variables)
We can easily generate a plaquette shown as
[language=JuliaLocal,style=julia]
println("plaq")
plaq = make_plaq()
display(plaq)
The staple of the plaquette is given as
[language=JuliaLocal,style=julia]
for μ=1:4
println("μ = μ")
staples = make_plaq_staple(μ)
display(staples)
end
An arbitrary Wilson loop is constructed as
[language=JuliaLocal,style=julia]
loop = [(1,+1),(2,+1),(1,-1),(2,-1)]
println(loop)
w = Wilsonline(loop)
println("P: ")
show(w)
Its adjoint is calculated as
[language=JuliaLocal,style=julia]
println("P^+: ")
show(w')
The derivative of the linesdw/dU_μis calculated as
[language=JuliaLocal,style=julia]
for μ=1:4
dU = derive_U(w,μ)
for i=1:length(dU)
show(dU[i])
end
end
Note that the derivative is a rank-4 tensor.
The derivatives are usually used for making the smearing of the gauge fields (Stout smearing can be used in Gaugefields.jl).
§ GAUGEFIELDS.JL
This package has following functionalities
* SU(N_c) (Nc > 1) gauge fields in 2 or 4 dimensions with arbitrary actions.
* Z(N_c) 2-form gauge fields in 4 dimensions, which are given as 't Hooft flux.
* U(1) gauge fields in 2 dimensions with arbitrary actions.
* Configuration generation
* Heatbath
* quenched Hybrid Monte Carlo
* quenched Hybrid Monte Carlo being subject to 't Hooft twisted b.c.
with external (non-dynamical) Z(N_c) 2-form gauge fields [Thanks for O. Morikawa.]
* quenched Hybrid Monte Carlo for SU(N_c)/Z(N_c) gauge theory
with dynamical Z(N_c) 2-form gauge fields
* Gradient flow via RK3
* Yang-Mills gradient flow
* Yang-Mills gradient flow being subject to 't Hooft twisted b.c.
* Gradient flow for SU(N_c)/Z(N_c) gauge theory
* I/O: ILDG and Bridge++ formats are supported (c-lime will be installed implicitly with CLIME_jll )
* MPI parallel computation(experimental)
* quenched HMC with MPI being subject to 't Hooft twisted b.c.
In addition, this supports followings
* Autograd for functions with SU(N_c) variables
* Stout smearing
* Stout force via backpropagation
We note that Autograd can be worked for general Wilson lines except for ones have overlaps.
§.§ File loading and saving
§.§.§ ILDG format
We can use ILDG format, one of standard formats for configurations.
We can read ILDG format as follows:
[language=JuliaLocal,style=julia]
using Gaugefields
NX = 4
NY = 4
NZ = 4
NT = 4
NC = 3
Nwing = 1
Dim = 4
U = Initialize_Gaugefields(NC,Nwing,NX,NY,NZ,NT,condition = "cold")
filename = "hoge.ildg"
ildg = ILDG(filename)
i = 1
L = [NX,NY,NZ,NT]
load_gaugefield!(U,i,ildg,L,NC)
With the use of the configuration, we can calculate the plaquette:
[language=JuliaLocal,style=julia]
temp1 = similar(U[1])
temp2 = similar(U[1])
comb = 6
factor = 1/(comb*U[1].NV*U[1].NC)
@time plaq_t = calculate_Plaquette(U,temp1,temp2)*factor
println("plaq_t = plaq_t")
poly = calculate_Polyakov_loop(U,temp1,temp2)
println("polyakov loop =(real(poly)) (imag(poly))")
We can write a configuration as the ILDG format as follows:
[language=JuliaLocal,style=julia]
filename = "hoge.ildg"
save_binarydata(U,filename)
§.§.§ Text format for Bridge++
Gaugefields.jl also supports a text format for Bridge++.
A file loading and saving are expressed as follows:
[language=JuliaLocal,style=julia]
using Gaugefields
filename = "testconf.txt"
load_BridgeText!(filename,U,L,NC)
[language=JuliaLocal,style=julia]
filename = "testconf.txt"
save_textdata(U,filename)
§.§ Gradient flow
To smear Gauge fields is important in LatticeQCD.
We show the codes of the Lüscher's gradient flow as follows.
[language=JuliaLocal,style=julia]
NX = 4
NY = 4
NZ = 4
NT = 4
Nwing = 0
NC = 3
U = Initialize_Gaugefields(NC, Nwing, NX, NY, NZ, NT, condition="hot")
temp1 = similar(U[1])
temp2 = similar(U[1])
comb = 6
factor = 1 / (comb * U[1].NV * U[1].NC)
g = Gradientflow(U)
for itrj = 1:100
flow!(U, g)
@time plaq_t = calculate_Plaquette(U, temp1, temp2) * factor
println("itrj plaq_t =plaq_t")
poly = calculate_Polyakov_loop(U, temp1, temp2)
println("itrj polyakov loop =(real(poly)) (imag(poly))")
end
§.§ Hybrid Monte Carlo
With the use of the Gaugefields.jl, we can easily make the Hybrid Monte Carlo method as following.
[language=JuliaLocal,style=julia]
using Random
using Gaugefields
using LinearAlgebra
function calc_action(gauge_action, U, p)
NC = U[1].NC
Sg = -evaluate_GaugeAction(gauge_action, U) / NC
Sp = p * p / 2
S = Sp + Sg
return real(S)
end
function MDstep!(gauge_action, U, p, MDsteps, Dim, Uold, temp1, temp2)
Δτ = 1.0 / MDsteps
gauss_distribution!(p)
Sold = calc_action(gauge_action, U, p)
substitute_U!(Uold, U)
for itrj = 1:MDsteps
U_update!(U, p, 0.5, Δτ, Dim, gauge_action)
P_update!(U, p, 1.0, Δτ, Dim, gauge_action, temp1, temp2)
U_update!(U, p, 0.5, Δτ, Dim, gauge_action)
end
Snew = calc_action(gauge_action, U, p)
println("Sold = Sold, Snew =Snew")
println("Snew - Sold = (Snew-Sold)")
ratio = min(1, exp(-Snew + Sold))
if rand() > ratio
substitute_U!(U, Uold)
return false
else
return true
end
end
function U_update!(U, p, ϵ, Δτ, Dim, gauge_action)
temps = get_temporary_gaugefields(gauge_action)
temp1 = temps[1]
temp2 = temps[2]
expU = temps[3]
W = temps[4]
for μ = 1:Dim
exptU!(expU, ϵ * Δτ, p[μ], [temp1, temp2])
mul!(W, expU, U[μ])
substitute_U!(U[μ], W)
end
end
function P_update!(U, p, ϵ, Δτ, Dim, gauge_action, temp1, temp2) # p -> p +factor*U*dSdUμ
NC = U[1].NC
temp = temp1
dSdUμ = temp2
factor = -ϵ * Δτ / (NC)
for μ = 1:Dim
calc_dSdUμ!(dSdUμ, gauge_action, μ, U)
mul!(temp, U[μ], dSdUμ) # U*dSdUμ
Traceless_antihermitian_add!(p[μ], factor, temp)
end
end
function HMC_test_4D(NX, NY, NZ, NT, NC, β)
Dim = 4
Nwing = 0
Random.seed!(123)
U = Initialize_Gaugefields(NC, Nwing, NX, NY, NZ, NT, condition="hot", randomnumber="Reproducible")
println(typeof(U))
temp1 = similar(U[1])
temp2 = similar(U[1])
comb = 6 #4*3/2
factor = 1 / (comb * U[1].NV * U[1].NC)
@time plaq_t = calculate_Plaquette(U, temp1, temp2) * factor
println("0 plaq_t =plaq_t")
poly = calculate_Polyakov_loop(U, temp1, temp2)
println("0 polyakov loop = (real(poly))(imag(poly))")
gauge_action = GaugeAction(U)
plaqloop = make_loops_fromname("plaquette")
append!(plaqloop, plaqloop')
β = β / 2
push!(gauge_action, β, plaqloop)
p = initialize_TA_Gaugefields(U) #This is a traceless-antihermitian gauge fields. This has NC^2-1 real coefficients.
Uold = similar(U)
substitute_U!(Uold, U)
MDsteps = 100
numaccepted = 0
numtrj = 10
for itrj = 1:numtrj
t = @timed begin
accepted = MDstep!(gauge_action, U, p, MDsteps, Dim, Uold, temp1, temp2)
end
if get_myrank(U) == 0
println("elapsed time for MDsteps: (t.time) [s]")
end
numaccepted += ifelse(accepted, 1, 0)
if itrj
@time plaq_t = calculate_Plaquette(U, temp1, temp2) * factor
println("itrj plaq_t = plaq_t")
poly = calculate_Polyakov_loop(U, temp1, temp2)
println("itrj polyakov loop = (real(poly))(imag(poly))")
println("acceptance ratio ", numaccepted / itrj)
end
end
return plaq_t, numaccepted / numtrj
end
function main()
β = 5.7
NX = 8
NY = 8
NZ = 8
NT = 8
NC = 3
HMC_test_4D(NX, NY, NZ, NT, NC, β)
end
main()
§.§ HMC with MPI
Here, we show the HMC with MPI. the REPL and Jupyter notebook can not be used when one wants to use MPI. At first, in Julia REPL in the package mode,
[language=JuliaLocal,style=julia]
add MPIPreferences
and
[language=JuliaLocal,style=julia]
using MPIPreferences
MPIPreferences.use_system_binary()
With the use of
[language=JuliaLocal,style=julia]
add MPI
we can use MPI in Julia.
We show the sample code:
[language=JuliaLocal,style=julia]
using Random
using Gaugefields
using LinearAlgebra
using MPI
if length(ARGS) < 5
error("USAGE: ","""
mpiexecjl -np 2 exe.jl 1 1 1 2 true
""")
end
const pes = Tuple(parse.(Int64,ARGS[1:4]))
const mpi = parse(Bool,ARGS[5])
function calc_action(gauge_action,U,p)
NC = U[1].NC
Sg = -evaluate_GaugeAction(gauge_action,U)/NC #evaluate_Gauge_action(gauge_action,U) = tr(evaluate_Gaugeaction_untraced(gauge_action,U))
Sp = p*p/2
S = Sp + Sg
return real(S)
end
function MDstep!(gauge_action,U,p,MDsteps,Dim,Uold,temp1,temp2)
Δτ = 1.0/MDsteps
gauss_distribution!(p)
Sold = calc_action(gauge_action,U,p)
substitute_U!(Uold,U)
for itrj=1:MDsteps
U_update!(U,p,0.5,Δτ,Dim,gauge_action)
P_update!(U,p,1.0,Δτ,Dim,gauge_action,temp1,temp2)
U_update!(U,p,0.5,Δτ,Dim,gauge_action)
end
Snew = calc_action(gauge_action,U,p)
if get_myrank(U) == 0
println("Sold = Sold, Snew =Snew")
println("Snew - Sold = (Snew-Sold)")
end
ratio = min(1,exp(-Snew+Sold))
r = rand()
if mpi
r = MPI.bcast(r, 0, MPI.COMM_WORLD)
end
#ratio = min(1,exp(Snew-Sold))
if r > ratio
substitute_U!(U,Uold)
return false
else
return true
end
end
function U_update!(U,p,ϵ,Δτ,Dim,gauge_action)
temps = get_temporary_gaugefields(gauge_action)
temp1 = temps[1]
temp2 = temps[2]
expU = temps[3]
W = temps[4]
for μ=1:Dim
exptU!(expU,ϵ*Δτ,p[μ],[temp1,temp2])
mul!(W,expU,U[μ])
substitute_U!(U[μ],W)
end
end
function P_update!(U,p,ϵ,Δτ,Dim,gauge_action,temp1,temp2) # p -> p +factor*U*dSdUμ
NC = U[1].NC
temp = temp1
dSdUμ = temp2
factor = -ϵ*Δτ/(NC)
for μ=1:Dim
calc_dSdUμ!(dSdUμ,gauge_action,μ,U)
mul!(temp,U[μ],dSdUμ) # U*dSdUμ
Traceless_antihermitian_add!(p[μ],factor,temp)
end
end
function HMC_test_4D(NX,NY,NZ,NT,NC,β)
Dim = 4
Nwing = 0
Random.seed!(123)
if mpi
PEs = pes#(1,1,1,2)
U = Initialize_Gaugefields(NC,Nwing,NX,NY,NZ,NT,condition = "hot",mpi=true,PEs = PEs,mpiinit = false)
else
U = Initialize_Gaugefields(NC,Nwing,NX,NY,NZ,NT,condition = "hot")
end
if get_myrank(U) == 0
println(typeof(U))
end
temp1 = similar(U[1])
temp2 = similar(U[1])
if Dim == 4
comb = 6 #4*3/2
elseif Dim == 3
comb = 3
elseif Dim == 2
comb = 1
else
error("dimensionDim is not supported")
end
factor = 1/(comb*U[1].NV*U[1].NC)
@time plaq_t = calculate_Plaquette(U,temp1,temp2)*factor
if get_myrank(U) == 0
println("0 plaq_t = plaq_t")
end
poly = calculate_Polyakov_loop(U,temp1,temp2)
if get_myrank(U) == 0
println("0 polyakov loop =(real(poly)) (imag(poly))")
end
gauge_action = GaugeAction(U)
plaqloop = make_loops_fromname("plaquette")
append!(plaqloop,plaqloop')
β = β/2
push!(gauge_action,β,plaqloop)
p = initialize_TA_Gaugefields(U) #This is a traceless-antihermitian gauge fields. This has NC^2-1 real coefficients.
Uold = similar(U)
substitute_U!(Uold,U)
MDsteps = 100
temp1 = similar(U[1])
temp2 = similar(U[1])
comb = 6
factor = 1/(comb*U[1].NV*U[1].NC)
numaccepted = 0
numtrj = 100
for itrj = 1:numtrj
t = @timed begin
accepted = MDstep!(gauge_action,U,p,MDsteps,Dim,Uold,temp1,temp2)
end
if get_myrank(U) == 0
println("elapsed time for MDsteps:(t.time) [s]")
end
numaccepted += ifelse(accepted,1,0)
if itrj
plaq_t = calculate_Plaquette(U,temp1,temp2)*factor
if get_myrank(U) == 0
println("itrj plaq_t =plaq_t")
end
poly = calculate_Polyakov_loop(U,temp1,temp2)
if get_myrank(U) == 0
println("itrj polyakov loop =(real(poly)) (imag(poly))")
println("acceptance ratio ",numaccepted/itrj)
end
end
end
return plaq_t,numaccepted/numtrj
end
function main()
β = 5.7
NX = 8
NY = 8
NZ = 8
NT = 8
NC = 3
HMC_test_4D(NX,NY,NZ,NT,NC,β)
end
main()
The command is like:
[language=JuliaLocal,style=julia]
mpiexecjl -np 2 julia mpi_sample.jl 1 1 1 2 true
We can also use MPI in LatticeDiracOperators.jl.
§ LATTICEDIRACOPERATORS.JL
LatticeDiracOperators.jl handles fermions on a lattice.
This package have the following functionalities:
* Constructing actions and its derivative for staggered fermion with 1-8 tastes with the use of the rational HMC
* Constructing actions and its derivative for Wilson fermion
* Constructing actions and its derivative for Standard Domainwall fermion
* Hybrid Monte Carlo method with fermions
With the use of the Gaugefields.jl, we can also do the HMC with stout smearing.
This package can be regarded as the additional package of the Gaugefields.jl to treat with lattice fermions.
§.§.§ Definition of pseudo-fermion fields
The pseudo-fermion fields can be defined as
[language=JuliaLocal,style=julia]
using Gaugefields
using LatticeDiracOperators
NX = 4
NY = 4
NZ = 4
NT = 4
Nwing = 0
Dim = 4
NC = 3
U = Initialize_4DGaugefields(NC,Nwing,NX,NY,NZ,NT,condition = "cold")
x = Initialize_pseudofermion_fields(U[1],"Wilson")
Here, x is a pseudo fermion fields for Wilson Dirac operator.
The element of x is x[ic,ix,iy,iz,it,ialpha].
ic is an index of the color. ialpha is the internal degree of the gamma matrix.
The staggered fermions can be defined as
[language=JuliaLocal,style=julia]
x = Initialize_pseudofermion_fields(U[1],"staggered")
If one wants to obtain the Gaussian distributed pseudo-fermions, the code is written as
[language=JuliaLocal,style=julia]
gauss_distribution_fermion!(x)
§.§ Definition of Dirac operators
The Dirac operators are important basic parts in lattice QCD simulations.
The Wilson Dirac operator can be defined as
[language=JuliaLocal,style=julia]
params = Dict()
params["Dirac_operator"] = "Wilson"
params["κ"] = 0.141139
params["eps_CG"] = 1.0e-8
params["verbose_level"] = 2
D = Dirac_operator(U,x,params)
We can treat the Dirac operator as a matrix.
Thus, we can apply the Dirac operator to the pseudo-fermion fields as follows.
[language=JuliaLocal,style=julia]
using LinearAlgebra
y = similar(x)
mul!(y,D,x)
And we can solve the equationDx = b:
[language=JuliaLocal,style=julia]
solve_DinvX!(y,D,x)
The convergence property can be seen by setting "" flag:
[language=JuliaLocal,style=julia]
params["verbose_level"] = 3
D = Dirac_operator(U,x,params)
gauss_distribution_fermion!(x)
solve_DinvX!(y,D,x)
println(y[1,1,1,1,1,1])
The adjoint of the Dirac operatorD^†andD^† Doperator can be defined as
[language=JuliaLocal,style=julia]
gauss_distribution_fermion!(x)
solve_DinvX!(y,D',x)
println(y[1,1,1,1,1,1])
DdagD = DdagD_operator(U,x,params)
gauss_distribution_fermion!(x)
solve_DinvX!(y,DdagD,x)
println(y[1,1,1,1,1,1])
We can similarly define the Dirac operator for the staggered fermion as follows:
[language=JuliaLocal,style=julia]
x = Initialize_pseudofermion_fields(U[1],"staggered")
gauss_distribution_fermion!(x)
params = Dict()
params["Dirac_operator"] = "staggered"
params["mass"] = 0.1
params["eps_CG"] = 1.0e-8
params["verbose_level"] = 2
D = Dirac_operator(U,x,params)
y = similar(x)
mul!(y,D,x)
println(y[1,1,1,1,1,1])
solve_DinvX!(y,D,x)
println(y[1,1,1,1,1,1])
The "tastes" of the staggered fermion is defined in the action.
§.§ Definition of fermion actions
With the use of the LatticeDiracOperators.jl, we can define actions for pseudo-fermions.
The sample codes are written as
[language=JuliaLocal,style=julia]
NX = 4
NY = 4
NZ = 4
NT = 4
Nwing = 0
Dim = 4
NC = 3
U = Initialize_4DGaugefields(NC, Nwing, NX, NY, NZ, NT, condition="cold")
x = Initialize_pseudofermion_fields(U[1], "Wilson")
gauss_distribution_fermion!(x)
params = Dict()
params["Dirac_operator"] = "Wilson"
params["κ"] = 0.141139
params["eps_CG"] = 1.0e-8
params["verbose_level"] = 2
D = Dirac_operator(U, x, params)
parameters_action = Dict()
fermi_action = FermiAction(D, parameters_action)
Then, the fermion action with given pseudo-fermion fields is evaluated as
[language=JuliaLocal,style=julia]
Sfnew = evaluate_FermiAction(fermi_action,U,x)
println(Sfnew)
We can also calculate the derivative of the fermion actiondSf/dUas
[language=JuliaLocal,style=julia]
UdSfdUμ = calc_UdSfdU(fermi_action,U,x)
The function calculates theU dSf/dU.
We can also use .
In the case of the staggered fermion, we can choose "taste".
The action is defined as
[language=JuliaLocal,style=julia]
x = Initialize_pseudofermion_fields(U[1],"staggered")
gauss_distribution_fermion!(x)
params = Dict()
params["Dirac_operator"] = "staggered"
params["mass"] = 0.1
params["eps_CG"] = 1.0e-8
params["verbose_level"] = 2
D = Dirac_operator(U,x,params)
Nf = 2
println("Nf = Nf")
parameters_action = Dict()
parameters_action["Nf"] = Nf
fermi_action = FermiAction(D,parameters_action)
Sfnew = evaluate_FermiAction(fermi_action,U,x)
println(Sfnew)
UdSfdUμ = calc_UdSfdU(fermi_action,U,x)
This package uses the RHMC techniques to consider the tastes.
§.§ Hybrid Monte Carlo
We show a sample code for the Hybrid Monte Carlo method with pseudo-fermion fields.
The codes are written as
[language=JuliaLocal,style=julia]
using Gaugefields
using LatticeDiracOperators
using LinearAlgebra
using InteractiveUtils
using Random
function MDtest!(gauge_action,U,Dim,fermi_action,η,ξ)
p = initialize_TA_Gaugefields(U) #This is a traceless-antihermitian gauge fields. This has NC^2-1 real coefficients.
Uold = similar(U)
substitute_U!(Uold,U)
MDsteps = 10
temp1 = similar(U[1])
temp2 = similar(U[1])
comb = 6
factor = 1/(comb*U[1].NV*U[1].NC)
numaccepted = 0
Random.seed!(123)
numtrj = 10
for itrj = 1:numtrj
@time accepted = MDstep!(gauge_action,U,p,MDsteps,Dim,Uold,fermi_action,η,ξ)
numaccepted += ifelse(accepted,1,0)
plaq_t = calculate_Plaquette(U,temp1,temp2)*factor
println("itrj plaq_t =plaq_t")
println("acceptance ratio ",numaccepted/itrj)
end
end
function calc_action(gauge_action,U,p)
NC = U[1].NC
Sg = -evaluate_GaugeAction(gauge_action,U)/NC
Sp = p*p/2
S = Sp + Sg
return real(S)
end
function MDstep!(gauge_action,U,p,MDsteps,Dim,Uold,fermi_action,η,ξ)
Δτ = 1/MDsteps
NC,_,NN... = size(U[1])
gauss_distribution!(p)
substitute_U!(Uold,U)
gauss_sampling_in_action!(ξ,U,fermi_action)
sample_pseudofermions!(η,U,fermi_action,ξ)
Sfold = real(dot(ξ,ξ))
println("Sfold = Sfold")
Sold = calc_action(gauge_action,U,p) + Sfold
println("Sold = ",Sold)
for itrj=1:MDsteps
U_update!(U,p,0.5,Δτ,Dim,gauge_action)
P_update!(U,p,1.0,Δτ,Dim,gauge_action)
P_update_fermion!(U,p,1.0,Δτ,Dim,gauge_action,fermi_action,η)
U_update!(U,p,0.5,Δτ,Dim,gauge_action)
end
Sfnew = evaluate_FermiAction(fermi_action,U,η)
println("Sfnew =Sfnew")
Snew = calc_action(gauge_action,U,p) + Sfnew
println("Sold = Sold, Snew =Snew")
println("Snew - Sold = (Snew-Sold)")
accept = exp(Sold - Snew) >= rand()
if accept != true #rand() > ratio
substitute_U!(U,Uold)
return false
else
return true
end
end
function U_update!(U,p,ϵ,Δτ,Dim,gauge_action)
temps = get_temporary_gaugefields(gauge_action)
temp1 = temps[1]
temp2 = temps[2]
expU = temps[3]
W = temps[4]
for μ=1:Dim
exptU!(expU,ϵ*Δτ,p[μ],[temp1,temp2])
mul!(W,expU,U[μ])
substitute_U!(U[μ],W)
end
end
function P_update!(U,p,ϵ,Δτ,Dim,gauge_action) # p -> p +factor*U*dSdUμ
NC = U[1].NC
temps = get_temporary_gaugefields(gauge_action)
dSdUμ = temps[end]
factor = -ϵ*Δτ/(NC)
for μ=1:Dim
calc_dSdUμ!(dSdUμ,gauge_action,μ,U)
mul!(temps[1],U[μ],dSdUμ) # U*dSdUμ
Traceless_antihermitian_add!(p[μ],factor,temps[1])
end
end
function P_update_fermion!(U,p,ϵ,Δτ,Dim,gauge_action,fermi_action,η)
temps = get_temporary_gaugefields(gauge_action)
UdSfdUμ = temps[1:Dim]
factor = -ϵ*Δτ
calc_UdSfdU!(UdSfdUμ,fermi_action,U,η)
for μ=1:Dim
Traceless_antihermitian_add!(p[μ],factor,UdSfdUμ[μ])
end
end
function test1()
NX = 4
NY = 4
NZ = 4
NT = 4
Nwing = 0
Dim = 4
NC = 3
U = Initialize_4DGaugefields(NC,Nwing,NX,NY,NZ,NT,condition = "cold")
gauge_action = GaugeAction(U)
plaqloop = make_loops_fromname("plaquette")
append!(plaqloop,plaqloop')
β = 5.5/2
push!(gauge_action,β,plaqloop)
show(gauge_action)
x = Initialize_pseudofermion_fields(U[1],"Wilson")
params = Dict()
params["Dirac_operator"] = "Wilson"
params["κ"] = 0.141139
params["eps_CG"] = 1.0e-8
params["verbose_level"] = 2
D = Dirac_operator(U,x,params)
parameters_action = Dict()
fermi_action = FermiAction(D,parameters_action)
y = similar(x)
MDtest!(gauge_action,U,Dim,fermi_action,x,y)
end
test1()
We can easily switch the Wilson fermion to the staggered fermions.
§ QCDMEASUREMENTS.JL
This package has following functionalities
* Plaquette measurement.
* Polyakov loop measurement.
* Pion correlator measurement.
* Chiral condensate measurement.
* Topological charge measurement.
* Energy density measurement.
* Wilson loop measurement
§.§ Sample
To measure variables, one have to make an instance
[language=JuliaLocal,style=julia]
m_plaq = Plaquette_measurement(U)
By using this , one can measure and get the plaquette:
[language=JuliaLocal,style=julia]
plaq = get_value(measure(m_plaq,U))
The example is shown as follows.
[language=JuliaLocal,style=julia]
using QCDMeasurements
using Gaugefields
function test()
println("SU3test")
NX = 4
NY = 4
NZ = 4
NT = 4
Nwing = 0
Dim = 4
NC = 3
U = Initialize_4DGaugefields(NC,Nwing,NX,NY,NZ,NT,condition = "cold")
filename = "testconf.txt"
L = [NX,NY,NZ,NT]
load_BridgeText!(filename,U,L,NC)
m_plaq = Plaquette_measurement(U)
m_poly = Polyakov_measurement(U)
plaq = get_value(measure(m_plaq,U))
poly = get_value(measure(m_poly,U))
println("plaq: plaq")
println("poly:poly")
m_energy = Energy_density_measurement(U)
m_topo = Topological_charge_measurement(U)
energy = get_value(measure(m_energy,U))
topo = get_value(measure(m_topo,U))
println("energy: energy")
println("topo:topo")
m_wilson = Wilson_loop_measurement(U,printvalues=true)
wilsonloop = get_value(measure(m_wilson,U))
println("wilson loop: ",wilsonloop)
m_pion = Pion_correlator_measurement(U)
m_pion_Staggered = Pion_correlator_measurement(U,fermiontype = "Staggered")
m_pion_Wilson = Pion_correlator_measurement(U,fermiontype = "Wilson")
pion = get_value(measure(m_pion,U))
pion_s = get_value(measure(m_pion_Staggered,U))
pion_w = get_value(measure(m_pion_Wilson,U))
println("pion: pion")
println("pion correlator with Staggered fermion:pion_s")
println("pion correlator with Wilson fermion: pion_w")
m_chiral_Staggered = Chiral_condensate_measurement(U,fermiontype = "Staggered")
m_chiral_Wilson = Chiral_condensate_measurement(U,fermiontype = "Wilson")
chiral_s = get_value(measure(m_chiral_Staggered,U))
chiral_w = get_value(measure(m_chiral_Wilson,U))
println("Chiral condensate with Staggered fermion:chiral_s")
println("Chiral condensatewith Wilson fermion: chiral_w")
TC_methods = ["plaquette","clover"]
m_topo = Topological_charge_measurement(U,TC_methods = TC_methods)
g = Gradientflow(U)
for itrj=1:100
flow!(U,g)
@time plaq_t = get_value(measure(m_plaq,U))
@time poly = get_value(measure(m_poly,U))
println("itrj plaq_t = plaq_t")
println("itrj polyakov loop = (real(poly))(imag(poly))")
@time topo = get_value(measure(m_topo,U))
print("itrj topological charge: ")
for (key,value) in topo
print("key value ")
end
println("")
end
end
test()
One can also use the dictionary type:
[language=JuliaLocal,style=julia]
using QCDMeasurements
using Gaugefields
function SU3test()
println("SU3test")
NX = 4
NY = 4
NZ = 4
NT = 4
Nwing = 0
Dim = 4
NC = 3
U = Initialize_4DGaugefields(NC,Nwing,NX,NY,NZ,NT,condition = "cold")
filename = "testconf.txt"
L = [NX,NY,NZ,NT]
load_BridgeText!(filename,U,L,NC)
method = Dict()
methodname = "Eigenvalue"
method["methodname"] = methodname
method["fermiontype"] = "Wilson"
κ = 0.141139
method["hop"] = κ
method["nev"] = 1 #number of eigenvalues
m = prepare_measurement_from_dict(U,method)
value,vectors = get_value(measure(m,U)) #eigenvalues and eigenvectors
println("methodnamevalue")
method = Dict()
methodname = "Pion_correlator"
method["methodname"] = methodname
method["fermiontype"] = "Staggered"
method["mass"] = 1
method["Nf"] = 4
m = prepare_measurement_from_dict(U,method)
value = get_value(measure(m,U))
println("methodnamevalue")
method = Dict()
methodname = "Pion_correlator"
method["methodname"] = methodname
method["fermiontype"] = "Wilson"
method["hop"] = 1
m = prepare_measurement_from_dict(U,method)
value = get_value(measure(m,U))
println("methodnamevalue")
methodsname = ["Plaquette","Polyakov_loop","Topological_charge","Chiral_condensate",
"Pion_correlator","Energy_density","Wilson_loop","Eigenvalue"]
method = Dict()
for methodname in methodsname
method["methodname"] = methodname
m = prepare_measurement_from_dict(U,method)
value = get_value(measure(m,U))
if methodname == "Eigenvalue"
println("methodname(value[1])")
else
println("methodname(value)")
end
end
end
SU3test()
§ CONSISTENCY CHECK
We compared results of LatticeQCD.jl to the following papers and codes
* N_f=4 SU(3) staggered HMC <cit.>
* Quenched SU(2) improved thermodynamics <cit.>
* RHMC <cit.>
* HMC for Wilson and Clover Wilson fermions https://github.com/tsuchim/Lattice-Tool-KitLattice Tool Kit
* Pion correlator with the Wilson-Dirac operator <cit.>
* Pion correlator with the staggered Dirac operator <cit.> |
http://arxiv.org/abs/2409.02670v1 | 20240904125820 | Precise asymptotics of the spin $+2$ Teukolsky field in the Kerr black hole interior | [
"Sebastian Gurriaran"
] | gr-qc | [
"gr-qc",
"math.AP",
"35Q75, 58J37, 58J45, 58K55, 83C57, 83C75"
] |
§ ABSTRACT
Using a purely physical-space analysis, we prove the precise oscillatory blow-up asymptotics of the spin +2 Teukolsky field in the interior of a subextremal Kerr black hole. In particular, this work gives a new proof of the blueshift instability of the Kerr Cauchy horizon against linearized gravitational perturbations that was first shown by Sbierski <cit.>. In that sense, this work supports the Strong Cosmic Censorship conjecture in Kerr spacetimes. The proof is an extension to the Teukolsky equation of the work <cit.> by Ma and Zhang that treats the scalar wave equation in the interior of Kerr. The analysis relies on the generic polynomial decay on the event horizon of solutions of the Teukolsky equation that arise from compactly supported initial data, as recently proved by Ma and Zhang <cit.> and Millet <cit.> in subextremal Kerr.
Independence Constrained Disentangled Representation Learning from Epistemological Perspective
Ruoyu Wang 1 Lina Yao 1,2
September 9, 2024
==============================================================================================
§ INTRODUCTION
§.§ The Kerr black hole interior
We begin with a review of the main features of the interior of Kerr black holes. The Kerr metric describes the spacetime around and inside a rotating non-charged black hole. It is a two-parameter family of stationary and axisymmetric solutions of the Einstein vacuum equations that write, denoting 𝐑𝐢𝐜[𝐠] the Ricci tensor associated to a Lorentzian metric 𝐠,
𝐑𝐢𝐜[𝐠]=0.
The two parameters are the mass M and the angular momentum per unit mass a of the black hole. In this work, we only consider subextremal Kerr with non-zero angular momentum, i.e. such that 0<|a|<M. The Kerr metric is given in Boyer-Lindquist coordinates (t,r,θ,ϕ)∈ℝ^2×𝕊^2 by
𝐠_a,M=-(Δ-a^2sin^2θ)/q^2 t^2-4aMr/q^2sin^2θ t ϕ+q^2/Δ r^2+q^2θ^2+Σ^2/q^2sin^2θϕ^2 ,
where Δ:=r^2-2rM+a^2, q^2:=r^2+a^2cos^2θ, Σ^2:=(r^2+a^2)^2-a^2sin^2θΔ. We define
μ:=Δ/r^2+a^2, r_±:=M±√(M^2-a^2),
where r_± are the roots of Δ. The singularities {r=r_±} of the metric are coordinate singularities that vanish when considering Eddington-Finkelstein coordinates
u:=r^*-t, :=r^*+t, ϕ_±:=ϕ± r_mod mod 2π,
where r^*/ r=μ^-1, r_mod/ r=a/Δ, see Section <ref>. The event horizon {r=r_+} and the Cauchy horizon {r=r_-} can then be properly attached to the Lorentzian manifold (r_-,r_+)××𝕊^2 equipped with the metric 𝐠_a,M, see <cit.> for more details. We will mainly be interested in a region containing the right event horizon and the right Cauchy horizon that are respectively defined by
_+:={r=r_+}∩{u=-∞}, :={r=r_-}∩{=+∞}.
Note that a result similar to our main theorem can be deduced in a region containing the left event and Cauchy horizons, that are defined by
_+':={r=r_+}∩{=-∞}, 𝒞ℋ'_+:={r=r_-}∩{u=+∞}.
We denote ^2_:='_+∩_+ and ^2_𝒞ℋ:=𝒞ℋ'_+∩ the bifurcations spheres, and i_+, ℐ_+ (resp. i_+', ℐ_+') the right (resp. left) timelike and null infinities.
In this work, by 'Kerr interior' we mean the resulting Lorentzian manifold (,𝐠_a,M) which is ((r_-,r_+)××𝕊^2,𝐠_a,M) to which we attach its boundaries, the event and Cauchy horizons. We will be interested in the asymptotics at of solutions to the Teukolsky equation, that arise from compactly supported initial data on a spacelike hypersurface Σ_0. See Figure <ref> for the Penrose diagram of Kerr interior and an illustration of the hypersurface Σ_0.
§.§ Teukolsky equations
Teukolsky <cit.> found that, when linearising a gravitational perturbation of a Kerr black hole in the Newman-Penrose formalism, two curvature components decouple from the linearised gravity system and satisfy wave equations, called the Teukolsky equations. More precisely, we first define the pair of null vector fields given in Boyer-Lindquist coordinates by
e_3:=1/2(-∂_r+r^2+a^2/Δ∂_t+a/Δ∂_ϕ), e_4:=1/2(Δ/r^2+a^2∂_r+∂_t+a/r^2+a^2∂_ϕ),
which is aligned with the principal null directions of the Kerr spacetime, and which is regular on _+. We then define the rescaled null pair
:=(-μ) e_3, :=(-μ)^-1 e_4,
which is regular on . We also define the slightly modified null pair
n:=-(r^2+a^2)/q^2=[(r^2+a^2)∂_t-Δ∂_r+a∂_ϕ]/(2q^2), l:=-2=r^2+a^2/Δ∂_t+∂_r+a/Δ∂_ϕ,
and the complex vector field
m:=1/√(2)(r+iacosθ)(iasinθ∂_t+∂_θ+i/sinθ∂_ϕ).
Then, denoting 𝐑̇ the linearized curvature tensor, the scalars
:=𝐑̇_lmlm, :=(r-iacosθ)^4𝐑̇_nmnm,
are called respectively the spin +2 and spin -2 Teukolsky scalars. They satisfy the Teukolsky equations that write, for s=± 2,
-[(r^2+a^2)^2/Δ-a^2 sin ^2 θ] ∂_t^2_s -4 M a r/Δ∂_t∂_ϕ_s -[a^2/Δ-1/sin ^2 θ] ∂_ϕ^2_s
+Δ^-s∂_r(Δ^s+1∂_r _s)+1/sinθ∂_θ(sinθ∂_θ_s)+2 s[a(r-M)/Δ+i cosθ/sin ^2 θ] ∂_ϕ_s
+2 s[M(r^2-a^2)/Δ-r-i a cosθ]∂_t_s -[s^2 cos ^2 θ/sin ^2 θ-s]_s=0.
The rescaled scalars
:=Δ^2, :=Δ^-2,
satisfy a rescaled version of the Teukolsky equation, see Section <ref> for the precise definition of the Teukolsky wave operators. Notice that and are projections of the linearized curvature on a frame that is regular on _+, while , are projections of the linearized curvature on a frame that is regular on . Thus the main result of this work, namely the blow-up asymptotics of at , is a linear curvature instability statement for the Kerr Cauchy horizon.
The Teukolsky equations were originally introduced to study the stability of the exterior of Kerr black holes. For a review of the litterature concerning the Teukolsky equations, see the introduction of <cit.>, where the decay estimates for the nonlinear analog of the Teukolsky equations derived in <cit.> are used to prove the nonlinear stability of the exterior of slowly rotating Kerr black holes in <cit.>.
§.§ Blueshift effect and Strong Cosmic Censorship conjecture
§.§.§ Blueshift effect
The Kerr spacetime is globally hyperbolic only up to the Cauchy horizon ∪𝒞ℋ'_+. Thus the part of the spacetime that is beyond the Cauchy horizon, in the region {r≤ r_-}, can be thought as unphysical as it is not determined by the data on any spacelike hypersurface inside {r≥ r_-}. In other words, the Kerr spacetime can be extended across the Cauchy horizon as a regular solution of the Einstein vacuum equations (<ref>) in infinitely many ways, which goes against determinism in general relativity.
However, it is expected that this unphysical feature is just an artefact of the ideal Kerr spacetime. Indeed, realistic astrophysical black holes are perturbations of Kerr, i.e. they are the maximal globally hyperbolic developpement of initial data close to the one of Kerr. It is expected that these perturbations kill the non-physical features of ideal Kerr. In the linear setting, this is the so-called blueshift effect, introduced by Simpson and Penrose in 1972 <cit.>. It is a heuristic argument according to which the geometry of Kerr interior (they initially wrote the argument for Reissner–Nordström)
forces propagating waves to blow-up in some way at the Cauchy horizon. This effect is illustrated on Figure <ref>, and is linked to the Strong Cosmic Censorship conjecture.
§.§.§ Strong Cosmic Censorship conjecture
The Strong Cosmic Censorship (SCC) conjecture was formulated by Penrose in <cit.>, and, in its rough version, states the following :
The maximal globally hyperbolic developpement (MGHD) of generic initial data for the Einstein equations is inextendible.
In other words, this conjecture states that the unphysical, non-deterministic behavior of spacetimes with non-empty Cauchy horizons (for example Kerr and Reissner-Nordström spacetimes) is non-generic, and thus vanishes upon small perturbations. See <cit.> for more modern versions of the SCC conjecture.
A fundamental question in SCC is the regularity for which the MGHD of initial data should be inextendible. The C^0 formulation of SCC was disproved in Kerr by Dafermos and Luk <cit.>. They showed that generic perturbations of the interior of Kerr still present a Cauchy horizon across which the metric is continuously extendible. They also argued that the perturbed Cauchy horizon may be a so-called weak null singularity, which is a singularity weaker than a spacelike curvature singularity as in Schwarzschild. For references on weak null singularities, see <cit.>, <cit.>. See <cit.> for a link between weak null singularities and the C^0,1_loc formulation of SCC.
In spherical symmetry, the C^2 instability of the Cauchy horizon for the model of the Einstein-Maxwell-scalar field system was proven in <cit.> and <cit.>, extending the results of <cit.>. See also <cit.> for analog results for the Einstein-Maxwell-Klein-Gordon equations. In Kerr, which is only axisymmetric, the full nonlinear problem for the Einstein equations is still open, and we focus in this paper on the model of linearized gravity, where the Teukolsky scalar represents a specific component of the perturbed curvature tensor. Our main result, namely the blow-up of on , thus supports the SCC conjecture in the linearized setting.
§.§ Black hole interior perturbations
§.§.§ Results related to Price's law
The starting point to prove the instability of solutions of the Teukolsky equation in Kerr interior is Price's law for Teukolsky, i.e. the polynomial lower bound on the event horizon for solutions of the Teukolsky equation arising from compactly supported initial data, see <cit.> for the original works on Price's law. A version of Price's law for the Teukolsky equations in Kerr was heuristically found by Barack and Ori in <cit.>.
In this paper we use the precise Price's law asymptotics on _+ given by Ma and Zhang in <cit.> for the Teukolsky equations[The Price's law in <cit.> holds for |a|≪ M, and for |a|<M conditionally on an energy-Morawetz bound. This energy-Morawetz estimate has since been proved for |a|<M by Teixeira da Costa and Shlapentokh-Rothman in <cit.>, so that the Price's law in <cit.> holds for the full range |a|<M.]. For another proof of the polynomial lower bound in the full subextremal range |a|<M for solutions of the Teukolsky equation, see the work of Millet <cit.>, that uses spectral methods. For a complete account of results related to Price's law, see <cit.>.
§.§.§ Previous results on black hole interior perturbations
The first works on the linear instability of the Cauchy horizons in Kerr and Reissner-Nordström black holes consisted in finding explicit solutions that become unbounded in some way at the Cauchy horizon, see for example <cit.>. In <cit.>, a heuristic power tail asymptotic for scalar waves in the interior of Kerr black holes was obtained. Regarding the Teukolsky equations, the oscillatory blow-up asymptotic of our main result in the interior of Kerr black holes, see (<ref>), was first predicted heuristically by Ori <cit.>, writing the azimuthal m-mode of the solution as a late-time expansion ansatz of the form
∑_kψ_k(r,θ)t^-k.
The asymptotic behavior (<ref>) was also confirmed in a numerical simulation <cit.>.
A rigorous boundedness statement for solutions of the scalar wave equation inside the spherically symmetric Reissner-Nordström spacetime was proven in <cit.>. Still for the scalar wave equation in the interior of Reissner-Nordström black holes, the blow-up of the energy of generic scalar waves was obtained in <cit.>. A scattering approach to Cauchy horizon instability in Reissner-Nordström, as well as an application to mass inflation, was presented in <cit.>, on top of the non-linear instability results <cit.> already mentionned in Section <ref>.
In Kerr, for the scalar wave equation, a generic blow-up result for the energy of solutions on the Cauchy horizon was obtained in <cit.>, while the boundedness of solutions at the Cauchy horizon was proven in <cit.> in the slowly rotating case. The boundedness result was then extended to the full subextremal range in <cit.>. A construction of solutions that remain bounded but have infinite energy at the Cauchy horizon was presented in <cit.>. Finally, the precise asymptotics of the scalar field in the interior of a Kerr black hole was proven in <cit.> using a purely physical-space analysis.
Concerning the Teukolsky equations in Kerr interior, the method of proof of <cit.> was extended to the spin +2 Teukolsky equation in the work <cit.>, that proved the blow-up of a weighted L^2 norm on a hypersurface transverse to the Cauchy horizon, relying on frequency analysis.
The goal of the present paper is to rigorously prove the oscillatory blow-up asymptotics of the spin +2 Teukolsky field in the Kerr black hole interior, by extending the physical-space approach of <cit.> to Teukolsky equations, thus providing a new proof of the blow-up results of <cit.>.
We discussed here the references on black hole interior perturbations that are the most relevant to this work. For a more complete account of the results related to black hole interior perturbations, for example in Schwarschild interior or in the cosmological setting, see the introduction in <cit.>.
§.§ Rough version of the main theorem
This paper rigorously proves the blueshift instability on the Kerr Cauchy horizon for solutions of the spin +2 Teukolsky equations, by finding the precise oscillatory blow-up asymptotics of the spin +2 Teukolsky scalar. The rough version of the main result of this paper is the following, see Theorem <ref> for the precise formulation.
Assume that the Price's law
polynomial lower bounds on the event horizon for solutions of the Teukolsky equations hold, as proven in <cit.>. Denote the spin +2 Teukolsky scalar obtained in a principal null frame regular on the Cauchy horizon. Then blows up at the Cauchy horizon, exponentially in the Eddington-Finkelstein coordinate , and oscillates at a frequency that blows up at the Cauchy horizon. More precisely,
* the amplitude || is proportionnal to Δ^-2(u,)/^7,
* the oscillation frequency blows-up like log(r-r_-).
We remark the following :
* Anticipating some of the notations that will be introduced later on, we actually show the following precise asymptotic behavior near :
∼Δ^-2(u,)/^7∑_|m|≤ 2A_m(r_-)Q_m,2e^2imr_mod(u,)Y_m,2^+2(cosθ)e^imϕ_-,
where the constants Q_m,2 depend on the initial data on Σ_0 and are generically non-zero, the constants A_m(r_-) are non-zero for m≠ 0 (see Remark <ref>), ϕ_- is an angular coordinate that is regular on , the functions Y_m,2^+2(cosθ) are the s=+2 spin-weighted spherical harmonics, and r_mod∼log(r-r_-) near , see Sections <ref> and <ref> for more details.
* The blueshift instability at for was first proven recently by Sbierski <cit.> who showed the blow-up of a weighted L^2 norm along a hypersurface transverse to . This result suggests a blow-up that is exponential in the Eddington-Finkelstein coordinate , the Cauchy horizon corresponding to =+∞. We prove in this paper a pointwise exponential blow-up, along with an oscillatory behavior, which were both heuristically predicted by Ori <cit.>.
* We will use the version of Price's law proven in <cit.>. It holds true in subextremal Kerr conditionally on the existence of an energy and Morawetz estimate for the Teukolsky equation in the whole subextremal range |a|<M, which was recently proven in <cit.>.
§.§ Structure of the proof
Although our main result will be about the precise asymptotics of at , we will actually also obtain precise estimates for near _+, and we will use the Teukolsky-Starobinsky identities (see Section <ref>) to link and there.
For s=± 2, we denote _s the differential operator on the left-hand side of (<ref>), called the Teukolsky operator, and _s=Δ^s_s(Δ^-s · ) the rescaled Teukolsky operator, such that
_s_s=0, _sψ_s=0.
The analysis is done entirely in physical space, using energy estimates to prove upper bounds on 'error' quantities, in the hope that this method of proof is robust enough to be applied in a more general setting. The first step is to notice that the energy estimates to get polynomial upper bounds done in <cit.> for the scalar wave equation _gψ=0 in the Kerr interior can be extended to the Teukolsky equations _sψ_s=0, but only for negative spin close to _+, and for positive spin close to . This is because we need a fixed sign for the scalar s(r-M) that appears at crucial places in the energy estimates, where s=± 2 is the spin, and because r_+-M>0 while r_–M<0.
We denote the region containing _+, and the regions close to where Δ has exponential decay in , and the intermediate region between and ∪. See Figure <ref> and Section <ref> for the precise definitions of the regions. The assumptions on _+ that we will use are the ones given by <cit.>, i.e. that the error quantities
:=-,
:=-,
are bounded by ^-7-δ on _+∩{≥ 1} where δ>0, see Sections <ref> and <ref> for the definitions of Q_m,2, A_m(r), Y_m,2^± 2(cosθ), ϕ_+. The main steps of the proof then go as follows :
* First, we propagate Price's law lower bound for in . To do this, we propagate the O(^-7-δ) upper bound of from _+ to using a redshift energy estimate. This is only possible in region that contains the event horizon.
* We then use the Teukolsky-Starobinsky identity (<ref>) to propagate in the O(^-7-δ) Price's law upper bound on _+ for .
* Next, similarly as in we propagate the lower bound for in using an effective blueshift energy estimate for , which is only possible for r close to r_-. This step in region is necessary to propagate the lower bound up to where the analysis becomes more delicate (namely, where we can't anymore get sharp decay of the energy from an energy estimate because of the geometry of the Kerr black hole interior near the Cauchy horizon).
* We then
obtain a non-sharp L^∞ bound for in using an energy estimate.
* To get the blow-up asymptotics in , we rewrite the Teukolsky equation _+2=0 as a 1+1 wave equation
((r^2+a^2)Δ^2e_4)=O(Δ),
where we use the previous L^∞ bound to control the right-hand-side. This is where we can effectively see the blueshift heuristic in action. The geometry of Kerr interior is such that is regular on , and reporting that in the Teukolsky equation gives a factor Δ on the source term that becomes negligeable in . Integrating (<ref>) from Γ directly gives the announced blow-up asymptotics for . This is where we use the definition of region : Δ decays exponentially in in , which easily bounds the error terms.
* We cannot integrate the 1+1 wave equation in because the slices {u=cst} starting inside do not cross Γ. Instead, commuting e_3 and the Teukolsky operator, we estimate e_3 in , which allows us to propagate the blow-up of from to by integration.
Here are some further remarks on the analysis :
* To control the derivatives of the error terms, we commute the Teukolsky equation with operators that have good commutations properties, namely ∂_t, ∂_ϕ, e_3, and the Carter operator.
* Note that, at any point in the analysis, in any region , , , , any estimate that we get can be turned into an estimate in the symmetric region {u≥ 1} by replacing with u, ψ_s with _-s, e_3 with and e_4 with . As in <cit.>, this will be useful in region .
Here are some differences from the previous works in the method of proof :
* Our physical-space proof differs substantially from <cit.> that relies in a crucial way on frequency analysis.
* In <cit.> for the scalar wave equation, the proof is made by decomposing the scalar field ψ onto its ℓ=0 and ℓ≥ 1 modes : ψ=ψ_ℓ=0+ψ_ℓ≥ 1. Using the decay of both quantities on the event horizon, it is shown, commuting the wave equation with the projection on the modes, that the ^-3 lower bound for ψ_ℓ=0 and the O(^-4-δ) better decay of ψ_ℓ≥ 1 propagate. This is different from our proof, where there is no projection on the (spin-weighted) spherical harmonics. We do not need to assume that the modes ℓ≥ 3 of ψ_± 2 decay better than ^-7 on _+. Moreover, in the blueshift region (i.e. for r close to r_-) we will use easier estimates than <cit.> that introduces log multipliers, taking advantage of the fact that we deal with a non-zero spin.
§.§ Overview of the paper
In Section <ref>, we introduce the geometric background, the operators, the coordinates that we will use in the analysis, as well as the system of equations that we consider. In Section <ref>, we recall the decay assumptions on the event horizon, the so-called Price's law, and we write the precise version of the main theorem. In Section <ref>, we obtain the redshift energy estimates in for and use the Teukolsky-Starobinsky identity to get the lower bound for in . Section <ref> deals with the effective blueshift energy estimates to propagate the lower bound for in . In Section <ref>, we get the non-sharp L^∞ bound for in and we compute and integrate the 1+1 wave equation that will eventually give the precise oscillatory blow-up asymptotics for in , before propagating this blow-up to region .
§.§ Acknowledgments
The author would like to thank Jérémie Szeftel for his support and many helpful discussions, and Siyuan Ma for helpful suggestions. This work was partially supported by ERC-2023 AdG 101141855 BlaHSt.
§ PRELIMINARIES
We first introduce some notations. By 'LHS' and 'RHS' we mean respectively 'left-hand side' and 'right-hand side'. If P is an operator acting on a (spin-weighted) scalar ψ, for an integer k≥ 1 and for any norm ·, we use the notation
P^≤ kψ:=∑_j=0^kP^jψ.
If f,g are two non-negative scalars, we write f≲ g whenever there is a constant C>0 that depends only on the black hole parameters a,M, on the initial data, and on the smallness constants , γ, such that f≤ Cg on the region considered. We write f=O(g) when |f|≲ |g|. We write f∼ g when f≲ g and g≲ f.
§.§ Geometry of the interior of subextremal Kerr spacetimes
First, we fix a notation for the Boyer-Lindquist (B-L) coordinate Killing vector fields :
T:=∂_t, Φ:=∂_ϕ.
Note that when we write ∂_r, ∂_θ, we mean the B-L coordinate vector fields. We recall the definition of the null pair that we use :
e_3=1/2(-∂_r+r^2+a^2/ΔT+a/ΔΦ), e_4=1/2(Δ/r^2+a^2∂_r+T+a/r^2+a^2Φ).
This pair satisfies
𝐠_a,M(e_3,e_3)=𝐠_a,M(e_4,e_4)=0, 𝐠_a,M(e_3,e_4)=-q^2/2(r^2+a^2),
and is regular on _+, as can be seen by expressing e_3 and e_4 with the ingoing Eddington-Finkelstein coordinate vector fields, see below. We also define the rescaled null pair
=(-μ)e_3, =(-μ)^-1e_4,
that is regular on .
As recalled in the introduction, the B-L coordinates are not adapted to the geometry of Kerr, because they are singular at both the event and Cauchy horizons. In this paper, we will use both the Eddington-Finkelstein coordinates and double null-like coordinates (we borrow the terminology from <cit.>) introduced in Section <ref>. Notice that the scalar function
μ=Δ/r^2+a^2
vanishes only on the horizons, and that unlike in the Kerr black hole exterior region, μ≤ 0 on . Define the tortoise coordinate r^* by
r^*/ r=μ^-1, r^*(M)=0.
Notice that r^*→ -∞ as r→ r_+ and r^*→ +∞ as r→ r_-. More precisely, defining κ_+ and κ_- the surface gravities of the event and Cauchy horizons :
κ_+:=r_+-r_-/4Mr_+, κ_-:=r_–r_+/4Mr_-,
then the asymptotics of r^* at the horizons are given by
r^*=1/2κ_+ln(r_+-r)+h_+(r)=1/2κ_-ln(r-r_-)+h_-(r),
where h_±(r) has a finite limit as r→ r_±. Notice that (<ref>) implies that, for r close to r_+,
-Δ∼exp(2κ_+r^*),
while for r close to r_-,
-Δ∼exp(-2|κ_-|r^*).
Next, we define the coordinates
:=r^*+t, u:=r^*-t.
The range of the coordinates u,,r^* is indicated on Figure <ref>. Recall that the right event horizon corresponds to {u=-∞}, while the right Cauchy horizon corresponds to {=+∞}.
As in <cit.>, define also the function r_mod such that
r_mod/ r=a/Δ, r_mod(M)=0.
Then define the ingoing and outgoing angular coordinates
ϕ_+:=ϕ+r_mod mod 2π, ϕ_-:=ϕ-r_mod mod 2π.
The coordinates and ϕ_+ are regular on _+={u=-∞} and 𝒞ℋ'_+={u=+∞}, while u and ϕ_- are regular on '_+={=-∞} and ={=+∞}.
§.§.§ Ingoing Eddington-Finkelstein coordinates
The ingoing Eddington-Finkelstein coordinates are (_in=, r_in=r, θ_in=θ, ϕ_+). It is a set of coordinates that is regular on _+ and 𝒞ℋ'_+. The coordinate vector fields are
∂__in=T, ∂_r_in=-2e_3, ∂_θ_in=∂_θ, ∂_ϕ_+=Φ.
§.§.§ Outgoing Eddington-Finkelstein coordinates
The outgoing Eddington-Finkelstein coordinates are (u_out=u, r_out=r, θ_out=θ, ϕ_-). It is a set of coordinates that is regular on '_+ and . The coordinate vector fields are
∂_u_out=T, ∂_r_out=-2, ∂_θ_out=∂_θ, ∂_ϕ_-=Φ.
§.§.§ Double null-like coordinate systems
As in <cit.>, we also use the ingoing double null-like coordinates (,u,θ,ϕ_+) that are regular at _+ (where u=-∞), with coordinate vector fields
∂_=e_4-a/r^2+a^2Φ, ∂_u=-μ e_3, ∂_θ=Θ, ∂_ϕ_+=Φ,
where we denote Θ the B-L coordinate vector field ∂_θ to avoid confusion. The equivalent outgoing double null-like coordinates (,u,θ,ϕ_-) are regular at (where =+∞), with coordinate vector fields
∂_=e_4, ∂_u=-μ e_3+a/r^2+a^2Φ, ∂_θ=Θ, ∂_ϕ_+=Φ.
Note that we will only use the ingoing double null-like coordinate system in the redshift region , and the outgoing double null-like coordinate system in the blueshift region ∪∪ so we use the same notations for the ingoing and outgoing double null-like coordinate vector fields ∂_u, ∂_, as there is no danger of confusion.
§.§.§ Constant and w spacelike hypersurfaces
We need a family of spacelike hypersurfaces, for which we will apply the energy estimates and get decay in . Note that the constant , u hypersurfaces are not spacelike. Indeed we have
𝐠_a,M(∇,∇)=𝐠_a,M(∇ u,∇ u)=a^2sin^2θ/q^2
so that the constant , u hypersurfaces are null at the poles and timelike away from the poles. We define, as in <cit.>,
:=-r+r_+, w:=u-r+r_-,
such that the constant and w hypersurfaces are spacelike. Indeed, we have <cit.>
𝐠_a,M(∇,∇)=𝐠_a,M(∇ w,∇ w)=-r^2+2Mr+a^2cos^2θ/q^2<-1.
§.§ Spin-weighted scalars
§.§.§ Spin-weighted scalars and spin-weighted spherical operators
In this section, we consider the round sphere ^2 equipped with its volume element, written in coordinates (θ,ϕ)∈(0,π)×[0,2π) :
ν := sinθθϕ.
Let s be an integer. Note that in this work we only consider s=± 2. A spin s scalar is a scalar function that has zero boost weight and proper spin weight, as defined by Geroch, Held and Penrose <cit.>. See <cit.> for a rigorous presentation of the spaces of spin-weighted scalars on the sphere ^2 and on the Kerr interior, and a proof that the Teukosky scalar obtained in the linearisation of a gravitational perturbation in the Newman-Penrose formalism is a spin-weighted scalar on spacetime. See also <cit.> for a precise review of the geometric background of the Teukolsky equation. By 'spin-weighted operator' we mean an operator that takes a spin-weighted scalar into a spin-weighted scalar. Note that a spin 0 scalar is a scalar function.
In Kerr spacetime, the volume element induced on the topological spheres
S(u,):={r=r(u,)}∩{t=t(u,)}
by the metric 𝐠_a,M is
ν(u,):=Σsinθθϕ
where Σ=√(r^4+2r^2a^2cos^2θ+a^4cos^2θ+2Mra^2sin^2θ)∼ 1 in the Kerr interior. Thus, although they are not round, we still rely on the round volume element ν on the Kerr spheres S(u,) to define L^2(S(u,)) norms.
For ψ a spin-weighted scalar on , we define
ψ_L^2(S(u,)):=(∫_S(u,)|ψ^2|ν)^1/2.
Notice that on a region {r_0≤ r≤ r_+} with r_0∈(r_-,r_+), in the ingoing Eddington-Finkelstein coordinates, as ϕ_+-ϕ=r_mod is constant on S(u,), the definition of the L^2(S(u,)) norm gives
ψ_L^2(S(u,))^2=∫_0^π∫_0^2π|ψ|^2(u,,θ,ϕ_+)sinθθϕ_+,
which is a regular norm up to the event horizon {u=-∞}∪{=-∞}. For the same reason, on a region {r_-≤ r≤ r_0} with r_0∈(r_-,r_+), we have
ψ_L^2(S(u,))^2=∫_0^π∫_0^2π|ψ|^2(u,,θ,ϕ_-)sinθθϕ_-,
which is a regular norm up to the Cauchy horizon {=+∞}∪{u=+∞}.
We recall the definition of the following standard spin-weighted differential operators, called the spherical eth operators :
_s :=∂_θ+i/cosθ∂_ϕ-sθ,
'_s :=∂_θ-i/cosθ∂_ϕ+sθ.
The spherical eth operators modify the spin when applied to a spin-weighted scalar. More precisely, increases the spin by 1 while ' decreases the spin by 1. See in Section <ref> their effect on spin-weighted spherical harmonics. Note that the spin of the scalar to which we apply the eth operators will be clear in the context, so we drop the subscript s in what follows.
We define the spin-weighted Laplacian as
'=1/sinθ∂_θ(sinθ∂_θ)+1/sin^2θ∂_ϕ^2+2iscosθ/sin^2θ∂_ϕ-(s^2^2θ-s).
We also have the expression
'=1/sinθ∂_θ(sinθ∂_θ)+1/sin^2θ∂_ϕ^2+2iscosθ/sin^2θ∂_ϕ-(s^2^2θ+s)='-2s.
§.§.§ Spin-weighted spherical harmonics
Let s be a fixed spin. The spin-weighted spherical harmonics are the eigenfunctions of the spin-weighted Laplacian, that is self-adjoint on L^2(^2). They are given by the following family
Y_m,ℓ^s(cosθ)e^imϕ, ℓ≥ |s|, -ℓ≤ m≤ℓ
of spin s scalars on the sphere ^2. They form a complete orthonormal basis of the space of spin s scalars on ^2, for the L^2(^2) scalar product. When considering the spheres of Kerr interior, as the B-L coordinate ϕ is singular at the horizons, we need a slightly modified family. Let r_0∈(r_-,r_+). Then for (u,) such that r(u,)∈ [r_0,r_+], the spin s scalars
Y_m,ℓ^s(cosθ)e^imϕ_+, ℓ≥ |s|, -ℓ≤ m≤ℓ
form a complete orthonormal basis of the space of spin s scalars on S(u,), for the L^2(S(u,)) scalar product. Similarly, if r(u,)∈ [r_-,r_0], the spin s scalars
Y_m,ℓ^s(cosθ)e^imϕ_-, ℓ≥ |s|, -ℓ≤ m≤ℓ
form a complete orthonormal basis of the space of spin s scalars on S(u,), for the L^2(S(u,)) scalar product.
In this work, we will never project equations or spin-weighted scalars on some ℓ or ≥ℓ modes, unlike in <cit.> and <cit.>. Instead, we will simply propagate the dominant term of on _+ up to . This dominant term is a linear combination of spin-weighted spherical harmonics, and satisfies key properties (see Proposition <ref>) that rely on their algebraic properties.
The following facts are standards. We have :
'(Y_m,ℓ^s(cosθ)e^imϕ_±) =-(ℓ-s)(ℓ+s+1)Y_m,ℓ^s(cosθ)e^imϕ_±,
'(Y_m,ℓ^s(cosθ)e^imϕ_±) =-(ℓ+s)(ℓ-s+1)Y_m,ℓ^s(cosθ)e^imϕ_±,
(Y_m,ℓ^s(cosθ)e^imϕ_±) =-√((ℓ-s)(ℓ+s+1))Y_m,ℓ^s+1(cosθ)e^imϕ_±,
'(Y_m,ℓ^s(cosθ)e^imϕ_±) =√((ℓ+s)(ℓ-s+1))Y_m,ℓ^s-1(cosθ)e^imϕ_±.
§.§ Functional inequalities
We start this section by recalling <cit.> with α=0, which will be used to deduce decay of the energy from an energy estimate.
Let p>1 and let f:[1,+∞) →[0,+∞) be a continuous function. Assume that there are constants C_0>0, C_1>0, C_2 ≥ 0, C_3 ≥ 0 such that for 1 ≤ x_1<x_2,
f(x_2)+C_1 ∫_x_1^x_2 f(x) d x ≤ C_0 f(x_1)+C_2 ∫_x_1^x_2 x^-p d x+C_3 x_1^-p .
Then for any x_1 ≥ 1,
f(x_1) ≤ C x_1^-p
where C is a constant that depends only on f(1), C_0, C_1, C_2, C_3, and p.
See <cit.>.
§.§.§ _s and spin-weighted operators in Kerr spacetime
We define the spin-weighted Carter operator
_s:=a^2sin^2θ T^2-2iascosθ T +'_s+1_s.
The main property of the spin-weighted Carter operator is that it commutes with the Teukolsky operator (see (<ref>), (<ref>)). We will use it to bound the angular derivates of solutions to the Teukolsky equation. The following operator is (up to a bounded potential) the spin-weighted scalar equivalent of the tensor covariant derivative ∇_2, where
e_2:=1/sinθΦ+asinθ T,
and will be useful to get a Poincaré-type inequality to absorb the zero order terms of the Teukolsky operator when doing energy estimates.
We define the spin-weighted operator
:=1/sinθΦ+asinθ T+isθq^2/r^2+a^2.
Note that is not regular at the poles, but for a spin-weighted scalar ψ on spacetime we still have ψ∈ L^∞(S(u,)), see for example <cit.>. We will need the following integration by parts lemma for .
Let ψ, φ be spin s scalars on . Then
∫_S(u,)φ ψ ν=T(∫_S(u,)asinθφψ ν)-∫_S(u,)φψ ν.
Define the real valued function
f(r,θ):=θq^2/r^2+a^2.
We have
∫_S(u,)φ ψ ν =∫_S(u,)φ(1/sinθΦ+asinθ T)ψ ν+∫_S(u,)φisf(r,θ)ψ ν
=T(∫_S(u,)asinθφψ ν)-∫_S(u,)(1/sinθΦ+asinθ T)φψ ν
-∫_S(u,)isf(r,θ)φψ ν
=T(∫_S(u,)asinθφψ ν)-∫_S(u,)φψ ν,
as stated.
Notice that we have the expression
T =r^2+a^2/q^2(μ e_3+e_4)-asinθ/q^2+iascosθ/r^2+a^2,
Φ =sinθ(r^2+a^2)/q^2(-asinθ e_4-asinθμ e_3)-iscosθ.
In particular, T=O(μ)e_3+O(1)e_4+O(1)+O(1), and Φ=O(μ)e_3+O(1)e_4+O(1)+O(1).
§.§.§ Poincaré and Sobolev inequalities for spin-weighted scalars
We first recall the standard Poincaré inequality for spin-weighted scalars, see for example <cit.>.
Let ψ be a spin ± 2 scalar on ^2. Then
2∫_^2|ψ|^2ν ≤∫_^2(|∂_θψ|^2+|1/sinθ∂_ϕψ+isθψ|^2)ν.
Next, we define the energy for wich we will show decay for solutions of the Teukolsky equation.
Let ψ be a spin-weighted scalar on . We define its energy density and degenerate energy density as the scalars
𝐞[ψ] :=|e_3ψ|^2+|e_4ψ|^2+|ψ|^2+|∂_θψ|^2,
𝐞_deg[ψ] :=μ^2|e_3ψ|^2+|e_4ψ|^2+|ψ|^2+|∂_θψ|^2.
The following result is a re-writing of the standard Poincaré inequality (<ref>), that will be useful in the energy estimates.
Let ψ be a spin ±2 scalar on . For any (u,) we have the Poincaré inequality
ψ^2_L^2(S(u,))≲∫_S(u,)𝐞_deg[ψ]ν.
By the standard Poincaré inequality (<ref>), we have
ψ^2_L^2(S(u,))≲(|∂_θψ|^2+|1/sinθ∂_ϕψ+isθψ|^2)ν.
Moreover, we have in view of (<ref>),
1/sinθ∂_ϕ+isθ=r^2+a^2/q^2(-asinθ e_4-asinθμ e_3),
and reinjecting in (<ref>) concludes the proof.
We now recall the standard Sobolev embedding for spin-weighted scalars, see for example Lemma 4.27 and Lemma 4.24 of <cit.>.
Let ψ be a spin ± 2 scalar on ^2. We have
ψ^2_L^∞(^2)≲∫_^2(|ψ|^2+|'ψ|^2)ν.
We will need the following reformulation of the standard Sobolev embedding :
Let ψ be a spin s=± 2 scalar on . For any (u,) we have
ψ^2_L^∞(S(u,))a,M≲∫_S(u,)(|T^≤ 2ψ|^2+|_sψ|^2) ν.
We begin the proof with the standard spherical Sobolev estimate (<ref>)
ψ^2_L^∞(S(u,))≲(|ψ|^2+|'ψ|^2)ν
and it remains to express ' with the Carter operator using (<ref>).
§.§ System of equations
§.§.§ Different expressions of the Teukolsky operators
Recall from (<ref>) the expression of the Teukolsky operator obtained in a frame regular at , originally found by Teukolsky <cit.> :
_s:= -[(r^2+a^2)^2/Δ-a^2 sin ^2 θ] T^2 -4 M a r/Δ T Φ -[a^2/Δ-1/sin ^2 θ] Φ^2
+Δ^-s∂_r(Δ^s+1∂_r )+1/sinθ∂_θ(sinθ∂_θ)+2 s[a(r-M)/Δ+i cosθ/sin ^2 θ] Φ
+2 s[M(r^2-a^2)/Δ-r-i a cosθ] T -[s^2 cos ^2 θ/sin ^2 θ-s],
such that for s=± 2, the spin s Teukolsky equation writes
_s_s=0.
The rescaled Teukolsky operator, obtained in the rescaled frame that is regular on _+ is
_s:=Δ^s_s(Δ^-s · ),
such that for s=± 2, recalling ψ_s=Δ^s_s,
_sψ_s=0.
The expression of _s in B-L coordinates is
_s= -[(r^2+a^2)^2/Δ-a^2 sin ^2 θ] T^2 -4 M a r/Δ T Φ -[a^2/Δ-1/sin ^2 θ] Φ^2
+Δ^-s∂_r(Δ^s+1∂_r )+1/sinθ∂_θ(sinθ∂_θ)+2 s[a(r-M)/Δ+i cosθ/sin ^2 θ] Φ
+2 s[M(r^2-a^2)/Δ-r-i a cosθ] T -[s^2 cos ^2 θ/sin ^2 θ+s] -4 s(r-M) ∂_r.
Notice that
_s=_s+2s+4s(r-M)∂_r=_s+2s+4s(r-M)(μ^-1e_4-e_3).
To do the energy estimates, it is convenient to write the Teukolsky operators in terms of e_3, e_4, and .
We have :
_s= -4(r^2+a^2)e_3e_4+ ^2+1sinθ∂_θ(sinθ∂_θ)-4 iascosθ T+2isa^2sinθcosθ(r^2+a^2)
+2 r (e_4-μ e_3)+4 s[(r-M)e_3- rT]+2ar r^2+a^2Φ- s-s^2a^4sin^2θcos^2θ(r^2+a^2)^2,
_s= -4(r^2+a^2)e_3e_4+ ^2+1/sinθ∂_θ(sinθ∂_θ)-4 iascosθ T+2isa^2sinθcosθ/(r^2+a^2)
+2 r (e_4-μ e_3)+4 s[(r-M)μ^-1e_4- rT]+2ar /r^2+a^2Φ+ s-s^2a^4sin^2θcos^2θ/(r^2+a^2)^2.
The computation is done in the easiest way by rewriting
^2+2isa^2cosθsinθ/(r^2+a^2)-s^2a^4sin^2θcos^2θ/(r^2+a^2)^2=^2, :=1/sinθΦ+asinθ T+isθ,
and by using (<ref>) to deduce (<ref>) from (<ref>).
The Teukolsky operators can also be written <cit.> as
_s=-(r^2+a^2)^2-a^2Δsin^2θ/ΔT^2+∂_r(Δ∂_r)-4aMr/ΔTΦ-a^2/ΔΦ^2
+_s-1'_s-2iascosθ T+4s[(r-M)e_3-rT],
_s=-(r^2+a^2)^2-a^2Δsin^2θ/ΔT^2+∂_r(Δ∂_r)-4aMr/ΔTΦ-a^2/ΔΦ^2
+'_s+1_s-2iascosθ T+4s[(r-M)μ^-1e_4-rT].
Note that these expressions show the crucial fact that T, Φ and the Carter operator commute with the Teukolsky equation :
[_s,T]=[_s,_s]=[_s,Φ]=[_s,T]=[_s,_s]=[_s,Φ]=0.
To compute the commutator of the Teukolsky operator with e_3, we will rewrite _s in terms of e_3, T, and the angular operators.
We have
𝐓_s=4Δ e_3^2+ a^2sin^2θ T^2-4aΦ e_3-4(r^2+a^2)Te_3+2aTΦ
+_s-1'_s+4(r-M)(s-1)e_3+(2r(1-2s)-2aiscosθ)T.
Using (<ref>) and
∂_r=-2e_3+a/ΔΦ+r^2+a^2/ΔT,
we get
_s =-(r^2+a^2)^2-a^2Δsin^2θ/ΔT^2+(-2e_3+a/ΔΦ+r^2+a^2/ΔT)(-2Δ e_3+aΦ+(r^2+a^2)T)
-4aMr/ΔTΦ-a^2/ΔΦ^2+_s-1'_s-2iascosθ T+4s[(r-M)e_3-rT]
=4Δ e_3^2+a^2sin^2θ T^2-4(r-M)e_3-4aΦ e_3-4(r^2+a^2)Te_3+a^2/ΔΦ^2+2a(r^2+a^2)/ΔTΦ+2r T
-4aMr/ΔTΦ-a^2/ΔΦ^2+_s-1'_s-2iascosθ T+4s[(r-M)e_3-rT],
simplifying the coefficients gives the result.
We have
[_s,e_3]=4(r-M)e_3^2-4rTe_3+2(s-1)e_3+(1-2s)T.
We simply use the above expression for _s and the fact that [_s-1,e_3]=['_s,e_3]=0.
§.§.§ Teukolsky-Starobinsky identities
On top of the Teukolsky equations, we also assume that the Teukolsky-Starobinsky identities (TSI) hold. The Teukolsky-Starobinsky identities are a PDE system relating the 4th order angular and radial derivatives of ψ_± 2. Like the Teukolsky equations, they are obtained from the linearisation of a gravitational perturbation of the Einstein vacuum equations around Kerr spacetime. Their differential form was first derived in <cit.> in frequency space, while their covariant form is derived in <cit.>. Recalling from (<ref>), (<ref>) the coordinate vector fields
∂_r_out=∂_r+r^2+a^2/ΔT+a/ΔΦ=-2, ∂_r_in=∂_r-r^2+a^2/ΔT-a/ΔΦ=-2e_3,
the TSI for the spin ± 2 write <cit.>, in Kerr spacetime,
('-iasinθ T)^4-12MT =Δ^2∂_r_out^4(Δ^2),
(+iasinθ T)^4+12MT =∂_r_in^4().
As mentionned in <cit.>, (<ref>) and (<ref>) are physical space versions of the frequency space TSI's obtained in <cit.>, and it is also possible to obtain (<ref>) and (<ref>) from the covariant TSI in <cit.>. We will actually only use (<ref>) in this work, close to _+, and not (<ref>).
§ STATEMENT OF THE MAIN THEOREM
Recall that we denote by , the spin -2 and +2 scalars that are solutions of the spin ±2 Teukolsky equations :
_+2=0, _-2=0.
As before, we denote =Δ^-2 that satisfies _+2=0. In this section, we state our main result on the precise asymptotics of at . To this end, we first introduce the different subregions of the Kerr interior that we will consider.
§.§ The different regions of the Kerr black hole interior
Fix r_𝔟∈ (r_-,r_+) close to r_-, γ>0 small, that will both be chosen later in the energy estimates. More precisely, is chosen in Appendix <ref> and γ is chosen in Section <ref>. We define the following subregions of the Kerr black hole interior, see Figure <ref> :
𝐈:={≤ r≤ r_+}∩{≥ 1},
𝐈𝐈:={r_-≤ r≤}∩{2r^*≤^γ}∩{w≤ w_,γ},
𝐈𝐈𝐈:={r_-≤ r≤}∩{2r^*≥^γ}∩{w≤ w_,γ},
:={≥_,γ}∩{w≥ w_,γ},
where w_,γ:=2^*-(2^*)^1/γ-+r_- and _,γ:=(2^*)^1/γ are such that {w=w_,γ} and {=_,γ} intersect Γ and {r=}, where the hypersurface Γ is defined by
Γ:={2r^*=^γ}.
Region is the redshift region that contains _+, region ∪ is the blueshift region very close to where the scalar Δ decays exponentially towards the Cauchy horizon, and region is an intermediate region, where the blueshift effect is already effective.
In region we will obtain redshift energy estimates for the Teukolsky equation _sψ_s=0 for s=-2, while in ∪∪ we will derive effective blueshift estimates for s=+2. For the energy estimates, we use the coordinate system (u,,θ,ϕ_+) in 𝐈, and the coordinate system (u,,θ,ϕ_-) in ∪∪.
In the next section, we provide the assumptions on _+ on which all the analysis is based. The goal of the analysis will be to successively propagate the polynomial bounds on _+ to regions , , and .
§.§ Main assumptions on the event horizon
We first define our initial spacelike hypersurface Σ__0 as the union of three spacelike hypersurfaces :
Σ_0:=Σ_τ_0∪Σ_int∪Σ_τ'_0,
similarly as in <cit.>. More precisely, we
define Σ_τ_0 as the hypersurface defined in <cit.>, i.e. a constant τ hypersurface, where (τ,r,θ,ϕ_+) is a hyperboloidal coordinate system on the right part of the exterior of Kerr spacetime. Similarly, Σ_τ'_0 is a constant τ' hypersurface, where (τ',r,θ,ϕ_-) is a hyperboloidal coordinate system on the left part of the exterior of Kerr spacetime. We chose Σ_int as any spacelike hypersurface inside the Kerr interior that joins Σ_τ_0 and Σ_τ'_0 such that the union of the three hypersurfaces is spacelike, see Figure <ref>. The Cauchy problem for the Teukolsky equation with initial data on Σ_0 is well-posed on the future maximal Cauchy development ^+(Σ_0) of Σ_0. We will prove the precise asymptotics of the Teukosky field on ∩^+(Σ_0), that is the part of (thus in the grey shaded area) that is located above Σ_int in Figure <ref>. Without loss of generality we can assume that at the intersection of Σ_0 and _+, we have ≤ 1, and symmetrically that u≤ 1 at Σ_0∩'_+.
The conclusion of the main theorem will be applicable to solutions of the Teukolsky equations that arise from compactly supported initial data on Σ_0, but we will actually only rely on the Price's law results of <cit.>, that we write down now.
We denote by the set of tangential derivatives on the event horizon. More precisely, we use
∇:={T,Φ, , '},
and for k=(k_1,k_2,k_3,k_4)∈ℕ^4, we define |k|=k_1+k_2+k_3+k_4 and ^k=T^k_1Φ^k_2^k_3(')^k_4.
Let N_j^±, N_k^±, (N_j^+)', (N_k^+)'≥ 1 be sufficiently large integers that we will choose later, see Remark <ref> for the precise values.
We will consider ψ_± 2 such that there is δ>0 such that for j≤ N_j^- and |k|≤ N_k^-,
|T^j^k(-)|≲^-7-j-δ on _+∩{≥ 1},
and such that for j≤ N_j^+, |k|≤ N_k^+, and l≤ 3,
|T^j^ke_3^l(-)|≲^-7-j-δ on _+∩{≥ 1}.
We also assume the less precise assumption on '_+ : for j≤ (N_j^+)', |k|≤ (N_k^+)', and l≤ 1,
|T^j^k^l|≲ u^-7-j on '_+∩{u≥ 1}.
Assumptions (<ref>), (<ref>), (<ref>) with |k|=l=0 correspond to the so-called Price's law, and were recently shown to hold true in subextremal Kerr by Ma and Zhang <cit.>. More precisely, they have shown that this holds for initially smooth and compactly supported solutions on Σ_0 of the Teukolsky equations _± 2ψ_± 2=0, where :
* The constants Q_m,2 are defined as 2^7ℚ_m,2/5 where the constants ℚ_m,2 are defined in <cit.>, depend on the values of ψ_± 2 on the initial hypersurface Σ_0, and are generically non-zero.
* The functions A_m(r) are defined as (r^2+a^2)^2𝔣_+2,m(r) where 𝔣_+2,m(r) is precisely defined in <cit.>. The explicit computation of A_m(r) gives
A_m(r)=1/3[3Δ^2 +(r-M)(4(a^2-M^2)+6Δ)iam-(2Δ+6(a^2-M^2)+4(r-M)^2)a^2m^2
-4(r-M)ia^3m^3+2a^4m^4],
see Appendix <ref>.
Statements (<ref>), (<ref>), (<ref>) with non-zero |k|,l can be deduced[The statements about the tangential derivatives on the event horizon can be obtained easily using the fact that Φ, T and the Carter operator commute with the Teukolsky equation and directly applying the main theorem of <cit.>. Statement (<ref>) for the e_3^≤ 3 derivatives can be deduced as follows. Define as in (<ref>) and let r_0>r_+. Differentiating <cit.> by e_3^n for n=1,2,3, restricting on {r=r_0}, and using the bounds <cit.> as well as e_3∼μ(r_0)∂_ρ+O(1)T on {r=r_0} gives |e_3^n|≲^-7-δ on {r=r_0}. Then the TSI (<ref>) gives |e_3^4|≲^-7-δ on {r_0≤ r≤ r_+}, and all is left to do is integrate this bound from r=r_0 to r=r_+.] directly from the results in <cit.>, for solutions of the Teukolsky equations arising from smooth and compactly supported initial data on Σ_0.
§.§ Precise version of the main theorem
We state the main result of this paper.
Let ψ_± 2 be solutions of the Teukolsky equations _± 2ψ_±2=0 on the interior of Kerr spacetime. Assume that the Teukolsky-Starobinsky identity (<ref>) holds in , as well as the assumptions (<ref>), (<ref>), (<ref>) on the event horizon. Then, denoting =Δ^-2, in region ∪ close to we have the following asymptotic behaviour :
(u,,θ,ϕ_-)=Δ^-2(u,)/^7∑_|m|≤ 2A_m(r_-)Q_m,2e^2imr_mod(u,)Y_m,2^+2(cosθ)e^imϕ_-+O(Δ^-2^-7-δ).
A few remarks are in order.
* As discussed in Section <ref>, is the Teukolsky scalar obtained from the linearisation of a gravitational perturbation obtained in the Newman-Penrose tetrad that is regular at . Thus Theorem <ref> should be interpreted as a linear curvature instability statement for the Kerr Cauchy horizon, as Δ^-2/^7 blows up on , exponentially in in ∪. Moreover, since the function r_mod blows up on as r_mod∼log(r-r_-), (<ref>) implies large oscillations of close to , as announced by Ori <cit.>.
* Notice that the expression of A_m(r) gives
A_m(r_-)=2am(am+2i√(M^2-a^2))(M^2+a^2(m^2-1)).
Thus for any subextremal Kerr black hole parameters (a,M), for m=± 1 and m=± 2, we have A_m(r_-)≠ 0. Note that the identity A_0(r_-)=0 confirms the heuristics arguments of Ori <cit.> according to which the m=0 azimuthal mode of decays better than Δ^-2^-7.
* We also prove, see Proposition <ref>, in intermediate region ,
(u,,θ,ϕ_-)=Δ^-2(u,)/^7∑_|m|≤ 2A_m(r)Q_m,2e^2imr_mod(u,)Y_m,2^+2(cosθ)e^imϕ_-+O(Δ^-2^-7-δ).
* In view of the asymptotic behaviors (<ref>) and (<ref>), there exists C>1 large enough such that the following uniform lower bound
_L^2(S(u,))≳Δ^-2(u,)/^7, in {≥ C}∩{r_-< r≤},
holds for generic initial data, in accordance with the results of <cit.>.
* Inspecting the proof, we find that the minimal values of N_j^±, N_k^± and (N_j^+)', (N_k^+)' for which we prove (<ref>) are N_j^+=5, N_k^+=5, N_j^-=15, N_k^-=14, and (N_j^+)'=2, (N_k^+)'=3. We did not try to optimize this loss of derivatives. We mainly lose derivatives when we apply the Sobolev embedding on the spheres (which loses each time two T and angular derivatives) and when we integrate the Teukolsky-Starobinsky identity from _+. Assuming that higher order
derivatives decay on the event horizon leads to an asymptotic (<ref>) that holds for higher order T, , and Φ derivatives.
* By assuming the equivalent statement of assumptions (<ref>), (<ref>) for _± 2 on '_+, we can get an equivalent statement in the symmetric region {u≥ 1} at any point in the analysis, by replacing with u, ψ_s with _-s, e_3 with and e_4 with . As in <cit.> for the scalar wave equation, this will not be useful until we try to get the asymptotics on the upper part of , i.e. in region . There, we will need the boundedness of the energy density [e_3^≤ 1] on {u≥ 1}∩{r=}. This is why we also assume (<ref>), which ensures this energy density bound, and which holds for physical initial data compactly supported near the right part of the Kerr exterior. We note that to prove the final result in ∪, that contains the lower part of , the analysis does not require any assumptions on ℋ'_+.
* We also prove a result on the precise asymptotics of ψ_- 2 in redshift region , see Proposition <ref>, and Theorem <ref> for a more general result on the spin -2 Teukolsky equation in . See Proposition <ref> for the precise asymptotics of in .
The rest of the paper is devoted to the proof of Theorem <ref>. In Section <ref> we prove the precise asymptotics of ψ_± 2 in region , see Propositions <ref> and <ref>. In Section <ref>, we prove the precise asymptotics of in region , see Proposition <ref>. Finally, in Section <ref>, we prove the precise asymptotic behavior (<ref>) of in regions and , see Theorems <ref> and <ref>, hence concluding the proof of Theorem <ref>.
§ PRECISE ASYMPTOTICS IN REDSHIFT REGION 𝐈 NEAR _+
We begin the proof of Theorem <ref>, with the description of the precise asymptotics in region . In Section <ref>, we show a redshift energy estimate, that we will eventually apply to in Section <ref> to propagate the ansatz for from _+ to . Finally, in Section <ref>, using the Teukolsky-Starobinsky identity (<ref>) we propagate the ansatz for from _+ to .
§.§ Energy method for the spin -2 Teukolsky equation in
We begin this section with the following definition.
Let V and c be real numbers. We define the following spin-weighted operator :
_s^(c,V):= -4(r^2+a^2)e_3e_4+ ^2+1sinθ∂_θ(sinθ∂_θ)-4 iascosθ T+2isa^2sinθcosθ(r^2+a^2)
+2 r (e_4-μ e_3)+4cs[(r-M)e_3- rT]+2ar r^2+a^2Φ- s-s^2a^4sin^2θcos^2θ(r^2+a^2)^2+V.
We remark the following :
* We introduce the modified Teukolsky operator _s^(c,V) in order to do energy estimates for a more general class of operators, which will be useful after commuting the Teukolsky operator with e_3.
* We will use the estimates of this section for a finite number of explicit constants (c,V) thus we still write the bounds that depend on (c,V) with notations O, ≲.
* Using (<ref>) we get _s=_s^(1,0). More generally,
_s^(c,V)=_s+4(c-1)s[(r-M)e_3-rT]+V,
which proves, using (<ref>),
[_s^(c,V),T]=[_s^(c,V),]=0.
In all this section, we denote ψ a spin -2 scalar such that there are constants
V∈ℝ, c>0, β>1,
such that for 0≤ j≤ N_j^-, 0≤ |k|≤ N_k^-, and 0≤ 2 k_1+k_2≤ N_k^-,
∙ T^j^kψ=O(^-β-j) on _+∩{≥ 1},
∙ _-2^(c,V)T^j_-2^k_1Φ^k_2ψ=O(^-β-j) in .
The goal of this section is to propagate the upper bound (<ref>) for ψ on the event horizon to region using a redshift energy estimate, see Proposition <ref>. Recall the definition (<ref>) of the energy density [ψ]=|e_3ψ|^2+|e_4ψ|^2+|Uψ|^2+|∂_θψ|^2.
Assume that ψ is a spin -2 scalar that satisfies (<ref>), (<ref>) with c>0, β>1. Then for 0≤ j≤ N_j^-, 0≤ 2 k_1+k_2≤ N_k^–1, and for _1≥ 1,
∬_{w=w_1}∩𝐈𝐞[T^j_-2^k_1Φ^k_2ψ](-μ) ν u≲_1^-2β-2j.
The choice of the negative -2 spin is the right one in the redshift region 𝐈 to be able to obtain a positive bulk term in the energy estimate. Actually, we will see that the sign that matters is the one of s(r-M) where s is the spin, thus we already anticipate that we will only be able to control solutions with positive +2 spin in the blueshift region ∪∪.
In what follows, we denote s=-2. The computations are done for a general spin s, but the bulk term will only be positive for s< 0. In view of the assumptions (<ref>), (<ref>), it suffices to treat the case j=k_1=k_2=0. Recall that in 𝐈, in coordinates (u,,θ,ϕ_+), we have ∂_u=-μ e_3 and ∂_=e_4-a/(r^2+a^2)Φ. Thus multiplying (<ref>) by μ we get the following expressions :
μ_s^(c,V) =4(r^2+a^2)∂_ue_4+μ𝒰^2+μsinθ∂_θ(sinθ∂_θ)-4μ iascosθ T+2isμa^2sinθcosθ(r^2+a^2)U
+2 rμ (e_4+∂_u)-4cs[(r-M)∂_u+μ rT]+2arμr^2+a^2Φ-μ s-s^2μa^4sin^2θcos^2θ/(r^2+a^2)^2+μ V
=4(r^2+a^2)e_4∂_u+μ𝒰^2+μsinθ∂_θ(sinθ∂_θ)-4μ iascosθ T+2isμa^2sinθcosθ(r^2+a^2)U
+2 rμ (e_4+∂_u)-4cs[(r-M)∂_u+μ rT]-2arμr^2+a^2Φ-μ s-s^2μa^4sin^2θcos^2θ/(r^2+a^2)^2+μ V,
where we used (<ref>) to get
4(r^2+a^2)[∂_u,e_4]=-4arμr^2+a^2Φ.
As in <cit.> we now multiply (<ref>) by μ and by the complex conjugate of
X(ψ):=f(r)∂_uψ+g(r)e_4ψ,
where we choose
f(r)=(-μ)^-1(r^2+a^2)^p, g(r)=(r^2+a^2)^p
with a real number p=p(a,M,c,V)≫1 large enough that will be chosen in Appendix <ref>. We then integrate over S(u,) against ν, and take the real part, to get
∫_S(u,)(f(r)∂_uψμ_s^(c,V)ψ)+(g(r)e_4ψμ_s^(c,V)ψ)ν=∫_S(u,)μ(X(ψ)O(^-β))ν.
The computation of the left-hand side of (<ref>), done in Appendix <ref> gives :
∂_(∫_S(u,)𝐅_[ψ]ν)+∂_u(∫_S(u,)𝐅_u[ψ]ν)+∫_S(u,)𝐁[ψ]ν=∫_S(u,)μ(X(ψ)O(^-β))ν,
where
𝐅_[ψ]:=2(r^2+a^2)f(r)|∂_uψ|^2-12μ g(r)(|∂_θψ|^2+|Uψ|^2)+asinθ f(r)μℜ(∂_uψUψ)+asinθ g(r)μℜ(e_4ψUψ),
𝐅_u[ψ]:=2(r^2+a^2)g(r)|e_4ψ|^2-12μ f(r)(|∂_θψ|^2+|Uψ|^2)-asinθ f(r)μℜ(∂_uψUψ)-asinθ g(r)μℜ(e_4ψUψ),
and the bulk term is
𝐁[ψ]: =2(rμ g(r)-∂_u((r^2+a^2)g(r)))|e_4ψ|^2+2(rμ f(r)-e_4((r^2+a^2)f(r)))|∂_uψ|^2
+12(∂_u(μ f(r))+e_4(μ g(r)))(|∂_θψ|^2+|Uψ|^2)
+μ g(r)sra^2μcosθsinθ/(r^2+a^2)^2ℑ(ψ Uψ)-4cs(r-M)f(r)|∂_uψ|^2
+4μ g(r)ascosθℑ(e_4ψTψ)-4srg(r)μℜ(e_4ψTψ)+g(r)[2rμ-4cs(r-M)]ℜ(e_4ψ∂_uψ)
+2g(r)arμr^2+a^2ℜ(e_4ψΦψ)-g(r)(μ s+s^2μa^4sin^2θcos^2θ(r^2+a^2)^2-μ V)ℜ(e_4ψψ)
-2sg(r)μa^2sinθcosθ(r^2+a^2)ℑ(e_4ψUψ)+μ f(r)sra^2μcosθsinθ/(r^2+a^2)^2ℑ(ψ Uψ)
+4μ f(r)ascosθℑ(∂_uψTψ)-4srf(r)μℜ(∂_uψTψ)+2rμ f(r)ℜ(∂_uψe_4ψ)
-2f(r)arμr^2+a^2ℜ(∂_uψΦψ)-f(r)(μ s+s^2μa^4sin^2θcos^2θ(r^2+a^2)^2-μ V)ℜ(∂_uψψ)
-2sf(r)μa^2sinθcosθ(r^2+a^2)ℑ(∂_uψUψ).
Setting s=0 in the LHS of (<ref>), we find the same expression as in <cit.> for the scalar wave.
See Appendix <ref>.
We follow <cit.> and integrate (<ref>) in 𝐈∩{_1≤≤_2} with respect to u to get
∬_{w=w_2}∩𝐈𝒯_w[ψ] νd u+∬_{r=r_𝔟}∩{w_1 ≤w≤w_2}𝒯_r[ψ] νdu+∭_{w_1 ≤w≤w_2}∩𝐈𝐁[ψ] νd u du
= ∬_{w=w_1}∩𝐈𝒯_w[ψ] νd u+∬_ℋ_+∩{w_1 ≤w≤w_2}𝒯_u[ψ] νdu+∭_{w_1 ≤w≤w_2}∩𝐈μ(X(ψ)O(^-β)) νd u du,
where
𝒯_w[ψ]=(1-μ/2)𝐅_u[ψ]-12μ𝐅_u[ψ], 𝒯_r[ψ]=-12μ(𝐅_u[ψ]+𝐅_u[ψ]), and 𝒯_u[ψ]=𝐅_u[ψ].
We now estimate the different quantities using the choice of f and g.
Control of the bulk terms. First, we prove a lower bound for the bulk term 𝐁[ψ], which is a manifestation of the redshift effect.
For p=p(a,M,c,V) chosen large enough and for s=-2, we have in
∫_S(u,)𝐁[ψ]ν≳ (-μ)∫_S(u,)[ψ]ν.
See Appendix <ref>.
To control the other bulk term on the RHS of (<ref>), we write, for ε>0,
|∭ _{w_1 ≤w≤w_2}∩𝐈μ(X(ψ)O(^-β)) νd u du|≤
ε∭_{w_1 ≤w≤w_2}∩𝐈 |X(ψ)|^2(-μ)νd u du+ε^-1∭_{w_1 ≤w≤w_2}∩𝐈^-2β (-μ)νd u du.
Moreover, changing variables[Note that on =cst, we have μ u=(2-μ) r.] from u to r, and the fact that r is bounded, we get
∭_{w_1 ≤w≤w_2}∩𝐈^-2β (-μ)νd u du≲∬_{w_1 ≤w≤w_2}∩𝐈^-2βd r du≲∫__1^_2^-2β.
We also have |X(§ψ)|^2≲ |e_3ψ|^2+|e_4ψ|^2≤[ψ]. Thus, choosing ε>0 small enough such that the first term on the RHS of (<ref>) is absorbed in the left-hand side of (<ref>), we get
∬_{w=w_2}∩𝐈𝒯_w[ψ] νd u+∬_{r=r_𝔟}∩{w_1 ≤w≤w_2}𝒯_r[ψ] νdu+∭_{w_1 ≤w≤w_2}∩𝐈𝐞[ψ] (-μ)νd u du
≲ ∬_{w=w_1}∩𝐈𝒯_w[ψ] νd u+∬_ℋ_+∩{w_1 ≤w≤w_2}𝒯_u[ψ] νdu+∫__1^_2^-2β.
Control of the boundary terms. We first deal with the boundary terms on =cst :
𝒯_w[ψ] =(1-μ/2)𝐅_u[ψ]-12μ𝐅_u[ψ]
=2(r^2+a^2)f(r)|∂_uψ|^2-12μ g(r)(|∂_θψ|^2+|ψ|^2)+asinθμℜ(X(ψ)ψ)
-μ/2[2(r^2+a^2)f(r)|∂_uψ|^2+2(r^2+a^2)g(r)|e_4ψ|^2-12μ (f(r)+g(r))(|∂_θψ|^2+|ψ|^2)]
∼ (-μ)[ψ],
where we absorbed the term asinθμℜ(X(ψ)ψ) using
|asinθμ f(r) ℜ(∂_uψψ)|≤ a^2f(r)|∂_uψ|^2+1/4μ^2f(r)|ψ|^2
and
|asinθμ g(r) ℜ(e_4ψψ)|≤ -3/4μ a^2g(r)|e_4ψ|^2-1/3μ g(r)|ψ|^2.
Then, for the term on {r=}, we have
𝒯_r[ψ]=-12μ(𝐅_u[ψ]+𝐅_u[ψ])∼[ψ], on {r=}.
To control the term on the event horizon, we use[Notice that e_4=-μ e_3+O(1)T+O(1)Φ.] 𝒯_u[ψ]∼ |e_4ψ|^2≲ |Tψ|^2+|Φψ|^2 on _+. Thus, using (<ref>) we get on _+∩{≥ 1},
∫_S(-∞,)𝒯_u[ψ]ν≲∫_S(-∞,)|Tψ|^2+|Φψ|^2ν≲^-2β.
This concludes the estimates of the boundary terms. Together with (<ref>), this yields
∬ _{w=w_2}∩𝐈𝐞[ψ](-μ) νd u+∭_{w_1 ≤w≤w_2}∩𝐈𝐞[ψ](-μ) νd u du
+∬_{w_1 ≤w≤w_2}∩{r=}𝐞[ψ]νdu≲∬_{w=w_1}∩𝐈𝐞[ψ](-μ) νd u+∫_w_1^w_2w^-2βdw.
Thus using Lemma <ref> concludes the proof, dropping the term on {r=} which is non-negative, and using the fact that for any _0≥ 1, the initial energy
∬_{w=w_0}∩𝐈𝐞[ψ](-μ) νd u≲ 1
is finite[In the notations of Lemma <ref>, this guaranties that f(1) is finite.]. This is clear by standard existence results for linear wave equations with smooth initial data, and because {w=w_0}∩𝐈 is a compact region inside a globally hyperbolic spacetime.
The following result will be useful to deduce decay in L^2(S(u,)) norm from energy decay.
Let ψ be a ± 2 spin-weighted scalar on . We define the scalar function
f(,r):=ψ_L^2(S(u(r,),(r,))).
Then for ≥ 1 and r_2≤ r_1,
f(,r_2)≲ f(,r_1)+(∫_r_2^r_1∫_S(,r')[ψ]ν r')^1/2.
In coordinates (,r,θ,ϕ_+) we have ∂_r_in=-2e_3, ∂_=T.
Thus for a fixed ,
r(f(,r))= r(f(=+r-r_+,r))=(∂_r_in+∂_)f(,r).
Moreover, by a Cauchy-Schwarz inequality,[Notice that even in {r_-≤ r≤}, we can rewrite the L^2(S(u,)) norm as an integration with respect to ϕ_+, doing the change of variables ϕ_+=ϕ_-+2r_mod, as shown in Section <ref>.]
|(∂_r_in+∂_)(f^2)| =|(∂_r_in+∂_)(∫_0^π∫_0^2π|ψ|^2(,r,θ,ϕ_+)sinθθϕ_+)|
=|2ℜ(∫_0^π∫_0^2π(ψ (∂_r_in+∂_)ψ)(,r,θ,ϕ_+)sinθθϕ_+)|
≲ 2f(∂_r_in+∂_)ψ_L^2(S(,r))=2f(T-2e_3)ψ_L^2(S(,r)),
which yields
|(∂_r_in+∂_)f|=1/2f|(∂_r_in+∂_)(f^2)|≲(T-2e_3)ψ_L^2(S(,r)).
Integrating (<ref>), together with the bound (<ref>) gives
f(,r_2)≲ f(,r_1)+∫_r^r_+(∫_S(,r')|Tψ|^2+|e_3ψ|^2ν)^1/2 r',
and we conclude the proof of Lemma <ref> using the decomposition (<ref>) that gives T=O(1)e_3+O(1)e_4+O(1)+O(1), the Poincaré inequality (<ref>), and a Cauchy-Schwarz inequality.
Assume that ψ is a spin -2 scalar that satisfies (<ref>), (<ref>) with c>0, β>1. Then we have in 𝐈, for 0≤ j≤ N_j^- and 0≤ 2 k_1+k_2≤ N_k^–1,
T^j^k_1Φ^k_2ψ_L^2(S(u,))≲^-β-j.
As T, Φ, and commute with _-2^(c,V), it suffices to prove the case j=k_1=k_2=0. Let (u,)∈ and denote _1=(u,), r=r(u,). Using Lemma <ref> with r_1=r_+ and r_2=r we get
ψ_L^2(S(u,))≲ψ_L^2(S(-∞,_1))+(∫_r^r_+∫_S(,r')[ψ]ν r')^1/2.
Changing variables from r to u, taking into account Footnote <ref>, we get using Proposition <ref> :
(∫_r^r_+∫_S(,r')[ψ]ν r')^1/2≲(∬_{w=w_1}∩𝐈𝐞[ψ](-μ) ν u)^1/2≲_1^-β.
Moreover, ψ_L^2(S(-∞,_1))≲_1^-β by (<ref>), thus using _1^-β≲^-β we get
ψ_L^2(S(u,))≲_1^-β≲^-β.
The previous energy and L^2(S(u,)) decay estimates, combined with the Sobolev embedding (<ref>) give the following polynomial bound propagation result.
Assume that ψ is a spin -2 scalar that satisfies (<ref>), (<ref>) with c>0, β>1. Then we have in , for 0≤ j≤ N_j^–2 and 0≤ 2 k_1+k_2≤ N_k^–3,
|T^j_-2^k_1Φ^k_2ψ|≲^-β-j.
It suffices to treat the case j=k=0. Using the Sobolev embedding (<ref>) and Proposition <ref>, we get pointwisely,
|ψ|^2≲∫_S(u,)|T^≤ 2ψ|^2ν+∫_S(u,)|_-2ψ|^2ν≲^-2β,
which is the stated estimate for j=k=0.
We conclude this section with the following result, that proves decay of e_3ψ under additionnal assumptions on _+. We also prove a precise energy bound on {r=}.
Assume that ψ is a spin -2 scalar that satisfies, for 0≤ j≤ N_j^-, 0≤ |k|≤ N_k^-, and 0≤ 2k_1+k_2≤ N_k^-,
∙ e_3^≤ 1T^j^kψ=O(^-β-j) on _+∩{≥ 1},
∙ e_3^≤ 1_-2T^j_-2^k_1Φ^k_2ψ=O(^-β-j) in ,
where β>1. Then for j≤ N_j^–2 and 2k_1+k_2≤ N_k^–3, we have in ,
|e_3^≤ 1T^j^k_1Φ^k_2ψ|≲^-β-j,
and for 1≤_1≤_2 we have the enery bound on {r=}
∬_{w_1 ≤w≤w_2}∩{r=}𝐞[T^j^k_1Φ^k_2e_3^≤ 1ψ]νdu≲_1^-2β-2j+∫_w_1^w_2w^-2β-2jdw.
We already proved (<ref>) in the case without the e_3 derivative in Proposition <ref>. Thus it remains to prove the bound (<ref>) with the e_3 derivative, as well as (<ref>). The proof is based on the energy estimates done in Section <ref>, and on a commutation of the Teukolsky operator with e_3. Using Proposition <ref>, we find that e_3ψ satisfies
(_-2-4[(r-M)e_3-rT]+6)[e_3ψ]=5Tψ+e_3_-2ψ.
Commuting with T^j^k_1Φ^k_2 and using (<ref>), (<ref>) without the e_3 derivative, and (<ref>) yields[Note that the bound for the RHS holds for j≤ N_j-3 but after integration on the spheres (which is the only bound that we need in the energy estimates) it holds for j≤ N_j-2 by Proposition <ref>.]
_-2^(3/2,6)[T^j^k_1Φ^k_2e_3ψ]=O(^-β-j), in ,
Thus using assumption (<ref>) on _+ with the e_3 derivative, by Proposition <ref> with the parameters β>1, c=3/2>0 and V=6, we get for j≤ N_j^–2 and 2k_1+k_2≤ N_k^–3,
|T^j^k_1Φ^k_2e_3ψ|≲^-β-j, in ,
as stated. To get the energy bound (<ref>), notice that dropping the non-negative first and second terms on the LHS of (<ref>) gives in this context, for _2≥_1,
∬_{w_1 ≤w≤w_2}∩{r=} 𝐞[T^j^k_1Φ^k_2e_3^≤ 1ψ]νdu≲
∬_{w=w_1}∩𝐈𝐞[T^j^k_1Φ^k_2e_3^≤ 1ψ](-μ) νd u+∫_w_1^w_2w^-2β-2jdw,
where β=7+δ. Using Proposition <ref> for e_3^≤ 1ψ yields
∬_{w_1 ≤w≤w_2}∩{r=}𝐞[T^j^k_1Φ^k_2e_3^≤ 1ψ]νdu≲_1^-2β-2j+∫_w_1^w_2w^-2β-2jdw,
as stated.
Actually, further commutations with e_3 only improve the redshift effect. More precisely, for k≥ 0, e_3^kψ satisfies
_-2^(c_k,V_k)[e_3^kψ]=e_3^k_-2ψ+T^≤ 1e_3^≤ k-1ψ,
where c_k=(k+2)/2. Thus, assuming decay of e_3^k_-2ψ in and of e_3^kψ on _+ allows one to use Proposition <ref> to successively control all the derivatives e_3^kψ, k≥ 0, in .
§.§ Precise asymptotics of ψ_-2 in region 𝐈
We state the main result of this section.
Assume that satisfies (<ref>). Then, we have in 𝐈,
=+,
where for j≤ N_j^–2 and 2k_1+k_2≤ N_k^–3,
|T^j_-2^k_1Φ^k_2|≲^-7-j-δ, in .
If, additionnally, we assume that for j≤ N_j^- and |k|≤ N_k^-,
|e_3T^j^k|≲^-7-j-δ on _+∩{≥ 1},
then for j≤ N_j^–2 and 2k_1+k_2≤ N_k^–3,
|e_3T^j_-2^k_1Φ^k_2|≲^-7-j-δ, in .
Let
:=-.
Using _-2ψ_-2=0, as well as [T,_-2]=[_-2,_-2]=[Φ,_-2]=0, we get in
e_3^≤ 1_-T^j_-2^k_1Φ^k_2=-e_3^≤ 1T^j_-2^k_1Φ^k_2_-2().
Notice that e_3(θ)=e_3()=e_3(ϕ_+)=0 so that
e_3()=0.
Thus using the expression (<ref>) of the Teukolsky operator, we get
_-2 ()
=(a^2sin^2θ T^2-4(r^2+a^2)Te_3+2aTΦ-(6r+4iacosθ)T)[],
since we also have ' Y_m,2^-2(cosθ)=0 by (<ref>). The explicit computation of (<ref>) yields
e_3^≤ 1T^j_-2^k_1Φ^k_2_-2()=O(^-8-j).
Thus satisfies (<ref>) and (<ref>) with β=7+δ>1, c=1, and V=0 thanks to the assumption (<ref>) on _+. Using Proposition <ref>, we get in
|T^j_-2^k_1Φ^k_2|≲^-7-j-δ,
as stated. The result (<ref>) with the e_3 derivative is a direct application of Theorem <ref>.
Notice that (<ref>) implies in particular the energy bound on {r=}∩{≥ 1} :
∬_{r=}∩{≥ 1}[T^j_-2^k_1Φ^k_2e_3^≤ 1]ν≲ 1.
We will use the symmetric version of this bound for , in the region {u≥ 1}. This is where we will use the initial assumption (<ref>) on '_+. Recall that it will only be used to deduce the precise asymptotics of on the upper part of the Cauchy horizon, i.e. in , see Remark <ref>. The symmetric argument in the region {u≥ 1}, together with assumption (<ref>) gives the following analog of (<ref>) :
∬_{r=}∩{u≥ 1}[T^j^k_1Φ^k_2^≤ 1]ν u≲ 1.
§.§ Precise asymptotics of ψ_+2 in region 𝐈
We have the ansatz for on the event horizon, given by (<ref>). We show that this ansatz propagates to region , using the asymptotics for in derived in Section <ref>, and the Teukolsky-Starobinsky identity (TSI) (<ref>). The idea is that the TSI can be rewritten as a relation between and , with error terms that can be bounded using the fact that T derivatives of gain powers of ^-1. This will imply that the O(^-7-δ) bound for in also holds for .
Assume that satisfies (<ref>). Then we have in ,
=+,
where for j≤min(N_j^--10,N_j^+) and 2k_1+k_2≤min(N_k^--9,N_k^+),
|e_3^≤ 3T^j_+2^k_1Φ^k_2|≲^-7-j-δ.
Let
:=-.
To highlight the important points of the argument, we begin with the case k_1=0. We commute with T^jΦ^k_2 and develop the LHS of TSI (<ref>) to get
()^4T^jΦ^k_2+∑_1≤ p≤ 4,0≤ q≤ 3O(1)()^qT^p+jΦ^k_2=∂_r_in^4T^jΦ^k_2.
Next, we substract from both sides of (<ref>) the quantity
()^4T^jΦ^k_2() =24T^jΦ^k_2(∑_|m|≤ 2Q_m,2Y_m,2^+2(cosθ)e^imϕ_+)
=∂_r_in^4T^jΦ^k_2(),
where we used (<ref>) four times, and the expression (<ref>) of A_m(r) to get ∂_r_in^4(A_m(r))=A_m^(4)(r)=24. This gives
()^4T^jΦ^k_2+∑_1≤ p≤ 4,0≤ q≤ 3O(1)()^qT^p+jΦ^k_2=∂_r_in^4T^jΦ^k_2.
The crucial point is that the second term on the LHS of (<ref>) contains at least one extra T derivative[Also notice that we lose a lot of derivatives to control this term.] compared to the other terms in (<ref>). In order to estimate this term, we use the Sobolev embedding (<ref>) and (<ref>) to obtain in
|∑_1≤ p≤ 4,0≤ q≤ 3O(1)()^qT^p+jΦ^k_2| ≲∑_1≤ p≤ 4,0≤ q≤ 3(')^≤1()^qT^p+jΦ^k_2_L^2(S(u,))
≲∑_p=1^4(')^≤ 3T^j+pΦ^k_2_L^2(S(u,)).
Using the definition (<ref>) of the Carter operator we developp
(')^≤ 3T^j+pΦ^k_2=∑_0≤ n≤ 3, 0≤ m≤ 6 O(1)_-2^nT^mT^j+pΦ^k_2.
Thus using Proposition <ref> we get[Notice that the bound |(')^≤ 3T^j+pΦ^k_2|≲^-8-j holds only for j≤ N_j-12 by Proposition <ref>, but using instead Proposition <ref> we get (')^≤ 3T^j+pΦ^k_2_L^2(S(u,))≲^-8-j for j≤ N_j-10.] for p≥ 1, |(')^≤ 3T^j+pΦ^k_2|≲^-8-j, and hence
|∑_1≤ p≤ 4,0≤ q≤ 3O(1)()^qT^p+jΦ^k_2|≲^-8-j.
Next, we use the Sobolev embedding again to get
|()^4T^jΦ^k_2|≲(')^≤3T^jΦ^k_2_L^2(S(u,))
and we can developp again the RHS to get
(')^≤3T^jΦ^k_2=∑_0≤ n≤ 3, 0≤ m≤ 6O(1)_-2^nT^mT^jΦ^k_2
which gives, using Proposition <ref>,
|()^4T^jΦ^k_2|≲^-7-j-δ.
Combining (<ref>), (<ref>) and (<ref>) yields
|∂_r_in^4T^jΦ^k_2|≲^-7-j-δ in .
Now, in the Eddington-Finkelstein coordinates (,r,θ,ϕ_+), and using the initial condition (<ref>) on _+, we easily integrate (<ref>) four times from r_+ to r on =cst to get |e_3^≤ 3T^jΦ^k_2|≲^-7-j-δ in , which concludes the proof of (<ref>) in the case k_1=0.
Finally, to treat the case k_1≠ 0, first notice that we can developp again
T^j _+2^k_1Φ^k_2=∑_0≤ n≤ k,0≤ m≤ 2kO(1)(')^nT^m+jΦ^k_2
so we only need to show the O(^-7-j-δ) decay of (')^nT^jΦ^k_2 for any j,n. Differentiating (<ref>) by (')^n gives
(')^n()^4T^jΦ^k_2+∑_1≤ p≤ 4,0≤ q≤ 3O(1)(')^n()^qT^p+jΦ^k_2=∂_r_in^4(')^nT^jΦ^k_2,
and applying the exact same techniques as in the case k_1=0, controlling any angular derivative ,' using the Carter operator, gives
|∂_r_in^4T^j _+2^k_1Φ^k_2|≲^-7-j-δ,
and thus (<ref>) by integrating from _+ on
=cst as in the case k_1=0.
Using the fact that the integral is taken on {r=}, the control (<ref>) of T^j^k_1Φ^k_2 in as well as the relation =e_3+O(1)T+O(1)Φ on {r=}, we can rewrite the energy bound (<ref>) as
∬_{r=}∩{u≥ 1}[T^j^ke_3^≤ 1]ν u≲ 1.
We continue this section by proving an energy bound for on {r=}∩{≥ 1}. This will be useful information on the initial data when doing the energy estimates for the spin +2 Teukolsky equation in region . The following result is a corollary of Proposition <ref>.
Assume that satisfies (<ref>). Then we have, for j≤min(N_j^–12,N_j^+-2) and 2k_1+k_2≤min(N_k^–11,N_k^+-2),
∫_S(u,)[T^j_+2^k_1Φ^k_2e_3^≤ 1]ν=O(^-14-2δ-2j) on {r=}∩{≥ 1}.
Using (<ref>) we get
|e_3^≤ 2T^j _+2^k_1Φ^k_2|≲^-7-j-δ, on {r=}.
Next, we write
e_4=-μ e_3+T+a/r^2+a^2Φ.
Using Proposition <ref>, we can bound
Φ T^j _+2^k_1Φ^k_2e_3^≤ 1_L^2(S(u,))≲^-7-j-δ
on {r=}, as well as |TT^j _+2^k_1Φ^k_2e_3^≤ 1|≲^-8-j. Thus we get
|e_4T^j _+2^k_1Φ^k_2e_3^≤ 1|ν≲^-14-2δ-2j, on {r=}.
Finally, denoting
ψ:=T^j _+2^k_1Φ^k_2e_3^≤ 1,
to bound |∂_θψ|^2+|𝒰ψ|^2 we first write
|∂_θψ|^2+|𝒰ψ|^2≲ |∂_θψ|^2+1/sin^2θ|Φψ+iscosθψ|^2+|ψ|^2.
Then we use an integration by parts formula (see for example <cit.>) to get
∫_S(u,)|∂_θψ|^2+1/sin^2θ|Φψ+2icosθψ|^2ν =∫_S(u,)('+2)ψ·ψν
≲ψ^2_L^2(S(u,))+(')ψ^2_L^2(S(u,)).
This gives, reinjecting the definition of the Carter operator (<ref>), and using Proposition <ref> ,
∫_S(u,)|∂_θ T^j _+2^k_1Φ^k_2e_3^≤ 1|^2+|𝒰T^j _+2^k_1Φ^k_2e_3^≤ 1|^2ν
≲T^≤ 2_+2^≤ k+1T^je_3^≤ 1^2_L^2(S(u,))≲^-14-2δ-2j,
which concludes the proof of Corollary <ref>.
We continue with an energy boundedness result on {r=} for that will be used only at the end of the paper to get the precise asymptotics in region .
Assume that satisfies (<ref>). Then we have for j≤min(N_j^–12,N_j^+-2) and 2k_1+k_2≤min(N_k^–11,N_k^+-2),
∬_{r=}[T^j^k_1Φ^k_2e_3^≤ 1]≲ 1.
The proof is made by combining Corollary <ref> on {r=}∩{≥ 1} and the energy bound (<ref>) on {r=}∩{u≥ 1}.
§ PRECISE ASYMPTOTICS IN BLUESHIFT INTERMEDIATE REGION 𝐈𝐈
§.§ Energy method for the spin +2 Teukolsky equation in
In this section, we consider ψ a spin +2 scalar such that there are constants
V∈ℝ, c>1/4, β>1,
such that for 0≤ j≤ N_j', and 0≤ 2k_1+k_2≤ N_k',
∙ T^j_+2^k_1Φ^k_2ψ=O(^-β-j) on {r=r_𝔟}∩{≥ 1}
,
∙ [T^j_+2^k_1Φ^k_2ψ]ν=O(^-2β-2j) on {r=r_𝔟}∩{≥ 1},
∙ _+2^(c,V)T^j_+2^k_1Φ^k_2ψ=O(^-β-j) in .
Recall that we have
e_3=-μ e_3, e_4=(-μ)^-1e_4.
The goal of this section is to propagate the polynomial decay (<ref>) on the spacelike hypersurface {r=} to region , see Proposition <ref>. In this region, we consider the positive +2 spin because it provides a positive bulk term in the energy estimate for the Teukolsky equation. We begin by proving the following energy estimate, that holds only for c>1/4 (more precisely, see (<ref>)).
Assume that ψ is a spin +2 scalar satisfying (<ref>) and (<ref>) with c>1/4, β>1. Then for γ>0 small enough, we have, for j≤ N_j', 2k_1+k_2≤ N_k', and _1≥ 1,
∬_{=_1}∩𝐈𝐈[T^j_+2^k_1Φ^k_2ψ](-μ) ν d u≲_1^-2β-2j.
From now on, we assume that γ>0 is small enough such that Proposition <ref> holds.
In what follows, we denote s=+2. Similarly as in , the computations work for a general spin s, but the bulk term will only be positive for s>0. Once again, it suffices to treat the case j=k_1=k_2=0. Recall that in the coordinates (u,,θ,ϕ_-), we have ∂_u=-μ e_3+a/(r^2+a^2)Φ, ∂_=e_4. Thus we compute, in region 𝐈𝐈 :
μ_s^(c,V) =4(r^2+a^2)e_3∂_+ μ𝒰^2+μ/sinθ∂_θ(sinθ∂_θ)-4 iasμcosθ T+2isμa^2sinθcosθ/(r^2+a^2)U
+2 rμ (e_3+∂_)-4 s[(r-M)e_3+ rμ T]+2arμ/r^2+a^2Φ+ sμ -s^2μa^4sin^2θcos^2θ/(r^2+a^2)^2,
=4(r^2+a^2)∂_e_3+ μ𝒰^2+μ/sinθ∂_θ(sinθ∂_θ)-4 iasμcosθ T+2isμa^2sinθcosθ/(r^2+a^2)U
+2 rμ (e_3+∂_)-4 s[(r-M)e_3+ rμ T]-2arμ/r^2+a^2Φ+ sμ -s^2μa^4sin^2θcos^2θ/(r^2+a^2)^2,
where we used
4(r^2+a^2)[∂_,]=4arμ/r^2+a^2Φ.
Next, similarly as in , we multiply (<ref>) by μ and by the complex conjugate of
X(ψ):=f(r)e_3ψ+g(r)∂_ψ
where
f(r):=(r^2+a^2)^p(-μ)^-1, g(r)=(r^2+a^2)^p,
we take the real part, and we integrate on S(u,) with respect to ν. We get
(f(r)e_3ψμ_s^(c,V)ψ)ν+(g(r)∂_ψμ_s^(c,V)ψ)ν=μ(X(ψ)O(^-β))ν.
Making the substitutions[Recall that our convention for the derivatives ∂_u, ∂_ is that we use the ingoing double null like coordinates in {r≥} and the outgoing double null like coordinates in {r≤}.]
∂_u→, e_4→∂_,
the same computation as in the proof of (<ref>) in Appendix <ref> for the energy method in the redshift region gives
∂_(∫_S(u,)𝐅_[ψ]ν)+∂_u(∫_S(u,)𝐅_u[ψ]ν)+∫_S(u,)𝐁[ψ]ν=μ(X(ψ)O(^-β))ν,
where
𝐅_[ψ]=2(r^2+a^2)f(r)|e_3ψ|^2-1/2μ g(r)(|∂_θψ|^2+|Uψ|^2)+asinθμℜ(X(ψ)Uψ),
𝐅_u[ψ]=2(r^2+a^2)g(r)|∂_ψ|^2-1/2μ f(r)(|∂_θψ|^2+|Uψ|^2)-asinθμℜ(X(ψ)Uψ)),
and
𝐁[ψ] =2(rμ f(r)-∂_((r^2+a^2)f(r)))|e_3ψ|^2+2(rμ g(r)-∂_u((r^2+a^2)g(r)))|∂_ψ|^2
+1/2(∂_u(μ f(r))+∂_(μ g(r)))(|∂_θψ|^2+|Uψ|^2)
+μ g(r)sra^2μcosθsinθ/(r^2+a^2)^2ℑ(ψ Uψ)
+4μ g(r)ascosθℑ(∂_ψTψ)-4srg(r)μℜ(∂_ψTψ)
+2g(r)arμ/r^2+a^2ℜ(∂_ψΦψ)-g(r)(μ s+s^2μa^4sin^2θcos^2θ/(r^2+a^2)^2-μ V)ℜ(∂_ψψ)
-2sg(r)μa^2sinθcosθ/(r^2+a^2)ℑ(∂_ψUψ)+g(r)(2rμ-4cs(r-M))(∂_ψe_3ψ)
+μ f(r)sra^2μcosθsinθ/(r^2+a^2)^2ℑ(ψ Uψ)
+4μ f(r)ascosθℑ(e_3ψTψ)-4srμℜ(e_3ψTψ)+2 rμ f(r)ℜ(e_3ψ∂_ψ)
-2f(r)arμ/r^2+a^2ℜ(e_3ψΦψ)-f(r)(μ s+s^2μa^4sin^2θcos^2θ/(r^2+a^2)^2-μ V)ℜ(e_3ψψ)
-2sf(r)μa^2sinθcosθ/(r^2+a^2)ℑ(e_3ψUψ)-4cs(r-M)f(r)|e_3ψ|^2.
Integrating (<ref>) on {_1≤≤_2}∩ with respect to u gives
∬_{=_2}∩𝐈𝐈𝒯_[ψ] ν d u+∬_Γ∩{_1 ≤≤_2}𝒯_Γ[ψ] ν d u+∭_{_1 ≤≤_2}∩𝐈𝐈𝐁[ψ] ν u
= ∬_{=_1}∩𝐈𝐈𝒯_[ψ] ν d u+∬_{r=r_𝔟}∩{_1 ≤≤_2}𝒯_r[ψ] ν d u+∭_{w_1 ≤w≤w_2}∩𝐈μ(X(ψ)O(^-β)) νd u du,
where
𝒯_[ψ]=(1-1/2μ) 𝐅_[ψ]-1/2μ𝐅_u[ψ], 𝒯_r[ψ]=-1/2μ(𝐅_u[ψ]+𝐅_u[ψ]),
and
𝒯_Γ[ψ]=𝐅_u[ψ]+(1-γ^γ-1)𝐅_[ψ].
Now we estimate the different quantities involved, with the previous choice of f and g.
Control of the bulk terms. We first prove a lower bound for the bulk term, which holds thanks to an effective blueshift effect, taking place for strictly positive spins.
For p=p(a,M)≫ 1 large enough, and =(a,M) sufficiently close to r_-, we have in {r_-≤ r≤},
𝐁[ψ]ν≳(-μ)[ψ]ν.
See Appendix <ref>, this is where we use the assumption c>1/4.
To control the other bulk term on the RHS of (<ref>), we write, for ε>0, similarly as in ,
|∭_{_1 ≤≤_2}∩𝐈𝐈μ(X(ψ) O(^-β))ν u|≲
ε∭_{_1 ≤≤_2}∩𝐈𝐈[ψ](-μ)ν u+ε^-1∫__1^_2^-2β.
Thus, choosing ε>0 small enough such that the first term on the RHS of (<ref>) is absorbed in the LHS of (<ref>), we get
∬_{=_2}∩𝐈𝐈𝒯_[ψ] ν d u+∬_Γ∩{_1 ≤≤_2}𝒯_Γ[ψ] ν d u+∭_{_1 ≤≤_2}∩𝐈𝐈[ψ] ν u
= ∬_{=_1}∩𝐈𝐈𝒯_[ψ] ν d u+∬_{r=r_𝔟}∩{_1 ≤≤_2}𝒯_r[ψ] ν d u+∫__1^_2^-2β.
Control of the boundary terms. As in region , we have
𝒯_[ψ]∼(-μ)[ψ], in ,
and
𝒯_r[ψ]∼[ψ], on {r=}.
We also have, as in <cit.>
𝒯_Γ[ψ]=𝐅_u[ψ]+(1-γ^γ-1)𝐅_[ψ]≥ f(r)|ψ|^2+g(r)|e_4ψ|^2-μ(f(r)+g(r))()≥ 0
for γ>0 small enough. Thus combining these boundary terms estimates with (<ref>) yields
∬_{=_2}∩𝐈𝐈[ψ] (-μ)ν d u+∭_{_1 ≤≤_2}∩𝐈𝐈[ψ] ν u
≲∬_{=_1}∩𝐈𝐈[ψ] (-μ)ν d u+∬_{r=r_𝔟}∩{_1 ≤≤_2}[ψ] ν du+∫__1^_2^-2β.
Using the energy assumption (<ref>) on {r=}, we get
∬_{=_2}∩𝐈𝐈[ψ] (-μ)ν d u+∭_{_1 ≤≤_2}∩𝐈𝐈[ψ] ν u
≲∬_{=_1}∩𝐈𝐈[ψ] (-μ)ν d u+∫__1^_2^-2β,
and we conclude the proof of Proposition <ref> using Lemma <ref>, using the fact that for any _0≥ 1, the initial energy
∬_{=_0}∩𝐈𝐈[ψ](-μ) ν d u≲ 1
is finite, as in the end of the proof of Proposition <ref>.
Assume that ψ is a spin +2 scalar satisfying (<ref>), (<ref>), and (<ref>) with c>1/4, β>1. Then we have in , for j≤ N_j' and 2k_1+k_2≤ N_k',
T^j_+2^k_1Φ^k_2ψ_L^2(S(u,))≲^-β-j.
The proof is exactly the same as the one of Proposition <ref>, i.e. we use Lemma <ref> and use the energy decay given by Proposition <ref> to bound the integrated term.
Assume that ψ is a spin +2 scalar satisfying (<ref>), (<ref>), and (<ref>) with c>1/4, β>1. Then we have in , for j≤ N_j'-2 and 2k_1+k_2≤ N_k'-2,
|T^j_+2^k_1Φ^k_2ψ|≲^-β-j.
It suffices to treat the case j=k_1=k_2=0. Using the Sobolev embedding (<ref>) and Proposition <ref>, we get pointwisely,
|ψ|^2≲∫_S(u,)|T^≤ 2ψ|^2ν+∫_S(u,)|_-2ψ|^2ν≲^-2β,
which concludes the case j=k_1=k_2=0.
§.§ Precise asymptotics of ψ_+2 in region 𝐈𝐈
We use the energy method of Section <ref> to get the following results.
Assume that satisfies (<ref>). Then we have in ,
ψ_+2=+,
where for j≤min(N_j^–14,N_j^+-4) and 2k_1+k_2≤min(N_k^–13,N_k^+-4),
|T^j_+2^k_1Φ^k_2|≲^-7-δ-j.
The proof of Proposition <ref> requires the following lemma.
In , for j,k≥ 0 we have
e_3^≤ 1_+T^j_+2^k_1Φ^k_2(1/^7∑_|m|≤2A_m(r)Q_m,Y_m,^+(cosθ)e^imϕ_+)=O(^-8-j).
The computation requires the precise expression of A_m(r), and the expression (<ref>) of the Teukolsky operator _+2, see Appendix <ref> for the complete proof.
By Proposition <ref>, we get that satisfies (<ref>) with β=7+δ. Moreover, by Corollary <ref>, satisfies (<ref>). In order to use Proposition <ref>, it remains to check that satisfies (<ref>). Using Lemma <ref> and _+2=0, we get, in ,
_+T^j_+2^k_1Φ^k_2=O(^-8-j).
This proves that satisfies (<ref>) with β=7+δ>1, c=1>1/4 and V=0, thus by Proposition <ref> we get in
|T^j_+2^k_1Φ^k_2|≲^-7-δ-j,
as stated.
The rest of this section is devoted to obtaining O(^-7-δ-j) pointwise decay for e_3 T^j_+2^k_1Φ^k_2 and e_4^k_1Φ^k_2T^j in . This will be used in Section <ref> as initial data on Γ to integrate a 1+1 wave equation that will eventually lead to the blow-up of on . Unlike in region where we already controled and could use TSI to get decay of e_3, the proof in is done by commuting the Teukolsky equation with e_3 and applying the energy method of Section <ref>.
Assume that satisfies (<ref>). Then we have, in , for j≤min(N_j^–14,N_j^+-4) and 2k_1+k_2≤min(N_k^–13,N_k^+-4),
|T^j_+2^k_1Φ^k_2e_3|≲^-7-j-δ.
By Proposition <ref>, we get that e_3 satisfies (<ref>) with β=7+δ. Moreover, by Corollary <ref>, e_3 satisfies (<ref>). In order to use Proposition <ref>, it remains to check that e_3 satisfies (<ref>). Using the commutator between _+2 and e_3 given by Proposition <ref>, and Lemma <ref> to write
T^j^k_1Φ^k_2e_3_+2()=O(^-8-j),
we get
(_+2-4[(r-M)e_3 +rT]-2)[T^j^k_1Φ^k_2e_3]
=T^j^k_1Φ^k_2(-3T-e_3_+2())
=O(^-8-j),
where we also used[Once again, the reader interested in the count of the loss of derivatives will notice that the bound for the RHS holds only for j≤ N_j-15, but also for j≤ N_j-14 after integration on the sphere, which is the only bound that we use in the energy estimates.] Proposition <ref>. Using (<ref>) gives
_+2^(1/2,-2)T^j^k_1Φ^k_2e_3=O(^-8-j).
This proves that e_3 satisfies (<ref>) with β=7+δ>1, c=1/2>1/4 and V=-2, thus by Proposition <ref> we get in
|T^j_+2^k_1Φ^k_2e_3|≲^-7-j-δ,
as stated.
Assume that satisfies (<ref>). Then we have, in , for j≤min(N_j^–15,N_j^+-5) and 2k_1+k_2≤min(N_k^–14,N_k^+-5),
|e_4 T^j_+2^k_1Φ^k_2|≲^-7-δ-j.
We have e_4=-μ e_3+O(1)T+O(1)Φ, which gives the stated bound by Propositions <ref> and <ref>.
§ PRECISE ASYMPTOTICS IN BLUESHIFT REGION 𝐈𝐈𝐈∪ NEAR
In all this section, we assume that ψ_± 2 satisfy the assumptions of Theorem <ref>.
§.§ Energy estimate and pointwise bounds for in
We have, for (w_1,_1)∈, and for j≤min(N_j^–12,N_j^+-2) and 2k_1+k_2≤min(N_k^–11,N_k^+-2),
∬_{=_1, w≤ w_1}∩(∪)[T^j_+2^k_1Φ^k_2]ν r≲ 1.
As T and _+2 commute with the Teukolsky equation, it suffices to treat the case j=k=0. We implement the same energy method as in region (see Section <ref>), noticing that the computations of the energy method work also in , and we integrate (<ref>) with[In this case, _+2^(c,V)ψ=_+2=0, so the RHS in (<ref>) is exactly zero.] ψ= on {w≤ w_1, ≤_1}∩(∪). We get
∬_{w≤ w_1, =_1}∩(∪)𝒯_[] ν d u+∬_{w= w_1, ≤_1}∩(∪)𝒯_w[] ν d u
+∭_{w≤ w_1, ≤_1}∩(∪)𝐁[] ν u= ∬_{r=}∩{_(w_1)≤≤_1}𝒯_r[] ν d u,
where the boundary term on {w=w_1} is
𝒯_w[] :=(1-μ/2)𝐅_u[]-(μ/2)𝐅_[]
=2(r^2+a^2)g(r)|∂_ψ|^2-12μ f(r)(|∂_θψ|^2+|ψ|^2)-asinθμℜ(X(ψ)ψ)
-μ/2[2(r^2+a^2)f(r)|ψ|^2+2(r^2+a^2)g(r)|∂_ψ|^2-12μ (f(r)+g(r))(|∂_θψ|^2+|ψ|^2)]
∼ (-μ)[ψ],
where we absorbed the term asinθμℜ(X(ψ)ψ) using
|asinθμ g(r) ℜ(∂_ψψ)|≤ a^2g(r)|∂_ψ|^2+1/4μ^2g(r)|ψ|^2
and
|asinθμ f(r) ℜ(ψψ)|≤ -3/4μ a^2f(r)|ψ|^2-1/3μ f(r)|ψ|^2.
Using (<ref>) and (<ref>), as well as Lemma <ref>, this yields
∬_{w≤ w_1, =_1}∩(∪)[](-μ)ν u+∬_{≤_1, w=w_1}∩(∪)[](-μ)ν u
+∭_{w≤ w_1, ≤_1}∩(∪)[](-μ)ν u≲∬_{r=}∩{_(w_1)≤≤_1}[]ν.
Using Corollary <ref>, and the trivial bound
[]≲^-14,
we get
∬_{r=}∩{_(w_1)≤≤_1}[]ν≲∫_{_(w_1)≤≤_1}^-14≲ 1.
Since we have by changing variable :
∬_{w≤ w_1, =_1}∩(∪)[](-μ)ν u∼∬_{w≤ w_1, =_1}∩(∪)[]ν r,
the conclusion of the proof of Proposition <ref> follows from (<ref>) and (<ref>).
We have in , for j≤min(N_j^–14,N_j^+-4) and 2k_1+k_2≤min(N_k^–13,N_k^+-4),
|T^j_+2^k_1Φ^k_2|≲ 1.
As in the proof of Proposition <ref>, we use Lemma <ref> and use the energy decay given by Proposition <ref> to bound the integrated term, which yields
T^j_+2^k_1Φ^k_2_L^2(S(u,))≲ 1, in .
Using the Sobolev embedding (<ref>), we infer
|T^j^k_1Φ^k_2|^2≲∫_S(u,)|T^≤ j+2^k_1Φ^k_2|^2ν+∫_S(u,)|T^j^k+1|^2ν≲ 1,
as stated.
§.§ The coupling of _s to ψ_s through a 1+1 wave equation
The goal of this section is to reformulate the Teukolsky equation _+2=0 as a 1+1 wave equation for with a right-hand-side that we can control, so that we can solve explicitely for by integrating twice the equation.
We have, for j,k_1,k_2≥ 0 and s=± 2,
((r^2+a^2)Δ^s e_4T^j_s^k_1Φ^k_2_s)=1/4(-μ)(_s+2aTΦ-2(1+2s)rT)[T^j_s^k_1Φ^k_2ψ_s].
Notice that we control the right-hand-side of (<ref>) thanks to the L^∞ bounds of Proposition <ref>. The blow-up of on will come from the integration of (<ref>), as the inverse of the factor Δ^ blows up exponentially on .
We have
((r^2+a^2)Δ^se_4_s) =rμΔ^s e_4_s+sΔ^s(r-M)e_4_s+(r^2+a^2)Δ^s e_4_s
=1/4μΔ^s(-4(r^2+a^2)e_3e_4+4s(r-M)μ^-1e_4+4re_4)[_s].
Using e_4-μ e_3=μ∂_r=2e_4-T-a/(r^2+a^2)Φ we get
((r^2+a^2) Δ^s e_4_s)=
1/4μΔ^s(-4(r^2+a^2)e_3e_4+4s(r-M)μ^-1e_4+2r(e_4-μ e_3)+2r T+2ra/r^2+a^2Φ)[_s].
Next, using the expression (<ref>) we get
_s= -4(r^2+a^2)e_3e_4+ U^2+1/sinθ∂_θ(sinθ∂_θ)-4iascosθ T
+2 r (e_4-μ e_3)+4s[(r-M)μ^-1e_4- rT]+2ar /r^2+a^2Φ+ s,
where
U^2=1/sin^2θΦ^2+a^2sin^2 T^2+2a TΦ+2aiscosθ T+2iscosθ/sin^2θΦ-s^2^2θ.
We infer, using the definition of the Carter operator (<ref>),
_s= -4(r^2+a^2)e_3e_4+2 r (e_4-μ e_3)+4s(r-M)μ^-1e_4+2ar /r^2+a^2Φ
+'+a^2sin^2 T^2+2aTΦ-2aiscosθ T-4srT
=-4(r^2+a^2)e_3e_4+2 r (e_4-μ e_3)+4s(r-M)μ^-1e_4+2ar /r^2+a^2Φ
+_s+2aTΦ-4srT.
Thus combining this with the Teukolsky equation _s_s=0 gives (<ref>) with j=k_1=k_2=0, where we use the fact that Δ^s commutes with _s, T and Φ. We then extend this to general non zero j,k_1,k_2 by commuting with T^j_s^k_1Φ^k_2.
§.§ Precise asymptotics of in and blow-up at ∩
In this section we will use the crucial fact that 2r^*≥^γ in , thus by (<ref>),
-Δ∼exp(-2|κ_-|r^*)≤exp(-|κ_-|^γ) in .
We have in ,
(u,,θ,ϕ_-)=Δ^-2(u,)/^7∑_|m|≤2A_m(r_-)e^2imr_mod(u,)Q_m,Y_m,^+(cosθ)e^imϕ_-
+Err[],
where for j≤min(N_j^–15,N_j^+-5) and 2k_1+k_2≤min(N_k^–14,N_k^+-5),
|T^j^k_1Φ^k_2Err[]|≲Δ^-2^-7-j-δ in .
The proof is basically done by integrating the 1+1 wave equation (<ref>). A bit of work is necessary at the end of the proof to get rid of the dependance in u that comes from the boundary terms on Γ, i.e. to prove that the upper bound for Err[] is uniform in u. Recall that we have the ansatz given by Proposition <ref>:
=1/^7∑_|m|≤2A_m(r)Q_m,Y_m,^+(cosθ)e^imϕ_++, on Γ
where
|e_4^≤ 1T^j^k_1Φ^k_2|≲^-7-j-δ on Γ⊆,
using Proposition <ref> and Corollary <ref>. This implies
e_4= μ∂_r(Δ^-2)/2^7∑_|m|≤2A_m(r)Q_m,Y_m,^+(cosθ)e^imϕ_++Δ^-2/2^7∑_|m|≤2μ A_m'(r)Q_m,Y_m,^+(cosθ)e^imϕ_+
+aΔ^-2/(r^2+a^2)^7∑_|m|≤2imA_m(r)Q_m,Y_m,^+(cosθ)e^imϕ_++Err[e_4],
with
T^j^k_1Φ^k_2Err[e_4]=O(Δ^-2^-7-j-δ) on Γ.
Also, notice that the term
S:=Δ^-2μ/2^7∑_|m|≤2 A_m'(r)Q_m,Y_m,^+(cosθ)e^imϕ_+
has an extra factor μ that decays exponentially on Γ, such that |^k_1Φ^k_2T^jS|≲-Δ^-1^-7-j. We can thus add S to the error term while still satisfying (<ref>), to get on Γ,
Δ^2e_4=1/^7∑_|m|≤2A_m(r)(ima-2(r-M)/r^2+a^2)Q_m,Y_m,^+(cosθ)e^imϕ_++Δ^2Err[e_4].
Next, we define
Z(r,θ,ϕ_+):=∑_|m|≤2A_m(r)(ima-2(r-M)/r^2+a^2)Q_m,Y_m,^+(cosθ)e^imϕ_+.
Then we integrate the 1+1 wave equation (<ref>) on a curve from Γ to (u,,θ,ϕ_+) with constant ,θ,ϕ_+, using =∂_u in these coordinates. Using Proposition <ref> to bound the RHS, and denoting u_Γ()=^γ-, we get in
|(r^2+a^2)Δ^2T^j^k_1Φ^k_2e_4-T^j^k_1Φ^k_2(r_Γ()^2+a^2)/^7 Z(r_Γ(),θ,ϕ_+)+O(^-7-δ-j)|
≲∫_u_Γ()^u(-μ)(u',) u'
≲(u-u_Γ())e^-|κ_-|^γ≲^-7-δ-j,
where we used the definition of to write the exponential decay of μ with in , and the fact that u-u_Γ()=u+-^γ≲ in , as u≲ 1 in . Moreover,
|^k_1Φ^k_2(r_Γ()^2+a^2/r^2+a^2)Z(r_Γ(),θ,ϕ_+)-^k_1Φ^k_2∑_|m|≤2A_m(r_-)ima-2(r-M)/r^2+a^2Q_m,Y_m,^+(cosθ)e^imϕ_+|
≲∑_|m|≤2|A_m(r_Γ())(ima-2(r_Γ()-M))-A_m(r_-)(ima-2(r-M))|
≲ r_Γ()-r_-≲|μ(r_Γ())|=O(exp(-|κ_-|^γ)),
in , which together with (<ref>) yields
T^j^k_1Φ^k_2e_4(u,,θ,ϕ_+)=T^j^k_1Φ^k_2 Δ^-2(u,)/^7∑_|m|≤2A_m(r_-)ima-2(r-M)/r^2+a^2Q_m,Y_m,^+(cosθ)e^imϕ_+
+O(Δ^-2^-7-j-δ), in .
We finally integrate (<ref>) on a curve from Γ to (u,,θ,ϕ_-) with constant u,θ,ϕ_-, using e_4=∂_ in these coordinates, and we get, using Proposition <ref>,
T^j^k_1Φ^k_2(u,,θ,ϕ_-)
=T^j^k_1Φ^k_2Δ^-2(r_Γ(u))/_Γ(u)^7∑_|m|≤2A_m(r_Γ(u))Q_m,Y_m,^+(cosθ)e^imϕ_-+2imr_mod(r_Γ(u))+O(Δ^-2(r_Γ(u))/_Γ(u)^7+j+δ)
+∫__Γ(u)^ T^j^k_1Φ^k_2Δ^-2(u,')/(')^7∑_|m|≤2[A_m(r_-)ima-2(r'-M)/(r')^2+a^2Q_m,Y_m,^+(cosθ)e^imϕ_-+2imr_mod(u,')]'
+ ∫__Γ(u)^ O(Δ^-2(u,')(')^-7-j-δ)'.
We now simplify the terms on the RHS of (<ref>). First, we have
∫__Γ(u)^Δ^-2(u,')(')^-7-j-δ' =∫__Γ(u)^-r^2+a^2/2(r-M)/(Δ^-2(u,'))(')^-7-j-δ'
∼∫__Γ(u)^/(Δ^-2(u,'))(')^-7-j-δ'
∼Δ^-2(u,)^-7-j-δ-Δ^-2(u,_Γ(u))_Γ(u)^-7-j-δ
+(7+j+δ)∫__Γ(u)^Δ^-2(u,')(')^-8-j-δ'
where we used the fact that ∂_Δ^-2(u,)=e_4Δ^-2=(μ/2)∂_rΔ^-2(r). This implies
∫__Γ(u)^Δ^-2(u,')(')^-7-j-δ'∼Δ^-2(u,)^-7-j-δ-Δ^-2(u,_Γ(u))_Γ(u)^-7-j-δ
and thus
∫__Γ(u)^ O(Δ^-2(u,')(')^-7-j-δ)'=O(Δ^-2(u,)^-7-j-δ)+O(Δ^-2(u,_Γ(u))_Γ(u)^-7-j-δ).
We now compute the second lign on the RHS of (<ref>). We have
∫__Γ(u)^ T^j^k_1Φ^k_2Δ^-2(u,')/(')^7ima-2(r-M)/r^2+a^2e^2imr_mod(u,')Y_m,^+(cosθ)e^imϕ_-'
=∫__Γ(u)^/(Δ^-2(u,')e^2imr_mod(u,'))T^j^k_1Φ^k_21/(')^7Y_m,^+(cosθ)e^imϕ_-'
=T^j(Δ^-2(u,)/^7e^2imr_mod(u,)-Δ^-2(u,_Γ(u))/_Γ(u)^7e^2imr_mod(u,_Γ(u)))^k_1Φ^k_2Y_m,^+(cosθ)e^imϕ_-
+7∫__Γ(u)^ T^j^k_1Φ^k_2Δ^-2(u,')/(')^8e^2imr_mod(u,')Y_m,^+(cosθ)e^imϕ_-'
=T^j^k_1Φ^k_2Δ^-2(u,)/^7e^2imr_mod(u,)Y_m,^+(cosθ)e^imϕ_-
-T^j^k_1Φ^k_2Δ^-2(u,_Γ(u))/_Γ(u)^7e^2imr_mod(r_Γ(u))Y_m,^+(cosθ)e^imϕ_-
+O(Δ^-2(u,)^-7-j-δ)+O(Δ^-2(u,_Γ(u))_Γ(u)^-7-j-δ),
using the previous computation (<ref>) with δ=1. Next, notice that the first term on the RHS of (<ref>) can be rewritten
T^j^k_1Φ^k_2Δ^-2(r_Γ(u))/_Γ(u)^7∑_|m|≤2A_m(r_Γ(u))Q_m,Y_m,^+(cosθ)e^imϕ_-+2imr_mod(r_Γ(u))
=T^j^k_1Φ^k_2Δ^-2(r_Γ(u))/_Γ(u)^7∑_|m|≤2A_m(r_-)Q_m,Y_m,^+(cosθ)e^imϕ_-+2imr_mod(r_Γ(u))+O(Δ^-1(r_Γ(u))_Γ(u)^-7-j)
=T^j^k_1Φ^k_2Δ^-2(r_Γ(u))/_Γ(u)^7∑_|m|≤2A_m(r_-)Q_m,Y_m,^+(cosθ)e^imϕ_-+2imr_mod(r_Γ(u))+O(Δ^-2(r_Γ(u))_Γ(u)^-7-j-δ).
Thus (<ref>) rewrites, using (<ref>), (<ref>) and (<ref>),
T^j^k_1Φ^k_2(u,,θ,ϕ_-) =T^j^k_1Φ^k_2Δ^-2(u,)/^7∑_|m|≤ 2A_m(r_-)Q_m,2e^2imr_mod(u,)Y_m,^+(cosθ)e^imϕ_-
+O(Δ^-2(u,)^-7-j-δ)+O(Δ^-2(u,_Γ(u))_Γ(u)^-7-j-δ).
We now finish the proof of Theorem <ref> by showing
Δ^-2(u,_Γ(u))_Γ(u)^-7-j-δ=O(Δ^-2(u,)^-7-j-δ)
in . This requires some care. Note that in a region {-1≤ u≲ 1} where u is bounded from below and from above, the term that we are trying to bound is O(1) which is controlled by Δ^-2(u,)^-7-j-δ. Thus we restrict the remaining part of the analysis to {u≤ -1}∩, where we want to prove
Δ^-2(r_Γ(u))/_Γ(u)^7+δ+j≲Δ^-2(u,)/^7+δ+j.
Notice that Δ^-2(r_Γ(u))≤Δ^-2(u,), but that we cannot directly control _Γ(u)^-7-δ-j by ^-7-δ-j in , as a priori we only have _Γ(u)≤. Let γ'∈(γ,1). We write =𝐀∪𝐁 where
𝐀:={2r^*≥^γ'}∩, 𝐁:={2r^*≤^γ'}∩.
* In 𝐀, we have Δ^-2(r_Γ(u))∼exp(2|κ_-|_Γ(u)^γ)≤exp(2|κ_-|^γ). Moreover by the definition of 𝐀, Δ^-2(u,)≳exp(2|κ_-|^γ'),
thus
Δ^-2(r_Γ(u))^7+δ+j/Δ^-2(u,)_Γ(u)^7+δ+j≲exp(2|κ_-|(^γ-^γ'))^7+j+δ≲ 1.
* In 𝐁, we use
Δ^-2(r_Γ(u))/_Γ(u)^7+δ+j≲Δ^-2(u,)/_Γ(u)^7+δ+j,
thus we only need to show that we can control _Γ(u)^-7-δ-j by ^-7-δ-j there. We recall u+_Γ(u)=_Γ(u)^γ thus _Γ(u)∼ -u as u→-∞. Moreover in 𝐁 we have u+≤^γ' thus ≤^γ'-u and hence
-u/≥ 1-^γ'-1.
As we are interested in the asymptotics on , we can restrict the analysis to {≥ 2}, and we get in 𝐁 :
-u/≥ 1-2^γ'-1>0.
We finally reinject this back into (<ref>) to get in 𝐁 :
Δ^-2(r_Γ(u))/_Γ(u)^7+δ+j≲Δ^-2(u,)/_Γ(u)^7+δ+j≲Δ^-2(u,)/(-u)^7+δ+j≲Δ^-2(u,)/^7+δ+j.
This concludes the proof of Theorem <ref>.
§.§ End of the proof of Theorem <ref>.
It remains to prove the asymptotic behavior (<ref>) in region ={≥_,γ}∩{w≥ w_,γ}, where w_,γ=2^*-(2^*)^1/γ-+r_- and _,γ=(2^*)^1/γ. The first step is to notice that we can extend the bounded energy method of Section <ref> to get non-sharp L^∞ bounds for in .
We have in , for j≤min(N_j^–12,N_j^+-2) and 2k_1+k_2≤min(N_k^–11,N_k^+-2),
|T^j^k_1Φ^k_2|≲ 1.
The proof is an extension of the argument in Section <ref> taking into accound the symmetric bounds on {u≥ 1}∩{r=}. Integrating (<ref>) with[Notice that in this case, the RHS is exactly zero.] ψ=T^j^k_1Φ^k_2 on a triangle-shaped region ℛ:={r_-<r<}∩{w≤ w_1}∩{≤_1}, with (w_1,_1)∈, gives similarly as in (<ref>),
∬_{w≤ w_1, =_1}∩ℛ[T^j^k_1Φ^k_2](-μ)ν u≲∬_∂ℛ∩{r=}[T^j^k_1Φ^k_2]ν≲ 1,
where we bounded the energy term on {r=} using Corollary <ref>. As before, using (<ref>) together with Lemma <ref>, and using the initial ^-7-j bound on {r=} given by Proposition <ref>, we obtain T^j^k_1Φ^k_2_L^2(S(u,))≲ 1 in . We conclude using the Sobolev embedding (<ref>).
The following result, together with Theorem <ref>, concludes the proof of Theorem <ref>.
We have in ,
(u,,θ,ϕ_-)=Δ^-2(u,)/^7∑_|m|≤2A_m(r_-)e^2imr_mod(u,)Q_m,Y_m,^+(cosθ)e^imϕ_-
+Err[],
where for j≤min(N_j^–15,N_j^+-5) and 2k_1+k_2≤min(N_k^–14,N_k^+-5),
|T^j^k_1Φ^k_2Err[]|≲Δ^-2^-7-j-δ.
Note that Theorem <ref> shows that this result holds on {w= w_,γ}∩{≥_,γ}. We need to prove sufficient decay for the derivative ∂_u (see (<ref>)), that is transverse to the hypersurfaces {w=cst}, to infer the result in ={w≥ w_,γ}∩{≥_,γ} from the result on {w= w_,γ}∩{≥_,γ} by integration. Using Proposition <ref>
we find that e_3 satisfies the PDE
(_+2-4[(r-M)e_3-rT]-2)[e_3]=-3T.
Thus using Lemma <ref> and commuting with T^j^k_1Φ^k_2 we obtain
_+2^(1/2,-2)T^j^k_1Φ^k_2e_3=O(1), in .
This together with the computations of the energy method in ∪, that holds also in , shows that (<ref>) holds in {≥_,γ}∩{r_-<r≤} for ψ=T^j^k_1Φ^k_2e_3 and β=0. Let (u,)∈ and denote the corresponding values (w_1,_1)=(w_1(u,),_1(u,)). Then, integrating (<ref>) on :={r_-<r<}∩{w≤ w_1}∩{≤_1}, with ψ=T^j^k_1Φ^k_2e_3, and with β=0 for the RHS gives :
∬_{w≤ w_1, =_1}∩[ψ](-μ)ν u+∬_{≤_1, w=w_1}∩[](-μ)ν u
+∭_{w≤ w_1, ≤_1}∩[ψ](-μ)ν u≲
∬_{r=}∩{_(w_1)≤≤_1}[ψ]ν+∭_{w≤ w_1, ≤_1}∩|X(ψ)|O(-μ)ν u,
where we also used (<ref>) and (<ref>), as well as Lemma <ref>.
Recall that, |X(ψ)|^2≲[ψ] thus using a Cauchy-Schwarz inequality we can bound the last term on the RHS of (<ref>) by
ε∭_{w≤ w_1, ≤_1}∩[ψ](-μ)ν u+ε^-1∭_{w≤ w_1, ≤_1}∩O(-μ)ν u.
Choosing ε>0 small enough so that the first term of (<ref>) gets absorbed on the LHS of (<ref>). Moreover,
∭_{w≤ w_1, ≤_1}∩(-μ)ν u =4π∫__(w_1)^_(_1)∫_u_()^u(,w_1)(-μ) u
∼4π∫__(w_1)^_(_1)∫_^r(,w_1) r
≲_(_1)-_(w_1)
≲ w_1+_1+K,
where K(a,M)>0 is a constant, and where we used
_(_1) =_1+-r_+,
_(w_1) =2^*-(w_1+-r_-)=-w_1+cst.
Using Corollary <ref> we also have the bound
∬_{r=}∩{_(w_1)≤≤_1}[T^j^k_1Φ^k_2e_3]ν≲ 1.
Thus we have shown
∬_{w≤ w_1, =_1}∩[T^j^k_1Φ^k_2e_3](-μ)ν u≲ u+.
Using Lemma <ref> yields[Notice that u+=2r^*>0 in {r_-≤ r≤}.]
T^j^k_1Φ^k_2e_3_L^2(S^2(u,))≲ T^j^k_1Φ^k_2e_3_L^2(S^2(u(,_1),(,_1)))
+(∬_{w≤ w_1, =_1}∩[T^j^k_1Φ^k_2e_3](-μ)ν u)^1/2
≲√(u+) in ,
where we used (<ref>) and (<ref>) to bound the term on {r=}. Using the Sobolev embedding (<ref>) finally gives
|T^j^k_1Φ^k_2e_3|(u,_1,θ,ϕ_-)≲√(u+)≲ u+, ().
We will now use the fact that =∂_u in coordinates (u,,θ,ϕ_+), and integrate the previous estimate on =cst from {w= w_,γ} to w(u,) to get information in from the lower bound that we have on {w= w_,γ} from the Theorem <ref>. We have the estimate
| T^j^k_1Φ^k_2|(u,,θ,ϕ_-)≲ -Δ(u+), ().
Thus integrating on =cst,θ=cst,ϕ_+=cst, we get
T^j^k_1Φ^k_2(u,,θ,ϕ_-)=T^j^k_1Φ^k_2(u(w_,γ, ),,θ,ϕ_-|_w=w_,γ)
+∫_u(w_,γ,)^uO(-Δ(u',)(u'+)) u',
where ϕ_-|_w=w_,γ=ϕ_-+2r_mod(,u(w_,γ,))-2r_mod(,u). Using Theorem <ref> on {w= w_,γ}∩{≥_,γ} we obtain
T^j^k_1Φ^k_2(u,,θ,ϕ_-) =T^j^k_1Φ^k_2Δ^-2(u,)/^7∑_|m|≤2A_m(r_-)e^2imr_mod(u,)Q_m,Y_m,^+(cosθ)e^imϕ_-
+O(Δ^-2(u,)^-7-j-δ)+Δ^-2(u,)∫_u(w_,γ,)^uO(-Δ(u',)(u'+)) u'.
We conclude by proving that in ,
∫_u(w_,γ,)^uO(-Δ(u',)(u'+)) u'=O(^-7-j-δ).
We have in , -Δ(u,)∼exp(-|κ_-|(u+)). Thus
|∫_u(w_,γ,)^u O(-Δ(u',)(u'+)) u'|
≲exp(-|κ_-|)[∫_u(w_,γ,)^u |u'|exp(-|κ_-|u') u'+∫_u(w_,γ,)^u exp(-|κ_-|u') u']
≲exp(-|κ_-|)(C_1+ C_2)
≲^-7-j-δ
in , where we defined
C_1=∫_w_,γ-r_++r_-^+∞ |u'|exp(-|κ_-|u') u', C_2=∫_w_,γ-r_++r_-^+∞exp(-|κ_-|u') u'.
This concludes the proof of Theorem <ref>.
§ COMPUTATIONS FOR THE ENERGY METHOD
§.§ Proof of Lemma <ref>
In , we compute
∫_S(u,)(f(r)∂_uψμ_s^(c,V)ψ)+(g(r)e_4ψμ_s^(c,V)ψ)ν
using integration by parts on the spheres. We have using (<ref>) :
∫_S(u,)(g(r)e_4ψ μ_s^(c,V)ψ) ν
= 4(r^2+a^2)g(r)∫_S(u,)ℜ(e_4ψ∂_u e_4ψ) ν+μ g(r)∫_S(u,)ℜ(e_4ψ𝒰^2ψ) ν
+μ g(r)∫_S(u,)ℜ(e_4ψ1sinθ∂_θ(sinθ∂_θψ))ν
+4μ g(r) as ∫_S(u,)cosθℑ(e_4ψ Tψ)ν
+2rμ g(r)∫_S(u,)|e_4ψ|^2ν+ g(r) [2rμ -4cs(r-M)]∫_S(u,)ℜ(e_4ψ∂_uψ)ν
-4srμ g(r)∫_S(u,)ℜ(e_4ψTψ)ν+2g(r)arμr^2+a^2∫_S(u,)ℜ(e_4ψΦψ)ν
-g(r)(μ s+s^2μa^4sin^2θcos^2θ(r^2+a^2)^2-μ V)∫_S(u,)ℜ(e_4ψψ)ν
-2sg(r)μa^2sinθcosθ(r^2+a^2)∫_S(u,)ℑ(e_4ψUψ)ν.
We begin with the term
4(r^2+a^2)g(r) ∫_S(u,)ℜ(e_4ψ∂_u e_4ψ) ν=2(r^2+a^2)g(r)∂_u(∫_S(u,) |e_4ψ|^2 ν)
=∂_u(2(r^2+a^2)g(r)∫_S(u,) |e_4ψ|^2 ν)-2∂_u((r^2+a^2)g(r))∫_S(u,) |e_4ψ|^2 ν.
Using Lemma <ref> and
[e_4,U]=e_4(isθq^2/r^2+a^2)=isra^2μcosθsinθ/(r^2+a^2)^2,
we get, in view of T=∂_-∂_u,
μ g(r)∫_S(u,)ℜ( e_4ψ𝒰^2ψ) ν
=T(μ g(r)∫_S(u,)asinθℜ(e_4ψUψ) ν)-μ g(r)∫_S(u,)ℜ(Ue_4ψ Uψ) ν
=(∂_-∂_u)(μ g(r)∫_S(u,)asinθℜ(e_4ψUψ) ν)-12∂_(μ g(r)∫_S(u,)|Uψ|^2 ν)
+12e_4(μ g(r))∫_S(u,)|Uψ|^2 ν+μ g(r)sra^2μcosθsinθ/(r^2+a^2)^2∫_S(u,)ℑ(ψ Uψ) ν.
We also have, using [e_4,∂_θ]=0,
μ g(r)∫_S(u,)ℜ(e_4ψ1sinθ∂_θ (sinθ∂_θψ))ν
=-μ g(r)∫_S(u,)ℜ(∂_θe_4ψ∂_θψ)ν
=-12μ g(r)∫_S(u,)e_4|∂_θψ|^2ν
=∂_(-12μ g(r)∫_S(u,)|∂_θψ|^2ν)+12e_4(μ g(r))∫_S(u,)|∂_θψ|^2ν.
All the remaing terms will be put in the bulk term 𝐁[ψ]. Now we do the same computations for
∫_S(u,)ℜ(f(r)∂_uψ μ_s^(c,V)ψ) ν
= 4(r^2+a^2)f(r)∫_S(u,)ℜ(∂_uψ e_4∂_uψ) ν+μ f(r)∫_S(u,)ℜ(∂_uψ𝒰^2ψ) ν
+μ f(r)∫_S(u,)ℜ(∂_uψ1sinθ∂_θ(sinθ∂_θψ))ν
+4μ f(r) as ∫_S(u,)cosθℑ(∂_uψ Tψ)ν
+f(r) [2rμ -4cs(r-M)]∫_S(u,)|∂_uψ|^2ν+ 2rμ f(r)∫_S(u,)ℜ(∂_uψe_4ψ)ν
-4srμ f(r)∫_S(u,)ℜ(∂_uψTψ)ν-2f(r)arμr^2+a^2∫_S(u,)ℜ(∂_uψΦψ)ν
-f(r)(μ s+s^2μa^4sin^2θcos^2θ(r^2+a^2)^2-μ V)∫_S(u,)ℜ(∂_uψψ)ν
-2sf(r)μa^2sinθcosθ(r^2+a^2)∫_S(u,)ℑ(∂_uψUψ)ν.
We begin with the term
4(r^2+a^2)f(r) ∫_S(u,)ℜ(∂_uψ e_4∂_uψ) ν=2(r^2+a^2)f(r)e_4(∫_S(u,) |∂_uψ|^2 ν)
=∂_(2(r^2+a^2)f(r)∫_S(u,) |∂_uψ|^2 ν)-2e_4((r^2+a^2)f(r))∫_S(u,) |∂_uψ|^2 ν.
Next we have
μ f(r)∫_S(u,)ℜ (∂_uψ𝒰^2ψ) ν
=T(μ f(r)∫_S(u,)asinθℜ(∂_uψUψ) ν)-μ f(r)∫_S(u,)ℜ(U∂_uψ Uψ) ν
=(∂_-∂_u)(μ f(r)∫_S(u,)asinθℜ(∂_uψUψ) ν)-12∂_u(μ f(r)∫_S(u,)|Uψ|^2 ν)
+12∂_u(μ f(r))∫_S(u,)|Uψ|^2 ν+μ f(r)sra^2μcosθsinθ/(r^2+a^2)^2∫_S(u,)ℑ(ψ Uψ) ν,
μ f(r)∫_S(u,)ℜ(∂_uψ1sinθ∂_θ (sinθ∂_θψ))ν
=-μ f(r)∫_S(u,)ℜ(∂_θ∂_uψ∂_θψ)ν
=-12μ f(r)∫_S(u,)∂_u|∂_θψ|^2ν
=∂_u(-12μ f(r)∫_S(u,)|∂_θψ|^2ν)+12∂_u(μ f(r))∫_S(u,)|∂_θψ|^2ν.
Combining everything, (<ref>) gives
∂_(∫_S(u,)𝐅_[ψ]ν)+∂_u(∫_S(u,)𝐅_u[ψ]ν)+∫_S(u,)𝐁[ψ]ν=∫_S(u,)μ(X(ψ)O(^-β))ν,
where
𝐅_[ψ]=2(r^2+a^2)f(r)|∂_uψ|^2-12μ g(r)(|∂_θψ|^2+|Uψ|^2)+asinθ f(r)μℜ(∂_uψUψ)+asinθ g(r)μℜ(e_4ψUψ),
𝐅_u[ψ]=2(r^2+a^2)g(r)|e_4ψ|^2-12μ f(r)(|∂_θψ|^2+|Uψ|^2)-asinθ f(r)μℜ(∂_uψUψ)-asinθ g(r)μℜ(e_4ψUψ),
and
𝐁[ψ] =2(rμ g(r)-∂_u((r^2+a^2)g(r)))|e_4ψ|^2+2(rμ f(r)-e_4((r^2+a^2)f(r)))|∂_uψ|^2
+12(∂_u(μ f(r))+e_4(μ g(r)))(|∂_θψ|^2+|Uψ|^2)
+μ g(r)sra^2μcosθsinθ/(r^2+a^2)^2ℑ(ψ Uψ)-4cs(r-M)f(r)|∂_uψ|^2
+4μ g(r)ascosθℑ(e_4ψTψ)-4srg(r)μℜ(e_4ψTψ)+g(r)[2rμ-4cs(r-M)]ℜ(e_4ψ∂_uψ)
+2g(r)arμr^2+a^2ℜ(e_4ψΦψ)-g(r)(μ s+s^2μa^4sin^2θcos^2θ(r^2+a^2)^2-μ V)ℜ(e_4ψψ)
-2sg(r)μa^2sinθcosθ(r^2+a^2)ℑ(e_4ψUψ)+μ f(r)sra^2μcosθsinθ/(r^2+a^2)^2ℑ(ψ Uψ)
+4μ f(r)ascosθℑ(∂_uψTψ)-4srf(r)μℜ(∂_uψTψ)+2rμ f(r)ℜ(∂_uψe_4ψ)
-2f(r)arμr^2+a^2ℜ(∂_uψΦψ)-f(r)(μ s+s^2μa^4sin^2θcos^2θ(r^2+a^2)^2-μ V)ℜ(∂_uψψ)
-2sf(r)μa^2sinθcosθ(r^2+a^2)ℑ(∂_uψUψ),
which concludes the proof of Lemma <ref>.
§.§ Lower bound for the bulk term in for s=-2
We have
r μ g(r)-∂_u((r^2+a^2) g(r)) =-μ p r(r^2+a^2)^p,
r μ f(r)-e_4((r^2+a^2) f(r)) =(-μ)^-1[(r-M)-μ(p+1) r](r^2+a^2)^p
≳ (-μ)^-1(1-μ p)(r^2+a^2)^p,
in for p large, as r_+-M>0. We also have
∂_u(μ f(r))+e_4(μ g(r)) =-μ(r^2+a^2)^p-1(2 M p r^2r^2+a^2+r μ-(r-M))≳ -μ p(r^2+a^2)^p.
Denoting the principal bulk term
𝐁_pr[ψ]:=2(rμ g( r)-∂_u((r^2+a^2)g(r)))|e_4ψ|^2+2(rμ f(r)-e_4((r^2+a^2)f(r)))|∂_uψ|^2
+12(∂_u(μ f(r))+e_4(μ g(r)))(|∂_θψ|^2+|Uψ|^2),
we have shown
𝐁_pr[ψ]≳ (-μ)(r^2+a^2)^p[p|e_4ψ|^2+(1-μ p)|e_3ψ|^2+p(|∂_θψ|^2+|Uψ|^2)].
The only thing left to prove is that we can take p large enough so that 𝐁[ψ]-𝐁_pr[ψ] can be absorbed in 𝐁_pr[ψ] after integrating on S(u,). This is due to the following mix between weighted Cauchy-Schwarz of the type
|ab|≤ε a^2+ε^-1b^2/2
and the Poincaré inequality (<ref>). We have the following bounds :
|μ g(r)sra^2μcosθsinθ/(r^2+a^2)^2ℑ(ψ Uψ)ν| ≲ (-μ)(r^2+a^2)^p(_deg[ψ]ν+|Uψ|^2ν)
≲(-μ)(r^2+a^2)^p_deg[ψ]ν,
|4μ g(r)ascosθℑ(e_4ψTψ)-4srg(r)μℜ(e_4ψTψ)ν|≲(-μ)(r^2+a^2)^p_deg[ψ]ν,
where we used (<ref>) to write
T=O(μ)e_3+O(1)e_4+O(1)U+O(1).
We continue with
|2rμ g(r)ℜ(e_4ψ∂_uψ)|≲(-μ)(r^2+a^2)^p_deg[ψ],
|2g(r)arμr^2+a^2ℜ(e_4ψΦψ)ν|≲(-μ)(r^2+a^2)^p_deg[ψ]ν,
where we used (<ref>) to get Φ=O(μ)e_3+O(1)e_4+O(1)U+O(1). Next,
|-g(r)(μ s+s^2μa^4sin^2θcos^2θ(r^2+a^2)^2-μ V)ℜ(e_4ψψ)ν|≲(-μ)(r^2+a^2)^p_deg[ψ]ν,
|2sg(r)μa^2sinθcosθ(r^2+a^2)ℑ(e_4ψUψ)|≲(-μ)(r^2+a^2)^p_deg[ψ],
|μ f(r)sra^2μcosθsinθ/(r^2+a^2)^2ℑ(ψ Uψ)ν|≲(-μ)(r^2+a^2)^p_deg[ψ]ν,
|4μ f(r) ascosθℑ(∂_uψTψ)-4srf(r)μℜ(∂_uψTψ)ν|
≲(-μ)ε^-1(r^2+a^2)^p|Tψ|^2ν+(-μ)(r^2+a^2)^pε|e_3ψ|^2ν
≲(-μ)ε^-1(r^2+a^2)^p_deg[ψ]ν+(-μ)(r^2+a^2)^pε|e_3ψ|^2ν,
|2rμ f(r)ℜ(∂_uψe_4ψ)|≲(-μ)ε^-1(r^2+a^2)^p_deg[ψ]+(-μ)ε (r^2+a^2)^p|e_3ψ|^2,
|-2f(r)arμr^2+a^2 ℜ(∂_uψΦψ)ν|≲
(-μ)ε^-1(r^2+a^2)^p_deg[ψ]ν+(-μ)(r^2+a^2)^pε|e_3ψ|^2ν,
|-f(r)(μ s+ s^2μa^4sin^2θcos^2θ/(r^2+a^2)^2-μ V)ℜ(∂_uψψ)ν|≲
(-μ)ε^-1(r^2+a^2)^p_deg[ψ]ν+(-μ)(r^2+a^2)^pε|e_3ψ|^2ν,
|2sf(r)μa^2sinθcosθ(r^2+a^2)ℑ(∂_uψUψ)|≲(-μ)ε^-1(r^2+a^2)^p_deg[ψ]+(-μ)ε(r^2+a^2)^p|e_3ψ|^2,
|-4cs(r-M)g(r)ℜ(e_4ψ∂_uψ)|≲ (-μ)(r^2+a^2)^pε^-1_deg[ψ]+(-μ)(r^2+a^2)^pε|e_3ψ|^2.
Notice that thanks to the μ in front of |e_3ψ|^2 in the definition (<ref>) of _deg[ψ], the integral on S(u,) of _deg[ψ] can be absorbed in the one of (<ref>), for p large enough. Moreover, we chose the value of ε in the weighted Cauchy-Schwarz inequalities above so that all the terms bounded by a constant times (-μ)(r^2+a^2)^pε|e_3ψ|^2 can be absorbed in the term (1-μ p)|e_3ψ|^2≥|e_3ψ|^2 of (<ref>), for ε>0 small enough, after integration on the sphere. The only remaining term in the bulk that we want to absorb is
-4cs(r-M)f(r)|∂_uψ|^2=-4cs(r-M)(-μ)(r^2+a^2)^p|e_3ψ|^2.
As it lacks a factor μ, we can't bound it pointwisely (nor its integral on S(u,)) by _deg[ψ]. But as r_+-M>0, we have r-M≳ 1 for r close to r_+ in , say for r∈[r_+-ε_0,r_+]. As we chose s=-2<0 and c>0, we have
-4cs(r-M)f(r)|∂_uψ|^2≥ 0, r∈[r_+-ε_0,r_+].
And for r∈ [,r_+-ε_0], we can absorb -4cs(r-M)f(r)|∂_uψ|^2 in the term
-μ p(r^2+a^2)^p|e_3ψ|^2≳ -μ(r_+-ε_0)p(r^2+a^2)^p|e_3ψ|^2
that appears in 𝐁_pr[ψ], for p large enough. This concludes the proof of Lemma <ref>.
§.§ Lower bound for the bulk term in {r_-<r≤} for s=+2
We have
r μ g(r)-∂_u((r^2+a^2) g(r)) =-μ p r(r^2+a^2)^p,
r μ f(r)-e_4((r^2+a^2) f(r)) =(-μ)^-1[(r-M)-μ(p+1) r](r^2+a^2)^p.
We also have
∂_u(μ f(r))+e_4(μ g(r)) =-μ(r^2+a^2)^p-1(2 M p r^2r^2+a^2+r μ-(r-M))≳ -μ p(r^2+a^2)^p.
Define the principal bulk
𝐁_pr[ψ]:=2(rμ g( r)-∂_u((r^2+a^2)g(r)))|e_4ψ|^2+2(rμ f(r)-e_4((r^2+a^2)f(r)))|ψ|^2
+12(∂_u(μ f(r))+e_4(μ g(r)))(|∂_θψ|^2+|Uψ|^2)-4cs(r-M)f(r)|ψ|^2.
Note that unlike in the redshift region, we add a term -4cs(r-M)f(r)|ψ|^2 in the principal bulk. The positive spin will help us get a positive simple bulk term, without the need of replacing f and g by more complicated log multipliers, as is needed for the scalar wave equation in <cit.>. We have
𝐁_pr[ψ]≳ (-μ)p(r^2+a^2)^p |e_4ψ|^2+2(-μ)^-1[(r-M)(1-2cs)-μ(p+1) r](r^2+a^2)^p|ψ|^2
+(-μ)p(r^2+a^2)^p(|∂_θψ|^2+|Uψ|^2).
Notice that in , we have r-M≲ -1 thus for s=+2 and c>1/4,
(r-M)(1-2cs)-μ(p+1) r=(r-M)(1-4c)-μ(p+1)≳ 1-μ p, r∈(r_-,].
This is where we use the positivity of the spin, to get an effective blueshift effect. We have shown
𝐁_pr[ψ]≳ (-μ)(r^2+a^2)^p[p|e_4ψ|^2+(1-μ p)|e_3ψ|^2+p(|∂_θψ|^2+|Uψ|^2)].
The only thing left to prove is that we can take p large enough so that 𝐁[ψ]-𝐁_pr[ψ] can be absorbed in 𝐁_pr[ψ] after integrating on S(u,). This is due to the following mix between weighted Cauchy-Schwarz of the type
|ab|≤ε a^2+ε^-1b^2/2
and the Poincaré inequality (<ref>) :
|μ g(r)sra^2μcosθsinθ/(r^2+a^2)^2ℑ(ψ Uψ)ν| ≲ (-μ)(r^2+a^2)^p(_deg[ψ]ν+|Uψ|^2ν)
≲(-μ)(r^2+a^2)^p_deg[ψ]ν,
|4μ g(r)ascosθℑ(e_4ψTψ)-4srg(r)μℜ(e_4ψTψ)ν|≲(-μ)(r^2+a^2)^p_deg[ψ]ν,
where we used again (<ref>) to get
T=O(μ)e_3+O(1)e_4+O(1)U+O(1).
We continue with
|2rμ g(r)ℜ(e_4ψψ)|≲(-μ)(r^2+a^2)^p_deg[ψ],
|-4sc(r-M)g(r)(∂_ψψ)|≲ (-μ)(r^2+a^2)^p(ε|e_3ψ|^2+ε^-1_deg[ψ]),
|2g(r)arμr^2+a^2ℜ(e_4ψΦψ)ν|≲(-μ)(r^2+a^2)^p_deg[ψ]ν,
where we used (<ref>) to get Φ=O(μ)e_3+O(1)e_4+O(1)U+O(1). Next,
|-g(r)(μ s+s^2μa^4sin^2θcos^2θ(r^2+a^2)^2-μ V)ℜ(e_4ψψ)ν|≲(-μ)(r^2+a^2)^p_deg[ψ]ν,
|2sg(r)μa^2sinθcosθ(r^2+a^2)ℑ(e_4ψUψ)|≲(-μ)(r^2+a^2)^p_deg[ψ],
|μ f(r)sra^2μcosθsinθ/(r^2+a^2)^2ℑ(ψ Uψ)ν|≲(-μ)(r^2+a^2)^p_deg[ψ]ν,
|4μ f(r) ascosθℑ(ψTψ)-4srf(r)μℜ(ψTψ)ν|
≲(-μ)ε^-1(r^2+a^2)^p|Tψ|^2ν+(-μ)(r^2+a^2)^pε|e_3ψ|^2ν
≲(-μ)ε^-1(r^2+a^2)^p_deg[ψ]ν+(-μ)(r^2+a^2)^pε|e_3ψ|^2ν,
|2rμ f(r)ℜ(ψe_4ψ)|≲(-μ)ε^-1(r^2+a^2)^p_deg[ψ]+(-μ)ε (r^2+a^2)^p|e_3ψ|^2,
|-2f(r)arμr^2+a^2 ℜ(ψΦψ)ν|
≲(-μ)ε^-1(r^2+a^2)^p_deg[ψ]ν+(-μ)(r^2+a^2)^pε|e_3ψ|^2ν,
|-f(r)(μ s+ s^2μa^4sin^2θcos^2θ(r^2+a^2)^2-μ V)ℜ(ψψ)ν|
≲(-μ)ε^-1(r^2+a^2)^p_deg[ψ]ν+(-μ)(r^2+a^2)^pε|e_3ψ|^2ν,
|2sf(r)μa^2sinθcosθ(r^2+a^2)ℑ(ψUψ)|≲(-μ)ε^-1(r^2+a^2)^p_deg[ψ]+(-μ)ε(r^2+a^2)^p|e_3ψ|^2.
Notice that thanks to the μ in front of |e_3ψ|^2 in the definition of _deg[ψ], the integral on S(u,) of _deg[ψ] can be absorbed in the one of (<ref>), for p large enough. Moreover, we choose the value of ε in the weighted Cauchy-Schwarz inequalities above so that all the terms bounded by a constant times (the integral of) (-μ)(r^2+a^2)^pε|e_3ψ|^2 can be absorbed in the term (1-μ p)|e_3ψ|^2≥|e_3ψ|^2 in (<ref>), for ε>0 small enough. This concludes the proof of Lemma <ref>.
§ COMPUTATION OF A_M(R) AND PROOF OF LEMMA <REF>
The polynomial A_m(r)=(r^2+a^2)^2𝔣_m,2(r) is defined in <cit.> by plugging the ansatz
for into the TSI (<ref>) and requiring the compatibility
1/24Δ^2∂_r_out^4(Δ^2)=1/^7∑_|m|≤ 2A_m(r)Q_m,2Y_m,2^-2(cosθ)e^imϕ_++O(^-8),
see also <cit.>, where the factor 1/24 corresponds to 1/(2𝔰)! with 𝔰=2. We recall
∂_r_out=2μ^-1e_4=∂_r+μ^-1∂_t+a/ΔΦ.
More precisally, the computation done in <cit.> gives
A_m(r)=1/24e^-imϕ_+Δ^2∂_r_out^4(Δ^2e^imϕ_+),
and then (<ref>) holds, where the O(^-8) term is given by the terms where a ∂_r_out falls on an inverse power of . Let us now compute (<ref>). We have ∂_r_out(e^imϕ_+)=2aim/Δe^imϕ_+ thus
∂_r_out(Δ^2e^imϕ_+)=(∂_rΔ^2+2aimΔ)=2Δ(2(r-M)+aim).
We compute successively
∂_r_out^2(Δ^2e^imϕ_+) =(∂_r+2iam/Δ)(2Δ(2(r-M)+aim))
=4[2(r-M)^2+3(r-M)aim+Δ-a^2m^2],
∂_r_out^3(Δ^2e^imϕ_+) =(∂_r+2iam/Δ)(4[2(r-M)^2+3(r-M)aim+Δ-a^2m^2])
=4[4(r-M)^2aim/Δ+6(r-M)-6(r-M)a^2m^2/Δ-2ia^3m^3/Δ+5aim],
∂_r_out^4 (Δ^2e^imϕ_+)
=(∂_r+2iam/Δ)(4[4(r-M)^2aim/Δ+6(r-M)-6(r-M)a^2m^2/Δ-2ia^3m^3/Δ+5aim])
=4[8(r-M)(a^2-M^2)aim/Δ+6+6a^2m^2/Δ-12a^2m^2(a^2-M^2)/Δ^2+4ia^3m^3(r-M)/Δ^2
+2aim/Δ(4(r-M)^2aim/Δ+6(r-M)-6(r-M)a^2m^2/Δ-2ia^3m^3/Δ+5aim)].
Thus we get
Δ^2∂_r_out^4(Δ^2e^imϕ_+)=8[3Δ^2 +iam(4(a^2-M^2)(r-M)+6Δ(r-M))
+a^2m^2(3Δ-6(a^2-M^2)-4(r-M)^2-5Δ)
+ia^3m^3(2(r-M)-6(r-M))+2a^4m^4],
which finally gives
A_m(r)=1/3[3Δ^2 +(r-M)(4(a^2-M^2)+6Δ)iam-(2Δ+6(a^2-M^2)+4(r-M)^2)a^2m^2
-4(r-M)ia^3m^3+2a^4m^4].
We use Proposition <ref> to get
_+2( )=
1/^7∑_|m|≤ 2[Δ A_m”(r)+2(iam-(r-M))A_m'(r)-4 A_m(r)] Q_m,Y_m,2^+2(cosθ)e^imϕ_++Err,
where we used (<ref>) to get '(Y_m,2^+2(cosθ)e^imϕ_+)=-4Y_m,2^+2(cosθ)e^imϕ_+, and where the error term Err is explicit and defined by
Err:=(a^2sin^2θ T^2-4(r^2+a^2)Te_3+2aTΦ-(6r+4iacosθ)T)[].
It satisfies e_3^≤ 1T^j^kErr=O(^-8-j). It remains only to prove
Δ A_m”(r)+2(iam-(r-M))A_m'(r)-4 A_m(r)=0.
We have
3A_m'(r)=12(r-M)Δ+iam(4(a^2-M^2)+6Δ+12(r-M)^2)-12(r-M)a^2m^2-4ia^3m^3,
3A_m”(r)=12Δ+24(r-M)^2+36(r-M)iam-12a^2m^2.
Thus we can compute
Δ A_m”(r)+2(iam-(r-M))A_m'(r)-4 A_m(r)=1/3[c_0+c_1iam+c_2a^2m^2+c_3ia^3m^3+c_4a^4m^4],
where the coefficients are :
c_0=12Δ^2+24(r-M)^2Δ-24(r-M)^2Δ-12Δ^2=0,
c_1=36Δ(r-M)+24(r-M)Δ-2(r-M)(4(a^2-M^2)+6Δ+12(r-M)^2)
-4(r-M)(4(a^2-M^2)+6Δ)
=24(r-M)Δ-24(r-M)(a^2-M^2)-24(r-M)^3=24(r-M)(Δ-Δ)=0
c_2=-12Δ-8(a^2-M^2)-12Δ-24(r-M)^2+24(r-M)^2+8Δ+24(a^2-M^2)+16(r-M)^2
=-16Δ+16(a^2-M^2)+16(r-M)^2=16(Δ-Δ)=0
c_3=-24(r-M)+8(r-M)+16(r-M)=0
c_4=8-8=0,
which concludes the proof of Lemma <ref>.
|
http://arxiv.org/abs/2409.02356v1 | 20240904005825 | An SNSPD-based detector system for NASA's Deep Space Optical Communications project | [
"Emma E. Wollman",
"Jason P. Allmaras",
"Andrew D. Beyer",
"Boris Korzh",
"Marcus C. Runyan",
"Lautaro Narváez",
"William H. Farr",
"Francesco Marsili",
"Ryan M. Briggs",
"Gregory J. Miles",
"Matthew D. Shaw"
] | physics.ins-det | [
"physics.ins-det",
"astro-ph.IM"
] |
1 Jet Propulsion Laboratory, California Institute of Technology, 4800 Oak Grove Dr, Pasadena, CA 91109, USA
2 Current affiliation AWS Center for Quantum Computing, Pasadena, CA 91125, USA
3 Division of Physics, Mathematics and Astronomy, California Institute of Technology, Pasadena, California 91125, USA
* [email protected]
We report on a free-space-coupled superconducting nanowire single-photon detector array developed for NASA's Deep Space Optical Communications project (DSOC). The array serves as the downlink detector for DSOC's primary ground receiver terminal located at Palomar Observatory's 200-inch Hale Telescope. The 64-pixel WSi array comprises four quadrants of 16 co-wound pixels covering a 320 diameter active area and embedded in an optical stack. The detector system also includes cryogenic optics for filtering and focusing the downlink signal and electronics for biasing the array and amplifying the output pulses. The detector system exhibits a peak system detection efficiency of 76% at 1550 nm, a background-limited false count rate as low as 3.7 kcps across the array, timing jitter less than 120 ps FWHM, and a maximum count rate of ∼ 1 Gcps.
© 2024. California Institute of Technology. Government sponsorship acknowledged.
§ INTRODUCTION
Future space missions are expected to produce increasingly large volumes of science data over the upcoming decades. NASA is therefore exploring the possibility of complementing traditional RF communications with optical communications, which have the potential to increase data rates by a factor of 10 to 100 at Mars distances <cit.>. In 2013, NASA's Lunar Laser Communications Demonstration (LLCD) successfully demonstrated data rates up to 622 Mbps from lunar distances, at approximately 400,000 km <cit.>. The Deep Space Optical Communications (DSOC) project is the first demonstration of optical communication from interplanetary distances <cit.>. At these distances, the optical signals that reach Earth are in the photon-starved regime, and the optimal method of encoding data is through the arrival time of photon pulses (pulse-position modulation, or PPM). Optical communication links therefore require ground-based receivers capable of counting single photons with high efficiency and high timing resolution. The detectors must also have a large dynamic range to enable higher data rates when the spacecraft is closer to Earth.
Over the past decade, superconducting nanowire single-photon detectors (SNSPDs) have become the detector of choice for time-correlated single-photon counting at telecom wavelengths. SNSPDs are ideal for optical communication due to their high detection efficiency (above 95% <cit.>), ultra-high timing resolution (down to 3 ps <cit.>), and high maximum count rates (up to 1.5 Gcps <cit.>). In addition, SNSPDs with active areas up to 1 mm^2 <cit.> and array formats on the scale of hundreds of kilopixels <cit.> have now been demonstrated.
In this paper, we describe an SNSPD-based detector system developed for NASA’s DSOC technology demonstration project. The DSOC project consists of a ground uplink terminal located at JPL's Table Mountain Facility (TMF) <cit.>, a ground receiver terminal located at Caltech's Palomar Observatory <cit.>, and a flight transceiver on board NASA's Psyche spacecraft. The goal of the DSOC project is to demonstrate downlink data rates ranging from 267 Mbps to 57 kbps as the Psyche spacecraft travels over a range of distances from 0.1 AU to 2.6 AU (Mars maximum range). The SNSPD array is used in the ground receiver system to convert received photons into electrical signals. A second detector system based on the same design has also been fielded at TMF as part of the Optical to Orion (O2O) project and is currently used as part of a supplementary ground receiver station for DSOC.
§ SYSTEM DESIGN
§.§ Array design and fabrication
In order to meet the requirements of the DSOC project, the SNSPD array design must allow for efficient coupling to Palomar Observatory's 200-inch Hale telescope while maintaining the ability to count at high rates. There is a fundamental tradeoff between these two goals, because increasing the length of a nanowire in order to cover a larger area increases its kinetic inductance and thus slows down its recovery time. The SNSPD array design is based around a 320 diameter active area, which allows for a 50 µrad field of view on the sky with a numerical aperture of 0.37 at the detector. In order to achieve high count rates and to allow for multiple detections per PPM slot, the active area is distributed across 64 SNSPD nanowires. The active area is divided into four quadrants to allow for centroiding of the incoming beam. Within each quadrant, the 16 nanowires are co-wound to fill the active area. The use of co-wound pixels minimizes the photon flux variation between pixels in order to take full advantage of the maximum count rates of all pixels in a quadrant. The fill factor of the array was optimized to maximize the optical absorption while minimizing the kinetic inductance of the wires and crosstalk between adjacent pixels. For a co-wound WSi nanowire array with 160 nm wide and 4.8 nm thick nanowires, we found that a 1200 nm pitch optimized the tradeoff between efficiency and active area without introducing crosstalk between adjacent nanowires (Fig. <ref>a). Due to the co-wound design and large pitch, the active area is not perfectly circular. Fig. <ref>b shows an optical micrograph of the array with one quadrant shaded to highlight the shape of the active area.
In order to enhance optical absorption into the nanowire layer despite its relatively low 13.3% fill factor, we used an optical cavity consisting of an anti-reflective (AR) coating and a gold mirror to maximize coupling efficiency at a wavelength of 1550 nm (Fig. <ref>c). Rigorous coupled wave analysis (RCWA) <cit.> was used to calculate the expected absorbance in the device and determine the necessary thicknesses of the dielectric layers to maximize optical coupling. As the stack was fabricated, the design thicknesses of the remaining layers were adjusted to correct for any thickness errors in the previously deposited layers of the optical stack. The dielectric layer thicknesses were measured by ellipsometry and reflectometry on Si witness chips while the thicknesses of the WSi, Ti, and Au were estimated from deposition time. The modeled absorption into the final optical stack is shown in Fig. <ref>d.
To begin fabrication, we deposited the Au mirror layer for lift-off by electron beam evaporation on a 100-mm Si wafer. SiO_2 was sputtered on a Ti adhesion layer under RF bias to form a quarter-wavelength dielectric layer on the mirror. WSi was sputtered from a compound target leading to a film with a resistivity of 1.8×10^-6 ·m at 300 K. Electrical contacts were patterned by optical lithography to form 50 matched coplanar waveguides (CPWs) to route the nanowire signals to the device bonding pads. After optical lithography, the nanowires were patterned using electron beam lithography on a negative tone (ma-N 2401) electron beam resist and by etching in CHF_3/O_2 plasma. Additional inactive nanowire features were patterned around the perimeter of the active area to minimize proximity effects during exposure of the electron-beam resist and ultimately achieve a more uniform wire width across the array. To further ensure uniform features, a spatially-varying electron-beam dose was applied based on modeled proximity effects resulting from the electron-beam parameters, the composition of the underlying layers, and the local pattern geometry. After nanowire patterning, the dielectric layers were sputtered to build the anti-reflection layers.
After several iterations of design optimization and process development, a prototype array was fabricated that appeared to meet all the requirements of the DSOC project. At this point, three wafers of detectors were fabricated using the same array geometry and optical stack, and three dies from each of the wafers were packaged and screened on a testbed with prototype optics and electronics. Of the nine screened arrays, seven met the requirements of the DSOC project, and four reached saturated efficiency in all 64 channels. Of these four, we chose a primary and a backup array for the DSOC project and for the O2O project. The results reported here are either from the prototype array, the primary DSOC array, or the primary O2O array. The prototype array differs from the later devices in its packaging and in the layout of its leads. There is also some variation in the thickness of the WSi layer between devices from different wafers that leads to small differences in performance. The properties of all screened detectors are listed in the Supplemental Material.
§.§ Detector assembly design
In the DSOC project structure, the SNSPD array is one component of the larger ground detector assembly (GDA). In turn, the GDA is part of the ground laser receiver system (GLR), which is located in the Hale Telescope's Coudé Room. The detector assembly contains a cryostat, cryogenic optics for filtering and focusing light on the detector, and cryogenic and room-temperature electronics for routing the array's electrical signals and amplifying its output. The GDA also contains an optical test stimulus source (OTSS), which is able to produce both modulated and CW light at calibrated power levels. The OTSS is used for detector and system calibration and characterization. The detector assembly interfaces with the ground laser receiver optical assembly (GLROA), which is responsible for routing light from both the telescope and OTSS to the detector. The GLROA contains a near-IR camera for acquisition of the downlink signal, narrowband filters for rejection of sky background, and a variable zoom system to optimize the detector field of view. The detector assembly also interfaces with the ground signal processing assembly (GSPA), which time stamps the array pulses using a 64-channel time-to-digital converter (TDC), converts the time tags to PPM symbols, and decodes the PPM data in real time. All assemblies have their own internal monitor and control (M&C) systems that interface with the global GLR M&C software. Further information on the GLR system design can be found in Srinivasan et al. <cit.>.
The detector assembly cryostat consists of a modified FormFactor Model 106 cryostat with a Cryomech PT410 pulse tube and a Chase GL-4 4He sorption refrigerator. The 1 K stage of the refrigerator cools the detector to a temperature of 960 mK. Light is free-space-coupled onto the SNSPD array through three cryogenic filter windows mounted in the 40 K radiation shield, the 3 K radiation shield, and a 3 K bracket. Filtering of 300 K blackbody emission from room temperature optics is necessary to reduce the array’s dark count rate, because the nanowires are sensitive to wavelengths out to ∼ 4 . The custom filters from Andover Corporation consist of reflective short-pass coatings on half-inch substrates of BK7, which is absorptive at wavelengths above 2.3 . The filter coatings in the radiation shields have a cutoff of 1.9 , and the filter at the 3 K bracket has a cutoff of 1.6 . Further details about the filters can be found in Mueller et al. <cit.>. The window through the 300 K stage of the fridge is 2-inch diameter, 12 mm thick, AR-coated BK7 (Thorlabs WG12012-C). In the DSOC system, a cryogenic lens (Thorlabs AL1815-C) is mounted to the detector plate to provide a large enough NA for coupling light from the telescope’s 5 m aperture onto the detector’s 320 area. Figure <ref>a shows the cryogenic lens integrated with the detector, and Figure <ref>b shows the detector/lens assembly installed in the cryostat. The OCTL telescope used for O2O has a 1 m aperture, allowing for the final focusing lens to be located outside of the cryostat.
Each pixel of the array is biased and read out individually. Routing of RF signals inside the cryostat is performed using high-flexibility micro-coax cables with custom high-density connectors (Samtec HLCD and LSHM series). Metal-core PCBs at 3 K and at the sorption refrigerator's film-burner stage are used to thermalize the cables. At 40 K, four 16-channel amplifier boards provide two stages of amplification using PHEMT (Avago Technologies ATF 35143) and SiGe (RFMD SGL0622Z) amplifiers. The cryogenic electronics are shown in Fig. <ref>c. Following amplifiers (Minicircuits RAM 8A+) at room temperature are used to increase the final pulse amplitude. The input of the PHEMT stage is DC-coupled with a 50- termination to avoid the re-biasing challenges associated with AC-coupled amplifiers operating at high count rates <cit.>. The cryogenic amplifiers high-pass the SNSPD pulses, shortening the pulse width from > 20 ns to ∼5 ns (Fig. <ref>d). Some electrical crosstalk was observed between channels, particularly nearest neighbors. The high-density cable connectors were identified as the primary contributor to the electrical crosstalk, and the connector PCBs were redesigned to improve isolation between adjacent channels. While some crosstalk remains, it is below the comparator threshold level and does not lead to extra counts. It can, however, impact the timing jitter when there are many photons per laser pulse due to a voltage shift in the SNSPD output.
The outputs of the room-temperature amplifiers are coupled into 64 SMA coaxial cables. These are connected to the GSPA's 64-channel time-to-digital converter (Dotfast Solutions TDM1600-64). The TDC triggers on SNSPD pulses using a fixed-threshold comparator front-end and outputs sorted time tags with a timing resolution of 15.625 ps and full-width half-maximum (FWHM) timing jitter below 50 ps. Time tags are streamed over PCI Express either to disk or to the receiver FPGAs at transfer speeds up to 1.5 GTag/s.
Each SNSPD channel is individually biased using an NI PXI DAC voltage source (NI PXIe-6739) and a 200 k resistor at room temperature. A resistive bias tee couples the bias current to the detector at the input of the cryogenic amplifier. Cable resistance between the nanowire and input of the cryogenic amplifier leads to current splitting between the device and the 50 input termination of the amplifier. Because the cryogenic series resistance is unknown, we report the total bias current drawn by each channel of the voltage source. The actual current in the device is lower. The testbed used for screening the arrays had different cables between the bias tee and device, so the splitting ratio was different between the screening measurements and measurements in the GDA cryostat as installed at Palomar Observatory.
§ SYSTEM PERFORMANCE
§.§ Detection efficiency
The system detection efficiency of the array (SDE_a) is defined as the fraction of photons entering the room-temperature window of the cryostat that are counted by the readout electronics. The array system detection efficiency is calculated by measuring the total count rate across the array with and without a shutter blocking the light source. The array background count rate (BCR_a) with the source shuttered is subtracted from the array count rate with the source unshuttered to produce the array photon count rate (PCR_a). SDE_a is the ratio of PCR_a to the rate of signal photons incident on the cryostat. It is also useful to define the pixel-level equivalents SDE_p, BCR_p, and PCR_p, defined such that SDE_a = ∑SDE_p.
The incident photon flux is measured by inserting a mirror in front of the cryostat to deflect the beam onto a free-space power meter. A high flux is first applied to provide a reference power, and then calibrated attenuation is applied to reduce the flux to the desired level. When calculating the SDE, the measured power must be adjusted for the additional loss from the extra mirror and lens used for the free-space power measurement. The total uncertainty in the efficiency measurement is ± 5%. In the screening testbed, a similar procedure is used to calibrate the incident photon flux. However, in the testbed, a sliding breadboard for the free-space optics allows for measurement of the free-space power without any additional optical components. Fig. <ref>a shows the efficiency of the primary and spare DSOC arrays measured for both TE- and TM-polarized light at λ = 1550 nm using a low-NA lens in the screening testbed. Both arrays have a maximum efficiency of 76% for TE light and ∼ 70% for TM light. The DSOC downlink signal is circularly-polarized, but is converted to linear TE polarization by a quarter-waveplate and a half-waveplate in the GLROA. The polarization of the O2O downlink signal is not controlled.
Fig. <ref>b shows the normalized PCR_p as a function of bias current (I_b) for all 64 pixels of the primary DSOC array at a temperature of 950 mK on the GDA cryostat. An efficiency plateau, indicating saturated internal detection efficiency, is present for all pixels. During normal operations, all pixels of the array are biased mid-plateau at a current of 10.5 . The array is biased in the middle of the plateau to allow for potential variations in temperature or switching current over time, although no significant variations have been observed to date.
For single-mode-fiber-coupled SNSPDs, the incoming beam has a well-defined numerical aperture and spot diameter. In contrast, the free-space-coupled DSOC array must be able to accomodate different spot sizes due to varying atmospheric conditions, and it must accept a large NA beam for efficient coupling to the large telescope area. The array's optical coupling efficiency depends on both the spot size and NA of the optical signal, so the GLROA contains an adjustable zoom system to optimize the signal on the detector for different seeing conditions. The different contributions to the array efficiency are discussed in detail in the Supplemental Material.
Fig. <ref>c illustrates how the array layout leads to a dependence of the efficiency on the spot size and position. The map on the left shows the modeled TE efficiency as a function of position on the array for a 50 diameter spot, and the map on the right shows the corresponding measurement performed by scanning a tightly-focused beam across the active area with a fast steering mirror. The region of bends in the center of the array has a nanowire orientation perpendicular to the majority of the active area, leading to a decrease in absorption for TE-polarized light in both the model and measurement. If the spot size is too small, it will predominantly sample the bend region and lead to a lower SDE_a. If the spot size is too large, the efficiency will be lower due to overfilling. The optimal spot size for the array is therefore between 90 and 250 in diameter.
For the DSOC project, a variable zoom system adjusts the detector field of view from 27 µrad to 50 µrad (NA = 0.2 to 0.37) to accommodate different atmospheric seeing conditions. The optical cavity for the SNSPD array was optimized for normal incidence. When light comes in at an angle, the center wavelength of the cavity is effectively shifted, and the absorption into the nanowire layer is reduced. Fig. <ref>d shows the resulting decrease in absorption modeled for TE and TM polarized light as a function of numerical aperture. The NA dependence was also measured in the screening testbed for TE light using a cryogenic lens and different incoming beam diameters. The Supplemental Material includes a full description of the angular dependence. Under nominal seeing conditions, the efficiency of the DSOC array is expected to be 70%. Lab measurements of the efficiency using a phase plate to emulate nominal seeing conditions confirm this prediction (measured values between 69 and 72%).
§.§ Dark count rate
The dark count rate of the array is dominated by room temperature blackbody radiation; with the 3 K window blanked, BCR_a is on the order of 1 - 10 cps, and is likely still limited by residual stray light inside the cryostat. The dark count rate therefore depends on the detector's 300 K field of view and on the cryogenic filters used. In the GDA cryostat, the 300 K field of view is set by the cryogenic lens, which has an NA of 0.53. The detector's frequency-dependent optical efficiency (Fig. <ref>d) provides additional filtering. Because the Hale Telescope's Coudé room is not temperature-controlled, the dark count rate also depends on the temperature of the room. Fig. <ref> shows the array dark count rate vs. bias current measured in the summer (T = 24.5^∘ C) and in the winter (T = 8.9^∘ C). At the DSOC operating current, the dark count rate was 22 kcps in the summer and 3.7 kcps in the winter. Total nighttime on-sky background count rates are a few hundred kcps without additional filtering or 10 - 50 kcps with a 1.8 nm bandpass filter installed in the GLROA.
§.§ Maximum count rate
The DSOC array must be able to count at rates of several hundred Mcps in order to match the received photon flux at the highest data rates. Each of the 64 nanowires in the array has a dead time of approximately 20 ns, after which it recovers to full efficiency over a period of approximately 80 ns. Photons that are absorbed within 20 ns after a detection event in the same wire are not detected, and photons that are absorbed within 20-100 ns of a detection event in the same wire are detected with a lower probability. Fig. <ref>a shows the normalized array detection efficiency vs. count rate under CW flood illumination. The 3-dB saturation count rate is 850 Mcps when biased in the middle of the PCR plateau. A higher maximum count rate can be achieved by biasing the array at a higher bias current. For example, the O2O array has a 3-dB saturation count rate above 1 Gcps when biased close to the switching current. Measurements of count rates above ∼2 Gcps are limited by the TDC counter's maximum count rate.
For PPM data formats with short symbol periods (< 100 ns), the blocking loss primarily depends on the rate of incoming photons, because typical inter-pulse separations are less than the detector dead time. Fig. <ref>b shows the measured system detection efficiency vs. incoming photon flux for different data formats where the symbol period is < 100 ns. The curves are similar to the measurement under CW illumination. For data formats with long symbol periods (> 100 ns), however, pulses are typically separated by more than the detector dead time, so the blocking loss primarily depends on the number of photons per pulse – the array misses photons when multiple photons in the pulse are incident on the same nanowire. Fig. <ref>c shows the system detection efficiency vs. number of incoming signal photons per slot for different data formats where the symbol period is > 100 ns. The measurements for the different data rates follow the same curve despite representing very different count rates, indicating that the blocking loss is dominated by photon number rather than photon rate.
§.§ System jitter
The jitter of the array, GDA electronics, and TDC was characterized using a 20 MHz mode-locked laser. The laser's 20 MHz electrical sync signal was used to generate a 10 MHz clock reference for the TDC using a PLL-based clock translator (AD9553). The TDC's comparator level was set at 25% of the pulse height for each channel. The optimal trigger level is 45%, but using a lower threshold helps with the effects of temporal walk, as discussed below. One set of time tags was saved and analyzed to produce a time delay calibration for the TDC. Another set of time tags was then saved with the calibration loaded in the TDC. Fig. <ref>a shows jitter histograms for each channel of the array, obtained from the second time tag acquisition by binning the time tags modulo the laser period. The mean FWHM jitter of the individual channels is 118 ps. A histogram of counts from all channels of the array is plotted in red in Fig. <ref>a, and its FWHM is also 118 ps. The clock derivation from the laser sync is estimated to contribute ∼32 ps of jitter, and the TDC jitter is approximately 50 ps. The jitter is expected to be dominated by measurement noise. A prototype version of the DSOC array was measured to have single-channel jitter of 40 ps FWHM using a low-noise cryogenic amplifier and a fast oscilloscope.
At high count rates, the array jitter increases. The degredation in jitter is due to variations in the pulse shape that lead to timing offsets when combined with the TDC's fixed-threshold comparator <cit.>. These variations have several potential sources: smaller pulse amplitudes occur when the bias current has not fully returned to the nanowire following the previous detection; shifted pulses can occur when voltage ripples from a previous event's pulse interfere with the current pulse; and distortions in the rising edge of the pulse can occur due to electrical cross-talk from a neighboring pixel that detects a photon at the same time. All these effects are more severe when the average time between events is shorter or if more photons are detected in the array at the same time. As part of the technology development phase of the project, the readout electronics were optimized to minimize the count-rate-dependent jitter by speeding up the cryogenic amplifiers so that the undershoot in the electrical pulse (Fig. <ref>d) occurs within the detector's recovery time and by redesigning the micro-coax connectors with additional grounding between signal lines to minimize cross-talk. Fig. <ref>b shows histograms measured using a 500 MHz pulsed laser at different count rates. As the count rate increases, the histogram width increases, and it develops a longer tail at higher delay values. Fig. <ref>c shows the FWHM jitter vs. count rate measured with the 500 MHz laser. The FWHM jitter increases by about 35% from its low-count-rate limit at a count rate of 500 Mcps, and the full width at 1% increases even more severely. We investigated the possibility of correcting for the time walk contribution to the jitter, as described in <cit.>, but did not find a significant improvement in decoding performance. The GSPA includes a matching filter for jitter compensation, which, in conjunction with the error correction built in to the Consultative Committee for Space Data Systems (CCSDS) code standard <cit.>, is able to handle the increase in jitter at high count rates.
§.§ Crosstalk
In a photon-counting array, crosstalk occurs when detection events on one channel cause detection events on another channel. Crosstalk in SNSPD arrays can be due to either electrical or thermal coupling between the channels. For example, in early prototypes of the DSOC array design, the nanowires were spaced too densely, and the heat produced by a detection event on one channel was able to propagate to neighboring channels and trigger an event several nanoseconds later. In the current DSOC array, electrical coupling produces small negative pulses on neighboring channels when a channel detects a photon (Fig. <ref>d), but these pulses are below the comparator threshold level and therefore do not cause crosstalk events. In general, crosstalk mechanisms have a characteristic time scale of correlations between channels. In the absence of crosstalk, events on different channels will be completely uncorrelated.
To look for timing correlations between channels, we collected over 100 million time tags across the whole array under CW illumination and looked at the interarrival times between events on adjacent channels. The simplest way to analyze the interarrival times is first to isolate tags from the two channels of interest (e.g. Ch1 and Ch2), and then look for pairs of consecutive events when Ch1 clicked and then Ch2 clicked. Correlation in the time separation of these pairs is an indication of crosstalk on Ch2 due to events on Ch1. The same analysis is repeated for crosstalk on Ch1 due to events on Ch2. Fig. <ref> shows a representative interarrival time histogram for two adjacent channels of the DSOC array. The histogram follows an exponential decay as expected for Poisson-distributed events without any evidence of deviations due to crosstalk. The measurement sensitivity was better than the DSOC project's requirement of <1%, but no crosstalk was observed. All adjacent channel pairs were analyzed, and none showed signs of crosstalk. Correlations between Ch8 and all other channels in the same quadrant were also analyzed to look for any crosstalk between non-adjacent channels with similar findings.
§ DEPLOYMENT
The detector system was first assembled and tested at the Jet Propulsion Laboratory. The system was then delivered to Palomar Observatory in July 2021 and installed in the Coudé room of the Hale Telescope. The cryostat operated continuously from August 2021 until November 2022 and from May 2023 until July 2024, with a few outages due to planned facility maintenance to unplanned loss of power or cooling water. During the extended pre-launch operations period, the GLR underwent various on-sky and off-sky tests in order to verify its ability to acquire and track on-sky sources in different seeing conditions, to refine its operating procedures and control software, and to improve its robustness to unplanned utility outages.
On October 21, 2021, the GLR was used to measure a light curve of the Crab Pulsar. A 1450 nm long-pass filter was used to reduce background. The Crab Nebula was too faint to allow for centroiding on the detector, so the telescope was blind-pointed to the nebula coordinates after centering on a nearby star. 20 minutes of time tags were collected and processed. The time tags were folded by the best-fit period of 33.7916 ms and binned by 50 µs. Due to pointing uncertainty, the pulsar only contributed to counts on half of the array, so counts from the other two quadrants were discarded. Fig. <ref>a shows the resulting light curve.
To demonstrate the large collection area, high maximum count rate, and high timing resolution of the GLR, the second-order autocorrelation function was measured for several bright stars to demonstrate thermal photon bunching. Observations were conducted using a 1550 nm bandpass filter with a 1.8 nm nominal bandwidth. Instead of using a beam splitter and two detectors as in the original Hanbury Brown and Twiss measurements <cit.>, we measured correlations between two halves of the array. Fig. <ref>b shows g^(2) measurements for Rigel and Procyon, with a clear increase in correlations at τ=0. This demonstration suggests that SNSPDs would make good candidates for infusion into astrometrical intensity interferometers, which currently use SPAD or PMT detectors <cit.>.
DSOC operations started in November 2023, and the GLR was successfully able to decode transmitted data at rates up to the maximum supported data rate of 267 Mbps, achieving this data rate at distances up to 0.37 AU (equivalent to the minimum Earth-Mars distance). At a distance of 1.5 AU (equivalent to the average Earth-Mars distance), the GLR was able to decode at a data rate of 25 Mbps. At a distance of 2.68 AU (equivalent to Mars farthest range), the GLR was able to decode at 8.33 Mbps. The similar O2O detector system was also used as part of a back-up ground receiver at the 1 m OCTL telescope to decode at data rates up to 61.25 Mbps at a range of 0.13 AU. During the first year of operations, the flight laser was limited to half of its maximum power and lower PPM orders as a risk reduction measure. Operations are scheduled to continue into 2025 with the possibility of increasing laser power and using higher PPM order formats.
§ CONCLUSIONS AND FUTURE WORK
We have reported on an SNSPD-based detector system for the DSOC Ground Laser Receiver. The 64-channel detector array has a detection efficiency of 70% under nominal seeing conditions, total dark counts as low as 3.7 kcps, a maximum count rate of ∼ 1 Gcps at the 3-dB saturation point, and jitter of 118 ps FWHM. As part of the larger DSOC Ground Laser Receiver system, the detector assembly enabled links at data rates up to 267 Mbps. For the next generation of deep-space optical communication ground receivers, even larger and faster SNSPD arrays will be needed, and readout of future systems remains a challenge. The current approach of direct readout for each channel will become impractical for arrays much larger than 100 pixels due to cryogenic heat loads and readout complexity. Different approaches to both increase the speed of each channel and to reduce the number of readout channels through cryogenic multiplexing will likely be necessary. While challenges remain in the continued scaling of SNSPD arrays, we have demonstrated that the technology has matured to the point of producing large-scale systems that meet the demands of deep-space optical communication applications.
§ FUNDING
The research was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration (80NM0018D0004).
§ ACKNOWLEDGMENTS
The authors would like to thank the DSOC project for their support, including project manager Bill Klipstein, project technologist Abi Biswas, ground manager Meera Srinivasan, and additional members of the Ground Laser Receiver team: Erik Alerstam, Ryan Rogalin, Nathaniel Richards, Angel Velasco, Seán Meenehan, Roger O'Brient, Carlos Esproles, Vachik Garkanian, Huy Nguyen, and Sabino Piazolla. The cryogenic amplifier development was made possible by a collaboration with the Spiropulu group / INQNET at Caltech. We thank our partners from Caltech Optical Observatories at Palomar Observatory for their support and contributions to the DSOC project. The authors would also like to thank Thomas Lehner of Dotfast Solutions, Andrew Rathbone and Brian Stoddard of Cryomech, Inc., and Vikas Anant of Photon Spot, Inc. for the extra time they contributed in helping us to adapt their products for our unique needs. The authors acknowledge helpful discussion and advice from collaborators at NIST and MIT Lincoln Labs. The authors would like to credit Jeffrey Stern (1962 – 2013) with performing foundational work on SNSPD development at JPL. The research was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration. This work was supported in part by a NASA Space Technology Research Fellowship.
|
http://arxiv.org/abs/2409.03512v1 | 20240905132251 | From MOOC to MAIC: Reshaping Online Teaching and Learning through LLM-driven Agents | [
"Jifan Yu",
"Zheyuan Zhang",
"Daniel Zhang-li",
"Shangqing Tu",
"Zhanxin Hao",
"Rui Miao Li",
"Haoxuan Li",
"Yuanchun Wang",
"Hanming Li",
"Linlu Gong",
"Jie Cao",
"Jiayin Lin",
"Jinchang Zhou",
"Fei Qin",
"Haohua Wang",
"Jianxiao Jiang",
"Lijun Deng",
"Yisi Zhan",
"Chaojun Xiao",
"Xusheng Dai",
"Xuan Yan",
"Nianyi Lin",
"Nan Zhang",
"Ruixin Ni",
"Yang Dang",
"Lei Hou",
"Yu Zhang",
"Xu Han",
"Manli Li",
"Juanzi Li",
"Zhiyuan Liu",
"Huiqin Liu",
"Maosong Sun"
] | cs.CY | [
"cs.CY",
"cs.CL"
] |
Bias correction of posterior means using MCMC outputs
Yukito Iba
September 9, 2024
=====================================================
§ ABSTRACT
Since the first instances of online education, where courses were uploaded to accessible and shared online platforms, this form of scaling the dissemination of human knowledge to reach a broader audience has sparked extensive discussion and widespread adoption. Recognizing that personalized learning still holds significant potential for improvement, new AI technologies have been continuously integrated into this learning format, resulting in a variety of educational AI applications such as educational recommendation and intelligent tutoring. The emergence of intelligence in large language models (LLMs) has allowed for these educational enhancements to be built upon a unified foundational model, enabling deeper integration. In this context, we propose MAIC (Massive AI-empowered Course), a new form of online education that leverages LLM-driven multi-agent systems to construct an AI-augmented classroom, balancing scalability with adaptivity. Beyond exploring the conceptual framework and technical innovations, we conduct preliminary experiments at Tsinghua University, one of China’s leading universities. Drawing from over 100,000 learning records of more than 500 students[We follow the approval from Tsinghua University Sci.&Tech. Ethics Committee (NO.THU-04-2024-56).], we obtain a series of valuable observations and initial analyses. This project will continue to evolve, ultimately aiming to establish a comprehensive open platform that supports and unifies research, technology, and applications in exploring the possibilities of online education in the era of large model AI. We envision this platform as a collaborative hub, bringing together educators, researchers, and innovators to collectively explore the future of AI-driven online education.
§ INTRODUCTION
Explicit Background: Evolution for Scalability. The evolution of online education stands as a testament to humanity's relentless pursuit of knowledge, transcending the limitations of time and space <cit.>. From the humble beginnings of oral tradition to the advent of the printed book <cit.>, education has continually sought ways to expand its reach. Yet, for centuries, the traditional model of education was bound by the constraints of physical classrooms, limited resources, and localized instruction. The dawn of the internet marked a revolutionary shift, heralding the age of online education, where the dream of universal access to knowledge began to take tangible form. Specifically, the Massive Open Online Course (MOOC) phenomenon marks a significant milestone in the evolution of online education, reflecting both technological advancement and educational innovation <cit.>.
Since then, platforms like edX[<https://www.edx.org/>], involving institutions such as MIT and Harvard, and Coursera[<https://www.coursera.org/>], originating from Stanford, have integrated learning resources from over 270 renowned universities <cit.>. These platforms have attracted more than 100 million learners globally, progressively realizing the Scalability of online education.
Implicit Motivation: Determination of Adaptivity. However, this paradigm of serving thousands of learners from diverse backgrounds through one pre-recorded video <cit.> (as shown in Figure <ref>) struggles to align with the educational philosophy of “teaching in accordance with individual aptitudes” <cit.>. This challenge has become a significant reason for the subsequent efforts that introducing of AI techniques into online learning. To achieve the Adaptivity of learning, a series of tasks such as learning path planning <cit.>, course recommendation <cit.>, and intelligent tutoring <cit.>—driven by technologies like recommendation systems and dialogue generation—have been employed to enhance the student learning experience.
Although these technologies have been applied across various aspects of teaching and learning, the significant differences among the supporting tasks before, during, and after instruction have posed challenges for unifying them under a single deep learning framework <cit.>. Such fragmentation has, in part, delayed the emergence of a new platform where AI and online learning are fully integrated. However, with the rapid advancement of generative AI <cit.>, large language models (LLMs) have created fresh opportunities for AI-powered learning paradigms. Models such as GPT-4 <cit.> and LLaMA <cit.> possess strong generalization capabilities and encapsulate vast parametric knowledge, allowing for the flexible configuration of intelligent agents <cit.> built upon them. Currently, LLM-driven multi-agent systems <cit.> have already been explored for applications such as social simulation <cit.> and the execution of complex tasks like software development <cit.>. This progress opens up a potential pathway for introducing large language model multi-agent systems to create entirely new online teaching and learning experiences.
Proposal of MAIC. At the critical juncture of a new era defined by large language models and multi-agent systems in online education, we introduce MAIC (Massive AI-empowered Course). MAIC is dedicated to exploring the integration of multi-agent systems across various stages of online learning, including course preparation, instruction, and analysis, with the goal of balancing Scalability and Adaptivity of online education. The core concept of MAIC is to construct a series of LLM-driven agents to support both Teaching and Learning in the online educational environment.
As illustrated in the Figure <ref>, the paradigm of MOOC and MAIC can be featured with two primary aspects:
Teaching: This action is primarily performed by the instructor. In previous MOOC, the instructor is responsible for thoroughly preparing course materials, drafting lecture notes, and spending considerable time meticulously recording the course. The final output typically consists of a series of pre-recorded instructional videos. For the proposed MAIC, however, the instructor only needs to upload the teaching slides. With intelligent assistance, the instructor can further complete the PPT creation, after which the agents, utilizing a range of models such as multimodal understanding and knowledge structure extraction, will generate structured lecture notes and learning resources optimized for use by the AI system.
Learning: In MOOCs <cit.>, a single set of course materials is designed to serve thousands of students with diverse backgrounds, and the pace of instruction is predetermined by the instructor, offering very limited room for personalized adaptation based on individual student needs <cit.>. While in MAIC, course delivery is autonomously managed by AI teacher agents, which dynamically adjust the teaching process based on student interactions and inquiries. Additionally, MAIC offers AI teaching assistants and customizable AI classmates. Students can select the AI agents they wish to study with, thereby creating varied classroom scenarios that provide personalized learning companions, emotional support, and opportunities for knowledge discussion.
In this report, we introduce the MAIC platform, offering an intuitive and user-friendly solution that accommodates the needs of various users, including students and educators. This platform comes pre-equipped with a suite of intelligent agents and tools that support course analysis and the construction of new MAIC course examples. Additionally, MAIC integrates several learning analytics tools powered by large models, enabling quick access to learning data, forecasting academic outcomes, and automating tasks such as interviews and assessments.
With the support of Tsinghua University, one of China's top universities, we conducted an exploration of this new learning model over a period of more than three months. Assisted by over 500 student volunteers, we implemented the study using two courses: the AI course “Towards Artificial General Intelligence” (TAGI) and the learning science course “How to Study in the University” (HSU). During this pilot, we collected over 100,000 behavioral records. Based on the data from these courses, along with student survey measurements and qualitative interview results, we conducted an initial analysis of the features and performance of the MAIC system. In subsequent sections, we will briefly introduce the technical implementation, algorithm improvement and primary results of MAIC [Our open-source Demo and detailed analysis will be released soon. More technical details are introduced in the companion papers Slide2Lecture and SimClass <cit.>.].
§ RELATED WORK
AI-assisted Online Learning
Online learning <cit.> refers to the process of acquiring knowledge within an electronic environment composed of communication technologies, network infrastructure, artificial intelligence, and multimedia tools. While online education significantly enhances learners’ access to knowledge, supports personalized learning, and facilitates learning tailored to individual needs, persistent issues such as low course completion rates and suboptimal learning outcomes remain formidable challenges <cit.>. A retrospective study published by MIT <cit.> highlights that the lack of continuous guidance and personalized support for online learners is a critical factor affecting the quality of learning and student development in online settings. The inherent nature of online and remote learning often results in physical separation between students and instructors, making it prohibitively expensive to maintain real-time interactions through manual means <cit.>. Therefore, AI researchers have begun to introduce auxiliary learning applications such as resource recommendation systems <cit.> and intelligent teaching assistants <cit.> into online education environments. Leveraging technologies like educational knowledge graphs <cit.>, they are increasingly integrating diverse technologies to construct personalized online learning systems that enhance the learning experience through tailored support and adaptive learning pathways. In the era of large language models, platforms like Khan Academy have pioneered the deployment of AI-driven tools, such as the Khanmigo virtual tutor [<https://blog.khanacademy.org/teacher-khanmigo/>]. This development has sparked discussions among researchers about the potential to move beyond basic question-answering and recommendation functionalities, exploring the design of more deeply integrated models that fuse AI with online education in innovative ways <cit.>. Some of the research efforts gradually aim to create new paradigms that go beyond traditional approaches to AI-enhanced learning.
LLM-driven AI Tutoring System The evolution of intelligent tutoring systems (ITS) from early expert systems to agent-based models in the era of large language models has undergone several transformations in interaction paradigms <cit.>. During the 1980s and 1990s, ITS began leveraging expert systems and related technologies to deliver instruction aligned with learners' cognitive processes across diverse educational settings. Representative systems from this period include AutoTutor, developed by the University of Memphis <cit.>, and SCOT from Stanford University[51]. These systems offered greater flexibility and were among the first to support natural language-based question-and-answer interactions. However, the content of these interactions remained pre-designed, limiting their ability to provide learning support beyond the scope of the system's initial design. The advent of Large Language Models (LLMs) has significantly broadened the scope of intelligent tutoring systems (ITS), offering unprecedented interactivity and adaptability across educational platforms <cit.>. Recent strides in the study of multi-agent systems and the integration of tools have catalyzed new approaches to planning and student interaction within ITS frameworks <cit.>. Emerging research <cit.> illustrates the capacity of LLMs to autonomously curate and deliver educational content by leveraging an array of tools. Additionally, innovations like MWPTutor <cit.> explore how LLMs can effectively manage the teaching of complex subjects, such as mathematical word problems, demonstrating their versatility in specialized learning environments. Building on these developments, our research focuses on an LLM-based ITS model designed to establish a robust framework for comprehensive, lecture-level tutoring that aligns with evolving educational standards, thereby reinforcing the transformative role of LLMs in the future of education.
§ MAIC
In this section, we introduce the key techniques for implementing a MAIC platform. Specifically, we present the main workflows designed for both the teaching and learning sides, highlighting the key challenges associated with each and the corresponding solutions implemented to address these issues.
§.§ Teaching: Course Preparation Workflow
To transform vast amounts of weakly structured and static learning resources <cit.> into highly structured and adaptive learning materials <cit.>, MAIC develop a standardized course preparation workflow. This workflow is designed to streamline the workload of experts, facilitating the scalability of this online learning model and preparing it for broader implementation. Such course preparation workflow of MAIC consists of two major stage: Read and Plan.
§.§.§ Read Stage
At this stage, instructors (and authorized teaching assistants) are involved by providing material. With the assistance of multi-agent systems empowered by large language models, they upload a set of course slides 𝒫={P_i }^1≤ i≤ |𝒫|, which are then transformed into highly structured intelligent learning resources 𝒫={<P_i,D_i,K_j > }^1≤ i≤ |𝒫|_1≤ j≤ |𝒫| along with multiple AI agents designed for classroom construction. The P_i and D_i here correspond to a single slide page and its textual description, while K_j denotes to the knowledge-aware section of each page.
1. Slides Content Extraction. First, MAIC employs a multi-modal LLM (mLLM) <cit.> to capture the textual content P_i^t and the visual content P_i^v of each page of the given slide 𝒫, i.e., f_T^1: P_i→ <P_i^t , P_i^v>. Such functional model f_T^1 [The same applies to similar symbols in the following text.] can be adjusted and further improved by the arising LLM techniques, and the current implementation is based on the GPT-4V [<https://openai.com/index/gpt-4v-system-card/>] model with certain prompting contexts.
2. Structure Extraction. After the pre-processing of the uploaded slides, MAIC employs two functions to complete the read stage. The produced <P_i^t , P_i^v> are described by an mLLM-based method with straightforward and comprehensive texts, i.e., f_T^2: <P_i^t , P_i^v> → D_i. Meanwhile, MAIC takes an knowledge extraction method to organize the core knowledge of each page and build a tree-style taxonomy for the slide, i.e., f_T^3: <P_i^t , P_i^v, D_i> →K_j, which makes up the final 𝒫.
§.§.§ Plan Stage
At this stage, instructors (and authorized teaching assistants) are involved by proofreading and refining the results. Based on highly structured slides, MAIC constructs a novel instructional action representation language, allowing the incorporation of flexible teaching functions such as lecturing and questioning into preset classrooms, which naturally connects with related educational technologies like lecture script generation <cit.> and question generation <cit.>. Meanwhile, leveraging intelligent agent construction techniques <cit.>, the platform utilizes course content to provide teachers and teaching assistants with AI-driven agents, facilitating the early planning of the foundational structure for subsequent courses.
3. Function Generation. To make the heterogeneous teaching actions be generated within the classroom context, teaching activities such as lecturing and giving quizzes are conceptualized as teaching actions within MAIC. Each teaching action 𝒯 is defined as 𝒯 = (type, value), where type indicates the category of the action (e.g., ShowFile, ReadScript, AskQuestion), and value details the content of the action, such as the script to be read aloud. This approach reflects our principles of flexibility and adaptability, allowing classroom actions to be easily configurable. It empowers developers and educators to create custom teaching actions tailored to specific needs, facilitating their smooth integration into the overall teaching process.
Each Function is associated with the generation of certain content, denoted as <𝒯_n, P_𝒯>. For instance, the function AskQuestion necessitates linkage with one or a set of specific questions. Among these, the most crucial action is ReadScript, as it constitutes the core of the instructional process. Based on this function, these Teaching Actions are embedded within the course script using special marker symbols, thereby enabling the intelligent agent to read and invoke them in a personalized manner. Specifically, MAIC has trained a high-quality lecture script generation model based on long-context encoding methods and multi-modal model foundations, thereby supporting the fundamental class procedures as well as the integration of other teaching actions, i.e., f_T^4: P→P_script. Then, MAIC provide a series of optional functions such as f_T^5: P→P_question for serving proactive questioning during the class. Note that all the generated results are required to be checked and adjusted by instructors, which guarantees the quality and correctness of the produced course.
4. Agent Generation. Meanwhile, instructors can provide personalized information (such as the voice, teaching styles, and extended course material) for building customized teaching agents, such as Teacher Agent a_T and Teaching Assistant Agent a_TA. MAIC provides several agentization toolkits <cit.> that implemented via LLMs, supporting the high-level DIY of these agents. The extended course materials uploaded are also segmented in this section and integrated into different intelligent agents using the RAG (Retrieval-Augmented Generation) technology. The associated series of technological innovations are also thoroughly introduced and evaluated in the concurrent academic papers.
§.§ Learning: Multi-agent Classroom Environment
As described in the Introduction, student learning in MAIC follows a "1 Student user + N AI Agents" model. In such an environment, the AI teacher controls the learning progress based on the highly structured instructional action representation language mentioned earlier, explains course content, poses questions, and navigates PowerPoint slides, while the AI teaching assistant maintains classroom order and prevents content deviation. Students can interrupt the teacher at any time, ask questions, and engage in discussions, and the intelligent agents continuously adjust the teaching process and some content based on the students' performance. As introduce in <cit.>, the design principles for constructing this immersive adaptive classroom originate from the following two concerns: (1) How to ensure that the classroom covers the core classroom behaviors? (2) How to maintain the entirety of the interaction within the natural flow of the classroom process?
In addressing the initial concern, we systematically classify classroom interaction behaviors in accordance with established educational principles, as delineated in Schwanke's seminal work <cit.>: Teaching and Initiation (TI) encompasses the instructive actions of the teacher and the responsive feedback or insights provided by students; In-depth Discussion (ID) involves the alignment, deliberation, and iterative question-and-answer exchanges between the teacher and students, which are instrumental in facilitating students' conceptual comprehension; Emotional Companionship (EC) pertains to the encouragement of student learning, the cultivation of a conducive learning environment, and the provision of emotional sustenance; and Classroom Management (CM) refers to the maintenance of order, the organization of disruptive elements, and the steering of classroom discourse. Recognizing that these pedagogical behaviors manifest through diverse Class Roles (represented as ℛ= { r_i}^| ℛ|, with each ri signifying a distinct role), it is imperative to ensure the variety and breadth of the agents' roles within the educational setting.
Addressing the subsequent concern, we emphasize the necessity of meticulously and rhythmically orchestrating the interactions among the various agents within the system, in harmony with the course content. With the Learning Materials (denoted as C = [ c_1,...,c_t ], where each instructional script c_t is sequenced), we introduce an innovative Session Controller designed to regulate the flow of classroom interactions, contingent upon the class's dynamic state and under the aegis of a central managerial agent <cit.>.
Based on these principles, we construct multiple classmate agents for diverse roles, implement class control, and ultimately derive the multi-agent classroom process.
Classmate Agents. To enhance the educational experience and emulate the dynamics of traditional classroom settings, we currently preset a variety of student-like agents, each imbued with unique personality traits, to complement the teaching agents. These agents are designed to perform roles akin to peer students, enriching the interactive landscape of the learning environment. In this scholarly work, we have introduced an initial set of four archetypal student agents, while also providing users with the flexibility to customize and introduce additional engaging student agents onto the educational platform. Each agent 𝐚_i∈𝒜 is facilitated through prompting LLMs and associated with one or more class roles, denoted as:
𝒜 = ρ ( LLM, 𝖯_A ), 𝒜⇔ℛ,
where ρ is the role customization operation, 𝖯_A is the system prompt with agent description <cit.>.
∙ Class Clown (TI, EC, CM): Crafted to spark creativity, engender a lively classroom ambiance, and act as a supportive peer, this agent also assists the teacher in steering the class's focus when the learner's attention wanders.
∙ Deep Thinker (TI, ID): This agent is dedicated to profound contemplation and to posing thought-provoking questions that challenge and extend the intellectual boundaries of the classroom.
∙ Note Taker (TI, CM): With a penchant for summarizing and disseminating key points from the class discussions, this agent aids in the cognitive organization and retention of information for all participants.
∙ Inquisitive Mind (TI, EC): Characterized by a propensity for inquiring about lecture content, this agent fosters a culture of inquiry and dialogue, prompting others to engage in critical thinking and collaborative discourse.
Unlike Standardized Operating Procedures (SOPs) commonly used in multi-agent systems <cit.>, classroom scenarios function as dynamic, interactive environments without rigid workflows, resembling an evolving group discussion. In these settings, agents must determine appropriate timing for their interactions, adapting to the fluid nature of classroom discourse. To address this need, we designed a controller that observes classroom dynamics, makes informed decisions, and manages agents’ behaviors based on the current state of the class. The Session Controller is composed of three core modules: the Class State Receptor and Manager Agent.
Class State Receptor. The Class State Receptor captures the ongoing classroom dialogue, with the history up to time t represented as H_t = ⋃ (u_i^𝐚_j)^t, where u_i is the utterance made by agent 𝐚j or a user (denoted as 𝐚u). The class state S_t integrates this interaction data, structured as 𝒮_t = {P_t, H_t | ℛ}. Here, P_t ⊆ P represents the learning materials covered up to time t. This design prioritizes adaptability and real-time decision-making, aligning with pedagogical principles that emphasize responsiveness to the evolving needs of learners within an educational setting.
Manager Agent. Drawing inspiration from AutoGen <cit.> and MathVC <cit.>, we designed a hidden meta-agent responsible for regulating the dynamics of classroom interactions. This agent receives the current class state 𝒮_t, monitors the flow of the class, interprets ongoing activities, and determines the subsequent action to be executed, ensuring that the learning environment remains adaptive and responsive. The task f_ℒ of the Manager Agent can be formally defined as f_ℒ: 𝒮_t → ( 𝐚_t, 𝒯 ) | 𝐚_t ∈𝒜, 𝒯_n⇐𝒯.
where 𝒯_n denotes a specific function, and the selected action will be carried out, transitioning the class to the next state. After executing an action, the system enters a waiting phase for a time window τ. During this period, if a user responds or the waiting time elapses, the Manager Agent is triggered to make a new decision. This design reflects key educational principles by prioritizing a learner-centered approach, maintaining fluid class engagement, and promoting timely and contextually relevant instructional adjustments, thereby enhancing the overall educational experience.
This classroom management method is the core of MAIC learning stage. Currently, we collect plenty interaction data and employ several foundation models <cit.> via fine-tuning or prompt tuning as our baseline model.
§ KEY TECHNIQUE EVALUATION
MAIC is a complex large language model-based intelligent agent system that encompasses various specific technologies. On the teaching side, it involves multiple processes for content generation and knowledge understanding, while on the learning side, it requires evaluating the effectiveness of agent construction and classroom management capabilities. Currently, we focus on presenting two core functions: lecture script generation and course management, which are fundamental to the teaching and learning aspects of MAIC. The evaluation of other technologies will be continuously updated. It is important to note that these assessments provide only an initial view of specific aspects of MAIC rather than its overall effectiveness, which will be further explored through real-world practice and results analysis in subsequent sections.
§.§ Teaching Side Evaluation
Function Generation. As introduced in Section <ref>, generating vivid slide scripts is the core function of MAIC teaching workflow. Baselines: To assess the effectiveness of our implementation, we established two baseline configurations for comparison: (1) We replicated Script2Transcript <cit.>, referred to as S2T, which uses slide titles to offer overarching context and guidance for generating transcripts. (2) We also reproduced Self-Critique Prompting <cit.>, denoted as SCP, which incorporates a self-critique and refinement process to enhance script quality.
Evaluation Metrics: Our evaluation employs four distinct metrics to rate the generated scripts on a 5-point Likert scale, where 1 represents unacceptable quality and 5 denotes optimal performance:
* Tone evaluates whether the script appropriately reflects the instructional tone of a teacher.
* Clarity measures how clear and comprehensible the script is for learners.
* Supportiveness assesses the extent to which the script provides emotional and motivational support to students.
* Alignment evaluates the degree to which the script content aligns with the slide material.
Overall performance is determined by averaging the scores across all metrics.
Procedure: We executed the course preparation pipeline for each baseline and collected script evaluations from annotators. To minimize potential bias, each slide was assessed by three independent annotators, who were required to provide ratings for all configurations of the same slide. This approach ensures a balanced and comprehensive evaluation of the teaching material, aligning with educational principles that emphasize clarity, support, and contextual relevance in instructional content.
Results: As shown in Table <ref>, our approach (MAIC-FuncGen) achieves the highest overall score of 4.00, outperforming all baseline methods. Our analysis reveals several key insights:
* Importance of Visual Input: The inclusion of visual elements significantly enhances script generation quality. Both S2T and MAIC-FuncGen without visual inputs received lower matching scores, highlighting the need for contextual visual cues that align the script content closely with the presented materials.
* Role of Contextual Information: The presence of coherent contextual information, including content from previous pages, is critical for script quality. It not only improves the clarity of the current content but also enhances the supportive and matching aspects by providing a continuous and interconnected learning narrative. This aligns with pedagogical principles that emphasize coherence and context in learning materials to support deeper understanding.
* Comparative Performance with Human Instructors: Interestingly, our approach slightly outperformed the human baseline across three key dimensions. This can be attributed to the inherent ability of large language models (LLMs) to strictly adhere to instructions, maintaining alignment with slide content and employing an encouraging and supportive tone. In contrast, human instructors often expand on topics freely, introducing their own style and diverging from the core content, which reflects a more flexible, albeit less structured, instructional approach.
These findings underscore the value of integrating structured visual and contextual information into script generation, while also highlighting the potential of LLMs to complement traditional instructional strategies by providing consistency and structured support.
§.§ Learning Side Evaluation
r0.5
< g r a p h i c s >
Manager Agent Precision.
Classroom Manager Agent. Section <ref> also mention several relevant techniques of MAIC learning. The classroom manager agent is the keypoint of the classroom controlling. Setup: However, the evaluation of this process is highly subjective, making it challenging to establish an objective scoring system for assessment. Therefore, in the practical implementation of the TAGI and HSU courses, we select 500 actual system decisions and extracted their corresponding classroom scenarios. We recruit expert teachers and teaching assistants to manually annotate these scenarios. Based on this annotated data, we derive the results shown in Figure <ref>. These results illustrate the alignment between the actions chosen by the manager agent and those selected by human instructors in determining the next course action. Specifically, we evaluate the implementation with and without role description, detecting the effects of these contextual information.
Result: Statistical analysis reveals that omitting role descriptions for each agent reduces the classifier's performance. Although the LLM can sometimes identify the correct agent by referencing partial behaviors from the chat history, the inclusion of comprehensive role descriptions markedly enhances performance. This suggests that while leveraging chat history as input for the scene controller can provide some benefits, it is insufficient for consistently generating accurate outputs.
The current results, however, remain below optimal levels, indicating further opportunities to refine and enhance the user experience. Despite the suboptimal performance, interacting agents demonstrate the capacity to partially offset these shortcomings. This compensatory effect is due to the LLM's ability to manage user queries beyond the predefined functions of each agent, as evidenced by our subsequent behavioral study, where user ratings did not significantly decline in the ablation setting.
Nonetheless, enhancing the accuracy of the controller agent remains advantageous, as agents can more effectively manage tasks they are specifically designed for. For example, the teacher agent is tailored to adopt a softer, more instructive tone, but it may be less effective in handling safety-related cases compared to the teaching assistant agent. Improved accuracy ensures that each agent operates within its designed scope, contributing to a more seamless and effective instructional process.
§ BEHAVIORAL EXPERIMENT
Following approval from Tsinghua University Science and Technology Ethics Committee (Certificate No: THU-04-2024-56) and the recruitment of student and teacher volunteers, we conduct over three months of teaching practice and behavioral analysis in the courses "Towards General Artificial Intelligence" and "How to Study in the University." This large-scale study involves more than 500 students and aims to address three core questions: Q1: What is the quality of MAIC Courses? Q2: What are the learning outcomes within MAIC? Q3: How do students perform in the MAIC environment? In the following sections, we present some preliminary observations from this study.
§.§ Q1: The Quality of MAIC Course
We evaluated the quality of the MAIC course using the results from two questionnaires completed by course takers. The first questionnaire focused on the quality of AI teacher's teaching, adapted from the Community of Inquiry Framework <cit.>. Items in the original COI questionnaire were revised to make them suitable for the AI-engaged learning environment. This questionnaire was administered when students completed the whole course.
r0.5
< g r a p h i c s >
Results from after-course survey.
As presented in Figure <ref>, the results showed that overall students had positive beliefs of the teaching quality on MAIC. For instance, the mean score of students' rating on the question "The AI instructor clearly communicated important course goals" (Course Objective) was 4.12 (SD = 0.66) out of 5, and the average rating on "The AI instructor encouraged course participants to explore new concepts" was 4.03 (SD = 0.73). These findings suggest that students felt the AI instructors effectively helped them understand course objectives, clarify their thinking, explore new ideas, and engage in meaningful dialogue.
However, relatively lower ratings were observed on questions like “The AI instructor provided feedback that helped me understand my strengths and weaknesses” (Understand Student), which had a mean score of 3.51 (SD = 0.94). This indicates that AI instructors may lack personalization and adaptability during the teaching process, possibly because the same scripts were used for all students.
§.§ Q2: The Behavior of MAIC Student Engagement
Firstly, when it comes to choosing the class mode, students tend to prefer the "continuous mode," believing that this mode allows them to maintain their train of thought without interruption, thereby ensuring learning efficiency. The "continuous mode" refers to a setting where, after selecting the teacher and other intelligent agent roles (such as teaching assistant, thinker, note-taker), the chosen roles conduct the class from start to finish without any interactive input from the students, making it a relatively passive learning approach. For example, during interviews, one student mentioned:
.9"I mostly used the continuous mode because, in the interactive mode, after the AI teacher finishes each sentence, you have to respond before they can continue. I don't always feel like interacting after every sentence, so most of the time, I use continuous mode. Of course, there were one or two times when I used interactive mode because the AI prompted me to speak, but I remember that once, after I spoke in interactive mode, the AI didn't respond or react to what I said, so I felt like it wasn't very useful. After that, I just stuck with the continuous mode." In this mode, although interaction is not possible, some students adopt a pause strategy if they don't understand something. For instance, one student mentioned, "If I didn't understand something, I would pause and look at the PPT and the text in the text box. I don't think I ever stopped to ask the AI to explain something again, unless it was some unfamiliar term or a more exploratory topic."
Secondly, regarding specific behaviors during the class, some students proactively ask questions to the intelligent agent around certain topics. As shown in Figure 3, 61% of the students' behavior in class involved actively seeking knowledge, information, or asking questions. For example, asking "Can you explain the transformer structure in simple terms?" The interview results of this study also support this view. Many students indicated that they would actively ask questions. Some examples from the interviews include:
.9∙ "When I asked a question, the AI would ask a follow-up question related to mine, and I felt it was an amazing experience, like it was really guiding me to think more deeply. I think the questions it asked made a lot of sense. In that interactive mode, it could extend into many other discussions beyond the course content."
∙ "I think this might be an advantage of AI teaching. Because, in a traditional classroom, whether it's a large class or even a small one, students nowadays are generally reluctant to ask questions. There are various reasons for this. Also, I think immediate Q&A helps me learn the material better. If I have a question, I can get an answer right away, and I think that immediate feedback is really valuable."
r0.5
< g r a p h i c s >
Ratio of student activities.
Additionally, some students manage and control different intelligent agent roles and the class progress. These management-related behaviors account for 11% (Figure <ref>). For example, "Please go back to the previous slide," or "Please explain that in simpler terms," demonstrating strong autonomy and self-regulated learning abilities. In the interviews, some students also mentioned, I don't just ask for knowledge; I might ask, 'I want to learn more,' or 'I hope to explore something new in a certain field,' to manage and regulate the AI's responses.
Overall, the findings reveal that while students prefer the "continuous mode" for its uninterrupted flow of information, this passive approach may limit opportunities for active engagement and critical thinking. In the future, AI agent-driven classes should actively encourage student interaction rather than simply delivering continuous knowledge. In addition, the high level of proactive questioning indicates that students are eager to engage when given the opportunity, underscoring the importance of designing AI tools that foster inquiry-based learning <cit.>. Questioning is an important behavior that reflects active learning in students, which ultimately leads to better academic performance. Finally, the occurrence of management-related behaviors suggests that students already realize their active role in their learning process, which align with <cit.>' study. It is said that the capabilities of LLM AI tools can help students and educators transition from passive recipients to active co-creators of their learning experiences. Further research can give more support and encouragement for them to take an active role in directing their learning.
§.§ Q3: The Outcome of MAIC Learning
We assessed the effectiveness of the MAIC course from three perspectives: performance in module tests and the final exam, technology acceptance through questionnaires, and self-reported higher-order thinking scores.
Test Results. Module tests were conducted at the end of each module, focusing primarily on the content covered in the most recent module. The final exam was administered one week after the course concluded, synthesizing questions from the earlier module tests. Average attendance of the module tests is 76.3% (SD=6%), while that of the final exam is 73.3%. Test scores (stantardized to percentage) ranged from 53.3% (Module 2, SD=18.9%) to 82.4% (Module 4, SD=16.3%), reflecting students' learning outcomes. These outcomes were corroborated by student interviews. One participant expressed, “My most impressive gains, from the perspective of knowledge, all came from the post-class test. If there were no post-class test, I might not remember the knowledge at all, and it might just pass by like a passing cloud of smoke... When I took the test and then looked back at the courseware, it was the peak period for me to absorb knowledge, honestly speaking.”
r0.6
Correlation of students' message-aware behavior and test results. Values shown in the table are normalized.
w/o control μ(log(MsgNum)) μ(log(MsgLen))
AvgQuiz 0.341*** 0.202*
FinalExam 0.346*** 0.333**
w/ control μ(log(MsgNum)) μ(log(MsgLen))
AvgQuiz 0.206** 0.177**
FinalExam 0.174 0.235*
Additionally, test scores are strongly associated with class engagement. Specifically, the frequency (measured by the logorithm of the number of messages per module) and length of in-class chat messages (measured by the logorithm of the number of characters per message and module) —a prominent feature of the MAIC system—were positively correlated with standardized test scores and final exam performance, as presented in Table <ref>. A regression analysis of standardized test scores on in-class chat message metrics was performed, controlling for normalized scores from Module 1. As Table <ref> shows, in-class chat engagement is found to significantly predict higher test scores.
Technology Acceptance. We further evaluate the acceptance of generative AI tools, such as ChatGPT, before and after the course. The results demonstrated a significant overall increase in technology acceptance (N=111, t=3.05, p=0.002). Further analysis of specific dimensions of acceptance revealed significant improvements in Habit (t=2.81, p=0.005), Effort Expectancy (t=3.98, p<0.001), and Facilitating Conditions (t=3.22, p=0.002). These findings indicate that students grew increasingly accustomed to and supportive of the MAIC course format.
Interview responses also reflected an enhanced understanding and acceptance of AI technologies. One student remarked:
.9"I used to be quite resistant to AI, mainly because it was too complicated, but after this course, I found that it was not that complicated, and I became more accepting of it. I also learned some of its principles."
High-order Thinking. We explore the perceived impact of using LLMs on students' higher-order thinking skills through pre- and post-course questionnaires. Comparative analyses and t-test results showed significant increases in students' perceptions of the positive effects of the course on their abstract thinking (t=2.32, p=0.02) and critical thinking (t=2.37, p=0.02). These findings suggest that students believe using large language models throughout the course enhanced their cognitive abilities in these areas.Interviews further illuminated these perceptions. One student stated:
.9"I think it might make me more confident in asking these questions and think more," while another mentioned, "Unlike before, (I was always) hiding and waiting for the opportunity to ask again."
However, the course's impact on other higher-order thinking skills remained ambiguous. Several students noted a lack of deep thinking and discussion opportunities in the course. For instance, one student commented:
.9"(In a real classroom) After class, I can ask the teacher to explain it to me again... At this time, the teacher will definitely give you different ideas or explanations, but in this class, you will have no more after class." Another student added, "this course is purely about content because the teacher's teaching is quite mechanical. The course is rich in theoretical knowledge, but there is almost no life thinking, life enlightenment, or some enlightenment and thinking outside of artificial intelligence."
Notably, there is a limitation came from the questionnaire itself. As we evaluated students' perceptions on the impacts on high-order thinking abilities, it should be noted that the abilities are not estimated. Future study should include designed scales or tasks to estimate the effects of the course on the students' high-order thinking abilities.
To address this limitation, future research should incorporate well-designed scales or specific tasks that can objectively measure students' higher-order thinking abilities. These might include assessments that evaluate critical thinking, problem-solving, and analytical reasoning skills directly. By doing so, researchers can gain a more comprehensive understanding of the course's effectiveness in enhancing these abilities. Moreover, the inclusion of such measures would allow for a more rigorous evaluation of the course's impact, providing stronger evidence to support or challenge the findings based on student perceptions alone.
§ PREDICTED IMPACT AND ETHICAL CONSIDERATION
§.§ Predicted Impact
The implementation of MAIC in online education is expected to revolutionize the learning experience by enhancing both scalability and adaptability. By leveraging multi-agent systems, MAIC can dynamically adjust to the needs of individual learners, providing personalized learning paths that were previously unattainable in traditional MOOCs. This personalized approach not only improves learning outcomes but also provides access to high-quality education across diverse socio-economic backgrounds, as it reduces the dependency on human instructors for content delivery.
Moreover, MAIC is anticipated to address some of the inherent challenges of traditional online education, such as the one-size-fits-all model and the lack of real-time adaptability <cit.>. The integration of AI-driven agents as teachers, teaching assistants, and classmates creates a more interactive and responsive learning environment. This shift promises to increase engagement and motivation among students, potentially leading to higher completion rates and deeper understanding of the material.
However, it is crucial to acknowledge that the introduction of such a transformative system could also have unintended consequences. There may be a widening gap between students who adapt well to AI-powered learning environments and those who struggle with this new mode of education. Additionally, the reliance on AI systems could lead to reduced opportunities for human instructors, potentially diminishing the role of educators in the learning process. These impacts need to be carefully monitored and addressed through ongoing evaluation and refinement of the MAIC system.
§.§ Ethical Considerations
The deployment of MAIC in online education brings forth significant ethical considerations that must be carefully evaluated <cit.>. One of the foremost concerns is learner privacy and data security. The system's reliance on large-scale data collection and analysis to personalize learning experiences raises questions about how student data is stored, accessed, and used. To mitigate these concerns, stringent data protection measures have been implemented, including encryption and anonymization of student records. However, given the sensitivity of educational data, continuous efforts to enhance security protocols are essential.
Issues of discrimination and bias also present ethical challenges. While MAIC aims to provide a personalized learning experience for all students, there is a risk that the algorithms driving these personalized experiences may inadvertently reinforce existing biases, particularly if the training data is not sufficiently diverse. To address this, the development team has incorporated fairness-focused auditing procedures into the algorithmic design process. Although these measures are designed to minimize bias, it is recognized that no system is entirely immune to these challenges, and ongoing monitoring is required to ensure equitable outcomes.
The accuracy of information and content regulation within MAIC is another critical area of ethical concern. As the system automates the creation and dissemination of educational content, there is a risk that inaccuracies could be propagated at scale. To mitigate this, content generated by the AI is regularly reviewed by subject matter experts and teaching assistants. Nonetheless, given the vast scale of content production, it is acknowledged that some errors may still occur. Thus, the system includes mechanisms for students and educators to flag and correct inaccuracies, ensuring continuous improvement of the educational material.
In terms of the ethics of education, MAIC raises questions about the role of teachers in a system that increasingly relies on AI-driven instruction. While the system enhances scalability and provides personalized learning experiences, it also diminishes the direct involvement of human educators. This shift may impact the development of teacher-student relationships, which are critical for fostering emotional and social growth in learners <cit.>. To address this, MAIC includes human-in-the-loop design principles, ensuring that educators can intervene and guide the AI’s decision-making processes when necessary. However, the balance between AI automation and human oversight remains a complex issue that requires ongoing consideration.
Furthermore, the lack of peer interaction in a predominantly AI-driven educational environment could hinder the development of important social skills among students. To counter this, MAIC incorporates AI classmates designed to simulate peer interactions. While these agents provide a form of interaction, they cannot fully replicate the nuances of human-to-human communication. The system, therefore, encourages mixed-mode learning environments where students can engage with both AI and human peers, preserving the benefits of social learning.
Finally, the issue of student personalization must be approached with caution. While MAIC’s ability to adapt to individual learning needs is a significant advantage, it also poses risks to fairness and equality. There is a potential for certain students to receive more tailored and effective instruction based on their data profiles, potentially exacerbating educational inequalities. To mitigate this, MAIC includes mechanisms to ensure that all students, regardless of their data profiles, have access to high-quality learning experiences. The system’s fairness algorithms are continuously refined to promote equitable educational outcomes for all learners.
§ CONCLUSION
In this paper, we provide a concise overview of the development trajectory of online education and the technological opportunities arising in the era of large language models. Considering the principles of adaptivity and scalability, along with the sophisticated design of LLM-driven multi-agent systems, we explore how existing MOOC can be transformed into MAIC (Massive AI-empowered Courses) and discuss new paradigms of teaching and learning. We propose a comprehensive solution, analyze the key technical components, and implement each step of the process. Our approach was practically deployed in two courses at Tsinghua University, leading to a series of preliminary observations of student behavior. These initial findings suggest that highly personalized classrooms built with new AI-assisted learning technologies can achieve high quality, and student behavior demonstrates the effectiveness of the teaching process. Moving forward, this work will be continuously maintained and expanded, aiming to develop an open and shared platform for educational exploration, academic research, and technological innovation. We hope our work will call upon and serve educational theorists, technology developers, and innovators to engage in discussions about the new online environment in the era of large language models.
§ ACKNOWLEDGEMENT
§.§ Author Contribution
System Implementation. Jifan Yu, Zheyuan Zhang, and Daniel Zhang-li designed the overall framework of the system. Zheyuan Zhang refined the workflow representation method used for controlling agents, while Daniel Zhang-li was responsible for the development and engineering deployment of several algorithms, including resource processing and function generation. Shangqing Tu oversaw security reviews and the integration of RAG methods. Linlu Gong collected the evaluation data from students. Nan Zhang and Ruixin Ni were responsible for project management and coordination throughout the development phase of MAIC, playing a crucial role in ensuring the quality of the final system.
Theoretical Investigation. Zhanxin Hao and Ruimiao Li were responsible for the theoretical investigation and analysis of the MAIC concept, while Yang Dang contributed to the early development of this idea. All supervising professors provided valuable insights and guidance in the conceptualization of MAIC.
Toolkit Completion. Haoxuan Li, Yuanchun Wang, Hanming Li, Jiayin Lin, Jinchang Zhou, and Nianyi Lin contributed to the development of various MAIC tools, including the cognitive diagnosis module, automated interview module, and automatic analysis module. Haohua Wang and Lijun Deng played significant roles in data collection and processing.
Course Practice. Yisi Zhan and Chaojun Xiao were instrumental in the first round of MAIC pilot courses, handling a wide range of practical tasks, including the recruitment of student volunteers, provision of course materials, content review, and post-class management. Xusheng Dai refined the course practice process, while Xuan Yan was deeply involved in course support activities. Their efforts were critical to the successful execution of the two courses.
Pedagogical analysis. Under the guidance of Yu Zhang, Zhanxin Hao was responsible for the preliminary pedagogical evaluation of the MAIC system. Ruimiao Li, under the supervision of Manli Li, conducted key field research. Jie Cao, Fei Qin, and Jianxiao Jiang played crucial roles in the analysis process, overseeing tasks such as automated coding, qualitative interviews, and data analysis, respectively.
Paper Writing. Jifan Yu was responsible for the primary writing of the manuscript, while Zhanxin Hao and Ruimiao Li contributed significantly to the design and execution of the behavioral experiments and the ethical consideration.
Advising. Manli Li, Juanzi Li, Zhiyuan Liu, Huiqin Liu, and Maosong Sun took advisor roles in this project. Xu Han brought technical insights into the system design. This project also received guidance and support from various relevant departments at Tsinghua University.
§.§ Acknowledgement
This research project is supported by a grant from the Institute for Guo Qiang, Tsinghua University (20192920479).
This project would like to express its appreciation for the contributions and assistance of many other participants. The artistic design was skillfully provided by Shanshan Wang. The platform development and feature implementation were carefully carried out by Peng Zhou, Yuting Liu, Yuanwei Xu and Chengqiang Xu.
plainnat
|
http://arxiv.org/abs/2409.03537v1 | 20240905135334 | From annular to toroidal pseudo knots | [
"Ioannis Diamantis",
"Sofia Lambropoulou",
"Sonia Mahmoudi"
] | math.GT | [
"math.GT",
"57K10, 57K12, 57K14, 57K35"
] |
§ ABSTRACT
In this paper, we extend the theory of planar pseudo knots to the theories of annular and toroidal pseudo knots.
Pseudo knots are defined as equivalence classes under Reidemeister-like moves of knot diagrams characterized by crossings with undefined over/under information.
In the theories of annular and toroidal pseudo knots we introduce their respective lifts to the solid and the thickened torus. Then, we interlink these theories by representing annular and toroidal pseudo knots as planar O-mixed and H-mixed pseudo links.
We also explore the inclusion relations between planar, annular and toroidal pseudo knots, as well as of O-mixed and H-mixed pseudo links.
Finally, we extend the planar weighted resolution set to annular and toroidal pseudo knots, defining new invariants for classifying pseudo knots and links in the solid and in the thickened torus.
Spectral properties of hexagonal lattices with the -R coupling
[
September 9, 2024
==============================================================
§ INTRODUCTION
Knots and links have been a central focus in topology, with significant implications across various fields, including biology, chemistry, and physics. In classical knot theory, knots are typically studied through their projections on a plane, where crossings are assigned over/under information to fully capture the knot's structure. However, there are instances where this information is either unavailable or intentionally omitted. This leads to the concept of pseudo knots, which are projections of knots with some crossings left undetermined.
Pseudo knot diagrams were introduced by Hanaki in <cit.> as knot projections on the 2-sphere with certain double points lacking over/under information, the precrossings or pseudo crossings. This model is motivated by the study of DNA knots, where distinguishing over and under crossings is often unfeasible, even with electron microscopy. The theory of pseudo knots is the study of equivalence classes of pseudo diagrams under certain local combinatorial moves which extend the well-known Reidemeister moves for classical knots by taking into account also the precrossings (cf. <cit.> for further details).
In this paper, we extend the theory of pseudo knots beyond their classical planar setting by considering them on two additional surfaces: the annulus and the torus. These surfaces are of particular interest because of their inherent rotational and reflectional symmetries, while maintaining direct connections to the plane and between them, due to inclusion relations.
We study annular and toroidal pseudo knots by exploiting the theory of planar pseudo knots and by establishing interconnections due to various inclusion relations: of a disc in the annulus and in the torus and of an annulus in the torus. We further introduce the notion of the lift of a planar, annular or toroidal pseudo knot diagram into a closed curve with rigid pseudo crossings, in three-space, in the solid torus (cf. also <cit.>) or in the thickened torus, respectively. We also introduce the notion of isotopy for the lift of a pseudo knot, establishing bijections of the corresponding theories with the planar, annular and toroidal pseudo knot theories (Proposition <ref>, Theorem <ref>, and Theorem <ref>):
Theorem 1.
Two pseudo links in the three-sphere, in the solid torus or in the thickened torus, respectively, are isotopic if and only if any two corresponding planar, annular or toroidal pseudo link diagrams of theirs, projected onto the plane, the annulus or the outer toroidal boundary, respectively, are pseudo Reidemeister equivalent.
Thanks to the lifts of planar, annular and toroidal pseudo knot diagrams we exploit further inclusion relations: of a three-ball in the solid torus and in the thickened torus, of a solid torus in the thickened torus, and of the solid torus and the thickened torus in a three-ball. The inclusion relations reflect the inherent symmetries of the geometrical objects. The notions of lifts for annular and toroidal pseudo knots enable us to establish further connections of these theories to mixed link theories: in the first case to O-mixed pseudo links in S^3, which are pseudo links that contain a point-wise fixed unknotted component representing the complementary solid torus (cf. also <cit.>), in the second case to H-mixed pseudo links in S^3, which are pseudo links that contain a point-wise fixed Hopf link as a sublink, whose complement is a thickened torus (Theorem <ref> and Theorem <ref>):
Theorem 2.
Isotopy classes of pseudo links in the solid torus, resp. in the thickened torus are in bijection with isotopy classes of O-mixed pseudo links resp. H-mixed pseudo links in S^3, via isotopies that keep O resp. H point-wise fixed.
Based on the above, we further translate the spatial isotopies at the diagrammatic level through generalized Reidemeister theorems for O-mixed pseudo links (Theorem–<ref>) resp. H-mixed pseudo links and relate to the corresponding pseudo Reidemeister equivalences (Theorems–<ref> and <ref>).
Finally, we define the invariant weighted resolution sets (WeRe sets) for annular and toroidal pseudo knots, which are sets of associated knots in the solid and thickened torus, respectively, obtained by assigning to each pseudo crossing over or under information. Similarly, the O-WeRe set and H-WeRe set are also introduced for O-mixed and H-mixed pseudo links, which are sets of associated O-mixed and H-mixed links, respectively. Subsequently we have (Theorems <ref> and <ref>):
Theorem 3.
The annular resp. toroidal WeRe set is an invariant of annular resp. toroidal pseudo links. Subsequently, any invariant of links in the solid torus (resp. of links in the thickened torus), applied on the elements of an annular WeRe set (resp. of a toroidal WeRe set), induces also an invariant set of the annular pseudo link (resp. of the toroidal pseudo link). Similarly, the O-WeRe set resp. the H-WeRe set is an invariant of O-mixed pseudo links resp. H-mixed pseudo links. Subsequently, any invariant of O-mixed links (resp. of H-mixed links), applied on the elements of an O-WeRe set (resp. of an H- WeRe set), induces also an invariant set of the O-mixed pseudo link (resp. of the H-mixed pseudo link).
Apart from the interest in studying annular and toroidal pseudo knots per se, another motivation for us is their relation to periodic tangle diagrams in a ribbon and in the plane, respectively, through imposing one and two periodic boundary conditions by means of corresponding covering maps. For further details in periodic tangles, cf. for example <cit.> and references therein.
The paper is organized as follows. We begin, in <ref>, by recalling the basic notions associated with planar pseudo knots, including the pseudo Reidemeister equivalence, the lift in three-dimensional space and the WeRe set.
In <ref> we extend the theory from planar to annular pseudo knots. We first present the annular pseudo Reidemeister equivalence. We then define the lifts of annular pseudo knots in the solid torus and their isotopy moves, which lead to their representation as planar O-mixed pseudo links. We also explore the inclusion relations between annular, planar and O-mixed pseudo links. The section is concluded with the extension of the WeRe set for annular pseudo knots, providing a new invariant for their classification.
In <ref> we introduce the theory of toroidal pseudo knots. We first define the pseudo Reidemeister equivalence moves for toroidal pseudo knots. We then define the lift of toroidal pseudo knots into a closed pseudo curve in the thickened torus and the notion of isotopy for such curves. The lift of toroidal pseudo knots leads to their representation as planar H-mixed pseudo links. We also explore the inclusion relations between toroidal, annular and planar pseudo knots, as well as of O-mixed and H-mixed pseudo links. We conclude this section with the extension of the WeRe set for toroidal pseudo knots, defining new invariants for classifying pseudo knots and links in the thickened torus.
The transition from planar to annular and then to toroidal setting introduces increasing complexity due to the topological properties and symmetries of the ambient spaces. This study provides deeper insights into the interactions among planar, annular and toroidal pseudo knots.
§ CLASSICAL PSEUDO KNOTS
The classical by now theory of pseudo knots and links was introduced by Hanaki in <cit.> and the mathematical background of pseudo knot theory was established in <cit.>. In this section, we recall the basics of pseudo knots and links in the plane, their equivalence relation and the weighted-resolution invariant set (WeRe set). We also introduce the notion of lift of a planar pseudo link in the 3-space.
§.§
A planar pseudo knot/link diagram or simply pseudo knot/link diagram consists in a regular knot or link diagram in the plane where some crossing information may be missing, that is, it is unknown which strand passes over and which strand passes under the other. These undetermined crossings are called precrossings or pseudo crossings and are depicted as transversal intersections of arcs of the diagram enclosed in a light gray circle (for an illustration see Figure <ref>). Assigning an orientation to each component of a pseudo link diagram results in an oriented pseudo link diagram.
A planar pseudo link is defined as an equivalence class of pseudo link diagrams under planar isotopy and all versions of the classical Reidemeister moves R_1, R_2, R_3 and the pseudo Reidemeister moves PR_1, PR_2, PR_3, PR_3^', as exemplified in Figure <ref>, all together comprising the Reidemeister equivalence for pseudo links. For an oriented Reidemeister equivalence we require also orientations to be preserved, via the oriented versions of the moves comprising the Reidemeister equivalence.
Consider a pseudo link diagram K with no precrossings. Then, K can be viewed as a classical link diagram and classical Reidemeister equivalence is preserved by pseudo link equivalence. Therefore, there is an injection of classical link types into the set of pseudo link types.
As mentioned in <cit.>, pseudo links are closely related to singular links, that is, links that contain a finite number of rigid self-intersections. In particular, there exists a bijection f from the set of singular link diagrams to the set of pseudo link diagrams where singular crossings are mapped to precrossings. In that way we may also recover all of the pseudo Reidemeister moves, with the exception of the pseudo Reidemeister I move (PR1). Hence, f induces an onto map from the set of singular links to the set of pseudo links, since the images of two equivalent singular link diagrams are also equivalent pseudo link diagrams with exactly the same sequence of corresponding pseudo Reidemeister moves, and every pseudo link type is clearly covered.
§.§ The lift of a planar pseudo link in 3-space
We now introduce the lift of a pseudo link diagram as (a collection of) curve(s) in three-dimensional space whose regular projections are pseudo link diagrams.
The lift of a planar pseudo link diagram in three-dimensional space is defined as follows: every classical crossing is embedded in a sufficiently small 3-ball so that the over arc is embedded in its upper boundary and the under arc is embedded in its lower boundary, while the precrossings are supported by sufficiently small rigid discs, which are embedded in three-space. By lifting the precrossings in rigid discs in three-space we preserve the pseudo link's essential structure while respecting the `ambiguity' of its precrossings. The simple arcs connecting crossings can be also replaced by isotopic ones in three-space. The resulting lift, called spatial pseudo links, is a collection of closed curve(s) in three-space consisting in embedded discs from which emanate embedded arcs.
Clearly, any regular projection of a spatial pseudo link, whereby a disc supporting a pseudo crossing does not project on an arc, is a planar pseudo link diagram.
Two (oriented) spatial pseudo links are said to be isotopic
if they are related by arc and disc isotopies.
In the above context, one can easily derive the following.
Two (oriented) spatial pseudo links are isotopic if and only if any two corresponding planar pseudo link diagrams of theirs are (oriented) Reidemeister equivalent.
§.§ The weighted resolution set of pseudo links
In this subsection we recall an invariant of pseudo links, the weighted resolution set, through their resolution sets <cit.>. A resolution of a pseudo link diagram K is a specific assignment of a crossing (over or under) for every precrossing in K, for an illustration see Figure <ref>. We then have the following:
The weighted resolution set or WeRe set, of a planar pseudo link diagram K is a collection of ordered pairs (K_i, p_K_i), where K_i represents a resolution of K, and p_K_i denotes the probability of obtaining from K the equivalence class of K_i through a random assignment of crossing types, with equal likelihood for positive and negative crossings.
It can be easily confirmed that the WeRe set is preserved under the equivalence moves for pseudo links <cit.>. Hence, we have:
The WeRe set is an invariant of planar pseudo links. Subsequently, any classical invariant applied on the elements of the WeRe set induces also an invariant set of the planar pseudo link.
Consider the pseudo trefoil knot of Figure <ref>. In Figure <ref> we illustrate the resolution set of this pseudo knot resulting in the following WeRe set:
{( trefoil knot, 1/4), ( unknot, 3/4) }
Further, applying the Jones (or, equivalently, the normalized Kauffman bracket) polynomial for classical knots to the WeRe set we obtain the invariant set: {( t + t^3 - t^4, 1/4), (1, 3/4) }.
Consider the pseudo trefoil knot again, but this time
let us modify the resolution probabilities, assigning different probabilities to each precrossing. In this case, the resolution set will remain the same but the weighted resolution set will change, having a different probability distribution of the same classical links. Hence, we conclude that a pseudo link diagram L with altered resolution probabilities may lead to non-equivalent weighted resolution sets.
§ ANNULAR PSEUDO KNOT THEORY
In this section we introduce and study the theory of annular pseudo knots and links, that is, pseudo link diagrams in the interior of the annulus 𝒜, subjected to Reidemeister-like moves. The annulus 𝒜 is the space S^1 × D^1 with two circular boundary components, and may be also represented as an once punctured disc, meaning a disc with a hole at its center.
We study annular pseudo links in different geometric contexts, such as through their lifts to closed curves with precrossings in the solid torus and through their representations as planar O-mixed pseudo links, that is, pseudo links with one fixed unknotted component.
We also define the invariant WeRe set for an annular pseudo link, which is a set of associated links in the solid torus, and the O-WeRe set for an O-mixed pseudo link, which is a set of associated O-mixed links. Finally, we discuss the relations of annular pseudo links with planar ones and with links in the solid torus, through inclusion relations.
An annular pseudo knot/link diagram consists in a regular knot or link diagram in the interior of the annulus 𝒜, where some crossing information may be missing, in the sense that it is unknown which strand passes over the other. As in the classical case, these undetermined crossings are called precrossings or pseudo crossings and are depicted as transversal intersections of arcs of the diagram enclosed in a light gray circle. Assigning an orientation to each component of an annular pseudo link diagram results in an oriented annular pseudo link diagram.
In Figure <ref> we illustrate an annular pseudo link diagram in 𝒜. In particular, observe that this pseudo link diagram contains two components: a null-homologous loop linked to an essential locally knotted loop winding twice along the meridian.
An annular pseudo link is defined as an equivalence class of annular pseudo link diagrams under surface isotopy and all versions of the classical Reidemeister moves and the extended pseudo Reidemeister moves, as exemplified in Figure <ref>, all together comprising the Reidemeister equivalence for annular pseudo links. As for classical pseudo links, for an oriented Reidemeister equivalence we require also orientations to be preserved, via the oriented versions of the moves comprising the Reidemeister equivalence.
As in the classical case, one may relate the theory of annular pseudo links to the theory of annular singular links via a bijection from the set of annular singular link diagrams to the set of annular pseudo link diagrams. This bijection maps singular crossings to precrossings.
The mapping carries through to all of the pseudo Reidemeister moves, with the exception of the pseudo Reidemeister I move (PR1). Hence, we have an onto map from the set of annular singular links to the set of annular pseudo links, since the images of two equivalent annular singular link diagrams are also equivalent annular pseudo link diagrams with exactly the same sequence of corresponding pseudo Reidemeister moves, and every annular pseudo link type is clearly covered.
§.§ The lift of annular pseudo links in the solid torus
In this section we define the lift of annular pseudo links in the solid torus similarly to the lift of pseudo links in three-dimensional space (recall Definition <ref>) but with additional constraints. We consider the thickening 𝒜× I, where I denotes the unit interval, I=[0,1] (see Figure <ref>). Note that the space 𝒜× I is homeomorphic to the solid torus, ST.
The lift of an annular pseudo link diagram in the thickened annulus 𝒜× I is defined so that: each classical crossing is embedded in a sufficiently small 3-ball that lies entirely within the thickened annulus, precrossings are supported by sufficiently small rigid discs, which are embedded in the thickened annulus, and the simple arcs connecting the crossings can be replaced by isotopic ones in the thickened annulus. The lifting of the precrossings within these rigid discs maintains the pseudo link's essential structure while preserving the `ambiguity' of its precrossings. The resulting lift is called a pseudo link in the solid torus and it is a collection of closed curve(s) constrained to the interior of the solid torus, consisting in embedded discs from which emanate embedded arcs.
Two (oriented) pseudo links in the solid torus are said to be isotopic if they are related by isotopies of arcs and discs that are confined to the interior of the solid torus.
In this context, the following result holds:
Two (oriented) pseudo links in the solid torus 𝒜× I are isotopic if and only if any two corresponding annular pseudo link diagrams of theirs, projected onto the annulus 𝒜×{0}, are (oriented) pseudo Reidemeister equivalent.
§.§ The mixed pseudo link approach
In this subsection we represent annular pseudo links as mixed pseudo link diagrams in the plane as well as spatial mixed pseudo links in three-space.
It is established in <cit.> that isotopy classes of (oriented) links in a knot/link complement correspond bijectively to isotopy classes of mixed links in the three-sphere S^3 or in three-space, through isotopies that preserve a fixed sublink which represents the knot/link complement. In particular, viewing ST as the complement of a solid torus in S^3, an (oriented) link in ST is represented uniquely by a mixed link in S^3 that contains the standard unknot, O, as a point-wise fixed sublink, which represents the complementary solid torus. Cf. <cit.>.
Using now the spatial lift of a pseudo link, recall Definition <ref>, one can define:
An (oriented) O-mixed pseudo link in S^3 is an (oriented) spatial pseudo link O∪ K which contains the standard unknot O as a fixed sublink, and the sublink K resulting by removing O, as the moving part of the mixed pseudo link, such that there are no precrossing discs between the fixed and the moving part.
Then, by the same reasoning as for (oriented) links in ST, and by Definitions <ref> and <ref>, we obtain that an (oriented) pseudo link K in ST is represented uniquely by an (oriented) O-mixed pseudo link O∪ K in S^3. Namely:
Isotopy classes of (oriented) pseudo links in the solid torus are in bijective correspondence with isotopy classes of (oriented) O-mixed pseudo links in S^3 via isotopies that keep O fixed.
For an example see Figure <ref>. The reader may compare with <cit.>, where a pseudo link in ST is defined as an O-mixed pseudo link.
Taking now a diagrammatic approach we define:
An (oriented) O-mixed pseudo link diagram is an (oriented) regular projection O∪ D of an (oriented) O-mixed pseudo link O∪ K on the plane of O, such that some double points are precrossings, as projections of the precrossings of O∪ K, there are no precrossings between the fixed and the moving part, and the rest of the crossings, which are either crossings of arcs of the moving part or mixed crossings between arcs of the moving and the fixed part, are endowed with over/under information.
Combining the equivalence of planar pseudo link diagrams (recall Section <ref>) and the theory of mixed links (recall <cit.>) we obtain the discrete diagrammatic equivalence of O-mixed pseudo links:
[The O-mixed pseudo Reidemeister equivalence]
Two (oriented) O-mixed pseudo links in S^3 are isotopic if and only if any two (oriented) O-mixed pseudo link diagrams of theirs differ by planar isotopies, a finite sequence of the classical and the pseudo Reidemeister moves, as exemplified in Figure <ref>, for the moving parts of the mixed pseudo links, and moves that involve the fixed and the moving parts, called mixed Reidemeister moves, comprising the moves MR_2, MR_3, MPR_3, as exemplified in Figure <ref>.
In the last two subsections we lifted annular pseudo link diagrams in the thickened annulus and related pseudo links in the solid torus to O-mixed pseudo links and O-mixed pseudo link diagrams. These are recapitulated in Figure <ref>, where 𝒜 is represented as an once punctured disc.
Further, Theorems <ref>, <ref> and <ref> culminate in the following diagrammatic equivalence:
Two (oriented) annular pseudo link diagrams are (oriented) pseudo Reidemeister equivalent if and only if any two corresponding O-mixed pseudo link diagrams of theirs are O-mixed pseudo Reidemeister equivalent.
§.§ Annular inclusions
Annular pseudo link diagrams with no precrossings can be viewed as link diagrams in the annulus, and Reidemeister equivalence in the annulus is compatible with annular pseudo link equivalence. Thus, there is an injection of annular links into annular pseudo links. We note that annular link diagrams modulo the Reidemeister equivalence correspond bijectively to isotopy classes of links in the solid torus ST, cf. for example <cit.>.
The inclusion of a disc in the annulus induces an injection of the theory of (planar) pseudo links into the theory of annular pseudo links. In terms of liftings, the inclusion of a three-ball in the thickened annulus induces an injection of the theory of spatial pseudo links into the theory of pseudo links in the solid torus, from the above and by Theorem <ref>.
On the other hand, the inclusion of the annulus (resp. thickened annulus) in a disc (resp. three-ball) induces a surjection of the theory of annular pseudo links (resp. pseudo links in ST) onto planar (resp. spatial) pseudo links, where the essential components are mapped to usual components. View Figure <ref>.
Finally, in terms of O-mixed pseudo links, the inclusion of the annulus in a disc corresponds to omitting the fixed part O, so we are left with a planar pseudo link.
§.§ The weighted resolution set for annular pseudo links
In this subsection we extend to annular pseudo links the weighted resolution set defined in Definition <ref> for planar pseudo links <cit.>, and we prove that it is an invariant of theirs. Indeed:
A resolution of an annular pseudo link diagram K is a specific assignment of crossing types (positive or negative) for every precrossing in K. The result is a link diagram in the annulus, which lifts to a link in the solid torus, recall Subsection <ref>.
Similarly, an O-resolution of the O-mixed pseudo link diagram O∪ K is a specific assignment of crossing types (positive or negative) for every precrossing in O∪ K. In this case the resulting link is an O-mixed link in S^3, representing uniquely the lift of the resolution of K in ST.
The annular weighted resolution set or annular WeRe set, of an annular pseudo link diagram K is a collection of ordered pairs (K_i, p_K_i), where K_i represents a resolution of K, and p_K_i denotes the probability of obtaining from K the equivalence class of K_i through a random assignment of crossing types, with equal likelihood for positive and negative crossings.
Similarly, the O-weighted resolution set or O-WeRe set, of an O-mixed pseudo link diagram O∪ K is a collection of ordered pairs ( O∪ K_i, p_K_i), where O∪ K_i represents a resolution of O∪ K, and p_K_i denotes the probability of obtaining from O∪ K the equivalence class of O∪ K_i through a random assignment of crossing types, with equal likelihood for positive and negative crossings.
Further, we have the following result:
The annular WeRe set is an invariant of annular pseudo links. Similarly, the O-WeRe set is an invariant of O-mixed pseudo links. Subsequently, any invariant of links in the solid torus resp. of O-mixed links, applied on the elements of an annular resp. an O-WeRe set, induces also an invariant set of the annular resp. the O-mixed pseudo link.
It follows from Theorem <ref> of <cit.> and from Definition <ref> that the annular WeRe set is, indeed, invariant under the standard Reidemeister moves R_1, R_2, R_3 and the pseudo Reidemeister moves PR_1, PR_2, PR_3 and PR_3^', since these moves are compatible with the corresponding moves with no precrossings.
Further, an O-mixed pseudo link diagram O∪ K is by definition a planar pseudo link diagram, thus, by virtue of Theorem <ref> its O-weighted resolution set is an invariant of O∪ K. Indeed, it suffices to show that the O-WeRe set is invariant under the mixed Reidemeister moves. The MR_2 and MR_3 moves preserve the resolution set, since they do not involve precrossings. The MPR_3 moves are similar to the MR_3 moves, since regardless of which resolution is considered for the precrossing, the result is an MR_3 move that does not change the knot type.
Moreover, any invariant of links in the solid torus (e.g. <cit.>) applied to the elements of an annular resp. O-WeRe set will respect the same set of local equivalence moves, hence will preserve the WeRe set, so it induces an invariant set of the annular resp. O-mixed pseudo link.
In this example we illustrate the resolution sets of an annular pseudo trefoil with two precrossings, where 𝒜 is viewed as an once punctured disc. The resulting links are links in the solid torus ST, namely: an essential trefoil with probability 1/4, a twice counterclockwise descending essential unknot with probability 1/4 and a twice clockwise descending essential unknot with probability 1/2. In 𝒜, an essential knot is a knot diagram that cannot be contracted to a point within 𝒜.
§ TOROIDAL PSEUDO KNOT THEORY
In this section, we introduce and develop the theory of pseudo knots and links in the torus, that is, pseudo link diagrams in the torus subjected to Reidemeister-like moves. We study toroidal pseudo links in different geometric contexts, namely through their lifts to closed curves with precrossings in the thickened torus and through their representations as planar H-mixed pseudo links, that is, pseudo links with one fixed sublink, the Hopf link. We also define the invariant WeRe set for a toroidal pseudo link, which is a set of associated links in the thickened torus, and the H-WeRe set for an H-mixed pseudo link, which is a set of associated H-mixed links. Finally, we investigate the relationships between toroidal pseudo links with links in the thickened torus as well as with annular and planar pseudo links, through illuminating inclusion relations, highlighting the interplay between these various contexts.
The surface of the torus, which is the space 𝒯 = S^1 × S^1, can be viewed either meridian-wise, as the circular gluing of two cylinders along their boundary circles, or longitude-wise, as the gluing of two annuli along their outer and inner boundary circles. Equivalently, as the identification space of a cylinder, by identifying the two circular boundary components, or even, as the identification space of an annulus, by identifying the inner and the outer circular boundary components.
A toroidal pseudo knot/link diagram consists in a regular knot or link diagram in the surface of a torus, where some crossing information may be missing, in the sense that it is not known which strand passes over the other. These undetermined crossings are the precrossings or pseudo crossings and are depicted as transversal intersections of arcs of the diagram enclosed in a light gray circle. For an example view Figure <ref>. Assigning an orientation to each component of an toroidal pseudo link diagram results in an oriented toroidal pseudo link diagram.
Due to the topology of the toroidal surface, note that toroidal pseudo link diagrams have two types of essential loops that are not homologically trivial: the longitudinal ones, as in the case of annular pseudo link diagrams, but also the meridional ones, and, of course, combinations of these, such as the torus knots and links. For example, the pseudo link in Figure <ref> has one essential component which winds twice in the longitudinal direction and thrice in the meridional direction. We proceed with defining an equivalence relation in the set of toroidal pseudo link diagrams.
A toroidal pseudo link is defined as an equivalence class of toroidal pseudo link diagrams under surface isotopy and all versions of the classical Reidemeister moves and the extended pseudo Reidemeister moves, as exemplified in Figure <ref>, all together comprising the Reidemeister equivalence for toroidal pseudo links. As for classical and annular pseudo links, for an oriented Reidemeister equivalence we require also orientations to be preserved via the oriented versions of the moves.
As in the classical and annular case, the theory of toroidal pseudo links can be related to the theory of toroidal singular links. Indeed, there is a bijection from the set of toroidal singular link diagrams to the set of toroidal pseudo link diagrams. This bijection maps singular crossings to precrossings.
The mapping carries through to all of the pseudo Reidemeister moves, with the exception of the pseudo Reidemeister I move (PR_1). Hence, we obtain an onto map from the set of toroidal singular links to the set of toroidal pseudo links, since the images of two equivalent toroidal singular link diagrams are also equivalent toroidal pseudo link diagrams with exactly the same sequence of corresponding pseudo Reidemeister moves, and every toroidal pseudo link type is clearly covered.
§.§ The lift of toroidal pseudo links in the thickened torus
In this subsection we define the lift of toroidal pseudo links in the thickened torus in analogy to the lift of annular pseudo links in the solid torus (recall Definition <ref>). We consider the thickening 𝒯× I, where I denotes the unit interval, I=[0,1] (see Figure <ref>).
Recall that a thickened torus can be viewed as a solid torus having another solid torus been drilled out from its interior. Equivalently, as the identification space of a thickened cylinder, by identifying the two annular boundary components, as illustrated in Figure <ref>, or even, as the gluing of two thickened annuli along their outer and their inner annular boundaries. Finally, a thickened torus can be defined as the complement in the three-sphere S^3 of the Hopf link.
The lift of a toroidal pseudo link diagram in the thickened torus 𝒯× I is defined so that: each classical crossing is embedded in a sufficiently small 3-ball that lies entirely within the thickened torus, precrossings are supported by sufficiently small rigid discs, which are embedded in the thickened torus, and the simple arcs connecting the crossings can be replaced by isotopic ones in the thickened torus. The lifting of the precrossings within these rigid discs maintains the pseudo link's essential structure while preserving the `ambiguity' of its precrossings. The resulting lift is called a pseudo link in the thickened torus and it is a collection of closed curve(s) constrained to the interior of the thickened torus, consisting in embedded discs from which emanate embedded arcs.
In Figure <ref> we illustrate the lift of a pseudo link in 𝒯× I, viewed as the identification space of a thickened cylinder. In particular, observe that this pseudo link contains two components: a null-homologous loop linked to an essential locally knotted loop winding twice along the meridian and once along the longitude.
Two (oriented) pseudo links in the thickened torus are said to be isotopic if they are related by isotopies of arcs and discs within the interior of the thickened torus.
In this context, and in analogy to the spatial and annular pseudo links, the following holds:
Two (oriented) pseudo links in the thickened torus 𝒯× I are isotopic if and only if any two corresponding toroidal pseudo link diagrams of theirs, projected onto the outer toroidal boundary S^1 × S^1, are (oriented) pseudo Reidemeister equivalent.
§.§ Toroidal pseudo links as mixed pseudo links
In this subsection we view the thickened torus as being homeomorphic to the complement of the Hopf link, H, in the three-sphere S^3, see middle illustration of Figure <ref>. Since we fix our torus 𝒯 and subsequently the thickened torus 𝒯× I, it is crucial that in this theory we have the components of H marked, say with m and l, as depicted in Figure <ref>. Then, as in the case of annular pseudo links, an (oriented) pseudo link K in 𝒯× I can be represented by an (oriented) mixed link, whose fixed part is the (marked) Hopf link H, representing the thickened torus. We have the following:
An (oriented) H-mixed pseudo link in S^3 is an (oriented) spatial pseudo link H∪ K which contains the marked Hopf link H, as a point-wise fixed sublink, and the sublink K resulting by removing H, as the moving part of the mixed pseudo link, such that there are no precrossing discs between the fixed and the moving part.
For an example view the right-hand illustration of Figure <ref>.
Then, by the same reasoning as for O-mixed pseudo links, we obtain the following:
Isotopy classes of (oriented) pseudo links in the thickened torus are in bijection with isotopy classes of (oriented) H-mixed pseudo links in S^3, via isotopies that keep H point-wise fixed.
We now shift our focus back to the diagrammatic approach. From now on we consider H to be a fixed diagram of the Hopf link on a projection plane.
An (oriented) H-mixed pseudo link diagram is an (oriented) regular projection H∪ D of an (oriented) H-mixed pseudo link H∪ K on the plane of H, such that some double points are precrossings, as projections of the precrossing discs of H∪ K, there are no precrossings between the fixed and the moving part, and the rest of the crossings, which are either crossings of arcs of the moving part or mixed crossings between arcs of the moving and the fixed part, are endowed with over/under information.
Consider now an isotopy of an (oriented) H-mixed pseudo link H∪ K in S^3 keeping H point-wise fixed. Combining the equivalence of planar pseudo link diagrams (recall Section <ref>) and the theory of mixed links (recall <cit.>) we obtain that, in terms of H-mixed pseudo link diagrams, this isotopy translates into a sequence of local moves comprising planar isotopy and the classical, pseudo and mixed Reidemeister moves (recall Figures <ref> and <ref>). Note that, as supposed to the theory of O-mixed pseudo links, the fixed part of the H-mixed pseudo links now involves a crossing, which a moving strand can freely cross, giving rise to an extra mixed Reidemeister 3 move as illustrated in Figure <ref>.
The above lead to the discrete diagrammatic equivalence of H-mixed pseudo links:
[The H-mixed pseudo Reidemeister equivalence]
Two (oriented) H-mixed pseudo links in S^3 are isotopic if and only if any two (oriented) H-mixed pseudo link diagrams of theirs differ by planar isotopies, a finite sequence of the classical and the pseudo Reidemeister moves, as exemplified in Figure <ref>, for the moving parts of the mixed pseudo links, and moves that involve the fixed and the moving parts, the mixed Reidemeister moves, comprising the moves MR_2, MR_3, MPR_3, exemplified in Figures <ref> and <ref>.
Further, Theorems <ref>, <ref> and <ref> culminate in the following diagrammatic equivalence:
Two (oriented) toroidal pseudo link diagrams are (oriented) pseudo Reidemeister equivalent if and only if any two corresponding H-mixed pseudo link diagrams of theirs are H-mixed pseudo Reidemeister equivalent.
If we exclude PR_1-moves of Figure <ref> for the moving part of a mixed pseudo link and if we change the precrossings to singular crossings (as in Remark <ref>), we obtain from Theorem<ref> the analogue of the Reidemeister theorem for toroidal singular links in terms of mixed links.
§.§ Toroidal inclusions
Toroidal pseudo link diagrams with no precrossings can be viewed as link diagrams in the torus, and Reidemeister equivalence in the torus is compatible with toroidal pseudo link equivalence. Thus, there is an injection of toroidal links in toroidal pseudo links. We note that toroidal link diagrams modulo the Reidemeister equivalence correspond bijectively to isotopy classes of links in the thickened torus. So, from the lift of pseudo links (Definition <ref>) and Theorem <ref> we have an injection of links in the thickened torus into pseudo links the thickened torus.
The inclusion of a disc in the torus induces an injection of the theory of (planar) pseudo links into the theory of toroidal pseudo links.
Further, the inclusion of the annulus in the torus induces an injection of the theory of annular pseudo links into the theory of toroidal pseudo links. View Figure <ref>. Similarly, from the above and by Theorem <ref>, the inclusion of a three-ball in the thickened torus induces an injection of the theory of spatial pseudo links into the theory of pseudo links in the thickened torus.
Moreover, using the description of the thickened torus as the gluing of two thickened annuli and the inclusion of a thickened annulus (that is, a solid torus) in the thickened torus, the above observations and Theorem <ref> lead to an injection of the theory of pseudo links in the solid torus into the theory of pseudo links in the thickened torus.
On the other hand, the inclusion of the thickened torus in the solid torus induces a surjection of the theory of pseudo links in the thickened torus onto pseudo links in the solid torus, where the meridional windings trivialize. View Figure <ref>. In view of Theorem<ref>, this surjection induces also a surjection of the theory of toroidal pseudo links onto the theory of annular pseudo links.
Finally, the inclusion of the thickened torus in an enclosing 3-ball as in Figure <ref> induces a surjection of the theory of pseudo links in the thickened torus onto spatial pseudo links, where the meridional and longitudinal windings trivialize. By Theorem<ref>, this surjection induces a surjection of the theory of toroidal pseudo links onto the theory of planar pseudo links.
In terms of H-mixed pseudo links, the inclusion of the thickened torus in the solid torus corresponds to omitting the fixed curve m, so we are left with an O-mixed pseudo link. Further, the inclusion of the thickened torus in a 3-ball corresponds, in terms of H-mixed pseudo links, to omitting the fixed part H entirely, so we are left with a planar pseudo link.
We end this subsection with another remark.
Viewing the torus as the gluing of two annuli, one can try to project toroidal pseudo link diagrams to one of the two annular surfaces, say the upper one.
When projecting a toroidal pseudo link diagram that cannot be isotoped within an annulus, the result is an annular pseudo link that may include an additional type of crossings, namely, virtual crossings, which are not real crossings (cf. <cit.>). This occurs because a curve wrapping around the meridian of the torus projects in the annulus in such a way that it appears to cross another arc, even though no such crossing exists on the initial toroidal pseudo link diagram. The result is an annular virtual pseudo link diagram. A comparative example is illustrated in Figure <ref>, where the virtual crossing in the right-hand illustration is depicted as an encircled flat crossing.
§.§ The weighted resolution set for toroidal pseudo links
In this subsection we extend to toroidal pseudo links the notion of the weighted resolution set, defined first in <cit.> for planar pseudo links (Definition <ref>) and extended in Definition <ref> for annular pseudo links, and we prove that it is an invariant of theirs. Indeed, we define:
A resolution of a toroidal pseudo link diagram K is a specific assignment of crossing types (positive or negative) for every precrossing in K. The result is a link diagram in the torus, which lifts to a link in the thickened torus. Recall Subsection <ref>.
Similarly, an H-resolution of the H-mixed pseudo link diagram H ∪ K is a specific assignment of crossing types (positive or negative) for every precrossing in H ∪ K. The result is an H-mixed pseudo link in S^3, representing uniquely the lift of the resolution of K in 𝒯× I.
The toroidal weighted resolution set or toroidal WeRe set, of a toroidal pseudo link diagram K is a collection of ordered pairs (K_i, p_K_i), where K_i represents a resolution of K, and p_K_i denotes the probability of obtaining from K the equivalence class of K_i through a random assignment of crossing types, with equal likelihood for positive and negative crossings.
Similarly, the H-weighted resolution set or H-WeRe set, of an H-mixed pseudo link diagram H∪ K is a collection of ordered pairs ( H∪ K_i, p_K_i), where H∪ K_i represents a resolution of H∪ K, and p_K_i denotes the probability of obtaining from H∪ K the equivalence class of H∪ K_i through a random assignment of crossing types, with equal likelihood for positive and negative crossings.
The definitions above lead to the following:
The toroidal WeRe set is an invariant of toroidal pseudo links. Similarly, the H-WeRe set is an invariant of H-mixed pseudo links. Subsequently, any invariant of links in the thickened torus resp. of H-mixed links, applied on the elements of a toroidal resp. an H- WeRe set, induces also an invariant set of the toroidal resp. the H-mixed pseudo link.
The proof follows the same approach as in the cases of planar and annular resp. O-mixed pseudo link diagrams (recall Theorem <ref> of <cit.> and Theorem <ref>). Indeed, the toroidal WeRe set is invariant under the standard and pseudo Reidemeister moves of Definition <ref> since these moves are compatible with the corresponding moves with no precrossings.
Further, since an H-mixed pseudo link diagram H∪ K is by definition a planar pseudo link diagram, thus its H-weighted resolution set is an invariant of H∪ K by Theorem <ref>.
Moreover, any invariant of links in the thickened torus (e.g. <cit.>) applied to the elements of the toroidal WeRe set resp. H-WeRe set will respect the same set of local equivalence moves, hence will preserve the WeRe sets, hence it induces an invariant set of the toroidal resp. H-mixed pseudo link.
The WeRe set of the toroidal pseudo trefoil knot illustrated on the left of Figure <ref> is the same as the WeRe set of the same pseudo trefoil knot viewed as annular. However, the illustration on the right-hand side of Figure <ref> is more interesting as it is purely toroidal. So, in the projection on the annulus it requires the presence of a virtual crossing, recall Subsection <ref>. For illustration purposes only, we present in Figure <ref> the resolution set of this toroidal pseudo knot as annular with a virtual crossing, depicted as an encircled flat crossing. Note that the isotopy moves of virtual knot theory do not apply here.
Conclusions
The transition from planar to annular and then to toroidal setting introduces increasing complexity and additional factors that must be taken into account. This study provides deeper insights into the interactions among planar, annular and toroidal pseudoknots and the topological properties of their ambient spaces.
99
H R. Hanaki, Pseudo diagrams of links, links and spatial graphs, Osaka J. Math., 47, (2010) 863–883.
HJMR A. Henrich, R. Hoberg, S. Jablan, L. Johnson, E. Minten, L. Radovic, The theory of pseudoknots, J. Knot Theory and Ramifications, 22, No. 07, (2013) 1350032.
BJW V. Bardakov, S. Jablan & H. Wang, Monoid and group of pseudo braids, J. Knot Theory and Ramifications, 25, No. 09, 1641002 (2016).
D I. Diamantis, Pseudo links and singular links in the Solid Torus, Communications in Mathematics, Vol. 31, Issue 1 (2023).
LK2 L. H. Kauffman, Introduction to virtual knot theory, J. Knot Theory and Ramifications, 21, No. 12, 1240007 (2012).
LR1 S. Lambropoulou, C.P. Rourke, Markov's theorem in 3-manifolds, Topology and its Applications 78,
(1997) 95-122.
DLM1 I. Diamantis, S. Lambropoulou, S. Mahmoudi, Equivalences of doubly periodic tangles, arXiv:2310.00822 (2023).
DLM2 I. Diamantis, S. Lambropoulou, S. Mahmoudi, Directional invariants of doubly periodic tangles, Symmetry 2024, 16(8), 968. https://doi.org/10.3390/sym16080968.
DLM3 I. Diamantis, S. Lambropoulou, S. Mahmoudi, Pseudo single and doubly periodic tangles, in preparation.
La1 S. Lambropoulou, Solid torus links and Hecke algebras of B-type, Quantum Topology; D.N. Yetter Ed.; World Scientific Press, (1994), 225-245.
Tu V.G. Turaev, The Conway and Kauffman modules of the solid torus, Zap. Nauchn. Sem. Lomi 167 (1988), 79–89. English translation: J. Soviet Math. (1990), 2799-2
HK J. Hoste, M. Kidwell, Dichromatic link invariants, Trans. Amer. Math. Soc. 321 (1990), No. 1, 197-229.
La2 S. Lambropoulou, Knot theory related to generalized and cyclotomic Hecke algebras of type B, J. Knot Theory Ramifications 8, No. 5, (1999) 621-658.
P J. Przytycki, Skein modules of 3-manifolds, Bull. Pol. Acad. Sci.: Math., 39, 1-2 (1991), 91-100.
Zenkina M.V. Zenkina, V.O. Manturov, An invariant of links in a thickened torus. J. Math. Sci. 175 (2011) , 501–508.
|
http://arxiv.org/abs/2409.03269v1 | 20240905062709 | A spherical harmonic-domain spatial audio signal enhancement method based on minimum variance distortionless response | [
"Huawei Zhang",
"Jihui",
"Zhang",
"Huiyuan",
"Sun",
"Prasanga Samarasinghe"
] | eess.AS | [
"eess.AS"
] |
A spherical harmonic-domain spatial audio signal enhancement method based on minimum variance distortionless response
This work is sponsored by the Australian Research Council (ARC) Discovery Projects funding scheme with project number DE230101567.
Huawei Zhang^1, Jihui (Aimee) Zhang^2, 1, Huiyuan (June) Sun^1, Prasanga Samarasinghe^1
^1Audio & Acoustic Signal Processing Group, The Australian National University, Canberra, Australia
^2Institute of Sound and Vibration Research, University of Southampton, Southampton, U.K.
September 5, 2024
============================================================================================================================================================================================================================================================================================
§ ABSTRACT
Spatial audio signal enhancement aims to reduce interfering source contributions while preserving the desired sound field with its spatial cues intact.
Existing methods generally rely on impractical assumptions (e.g. no reverberation or accurate estimations of impractical information) or have limited applicability.
This paper presents a spherical harmonic (SH)-domain minimum variance distortionless response (MVDR)-based spatial signal enhancer using Relative Harmonic Coefficients (ReHCs) to extract clean SH coefficients from noisy ones in reverberant environments.
A simulation study shows the proposed method achieves lower estimation error, higher speech-distortion-ratio (SDR), and comparable noise reduction (NR) within the sweet area in a reverberant environment, compared to a beamforming-and-projection method as the baseline.
Spatial audio signal enhancement, relative harmonic coefficient, relative transfer function, minimum variance distortionless response (MVDR), beamforming, spherical harmonic.
§ INTRODUCTION
With the rapid increase of spatial audio applications in the last decade, there has been a growing demand for spatial audio signal enhancement (also known as ambisonic-to-ambisonic separation) <cit.>, which separates a desired sound field from interference source contributions while preserving the desired spatial cues.
This technology acts as a fundamental step in many audio signal processing applications including sound field recording <cit.>, sound field reproduction <cit.>, and spatial active noise control <cit.>.
We categorize existing methods for spatial audio enhancement into three kinds: (i) Beamforming-and-projection methods <cit.>, (ii) Multi-channel Wiener Filter methods <cit.>, and (iii) Learning based methods <cit.>.
Beamforming-and-project methods often include at least two stages: a beamforming stage to capture the desired source signal or the desired sound pressure at a point, and a projection stage to reconstruct the desired sound field using the point-to-region acoustic transfer functions (ATFs).
While these solutions have been achieved over space via multi-point processing <cit.> as well as spherical harmonic (SH) domain processing <cit.>, they make one or more impractical assumptions such as no reverberation or the accurate estimation of ATFs.
Multi-channel Wiener Filter methods achieve enhancement by preserving the estimated power spectral density (PSD) matrix of the desired SH coefficients and reducing the estimated PSD matrix of interference SH coefficients.
However, the accurate estimation of the former PSD matrix is difficult.
Learning-based methods up to today, have mainly focused on separating speech sources in the SH domain and required a suitable dataset for pre-training.
In short, existing methods are generally either with impractical assumptions or applied in limited conditions.
Relative Harmonic Coefficients (ReHCs) <cit.>, also named as SH-domain relative transfer functions <cit.>, are denoted as the ratio between SH coefficients and the specific SH coefficient at the 0-th order and the 0-th mode.
These ReHCs have been used in beamformer design <cit.> and spatial audio signal enhancement <cit.>.
These ReHCs can be estimated based on corresponding SH-domain PSD matrices obtained from the received microphone signals <cit.>.
However, the potential of ReHCs has not been fully studied.
In this paper, we develop an SH-domain minimum variance distortionless response (MVDR) spatial audio signal enhancement method with multiple beamformers based on ReHCs.
Motivated by a binaural MVDR beamforming method <cit.> and a multi-output MVDR beamforming method <cit.>, we extend the conventional SH-domain MVDR beamforming method <cit.> to be a multi-output variant.
This proposed method extracts the SH coefficients due to the desired sound source from the mixed SH domain recording.
Different from the conventional single-output MVDR beamforming method (losing the spatial information) <cit.> and the beamforming-and-projection methods (requiring a projection stage to reconstruct spatial information), this proposed method can directly extract the spatial information of the desired sound field.
Moreover, the proposed beamforming method is developed with a more practical foreknowledge of ReHCs, which is used to construct a set of spatial constraints to preserve desired SH coefficients.
§ PROBLEM FORMULATION
As shown in Fig. <ref>, consider a scenario with a desired sound source, an interference sound source, with a spherical sweet area with the radius r_s and a Q-microphone spherical array of the radius r_a placed concentrically at the origin O in a reverberated room.
The two sound sources are placed at fixed positions outside the spherical area, causing a desired sound field and an interference sound field within the sweet area, respectively.
We assume the desired sound source is uncorrelated with the interference sound source, and the interference sound source signal is stable.
In this paper, we focus on far-field scenarios.
Each sound field pressure within the sweet area is denoted as x(t, k, r, θ, ϕ) in the time-frequency (TF) domain, where t is the time frame, k = 2 π f / c is the wavenumber, f is the frequency, c is the speed of sound, r, θ, and ϕ are the radius, the elevation angle, and the azimuth angle, respectively.
The sound field pressure x(t, k, r, θ, ϕ) can be decomposed as
x(t, k, r, θ, ϕ) = d(t, k, r, θ, ϕ) + v(t, k, r, θ, ϕ),
where d(t, k, r, θ, ϕ) and v(t, k, r, θ, ϕ) are the sound pressure of the desired field and the interference field, respectively.
The received microphone signal at the q-th microphone is denoted as
x_q(t, k) = x(t, k, r_q, θ_q, ϕ_q) + u(t, k)
= d_q(t, k) + v_q(t, k) + u(t, k),
where d_q(t, k), v_q(t, k), u(t, k) are the desired microphone signal, the interference microphone signal, and the random sensor noise, respectively, r_q, θ_q, and ϕ_q are the radius, the elevation angle, and the azimuth angle, respectively, at the q-th microphone.
The microphone signal x_q(t, k) can be decomposed into SH coefficients to represent the mixed field as <cit.>
x_q (t, k) = ∑_n=0^N_k∑_m=-n^n x̃_nm(t, k) j_n(kr_q) Y_n m(θ_q, ϕ_q),
where x̃_nm(t, k) is the mixed SH coefficient representing the mixed field at the n-th order and the m-th mode, j_n(·) is the Spherical Bessel function of the first kind, Y_nm(·) is the SH function, and N_k is the corresponding maximum order for the wavenumber k defined as <cit.>
N_k = ceil(kr_a) = ceil(2 π f/cr_a),
where ceil( ) is the ceiling function.
For brevity, the time frame t and the wave number k will be omitted in the rest of this paper.
Obtained L = (N+1)^2 mixed SH coefficients 𝐱̃ = [x̃_00, x̃_1-1, ⋯, x̃_nm, ⋯, x̃_N N]^T can be divided as
𝐱̃ = SHT(𝐱) = SHT(𝐝 + 𝐯 + 𝐮) = 𝐝̃ + 𝐯̃ + 𝐮̃,
where the SHT is the Spherical Harmonic Transform based on Eq. (<ref>) <cit.>, 𝐱 = [x_1, ⋯, x_q, ⋯, x_Q]^T are received microphone signals at the array, 𝐝, 𝐯 and 𝐮 are received signals from the desired field, the interference field, and sensor noises, respectively, 𝐝̃, 𝐯̃ and 𝐮̃ are SH coefficients of the desired field, the interference field, and sensor noises, respectively.
The task of this work is to extract desired SH coefficients 𝐝̃ from mixed SH coefficients 𝐱̃ obtained from the mixture recording 𝐱.
§ PROPOSED MULTI-OUTPUT MVDR METHOD
In this section, we use ReHCs to develop an SH-domain multi-output MVDR beamforming method to estimate desired SH coefficients.
The proposed method is a multi-output extension of a SH-domain MVDR beamforming method, consisting of L beamformers 𝐰̃ = [𝐰̃_00^T, 𝐰̃_1-1^T, ⋯, 𝐰̃_nm^T, ⋯, 𝐰̃_NN^T]^T as a L^2 × 1 vector <cit.>.
The estimated desired SH coefficient d̂̃̂_nm at the n-th order and the m-th mode is obtained with the corresponding beamformer 𝐰̃_nm as
d̂̃̂_nm = 𝐰̃_nm^H 𝐱̃.
The beamformer 𝐰̃_nm is required to preserve the desired SH coefficient and reduce SH coefficients of interference and sensor noises.
As it is difficult to obtain accurate ATFs, similar to <cit.>, we use ReHCs to develop a cost function with a spatial constraint as
minimize_𝐰̃_nm 𝐰̃_nm^H 𝐑̃_𝐯+𝐮𝐰̃_nm
subject to 𝐰̃_nm^H 𝐡̃ = h̃_nm,
where 𝐑̃_𝐯+𝐮 = 𝐑̃_𝐯 + 𝐑̃_𝐮 is the summaration of the SH-domain PSD matrices of interference 𝐑̃_𝐯 = 𝔼 [𝐯̃𝐯̃^H] and that of sensor noises 𝐑̃_𝐮 = 𝔼 [𝐮̃𝐮̃^H], 𝔼[ ] is math expectation, 𝐡̃ = [h̃_00, h̃_1-1, ⋯, h̃_nm, ⋯, h̃_NN]^T are ReHCs for the desired field, and h̃_nm is the corresponding ReHC at the n-th order and the m-th mode.
These ReHCs 𝐡̃ can be obtained as <cit.>
𝐡̃
= 𝐝̃/d̃_00
=𝐑̃_𝐝𝐞̃_1/𝐞_1^H 𝐑̃_𝐝𝐞̃_1≈𝐑̂̃̂_𝐝+𝐮𝐞̃_1/𝐞̃_1^H 𝐑̂̃̂_𝐝+𝐮𝐞̃_1,
where d̃_00 is the desired SH coefficient at the 0-th order and the 0-th mode, 𝐑̃_𝐝 = 𝔼 [𝐝̃𝐝̃^H] is the SH-domain PSD matrix of desired SH coefficients, 𝐞̃_1 = [1 0_1 × (L-1)]^T, 𝐑̂̃̂_𝐝+𝐮 is the estimated SH-domain PSD matrix obtained from received microphone signals when only the desired sound source is active.
Combining cost functions as Eq. (<ref>) in all orders and all modes, a new cost function is derived <cit.> to design 𝐰̃ as
minimize_𝐰̃ 𝐰̃^H ℛ̃_𝐯+𝐮𝐰̃
subject to 𝐂̃^H 𝐰̃ = 𝐛̃,
where 𝐛̃ = conj(𝐡̃) is a L × 1 column vector, conj( ) is conjugate operation, ℛ̃_𝐯+𝐮 and 𝐂̃ are two large matrices constructed by 𝐑̃_𝐯+𝐮 and 𝐡̃, respectively, as
ℛ̃_𝐯+𝐮 =
[ 𝐑̃_𝐯+𝐮 ; 𝐑̃_𝐯+𝐮 ; ⋯ ; 𝐑̃_𝐯+𝐮 ]_L^2 × L^2,
𝐂̃ =
[ 𝐡̃ ; 𝐡̃ ; ⋯ ; 𝐡̃ ]_L^2 × L.
The optimal solution of Eq. (<ref>) can be obtained as <cit.>
𝐰̃ = ℛ̃_𝐯+𝐮^-1𝐂̃ [𝐂̃^H ℛ̃_𝐯+𝐮^-1𝐂̃]^-1𝐛̃.
Each beamformer 𝐰̃_nm can be obtained from 𝐰̃ directly.
After that, we can obtain an estimation 𝐝̂̃̂ of desired SH coefficients for all orders and all modes with Eq. (<ref>).
The estimated sound field pressures with spatial cues over the sweet area can be further obtained based on the Inverse Spherical Harmonic Transform (ISHT) <cit.> from the estimated desired SH coefficients 𝐝̂̃̂.
However, the spatial cues of the residual interference sound field may not be preserved, similar to <cit.>.
§ SIMULATION AND RESULTS
In this section, we evaluate the performance of the proposed method compared with a beamforming-and-projection method <cit.> as the baseline in a reverberant room.
The baseline is implemented with TF-domain MVDR beamforming <cit.>, requiring the summation of the TF-domain PSD matrix of interference and that of sensor noises.
§.§ Setup
A speech signal <cit.> is used as the desired source at (4.60, 4.05,
1.70) m.
A recorded washer-dryer noise signal <cit.> is used as the interference source at (1.60, 1.05, 1.20) m.
We set a 32-mic open spherical array at (1.60, 4.05, 1.70) m, with the radius r_a = 0.042 m.
The array has the same size and microphone positions as the em32 Eigenmike spherical microphone array <cit.>.
White Gaussian noises are added to the microphone array as sensor noises in terms of 35 dB signal-sensor noise-ratio (SSNR), where the ‘signal’ in SSNR refers to the desired source signal.
The room size is 5×6×4 m and the reverberation time is T_60 = 0.2 s.
The sampling frequency is 16000 Hz.
The radius of the sweet area is r_s = r_a = 0.042 m.
The speed of sound is 343 m/s.
All room impulse responses (RIRs) are simulated with a toolbox using the image source method <cit.>.
We first apply the Short-Time Fourier Transform with the frame size as 16384 and 75% overlap to the time-domain received microphone signals.
Then we apply the SHT with the corresponding maximal order N_k to each frequency bin of obtained TF-domain microphone signals.
We simulate accurate PSD matrices of interference signals and sensor noises as prior knowledge in the TF domain and the SH domain, facilitating the baseline method and the proposed method, respectively.
The PSD matrix of interference signals is calculated based on the simulated interference source signal power and corresponding ATFs from simulated RIRs.
The PSD matrix of sensor noises is calculated by the averaged estimated PSD matrix of simulated sensor noises.
For the baseline method <cit.>, ATFs for the desired sound source are estimated during a mixture recording, requiring the accurate direction of arrival (DoA) as prior knowledge.
In <cit.>, frequency smooth operation has been performed with 9 frequency bins around each target frequency bin.
The following simulation results in Sec. <ref> also show such frequency smooth is robust to different acoustic environments.
As the output of the baseline method is the estimated desired microphone signals, for a fair comparison with the proposed method, the SHT is applied to the output of the baseline method to estimate desired SH coefficients.
We can then reconstruct the corresponding estimated desired field by applying the ISHT from estimated desired SH coefficients of both the proposed method and the baseline method.
We evaluate the performance of both the proposed method and the baseline method among the common telephone bandwidth (from 300 Hz to 3400 Hz) <cit.> due to: (1) The key information of speech signals exist within the bandwidth; (2) The Spherical Bessel function of the first kind j_n(kr) will achieve values close to 0 among some higher frequency bins, resulting in the Bessel Zero problem <cit.>.
Therefore, at most 3-order SH coefficients are required based on Eq. (<ref>) in the following simulations.
The Bessel Zero problem can be relieved by using the spherical-shell microphone array <cit.>.
§.§ Performance Analysis Based on Sound Field Estimation Error
We compare the estimated desired sound fields and evaluate the corresponding estimation errors within the sweet area when the signal-noise-ratio (SNR) is 0 dB, where the ‘signal’
in SNR refers to the desired source signal and the ‘noise’ in SNR refers to the interference source signal.
The normalized square error ϵ(t, k) as the estimation error of the estimated desired sound field (sound pressures) within the sweet area is defined as:
ϵ(t, k) = 10 log_10‖𝐝_s(t, k) - 𝐝̂_s(t, k) ‖^2/‖𝐝_s(t, k) ‖^2,
where 𝐝_s(t, k) and 𝐝̂_s(t, k) are the true desired sound field pressures and the estimated desired sound field pressures at each observation point respectively, and ‖ ‖^2 denotes the 2-norm operation.
Fig. <ref> compares the estimated desired field by the proposed method, the estimated desired field by the baseline method, and the true desired field at one time frame at 1500 Hz on the x-y plane.
Each sound field with corresponding estimation error defined in Eq. (<ref>) is presented for 441 observation points evenly distributed over the x-y plane.
By comparing Fig. <ref>(b), Fig. <ref>(c), and Fig. <ref>(e), we observe that both the proposed method and the baseline method reconstructed a desired field similar to the true desired field within the sweet area.
Fig. <ref>(d) and Fig. <ref>(f) show that the proposed method achieved a lower estimation error of less than about -15 dB, compared with the baseline method.
§.§ Performance Analysis over Frequency
In Fig. <ref>, we further evaluate the average of the sound field estimation error, the speech-distortion-ratio (SDR), and the noise reduction (NR) within the sweet area across 15 time frames against different signal frequencies.
Here, SDR and NR are defined as <cit.>
SDR(t, k) = 10 log_10‖𝐝_s(t, k) ‖^2/‖𝐝^res_s(t, k) - 𝐝_s(t, k) ‖^2,
NR(t, k) = 10 log_10‖𝐯_s(t, k) ‖^2/‖𝐯^res_s(t, k) + 𝐮^res_s(t, k) ‖^2,
where 𝐯_s(t, k) is true interference field pressures, 𝐝^res_s(t, k) and 𝐯^res_s(t, k) are residual desired sound field pressures and residual interference sound field pressures after processing, respectively, and 𝐮^res_s(t, k) is representing the influence of residual sensor noises, at these observation points within the spherical sweet area.
The evaluation is based on corresponding sound pressures at 107 observation points evenly distributed over the spherical sweet area.
As shown in Fig. <ref>, the proposed method with estimated ReHCs achieved lower estimation error, higher SDR, and comparable NR (more than 25 dB NR is effective enough) than the baseline method ranging from 300 Hz to 3400 Hz.
In detail, the proposed with estimated ReHCs achieved a lower than -15 dB estimation error and a higher than 15 dB SDR at the majority of chosen frequency bins, which outperforms the baseline.
Fig. <ref> also shows the influence of the accuracy of ReHCs.
As a comparison, we play a segment of 0 dB white Gaussian noise instead of the original desired source signal at the same source position, and then use the same method in Sec. <ref> to obtain another set of ReHCs as accurate ReHCs.
These ReHCs are more accurate as white Gaussian noise is more stable than speech signals at different frequency bins.
In Fig. <ref>, we can find the proposed method with accurate ReHCs outperformed that with estimated ReHCs in terms of estimation error and SDR, and achieved comparable NR performance, which implies the performance of the proposed method is influenced by the accuracy of estimation ReHCs.
§.§ Performance Analysis over Varying Reverberation Times and SNR Levels
Here, we evaluate the performance of both the proposed method and the baseline method for varying reverberation times T_60 and varying SNRs, averaging within the spherical sweet area at the bandwidth for 15 time frames.
The evaluation is based on corresponding sound pressures at 107 observation points as Sec. <ref>.
The estimated ReHCs are used in the proposed method as Sec. <ref>.
Firstly, we evaluate the performance of these two methods over varying reverberation times T_60 as shown in TABLE <ref>.
Here, we vary the T_60 from 0 s to 0.4 s, while the SNR is set to be 0 dB.
With the reverberation time T_60 increased, both methods achieved higher estimation error, lower SDR, and lower NR.
In addition, the proposed method always achieved lower estimation error, higher SDR, and comparable NR than the baseline method for varying T_60 (more than 25 dB NR is effective enough).
Secondly, we evaluate the performance of these two methods over varying SNR levels as shown in TABLE <ref>.
Here, we vary the SNR level from 5 dB to -5 dB, and remain the T_60 as 0.2 s.
TABLE <ref> shows that the proposed method maintained stable performance for varying SNR levels in terms of estimation error (about -19 dB) and SDR (about 22 dB), always outperforming the baseline method.
By contrast, with the SNR level decreased, both estimation error and SDR achieved by the baseline method declined.
In addition, both the proposed method and the baseline method achieved comparable and stable NR (more than about 28 dB) for varying SNR levels.
§ CONCLUSION
In this paper, we propose a spherical harmonic (SH)-domain minimum variance distortionless response (MVDR) method to estimate the desired sound field from the mixture recording at a spherical microphone array.
We use a cost function with a set of spatial constraints to extract desired SH coefficients and suppress SH coefficients of interference and sensor noises.
The field due to the desired sound source within the sweet area can be hence reconstructed.
Simulation results show that the proposed method can outperform the baseline methods within the sweet area in a reverberant room.
In the future, we plan to further examine the proposed method in real-world scenarios.
IEEEtran
|
http://arxiv.org/abs/2409.03028v1 | 20240904183559 | Rigid-Body Attitude Control on $\mathsf{SO(3)}$ using Nonlinear Dynamic Inversion | [
"Hafiz Zeeshan Iqbal Khan",
"Farooq Aslam",
"Muhammad Farooq Haydar",
"Jamshed Riaz"
] | eess.SY | [
"eess.SY",
"cs.SY",
"math.OC"
] |
JuliaQCD: Portable lattice QCD package in Julia language
Akio Tomiya
September 9, 2024
========================================================
empty
empty
§ ABSTRACT
This paper presents a cascaded control architecture, based on nonlinear dynamic inversion (NDI), for rigid body attitude control. The proposed controller works directly with the rotation matrix parameterization, that is, with elements of the Special Orthogonal Group 3, and avoids problems related to singularities and non-uniqueness which affect other commonly used attitude representations such as Euler angles, unit quaternions, modified Rodrigues parameters, etc. The proposed NDI-based controller is capable of imposing desired linear dynamics of any order for the outer attitude loop and the inner rate loop, and gives control designers the flexibility to choose higher-order dynamic compensators in both loops. In addition, sufficient conditions are presented in the form of linear matrix inequalities (LMIs) which ensure that the outer loop controller renders the attitude loop almost globally asymptotically stable (AGAS) and the rate loop globally asymptotically stable (GAS). Furthermore, the overall cascaded control architecture is shown to be AGAS in the case of attitude error regulation. Lastly, the proposed scheme is compared with an Euler angles-based NDI scheme from literature for a tracking problem involving agile maneuvering of a multicopter in a high-fidelity nonlinear simulation.
§ INTRODUCTION
The rigid-body attitude control problem is central to numerous aerospace and robotics applications. Given the highly nonlinear nature of the problem, control strategies based on feedback linearization have received considerable attention over the years <cit.>. In general, feedback linearization uses coordinate transformation and feedback to achieve exact cancellation of certain nonlinearities, thereby transforming a nonlinear dynamical system into a linear, or partially linear, dynamical system for which a suitable controller is then designed <cit.>.
A feedback linearization method that has been studied extensively for aerospace applications is known as nonlinear dynamic inversion (NDI). Early results on NDI-based flight control, such as <cit.>, used Euler angles to describe rigid-body attitude. The same parametrization was also used to develop feedback linearizing control laws for quadrotor UAVs <cit.>. However, in recent years, there has been growing interest in the design of NDI-based attitude control laws which work directly with the rotation matrix representation, also known as the direction cosine matrix (DCM).
These rotation matrices evolve on matrix Lie group 3, known
as Special Orthogonal Group. Since the rotation matrix provides an attitude representation which is both globally defined and unique <cit.>, it can be used to develop attitude control laws which are plagued neither by the kinematic singularities associated with Euler angles nor the problem of unwinding associated with the unit quaternion attitude representation. Consequently, several researchers have sought to develop control schemes based on nonlinear dynamic inversion and incremental nonlinear dynamic inversion for attitude and position control <cit.>.
In <cit.>, dynamic inversion is used to control the attitude and airspeed of a high-altitude long-endurance flexible aircraft with the aircraft attitude being described using the rotation matrix parametrization. Input-output linearization is performed with the angular velocity ω and the aircraft forward velocity taken as the outputs. Thereafter, a geometric PID controller <cit.> is used for the attitude dynamics. In a similar vein, <cit.> addresses the rigid-body attitude stabilization problem on 3 using partial state-feedback (or input-output) linearization. The authors investigate different output functions for obtaining locally and almost globally stabilizing feedback linearizing controllers with well-behaved zero dynamics. However, they do not consider the problem of attitude tracking or issues related to robustness. On the other hand, <cit.> develops a feedback linearizing controller on the Special Euclidean Group 3 for attitude and position control of a quadrotor UAV operating in a windy environment. Dynamic inversion is used in conjunction with a geometric PD controller <cit.> for the attitude dynamics and a variable-gain algorithm for handling rotor thrust saturation. Another effective solution has been developed in <cit.>, where feedback linearization is used in conjunction with a learned acceleration error model to account for modeling errors and external disturbances, and to obtain a controller suitable for aggressive quadrotor flight.
As noted above, the feedback linearizing controllers developed in <cit.> utilize geometric PD or PID control laws such as those developed in <cit.>. It can be desirable from both a theoretical and a practical viewpoint to extend these geometric feedback linearization approaches so that they encompass a broader class of stabilizing linear dynamic controllers. To this end, this paper addresses the rigid-body attitude tracking problem on 3 using feedback linearization and linear dynamic compensation. In particular, a cascaded control architecture is considered which consists of an outer attitude loop and an inner angular rate loop with linear dynamic compensation in both the attitude and velocity loops. Sufficient conditions are obtained which ensure that the attitude loop is almost globally asymptotically stable (AGAS) and the rate loop globally asymptotically stable (GAS). These conditions are expressed in the form of linear matrix inequalities (LMIs).
Furthermore, in the case of attitude regulation, we show that the overall cascaded architecture renders the closed-loop system to be AGAS.
The main contribution of the paper is to extend existing geometric nonlinear control approaches so that they include more general linear dynamic controllers and provide practitioners with greater freedom in designing feedback linearizing attitude control laws on 3. Moreover, we hope that the developments detailed in this paper will allow control designers to combine linearization-based synthesis methods with geometric feedback linearizing control laws, as well as make it easier to incorporate linear models for actuator and/or sensor dynamics into the problem formulation.
The rest of the paper is structured as follows: essential background and important results are summarized in Section <ref> along with some remarks on notation, and the main results are presented in Section <ref>. Thereafter, Section <ref> presents an example of agile maneuvering of a multicopter to demonstrate the effectiveness of the proposed scheme, and Section <ref> concludes the discussion.
§ PRELIMINARIES
In this section, the rigid body attitude dynamics and kinematics are briefly discussed, and few key properties of associated operators are revisited from literature for completeness. Before presenting attitude dynamics of a rigid body, let us briefly introduce some notation. is inertial frame, fixed and centered at earth’s surface. is body fixed frame, centered at C.G. of the rigid body.
Let ω be the angular velocity of body w.r.t. to expressed in body frame . Then the rotational dynamics can be written as
ω̇ = J^-1[τ - ω× Jω - f(ω,μ)],
where J is the inertia matrix, τ is the control torque, and f(ω,μ) contains other torques acting on the body e.g. aerodynamic damping, gravitational torques, etc. The parameters vector μ, assumed to be either measured or estimated, is considered here to incorporate effects of other parameters such as aerodynamic angles, Mach number, dynamic pressure, etc.
The orientation R of a rigid body, attitude transformation matrix from body frame to inertial frame , evolves over 3, i.e. a Lie group containing all 3×3 orthogonal rotation matrices of determinant +1, commonly known as Special Orthogonal group and defined as:
3≜{ R ∈3×3| R^⊤ R = RR^⊤ = I_3, (R) = 1 }.
Then the attitude kinematics of rigid body, also known as Poisson's Kinematical Equations (PKEs) <cit.>, can be written as
Ṙ = R ω,
where if ω = [ω_1,ω_2,ω_3]^⊤∈3, then
ω≜[ 0 -ω_3 ω_2; ω_3 0 -ω_1; -ω_2 ω_1 0 ].
Here the cross map (·):3↦3, transforms a vector in 3 to its cross product form such that, a× b = ab for any a,b∈3, where the Lie Algebra 3 is a vector space, or more precisely the tangent space of 3 at identity i.e. 3 = 𝖳_I3 and it can be written as follows:
3≜{ S ∈3×3| S^⊤ = -S }.
It is worth noting that the hat map is an isomorphism. Its inverse is denoted by the vee map ∨ : 3↦3. Some important properties of the hat map, which will be required in subsequent sections, are listed as follows <cit.>:
xy = x × y = -y × x = -yx,
Ax = 1/2x(A-A^⊤) = -x^⊤ (A-A^⊤)^∨,
xA+A^⊤x = [ (AI - A)x ]^×,
RxR^⊤ = (Rx)^×,
(x)^2 = -2x^⊤ x,
for any x,y∈3, A∈3× 3, and R∈3.
§ GEOMETRIC NDI CONTROL
In this section the main results are presented. A two-loop geometric NDI structure for attitude control of a rigid body is proposed. The time scale separation is assumed between cascaded loops, and can be easily enforced by appropriate choice of controller gains. The complete control architecture is shown in Fig. <ref>. In particular it can be observed that the proposed architecture is similar to that of a standard NDI controller, except the geometric configuration error and a feed-forward term, which are precisely the components which renders the attitude loop almost globally asymptotically stable.
§.§ NDI based Rate Control
The nonlinear dynamic inversion based attitude rate control, presented in this section, is a slightly generalized version of the results presented in <cit.>. This not only allows the user to choose a higher order compensator but also incorporate the feedback filters and feed-forward terms. Consider the following NDI control law:
ẋ_ω = A_ω x_ω + B_ωω + B_ω_refω_ref
τ = ω× J ω + f(ω,μ)
+ J [C_ω x_ω + D_ωω + D_ω_refω_ref]
Substituting it in the dynamics (<ref>), the closed-loop system can be written as,
[ ẋ_ω; ω̇ ] = 𝒜_ω[ x_ω; ω ] + [ B_ω_ref; D_ω_ref ]ω_ref
where,
𝒜_ω≜[ A_ω B_ω; C_ω D_ω ],
Suppose the matrix 𝒜_ω is Hurwitz which guarantees global asymptotic stability of (<ref>). Moreover, by taking Laplace transform of (<ref>) it can be easily shown that the control law (<ref>) induces the following desired dynamics,
ω(s) = [sI - Γ_ω(s)]^-1Γ_ω_ref(s) ω_ref(s)
where,
Γ_ω(s) ≜ C_ω [sI-A_ω]^-1B_ω + D_ω
Γ_ω_ref(s) ≜ C_ω [sI-A_ω]^-1B_ω_ref + D_ω_ref
For only proportional controller, A_ω, B_ω, and C_ω are empty matrices, D_ω = -K_ω, and D_ω_ref = K_ω. In that case only K_ω needs to be positive definite as in <cit.>, and it enforces the first order desired dynamics i.e. [sI+K_ω]^-1K_ω.
§.§ Geometric NDI based Attitude Control
Before proceeding with the development of geometric NDI controller, it is worth mentioning that the Lie group 3 is not a vector space, so it is not closed under addition operation. Therefore, let the desired attitude be R_d, and define the attitude error as
R_e ≜ R_d^⊤ R.
Where the matrix R_e represents the attitude transformation from the body frame () to the desired body frame (_d). Since the desired attitude R_d also evolves on 3, therefore
Ṙ_d = R_d ω_d^×.
Here it must be noted that this ω_d is different from ω_ref in previous subsection. Thus, the error dynamics can be written as,
Ṙ_e = R_e [e]ω,
where ω_e ≜ω - R_e^⊤ω_d.
Another important point is that for control design on non-Euclidean manifolds e.g. 3, a notion of norm of a point R is required, or more precisely the manifold needs to be equipped with a Riemannian metric to define several geometric notions like length, angle etc. For attitude control using rotation matrices, commonly used metrics include the chordal and geodesic metrics <cit.>, as well as a metric recently proposed in <cit.> for improved performance (relative to the chordal metric) in the case of large-angle rotational errors. In this work only chordal metric is considered, since it will result in a smooth control law, in contrast to others, which result in discontinuous controllers. The Chordal metric is defined as follows,
⟨ R_a , R_b ⟩_c ≜I - R_a^⊤ R_b^2_F = 2 I - R_a^⊤ R_b
for any R_a,R_b∈3, where ·_F represents Frobenius norm. Now the configuration error function can be defined as,
Ψ(R_d,R) ≜1/4⟨ R_d , R ⟩_c = 1/2I - R_e
Then according to <cit.> attitude error vector (e_R) is the left-trivialized derivative of the configuration error function, so
e_R ≜Ψ(R_d,R) = 1/2(R_e - R_e^⊤)^∨
There are many metrics for 3 are available in literature e.g. Geodesic (shortest-path) <cit.>, Chordal <cit.> etc. A succinct overview of metrics on 3 is available in <cit.>.
Applying hat map on (<ref>), taking its derivative and using the identities (<ref>), results in 2[R]ė = [(R_eI - R_e^⊤) ω_e]^×. Thus, the derivative of attitude error can be written as,
ė_R = ℰ(R_e) ω_e
where ℰ(R_e) ≜1/2(R_eI - R_e^⊤). It is worth noting that these dynamics cannot be inverted directly because ℰ(R_e) is not invertible at an attitude error of exp(±π/2s) and exp(±π s) for any s∈2.
Now to obtain local attitude error dynamics, consider the Euler-axis parametrization. Under the small error angle assumption, the attitude error can be written as
R_e ≈ I + e_Φ^×
where e_Φ = Φ - Φ_d, and Φ = [ϕ,θ,ψ]^⊤ and Φ_d = [ϕ_d,θ_d,ψ_d]^⊤ are the actual and desired Euler angles, respectively. Thus using (<ref>), attitude error vector (e_R) can be approximated as
e_R ≈ e_Φ,
Now using (<ref>) in (<ref>), and after some simplification, we get,
[Φ]ė≈[I + [Φ]e][e]ω
Moreover, under the small error assumption, the product terms in (<ref>) would be negligible, therefore, local linearized error dynamics can be written as,
ė_Φ≈ω_e
Consider the following NDI control law:
ẋ_R = A_R x_R + B_R e_R
ω = R_e^⊤ω_d + C_R x_R + D_R e_R
Suppose there exist a positive definite matrix P such that
𝒬≜[ D_R ⋆; PB_R + 1/2C_R^⊤ A_R^⊤ P + PA_R ]≺ 0.
then the control law (<ref>):
* renders the desired attitude (R_e = I) to be the almost globally asymptotically stable equilibrium of (<ref>),
* gives exact local tracking performance (Φ(t) = Φ_d(t)),
* induces the following local desired dynamics about the stable equilibrium, if feedforward term (R_e^⊤ω_d) is ignored in control law
Φ(s) = -[sI - Γ_Φ(s)]^-1Γ_Φ(s) Φ_d(s)
where Γ_Φ(s) = C_R[sI - A_R]^-1B_R + D_R.
To prove first statement, considering the Lyapunov function as 𝒱 = 2 Ψ(R_d,R) + x_R^T P x_R, it can be seen directly from (<ref>) that 𝒱 is positive definite and radially unbounded. Moreover, its derivative can be computed as follows,
𝒱̇ = - Ṙ_e + ẋ_R^T P x_R + x_R^T P ẋ_R
= e_R^T (ω -R_e^Tω_d) + x_R^T (A_R^T P + P A_R) x_R
+ x_R^T P B e_R + e_R^T B^T P x_R
= e_R^TD_Re_R+1/2(e_R^T C_R x_R + x_R^T C_R^T e_R)
+ x_R^T (A_R^T P + P A_R) x_R + x_R^T P B e_R + e_R^T B^T P x_R
= [ e_R^T x_R^T ]𝒬[ e_R; x_R ] < 0.
Therefore, the control law (<ref>) drives the configuration error function (Ψ) to zero. The critical points of Ψ are the solutions R_e∈3 to the equation Ψ = 0 or I-R_e = 0, which are given by R_e = I (desired equilibrium), and R_e = exp(±πs) (undesired equilibria) for any s∈2 <cit.>. However, using Chetaev's instability theorem <cit.>, it can be
shown that the undesired equilibria are unstable (for details see <cit.>). Thus the desired equilibrium is almost globally asymptotically stable.
Proof of second and third statements is straightforward; substituting Eqs. (<ref>), (<ref>) and (<ref>) in Eq. (<ref>), and taking Laplace transform of resulting local linear closed-loop as Φ(s)=Φ_d(s). If the feedforward term (R_e^⊤ω_d) is ignored from control law, then small error assumption it can be approximated as R_e^⊤ω_d ≈Φ̇_d. This results in local error angle dynamics as ė_Φ = ω - Φ̇_d, which upon substituting Eqs. (<ref>) and taking Laplace transform yields (<ref>).
For only proportional controller, A_R, B_R, C_R, and therefore P as well, are empty matrices and D_R = -K_R. In that case K_R needs to be positive definite to ensure almost global asymptotic stability. It also enforces the first order local desired dynamics i.e. [sI+K_R]^-1K_R.
§.§ Stability Guarantees of Cascaded Architecture
In this subsection we will discuss the stability of cascaded architecture for set-point tracking or regulation problems, i.e. (ω_d = 0, ω̇_d = 0). Using Eqs. (<ref>), (<ref>), and (<ref>) we can write complete cascaded closed loop error dynamics as,
[ ė_R; ω̇_e; ẋ_R; ẋ_ω ] =
[ 0 ℰ(R_e) 0 0; D_ω_ref D_R D_ω D_ω_ref C_R C_ω; B_R 0 A_R 0; B_ω_ref D_R B_ω B_ω_ref C_R A_ω ][ e_R; ω_e; x_R; x_ω ],
For simplicity lets denote x_K = [x_R, x_ω]^⊤, then we can write Eq. (<ref>) as,
[ ė_R; ω̇_e; ẋ_K ] = [ 0 ℰ(R_e) 0; A_21 A_22 A_23; A_31 A_32 A_33 ][ e_R; ω_e; x_K ],
where,
A_21 ≜ D_ω_ref D_R, A_22 ≜ D_ω, A_23 ≜[ D_ω_ref C_R C_ω ],
A_31 ≜[ B_R; B_ω_ref D_R ], A_32 ≜[ 0; B_ω ], A_33 ≜[ A_R 0; B_ω_ref C_R A_ω ].
Now lets present the main stability results in following theorem.
The cascaded closed loop system (<ref>) is almost globally asymptotically stable for set-point tracking and regulation problems (ω_d = 0, ω̇_d = 0), if there exits a positive scalar p_11, an scalar p_21, symmetric positive definite matrices P_22, and P_33, and a matrix P_32, such that following the LMIs hold.
𝒫≜[ p_11 I p_12 I 0; ⋆ P_22 P_23; ⋆ ⋆ P_33 ]≻ 0
ℳ≜[ M_11 M_12 M_13; ⋆ M_22 M_23; ⋆ ⋆ M_33; ]≺ 0
here the submatrices are defined as follows,
M_11 ≜ p_12(A_21 + A_21^⊤),
M_22 ≜ 2p_12I + P_22A_22 + A_22^⊤ P_22 + P_23A_32 + A_32^⊤ P_23^⊤,
M_33 ≜ P_23^⊤ A_23 + A_23^⊤ P_23 + P_33A_33 + A_33^⊤ P_33,
M_12 ≜ p_11I + p_12A_22 + A_21^⊤ P_22 + A_31^⊤ P_23^⊤,
M_13 ≜ p_12A_23 + A_21^⊤ P_23 + A_31^⊤ P_33,
M_23 ≜ P_22A_23 + A_22^⊤ P_23 + A_32^⊤ P_33 + P_23A_33.
Consider the lyapunov function,
𝒱 = 2 p_11Ψ + ω_e^⊤ P_22ω_e + 2 p_12 e_R^⊤ω_e
+ x_K^⊤ P_33 x_K + 2 ω_e^⊤ P_23 x_K
Now using the fact that <cit.>,
Ψ≥1/2e_R^2.
Therefore, it can be easily seen that (<ref>) ensures the positive definiteness and radial unboundedness of 𝒱. So, the Lyapunov rate along trajectories of (<ref>) can be written as follows:
𝒱̇ = 2 p_11Ψ̇ + 2 ω_e^⊤ P_22ω̇_e + 2p_12 (e_R^⊤ω̇_e + ω_e^⊤ė_R)
+ 2x_K^⊤ P_33ẋ_K + 2(ω_e^⊤ P_23ẋ_K + x_K^⊤ P_23ω̇_e)
= z^⊤ℳ z + p_12ω_e^⊤(ℰ(R_e) + ℰ(R_e)^⊤ - 2I)ω_e
Using the fact that the matrix (ℰ(R_e) + ℰ(R_e)^⊤ - 2I) is negative semi-definite for all R_e ∈3, we can bound lyapunov rate as follows
𝒱̇≤ z^⊤ℳ z < 0
Therefore, the cascaded control architecture drives the configuration error function (Ψ) to zero, alongwith ω_e and x_K. The critical points of Ψ are the solutions R_e∈3 to the equation Ψ = 0 or I-R_e = 0, which are given by R_e = I (desired equilibrium), and R_e = exp(±πs) (undesired equilibria) for any s∈2 <cit.>. However, using Chetaev's instability theorem <cit.>, it can be shown that the undesired equilibria are unstable (for details see <cit.>). Thus the desired equilibrium (R_e=I, ω_e = 0, x_K=0) is almost globally asymptotically stable.
It is worth noting that for tracking problems (ω_d 0), we need additional cancellation torque in rate-controller (<ref>), more precisely (R_e^⊤ω̇_d - [e]ω R_e^⊤ω_d). Furthermore, with this additional cancellation term along with the assumption of no gain and/or filter in rate loop feedback path (i.e. B_ω_ref = -B_ω and D_ω_ref = -D_ω), it can be shown that the feasibility of LMIs (<ref>) also ensures AGAS for cascaded architecture for tracking problems.
§ AGILE MANEUVERING OF A MULTICOPTER: AN EXAMPLE
In this section, the agile maneuvering of a multicopter is considered to demonstrate the effectiveness of the proposed control scheme. Co-planar multicopters, as defined in <cit.>, are multicopters in which thrust vectors of all rotors are parallel in hover conditions for all control inputs. For such multicopters the function f(ω,μ) = κω in (<ref>), where κ is rotational damping coefficient. In this paper, an example of hexacopter is considered, see Fig. <ref>. The proposed geometric NDI scheme is compared with an Euler angles based NDI control scheme proposed in <cit.>. The parameters of hexacopter considered are available in <cit.>. A high fidelity nonlinear simulation is used, which also includes actuator dynamics and saturation limits on each motor RPMs, and to distribute the desired torques, Pseudo-Inverse based control allocation scheme is used, for more details see <cit.>.
A maneuver consisting of two flips (rotation of 720^∘) about roll axis followed by two flips about pitch axis, is considered, or more precisely,
R̅_d(t) = exp(2π t [1]e), 0 ≤ t ≤ 2
exp(2π(t-2.5)[2]e), 2.5 < t ≤ 4.5
I, .
where e_1 = [1,0,0]^T and e_2 = [0,1,0]^T. This maneuver is executed by generating a filtered reference (R_d,ω_d) using the second-order geometric filter developed in <cit.>. In particular, this filter is designed such that its linearized counterpart has a natural frequency of 15 rad/s and a damping ratio of 0.707. For comparison purpose, same desired dynamics and controller gains are used for the proposed scheme as for Euler angle based NDI presented in <cit.>.
Though the presented approach can consider any desired dynamics, here a PD-type controller is used, or more precisely a lead compensator of the form k_p + k_d s/τ_fs+1, for each channel of rate loop, with k_p = 4.2, k_d = 0.42, and τ_f = 10. A first order lag filter at 100 Hz is used in feedback to mitigate sensor noise with a sensor delay of 5 ms in each channel. For analysis these pure delays are approximated by third order Padé approximation. This gives an overall 12^th order rate loop control law (<ref>). It can be seen that this ensures 𝒜_ω, as defined in (<ref>), to be Hurwitz. Moreover, for attitude loop, PID-type controller of the form k_p + k_i/s+ε + k_d s/τ_fs+1 is used for each channel, with k_p = -27.75, k_i = -1.85, k_d = -5.55, ε = 0.001, and τ_f = 10. It is worth noting that instead of a pure integrator and derivative, a lead-lag compensator is used, which makes the controller practically implementable. The feasibility of LMI (<ref>) was checked by using a state-space realization of this controller, which was obtained by MATLAB's “” command. To ensure time scale separation, these controllers are designed on linearized models of each channel, and ratio of bandwidth of attitude loop to that of rate loop was kept higher than 4. With the selected gains this ratio was 7.580, 6.121, and 4.048 for roll, pitch and yaw channels, respectively. Moreover, for stability of cascaded architecture, LMIs (<ref>) were checked to be feasible using YALMIP and SeDuMi toolboxes <cit.>.
Figure <ref> shows the variation of the configuration error function (Ψ(R_d,R)), Fig. <ref> shows the angular rates, Fig. <ref> shows the attitude error vector (e_R), and Fig. <ref> shows the control effort (τ). It must be noted that Fig. <ref> shows the actual torque which is applied on the body, not the one demanded by the controller. Moreover, due to control allocation, actuator dynamics and saturation limits on motor RPMs, these demanded and actual torques are not necessarily equal. It can be easily seen that Euler angle based NDI controller barely survived the flips about roll axis with very degraded performance, and gets unstable during flips about pitch axis. However, the proposed geometric NDI scheme, both with and without feed-forward term (R_e^⊤ω_d), gives good performance during flips about both axes. Moreover, it can also be seen that the presence of feed-forward term significantly enhances the control performance at a cost of slightly larger control effort.
§ CONCLUSION
In this work, a novel nonlinear dynamic inversion based cascaded control architecture is presented for the rigid body attitude control problem. The proposed control law uses the rotation matrix parameterization
and ensures almost global asymptotic stability in the case of attitude error regulation. In particular, the proposed scheme is capable of enforcing desired linear dynamics of any order in both the attitude and velocity loops, and gives control designers the flexibility to use higher-order linear controllers in both loops while ensuring stability guarantees by just checking the feasibility of given LMIs. For practical applications, it is recommended that the inner velocity loop be at least three to five times faster than the outer attitude loop, as is standard practice in NDI-based cascaded control architectures.
|
http://arxiv.org/abs/2409.02248v1 | 20240903192102 | Some novel constructions of optimal Gromov-Hausdorff-optimal correspondences between spheres | [
"Saúl Rodríguez Martín"
] | math.MG | [
"math.MG",
"51F99"
] |
1/f Noise in the Heliosphere: A Target for PUNCH Science
[
September 9, 2024
========================================================
§ ABSTRACT
In this article, as a first contribution, we provide alternative proofs of recent results by Harrison and Jeffs which determine the precise value of the Gromov-Hausdorff (GH) distance between the circle 𝕊^1 and the n-dimensional sphere 𝕊^n (for any n∈ℕ) when endowed with their respective geodesic metrics.
Additionally, we prove that the GH distance between 𝕊^3 and 𝕊^4 is equal to 1/2arccos(-1/4), thus settling the case n=3 of a conjecture by Lim, Mémoli and Smith.
§ INTRODUCTION
In this article we consider the problem of determining the Gromov-Hausdorff (GH) distances between 𝕊^1 and all other spheres, as well as the GH distance between 𝕊^3 and 𝕊^4. Let us first recall some definitions. The Hausdorff distance between two subspaces A,B of a metric space (X,d_X) is defined as
d_(A,B)=
max(
sup_a∈ Ad_X(a,B),
sup_b∈ Bd_X(b,A)
),
where d_X(x,A):=inf_a∈Ad_X(x,a) for all x∈X and A⊆X. If (X,d_X) and (Y,d_Y) are metric spaces, we will write X≅Y whenever X,Y are isometric. Then the GH distance between two metric spaces (X,d_X) and (Y,d_Y) is defined as
d_(X,Y)=
inf{d^Z_H( X',Y');(Z,d_Z) metric space; X',Y'⊆Z; X'≅X;Y'≅Y}.
The GH distance takes values in [0,∞], and it satisfies the triangle inequality (cf. <cit.> Prop. 7.3.16). If X and Y are compact metric spaces, then d_GH(X,Y)=0 iff X and Y are isometric, and the value d_GH(X,Y) is sometimes described as a way to measure how far the spaces X and Y are from being isometric.
If X is a dense subspace of Y, then d_GH(X,Y)=0.
Since it was introduced by Edwards (<cit.>, 1975) and independently by Gromov (<cit.> 1981), the GH distance has been instrumental in research areas such as the analysis of shapes formed by point cloud data <cit.>, convergence results for sequences of Riemannian manifolds <cit.>, differentiability in metric measure spaces <cit.>, and the robustness of topological invariants of metric spaces when they suffer small deformations <cit.>.
Since being introduced by Edwards (<cit.>, 1975) and Gromov (<cit.>, 1981), GH distance has been used mostly as an asymptotic measure of the distance between spheres, see Include a few papers here, can take them from my S1SimpCon paper.
Recently, there has been growing interest (see e.g. <cit.>, and for a historical account of these efforts see <cit.>) in computing the exact value of the GH distance between certain simple metric spaces, specifically round spheres 𝕊^n⊆ℝ^n+1 (n∈ℕ:={1,2,…}) equipped with the geodesic metric d_𝕊^n.
In their paper <cit.>, Lim, Mémoli and Smith provided some upper and lower bounds for d_GH(𝕊^n,𝕊^m) for all n,m∈ℕ and they gave exact values for the pairwise distances between 𝕊^1,𝕊^2 and 𝕊^3.
Some bounds for the GH distances between spheres were further improved in <cit.>. Most importantly for our purposes, a concrete case of <cit.> implies that
2d_GH(𝕊^n,𝕊^n+1)≥ζ_n:=arccos(-1/n+1),
n∈ℕ,
and in <cit.> it was proved that
d_ GH(𝕊^1,𝕊^2n)≥π n/2n+1 and
d_ GH(𝕊^1,𝕊^2n+1)≥π n/2n+1.
n∈ℕ.
In <cit.>, Harrison and Jeffs proved that the inequalities from <Ref> are actually equalities:
For any integer n≥1, d_ GH(𝕊^1,𝕊^2n)=π n/2n+1.
For any integer n≥1, d_ GH(𝕊^1,𝕊^2n+1)=π n/2n+1.
In this article we prove three main results.
* We give an alternative proof of <Ref>. The construction we use is an immediate generalization of the one in <cit.>. Our proof was found independently of <cit.>, but it is similar to it, as we explain in <Ref>.
* We give a proof of <Ref> which is distinct from (and considerably shorter than) the one in <cit.>. The proof in <cit.> uses an `embedding-projection correspondence' (see <cit.>) to prove <Ref>, while we instead use a certain modification of the alternative construction we developed for proving <Ref>.
* We also establish the following novel result:
The distance
d_ GH(𝕊^3,𝕊^4) is 1/2ζ_3.
This proves case n=3 of <Ref>, of which cases n=1,2 were proved in <cit.>:
For all n∈ℕ we have d_(𝕊^n,𝕊^n+1)
=1/2ζ_n.
The proof strategy of <Ref> is also valid for cases n=1,2 of <Ref>, and perhaps it could be adapted to cases n=4,5,6 (see <Ref>).
The definition of d_GH is hard to work with; we now recall (cf. 7.3 in <cit.>) an equivalent definition based on correspondences between sets. Recall that a relation between two sets X,Y is a subset of X×Y. We will say a relation R⊆X×Y is a correspondence between X and Y if π_X(R)=X and π_Y(R)=Y, where π_X:X×Y→X and π_Y:X×Y→Y are the coordinate projections.
If (X,d_X) and (Y,d_Y) are metric spaces, we will define the distortion of a nonempty relation R⊆X×Y as
dis(R):=sup{|d_X(x,x')-d_Y(y,y')|;(x,y),(x',y')∈ R}∈[0,∞].
In Theorem 7.3.25 of <cit.> it is proved that, if (X,d_X),(Y,d_Y) are metric spaces,
d_GH(X,Y)=1/2inf{dis(R);R⊆X×Y correspondence between X and Y}.
So the distortion of any correspondence R between X and Y is an upper bound for 2d_GH(X,Y).
The graph of a function ϕ:X→Y is a relation, and it will be a correspondence between X and Y iff ϕ is surjective. We then let
(ϕ):=sup{|d_X(x,x')-d_Y(ϕ(x),ϕ(x'))|;x,x'∈X}
be the distortion of the graph of ϕ.
We can slightly relax the definition of correspondence: we say R⊆X×Y is a metric correspondence between (X,d_X) and (Y,d_Y) if the projections π_X(R),π_Y(R) are dense in X, Y respectively. If R is a metric correspondence between X and Y, then the triangle inequality for d_GH and <Ref> imply that 1/2dis(R) is an upper bound for d_GH(X,Y).
Thanks to <Ref>, in order to prove <Ref> it suffices to construct metric correspondences between spheres having adequate distortions.
All of the correspondences we construct are related to constructions in <cit.>, and they have several points in common. Firstly, all of them use the helmet trick (<Ref> below), which tells us that, if n,m∈ℕ and H^n_+:={x∈𝕊^n;x_n+1≥0}, then for any correspondence R⊆ H^n_+×𝕊^m there is a correspondence R'⊆𝕊^n×𝕊^m containing R and satisfying (R')=(R). When estimating the distance d_(𝕊^n,𝕊^m), this allows us to use correspondences in H^n_+×𝕊^m, which are easier to construct than correspondences in 𝕊^n×𝕊^m.
Secondly, our constructions all use regular simplices inscribed in 𝕊^n, by which we mean a set of n+2 distinct points p_1,…,p_n+2∈𝕊^n such that d_𝕊^n(p_i,p_j) is the same for all (i,j) with i≠ j. A list of useful properties of such simplices can be found in <Ref>.
§.§.§ Structure of the paper
In <Ref> we prove <Ref>, which has the easiest proof of our three main results. To do this, we devise a metric correspondence R_2n⊆ H^2n_+×𝕊^1 with distortion 2π n/2n+1. The correspondence R_2n is an immediate generalization of a construction from <cit.>.
In <Ref> we prove <Ref>.
To explain the approach we take in that section: given <Ref> and the fact that d_(𝕊^1,𝕊^2n)=π n/2n+1 (proved in
<Ref>), one could optimistically conjecture that d_(𝕊^1,𝕊^2n+1)=d_(𝕊^1,𝕊^2n)=π n/2n+1. Therefore, a natural approach is using the correspondence from <Ref> to create some correspondence R_2n+1⊆ H^2n+1_+×𝕊^1. And indeed, we start with a natural adaptation of the correspondence R_2n to dimension 2n+1 and after `rotating' it in a small subset B⊆ H^2n+1_+ (shown in <Ref>), we obtain a correspondence R_2n+1⊆ H^2n+1_+×𝕊^1 with distortion 2π n/2n+1. The way in which we rotate the correspondence in the set B⊆ H^2n+1_+ is inspired by the arguments from <cit.>.
In <Ref> we prove <Ref> using a surjective map F:H^4_+→𝕊^3 with distortion ζ_3=arccos(-1/4). The construction of the map F is not particularly complicated, but the author has only found proofs that (F)=ζ_3 using computer assistance, see <Ref>.
Roughly speaking, the map F is obtained as an `interpolation' between two functions F',F”:H^4_+→𝕊^3 described below and depicted in <Ref>.
We obtain F':H^4_+→𝕊^3 by taking points p_1,…,p_n+2 forming a regular simplex inscribed in 𝕊^3≡{x∈𝕊^4;x_n+2=0} and for each x∈ H^4_+ defining F'(x)=p_i, where i∈{1,…,n+2} is chosen so that p_i is as close as possible to x. The map F”:H^4_+→𝕊^3 is obtained by, for each p∈ H^4_+, choosing F”(p) to be a point of 𝕊^3 which minimizes the distance to p. So F(p) is the `projection' of p to 𝕊^3, except if p is the north pole N:=(0,0,0,0,1).
We want our function F to have distortion ζ_3. The map F' has distortion ζ_3, but it does not induce a correspondence. The map F” has the opposite problem: it is surjective but has distortion π, because for points x,x' very close to the north pole, d_𝕊^3(F(x),F(x'))-d_𝕊^4(x,x') may be as close to π as we want. In the proof of <cit.>, they find d_GH(𝕊^1,𝕊^2) via a surjective map ϕ:H^2_+→𝕊^1 which is equal to (the lower dimensional analogue of) F' for points in the equator and equal to F” in the rest of H^2_+.
The analogous map in higher dimension, ϕ_n:𝕊^n+1→𝕊^n, has distortion η_n>ζ_n (η_n is defined in <Ref>) for n≥3, so it cannot be used to find d_GH(𝕊^n+1,𝕊^n).
Our map F is defined as F' for points near the north pole (so that its distortion is not π), F” for points in 𝕊^3 (so that F is surjective) and an interpolation between F' and F” in between. A depiction of the map F can be found in <Ref>.
In <Ref> we define a higher dimensional analogue of F, which we call F_n:𝕊^n+1→𝕊^n.
The map F_n has distortion >ζ_n when n≥7 (see <Ref>), so in that case it cannot be used to prove <Ref>.
Do we have (F_n)=ζ_n for n=4,5,6?
In the rest of the introduction we explain how we use computer assistance to prove some inequalities needed in <Ref>, as well as give some ideas that could be used to prove <Ref>.
In <Ref>, when we prove that the map F:𝕊^4→𝕊^3 has distortion ζ_3, we need to prove inequalities of the form f(x_1,x_2)≥ 0, where x_i are in intervals [a_i,b_i]⊆ℝ. The expression for the function f is sometimes very complicated, and it is not clear how to give a clean proof of the inequality. However, we can prove f(x_1,x_2)≥0 using brute force if two conditions are met:
* The function f is bounded below by some positive constant ε.
* The function f is uniformly continuous: there is a constant δ (which we can compute explicitly) such that, if |x_1-x_1'|,|x_2-x_2'|<δ, then |f(x_1,x_2)-f(x_1',x_2')|<ε.
Under these conditions we can use a computer program to check that f(x_1,x_2)>ε for all points (x_1,x_2) in some finite set G (a grid) inside [a_1,b_1]×[a_2,b_2] such that every point of [a_1,b_1]×[a_2,b_2] is at distance <δ of some point of G, concluding the inequality.
We have used this method to check three crucial inequalities in <Ref>. The python code can be found in the GitHub repository <cit.>; both files that check the inequality and files that output a 3D plot of the function f(x,y) are included. For the most complicated inequality we have explained in detail how to obtain the uniform continuity constants in <Ref> (see page CheckUnifCont).
If the answer to <Ref> is positive, it should be theoretically provable using the same ideas of <Ref>[One first needs to use mathematical arguments to ensure that conditions <ref> and <ref> of <Ref> are satisfied.]. Indeed, <Ref> reduces to the inequality
|d_𝕊^n+1(x,x')-d_𝕊^n(F(x),F(x'))|<ζ_nx,x'∈𝕊^n+1, for n=4,5,6.
However, the author has not been able to find a computer program efficient enough to prove dist(F_n)=ζ_n using this strategy in a reasonable amount of time, the main obstacle being that grids in 𝕊^n have too many points for n≥4.
Acknowledgements. Special thanks to Facundo Mémoli for introducing the author to the study of GH distances and providing guidance and extensive feedback
while writing this article. The author gratefully acknowledges support from the
grants BSF 2020124 and NSF CCF AF 2310412. Also thanks to Daniel Hurtado, River Li and Pablo Vitoria for helping improve to some of the arguments in this article. The large language models ChatGPT-4 and Claude 3.5-Sonnet were used to aid in writing part of the python code.
§ NOTATION AND PRELIMINARIES
Throughout most of this article, the metric spaces we will be studying are the unit spheres 𝕊^n with the spherical distance, which will be denoted by d_𝕊^n. That is, considering 𝕊^n:={x∈ℝ^n+1;∑_i=1^n+1x_i^2=1}, for all x,y∈𝕊^n we will have
d_𝕊^n(x,y):=arccos(⟨ x,y⟩), where ⟨ x,y⟩:=∑_i=1^n+1x_iy_i.
Thus distances take values in the interval [0,π]. For any two non-antipodal points x,x'∈𝕊^n we will denote the (unique) geodesic segment from x to x' by [x,x']. Also, if p,q,r∈𝕊^n+1 are distinct points, then we denote by ∠ pqr∈[0,π] the angle at q of the spherical triangle with vertices p,q,r, as specified by the spherical cosine rule.
[Antipodal sets]
For any subset X⊆𝕊^n⊆ℝ^n+1, we define
-X:={-x;x∈X}.
Similarly, for any relation R⊆𝕊^n×𝕊^m, we define
-R:={(-x,-y);(x,y)∈ R}.
The following is a version of Lemma 5.5 of <cit.> for relations:
Let R⊆𝕊^n×𝕊^m be a relation and let -R={(-x,-y);(x,y)∈ R}. Then the relation
-R∪ R⊆𝕊^n×𝕊^m
has the same distortion as R.
For any k and for any two points x,y∈𝕊^k, we have d_𝕊^k(x,y)=π-d_𝕊^k(x,-y). So for any two pairs (x,y),(x',y')∈𝕊^n×𝕊^m,
|d_𝕊^n(x,-x')-d_𝕊^m(y,-y')|=|π-d_𝕊^n(x,x')-(π-d_𝕊^m(y,y'))|=|d_𝕊^n(x,x')-d_𝕊^m(y,y')|.
Thus, any distortion |d_𝕊^n(x,x')-d_𝕊^m(y,y')| between two pairs of points (x,y) and (x',y') of -R∪ R is also attained between two pairs of points of R, which proves dis(-R∪ R)=dis(R).
Fix a point q∈𝕊^n+1. For any nonempty X⊆{x∈𝕊^n+1;⟨ x,q⟩=0}, we define the cone C_qX as the union of geodesic segments
C_qX:=⋃_x∈X[x,q].
As proved in Lemma 6.1 of <cit.>, the diameter of a cone C_qX is given by
diam(C_qX)=max(π/2,diam(X)).
For each n∈ℕ, we can find points p_1,…,p_n+2∈𝕊^n which form a regular simplex in ℝ^n+1. We can associate to them the open Voronoi cells
V_i:={x∈𝕊^n;d_𝕊^n(x,p_i)<d_𝕊^n(x,p_j) for all j≠ i}, i=1,2,…,n+2.
We will say a subset A⊆𝕊^n is convex when any geodesic segment between two points of A is contained in A (so an open hemisphere is convex, and a closed hemisphere is not). The convex hull of a set A⊆𝕊^n is the intersection of all convex sets containing A.
We will need some properties of regular simplices inscribed in 𝕊^n (some of them are proved in <cit.>; see also Section 3 of <cit.> for related results):
Let (p_i)_i=1^n+2 and (V_i)_i=1^n+2 be as above. Then
* d_𝕊^n(p_i,p_j)=ζ_n:=arccos(-1/n+1) for i≠ j.
* For all i, V_i is the convex hull inside 𝕊^n of the set {-p_j;j≠ i}.
* The diameter of the Voronoi cells V_i is
η_n:={[ arccos(-n+1/n+3) for n odd; arccos(-√(n/n+4)) for n even. ].
* The Voronoi cell V_i satisfies
B_𝕊^n(p_i,ζ_n/2)⊆ V_i⊆ B_𝕊^n(p_i,π-ζ_n),
where B_𝕊^n(x,r) denotes the ball centered at x of radius r in 𝕊^n.
<Ref> are discussed in Remarks 6.4 and 6.5 of <cit.>. To prove <Ref> note that, as p_1,…,p_2n+2 are unit vectors forming a regular simplex centered at 0, we have ∑_i=1^n+2p_i=0. Moreover, by symmetry the scalar products ⟨ p_i,p_j⟩ take a single value κ for any pair (i,j) with i≠ j. This value κ can be obtained from the equation
0=⟨ p_i,0⟩=
⟨ p_i,∑_j=1^n+2p_j⟩
=
1+(n+1)κ.
So cos(d_𝕊^n(p_i,p_j))=-1/n+1 for all i≠ j, as we wanted.
Finally we prove <Ref>. The first containment is a consequence of the fact that, if d_𝕊^n(x,p_i)<ζ_n/2, then for any j≠ i we have
d_𝕊^n(x,p_j)≥
d_𝕊^n(p_i,p_j)-d_𝕊^n(x,p_i)
>ζ_n-ζ_n/2>d_𝕊^n(x,p_i),
so x∈ V_i. For the second containment, note that B_𝕊^n(p_i,π-ζ_n) is convex (balls of radius <π/2 are convex in 𝕊^n) and contains the points -p_j for all j≠ i. So by <Ref> we have V_i⊆B_𝕊^n(p_i,π-ζ_n), and as V_i is open we also have V_i⊆ B_𝕊^n(p_i,π-ζ_n).
§ DISTANCE FROM 𝕊^1 TO EVEN DIMENSIONAL SPHERES
This section is devoted to proving <Ref> by constructing a correspondence between 𝕊^1 and 𝕊^2n with distortion 2π n/2n+1. This construction generalizes the one used in Appendix D of <cit.> to find d_GH(𝕊^1,𝕊^2).
Let us start with some notation:
𝕊^2n:={x∈ℝ^2n+1;∑_i=1^2n+1x_i^2=1}
H^2n_+:={x∈𝕊^2n;x_2n+1≥0}
𝕊^2n-1:={x∈𝕊^2n;x_2n+1=0}
Let p_1,…,p_2n+1∈𝕊^2n-1 be the vertices of a regular simplex in ℝ^2n×{0}⊆ℝ^2n+1 inscribed in 𝕊^2n-1. For i=1,…,2n+1, consider the Voronoi cells
V^2n-1_i:={
x∈𝕊^2n-1;d_𝕊^2n(x,p_i)<d_𝕊^2n(x,p_j) for all j≠ i
},
V^2n_i:={
x∈ H^2n_+;d_𝕊^2n(x,p_i)<d_𝕊^2n(x,p_j) for all j≠ i
}.
Note that V^2n_i is obtained by taking a cone (see <Ref>) from V^2n-1_i with respect to the point (0,…,0,1)∈ℝ^2n+1, so by Propositions <ref> and <ref><ref> we have
diam(V^2n_i)=diam(V^2n-1_i)=arccos(-2n/2n+2)=arccos(-n/n+1)
Also, letting q_1,…,q_2n+1 be the vertices of a regular 2n+1-gon inscribed in 𝕊^1, we define the Voronoi cells
W_i:={
y∈𝕊^1;d_𝕊^1(y,q_i)<d_𝕊^1(y,q_j) for all j≠ i
},
which are intervals of length 2π/2n+1.
We will need the fact that the diameter of V^2n_i is at most 2π n/2n+1:
For all positive x∈[1,∞) we have arccos(-x/x+1)≤2π x/2x+1.
The following elegant proof of <Ref> is due to Pablo Vitoria.[This proposition plays a similar role to <cit.>.]
Note that for any x∈(1,∞) we have
arccos(-x/x+1)≤2π x/2x+1-x/x+1≥cos(2π x/2x+1)x/x+1≤cos(π/2x+1)
Changing variables to y=π/2x+1 (so that x/x+1=2π/π+y-1), it will be enough to check that cos(y)≥2π/π+y-1 for all y∈[0,π/3]. But this inequality follows from the facts that:
* The functions cos(y) and 2π/π+y-1 agree at y=0,π/3.
* In the interval [0,π/3] the function cos(y) is concave while 2π/π+y-1 is convex.
Consider the following relation R_2n⊆ H^2n_+×𝕊^1:
R_2n:=(⋃_i=1^2n+1V^2n_i×{q_i})
⋃(⋃_i=1^2n+1{p_i}× W_i)
Note that the projection of R_2n onto its first coordinate is dense in H^2n_+, because it is the union of the Voronoi cells V_i^2n for i=1,…,2n+1. Similarly, the projection of R_2n to its second coordinate is dense in 𝕊^1. Thus, R_2n∪-R_2n is a metric correspondence between 𝕊^2n and 𝕊^1. So thanks to the Helmet trick (<Ref>), we have
d_GH(𝕊^2n,𝕊^1)≤1/2dis(R_2n∪-R_2n)=1/2dis(R_2n).
So to prove <Ref> it will be enough to prove that dis(R_2n)≤2π n/2n+1. That is, we need to prove that if (x,y) and (x',y') are in R_2n, then
|d_𝕊^2n(x,x')-d_𝕊^1(y,y')|≤2π n/2n+1.
To prove <Ref> we will divide the analysis into 6 cases.
* x,x'∈ V^2n_i,y=y'=q_i for some i∈{1,…,2n+1}. In this case
|d_𝕊^2n(x,x')-d_𝕊^1(y,y')|=d_𝕊^2n(x,x')≤diam(V^2n_i)=arccos(-n/n+1)≤2π n/2n+1.
* x∈ V^2n_i,x'∈ V^2n_j, y=q_i,y'=q_j for some i≠ j. In this case we have d_𝕊^1(y,y')∈[π/2n+1,2π n/2n+1], which implies <Ref>.
* x∈ V^2n_i,x'=p_i,y=q_i,y'∈ W_i for some i. Then we have
d_𝕊^1(y,y')=d_𝕊^1(q_i,y')≤π/2n+1
d_𝕊^2n(x,x')=d_𝕊^2n(x,p_i)<π-ζ_n (see Proposition <ref><ref>),
so |d_𝕊^2n(x,x')-d_𝕊^1(y,y')|≤max(π/2n+1,π-ζ_n)<π/2.
* x∈ V^2n_i,x'=p_j,y=q_i,y'∈ W_j for some i≠ j. Then we have
d_𝕊^1(y,y')=d_𝕊^1(q_i,y')>π/2n+1
d_𝕊^2n(x,x')=d_𝕊^2n(x,p_j)>ζ_2n-1/2=1/2arccos(-1/2n) (see Proposition <ref><ref>),
but for n≥1 we have 1/2arccos(-1/2n)≥π/2n+1, because we have equality for n=1 and for n≥2 we have 1/2arccos(-1/2n)≥π/4≥π/2n+1. So both d_𝕊^2n(x,x') and d_𝕊^1(y,y') are in the interval [π/2n+1,π], thus |d_𝕊^2n(x,x')-d_𝕊^1(y,y')|≤2π n/2n+1 in this case.
* x=x'=p_i,y,y'∈ W_i for some i. Then d_𝕊^2n(x,x')=0 and d_𝕊^1(y,y')<2π/2n+1 so we are done.
* x=p_i,x'=p_j, y∈ W_i, y'∈ W_j for some i≠ j. Then d_𝕊^2n(x,x')=ζ_n∈[π/2n+1,2π n/2n+1], which implies <Ref>.
Our construction is similar to that of <cit.>. Both are extensions of the idea from <cit.> of considering finite subsets 𝒫,𝒬 of 𝕊^2n,𝕊^1 of the same cardinality and using a bijection between 𝒫 and 𝒬 to construct a correspondence between 𝕊^2n and 𝕊^1. The main difference is that in our case we need the helmet trick (as the sets V_i^2n are Voronoi cells inside H_+^2n, not 𝕊^2n) and that we use the vertices of a 2n+1-simplex as centers of the Voronoi cells in 𝕊^2n, while in <cit.> they use an orthonormal basis of vectors, together with their antipodals. However, in both our construction and the one in <cit.>, the correspondence between 𝕊^1 and 𝕊^2n has distortion 2π n/2n+1 for the same reason: there are pairs of points in 𝕊^n+1 which are arbitrarily close to each other which are mapped to points at distance 2π n/2n+1 in 𝕊^1.
§ DISTANCE FROM 𝕊^1 TO ODD DIMENSIONAL SPHERES
This section is devoted to proving <Ref>.
§.§ An optimal correspondence between 𝕊^2n+1 and 𝕊^1
Let us start with some notation which we will use in this section:
𝕊^2n+1={x∈ℝ^2n+2;∑_j=1^2n+2x_j^2=1}
𝕊^2n:={x∈𝕊^2n+1;x_2n+2=0}
𝕊^2n-1:={x∈𝕊^2n+1;x_2n+1=x_2n+2=0}
H^2n+1_+:={x∈𝕊^2n+1;x_2n+2≥0}
H^2n_+:={x∈𝕊^2n;x_2n+1≥0}
H^2n+1_++:={x∈𝕊^2n+1;x_2n+1≥0,x_2n+2≥0}.
H^2n+1_-+:={x∈𝕊^2n+1;x_2n+1≤0,x_2n+2≥0}.
These sets are illustrated in <Ref>.
Now, for a dense subset D of H_+^2n+1, we will define a map Φ:D→𝕊^1 with distortion 2π n/2n+1 and such that, if G_Φ is the graph of Φ, then G_Φ∪ -G_Φ⊆𝕊^2n+1×𝕊^1
is a metric correspondence between 𝕊^2n+1 and 𝕊^1. By the helmet trick (<Ref>) this will establish <Ref>.
We describe the map Φ in detail after <Ref>; we first give a more informal description of it.
Let N=(0,…,0,1)∈ℝ^2n+2 be the north pole of 𝕊^2n+1. We will denote points x∈ H_+^2n+1∖{N} as (p,α)∈𝕊^2n×[0,π/2), where p∈𝕊^2n⊆𝕊^2n+1 is the point of 𝕊^2n closest to p and α∈[0,π/2) is the geodesic distance from p to 𝕊^2n.
Now, note that the correspondence R_2n from the <Ref> was the union of the graphs of two maps, one map f:𝕊^2n→{q_0,…,q_2n}⊆𝕊^1 and one map g:𝕊^1→{p_0,…,p_2n}⊆𝕊^2n. Then, the restriction of our map Φ to 𝕊^2n will be just the map f: we let Φ(p,0)=f(p) for all p∈𝕊^2n.
In fact, if p∈ H^2n_+ (so p is in the northern hemisphere of 𝕊^2n), we let Φ(p,α)=f(p) for all α∈[0,π/2). That determines Φ(x) for all x∈ H^2n+1_++.
For points of H^2n+1_-+, that is, points of the form (p,α) where p is in the southern hemisphere of 𝕊^2n, we define Φ(p,α)=f(p)· e^i·min(α,π/2n+1)∈𝕊^1.
So for fixed p, as α increases from 0 to π/2n+1, Φ(p,α) follows a unit speed geodesic in 𝕊^1 from f(p) to f(p)· e^i π/2n+1, and then Φ(p,α) is equal to f(p)· e^iπ/2n+1 for all α∈[π/2n+1,π/2).
We include a depiction of the map Φ in <Ref> in the case when 2n+1=3.
Now we will define the map Φ in detail. Let p_0,…,p_2n be the vertices of a regular simplex in ℝ^2n×{(0,0)} inscribed in 𝕊^2n-1, and for j=0,…,2n we define the Voronoi cells
V^2n-1_j:={x∈𝕊^2n-1;d_𝕊^2n+1(x,p_j)<d_𝕊^2n+1(x,p_k) for all k≠ j}
V^2n_j:={x∈ H^2n_+;d_𝕊^2n+1(x,p_j)<d_𝕊^2n+1(x,p_k) for all k≠ j}
V^2n+1_j={x∈ H^2n+1_++;d_𝕊^2n+1(x,p_j)<d_𝕊^2n+1(x,p_k) for all k≠ j}
Note that V_j^2n is obtained from V_j^2n-1 by performing the cone operation described in <Ref> (and removing the point (0,…,1,0)∈ℝ^2n+2), and V_j^2n+1 is similarly obtained from V_j^2n. Thus Propositions <ref> and <ref><ref> imply that, for all j=1,…,2n+1,
diam(V_j^2n+1)
=diam(V_j^2n)
=diam(V_j^2n-1)
=arccos(-2n/2n+2).
In this section, we identify 𝕊^1 with {z∈ℂ;|z|=1}, and for k=0,…,2n we let q_k=e^2π ik/2n+1 denote the (2n+1)-th roots of unity. Note that the map f:𝕊^2n→𝕊^1 given by
[ f: 𝕊^2n → 𝕊^1;; V_j^2n ↦ q_j for j=0,…,2n.; -V_j^2n ↦ -q_j for j=0,…,2n. ]
has distortion at most 2π n/2n+1: indeed, f is obtained from applying the helmet trick to the map
[ f_+: H^2n_+ → 𝕊^1;; V_j^2n ↦ q_j for j=0,…,2n, ]
and the map f_+ has distortion at most 2π n/2n+1 because when seen as a relation, it is
⋃_j=0^2nV^2n_j×{b_j},
which is contained in the relation R_2n with distortion 2π n/2n+1 that we used in <Ref>.
Now, using <Ref>, we will repeatedly use the fact that for any fixed p∈𝕊^2n, the curve γ:[0,π/2]→𝕊^2n+1;γ(t)=(p,t) is a unit speed geodesic. Finally, we define
D=(⋃_j=0^2nV^2n+1_j)∪(⋃_j=0^2n-V^2n+1_j).
And consider the map
[ Φ: D → 𝕊^1; ; (p,α) ↦ q_k if p∈ V_k^2n.; (p,α) ↦ -q_k· e^imin(α,π/2n+1) if p∈-V_k^2n.; ]
For simplicity we will write Φ(p,α) instead of Φ((p,α)) for the image of the point (p,α). Also note that the condition p∈ V_k^2n is equivalent to (p,α)∈ V_k^2n+1.
The relation G_Φ∪-G_Φ⊆𝕊^2n+1×𝕊^1 is a metric correspondence (as defined in <Ref>) between 𝕊^2n+1 and 𝕊^1.
Firstly, the domain D of Φ is dense in H^2n+1_+. So D∪-D, which is the projection of G_Φ∪-G_Φ to 𝕊^2n+1, is dense in 𝕊^2n+1. Secondly, the image of Φ is the following set I (see <Ref> for the case 2n+1=5)
I:=⋃_k=0^2nI_k⊆𝕊^1,
where I_k is the following interval of length π/2n+1 having q_k as one endpoint:
I_k:={e^ix;x∈[π(2k-1)/2n+1,2π k/2n+1]} for k=0,1,…,2n.
So I∪-I, which is the projection of G_Φ∪-G_Φ to 𝕊^1, is the entire 𝕊^1, concluding the proof that G_Φ∪-G_Φ is a metric correspondence between 𝕊^2n+1 and 𝕊^1.
Note that the restriction of Φ to 𝕊^2n is just the function f from <Ref>, which has distortion ≤2π n/2n+1. That is, for any (p,0),(p',0)∈ D we have
|d_𝕊^1(Φ(p,0),Φ(p',0))-d_𝕊^2n+1((p,0),(p',0))|≤2π n/2n+1.
Also note that Φ maps most points of D to {q_0,…,q_2n}; letting A=Φ^-1({q_0,…,q_2n}), the set B:=D∖ A of points of D mapped by Φ outside of {q_0,…,q_2n} is essentially the set of points of H^2n+1_-+ at distance <π/2n+1 of 𝕊^2n:
B=D∖ A
=⋃_j=0^2n{(p,α)∈-V^2n+1_j;α<π/2n+1}.
§.§ Proof that Φ has distortion 2π n/2n+1
We want to prove that, for any two points (p,α) and (p',α') in D⊆ H^2n+1_+, we have
|d_𝕊^1(Φ(p,α),Φ(p',α'))-d_𝕊^2n+1((p,α),(p',α'))|≤2π n/2n+1.
That means that neither of the following two inequalities can happen:
d_𝕊^1(Φ(p,α),Φ(p',α'))-d_𝕊^2n+1((p,α),(p',α'))>2π n/2n+1.
d_𝕊^2n+1((p,α),(p',α'))-d_𝕊^1(Φ(p,α),Φ(p',α'))>2π n/2n+1.
§.§.§ Inequality <ref>
We will assume that (p,α),(p',α')∈ D satisfy <Ref>
and obtain a contradiction.
Firstly, we may assume without loss of generality that Φ(p,α)∈ I_0 (I_0 is defined in <Ref>; see <Ref>). This, along with the fact that d_𝕊^1(Φ(p,α),Φ(p',α'))≥2n/2n+1π=π-π/2n+1, implies that Φ(p',α') is either in I_n or in I_n+1. We can in fact assume Φ(p',α')∈ I_n+1, swapping (p,α) and (p',α') if not. So we have
Φ(p,α)∈ I_0 (interval between -q_n and q_0).
Φ(p',α')∈ I_n+1 (interval between -q_0 and q_n+1).
Note that, as d_𝕊^1(Φ(p,α),Φ(p',α'))>2π n/2n+1, the point Φ(p',α') cannot be exactly q_n+1, as the entire interval I_0 lies at distance ≤2π n/2n+1 from q_n+1. This implies that (p',α')∈ B, that is, α'<π/2n+1 and (p',α')∈ H^2n+1_-+. Now consider the function
[ h_1: [0,π/2n+1] → ℝ;; t ↦ d_𝕊^1(Φ(p,α),Φ(p',t))-d_𝕊^2n+1((p,α),(p',t)). ]
Then h_1 is decreasing: this is because we have Φ(p',t)=e^i(π+t), so d/dtd_𝕊^1(Φ(p,α),Φ(p',t))=-1, while the function t↦ d_𝕊^2n+1((p,α),(p',t)) is 1-Lipschitz, owing to t↦(p',t) being a unit speed geodesic. So we conclude that
d_𝕊^1(Φ(p,α),Φ(p',0))-d_𝕊^2n+1((p,α),(p',0))≥ d_𝕊^1(Φ(p,α),Φ(p',α'))-d_𝕊^2n+1((p,α),(p',α'))>2π n/2n+1.
This implies that d_𝕊^2n+1((p,α),(p',0))<π/2n+1, so we have α<π/2n+1. We also have d_𝕊^2n+1((p,0),(p',0))≤π/2 (if not, d_𝕊^2n+1((p,α),(p',0)) would be more than π/2), so we can easily deduce (e.g. by the cosine rule) that
d_𝕊^2n+1((p,0),(p',0))≤ d_𝕊^2n+1((p,α),(p',α')).
Now, we divide the analysis into 2 cases according to the quadrant to which (p,α) belongs:
* Suppose (p,α)∈ H^2n+1_++.
Then Φ(p,α)=Φ(p,0), so by <Ref> we have
d_𝕊^1(Φ(p,0),Φ(p',0))-d_𝕊^2n+1((p,0),(p',0))>2π n/2n+1,
contradicting <Ref>.
* Suppose (p,α)∈ H^2n+1_-+. We also know that α∈[0,π/2n+1]. However, the function
[ h_2: [0,π/2n+1] → ℝ;; t ↦ d_𝕊^1(Φ(p,t),Φ(p',0))-d_𝕊^2n+1((p,t),(p',0)) ]
is increasing because d/dtd_𝕊^1(Φ(p,t),Φ(p',0))=1 and the function d_𝕊^2n+1((p,t),(p',0)) is 1-Lipschitz, due to t↦(p,t) being unit speed geodesic. So we have
h_2(π/2n+1)≥ h_2(α)>2π n/2n+1.
however, this implies that d_𝕊^2n+1((p,π/2n+1),(p',0))<π/2n+1, which is impossible because (p,π/2n+1) is at distance π/2n+1 from 𝕊^2n, and 𝕊^2n contains the point (p',0).
§.§.§ Second inequality
We will assume that (p,α),(p',α')∈ D satisfy <Ref>
and obtain a contradiction.
<Ref> implies that d_𝕊^1(Φ(p,α),Φ(p',α'))<π/2n+1, so Φ(p,α) and Φ(p',α') are in the same subinterval of 𝕊^1 of length π/2n+1. We can assume that this interval is I_0, so it has q_0 and -q_n as endpoints. Also note that
α+α' =(π/2-d_𝕊^2n+1(e_n+2,(p,α)))+(π/2-d_𝕊^2n+1(e_n+2,(p',α')))
≤π-d_𝕊^2n+1((p,α),(p',α'))
<π/2n+1.
So both α,α' are
less than π/2n+1. We consider three cases:
* Both (p,α) and (p',α') are in H^2n+1_++. Then (p,α),(p',α') are both in V^2n+1_0. So by <Ref> and <Ref> we have
d_𝕊^2n+1((p,α),(p',α'))≤diam(V^2n+1_0)=arccos(-2n/2n+2)<2n/2n+1π,
contradicting <Ref>.
* Both (p,α) and (p',α') are in H^2n+1_-+. Then (p,α),(p',α') are both in -V_n^2n+1, so as in the previous case we have d_𝕊^2n+1((p,α),(p',α'))<2n/2n+1π,
contradicting <Ref>.
* We have (p,α)∈ H^2n+1_++ and (p',α')∈ H^2n+1_-+. So Φ(p,α)=q_0 and, as α'<π/2n+1, we have Φ(p',α')=-q_ne^iα'. Now, the function
[ h_3: [0,π/2n+1] → ℝ;; t ↦ d_𝕊^2n+1((p,α),(p',t))-d_𝕊^1(Φ(p,α),Φ(p',t)). ]
is increasing, because t↦ d_𝕊^2n+1((p,α),(p',t)) is 1-Lipschitz (due to the fact that t↦(p',t) is a unit speed geodesic) and d/dtd_𝕊^1(Φ(p,α),Φ(p',t))=-1.
So we have that h_3(π/2n+1)≥ h_3(α)>2n/2n+1π. Which implies that d_𝕊^2n+1((p,α),(p',π/2n+1))>2n/2n+1π. This, however, contradicts the fact that, if e_n+2=(0,…,0,1)∈ℝ^2n+2, then
d_𝕊^2n+1((p,α),(p',π/2n+1)) ≤ d_𝕊^2n+1((p,α),e_n+2)+d_𝕊^2n+1(e_n+2,(p',π/2n+1))
≤π/2+(π/2-π/2n+1)=2n/2n+1π.
§ DISTANCE FROM 𝕊^3 TO 𝕊^4
In this section we prove that d_GH(𝕊^3,𝕊^4)
=1/2ζ_3 by constructing a surjective function F_n:𝕊^n+1→𝕊^n and proving that for n=3 it has distortion ζ_3. The construction we utilize is inspired in the proof of <cit.>.
The author suspects that F_n also has distortion ζ_n for more values of n (e.g. n=4,5, see <Ref>), but he has verified that F_n has distortion >ζ_n for n≥7 (see <Ref>). Let us first introduce some notation.
* We identify 𝕊^n with 𝕊^n×{0}⊆𝕊^n+1⊆ℝ^n+2, and we let e_n+2 be the north pole (0,…,0,1)∈ℝ^n+2.
* For any x,x'∈𝕊^n+1 such that x'≠-x and λ∈[0,1], we let λ x⊕(1-λ)x' denote the point z in the geodesic segment [x,x'] such that d_𝕊^n+1(z,x')=λ d_𝕊^n+1(x,x').
* Let σ:𝕊^n+1∖{e_n+2,-e_n+2}→𝕊^n be the projection to 𝕊^n; that is, for each x in the domain, σ(x) will be the point of 𝕊^n closest to x.
* p_1,…,p_n+2 will be the vertices of a fixed regular simplex inscribed in 𝕊^n.
* For each x∈𝕊^n+1 let α(x):=d_𝕊^n+1(x,e_n+2).
* For each i=1,…,n+2, let
V_i:=
{x∈𝕊^n;d_𝕊^n+1(p_i,x)<
d_𝕊^n+1(p_j,x) for all j≠ i}.
* For each i=1,…,n+2 let N_i be C_e_n+2V_i∖{e_n+2}. Note that each N_i is convex.
Note that ⋃_i=1^n+2N_i is dense in {x∈𝕊^n+1;x_n+2≥0}. We define the function
F_n:⋃_i=1^n+2N_i→𝕊^n
by
F_n(x)=(1-f(α(x)))p_i⊕ f(α(x))σ(x), if x in N_i,
where f(x):=max(0,x+1-π/2), see <Ref>.
Now, it would be enough to prove that dist(F_n)=ζ_n in order to prove that d_GH(𝕊^n,𝕊^n+1)≤ζ_n/2. Indeed, if dist(F_n)=ζ_n, then the function
F_n':(⋃_i=1^n+2N_i)⋃(-⋃_i=1^n+2N_i)
→𝕊^n
defined by F_n'(x)=-F_n'(-x)=F_n(x) for all x∈⋃_i=1^n+2N_i also has distortion ζ_n by <Ref>, and its graph is a metric correspondence (as defined in <Ref>) between 𝕊^n and 𝕊^n+1. Thus, d_GH(𝕊^n,𝕊^n+1)≤ζ_n/2, as we wanted.
For simplicity we will write d instead of d_𝕊^n or d_𝕊^n+1 during the remainder of <Ref>, and we write F:H^4_+→𝕊^3 instead of F_3.
In the file of the GitHub repository <cit.> we have included a python program which selects a finite set S of random points in 𝕊^n+1 and computes the maximum value of |d(x,x')-d(F_n(x),F_n(x'))|, for x,x' in S.
The distortion of F being at most ζ_n means that for all x,x' in ⋃_i=1^n+2N_i we have
|d(x,x')-d(F_n(x),F_n(x'))|≤ζ_n.
Depending on whether x,x' are in one cone N∈{N_1,…,N_n+2} or in different ones N,N' and depending on whether d(x,x')-d(F_n(x),F_n(x')) is positive or negative, <Ref> turns into four different inequalities, which give names to the following subsections.
To experimentally check whether the distortion of F was ζ_n, we used a python program (see in <cit.>) which chooses a finite set S⊆𝕊^n+1 of random points in the sphere (choosing the coordinates of the points according to a Gaussian distribution) and computes the distortion |d(x,x')-d(F_n(x),F_n(x'))| for all x,x'∈ S.
Thanks to Daniel Hurtado for optimizing the program so that it could handle sets S with hundreds of thousands of points.
This program was also useful for determining a function f:[0,π/2]→[0,1] which, when substituted in <Ref>, would lead to F_n having distortion ζ_n. For example, if instead of the function f from <Ref> we chose f(x)=2x/π (this was actually what the author tried first), then we would have (F_3)>ζ_3.
§.§ x,x' in the same cone N; d(F(x),F(x'))≤ d(x,x')+ζ_3.
We prove the stronger inequality d(F(x),F(x'))≤ d(x,x')+π/2[This inequality holds for all n, not only n=3. An alternative way to prove it is checking that the function F_n is Lipschitz when restricted to each Voronoi cell N_i; the inequality easily follows.].
Let p∈{p_1,…,p_n+2} be the center of the cone N and suppose that d(F(x),F(x'))>d(x,x')+π/2 for the sake of contradiction.
Consider <Ref> below, in which the segments [p,σ(x)] and [p,σ(x')] have length <π/2.
As d(F(x),F(x'))>π/2, the angle ∠ F(x)pF(x') is >π/2 (see <Ref>). Thus,
d(σ(x),σ(x'))≥ d(F(x),F(x'))≥ d(x,x')+π/2.
where the first inequality follows from the spherical cosine rule. So by the triangle inequality, d(x,σ(x))+d(x',σ(x'))≥ d(σ(x),σ(x'))-d(x,x')≥π/2. That is, α(x)+α(x')=π-d(x,σ(x))+d(x',σ(x'))≤π/2. Which leads to contradiction:
d(F(x),F(x'))≤ d(F(x),p)+d(F(x'),p)≤ f(α(x))d(σ(x),p)+f(α(x'))d(σ(x'),p)
≤π/2(f(α(x))+f(α(x')))≤π/2,
where the last inequality uses that the condition α(x)+α(x')≤π/2 implies f(α(x))+f(α(x'))≤1 (see <Ref>).
§.§ x,x' in the same cone N; d(x,x')≤ d(F(x),F(x'))+ζ_3.
Suppose that d(x,x')> d(F(x),F(x'))+ζ_3 for the sake of contradiction. Note that by <Ref> applied to the triangle with vertices A=e_n+2,B=x and C=x', the angle ∠ xe_n+2x' is >π/2, so applying again <Ref> to the triangle with vertices A=e_n+2,B=σ(x),C=σ(x'), with the points B',C' from <Ref> being x,x' respectively, we have d(σ(x),σ(x'))>d(x,x')>d(F(x),F(x'))+ζ_3. Thus,
(π-ζ_3)(π/2-α(x))+(π-ζ_3)(π/2-α(x'))
≥ d(p,σ(x))(1-f(α(x)))+
d(p,σ(x'))(1-f(α(x')))
≥ d(F(x),σ(x))+d(F(x'),σ(x'))≥ d(σ(x),σ(x'))-d(F(x),F(x'))>η_3,
where the first inequality is due to the definition of F and <Ref> <Ref>, the second one is due to the definition of F and the third one follows from the triangle inequality. The inequality above implies α(x)+α(x')≤π-ζ_3/π-ζ_3<1.76. That cannot happen because
α(x)+α(x')=d(e_n+2,x)+d(e_n+2,x')>d(x,x')>ζ_3>1.82.
§.§ x,x' in different cones N≠ N'; d(x,x')≤ d(F(x),F(x'))+ζ_3.
We prove d(x,x')≤ d(F(x),F(x'))+π/2 (which holds for all F_n, not only F=F_3).
Suppose for the sake of contradiction that d(x,x')> d(F(x),F(x'))+π/2. We will denote α=α(x),α'=α(x'), p,p'∈{p_1,…,p_n+2} will be the centers of the cones N,N' and
H':={x∈𝕊^m;d(p',x)<d(p,x)} and
H:={x∈𝕊^m;d(p,x)<d(p',x)}.
We divide the analysis into 2 cases.
* α,α'>π/2-1. As σ(x)∈ H and σ(x')∈ H', by <Ref> and using that d(p,p')=ζ_3>π/2 we get
d(F(x),H')≥ζ_3/2(π/2-α)
d(F(x'),H)≥ζ_3/2(π/2-α').
Note that the geodesic from F(x) to F(x') passes through the boundary ∂ H=∂ H' at some point q_0, thus
d(F(x),F(x'))=d(F(x),q_0)+d(F(x'),q_0)≥ d(F(x),H')+d(F(x'),H)≥ζ_3/2(π-α-α').
And now, using the triangle inequality,
α+α'≥ d(x,x')
≥ζ_3+d(F(x),F(x'))
≥ζ_3+ζ_3/2(π-α-α'),
So α+α'≥ζ_31+π/2/1+ζ_3/2≥π/21+π/2/1+π/4>2.26, which contradicts <Ref>.
* α≤π/2-1. Then F(x)=p, so d(F(x),F(x'))≥ d(p,H')=ζ_3/2≥π/4, so d(x,x')≥ d(F(x),F(x'))+π/2≥3π/4, contradicting d(x,x')≤α+α'≤π-1.
§.§ x,x' in different cones N≠ N'; d(F(x),F(x'))≤ d(x,x')+ζ_3.
This is the inequality which is most difficult to prove, and it fails when instead of n=3 we have n≥7; such a failure of the inequality can be reached when σ(x)=-p'[σ(x) cannot be exactly -p', but it can be arbitrarily close to it.], σ(x')=-p and α=α'=π-1/2, as in <Ref>. Then we have d(F(x),F(x'))=π and by the cosine rule applied to the triangle with vertices e_n+2,x,x', we have
d(x,x')=arccos(cos(π-1/2)^2-sin(π-1/2)^2/n+1)
n→∞⟶arccos(cos(π-1/2)^2)∼1.339.
One can then check that for n≥7, d(x,x')+ζ_n<π=d(F(x),F(x')).
In order to proceed, suppose for the sake of reaching a contradiction that d(F(x),F(x'))>d(x,x')+ζ_3. As in the previous case, we need to introduce some notation: let p,p'∈{p_1,…,p_n+2} be the centers of the cones N,N' respectively, and for brevity we will denote
κ :=d(σ(x),σ(x'))
α :=d(x,e_n+2)
α' :=d(x',e_n+2)
We will give a computer assisted proof. We first prove that κ∈[0.7,1.4] and α+α'∈[π-2,2] in <Ref>. Then we give some upper bounds for d(F(x),F(x'))
(the functions 𝒰_1,𝒰_2,𝒰_6 introduced below) and a lower bound for d(x,x') (the function ℒ defined below) which only depend on α+α' and κ. Then we check that that (min(𝒰_1,𝒰_2,𝒰_6))(α+α',κ) cannot be bigger than ℒ(α+α',κ)+ζ_3 for all (α+α',κ) in the rectangle [0.7,1.4]×[π-2,2], concluding the proof.
* α,α' are both >π/2-1.
§.§.§ Lower bound for d(x,x')
d(x,x')≥ℒ(α+α',κ) :=
arccos(cos(α+α')+sin(α+α'/2)^2(1+cos(κ)))
=
arccos(cos(α+α')1-cos(κ)/2+1+cos(κ)/2).
Note that the function ℒ(x,y) is defined for all x,y∈ℝ.
By the spherical cosine rule applied to the triangle with vertices x,e_n+2,x' we have
d(x,x') =arccos(cos(α)cos(α')+sin(α)sin(α')cos(κ))
=arccos(cos(α+α')+(1+cos(κ))sin(α)sin(α'))
≥arccos(cos(α+α')+sin(α+α'/2)^2(1+cos(κ))).
The last inequality above is Jensen's inequality applied to the function x↦ln(sin(x)) in the interval (0,π).
§.§.§ Bounds for α+α' and κ
We need some bounds on α+α' and κ for some of our following arguments to work.
We have
sin(α+α'/2)cos((π-ζ_3)(π/2-α+α'/2))≤√(3)/2√(2).
This implies α+α'<2.
Note that d(x,F(x))+d(x',F(x')) has to be >ζ_3 due to the triangle inequality and the fact that |d(F(x),F(x'))-d(x,x')|>ζ_3. This cannot happen if α,α' are very close to π/2; let us quantify this observation.
The triangle with vertices x,σ(x),F(x) has a right angle at σ(x) and sides d(x,σ(x))=π/2-α and d(σ(x),F(x))≤(π-ζ_3)(π/2-α), so we see by the cosine rule that
d(x,F(x))
≤
A(α):=
arccos(cos(π/2-α)cos((π-ζ_3)(π/2-α)))
But by <Ref> below, the function A(α) is convex in the interval (π/2-1,π/2). So for any fixed value of α+α' we will have
ζ_3≤
d(x,F(x))+d(x',F(x'))
≤
A(α)+A(α')≤2A(α+α'/2)
So A(α+α'/2)>ζ_3/2. So cos(A(α+α'/2))≤cos(ζ_3/2)=√(3)/2√(2).
We have κ≥0.7.
We will need <Ref> below (which does not need κ>0.7 in its proof).
First we prove that κ cannot be very small: Suppose π-ζ_3+κ<arccos(-1/16)(so κ<0.315 approximately). Then we will prove that d(F(x),F(x'))≤ζ_3, contradicting d(F(x),F(x'))> d(x,x')+ζ_3.
To prove it we first check that d(p,F(x'))≤ζ_3, or in general that p is at distance <ζ_3 of all points of the segment [p',σ(x')].
Note that the triangle with vertices p,p',σ(x') has sides d(p,p')=ζ_3, d(p',σ(x'))≤π-ζ_3 and d(p,σ(x'))≤π-ζ_3+κ. We can assume due to <Ref> that d(p',σ(x'))=π-ζ_3 and d(p,σ(x'))=π-ζ_3+κ, so the angle ∠ pp'σ(x') is <π/2 due to the cosine rule (equality is reached for π-ζ_3+κ=arccos(-1/16)), thus the maximum distance from p to points of the segment [p',σ(x')] is reached at p'.
Now, knowing that d(p,F(x'))<ζ_3, d(p,σ(x))≤π-ζ_3 and d(F(x'),σ(x))≤κ+π-ζ_3, we repeat the same reasoning to obtain that the distance from F(x') to any point of the segment [p,σ(x)] (and in particular F(x)) is ≤ζ_3.
So we can assume κ∈(0.3,0.7). Now, by <Ref> we have
d(x,x')≥ℒ(α+α',κ)≥ℒ(π-2,κ)=arccos(cos(2)cos(κ)-1/2+1+cos(κ)/2).
We also give an upper bound for d(F(x),F(x')) depending only on κ:
letting C(κ) be given by <Ref>, we have d(x,F(x'))≤ C(κ) by <Ref>, and note that
d(F(x'),σ(x))
≤ d(F(x'),σ(x'))+d(σ(x'),σ(x))
≤π-ζ_3+κ.
Thus, by <Ref> and <Ref> we conclude that
d(F(x),F(x'))
≤𝒰_3(κ):=
arccos(
-4/√(15)√(C(κ)^2+cos(π-ζ_3+κ)^2-1/2C(κ)cos(π-ζ_3+κ))).
However, no value κ∈(0.3,0.7) satisfies ℒ(π-2,κ)+ζ_3≤𝒰_3(κ) (see the graph of 𝒰_3(κ)-ℒ(π-2,κ)-ζ_3 below), so we are done.
We have κ∈[0.7,1.4] and α+α'∈[π-2,2].
Clearly α+α'>π-2, as we are in the case where α,α'≥π/2-1. The facts that α+α'<2 and κ≥0.7 follow from <Ref>.
In order to deduce that κ<1.4 one can consider the following upper bounds for d(F(x),F(x')) obtained from the triangle inequality:
d(σ(x),σ(x'))+(d(F(x),σ(x))+d(F(x'),σ(x')))≤𝒰_6(α+α',κ):=
κ+(π-ζ_3)·(π/2-α)+(π-ζ_3)·(π/2-α')=κ+(π-ζ_3)·(π-(α+α')).
d(p,p')+d(p,F(x))+d(p',F(x'))≤𝒰_7(α+α',κ):=ζ_3+(π-ζ_3)((α+α')-π+2).
If d(F(x),F(x'))>d(x,x')+ζ_3, then we have min(𝒰_6,𝒰_7)(α+α',κ)-ℒ(α+α',κ)-ζ_3>0.
However, one can check numerically (see <Ref> and file in <cit.>) that both inequalities cannot happen if κ≥1.4.
§.§.§ Upper bounds for d(F(x),F(x'))
First we obtain an upper bound for d(F(x),F(x')) which is useful when κ is small. We need the following lemma.
We have
cos(d(p,F(x')))≥ C(κ):=-√(1-2cos(ζ_3-κ)+16cos^2(ζ_3-κ))/√(15).
We prove that arccos(C(κ)) is an upper bound for the distance from p to any points on the segment [σ(x'),p']. Note that d(p,p')=ζ_3, d(p',σ(x'))≤π-ζ_3 and d(p,σ(x'))≤π-ζ_3+κ≤π. By <Ref> we can assume d(p,p')=ζ_3, d(p',σ(x'))=π-ζ_3 and d(p,σ(x'))=π-ζ_3+κ. So by <Ref>, the from p to all points of the segment [σ(x'),p'] is bounded above by the function C(κ) from <Ref>.
Letting κ_2:=κ+(π-ζ_3)(π/2-α+α'/2), if π-ζ_3+arccos(C(κ))+κ_2<2π, then
d(F(x),F(x'))≤arccos(
-4/√(15)√(C(κ)^2+cos(κ_2)^2-1/2C(κ)cos(κ_2))).
If on the other hand, π-ζ_3+arccos(C(κ))+κ_2≥2π, then d(F(x),F(x'))≤2π.
Notice that, swapping x,x' if necessary, we have d(F(x'),σ(x))<κ_2. So the triangle with vertices F(x'),σ(x),p has side lengths d(p,F(x'))≤arccos(C(κ)), d(p,σ(x))≤π-ζ_3 and d(F(x'),σ(x))≤κ_2. If the sum of these three lengths is <2π then by <Ref> and <Ref> we conclude.
d(F(x),F(x'))
≤𝒰_2(α+α',κ):=
(π-ζ_3)(α+α'/2-π/2+1)
+
G(κ,α+α'/2-π/2+1),
where
G(κ,β'):=arccos(-1/4cos((π-ζ_3)β')+√(15)/4sin((π-ζ_3)β')16cos(π-ζ_3+κ)+1/15),
Assume that α'≥α. We will use the inequality
d(F(x),F(x'))≤ d(F(x),p)+d(p,F(x'))≤(π-ζ_3)(α-π/2+1)+d(p,F(x')).
Now, to find an upper bound for d(p,F(x')) we first consider a point q_0∈𝕊^4 such that d(p',q_0)=π-ζ_3 and d(p,q_0)=π-ζ_3+κ. We let P_0:=(α'-π/2+1)q_0+(π/2-α')p'.
d(p,F(x'))<d(p,P_0).
We first note that the points x and x' are fixed, so κ,α' will remain constant during this proof. Let β'=α'-π/2+1∈[0,1].
For each a∈[0,π-ζ_3] and b∈[0,π-ζ_3+κ] with a+b≥ζ_3=d(p,p') we consider a point q=q(a,b)∈𝕊^4 given by d(p',q)=a and d(p,q)=b (so q could be σ(x') for adequate a,b), and let F(a,b)=β'q+(1-β')p'. We will be done if we prove that the values a_0,b_0 which
maximize d(p,F(a,b)) are a_0=π-ζ_3, b_0=π-ζ_3+κ, so d(p,F(a_0,b_0))=d(p,P_0).
Let us first prove b_0=π-ζ_3+κ. It is not hard to see that for any fixed value of a, d(p,F(a,b)) is increasing with b. Thus, if ζ_3+a_0>π-ζ_3+κ (so that the triangle inequality in the triangle with vertices p,p',q(a,b) holds), we necessarily have b_0=π-ζ_3+κ. But[Note that a can take the value π-2ζ_3+κ, because κ>0.7.]
d(p,F(a_0,b_0))≥ d(p,F((π-ζ_3+κ)-ζ_3,π-ζ_3+κ))=π-ζ_3+κ.
Thus, a_0=d(p',q(a_0,b_0))≥ d(p',F(a_0,b_0))≥ d(p,F(a_0,b_0))-ζ_3=π-2ζ_3+κ, concluding the proof that b_0=π-ζ_3+κ.
So it will be enough to check that d(p,F(a,b_0)) is increasing with a. This can be proved using convexity arguments, but in this special case we can compute d(p,F(a,b_0)) explicitly: first note that applying the cosine rule twice, we obtain
cos(∠ pp'F(a,b_0))
=
cos(∠ pp'q(a,b_0))
=
cos(b_0)-cos(a)cos(ζ_3)/sin(a)sin(ζ_3).
cos(d(p,F(a,b_0))) =cos(ζ_3)cos(aβ')+
sin(ζ_3)sin(aβ')cos(b_0)-cos(a)cos(ζ_3)/sin(a)sin(ζ_3)
=
-cos(aβ')/4+
sin(aβ')/sin(a)(cos(b_0)+cos(a)/4).
So, computing the derivative with respect to a, we obtain
∂/∂ acos(d(p,F(a,b_0)))
=
β'sin(aβ')/4+sin(aβ')/sin(a)·-sin(a)/4+∂/∂ a(sin(aβ')/sin(a))(cos(b_0)+cos(a)/4).
Using that β'∈[0,1] and cos(b_0)<-1/4, one readily checks that both the sum of the first two terms above and the third term above are negative for all a∈(0,π). So d(p,F(a,b_0)) increases with a, as we wanted.
Now, by the cosine rule (using d(p,p')=ζ_3) we have that
cos(∠(p,p',q_0))=cos(π-ζ_3+κ)+1/16/15/16=16cos(π-ζ_3+κ)+1/15.
Thus, by <Ref> and the cosine rule and letting β'=α'-π/2+1,
d(p,F(x'))
≤ G(κ,β'):=
d(p,P_0)=
arccos(-1/4cos((π-ζ_3)β')+√(15)/4sin((π-ζ_3)β')16cos(π-ζ_3+κ)+1/15).
Letting the RHS in <Ref> be G(κ,β')=d(p,(α-π/2+1)q+(π/2-α)p'), our upper bound is d(F(x),F(x'))≤(π-ζ_3)β+G(κ,β').
But note that for fixed α+α' with α'≥α, this upper bound will be minimized for α=α'=α+α'/2 (this follows from the fact that, due to the definition of P_0, for any fixed κ the function G(κ,β') is (π-ζ_3)-Lipschitz in β'), completing the proof of <Ref>.
d(F(x),F(x'))≤arccos(
-4/√(15)√(C(k)^2+cos(ζ_3-k)^2+1/2C(k)cos(ζ_3-k))).
Letting k_2:=k+(π-ζ_3)(α+α'/2-π/2+1), we also have
d(F(x),F(x'))≤arccos(
-4/√(15)√(C^2+cos(k_2)^2-1/2Ccos(k_2))).
We consider the triangle σ(x)pF(x'). In order to find the upper bound for d(F(x),F(x')), we find an upper bound for the maximum distance D_2 from F(x') to any point of [p,σ(x)]. We know that d(p,σ(x))≤π-ζ_3, d(p,F(x'))≤arccos(C)∈[ζ_3,π] and d(σ(x),F(x'))≤π-ζ_3+k. For the second inequality use instead that, swapping x,x' if necessary, d(p,F(x'))≤ k_2.
Reasoning similarly as above[In this case we can first note that for fixed values of d(p,F(x')) and d(σ(x),F(x')), D_2 is maximized for maximal d(p,σ(x)), so we can assume d(p,σ(x))=π-ζ_3.], we can check that D_2 is maximized when the distances are d(p,σ(x))=π-ζ_3, d(p,F(x'))=arccos(C) and d(σ(x),F(x'))=π-ζ_3+k, so by <Ref> we are done. Wait, what if d(p,F(x'))+d(σ(x),F(x'))>π. Then the maximum is just π. So UBdF1 is a function defined in pieces I guess. Same with UBdF3. Formalize this proposition better.
§.§.§ Concluding the proof of the inequality in <Ref>
Thus, we have a lower bound ℒ:[π-2,2]×[0.7,1.4]→ℝ for d(x,x') and three upper bounds 𝒰_1,𝒰_2,𝒰_6:[π-2,2]×[0.7,1.4]→ℝ for d(F(x),F(x')) in terms of α+α' and κ (see <Ref> for the definition of 𝒰_6). Letting
(x, y):=
min(𝒰_1,𝒰_2,𝒰_6)(x,y)-ℒ(x,y)-ζ_3,
it would be enough to check that (x, y)<0 for all (x,y)∈ R:=[π-2,2]×[0.7,1.4].
This is checked numerically in the file in <cit.>. Indeed, as explained in <Ref>, it is enough to check that (x,y)<-0.08 for all points (x,y) in a grid in the rectangle R with coordinates spaced by d=10^-5, and then check that if two points (x,y) and (x',y') in R satisfy |x-x'|,|y-y'|≤10^-5/2, then |(x,y)-(x',y')|<0.08. This follows from the facts that:
* If |x-x'|,|y-y'|≤10^-5/2, then |ℒ(x,y)-ℒ(x',y')|<10^-5. This follows from the definition of ℒ(x,y), which is the distance between two points of 𝕊^n+1 at distance x/2 of e_n+2 whose geodesics to e_n+2 form an angle of y).
* If |x-x'|,|y-y'|≤10^-5/2, then |𝒰_1(x,y)-𝒰_1(x',y')|<0.07. Similarly with 𝒰_2,
𝒰_6. This is easy to see for 𝒰_2,
𝒰_6 (in those cases we can change 0.07 by 10^-4). To check it for 𝒰_1 one can use that the function κ↦ C(κ) is 1-Lipschitz, that C(κ)^2+cos(κ_2)^2-1/2C(κ)cos(κ_2)≥3C(κ)^2/4≥0.15 for all κ∈[0.7,1.5] and the uniform continuity constants of x↦√(x),x↦arccos(x).
* Either α or α' are <π/2-1. We will assume α'<π/2-1, so F(x')=p'. We can assume α>π/2-1, lest d(F(x),F(x')) be ζ_3. In this case we will also prove the inequality numerically using upper and lower bounds for d(F(x),F(x')) and d(x,x') respectively.
Note that, similarly as in <Ref>, we can deduce that, if β=α-π/2+1,
d(F(x),F(x'))=d(F(x),p')
≤ G(κ,β).
We also have a couple of lower bounds for d(x,x') in this case.
We also know that d(x,x')≥ℒ(α+α',κ)≥ℒ(α,κ):=ℒ_1(α,κ).
We also know that d(x,x')≥ℒ_2(α,κ), where ℒ_2(α,κ)=α when κ≥π/2 (as in that case, we can assume α'=0, which reduces d(x,x') and does not change d(F(x),F(x'))) and when κ≤π/2, ℒ_2(α,κ) is just the distance from x' to the geodesic e_n+2x, that is,
ℒ_2(α,κ)=arcsin(sin(α)sin(κ)).
However, it can be checked numerically (see <Ref> and file in <cit.>) that for no values α∈[π/2-1,π/2] and κ∈[0.3,π] can we have G(κ,α-π/2+1)>max(ℒ_1(α,κ),ℒ_2(α,κ)), so we are done.
§ SPHERICAL GEOMETRY LEMMAS USED IN <REF>
We will use the notation d=d_𝕊^n+1 in this section.
Let T be a spherical triangle with vertices A,B,C and opposite side lengths a,b,c respectively. If b,c≤π/2 and the angle α at A is ≤π/2, then a≤π/2.
By the spherical cosine rule,
cos(a)=cos(b)cos(c)+sin(b)sin(c)cos(α)≥0.
Let T be a spherical triangle with vertices A,B,C and opposite side lengths a,b,c respectively. If b,c≤π/2 and a≥π/2, then we have α>a≥π/2, where α is the angle at a. If, moreover, B',C' are points in the sides AB and BC respectively and a'=d(B',C'), then a'≤ a.
For the first part, by <Ref> we have α≥π/2, so by the spherical cosine rule,
cos(a)=cos(b)cos(c)+sin(b)sin(c)cos(α)≥cos(α).
For the second part, note that if in the formula above we change b,c by some smaller values b',c', then cos(a) increases.
Let p,p' be two points in 𝕊^m (m≥2) at distance a, and consider the hemispheres H'={x∈𝕊^m;d(p',x)<d(p,x)},
H={x∈𝕊^m;d(p,x)<d(p',x)}. Then for any point x∉H' and any λ∈[0,1],
d(λ p+_S(1-λ)x,H')≥λa/2.
Note that for any point q∉H', d(H',q) is just d(O_H',q)-π/2, where O_H' is the center of H'. So the inequality above reduces, by Jensen's inequality applied to the function h:[0,1]→ℝ;λ↦ d(λ p+_S(1-λ)x,H') (note h(0)≥0 and h(1)=a/2), to proving that for any point O and any geodesic γ(t), the function t↦ d(O,γ(t)) is convex in the intervals where d(γ(t),O)≥π/2. To prove this we can assume that 𝕊^m=𝕊^2⊆ℝ^3 and O=(0,0,-1), so we just have to prove that for any k∈[0,1], the function arcsin(ksin(t)) is convex in [0,π], which can be seen from computing its second derivative with respect to t.
If we let ρ=π-ζ_3=arccos(1/4), the function A(t)=arccos(cos(t)·cos(ρ t)) is convex in the interval (0,1).[Thanks to River Li for his help in the proof that A(t) is convex; see this https://math.stackexchange.com/a/4911668/807670MathStackExchange answer.]
Let c=cos(t),s=sin(t),c_1=cos(ρ t),s_1=sin(ρ t) during this proof (note that c,c_1,s,s_1>0 for t∈(0,1)).
The second derivative of A(t) is given by the following formula
A”(t)=-1/(1-c^2c_1^2)^3/2(2ρ ss_1-cc_1ρ^2s^2-cc_1s_1^2).
So we just need to prove that ∀ t∈(0,1), 2ρ ss_1-cc_1ρ^2s^2-cc_1s_1^2>0. This follows from the following two inequalities:
* ρ ss_1>cc_1ρ^2s^2; this is equivalent to s_1>cc_1ρ s, which is true because for all t∈(0,1), tan(ρ t)>ρ t≥ρsin(t)≥ρsin(t)cos(t).
* ρ ss_1>cc_1s_1^2; in fact we have ρ ss_1>s_1^2, as ρsin(t)>sin(ρ t) for all t∈(0,1).
Let u,v,w be the vertices of a spherical triangle in 𝕊^n with the sides opposite to u,v,w having lengths x_1,x_2,x_3 respectively (so x_1+x_2+x_3≤2π). Then the maximum distance α between u and any point of the geodesic passing through v,w satisfies
cos(α)
=-√(cos(x_2)^2+cos(x_3)^2-2cos(x_1)cos(x_2)cos(x_3))/sin(x_1).
Note that α∈[π/2,π] is π minus the angle between u and the plane containing v and w. Letting a=⟨ v,w⟩=cos(x_1),b=⟨ u,w⟩=cos(x_2) and c=⟨ u,v⟩=cos(x_3), we have
π-α=arcsin(|(u,v,w)|/√(1-a^2))
=
arcsin√([ 1 c b; c 1 a; b a 1 ])/√(1-a^2)
=arcsin(√(1+2abc-a^2-b^2-c^2)/√(1-a^2)).
So
cos(α)
=
-√(b^2+c^2-2abc)/√(1-a^2)
=-√(cos(x_2)^2+cos(x_3)^2-2cos(x_1)cos(x_2)cos(x_3))/sin(x_1).
Fix p∈𝕊^2 and a≤ b numbers in [0,π]. For any points q,r∈𝕊^2 with d(p,q)=a and d(p,r)=b let t=d(q,r). Then the function f(t) which gives the maximal distance from p to any points in the geodesic segment qr is well defined and increasing. More concretely,
* If a+b<π, then f:[b-a,b+a]→ℝ is given by f(t)=b.
* If a+b>π, then f:[b-a,2π-a-b]→ℝ is given by f(t)=b for all t<arccos(cos(a)/cos(b)). For t≥arccos(cos(a)/cos(b)) f is increasing, with
cos(f(t))
=-√(cos(a)^2+cos(b)^2-2cos(t)cos(a)cos(b))/sin(t).
If b<π/2 then the closed ball centered at p of radius b is convex, so it contains the segment qr. If b≥π/2 and a+b<π, then consider a closed hemisphere H centered at some point at distance b-π/2 of p. Then H contains some point r at distance b of p, and any segment between r and points at distance a of p is entirely contained in H, proving that f(t)≤ b.
Finally, assume that a+b>π, and fix some point q at distance a of p. The point r can be any point in the circle C of points at distance b of p; as r moves from the point in C closest to q to the the point in C furthest to q, t=d(q,r) increases from b-a to 2π-a-b.
If cos(t)≥cos(a)/cos(b), then by the cosine rule the angle ∠ prq at r is at most π/2, so f(t)=d(p,r)=b. If cos(t)<cos(a)/cos(b), then ∠ prq is obtuse, and the maximal distance from p to points of the segment qr is given by <Ref>. Note that as r moves further from q, the angle between 0⃗p⃗ and the plane containing the origin, q and r decreases, so f is increasing.
Let a,b,c∈[0,π] satisfy a≤ b+c,b≤ a+c,c≤ a+b. We consider spherical triangles with vertices x,y,z in 𝕊^2, with d(y,z)≤ a,d(x,z)≤ b,d(x,y)≤ c. If a+b+c≥2π, then the antipodal point -x may lie in the segment yz. If a+b+c<2π, then the maximal distance from x to any point of the segment yz is reached when d(y,z)= a,d(x,z)=b and d(x,y)=c.
First note that for any a,b,c as above with a+b+c<2π there are spherical triangles with sides a,b,c. If a+b+c=2π, then the union of the three sides of the triangle is a great circle. So the only nontrivial case is a+b+c<2π.
Let f(α,β,γ) be the maximal distance from x to any point of the segment yz, where d(y,z)= α,d(x,z)=β and d(x,y)=γ.
<Ref> proves that for fixed values of α,β, the function f(α,β,γ) increases with γ. It is also not hard to check that for fixed values of β,γ (so we can consider x,z fixed and y in a radius α circumference around z), f(α,β,γ) is increasing with respect to α, and similarly, for fixed α,γ the function f(α,β,γ) increases with β. So among all values α≤ a,β≤ b and γ≤ c, f(α,β,γ) will be maximized when α=a,β=b,γ=c, and we are done.
amsalpha
4
[ABC]Polymath
H. Adams et al.
Gromov-Hausdorff distances, Borsuk-Ulam theorems, and Vietoris-Rips complexes.
arXiv preprint arXiv:2301.00246v1, 2022.
[BBI]BBI
D. Burago, Y. Burago, S. Ivanov.
A Course in Metric Geometry.
American Mathematical Society, 2001.
[BBK]BBK
A. Bronstein, M. Bronstein, R. Kimmel.
Efficient computation of isometry-invariant distances between surfaces.
SIAM J. Scientific Computing 28, no. 5, pp. 1812–1836, 2006.
[CC]CC
J. Cheeger, T. H. Colding.
On the structure of spaces with Ricci curvature bounded below. I.
J. Differential Geom. 46(3) pp. 406-480, 1997.
[CCG]CCG
F. Chazal, D. Cohen-Steiner, L. J. Guibas, F. Mémoli, S. Oudot.
Gromov‐Hausdorff Stable Signatures for Shapes using Persistence. Computer Graphics Forum, 28(5), 1393–1403, 2009.
[Ch]Ch
J. Cheeger.
Differentiability of Lipschitz Functions on metric measure spaces.
GAFA, Geom. funct. anal. Vol. 9, pp. 428–517, 1999.
[Cho]Cho
M. Cho.
On the optimal covering of equal metric balls in a sphere.
J. Korea Soc. Math. Educ. Ser. B: Pure Appl. Math. Volume 4, Number 2 (November 1997), Pages 137–143.
[Co1]Co1
T. H. Colding.
Shape of manifolds with positive Ricci curvature.
Invent. math. 124, pp. 175–191, 1996.
[Co2]Co2
T. H. Colding.
Large manifolds with positive Ricci curvature.
Invent. math. 124, pp. 193–214, 1996.
[CSO]CSO
F. Chazal, V. de Silva, S. Oudot.
Persistence stability for geometric complexes.
Geometriae Dedicata, 173(1):193–214, 2014.
[Ed]Ed
D. A. Edwards.
The Structure of Superspace.
Studies in Topology, Academic Press, 1975.
[Gr]Gr
M. Gromov.
Metric structures for Riemannian and non-Riemannian spaces.
Birkhaüser, 2007.
[HJ]HJ
M. Harrison, R.A. Jeffs.
Quantitative upper bounds on the Gromov-Hausdorff distance between spheres.
arXiv preprint arXiv:2309.11237, 2023.
[Ke]Ke
S. Keith.
Modulus and the Poincaré inequality on metric measure spaces.
Math. Z. 245, pp. 255–292, 2003.
[LMS]LMS
S. Lim, F. Mémoli, Z. Smith.
The Gromov-Hausdorff distances between spheres.
Geometry & Topology 27 pp. 3733–3800, 2023.
[MS]MS
F. Mémoli, Z. Smith.
Embedding-Projection Correspondences for the estimation of the Gromov-Hausdorff distance.
arXiv preprint, arXiv:2407.03295.
[Pe]Pe
P. Petersen.
A finiteness theorem for metric spaces.
J. Differential Geometry 31 pp. 387-395, 1990.
[PW]PW
C. Plaut, J. Wilkins.
Discrete homotopies and the fundamental group.
Advances in Mathematics 232 pp. 271–294, 2013.
[Ro]Ro
S. Rodríguez Martín.
Gromov-Hausdorff distances from simply connected geodesic spaces to the circle.
arXiv preprint, arXiv:2404.05153, 2024.
[Git]Git
S. Rodríguez Martín.
PythonPaperGHDistancesSpheres.
Github Repository, 2024. Link:
https://github.com/saulingo/PythonPaperGHDistancesSpheres.
[Sa]Sa
L. A. Santaló.
Convex regions on the n-dimensional spherical surface.
Annals of Mathematics, 1946, pp. 448–459.
|
http://arxiv.org/abs/2409.02218v1 | 20240903184326 | Early Design Exploration of Aerospace Systems Using Assume-Guarantee Contracts | [
"Nicolas Rouquette",
"Alessandro Pinto",
"Inigo Incer"
] | eess.SY | [
"eess.SY",
"cs.SY"
] |
A Novel Audio-Visual Information Fusion System for Mental Disorders Detection
Yichun Li, Shuanglin Li, Syed Mohsen Naqvi
Intelligent Sensing and Communications Research Group, Newcastle University, UK
September 9, 2024
===============================================================================================================================
§ ABSTRACT
We present a compositional approach to early modeling and analysis of complex aerospace systems based on assume-guarantee contracts.
Components in a system are abstracted into assume-guarantee specifications. Performing algebraic contract operations with Pacti allows us to relate local component specifications to that of the system.
Applications to two aerospace case studies—the design of spacecraft to satisfy a rendezvous mission and the design of the thermal management system of a prototypical aircraft—show that this methodology provides engineers with an agile, early analysis and exploration process.
§ INTRODUCTION
In the early phases of a typical design process, engineers focus on defining an initial set of requirements and allocating them to subsystems and components. This requirement definition process is challenging due to the need for balancing goals and constraints from customers (top-down) against the capabilities of available or implementable components (bottom-up)<cit.>. Stringent requirements may be infeasible due to technological limitations or can lead to over-design<cit.>, overly loose requirements risk producing a system that fails to meet customer goals or lacks robustness against changes in those goals<cit.>.
Systems, sub-systems, and component engineers must negotiate specifications in this early phase. This process involves frequent interactions among subject-matter experts (SMEs) to elicit requirements, a notoriously challenging activity<cit.>. Furthermore, the heterogeneous nature of this activity makes collaboration and negotiation difficult <cit.>. Prototyping <cit.> and simulation <cit.> allow combining subsystem models into system-level models to explore various scenarios. However, several challenges arise with simulation-based analyses during the early design stages. Firstly, sufficiently detailed subsystem simulation models are rarely available when subsystem specifications are still evolving. Secondly, even if such models exist, few experts within the organization possess the skills to utilize them effectively. In particular, managing a system-level simulation that integrate subsystem-level models from all SMEs is beyond the expertise of any single individual. Thirdly, detailed simulations often require significant software setup and runtime, limiting their ability to provide insights into multiple what-if scenarios quickly. Lastly, the results obtained through simulation are typically valid only for specific system implementations under specific operating conditions. Thus, while insightful, analysis by simulation remains an incomplete and time-consuming analysis method poorly suited for early design stage elicitation.
To support the early stage of system design, we seek an alternative prototyping methodology with the following characteristics:
* Implementation flexibility: The methodology and supporting tools should operate on a a range of possible implementations for each component used in the design.
By considering sets of components instead of a specific implementation, we obtain
implementation flexibility, ensuring that analysis results are valid for all valid implementations of the models representing each element. This conservative flexibility prevents engineers from being cornered into sub-optimal solutions.
* Sound and Complete Analysis Procedures: Unlike simulation-based analyses, where a successful run does not guarantee desired characteristics for all possible executions, the methodology should assert that systems will exhibit the desired properties for all potential runs.
* Support for Compositional Reasoning: Compositional reasoning allows the decomposition of analysis and verification problems into smaller, more manageable tasks, addressing the computationally prohibitive nature of analyzing complex systems. It also enables the independent implementation of subsystems and components. Once a set of requirements has been defined and allocated, and it has been demonstrated that these requirements satisfy top-level requirements, different teams or suppliers can independently implement these components. As long as each component satisfies its local requirements, the overall system is guaranteed to meet top-level goals. This formal compositional methodology is essential for handling the increasing number of requirements and avoiding integration problems arising from informal approaches.
* Fast Turnaround and Insightful Feedback: In the early phases, the methodology should enable quick transitions from candidate designs to figures of merit and swift verification that the requirement breakdown meets system objectives. When system objectives are not met, the tools should provide explanations for the shortcomings. Automatic verification tools should offer insights into system properties, moving beyond binary feedback to compute the model of the entire system from component models. This approach allows systems engineers to understand why the system fails to meet top-level requirements, facilitating better-informed decision-making.
These characteristics motivate us to explore the application of a formal and compositional framework in the early stages of design, focusing on formal requirements as the specification of components. Requirements represent multiple implementations, satisfying (1) above. Using requirements means specifying the properties components must satisfy without being prescriptive about how they satisfy them. We adopt a modeling framework based on the theory of assume-guarantee contracts <cit.>. Contracts are formal specifications split in two parts:
the component's guarantees and
the component's expectations of its operational context to deliver its guarantees.
Contracts have several algebraic operations that enable us to carry out compositional system-level analysis <cit.>, satisfying (3) and (4). The analysis enabled by these operations is both sound and complete, as specified in (2).
In this paper, we will represent the assumptions and guarantees of a contract using polyhedral constraints over the variables that describe the interface of a component. We use the Pacti <cit.> tool for modeling and automatic analysis since it meets the above-mentioned requirements. Pacti allows us to model systems using assume-guarantee contracts and can compute explicit representations of several of contracts[Pacti is available as an open-source package under a BSD 3-Clause license at <https://github.com/pacti-org/pacti>]. Pacti is designed with computational efficiency and explainability as top priorities. Consequently, a designer can use it interactively to gather quick feedback (less than three seconds in our experiments) about the impact of varying design parameters. Thus, we believe the system designer can benefit significantly by specifying subsystem operational behaviors as contracts and manipulating them using Pacti.
The purpose of this article is to show that the use of contracts, their algebraic operations, and their tool support leads to the effective exploration of design alternatives for aerospace systems. We accomplish this by applying these techniques to early design exploration of two case studies: (i) a CubeSat-sized spacecraft performing a small-body asteroid rendezvous mission, and (ii) the fuel and thermal management system of a hypothetical aircraft. For the former, we model mission scenarios as sequences of task-specific steps, showing how each step can be modeled by a contract and demonstrating how the composition and merging operations are used to compute mission-level metrics as a function of different spacecraft parameters. We use the notion of viewpoints to split a complex contract into simple subcontracts, each focused on one aspect of the task, such as power, science & communication, and navigation. Once we have defined the contracts for each task, we use Pacti to perform the following tasks: sequencing task-specific steps using contract composition, fusing such sequences across viewpoints using contract merging, computing figures of merit such as bounds on the average battery state of charge across a sequence using optimization, and evaluating bounds for state variables at arbitrary steps in the sequence.
For the fuel and thermal management systems, we model the key components as parametric contracts. The parameters represent tolerance values of component properties, correlating with the quality of the implementations of these components. The system includes other parameters that capture the operating point, including internal variables such as fuel flow rates and external variables such as flight altitude and required level of thrust. We use Pacti to compute the composition of the component contracts and the expected temperature bounds in key points of the thermal management system. This exploration is used to select promising operating points for further optimization. We integrate Pacti with an optimization procedure to find the largest acceptable tolerances for the components that still guarantee the satisfaction of safety margins.
Related work.
Several computer-based frameworks for contract-based modeling and verification, such as AGREE <cit.> and OCRA <cit.>, have been developed over the past decade. In AGREE, assumptions and guarantees are specified in a synchronous language, while in OCRA they are specified in Linear Temporal Logic. These systems allow users to instantiate and connect components and to check whether such composition refines a higher-level contract using a series of satisfiability queries <cit.>. In contrast, Pacti <cit.> explicitly supports multiple viewpoints and the ability to compute the result of the composition operation in the form of a new contract at the interface of the system. This capability provides the actual expression of the contract implemented by the composition of components, resulting in two advantages: (1) it allows users to inspect the result of the composition and to gain insights into the reasons why a system satisfies (or not) its specifications, and (2) it allows to use the result of the composition to compute the space of accepted inputs (or the space of possible outputs) of the entire system, which can be used to design margins or compute robustness metrics. Moreover, Pacti implements algorithms that can compute the quotient of a specification with respect to a partial system to obtain the specification of a missing component.
We have used these features in the two aforementioned case studies. In the first case study, we used contracts to model actions and state transitions in a sequence of spacecraft operations. Several languages have been defined in the past to deal with sequences of operations. The AI planning community uses the Planning Domain Definition Language (PDDL) <cit.>, including extensions for hierarchical planning <cit.>. These languages focus on succinctness and efficiency for the purpose of planning, and there is no tool support for composition and other algebraic operations. On the other hand, the contract framework in Pacti <cit.> focuses on system and sub-system modeling from different viewpoints, and it is expressive enough to encode state transitions. The contract framework focuses on concurrent, component-based engineering of complex systems. Planning/scheduling systems designed for embedded systems impose significant restrictions on the expressiveness of the planning language to ensure responsiveness in limited computing environments. In <cit.>, the authors describe a generalized timeline representation restricting planning constraints to a single variable linear constraint. In contrast, Pacti supports linear constraints with multiple variables as shown in the case study above.
Finally, the analysis presented in the first case study differs from classical planning and scheduling. In planning, we are given a set of task models, an initial state, and a goal state; the problem is to find a sequence of task instances that bring a system from the initial state to the goal state <cit.>. Developing the task models (also referred to as domain authoring) is a major contributor to efficiency, and verifying and validating such models are hard and time-consuming activities <cit.>. The analysis we present can be seen as addressing the joint exploration of system requirements allocation and domain authoring to derive optimal task specifications as a precursor to the development of task models.
Instead of using contracts for operational analysis, the second case study shows using contracts to design and analyze an aircraft sub-system. Some previous work in this area include <cit.> and <cit.>. We use the same case study presented in <cit.>. However, the analysis and design space exploration in this paper extends considerably that of <cit.>, whose focus is refinement checking, not design space exploration. This is an important difference because design space exploration requires the explicit computation of the performance of a composed system. The analysis presented in <cit.>, while spanning several abstraction layers, seems to focus on the use of “vertical” contracts, meaning on the propagation of assumptions and guarantees between one stage of a design process and the next. Instead, we focus on the analysis of requirement decomposition at one level, but by computing the result of the composition explicitly.
Contributions. To the best of our knowledge, this work, which is a continuation of our conference publication <cit.>, is the first to leverage the explicit solution of algebraic operations on contracts in the early analysis and design of aerospace systems. The use of compositional methods and specifically contract-based design has been mainly centered around addressing the refinement verification problem. Namely, given a system specification S as a pair of assumptions A (the set of environments of interest) and guarantees G (the valid set of behavior of an implementation for an environment in A), a designer architects a system of interconnected components {C_1,…,C_n}, and checks whether S is satisfied by the composition without having to compute it.
In contrast, this work exploits the explicit computation of contract operations using Pacti in the design space exploration of aerospace systems. In addition to the space mission case study described in <cit.>, this paper includes a case study on the fuel and thermal management system of a hypothetical aircraft.
Paper outline. Section <ref> provides an overview of contracts and Pacti. Section <ref> presents a case study of early design exploration of a space mission operation involving a small-body asteroid. We model subsystem-specific tasks as contracts and use Pacti to obtain insight into many system-level aspects. Section <ref> presents the use of contracts and Pacti in the analysis and design of the fuel and thermal management system of an aircraft.
We conclude in Section <ref>.
§ OVERVIEW OF CONTRACTS & PACTI
Pacti <cit.> helps designers to reason about specifications and to manipulate them. These specifications are given to Pacti as assume-guarantee contracts, which are pairs (A,G) where A is a set of assumptions, and G a set of guarantees. Contracts resemble the form in which datasheets are typically written, i.e., a component's datasheet specifies that the component will satisfy certain guarantees only when its context of operation satisfies certain assumptions. This section provides a brief overview about Pacti.
For Pacti, a contract has four elements:
* A set of input variables.
* A set of output variables.
* A set of assumptions that are constraints on the input variables.
* A set of guarantees that are constraints on both input and output variables.
Intuitively, the assumptions define a set of possible environments in which the subsystem can be used. The guarantees define the input-output relation that the subsystem promises to enforce when the assumptions are satisfied. Pacti currently supports constraints expressed as linear inequalities, also called polyhedral constraints, but its architecture is extensible to other constraint formalisms.
The algebra of contracts has been formalized in several previous works (see, for instance <cit.>, and references therein). The formalization includes the definition of several operators and their properties. These operators can be used to address several tasks relevant to system design, including:
* Building systems out of subsystems. Suppose that we have specified contracts for a set of subsystems. We can define a system as the assembly of such subsystems. The operation of composition allows us to compute the contract of such a system from the contracts of the assembled subsystems. In other words, the composition operator provides a mechanism for computing system contracts from subsystem contracts.
* Patching systems. The operation of quotient allows us to compute the contract of a subsystem that needs to be composed with an existing subsystem so that the resulting system composition meets a top-level contract. In other words, the quotient finds contracts of missing subsystems from contracts for the system and a partial implementation.
* Validating decompositions. Refinement allows us to tell when a contract is more relaxed, or less demanding than another. When a subsystem satisfies a contract, it is guaranteed to satisfy a more relaxed contract. When a system contract is broken into an assembly of subsystem contracts, refinement allows us to tell whether this decomposition is a valid refinement of the system-level contract.
* Fusing viewpoints. The operation of merging allows us to generate a single contract whose assumptions and guarantees require the satisfaction of the assumptions and guarantees of the merged contracts, respectively. In other words, merging fuses multiple contract viewpoints, a common operation in concurrent design.
The following sub-sections provide an overview of how Pacti supports these common tasks.
§.§ Computing system specifications
Consider the system shown in Figure <ref>. Subsystem M has input i and output o, and M' has input o and output o'. The assumptions and guarantees of M are, respectively, {|i| ≤ 2} and {o ≤ i ≤ 2o + 2}, while the assumptions and guarantees of M' are, respectively, {-1 ≤ o ≤ 0.2} and {o' ≤ o}.
The figure shows how we can use Pacti to obtain the contract of the system formed by assembling these two subsystems. Pacti tells us that the system contract has input i, output o', assumptions {0 ≤ i ≤ 0.2}, and guarantees {o' ≤ i}. Pacti's answer only involves the top-level input and output variables, having eliminated the intermediate variable, o.
§.§ System diagnostics
In Figure <ref>, we have the same subsystems as those shown in Figure <ref>, except that the guarantees of M have been replaced by {|o| ≤ 3}. When we try to form a system using M and M', Pacti tells us that the guarantees of M are insufficient to satisfy the assumptions of M'. Indeed, M' requires its input to be bounded by 0.2, while M guarantees this signal to be bounded by 3. Pacti thus flags a potential flaw in our design.
§.§ Specifying missing subsystems
Figure <ref> shows the situation in which we will implement a system M with input i and output o' having assumptions {|i| ≤ 1} and guarantees {o' = 2i + 1}.
To implement this system, we will use a subsystem M' with input i, output o, assumptions {|i| ≤ 2} and guarantees {o = 2i}.
To implement the top-level specification using M', we have to identify the specification of the missing subsystem denoted by a question mark in the figure. Pacti computes this missing-subsystem specification for us, saying that this subsystem will have input o, output o', assumptions {|o| ≤ 2} and guarantees {o' = o + 1}.
§.§ Fusing viewpoints
Contract-based design enables us to organize specifications in categories, or viewpoints.
Figure <ref> show a subsystem M with two different contracts assigned to it: a functionality contract and a power contract.
The operation of merging can generate a single contract that contains both viewpoints of the design.
When performing analysis, we use only the subsystem specifications for the task at hand.
For example, to carry out power analysis of an entire system, we should be able to use only the power viewpoints of the subsystems that compose it.
§ SMALL-BODY ASTEROID MISSION
In this section,
we focus on the problem of operating a CubeSat-sized spacecraft performing a small-body asteroid rendezvous mission[This case study is available at <https://github.com/pacti-org/cs-space-mission>, tag: SMC-IT-2023.]. Figure <ref> illustrates a simplification of the small-body asteroid approach scenario described in more detail in <cit.>[The Sun, Earth, spacecraft, and small-body asteroid are shown at different scales for illustration purposes.]. During its mission, the spacecraft (blue cube) has the high level objective of approaching an asteroid, making measurements of scientific interest, and sending this data to earth.
To achieve the mission, the spacecraft must perform a sequence of the following basic tasks.
To communicate with Earth, the spacecraft must orient its fixed antennas towards Earth (task ). Depending on the trajectory, this orientation may be suboptimal for the spacecraft panels to produce maximum electrical power. When energy is needed, the spacecraft must find a way to reorient itself towards the sun (task ). Optical science measurements also require orienting the spacecraft's camera towards the asteroid for observation (task ). Finally, autonomous navigation requires a different orientation to yield the desired velocity change when performing a Trajectory Correction Maneuver (task ).
Each of these tasks will be characterized according to certain parameters P. Among these parameters, we will have
energy consumption rates, energy generation rates, rate of convergence for trajectory correction maneuvers, etc.
[Mission analysis and design]
We will be interested in characterizing
sets of task parameters that will ensure that the spacecraft is able to complete mission requirements. The mission objectives will require the spacecraft to record and transmit to earth a certain amount of scientific data, and to operate with its battery level never falling below a certain threshold.
§.§ Leveraging contracts for mission analysis and design
Over the course of the operation of the spacecraft, we will be interested in tracking the values of the following quantities of interest, or state variables. These state variables will be used to state mission-level requirements.
* The state of charge of the battery, denoted by soc. Its value will be a percentage.
* The amount of onboard science data storage, denoted by d. This value will be a percentage. 100% will mean that all onboard storage is used to hold the measurements that have been gathered and not yet transmitted to earth.
* Cumulative science data acquired, denoted by c. This is positive real number. It represents the total amount of data gathered over the course of the mission.
* Relative trajectory estimation uncertainty, denoted by u and given as a percentage.
* Relative trajectory progress, denoted by r and given as a percentage.
As shown in previous work on planning and scheduling for space missions <cit.>, the effect of high-level tasks on the spacecraft's states can often be represented by linear inequalities of the form a· t ≤ b where t is a time, a is a rate constant, and b is a value constant. State variables that are typically represented using linear constraints include the state of charge and the data generated by science experiments or sent to Earth. This class of constraint formulas fits within the expressiveness of Pacti's polyhedral constraints. Thus, we explore modeling tasks as assume-guarantee contracts in Pacti.
Figure <ref> illustrates how tasks are modeled as contracts, and how sequences are derived through the composition <ref> of task models. Each task instance such as or in Figure <ref> is a contract specifying the system behavior during that step of a mission scenario. The contract defines the entry and exit conditions on the state variables of the spacecraft as assumption and guarantee constraints, respectively, for a given duration Δ T of the mission step.
For example, task specifies assumptions in terms of its input variables {V_^, Δ T_} only, and guarantees in terms of both input and output variables {V_^, V_^, Δ T_}. Time elapses while moving from left to right in Figure <ref>: the inputs of a contract represent the state at time t, while the outputs represent the state at time t + Δ T_. Sequences are obtained by linking the output variables of a contract representing a mission step to the input variables of the next step.
Schedulability analysis (see Problem <ref>) is the exploration of sequences that together refine system-level specification. The requirements specify constraints on the initial conditions, final conditions, duration, and performance at different points in the mission. We leverage the capability of Pacti to compute the composition of contracts and gain insights on the achieved mission level performance. We then implement a search of the hyperparameters P of the model to define the requirements for each task. The hyperparameters of a generic task include minimum and maximum power generation gen_^ for , and minimum and maximum power consumption cons_^ for . Sampling techniques can be used to generate multiple scenarios and to perform schedulability analysis. Our experience so far is that the computational complexity of this approach remains within practical considerations given that analysis in Pacti requires solving a number of linear programming problems proportional to the number of constraints involved. This means that constructing the contract for a scenario involving a finite number of steps to be composed and a finite number of operational requirements to be refined will result in a fixed upper bound on the number of linear programming problems to be solved. Since computing schedulability analyses over hyperparameter samples is easily parallelizable, this exploratory methodology enables a rapid turnaround between scenario contract modeling and analysis results.
§.§ Modeling the space mission using polyhedral assume-guarantee contracts.
We will assume that, at any given time over the course of the mission, the spacecraft is executing one out of the following four tasks shown in Figure <ref>:
Orient the spacecraft's antenna towards Earth to downlink science data.
Orient the spacecraft's camera towards the asteroid for science and navigation observations.
Orient the spacecraft's chemical thrusters in a direction to perform a Trajectory Correction Maneuver computed onboard to bring the spacecraft's trajectory closer to that of the asteroid.
Orient the spacecraft's solar panels towards the Sun to charge the battery.
Table <ref> summarizes the qualitative impacts of each type of task on key mission parameters, where +, 0, and - denote, respectively, positive, independent, and negative correlation of state variables with respect to task duration. In this table, we also group state variables by viewpoint. Viewpoints are aspects of concern in the design process, such as power, timing, etc. In this case study, we will write contracts for each viewpoint of each task.
To describe the Pacti contracts we authored in the case study, we name constants to convey their nature and adopt the following concise notation:
* x ∈[t] [γ_⋯]Δ T =
{x | γ_Δ T ≤ x ≤γ_Δ T}
* [γ_⋯] = [γ_, γ_]
* v_- = v_ - v_
In this notation, x is an expression; γ_min and γ_max are constants; and v_entry and v_exist are variables. All tasks assume valid ranges of input variables: task duration must be positive, if applicable, and other inputs must be within valid ranges.
Before we can analyze the schedulability of a mission operation scenario against operational requirements in Section <ref>, we need to construct the scenario contract. Figure <ref> shows a representative operation scenario involving a sequence of the following tasks: , , , and . We decompose into two subtasks: heating, , and a delta-v maneuver, . We leverage Pacti's support for fusing viewpoints and break down the specification of each task across the three viewpoints described above: power, science & communication, and navigation.
Our task now is to define contracts for each task and for each of these viewpoints.
§.§.§ Pacti contracts in the power viewpoint
Qualitatively, , , and have similar power-consuming contracts, whereas has a power-generation contract. The guarantees assert that the change in state of charge will be proportional to a generation or consumption rate applied for the task's duration. involves two different power-consuming behaviors, thruster heating and delta-V, modeled as two subcontracts: and . Thus, the 4-step scenario becomes the composition of 5 steps.
The task template notation below uses the following abbreviations: for constant hyperparameters, for input and output variables, respectively, and for contract assumptions and guarantees, respectively.
2cTask template for 𝒯
_^𝒯
_, Δ T^𝒯
_
Δ T^𝒯≥ 0 _≥ 0
_ - ∈ [_⋯]Δ T^𝒯
_ ∈ [0,100]
The constant hyperparameter, _^𝒯, defines the range of power generation charging the battery during an instance of this task, which guarantees that the difference between the exit and entry state of charge will be proportional to the power generation rate interval constant, [_⋯^𝒯], times the task duration input variable, Δ T^𝒯.
2cTask templates for 𝒯∈{}
_^𝒯
_, Δ T^𝒯
_
Δ T^𝒯≥ 0 _≥ 0
_ - ∈ [_⋯]Δ T^𝒯
_ ∈ [0,100]
The constant hyperparameter, _^𝒯, defines the range of power consumption depleting the battery during an instance of this task, which guarantees that the difference between the entry and exit state of charge will be proportional to the power consumption rate interval constant, [_⋯^𝒯], times the task duration input variable, Δ T^𝒯.
Composing the above for the 5-step scenario yields the contract in Table <ref>.
Notice the counter-intuitive assumption about the first step, , requiring the scenario's initial state of charge to be greater than the worst-case consumption during the step; Pacti derived this assumption from the first step's contract guarantees. Furthermore, although each step contract guarantees an upper bound on the state of charge, Pacti's algebraic operations effectively captured the fact that this upper bound constraint is necessary for the task and is otherwise implied for the subsequent power-consuming steps due to the chaining of the state of charge effects.
§.§.§ Pacti contracts in the science & communication viewpoint
Qualitatively, this viewpoint is unaffected by and tasks: their contracts reduce to a no-change guarantee that the science state variables on exit are equal to those on entry. For , the range of downlink rate during the task instance depletes the onboard science data storage but leaves the cumulative science data acquired unaffected. For , the science data generation rate range during the task instance increases both onboard science data storage and cumulative science data acquired.
2cTask template for 𝒯
[_⋯]
d_, c_, Δ T^𝒯
d_, c_
Δ T^𝒯≥ 0 d_∈ [0,100]
d_- ∈ [_⋯]Δ T^𝒯
c_- = 0
The constant hyperparameter, _^𝒯, defines the range of downlink rate draining the onboard science data storage during an instance of this task, which guarantees that the difference between the exit and entry data storage will be proportional to the downlink rate interval constant, [_⋯^𝒯], times the task duration input variable, Δ T^𝒯.
2cTask template for 𝒯
_⋯
d_, c_, Δ T^𝒯
d_, c_
Δ T^𝒯 ≥ 0 c_≥ 0
d_entry ∈ [0,100 - _maxΔ T_SBO]
d_ ≤ 100
d_ - ∈ [_⋯]Δ T^𝒯
c_ - ∈ [_⋯]Δ T^𝒯
The constant hyperparameter, _^𝒯, defines the range of science data generation rate accumulating in the onboard science data storage during an instance of this task, which guarantees that the difference between the exit and entry data storage (onboard and cumulative) will be proportional to the generation rate interval constant, [_⋯^𝒯], times the task duration input variable, Δ T^𝒯.
Note that the tasks have trivial no-change contracts with the exit variables, u,r, equal to the corresponding entry variables. The overall science and communication viewpoint contract for the 5-step scenario yields the science and communication contract in Table <ref>.
§.§.§ Pacti contracts in the navigation viewpoint
Qualitatively, and have similar impacts where the trajectory estimation uncertainty increases according to a noise range due to a change of spacecraft orientation performed during an instance of such tasks. Thanks to optimal measurements of the asteroid performed during this task, the onboard auto-navigation software can reduce the trajectory estimation uncertainty within some range of improvement. has no impact on uncertainty. All three tasks leave the relative trajectory distance unchanged. Due to performing a long-duration change of velocity, the task injects additional trajectory estimation uncertainty proportional to a noise range; however, it reduces the relative trajectory distance proportional to an improvement range.
2cTask template for 𝒯∈{}
_⋯
u_, r_
u_, r_
u_∈ [0,100] r_∈ [0,100]
r_ = r_ u_exit≤ 100
u_ - ∈ [_⋯]
The constant hyperparameter, _⋯, defines the range of trajectory estimation uncertainty noise injected in an instance of this task due to a single change of spacecraft orientation.
2cTask template for 𝒯
_⋯
u_, r_, Δ T^𝒯
u_, r_
Δ T^𝒯≥ 0 u_≤ 100
r_ = r_ u_exit∈ [0,100]
u_ - ∈ [_⋯] Δ T^𝒯
The constant hyperparameter, _⋯, defines the range of trajectory estimation uncertainty improvement during an instance of this task due to onboard autonomous navigation calculations, which guarantees that the difference between the exit and entry uncertainty will be proportional to the improvement rate interval constant[An improvement interval with a negative lower bound corresponds to the possibility of a navigation trajectory deterioration.], [_⋯], times the task duration input variable, Δ T^𝒯.
Note that the task has a trivial no-change contract with the exit variables, u,r, equal to the corresponding entry variables.
2cTask template for 𝒯
_⋯, noise_⋯
u_, r_, Δ T^𝒯
u_, r_
Δ T^𝒯≥ 0 u_∈ [0,100] r_entry≤ 100
r_≥ 0 u_∈ [0,100]
r_ - ∈ [_⋯] Δ T^𝒯
u_ - ∈ [_⋯] Δ T^𝒯
The constant hyperparameter, _⋯, defines the range of relative trajectory progress improvement during an instance of this task due to onboard autonomous navigation calculations, which guarantees that the difference between the exit and entry progress will be proportional to the improvement rate interval constant, [_⋯], times the task duration input variable, Δ T^𝒯. The constant hyperparameter, _⋯, defines the range of trajectory estimation uncertainty degradation during an instance of this task due to velocity change being performed, which guarantees that the difference between the exit and entry uncertainty will be proportional to the noise interval constant, [_⋯], times the task duration input variable, Δ T^𝒯.
Composing the above for the 5-step scenario yields the navigation contract in Table <ref>.
§.§ Schedulability analysis
Our schedulability analysis methodology reflects separating design concerns from operation concerns. Design concerns correspond to the capability characteristics of each task:
* range of power consumption for each of the tasks ;
* range of power generation for the task;
* min,max range of downlink speed for the task;
* range of science data acquisition rate for the task;
* range of trajectory estimation uncertainty noise injection for each of tasks;
* range of trajectory estimation uncertainty improvement due to optimal small body measurements for the task; and
* range of relative trajectory progress for the task.
We define these capability characteristics as contract hyperparameters, as discussed in Section <ref>. On the other hand, we define operational requirements as constraints on entry/exit variables:
* Minimum battery state of charge: 60-90%
* Minimum task duration for each step: 10-50 seconds
* Initial science data volume: 60-100%
* Initial trajectory estimation uncertainty: 40-90%
Methodologically, we defined schedulability as the compatibility between a schedule (based on a given choice of capability hyperparameters) and a set of operational requirements (based on a given choice of values for entry/exit variables). We applied the Latin hypercube statistical sampler to generate multiple combinations of scenarios and operational requirements as summarized in Table <ref> using a Windows 10 workstation powered by an AMD Threadripper Pro 3955WX processor with 16 cores and 128GB RAM running Ubuntu 20.04 under Windows 10's WSL2[For details about performance measurement and API statistics, see <https://github.com/pacti-org/pacti-instrumentation>.]. For scenario generation, we sampled 200 distributions to generate mean and deviation for specifying the range of each of the 12 capability hyperparameters. The second column shows the statistics for producing a short 5-step scenario and a long 20-step scenario given such a sample. For varying operational requirements, we generated 100 random values within predefined ranges of requirement constraints. We computed schedulability using Pacti's merge operation for all combinations of 200 scenarios and 100 operational requirements. The third column shows the statistics for this schedulability analysis. The scarcity of admissible solutions (i.e., less than 1%) and the efficiency of schedulability analysis[Over 100 combinations per second (5-step scenario) and over 25 combinations per second (20-step scenario) using up to 32 concurrent jobs.] demonstrates the usefulness of Pacti for rapid exploration of design and operational constraints.
Pacti's API provides additional capabilities to get useful insights into admissible schedules. For example,
Figures <ref> and <ref> show two results of the visualizing the bounds of battery state-of-charge at the entry and exit of each step in the schedule using Pacti's API. Note that this range visualization is qualitatively different from a simulation timeline: a single value over time. These figures also illustrate a subtle aspect of Pacti's polyhedral contract algebra where the effect of composing the sequence step contracts results in relaxing the guarantees <cit.>, broadening the possible exit variable ranges since the final variables are unconstrained. Conversely, the middle section of the scenario shows greater precision in the bound calculations since the subsequent contracts force constraints on the exit variables, thereby preventing their relaxation. Aside from the subtleties of contract relaxation, these two figures show a stark contrast between different scenario characteristics and operational requirements combinations. Such differences would compellingly motivate cross-validating the Pacti contract approximations with appropriate simulation models to get additional insights into these differences.
With Pacti's API, we computed the minimum and maximum values of a linear optimization metric, the average of all states of charges at the end of each step, and plotted these results in Figure <ref> by scoring each admissible schedule w.r.t the scenario and operational requirement. For scenario scoring, we took the average of adding/subtracting all viewpoint-specific positive/negative capabilities: generation vs. consumption for the power viewpoint, downlink speed vs. observation rate for the science viewpoint, and noise vs. improvement for the navigation viewpoint.
For operational requirement scoring, we averaged all constraints since the difficulty of achieving them increases with their magnitude. The clustering of admissible schedules suggests that designers could get more insight by performing this scoring on specific viewpoints instead of combining them as was done here.
§ PRELIMINARY DESIGN OF AN AIRCRAFT FUEL SYSTEM
After having considered an application of contracts and Pacti to the design of space missions,
in this section we will consider their application in the design the design of the thermal management system for aircraft.
§.§ Description of the thermal management system
The dependencies of a few sub-systems of a prototypical aircraft are shown in Figure <ref>. The propulsion system is represented by an engine which transforms the chemical energy stored in the fuel into thrust and mechanical power. The mechanical power is transformed into electric power by a generator. The generation of mechanical power and electric power are both inefficient processes that generate heat. Fuel must be delivered from the tank to the engine at a certain rate which depends on the required thrust level. A fuel pump serves this purpose by moving fuel along a circuit using, in our example, electric power from the generator. An aircraft has several other electric loads, such as actuators and flight computers that, not being 100% efficient, generate heat as well. The heat generated by the electric loads, the generator, and the engine can be used to maintain a desirable fuel temperature which would otherwise be too low at high-altitudes to operate the engine efficiently. Thus, heat exchangers are used to transfer heat to the fuel before reaching the engine. Some fuel is returned to the tank for two reasons. First, the fuel flow rate must be regulated to absorb heat from the components on the aircraft while also maintaining the fuel temperature at the engine inlet within prescribed bounds. Secondly, the fuel in the tank must also be maintained at a desirable temperature (definitely above the freezing point and below the burning point) which can be achieved by returning hot fuel to the tank. The returned fuel could be too hot though, and its temperature may have to be reduced by rejecting some heat through a heat exchanger with the outside air.
We are interested in the problem of designing the system so that the temperature at the engine inlet and in the tank are always kept within acceptable ranges over the entire flight envelope. The design has to be robust with respect to uncertainties in the generated heat, and component tolerances. Ideally, the result of the design phase is a set of specifications for the parameters of the components in the system. The wider the range allowed for these parameters, the wider the set of components to choose from. Also, in general, larger tolerances correspond to less expensive manufacturing processes. The key variables we consider are the following: the flight regime is defined by the pair (alt, thrust) of the flight altitude and thrust level, respectively, and the operating point is defined by the pair (ṁ_in,ṁ_a) of the fuel flow rate imposed by the pump and the air flow rate used to cool down the fuel returning to the tank, respectively. The system-level specification Spec defines the range of allowed values for the temperature at the engine T_e, and at the tank inlet T_out. Thus, we can define two problems that we wish to address:
[Analysis]
Given a range for (alt,thrust), and given component models and associated uncertainties, check whether a pair (ṁ_in,ṁ_a) satisfies system level specification Spec.
[Optimization]
Given a feasible pair (ṁ_in,ṁ_a), and given a map that associates a cost to a component as a function of tolerances and inefficiencies, find the optimal distribution of these parameters among the components of the system such that total cost is minimized.
In order to address these two problems, we abstract the system into the block diagram shown in Figure <ref>. This block diagram focuses on the fuel system only, and abstracts the other subsystems at their interfaces with the fuel system. This block diagram shows how the system works: fuel flows from the tanks to the engines using a pump; the heat generated by electrical and electronic devices, and the heat generated by the engines, is transferred to the fuel to increase its temperature; some hot fuel is burned by the engines, while some is returned to the tank; the fuel that returns back may become too hot, and its temperature may have to be decreased through a heat exchanger that uses external air as cold fluid.
The electric pump determines the fuel flow rate ṁ_in at the input of the system which is equal to the flow rate at the output of the electric pump. The temperature of the fuel at the inlet of the pump is T_in, while the temperature at its output is T_ep. A heat exchanger collects heat h_g from the electric power generator, h_l from the electric load (representing a lumped model of the electric distribution system and various electric loads), and h_e from the engines. These three heat sources increase the temperature of the fuel to T_hl. The fuel is then split into two paths: the engine burns fuel at a rate ṁ_e, while the remaining flow returns to the tank at a rate ṁ_in - ṁ_e. Finally, the fuel is cooled down through a fuel-air heat exchanger which brings the temperature T_out to an acceptable value.
§.§ Leveraging contracts for analysis and optimization
One way to tackle Problems <ref> and <ref> is to develop a simulation model, and sample its parameters in a certain range. For example, assume the electric load in Figure <ref> requires a nominal power w_l, but the actual power requirement is in the range [(1-ϵ_l)· w_l, (1+ϵ_l)· w_l], where the tolerance factor ϵ_l models several sources of uncertainty. Then the model would be simulated for different values of w_l in this range. The data gathered by these simulation runs is used to create the response surfaces for (ṁ_in,ṁ_a) and for total cost. Samples that violate the assumptions of any of the components, or the system specification are simply rejected as invalid. As the number of parameters that can span many values grows, the number of simulation runs grows geometrically.
We leverage a contract-based framework to represent explicitly the assumptions and the guarantees for each component, and compute algebraically the temperature ranges for T_e and T_out. We represent the system specification as a contract Spec where the inputs include the flight regime, the temperature of the fuel in the tank, and the nominal power requirement, and the outputs are the temperature of the fuel T_e at the engine inlet and the temperature going back to the tank T_out. Each component in Figure <ref> is modeled by its own contract where the guarantee captures a range of implementations by defining bounds on the values of the component's outputs. We leverage the ability of Pacti to compute an explicit representation of the composition of contracts to derive a single contract for the system under study (SUD). This contract provides us with the entire range of possible values for the two key variables T_e and T_out. We also use Pacti's ability to check whether the SUD refines the specification Spec. These features are also leveraged during optimization where Pacti is called in the optimization loop to compute the bounds for T_e and T_out which are used in the computation of the cost function and constraint violations.
§.§ Modeling the system using polyhedral assume-guarantee contracts
Consider a connection between two components in the fuel circuit. Let ṁ (in ) denote the fuel flow rate, and T (in ) denote the fuel temperature. The heat rate through such connection is ṁ· C_f · T, where C_f is the specific heat of the jet fuel (0.2 ). The heat rate is an important quantity in this model because balancing heat while satisfying temperature and fuel flow rate constraints is the key problem in this application. However, this term involves the product of two key quantities (the fuel flow rate and the temperature) that are also involved in other constraints. If these quantities are both considered variable in our analysis, then the heat rate becomes a non-linear term which, and the model would fall outside of the polyhedral constraint required to perform analysis using Pacti. One of the two sets of variables (either fuel flow rates or temperatures) must be treated as as a set of parameters which are fixed to a constant value. A natural choice in this case study is to consider the fuel temperatures at different points in the system as variables because they must obey operational constraints. Specifically, the temperate at the engine and tank inlet must lie within acceptable ranges.
The top-level specification of the system under study can be defined as a contract Spec with the set of input variables I_Spec = { T_in, T_a, w_nom}, representing the input temperature from the tank, the air temperature, and the nominal power requirement from the electrical components on the aircraft. The set of output variables is O_Spec={ T_e,T_out}. The specification contract is Spec =(I_Spec,O_Spec,A_Spec,G_Spec), where the assumption A_Spec specifies bounds around nominal values of the input temperature, the air temperatures, and the power requirement, while the guarantee G_Spec specifies acceptable ranges of the temperature at the engine and at the output.
The complete model of the contracts used to define the System Under Study (SUD) is shown in Figure <ref>.
The electric pump has an inlet, an outlet, and an electrical interface. The pressure difference between the inlet and the outlet is Δ P_ep which we assume to be 6.9 MPa. The electric power required by the pump is w_ep = ṁ_in·Δ P_ep/ρ_f ·η_ep where ρ_f=800 ^3 is the density of the fuel, and η_ep is the efficiency of the pump which we assume to be 0.6. Some power, specifically w_ep· (1 - η_ep), is transformed into heat which is absorbed by the fluid going through the pump. The pump increases the temperature of the fuel by (1 - η_ep)·Δ P_ep/C_f ·ρ_f ·η_ep . We use two parameters ϵ_ep,w and ϵ_ep,t to capture the power and temperature uncertainties which can also be seen as characterizing a space of possible implementations for the pump. This is also a general modeling pattern that we use throughout the development of the component models for this use case.
The heat load on the main fuel line to the engine increases the fuel temperature by h_g + h_l + h_e/ṁ_in· C_f , while the fuel-air heat exchanger decreases the temperature of the fuel returning to the tank by η_x ·ṁ_a · C_a/ṁ_s · C_f· (T_s - T_a)[The heat exchanger equation can be found in <https://web.mit.edu/16.unified/www/FALL/thermodynamics/notes/node131.html>.] , where η_x is the efficiency of the heat exchanger which we assume to be 0.6 (and which models several factors including the size of the exchanger), and C_a is the specific heat of the outside air which we assume to be 1 ).
§.§ Analysis and design space exploration
We use these models to explore and optimize the SUD according to the methodology shown in Figure <ref>. The exploration sweeps over three flight altitudes (5, 10, and 15 ), and four levels of engine thrust (5,000, 10,000, 15,000, and 20,000 ). We map the flight altitude to a nominal value for the air temperature T^*_a according to the model described in <cit.>, and we map thrust to a nominal value for the fuel flow as ṁ_e = 0.7 · thrust / 3600. We also fix the nominal power requirement to w^*_nom = 140 , and the nominal fuel temperature in the tank to T^*_in = 288 . For each operating point, the exploration loop sweeps over a range of fuel flow ṁ_in and air-flow ṁ_a. The ranges were selected to cover many possible implementations of pumps and heat exchangers.
After selecting these parameters, we use Pacti to compute the contract of the system SUD as the composition of the the electric pump, electric generator, electric load, heat load, fuel splitter, and heat exchanger (as shown in Figure <ref>).
We then compose the SUD with an engine model E which is used to define the engine heat h_e as a function of the parameter ṁ_e. The nominal engine model is h_e = k_e ·ṁ_e with k_e = 5,000 . The resulting system SUD ∥ E must refine the specification contract Spec.
We define A_Spec≡T_in≤ T_in≤T_in∧T_a≤ T_a≤T_a∧w_nom≤ w_nom≤w_nom. The bounds T_in, T_in, T_a, and T_a are defined as 2% tolerances around their nominal values, while w_nom and w_nom are defined as 5% tolerances around their nominal values. The guarantee of this system is G_Spec≡T_e≤ T_e≤T_e∧ T_in - Δ_t≤ T_out≤ T_in + Δ_t. The output temperature needs to be close enough to the input temperature to maintain the temperature of the fuel in the tank approximately constant. We use Δ_t = 10 in our analysis, and we fix T_e = 300 and T_e = 330 .
If SUD ∥ E ≤ Spec, then we compute the actual temperature bounds at the engine and at the output. We leverage the unique capability of Pacti to compute and explicit polyhedral representation of SUD ∥ E ∥ A_Spec where, with abuse of notation, we have denote by A_Spec the contract (True,A_Spec). Pacti can then compute the extreme vertices of SUD ∥ E ∥ A_Spec which correspond to the actual ranges Δ(T_e) and Δ(T_out) of the output variables T_e and T_out, respectively.
Figure <ref> shows the temperature bounds computed by the exploration loop. As expected, the temperature bounds at the engine do not depend on ṁ_a. Higher values of ṁ_in result in lower values of temperate bounds at the engine, while higher values of ṁ_a result in lower values of temperature bounds at the output.
At this level of abstraction, it is also possible to compare different component options. For example, consider the case where the fixed heat exchanger shown in Figure <ref> is replaced by its controlled version. The controlled heat exchanger is a more complex sub-system that under the assumption that there is a temperature difference between the hot and cold sides of at least 10 , i.e., T_s - T_a ≥ 10, guarantees that T_in - 5 ≤ T_out≤ T_in + 5. The implementation may require an operable fixture to modulate ṁ_a, and a fan when the system is operating a very low speed (e.g., taxiing on the ground). The result of the comparison between the two solutions is shown in Figure <ref>. Each dot is a valid instance for different combinations of design variables ṁ_in and ṁ_a, and operational variables ṁ_e (corresponding to thrust level) and T_a (corresponding to flight altitude). We can observe that in the case of the fixed heat exchanger, there is not a combination of design variable values that can satisfy the entire range of the operational variables. In the case of the controlled heat exchanger, instead, it seems sufficient to operate the electric pump in two regimes (low, and high) to cover the entire flight envelope.
Once the exploration is completed, the resulting bounds can be used to select promising instances that can be further optimized. The optimization criteria, or cost function, is related to the quality of the components in the system which, for the generic component x, is modeled by the tolerance parameters ϵ_x. Lower values of the tolerance parameters correspond to higher cost. We define a cost function which is the sum of two terms: c(ϵ) is defined as ||1 - ϵ||_2; the other term is defined by a function V of the temperature bounds. Specifically, V has a large value if either bound violates G_S, otherwise V = (T_e,min - T_e)^2 + (T_out,min - T_out)^2 + (T_e - T_e,max)^2 + (T_out - T_out,max)^2. This cost function forces to optimizer to prefer valid instances that span the entire allowed range for the temperature bounds, which correspond to finding the weakest guarantee that still refines the specification.
For instance, in the case of the controlled heat exchanger, we have selected an instance with altitude 15 , thrust level 20,000 , ṁ_in = 9.316 , and ṁ_a = 0.429 . We used the Nelder-Mead optimization methods implemented by the SciPy package <cit.>, and we set the initial value of all tolerances to 0. We also limit the search space to tolerance values between one and ten percent. After 2000 iteration, the solution found by the optimizer is ϵ_ep,w = 0.01, ϵ_ep,t = 0.09998, ϵ_g = 0.01008, ϵ_l,w = 0.1, ϵ_l,h = 0.09998, ϵ_hl = 0.06214, ϵ_s = 0.01.
§ CONCLUDING REMARKS
The ability to analyze complex systems early on in the design cycle is essential to system success. In the early stages of design, the development of requirements and their allocation to sub-systems involves a collaborative and iterative effort. Tool support would be beneficial to evaluate trade-offs among multiple viewpoints with respect to the system design and its planned operation. The modeling and analysis tools should be formal, should support compositional design and analysis, and should be efficient and explainable.
Considering this objective, we presented an agile methodology for designing and analyzing such systems across multiple viewpoints based on a compositional, contract-based modeling paradigm using Pacti <cit.>. Pacti supports the theory of polyhedral constraints for specifying assume-guarantee contracts. Since this formalism involves linear constraints for specifying assumptions and guarantee constraints, the modeling paradigm is accessible to most stakeholders and appropriate for communicating across stakeholders with diverse expertise and viewpoints the intricacies of space mission design, requirements, and operations formulated in this manner.
We demonstrated the scalability of several of Pacti's API operations for composing and merging contracts and for computing bounds for variables and linear optimization criteria. The case studies we presented confirm the viability of the general approach. We collected a list of lessons learned from this experiment. Mainly, Pacti is effective at combining contracts, whether these are for system or component specification purposes or for operational requirement purposes. Despite polyhedral algebra being mathematically simple, complex polyhedral contracts can be difficult to understand. In these cases, we found that computing minimum and maximum bounds for contract variables yields valuable information for elaborating operational requirements. In engineering, behavior is typically thought in terms of simulating the state variables as a function of time. Pacti requires systems engineers to reconsider their components as the sets of possible behaviors they can reflect;
on the other hand, Pacti's contract algebra provides powerful tools to help understand bounded behavior. For example, developing a time-based simulation model involves a risk that one could get lost in the details and lose track of the overall modeling objective. In contrast, Pacti's polyhedral contract algebra coerces the system engineer to think about which aspects of behavior are important to characterize with bounds.
§ ACKNOWLEDGMENTS
JPL/Caltech Copyright: © 2023 California Institute of Technology. Government sponsorship acknowledged. The research was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration (80NM0018D0004). This work was partially supported by the JPL Researcher On Campus (JROC) program for which we gratefully acknowledge Prof. Richard M. Murray's support. This work was also partially supported by NSF and ASEE through an eFellows postdoctoral fellowship.
|
http://arxiv.org/abs/2409.03748v1 | 20240905175827 | A neural processing approach to quantum state discrimination | [
"Saeed A. Khan",
"Fangjun Hu",
"Gerasimos Angelatos",
"Michael Hatridge",
"Hakan E. Türeci"
] | quant-ph | [
"quant-ph"
] |
]A neural processing approach to quantum state discrimination
Department of Electrical and Computer Engineering, Princeton University, Princeton, NJ 08544
Department of Electrical and Computer Engineering, Princeton University, Princeton, NJ 08544
Department of Electrical and Computer Engineering, Princeton University, Princeton, NJ 08544
Department of Physics and Astronomy, University of Pittsburgh, Pittsburgh
Department of Electrical and Computer Engineering, Princeton University, Princeton, NJ 08544
§ ABSTRACT
Although linear quantum amplification has proven essential to the processing of weak quantum signals, extracting higher-order quantum features such as correlations in principle demands nonlinear operations. However, nonlinear processing of quantum signals is often associated with non-idealities and excess noise, and absent a general framework to harness nonlinearity, such regimes are typically avoided. Here we present a framework to uncover general quantum signal processing principles of a broad class of bosonic quantum nonlinear processors (), inspired by a remarkably analogous paradigm in nature: the processing of environmental stimuli by nonlinear, noisy neural ensembles, to enable perception. Using a quantum-coherent description of a monitoring a quantum signal source, we show that quantum nonlinearity can be harnessed to calculate higher-order features of an incident quantum signal, concentrating them into linearly-measurable observables, a transduction not possible using linear amplifiers. Secondly, provide coherent nonlinear control over quantum fluctuations including their own added noise, enabling noise suppression in an observable without suppressing transduced information, a paradigm that bears striking similarities to optimal neural codings that allow perception even under highly stochastic neural dynamics. Unlike the neural case, we show that -engineered noise distributions can exhibit non-classical correlations, providing a new means to harness resources such as entanglement. Finally, we show that even simple in realistic measurement chains can provide enhancements of signal-to-noise ratio for practical tasks such as quantum state discrimination. Our work provides pathways to utilize nonlinear quantum systems as general computation devices, and enables a new paradigm for nonlinear quantum information processing.
[
Hakan E. Türeci
September 9, 2024
=====================
§ INTRODUCTION
While engineering quantum systems hinges on isolating quantum components from their noisy classical environment, the observation of any such system necessitates extracting its emitted weak quantum signals back to the classical world. It is therefore essential to bolster quantum signals prior to their interaction with noisy classical modes and the readout noise they introduce, using signal processors that are themselves quantum-mechanical <cit.>. The most widely-used such processors are quantum amplifiers <cit.>, which provide linear gain to an input signal and have been critical to quantum signal processing applications such as high-fidelity quantum state readout <cit.> and generation of non-classical light <cit.>. In spite of this success, the linearity of quantum amplifiers means they are only able to reveal a subset of the features of a complex quantum signal. A well-known limitation is the estimation of higher-order correlations. When only linear readout is available, the nonlinearity demanded by correlation calculations must be provided by classical post-processing: this process amplifies readout noise, leading to a signal-to-noise ratio that degrades exponentially with the order of the desired correlation function <cit.>. For signals that exhibit non-classical, higher-order quantum correlations <cit.>, linear processing restricts our ability to fully resolve the quantum domain, thereby limiting our capacity to control it.
Interestingly, even though a rich expanse of quantum devices can be realized beyond the confines of linearity <cit.>, the use of nonlinear quantum systems as general purpose quantum signal processors has been limited. Partly, this is because there are no established general principles of quantum information processing with nonlinear quantum systems.
The very fact that nonlinear systems are a priori not bound by constraints of a linear dependence between input and output - which could hold the key to new ways of processing quantum signals - makes their operation much more complicated to analyze. More importantly, a complete description of the operation of any general quantum processor must be constrained by quantum mechanics, namely the uncertainty principle <cit.>. For linear quantum amplifiers, this translates to well-known limits on noise added during the amplification process <cit.>. Analogous limits are much less explored for nonlinear amplifiers (with some exceptions <cit.>), and nonlinear regimes are often even associated with excess noise <cit.>. As a result, in spite of some measurement schemes effectively utilizing nonlinear systems <cit.>, nonlinearity is often considered an inconvenience in quantum signal processing, being the source of gain saturation <cit.> and operational instabilities <cit.>, which has led to significant efforts to mitigate nonlinear effects <cit.>.
Remarkably, although the processing of weak stochastic signals using nonlinear devices is still not thoroughly explored in the quantum domain, this paradigm is fundamental to perhaps the most well-known information processor: the brain. It is understood that neurons, the signalling cells that constitute all nervous matter, function as part of large neural ensembles whose collective firing properties can be highly nonlinear <cit.> and exhibit great stochasticity <cit.>. Experiments recording the activity of neural ensembles in the visual cortex of animals have provided robust evidence that neuronal response exhibits significant variability even under repeated presentations of identical stimuli (see Fig. <ref>(a)), with the total noise power even increasing with the size of the neural ensemble <cit.>. Nevertheless, such noisy ensembles are able to successfully process distinct stimuli to enable perception <cit.>. Research over the past few decades has begun to establish an explanation <cit.>. Nature appears to prefer encoding cortical signals in the response of collective neuron modes, as opposed to highly-variable single neuron dynamics (referred to as population coding <cit.>). Furthermore, by exploiting the nonlinear response of neurons <cit.>, the encoding of sensory information in collective neuron modes (or coding directions) that have the largest noise power is avoided <cit.>. While neural noise still places limits on sensory signal discrimination <cit.>, this coding principle allows the perception of weaker stimuli than would be resolvable if the largest noise mode was used for coding. This mechanism provides an example of how Nature is able to harness nonlinearity to facilitate signal processing in the presence of seemingly overwhelming stochasticity.
Motivated by the example of stochastic neural processing, we therefore ask the question: can nonlinear, fundamentally stochastic quantum systems provide an advantage in the detection and processing of weak quantum signals beyond the paradigm of linear amplification? If so, can any general principles be established? Answering this question in its most general setting requires a description of the joint dynamics of the quantum nonlinear processor (QNP) and the signal generating quantum system (QS), often excluded in descriptions of linear quantum amplifiers <cit.> (with exceptions e.g. Ref. <cit.>). Here we develop the necessary framework to model a as an in situ processor of signals generated by a quantum system (QS) it is coherently linked to (see Fig. <ref>(b)); our approach thus accounts for the nonlinearity (with some restrictions), the QS- interaction (including the possibility of their entanglement), but most importantly for not only the quantum fluctuations of the itself, but also those due to the quantum nature of the input signals. Building on this framework, our analysis is able to identify two extremely general aspects of quantum information processing enabled by , which we summarize here. First, by characterizing their now nonlinear input-output map, we show that enable higher-order transduction: the mapping of higher-order observables, such as the correlation of quantum signals, to lower-order observables such as quadrature means, making them accessible using much simpler linear readout protocols. Secondly, are able to coherently manipulate incident quantum fluctuations so as to direct amplified noise to observables that do not encode the transduced signal. This feature, which is particularly reminiscent of population coding in noisy neural ensembles, relies on both the nonlinearity and its deployment as a quantum processor.
To demonstrate the above results, we show at work for a practical quantum information processing task inspired by Gaussian state tomography <cit.>: the binary discrimination of states for which the quadratures of constituent QS modes possess identical mean values, but differ only in their quantum fluctuation signatures (Fig. <ref>(b)). These signatures may be evident in the quantum fluctuations of signals emanating from a single QS mode, or may only be revealed in non-classical noise correlations across distinct modes, such as for entangled states. When signals from a QS in such states are processed using standard linear amplifiers and heterodyne readout, the limitations of a linear input-output map ensure that obtained output distributions exhibit no difference in their mean values (Fig. <ref>(c), top). As a result, the distributions are impossible to distinguish perfectly using only a linear decision boundary.
In contrast, we show that are able to transduce the quantum fluctuation signatures of incident signals into displacements observable via heterodyne measurement, effectively functioning as quantum `cross-correlators' across multiple QS modes. As a result the obtained output distributions (Fig. <ref>(c), bottom) exhibit a nonzero mean separation δμ, enabling a linear classifier to separate them perfectly. Most importantly, enable all the nonlinear processing demanded by this discrimination task to be performed on pristine quantum signals, prior to their corruption by classical readout noise n̅_ cl. Consequently, the signal-to-noise ratio for -enabled discrimination degrades only as n̅_ cl^-1. In contrast, with standard linear amplifiers a nonlinear post-processing step must be performed on the final classical signals; this worsens readout noise, furnishing a signal-to-noise ratio that scales as n̅^-2_ cl. We note that the concept of transduction using nonlinear systems has been explored previously <cit.>, often via a lossless Hamiltonian description; here we consider a much more general setting, and account for practical constraints of losses and measurement. Secondly, here are deployed as nonlinear processors of stochastic quantum signals; this paradigm is therefore distinct from the more routinely explored use of quantum systems to compute nonlinear functions of deterministic classical signals.
Finally, we show how can control quantum fluctuations for information processing in ways unavailable to their linear counterparts. Linear amplifiers are restricted to perform the same transformation on an incident signal and its noise; for a fixed input signal, therefore, a decrease in the magnitude of the amplifier output noise is accompanied by a decrease in the magnitude of the output signal, and vice versa. Unconstrained by linearity, enable a much more complex interplay between the transduced signal and the output noise. In particular, we show how tuning parameters alone, the output noise in the signal carrying quadrature P_δμ (the unique quadrature parallel to δμ) can be decreased while the transduced signal magnitude ||δμ|| remains fixed (Fig. <ref>(c), bottom). Most strikingly, a tuning condition can be reached such that the signal carrying quadrature is parallel to the quadrature with minimal output noise, and thus orthogonal to all other quadrature combinations which possess larger noise, a scenario that appears very similar to optimal coding in neural circuits. Crucially, for processing quantum input signals, the large noise modes include quantum fluctuations generated by the upstream QS, but also fluctuations amplified by the during processing itself, referred to as `added noise'. The minimum noise mode can even reach non-classical (i.e. sub-vacuum) levels, in the presence of squeezing or entanglement. Therefore the ability to control noise using provides a practical means to harness non-classical correlations for quantum state discrimination.
A quantum-coherent description of a QS and in the same measurement chain, valid for multi-mode systems, and across a range of excitation conditions and interaction types, demands multiple theoretical techniques. Our key analytic results are built upon a nonlinear van Kampen expansion <cit.> of the Fokker-Planck equation associated with the quantum state of the complete measurement chain. At the expense of being restricted to weakly nonlinear , this approach has the advantage of applying to arbitrarily-multimode bosonic quantum systems. As such, we expect the uncovered operating principles to hold for general under heterodyne monitoring, especially relevant to cQED. These results are numerically verified and extended to stronger nonlinearities using a truncated cumulants approach that simulates measurement-conditioned dynamics of the measurement chain, with a complexity that is only quadratic in the number of total modes. Supplementing these results via full (stochastic) master equation simulations and exact results for select few-mode, multi-system measurement chains <cit.>, we are able to provide a comprehensive description of quantum information processing using .
The remainder of this paper is organized as follows. In Sec. <ref>, we introduce our model for processing quantum signals using a , provide a summary of the key results of our theoretical approach, and introduce the particular classification tasks we analyze. Sec. <ref> then analyzes quantum state classification using a single-mode , based on a model that can be easily realized in cQED experiments. We highlight the crucial role of nonlinearity and discuss how parameters enable control over quantum fluctuations, including the ability to harness non-classical correlations. In Sec. <ref> we provide a comparison of quantum state measurement using a against standard linear quantum amplifiers, finding improved robustness to added classical noise for the -based scheme. Sec. <ref> then considers quantum state classification tasks that require the non-local processing of quantum fluctuations, where a multi-mode becomes essential. Here, we demonstrate how output field entanglement can be engineered using a to produce sub-vacuum noise in a multi-mode quadrature. The paper concludes with a discussion of possible applications of in the quantum information processing landscape.
§ QUANTUM NONLINEAR PROCESSORS FOR QUANTUM SIGNALS
§.§ Quantum measurement chain for processing quantum signals
The complete measurement chain we consider is depicted in Fig. <ref>(b). The conditional evolution of the quantum state of the chain under continuous measurement is formally described by the stochastic master equation (SME)
d = ( + + ) dt +∑_k𝒮[√()b̂_k].
The superoperator describes a quantum system (QS) that is the source of quantum signals to be processed for a given quantum information processing task. We consider the form
ρ̂ = -i[ℋ̂^(l)_ QS,ρ̂] + ∑_mκ_m 𝒟[â_m]ρ̂,
where ℋ̂_ QS is a linear Hamiltonian governing the dynamics of M bosonic modes â_m, m ∈ [1,M], and experiencing damping {κ_m}. The superscript (l) indexes distinct states of the QS, to be distinguished in a classification task.
This QS is coupled to the quantum nonlinear processor () in the same measurement chain (via a non-reciprocal coupling described by , see Appendix <ref>), enabling the to monitor the QS as an in situ measurement apparatus. The is governed by ,
= -i[ ℋ̂_ + 𝒩̂_ ,],
where ℋ̂_ is a general linear Hamiltonian governing the dynamics of K bosonic modes b̂_k, k ∈ [1,K]. The nonlinearity can then be furnished by any general bosonic nonlinear interaction; for simplicity we consider a Hamiltonian describing K Kerr modes with common nonlinearity strength Λ, 𝒩̂_ = -∑_k Λ/2kkkk.
The chosen measurement scheme is of the standard heterodyne type: it comprises continuous monitoring of the decay channels (with unit efficiency). The resulting measurement-conditioned evolution of the complete measurement chain in accordance with quantum measurement theory <cit.> is governed by the stochastic heterodyne measurement superoperator 𝒮[√()b̂_k], with a rate assumed equal for all modes (see Appendix <ref>). This yields heterodyne records ℐ_k(t),𝒬_k(t) for each monitored mode <cit.>:
ℐ_k(t) = ξ_ℐ_k(t) +√(γ_ H)[ X̂_k + ξ_ℐ_k^ qm(t)] + √(n̅_ cl) ξ^ cl_ℐ_k(t),
𝒬_k(t) = ξ_𝒬_k(t) + √(γ_ H)[P̂_k + ξ_𝒬_k^ qm(t)] + √(n̅_ cl) ξ^ cl_𝒬_k(t).
Heterodyne monitoring probes the canonically-conjugate quadratures X̂_k = 1/√(2)(k+k), P̂_k = -i/√(2)(k-k), but a given measurement record also contains contributions from multiple noise sources ξ. The terms ξ_ℐ_k,ξ_𝒬_k describe vacuum noise that would be present even if no signal was emanating from the monitored quantum modes (i.e. → 0). The much more interesting terms are marked `qm': these describe noise contributions of a quantum origin, such as non-classical correlations due to squeezing or entanglement, or added noise by quantum dynamics, and are contingent on the measurement superoperator 𝒮 (see Appendix <ref>). In contrast, the terms `cl' define classical readout noise in the measurement chain; these are not associated with a stochastic measurement superoperator and hence have no backaction on the quantum measurement chain. Equivalently, n̅_ cl quantifies noise added after the so-called Heisenberg-von Neumann cut <cit.>.
These stochastic records are thus often filtered to obtained heterodyne quadratures (I_k,Q_k) ≡∫_t_0^t_0+𝒯 dτ 𝒦(τ)×(ℐ_k(τ),𝒬_k(τ)), where 𝒦(τ) is the filter function (we assume a boxcar filter 𝒦(τ) = 1/√(2𝒯) ∀ τ) over a window of length 𝒯 starting from an initial time t_0. The quadratures can be compactly represented via the vector x = (I_1,Q_1,…,I_K,Q_K)^T ∈ℝ^2K. For any quantum information processing task, the data x is typically further processed to obtain a vector y of output features, a step we define generally via y = x, which can include ensemble averaging over distinct shots, but also nonlinear processing to be clarified in due course. The complete measurement chain thus described is designed to measure QS properties via the obtained outputs y; note that our description subsumes the standard paradigm of linear quantum amplification plus heterodyne measurement if 𝒩̂_→ 0 and an appropriate choice of ℋ̂_ is made.
We can now specialize to the consideration of binary quantum state discrimination (although can be deployed for more general processing tasks as well). Binary discrimination has a simple objective: to distinguish QS state ρ_ QS^(l) from ρ_ QS^(p), where ρ_ QS^(l)≡ tr_[^(l)], based on the corresponding outputs y^(l) and y^(p) from the measurement chain. To this end, we introduce a standard measure of the distinguishability of the two measured distributions of y^(l) and y^(p): Fisher's discriminant , defined as
(y) = δμ^T ·𝐕^-1·δμ,
where δμ = μ^(l)-μ^(p) is the difference of means of the two measured distributions, while 𝐕 = 1/2(Σ^(l)+Σ^(p)) is a measure of their combined variance:
μ^(l) = 𝔼[ y^(l)], Σ^(l) = 𝔼[y^(l)y^(l)T] - 𝔼[y^(l)]𝔼[y^(l)]^T.
Note that has the intuitive form of a generalized signal-to-noise ratio (SNR). Furthermore, the fidelity 𝒞 of classifying two Gaussian distributions with identical covariance matrices Σ^(1)=Σ^(2), is simply 𝒞 = 1/2(1 + erf/2√(2)), where erf z is the standard Gaussian error function; as →∞, 𝒞→ 1. For binary QS state discrimination, the of measured distributions is correlated with the fidelity of discriminating the two QS states that give rise to these distributions.
Using such binary classification tasks, where the concepts of signal δμ and noise 𝐕 are precisely defined, we will demonstrate how can process quantum signals in situ for practical quantum information processing applications.
§.§ Approximate input-output map for quantum nonlinear processors
In the context of binary state discrimination, the role of can be characterized by evaluating the central quantities δμ and 𝐕 identified in the previous section. However, the nonlinearity introduced by the - central to its operation - generally excludes exact theoretical treatments that would be valid if the measurement chain comprised only linear bosonic modes and interactions. Furthermore, allowing the QS and to comprise arbitrary numbers of modes renders exact numerical integration of Eq. (<ref>) unfeasible for all but the lowest excitation numbers.
Our solution begins by introducing an alternate set of dynamical variables to describe the quantum state of the entire nonlinear measurement chain: quantum cumulants (see Appendix <ref>). Formally, cumulants are an infinite set of dynamical variables parameterizing , indexed by an integer order n_ ord∈ℤ^+. Crucially, however, we show that retaining cumulants only up to a certain finite order n_ ord≤ n_ trunc can very accurately describe the nonlinear measurement chains we consider, contingent on the nonlinearity strength. In this work, we show that n_ trunc = 2 is an excellent approximation provided certain well-understood and achievable constraints on the nonlinearity strength are met. In this case, defining b̂≡ (b̂_1,b̂_1^†,…,b̂_K,b̂_K^†)^T and â analogously for QS operators, the retained first-order cumulants are simply single-operator expectation values [ â; b̂ ], while second-order cumulants are normal-ordered covariances, 𝐂 = ⟨:[ â; b̂ ][ â^T b̂^T ]:⟩ - [ â; b̂ ][ â^T b̂^T ]. This constitutes a truncated cumulants ansatz for : an efficient description of multimode nonlinear quantum dynamics, with the number of retained cumulants scaling only quadratically, and not exponentially, with the total number of modes =K+M for n_ trunc=2.
We then develop an approximation that allows the truncated cumulants to be solved for analytically, formally employing a nonlinear van Kampen (NVK) expansion <cit.> in the nonlinearity (also referred to as a `system size' expansion, see full details in SI Sec. <ref>). Under the NVK approximation, first-order cumulants are written as [ â; b̂ ]= Λ^-1/2[ â; b̂ ] + [ δâ; δb̂ ] where we have introduced the dimensionless nonlinearity Λ = Λ/γ, and γ, the mode decay rate, serves as a normalization factor. Λ→ 0 describes the classical (large occupation) limit, where â, b̂ become the dominant contributions to first-order cumulants; these define the expansion point and satisfy the equations of motion:
d_t â = 𝐋^(l)_a â + Λ^1/2η⃗^(l)
d_t b̂ = 𝐋_b b̂ + N⃗_b(b̂) - η⃗_b^(l)
These equations immediately provide useful insight. Being linear, the QS response is governed entirely by 𝐋_a and possible coherent drives η⃗^(l), as dictated by ℋ̂_ QS^(l) (and dissipative terms). The non-reciprocal QS→ interaction defined by ensures that the QS drives the via the coupling Γ and not vice-versa, leading to an effective QS state-dependent drive η⃗_b^(l)≡Γâ^(l) on the . The dynamics, in contrast, contain both a linear contribution 𝐋_b as well as a nonlinear contribution N⃗_b(b̂), the latter determined by the nonlinear Hamiltonian 𝒩̂_. Thus Eq. (<ref>) is nonlinear in b̂.
The full quantum state further requires specifying the deviation δb̂ (we can show that δâ→ 0) and the second-order cumulants 𝐂. Starting with the latter and introducing the block form 𝐂 = [ 𝐂_a 𝐂_ab; 𝐂_ab^T 𝐂_b ], we show that second-order cumulants satisfy the Lyapunov differential equation:
d_t 𝐂 =
[ 𝐂[ 𝐋_a 0; -Γ 𝐉_b ]
+ m.t.] +
[ 𝐃_a^(l) 0; 0 𝐃_b ],
where m.t. is the (unconjugated) matrix transpose. Here 𝐉_b is the Jacobian matrix of the , [𝐉_b]_ij∝ [𝐋_b]_ij + ∂ [N⃗_b]_i/∂b̂_j, with [·]_ij indicating tensor notation, and (·) specifying evaluation at the expansion point b̂. Then 𝐃_a, 𝐃_b are diffusion matrices for the QS and respectively; these describe incident fluctuations - quantum or classical - beyond vacuum fluctuations, and must be nonvanishing to yield nontrivial 𝐂.
Finally, we introduce the most important dynamical equation in the NVK approximation, for δb̂:
d_t δb̂ = 𝐉_b δb̂ + Λ^1/2𝐇_b : 𝐂_b^(l).
Crucially, the change in first-order cumulants depends on the second-order cumulants, via the final term. This dependence is quantified by the Hessian tensor 𝐇_b of the , defined as [𝐇_b]_ijk∝∂ [N⃗_b]_i/∂b̂_j ∂b̂_k. The Hessian operates on matrices - here on 𝐂_b - to return a vector, via the tensor double contraction (:) over pairs of indices. The Hessian contribution is often neglected in standard linearization schemes; we will see its crucial role in quantum nonlinear processing. Note that if the is linear, N⃗_b → 0, and the Hessian vanishes.
The NVK approximation, combined with input-output theory <cit.>, allows us to analytically obtain μ^(l) and Σ^(l) as defined in Eq. (<ref>), and eventually . The quadrature means after a filtering time 𝒯 are given by
μ^(l) = √(𝒯/2) 𝐔_K( Λ^-1/2b̂^(l) - Λ^1/2𝐉_b^-1𝐇_b : 𝐂_b^(l)),
where 𝐔_K is the K-mode quadrature change-of-basis matrix. The second term demonstrates transduction: the sensitivity of linearly-measurable quadratures to its higher-order (here, second-order) quantum cumulants.
The covariance matrix in the long-filter limit (γ𝒯≫ 1, see SI Sec. <ref>) is given by:
Σ^(l)γ𝒯≫ 1=σ_ vac^2(n̅_ cl + 1)𝐈_K
+σ_ vac^2/γ𝐔_K 𝐉_b^-1[ 𝐃_b + Γ𝐋_a^-1𝐃_a^(l) (𝐋_a^-1)^TΓ^T ](𝐉_b^-1)^T𝐔_K^T.
The first term describes output vacuum fluctuations σ_ vac^2 = 1/2 and classical readout noise ∝n̅_ cl, the latter also in units of σ_ vac^2 (𝐈_K ∈ℝ^2K is the identity matrix). The second line then describes all other contributions from quantum noise. The term ∝𝐃_b describes noise added by the itself. The term ∝𝐃_a^(l), on the other hand, arises due to our quantum-coherent description of both the and QS: it describes noise originating from the QS, which arrives at the via the coupling Γ, after undergoing QS evolution via 𝐋_a^-1. Both noise terms are processed by the , as indicated by the appearance of the Jacobian 𝐉_b^-1.
The above expressions can be used to understand quantum state classification using general weakly-nonlinear bosonic quantum systems. First, we consider the limit of ideal linear quantum amplifiers, processing input drive signals (𝐃_a → 0). In this case 𝐇_b → 0, 𝐉_b^-1→𝐋_b^-1, and Λ^-1/2b̂→ -𝐋_b^-1(𝐋_a^(l))^-1η⃗^(l); hence both the mean and covariance are determined entirely by the matrix 𝐋_b, as must be the case for a linear system. The resulting expressions can be used to obtain the standard quantum limits <cit.> on amplification (see SI Sec. <ref>).
Practical quantum amplifiers, in contrast, can exhibit nonlinear behaviour for sufficiently strong input signals. Then, the leading contribution b̂ to μ is determined by the nonlinear Eq. (<ref>), while the covariance is determined by 𝐉_b, an indicator of the difference in response of nonlinear quantum systems to signal and noise. However, if the aim is still to process input drives η⃗^(l), the large drives typically needed to reach such regimes lead to high signal-to-noise ratios where this difference is often ignored, with more attention instead paid to mitigating the nonlinear response of b̂.
However, in other quantum information processing tasks, noise takes center stage. A simple case arises when discriminating quantum states l and p such that â^(l) = â^(p), so that the leading term ∝b̂ in μ makes no contribution to δμ. Then, quantum state classification becomes the task of processing quantum fluctuations encoded in 𝐃_a^(l), and consequently in 𝐂_b^(l). The nonlinearity enables such processing in situ, via the Hessian tensor. Furthermore, while the covariance is still determined by 𝐉_b^-1 alone, δμ is now determined both by 𝐉_b^-1 and by the Hessian tensor 𝐇_b. This enables a complex interplay of signal and noise only possible using nonlinear quantum systems. We demonstrate the implications of this result using various examples in this paper.
As these expressions are derived using the NVK approximation, one may question their validity beyond the lowest order in nonlinearity. Importantly, the truncated cumulants ansatz holds past the NVK regime; we use this to develop a computational approach, the Stochastic Truncated Equations of Motion (STEOMs), that can be used to simulate measurement-conditioned dynamics of the nonlinear multimode measurement chain beyond the NVK approximation (see SI Sec. <ref>). The STEOMs allow us to account for classification under finite sampling as in real experiments, numerically verify NVK results, and analyze performance in strongly-nonlinear regimes. Finally, we provide select comparisons using exact (S)ME integration to qualitatively verify our results without any assumptions on nonlinearity strength (see SI Sec. <ref>).
§.§ Quantum state discrimination tasks
We begin by defining the QS of Eq. (<ref>) whose states we wish to classify. Precisely, we consider the QS to be a general M-mode linear quantum system under coherent driving at frequencies {ω_dm}; written in the interaction picture and assuming resonant driving, it is described by the Hamiltonian
ℋ̂_ QS^(l) = ( 1/2∑_m G_m^(l)e^-iϕ_m^(l)â_m^2 + ∑_n≠ m G_nm^(l)e^-iϕ_nm^(l)â_nâ_m + h.c. )
+∑_mη_m^(l)(-iâ_m+iâ_m^†).
The integer l then indexes the states we wish to classify, generated by distinct choices of parameters also indexed by l. Such a Hamiltonian can be easily realized in the cQED architecture using tunable parametric drives <cit.>.
We are ultimately interested in coupling the QS directly to the - a nonlinear quantum device - for quantum signal processing. However, to provide a benchmark for later comparisons, we begin by introducing the classification tasks and how they may be performed in a simpler, more usual context: heterodyne readout of the QS using linear quantum amplifiers. Recall that this simplified measurement chain, depicted in Fig. <ref>(a) for M=2, is still described by the same general form of SME, Eq. (<ref>), provided 𝒩̂_→ 0 and ℋ̂_→ℋ̂_ PP, where the latter Hamiltonian specifically describes the phase-preserving (PP) style of linear amplifier (for full model, see Appendix <ref>).
We can finally define the binary quantum state discrimination tasks we wish to analyze: our work considers the discrimination of QS states which possess the unique feature that any monitored modes (i.e. modes that are coupled to the downstream processor) have identical steady-state quadrature expectation values for both states. More precisely, for two distinct states indexed by l and p, â_m^(l)=â_m^(p) = ∀ m s.t. Γ_m ≠ 0, where defines a common amplitude of monitored modes. Such states can then only be distinguished on the basis of their quantum fluctuation statistics, if these are distinct.
For Task I of this type, we consider distinguishing a single-mode squeezed state of mode â_1 (l=1) from a two-mode squeezed state (l=2), by monitoring only mode â_1 (Γ_1 ≠ 0, Γ_2 = 0). The QS parameters used to realize these states are summarized in Table <ref>. The top plot in Fig. <ref>(b) shows the steady-state quadrature expectation values of the monitored mode â_1, which are equal by construction. In contrast, the depicted full QS quadrature covariance matrices, related to the second-order QS cumulants 𝐂_a^(l) via a simple change-of-basis (see SI), clearly indicate the differences in quantum fluctuations of the two QS states. The discrimination task requires extracting these differences from measurement records x, typically following some processing to obtain readout features y=x. If we restrict this processing to only linear operations, the obtained distributions of features y = x = (I_1,Q_1) (simulated using the STEOMs) are shown in the first panel of Fig. <ref>(c) for single-shot (S=1) readout of different realizations of each QS state. Clearly, the mean values of both measured feature distributions overlap; note that this mean would be unchanged by averaging over repeated shots (S>1). Hence ||δμ||→ 0 and therefore → 0, leading to the conclusion that the two states cannot be distinguished in the space of (I_1,Q_1).
However, the distributions of measured quadratures are visibly distinct, just not in their mean values, but instead in their second-order moments or covariances. To estimate such second-order moments (routinely required for example for Gaussian state tomography <cit.>), the standard approach is to obtain S shots and estimate the variance of the measurements over the dataset yielding readout features y = x = 1/S∑_s^S(I^2_1,Q^2_1) (shot s dependence of quadratures I_k,Q_k is implied). For Task I, we plot distributions of these nonlinear readout features in the second panel of Fig. <ref>(c). The mean values of these features are now estimators of the monitored mode covariances; as these are distinct, the centers of the distributions no longer coincide, rendering them linearly separable. Without this nonlinear processing step, it is impossible to perfectly distinguish the QS states in Task I under QS readout using only linear amplifiers.
For Task II we consider a more complex classification task, which requires both nonlinear and nonlocal information processing. Specifically, we wish to distinguish a pair of two-mode squeezed states l ∈ (3,4) that experience an identical amount of joint squeezing, but whose two-mode squeezed quadratures are mutually orthogonal. Such states can be generated using two-mode squeezing interactions of equal strength but opposite phase (see Table <ref>). The full QS covariance matrices (see Fig. <ref>(d)) then differ only in the sign of cross-correlations between the two QS modes (off-diagonal blocks); all other covariance metrics are identical.
Measuring such cross-correlations necessitates monitoring both QS modes (Γ_1, Γ_2 ≠ 0). We again first show measured features under linear processing alone, now for two distinct modes, y = x = (I_1,I_2), in the left panel of Fig. <ref>(e). Clearly, the distributions have anisotropic profiles that differ in the axis of minimal fluctuations, indicative of two-mode squeezing of distinct joint quadratures. However, the centers of measured distributions overlap as before, due to their equal mean values. Also as before, nonlinear processing to estimate second-order moments provides the solution, but with an important caveat: all local second-order moments are insensitive to the sign of cross-correlations of the QS modes. Instead, the estimation of nonlocal second-order moments, y = x = 1/S∑_s^S(I_1I_2,Q_1Q_2), is necessary to obtain linearly separable distributions for Task II.
Therefore, both our considered classification tasks - instances of the broad family of tasks that require measuring correlations between observables to distinguish quantum states - demand nonlinear processing of heterodyne records at room temperature. However, such processing can be particularly sensitive to high-temperature noise ∝n̅_ cl in the measurement chain. A key result of our work is that using to process signals from the QS in the same cryogenic environment where they are generated can circumvent the need for nonlinear processing at room temperature. This renders classification schemes incorporating much more robust to excess noise in the measurement chain (see Sec. <ref>).
§ QUANTUM STATE DISCRIMINATION USING A SINGLE-MODE
We now demonstrate the paradigm of in situ nonlinear processing of quantum signals enabled by the in the context of quantum state discrimination. In particular, we will show that such processing would not be possible using standard linear amplifiers in the same measurement configuration. Our analysis begins with the simpler Task I; Task II is analyzed in Sec. <ref>.
Recalling that only a single QS mode is read out in Task I, we consider a K=1 to monitor this QS mode, which proves sufficient. Our chosen is defined by a single Kerr-nonlinear mode with frequency ω_1 and nonlinearity Λ (for details, see Appendix <ref>). The resulting quantum measurement chain is then depicted in Fig. <ref>(a). When coupled, the quantum state of the is determined by the state of the QS, as desired: this dependence is shown in Fig. <ref>(b) via the covariance matrices of the complete measurement chain for the two QS states to be disntinguished. In the NVK approximation, the dependence is quantified by the linear Lyapunov system, Eq. (<ref>), although the exact relationship can be more complex. From Eq. (<ref>), it is also clear that this inter-dependence itself does not require the to be nonlinear; it only requires a nonzero coupling Γ.
Crucially, we require this dependence to be transduced to readout features to enable successful QS state discrimination by linear readout of the alone. It is here that the role of nonlinearity becomes clear. To demonstrate this, we analyze readout distributions for features obtained under linear processing only, y = x = (I_1,Q_1), as a function of nonlinearity. Our STEOMs framework enables simulating individual quantum trajectories of the QS and (crucially accounting for the latter's nonlinearity), providing the resulting heterodyne measurement records defined by Eq. (<ref>) that are used to construct y; typical examples are shown in Fig. <ref>(c), (d). By repeating over several measurement chain initializations for each QS state, here l=1,2, we obtain distributions of measured features shown in Fig. <ref>(e), (f). Then Fisher's discriminant computed for the two distributions determines the fidelity of classifying the QS states.
Recall that depends on the mean separation δμ of the distributions. The NVK approximation provides a very useful form of δμ, Eqs. (<ref>), which specialized to the single-mode Kerr takes the form:
δμ = √(𝒯/2)𝐔_K [ √(Λ/γ) 𝐉_b^-1𝐇_b : ( 𝐂_b^(l)-𝐂_b^(p)) ],
where γ = + Γ_1 is the total damping rate. The nonlinear dependence is associated with the Hessian 𝐇_b, which vanishes for linear systems, so that ||δμ|| = 0. This is exactly what is observed for readout with linear amplifiers in Sec. <ref>, as well as with vanishing nonlinearity (Λ = 0), shown in Fig. <ref>(e). In this case, increasing the filtering time 𝒯 also has no effect on ||δμ||, and consequently on .
The measured distributions change qualitatively when the is nonlinear (Λ≠ 0), as shown in Fig. <ref>(f). Now ||δμ||≠ 0, and the mean separation increases with 𝒯 as predicted by Eq. (<ref>). The is able to transduce information encoded in fluctuations 𝐂_b^(l), which in turn depend on the QS state, to quadrature mean values, an operation governed by the Hessian 𝐇_b. Now even with only linear processing · of measured quadratures, ≠ 0: a suitable is thus able to circumvent the need for nonlinear post-processing of measurement records, and the resulting classification accuracy shown in Fig. <ref>(f) approaches unity. In contrast, for vanishing nonlinearity, perfect classification is impossible using only ·.
This simple example highlights the extremely general principle of transduction using . However, making successful use of this principle requires asking some more complex questions, each of which we address in this paper. First, one may ask whether any works for a given quantum state discrimination task. Unsurprisingly, this is not the case; instead, optimization is important, and features a landscape far richer than that of linear quantum amplifiers. The reason for this is simple: the very nonlinearity that enables a nonzero `signal' ||δμ|| also determines the fluctuations 𝐂_b^(l) that appear in the `noise', characterized by the measured covariance matrix Σ^(l). This leads to a complex interplay between signal and noise for classification using , where one cannot be optimized without considering the other. Further complicating this interplay is the noise added by the itself, which must also be accounted for in the optimization. While the limits of added noise have been well-established for linear quantum amplifiers <cit.>, much less is known about added noise in the context of nonlinear quantum systems, and even less so for driven-dissipative cases relevant in practical settings.
Our framework characterizes this nontrivial relationship between signal and noise in . Even more importantly, it also accounts for noise added by during processing. The latter crucially includes the possibility of the noise being correlated, even non-classically, either due to correlations in the incident quantum signals arriving from the QS, or even due to correlations generated by the dynamics itself. Crucially, we find that the nonlinear nature of implies that the signal and noise contributions exhibit different relative functional dependencies, a freedom not available to linear quantum systems. This distinction provides the ability to manipulate the readout noise for so as to limit its impact on the signal, by tuning parameters of the alone. This capability proves crucial to the optimization of a for learning, as is discussed in the next subsection.
We emphasize that a precise understanding of the noise physics and optimal operating regimes of is not merely an academic question, but one that must be addressed to understand the ultimate limits of information processing with such devices. For example, for classification with minimal resources (e.g. filtering time 𝒯), information processing must be carried out in regimes where the signal ||δμ|| is of comparable magnitude to the standard deviation of measured distributions. Such a case is depicted in Fig. <ref>(f). From the observed non-isotropic distributions, it is clear that readout can exhibit correlated noise statistics. If the correlated nature of fluctuations was ignored (in other words, if the added noise was assumed to be uncorrelated and isotropic), the decision boundary would simply be the bisector of the mean separation ||δμ||, depicted by the dashed purple line. Unless 𝒯 is very large so that ||δμ|| dominates over noise (a significant increase in measurement resources), the resulting classification accuracy (purple) is substantially lower than the optimal. Therefore the correlation properties of noise in readout - including of noise added by the itself - must be accounted for, as we do here and analyze in detail next.
§.§ Coherent control of quantum fluctuations using a
We have seen that the nonlinearity is essential to learn QS states using linear readout. However, not all will be equally effective at learning: performance metrics such as for a fixed 𝒯 are strongly dependent on operating parameters. In this section, we address how can be optimized for learning. Our analysis centers around understanding how Fisher's discriminant for -enabled classification can be maximised, to which end analytic expressions obtained using the NVK approximation are particularly powerful. In particular, while our analysis is presented for Task I using a K=1 mode , the NVK approximation allows us to uncover general principles for operation that apply to more complex tasks with larger , as we show in Sec. <ref> with Task II.
A nonzero Fisher's discriminant requires a non-vanishing δμ. Within the NVK approximation, Eq. (<ref>), it is clear that the magnitude ||δμ|| must be determined by the inverse of the Jacobian 𝐉_b^-1 describing the . This is unsurprising, as the Jacobian typically determines the linearized response of a nonlinear system, with the dimensionless susceptibility |χ_b| given by the largest eigenvalue of 𝐉_b^-1. As discussed in Appendix <ref>, for a K=1 mode Kerr-type this takes the form
|χ_b| = max{| eig 𝐉_b^-1|} = γ/γ/2 - √((2n_1)^2- (Δ_1+4n_1)^2 ),
where n_1 = |b_1|^2, obtained by solving Eq. (<ref>). Importantly, it can be shown that |χ_b| depends on the nonlinearity only via the dimensionless effective parameter ,
= √(Λ/γ)·Γ_1/γâ_1^(l) = √(Λ/γ)·Γ_1/γ
which is independent of l due to our choice of classification tasks. The above form of |χ_b| is ubiquitous in studies of the linearized response of a variety of Kerr-based systems, from parametric amplifiers <cit.> to frequency comb generators <cit.>. The susceptibility can be made large by suitable parameter choices, including ; typically this is determined by the strength of a separately applied pump tone. Our use of Kerr-based exhibits a slight difference: is instead set by the amplitude of signals incident from the QS upstream. The QS state to be measured hence serves to `pump' the very being used to measure it (although a distinct pump tone could equivalently be applied).
This difference notwithstanding, suitable choices of (,Δ_1) can similarly cause |χ_b| to become large, increasing the magnitude of response to any input, including to quantum fluctuations from the QS. In Fig. <ref>(a), we plot |χ_b| as a function of Δ_1/γ and (also Λ/γ for ·Γ_1/γ=80/9). The orange region, where |χ_b| diverges, marks the well-known classical bistability of the single Kerr oscillator <cit.>, here brought about by the mean amplitude of incident QS signals. Operating near this bistability - and more generally, any instability of the Jacobian 𝐉_b - will increase |χ_b|.
We therefore consider near the bistability, more precisely for the fixed detuning Δ_1/γ = -0.67 and for varying across the vertical dashed line in Fig. <ref>(a). Here the susceptibility |χ_b| is plotted in the inset of Fig. <ref>(b), which exhibits a Lorentzian-like profile with a single maximum. Note that the important quantity for Fisher's discriminant is instead the magnitude of measured mean separation ||δμ||; this is also shown in Fig. <ref>(b), using both the NVK approximation and STEOMs integration. Interestingly, while ||δμ|| does increase in conjunction with |χ_b|, it also displays a double-peak structure that is manifestly distinct. Thus, while |χ_b| is clearly important, it does not completely define the physics of QS state learning using a . Here, the difference can be attributed to the fact that ||δμ|| is not simply the linearized response to an input field, but the response to quantum fluctuations encoded in 𝐂_b^(l), which drive the via the Hessian tensor 𝐇_b (which is absent in standard linearization approaches).
The other factor determining is the noise in measured quadratures, quantified by the covariance matrices Σ^(l). To aid our analysis of noise properties, we first introduce the minimum and maximum noise eigenvalues and corresponding eigenvectors of the 2K-by-2K covariance matrix Σ^(l) as {σ_ min^2(l),v_ min^(l)} and {σ_ max^2(l),v_ max^(l)} respectively. These eigenpairs denote combinations of measured quadratures with minimal and maximal noise, and for K=1, completely define Σ^(l). We next note that binary classification in an arbitrary-dimensional measured space can be cast into a two-dimensional subspace for visualization. We introduce a vector v_∥ = 1/||δμ||δμ parallel to δμ, which is unique upto normalization, and a vector v_⊥ orthogonal to v_∥, namely v_∥^Tv_⊥ = 0, of which there are 2K-1 choices. These vectors allow us to a define a measured quadrature P_δμ that is parallel to δμ, and R_⊥ as one of 2K-1 measured quadratures orthogonal to δμ,
P_δμ = v_∥^Ty, R_⊥ = v_⊥^Ty.
The quadrature P_δμ has the property that 𝔼[P_δμ^(l)-P_δμ^(p)] = ||δμ||, as may be readily verified. For isotropic noise distributions where only the mean separation determines distinguishability, P_δμ then defines the only feature that need be measured for classification. Of course, noise in our situation is far from isotropic. We therefore also introduce the noise projected along δμ,
σ^2(l)_δμ = v_∥^T·Σ^(l)·v_∥.
Note that the noise in the P_δμ quadrature is exactly σ^2(l)_δμ.
In Fig. <ref>(c), we now plot this quantity, together with the minimum and maximum noise eigenvalues, for l=1 and for the same parameters as Fig. <ref>(b). Finally, and are plotted over the same parameter range in Fig. <ref>(d). Clearly, σ_ max^2(1) increases following |χ_b|, describing the amplification of fluctuations by the . In contrast, σ_ min^2(1), which should represent squeezing, does not undergo a corresponding dip; this is because the Kerr-based does not generate ideal squeezed states. Regardless, we still have σ_ min^2(1) < σ_ vac^2, so the measured distributions exhibit squeezing below vacuum. The projected noise σ_δμ^2(1) varies in between these maximum and minimum noise eigenvalues, with some important features that we now describe.
To illustrate the interplay of mean separation and noise, we consider three defined by nonlinearity values marked (i)-(iii) in Fig. <ref>. As seen in Fig. <ref>(b), measured output distributions from (i) have the largest mean separation. However, the projected noise plotted in Fig. <ref>(c) is far from minimal. From the resulting measured distributions depicted in Fig. <ref>(i), it is visually clear that noise in the direction of δμ is not minimized.
(ii) ends up being the most interesting operating point. While the mean separation of outputs from (ii) is lower than (i), the projected noise is minimized, σ_δμ^2(1) = σ_ min^2(1), as seen in Fig. <ref>(ii). From the measured distributions, we see that δμ is now aligned with the direction of minimum noise, defined by the noise eigenvector v^(1)_ min. The resulting Fisher's discriminant and are in fact larger than for (i), even though ||δμ|| is slightly smaller.
Finally, (iii) is specifically chosen so that its output ||δμ|| is equal to that for (ii); however, the projected noise is much larger. From Fig. <ref>(iii), it is visually clear that the measured distributions from (iii) are the least distinguishable of the three considered, leading to the smallest value of and hence .
This analysis has several important conclusions. First, it emphasizes the necessity of accounting for quantum fluctuations for optimal classification. This is the only factor that distinguishes (ii) and (iii). Secondly, we are able to clarify the role of -mediated squeezing of quantum fluctuations in classification. As seen from Fig. <ref>(d), the measured outputs from all three exhibit distributions with squeezing below vacuum (σ_ min^2(1) < σ_ vac^2). However, this squeezing does not optimally aid classification unless the specific measured quadrature defined by P_δμ is the one undergoing squeezing, here (ii).
Finally, and most remarkably, even a simple, single-mode Kerr provides the ability to manipulate quantum fluctuations separately from mean values, so that the aforementioned optimal squeezing scenario can actually be engineered using suitable parameter choices. This capability is emphasized by (ii) and (iii), which exhibit the same `signal' ||δμ|| but very different projected noise properties, a scenario not possible in, for example, linear amplifiers. The remarkable freedom in adjusting the projected noise independently of the mean separation is demonstrated via the surface plot of σ_ min^2(1) in Fig. <ref>(e); here the white curve, which defines operating points with fixed ||δμ||, traverses through regions of both very large projected noise, as well as minimal projected noise that is sub-vacuum. Clearly, optimal performance requires operating in regimes where the projected noise can be lowered (not necessarily always exactly minimized, as also depends on ||δμ||).
Before proceeding, we make two further observations. First, it is clear that ||δμ|| (and hence ) exhibits a complex dependence on the nonlinearity, in contrast to the monotonic dependence alluded to by Eq. (<ref>). This is because simply varying Λ also modifies via Eq. (<ref>) if is fixed, thus changing the operating conditions in a nontrivial fashion. To explore the performance improvement with Λ hinted at by Eq. (<ref>) a more careful analysis is needed; this is presented in Appendix <ref>. Secondly, note that all quantities calculated using the NVK approximation agree well with their counterparts obtained using exact integration of the STEOMs, especially in qualitative terms. This highlights the utility and validity of the NVK approximation in analyzing and, more generally, networks of coupled nonlinear quantum systems.
§ ADVANTAGE FOR QUANTUM SIGNALS
Thus far we have demonstrated that enable processing of quantum signals in a way that is distinct from linear quantum amplifiers. However, it is not yet clear that this difference will lead to a practical advantage for quantum information processing. We now address this key question.
To do so, we will compare readout using the against standard linear quantum amplifiers for the task of quantum state discrimination. Note that both these schemes have already been discussed separately in Sec. <ref> and Sec. <ref> respectively, albeit the latter only for phase-preserving amplification. Now, we take advantage of the sufficiently general and comprehensive description of quantum measurement chains enabled by our framework to consider the configuration in Fig. <ref>, where a general quantum processor follows the QS to be read out. The processor can be the , or one of a phase-preserving (PP) or phase-sensitive (PS) linear quantum amplifier (see Appendix <ref> for exact models). In this way, the individual couplings, losses, and readout rates of modes constituting the quantum processor can be held fixed from one model to the next, ensuring a direct comparison. Secondly, we now also include the effect of excess classical noise ∝n̅_ cl, which necessitates the use of quantum processors in the first place. We can then identify what practical advantages (if any) are enabled by the .
We briefly note that if one considers only binary quantum state discrimination tasks, provably optimal measurement protocols are known, namely Helstrom measurements <cit.>. While a sufficiently broad definition of a would also incorporate such schemes, such measurements are specific to the particular states to be distinguished, and are not always straightforward to perform as they typically require non-Gaussian operations or strong constraints on coherence. We envision the considered in this paper to be akin to linear quantum amplifiers in this respect: they may not necessarily be the optimal choice for a single task, but can provide general quantum processing for quantum signals for much more general tasks. Nevertheless, we will still compare the and linear amplifiers against bounds set by optimal discrimination schemes, to be discussed shortly.
Even for processors in the same measurement chain, one last degree of freedom remains: the operating parameters. For linear PS and PP amplifiers, it is straightforward to identify equivalent operating regimes: we can simply choose them to operate at the same gain, or equivalently the same susceptibility |χ_b|. The PS amplifier has only one additional degree of freedom, namely the phase of the quadrature to be amplified. Unlike these amplifiers, the does not provide only linear amplification, and so its operation is not determined entirely by |χ_b|, as shown earlier. Nevertheless, we choose to operate the at the same fixed |χ_b| as the linear amplifiers to ensure comparable operating points. These operating points are defined by `isogain' contours of |χ_b| in Λ-Δ_1 space; considered examples are shown in Fig. <ref>(b).
§.§ Quantum Chernoff Bound and optimal discrimination
In assessing the relative performance of the various processors at quantum state discrimination, we first note that the upper bound on the discrimination accuracy must depend on the fundamental distinguishability of the two reduced QS quantum states _ QS^(1),_ QS^(2), independent of the specific processing or measurement scheme. One measure of this distinguishability is the Quantum Chernoff Bound (QCB) ζ. The QCB bounds the discrimination accuracy according to 1 - 𝒞_ max∼exp ( -N ζ ) where access to N identical copies of each state is assumed, all of which may be measured <cit.>. Clearly, the larger the value of the QCB ζ, the more easily distinguishable the two quantum states are for fixed N. While strictly speaking a bound that holds in the asymptotic limit of many copies N, we use the QCB here due to the ease of its calculation for Gaussian states. We then define ζ_ QS as the QCB for the QS states _ QS^(1),_ QS^(2). Similarly, when such states are processed using a downstream , the QCB ζ can also be computed to quantify the distinguishability of the states _^(1),_^(2), where _^(l) = tr_ QS[^(l)]. We will consider Task I in what follows, so that the QCB is computed for the single mode to be read out (see Appendix <ref> for full details).
We plot the ratio of these two QCB values, ζ/ζ_ QS, for three different |χ_b| values for the different quantum processors in Fig. <ref>(c). For the PS amplifier the optimal QCB is chosen by varying the PS amplification phase. For the , the QCB is plotted along the isogain contours in Fig. <ref>(b), using both TEOMs and within the NVK approximation. Immediately, we note that ζ/ζ_ QS < 1: the use of a quantum processor will generally introduce loss channels that are independent of the states to be distinguished and therefore hinder classification. We also note that for the present task, PS amplification enables a larger QCB than PP amplification: even though the PS amplifier can only amplify information in a single quadrature of the quantum input signal, it does so without adding any noise. Interestingly, we see that the can approach and at certain operating points even exceed the QCB obtainable using linear quantum amplifiers in the same measurement chain. We emphasize that this was not a priori guaranteed in the presence of the added noise of the ; hence this observation is promising for the use of for quantum state discrimination.
§.§ robustness to readout noise
Given the reduction in the QCB when using a downstream quantum processor, we reiterate why such processors are necessary in the first place. Excess classical noise n̅_ cl in the measurement chain can swamp weak quantum signals, such as those emanating from the QS directly, as they are extracted to the classical observer. The QCB is a discrimination bound that assumes optimal measurements, and therefore does not account for the limitations on readout imposed by n̅_ cl. In practical measurements subject to readout noise, quantum processors such as linear amplifiers then provide the pre-amplification necessary to overcome this noise and enable visibility of weak quantum signals.
To quantify the impact of this practical constraint, we instead compute the classification accuracy for the different quantum processors under heterodyne readout and in the presence of n̅_ cl. We consider the operating point labelled (d) in Fig. <ref>(c), where ζ/ζ_ QS agrees well between the NVK and TEOMs. We also deliberately consider an operating point where the PS amplifier enables a larger QCB ζ than the ; as we will show, the particular advantage we wish to highlight prevails in spite of this. Finally, we define readout features obtained using linear (·) and nonlinear (·) post-processing of measurement records over multiple shots, y = 1/S∑_s^S(I_1,Q_1,I_1^2,Q_1^2,I_1Q_1). This enables a fair comparison of the against linear amplifiers, as the latter can only perform this discrimination task when nonlinear post-processing is allowed.
We plot the obtained 𝒞_ max using the different quantum processors as a function of n̅_ cl in Fig. <ref>(d). For low enough n̅_ cl, both and PS readout exhibit similar performance while the PP amplifier is worse, consistent with the obtained QCB values. With increasing n̅_ cl, the performance of all readout schemes degrades as expected. However readout is clearly the most robust, exhibiting the smallest reduction. The red shaded region marks typical n̅_ cl values for cQED measurement chains, attributed primarily to noisy HEMT amplifiers (for calibration and mapping to equivalent noise temperatures as shown, see SI Sec. <ref>). readout therefore provides enhanced robustness against experimentally relevant levels of excess classical noise.
The observed advantage stems from a simple principle: enable nonlinear processing of quantum signals from the QS in the same quantum environment where they are generated, and - crucially - prior to corruption by excess readout noise n̅_ cl. In contrast, linear quantum amplifiers provide only linear gain to quantum signals, so that nonlinear processing of the extracted noisy signals is still required. By reducing or even eliminating the need for nonlinear post-processing, therefore purvey a fundamentally different scaling of the signal-to-noise ratio with readout noise n̅_ cl. As shown in the inset of Fig. <ref>(d), Fisher's discriminant (which defines the signal-to-noise ratio for binary discrimination) degrades only linearly with n̅_ cl for the . This is markedly different to linear amplifiers, in which case degrades quadratically with n̅_ cl, as the post-processing required to compute second-order nonlinear features necessary for this task must also amplify the added readout noise. Most importantly, our results emphasize that this processing advantage does not demand large-scale or highly-coherent quantum systems as : few- and even single-mode nonlinear quantum systems suffice, deployed in lossy quantum measurement chains with minimal additional components.
We conclude with two practical observations. First, while other choices of readout features y (e.g. for different temporal filters 𝒦(τ)) could influence the quantitative performance of either type of processor, the qualitative difference in scaling with n̅_ cl, which relies on the in situ nonlinear processing paradigm enabled by , will remain.
Secondly, in the interest of a fair comparison, here we have considered as a direct replacement for linear quantum amplifiers, demanding to offer both transduction and amplification to overcome readout noise. In this configuration, we are thus not able to take full advantage of the 's ability to engineer minimal projected noise σ^2_δμ = σ^2_ min, as this minimal noise can be swamped by the following readout noise n̅_ cl≫σ^2_ min. Using the to control quantum fluctuations is still important, but to instead operate in a regime where σ^2_δμ≳n̅_ cl (to the extent possible for a given |χ_b|). Alternatively, can be deployed in conjunction with linear amplifiers. Then, a would first be used to engineer a transduced signal quadrature P_δμ with minimal projected noise σ^2_ min. Next, a phase-sensitive amplifier would provide noise-free gain √(𝒢_ PS) to this quadrature, yielding ||δμ||^2 = 𝒢_ PS(𝔼[P^(l)_δμ-P^(p)_δμ])^2 and σ^2_δμ = 𝒢_ PSσ^2_ min + (n̅_ cl+1)σ_ vac^2. Provided 𝒢_ PS is large enough to satisfy 𝒢_ PSσ^2_ min≫n̅_ cl, the output becomes ≃ (𝔼[P^(l)_δμ-P^(p)_δμ])^2/σ^2_ min, taking advantage of both transduction using (which determines the `signal'), and their ability to engineer non-classical noise distributions (which determines the `noise').
§ MULTI-MODE
In this final section, we analyze the use of a to perform Task II: distinguishing a pair of two mode squeezed states. Here, we see that the need for multi-mode naturally arises, which then allows us to identify the role of entanglement in quantum information processing tasks using .
§.§ Role of coupling
As shown in Sec. <ref>, successfully performing Task II under direct QS readout requires monitoring of both QS modes, and computation of cross-correlations. For QS modes with disparate frequencies |ν_1 - ν_2| ≫κ (as required to realize non-degenerate squeezed states), quantum signals from the QS modes at ω_dn∼ν_n are also at widely separated frequencies. A single mode with similar bandwidth γ_1+Γ_1 ≃κ_1+Γ_1, will thus not exhibit a strong response to two signals with such a wide spectral separation. Note that increasing the bandwidth relative to the QS requires γ_1 ≫Γ_1; however, this undercouples the QS to the , reducing the influence of input QS signals and hence the response (see Eq. (<ref>)).
The processing of quantum signals from multiple non-degenerate quantum modes is therefore a situation where the need for a multi-mode naturally emerges. We consider the measurement chain shown in Fig. <ref>(a) incorporating a K=2 mode , and where each QS mode is monitored by a mode via the non-reciprocal interaction defined by Γ. A tunable (e.g. parametric) coupling _12 between the modes allows us to clearly identify the role played by the multi-mode nature of the .
The full measurement chain covariance matrices are shown in Fig. <ref>(b) for the two QS states l=3,4 constituting Task II, first for _12 = 0. Only matrix elements that are different between the two states are shown in color, to aid visibility. In particular, when _12 = 0, the local mode covariances are independent of l; this is because the two QS states also have identical local covariances. Instead, even if the modes are uncoupled, non-trivial correlations that vary with the QS state are established between the modes. While this may appear surprising at first glance, it is merely a consequence of the fact that the two uncoupled modes are nevertheless driven by a correlated quantum input signal originating from the QS, which means the two modes' states must also be correlated. However, it is only when the coupling is turned on, Fig. <ref>(c), that these cross-correlations can be transduced to differences in the local mode covariances. We will see how, in contrast to Task I, this means that now both the nonlinearity and coupling of modes is required for success at Task II.
Unfortunately, with increasing size, the number of tunable parameters and the complexity of their interplay grows, making it a priori difficult to isolate the importance of a particular parameter, such as the coupling _12. Here, our analysis of optimization for Task I in Sec. <ref> proves its generality, and hence its value. For a given , two principles were found: the importance of |χ_b| (the largest eigenvalue of the Jacobian) in enhancing the response of the to any input, and the control of quantum fluctuations to minimize projected noise σ^2(l)_δμ. Our strategy to isolate the role of coupling is to therefore fix other parameters such that |χ_b| can be held constant and σ^2(l)_δμ is simultaneously minimized.
The two-mode model we consider is a realization of the Kerr or Bose-Hubbard dimer that has been analyzed in prior work, but only under coherent driving <cit.>, not with both modes driven by quantum signals from a QS upstream as illustrated schematically in Fig. <ref>(a). By analyzing the Jacobian of this two-mode , we identify regions of parameter space where the susceptibility |χ_b| grows and diverges, namely near the classical bistability of the Kerr dimer, marked as the orange region in the phase diagram of Fig. <ref>(a). Then, using the NVK approximation, we identify a trajectory through this parameter space near the classical bistability - labelled the optimal noise trajectory (gray curve) - where |χ_b| = 9.0 at every coupling strength, and simultaneously the projected noise is minimized, σ^2(3)_δμ≃σ^2(3)_ min. An example of how this choice constraints parameters is shown for point (ii) on this trajectory in the lower panel of Fig. <ref>(a).
We again consider single-shot readout features obtained under linear processing only, now from both modes, y = x = (I_1,Q_1,I_2,Q_2). Performing Task II for parameters along this trajectory, we obtain as a function of coupling in Fig. <ref>(c), which follows a smooth curve. We also plot the corresponding mean separation ||δμ|| and projected noise in Fig. <ref>(c). For _12→ 0, we see that ||δμ||→ 0, even though 𝐂_b^(3)≠𝐂_b^(4) as seen in Fig. <ref>(b) and the nonlinearity is nonzero. This is because the Hessian tensor in Eq. (<ref>) is still local, as the nonlinearity is on-site, so that δμ is only sensitive to local mode covariances which are independent of l when _12 = 0. This observation is the result of a much simpler fact: since each mode is coupled to only one QS mode, and the difference between QS states l=3,4 is present in correlations of different QS modes, it is necessary for the two modes to `communicate' (hence, be coupled) to be able to distinguish the inputs they receive. If this communication is not performed in situ using _12, it will have to be performed in post-processing, by computing correlations of measured quadratures for distinct modes explicitly, which constitutes a nonlinear processing step. Turning on _12 allows these correlations to be computed via the dynamics, allowing the difference in QS states to translate to a nonzero ||δμ|| of measured quadratures, enabling → 1.
§.§ Engineering entanglement for classification
We are now well-placed to explore the role of entanglement in the processing of quantum signals using a . Importantly, we are interested in output field entanglement, which requires non-classical correlations between measured quadratures. For the two-mode , the 4-by-4 measured covariance matrix can be expressed in the block form
Σ =
[ Σ_11 Σ_12; Σ_12^T Σ_22 ]
where Σ_jk is the contribution from covariances between modes j and k (see Appendix <ref>), and we have suppressed the superscript (l) indicating the corresponding QS state. The metric we then use to quantify the degree of entanglement in outputs is the logarithmic negativity E_ N <cit.>, a standard entanglement monotone. In terms of Eq. (<ref>), E_ N can be defined as
E_ N = max{0,-ln 2ν^-}, ν^- = √(d-√(d^2-4Σ)/2),
d = Σ_11+Σ_22-2Σ_12
In Fig. <ref>(b), we then plot E_ N as a function of coupling _12 along the optimal noise trajectory, computed using the NVK approximation as well as using simulated covariance matrices via the STEOMs. At first glance, the role of entanglement appears straightforward: increasing coupling coincides with an increase in the degree of entanglement between measured quadratures, and leads to improved classification performance.
However, this observation does not indicate whether entanglement is always useful, or if it provides an advantage that goes beyond non-classical correlations of only single-mode variables. The context of binary quantum state discrimination allows us to probe these questions directly. We once again consider the reduced quadrature description introduced in Sec. <ref>. In Fig. <ref>(d), we then plot measured distributions in the projected space spanned by (P_δμ,R_⊥) for three different operating parameters (i)-(iii), labelled in Fig. <ref>(a).
By construction, the separation of distribution means for all (i)-(iii) is entirely confined to the P_δμ quadrature. (i) and (ii) fall along the optimal noise trajectory and are thus engineered to minimize noise in the P_δμ quadrature, as is clearly visible in the distributions. Fig. <ref>(d). In the top panel of Fig. <ref>(d), we also show the content of the P_δμ quadrature. For (i), P_δμ is predominantly comprised of quadratures of only a single mode (here k=2). Hence, the sub-vacuum noise in P_δμ is the result of quantum correlations between outputs of only a single mode, indicative of single-mode squeezing. In this weak coupling regime, entanglement is not necessary to obtain sub-vacuum noise; particular, the measured outputs do not exhibit entanglement as E_ N=0.0.
However, Task II benefits from increasing coupling between the modes, as observed previously; this not only increases the mean separation ||δμ||, but also generates output entanglement, as seen for (ii) where E_ N = 0.11. Now P_δμ is comprised of quadratures of distinct modes, and thus is a non-local quadrature. Sub-vacuum noise in P_δμ therefore must arise due to non-classical correlations amongst these non-local quantum modes, namely entanglement. Here, entanglement of measured quadratures directly improves classification performance by ensuring that non-local quadratures that carry useful information (encoded in ||δμ||) are also non-classically correlated, such that their fluctuations are reduced below the vacuum limit.
Perhaps most interestingly, just the presence of output field entanglement is not always guaranteed to be beneficial for classification. A simple counterexample is (iii), which demonstrates a nonzero logarithmic negativity E_ N = 0.10 like (ii), but is not on the optimal noise trajectory as seen from the lower panel of Fig. <ref>(a). P_δμ is still a non-local quadrature, but its noise is clearly not minimal, and is in fact above vacuum. The presence of output field entanglement means that there is some set of non-local quadratures that exhibit non-classical correlations that lead to sub-vacuum noise; however, these quadratures are not always guaranteed to carry useful information encoded in ||δμ||, and hence such non-classical correlations may not always be useful for the task at hand.
Crucially, the use of nonlinear provides the ability to control quantum fluctuations, such that the situation in (ii) can be engineered, and non-classical correlations can be manipulated for computational benefit. For binary classification, this entails finding operating points where the unique quadrature with maximal signal P_δμ can simultaneously exhibit minimal, non-classical (e.g. sub-vacuum) fluctuations. The ability to engineer this unique quadrature also opens up avenues to only amplify and measure said quadrature using noiseless phase-sensitive amplification, instead of phase-preserving amplification of all 2K quadratures for a K-mode , which necessarily adds noise.
§ CONCLUSIONS AND OUTLOOK
Nonlinearity is an essential component of signal processing, playing a key role from digital processing to neural circuits. However, the role of general, fundamentally stochastic quantum nonlinear devices in processing quantum signals - beneficial or otherwise - is much less explored. In this paper, we have addressed this limitation by identifying key general principles of quantum information processing enabled by a broad class of nonlinear bosonic quantum systems, which we refer to as .
Our two main results, which hold beyond the quantum state discrimination tasks we have chosen to demonstrate them, and which make explicit use of being both quantum and nonlinear, can be simply stated. We show that can be efficient information transducers (cf. Eq. (<ref>)): by processing quantum signals in situ, can render nonlinear properties of quantum signals such as correlation functions accessible via linear readout schemes like heterodyne monitoring. Secondly, by harnessing a nonlinear input-output map, can coherently manipulate quantum fluctuations in a manner unavailable to linear amplifiers that are linearly coupled to a quantum signal source: without suppressing the transduced signal magnitude, can modify its output noise properties. This requires to control not only quantum fluctuations emanating from the quantum system the is monitoring, but also those resulting from the operation itself, referred to as `added noise' for linear amplifiers (cf. Eq. (<ref>)). Remarkably, both these capabilities can be accessed using small-scale , making them relevant for implementations in current experiments. In fact, by analyzing realistic measurement chains, we show that even single-mode can provide robustness against classical readout noise.
On the other hand, the key theoretical tools these results are built on - the analytic NVK approximation of nonlinear measurement chains and the numerical STEOMs framework to efficiently simulate their conditional dynamics, both verified against exact master equation methods - provide the means to analyze quantum information processing using very general, arbitrarily-multimode nonlinear bosonic quantum systems. As such they can be used to study the processing of signals from many-body quantum systems and non-classical correlations or entanglement across several modes, as we study in Task II. Furthermore, they can enable the exploration of more general paradigms of quantum information processing. One example pointed out in the conclusion of Sec. <ref> is the use of measurement chains employing both and phase-sensitive amplifiers. Related paradigms enabled by our framework include the possibility of entangling the with the QS, via for example non-reciprocal entangling operations <cit.>.
Taking an even broader view, our work has direct applications to general information processing and computation paradigms such as quantum machine learning and quantum sensing. Nonlinearity is considered essential to the expressive capacity of physical neural networks including quantum systems <cit.>. However, several popular bosonic quantum machine learning platforms are linear <cit.>, instead enabling nonlinear processing by careful use of nonlinear input encoding schemes. Our work provides both tools and possible directions to explore the utility of multi-mode quantum nonlinear devices for learning applications; distinct from other approaches, our framework also describes learning on quantum inputs. In particular, it is expected that genuinely quantum advantages in such applications must make use of decidedly quantum properties, such as squeezing and entanglement. harness nonlinearity to control quantum fluctuations such that non-classical correlations appear in desired observables only. Furthermore, we elucidate general principles that can enable to operate in regimes to enable such processing. The resulting quantum mechanism to harness quantum correlations for the enhancement of classification accuracy could prove useful in extracting quantum advantages for quantum machine learning and quantum sensing.
Our work also invites exploration of more complex beyond the weakly-nonlinear regime. Firstly, the scaling advantage with n̅_ cl can be expected to be even more significant with increasing complexity of the required nonlinear computation, such as the calculation of higher-order or many-body correlations <cit.>. Such computations will demand the analysis of with higher-order nonlinearities, or operation in increasingly non-Gaussian regimes. Secondly, for the specific case of quantum state discrimination, we have shown that can approach and even exceed the optimal discrimination bound, quantified here by the Quantum Chernoff bound (QCB), achievable using linear quantum processors. However, a study of the maximum attainable QCB limits using requires going beyond the NVK approximation, and could quantify the ultimate constraints on quantum information processing using nonlinear systems. Both these directions are natural extensions that we leave for future work.
Finally, we return full circle to our original motivation: by extracting more information from the quantum domain, we hope can ultimately improve our ability to control quantum systems. By enabling the efficient simulation of measurement-conditioned dynamics of measurement chains including , our STEOMs framework provides the necessary first step in the study of quantum feedback and control using . Such control is necessary for important quantum information processing tasks such as error correction <cit.>, either via continuous monitoring and feedback <cit.> or autonomous protocols via coherent quantum feedback <cit.>.
We would like to thank Dan Gauthier, Luke Govia, Peter McMahon, Ioan Pop, Graham Rowlands, Guilhem Ribeill, Tatsuhiro Onodera, Logan Wright, Ryan Kaufman, Boris Mesits, Shyam Shankar, Florian Marquardt, and Ryotatsu Yanagimoto for useful discussions. This work is supported by AFOSR under Grant No. FA9550-20-1-0177 and the Army Research Office under Grant No. W911NF18-1-0144. Simulations in this paper were performed using the Princeton Research Computing resources at Princeton University, which is a consortium of groups led by the Princeton Institute for Computational Science and Engineering (PICSciE) and Office of Information Technology's Research Computing.
30mm1pt
§ MEASUREMENT CHAIN DESCRIPTION AND TABLE OF SYMBOLS
In this Appendix section we expand on details of the measurement chain considered described by Eq. (<ref>) that were omitted for brevity in the main text.
First, we define the interaction between the QS and described by . We engineer this interaction to be non-reciprocal by balancing a coherent and a dissipative hopping interaction with an appropriately-chosen phase <cit.>,
ℒ_cρ̂ = -i[i/2∑_mΓ_mâ_m^†b̂_m + h.c. ,ρ̂]+∑_mΓ_m𝒟[â_m + b̂_m]ρ̂.
Some features of this interaction are of note. Firstly, although the interaction has a dissipative component, it defines a coherent link, such that it allows for a non-separable joint quantum state of the QS and . Secondly, for simplicity we require that one mode of the QS couples to at most one mode of the , with strength Γ_m (although the coupling can also vanish if Γ_m = 0 for a given â_m). Finally, we note that Eq. (<ref>) describes a standard circulator <cit.>, and is therefore realized as a matter of course in cQED experiments.
For practical implementations in the cQED architecture, we consider for the a model of K coupled Kerr nonlinear modes b̂_k, k∈[1,K], furnished by capacitively-shunted Josephson junctions <cit.>. The nonlinear modes have frequencies {ω_k} and nonlinearity strengths {Λ_k}, with linear coupling {_jk} between modes i and j. Then, in an appropriate interaction picture at frequencies {ω_dk} (close to {ω_k} respectively), the linear Hamiltonian takes the form
ℋ̂_ = -∑_kΔ_k kk + ∑_jk_jk(jk + jk),
with detunings Δ_k = ω_dk-ω_k, while the nonlinear component of the Hamiltonian is given by
𝒩̂_ = - ∑_k Λ_k/2kkkk.
For simplicity, we assume Λ_k ≡Λ ∀ k, although we emphasize this is not necessary for .
Y>X
Finally, the conditional dynamics of under heterodyne monitoring are governed by the stochastic measurement superoperator 𝒮[√()b̂_k], given by:
𝒮[√()b̂_k] = √(/2)( b̂_k + b̂_k^† - b̂_k+b̂_k^†) dW_ℐ_k(t)
+√(/2)( -ib̂_k + ib̂_k^† - -ib̂_k+ib̂_k^†) dW_𝒬_k(t)
+𝒟[b̂_k],
where ô indicates the conditional expectation value of an arbitrary operator ô with respect to the measurement-conditioned quantum state, ô = tr{ô}. Here dW_ℐ_k, dW_𝒬_k are independent Wiener increments describing measurement noise, satisfying dW_ℐ_k=dW_𝒬_k=0, and dW_ℐ_kdW_𝒬_k' = 0, dW_ℐ_kdW_ℐ_k' = dW_𝒬_kdW_𝒬_k' = δ_k,k'dt. The Wiener increments are therefore related to the white noise terms introduced in Eq. (<ref>) via ξ_ℐ_k≡dW_ℐ_k/dt, ξ_𝒬_k≡dW_𝒬_k/dt. The classical readout noise terms in Eq. (<ref>) are also taken to obey white noise statistics.
On the other hand, the quantum noise contributions ξ^ qm_ℐ_k, ξ^ qm_𝒬_k introduced in Eq. (<ref>) generally do not obey white noise statistics. Quantum trajectories are conditioned on the heterodyne measurement record via the stochastic measurement superoperator 𝒮. Quantum noise contributions depend on these quantum trajectories; more concretely, for example, ξ_ℐ_k^ qm(t) = X̂_k - X̂_k, and thus describes the stochastic deviation of an observable conditioned on a specific quantum trajectory from the ensemble average. Hence the quantum noise contributions inherit nontrivial correlation and noise properties from quantum trajectories. Lastly, monitoring of modes opens them up to linear damping at the rate , described by the standard dissipative superoperator 𝒟[ô] = ôô^† - 1/2{ô^†ô,}.
Definitions of all parameters characterizing the general measurement chain are summarized in Table <ref>. Specific parameter values used to generate the various figures in the main text are summarized in Table <ref> in the SI.
§ TRUNCATED CUMULANTS APPROACH
For an arbitrary describing a quantum measurement chain with = M+K modes, we can define its associated characteristic function χ(w⃗,w⃗^*) <cit.>,
χ(w⃗,w⃗^*) = tr{∏_j=1^ e^iw_j^∗ô_j^†∏_l=1^ e^iw_lô_l}
where ô_j ∈{â_m,b̂_k} describes any mode of the complete quantum measurement chain, and w⃗ = (w_1,…,w_) are auxiliary variables. Then, normal-ordered cumulants of the density matrix are formally defined via the log of the characteristic function (sometimes called the generating function),
C_o_1^† p_1⋯ o_^† p_ o_1^q_1⋯ o_^q_≡. ∂^n_ ordlnχ(w⃗,w⃗^*) /∏_j∂ (i w_j^∗)^p_j∏_l∂ (i w_l)^q_l|_w⃗=w⃗^∗=0
where n_ ord = p_1 + … + p_J + q_1 + … + q_J defines the order of cumulants. Notwithstanding the complex formal definition, cumulants can be transparently related to more familiar operator expectation values: for a general operator ô_j, first-order cumulants C_o_j≡ô_j are simply expectation values, while second-order cumulants C_o_jo_l = ô_jô_l - ô_jô_l are their covariances. Expressions for higher-order cumulants become increasingly more involved, but can be systematically obtained, as discussed in the SI <cit.>.
§.§ Numerical scheme: Stochastic Truncated Equations of Motion
The crucial advantage of cumulants as descriptors of a quantum state is that specific multimode quantum states admit particularly efficient representations when expressed in terms of cumulants. In particular, a quantum system in a product of coherent states is described entirely by its nonzero first-order cumulants; all cumulants with n_ ord > 1 vanish (see SI <cit.> for derivations). Multimode quantum states that are defined entirely by their first and second-order cumulants admit Gaussian phase-space representations, and are thus labelled Gaussian states. States with nonzero cumulants of third or higher-order are thus by definition non-Gaussian states <cit.>.
Our numerical approach leverages this economy of representation by using normal-ordered cumulants as a set of dynamical variables for quantum modes of the measurement chain. For example, for specific systems such as coherently-driven linear bosonic systems initialized in a Gaussian state, cumulants of order n_ ord≤ 2 are sufficient, as such systems can be shown to persist in Gaussian states. If the considered in this paper were also linear, our model would satisfy this requirement. However, our interest is precisely in the role of the nonlinearity. In this case, the cumulants describing the measurement chain and their dependencies are shown schematically in Fig. <ref> for M=K=1. In particular, the nonlinearity generates states with nonzero cumulants of n_ ord> 2. In the most general case, the dynamical equations for these cumulants couple to all orders, forming an infinite hierarchy of equations that does not close.
To obtain a tractable numerical method, we consider an ansatz wherein the quantum state of the complete measurement is described entirely by cumulants up to a finite order n_ ord≤ n_ trunc; all cumulants of order n_ ord > n_ trunc are thus set to zero, truncating the hierarchy and yielding a closed set of equations for the retained nonzero cumulants. In this paper, we choose n_ trunc = 2 for a quantum measurement chain defined entirely by its first and second-order cumulants, although the truncation can similarly be carried out at higher order. The resulting Stochastic Truncated Equations Of Motion (STEOMs) form the basis of our numerical approach in this paper.
We emphasize that the truncated cumulants ansatz has some important differences when compared to standard linearization approaches. As seen in Fig. <ref>, note that both the nonlinearity and measurement terms couple second-order cumulants to first-order cumulants: this fact proves critical for the quantum state classification tasks we consider in this paper.
The utility of the STEOMs is naturally determined by the validity of the truncated cumulants ansatz. Since we are specifically interested in nonlinear quantum systems, which can generate higher-order cumulants in dynamics, one must ask when such an ansatz may hold. In the SI <cit.>, we carry out a detailed analysis of the regimes of validity of this framework, together with benchmarking against exact solutions and full SME integration. We find that the STEOMs provide a very good approximation of full SME integration provided the strength of nonlinearity of modes is weak relative to their loss rates; good agreement remains even up to Λ_k/γ∼ 0.1, which is around an order of magnitude larger than the strongest nonlinearity we consider in this paper.
§.§ Semi-analytic scheme: Nonlinear van Kampen expansion based of the Fokker-Planck equation
In addition to enabling the (S)TEOMs as a practical numerical method for multimode measurement chains, the truncated cumulants approach is also central to our main analytic tool: a description of quantum dynamics that is perturbative in the nonlinearity of the measurement chain. This analysis is enabled by the close connection between normal-ordered cumulants and the positive-P representation via the characteristic function. More precisely, the positive-P representation is simply the Fourier transform of the characteristic function <cit.>,
𝒫(𝒪⃗,𝒪⃗^†) = 1/π^2J∫∏_j=1^ d^2 w_j d^2 w_j^* e^-iw_j^* 𝒪_j^†e^-iw_j 𝒪_jχ(w⃗,w⃗^*)
The dynamics of the positive-P distribution follow a Fokker-Planck equation that is equivalent to Eq. (<ref>), while also employing normal-ordered cumulants as its natural dynamical variable set. We use this connection to first obtain an approximate Fokker-Planck equation for dynamics of the measurement chain in powers of the nonlinearity. This directly allows us to obtain semi-analytic solutions for the TEOMs, Eqs. (<ref>), (<ref>), from which classification metrics such as the Fisher's discriminant can be readily evaluated. Our analysis and its results, which are used throughout the main text, are detailed in the SI, Sec. <ref> and <ref>.
§ SCALING OF PERFORMANCE WITH NONLINEARITY FOR FIXED
In Sec. <ref>, we explored how parameters can be optimized to improve classification performance. However, a plain reading of the expression for the mean separation δμ, Eq. (<ref>), would appear to suggest that an even simpler strategy may be to simply increase the nonlinearity strength Λ, which monotonically scales δμ. However, in practice this dependence is much more complex: for a classification task defined by a given , varying Λ also varies via Eq. (<ref>). This changes the operating conditions of the and directly influences the mean separation and noise properties.
Therefore, to observe and understand the scaling of ||δμ|| with nonlinearity, we must keep fixed while varying the nonlinearity. Practically, we do so by using of varying nonlinearity to perform separate instances of Task I, characterized by different values of ·Γ_1/γ. For each instance, the achieved as a function of Λ is plotted in Fig. <ref>.
We immediately note that for decreasing ·Γ_1/γ, the required nonlinearity to reach the optimal increases, as required by the form of . The crosses indicate three specific , one for each instance of the considered task, with nonlinearity Λ such that is the same. Comparing these , we now clearly see that the optimal increases with increasing Λ. In the top panel of Fig. <ref>, we plot the mean separation amplitude ||δμ|| and the projected noise σ^2(l)_δμ for l=1 for each considered instance of Task I, under the NVK approximation and using integration of STEOMs. To lowest order in nonlinearity as captured by the NVK approximation, the projected noise properties remain the same for each instance. In contrast, the mean separation increases with nonlinearity, indicative of the sought-after scaling. Unchanged noise properties with increasing mean separation imply an increase in and hence classification accuracy.
Note that with increasing nonlinearity Λ, the agreement between the NVK approximation and exact integration of the STEOMs is reduced. In particular, the exact mean separation is lower than the NVK approximation, while the projected noise is larger. Both effects serve to reduce and hence limit the improvement in to below that predicted by the NVK approximation. These observations are an example of saturation that is higher-order in the nonlinearity and hence not captured by the NVK approximation, and are commonplace in strongly-driven nonlinear quantum systems <cit.>.
§ MODELS OF STANDARD LINEAR QUANTUM AMPLIFIERS
In this section we provide the models for both phase-preserving and phase-sensitive linear quantum amplifiers deployed for readout in cQED, and which we use as benchmarks in Sec. <ref> of the main text. Note that our description of the measurement chain, Eq. (<ref>), is general enough to include this standard paradigm, by neglecting the nonlinear contribution to the Hamiltonian, 𝒩̂_→ 0.
In particular, by taking ℋ̂_→ℋ̂_ PP where
ℋ̂_ PP = -∑_k Δ_kb̂_k^†b̂_k + G_ PP(-ib̂_1b̂_2 + h.c.),
we are able to describe QS readout using linear phase-preserving quantum amplifiers.
For phase-sensitive (PS) amplifiers on the other hand, we can set ℋ̂_→ℋ̂_ PS, where
ℋ̂_ PS = -∑_k Δ_kb̂_k^†b̂_k + G_ PS(-ib̂_1^2 + h.c.).
§ QUANTUM CHERNOFF BOUND: ADDITIONAL DETAILS
In this Appendix section we provide some additional details of the calculation of the Quantum Chernoff bound (QCB) used in Sec. <ref> of the main text to determine the optimal discrimination bounds for quantum state discrimination tasks.
To define the QCB as used in the main text, we first introduce the quantity <cit.>:
Q(^(l),^(p)) = - min_s ∈ [0,1]log tr[ (^(l))^s(^(p))^1-s]
We compute the QCB using only the quantum states of the modes that are intended to be monitored. For Task I, this is simply mode â_1 for the QS. We can then define the QCB for the QS ζ_ QS as:
ζ_ QS≡ Q( tr_â_2[^(1)_ QS], tr_â_2[^(2)_ QS] )
where tr_ô[] is used to trace out the sector corresponding to mode ô from the density matrix . For the general quantum processor, only mode b̂_1 is monitored, so the QCB ζ analogously becomes:
ζ≡ Q( tr_b̂_2[ ^(1)_], tr_b̂_2[^(2)_] )
Note that the and the PS amplifier are both single-mode devices for Task I, so the trace operation has no effect in those cases.
The expressions in Eqs. (<ref>), (<ref>) can be computed straightforwardly if the quantum states are Gaussian. These simplified expressions are derived in Ref. <cit.>; we employ these results in plotting the QCB in the main text, Fig. <ref>.
§ NON-CLASSICAL NOISE DUE TO THE ALONE: CLASSIFYING THERMAL STATES
In the main text, the considered Tasks I and II both require classification of states that exhibit some degree of quantum correlations, namely at least one state exhibits some degree of single or two-mode squeezing. As a result, the observed quantum (i.e. sub-vacuum) noise observed at the output will in general contain a contribution attributable to the quantum correlations of QS signals, and cannot be attributed to the alone.
In this Appendix section, we show that non-classical correlations can be generated solely by appropriate models, and they can be manipulated for use in classification in the same way as has been demonstrated for both Task I and II. To this end, we first introduce the modified Liouvillian for the QS:
= -i[ℋ̂^(l)_ QS,] +
∑_m κ_m (n^ th(l)_m+1)𝒟[â_m]
+ κ_m n^ th(l)_m 𝒟[â^†_m],
where in contrast to Eq. (<ref>) each QS mode is now coupled to a finite temperature bath. For simplicity, we further restrict ourselves to a single-mode QS being monitored by a single-mode , M=K=1; all QS parameters for mode m=2 are set to zero. Secondly, we turn off single-mode squeezing of the QS mode, G_1^(l) = 0 ∀ l, so that the QS states to be distinguished have no non-trivial quantum correlations. Instead, the states to be distinguished are characterized entirely by different thermal bath temperatures and hence occupation numbers, n^ th(l)_m. While not straightforward to generate in practice, this choice is ideal to test whether non-classical correlations can be generated by the alone, and whether they can still be manipulated by the . All parameters characterizing Task III are summarized in Table <ref>.
It suffices to consider only the NVK approximation and to then plot both the the mean separation of measured quadratures ||δμ|| and the projected noise in Fig. <ref>, as a function of nonlinearity. We note that the qualitative features are as observed in Fig. <ref>, and therefore reiterate the main results in the paper: the ability to compute correlations in situ, and the ability to control noise in measured quadratures differently from their mean value. Crucially, projected noise below vacuum in Fig. <ref>(b) can now solely be attributed to the action of the , as the QS states are purely thermal and do not demonstrate such non-classical correlations; in fact, they exhibit fluctuations above the vacuum value due to the coupling to a finite temperature bath. This emphasizes that the can provide the ability to not just manipulate existing quantum correlations, but also provide useful quantum correlations itself.
We emphasize that the model we consider here is based on coherently-driven Kerr oscillators, which by themselves are not ideal squeezers. However, our general approach applies to much more general models, where more non-classical correlations could be generated.
|
http://arxiv.org/abs/2409.03639v1 | 20240905155522 | Knots Inside Fractals | [
"Joshua Broden",
"Malors Espinosa",
"Noah Nazareth",
"Niko Voth"
] | math.GT | [
"math.GT",
"math.GN"
] |
[email protected], University of Waterloo
[email protected], Department of Mathematics, University of Toronto
[email protected], University of Waterloo
§ ABSTRACT
We prove that all knots can be embedded into the Menger Sponge fractal. We prove that all Pretzel knots can be embedded into the Sierpinski Tetrahedron. Then we compare the number of iterations of each of these fractals needed to produce a given knot as a mean to compare the complexity of the two fractals.
Mathematics Subject Classification: 57K10, 28A80
Word count: 4775
§ INTRODUCTION AND MOTIVATION
§.§ The questions of this paper
The Menger Sponge is a fractal obtained by iteratively subdividing a cube into 27 equal cubes and removing the central cube of each face and the interior central cube. In 1926, Menger proved the following
[Menger, 1926]
The Menger Sponge is universal for all compact one dimensional topological spaces.
The concept of universality in the previous theorem means that any compact topological space of dimension 1 has a homeomorphic copy as a subspace of the Menger Sponge. We refer the reader to <cit.> for a discussion on these topics.
A particular example of such space is the circle 𝕊^1. By inspecting the Menger Sponge, or its first iterations, we can see that the circle can be found as a subspace in many different ways, some rather complicated. In particular, some of them are actually knotted. This is what motivates our two questions:
Question 1: Can we find every knot as a subspace of the Menger Sponge? More particularly, can we find every knot in the 1-skeleton (that is, the edges of the boundary) of a finite iteration of the Menger Sponge?
By this we mean that we do not only want 𝕊^1 embedded in any which way but in a given way that is isotopic to a given knot and created by the edges of the boundary of a finite iteration. The reason for this choice is that, otherwise, we cannot guarantee points on it do not get removed at further stages of the iterative process.
Question 2: Can we find fractals, also created by iterative processes, where we can also find all the knots in their finite iterations? We would expect that more complicated fractals produce complicated knots faster than simpler fractals.
Notice that question 1 does not follow from universality, since all knots are homeomorphic to 𝕊^1. Thus, the embedding in the Menger Sponge might not preserve the knot type!
§.§ Results of this paper
The results we will prove on this paper, with regard to the two questions posed above are as follows. We completely settle the first question:
Any knot K can be embedded in a finite iteration of the Menger Sponge.
To prove the above result we will use the Arc Presentation (in grid form) of K and its associated connectivity graph. Concretely, we will find the connectivity graph on the face of a Menger Sponge and then prove that we can resolve each intersection, by pushing it into the sponge in such a way that we avoid all the holes. As a consequence of the proof of the above result we will deduce the well known fact that the isotopy classes of tame knots form a countable set.
To discuss question 2 we explore another fractal, the Sierpinski Tetrahedron. It is also created by an iterative process, which we describe below in section <ref>. The strategy used before, where we found the connectivity graph on a face and then push into the fractal, doesn't work as there seem to be no appropriate knot diagram suitable for this.
Instead, we introduce the combinatorial diagram of the tetrahedron, which is a two dimensional map of the fractal that allow us to look either by inspection or by structure of certain families of knots, in the map and transfer into the tetrahedron. We prove
Any pretzel knot K can be embedded in a finite iteration of the Sierpinski Tetrahedron.
However, we were unable to prove that in general any knot is inside the tetrahedron, but we do not see any real impediment to find them. Thus we leave open the following
Every knot K can be embedded into the Sierpinski Tetrahedron.
The Sierpinski Tetrahedron is simpler than the Menger Sponge. We use these two fractals to study the second question. To do so we define M(K) as the minimal number of iterations of the Menger Sponge needed for K to be embedded in it. For example, we have M(3_1) = 1. Similarly, whenever a knot K is embedded in a finite iteration of the Sierpinski Tetrahedron, we denote the minimal such iteration by S(K).
With this notation at hand, we prove
Let K be a knot, then we have
M(K) ≤ S(K),
supposing S(K) is defined.
We see the above result as a positive answer to the second question posed above: indeed, when using knots as probes to measure the complexity of a fractal, the simpler fractal take longer to include more complicated knots.
§.§ Previous Literature
Knots and graph theory have been related for a long time. Frequently, constructions of graph theory are used to compute and create knot invariants. We refer the reader to <cit.>, and the bibliography therein, for examples and the history of this connection.
A particular set of graphs which will be related to our combinatorial representation appears in <cit.>, and is denoted S_4^n in there. The main difference is that for us the vertices are resolved, that is, we know which edge goes above and which one goes below. This difference is fundamental since we use the knot diagrams found in these combinatorial representations to construct the actual knots we are looking for. To be more precise, if we were to find the connectivity graphs of knots in these S_4^n, we would still need to know that the vertices can be resolved consistently with the three dimensionality of the tetrahedron.
For instance, in our example <ref> below, we find the trefoil in an iteration of the Sierpinski Tetrahedron. If we change all crossings to a vertex we obtain the graph S_4^4 (see <cit.>). However, we need to know which line goes above and below to obtain a real knot diagram of the trefoil.
Finally, when one searches for knots and fractals, not much appears. Again, some connections exists aimed at the computation of knot invariants, as for example in <cit.>. Other directions, frequently aimed at artistic renditions and the search for beautiful patterns in them, deal with knots obtained by iteratively substituting crossings by smaller and smaller versions of the original knot. This leads to certain wild knots that resemble fractals.
In this paper, however, our point of view has been to use knots and their complexity to study how different fractals, created by iterative processes, manifest their complexity. Our guiding principle has been the idea that more complicated fractals should produce complicated knots faster. It seems this perspective has not been explored before.
§.§ Acknowledgements
The authors thank the Outreach Department of the Department of Mathematics of the University of Toronto for the organization of the mentorship programs and their support during this time. We also wish to thank the Department of Mathematics of the university of Toronto, as well as the organizing committee of the Canadian Undergraduate Mathematics Conference 2022, for their support in the presentation of this work in the CUMC 2022.
The three dimensional pictures that appear in this work have been created with Blender. The Arc Presentations in grid form of 3_1 and 4_1 were taken from The Knot Atlas.
§ REVIEW OF PREREQUISITES
§.§ The Cantor Set, Cantor Dust, Sierpinski Carpet, Sierpinski Tetrahedron and the Menger Sponge.
Let I = [0, 1]. By the Cantor set we mean the classical one third Cantor set. We denote it by C and recall the following classical characterization:
A number x∈ [0, 1] is a Cantor number, that is x∈ C, if and only if in its ternary non-ending representation there are no digit 1's appearing.
See <cit.>.
The product of two Cantor Sets, C× C, is called Cantor Dust. It can also be constructed in an iterative process as follows:
* Define C_0 = I× I.
* Divide C_0 into nine equal squares of length size 1/3. To obtain C_1, remove any point that is contained only in the central square or the middle central squares of the sides. Notice that C_1 is closed.
* Iterate this process for each remaining square of C_n to obtain C_n + 1.
We then have that
C× C = ⋂_n = 0^∞ C_n.
We also can characterize Cantor Dust in terms of ternary representations. More precisely we have
A point (x, y)∈ I× I is a Cantor Dust Point, that is (x, y)∈ C× C if and only if both x and y do not have a digit one in their corresponding ternary non-ending representations.
This follows immediately from proposition <ref>.
We now discuss is the Sierpinski Carpet (See <cit.>). It is also defined by an iterative process as follows:
* Define s_0 = I× I.
* Divide s_0 into 9 equal squares of side length 1/3. Remove the center open one.
* Repeat the previous step, removing the central square, to each one of the remaining squares of s_n to obtain s_n+1.
The Sierpinski Carpet is defined as
s = ⋂_n = 0^∞s_n.
Just as the Cantor set and the Cantor Dust, we have a characterization of s in terms of ternary representations. Concretely, we have
A point (x, y)∈ I× I is a Sierpinski Carpet point, that is (x, y)∈ s if and only if x and y do not have a digit one in the same position in their nonending ternary representation.
See <cit.>.
An immediate conclusion from propositions <ref> and <ref> is the following
The Cantor Dust is a subset of the Sierpinski Carpet.
The points of the Cantor Dust have no ones in the ternary representations of their coordinates, so they cannot share a digit one in the same position.
Now we discuss the Sierpinski Tetrahedron. It is also defined by an iterative process as follows:
* Define t_0 as a closed regular tetrahedron.
* On each corner, keep the homothetic closed tetrahedron with length size scaled by 1/2. Remove the rest to obtain t_1.
* Repeat the previous step on each remaining tetrahedron of the previous step to obtain t_n+1 from t_n.
The Sierpinski Tetrahedron is defined as
T = ⋂_n = 0^∞t_n.
The final fractal we will need is produced iteratively as follows:
* Define M_0 = I× I× I.
* Divide M_0 into 27 equal cubes of side-length size 1/3. To obtain M_1, remove any point that is only contained in the central cube of each face or the open central cube of M_0. Notice that M_1 is a closed set.
* Iterate this process for each remaining closed cube of M_n to obtain M_n + 1.
Then the Menger Sponge is defined as
M = ⋂_n = 0^∞M_n.
§.§ The Arc Presentation
As we have mentioned before, every knot has a knot diagram that fits our purposes perfectly. We will review it now.
An Arc Presentation in grid form is encoded in an ordered list of n unordered pairs
{a_1, b_1},..., {a_n, b_n},
sucht that a_1,..., a_n and b_1,..., b_n are permutations of 1, ..., n and
a_i ≠ b_i, i= 1,...., n.
We then construct the finite grid of points
(a, b), 1≤ a, b≤ n .
It consists of n^2 points with integer coordinates. To construct the presentation we do as follows:
Step 1: For each 1≤ i≤ n, draw the horizontal line segment joining the points (a_i, i) with (b_i, i).
Step 2: Each vertical line at j =1, 2, ..., n has exactly two points drawn on it. Draw the segment joining those two points for each j.
Step 3: To resolve the crossings, whenever there is one, always draw the vertical segment above the horizontal one.
For a given knot K, the minimal n for which the above construction can be carried out is called the Arc index of K. It is denoted by α(K).
An Arc Presentation of the Eight Knot 4_1 is
{3, 5}, {6, 4}, {5, 2}, {1, 3}, {2, 6}, {4, 1}.
The obtained knot diagram is
< g r a p h i c s >
It can be proven that α(4_1) = 6.
We have the following result:
Every knot admits an Arc Presentation in grid form. Furthermore, there is an algorithm to construct the Arc Presentation out of any other knot diagram of the knot.
For a proof that every knot has an Arc Presentation see <cit.>. For a discussion of an algorithm to produce grid diagrams, see <cit.>.
§.§ Pretzel Knots
Given a tuple of integers (q_1,....q_n) a Pretzel knot (or possibly a link) is the knot (or link) that corresponds to the diagram
< g r a p h i c s >
The sign of q_i corresponds to the orientation of the first crossing of the helix while the magnitude to the number of total crossings in the corresponding helix. That is, for q_i positive the orientation goes as shown while for q_i negative the crossings are flipped.
Notice that not every knot is a Pretzel knot. For example, the first prime knot that is not pretzel is 8_12. We refer the reader to <cit.> for a list of the prime knots up to nine crossings that are or are not Pretzel.
§ THE POSITIVE ANSWER FOR THE MENGER SPONGE
In this section we will prove that every knot is embedded in a finite iteration of the Menger Sponge. Let us be more precise. For every knot K, we will prove that there exists an n such that a closed path on the edges of the boundary of M_n is equivalent to K. In this way, since points in the edges are not removed in successive iterations, the knot is also found in the Menger Sponge itself, as desired.
What we do now is as follows: given a knot K, we will see that on a face of a large enough iteration of the Menger Sponge we can draw the connectivity graph of an Arc Presentation of K. Then, by suitably resolving the intersections, we will be able to push the presentation into the sponge in such a way that the vertical and horizontal segments lie in opposite faces and are joined by appropriate paths within the sponge (i.e. we avoid the holes).
The main fact we need for the pushing into the Menger Sponge to be possible is given by the following
Let (x, y) be a Cantor Dust point, then
(x, y, z) ∈ M
for all 0≤ z ≤ 1.
For each iteration of the Menger Sponge M_n we define L_n to be the subset of the points (x, y) on the front face of M_n such that (x, y, z)∈ M_n for 0≤ z≤ 1.
L_0 is the whole front face of the cube M_0. In the next stage, L_1 consists only of the four corner squares as shown in the figure:
< g r a p h i c s >
Every point not shaded black in the front face has some hole of M_1 behind it. Notice however that this hole might not share a face with the front face.
In the next stage, this process gets repeated iteratively. It is enough to see what happens in each of the cubes which have a face on the front since those behind are just translations orthogonal to the plane of the front face. Thus, if a hole were to appear in a farther cube there is also one on a closer cube to the front face.
Thus L_0, L_1, L_2,... is obtained by iteratively repeating the process shown in the above figure. We see this is the iterative process of the Cantor Dust of section <ref>. We conclude that the points of the front face that are Cantor Dust points are the ones that can be joined by a straight line to the back face through M.
With this lemma at hand we are ready to give the positive answer to question one from the introduction. Concretely, we have
All knots K can be embedded into a finite iteration of the Menger Sponge.
For a given knot K, proposition <ref> above shows that it admits an Arc Presentation
{a_1, b_1},..., {a_n, b_n},
where a_1,..., a_n and b_1,..., b_n are permutations of 1, 2, ... n.
Take an iteration, of the process that produces the Cantor set C, that has at least n endpoints. This is possible since the k^th iteration has 2^k+1 endpoints. Among those endpoints pick n, say, p_1 < p_2 < ...< p_n.
We now consider the Arc Presentation codified as
{p_a_1, p_b_1},..., {p_a_n, p_b_n},
where p_a_i is considered on the horizontal axis of the front face, while the p_b_j is considered on the vertical axis of the front face (with origin in the lower left corner).
The corresponding diagram has the same knot type. The only difference is that the distance between the vertical segments (or the horizontal segments) changed, since they were shifted to begin at certain endpoints, as opposed to be evenly distributed in [0, 1]. Notice that we have not introduced new intersections since we are preserving the order of both horizontal and vertical segments. We say we have shifted the grid diagram.
We now make an observation: if we pick a point on the front face (x_0, 0, 0) such that x_0 is a point of the Cantor set, then (x_0, y, 0) is entirely on the front face for 0≤ y ≤ 1. This follows from proposition <ref> because x_0 does not have a 1 in its ternary expansion, since it is an element of the Cantor set, and so (x_0, y) satisfies the criterion given by the lemma. The analogous result hold for horizontal segments (x, y_0, 1) if y_0 is an endpoint of the Cantor set.
Since the connectivity graph associated to the Arc Presentation given above is constructed by line segments included in segments as those discussed in the previous paragraph, we conclude that all vertical segments are included entirely on the front face. We also conclude that all the horizontal segments can be translated to the back face and they will be included entirely in it.
Our final task is to show that an endpoint of a vertical segment, in the front face, can be joined with the corresponding vertex of the horizontal segment on the back. This is possible, by lemma <ref>, because the coordinates of such points are of the form (x_0, y_0, 0) and (x_0, y_0, 1) with (x_0, y_0) in the Cantor Dust. Thus we can go within the sponge avoiding its holes!
With this we have been able to construct a knot inside the Menger Sponge. This knot is K because its knot diagram, when projected on the front face, precisely recovers the Arc Presentation we began with. This concludes the proof.
The above procedure done for the trefoil knot 3_1 is as follows. The first image on the left shows the coordinates (just labeled for the x axis) evenly spaced. The shifting described in the proof is shown in the second image. As we can see the vertical and horizontal lines have intersections in Cantor Dust points.
< g r a p h i c s >
The third image is the connectivity graph in the face of the Menger Sponge.
After pushing we get the following
image (with and without the Menger Sponge iteration):
< g r a p h i c s >
.
Doing the above process with the Arc Presentation of the eight knot 4_1 that we saw in example <ref> we get
< g r a p h i c s >
In the drawing on the left we have rotated the knot to show how it looks when seen from the back of the sponge.
A well known result that follows from the above is
The isotopy classes of tame knots form a countable set.
The number of finite iterations of the Menger Sponge is countable. The result now follows from theorem <ref>.
§ PARTIAL ANSWER FOR THE SIERPINSKI TETRAHEDRON
Contrary to what we are able to prove for the Menger Sponge, we do not know if all knots can be embedded into a finite iteration (or the final one) of the Sierpinski Tetrahedron.
Since we do not have a knot diagram that is particularly suited for the Sierpinski Tetrahedron, we have to develop a different way of searching through it. We now explain how we do this.
§.§ The Combinatorial Representation of the Tetrahedron
In order to make the arguments for our proofs, as well as the searches by inspection that we did, it is convenient to flatten the different iterations of the Sierpinski Tetrahedron.
We do so as shown in the following figure for the first and second iterations:
< g r a p h i c s >
The final figure now can be used as a map where we look for the planar diagrams of knots. Notice that the graphs that correspond to the successive iterations of the tetrahedron are not planar after the first two. This can be easily proven via Kuratowski's theorem.
Looking for 3_1 in the combinatorial representation we get
< g r a p h i c s >
This representation then becomes the following embedding of 3_1 in the tetrahedron.
< g r a p h i c s >
We emphasize that the combinatorial representation makes the search easier since we avoid the complex nature of the three dimensionality of the tetrahedron. Instead we have this interesting map of how to move in it. The authors were unable to find by inspection many knots in the tetrahedron until we had the combinatorial representation available.
We obtain in the combinatorial representation the following knot diagram of the eight knot 4_1.
< g r a p h i c s >
This becomes the following embedding in the tetrahedron.
< g r a p h i c s >
§.§ Pretzel Knots
To prove a given knot K is inside the tetrahedron it would be enough that some knot diagram of K is inside one big enough combinatorial representation of it. We were unable to prove this systematically for all knots but succeeded for Pretzel Knots.
The results will follow from a series of lemmas, each of which is straightforward from the corresponding picture so we will minimize the discussion to the essentials in their proofs.
Any isolated helix can be put inside a finite iteration of the tetrahedron.
The images below show that every helix can be found in a big enough iteration of the combinatorial representations.
< g r a p h i c s >
Notice that what distinguishes the positive crossing from the negative one is which strand comes on top on the first crossing. The one from A for the negative helix and the one from B from the positive helix. This is preserved in the drawings.
Finally, we draw the attention of the reader to the following point: when going from one of the crossings of one helix to the next, we produced the following type of crossing:
< g r a p h i c s >
These crossings are irrelevant, from the perspective of knot type, as they can be removed by a twist (i.e. a Reidemiester move of type I).
We are now ready to prove
All Pretzel Knots are inside a finite iteration of the Sierpinski Tetrahedron.
We have seen in lemma <ref> that individual helices can be put into big enough iterations of the Sierpinski Tetrahedron. We now show that any number of them with any given orientations can be successively joined.
We can simplify a Pretzel Knot diagram to the following simple form
< g r a p h i c s >
Each cross represents an helix with A_i, B_i, C_i, D_i representing the different endpoints of the helices as used in lemma <ref>.
We have proven that each helix is independently found. We now show we can join them as required. The following picture shows how (for concreteness we did the drawing for three helices):
< g r a p h i c s >
Notice that A_1 gets connected with B_k by a path going through the upper part of the combinatorial representation, while D_1 gets connected with C_k by one going through the bottom part of the combinatorial representation.
We emphasize that the above process is algorithmic. For example, the knot 8_15 is a Pretzel knot. Concretely, it is P(-3, 1, 2, 1, -3). To find this knot by inspection would be next to impossible, but using the method given in the above proofs and keeping track of the number of iterations needed for all helices to exists as required, we see we can put it inside the eleventh iteration.
As we have said before, not all knots are Pretzel Knots. Thus, our previous algorithm does not produce the inclusion of every knot in the Sierpinski Tetrahedron. However, the authors do not really see an impediment for other knots to be found there. Thus we put forward the following
All knots can be found a finite iteration of the Sierpinski Tetrahedron.
§ COMPARISON BETWEEN FRACTALS
Now that we have discussed separately the Menger Sponge and the Sierpinski Tetrahedron, let us compare the information each of them gives. For this purpose we give the following
Let K be a knot. Define M(K) as the minimum number of iterations needed to embed K into M_n. Similarly, define S(K) as the minimum number of iterations needed to embed K into S_n, supposing such n exists for K.
As we have mentioned in the introduction, the Menger Sponge is a more complicated object than the Sierpinski Tetrahedron. In this direction we will prove below that any knot appears faster in the Menger Sponge than it does in the Sierpinski Tetrahedron. Everything follows from the next fact.
The one-skeleton of S_n can be embedded into the one-skeleton of M_n for n = 0, 1, 2,...
Furthermore, such an embedding i_n can be constructed in such a way that it preserves knot type. That is, if K is a knot in the one-skeleton of S_n, then i_n(K) is isotopic to K (i.e. the embedding does not produce extra knotting).
We proceed by induction. For M_0 and S_0 the inclusion is as shown in the following picture:
< g r a p h i c s >
The four coloured vertices, out of the eight vertices of the cube M_0, correspond to those of the tetrahedron S_0 under the embedding.
For S_1 and M_1 we have the following picture:
< g r a p h i c s >
(Notice how this graph is isomorphic to S_4^2 as show in <cit.>.)
To produce S_2 inside M_2 what we do is to substitute each of the coloured S_0, inside the S_1, by the graph of S_1 above. This produces the S_2. We proceed in this way, inductively, to produce S_n+1 inside M_n+1: we substitute each of the S_0 in S_n+1 by S_1 above, with the condition that the colours match (i.e. the red corner is where the red cube goes, etc.). For the colours to match one has to swap green and blue in half the cubes, but this doesn't change the graph type.
Since the graphs we are producing are isomorphic to the skeletons of S_0, S_1, S_2, S_3,... because they replace the last S_0 by the corresponding subdivision into S_1, which is exactly how we get successive iterations of the Sierpinski Tetrahedron (i.e. subdividing the smallest tetrahedron's, into four smaller ones as explained in section <ref>).
Notice that these substitutions occur in disjoint regions, so they do not loop among themselves, whence proving that a knot K in S_n is isotypic to i_n(K). Finally, because to add the iteration S_1 we only needed one extra subdivision, we indeed have that S_n+1 lies in M_n+1. This concludes the proof.
We immediately get the following
Let K be a knot, then we have
M(K) ≤ S(K),
whenever S(K) is defined.
We can verify S(3_1)≥ 2. Indeed, the one-skeleton of S_1 is a planar graph that is already on the surface of a topological sphere (which is the original tetrahedron). Thus, any knot that is produced by a path on it has genus 0, and thus trivial. However, by inspection, we can see that M(3_1) = 1 (We leave to the interested reader the task of finding the trefoil in the one-skeleton of M_1). We thus see the inequality in the previous theorem can be sharp.
A comparison between the moment of appearance in the Menger Sponge and the Arc index is given by the next corollary.
Let K be a knot with Arc index α(K), then
M(K) ≤log_2(α(K)) + 1,
or equivalently,
α(K) ≥ 2^M(K) - 1.
This follows immediately from the proof of theorem <ref> above.
The trefoil 3_1 has α(3_1) = 5. The above inequality is M(3_1)≤ 3.322.... We know M(3_1) = 1.
Corollary <ref> compares two different indices constructed out of a knot. On the one hand, we have the Arc index which requires the planarity of the Arc Presentation. Thus, for its construction, it wastes the flexibility that the three dimensionality of a knot gives. On the other hand, M(K) takes advantage of it since the iterations of the Menger Sponge are three dimensional. What the inequality states, conceptually, is that indeed a lot of space is gained by this. However, this inequality was obtained by a very elementary embedding of a knot that still ignores a lot of paths within the iterations of the Menger Sponge (for example, M_2 is already very complicated).
Thus, it is believable that the inequality is always strict (even if we consider integer part of the right hand side), and even that the gap between both sides grows. It is thus an interesting question to know what is the real asymptotic behaviour of M(K).
§ FINAL QUESTION
We conclude this paper with a question which we decided to emphasize by isolating it to its own section. We have seen that all knots exists within the iterations of the Menger Sponge, and that certain ones lie in the Sierpinski Tetrahedron. As we mentioned, we suspect actually all knots should be there as well. This leads to the our question of which we have no answer: is there a fractal, produced by an iterative process, that admits certain types of knots but not others? If so, what does that say of the fractal itself or of the families it avoids. An example of such constructions would be quite impressive from our point of view.
acm
|
http://arxiv.org/abs/2409.03051v1 | 20240904194152 | Successive-Cancellation Flip Decoding of Polar Codes Under Fixed Channel-Production Rate | [
"Ilshat Sagitov",
"Charles Pillet",
"Pascal Giard"
] | cs.IT | [
"cs.IT",
"math.IT"
] |
IEEEexample:BSTcontrol
awgnAWGNadditive white Gaussian noise
snrSNRsignal-to-noise ratio
ferFERframe-error rate
scSCsuccessive-cancellation
sclSCLsuccessive-cancellation list
scfSCFsuccessive-cancellation flip
dscfDSCFdynamic scf
crcCRCcyclic-redundancy check
llrLLRlog-likelihood ratio
VECA: Reliable and Confidential Resource Clustering for Volunteer Edge-Cloud ComputingThis material is based upon work supported by the National Science Foundation (NSF) under Award Number: OAC-2232889. Any opinions, findings, and conclusions or recommendations expressed in this publication are those of the authors and do not necessarily reflect the views of the NSF.
Hemanth Sai Yeddulapalli1,
Mauro Lemus Alarcon1,
Upasana Roy1,
Roshan Lal Neupane1,
Durbek Gafurov1,
Motahare Mounesan2,
Saptarshi Debroy2,
Prasad Calyam1
1University of Missouri-Columbia, USA;
2City University of New York, USA
Email:
1{hygw7, lemusm, u.roy, neupaner, durbek.gafurov, calyamp}@missouri.edu;
[email protected];
[email protected]
=======================================================================================================================================================================================================================================================================================================================================================================================================
§ ABSTRACT
Polar codes are a class of error-correcting codes that provably achieve the capacity of practical channels under the low-complexity scf decoding algorithm.
However, the scf decoding algorithm has a variable execution time with a high (worst-case) decoding latency. This characteristic poses a challenge to the design of receivers that have to operate at fixed data rates. In this work, we propose a multi-threshold mechanism that restrains the delay of a scf decoder depending on the state of the buffer to avoid overflow. We show that the proposed mechanism provides better error-correction performance compared to a straightforward codeword-dropping mechanism at the cost of a small increase in complexity. In the region of interest for wireless communications, the proposed mechanism can prevent buffer overflow while operating with a fixed channel-production rate that is 1.125 times lower than the rate associated to a single decoding trial.
§ INTRODUCTION
Polar codes <cit.> are a type of linear error-correction codes that can achieve the channel capacity for practically relevant channels under sc decoding. However, at short to moderate block lengths, the sc algorithm provides an error-correction performance that is lacking for many practical applications. To address this, the scl decoding algorithm was proposed <cit.>. It provides great error-correction capability to the extent that polar codes were selected to protect the control channel in 3GPP's next-generation mobile-communication standard (5G), where scl serves as the error-correction performance baseline <cit.>.
However, the great error-correction performance of a scl decoder comes at the cost of high hardware implementation complexity and low energy efficiency <cit.>.
As an alternative to scl, the scf decoding algorithm was proposed <cit.>.
scf leads to an improved error-correction performance compared to sc, but still falls short of that of an scl decoder with a moderate list size. However, scf is more efficient than scl both in terms of computing resources and energy requirements <cit.>.
dscf decoding was proposed in <cit.>, where modifications to scf were made to improve error-correction performance. With these modifications, the error-correction performance approaches that of a scl decoder with moderate list sizes at the cost of a minor increase of complexity compared to scf.
Preliminary results from a hardware implementation indicate that dscf decoders maintain a higher energy efficiency compared to scl decoders <cit.>.
Regardless of the variant, scf-based decoders exhibit a variable execution time by nature, with a latency much higher than the average execution time.
Some efforts were made to reduce the variability of the execution time <cit.>, but this characteristic cannot be fully eliminated.
This poses a challenge to the realization of receivers that have to operate at fixed data rates.
To compensate for the variable execution time of the decoder, words arriving from transmitter with a fixed time interval have to be stored in a buffer. Without any additional mechanisms, a fixed-size buffer may overflow even under reasonable conditions, e.g., when the channel-production rate is only slightly slower than the average decoder throughput.
To avoid overflow, one of the straight-forward approaches would be to drop the received words when the buffer approaches overflow, i.e., applying a codeword-dropping mechanism. However, a codeword-dropping mechanism severely affects the error-correction performance.
§.§.§ Contributions
In this work, we present a system model for operation under fixed channel-production rate that notably includes a controller for a scf-based decoder.
We propose a multi-threshold mechanism for that controller that modifies the maximum number of decoding trials by tracking the state of the input buffer. A codeword-dropping mechanism is used for reference. We provide a methodology for threshold selection.
Simulation results are provided for various channel-production rates that are close to the rate associated to a single trial of scf decoding.
They show that both codeword-dropping and multi-threshold mechanisms can operate at fixed channel-production rates and prevent buffer overflow.
We show that the multi-threshold mechanism provides a better error-correction performance than the codeword-dropping approach.
§.§.§ Outline
The remainder of this paper is organized as follows. <ref> provides a short introduction to polar codes and their construction, and briefly describes the sc and scf decoding algorithms. In <ref>, the system model is presented and the functionalities of each block of the model are described, with the exception of the controller. The controller is explained in <ref> along with the details on the codeword-dropping mechanism used for reference as well as the proposed multi-threshold mechanisms. In <ref>, the threshold-selection methodology for both mechanisms is provided. In <ref>, simulation results, in terms of the buffer-size variation and the error-correction performance, are provided and discussed.
<ref> concludes this work.
§ BACKGROUND
§.§ Construction of Polar Codes
The central concept of polar codes is channel polarization. As the code length tends to infinity, bit locations either become completely reliable or completely unreliable. To construct a 𝒫(N,k) polar code, where N is the code length and k the number of information bits, the (N-k) least-reliable bits, called frozen bits, are set to predefined values, typically all zeros. The encoding is the linear transformation such that x=u× F^⊗ n, where x is the polar-encoded row vector, u is a row vector of length N that contains the k information bits in their predefined locations as well as the frozen-bit values, n=log_2 N, and F^⊗ n is the n^th Kronecker product ( ⊗) of the binary polar-code kernel F = [ [ 1 0; 1 1 ]]. The bit-location reliabilities depend on the channel type and conditions. In this work, the awgn channel is considered and the construction method used is that of Tal and Vardy <cit.>.
§.§ Successive-Cancellation Decoding
sc decoding is a natural way of decoding of polar codes as was introduced in the seminal paper <cit.>. The received vector (channel llr), denoted by {α_ch(0), … , α_ch(N-1)}, is used to estimate the bits of the polar-encoded word starting from the first bit û_0 <cit.>. The following bits {û_1,…, û_N-1} are estimated sequentially, i.e., in successive manner, by the same vector of channel llr and the estimations of the previous bits. Each information bit û_i is estimated by taking a hard decision on the corresponding decision llr, denoted by α_dec(i). Frozen bits are known to the decoder and are thus directly set to their corresponding value, typically zero.
§.§ SC-Flip Based Decoding
The scf decoding algorithm is introduced in <cit.>, where the authors observed that if the first erroneously-estimated bit could be detected and corrected before resuming sc decoding, the error-correction capability of the decoder would be significantly improved. In order to detect the decoding failure of the codeword, the information bits are concatenated with a r-bit crc being passed through the polar encoder. The code rate of the polar code is thus increased to R=(k+r)/N.
If the crc check indicates decoding failure at the end of the initial sc decoding pass, a list of bit-flipping candidates, denoted by ℒ_flip, is constructed. In the original scf decoding algorithm, ℒ_flip stores the bit indices that correspond to the non-frozen bits with the smallest absolute values α_dec.
A more accurate metric for constructing ℒ_flip is introduced in <cit.>. This metric takes in account the successive nature of the decoder, and its calculation for each non-frozen bit with an index i after the initial sc attempt is defined as:
M_i = |α_dec(i)| + 1/c·∑_j ≤ i
j∈𝒜ln( 1 + e^( -c · |α_dec(j)|)) ,
where ln(·) denotes the natural logarithm, 𝒜 is the set of non-frozen bit indices, and c is a constant optimized experimentally by way of simulation. The value c will vary in the range 0.0 < c≤ 1.0 depending on polar code parameters and channel conditions.
Regardless of the type of metric, for each new decoding trial the next bit index of ℒ_flip is selected and when this bit is estimated, the opposite decision is made, i.e., the estimated bit is flipped. Decoding then resumes until the last bit, following the sc algorithm. New scf trials are ran until the crc matches or until the maximum number of trials T_max is reached.
The maximum number of trials T_max∈ℕ^+ defines the decoding latency, and 1≤ T_max≤( k+r). Setting T_max to 1 renders the scf decoder equivalent to an sc decoder. If after T_max trials the crc check fails, decoding is stopped and the word is considered undecodable.
We note that in <cit.> the authors adapt the metric of (<ref>) to allow multiple bit flips per trial and name the resulting algorithm dscf decoding. Preliminary results of a hardware implementation of dscf decoding <cit.> show that a decoder that flips 2
bits per trial is up to 5 times more area-efficient compared to state-of-the-art scl decoders while providing the same error-correction performance.
However, this comes at the cost of 11.5% lower throughput compared to scl. Without loss of generality, in this work, we do not apply multiple bit flips per trial. Thus, in the remainder of this work, the scf decoder with metric calculation of (<ref>) is applied.
§.§ Execution time of scf-based Decoders
scf-based decoding algorithms have a variable execution time by nature. In this work, we assume that the latency of processing one decoding word under scf is an integer multiple of the execution time of one sc decoding pass. By denoting the latency of an sc pass as τ_sc, the execution time of one word under scf decoding is calculated as:
τ_dec = t_req·τ_sc ,
where t_req is the required number of trials for a given codeword and 1≤ t_req≤ T_max, i.e., the required number of decoding trials either corresponds to the number of trials until the crc matches or to the maximum number of allowed trials.
§ SYSTEM MODEL
In this work, we use a system model where the communication chain is simplified such that parts of the transmitter, the channel and the detector, are lumped into one block denoted as the channel. <ref> illustrates this simplified model, where the channel acts as a data generator to the remainder of the model that is the central part of this work, i.e., the buffer, the controller, and the decoder.
In the remainder of this section, we describe the general functionality of each block of the system model, with the exception of the controller that is described at greater length in its dedicated <ref>.
§.§ Channel
The channel block in our model acts as the generator that delivers incoming data blocks (words) to the decoder. The words are generated at a fixed time interval τ_ch and stored in the buffer. The direction of the data write operation is illustrated by the arrow that is denoted by w in <ref>. We define the channel-production interval τ_ch as follows:
τ_ch=υ_pr·τ_sc ,
where υ_pr∈ℝ^+ is an additional coefficient that we call the production coefficient and τ_sc corresponds to the latency of one sc decoding trial.
The channel-production interval τ_ch cannot be lower than the latency of a single trial τ_sc , thus υ_pr≥ 1.
An increase of the production coefficient corresponds to an increase the data-production interval by the channel.
For convenience, throughout the paper, we often use the term channel-production rate, which corresponds to the inverse of the channel-production interval τ_ch .
§.§ Buffer
The buffer is used as memory to store words coming from the channel. The buffer is divided in slots, where each slot can accommodate one word. In this work, we consider a circular buffer.
We denote the size of the buffer by the total number of slots B_tot, and the number of occupied slots is denoted by B_occ. One received word takes one slot in the buffer. The number of occupied slots B_occ is provided to the controller block.
§.§ Decoder
The decoder block reads the received words from the buffer. The reading event is illustrated by the arrow denoted by r in <ref>. The decoder implements the scf decoding algorithm, where, without loss of generality, the bit-flipping candidates are defined according to (<ref>). The decoder operates with a maximum number of trials T_max. However, the behavior of the block can change if the controller asserts its C_stop signal. When C_stop is True, the decoder immediately ceases the current decoding attempt, declares decoding failure, and starts processing the next word. Reading it from the buffer releases a memory slot, moving away from overflow. While C_stop is False, the decoder maintains its usual behavior, i.e., attempts up to T_max trials. The decoder provides the current number of fully applied trials t_cur to the controller.
§ CONTROL MECHANISMS
As illustrated in <ref>, the controller is a key ingredient to our model. It regulates the decoder based on the number of available memory slots in the buffer. It aims to avoid buffer overflow while maximizing the error-correction performance.
During processing, the buffer has two critical states: buffer underflow and buffer overflow. Buffer underflow can easily be avoided, e.g., by suspending the decoder until the buffer is further filled with data. Furthermore, buffer underflow does not affect the error-correction performance. Buffer overflow is more challenging to deal with as it essentially requires to control the worst-case execution time of the decoder thus affecting the error-correction performance. Therefore, our work focuses on control mechanisms that cope with buffer overflow.
In our model, the controller regulates the operation of the decoder by way of thresholds: as the number of occupied slots in the buffer gets closer to overflow, pre-defined thresholds are violated and the decoding delay is gradually restricted by lowering the maximum number of trials of the scf decoder.
In this work, the controller can implement two different mechanisms: codeword dropping or multi-threshold. <ref> illustrates the Gen_Ctrl_Sigs algorithm that generates the control signals. This algorithm covers both mechanisms that are considered. The inputs of the Gen_Ctrl_Sigs algorithm are the sets of buffer-size and trial-decoding thresholds, denoted by ℬ and 𝒯, respectively. The sets consist of multiple thresholds, where ℬ = {B_1,B_2, …, B_P} and 𝒯 = {T_1,T_2,…, T_P} with P being the number of thresholds in each set. The set of thresholds ℬ is sorted in descending order while the set 𝒯 is sorted in ascending order. Once sorted, each threshold from ℬ corresponds to the threshold of 𝒯 located at the same position, i.e., they form a threshold pair according to their index.
As illustrated by <ref>, the states of the buffer and of the decoder are obtained through the number of occupied buffer slots B_occ and the current number of decoding trials t_cur. The buffer state B_occ is compared to the elements B_i∈ℬ. When the first violation is detected, the decoder state t_cur is compared to the threshold T_i∈𝒯 of the corresponding index i. If a violation is detected, the controller stops the decoder.
§.§ Codeword-Dropping Mechanism
The codeword-dropping mechanism follows <ref> with the single threshold pair {B_1,T_1}. Note that the threshold-violation check loop is executed only once as P=1.
As will be described in <ref>, the codeword-dropping mechanism only comes into play when the buffer is very close to overflow, i.e., B_1 is almost equal to B_tot. The trial-decoding threshold is set to T_1=0. This way, when B_occ>B_1, the decoder is immediately stopped regardless of how many trials have been attempted, i.e., the current codeword is dropped.
§.§ Multi-Threshold Mechanism
The multi-threshold mechanism follows <ref> with sets of multiple thresholds. For simplicity, in this work, we propose to use sets composed of P=3 thresholds. The buffer-size thresholds satisfy B_3 < B_2 < B_1 < B_tot while the trial-decoding thresholds are T_1 < T_2 < T_3 ≤ T_max. The threshold pair {B_1, T_1} is the same as codeword dropping.
To obtain the best performance and tradeoff, the number of buffer-size thresholds and their values are expected to vary depending on code length and rate, channel condition, and T_max. The general goal remains the same: evenly set the buffer-size thresholds throughout the buffer to achieve gradual control. We propose to define the trial-decoding thresholds following the methodology provided in <ref>.
§ THRESHOLD-SELECTION METHODOLOGY
As mentioned in the previous section, the threshold T_1=0, and the buffer-size thresholds B_1,B_2,…,B_P are evenly distributed across the buffer.
Setting P to 3, only the thresholds T_2 and T_3 need to be derived.
The proposed threshold-selection methodology requires obtaining the balanced number of trials of scf decoding from offline simulations at the channel snr of interest and selecting the targeted production coefficient υ_pr.
The key metric for determining the balanced number of trials T_bal is the average number of decoding trials T_av derived from offline simulations.
Experiments have shown that our system model can operate with a fixed channel-production rate without buffer overflow if the average number of trials T_av of the decoder, restricted by T_max alone, does not exceed the production coefficient υ_pr. To establish a good tradeoff between error-correction performance and buffer-overflow prevention, we start by defining the balanced number of trials as T_bal=max(T_max)|T_av<υ_pr.
Simulations of the scf decoder based on the setup described in <ref> are performed for the ideal case, i.e., T_max is the only decoding latency restriction.
<ref> shows examples of the average number of trials T_av for various T_max values. These results were obtained by running 10^6 random words for each T_max value considered and for a channel snr of 2.25 dB.
To illustrate, consider the two production coefficients υ_pr = 1.091 and υ_pr = 1.125 represented by the horizontal lines in <ref>, highlighted in solid green and dashed red, respectively.
In this example, the balanced number of trials is T_bal=2 for υ_pr = 1.091 whereas it is of 4 for υ_pr=1.125.
For our proposed multi-threshold mechanism, we suggest to set thresholds T_2 and T_3 as T_bal and T_bal+1, respectively.
As mentioned in <ref>, the thresholds B_2 and B_3 are set to the middle and the head slots of the buffer. This way, the multi-threshold mechanism cuts off the high decoding trials exceeding T_3 once the buffer is filled up to B_3, and further restricts decoding to T_2 trials when the buffer is half full. As a further protection against buffer overflow, codeword dropping is activated when the buffer is full. The proposed methodology is applicable to other configurations, i.e., different N and k of the polar code, channel snr, and T_max.
We highlight that applying data rates that are too high, i.e., too low υ_pr, will put too much pressure on the multi-threshold mechanism resulting in the equivalent of the codeword-dropping mechanism. Therefore, when possible, we recommend to select a data rate that results in a T_bal≥ 2.
On the other hand, if too low data rates are applied to the extent that T_bal=T_max, the multi-threshold mechanism is not necessary to avoid buffer overflow; T_2=T_3=…=T_P=T_max.
§ SIMULATION RESULTS
We start this section with a description of our simulation methodology and continue by detailing the simulation algorithm.
The simulation results are then presented and discussed.
§.§ Methodology
The simulation of the system model consists in a series of iterations with each iteration being a single unit of time. In order to represent the channel data-production interval with the production coefficient υ_pr, channel and decoder blocks need to perform their operations at particular loop iterations.
For simplicity, we normalize the time by a latency equivalent to a single sc pass. For example, with a production coefficient υ_pr=1.125 and a decoding latency τ_sc of 8 units, the channel generates data every τ_ch=υ_pr·τ_sc=9 time units (<ref>).
Before simulating our system model, we run simulations of the scf decoder within the ideal system, i.e., with the initial maximum number of trials as the only decoding latency restriction.
To illustrate the functionality of our proposed algorithm, the random blocks of data were encoded with a 𝒫(1024,512) polar code and a crc of r=16 bits with polynomial z^16+z^15+z^2+1 was used. The polar encoding algorithm is constructed for an approximate design snr of 2.365 dB. Binary phase-shift keying modulation is used over an awgn channel. Simulations were ran for S=10^6 random codewords at channel snr ranging from 1.75 to 2.5 dB. The scf decoding algorithm with a maximum number of trials T_max=11 was used, where the bit-flipping candidates are defined according to the metric of (<ref>). In <cit.>, the authors suggest adapting the constant c of the metric at each snr. Regardless, we use c=0.3 across all snr values to simplify analysis.
For each decoding word, the required number of trials is stored in the list ψ_req.
The frame-error flag, indicating whether the word was successfully decoded or not, is stored in the list of frame-error flags E. At the end of simulations, the lists ψ_req and E are saved and used for further analysis of the system model.
Then simulations are performed for the system model of <ref>, using the results obtained from the simulation of the ideal system. To illustrate our algorithm, the total size of the buffer is fixed to B_tot=100 memory slots. Both codeword-dropping and multi-threshold mechanisms are simulated. For the codeword-dropping mechanism, the thresholds B_1=99 and T_1=0 are set. For the multi-threshold mechanism, the set of the buffer-size thresholds is ℬ={99, 50, 10}. The set of corresponding trial-decoding thresholds is 𝒯={0, T_bal, T_bal+1} and varies depending on the specific channel snr and υ_pr. For the applied configurations, the sets of three thresholds result in the optimal tradeoff between complexity and error-correction performance.
In this work, we illustrate with production coefficients that are close to the bound of 1, i.e., υ_pr∈{1.091, 1.11, 1.125, 1.15, 1.2} are considered. We focus on υ_pr that are close to the bound to show that the mechanism maintains a fer near 10^-2 without running into a buffer overflow even with very aggressive channel-production rates.
For all simulations, the resulting metrics are analyzed when the buffer is filled with substantial amount of words, i.e., when the system is at the steady-state, such that the comparison is fair for different values of υ_pr and snr.
§.§ Simulation Algorithm
The simulation algorithm of our system model is summarized in <ref>. The algorithm contains a loop, where functions corresponding to each block of the system model are called at each iteration. Each iteration of the loop corresponds to one time unit, that is used as reference to all processes in the system. The function generating the channel data is denoted by Gen_Data, the function generating the controller signals is Gen_Ctrl_Sigs, and the decoder function is Decode.
The functions of channel and decoder are passthrough functions with a behavior that depends on the state of their internal counters. Gen_Data will add a word to the buffer after every τ_ch iteration loops. Decode will read the word from the buffer at every τ_dec=t_req·τ_sc iteration loops (<ref>), where the required number of trials t_req for each decoding word s is read from the list ψ_req.
The word counter s is incremented when C_stop is raised, i.e., when either one of the thresholds is violated or when the decoder completed decoding according to t_req. At the same condition, the final current number of trials is saved to the list of resulting number of trials ψ_res. Simulation ends when all S decoding words are processed. The number of occupied buffer slots is stored in the list χ_occ at every loop iteration.
At the end of simulation, Calc_fer_Impact calculates the binary list of resulting frame-error flags E^' indicating which words were successfully decoded and which were not. This list differs from the list of original frame-error flags E obtained from the simulation of the ideal system.
A decoding error is declared when the ideal system failed to decode the word or when there is an early decoder stoppage (ψ_res(s)<ψ_req(s)).
§.§ State of the Buffer Over the Course of Simulation
<ref> shows the number of used buffer slots over the course of simulation, where the words come from the channel at a fixed rate that corresponds to a production coefficient υ_pr=1.125 and the channel snr is of 2.25 dB. The codeword-dropping mechanism is depicted in blue while the multi-threshold is in red. From the figure, we can see that both mechanisms effectively prevent buffer overflow, i.e., buffer occupied slots never reach B_tot=100 slots.
§.§ Error-correction performance
<ref> shows the fer of the model, where the controller implements the codeword-dropping (blue) and multi-threshold mechanisms (red).
Simulations are for various snr, but for a fixed channel-production rate corresponding to υ_pr=1.125. As such, the channel-production interval is close to the delay of a single scf trial.
The black curve is the ideal performance provided for reference.
The figure shows that, at low channel snr, both considered control mechanisms experience a degradation of the error-correction performance compared to the ideal case. This gap is reduced as the channel improves; the loss is virtually nonexistent at a snr of 2.375 dB. Across the range, we see that the multi-threshold mechanism either matches or outperforms the codeword-dropping mechanism. At the point of interest for wireless communication, a fer of 10^-2 is achieved by the scf decoder within the ideal system at approximately 2.25 dB. The codeword-dropping and the multi-threshold mechanisms show performance losses of approximately 0.1 dB and 0.0625 dB respectively.
<ref> also shows the fer of the model for both mechanisms, but for a fixed snr of 2.25 dB and various υ_pr .
scf with T_max=11 is applied for both mechanisms.
Although it cannot be sustained, the ideal performance for various T_max values are shown as horizontal lines for reference. From the figure, it can be seen that at lower production coefficients both codeword-dropping and multi-threshold mechanisms have a loss in error-correction performance compared to the ideal case with T_max=11. At υ_pr=1.091, the fer is even worse than the ideal case with T_max=3. The gap reduces as the production coefficient increases. The multi-threshold mechanisms fares better than codeword dropping across the whole range. At υ_pr=1.125, the fer of the multi-threshold mechanism reaches the ideal case for T_max=5. Both mechanisms match the ideal fer for T_max=11 at υ_pr=1.2.
§ CONCLUSION
In this work, we proposed a control algorithm that adjusts the execution time of a scf-based decoder in realtime, allowing it to sustain operation without buffer overflow with a channel that produces data with a fixed rate that approaches that of a single decoding trial. By using multiple thresholds, the proposed mechanism is shown to allow an scf-based decoder to operate in a system with a fixed channel-production rate that is 1.125 times lower than the rate associated to a single decoding trial while preventing buffer overflow. In the region of interest for wireless communications, this at the cost of a small error-correction performance of approximately 0.0625 dB in comparison to the ideal but unsustainable case.
§ ACKNOWLEDGEMENT
The authors thank Tannaz Kalatian for helpful discussions. Work supported by NSERC Discovery Grant #651824.
IEEEtran
|
http://arxiv.org/abs/2409.02534v1 | 20240904085226 | Bulk Reconstruction and Gauge Invariance | [
"Sotaro Sugishita",
"Seiji Terashima"
] | hep-th | [
"hep-th",
"gr-qc"
] |
StyleTokenizer: Defining Image Style by a Single Instance for Controlling Diffusion Models
Wen LiMuyuan FangCheng Zou Biao Gong Ruobing Zheng
Meng Wang Jingdong Chen Ming Yang
============================================================================================
fancy
§ ABSTRACT
In this paper, we discuss the concept of bulk reconstruction, which involves mapping bulk operators into CFT operators to understand the emergence of spacetime and gravity. We argue that the N=∞ approximation fails to capture crucial aspects of gravity, as it does not respect gauge invariance and lacks direct connections between energy and boundary metrics. Key concepts such as entanglement wedge reconstruction and holographic error correction codes, which are based on the N=∞ theory, may be incorrect or require significant revision when finite N effects are considered. We present explicit examples demonstrating discrepancies in bulk reconstructions and suggest that a gauge-invariant approach is necessary for an accurate understanding.
empty
§ INTRODUCTION AND SUMMARY
The AdS/CFT correspondence <cit.> can be regarded as a definition of quantum gravity.
To achieve this, we need to consider a finite N (holographic) CFT because the N=∞ limit of CFT is not a well-defined conformal theory.
Then we will study a large N (asymptotic) expansion of physical quantities to interpret the CFT as a bulk gravitational theory.
For such an interpretation,
it is important to understand how to represent bulk operators as CFT operators <cit.>.
This procedure is called the bulk reconstruction.
In other words, it provides an answer to the question of how spacetime and gravity emerge.
The holographic CFT_d for
N=∞ is expected to be
the d-dimensional generalized free field (GFF) <cit.>,
which is just the d+1 dimensional
free theory where the radial direction is regarded as an internal space of the Kaluza-Klein theory.
This d+1 dimensional free theory
corresponds to the free bulk theory.
Here, we emphasize that this free gravity theory does not impose the “Gauss law” constraints of the gauge invariance (diffeomorphism invariance) of the gravity theory and the energy is not related to the metric of the asymptotic boundary <cit.>.
Thus, the N=∞ approximation misses the most important aspect of gravity and is crucially different from the finite N theory.
On the other hand, some important notions in the bulk reconstruction
are based on N=∞ theory, such as the entanglement wedge reconstruction, subregion duality, and the holographic error correction code.
Furthermore, these concepts are expected to be valid even when we include 1/N corrections. At least, it is generally expected that there are no significant differences between the N=∞ theory and the finite N theory.
These notions are based on the importance of considering subregions of spacetime; for example, the horizon and black hole are related to the concept of subregions.
However, given the crucial differences between the N=∞ theory and the finite N theory discussed above,
a serious reconsideration of these expectations is required.
Furthermore, it has been claimed that these notions in bulk reconstruction are shown to be invalid or significantly modified, based on explicit computations in the AdS/CFT correspondence, as demonstrated in <cit.>.
The differences between the N=∞ theory and the finite N theory are crucial for these results.
In this paper,
we argue that
the reason why such widely accepted properties of bulk reconstruction should be either incorrect or significantly modified is, indeed, that they do not respect the gauge invariance in the interacting gravitational theory corresponding to the finite N theory.
We will consider the simplest case: the (global) vacuum state which corresponds to the pure global AdS space, and take a ball-shaped subregion whose entanglement wedge corresponds to the AdS-Rindler wedge. Even for this simplest case, the discussion is non-trivial, and we have the differences between the N=∞ theory and the finite N theory.
Indeed, we will provide explicit examples demonstrating that global and AdS-Rindler bulk reconstructions yield different operators, and that the original entanglement wedge reconstruction is not valid.
These also demonstrate that
the subregion complementarity <cit.>,
rather than the holographic error correction code, is realized to solve the radial locality “paradox” in <cit.>.
Then, we will explain that this property is naturally understood within a gauge-invariant bulk description.
It is important to note that there are no gauge invariant local operators in the gravitational theory,
and the gravitational dressing is needed to make naive local operators gauge invariant <cit.>.
§.§ Summary of our claims
Below, we summarize our claims on the several “established” properties of AdS/CFT.
The explanation of details of our claims
will be described in later sections.
Quantum error correction codes in AdS/CFT
The bulk reconstruction claims that bulk operators (which are low-energy operators) can be reconstructed by CFT operators in multiple ways.
These CFT operators are distinct in the entire CFT Hilbert space including high-energy states.
However,
the holographic quantum error correction (QEC) proposal claims that these different operators are the same in the low-energy subspace (code subspace) as stated in <cit.>.
If we take N=∞ where the CFT is the generalized free field (GFF) theory, which is equivalent to the free bulk theory,
it can be regarded as QEC as proposed in <cit.>.
However, the code subspace is equivalent to the entire Hilbert space space of GFF because the GFF only contains the low-energy modes,
i.e. it is trivial as a quantum error correction code.[
One might think that for N=∞ the CFT Hilbert space will be decomposed to GFFs around
the semi-classical backgrounds, like the vacuum or black holes, and
the GFF for the vacuum is the code subspace.
However, for N=∞ the GFF is completely decoupled from
other sectors.
]
Thus, for the holographic QEC proposal, it is crucial to see whether this structure still holds including 1/N corrections.
Indeed, if we include the leading order correction in the 1/N expansion to the GFF,
which is the non-vanishing leading order of the three-point functions in CFT,
bulk fields constructed from CFT operators on different subregions
becomes different in code subspace in general.
Therefore, the quantum error correction code proposal works only
for N=∞ rather trivially.
For finite N, the relevant question for the holographic QEC proposal is whether
the two bulk operators, which are equivalent for N=∞, with 1/N corrections can be equivalent in the code (low energy) subspace
with the interactions in the bulk theory.
The answer is no.
(If some properties of the free bulk theory are completely different from those of the interacting bulk theory, the free theory is not useful for them.)
Entanglement wedge reconstruction
Let us consider a bulk local operator supported on the intersection of the entanglement wedges of two different CFT subregions A and B for N=∞.
We can reconstruct the bulk operator as the CFT operators _A and _B (which are supported on A and B respectively) for N=∞.
We now consider 1/N corrections of them.
Then we can show that
these two CFT operators _A and _B cannot be the same at the leading order correction in 1/N expansion in general,
even around the vacuum which corresponds to the pure AdS spacetime.
Thus, the entanglement wedge reconstruction <cit.> which claims _A = _B does not work including 1/N corrections.
On the other hand, a weaker version of the entanglement wedge reconstruction, stated in <cit.>, may be valid.
This claims that parts of (smeared) bulk local operators with 1/N corrections in the entanglement wedge of a CFT subregion can be reconstructed from the CFT operators supported on the subregion, while the ones outside the entanglement wedge cannot be reconstructed. The set of reconstructable bulk operators is smaller than that of naive local operators in the usual entanglement wedge reconstruction.
Subregion duality from relative entropy
The relative entropies in the bulk and the CFT for the subregion may be the same up to O(1/N) <cit.> and this seems to lead to the subregion duality and the entanglement wedge reconstruction as in <cit.>.
However, this should not be valid at O(1/N)[
This is the first non-trivial
leading order of the three-point function
which is needed for the results of <cit.>.
]
as shown in <cit.>.
This may be because in <cit.> it was assumed that the Hilbert space for the bulk gravitational theory is factorized for the subregions. The assumption of the factorization is (approximately) valid only if we fix the gauge.
The correct understanding of the bulk relative entropy in <cit.> will be one using the algebraic entanglement entropy
where the bulk subalgebra is generated by the gauge invariant operators supported on the entanglement wedge,
and it leads to a subregion duality and an entanglement wedge reconstruction in the sense discussed in <cit.> <cit.>.
We claim that subregion duality and the entanglement wedge reconstruction based on the algebraic entanglement entropy are completely different from the usual ones
although it appears to be assumed that there is not much significant difference between them in <cit.> <cit.>.
In particular, the weaker version of the entanglement wedge reconstruction <cit.> is related to the gauge invariant operators on the bulk region as we argue.[
Including the 1/N corrections, the bulk local operators need to attach the gravitational dressing <cit.> which is the critical reason for the non-factorization of the Hilbert space.
This gravitational dressing is related to the weaker version of the entanglement wedge reconstruction.
]
The main points that have been stated above are as follows:
There are gaps between the GFF and finite N case or 1/N corrected one concerning consideration of subregions.
Note that the GFF and the free bulk theory are equivalent,
and thus this may be regarded as the discrepancy between the bulk effective theory and quantum gravity (=CFT).
§ BULK RECONSTRUCTION WITH GAUGE FIXING
In this section, we review the global and AdS-Rindler (HKLL) bulk reconstruction <cit.> by taking care of N=∞ or finite N. The distinction between N=∞ and finite N is essential to see the difference between the global and AdS-Rindler reconstruction.
The discussion will be done in a gauge-fixed way as in the standard argument of the bulk reconstruction <cit.>.
The difference between the two reconstructions will be obvious in a gauge-invariant argument as we will see in section <ref>.
§.§ Global AdS
We consider the d-dimensional holographic CFT on the cylinder with coordinates (τ, Ω). The corresponding bulk is the (d+1)-dimensional global AdS and we take the bulk metric as
ds^2=1/cos^2ρ(-dτ^2+d ρ^2+ sin^2 ρ dΩ_^2).
First, we consider N=∞ case,
i.e. bulk free theory in global AdS or the GFF theory on the boundary.
We can solve the EOM of the bulk free theory by the mode expansion
for the global AdS coordinates and
obtain the creation operators a^†_n l m.
Then, the bulk local field ϕ can be
expanded by these modes.
In particular, the boundary value of ϕ, which is given by
O(τ, Ω)=lim_z → 0 z^-Δϕ(τ, z, Ω)
where z=cos (ρ) and Δ is the conformal dimension of the CFT operator corresponding to ϕ, is expressed by a^†_n l m.
Conversely, we can express a^†_n l m by the boundary values O(τ, Ω).
Then, inserting this into the mode expansion of ϕ, we have
the bulk reconstruction formula from the boundary values:
ϕ (τ,ρ,Ω) = ∫ d τ' dΩ' K(τ,ρ, Ω;τ',Ω') O(τ',Ω'),
where K(τ,ρ, Ω;τ',Ω') is a specific function called the smearing function
whose explicit form was given in <cit.> and called the HKLL bulk reconstruction formula.
It is important to note that the two-point function
⟨ 0 | O(τ, Ω) O(τ', Ω') | 0 ⟩
computed in the free bulk theory coincides with
the two-point function of the primary operator with the conformal dimension Δ of CFT, which is universal, up to a numerical normalization factor.
If we regard this O(τ, Ω) as a d-dimensional “CFT” operator, the “CFT” is a generalized free field theory <cit.>.
If we define
ϕ^G (τ,ρ,Ω) ≡∫ d τ' dΩ' K(τ,ρ, Ω;τ',Ω') O^CFT (τ',Ω'),
for a primary operator O^CFT (τ,Ω) of any finite N CFT,
the bulk two-point function
is reproduced by the CFT operator ϕ^G:
⟨ 0 | ϕ^G (τ,ρ,Ω) ϕ^G (τ',ρ',Ω') | 0 ⟩
=⟨ 0 | ϕ (τ,ρ,Ω) ϕ (τ',ρ',Ω') | 0 ⟩,
bacause O and O^CFT have the same two-point functions.
In particular, an N=∞ limit of the holographic CFT around the vacuum is expected to be this GFF because of the large N factorization.
Then, (<ref>) in this limit reproduces the bulk n-point function and gives the HKLL bulk reconstruction formula.
Note that the smearing function
K(τ,ρ, Ω;τ',Ω') has ambiguities although there is no ambiguity for expressing by the creation operators.
Indeed, for N=∞,
if we replace it as
K(τ,ρ, Ω;τ',Ω') → K(τ,ρ, Ω;τ',Ω') +δ K(τ,ρ, Ω;τ',Ω'),
the reconstructed bulk operator
ϕ^G (τ,ρ,Ω) is invariant if
∫ d Ω' ∫_-∞^∞ dτ' e^i ωτ Y_lm(Ω) δ K(τ,ρ, Ω;τ',Ω') =0,
is satisfied for
ω= 2n+l+Δ
where n is a non-negative integer although ω can take any real number as a Fourier transformation of τ.
1/N corrections
The HKLL bulk reconstruction can be extended
to include 1/N corrections, which correspond to the interaction in the bulk
by requiring non-linear EOM or a kind of micro causality <cit.>,[
The requirements were described in the holographic gauge. However, the HKLL bulk reconstruction may not give a local operator in the holographic gauge as discussed in <cit.>.
]
which is given as
ϕ^G(1) (τ,ρ,Ω) ≡∫ d τ' dΩ' K(τ,ρ, Ω;τ',Ω') O^CFT (τ',Ω')
+ 1/N∫ d τ' dΩ' d τ” dΩ”
K^(1)ab(τ,ρ, Ω;τ',Ω', τ”,Ω”) O^CFT_a (τ',Ω') O^CFT_b (τ”,Ω”),
where O^CFT_a is a low-energy primary operator
and K^(1)ab is N-independent.
It is noted that if we take ϕ a spherical symmetric operator, i.e. it is Ω-independent,
each term of 1/N expansion is a spherical symmetric operator.
§.§ AdS-Rindler bulk reconstruction
Next, we review the AdS-Rindler bulk reconstruction <cit.>.
Let us consider the AdS- Rindler patch of AdS_d+1 with the metric
ds^2=-ξ^2 dt_R^2+dξ^2/1+ξ^2+(1+ξ^2) d χ^2,
where χ stands for the coordinates of the d-1 dimensional hyperbolic space.
Its asymptotic boundary on the t_R=0 slice, which will be denoted by A, is a ball-shaped subregion in the sphere, whose entanglement wedge M_A is the AdS-Rindler patch itself.
For this, we can repeat the bulk reconstruction for the global AdS: we solve the EOM to obtain the mode expansion and
rewrite the bulk local operator ϕ by the modes.
Then, the bulk local operator can be expressed by the boundary values of the bulk local operator using a smearing function K^R[
Precisely speaking,
the smearing function K^R does not exist for the bulk local operator.
We always need to consider the smeared bulk operator (distribution) <cit.>.
It is also noted that K^R is unique.]
Using the smearing function, we define
ϕ^R (t_R,ξ,χ) ≡∫ dt'_R dχ' K^R(t_R,ξ,χ; χ',t'_R) O^CFT(χ',t'_R).
This CFT operator ϕ^R reproduces the bulk two-point function correctly even for finite N as
⟨ 0 | ϕ (X) ϕ (X') | 0 ⟩
=
⟨ 0 | ϕ^R (X) ϕ^R (X') | 0 ⟩,
for arbitrary two points in M_A,
where |0⟩ is the global vacuum (not the Rindler vacuum).
Furthermore, they satisfy
⟨ 0 | ϕ (X') ϕ (X) | 0 ⟩
=⟨ 0 | ϕ^G (X') ϕ^G (X) | 0 ⟩=
⟨ 0 | ϕ^R (X') ϕ^G (X) | 0 ⟩,
for arbitrary X in the entire bulk and X' in the entanglement wedge M_A.
For N=∞, we can also reproduce any higher-point bulk correlation function in n the entanglement wedge M_A as the global reconstruction does.
Thus, the bulk local operators in the entanglement wedge M_A
are reconstructed from the CFT operators supported on the subregion A for N=∞.
Note that for N=∞,
the CFT is regarded as the GFF which is just the free bulk theory on AdS.
Thus, ϕ^G and ϕ^R are the same operators.[
This is possible by the ambiguities of K, i.e. we may have K^R=K+δ K.
]
In particular, the creation and annihilation operators are related by a unitary transformation (i.e., the Bogoliubov transformation with operators on M_A̅).
This means that the holographic error correction code in the N=∞ theory,
the map (for example, the Petz map) is trivial.
This is because the code subspace (low energy subspace) and the physical space are the same for N=∞.
Furthermore, if we include even the first non-trivial corrections in the 1/N expansion, the structure of the holographic error correction code is lost <cit.>
as we will see later.
Here, we will give some remarks on it.
Let us consider the difference between the two operators ϕ^G in (<ref>) and ϕ^R in (<ref>):
ϕ^δ≡ϕ^R-ϕ^G,
in the finite N CFT, i.e., we use the HKLL formulas for ϕ^G in (<ref>) and ϕ^R in (<ref>) with O^CFT for finite N CFT although the smearing functions K and K^R are those for N=∞ theories without 1/N corrections.
This ϕ^δ satisfies, for arbitrary point X in the entire bulk,
⟨0|ϕ^δ O_a(X) |0⟩=0,
where O_a(X) is any primary operator of the CFT,
because this is a two-point function between the primary operators corresponding to ϕ and O_a(X) which vanishes by (<ref>).
This exactly implies
ϕ^δ|0⟩=0.
However, it does not imply ϕ^δ =0 and we may have ϕ^δ|ψ⟩≠ 0 for some |ψ⟩.
In particular, some three-point functions can be nonzero:
⟨0| O_a (X) ϕ^δ O_b(X) |0⟩≠ 0.
We indeed have such an example for CFT on d=2 Minkowski spacetime (t,x).
Using the lightcone coordinates u=t-x, v=t+x,
let us consider
ϕ̃^δ:= ∫ du dv e^iu p_u +i v p_v O_Δ (u,v),
for a non-chiral scalar primary operator O_Δ with p_u p_v <0 which implies the energy p^t is lower than the absolute value of the momentum p^x.
Then, we find
⟨0|ϕ̃^δ O_Δ (u',v') |0⟩= ∫ du dv e^iu p_u +i v p_v (u-u'+i ϵ)^-Δ (v-v'+i ϵ)^-Δ=0,
where ϵ>0 is due to the ordering of the operators <cit.>
because if p_u <0 the integration path u ∈𝐑 will be deformed to u ∈𝐑-i ∞
without crossing a singular point.
On the other hand, for the three-point function with the energy-momentum tensor, we have
⟨0| T(u”) ϕ̃^δ O_Δ (u',v') |0⟩
= ∫ du dv e^iu p_u +i v p_vΔ/2(u-u'+i ϵ)^2/(u”-u+i ϵ)^2(u”-u'+2 i ϵ)^21/(u-u'+i ϵ)^Δ (v-v'+i ϵ)^Δ≠ 0,
because there are singular points for u in the integrand both in
the upper and lower half-plane.
Thus, we have ϕ̃^δ|0⟩=0 and ϕ̃^δ≠ 0.
Like this example, ϕ^G and ϕ^R=ϕ^G+ϕ^δ can be distinguished for finite N CFT by looking at three-point functions.
The difference ϕ^δ comes from the 1/N correction, and then we can ask
whether we can take ϕ^G(1)=ϕ^R(1)
including the 1/N correction, like (<ref>), at this order.
In other words, the question that we should ask is whether ϕ^δ can be canceled by the CFT operator supported only on A by order by order in 1/N expansions, and the answer is no as shown in <cit.> by giving an explicit example.
We will show it again in the next section with a refined example.
§ SUBREGION COMPLEMENTARITY AND BULK RECONSTRUCTION
In this section, we will show first that
the global and AdS-Rindler bulk reconstructions should give different operators if we include 1/N corrections.
On the other hand, both of the bulk theories on the global and the AdS-Rindler patches may be consistent.
Indeed, these two theories have
the same operators on
an overlapped region (which is the AdS-Rindler patch) of these two patches
for the free bulk limit (i.e. N=∞),
while the operators will be different for finite N.
We called the existence of the two different descriptions depending on the observers, the subregion complementarity <cit.>.
We will give an example that illustrates what the subregion complementarity is.
In the next section, we will explain that
the subregion complementarity results from the intrinsic non-locality of the operators in the gravitational theory due to the gauge invariance.
§.§ Global and AdS-Rindler bulk reconstructions give different operators
Let us consider the CFT on S^d-1 and take a ball-shaped subregion A which is slightly larger than half of the S^d-1.
We then consider a spherically symmetric smeared bulk local operator ϕ̃ in the global AdS such that ϕ̃ is supported only on the entanglement wedge M_A associated with the CFT subregion A.
We also require ϕ̃^†=ϕ̃.
By the global HKLL bulk reconstruction (or another bulk reconstruction that maintains the symmetry), we have
ϕ̃_G ≡∫_S^d-1 K(X) O^CFT(X) with ϕ̃_G^† =ϕ̃_G where K(X) is a real and spherically symmetric function.
Below, we will use the CFT energy-momentum tensor T_00(Ω) on the τ=0 slice and the Hamiltonian H=∫_S^d-1 d Ω T_00(Ω).
Here, T_00(Ω) is shifted by a constant from the usual definition in CFT so that we have ⟨0|H |0⟩=⟨0|T_00(Ω) |0⟩=0, which implies H |0⟩=0.
Then, by the rotational symmetry, we have
⟨0|[ϕ̃_G , [T_00(Ω) ,ϕ̃_G ]] |0⟩
=1/V(S^d-1)⟨0|[ϕ̃_G , [H ,ϕ̃_G ]] |0⟩
= 2/V(S^d-1)⟨0|ϕ̃_G H ϕ̃_G |0⟩ >0,
for any Ω.
This implies that ⟨0|[ϕ̃ , [T_00(Ω) ,ϕ̃ ]] |0⟩ = O(N^0).[
Note that T_μν∼ N h_μν where the “graviton” operator h_μν is normalized
such that it has the conventional normalization of the CFT two-point function, which is independent of N. In the large N expansion, we use h_μν, then the three-point function is O(1/N).
]
On the other hand,
since ϕ̃ is localized on M_A, we can reconstruct it by the AdS-Rindler HKLL reconstruction as ϕ̃^R.
Then, for Ω∈A̅, we will have
⟨0|[ϕ̃^R , [T_00(Ω) ,ϕ̃^R ]] |0⟩=0,
by the micro causality (i.e., any operators on A commutes with T_00(Ω) for Ω∈A̅). Note that this discussion uses the low-energy states only.[
This implies that (ϕ̃^R)^2 |0⟩≠ (ϕ̃)^2 |0⟩ although ϕ̃^R |0⟩ = ϕ̃|0⟩ as discussed in the previous subsection.
On the other hand, using the Reeh–Schlieder theorem, we have an operator φ^R supported on A such that φ^R |0⟩ =(ϕ̃)^2 |0⟩, but it is impossible that
φ^R = (ϕ̃^R)^2 even for the low-energy states.]
Any 1/N corrections to ϕ̃ cannot contribute to (N^0) terms in ⟨0|[ϕ̃ , [T_00(Ω) ,ϕ̃ ]] |0⟩.
Therefore,
the AdS-Rindler bulk reconstruction of a bulk operator in the entanglement wedge and the global bulk reconstruction of the same bulk operator cannot be the same CFT operators even if we include 1/N corrections appropriately.
We can also see the difference by considering the energy density of the following state:
|ψ⟩= (1+i ϵϕ̃_G+ 1/2 (i ϵϕ̃_G )^2) |0⟩,
where ϵ is a small real parameter.
By the rotational symmetry, we have
⟨ψ| T_00(Ω) |ψ⟩ = 1/V(S^d-1)⟨ψ| H |ψ⟩
= ϵ^2 1/V(S^d-1)⟨0|ϕ̃_G H ϕ̃_G |0⟩+ O(ϵ^3),
where V(S^d-1) is the volume of the time slice S^d-1.
Thus, ⟨ψ| T_00(Ω) |ψ⟩ is O(ϵ^2) for any point Ω.
On the other hand, using ϕ̃_R instead of ϕ̃_G, we consider
|ψ^R⟩= (1+i ϵϕ̃^R+ 1/2 (i ϵϕ̃^R )^2) |0⟩.
If we take Ω∈A̅, we have
⟨ψ^R| T_00(Ω) |ψ^R⟩ =
1/2ϵ^2 ⟨0|[ψ̃ , [T_00(Ω) ,ϕ̃^R ]] |0⟩
-1/2 i ϵ^3 ⟨0| ( (ϕ̃^R)^2 T_00(Ω) ϕ̃^R - ϕ̃^R T_00(Ω) (ϕ̃^R)^2 )|0⟩
+ O(ϵ^4)
= -1/2 i ϵ^3 ⟨0| ((ϕ̃^R)^2 T_00(Ω) ϕ̃^R - ϕ̃^R T_00(Ω) (ϕ̃^R)^2 )|0⟩
+ O(ϵ^4),
where we have used the micro causality [T_00(Ω) ,ϕ̃^R ]=0 for Ω∈A̅.
Thus, ⟨ψ^R| T_00(Ω) |ψ^R⟩ is O(ϵ^3).
Therefore, we have ⟨ψ| T_00(Ω) |ψ⟩≠⟨ψ^R| T_00(Ω) |ψ^R⟩, and |ψ^R⟩ are different from |ψ⟩.
§.§ Some illustrative examples
Let us consider an example that exhibits the relation between the subregion complementarity and gauge invariant operators.
The setup is similar to the discussion of the bulk locality in <cit.>, although the conclusions are completely different.
We consider a time slice of global AdS_3 spacetime.
The corresponding
CFT is on S^1 parameterized by 0 ≤θ < 2π.
We take a subregion A_1 of the CFT by 0 ≤θ < θ_1 where θ_1 satisfies π < θ_1 < 4 π/3.
We also take the subregions A_2 and A_3
by the 2 π/3 and 4 π/3 rotations
of A_1, respectively (see Fig. <ref>).
Note that A_1 ∩ A_2 ∩ A_3= ∅.
Then, the intersection of the entanglement wedges of the three subregions
is an island-like region M_A_1∩ M_A_2∩ M_A_3, which does not reach the boundary, around the center of the disk as Fig. <ref>.
Now, let us consider a Hermitian bulk operator ϕ supported only on this island-like region.
The bulk operator ϕ can be reconstructed from
the CFT operator supported on any one of the subregions A_i,
i.e. ϕ=ϕ_i for
ϕ_i ∈ A(A_i) for i=1,2,3, where A(A_i) is the algebra generated by the CFT operator supported on D(A_i).
If the (strong) entanglement wedge reconstruction holds, these operators are the same (ϕ_1=ϕ_2=ϕ_3) on the code (low-energy) subspace <cit.>.
However, we will see that they are different even in the low-energy subspace.
We define[
Instead of e^i ϕ, we can consider the Taylor expansion of it for some order like (<ref>), and the conclusion does not change.]
the states |ψ_i⟩= e^i ϕ_i|ψ_0⟩ for i=1,2,3
where |ψ_0⟩ is an arbitrary state.
They exactly satisfy
⟨ψ_i| O_i̅|ψ_i⟩=⟨ψ_0| O_i̅|ψ_0⟩,
for arbitrary
O_i̅∈ D(A̅_̅i̅)
by the causality in the CFT.
If we suppose ϕ_1=ϕ_2=ϕ_3, these states are the same (|ψ_1⟩=|ψ_2⟩=|ψ_3⟩≡|ψ⟩).
This implies
⟨ψ| O|ψ⟩=⟨ψ_0| O|ψ_0⟩
for any CFT operator O because
A̅_̅1̅∪A̅_̅2̅∪A̅_̅3̅ is the entire space,[For the GFF theory, we have the additivity anomaly <cit.>, i.e. A(A)∨ A(A')≠ A(A ∪ A').
Thus, A(A_1)∨ A(A_2)∨ A(A_3)≠ A(S^1), and we cannot conclude ⟨ψ| O|ψ⟩=⟨ψ_0| O|ψ_0⟩ for any operator O∈ A(S^1). Our discussion in the main text is for the finite N CFT without the additivity anomaly. Although the entanglement wedge reconstruction may hold for the GFF theory, it does not contradict the radial locality due to the additivity anomaly, and thus we do not need the QEC structure in this N=∞ case.]
and then we conclude |ψ⟩∼|ψ_0⟩ which means that ϕ is proportional to an identity operator.
Thus, there is no such non-trivial operator ϕ satisfying ϕ=ϕ_1=ϕ_2=ϕ_3.
Instead, these operators ϕ_1, ϕ_2, ϕ_3 are different, that is the subregion complementarity.
It is natural that there are no nontrivial operators satisfying (<ref>)
because there is no overlap of the CFT subregions A_i, i.e. A_1 ∩ A_2 ∩ A_3= ∅.
This is a typical example of the subregion complementarity.
The statement that ϕ_1, ϕ_2, ϕ_3 are different operators sounds trivial from the CFT perspective because they have different supports.
However, this is not trivial for low-energy subspace because we need clarification for the concept of the local operators in low-energy effective theories as we will discuss in subsec. <ref>.
It will be clear that they are different from the bulk side when we take into account the gravitational dressing (see sec. <ref>).
One might think that there exists some non-trivial bulk operator ϕ
which satisfies
ϕ≃ϕ_i where
ϕ_i ∈ D(A_i) for i=1,2,3
only in the low-energy subspace (which is the code subspace) as the holographic quantum error correction proposal <cit.>.
However,
one can repeat the above discussion by restricting
|ψ_0⟩ and
O_i̅
to a state and an operator in the low-energy subspace.
It is important to note that ϕ_i which is reconstructed by the CFT operators should be only supported on A_i
strictly and O_i̅ should be only supported on A̅_i.
This enables us to use the micro-causality.
Then,
we find that ⟨ψ| O|ψ⟩=⟨ψ_0| O|ψ_0⟩ for any low-energy CFT operator O
and
such ϕ should be proportional to the identity operator in the low-energy subspace.[
Strictly speaking, to show this, we need to assume that there is no genuine non-local low-energy CFT operator which
cannot be supported on A_j for any j.
However, such genuine non-local operators may not exist
because there may not be such low-energy excitations in the bulk.
Furthermore, even if we assume the existence of such a non-local operator,
ϕ_i should include this non-local operator, which cannot be supported on A_j for any j.
This is impossible because ϕ_i is supported on A_i.
]
Thus, such an operator should be trivial in the low-energy subspace and it is contrary to the holographic quantum correction code proposal.
Generalization
We can extend the discussion to general backgrounds and more general choices of subregions.
Let us consider a time slice of an asymptotic AdS spacetime M and
CFT subregions A_i ⊆∂ M.
We will take bulk regions
a_i ⊆ M such that
a_i ∩∂ M = A_i,
for any i.
Though this condition is satisfied for the time-slice of the entanglement wedge of A_i, we here allow a_i to be more general bulk subregions satisfying this condition.
We also require
the following condition,
∪_j A̅_j =∂ M,
which is also satisfied with the previous example.
(We do not require ∪_j A_j =∂ M here.)
The condition (<ref>) is equivalent to
∩_j A_j =∅,
which implies that there is no CFT operator ϕ such that ϕ∈ A(A_i) for all i, where A(A_i) is the algebra generated by the CFT operators supported on A_i.
Under the condition (<ref>),
(<ref>)
is equivalent to
∩_j a_j ∩∂ M =∅,
which means
that the intersection of the bulk wedges a_i is an “island” which does not reach the boundary.
Thus, this can be a generalization of the previous example
because all of the conditions here are satisfied in the previous examples.
We can repeat the previous discussion and obtain the same conclusion for this generalized case. That is,
if the overlapped region of the bulk wedges of the subregions is an island region and we reconstruct a bulk operator supported only on the island region as a CFT operator supported on a subregion A_i, then the CFT operator cannot be the same as the reconstruction from another subregion A_j (i≠ j) even in the low-energy subspace.
§ GRAVITATIONAL DRESSING (GAUGE-INVARIANT OPERATORS IN GRAVITY) AND SUBREGION COMPLEMENTARITY
We have seen that the CFT operator on a subregion that reconstructs a bulk “local” operator
is different from the other one for a different subregion
even restricted in the low-energy subspace.
This is because of the subregion complementarity.
We will explain that this property is naturally understood in the bulk gauge invariant description.
It is important that
there are no gauge (diffeomorphism) invariant local operators in the gravitational theory,
and the gravitational dressing (or the gravitational Wilson lines) is needed to make a naive local operator gauge invariant <cit.>.[
For gauge theories also, such a dressing is required.
However, there are gauge-invariant local operators in
the gauge theories and the non-local operators may not
play an important role in some situations, like
the low energy physics of the large N gauge theories because the Wilson loops will be heavy.
For the gravitational theory, there are only non-local gauge invariant operators.
]
We will also show that subregion complementarity, rather than holographic QEC proposal, is consistent with the algebraic approach to the entanglement wedge reconstruction based on
the gauge-invariant operators in gravity.
Note that this version of the entanglement wedge reconstruction is
the weak version of the entanglement wedge reconstruction, which we claimed in <cit.>, and is different from the usual entanglement wedge reconstruction.
§.§ Gravitational dressing
The examples of the subregion complementarity given in the previous section can be understood from the fact that
there are no local operators in the gravitational theory (without a gauge fixing).
In gauge theories, charged objects are always accompanied by electromagnetic fields.
In other words, we have to attach the Wilson lines to the charged operators in order to make them gauge-invariant.
In the gravitational theory, gravitational fields are universally coupled with all fields, and thus all operators are accompanied by gravitational fields.
Thus, we have to attach the gravitational dressing or the “Wilson line”[
The gravitational analogue of the Wilson line is different from the Wilson line in the gauge theories
because there is no analogue of the Wilson loop.
In particular, for gauge theories, the Wilson line can be deformed arbitrarily by
adding the Wilson loop, which is gauge-invariant and localized inside the bulk.
]
to the local field to make it gauge (diffeomorphism) invariant <cit.>.
In the gauge theories, the Wilson lines from a charged particle do not have to end on the boundary because they can end on the anti-charged particles.
The gravitational dressing has to end on the boundary of the bulk spacetime because there are no “anti-charged” objects for the gravitational force.
In the left figure in Fig. <ref>, the bulk operator has the gravitational dressing which extends on the entire boundary.
This operator corresponds to the CFT operator for the global reconstruction.
On the other hand, we can consider other choices of gravitational dressing as in the right figure in Fig. <ref>.
If the gravitational dress extends only on a subregion A of the boundary, the operator may be constructed from the CFT operators supported only on A.
The CFT operator is different from the one for the global reconstruction.
The physical difference is clear because they have different gravitational fields.
In the example (Fig. <ref>) in the previous section, the operator ϕ_1 constructed from A_1 corresponds to the bulk operator with the gravitational dressing extending (only) to A_1.
Similarly, ϕ_2 and ϕ_3 correspond to the bulk operators with gravitational dressing extending (only) to A_2 and A_3, respectively.
Therefore, the bulk operators including the gravitational dressing have different supports, and it is consistent with our observation that CFT operators ϕ_1, ϕ_2, ϕ_3 are different.
We have seen that the constraint of the gauge invariance of the bulk theory is related to the causality of the CFT side.
It may be interesting to understand such a relation clearly.
No operators supported on island
Gravitational dressing must end on the boundary.
Thus, there are no gauge invariant operators supported only on an island region in the bulk.
This is consistent with the examples in the previous section because there are no CFT operators associated with the island region.
This may imply that
an island subregion in an entanglement wedge is irrelevant for
the operators supported on the wedge.
Let us take
a certain bulk subregion M_A which is connected to the bulk boundary.
If we consider
another subregion M'_A=M_A ∪ I where
I is an island subregion that is not connected to the bulk boundary and also M_A,
then the bulk subalgebra of the operators supported on M_A and
the one on M'_A are the same because there are no gauge-invariant operators localized on I.
Note that this conclusion is based only on the bulk consideration, and
the discussion here may be equivalent to the one in <cit.>
although the holographic error correction codes <cit.> and
the (original version of) entanglement wedge reconstruction <cit.> were supposed to be correct in <cit.>.
§.§ Subalgebra associated with subregion and the algebraic entanglement entropy
The subregion duality and the entanglement wedge reconstruction <cit.> are based on the equivalence of the bulk and CFT relative entropies <cit.>.
To define the (conventional) entanglement entropy or the reduced density matrix for a space subregion,
a tensor-factorized structure is needed for the Hilbert space.
However, without a gauge fixing,
there are no local gauge invariant operators in
the gravitational theory and
there is no such tensor-factorized structure.
Instead of a tensor-factorized structure associated with a space subregion,
we can consider a subalgebra
that is generated by operators supported on the subregion.
We have a generalization of the definition of entanglement entropy based on
the tensor-factorized structure to the one based on the subalgebra (see, e.g., <cit.>), which is also reviewed in appendix <ref>.
Note that this generalized definition reduces to the conventional definition when the total Hilbert space is tensor-factorized as _1 ⊗_2 by considering the subalgebra non-trivially acting only on a factor of the tensor product, e.g., _1.
Thus, the definition of the reduced density matrix based on subalgebra may be a natural generalization.
Indeed, in <cit.>,
the subregion duality and the entanglement wedge reconstruction based on the algebraic approach were proposed by
(implicitly) assuming that the relative entropy computed in <cit.> is based on the algebraic approach.
It is not yet known whether this assumption is correct
because
the replica trick was used to obtain the density matrix for the subregion in <cit.> and
the replica trick is based on the tensor-factorized structure of the Hilbert space at least naively.
We claim that the subregion duality and the entanglement wedge reconstruction based
on the subalgebra are what we call the weak version of them <cit.>.
This implies that the entanglement wedge reconstruction based on the tensor product of the Hilbert space is completely different from
those based on the subalgebra
although it seems to be assumed that they are not so different in <cit.>.
In particular, the subregion complementarity is relevant in our discussion instead of the holographic QEC proposal.
§.§ What is the CFT counterpart of the bulk subalgebra
Let us consider ball-shaped subregions A, B in CFT and
the corresponding AdS-Rindler patches M_A, M_B in the bulk. We allow A as the entire space S^d-1, i.e. M_A as the global AdS space.
Then, we can reconstruct a bulk local operator ϕ(x)
where x ∈ M_A ∩ M_B by either of the AdS-Rindler HKLL reconstruction from the subregions A or B.
The reconstructed operators from A and B should be different as discussed in the previous section.
In the gauge invariant language,
the bulk local operator ϕ(x) should be supplemented by the gravitational dressing.
The gravitational dressings also should be supported on M_A or M_B for the reconstructed operators from A or B, respectively,
if the algebraic version of the subregion duality holds.
Then, the gravitational dressing is expected to
give the nonzero three-point function of the CFT energy-momentum tensor and two corresponding to ϕ only on A or B, respectively. Thus, the two reconstructed operators are different because they have different three-point functions.
The above claim can also be further explained with more physics-related implications by using the simple picture given in <cit.> which are based on
the studies of the AdS/CFT in the operator formalism <cit.>.
The bulk local operator can be represented by the time evolution of the CFT primary operators on the intersection of the asymptotic AdS boundary and the lightcone of it.
In particular, the bulk wave packet operator can be
represented by the time evolution of the CFT primary operators at the point where
the trajectory of the wave packet, which is a null geodesic for the well-localized wave packet, intersects with the asymptotic AdS boundary <cit.>.
Note that the CFT primary operators are regarded as
the bulk operators on the boundary which are gauge invariant because the gravitational dressings are not needed.
Then, the time evolution of them in the bulk picture
produces the operator analogues of the gravitational waves (and other waves)
by the bulk interactions.
They spread within M_A because of the bulk causality. The gravitational dressing of the bulk operator is localized only on M_A.
Then, the AdS-Rindler HKLL reconstruction of the bulk local operator from the subregion A differs from the bulk local operator from the subregion B.
In particular, if we consider a wave packet operator that is of the boundary-to-horizon type <cit.> for the AdS-Rindler patch M_A, but of the horizon-to-horizon type for the patch M_B, then the reconstructed operators from A and B should be different because the gravitational dressing of the former should extend beyond M_B.
§.§.§ The bulk subalgebra in terms of CFT operators
Here, we will express the bulk subalgebra in terms of CFT operators. As we will see below, we need a non-trivial clarification.
In the CFT picture, there is
the algebra A_CFT which is generated
by the CFT operators.
Around a semiclassical bulk background, which corresponds to the vacuum for our case here,
we will define the low-energy Hilbert space, whose basis is spanned by the states with energy less than O(N^0), and the subspace may be called the code subspace.[
There are some subtleties to defining the low-energy subspace because the O(N^0) energy is only defined in the large N limit and we need to introduce an explicit cutoff for finite N case.
We will ignore this subtlety assuming that the discussions below are not sensitive to the explicit cutoff.]
The operators of the semiclassical bulk theory will act on this.
We have considered
the algebra generated A_bulk by the bulk operators.[
There are also some subtleties to define A_bulk
because if the numbers of the products of low-energy operators or the number of the derivatives are very large,
the corresponding operators cannot be regarded as low-energy ones.
We will ignore this assuming the approximate notion of the subalgebra is enough.]
Note that A_bulk can be identified as a subalgebra of A_CFT, by considering the operators acting on the low-energy states only.[If we consider another excited state |ψ⟩ corresponding to the non-trivial semiclassical background, the low-energy states mean the states excited by acting low-energy operators on |ψ⟩.
]
Here, the low-energy operators will be defined such that
the matrix elements between low-energy states (whose energy is equal to or less than (N^0)) and high-energy states (whose energy is equal to or bigger than the Planck mass) vanish (or small compared with 1/N^n where n is an arbitrary positive integer).
More precisely, for the identification of A_bulk with a subalgebra in CFT, we need to
extract
the matrix elements between the low-energy states.
Now, we consider a subregion A in CFT,
and then we have the subalgebra A_A of
A_CFT
which is generated by
the CFT operators supported on A.[
We will ignore non-local CFT operators, which will be high-energy operators.
]
A central question of the entanglement wedge reconstruction might be which part of the bulk algebra A_bulk can be generated by A_A,
in other words, what is the low-energy subalgebra of A_A.
However, this is not a good question because
the bulk theory is meaningful only for the low-energy subspace and operators involving multiple derivatives
cause some problems
as discussed in <cit.>.
We will explain this below.
The bulk theory may have a UV cutoff at the Planck scale. Let us remember the subtlety of the operators with a large number of derivatives
in the cutoff theory
discussed in <cit.>.
For the notational simplicity, we will consider the CFT on the Minkowski space.
Let us consider the following operator
O(x̅, t̅ ) = e^t̅∂_t O(x̅, t ) |_t=0.
It is not localized at {x̅, t=0 }, but at {x̅, t̅}.[
In the discussions here we will consider time derivatives for simplicity although we can generalize them to any direction.
]
It means that if we act the infinite number of derivatives on the local operator at a point, the obtained operator is not local on the original point.
On the other hand, if the number of derivatives is finite as
O^[q](x̅, t̅ ) ≡
[e^t̅∂_t]_q O(x̅, t ) |_t=0,
where
[e^t̅∂_t]_q ≡∑_n=0^q(t̅∂_t)^n/n!,
it is the local operator at {x̅, t=0 } for a UV complete quantum field theory.
For a low-energy effective theory with the energy cutoff Λ,
the local operator should be smeared,
for example, as
O_Λ(x, t )≡∫ dt' e^-Λ^2/2(t-t')^2 O(x, t' ).
Here, the cutoff Λ may be restricted to be much smaller than the Planck mass M_p ∼ N^2/d-1 for the bulk theory.
Then,
the “local” operator
O_Λ(x̅, t̅ ) in the effective theory
cannot be distinguished with
O^[q]_Λ (x̅, t̅ ) ≡
[e^t̅∂_t]_q O_Λ (x̅, t ) |_t=0
if q ≫t̅Λ,
because
(t̅∂_t)^q/q!∼ e^q(ln (t̅∂_t)-ln q ),
and the Fourier mode of O_Λ (x̅, t ) for t is exponentially suppressed for ω≫Λ.
Note that if we take q=M_p ( ≲ N^2) the condition q ≫t̅Λ is satisfied for t̅= O (N^0) and the difference O^[q]_Λ (x̅, t̅ )- O_Λ(x̅, t̅ ) is suppressed by a factor e^-q at least.
Thus, as the effective theory,
such higher derivative terms cannot be considered to be a local operator at t=0 although
it is a local operator at t=0 for the UV complete CFT.[
This can be explicitly seen in the lattice field theory.
]
For the AdS/CFT correspondence, the important thing is that
the large N gauge theory has O (N^2) degrees of freedom and
(∂_t)^q O( x, t )
with q ≲ N^2 are independent fields
by imposing the equations of motion.
Even though this fact, O^[q]_Λ (x̅, t̅ ) with q = O (N^2) cannot be regarded as a local operator at t=0
for the bulk effective theory.
Thus, when we take a subregion A and consider CFT operators on A with a large but finite number of derivatives like O^[q](x̅, t̅) in (<ref>), these operators are not local operators on A in the EFT, and can be local operators on other subregions.
It means that the CFT operators supported only on a subregion can be operators supported on any subregion in the EFT (which corresponds to the bulk description )using a huge number of derivatives, and thus they can be bulk operators anywhere (because of the UV cutoff).
This leads that instead of A_A,
we need to consider A_A^Λ
which is generated by CFT operators without a huge number (N^2/d-1) of derivatives supported on A.[
From the viewpoint of the CFT living on the subregion A, such restriction may come from the coordinate choice.
In the Rindler coordinate, the boundary is at the infinite limit of the coordinates.
More precisely, the boundary condition on the boundary of A is important and we employed it implicitly by taking the Rindler coordinate.
]
Then, we find that
A_A^Λ will be generated by
∫_D(A) K(X) O_a(X),
where K(X) is a smooth function such that
the Fourier modes of it above the cutoff Λ are suppressed
and O_a is any low-energy single trace primary operator.
Note that we need CFT operators supported on D(A) instead of A because some independent CFT operators, which have a huge number of derivatives, on A are not included in A_A^Λ.[
CFT primary operators at different time correspond to
bulk operators at t=0 with different values for the radial coordinate.
]
To compare this to the bulk subalgebra, which acts on the low energy subspace only, we define A_A^low as the restriction of
A_A^Λ to the low energy subspace.[
It is important to note that the low energy part of A_A, instead of that of A_A^Λ, was supposed to be dual to the bulk subalgebra in the literature.
However, it is not correct as we have explained.
]
We can show that A_A^low= A_M_A for the ball-shaped subregion A in CFT whose entanglement wedge M_A is the causal wedge of A covered by the AdS-Rindler patch as follows.
Here, the subalgebra A_M_A is generated by
bulk gauge-invariant operators, including the gravitational dressings, supported on M_A.
First, we can easily see that A_A^low⊆ A_M_A because the CFT operator O_a(X) in (<ref>) is localized on D(A), which lies on the asymptotic boundary in the bulk picture. By the bulk equations of motion and causality, it can only affect the causal wedge M_A.
Next, let us consider O^bulk∈ A_M_A and represent it using low-energy CFT primary operators. Recall that the gravitational dressing in O^bulk is also localized on M_A.
To represent it in CFT, we do not need CFT primary operators outside D(A), because such operators cannot be supported on D(M_A) in the bulk picture.
Furthermore, a CFT operator supported on D(A) with a large number of derivatives will be a high-energy operator unless it is an operator at a different point approximately.
Therefore, we find that A_A^low = A_M_A.
Thus, the simple picture presented in <cit.> and <cit.>
is consistent with the algebraic version of the JLMS <cit.> and the entanglement wedge reconstruction based on it <cit.>.
Comments
Here, we argue that there are no holographic quantum correction code structures for the bulk reconstruction.
More precisely, we claim that if we can reconstruct a bulk operator as an operator O^A supported on subregion A in CFT, as well as an operator O^B supported on subregion B,
then, if they are the same operator even in the low energy subspace, we can reconstruct it as an operator supported on subregion A ∩ B in CFT as follows.
The reconstructed operator O^A supported on subregion A should commute with the low-energy operators, including the energy-momentum tensors, supported on D(A̅), and then O^B should also commute with them.
On the other hand, if O^B contains operators supported outside D(A) which act on the low-energy subspace, it may not commute with the energy-momentum tensors supported on D(A̅).
This means that the part of the operators should not act on the low-energy subspace and we can redefine the reconstructed operator supported on B by eliminating the part.
Thus, the bulk operator can be reconstructed as an operator supported on subregion A ∩ B in CFT.
Hawking radiation and information paradox
What we have shown implies that for the information theoretical aspect of the AdS/CFT or quantum gravity,
the bulk local operator or state cannot be an appropriate approximation
and we need to consider some subregion including the asymptotic boundary.
This is the keypoint of the subregion complementarity.
Thus, if we want to consider, for example, the Unitarity of the Hawking radiation process for the information paradox,
we need to specify what the Hawking radiation in this sense because the derivation of the Hawking radiation uses the semiclassical approximation which does not specify the associated gravitational dressings.
This is related to the holography of information which was recently discussed in <cit.>.
In the holography of information, the important fact is that there is a difference between a non-gravitational theory and a gravitational theory concerning information.
We can say that what is important in our paper is that
the difference between a N=∞ (or its 1/N perturbation theory) theory and a finite N theory concerning information.
Here, a N=∞ theory corresponds to a non-interacting bulk gravity theory,
which is similar to a non-gravitational theory in a sense.
Thus, our discussions may be related to the holography of information.
§ ACKNOWLEDGEMENT
The authors thank Hiroki Kanda, Taishi Kawamoto, Juan Maldacena, Yoshinori Matsuo, Yu-ki Suzuki, Yusuke Taki, Tadashi Takayanagi and Zhenbin Yang for the useful comments.
This work was supported by MEXT-JSPS Grant-in-Aid for Transformative Research Areas (A) “Extreme Universe”, No. 21H05184.
This work was supported by JSPS KAKENHI Grant Number 24K07048.
SS acknowledges support from JSPS KAKENHI
Grant Numbers JP21K13927 and JP22H05115.
§ ALGEBRAIC DEFINITION OF REDUCED DENSITY MATRIX AND ENTANGLEMENT ENTROPY
When we define the reduced density matrix, we usually suppose a tensor factorized structure of the Hilbert space as =_A ⊗_A̅.
We then define the reduced density matrix ρ_A on _A by taking the partial trace of a given total density matrix ρ over _A̅ as ρ_A= _A̅ρ.
However, we encounter some problems when we try defining a reduced density matrix associated with a subregion in QFTs.
One issue is the UV problem.
In QFTs, the reduced density matrix of a subregion is inherently ill-defined due to UV problems, necessitating some UV regularization.
This is related to the fact that the subalgebra associated with a subregion in QFTs is type III.
Here, we assume a UV regulator that circumvents this issue, such as in lattice field theories.
Even setting aside the UV problem,
there is another issue for gauge theories.
The Hilbert space of gauge-invariant physical states is not a tensor product like ≠_A ⊗_A̅ concerning the subregion A and its complement A̅ due to the Gauss law constraint.
It corresponds to the fact that the subalgebra for the subregion A is not a factor for gauge theories due to the existence of a non-trivial center.[von Neumann algebra 𝒜 is called a factor if the center 𝒵=𝒜∩𝒜' consists only of multiples of the identity operator, where 𝒜' is a commutant of 𝒜. ]
However, by adopting an algebraic approach (see, e.g., <cit.>), we can define the reduced density matrix for subregion A even for this case.
This approach is also used to define the target space entanglement entropy (see, e.g., <cit.>).
Let 𝒜 be a subalgebra associated with the subregion A.
We define the reduced density matrix ρ_𝒜 associated with a subalgebra 𝒜 from a given total density matrix ρ as a positive semi-definite operator in 𝒜 satisfying
(ρ_)= (ρ)
for any ∈.
In this definition, we do not need a tensor factorized form of operators in 𝒜 like _A ⊗ 1_A̅.
Nevertheless, by taking an appropriate basis, the total Hilbert space can be decomposed into a direct sum of tensor products as[See, e.g., <cit.>. Here, as discussed above, we assume that a UV regulator is introduced and also that the total space is compact such that the total algebra is type I.]
= ⊕_k _A^(k)⊗_A̅^(k)
such that subalgebra takes a tensor-factorized form in each sector as
=⊕_k ℒ(_A^(k)) ⊗ 1__A̅^(k)
where ℒ(_A^(k)) denotes a set of operators on _A^(k).
The subalgebra does not mix different sectors labeled by k,[
The label k might be continuous parameters.
In this case, the direct sum is replaced by the direct integral.
]
and thus k is the label of the superselection sectors for observables in .
Let Π^(k) be the projection from onto the k-sector ^(k):= _A^(k)⊗_A̅^(k).
Then, from the total density matrix ρ, we can define the density matrix on the k-sector by using the projection as
ρ^(k):= 1/p^(k)Π^(k)ρΠ^(k),
where p^(k):= (Π^(k)ρΠ^(k)) is a normalization factor so that ρ^(k) is a normalized density matrix on ^(k) as ρ^(k)=1.
We can regard p^(k) as the probability of finding the given state ρ in the k-sector.
Since ^(k) takes a tensor factorized form as ^(k)= _A^(k)⊗_A̅^(k), we can take the partial trace of ρ^(k) with respect to _A̅^(k) as
ρ^(k)_A:=__A̅^(k)ρ^(k).
We then define the reduced density matrix on A as
ρ_A:= ⊕_k p_k ρ^(k)_A.
Note that ρ_A is a density matrix on the space _A:=⊕_k _A^(k) and normalized as __Aρ_A=∑_k p_k =1.
Entanglement entropy for A is given by the von Neumann entropy of ρ_A as
S_A(ρ):=-__A(ρ_A logρ_A )
=-∑_k p_k log p_k +∑_k p_k S_A^(k)(ρ^(k)),
where S_A^(k)(ρ^(k)) is given by
S_A^(k)(ρ^(k))=-__A^(k)(ρ^(k)_Alogρ^(k)_A ),
that is, it is the entanglement entropy of ρ^(k) for _A^(k) defined in the standard way.
If is tensor factorized as = ⊕_k _A^(k)⊗_A̅^(k), the above definition of entanglement entropy reduces to the standard one because in that case we have only a single sector (say k=1) and thus p_k=1=1.
We have a diffeomorphism symmetry in the bulk description.
Thus, there are no gauge-invariant (diffeomorphism-invariant) local operators, and all operators have non-local gravitational dressing.
For bulk subregion M_A, we define the subalgebra _M_A associated with M_A as the set of (low-energy) bulk operators that are supported only on M_A including their gravitational dressing as Fig. <ref>.
Due to this construction, the bulk subregion M_A must have an asymptotic boundary where the gravitational dressing ends.
For this subalgebra _M_A, the above procedure defines the reduced density matrix ρ_M_A and the entanglement entropy S_M_A.
Let us consider an entanglement wedge M_A associated with a boundary subregion A.
At least if A is a connected region,
we expect that the above algebra _M_A should agree with the CFT (low-energy) subalgebra _A^low on A.
For instance,
when M_A is the AdS-Rindler patch, we have _A^low=_M_A as will be argued in subsec. <ref>, although the definition of low-energy subalgebra _A^low is not-trivial (see subsec. <ref>).
Then, by construction, the reduced density matrix ρ_A defined by the subalgebra _A^low is the same as ρ_M_A.
We thus have the JLMS formula of the relative entropies <cit.> in the weak sense <cit.>.
The entanglement entropy S_M_A has the classical Shannon entropy term as the first term of the RHS of (<ref>).
This term is owing to the center of _M_A and may be related to the degrees of freedom localized on the RT surface Σ_A as argued in <cit.>.
It is thus natural to expect the entropy to be related to local quantities on Σ_A such as the area.
utphys
|
http://arxiv.org/abs/2409.02167v1 | 20240903180001 | Machine Learning-based Search of High-redshift Quasars | [
"Guangping Ye",
"Huanian Zhang",
"Qingwen Wu"
] | astro-ph.GA | [
"astro-ph.GA"
] |
Department of Astronomy, Huazhong University of Science and Technology, Wuhan, Hubei 430074, China
[email protected]
Department of Astronomy, Huazhong University of Science and Technology, Wuhan, Hubei 430074, China
Steward Observatory, University of Arizona, Tucson, AZ 85719, USA
[email protected]
Department of Astronomy, Huazhong University of Science and Technology, Wuhan, Hubei 430074, China
§ ABSTRACT
We present a machine learning search for high-redshift (5.0 < z < 6.5) quasars using the combined photometric data from the DESI Imaging Legacy Surveys and the WISE survey. We explore the imputation of missing values for high-redshift quasars, discuss the feature selections, compare different machine learning algorithms, and investigate the selections of class ensemble for the training sample, then we find that the random forest model is very effective in separating the high-redshift quasars from various contaminators. The 11-class random forest model can achieve a precision of 96.43% and a recall of 91.53% for high-redshift quasars for the test set. We demonstrate that the completeness of the high-redshift quasars can reach as high as 82.20%. The final catalog consists of 216,949 high-redshift quasar candidates with 476 high probable ones in the entire Legacy Surveys DR9 footprint, and we make the catalog publicly available. Using MUSE and DESI-EDR public spectra, we find that 14 true high-redshift quasars (11 in the training sample) out of 21 candidates are correctly identified for MUSE, and 20 true high-redshift quasars (11 in the training sample) out of 21 candidates are correctly identified for DESI-EDR. Additionally, we estimate photometric redshift for the high-redshift quasar candidates using random forest regression model with a high precision.
§ INTRODUCTION
Quasars in general are driven by a supermassive black hole (SMBH) at the centre of the host galaxy through a process of accretion, and are the brightest non-transient sources of light in the Universe. SMBH activities are a key ingredient of galaxy formation, and are critical to subsequent galaxy evolution. Quasars at z>5 are often referred to as high-redshift quasars since they are at the end of the reionization epoch of the Universe, when the vast majority of the Universe's neutral hydrogen has been reionised <cit.>. High-redshift quasars provide effective probes for the study of galaxy evolution and cosmology, including the evolution of the intergalactic medium <cit.> and the circumgalactic medium <cit.>, the formation of early supermassive black holes, the co-evolution of SMBH and their host galaxies <cit.>.
There exist many challenges in searching and verifying high-redshift quasars. On one hand, the number density of high-redshift quasars is very low in the Universe <cit.>, resulting to a small number of observable high-redshift quasars, which means that the single-object spectroscopic observations on large-aperture telescope of quasar candidates are needed and expensive; on the other hand, the contamination is overwhelming and could be a few orders of magnitude more than signals <cit.>. The major contaminations include cool galactic dwarfs with spectral types of M, L, and T <cit.>, which are similar to those of high-redshift quasars in the color space constructed from optical and near-infrared broad-band photometry.
The traditional method for searching the high-redshift quasars is the color-cut selection <cit.> based on color drop-out, which is caused
by the Lyα break in quasar spectra due to significant IGM
absorption at the wavelength blueward of the Lyα emission
line at the rest frame of the quasar. This method successfully discovers the majority of the currently known high-redshift quasars <cit.>. The big advantage of this method is that it leads to well-defined selections that are easily reproducible and can be justified with known physics (e.g., the redshift evolution of the Lyα emission through the broadband filters and the drop-out due to neutral IGM absorption). However, color-cut selections might not make use of all the available information in the high-dimensional color space and the two-dimensional color-color diagram might be misleading or biased. And the strict color cuts might result in missing quasars due to the scattering out of the selection regions based on color-color cuts. In contrast to the color-cut selection method, machine learning-based methods can make full use of the color information and construct correlations in the high-dimensional space. Moreover, the large imaging surveys collect data far more than what can be handled manually, machine learning-based automatic methods can easily process those data and can be easily applied to new data.
So far, there have been many successful examples of machine learning algorithm-based searches for quasars up to redshift z ∼ 6 quasars <cit.>. A variety of supervised machine-learning algorithms have been successfully applied for quasar selections, such as random forest <cit.>, Support Vector Machines (SVM), XGBoost, and Artificial Neural Networks <cit.>. <cit.> successfully applied the random forest algorithm to the high-redshift quasars (4.8 < z < 6.3) search using the combined dataset of Pan-STARRS DR1 <cit.> and ALLWISE <cit.>. Although the y band from Pan-STARRS is essential for high-redshift quasars at z > 6.5, it is comparably shallow (3π stack 5σ depth is ∼ 21.4), which might not be efficient in capturing faint high-redshift quasars. Here in this study we will use data from Dark Energy Spectroscopic Instrument (DESI) Legacy Imaging Surveys <cit.>, which is roughly 2 magnitudes deeper than the 3π stack 5σ depth of y band of Pan-STARRS. We will demonstrate that the constructed features other than the simple photometry provided in the catalog are critical to a successful machine learning model for high-redshift quasars search.
The aim of this paper is to search new high redshift quasars from the Legacy Survey data <cit.> combined with Wide-field Infrared Survey Explorer (WISE) all-sky survey data <cit.> based on machine learning algorithms.
The structure of the paper is organized as follows. In Sec. <ref> we introduce the catalog data we will use in this study and the training sample for the machine learning algorithm. In Sec. <ref> we briefly discuss the various aspects of the machine learning algorithms. In Sec. <ref> we estimate the photo-z of the high-redshift quasar candidates. In Sec. <ref> we discuss the procedure to obtain the high-redshift quasar candidates, the spectral verification of the candidates and present the final catalog. Finally we summarize. The magnitudes used in this paper are generated based on the AB system after applying Galactic extinction corrections. We use a ΛCDM cosmology with Ω_Λ = 0.7, Ω_m = 0.3 and H_0 = 70 km s^-1Mpc^-1.
§ DATA
§.§ Legacy Survey Data
We use the photometric data from the Data Release 9 (DR9) of the Legacy Survey, which includes observations obtained by the DECam at the CTIO 4 m (DECaLS), an upgraded MOSAIC camera at the KPNO 4 m telescope (MzLS, Mayall z-band Legacy Survey), and the 90Prime camera <cit.> at the Steward Observatory 2.3 m telescope (BASS, Beijing-Arizona Sky Survey). Briefly, the Legacy Survey <cit.> was initiated to provide targets for the DESI survey drawn from deep, three-band (g=24.7, r=23.9, and z=23.0 AB mag, 5σ point-source limits) images, roughly two magnitudes deeper than the Sloan Digital Sky Survey (SDSS) data. The survey covers about 14,000 deg^2 of sky visible from the northern hemisphere between declinations approximately bounded by -18^∘ and +84^∘. The depth of SDSS <cit.> and Pan-STARRS <cit.>) is insufficient to provide reliable DESI targets. The footprint of DR9 (Figure <ref>) also includes an additional 6,000 deg^2 extending down to -68^∘ imaged at the CTIO by the Dark Energy Survey <cit.>. Furthermore, the Legacy Survey also includes g, r, z band images in the North Galactic Cap (NGC) region that overlaps with DES, extending the coverage of the Legacy Survey to over 20,000 square degrees.
§.§ The Wide-field Infrared Survey Explorer Data
In addition to the Legacy Survey data, we also take advantage of the WISE data release, providing infrared photometry of four bands at central wavelength of 3.4, 4.6, 12, and 22 μm (W1, W2, W3, W4) over the entire sky <cit.>. The W1, W2, W3 and W4 data in the Legacy Survey catalog are based on forced photometry in the unWISE stacked images (including all imaging through year 7 of NEOWISE-Reactivation) at the locations of the Legacy Survey optical sources. For our selection process we only focus on the W1 (3.4 μm) and W2 (4.6 μm) photometry with limiting magnitudes of 19.8, 19.0 in AB magnitude (Vega: 17.1, 15.7), respectively. The conversion between AB and Vega magnitude for W1 and W2 is W1_ AB = W1_ Vega + 2.699 and W2_ AB = W2_ Vega + 3.339, respectively.
§.§ Training sample
We use supervised machine learning algorithms to perform the classification, therefore the training sample is critical to construct a good and reliable model. Any bias in the training sample will propagate to the new data, resulting in highly unreliable outputs. In order to construct a representative and unbiased training sample, we collect the spectra-confirmed signal (high-redshift quasars at 5.0 < z < 6.5) and all possible contaminators which populate the same or similar color space (eg. r-z, z-W1, z-W2) because of similar absorption features in the optical and near-infrared bands for both the signal and the contaminators. For quasars at z ∼ 5, Lyman series absorption systems begin to dominate in the r band and Lyα emission moves to i or z band, resulting in redder r-z color. However, as the redshift increases, most z > 5 quasars enter the MLT dwarfs locus in g-r/r-z color-color diagram (as shown in the upper-left subplot of Figure <ref>). At z ≳ 5.7, Lyα emission line moves into the z band and the g, r, i bands become the drop-out bands, resulting in very red r-z color and comparably red z-W1 color. And the infrared photometry W1 and W2 (constructed colors such as z-W1, z-W2, W1-W2) are useful for all redshifts. The medium-redshift quasars with redshift just below 5, the quasars at lower redshift, the MLT dwarfs, the different types of stars also share similar color-space with the high-redshift quasars, consisting of the major contaminants for the signal of high-redshift quasars.
We then obtain the photometry data from both Legacy Survey (g, r, z bands) and WISE (W1, W2 bands) for the spectra-confirmed high-redshift quasars (5.0 < z < 6.5), and the spectra-confirmed contaminators, which include the MLT brown dwarfs, AFGK stars, medium-redshift quasars (3.5 < z < 5.0), low-redshift quasars (1.5 < z < 3.5), and very low-redshift quasars (0 < z < 1.5). Although there are a number of quasars at z > 6.5, we will not include those since the Lyα emission of those quasars is already beyond the z band coverage of the Legacy Survey. We also do not include O- and B-type stars, because they are far from the high-redshift quasars in color space <cit.>. The distant compact early-type galaxies at intermediate or higher redshift, exhibiting very red r-z color, might be a possible type of contaminants, which will be further discussed in Sec. <ref>.
The sample of high-redshift quasars is extracted from the Supplementary Database[https://www.annualreviews.org/content/journals/10.1146/annurev-astro-052920-102455#supplementary_dataSupplementaryDatabase.org/] <cit.>, which are constructed from previous observations <cit.>, and the DESI spectra-confirmed quasars at z > 5[https://cdsarc.cds.unistra.fr/viz-bin/cat/J/ApJS/269/27#/browseDESIHighzQSO.org/] <cit.>.
The MLT brown dwarfs are extracted from both UltracoolSheet[http://bit.ly/UltracoolSheetUltracoolSheet.org/] <cit.>
and SDSS DR16 Database[https://www.sdss.org/dr16sdss.org/dr16] <cit.>.
The quasars at z < 5 are extracted from SDSS quasars catalog DR16[https://data.sdss.org/sas/dr16/eboss/qso/DR16Q/data.sdss.org/sas/dr16/eboss/qso/DR16Q]<cit.>.
The AFGK stars are obtained from the SDSS DR16 Database <cit.>.
To obtain a training sample as representative as possible, we remove the entries whose z band magnitude is either too bright (top 1%) or too faint (last 1%) for each category, which are not representative among the whole training sample. The number of instances for each category in the training sample is shown in Table <ref> and the redshift distribution of the quasars is shown in Figure <ref>. Note that the data size for each class in the training sample is not equal, which means that we are dealing with the imbalanced classification. This issue will be further discussed in detail in Sec. <ref>.
The Legacy Survey catalog data also provides fluxes at different aperture radius for each band, calling `apflux', which is useful because it provides more features for the machine learning model and essential to discriminate the signal from extended but compact sources like distant compact galaxies. The radius of the apertures are [0.5, 0.75, 1.0, 1.5, 2.0, 3.5, 5.0, 7.0] arcseconds for the g, r, z bands and [3, 5, 7, 9, 11] arcseconds for the W1, W2 bands. We will construct apflux ratios as the machine learning features, with a detailed description in Sec. <ref>.
Moreover, the combined fluxes grz and W, which are constructed from g, r, z fluxes and W1, W2 fluxes, respectively, are useful in quasars search according to a recent study <cit.>. The definitions of the combined fluxes of grz and W and the conversion between flux and magnitude are:
flux_grz = (flux_g + 0.8*flux_r + 0.5*flux_z)/2.3
flux_W = 0.75*flux_W1 + 0.25*flux_W2
mag = 22.5 - 2.5*lg(flux)
The measurements we will use in the machine learning model include: g, r, z, grz, W1, W2, W fluxes and magnitudes, as well as apfluxes. Some selected color-color distributions of the training sample are shown in Figure <ref>. The top panels are based on the data set with no missing values of all measurements for high-redshift quasars (252 sources), and for all background sources. The bottom panels are based on the entire dataset, where the missing values of high-redshift quasars have been filled in. The process of filling in the missing values is described in detail in the following Sec. <ref>. As is shown clearly that some colors such as g-r, g-W2, r-grz are effective in separating high-redshift quasars from the contaminators. We would expect that the machine learning algorithm would significantly enhance the performance in separating the high-redshift quasars from the contaminators when making full use of all color information. It is also obvious that the MLT brown dwarfs and the mid-z quasars are the major contaminators since they overlap with the high-redshift quasars to a greater extent in the color space.
§.§ Imputation of Missing Values
Currently, the number of spectra-confirmed quasars at z > 5 is 727, of which 602 quasars are in the Legacy Survey footprint. Because of the color drop-out, a large number of quasars (335) have nearly zero or even negative fluxes in the g, r bands, and a small fraction of quasars (29) also have nearly zero or negative flux measurements in the W1 and W2 bands due to the low sensitivity of WISE survey. There are no well-defined magnitudes when converting from zero/negative fluxes into magnitudes, which we call “missing values". The missing value issue highly limits the high-redshift quasars sample since the machine learning algorithm could not deal with missing values. Therefore, special considerations and techniques may be required to address this issue to ensure accurate analysis and interpretation of the available data.
To fully utilize the limited photometric information available for the known high-redshift quasars, we first employ common imputation methods to handle missing values in the training dataset. One common imputation method is to fill in zeros for the missing values, but it will highly bias the classification model because of the extreme color constructed from the filled-in zeros. Moreover, a zero AB mag has a physical meaning, so we will not discuss it here. The other common imputation methods include: 1) mean imputation, replacing missing values with the mean value of the respective feature; 2) LS limit imputation, substituting missing values with the limiting magnitudes of the Legacy Survey; 3) future imaging survey limit imputation(LR limit), adopting the 5σ point-source depth of 27.5 AB mag from the Large Synoptic Survey Telescope <cit.> for g, r, z bands <cit.> and of 24 from
the Roman Space Telescope <cit.> for W1, W2 bands <cit.> to replace the corresponding missing values, which will be further discussed in Sec. <ref>; 4) random forest (RF) imputation, training a random forest regression model to generate reasonable values for missing values; 5) Multiple Imputation by Chained Equations <cit.> forest, generating imputed values by iterative random forest model on multiple independent divisions of dataset. To validate the feasibility of the five methods mentioned above, we compare the distributions of the imputed values with the complete dataset without missing values and assess the differences.
Here we briefly introduce MICE. MICE is based on the random forest algorithm, and it constructs multiple different imputed datasets randomly to examine the uncertainty and other effects caused by missing values. Each dataset undergoes independent imputation processes, where the imputed values for each feature in the dataset are predicted using a random forest regression model trained on the non-missing dataset. The above process is repeated to update the imputed values. This iterative process does not stop until the average of the imputed values for each feature tends to converge. One big advantage of MICE is its ability to automatically select features related to the imputation variable without requiring manual specification. It also calculates weights for each variable to optimize the variable selection and model fitting during the imputation process. By combining the predictions from multiple random forest models, MICE provides accurate and reliable imputation results.
We determine the best imputation method by comparing the density distributions of the magnitudes between the imputed dataset and the complete dataset. We use R^2 to represent the differences between the imputed datasets and the complete dataset for these features. R^2 assesses how well the model explains variable changes (0 to 1, closer to 1 for better fit). Figure <ref> illustrates the density distributions of the four magnitudes with missing values for the complete dataset as well as the five imputed datasets. For the g magnitude with more than half missing values (55.6%), the RF and MICE methods can achieve a R^2 value of > 0.90, while the results of the other methods are poor. It is easily seen that MICE performs significantly better than the RF model. For the r, W1, W2 magnitudes with fractions of missing values of 12.29%, 2.33%, 3.82%, respectively, the performance of RF and MICE are similar and are much better than the other methods. Note that the R^2 criteria could not reflect the effectiveness of the imputation method using future imaging survey limits, and we will further discuss it in Sec. <ref>.
Overall, MICE is an effective approach that leverages the power of random forest regression models for missing value imputation. It is very effective in that it automates feature selection and enhances the imputation process through weighted variables. The combination of multiple random forest models enables MICE to provide accurate and reliable imputation results. Therefore, we will adopt the MICE method to perform imputation in the following discussion. The robustness of the MICE imputation will be discussed in the following discussions.
§ MACHINE LEARNING
In the past decades, many large surveys such as SDSS <cit.>, Legacy Survey <cit.>, DESI <cit.> collected tons of data that is challenging to analyze. And the approach using automatic methods like machine learning is both efficient and easily reproducible. Here we briefly introduce the various aspects of the machine learning technique.
§.§ Metrics
We already introduce the training sample in Sec. <ref>. Here we will introduce the other aspects of the machine learning technique. To evaluate the performance of a classification model, we typically consider the evaluation metrics including precision, recall, accuracy, f1 score, and the area under the Receiver Operating Characteristic curve (ROC AUC). In the evaluation process, the comparison between the predicted labels from the machine learning algorithms and the true labels yields four outcomes: true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN), from which we can obtain the desired evaluation metrics.
Precision refers to the fraction of true positives in total positives predicted, defined as:
precision = TP/TP + FP
Recall refers to the ratio of positive class instances correctly identified by a classifier, defined as:
recall = TP/TP + FN
The f1 score metrics combines precision and recall. In fact, the f1 score is the harmonic mean of the two. A high f1 score symbolizes a high precision as well as a high recall. It is defined as:
f1 = 2 ×precision×recall/precision + recall
The f1 score belongs to a unified formula f_β where β = 1. f_β is defined as:
f_β = (1 + β^2)(precision×recall)/(β^2 ×precision) + recall
On datasets with balanced class quantities, the f1 score performs well. For highly imbalanced dataset, the Adjusted f-score (AGF) is an improvement upon the f_β score <cit.>. The AGF considers all elements of the original confusion matrix, making it a more equitable evaluation metrics for the classification of the minority class. It is defined as:
AGF = √(f_β=2× inv f_β=0.5)
where the f_β=2 score weights recall more than precision, the inv f_β=0.5 refers to the f_β=0.5 score calculated using a confusion matrix obtained by inverting the labels, where TP is replaced by TN, and then substituted into f_β.
In principle, the precision, recall, f1 as well as AGF is least affected by the imbalanced data sample. And it is almost impossible to tune the desired metrics for all classes for a multi-class classification task. Therefore, we will focus on the metrics of precision, recall, f1 and AGF for the signal of high-redshift quasars in the following discussions, because the correct identification of the high-redshift quasars from enumerous backgrounds is our focus in this study. Although there will be confusion between the classes of contaminants, such as the confusion between the different types of stars, they are not the focus of this study. But we will discuss the weighted metrics of all other classes except the high-redshift quasars for completeness.
§.§ Feature Selection
The feature selection, the appropriate algorithm and the choices of class ensemble is essential to a good machine learning model, all of which will be discussed in the following. Many studies <cit.> have demonstrated that random forest is very effective in separating signals from contaminators, therefore we will adopt the 11-class random forest classifier as our fiducial model to discuss the feature selection, and the 11 classes are shown in Table <ref>. And then we will compare the different machine learning algorithms and finally we will discuss whether the different class ensembles will help improve the model performance.
The measurements we will use in this study include dereddened magnitudes, dereddened fluxes, and aperture fluxes in the g, r, z, grz, W1, W2, and W bands. The aperture radius are [0.5, 0.75, 1.0, 1.5, 2.0, 3.5, 5.0, 7.0] arcseconds for the g, r, z bands, and are [3, 5, 7, 9, 11] arcseconds for the W1 and W2 bands. The apfluxes are essential to remove extended but compact sources like galaxies, which will be further discussed in Sec. <ref>.
The features used in the machine learning algorithm are constructed from magnitudes (colors), dereddened fluxes (flux ratios) and aperture fluxes (flux ratios). The magnitude features include g, r, z, W1, W2, grz, W. The color features are constructed from the subtraction of two magnitudes. The flux ratios are constructed from the division of two fluxes, which is the same as the construction of color features. The apflux ratios are constructed as follows: 1) for apfluxes in the same band, we construct the apflux ratios by taking the ratio of aperture fluxes at adjacent radius indices; 2) for apfluxes across different bands, we construct the apflux ratios by taking the ratio of aperture fluxes at the same radius index across adjacent bands for the first 5 radius for both optical (g, r, z) and the infrared (W1, W2) bands. For the extra three radius in the optical bands (g, r, z), the apflux ratios are constructed by the ratio of aperture fluxes at the same radius across different bands.
The apflux ratios can be represented by a general formula: ap_[a]_[i]/ap_[b]_[j], where [a] and [b] represent indices for the photometric bands (g, r, z, W1, W2). And the indices [i] and [j] representing the aperture radius, range from 1 to 8 for the optical bands and range from 1 to 5 for the infrared bands. The conditions are:
* [a] = [b] (the same band) and [i] - [j] = 1 (adjacent radius indices).
* [a] - [b] = -1 (adjacent bands) and [i] = [j] (the same radius index).
There are 7 magnitudes, 21 colors or flux ratios and 55 apflux ratios in total. To fully understand the contribution to improving the performance of the machine learning algorithm, we investigate the following subsamples with different feature subsets:
* FeatureSet-A (7): g, r, z, W1, W2, grz, W magnitude features.
* FeatureSet-B (28): magnitude (7) and color (21) features.
* FeatureSet-C (83): magnitude (7), color (21) and apflux ratio (55) features.
* FeatureSet-D (83): flux (7), flux ratio (21) and apflux ratio (55) features.
FeatureSet-D is the mirror of FeatureSet-C and the big advantage of FeatureSet-D is that there is no missing value issue. We train the random forest classification model on the above four different datasets separately. In the model training, we use RandomizedSearchCV to find the best set of hyperparameters and we split the entire training sample into “training set" and “test set". Table <ref> presents the evaluation scores of the four feature sets for the class of high-redshift quasars. We also use cross-validation (CV) to avoid overfitting on the training sample, and the fold number is set to be 10. In Table <ref>, the “Val" column represents the average results of ten equally folds of the cross-validation datasets. We adopt the standard deviation within the ten folds as the error. The “Test" column shows the results of the models for the test set.
The comparison between FeatureSet-A and B tells us that the addition of 21 color features significantly improve the performance for the validation data set. This also indicates that in high-dimension color space, the high-redshift quasars are separable from the contaminators. When incorporating the additional 55 apflux ratio into the models, we find that the performance is also enhanced for the test set. Additionally, we use pure flux values from different bands and their ratios as features in FeatureSet-D. Comparing the performance of FeatureSet-C and FeatureSet-D in the table, it can be observed that their precision are very similar, with FeatureSet-C having slightly higher recall, f1 and AGF. The consistency of all evaluation metrics between FeatureSet-C and FeatureSet-D as shown in Table <ref> demonstrates the robustness of our imputation method.
Table <ref> presents top 20 features ranked by the importance for FeatureSet-C. We can see that the color features align with the trends shown in Figure <ref>. As can be seen clearly, the newly introduced features of grz and W magnitudes have comparably high importance, demonstrating that these additional features indeed play an important role in improving the model performance. We notice that the most important feature, z-W2, is not effective in separating the high-redshift quasars from the contamination shown in Figure <ref>, which will be further discussed in detail in Sec. <ref>.
§.§ Comparison of Different Algorithms
There are many machine learning algorithms with their own strength and weakness, here we compare the performance of different machine learning algorithms on separating high-redshift quasars from contaminators. The classification algorithms used in this study are the following:
* : A non-parametric algorithm that forms a model by placing the data from the training set in the high-dimensional feature space and makes predictions based on the similarity between the data point and its k nearest neighbours of the training set in the high-dimension feature space. The hyper-parameters in this algorithm include number of neighbors (n_neighbors), weights (w) and distance metric (p). The distance metric can be either Manhattan distance or Euclidean distance, used to measure the similarity.
* : A non-parametric supervised learning that constructs a tree-like classification or regression structure by continuously dichotomising features based on a discrete set of values. In these tree structures, leaves represent class labels and branches represent conjunctions of features that lead to those class labels. The hyper-parameters in this algorithm include criterion for split quality, maximum depth of trees, minimum number of samples required to split an internal node, minimum number of samples required at each leaf node, maximum number of features to consider for each split, etc.
* : An integrated approach that performs classification or regression by combining multiple decision tree models, where each tree is constructed using a random subset of the training data and a random subset of features. This randomness helps to reduce the overfitting issue and to improve generalisation. The hyper-parameters of this algorithm include number of estimators and the ones of the decision tree algorithm.
* : A gradient boosting algorithm that makes predictions by iteratively training multiple decision trees. Each decision tree is trained based on the residuals of the previous tree to gradually reduce the prediction error. The learning rate in LGBM algorithm is critical to the model performance. Other hyper-parameters are similar to the ones in random forest classifier.
* : A plain Bayesian probabilistic classification algorithm assuming that features are independent of each other and that their conditional probabilities follow a Gaussian distribution. The prior probability (p) is the most important hyper-parameter of this algorithm. By default, p is estimated from the proportion of samples in each category in the training data.
We apply the above algorithms to FeatureSet-C and obtain the corresponding precision, recall, f1 and AGF scores of the high-redshift quasars for these models, as shown in Table <ref>. The maximum value for each evaluation metrics is highlighted. It can be seen clearly that the random forest model is the best one for all four evaluation metrics. Then we will adopt the random forest algorithm in the following discussion.
Note that FeatureSet-C is the best feature set selected via the random forest model, but it might not be the best feature set for other algorithms. Here we investigate whether FeatureSet-A or FeatureSet-B on other algorithms have better performance than FeatureSet-C. We find that all metrics on FeatureSet-A for other algorithms are worse than those on FeatureSet-B. We present the best metrics using FeatureSet-B for the other four algorithms, we have the best precision (0.88 ± 0.05, from KNN), best recall (0.84 ± 0.06, from LGBM), best f1 score (0.83 ± 0.05, from LGBM) and best AGF (0.83± 0.05, from LGBM). It is obvious that the results obtained on FeatureSet-B for these four algorithms are all lower than the corresponding metrics on Featureset-C. Therefore, we conclude that Featureset-C is the best feature set not only for the random forest algorithm, but also for other classification algorithms considered in this work.
§.§ Class Selection
We adopt the 11-class classification model in the above discussion, and the 11 classes are shown in Table <ref>. In order to investigate whether further reducing the number of classes in the training sample can improve the model's performance, we will now discuss four scenarios in which certain classes from FeatureSet-C are merged. The four scenarios are the following:
* 11-class classification (P_11):
* P_11: vlowz, lowz, midz, highz quasars and M, L, T, A, F, G, K dwarfs.
* 4-class classification (P_4):
* P_4^0: vlowz, lowz, midz quasars
* P_4^1: highz quasars
* P_4^2: M, L, T dwarfs
* P_4^3: A, F, G, K stars
* 3-class classification (P_3):
* P_3^0: vlowz, lowz, midz quasars
* P_3^1: highz quasars
* P_3^2: M, L, T, A, F, G, K dwarfs
* 2-class classification (P_2):
* P_2^0: vlowz, lowz, midz quasars and M, L, T, A, F, G, K dwarfs
* P_2^1: highz quasar
We apply the above four scenarios to FeatureSet-C and obtained four new datasets, called P_11, P_4, P_3, P_2. In P_4, P_3, P_2, the reason why we can combine vlowz, lowz, midz quasar samples is that they are in the period when the cosmic reionisation epoch is already over. In P_4 scenario, we separate the M, L, T dwarfs from other stars because they are the main contaminants of high-redshift quasars and they are quite different from other types of stars in the optical and near-infrared bands. M, L, T dwarfs are grouped together because of the similarity between them. A, F, G, K dwarfs are also grouped together because they are relatively far away from the high-redshift quasars on the color-color diagrams, as can be seen from Figure <ref>.
We train the random forest classification model on the above four datasets. The precision, recall, f1 and AGF score of the high-redshift quasars for the four scenarios are presented in Table <ref>. Meanwhile, we also present the average metrics weighted by the number of instances of each class for all other classes except the high-z class for the test set. It is clearly seen that the precision and recall are consistent within error for the different scenarios identified by the number of classes in the dataset. This might indicate that there is certain correlation among the different contaminators, which is demonstrated by the weighted metrics of all other classes. The relatively stable precision indicates that the sample of high-redshift quasars is clean and possesses inherent features that is distinguishable from other contaminants, ensuring high precision and recall regardless of the number of classes in the classification models. As we reduce the complexity of the classification model by fewer classes via combining certain classes, the weighted metrics should get better
because the confusion between classes that we combine gets ignored, which is consistent with the results shown in Table <ref>.
Based on the statistics in Table <ref>, we decide to pick the 11-class model for two reasons. First of all, the 11-class model would provide us the full picture of contamination within the different classes, expecially the contamination for the high-redshift quasars. Secondly, we find that the strict boundary between “mid-z" and “high-z" redshift bins introduces confusion between these two classes, which should not be a concern. The inspection on those “mid-z" quasars, which are classified as “high-z" quasars, tells us that the redshift for those “mid-z" quasars are between 4.84 and 4.98, right around the redsfhit boundary of 5. Further inspection on the spectra of these “mid-z” quasars indicates that they bear a striking resemblance to the typical spectra of high-redshift quasars. This implies that these “mid-z" quasars which are predicted as the “high-z" class are not actually contaminants of the “high-z" quasars. Therefore, the precision of the “high-z" class for the 11-class model has been underestimated. If we exclude these “mid-z" quasars, the precision of of the “high-z" class for the 11-class model would reach 0.99.
§.§ Important feature: i-band photometry
For quasars with redshifts between 5 and 6.5, the Lyα emission line is shifted to the wavelength range of 7296 Å - 9120 Å. However, the Legacy Survey g, r bands does not cover this wavelength range, and the z band only has a small coverage in this wavelength range. Then the i band, whose wavelength coverage mostly overlaps with this wavelength range, is essential for high-redshift quasar (5 < z < 6.5) searching . Fortunately, the Legacy Survey DR10 not only includes images in the g, r, and z bands from DECaLS, but it also contains DECam observations for g, r, i, and z bands from several non-DECaLS surveys, primarily the Dark Energy Survey, the DELVE Survey <cit.>, and the DeROSITA Survey[https://noirlab.edu/science/programs/ctio/instruments/Dark-Energy-Camera/DeROSITASDeROSITAS.org/]. These surveys mainly cover the southern sky (Declination ≤ 32.375^∘). The area covered in the i band for number of passes greater than 3 is 13,024 square degrees, while the total area covered in the g, r, i, and z bands jointly is 9,923 square degrees.
Because the combined sky coverage of the g, r, i, and z bands is approximately half of that for the g, r, z bands, we divide the entire dataset into the one that includes the i band data (FeatureSet-i) and the one that does not include the i band data (FeatureSet-non-i). The feature construction is the same as above except that the FeatureSet-i dataset includes all i band related measurements. We apply MICE to fill in missing values as discussed in Sec.<ref> and we train models using the 11-class
random forest classification algorithm on these two datasets. We evaluate the models on the validation set and test set to assess the i band importance.
Table <ref> shows the evaluation scores of validation and test sets for the two datasets (FeatureSet-i v.s. FeatureSet-non-i), as well as the first few most important features of the models trained on FeatureSet-i. Compared to FeatureSet-non-i, which does not include features related to the i band, the models trained on FeatureSet-i achieve higher precision on the test set, while the other evaluation scores are similar. This indicates that the i band can improve the performance of our model in searching for high-redshift quasar to some extent. The columns of “Feature" and “Importance" display the top-ranked features and the corresponding importance in the models trained on FeatureSet-i. It can be observed that the feature g-i has high importance, which reflects the significance of i band related features in our model.
In this section, we already demonstrate that the i band and its corresponding features are indeed playing an important role in the classification model for high-redshift quasars. However, due to the limited amount of i band data, our model is ultimately trained on the data provided by Legacy Survey DR9. In future work, we will consider incorporating photometric data from more bands to obtain additional features and further improve the model's performance.
§.§ Discussion of Random Forest performance
After the discussion on feature selections, the comparison of different classification algorithms and class selections of the dataset, we find that the 11-class random forest classification model on the FeatureSet-C and D yields the best performance for separating the high-redshift quasars from the contaminators. Figure <ref> shows the confusion matrix of the model using FeatureSet-C for the test set, where each row represents the true class and each column represents the predicted class. The percentages in each square indicate the fraction of the corresponding class in each column. Squares with a percentage of less than 1% are not displayed. For the diagonal elements, these percentages represent the precision of the corresponding class.
For the class of high-redshift quasars, the precision achieves as high as 96.43% and the recall can also reach 91.53%. The precision of the high-redshift quasars for this model is significantly enhanced compared to the previous work <cit.>. The high precision ensures that the final high-redshift quasar candidates have much larger probability to be true high-redshift quasars, which significantly improves the efficiency of the future spectra verification. The recall is also significantly enhanced compared to the previous work <cit.>, which means much less true high-redshift quasars will be missed. The average precision and recall weighted by the number of instances of each class for all other classes except the signal can also reach 86% and 87%, respectively, which are also significantly higher than those presented in <cit.>.
As can be seen that the main contaminators to the high-redshift quasars are M, L, and T dwarfs, which agrees with the studies using traditional color-cuts, because M, L, and T dwarfs exhibit many absorption features in the optical and near-infrared bands, making them to have similar colors as the high-redshift quasars in those bands. To further improve the classification model, on one hand, we need to expand the training sample size, especially the high-redshift quasars, L and T dwarfs. On the other hand, considering the inclusion of photometric data from additional bands, such as i, and y bands, which are not available in Legacy Survey DR9 but will be available in future imaging surveys, may provide us with useful features to distinguish the high-redshift quasars from the contaminators.
For quasars with z < 5, they all have ∼ 90% precision, indicating that each class has its own intrinsic characteristics that make them distinguishable. However, there is also significant contamination among them. On one hand, this is because they are in the post-reionization era of the universe, where the IGM is already mostly ionized, resulting in color similarities among these quasars. On the other hand, our rough setting of the redshift boundaries might lead to misclassification of quasars around the redshift boundary into other classes.
For M, L, and T dwarfs, the contamination among them is significant. This is because they all belong to subtypes of brown dwarfs, which are differentiated based on the spectra <cit.>, and photometric data lacks the precision of spectroscopic data. We find that a significant portion of L and T dwarfs are misclassified as other classes, which is due to the high similarity in the optical spectra and comparably small sample size. Moreover, because of the limitation of the sample size, the color space for L and T dwarfs are not fully spanned and not representative enough to separate them. The issues for both M dwarfs and quasars with z < 5 are highly mitigated. As for other type stars, although there is serious contamination among them, we have not find any significant impact on the predictions of the high-redshift quasars.
We notice that the top-ranked feature of z-W2 in Table <ref> contradicts to the color-color diagram presented in Figure <ref>. Here we investigate this issue in detail. It is known that the color-color diagram presented in Figure <ref> is the projection of the high-dimensional color space. Then the projection on certain direction might diminish the separation between the high-redshift quasars and the contaminators. In order to fully resolve this issue, we construct a three-dimensional (3-D) color space from the top ranked features presented in Table <ref>. Figure <ref> illustrates the color diagram of high-redshift quasars and the main contaminant sources, MLT dwarfs, in the 3-D color space of z-W2, g-z, and r-grz. It can be observed that while there was significant overlap in the 2-D color space of z-W2 and r-grz, these sources exhibit clear separation in the 3-D color space. This demonstrates that the signal and the contaminators can be better distinguished in high-dimensional color space, which is not evident in the projected 2-D color space.
§.§ Discussion of the imbalanced issue
We notice that the number of instances in each class in the training sample is not balanced, as shown in Table <ref>. Here we are discussing the impact of the imbalanced data sample on the random forest classification model. The ratio of the most numerous class (low-z quasars) to the least abundant class (high-z quasars) is as high as 700:1.
Typically, classification models trained on the extremely imbalanced datasets tend to favor the majority class, making it easier for the minority classes to be incorrectly classified as the majority. This skews the model's classification performance for the minority class, more specifically, it results in a lower recall for the minority class, which also
indirectly affects the precision of the minority class.
There are multiple approaches to deal with the imbalanced datasets. For example, the random forest classification algorithm offers a class_weight parameter to adjust the weights of each class based on the number of instances in the input dataset, with the goal of giving the classes that are less frequent a higher weight. This approach aims at balancing the dataset by ensuring that the minority classes are not overwhelmed by the majority classes. There are two modes (“balanced” and “balanced_subsample”) to adjust the class weights. The “balanced” mode uses the values of labels to automatically adjust weights inversely proportional to class frequencies in the input data. The “balanced_subsample” mode is the same as “balanced” except that weights are computed based on the bootstrap sample for every tree grown.
The two methods mentioned above do not involve any reprocessing of the training samples but rather assigning different weights to different classes to achieve balanced datasets. There are methods that directly alter the number of instances for each class in the training sample to balance the dataset, and the methods can be further divided into two categories. One approach is to reduce the entire training sample according to the minority class, which is called “under-sampling". And the other approach is generating more training sample according to the majority class, which is called “over-sampling".
The commonly used under-sampling algorithms include: Balanced Random Forest <cit.>, Near Miss <cit.>, and One Sided Selection <cit.>. The main difference among the algorithms lies in the way they undersample the training sample. BRF draws a bootstrap sample from the minority class and sample with replacement the same number of samples from the majority class. NM uses the KNN algorithm to remove majority class samples near the minority class. OSS rejects the majority class samples around the minority class and near the decision boundary, and those that have minimal impact on the model. We adopt these three under-sampling methods to reprocess the training sample.
The commonly used over-sampling algorithms include: Random Over Sampler (ROS), Synthetic Minority Oversampling Technique <cit.>, and Adaptive Synthetic Sampling <cit.>. The approaches to over sample the training sample are different for these algorithms. ROS randomly samples and duplicates the minority class with replacement. SMOTE synthesizes new samples between the nearest neighbors of minority class samples in the feature space, while ADASYN focuses on synthesizing new samples between the nearest neighbors of minority class samples misclassified by the KNN classifier. We adopt these three algorithms to reprocess the training sample.
Table <ref> shows the performance for the RF model without resampling and the RF models on the balanced samples for the test set.
For the class_weight balanced methods, it can be observed that setting the class_weight parameter has little impact on recall but reduces precision compared to the results on imbalanced samples.
For the under-sampling balanced methods, the precision of both BRF and NM methods decreases significantly, while both recalls show a significant improvement. This is because the under-sampling methods reduces the number of the majority samples significantly, leading to the loss of important information from the contaminators and shrinking in the color space. This issue is particularly obvious for the BRF method which is based on random under-sampling.
Surprisingly, we find that the OSS method returns comparably good precision and excellent recall.
Despite such a small training set, we are able to obtain comparably good models with balanced dataset. This also reflects the fact that the high-redshift quasars possess inherent features such that they are separable from the contaminators in the high-dimensional color space. For the over-sampling balanced methods, the metrics obtained by these three methods are quite consistent.
Compared to the metrics on the imbalanced dataset, they show little change in recall but a decrease in precision. Although the performance of the over-sampling balanced methods is excellent, one issue about these methods is that the over-sampled instances might even not exist or might be biased in the color space.
In conclusion, most of the methods discussed for handling imbalanced datasets do not show significant differences in all four metrics (precision, recall, f1, AGF) compared to those on the imbalanced dataset. Additionally, these balanced methods often prioritize on enhancing the recall of the high-z class while sacrificing its precision. However, what we are more concerned about is the precision of the high-z class, which is more crucial for correctly classifying a vast amount of data with unknown label using our model. Therefore, we will still adopt the 11-class random forest classification model on our original training sample.
§.§ Future Imaging Surveys
There are multiple complementary imaging surveys on the way, which will be the frontier for high-redshift quasars searching. On one hand, there will be more confirmed high-redshift quasars in the near future, expanding the size of training sample significantly. On the other hand, those deep imaging surveys not only provide much better images with high-quality, but also do they provide many more features for both high-redshift quasars searching and photo-z estimation. For example, the Chinese Space Station Telescope <cit.> will cover ∼ 17000 square degrees of the sky and has a wavelength coverage of 2600-10000 Å with broad-band filters of near-ultraviolet (NUV), u, g, r, i, z, y. The Roman Space Telescope
<cit.> has a wide wavelength coverage of 0.48-2.3 microns with 8 broad-band filters, whose near-infrared images will be a lot better than the WISE W1, W2 images. The 5σ AB magnitude limits could reach ∼ 24[https://roman.gsfc.nasa.gov/science/WFI_technical.htmlromanWFItechnical.org/] <cit.>
for filter F184 and F213 (central wavelength are 1.84 and 2.31 μm, respectively) even with only 55 seconds exposure. The Euclid Space Telescope <cit.> will deliver data over 15,000 square degrees of the sky with a wavelength range of 0.95-2.02 microns. The Large Synoptic Survey Telescope <cit.> will deliver the best images in 6 filters for over 18,000 square degrees sky coverage, and the 5σ point-source depth of the coadded maps can reach 27.5 AB magnitude for r-band <cit.>. The near- or mid-infrared photometric data from those wide-field imaging surveys will significantly improve the performance of high-redshift quasars searching.
Furthermore, those future imaging surveys could reach much deeper detection limit, which might resolve the missing values issue for some of the high-redshift quasars. Here we investigate the imputation method in which the missing values are replaced with the 5σ point-source depth
from future imaging surveys (refer to Sec. <ref> for details). The evaluation scores (precision: 0.96, recall: 0.91, f1: 0.93, and AGF: 0.95) of the imputation method using future imaging survey limits on the test set are very close to those using the MICE imputation method (precision: 0.96, recall: 0.92, f1: 0.94, and AGF: 0.96). This indicates that the future imaging surveys are effective to identify the high-redshift quasars. When incorporating additional features other than the g, r, z, W1, W2 bands from future imaging surveys, one could imagine how effective our random forest classification model will be.
§ PHOTOMETRIC REDSHIFT ESTIMATION
Generally, spectroscopic redshift is challenging to obtain for a large number of high-redshift quasar candidates, then the photometric redshift could provide us a good estimation to study various distance-related aspects of the high-redshift quasars. The photometric redshift estimation algorithms can be divided into two categories: template fitting and machine learning methods. Template fitting involves establishing a relationship between photometric magnitudes or fluxes and spectroscopic redshifts using a series of spectral energy distribution (SED) templates. For example, EAZY <cit.> is an algorithm based on template fitting, in which the default template set, as well as the default functional forms of the priors are from semianalytical models. However, machine learning methods seek to find the relationship between photometric information and spectroscopic redshifts based on spectra-confirmed samples. Algorithms that are commonly used include k-nearest neighbors <cit.>, random forest regression <cit.>, and CatBoost <cit.>. Here we are using the random forest regression algorithm to establish a model to estimate the photometric redshift of the high-redshift quasars and introducing the evaluation metrics of the photometric redshift estimation.
The evaluation of photometric redshift is often based on the ratio of the number where the absolute value of Δ z = z_ spec - z_ photo is less than some threshold, e, to the total number, as shown in Equation <ref>. <cit.> conducted statistical analyses on the spectroscopic redshifts of previously verified quasars and the photometric redshifts derived from their optical and near-infrared photometric data.
ϕ_e = 1/N∑_z∈ Z |Δ z| < e
here Z is the set of spectroscopic redshifts, N is the total number of objects with spectroscopic redshift. e is typically set to be 0.1, 0.2, 0.3 <cit.>.
The evaluation criteria mentioned earlier directly utilize the difference between the spectroscopic redshift and the estimated redshift, which is referred to as “non-normalized" and remains unaffected by the magnitude of the spectroscopic redshift. Another evaluation criterion is based on using the normalized difference scaled by the spectroscopic redshift, as shown in Equation <ref>.
Δ z_ norm = Δ z/1 + z_ spec ,
here z_ spec is the spectroscopic redshift.
The fraction of entries where Δ z_ norm exceeds a certain threshold compared to the total number is commonly referred to as the outlier rate, η. As shown in Equation <ref>, we typically set this threshold to 0.1. A smaller outlier rate indicates that more of the predicted values are within the predetermined margin of error. This outlier rate evaluation criterion is more suitable for lower photometric redshift evaluations. The outlier rate is defined as:
η_0.1 = 1/N∑_z∈ Z|Δ z_ norm| > 0.1 ,
where represents an eigenfunction, defined as:
x =
1,true
0,false
,
We utilize three widely employed machine learning algorithms (KNN, RF, CatBoost) to construct regression models across two distinct datasets, FeatureSet-mag and FeatureSet-flux, respectively. FeatureSet-mag comprises magnitudes of the g, r, z, W1, W2, W, and grz bands, alongside the apfluxes in the g, r, z, W1, and W2 bands. FeatureSet-flux encompasses fluxes of the g, r, z, W1, W2, W, and grz bands, in addition to the apfluxes in the g, r, z, W1, and W2 bands. There are only 602 high-redshift quasar samples (5 < z_ spec < 6.5), in order to enhance the accuracy of redshift predictions, we include the mid-z quasars with redshifts greater than 4.5 in the training sample. Similar to the classification models above, we divide the dataset into training, testing, and validation sets and use RandomizedSearchCV to find the best set of hyperparameters in the model's hyperparameter space.
We have ultimately developed six regression models based on the quasars with redshifts ranging from 4.5 to 6.5. Table <ref> presents the R^2 (R-squared) and MSE (Mean Squared Error) scores of these six models. R^2 and MSE are common regression model evaluation metrics. R^2 assesses how well the model explains variable changes (0 to 1, closer to 1 for better fit), while MSE directly reflects prediction accuracy (smaller values for higher accuracy). Among the different training sets, the models trained on FeatureSet-mag outperform those trained on FeatureSet-flux, which also demonstrates the effectiveness of the MICE imputation method we use earlier. When considering different regression algorithms, the KNN model performs significantly worse compared to the other two models. Both the CatBoost and RF models exhibit very close and relatively good scores, with the RF model outperforming the CatBoost model.
Figure <ref> illustrates the distribution of spectroscopic redshifts and photometric redshifts for all quasars with redshifts from 4.5 to 6.5 for the three machine learning regression algorithms (KNN, RF, CatBoost) based on FeatureSet-mag, along with the distribution of Δ z_ norm. According to the traditional evaluation criterion ϕ_ e, the KNN, RF, and CatBoost models achieved ϕ_0.1 values of 42.30%, 66.98%, and 54.30%, respectively, representing the proportion of Δ z < 0.1 to the total count. The ϕ_0.2 values for these models are 65.62%, 90.64%, 86.72%. The ϕ_0.3 values are 78.38%, 96.60%, and 95.40%. Judging based on the normalized evaluation criterion η_0.1, with results of 2.89%, 0.68%, and 0.51% for the KNN, RF, and CatBoost models. In the lower panel of the three top subplots, the horizontal cyan lines represent Δ z_ norm = ± 0.1. It is clearly seen that there are almost no points falling outside the cyan parallel lines.
All three models show exceptional performance, although the results of the KNN model are slightly inferior to the other two models. In conclusion, it is evident that the RF regression model predicts redshifts with higher precision and accuracy. Therefore, we adopt the random forest regression model to predict the photometric redshifts for the high-redshift quasar candidates which will be discussed in the following.
§ HIGH-Z CANDIDATES AND VERIFICATION
§.§ Selection criteria
Based on the discussions above, we decide to adopt the 11-class random forest classification model on FeatureSet-C and FeatureSet-D. These models are named the “mag model" and the “flux model", respectively. Subsequently, we apply these two models separately to the entire Legacy Survey DR9 dataset to obtain the high-redshift quasar candidates. Before that, we have applied several selection criteria to reduce the size of the catalog data from LS DR9 (more than 1 billion entries) without missing too many signals. The selection criteria followed by the reasoning are:
* dered_mag_g,r,z,W1,W2 is not null if the features include colors.
* brick_primary=1 and maskbits!=[1,10,12,13].
* The type is `PSF'.
* snr_z>5, snr_W1>3 and snr_W2>2.
* dered_mag_z > 15 and dered_mag_z<21.5.
1) We apply the first criterion to remove sources with missing values because the trained random forest model does not allow missing values, and imputation is challenging due to the large data size and unknown object labels. 2) The second criterion is used to ensure that the quality of the source is minimally affected by uncontrollable factors (such as being on the boundary, near bright sources, etc). 3) Setting the source type to Point Spread Function (PSF) helps eliminate contamination from extended sources like galaxies. 4) The fourth criterion sets the signal-to-noise ratio constraints, as referenced in <cit.>, helping reduce the catalog size without missing too many signals. 5) The last criterion sets the magnitude constraints to help reduce either too bright or too faint sources in the z-band.
Although setting the source type to PSF can help us eliminate extended sources significantly, it cannot exclude all galaxies, such as the compact early-type galaxies at intermediate or higher redshift. These galaxies, due to their comparably high redshift and compactness, exhibit the morphology of PSF and have a redder color. We will investigate whether those distant compact galaxies are contaminants for the high-redshift quasars by training a binary RF classifier. For the training sample, we first obtain the galaxy sample from the SDSS DR17 Database[https://www.sdss4.org/dr17/sdss.org/dr17] <cit.> with spectral type of “Galaxy" and with z ≥ 0.2 (2,207,611 objects in total). Then, we cross-match the galaxy sample to the LS DR9 database with the above selection criteria to obtain the sample of distant compact galaxies (19,374 objects in total). The sample of high-redshift quasars is the same as the one presented in Table <ref>.
Similar to the 11-class classification model, we also obtain two parallel feature sets (FeatureSet-C and FeatureSet-D) with the best hyper-parameters determined by RandomizedSearchCV. The evaluation scores (precision, recall, f1, AGF) of the “high-z" class for the test set are 0.98, 0.99, 0.98, 0.99, respectively, for the “binary mag model", and the results for the “binary flux model" are consistent with those of the “binary mag model". The corresponding metrics (precision, recall, f1, AGF) of the distant compact galaxies for the test set are 1.00, 1.00, 1.00, 0.99, respectively, for both the “binary mag model" and the “binary flux model". These evaluation scores indicate that the distant compact galaxies almost do not contaminate the high-redshift quasars. Moreover, we find that the apflux ratios are indeed very effective to distinguish the high-redshift quasars from the distant compact galaxies.
Furthermore, we apply the trained 11-class classification models to the distant compact galaxy sample, and we found that only 4 galaxies (0.021%) are predicted as the “high-z" class by the “mag model", and only 1 galaxy (0.005%) is predicted as the “high-z" class by the “flux model". In conclusion, the contamination of the distant compact galaxies to the signal of high-redshift quasars is minimal and negligible.
§.§ High-z candidates
After applying the above selection criteria, we have obtained a total of 140 million sources from LS DR9, which is passed to the two trained random forest classification models to obtain two parallel sets of high-redshift quasar candidates. The random forest classification algorithm provides the predicted probability of each category for each source object, we take the class with the highest predicted probability as the predicted class for each source object. The number of objects for each predicted class using the “mag model" is shown in the Table <ref>. The “flux model" returns similar results, with 420,208 mid-z quasar candidates and 568,188 high-z quasar candidates, respectively. And there are 216,949 overlapping high-redshift quasar candidates which are both identified by the “mag model" and the “flux model".
To obtain more reliable high-redshift quasar candidates, further screening is necessary. The random forest classification model provides us with the predicted probabilities for each source object. By setting reasonable thresholds on these probabilities, we can select much more reliable high-redshift quasar candidates. Figure <ref> displays the distribution of photometric redshifts for high-redshift quasar candidates and known quasars with the probability of being classified as the high-z class. The orange and cyan dots depict our high-redshift quasar candidates, which are the overlapping candidates obtained from both the “mag model" and the “flux model" for the entire LS DR9 footprint. The red and green dots represent spectra-confirmed high-redshift quasars. The photometric redshift for the high-z class is provided by the random forest regression model trained on FeatureSet-mag.
We try first to set a probability threshold such that the precision of the known high-redshift quasars in our predicted results can reach 100%. To do that, we identify the highest predicted probability among the known high-redshift quasars that are incorrectly predicted as non-high-redshift quasars, and we call it as the first probability threshold, and the values are p_ thre1,mag = 0.41 and p_ thre1,flux = 0.40 for the “mag model" and the “flux model", respectively. They are represented by the lower blue and black horizontal lines in Figure <ref>, with the green dots representing known high-redshift quasars that are misclassified to other categories. This threshold helps us remove approximately one-third of the candidates.
The second probability threshold (p_ thre2) is related to the objects which are not “high-z" class but predicted as the “high-z" class. Among the objects predicted as the “high-z" class, we identify the highest probability among those that is incorrectly predicted as the “high-z" class. They are p_ thre2,mag = 0.83 for the “mag model" and p_ thre2,flux = 0.75 for the “flux model", and are represented by the upper blue and black dashed horizontal lines in Figure <ref>. The blue points in Figure <ref> represent the “mid-z" quasars that have been incorrectly predicted as the “high-z" class. As has been discussed earlier, the confusion is not a concern because their redshifts are around the boundary. This means that the precision of our trained random forest regression model is actually higher than what is currently being observed.
At this stage, we have obtained a relatively pure candidate set. To further ensure the plausibility of the high-redshift quasar candidates, we also quote the results for higher probability thresholds of 0.90 and 0.95. The number left after each of the probability threshold for the entire LS DR9 footprint is presented in Table <ref>. For the “mag model", we have p_ quasar≥ p_ thre1,mag: 198,339 candidates, p_ quasar≥ p_ thre2,mag: 2,984 candidates, p_ quasar≥ 0.90: 476 candidates, p_ quasar≥ 0.95: 32 candidates. For the “flux model", we have p_ quasar≥ p_ thre1,flux: 473,401 candidates, p_ quasar≥ p_ thre2, flux: 69,736 candidates, p_ quasar≥ 0.90: 11,012, p_ quasar≥ 0.95: 740. And for the overlapping candidates, we adopt the probability threshold derived from the “mag model", and we have p_ quasar≥ p_ thre1: 165,734 candidates, p_ quasar≥ p_ thre2: 2,984 candidates, p_ quasar≥ 0.90: 476, p_ quasar≥ 0.95: 32. Our catalog includes information of all candidates, and Table <ref> describes the columns of the data file, which is available in its entirety in machine-readable form. We present the catalogs obtained from the “mag model", the “flux model" and the overlapping results.
We notice that there is a big difference between the “mag candidate" and the “flux candidate" in Table <ref>. This difference becomes more pronounced as the probability threshold increases. Remember that the big advantage of the “flux model" (FeatureSet-D) is that there is no missing value issue. We suspect that the big difference between the “mag candidate" and the “flux candidate" is due to the missing value issue in the LS catalog. To fully understand this issue, we present the probability distributions of the high-redshift quasar candidates from the two models in Figure <ref>. The green area corresponds to the probability distribution of the candidates for the “mag model". The red area represents the probability distribution of the candidates for the “flux model".
It is clearly seen that there are many more candidates of the “flux model" than the “mag model", especially at the high probability region. It is obvious that the objects with missing values tend to have larger probability to be high-redshift quasars due to the color drop-out, which can be inferred from the training sample (more than half of the current known high-redshift quasars have missing value). If we remove the objects with missing values for the “flux model", which is represented by the sky blue area. It is obvious that the majority candidates of the “flux model" at high probability region have missing values, resulting in the big difference between the two models as shown in Table <ref>. The phenomenon observed in the high-redshift quasar candidates is consistent with that of the training sample.
Furthermore, by calculating the completeness of known high-redshift quasars in the test set at different thresholds, we provide a reference for the completeness of our high-redshift quasar candidates. The calculation of completeness is similar to recall. At different thresholds, if the probability of the high-z class obtained from the random forest is higher than the threshold, it is considered a correctly classified high-redshift quasar; otherwise, it is deemed a misclassified high-redshift quasar. The completeness of the high-z class is the ratio of correctly classified high-redshift quasars to the total number of high-redshift quasars in the test set. For the “mag model", the completeness of the known high-redshift quasars in the test set at different thresholds is as follows:
p_ quasar≥ p_ thre1,mag: 82.20%; p_ quasar≥ p_ thre2,mag: 33.05%; p_ quasar≥ 0.90: 18.64%; p_ quasar≥ 0.95: 5.09%. Similarly, for the “flux model", the completeness is as follows: p_ quasar≥ p_ thre1,flux: 79.49%; p_ quasar≥ p_ thre2,flux: 41.88%; p_ quasar≥ 0.90: 15.39%; p_ quasar≥ 0.95: 0.00%.
§.§ Verification using MUSE
The Multi Unit Spectroscopic Explorer <cit.>, an integral field unit (IFU) mounted on the Very Large Telescope (VLT-Yepun, UT4), has a field of view (FOV) of 1 arcmin^2. The MUSE instrument provides high throughput (35% end-to-end, including the telescope, at 7000 Å), moderate spectral resolution (R ≃ 3000 at ∼7000 Å), full optical coverage 4650 - 9300 Å) spectroscopy
at a spatial sampling scale of 0.2 arcsec across a 1 × 1 arcmin^2 FOV. The wide wavelength coverage and moderate spectral resolution enable us to identify high-redshift quasars up to redshift of ∼ 6.6 and suitable for our purpose of high-redshift quasar candidates verification.
So far there are nearly 20,000 publicly available MUSE datacubes, of which the sky coverage is approximately 6 square degrees. Among the entire high-redshift candidates, 21 of them are located within the MUSE field. After spectroscopic verification, 11 of these candidates are high-redshift quasars included in our training set, 3 are known high-redshift quasars but not present in our training sample.
5 of these candidates are M dwarfs, while the rest two have spectra that are much noisier and are difficult to identify emission/absorption lines, making it hard to determine the type of those sources. By estimating their redshifts using Lyα, N V or C IV emission lines, we find that these three “new" high-redshift quasars precisely align with the redshift boundaries we set for high-redshift quasars. Two of them are having redshifts of z ∼ 5 , and one is near the boundary of 6.5. Because they fall on the redshift boundaries, we initially miss them when collecting the high-redshift quasars training sample. Since they are not part of our training sample, they provide us a valuable opportunity to validate our model.
Overall, 14 out of 21 high-redshift quasar candidates from our random forest model has been confirmed to be true high-redshift quasars, reaching a success rate of 66.7%. If we only focus on the high-redshift quasars not present in the training sample, the success rate is 30% (3 “new" high-redshift quasars out of 10 candidates).
Figure <ref> shows the MUSE spectra of these three “new" high-redshift quasar obtained from our random forest model. The details of these three “new" high-redshift quasars are the following:
J014132.4-542749.9. <cit.> discovered this quasar by cross-matching the first data release of the Dark Energy Survey with the Sidney University Molonglo Survey radio catalog at 0.843 GHz. The z band magnitude of this quasar is 21.07,
and the photometric redshift provided by our trained regression model is 5.087. <cit.> utilized Gaussian profiles based on the Lyα, O VI, N V, and C IV emissions to estimate the redshift of this quasar, with the value of 5.000 ± 0.002.
J103418.65+203300.2. <cit.> discovered this quasar in the SDSS Quasar Catalog. Its magnitude in the z band is 19.61.
The photometric redshift provided by our trained regression model is 5.214. Based on the emission lines of Hα, Hβ, Mg II, C III, C IV, and Lyα, the redshift of this quasar is estimated to be 5.0150 ± 0.0005 <cit.> .
J022426.54–471129.4. <cit.> discovered this quasar using the European Southern Observatory New Technology Telescope and Gemini South telescopes. The z band magnitude of this quasar is 20.23,
and the photometric redshift provided by our trained regression model is 6.139. The significant difference between the photometric and spectroscopic redshifts is due to the scarcity of samples with redshift around 6.5 in our training set. <cit.> calculated the redshift of this quasar by fitting its spectrum with a known quasar model, obtaining a value of 6.50 ± 0.01.
§.§ Verification using DESI
DESI will collect millions of qusars in its 5-year operation <cit.>, of which thousands of high-redshift quasars will be identified. Currently there are 306 objects at z > 5 in the DESI-EDR catalog <cit.>. We then apply our trained RF classification model to these 306 objects at z > 5 in the DESI-EDR catalog to validate and evaluate our model. Before that, we first inspect the spectra of those objects visually. After spectra inspection, we find that only 22 of them are true high-redshift quasars at z > 5, 12 of which are in our training sample and 10 of which are new high-redshift quasars. The rest 284 objects are not high-redshift quasars after spectra inspection, which might be due to either the incorrect classification or the incorrect redshift determination by the DESI team.
This ensemble of objects at z >5 from DESI-EDR provides a perfect labratory to test and validate our random forest classification model. For the “mag model", 188 out of 306 sources have no missing values, with only 6 out of the 22 true high-redshift quasars found among these 188 sources. 4 out of 188 objects are predicted to be high-redshift quasars, and these 4 objects are among the 6 true high-redshift quasars. The rest 2 true high-redshift quasars are not correctly classified: one is classified as a M dwarf, and the other is classified as a mid-z quasar. This implies that the precision is 1.00 and the recall is 0.67 for the “mag model". Using the MICE imputation to fill in the missing values for the 118 sources, we find that 17 out of 118 sources are predicted to be high-redshift quasars. 16 out of the 17 candidates are true high-redshift quasars, and the type of the rest 1 source is suspicious because it is too faint without distinct emission or absorption lines. Overall, 21 objects are predicted to be high-redshift quasars, 20 of which are true high-redshift quasars. Therefore, the precision is 0.95 and the recall is 0.91 for the “mag model".
For the “flux model", all 306 sources have well-defined features. Among these sources, 21 objects are predicted to be high-redshift quasars, 20 of which are true high-redshift quasars. The rest 1 object is the same as the above one whose type is suspicious. The precision is 0.95, and the recall is 0.91 for the “flux model". As discussed earlier, there exist a confusion between “mid-z" and “high-z" quasars because of the set of arbitrary redshift boundary, which should not be a concern. In this sample, one “high-z" quasar with true redshift of 5.036 is predicted to be a “mid-z" quasar, and we should not consider this as an error. This means that the precision and recall will be slightly higher, with a precision and recall of 0.96, 0.96 for both the “mag model" and the “flux model". Even if we only focus on the 10 new high-redshift quasars, 9 of which are successfully recovered. It turns out that our random forest classification model can achieve much higher success rate than the DESI target selection. The consistency between the “mag model" and the “flux model" further demonstrates the robustness and unbiasness of the imputation method we adopt.
Although the estimation of the precision and recall for the high-redshift quasars using the DESI-EDR database is biased because these objects are not randomly picked from our high-redshift quasar candidates and they are pre-selected by the DESI team, they can still shed a light on the robustness of our machine learning model. On the other hand, the DESI target selection of quasars is mainly for quasars at 0.9 < z < 2.1 and seldom of these quasars are at z > 2.1 <cit.>. Moreover, the DESI target selection of high-redshift quasars is still based on the traditional color-cut method that has been adopted in previous high-redshift quasar surveys <cit.>. In this sense, these 306 objects extracted from DESI-EDR database can be viewed as randomly picked objects from our high-redshift quasar candidates. Therefore, if our estimation of the precision and recall for the high-redshift quasars from DESI-EDR database is biased, they are least biased. The high success rate of our high-redshift quasar candidates using the objects at z>5 from DESI-EDR further validates our random forest classification model. Therefore, if we pick the high-redshift quasar candidates with high probability (p_ quasar > 0.9 for instance) from our classification model for follow-up spectroscopy observations, we will discover many more high-redshift quasars than expected.
§ SUMMARY
In this work, we obtain a representative training sample which are all spectra-confirmed and it consists of 588 high-redshift quasars and millions of contaminators in the DESI Imaging Legacy Survey DR9 footprint. We then obtain the photometric data for the training sample from Legacy Survey DR9 and WISE survey and constructed various features to train a machine learning model. In addition to the photometric measurements in the g, r, z, W1, and W2 bands, we also construct the combined photometric measurements of grz and W. We implement the fluxes at different aperture radius provided by the LS DR9, the apfluxes are very effective in distinguishing high-redshift quasars from the extended sources like distant compact galaxies. And we demonstrate that the contamination from distant compact galaxies can be negligible via a binary classification for the first time. In addition, we address the issue of missing values in the known high-redshift quasar dataset. Among the various imputation methods, we find that the predictive results of the model trained with the MICE method are best consistent with the complete dataset.
We compare the performance of several different commonly used machine learning classification algorithms, including KNN, decision tree, random forest, LGBM, GaussianNB. The random forest classification model emerged as the most optimal algorithm. To further enhance the performance of the random forest classification model, we perform feature selections on the training sample. Additionally, we discuss whether the number of classes in the training sample affects the performance of the classification model. Considering the intrinsic characteristics of each class in the training sample, we decide to adopt the 11-class classification model as our final model. We discuss the issue of imbalance sample and find that different algorithms handling the imbalanced data sample do not show significant differences in all metrics compared to those with the imbalanced dataset.
The performance of the random forest with 11-class can reach a high precision of 96.43%, a high recall of 91.53% for the high-redshift quasars for the test set, both of which are significantly higher than the previous studies. In addition to the classification model, we also train a regression model on a dataset of quasars with redshift between 4.5 and 6.5 to predict the photometric redshift of the high-redshift candidates. Among the different machine learning regression algorithms (KNN, random forest, CatBoost), we find that the random forest regression model has the best performance, with 99.3% data points within the range of Δ z/1 + z_ spec < 0.1 and MSE ≤ 0.025.
We apply several selection criteria to reduce the catalog size of the entire Legacy Survey DR9 catalog. Applying the final 11-class models using random forest algorithm on FeatureSet-C (FeatureSet-D) to the 140 millions sources from LS DR9, we obtain 272,424 (568,188) high-redshift candidates. And there are 216,949 candidates which are classified as high-redshift quasars by both the “mag model" and the “flux model". We further narrow down the high-redhisft quasar candidates by setting two cutoff probabilities from the random forest model: the maximum probability of a known high-redshift quasar being misclassified as another class (40.82%, 40.03% for the “mag model" and “flux model", respectively) and the maximum probability of a contaminator being misclassified as a high-redshift quasar (82.49%, 75.23% for the “mag model" and “flux model", respectively). By applying these cutoff probabilities to the common high-redhisft quasar candidates identified by both the “mag model" and the “flux model", we obtain 165,734 and 2,984 high-redhisft quasar candidates, respectively. There are 476 candidates with probability greater than 90%, which could be the targets of the highest priority for future spectroscopy followup.
Using MUSE spectra to do the spectra verification of the high-redshift quasar candidates, we find that there are 21 high-redshift quasar candidates in the total ∼ 20000 MUSE observations footprint. After inspection, we find that 11 of these candidates are high-redshift quasars already included in our training sample, 3 are known high-redshift quasars but not present in our training sample. Using DESI-EDR spectra, we confirm that 21 out of 22 true high-redshift quasars with correct redshift can be successfully identified for the “mag model" with missing values imputed and 21 out of 22 true high-redshift quasars can also be successfully classified by the “flux model", reaching much higher success rate than the DESI target selection.
Our current model still has room for improvement. The future DESI survey will discover thousands of new high-redshift quasars, which will significantly expand the training sample size. The current photometric measurements available are very limited, therefore introducing new photometric measurements in the near- or mid-infrared could enhance the model performance significantly. The Legacy Survey DR10 currently includes the i band for ∼ half sky coverage of DR9, and the future data release including the y band is on the way as well. We find that the inclusion of i band photometric measurement improves the classification performance significantly, which demonstrates that more photometric measurements are critical to improving the machine learning performance. The future imaging surveys such as CSST, RST, EST and LSST, would deliver much deeper images with many more photometric measurements such as the y, J, H and K bands, which are beyond the current surveys and would be the frontier for high-redshift quasars searching with redshift even up to 7 or 8.
§ ACKNOWLEDGMENTS
GY and HZ acknowledge financial support from the start-up funding of the Huazhong University of Science and Technology and the National Science Foundation of China grant (No. 12303007). QW acknowledges financial support from the National Science Foundation of China grant (No. 12233007). The authors thank the referees for comments that helped us improve the manuscript. This research was supported by the Munich Institute for Astro-, Particle and BioPhysics (MIAPbP) which is funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy EXC-2094 390783311.
The Legacy Surveys consist of three individual and complementary projects: the Dark Energy Camera Legacy Survey (DECaLS; Proposal ID #2014B-0404; PIs: David Schlegel and Arjun Dey), the Beijing-Arizona Sky Survey (BASS; NOAO Prop. ID #2015A-0801; PIs: Zhou Xu and Xiaohui Fan), and the Mayall z-band Legacy Survey (MzLS; Prop. ID #2016A-0453; PI: Arjun Dey). DECaLS, BASS and MzLS together include data obtained, respectively, at the Blanco telescope, Cerro Tololo Inter-American Observatory, NSF's NOIRLab; the Bok telescope, Steward Observatory, University of Arizona; and the Mayall telescope, Kitt Peak National Observatory, NOIRLab. Pipeline processing and analyses of the data are supported by NOIRLab and the Lawrence Berkeley National Laboratory (LBNL). The Legacy Surveys project is honored to be permitted to conduct astronomical research on Iolkam Du'ag (Kitt Peak), a mountain with particular significance to the Tohono O'odham Nation.
NOIRLab is operated by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation. LBNL is managed by the Regents of the University of California under contract to the U.S. Department of Energy.
This project used data obtained with the Dark Energy Camera (DECam), which was constructed by the Dark Energy Survey (DES) collaboration. Funding for the DES Projects has been provided by the U.S. Department of Energy, the U.S. National Science Foundation, the Ministry of Science and Education of Spain, the Science and Technology Facilities Council of the United Kingdom, the Higher Education Funding Council for England, the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign, the Kavli Institute of Cosmological Physics at the University of Chicago, Center for Cosmology and Astro-Particle Physics at the Ohio State University, the Mitchell Institute for Fundamental Physics and Astronomy at Texas A&M University, Financiadora de Estudos e Projetos, Fundacao Carlos Chagas Filho de Amparo, Financiadora de Estudos e Projetos, Fundacao Carlos Chagas Filho de Amparo a Pesquisa do Estado do Rio de Janeiro, Conselho Nacional de Desenvolvimento Cientifico e Tecnologico and the Ministerio da Ciencia, Tecnologia e Inovacao, the Deutsche Forschungsgemeinschaft and the Collaborating Institutions in the Dark Energy Survey. The Collaborating Institutions are Argonne National Laboratory, the University of California at Santa Cruz, the University of Cambridge, Centro de Investigaciones Energeticas, Medioambientales y Tecnologicas-Madrid, the University of Chicago, University College London, the DES-Brazil Consortium, the University of Edinburgh, the Eidgenossische Technische Hochschule (ETH) Zurich, Fermi National Accelerator Laboratory, the University of Illinois at Urbana-Champaign, the Institut de Ciencies de l'Espai (IEEC/CSIC), the Institut de Fisica d’Altes Energies, Lawrence Berkeley National Laboratory, the Ludwig Maximilians Universitat Munchen and the associated Excellence Cluster Universe, the University of Michigan, NSF’s NOIRLab, the University of Nottingham, the Ohio State University, the University of Pennsylvania, the University of Portsmouth, SLAC National Accelerator Laboratory, Stanford University, the University of Sussex, and Texas A&M University.
BASS is a key project of the Telescope Access Program (TAP), which has been funded by the National Astronomical Observatories of China, the Chinese Academy of Sciences (the Strategic Priority Research Program “The Emergence of Cosmological Structures” Grant #XDB09000000), and the Special Fund for Astronomy from the Ministry of Finance. The BASS is also supported by the External Cooperation Program of Chinese Academy of Sciences (Grant #4A11KYSB20160057), and Chinese National Natural Science Foundation (Grant #12120101003, #11433005).
The Legacy Survey team makes use of data products from the Near-Earth Object Wide-field Infrared Survey Explorer (NEOWISE), which is a project of the Jet Propulsion Laboratory/California Institute of Technology. NEOWISE is funded by the National Aeronautics and Space Administration.
The Legacy Surveys imaging of the DESI footprint is supported by the Director, Office of Science, Office of High Energy Physics of the U.S. Department of Energy under Contract No. DE-AC02-05CH1123, by the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility under the same contract; and by the U.S. National Science Foundation, Division of Astronomical Sciences under Contract No. AST-0950945 to NOAO.
This research used data obtained with the Dark Energy Spectroscopic Instrument (DESI). DESI construction and operations is managed by the Lawrence Berkeley National Laboratory. This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of High-Energy Physics, under Contract No. DE–AC02–05CH11231, and by the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility under the same contract. Additional support for DESI was provided by the U.S. National Science Foundation (NSF), Division of Astronomical Sciences under Contract No. AST-0950945 to the NSF’s National Optical-Infrared Astronomy Research Laboratory; the Science and Technology Facilities Council of the United Kingdom; the Gordon and Betty Moore Foundation; the Heising-Simons Foundation; the French Alternative Energies and Atomic Energy Commission (CEA); the National Council of Science and Technology of Mexico (CONACYT); the Ministry of Science and Innovation of Spain (MICINN), and by the DESI Member Institutions: www.desi.lbl.gov/collaborating-institutions. The DESI collaboration is honored to be permitted to conduct scientific research on Iolkam Du’ag (Kitt Peak), a mountain with particular significance to the Tohono O’odham Nation. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the U.S. National Science Foundation, the U.S. Department of Energy, or any of the listed funding agencies.
Based on observations collected at the European Southern Observatory under ESO programme(s) 109.238W.003, 105.208F.001, and 0104.A-0812(A), and/or data obtained from the ESO Science Archive Facility with DOI(s) under https://doi.org/10.18727/archive/41.
|
http://arxiv.org/abs/2409.03554v1 | 20240905141424 | Critical domains for certain Dirichlet integrals in weighted manifolds | [
"Levi Lopes de Lima"
] | math.DG | [
"math.DG"
] |
Universidade Federal do Ceará (UFC),
Departamento de Matemática, Campus do Pici, Av. Humberto Monte, s/n, Bloco 914, 60455-760,
Fortaleza, CE, Brazil.
[email protected]
§ ABSTRACT
We start by revisiting the derivation of the variational formulae for the functional assigning to a bounded regular domain in a Riemannian manifold its first Dirichlet eigenvalue
and
extend it to (not necessarily bounded) domains in certain weighted manifolds.
This is further extended to other functionals defined by certain Dirichlet energy integrals, with a Morse index formula for the corresponding critical domains being established. We complement these infinitesimal results by proving a couple of global rigidity theorems for (possibly critical) domains in Gaussian half-space, including an Alexandrov-type soap bubble theorem. Although we provide direct proofs of these latter results, we find it worthwhile to point out that the main tools employed (specifically, certain Pohozhaev and Reilly identities) can be formally understood as limits (when the dimension goes to infinity) of tools previously established by Ciarolo-Vezzoni <cit.> and Qiu-Xia <cit.> to handle similar problems in round hemispheres, with the notion of “convergence” of weighted manifolds being loosely inspired by the celebrated Poincaré's limit theorem in the theory of Gaussian random vectors.
Critical domains for certain Dirichlet integrals in weighted manifolds
Levi Lopes de Lima
September 9, 2024
======================================================================
§ INTRODUCTION
We consider a complete Riemannian manifold (M,g) of dimension n≥ 2 and a smooth function ϕ:M→ℝ and then form the weighted manifold (M,g,e^-ϕd vol_g),
where d vol_g is the Riemannian volume element associated to g.
We fix a connected, proper
domain Ω⊂ M with a smooth boundary Σ=∂Ω. In the purely Riemannian case, where ϕ equals a constant, we always assume that Ω is compact. Otherwise, we impose that Ω has finite ϕ-volume (in the sense that vol_ϕ(Ω)<+∞,
where for any Borel subset B⊂ M we set vol_ϕ(B)=∫_B d vol_ϕ, with d vol_ϕ=e^-ϕd vol_g). In particular, if M itself has finite ϕ-volume, which in fact is the only case of interest here, then if needed we can assume that d vol_ϕ defines a probability measure on M and with this normalization our assumption on Ω boils down to vol_ϕ(Ω)<1.
A notable example of a weighted manifold is the Gaussian space
(ℝ^n,δ, e^-ϕ_nd vol_δ), ϕ_n(x)=e^-|x|^2/2-n/2log(2π),
where δ is the standard flat metric, which satisfies vol_ϕ_n(ℝ^ n)=1. A somewhat informal interpretation of Poincaré's limit theorem (see <cit.> and Section <ref> below) allows us to view this space as the appropriate limit of orthogonal projections onto ℝ^n of (volume re-scaled) round spheres of dimension k-1 and radius √(k) as k→+∞, a viewpoint that, at least on heuristic grounds, we will find useful to employ here; see Remarks <ref> and <ref> and Section <ref> for more on this.
Our aim here is to study, initially from a variational perspective, the most elementary properties of optimal configurations for Dirichlet functionals of the type
(Ω,g,e^-ϕd vol_ϕ)↦ℰ(Ω):=inf_u∫_Ω(α|∇ u|^2+β u^2-fu)d vol_ϕ,
where α,β∈ℝ, α>0, f is an auxiliary function and u:Ω→ℝ is assumed to vary in a suitable Sobolev space of functions determined by appropriate boundary conditions (usually of Dirichlet type) and possibly satisfying an extra integral constraint. Whenever the functional in question displays a geometric flavor it is natural to assume further that Ω varies in the set of domains with a fixed ϕ-volume, a viewpoint we will always adopt here. Although a minimizer (henceforth, an optimal domain) to this variational problem is known to exist under mild conditions on the data (see for instance <cit.> and the references therein), a complete classification is far from being available in general. However, and this is our initial motivation here, we may use variational methods to have a perception on the properties an optimal domain should eventually satisfy. Instead of delving into the intricacies of a general theory, we prefer here to illustrate the methods and results by considering a couple of important examples.
(The first Dirichlet eigenvalue)
Let ℰ(Ω,g,d vol_ϕ)=λ(Ω,g,d vol_ϕ) be the functional ascribing to any weighted domain the first eigenvalue λ(Ω,g,d vol_ϕ) associated to the Dirichlet problem
{[ Δ_ϕ u+λ u=0 Ω; u=0 Σ ].
where
Δ_ϕ= div_ϕ∇=Δ-⟨∇ϕ,·⟩
is the weighted Laplacian. Here,
div_ϕ=e^ϕ div_ge^-ϕ= div -⟨∇ϕ,·⟩
is the weighted divergence, ⟨ ,⟩=g(·,·), Δ is the metric Laplacian and ∇ is the metric gradient; we refer to Section <ref> for a discussion on the technical assumptions needed to make sure that the existence problem for (<ref>) (in particular, the existence of a first eigenvalue with the expected properties) is well-posed for appropriate choices of Ω. Granted those conditions, it turns out that
λ(Ω,g,d vol_ϕ)=∫_Ω|∇ u|^2d vol_ϕ,
where, besides solving (<ref>), u is further constrained to satisfy
∫_Ω u^2d vol_ϕ=1.
Due to the obvious geometric meaning of λ, it is natural to investigate its variational properties under a ϕ-volume-preserving assumption. More precisely, if 0<V< vol_ϕ(M),
𝒟_V={Ω⊂ M; vol_ϕ(Ω)=V
}
and
λ_V=inf_Ω∈𝒟_Vλ(Ω),
the question is to determine those Ω∈𝒟_V such that λ_V=λ(Ω).
(A Dirichlet energy)
For β∈ℝ and f∈ L^2_ϕ(Ω) the Dirichlet (β,f)-energy of Ω⊂ M is
ℰ_β,f(Ω)=inf_v
J_β,f(v),
where
J_β,f(v)=∫_Ω(1/2|∇ v|^2+β/2v^2-fv)d vol_ϕ
which, under suitable conditions (cf. Remark <ref> below), is well defined whenever β>-λ(Ω) in the sense that there exists w:Ω→ℝ such that ℰ_β,f(Ω)=J_β,f(w)>-∞ with w satisfying
{[ -Δ_ϕ w+β w=f Ω; w=0 Σ ].
An interesting problem here is to classify the domains which minimize the energy (<ref>) in the class of all domains with a given ϕ-volume. Thus,
if
ℰ_V,β,f=inf_Ω∈𝒟_Vℰ_β,f(Ω),
the question is to determine those Ω∈𝒟_V such that ℰ_V,β,f=ℰ_β,f(Ω).
A recurrent theme here is to try to understand how ℰ(Ω) varies with Ω at an infinitesimal level (and of course restricting ourselves to the examples above). For this we consider variations of Ω induced by one-parameter families of diffeomorphisms of the type t∈(-ϵ,ϵ)↦φ_t:M→ M with φ_0= Id, which we assume to be ϕ-volume-preserving in the sense that vol_ϕ(Ω_t)= vol_ϕ(Ω) for t∈(-ϵ,ϵ). In the context of Example <ref>, and
following the pioneering works by Garabedian and Schiffer <cit.> and Shimakura <cit.>, where the Euclidean case (ℝ^n,δ,d vol_δ) has been treated in detail, our first goal is to compute the variational formulae for
d/dtλ(Ω,g,d vol_ϕ)|_t=0 and d^2/dt^2λ(Ω,g,d vol_ϕ)|_t=0, Ω_t=φ_t(Ω),
in terms of the
variational function ξ=⟨ v,ν⟩, where
v:=dφ_t/dt|_t=0 is the variational field and ν is the outward unit normal vector field to Σ (Propositions <ref> and <ref>).
Needless to say, we also perform similar calculations in the setting of Example <ref> (see the proof of Proposition <ref> culminating in (<ref>) and Proposition <ref>).
Our calculation for (<ref>) should be compared with the more general approaches in <cit.> (in the purely Riemannian case) and in <cit.> (for weighted manifolds but restricted to the first variation), where the authors handle more general classes of deformations (with the time varying metrics not being necessarily induced by embeddings of Ω into a fixed Riemannian manifold). Besides its simplicity, the advantage of the (more pedestrian) approach adopted here, which aligns with the computations in <cit.>, lies in the fact that it can be easily extended to other naturals energy functionals, as illustrated here for the examples above in the category of weighted manifolds.
A common (and certainly well-known) feature that emerges from our computations for the first variational formula is that optimal domains support functions (in our examples, the functions u and w satisfying (<ref>) and (<ref>), respectively) meeting an extra boundary condition involving its normal derivative. In other words, these functions satisfy an over-determined elliptic boundary value problem, which means that optimal domains tend to be quite rigid and hence in principle amenable to a classification; see Remark <ref>.
With the variational formulae at hand, we then proceed to extend the Morse index formula first established in <cit.> (for λ and in the Euclidean setting) to the more general framework considered here ( Theorems <ref> and <ref>), with some interesting applications briefly indicated (Remark <ref> and <ref>).
Although variational methods certainly may be explored to access the most elementary properties of
optimal domains for Dirichlet integrals, they do not seem to be of much use in the rather challenging problem of their explicit characterization. Thus, it is natural to seek for alternate routes by resorting to global methods relying on suitable variations of certain classical differential and integral identities (usually associated to Pohozhaev and Reilly, respectively). Our first result in this direction is Theorem <ref>, where we classify solutions to an over-determined system associated to a certain Dirichlet energy (ℰ_-1,1 in the notation of Example <ref>) defined in a domain contained in Gaussian half-space. This constitutes the exact analogue of a result in <cit.> for domains in a round hemisphere and similarly relies on a certain Pohozhaev identity (Proposition <ref>). We stress that, as explained in Remark <ref>, both our over-determined system and the associated Pohozhaev identity may be viewed as “Poincaré's limits” of the corresponding entities in <cit.>, which points toward the existence of an underlying, although informal, principle which may be useful in other contexts. We confirm this by means of Theorem <ref>, a version of Alexandrov's soap bubble theorem for embedded hypersurfaces in Gaussian half-space. As in the previous example, this is the exact analogue of the classical spherical case, as approached in <cit.>, thus similarly hinging on a Reilly-type identity (Proposition <ref>) which, as explained in Remark <ref>, may be viewed as the “Poincaré's limit” of the corresponding identity in <cit.>. Hopefully, this kind of “Poincaré's convergence”, which is used here
as an informal device to transfer tools and results from spheres to Gaussian space, may be successfully employed in other classes of problems.
Acknowledgments. The author would like to thank S. Almaraz, C. Barroso and J.F. Montenegro for conversations.
§ SOME PRELIMINARY FACTS
We collect here the preliminary technical ingredients needed to carry out the proofs of our main results.
We insist that the smooth boundary Σ=∂Ω will always be oriented by its outward pointing unit normal vector ν and is endowed with the weighted area element d area=e^-ϕd area_g, where d area_g is the area element induced by g.
As usual, we let L^2_ϕ(Ω) denote the space of all measurable functions f:Ω→ℝ such that ∫_Ω f^2d vol_ϕ<+∞. More generally, if s∈ℕ_0={0,1,2,…} we denote by W^s_ϕ(Ω) the Sobolev space of all such functions such that its weak derivatives {∇^jf}_j=0^s up to order s lie in
L_ϕ^2(Ω)=W_ϕ^0(Ω)
with the Hilbertian norm
f_W^s_ϕ(Ω)^2=∑_j=1^s∇^jf^2_L^2_ϕ(Ω).
Note that we may extend this definition to s∈ℤ (by duality) and to s∈ℝ (by interpolation) Finally, if Ω is proper we denote by W^s_ϕ[0](Ω) the closure of C^∞_0(Ω) in W^s_ϕ(Ω).
Since ϕ is smooth, whenever Ω is bounded
the Sobolev norm (<ref>) is equivalent to the standard metric Sobolev norm (which is obtained by taking ϕ≡ 0 and gives rise to the standard Sobolev spaces W^s(Ω), etc.). In this case we have identifications W^s_ϕ(Ω)=W^s(Ω), etc.
We impose a few working assumptions (which are automatically satisfied when Ω is bounded) on our otherwise general setup.
* A.1
A Poincaré inequality holds for elements of W^1_ϕ[0](Ω): there exists C=C_Ω,ϕ>0 such that
∫_Ω |∇ f|^2d vol_ϕ≥ C∫_Ω f^2 d vol_ϕ, f∈
W^1_ϕ[0](Ω),
which implies that the square root of the integral in the left-hand side defines a norm in W^1_ϕ[0](Ω) which is equivalent to _W^1_ϕ(Ω).
* A.2
The embedding W^1_ϕ(M)↪ L^2_ϕ(M) is compact.
Taken together, these assumptions imply that the embedding W^1_ϕ[0](Ω)↪ L^2_ϕ(Ω) is compact for any Ω as above. Thus, by standard spectral theory, the (weak) formulation of the eigenvalue problem in (<ref>) admits a non-trivial solution (an eigenfunction) for λ varying in a discrete set {λ_l}_l=1^+∞ with λ_l→+∞ as l→+∞. Also, under these conditions it is known that the first eigenvalue λ=λ_1>0 is such that the corresponding eigenfunction u does not change sign throughout Ω, so that the corresponding eigenspace is simple (that is, one-dimensional). Hence, λ depends smoothly on smooth variations of Ω (for instance, variations of the type Ω_t=φ_t(Ω) as in Section <ref>). Finally, since both Σ and ϕ are assumed to be smooth, standard elliptic theory implies that u is smooth up to the boundary. Hence, if we normalize u such that (<ref>) is satisfied then λ is given by (<ref>);
we refer to the discussion in <cit.> and the references therein for details; see also Remark <ref> below.
The existence of a smallest eigenvalue λ for Δ_ϕ implies that the operator
-Δ_ϕ+β:W^1_ϕ[0](Ω)→ L^2_ϕ(Ω)
is invertible if β>-λ, which justifies the existence of a minimizer for J_β,f in (<ref>) via a solution of (<ref>).
The assumptions above suffice to justify the computations leading to the variational formulae in Sections <ref> and <ref>. However, in order to establish finer properties of the underlying functional (such as the Morse index formula in Theorems <ref> and <ref>) one more requirement is needed.
* A.3 The standard “elliptic package” for non-homogeneous boundary problems of the type
{[ Δ_ϕ u=f Ω; u=ξ Σ ].
holds true, including suitable trace/extensions theorems, the fact that the map
u∈ W^s+2_ϕ(Ω)↦ (Δ_ϕ u,u|_Σ)∈ W_ϕ^s(Ω)× W_ϕ^s+3/2(Σ), s≥ 0,
is an isomorphism and the corresponding regularity theory (up to the boundary); see Remark <ref>.
Poincaré inequality (<ref>) holds in the Gaussian setting of Example <ref>
with Ω being any proper domain with vol_ϕ_nΩ)<1.
Proofs may be found in <cit.> and the method of proof, which is based on Gaussian rearrangements, may be extended to weighted manifolds for which the validity of a certain relative isoperimetric inequality is taken for granted <cit.>.
On the other hand, the compactness of the embedding W^1_ϕ(M)↪ L^2_ϕ(M) seems to hold in a more general setting (if we further assume that vol_ϕ(M)<+∞).
For instance, it holds
for (ℝ^n,δ,e^-ϕd vol_δ), with ϕ(x)∼ |x|^θ, θ>1, as |x|→+∞ <cit.>. More generally (that is, in the presence of a Riemannian background (M,g)) it is well-known that this property follows from the validity of a certain logarithmic Sobolev inequality, which by its turn holds true whenever the corresponding weighted manifold satisfies Ric_ϕ≥ρ g, ρ>0, where
Ric_ϕ= Ric_g+∇^2ϕ
is the Bakry-Émery Ricci tensor; see <cit.>, where this is discussed in the abstract framework of diffusion Markov triples satisfying the curvature-dimension condition CD(ρ,∞), ρ>0.
If Ω is compact then the elliptic package mentioned above is classically known <cit.>. In the case (ℝ^n,δ, e^-ϕ), it is established in <cit.> under the assumption that Σ is smooth with a positive reach and under mild assumptions under ϕ which are too complicated to exactly reproduce here but which essentially boil down to assuming that ϕ(x) grows at least as |x|^2 as |x|→ +∞). In particular, this covers the Gaussian case in (<ref>). Keeping the same growth conditions on ϕ, we further mention that the methods in <cit.> may be adapted without much difficulty to the general case (M,g,e^-ϕ) if there exist a compact K⊂ M, r>0 and a diffeomorphism ψ:(M\ K,g)→(ℝ^n\ B_r(0⃗),δ) with uniformly bounded distortion (up to a sufficiently high order).
The assumptions A.1, A.2 and A.3 above entail the fact that both Ω and Σ are sufficiently well behaved at infinity (both topologically and metrically) in order that the validity of a couple of technical ingredients further needed below is ensured.
First, the following integration by parts formula holds:
∫_Ω fΔ_ϕ h d vol_ϕ+∫_Ω⟨∇ f,∇ h⟩ d vol_ϕ=
∫_Σ f∂ h/∂νd area_ϕ,
where f∈ W_ϕ^1(Ω)∩ C^∞(Ω) and h∈ W_ϕ^2(Ω)∩ C^∞(Ω); this follows from the corresponding divergence theorem
∫_Ω div_ϕ Y d vol_ϕ=∫_Σ⟨ Y,ν⟩ d area_ϕ,
where now Y is vector field on Ω lying in a suitably defined Sobolev space.
We remark that this latter formula follows
from the standard one (for bounded domains) by a simple approximation.
In the same vein, Σ is is assumed to support its own weighted Sobolev scale W_ϕ^s(Σ), so that
the intrinsic analogue of (<ref>) holds:
∫_Σ fΔ_Σ,ϕ h d area_ϕ=-∫_Σ⟨ D f, D h⟩ d area_ϕ,
where now f and h are functions on Σ lying in a suitable Sobolev space (say, W^2_ϕ(Σ)), D is the covariant derivative of g|_Σ and we use self-explanatory notation stemming from the fact that Σ itself is a weighted manifold with the induced structures. As above, this should follow from the analogue of (<ref>),
∫_Σ div_Σ,ϕ Y d vol_ϕ=0,
where Y is tangent to Σ (and also lies in a suitable Sobolev space). Finally, we note that at some points of this paper, specifically in the proofs of Theorem <ref> and <ref> (which concern properties of domains in Gaussian space) we assume that Σ is such that certain test functions, which depend polynomially on |x|, lie in W^2_ϕ(Σ), so that the integration by parts formulae above may be applied. In this regard, polynomial volume growth at infinity is more than enough.
Standing assumptions: Henceforth we will always take it for granted that (Ω,g,e^-ϕ) and Σ=∂Ω meet the assumptions A.1, A.2 and A.3 above; see also Remarks <ref>, <ref> and <ref>.
Another ingredient we will use is
a simple adaptation to the weighted setting of certain variational formulae for bulk and surface integrals which are well-known in the area of shape optimization <cit.>. To state it we recall that the weighted mean curvature of Σ is
H_ϕ= div_ϕν=H-⟨∇ϕ,ν⟩,
where H= div ν is the usual mean curvature.
For any variation t∈ (-ϵ, ϵ)↦Ω_t as above (not necessarily preserving volume) and any smooth family of smooth maps t∈ (-ϵ, ϵ)↦Ψ(t, ·): Ω_t→ℝ such that
Ψ(t,·) and ∂Ψ/∂ t(t,·) lie in W^1_ϕ(Ω_t)∩ C^∞(Ω_t) there hold
d/dt∫_Ω_tΨ d vol_ϕ=∫_Ω_t∂Ψ/∂ td vol_ϕ+
∫_Σ_tΨξ d area_ϕ, Σ_t=∂Ω_t,
and
d/dt∫_Σ_tΨ d area_ϕ=∫_Σ_t(
∂Ψ/∂ t+(⟨∇Ψ,ν⟩+H_ϕΨ)ξ)d area_ϕ,
where ∇ here means the metric gradient with respect to the x variable.
The simple proof <cit.> in the Euclidean case (with Ω bounded) applies verbatim to the Riemannian setting. From this the weighted versions above (again with Ω bounded) may be easily derived by straightforward manipulations. The general case, in which Ω is only assume to have a finite ϕ-volume, follows by a simple approximation (based on our standing assumptions).
If Ψ=1 in (<ref>) we obtain the first variation formula for the ϕ-volume:
d/dt_|_t=0 vol_ϕ(Ω_t)=∫_Σξ d area_ϕ.
In particular, if the variation is ϕ-volume-preserving then the variational function ξ satisfies
∫_Σξ d area_ϕ=0.
Conversely, it is well-known that for any compactly supported, smooth function ξ:Σ→ℝ satisfying (<ref>) there exists a ϕ-volume-preserving variation whose variational vector is ξν. For any such variation we may use (<ref>) with Ψ=ξ to check that
0= d^2/dt^2_|_t=0 vol_ϕ(Ω_t)=
∫_Σ(ξ̇+(∂ξ/∂ν+H_ϕξ)ξ)d area_ϕ,
where here and in the following a dot means d/dt|_t=0.
§ THE VARIATIONAL FORMULAE FOR Λ
We start by stating our version of the classical Hadamard formula for the first variation of λ; see also <cit.> and the references therein for other versions of this result in the weighted setting.
If
Ω_t=φ_t(Ω) is a (not necessarily ϕ-volume-preserving) variation of domains as above and λ_t is the corresponding first eigenvalue then
λ̇=-∫_Σ(∂ u/∂ν)^2ξ d area_ϕ.
The corresponding first eigenfunction u_t satisfies
{[ Δ_ϕ u_t+λ_t u_t=0 Ω_t; u_t=0 Σ_t ].
so upon derivation we see that u̇ satisfies
{[ Δu̇+λ̇ u+λu̇=0 Ω; u̇=-∂ u/∂νξ Σ ].
We now observe that, due to our normalization (<ref>),
1=∫_Ω_tu_t^2d vol_ϕ,
so if we use (<ref>) with Ψ=u_t^2 we get
0=2∫_Ω uu̇ d vol_ϕ+∫_Σ u^2ξ d area_ϕ,
and since u=0 on Σ,
∫_Ω uu̇ d vol_ϕ=0.
We also have, this time using the normalization (<ref>),
λ_t=∫_Ω_t|∇ u_t|^2d vol_ϕ,
which by (<ref>) with Ψ=|∇ u_t|^2 gives
λ̇
= 2
∫_Ω⟨∇ u,∇u̇⟩ d vol_ϕ+
∫_Σ|∇ u|^2ξ d area_ϕ.
By means of (<ref>) we can handle the first integral in the right-hand side above in two slightly different ways. We first have
∫_Ω⟨∇ u,∇u̇⟩ d vol_ϕ =
-∫_Ω uΔ_ϕu̇ d vol_ϕ+∫_Σ u∂u̇/∂νd area_ϕ
(<ref>)= ∫_Ω u(λ̇ u+λu̇) d vol_ϕ,
so that (<ref>) gives
∫_Ω⟨∇ u,∇u̇⟩ d vol_ϕ=λ̇.
On the other hand,
∫_Ω⟨∇ u,∇u̇⟩ d vol_ϕ =
-∫_Ωu̇Δ_ϕ u d vol_ϕ+∫_Σu̇∂ u/∂ν d area_ϕ
(<ref>)+(<ref>)=
-∫_Ωu̇(-λ u) d vol_ϕ+
∫_Σ(-∂ u/∂νξ)∂ u/∂νd area_ϕ,
so that, again using (<ref>),
∫_Ω⟨∇ u,∇u̇⟩ d vol_ϕ=-∫_Σ(∂ u/∂ν)^2ξ d area_ϕ,
and the result follows from this and
(<ref>).
We now explore (<ref>) by recalling a definition first put forward in <cit.>.
Let Σ'⊂Σ be an open subset. We say that (Ω,g) is Σ'-critical (for λ) if λ̇=0 for any ϕ-volume-preserving deformation such that supp ξ⊂Σ'.
That Ω is Σ'-critical is equivalent to the first eigenfunction u satisfying
{[ Δ_ϕ u+λ u= 0 Ω; s_ξ= ξ Σ; |∇ u|=κ Σ' ].
for some constant κ>0. In particular, if Σ'=Σ and Ω lies in a simply connected space form 𝒮^n_K with sectional curvature K∈ℝ, results in <cit.> imply that Ω is a geodesic ball (where Ω is require to lie in a hemisphere in the spherical case K>0). On the other hand, it is clear that any annular domain Ω⊂𝒮^n_K enclosed by geodesic spheres (so that ∂Ω=𝕊^n-1_r_1(x_1)∪𝕊^n-1_r_2(x_1), the r_i>0 and x_i∈𝒮^n_K, i=1,2) is 𝕊^n-1_r_i(x_i)-critical for i=1,2. Thus, the localization in Definition <ref> allows for some more flexibility in the somewhat arduous task of finding non-trivial, explicit solutions to the over-determined system (<ref>).
If Ω is optimal for λ then it is Σ-critical and (<ref>) means that the “optimal” eigenfunction u satisfies an extra condition along Σ (its normal derivative is constant). Of course, this aligns with the comments in Remark <ref>.
We now proceed to compute λ̈ assuming that (Ω,g) is Σ'-critical.
We first notice that under Σ'-criticality any such ξ admits a natural extension towards Ω. Indeed, observing that u̇=0 outside supp ξ, it follows from (<ref>) that s_ξ:=-(∂ u/∂ν)^-1u̇ satisfies
{[ Δ_ϕ s_ξ+λ s_ξ= 0 Ω; s_ξ= ξ Σ; ∫_Ω s_ξ u d vol_ϕ=0 ].
In particular, this allows us to define ∂ξ/∂ν:=∂ s_ξ/∂ν along Σ.
We may now state our version of the second variation formula for λ in the weighted setting; see Remark <ref> for other approaches to this calculation in the purely Riemannian setting.
If Ω is Σ'-critical (for λ) then
λ̈ = 2κ^2Q_ϕ(ξ),
where the quadratic form Q_ϕ is given by
Q_ϕ(ξ)=∫_Σ'ξℒ_ϕξ d area_ϕ, ℒ_ϕξ=∂ξ/∂ν+H_ϕξ.
An useful information here is obtained by first noticing that, along Σ', where |∇ u|=|∂ u/∂ν|=κ,
0 = -λ u =Δ_ϕ u=Δ_gu-∂ u/∂ν⟨∇ϕ,ν⟩,
where ∇^2 is the metric Hessian operator.
Since
Δ_gu=Δ_Σ u+∂ u/∂ν H+(∇^2u)(ν,ν)=∂ u/∂ν H+(∇^2u)(ν,ν)
and Δ_Σ u=0, where Δ_Σ is the intrinsic Laplacian of Σ, we conclude that
(∇^2u)(ν,ν)=-∂ u/∂νH_ϕ.
With these preliminaries at hand we note that, by (<ref>),
dλ/dt=-∫_Σ'_t⟨∇ u_t,ν_t⟩^2ξ_t d area_ϕ, Σ'_t=φ(Σ'),
so that (<ref>) gives
-λ̈=∫_Σ'(Ψ̇+(⟨∇Ψ,ν⟩+Ψ H_ϕ)ξ)d area_ϕ, Ψ=⟨∇ u_t,ν_t⟩^2ξ_t.
We now compute the various terms in the right-hand side above. First,
Ψ̇=
2 ⟨∇ u,ν⟩(⟨∇u̇,ν⟩+⟨∇ u ,ν̇⟩)ξ
+⟨∇ u,ν⟩^2ξ̇.
Now recall that u̇=-∂ u/∂ν s_ξ and
|∇ u|=κ along Σ'. Also, without loss of generality we may assume that v is normal (v=ξν), which gives ν̇=-Dξ <cit.>.
Hence,
Ψ̇=-2κ^2ξ∂ξ/∂ν+κ^2ξ̇.
On the other hand, again using Σ'-criticality,
⟨∇Ψ,ν⟩+Ψ H_ϕ=ν(⟨∇ u,ν⟩^2)ξ+
κ^2(∂ξ/∂ν+H_ϕξ).
But
ν(⟨∇ u,ν⟩^2)=2∂ u/∂ν(⟨∇_ν∇ u,ν⟩+⟨∇ u,∇_νν⟩)
and since
⟨∇ u,∇_νν⟩=∂ u/∂ν⟨ν,∇_νν⟩=0
we find that
ν(⟨∇ u,ν⟩^2)=2∂ u/∂ν⟨∇_ν∇ u,ν⟩(<ref>)=-2κ^2H_ϕ.
Thus, if we put together the pieces of our computation we get
λ̈ =2κ^2∫_Σ'ξ(∂ξ/∂ν+H_ϕξ)d area_ϕ -κ^2∫_Σ(ξ̇+(∂ξ/∂ν+H_ϕξ)ξ)d area_ϕ,
so the proof is completed using (<ref>).
§ A MORSE INDEX FORMULA
The computations leading to (<ref>)-(<ref>) justify the following definitions, which are direct extensions of concepts already appearing in <cit.>. Given Σ⊂Σ, we denote by W^1_ϕ[0](Σ) the space of functions ξ∈W^1_ϕ[0](Σ) such that ∫_Σξ d area_ϕ=0.
If Ω⊂ M is Σ'-critical (for λ) with Σ'⊂Σ, then we say that it is Σ'-stable if Q_ϕ(ξ)≥ 0 for any ξ∈W^1/2_ϕ[0](Σ'). In case Q_ϕ|_W^1/2_ϕ[0](Σ') is positive definite then we say that Ω it is strictly Σ'-stable. If Σ'=Σ then we simply say that Ω is stable (or strictly stable).
If Ω is Σ'-critical (with Σ'⊂Σ) and Σ”⊂Σ', we define the nullity and the index of the pair (Ω,Σ”) by
null(Ω,Σ”)={ξ∈W^1/2_ϕ[0](Σ”)); Q_ϕ(ξ)=0 },
and
ind(Ω,Σ”)=sup{ V; V⊂W^1/2_ϕ[0](Σ”) and Q_ϕ|_V is positive definite},
respectively.
It has been proved in <cit.> (in the classical Euclidean setting) that any Σ'-critical domain is locally strictly stable in the sense that for any x_0∈Σ' there exists a small neighborhood U⊂Σ' with x_0∈ U such that Ω is strictly U-stable. In <cit.> this result has been reformulated in a more conceptual framework (again in the classical case) by means of the establishment of a Morse index formula for the quadratic form Q_ϕ. We will check below that this latter result may be extended to our more general setting with essentially the same proof (Theorem <ref>).
The stability (or lack thereof) of (purely) Riemannian domains has been recently studied from a global viewpoint (i.e with Σ'=Σ) in <cit.>, notably in the two-dimensional case (n=2). In this regard,
it follows from Faber-Krahn inequality <cit.> that a geodesic ball B_r(x)⊂𝒮^n_K in a simply connected space form is the only (volume-preserving) minimizer for λ. In particular, it is not only 𝕊^n-1_r(x)-critical but also strictly 𝕊^n-1_r(x)-stable, where 𝕊^n-1_r(x)=∂ B_r(x) is the associated geodesic sphere. An interesting problem, first put forward in <cit.>, is to check whether in the spherical case K>0 the converse of this statement (namely, if Ω⊂ℍ_K^n is Σ-stable then Ω is a geodesic ball) holds true. Despite an affirmative answer in <cit.> for n=2, this problem remains wide open if n≥ 3, except for a further contribution in <cit.>, where it is shown that the domain is a hemisphere in case its boundary is minimal.
A Faber-Krahn inequality also holds in the Gaussian setting, with the optimal domains being the Gaussian half-spaces
ℋ^n_ u,d={
x∈ℝ^n;⟨ x, u⟩_δ>d
}.
where u∈ℝ^n, u=1, and d∈ℝ <cit.>. In particular, one may ask whether these are the only stable domains for λ in Gaussian space. In fact, Poincaré's limit theorem (cf. Section <ref>) suggests that this should hold true if and only if the corresponding conjectured property for 𝒮^n_K mentioned in Remark <ref> holds as well (at least for all n large enough).
We now turn to the numerical invariants appearing in Definition <ref>. At first sight, it not clear how the index/nullity relate to an algebraic count of negative/null “eigenvalues” of the quadratic form Q_ϕ, which is defined in terms of the manifestly non-local operator ℒ_ϕ. In particular, it is not even clear that these invariants are finite if Σ' is compact. However, as already mentioned in Remark <ref>, standard results in elliptic theory ensured by our standing assumption (specially A.3) may be used to settle this problem. A first ingredient here is the following Gärding inequality satisfied by Q_ϕ.
There exist constants C_1,C_2>0 (depending only on Ω) such that, for any ξ∈ W^1/2(Σ),
Q_ϕ(ξ)≥ C_1ξ^2_W^1/2(Σ)-C_2ξ^2_W^0(Σ),
where W^s(Σ), s∈ℝ, is the standard Sobolev scale of Σ.
We follow the proof of <cit.> closely (see also <cit.> for a slightly different argument). Let us
consider the isomorphism
ξ∈ W^s_ϕ(Σ)↦ h_ξ∈ W_ϕ^s+1/2(Ω)
that to each ξ associates its ϕ-harmonic extension (that is, Δ_ϕ h_ξ=0 on Ω); cf. (<ref>).
Also,
let us set w_ξ=s_ξ-h_ξ, where s_ξ∈ W_ϕ^s+1/2(Ω) is given by (<ref>). Since Δ_ϕ w_ξ=-λs_ξ, elliptic regularity implies that w_ξ∈ W_ϕ^s+5/2(Ω). Hence, standard trace theory gives that ∂ξ/∂ν=∂ w_ξ/∂ν∈ W_ϕ^s+1(Γ), so that the operator ξ∈ W_ϕ^s(Σ)↦ℒ_ϕξ∈ W_ϕ^s+1(Σ) has order -1. Thus, if
𝒬_ϕ(ξ)=∫_Σξ∂𝔰_ξ/∂νd vol_ϕ
for ξ∈ W_ϕ^1/2(Σ)⊂ W_ϕ^0(Σ),
we may apply this with s=0 to obtain
|Q_ϕ(ξ)-𝒬_ϕ(ξ)|≤ C_3f^2_W_ϕ^0(Σ), C_2>0,
from which we see that
Q_ϕ(ξ)≥𝒬_ϕ(ξ)-C_3ξ^2_W^0(Σ).
On the other hand, by (<ref>) and (<ref>),
𝒬_ϕ(ξ)
= ∫_Ω|∇ h_ξ|^2 d vol_ϕ
= h_ξ^2_W^1_ϕ(Ω)- h_ξ^2_W^0_ϕ(Ω)
≥ C_1ξ^2_W_ϕ^1/2(Σ) - C_4ξ^2_W_ϕ^-1/2(Σ),
where C_1, C_4>0 and we used (<ref>) with s=1/2 and s=0 in the last step. Now, the Sobolev embedding W_ϕ^0(Σ)↪ W_ϕ^-1/2(Σ) gives
ξ_W_ϕ^-1/2(Σ)≤ C_5ξ_W_ϕ^0(Σ), C_5>0,
so we obtain (<ref>) with C_2=C_3+C_4C_5^2.
We may now state the weighted version of the Morse index formula proved in <cit.>.
Let Σ_1⊂Σ be such that
Ω is Σ_1-critical with Σ_1 compact. Then ind(Σ_1)<+∞. Moreover, if Σ_0⊂Σ_1
and there exists a smooth deformation Σ_t⊂Σ, 0≤ t≤ 1, connecting Σ_0 to Σ_1
then there holds
ind(Ω,Σ_1)- ind(Ω,Σ_0)=∑_0<t<1 null(Ω,Σ_t).
<cit.> If Ω is Σ'-critical and x∈Σ' then there exits U⊂Σ' with x∈ U such that Ω is strictly U-stable.
(of Theorem <ref>) We resort to the abstract strategy in <cit.>, so we only need to check the validity of two assertions regarding ℒ_ϕ and Q_ϕ:
* Q_ϕ satisfies a Gärding inequality;
* ℒ_ϕ satisfies the unique continuation property (UCP) : if ξ meets ℒ_ϕξ=λξ, λ∈ℝ, and ξ≡ 0 in some neighborhood U⊂Σ_1 then ξ≡ 0 on Σ.
Now, the appropriate Gärding inequality has been established in Proposition <ref>. As for UCP, notice that ∂ξ/∂ν= on U. Since ℳ_ϕ=Δ_ϕ+λ is elliptic, its principal symbol never vanishes and hence U is non-characteristic for ℳ_ϕ. Thus, by Hölmgren's uniqueness, s_ξ vanishes in a neighborhood of U (inside Ω). But ℳ_ϕ is known to satisfy its own version of the UCP, which implies that s_ξ≡ 0 and hence ξ≡ 0 on Σ, as desired. This completes the proof.
Among the many interesting (potential) applications of Theorem <ref> we mention:
* the annular domains in Remark <ref> are strictly U-stable if U⊂𝕊^n-1_r_i(x_i) is small enough;
* any of the many critical domains constructed more recently by perturbative methods (see <cit.> and the references therein) are locally strictly stable;
* very likely, the perturbative results mentioned in the previous item should admit counterparts in the weighted setting, so that local stability of critical domains should extend to this case as well;
* the nullity/index can now be identified to an algebraic count of negative/null eigenvalues of the associated quadratic form, which opens up the possibility of effective computations of these invariants in examples.
§ THE VARIATONAL FORMULAE FOR J_Β,Γ
We now turn to the Dirichlet energy ℰ_β,f in Example <ref>. We start by exhibiting an important example where minimizing the corresponding Dirichlet integral J_β,f makes sense.
In the Gaussian setting (<ref>),
the half-spaces ℋ^n_ u,0 defined in (<ref>) satisfy
λ(ℋ^n_ u,0)=1 (with the corresponding eigenspace being generated by u(x)=⟨ x, u⟩_δ). Thus,
if Ωℋ^n_ u,0 then λ(Ω)> 1 by eigenvalue monotonicity, so that (<ref>) has a solution for each β≥ -1 by Remark <ref>, and under this condition it makes sense to minimize ℰ_β,f(Ω), where Ω∈𝒟_V, V= vol_ϕ_n(ℋ^n_ u,d), d>0.
As already observed, the general problem of characterizing optimal domains for ℰ_β,f, even in the rather special case of Example <ref>, seem to lie beyond the current technology.
Nevertheless, we may try to use the variational methods discussed above to have a first idea on the properties an optimal domain should satisfy, at least in the case f≡γ, a real constant.
In fact, as a consequence of the first variation formula (<ref>) for J_β,γ, we next check that optimal domains for ℰ_β,γ satisfy the over-determined elliptic system (<ref>) below.
If Ω is optimal in the sense ℰ_V,β,γ=ℰ_β,γ(Ω) then the corresponding “optimal” function w (that is, ℰ_β,γ(Ω)=J_c,γ(w)) satisfies the over-determined system
{[ -Δ_ϕ w+β w=γ Ω; w=0 Σ; |∇ w|= c Σ ].
for some c> 0.
If V= vol_ϕ(Ω_0) and t∈(-ϵ,ϵ)↦Ω_t∈𝒟_V is a smooth ϕ-volume-preserving variation of Ω with Ω_0=Ω then optimality of Ω implies that J_β,γ(w)≤ J_β,γ(w_t), where w_t is the solution of (<ref>) with Ω=Ω_t (and f=γ). Hence, J̇_̇β̇,̇γ̇=0, where as always the dot means derivative at t=0. We now compute this derivative using the formalism stemming from Proposition <ref>. From (<ref>),
J_β,γ(w_t)=∫_Ω_tΨ_t d vol_ϕ, Ψ_t=1/2|∇ w_t|^2+β/2w_t^2-γ w_t,
so that, by (<ref>),
J̇_̇β̇,̇γ̇
= ∫_Ω(⟨∇ w_0,∇ẇ⟩+β wẇ-γẇ)d vol_ϕ+1/2∫_Σ |∇ w|^2η d area_η,
where η is the variational function and ẇ satisfies
{[ -Δ_ϕẇ+βẇ=0 Ω; ẇ=-∂ w/∂νη Σ ].
Now, by (<ref>),
∫_Ω⟨∇ w,∇ẇ⟩ d vol_ϕ =
-∫_ΩẇΔ_ϕ w d vol_ϕ+∫_Σẇ∂ w_0/∂ν d area_ϕ
=
-∫_Ωẇ(β w-γ) d vol_ϕ+
∫_Σ(-∂ w/∂νη)∂ w/∂νd area_ϕ,
and leading this to (<ref>) we see that
J̇_β,γ=-1/2∫_Σ(∂ w/∂ν)^2η d area_ϕ.
The result then follows (with c=|∂ w/∂ν|) because we already know that this derivative vanishes for any η such that ∫_Ση d area_ϕ=0.
As in Section <ref>, this proposition justifies a notion of Σ-criticality for J_β,γ, so that optimal domains for ℰ_β,γ are automatically Σ-critical and hence satisfy (<ref>). Naturally, the question remains of classifying such domains in each specific case. The inherent difficulty in approaching this problem is best illustrated by certain examples in Gaussian space taken from <cit.>.
In Gaussian space of Example <ref>, the following domains are easily verified to be Σ-critical for J_-1,1 (equivalently, they support a function w satisfying (<ref>) for suitable values of β):
* round balls centered at the origin;
* the complements of the balls in the previous item;
* the slabs {x∈ℝ^n;|⟨ x, u⟩|<ε}, where ε>0 and | u|=1;
* the half-spaces ℋ^n_ u,d in (<ref>); in this case the explicity solution to (<ref>) for d≥ 0 is w(x)=β x_1-1 with β d=1.
Note the the set of all domains in each class of examples above exhausts the possible values for the ϕ_n-volume of a proper domain in Gaussian space, which indicates that the problem of deciding which Σ-critical domains are optimal is far from trivial given that at least four candidates concur for each value of the volume. As a first step toward approaching this difficulty we may look at how J_β,γ varies to second order around such a domain. Thus, we proceed by computing J̈_β,γ along variations passing through the given Σ-critical domain.
We first observe that, as in Section <ref>, criticality here also implies that any η∈ H^1/2(Σ) admits a natural extension to Ω. Indeed, it follows from (<ref>) that 𝔯_η:=-(∂ w/∂ν)^-1ẇ satisfies
{[ -Δ_ϕ𝔯_η +β𝔯_η=0 Ω; 𝔯_η= η Σ ].
which allows us to set ∂η/∂ν:=∂𝔯_η/∂ν along Σ.
If Ω is Σ-critical for J_β,γ with β≠ 0 then
J̈_β,γ=c^2
𝒬_β,λ(η), c=|∂ w/∂ν|,
where the quadratic form 𝒬_β,γ is given by
𝒬_β,γ(η)=∫_Σηℒ_β,γη d area_ϕ, ℒ_β,γη=∂η/∂ν+H_ϕη.
This uses essentially the same argument as in the previous computation of λ̈ leading to (<ref>). Indeed,
it follows from (<ref>) and (<ref>) that
-J̈_β,γ=1/2∫_Σ(Ψ̇+(⟨∇Ψ,ν⟩+Ψ H_ϕ)η)d area_ϕ, Ψ=⟨∇ w_t,ν_t⟩^2η_t.
As before, we may assume that the variational vector field is normal along Σ, which immediately gives
Ψ̇=-2c^2ξ∂ξ/∂ν+c^2ξ̇.
On the other hand,
the only novelty in the computation of the remaining term inside the integral in (<ref>) is that instead of (<ref>) we now have
(∇^2 w)(ν,ν)=-∂ w/∂νH_ϕ-γ,
so we end up with
J̈_β,γ =
c^2∫_Ση(∂η/∂ν+H_ϕη)d area_ϕ
-γ/2∂ w/∂ν∫_Σ_0η d area_ϕ
-c^2/2∫_Σ(η̇+(∂η/∂ν+H_ϕη)η)d area_ϕ,
which reduces to (<ref>) if we recall that the variation is ϕ-volume-preserving and use (<ref>) and (<ref>) with ξ=η.
We stress the formal similarity between (<ref>)-(<ref>) and (<ref>)-(<ref>), the only essential difference being in the way the variational functions extend to Ω in each case. In particular, we can define here the notions corresponding to those in Definitions <ref> and <ref> above, so that the same argument as in the proof of Theorem <ref> yields the following result.
The appropriate Morse index formula, similar to (<ref>), holds in the present setting. In particular, any Σ'-critical domain is locally strictly stable.
As a consequence of Theorem <ref>, all the Σ-critical domains (for J_-1,1) in Example <ref> are locally strictly stable, thus being variationally indistinguishable from this viewpoint.
§ A POHOZHAEV IDENTITY AND OPTIMAL DOMAINS FOR ℰ_-1,1 IN GAUSSIAN HALF-SPACE
As already observed in the Introduction and confirmed by Remark <ref>, in general (infinitesimal) variational methods by themselves do not seem to provide effective tools for the classification of
optimal domains for Dirichlet integrals. Thus, we henceforth seek for alternate routes by resorting to global methods relying on suitable differential/integral identities.
We first illustrate this approach by means of the next result, which completely classifies optimal domains for ℰ_-1,1 in the Gaussian half-space ℋ^n_ u,0. It constitutes the exact analogue of a result in the spherical case <cit.> and we refer to Remark <ref> for details on the heuristics behind this analogy, which relies on a naive application of Poincaré's limit theorem (Section <ref>).
If Ωℋ^n_ u,0 be a Σ-critical domain for J_-1,1 in Gaussian space then Ω=ℋ^n_ u,d, d>0.
If Ωℋ^n_ u,0 is optimal for ℰ_-1,1 then Ω=ℋ^n_ u,d, d>0.
A key ingredient in our proof of Theorem <ref> is the following Pohozhaev-type identity in Gaussian space.
If w:ℝ^n→ℝ is a C^2 function and X is parallel (in the sense that ∇ X≡ 0, where ∇ is the covariant derivative induced by the flat metric δ) then
div_ϕ_n(|∇ w|^2/2X-⟨ X,∇ w⟩∇ w)=|∇ w|^2/2 div_ϕ_n X-⟨ X,∇ w⟩Δ_ϕ_n w.
We have
div_ϕ_n(|∇ w|^2/2X)=|∇ w|^2/2 div_ϕ_n X +
⟨∇(|∇ w|^2/2),X⟩
and
div_ϕ_n(⟨ X,∇ w⟩∇ w)=⟨ X,∇ w⟩ div_ϕ_n∇ w+
⟨∇⟨ X,∇ w⟩,∇ w⟩,
and since
⟨∇(|∇ w|^2/2),X⟩=
⟨∇⟨ X,∇ w⟩,∇ w⟩
because X is parallel, the result follows.
We now observe that, by Proposition <ref>, any Ω satisfying the conditions of Theorem <ref> supports a function w such that
{[ -Δ_ϕ_n w-w=1 Ω; w=0 Σ; |∇ w| =c Σ ].
for some c>0. This leads to our next preparatory result.
If w:Ω→ℝ satisfies (<ref>) then
Δ_ϕ_n(|∇ w|^2)≥ 0,
with the equality occurring if and only if ∇^2w≡ 0.
In the Gaussian case, the weighted Bochner-Weitzenböck identity says that
1/2Δ_ϕ_n(|∇ w|^2)=|∇^2w|^2+⟨∇ w,∇(Δ_ϕ_n w)⟩+|∇ w|^2;
see <cit.>.
It follows that
1/2Δ_ϕ_n(|∇ w|^2)
≥ ⟨∇ w,∇(-w-1)⟩ +|∇ w|^2
= 0,
with the equality occurring if and only if |∇^2w|=0.
The final ingredient in the proof of Theorem <ref> is the next rigidity result for solutions of (<ref>) in Ωℋ^n_ u,0, in whose proof the Pohozhaev in Proposition <ref> plays a key role.
If w satisfies (<ref>) and Ωℋ^n_ u,0 then |∇ w|=c everywhere on Ω.
By rotational invariance we may assume that u=e_1=(1,0,⋯,0) so that x_1≥ 0 on Ω. By (<ref>), the maximum principle and the fact that |∇ w|=c along Σ we see that |∇ w|≤ c on Ω, which immediately gives
c^2∫_Ω x_1 d vol_ϕ_n > ∫_Ω x_1|∇ w|^2 d vol_ϕ_n
unless there already holds |∇ w|=c everywhere on Ω.
However,
x_1|∇ w|^2
= div_ϕ_n (x_1 w|∇ w|)-w⟨∇ x_1,∇ w⟩-x_1wΔ_ϕ w
= div_ϕ_n (x_1 w|∇ w|)-w∂ w/∂ x_1+x_1w(w+1),
so if we take this to (<ref>) we obtain
c^2∫_Ω x_1 d vol_ϕ_n >
-∫_Ω w∂ w/∂ x_1d vol_ϕ_n+
∫_Ω x_1w(w+1)d vol_ϕ_n.
We now check that this contradicts the Pohozhaev-type identity (<ref>) for X=-e_1, for which there holds div_ϕ_n X=x_1. Indeed, integrating the right-hand side of (<ref>) over Ω with this choice and using the divergence theorem twice we get
∫_Σ(c^2/2⟨ X,ν⟩-⟨ X,cν⟩⟨ cν,ν⟩)d area_ϕ_n =
-c^2/2∫_Σ⟨ X,ν⟩ d area_ϕ_n
= -c^2/2∫_Ω div_ϕ_n X d vol_ϕ_n
= -c^2/2∫_Ω x_1 d vol_ϕ_n.
On the other hand, the right-hand side of (<ref>) gives
1/2∫_Ω x_ 1|∇ w|^2 d vol_ϕ_n-∫_Ω⟨-e_1,∇ w⟩ (-w-1)d vol_ϕ_n = 1/2∫_Ω x_ 1|∇ w|^2 d vol_ϕ_n
-∫_Ω⟨ (w+1)e_1,∇ w⟩ d vol_ϕ_n.
Now, the last integral may be manipulated as follows. We have
⟨ (w+1)e_1,∇ w⟩ = div_ϕ_n (w(w+1)e_1)-w div_ϕ_n((w+1)e_1)
= div_ϕ_n (w(w+1)e_1)-w(w+1) div_ϕ_n e_1-w⟨∇ w,e_1⟩
= div_ϕ_n (w(w+1)e_1)+x_1w(w+1)-w∂ w/∂ x_1,
so that
∫_Ω⟨ (w+1)e_1,∇ w⟩ d vol_ϕ_n=∫_Ω x_1w(w+1) d vol_ϕ_n-
∫_Ω w∂ w/∂ x_1 d vol_ϕ_n,
and we see
that altogether (<ref>) gives
c^2∫_Σ x_1 d area_ϕ_n=
-∫_Ω x_ 1|∇ w|^2 d vol_ϕ_n+2∫_Ω x_1w(w+1) d vol_ϕ_n-2∫_Ω w∂ w/∂ x_1 d vol_ϕ_n.
Comparing this with (<ref>) we conclude that
∫_Ω x_1w(w+1) d vol_ϕ_n-∫_Ω w∂ w/∂ x_1 d vol_ϕ_n
-∫_Ω x_ 1|∇ w|^2 d vol_ϕ_n>0.
However,
∫_Ω x_1w(w+1) d vol_ϕ_n = -∫_Ω x_1wΔ_ϕ_n w d vol_ϕ_n
= ∫_Ω⟨∇ (x_1w),∇ w⟩ d vol_ϕ_n
= ∫_Ω(w∂ w/∂ x_1+x_1|∇ w|^2) d vol_ϕ_n,
so we end up with 0>0, a contradiction which completes the proof.
(of Theorem <ref>) By Proposition <ref> there exists w:Ω→ℝ satisfying (<ref>), so Proposition <ref> applies to ensure that |∇ w|=c everywhere on Ω. In particular, Δ_ϕ_n(|∇ w|^2)=0 and the equality holds in (<ref>). Thus, by Proposition <ref>, ∇^2w≡ 0 and w is an affine function. It follows that Σ=∂Ω lies in a hyperplane (because w vanishes there) and since
Ωℋ^n_ u,0, the result follows.
Our proof of Theorem <ref> is modeled on <cit.>, where solutions of the over-determined system
{[ -Δ w-(k-1)Kw=1 Ω; w=0 Σ; |∇ w| =c Σ ].
are shown to be quite rigid.
Here, Ω⊂𝒮_K,+^k-1, where 𝒮_K,+^k-1 is a round hemisphere of dimensional k-1 and radius 1/√(K), K>0. In particular, it is proved that Ω is a geodesic ball. Now, if we take K=1/k, so that 𝒮_1/k,+^k-1 is a hemisphere of radius √(k), and take the limit as k→ +∞ then (<ref>) formally converges to the problem
{[ -Δ_∞ w-w=1 Ω; w=0 Σ; |∇ w| =c Σ ].
Besides precisely determining the nature of the elusive “Laplacian” Δ_∞, the question remains of realizing in which space this limiting problem should be treated. From an entirely heuristic viewpoint, we may approach this by making use of the Poincaré's limit theorem (see Section <ref> below), which may be loosely interpreted as saying that the push-forward of the weighted manifold (𝒮_1/k^k-1,δ_ sph_k,e^-ϕ_(k)d vol_δ_ sph_k) under the orthogonal projection Π_k,n:ℝ^k→ℝ^n converges in a suitable sense to the Gaussian space (ℝ^n,δ,e^-ϕ_n) as k→+∞. Here, δ_ sph_k is the standard spherical metric in 𝒮_1/k^k-1 and ϕ_(k)>0 is chosen such that vol_δ_ sph_k(𝒮_1/k^k-1)=e^ϕ_(k), so that ℙ_k:=e^-ϕ_(k)d vol_δ_ sph_k-1 defines a probability measure. This clearly suggests that, as we did above, the limiting problem (<ref>) should be treated in Gaussian half-space ℋ^n_ u,0=(ℝ^n_+,δ,e^-ϕ_n), with the proviso that Δ_∞ must be replaced by the corresponding weighted Laplacian Δ_ϕ_n. With the right over-determined system at hand, it remains to transplant to our setting the methods from <cit.>, which rely on a Pohozhaev-type identity verified in <cit.>. As it is well-known, this kind of identity is a manifestation of the existence of certain conformal fields on spheres which may be intrinsically characterized as gradient vector fields of functions lying in the first eigenspace of the metric Laplacian. Since spectral data behave quite well under Poincaré's limit <cit.>, we were naturally led to formulate Proposition <ref> in terms of parallel vector fields in Gaussian space, since they can be written as ∇ψ with ψ satisfying Δ_ϕ_nψ+ψ=0.
§ A REILLY FORMULA AND AN ALEXANDROV-TYPE THEOREM IN GAUSSIAN HALF-SPACE
Inspired by Theorem <ref>, and taking into account the heuristics behind it explained in Remark <ref>, we are led to speculate on possible extensions of other classical results to the Gaussian setting by similar methods. The next theorem confirms this expectation and aligns itself with the well-known fact that Serrin's uniqueness result (or rather its reformulation by Weinberger <cit.>) may be used to provide an alternate proof of the classical Alexandrov's soap bubble theorem (cf. the comments in <cit.>).
Let Σ⊂ℋ^n_ u,0 be a smooth embedded hypersurface whose weighted mean curvature is (a non-zero) constant. Then Σ=∂ℋ^n_ u,d for some d≥ 0.
There are no compact embedded hypersurfaces with (non-zero) constant weighted mean curvature in ℋ^n_ u,0.
We start the proof of this result by recalling that if B is a bi-linear symmetric form on vectors on Ω, its ϕ-divergence is given by
div_ϕ B=e^ϕ∘ div B∘ e^-ϕ= div B- i_∇ϕB,
where div B_j=∇^iB_ij is the metric divergence and i_Y means contraction by a vector field Y.
Under the conditions above, if h is a smooth function on Ω then
div_ϕ(h i_YB)
= B(∇ h,Y) +h i_Y div_ϕ B+h/2⟨ B,ℒ_Yg⟩,
where ℒ denotes Lie derivative.
We have
div(h i_YB)
= ∇^i(hB_ijY^j)
= B_ij∇^i h X^j+h∇^iB_ij Y^j+hB_ij∇^iY^j
= B(∇ h,Y) + h i_Y div B +
h/2⟨ℒ_Yg,B⟩,
and using (<ref>) twice,
div_ϕ(h i_YB)
=
B(∇ h,Y) + h i_Y divB +
h/2⟨ℒ_Yg,B⟩ -h i_∇ϕ( i_YB)⟩
=
B(∇ h,Y)+h i_Y( div_ϕ B+ i_∇ϕB)
+
h/2⟨ℒ_Yg,B⟩ -h i_∇ϕ( i_YB)⟩,
so the result follows.
It turns out that (<ref>), for appropriates choices of B and Y, is the starting point in establishing a Reilly-type formula holding for any weighted domain (Ω,g,d vol_ϕ) for which our standing assumptions hold.
Under our standing assumptions, if h and f are functions on Ω then there holds
∫_Ω h((Δ_ϕ f+f)^2-|∇^2f|^2)
= ∫_Σ h(2f_νΔ_Σ,ϕ f+H_ϕ f_ν^2+A_Σ(Df,Df)+2f_ν f)
+∫_Σ h_ν(|Df|^2-f^2)
+∫_Ω(
(∇^2h)(∇ f,∇ f)-Δ_ϕ h· |∇ f|^2 -2h|∇ f|^2+h Ric_ϕ(∇ f,∇ f) )
+∫_Ω(Δ_ϕ h+h)f^2,
where A_Σ is the second fundamental form of Σ and f_ν=⟨∇ f,ν⟩, etc.
We take B=∇^2 f and Y=∇ f in (<ref>) and integrate over Ω to obtain
∫_Ω h|∇^2f|^2=∫_Σ h(∇^2f)(∇ f,ν)-∫_Ω (∇^2f)(∇ h,∇ f)-∫_Ω h( div_ϕ∇^2f)(∇ f),
where for convenience we omit the weighted area and volume elements in the integrals.
Now,
-∫_Ω (∇^2f)(∇ h,∇ f)
=
-1/2∫_Ω⟨∇ h,∇(|∇ f|^2)⟩
= 1/2∫_ΩΔ_ϕ h·|∇ f|^2-1/2∫_Σ |∇ f|^2h_ν
and the Ricci identity easily implies that
-∫_Ω h( div_ϕ∇^2f)(∇ f)=-∫_Ω h⟨∇(Δ_ϕ f),∇ f⟩ -∫_Ω h Ric_ϕ(∇ f,∇ f),
so that
∫_Ω h|∇^2f|^2
= ∫_Σ h(∇^2f)(∇ f,ν) + 1/2∫_ΩΔ_ϕ h·|∇ f|^2-1/2∫_Σ |∇ f|^2h_ν
-∫_Ω h⟨∇(Δ_ϕ f),∇ f⟩ -∫_Ω h Ric_ϕ(∇ f,∇ f).
On the other hand, the next to the last term in the right-hand side above may be treated as
-∫_Ω h⟨∇(Δ_ϕ f),∇ f⟩ = -∫_Ω⟨∇(hΔ_ϕ f),∇ f⟩ +∫_ΩΔ_ϕ f⟨∇ h,∇ f⟩
= ∫_Ω h(Δ_ϕ f)^2-∫_Σ hΔ_ϕ f· f_ν
+∫_ΩΔ_ϕ f⟨∇ h,∇ f⟩,
so we see that
∫_Ω h|∇^2f|^2
= ∫_Σ h(∇^2f)(∇ f,ν) -1/2∫_Σ |∇ f|^2h_ν
+ 1/2∫_ΩΔ_ϕ h·|∇ f|^2
-∫_Σ hΔ_ϕ f· f_ν +
∫_Ω h(Δ_ϕ f)^2+∫_ΩΔ_ϕ f⟨∇ h,∇ f⟩
-∫_Ω h Ric_ϕ(∇ f,∇ f).
We now observe that
∫_Ω h((Δ_ϕ f+f)^2-|∇^2f|^2)
= ∫_Ω h(Δ_ϕ f)^2+ 2∫_Ω hΔ_ϕ f· f
+∫_Ω hf^2-∫_Ω h|∇^2 f|^2
and
∫_Ω hΔ_ϕ f· f
= ∫_Σ h ff_ν-∫_Ω⟨∇(hf),∇ f⟩
= ∫_Σ h ff_ν-∫_Ω h|∇ f|^2 -∫_Ω f⟨∇ h,∇ f⟩,
so we obtain
∫_Ω h((Δ_ϕ f+f)^2-|∇^2f|^2)
= ∫_Ω h(Δ_ϕ f)^2 +
2∫_Σ h ff_ν-2∫_Ω h|∇ f|^2 -2∫_Ω f⟨∇ h,∇ f⟩
+∫_Ω h f^2
-∫_Σ h(∇^2f)(∇ f,ν) +1/2∫_Σ |∇ f|^2h_ν
- 1/2∫_ΩΔ_ϕ h·|∇ f|^2
+∫_Σ hΔ_ϕ f· f_ν -
∫_Ω h(Δ_ϕ f)^2-∫_ΩΔ_ϕ f⟨∇ h,∇ f⟩
+∫_Ω h Ric_ϕ(∇ f,∇ f)
= ∫_Σ hΔ_ϕ f· f_ν +1/2∫_Σ |∇ f|^2h_ν
-∫_Σ h(∇^2f)(∇ f,ν) + 2∫_Σ h ff_ν
- 1/2∫_ΩΔ_ϕ h·|∇ f|^2
-∫_ΩΔ_ϕ f⟨∇ h,∇ f⟩ +∫_Ω h Ric_ϕ(∇ f,∇ f)
-2∫_Ω h|∇ f|^2 -2∫_Ω f⟨∇ h,∇ f⟩ +∫_Ω h f^2.
Since we can handle the sixth and ninth terms in the right-hand side above as
-∫_ΩΔ_ϕ f⟨∇ h,∇ f⟩ = -∫_Σ f_ν⟨∇ h,∇ f⟩ + ∫_Ω⟨∇ f,∇⟨∇ h,∇ f⟩⟩
= -∫_Σ f_ν⟨∇ h,∇ f⟩ +∫_Ω(∇^2h)(∇ f,∇ f)+1/2∫_Ω⟨∇ h,∇ |∇ f|^2⟩
= -∫_Σ f_ν⟨∇ h,∇ f⟩ +∫_Ω(∇^2h)(∇ f,∇ f)
+1/2∫_Σ |∇ f|^2h_ν-1/2∫_ΩΔ_ϕ h· |∇ f|^2
and
-2∫_Ω f⟨∇ h,∇ f⟩ = -∫_Ω⟨∇ h,∇ f^2⟩
= -∫_Σ f^2 h_ν +∫_Ω f^2Δ_ϕ h,
we end up with
∫_Ω h((Δ_ϕ f+f)^2-|∇^2f|^2)
= ∫_Σ h(Δ_ϕ f · f_ν-(∇^2f)(∇ f,ν))
+∫_Σ(|∇ f|^2h_ν-f_ν⟨∇ h,∇ f⟩)
+ 2∫_Σ hff_ν -∫_Σ h_ν f^2
+∫_Ω(
(∇^2h)(∇ f,∇ f)-Δ_ϕ h· |∇ f|^2 -2h|∇ f|^2+h Ric_ϕ(∇ f,∇ f) )
+∫_Ω(Δ_ϕ h+h)f^2.
We now handle the first two boundary terms above by computing in an adapted frame along Σ. First,
∫_Σ h(Δ_ϕ f · f_ν-(∇^2f)(∇ f,ν))
= -∫_Σ h⟨∇ϕ,∇ f⟩ f_ν +
∫_Σ h(Δ f · f_ν-(∇^2f)(∇ f,ν))
= -∫_Σ h⟨∇ϕ,∇ f⟩ f_ν
+∫_Σ h(f_νΔ_Σ f+H f_ν^2-⟨ D f_ν,Df⟩+A_Σ(Df,Df))
= -∫_Σ h⟨∇ϕ,∇ f-f_νν⟩ f_ν
+∫_Σ hf_ν⟨ Dϕ,Df⟩
+∫_Σ h(f_νΔ_Σ,ϕ f+H_ϕ f_ν^2-⟨ D f_ν,Df⟩+A_Σ(Df,Df))
= ∫_Σ h(f_νΔ_Σ,ϕ f+H_ϕ f_ν^2
-⟨ D f_ν,Df⟩+A_Σ(Df,Df)),
where we used Gauss-Weingarten in the second step.
Also,
∫_Σ(|∇ f|^2h_ν-f_ν⟨∇ h,∇ f⟩)
= ∫_Σ(|Df|^2 h_ν-f_ν⟨ Dh,Df⟩)
= ∫_Σ(|D f|^2h_ν -⟨ D(f_ν h),Df⟩
+h⟨ Df_ν,Df⟩)
= ∫_Σ(
|D f|^2h_ν +h⟨ Df_ν,Df⟩
+f_ν hΔ_Σ,ϕf
),
so if we substitute these identities back in (<ref>) the result follows after a few manipulations.
(of Theorem <ref>) By rotational invariance we may assume that u=e_1. We apply our Reilly formula in Proposition <ref> with h=x_1, so that Δ_ϕ_n h+h=0 and choose f satisfying the Dirichlet problem
{[ Δ_ϕ_n f+f=1 Ω; f=0 Σ ].
where we may assume that Σ≠∂ℋ^n_e_1,0, hence bounding a domain Ωℋ^n_e_1,0; cf. Remark <ref>.
Since Ric_ϕ_n=δ, the formula reduces to
∫_Ω x_1-∫_Ω|∇^2f|^2=∫_Σ H_ϕ_n x_1f_ν^2= H_ϕ_n∫_Σ x_1f_ν^2,
and using Cauchy-Schwarz,
∫_Ω x_1-∫_Ω|∇^2f|^2≥ H_ϕ_n(∫_Σ x_1 f_ν)^2/∫_Σ x_1.
Now, as a consequence of (<ref>),
∫_Σ x_1 f_ν =∫_Ω x_1Δ_ϕ_n f - ∫_Ω fΔ_ϕ_n x_1=∫_Ω x_1.
On the other hand, the Minkowski formula in Gaussian space <cit.> gives
∫_Σ x_1=-H_ϕ_n∫_Σ⟨ e_1,ν⟩,
and since
∫_Σ⟨ e_1,ν⟩=∫_Ω div_ϕ_n e_1=-∫_Ω x_1,
we see from (<ref>) that ∇^2f≡ 0 and therefore f is an affine function. Thus,
Σ lies in a hyperplane and since Ω⊂ℋ^n_e_1,0, the proof is completed.
Proposition <ref> implies that an optimal domain Ωℋ^n_e_1,0 for ℰ_-1,-1 supports a function f satisfying the over-determined sustem
{[ Δ_ϕ_n f+f=1 Ω; f=0 Σ; |∇ f|=c Σ ].
for some c>0.
Although the methods of Section <ref>, which are appropriate to handle the slightly different problem in (<ref>), do not seem adequate to provide a rigidity result for such domains, the proof above
shows that if we replace the third condition in (<ref>) by the assumption that the weighted mean curvature of Σ=∂Ω is constant then rigidity is recovered.
Our argument leading to Theorem <ref> is modeled on the proof of <cit.>, where Alexandrov's soap bubble theorem in simply connected space forms is retrieved as a consequence of a generalized Reilly formula <cit.>. It turns out that if we apply their formula to (a compact domain in) the round hemisphere 𝒮^k-1_1/k,+ in Remark <ref>, take the limit as k→ +∞ and argue (always on heuristic grounds!) that not only the metric Laplacian but also the standard mean and Ricci curvatures should give rise to their weighted counterparts along Poincaré's convergence, we obtain our Reilly-type formula in Proposition <ref> (as applied to Gaussian space). Of course, the long computation in the proof of this latter proposition provides a rigorous justification of this rather informal argument based on Poincaré's limit (Section <ref>). Finally, we mention that Proposition <ref> (again, as applied to Gaussian space) also follows formally as the appropriate Poincaré's limit of a Reilly-type formula in <cit.>, which by its turn generalizes to weighted domains the one in <cit.> referred to above.
Hypersurfaces in Gaussian space with constant weighted mean curvature are usually referred to as λ-hypersurfaces and there is a huge literature on trying to classify them under a most varied set of assumptions. We refer to <cit.> for a recent survey on the subject.
§ POINCARÉ'S LIMIT THEOREM
For the reader's convenience, we provide here the precise formulation of Poincaré's limit theorem <cit.>, which has been used in a rather informal way in the bulk of the text. Using the notation of Remark <ref>, let
Π_k,n:ℝ^k→ℝ^n be the orthogonal projection associated to the natural embedding ℝ^n↪ℝ^k and let (𝒮^k-1_1/k,ℙ_k) be the
probability space defined by ℙ_k=e^-ϕ_(k)d vol_δ_ sph_k.
With this terminology, and using standard probabilistic jargon, Poincaré's limit theorem says that the
random vectors
Π_k,l=Π_k,n|_𝒮^k-1_1/k:(𝒮^k-1_1/k,ℙ_k)→ℝ^n
converge weakly to a ℝ^n-valued random vector Z whose distribution is the Gaussian density d vol_ϕ_n=e^-ϕ_nd vol_δ,
which means that 𝔼(ψ(Π_k,l))→𝔼(ψ(Z))) for any ψ:ℝ^n→ℝ uniformly bounded and continuous. The standard probabilistic proof of this remarkable result combines the Law of Large Numbers and the well-known fact that a
random vector Z^[k] uniformly distributed over 𝒮^k-1_1/k may be expressed as
Z^[k]=√(k)X^[k]/X^[k],
where X^[k] is a ℝ^k-valued independent standard Gaussian vector <cit.>.
With a bit more of effort it may be checked that
this statement actually holds true with ψ= 1_A, the indicator function
of an arbitrary Borel set A⊂ℝ^n. Precisely,
lim_k→ +∞ℙ_k(Π_k,n^-1(A))
= vol_ϕ_n(A);
see <cit.> or <cit.>, where it is also explained how this sharpened version may be used to transfer the solution of the isoperimetric problem from (𝒮^k-1_1/k,ℙ_k) to Gaussian space as k→ +∞, a celebrated result first proved by Borell <cit.> and Sudakov-Tsirel'son <cit.> and which lies at the heart of the “concentration of measure phenomenon” <cit.>.
Inspired by this circle of ideas we are led to loosely interpret (<ref>) as saying that the “projections” of the weighted manifolds (𝒮^k-1_1/k,δ_ sph_k,ℙ_k) onto ℝ^k converge (in a sense whose actual meaning is not relevant here) to the Gaussian space (ℝ^n,δ,d vol_ϕ_n), a perspective we have naively explored above in order to transplant problems and techniques from (high dimensional) spheres to Gaussian space. Finally, we refer to
<cit.>
for a detailed guide to the proofs of both versions of this foundational result mentioned above.
alpha
|
http://arxiv.org/abs/2409.03023v1 | 20240904182822 | Machine learning of phases and structures for model systems in physics | [
"Djenabou Bayo",
"Burak Çivitcioğlu",
"Joseph J Webb",
"Andreas Honecker",
"Rudolf A. Römer"
] | cond-mat.dis-nn | [
"cond-mat.dis-nn"
] |
JuliaQCD: Portable lattice QCD package in Julia language
Akio Tomiya
September 9, 2024
========================================================
§ INTRODUCTION
Identification of critical points separating distinct phases of matter is a central pursuit in condensed matter and statistical physics <cit.>. This task requires a thorough understanding of the global behavior of the many-body system because phenomena may emerge that are very difficult to derive from microscopic rules <cit.>.
Traditional analytic methods and numerical simulations have proven effective in understanding these complex systems <cit.>, but they often come with limitations, particularly in high-dimensional parameter space <cit.>.
Machine-learning methods, particularly supervised <cit.> and unsupervised learning techniques <cit.>, have in the last years appeared in physics as a novel strategy to bypassing some of these limitations <cit.>. Convolution neural networks (CNN), a class of deep, i.e., multi-layered, neural networks (DNNs) in which spatial locality of data values is retained during training, have, when coupled with a form of residual learning <cit.>, shown to allow astonishing precision when classifying images, e.g., of animals <cit.> and handwritten characters <cit.>, or when predicting numerical values, e.g., of market prices <cit.>.
These supervised learning strategies similarly yield promising predictions in identifying critical points or phases in parameter space <cit.>, providing an alternative and potentially more efficient way of exploring complex systems.
By now, the evidence in favour of supervised machine-learning methods' efficacy in identifying different phases of a physical system appears compelling <cit.>.
Unsupervised learning and semi-unsupervised learning approaches have also demonstrated the ability to reconstruct the outlines of a system's phase diagram.<cit.>. The potential to identify structural changes within a system further supports the significance of these techniques in modern scientific exploration <cit.>.
Among the various models studied in the context of machine learning and statistical physics, the Ising model on the square lattice has served as an important benchmark <cit.> due to the simplicity of its two thermal phases, the low-temperature ferromagnet and the high-temperature paramagnet, and the ready availability of its exact solution <cit.> with exactly known critical temperature.
We note that the use of ML to determine phases from just the spin configurations suggests that these themselves should contain sufficient information to identify phases, providing a level of physical insight that was, while not unknown, at least not as clear as it now seems.
We also mention related work on multi-layer <cit.> and Potts models <cit.>, where the latter include the Ising model as the q=2 case.
Percolation can be considered as the q→1 limit of the Potts model <cit.>
and yields another class of paradigmatic models to which machine-learning techniques have been applied to identify the non-spanning and spanning phases<cit.>.
Previous ML studies have mostly used supervised learning in order to find the two phases via ML classification <cit.>. An estimate of the critical exponent of the percolation transition has also been given <cit.>. The task of determining the transition threshold, p_c, was further used to evaluate different ML regression techniques<cit.>.
For unsupervised and generative learning, less work has been done <cit.>. While some successes have been reported <cit.>, other works show the complexities involved when trying to predict percolation states <cit.>.
Disordered electron systems provide quantum systems with similarly rich phase diagrams. Examples are given by the Anderson insulator<cit.>, diffusive metals<cit.>, the quantum Hall<cit.> and quantum anomalous Hall insulators<cit.>, Weyl semimetals<cit.>, as well as topological insulators<cit.>. In these cases, the thermal states investigated for Ising-type models are replaced by quantum mechanical eigenfunctions, or variations thereof such as the local density of states (LDOS). These have specific features in each phase but, due to the random nature of these systems, precisely determining a phase from
an LDOS is difficult.<cit.> Recent supervised learning work on the Anderson model of localization, capturing the features of eigenfunctions across the delocalization–localization transition,<cit.> as well as further transfer-learning approaches to the disordered Chern insulator–Anderson insulator transition,<cit.> have shown to allow a seemingly accurate description of phases and phase boundaries.
The power of generative machine learning has not yet been harnessed to the same extent. This is partly because it is still a relatively novel machine learning strategy <cit.>. In brief, the difference to the supervised methods lies in the generative methods being able to seemingly create novel predictions which do not appear in any of the provided data.
For example, in computer vision, generative networks construct previously non-existent high-resolution images, conditional on information from other images <cit.>.
Here, we will show how to use such generative ML strategies to study the phases for the J_1-J_2 Ising model, an extension of the Ising model that incorporates competing interactions across the diagonals of the Ising squares and presents a more challenging 3-phase structure. As ML generator, we shall use a so-called variational autoencoder (VAE), a type of neural network that reconstructs a given predicted state after being trained on a selected set of states <cit.>.
The application of ML to structure determination via electron diffraction has also blossomed in the last decade <cit.>. ML strategies have been used to reduce the data flow in single-molecule data classification <cit.>, convolutional neural nets (CNNs) were shown to help with phase reconstruction for convergent-beam electron diffraction (CBED)-based scanning transmission electron microscopy (TEM) <cit.> while molecular structure imaging was found to benefit from such CNNs as well <cit.>. At the core of the deep learning methods employed in these works lie the same supervised DL techniques as used for phase determination. Again, generative ML for electron diffraction is not so common. Here, we will show how a so-called conditional generative adversarial network (cGAN) can be used to make accurate predictions of large-angle CBED (LACBED) images from just standard crystal information as encoded, e.g., in the usual text information<cit.> given in the Inorganic Crystal Structure Database (ICSD) <cit.>, the world’s largest such database.
§ A BRIEF RECAP OF THE ML APPROACH TO PHASES AND STRUCTURES
§.§ Classification and regression
Machine learning (ML) differs from traditional programming in that it does not rely on explicit rules to solve tasks. Instead, the network is expected to develop a strategy based on the input dataset to accomplish the required task.
There are three primary types of learning: supervised learning, unsupervised learning, and reinforcement learning.<cit.> Here, we will focus mainly on the first two.<cit.>
Supervised learning aims to discover the optimal strategy for performing a task by using a labeled dataset.
Within supervised learning, two key tasks can be identified: classification and regression.
In classification, the ML model learns to divide data into distinct categories. Essentially, it finds an optimal representation of the dataset that separates samples into different classes.
In regression, the algorithms are trained to understand the relationship between inputs and labels, enabling them to make continuous predictions for new, unseen labels based on the given inputs. This sets regression apart from classification, as it allows the model to predict values for data not encountered during training.
The second type of learning is unsupervised learning. In this approach, the ML algorithm processes unlabeled data and is expected to uncover hidden patterns or correlations without any external guidance.
Unsupervised learning is further divided into three categories: clustering, dimensionality reduction, and association learning.
Clustering aims to group similar samples within the dataset. Dimensionality reduction seeks to simplify the data representation while retaining its essential characteristics. Association learning explores relationships between different samples in the dataset. Unsupervised learning has a wide range of applications. It can be used as a preprocessing step to reveal the structure of a dataset before supervised learning begins.<cit.> It also powers generative methods, such as VAEs and GANs, which create new data samples.
§.§ Generative ML: VAEs and cGANs
A Variational Autoencoder (VAE) represents a relatively recent deep learning architecture that integrates standard compression techniques with the regularization strategies of machine learning, functioning simultaneously as a generative model <cit.>.
In essence, a VAE comprises an encoder, which is a multilayered neural network trained on input data to generate output parameters for a variational distribution. These parameters define a low-dimensional probabilistic distribution, referred to as the latent space.
The decoder, another deep neural network architecture, then reconstructs the output data from the latent space, drawing samples from this space rather than selecting deterministic points.
When the latent space dimensionality, d, is significantly smaller than the information content of the input data, some degree of information loss is inevitable. Thus, the goal is to design the encoder and decoder in such a way that maximizes the preservation of information during encoding while minimizing the error in the reconstructed data during decoding.
To effectively train a VAE, two primary loss functions are utilized. The reconstruction loss ℓ_ε measures the discrepancy between the input and reconstructed output during training. Additionally, the Kullback-Leibler divergence <cit.>, which serves as a regularization term, ensures that the latent space approximates a standard normal distribution <cit.>.
In practice, the training process involves minimizing a total loss ℓ, which is a combination of the reconstruction loss ℓ_ε and the Kullback-Leibler loss ℓ_KL, such that ℓ = ℓ_ε + c ℓ_KL, where c is a hyperparameter that balances the two components <cit.>.
GANs have emerged as a highly popular architecture for image-to-image translation tasks <cit.>. While VAEs are known to struggle with producing high-fidelity outputs, often resulting in blurriness <cit.>, GANs inherently avoid this issue by design <cit.>. An absence of blurriness is particularly critical in quantitative electron diffraction, where clarity is essential. For this reason, we focus on conditional GANs (cGANs) <cit.>, which are well-suited to our image-to-image task involving the learning of a mapping from an input image x and random noise vector z to a target image y, denoted as G: x, z→ y. In this context, G represents the generator.
GANs also introduce a second component, the discriminator, denoted as D. The discriminator is trained to differentiate between `real' images from the dataset and `fake' images generated by G. This adversarial setup ensures that the generator improves over time, as the discriminator learns to recognize blurry images as fake, thereby driving the generator to produce sharper outputs.
Unlike VAEs, which rely on a predefined loss function, GANs instead learn a loss function for the desired task, solving another problem: deciding which loss function to use for comparing diffraction patterns is not apriori clear and can vary between different applications.
§ PREDICTING PERCOLATING CLUSTERS WITH CNNS
This section reviews work done previously,<cit.>
where we showed that standard CNNs, usually employed in image recognition ML tasks, also work very well for classifying site percolation states according to occupation probability p as well as for regression when determining p from such states.
However, analyzing in detail whether spanning clusters at p < p_c or non-spanning clusters at p> p_c are correctly identified, we found that the same CNNs consistently fail to reflect the ground truth. Rather, it appears that the CNNs use p as a proxy measure to inform their classification predictions — a strategy that is obviously false for the percolation problem.
§.§ The physics model of “percolation”
The percolation problem is well-known with a rich history across the natural sciences <cit.>. It provides the usual statistical characteristics across a second-order transition such as, e.g., critical exponents, finite-size scaling, renormalization and universality <cit.>.
Briefly, on a percolation lattice of size L × L, individual lattice sites x⃗=(x,y), x,y ∈ [1,L], are randomly occupied with occupation probability p such that the state ψ of site x⃗ is ψ(x⃗)=1 for occupied and ψ(x⃗)=0 for unoccupied sites.
We say that a connection between neighboring sites exists when these are side-to-side nearest-neighbors on the square lattice, while diagonal sites can never be connected. A group of these connected occupied sites is called a cluster (cf. Fig. <ref>(a)).
Such a cluster then percolates when it spans the whole lattice either vertically from the top of the square to the bottom or, equivalently, horizontally from the left to the right. Obviously, for p=0, all sites are unoccupied and no spanning cluster can exist while for p=1 the spanning cluster trivially extends throughout the lattice.
In Fig. <ref>(a), we show examples of percolation clusters generated for various p values.
The percolation threshold is at p=p_c(L), such that for p< p_c(L) most clusters do not span while for p > p_c(L)
there is at least one spanning cluster.
This can be expressed via
the quantities P(p), Q(p)=1-P(p)
that denote the probabilities of the presence or absence of the spanning cluster at a given p, respectively (cf. Fig. <ref>(b)).
We note that P is a finite-L version of ψ in the notation of <cit.>.
We will occasionally emphasize this point using P_L and, likewise, Q_L.
For an infinite system (L→∞), one finds the emergence of an infinite spanning cluster at p_c=0.59274605079210(2). This estimate has been determined numerically evermore precisely over the preceding decades <cit.> while no analytical value is yet known <cit.>.
§.§ The ML approach to the percolation problem and the generation of ML “data”
Several ML studies on the percolation model have been
published, mostly using supervised learning in order to identify the two phases via ML classification <cit.>.
In order to facilitate the recognition of percolation with image recognition tools of ML, we have generated finite-sized L × L, with L=100, percolation states, denoted as ψ_i(p), for the 31 p-values 0.1, 0.2, …, 0.5, 0.55, 0.555, 0.556, …, 0.655, 0.66, 0.7, …, 0.9. For each such p, N=10000 different random ψ_i(p) have been generated.
Each state ψ_i(p), i=1, …, N, is of course just an array of numbers with 0 denoting unoccupied and 1 occupied sites. Nevertheless, we occasionally use for convenience the term “image” to denote ψ_i(p).
The well-known Hoshen-Kopelman algorithm <cit.> is employed to identify and label clusters from which we (i) compute s(p) and (ii) determine the presence or absence of a spanning cluster. Correlation measures have also been calculated but are not shown here for brevity.<cit.>
We emphasize that in the construction, we took care to only construct states such that for each p, the number of occupied sites is exactly N_occ= p × L^2 and hence p can be used as exact label for the supervised learning approach. Hence p= N_occ / L^2 can also be called the percolation density.
For the ML results discussed below, it will also be important to note that the spacing between p values reduces when p reaches 0.5 with the next p value given by 0.55 and then 0.555. Similarly, the p spacing increases as 0.655, 0.66, 0.7. We will later see that this results in some deviations from perfect classification/regression. For reference, we now have 12 values p=0.1, …, 0.58< p_c(100) and 18 values p=0.59, …, 0.9> p_c(100). We also note that the training set contains 92.7% of states without a spanning cluster below p_c and 94.8% are spanning above p_c.
We have also generated similar training and test sets for L=200; our results do not change significantly<cit.>.
Last, all our ML results have been obtained from ten training, validation and test cycles allowing us to quote ML indicators, such as losses, accuracies, in terms of averages and their errors.<cit.> Our CNN uses the ResNet18 implementation of PyTorch.<cit.>
§.§ Results for ML classification according to spanning or non-spanning properties
The hallmark of the percolation transition is the existence of a spanning cluster which determines whether the system is percolating or not <cit.>.
We now want to check this and label all states according to whether they are spanning or non-spanning. From Fig. <ref>(b), it is immediately clear that for finite-sized systems considered here, there are a non-negligible number of states which appear already spanning even when p < p_c and, vice versa, are still non-spanning when p > p_c. Furthermore, we note that for such L, the difference between p_c and p_c(L) is large enough to be important and we hence use p_c(L) as the appropriate value to distinguish the two phases.
Fig. <ref> shows the averaged results after ϵ=20 with a validation loss of min_ϵ[⟨ l_c,val⟩]=0.165 ± 0.001 (corresponding to a maximal validation accuracy max_ϵ[⟨ a_c,val⟩]= 92.702%± 0.001).
At first glance, the figure seems to indicate a great success: from the 31000 states present in τ, 11510.6 have been correctly classified as non-spanning (i.e., N→ N'), and 17206.9 as spanning (S→ S') while only 1223.1 are wrongly labeled as non-spanning (S→ N') and 1059.4 as spanning (N→ S') (We note that these numbers are not integers since they are computed as averages over 10 independent training runs <cit.>).
Overall, we would conclude that 92.6% of all test states are correctly classified while 7.4% are wrong.
However, from the full percolation analysis for τ, we can compute that there are 11127 states (92.7%) without a spanning cluster below p_c(L) while 873 states (7.3%)
already contain a spanning cluster. Similarly, for p>p_c(L), 94.9% of states, equivalent to 17075 states, are spanning and 5.1% are not, corresponding to 925 states. At p_c(L)=0.585, we furthermore have 482 spanning and 518 non-spanning states. Hence in total, we expect 2280 wrongly classified states.
Since the last number is very close to the actual number of 2282.5 of misclassified states, this suggests that it is precisely the spanning states below p_c(L) and the non-spanning ones above p_c(L) which the DL network is unable to recognize.
Let us rephrase for clarity: it seems that the CNN, when trained in whether a cluster is spanning or non-spanning, completely disregards this information in its classification outputs.
We show that this is indeed the case by a detailed analysis of the clusters around p_c as well as test sets which have been constructed to allow testing for the existence of the spanning cluster.<cit.>
In summary, when looking at p, classification and regression techniques for percolation states allow us to obtain good recognition with near-perfect ⟨ a_c,val⟩ = 99.323%± 0.003) for classification (cf. also Fig. <ref>(c)) and near-zero ⟨ l_r,val⟩ =0.000062 ± 0.000012 average mean-square loss for regression.<cit.>
On the other hand, the DL network completely ignores whether a cluster is spanning or non-spanning, essentially missing the underlying physics of the percolation problem — it seems to still use p as its main ordering measure. We believe that the root cause of the failure to identify the spanning clusters, or their absence, lies in the fundamentally local nature of the CNN: the filter/kernels employed in the ResNets span a few local sites only. Hence it is not entirely surprising that such a CNN cannot correctly identify the essentially global nature of spanning clusters. But it is of course exactly this global percolation that leads to the phase transition. This should serve as a warning to enthusiastic proponents of the ML approach not to ignore the physics undeservedly.
§ RESOLVING DISORDER STRENGTHS FROM IMAGES OF THE 3D ANDERSON MODEL
One of the hardest challenges in modern eigenvalue computation is the numerical solution of large-scale eigenvalue problems, in particular those arising from quantum physics<cit.>.
Typically, these problems require the computation of some eigenvalues and eigenvectors for systems which have up to several million unknowns due to their high spatial dimensions.
Here, the Anderson model of localization<cit.> is a particularly paradigmatic model as its underlying structure involves random perturbations of matrix elements which invalidates simple preconditioning approaches based on the graph of the matrices.<cit.>
Its physical importance comes from the prediction of a spatial confinement of the electronic motion upon increasing the disorder – the so-called Anderson localization.<cit.> When the model is used in three spatial dimensions, it exhibits a metal-insulator transition in which the disorder strength w mediates a
change of transport properties from metallic behavior at small w via critical behavior at the transition w_c ∼ 16.57 to insulating behavior and strong localization at larger w> w_c.<cit.>
The 3D Anderson model hence provides us with a physically meaningful quantum problem in which to use ML strategies to distinguish its two phases, namely the metallic phase with extended states at w<w_c and the insulating phase with localized states at w>w_c (Occasionally, one might want to also study w ≈ w_c as a 3rd phase), while avoiding the many challenges of fully interacting quantum systems.<cit.> In this sense, it can be seen as the quantum ML test partner to complement the classical statistical physics tests available via the percolation and Ising-type models.
Similarly to the percolation model, previous ML studies have already been performed and showed good success for ML classification with CNNs to identify the two phases of the system <cit.>.
Here, we show that not only phases but also disorder strengths can be recovered from eigenstates of the 3D Anderson model.
§.§ The formulation of the Anderson model in 3D
In its usual form, the localization problem in 3D with coordinates x, y, z corresponds, in the absence of a magnetic field, to a Hamilton operator in the
form of a real symmetric matrix H, with quantum mechanical energy levels given by
the eigenvalues E_n. The respective wave functions are simply the eigenvectors
of H, i.e., vectors ψ_n(r⃗) ∈ for r⃗=(x,y,z). With N = M^3 sites, the quantum mechanical (stationary) Schrödinger equation is equivalent to the eigenvalue equation
H ψ_n = E_n ψ_n, which in site representation reads as
∑_σ=±ψ_n(r⃗+σa⃗) + ψ_n(r⃗+σb⃗) + ψ_n(r⃗+σc⃗)
= [ E_n - ε(r⃗) ] ψ_n(r⃗),
with a⃗=(1,0,0), b⃗=(0,1,0) and c⃗=(0,0,1) denoting the lattice vectors of a periodic, simple cubic lattice.
The disorder usually<cit.> enters the matrix on the diagonal, where the entries ε_n(r⃗) correspond to a spatially
varying disorder potential and are selected randomly according to a suitable distribution.<cit.> Here, we shall use the standard box distribution ε(r⃗) ∈ [-w/2,w/2] such that w parameterizes the aforementioned disorder strength.
For disorders w ≪ w_c, most of the eigenvectors are extended, i.e., ψ_n(r⃗) fluctuating from site to site, but the envelope
|ψ_n| is approximately a nonzero constant. For large disorders w > w_c, all eigenvectors
are localized such that the envelope |ψ_n| of the nth eigenstate may be approximately
written as ∼exp[ -|r⃗ - r⃗_n|/ξ(w) ] with ξ(w) denoting the localization length of the eigenstate.
Directly at w = w_c, the last extended states at E=0 vanish.
The wave function vector ψ_E=0(r⃗) appears simultaneously extended and localized and has multifractal properties <cit.>.
In Fig. <ref>, we show examples of such states.
In order to numerically distinguish the two (or three) phases mentioned before, one usually needs to (i) go to rather large system sizes of order N^3=10^6 to 10^8 and (ii) average over many different realizations of the disorder, i.e., compute eigenvalues or eigenvectors for many matrices with different diagonals.<cit.>
In the present work, we concentrate on the computation of a few eigenvalues and corresponding eigenvectors for the physically most interesting case around the critical disorder w_c and in the center of the spectrum σ(H), i.e., at E = 0, for large system sizes.
§.§ Hamiltonian eigenfunctions as data
The square-normalized eigenstates ψ_n= ∑_x,y,zψ_n(x,y,z) |x,y,z⟩ have been numerically obtained using the Jadamilu library <cit.>. The |x,y,z⟩ indicate the orthonormal Wannier basis in the usual tight-binding formulation.
For the 17 disorders w = 15, 15.25, …, 16, 16.2, …, 17, 17.25, … 18 we consider for training and validation a previously used dataset<cit.> with 5000 disorder realization for each disorder and system sizes N= 20^3, 30^3, …, 100^3.
For all the data,<cit.> we have considered a single eigenstate per sample (disorder realization) with energy close to E = 0.
This is costly in terms of computing time but essential to avoid the noticable correlations that exist between eigenstates of the same sample.<cit.>
In addition, we have generated, for each of the disorders, 500 independent test wave functions at E=0, i.e., using random numbers with different seeds.
In order to be able to use standard 2D image recognition machine learning tools, we represent the ψ_n graphically as in Fig. <ref>. We remove the black box and the color scale before using the images for training, validation and testing purposes. Furthermore, the images are converted from their original postscript, using the ImageMagick set of routines, and rendered as portable network graphics (PNG) in the pixel resolutions of s=100× 100, 200× 200 and 500× 500. This conversion results in some changes in the visual presentation as shown in Fig. <ref>.
§.§ ML models and results
Previous ML studies of the Anderson model use CNNs composed of 6 convolutional layers and a fully connected layer to identify the extended and localized phases from the |ψ(x,y,z)|^2 <cit.>.
Here, our goal is to expand on these results show that a ResNet18, as used in section <ref>, can also recover the value of w used in images made from these |ψ|^2.
We first establish the capacity of the ResNet18 to identify the two phases of the 3D Anderson model of localisation from images (not shown here). Here, we want to train a network to identify individual disorder values. Following a similar strategy as in section <ref>, we train our network for 17 disorder values w= 15, 15.25, …, 16, 16.2, …, 17, 17.25, … 18 for fixed N=20^3, 40^3, 100^3 and s=100^2.
After training the 17 disorder values for N=20^3, we obtain a min_ϵ[⟨ l_c,val⟩]=2.408 ± 0.003 (corresponding to an accuracy of max_ϵ[⟨ a_c,val⟩] =15.9%± 0.2).
At first, the performance of the network on this system appears to be rather limited. From the confusion matrix obtained after training (not shown), we notice that only the smallest and largest disorders, i.e., w=15 and w=18, are perfectly classified.
We increase the size of the system and train our network for N=40^3. Following the training we reach min_ϵ[⟨ l_c,val⟩]=1.951 ± 0.004 (corresponding to an accuracy of max_ϵ[⟨ a_c,val⟩] =25.7%± 0.2). Looking at the metrics in Fig. <ref> (b), we notice the decrease of ⟨ l_c,val⟩ and ⟨ l_c,val⟩ between the training for N=20^3 and N=40^3.
Still, the apparent improvement in the performance of the network is not yet convincing.
We finally train for N=100^3 and s=100 × 100. We obtain min_ϵ[⟨ l_c,val⟩]=1.327 ± 0.006 (corresponding to an accuracy of max_ϵ[⟨ a_c,val⟩] =43.3%± 0.3).
This is an increase of almost 18%. Even though the accuracy is still less than 50%, the network seems to be getting better at recognizing the w values. The confusion matrix obtained after this training is given in Fig. <ref>(a). Clearly, the matrix is heavily diagonally dominant: misclassifications appear to exist mostly between directly adjacent disorder values. Thus, while the training does not result in a perfect recognition of w's, it is nevertheless already very good in recognizing the vicinity of each w, even very close to the metal-insulator transition.
Increasing the size of the input images to s=200 × 200 does not help to provide significant improvement. After training we obtain min_ϵ[⟨ l_c,val⟩]=1.216 ± 0.003 (corresponding to an accuracy of max_ϵ[⟨ a_c,val⟩] =47.9%± 0.2). Furthermore, training for such a large input leads to a substantial increase in training time.
In summary, we find that even using images of eigenstates allows to distinguish the phase of the 3D Anderson model well, while the classification of w values proceeds with nearly the same accuracy as in the case of classifying p for percolation in section <ref>. Furthermore, increasing the system size from N=20^3 to 100^3 improves the predictions considerably. Such finite-size effects remind us rather reassuringly that the ML strategies are obviously subject to the same physics constraints as standard approaches.
§ PREDICTING PHASES OF THE J_1-J_2 ISING MODEL WITH VAES
The J_1-J_2 Ising model serves as a still relatively simple system to illustrate an already more complex 3-phase behavior.<cit.>.
With J_1 denoting the nearest-neighbor interaction, the competing second-neighbor interaction J_2 gradually suppresses the ordering temperature, until it vanishes completely when J_2=|J_1|/2 <cit.>. Furthermore, beyond this point, a new ordered “superantiferromagnetic phase” appears.
The universality class of the transition into the superantiferromagnetic phase has been investigated early on
<cit.>, but still continues to attract attention since its nature remains controversial.<cit.>
There is at least also one investigation of this model on the D-wave quantum annealer <cit.>
and a small number of machine-learning investigations
<cit.>.
§.§ Definition of the J_1-J_2 Ising model
The Hamiltonian of the J_1-J_2 Ising model can be expressed as
H_J_1J_2 =
-J_1 ∑_⟨ i,j ⟩ s_i s_j +
J_2 ∑_⟨⟨ i,j ⟩⟩ s_i s_j ,
where s_i represents the spin at site i, which can be either up (+1) or down (-1); ⟨ i,j ⟩ refers to nearest-neighbor pairs, ⟨⟨ i,j ⟩⟩ denotes next-nearest neighbor pairs, while J_1, J_2 ≥ 0 signify the interaction strengths between the nearest and next-nearest neighbors, respectively.
Our chosen sign conventions in Eq. (<ref>) lead to a ferromagnetic coupling for J_1 pairs while next-nearest neighbors prefer to align in an antiferromagnetic structure.<cit.>
The three distinct phases of the model correspond to (i) a low-temperature, low-J_2 ferromagnet, (ii) a low-temperature, high-J_2 superantiferromagnet and (iii) the high-temperature paramagnet. We illustrate spin configurations representative of these phases and close-to-phase transitions in Fig. <ref>.
In this work, we review recent work aiming to predict the three phases with a generative VAE, using a spin-adapted mean-squared error ε as ML cost function.<cit.>
§.§ Generating states as ML training data via the Metropolis Monte-Carlo approach
To generate the necessary input data for the training of the VAE, we utilize the Metropolis algorithm, a well-established method for simulating statistical models at finite temperature <cit.>.
In the present investigation, we initially focus on a system size of 30×30 with periodic boundary conditions <cit.>. In order to assess the influence of the size of the system, we also investigate 60×60 and 120×120 square lattices.
Equilibration of the model can be difficult, in particular in the regime of
J_2 ≈|J_1|/2 <cit.>. We assure proper thermalization by successively cooling our configurations for fixed J_2/|J_1|.<cit.>
We set the energy scale with J_1=1.
For J_2=0, we are back to the nearest-neighbor Ising model with known critical temperature T_c,Ising≈ 2.269 <cit.>. We can therefore confidently start our exploration of the as-yet unknown phase diagram by choosing an initial temperature range of 0 ≤ T ≤ 4 ≈ 2 × T_c,Ising containing T_c,Ising.
We also know that the ferromagnetic-to-superantiferromagnetic transition is at J_2=1/2.<cit.> Hence we choose a range for J_2 from 0 to 1.5. Should we later see that these ranges do not suffice to capture all phases, we could further increase the maximal T and J_2 values.
Using Δ T = 0.025, we thus proceed with a set T of |𝒯|=157 temperatures with T ∈ [0.1,4] for T ∈ T. The Monte-Carlo construction is repeated with different random numbers until we have C=40 configurations for each temperature at the given values of J_2. Let J_2 = {
0, 0.1, 0.2, 0.3, 0.4, 0.45, 0.48, 0.49, 0.495, 0.5, 0.505,
0.51, 0.52, 0.55, 0.6, 0.65, 0.7, 0.8, 0.9, 1, 1.2, 1.5
} denote the |𝒥_2|=22 chosen distinct values.
In total, this results in a dataset containing |𝒯|× |𝒥_2| × C = 157 × 22 × 40 = 138 160 independent configurations for a given system size.
§.§ Reconstruction of the phase diagram using single-region VAEs
We can now use the VAE architecture to identify the phases of the J_1-J_2 model as a function of T and J_2 for constant J_1=1. Details of the VAE implementation can be found elsewhere.<cit.>
We start the training of the VAE for T ≪ T_c,Ising in two distinct regions, namely (i) J_2 < 1/2 and (ii) J_2 > 1/2. Consequently, we have two restricted training data regions
ρ_low-J_2 and ρ_high-J_2. In order to have a reasonable amount of training data, we use all 40 values for each (T, J_2) in each training region. For the results underlying Fig. <ref>(a), this amounts to 1440 training configurations in ρ_low-J_2, while for Fig. <ref>(b), we have 1800 configurations in ρ_high-J_2.
From Fig. <ref> we see that indeed two distinct regions emerge. The low-T, low-J_2 region shown in (a) is clearly separated from the rest of the (T, J_2) plane. Similarly, panel (b) establishes a low-T, high-J_2 region.
We note that in both cases, the ε values in the low/high-J_2 regions are close to zero, while in the other regions we have ε≈ 0.5.
This value suggests that in both cases, the out-of-region configurations have about 50% of spins different, in agreement with the behavior in the known phases.
We can therefore conclude the existence of two low-T regions identified in Fig. <ref>. By exclusion, the third region corresponds to ε≈ 0.5 from both trainings.
Indeed, these regions agree very well with the previously established phase boundaries shown in Fig. <ref>. The ε values of 0, 0.25, and 0.5 indicate best, random, and worst reconstruction possible, respectively, compatible with the spin configurations in each phase.
Clearly, the regions with ε≈ 0 correspond to the ordered ferro- and superantiferromagnetic phases in Fig. <ref> (a) and (b), respectively.
Further results with similar ML strategies as well as on a direct comparison of states can be found elsewhere.<cit.>
§ MICROSCOPY WITH GANS
Convergent-beam electron diffraction (CBED) <cit.> is a transmission electron microscopy (TEM) technique with unparallel sensitivity <cit.>. Its origins date back nearly 100 years to pioneering work<cit.> and its modern applications include crystal symmetry classification <cit.>, lattice parameter determination <cit.>, strain & defect analysis <cit.>, and more <cit.>.
However, CBED sees the majority of its use in symmetry determination <cit.> and charge density refinement <cit.> and is still lacking in popularity when compared to the more established structure solution and refinement methods of X-ray and neutron diffraction <cit.>.
Collecting the necessary amount of high-quality diffraction data from a TEM, to construct a large-angle CBED (LACBED) image, is one of the inherent challenges of the method. Here, modern computer-controlled TEM setups offer a clear advantage and can make the task near automatic <cit.>.
A perhaps even more constraining challenge lies in the fact that the complexity introduced by multiple scattering of electrons as they propagate through the specimen <cit.> requires sophisticated modelling techniques to construct the theoretical predictions to compare with TEM results.
To make CBED a more accessible approach, there have been two major computational methods developed: (i) the Bloch-wave method <cit.>, and (ii) Multislice <cit.>. Whilst both have seen success in accurately generating CBED patterns, they even today remain computationally resource- and time-intensive, often well beyond what a standard desktop computer can provide <cit.>.
§.§ Selection of training data
For our aim to generate LACBED patterns via ML, we require a large body of data in which crystal structure information has been paired with corresponding bright field LACBED images. Experience from previous such machine learning tasks in computer vision <cit.> and related applications <cit.>, as well as in the previous sections, suggests that often more than 10,000 such training pairs are needed. On the scale necessary for a successful model, it is infeasible to use experimental data for such patterns.
Fortunately, the Inorganic Crystal Structure Database (ICSD) <cit.>, as the world’s largest such database, provides ready access to the full structural information for more than 240,000 crystals in the form of a `Crystallographic Information File' (CIF), a standard text file format <cit.>.
Direct training with structured textual data, as available in the CIFs, is still a major challenge for machine learning tasks <cit.>. Even small changes in, e.g., numerical values of the lattice parameters, can have major changes in the resulting CBED patterns.
On the other hand, existing image-to-image translation tasks have been primarily optimised for 2D data. To harness this knowledge, we need a feasible way to represent the CIF information in 2D image form as well.
Fortunately, the projected electron potential ρ is a convenient such image representation. Since we have decided to concern ourselves only with cubic (isometric) crystals, we can nicely project the electron potential along z to a 2D image as shown in Fig. <ref> below.
Using the structure factors F(𝐠) of the crystal, obtained from Felix <cit.>, we generate the projected potential using
ρ (𝐫) ∝
∑_g F (𝐠) ·exp[-2π i 𝐠·𝐫]
.
Here, the 𝐠 are the lattice vectors of the unit cell in reciprocal space.
We normalize the resulting image of electron potential strength and also restrict their size to 128× 128.
Note that in requiring all of these inputs to be the same image dimensions for the machine learning model, we lose information regarding the size of the crystal.
Next, we need a method to construct the LACBED patterns corresponding to each CIF and projected electron potential ρ. We employ Felix, an open-source software implementation of the Bloch-wave method<cit.> for generating LACBED images originally developed in part by two of us <cit.>.
Felix has been shown to provide atomic coordinate refinements with sub-picometer accuracy <cit.>, and can accurately simulate LACBED patterns<cit.>.
The software takes as input a CIF, beam parameters, microscope settings, crystal settings, and the desired beam direction. Most of these values were calibrated previously<cit.>.
In our simulations, we only consider the (0,0,0) beam direction for simplicity. The other simulation parameters used here are provided in the code accompanying the present work <cit.>. We use Felix to generate LACBED images of size 128× 128. Such sizes are sufficient for many computer-vision-based machine learning tasks <cit.>, whilst remaining small enough to allow the generation of results on a large scale for our dataset.
Our strategy in generating the necessary input from the information provided in the ICSD is then as follows: (i) We convert the textual information provided by each crystal's CIF into the normalized projected electronic potential, i.e., a 2D image. (ii) we compute, via the Bloch-wave code Felix, the corresponding bright-field LACBED images.
While the construction of the electron potential images is very fast, generating the LACBED dataset takes a few weeks using bespoke high-performance computing architecture.
In the current proof-of-principle work, we focus on the 12454 CIFs each corresponding to a unique cubic crystal.
We ultimately have a dataset of 12454 image pairs of size 128× 128, with pixel values between 0 and 255. Each pair consists of a crystal's projected electron potential and its simulated (0,0,0) LACBED diffraction pattern. This dataset is publicly hosted <cit.>.
Whilst we use as many cubic crystals as we can, since after all, a much higher number is still desired, we encounter significantly imbalanced data in many areas. For example, when resolved according to their space group classification, we find that the ICSD data is highly imbalanced. As shown in Fig. <ref>, some space groups contain less than 10 ICSD entries while others have many thousands.
It is well known that machine learning methods, and in particular our chosen adversarial network architecture, suffer in their predictive strength when using imbalanced data <cit.>. Hence it could be worthwhile in future studies to include other crystal data from the ICSD beyond the cubic ones.
§.§ Results
We train a cGAN to create LACBED patterns by providing the projected electron density as input.<cit.> In Fig. <ref> we show some results.
Before going into details on how we create these images, we start by noting that the ground truth LACBED images shown in the figure, with whom we compare our predictions, needed about 400 seconds each to be constructed by the Bloch-wave method on a high-performance compute cluster while our predicted LACBED images arrived within 20 milliseconds on a modern, i.e., GPU-supported, desktop.
The images given in Fig. <ref> show, in the left column, the computed project potential for three crystal structures from different space groups, namely , and from top to bottom. Each such projected potential has been normalized. The comparison between the CBED predictions of the cGAN and the expected behavior from the Felix results show an overall good agreement, in particular w.r.t. the underlying two-fold symmetries. While we use the standard mean-squared error in training the cGAN, in Fig. <ref>, we report a shifted zero-mean normalized cross-correlation fit index R for pixel intensities,
R(𝐲, 𝐲̂) = 1/2 + 1/2 n^2∑_i,j^ny_ij-⟨𝐲⟩/σ(𝐲)·ŷ_ij-⟨𝐲̂⟩/σ(𝐲̂) ,
which has often been used in CBED image comparison studies <cit.>.
In the normalization used here, the value R=1 corresponds to a perfect fit, while R=0 is perfectly anti-correlated. The values R=0.5 and 0.75 emerge when 1/2 or 3/4 of the image pixels are correlated and 1/2 and 1/4 are anti-correlated, respectively. Also, R=0.5 corresponds to two images with uncorrelated intensities.
Our resulting R values for the LACBED image reconstruction, as given in the caption of Fig. <ref>, indicate an overall very good agreement.
§ CONCLUSIONS AND OUTLOOK
The learning aspects of DL networks are often referred to as “black boxes”, highlighting that it appears occasionally surprising how a DNN arrives at its classification, regression or generative predictions. On the other hand, it is exactly this lack of apriori imposed basic descriptors that allows a DL architecture to variationally construct its own set of descriptors to achieve an optimal prediction.
So when ML succeeds in classifying states of Ising-type, percolation, and Anderson models, this also shows that the phase information must be encapsulated directly in the states alone, even for those relatively close to the phase boundaries as shown by the overall good reconstruction of these phases. While this was not unknown before or unexpected,<cit.> it is nevertheless an interesting qualitative insight to have re-emphasized.
Conversely, this also suggests that simply comparing states with each other, by mean-squared deviations, R correlation or otherwise, might also be an alternative quantitative method for phase diagram construction - as already demonstrated.<cit.>
The caveat discovered when studying the globally spanning cluster for the percolation problem with locally focused CNNs, i.e., the failure of such CNNs to correctly identify the percolating cluster,<cit.> furthermore suggests that even the power of modern ML approaches can fail when the underlying physics is ignored.<cit.>
In this context it is also important to mention that the cGAN predictions for the outcome of electron interference experiments, i.e., the LACBED intensities, do not somehow circumvent the quantum mechanical measurement problem. Rather, they simply provide a good interpolation to the various diffraction solutions of the electron dynamical scattering problem provided by the Bloch wave calculations the cGAN was trained on.
Last, the review given here clearly reflects the prejudices and preferences of its authors in selecting the applications of ML to physics. Many other applications and application areas have been ignored such as Boltzmann machines<cit.> and the extremely interesting approaches to finding states of many-body systems<cit.>.
Acknowledgments J.J.W. and R.A.R. gratefully acknowledge discussions with Richard Beanland, Warwick, on the results presented here for the LACBED reconstructions. This work forms part of an ongoing project of all three of us.
D.B., A.H., and R.A.R. are grateful for co-tutelle funding via the EUtopia Alliance. R.A.R. also very grateful acknowledges the CY Initiative of Excellence (grant “Investissements d’Avenir” ANR-16-IDEX-0008).
The computations presented here used the University of Warwick's Research Technology Platform (RTP Scientific Computing) and the Sulis Tier 2 HPC platform hosted by the RTP. Sulis is funded by EPSRC Grant EP/T022108/1 and the HPC Midlands+ consortium.
Djena Bayograduated from the Sorbonne Université in 2017. Following this, she obtained a Master's degree from CY Cergy Paris Université in 2019. As part of the Master's program, she did an internship under the supervision of Vita Ilakovac on “Calculation of vibrations in core excited molecules" at the Laboratoire de Chimie Physique-Matière et Rayonnement (Sorbonne Université). In 2019, she joined Römer at the University of Warwick as a co-tutelle PhD student with Honecker at CY Cergy Paris Université. She defended her PhD thesis<cit.> in April 2024.
Burak Çivitcioğluwas born in Antalya, Türkiye. After a first degree at Koç University / Istanbul, he obtained his Master degrees from the Université Paris Diderot and CY Cergy Paris Université. In 2020, he started a PhD there with Honecker, informally co-supervised by Römer. He is about to defend his PhD thesis.<cit.>
Joe Webb
was born in Essex in the UK.
He recently received a BSc in Mathematics and Physics from the University of Warwick.
There he founded and published two editions of the Poincaré student magazine and collaborated with Römer on ML applications to LACBED patterns.
He is currently a scholar at Worcester College, Oxford,
studying for an MSc, with plans to pursue a PhD.
Andreas Honeckerwas born in Tübingen, Germany in 1967 and obtained first his diploma and then in 1995 his PhD degree in physics from the University of Bonn under the supervision of Werner Nahm and Günter von Gehlen, respectively. He went on his post-doctoral journey to the FU Berlin, SISSA Trieste, and ETH Zürich. After that, he worked at the TU Braunschweig where he obtained his Habilitation in 2003 under the direction of Wolfram Brenig.
He continued on to the Georg-August-Universität Göttingen, with partial support via a Heisenberg fellowship,
where he was awarded a professor title (apl.). He held a number of further fixed-term professorial appointments at Hannover, Strasbourg, and Lyon before he became full professor at CY Cergy Paris Université, France in 2014. Beyond broad scientific interests in statistical physics and condensed matter theory, he also contributes to administrative obligations. In particular, he served as director of the physics department and is presently serving as adjoint director of the Institut des Sciences et Techniques of CY Cergy Paris Université. He proudly supports new publishing initiatives such as SciPost Physics as editor.
Rudo Römerwas born in Gedern, Hessen, Germany in 1966. He attended the Wolfgang-Ernst Gymnasium in Büdingen,
studied with Robert Schrader for a Dipl.-Phys. at the FU Berlin, obtained a PhD with Bill Sutherland at the University of Utah in 1994, postdoc'ed as Feodor-Lynen fellow with Sriram Shastry at the IISc in Bangalore, Dieter Vollhardt at the RWTH Aachen, and achieved his Habilitation qualification with Michael Schreiber at the TU Chemnitz in 2000. In 2002 he was appointed at the University of Warwick, UK, where he heads the disordered quantum systems research group. Since then he has spent research stays at the MPIPKS in Dresden, the ICCMP at UN Brasilia (now in Natal), the Chinese Academy of Sciences at the Wuhan Institute of Physics and Mathematics, was appointed Lotus (Fu Rong) Visiting Professor at Xiangtan University, Xiangtan, Hunan province, China, and as Senior Fellow at CY Cergy Paris Université. He is on a number of editorial and advisory boards and editor-in-chief for Physica E.
100
Ashcroft1976SolidPhysics
N. W. Ashcroft and N. D. Mermin, Solid State Physics.
Saunders College Publishing, Fort Worth, 1976.
Diep2013FrustratedSystems
H. T. Diep, https://doi.org/10.1142/11660 Frustrated Spin Systems.
World Scientific, 2020.
https://doi.org/10.1142/11660.
Anderson1972MoreDifferent
P. W. Anderson, “More Is Different,” https://dx.doi.org/10.1126/science.177.4047.393Science 177 no. 4047, (1972) 393–396. https://www.science.org/doi/abs/10.1126/science.177.4047.393.
Yu2016DimensionlessTransitions
Y.-C. Yu, Y.-Y. Chen, H.-Q. Lin, R. A. Römer, and X.-W. Guan, “Dimensionless ratios: Characteristics of quantum liquids and their phase transitions,” https://dx.doi.org/10.1103/PhysRevB.94.195129Phys. Rev. B 94 no. 19, (11, 2016) 195129. https://link.aps.org/doi/10.1103/PhysRevB.94.195129.
Wang2024DistinguishingStatistics
C.-Y. Wang, T.-G. Zhou, Y.-N. Zhou, and P. Zhang, “Distinguishing Quantum Phases through Cusps in Full Counting Statistics,” https://dx.doi.org/10.1103/PhysRevLett.133.083402Phys. Rev. Lett. 133 no. 8, (8, 2024) 083402. https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.133.083402.
Gomez2019AProblems
H. Gomez, M. Bures, and A. Moure, “A review on computational modelling of phase-transition problems,” https://dx.doi.org/10.1098/rsta.2018.0203Phil. Trans. R. Soc. A 377 no. 2143, (2019) 20180203.
Alpaydin2020IntroductionEdition
E. Alpaydin, Introduction to Machine Learning, fourth edition.
Adaptive Computation and Machine Learning series. MIT Press, 2020.
https://books.google.fr/books?id=uZnSDwAAQBAJ.
Hinton1999UnsupervisedComputation
G. Hinton and T. J. Sejnowski, https://dx.doi.org/10.7551/mitpress/7011.001.0001Unsupervised Learning: Foundations of Neural Computation.
The MIT Press, 8, 1999.
https://doi.org/10.7551/mitpress/7011.001.0001.
Mehta2019APhysicists
P. Mehta, M. Bukov, C.-H. Wang, A. G. Day, C. Richardson, C. K. Fisher, and D. J. Schwab, “A high-bias, low-variance introduction to Machine Learning for physicists,” https://dx.doi.org/10.1016/j.physrep.2019.03.001Phys. Rep. 810 (5, 2019) 1–124. https://linkinghub.elsevier.com/retrieve/pii/S0370157319300766.
Carleo2019MachineSciences
G. Carleo, I. Cirac, K. Cranmer, L. Daudet, M. Schuld, N. Tishby, L. Vogt-Maranto, and L. Zdeborová, “Machine learning and the physical sciences,” https://dx.doi.org/10.1103/RevModPhys.91.045002Rev. Mod. Phys. 91 no. 4, (12, 2019) 045002. https://link.aps.org/doi/10.1103/RevModPhys.91.045002.
Schurov2024InvariantApplications
I. Schurov, D. Alforov, M. Katsnelson, A. Bagrov, and A. Itin, “Invariant multiscale neural networks for data-scarce scientific applications,”. <https://arxiv.org/abs/2406.08318v1>
He2016a
K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” https://dx.doi.org/10.1109/CVPR.2016.90Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition 2016-Decem (2016) 770–778
Tabak2019
M. A. Tabak, M. S. Norouzzadeh, et al., “Machine learning to classify animal species in camera trap images: Applications in ecology,” https://dx.doi.org/10.1111/2041-210X.13120Methods in Ecology and Evolution 10 no. 4, (2019) 585–590. https://onlinelibrary.wiley.com/doi/10.1111/2041-210X.13120
Zhang2017CombinationRecognition
R. Zhang, Q. Wang, and Y. Lu, https://dx.doi.org/10.1109/ICDAR.2017.324“Combination of ResNet and Center Loss Based Metric Learning for Handwritten Chinese Character Recognition,” in 2017 14th IAPR International Conference on Document Analysis and Recognition (ICDAR), pp. 25–29.
IEEE, 2017.
http://ieeexplore.ieee.org/document/8270270/
Zhao2020
Y. Zhao and M. Khushi, https://dx.doi.org/10.1109/ICDMW51313.2020.00060“Wavelet Denoised-ResNet CNN and LightGBM Method to Predict Forex Rate of Change,” in 2020 International Conference on Data Mining Workshops (ICDMW), vol. 2020-Novem, pp. 385–391.
IEEE, 11, 2020.
https://ieeexplore.ieee.org/document/9346446/
Carrasquilla2017
J. Carrasquilla and R. G. Melko, “Machine learning phases of matter,” https://dx.doi.org/10.1038/nphys4035Nat. Phys. 13 no. 5, (2017) 431–434. https://www.nature.com/articles/nphys4035.pdf
ChNg2018
K. Ch'ng, N. Vazquez, and E. Khatami, “Unsupervised machine learning account of magnetic transitions in the Hubbard model,” https://dx.doi.org/10.1103/PhysRevE.97.013306Phys. Rev. E 97 no. 1, (1, 2018) 013306. https://link.aps.org/doi/10.1103/PhysRevE.97.013306
Tanaka2017DetectionNetworks
A. Tanaka and A. Tomiya, “Detection of Phase Transition via Convolutional Neural Networks,” https://dx.doi.org/10.7566/jpsj.86.063001J. Phys. Soc. Jpn. 86 no. 6, (6, 2017) 63001. http://doi.org/10.7566/JPSJ.86.063001
Huembeli2018a
P. Huembeli, A. Dauphin, P. Wittek, and C. Gogolin, “Automated discovery of characteristic features of phase transitions in many-body localization,” https://dx.doi.org/10.1103/PhysRevB.99.104106Phys. Rev. B 99 no. 10, (3, 2019) 104106. https://link.aps.org/doi/10.1103/PhysRevB.99.104106
Dong2019MachineTransitions
X.-Y. Dong, F. Pollmann, and X.-F. Zhang, “Machine learning of quantum phase transitions,” https://dx.doi.org/10.1103/PhysRevB.99.121104Phys. Rev. B 99 no. 12, (3, 2019) 121104. https://link.aps.org/doi/10.1103/PhysRevB.99.121104
Canabarro2019UnveilingLearning
A. Canabarro, F. F. Fanchini, A. L. Malvezzi, R. Pereira, and R. Chaves, “Unveiling phase transitions with machine learning,” https://dx.doi.org/10.1103/PhysRevB.100.045129Phys. Rev. B 100 no. 4, (7, 2019) 045129. https://link.aps.org/doi/10.1103/PhysRevB.100.045129
Shinjo2019MachineModel
K. Shinjo, K. Sasaki, S. Hase, S. Sota, S. Ejima, S. Yunoki, and T. Tohyama, “Machine Learning Phase Diagram in the Half-filled One-dimensional Extended Hubbard Model,” https://dx.doi.org/10.7566/JPSJ.88.065001J. Phys. Soc. Jpn. 88 no. 6, (6, 2019) 065001. https://journals.jps.jp/doi/10.7566/JPSJ.88.065001
Wang2016DiscoveringLearning
L. Wang, “Discovering phase transitions with unsupervised learning,” https://dx.doi.org/10.1103/PhysRevB.94.195105Phys. Rev. B 94 no. 19, (11, 2016) 195105. https://link.aps.org/doi/10.1103/PhysRevB.94.195105
Kottmann2020UnsupervisedDetection
K. Kottmann, P. Huembeli, M. Lewenstein, and A. Acín, “Unsupervised Phase Discovery with Deep Anomaly Detection,” https://dx.doi.org/10.1103/PhysRevLett.125.170603Phys. Rev. Lett. 125 no. 17, (10, 2020) 170603. https://link.aps.org/doi/10.1103/PhysRevLett.125.170603
Alexandrou2020TheAutoencoders
C. Alexandrou, A. Athenodorou, C. Chrysostomou, and S. Paul, “The critical temperature of the 2D-Ising model through deep learning autoencoders,” https://dx.doi.org/10.1140/epjb/e2020-100506-5Eur. Phys. J. B 93 no. 12, (12, 2020) 226. https://doi.org/10.1140/epjb/e2020-100506-5
DAngelo2020LearningNetworks
F. D'Angelo and L. Böttcher, “Learning the Ising model with generative neural networks,” https://dx.doi.org/10.1103/PhysRevResearch.2.023266Phys. Rev. Res. 2 no. 2, (6, 2020) 023266. https://link.aps.org/doi/10.1103/PhysRevResearch.2.023266
Shiina2020Machine-LearningModels
K. Shiina, H. Mori, Y. Okabe, and H. K. Lee, “Machine-Learning Studies on Spin Models,” https://dx.doi.org/10.1038/s41598-020-58263-5Sci. Rep. 10 no. 1, (2, 2020) 2177. https://doi.org/10.1038/s41598-020-58263-5
Corte2021ExploringModels
I. Corte, S. Acevedo, M. Arlego, and C. A. Lamas, “Exploring neural network training strategies to determine phase transitions in frustrated magnetic models,” https://dx.doi.org/https://doi.org/10.1016/j.commatsci.2021.110702Comput. Mater. Sci. 198 (2021) 110702. https://www.sciencedirect.com/science/article/pii/S0927025621004298
Walker2018IdentifyingMethods
N. Walker, K.-M. Tam, B. Novak, and M. Jarrell, “Identifying structural changes with unsupervised machine learning methods,” https://dx.doi.org/10.1103/PhysRevE.98.053305Phys. Rev. E 98 (11, 2018) 053305. https://doi.org/10.1103/PhysRevE.98.053305
Morningstar2018DeepCriticality
A. Morningstar and R. G. Melko, “Deep Learning the Ising Model Near Criticality,” J. Mach. Learn. Res. 18 no. 163, (2018) 1–17. http://jmlr.org/papers/v18/17-527.html
Suchsland2018
P. Suchsland and S. Wessel, “Parameter diagnostics of phases and phase transition learning by neural networks,” https://dx.doi.org/10.1103/PhysRevB.97.174435Phys. Rev. B 97 no. 17, (2018) 1–13
Efthymiou2019Super-resolvingNetworks
S. Efthymiou, M. J. S. Beach, and R. G. Melko, “Super-resolving the Ising model with convolutional neural networks,” https://dx.doi.org/10.1103/PhysRevB.99.075113Phys. Rev. B 99 no. 7, (2, 2019) 075113. https://link.aps.org/doi/10.1103/PhysRevB.99.075113
Walker2020DeepAutoencoder
N. Walker, K.-M. Tam, and M. Jarrell, “Deep learning on the 2-dimensional Ising model to extract the crossover region with a variational autoencoder,” https://dx.doi.org/10.1038/s41598-020-69848-5Sci. Rep. 10 no. 1, (8, 2020) 13047. https://doi.org/10.1038/s41598-020-69848-5
Goel2020LearningVariables
S. Goel, “Learning Ising and Potts Models with Latent Variables,” in Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, S. Chiappa and R. Calandra, eds., vol. 108 of Proceedings of Machine Learning Research, pp. 3557–3566.
PMLR, 8, 2020.
<https://proceedings.mlr.press/v108/goel20a.html>
PhysRevResearch.3.013074
J. Wang, W. Zhang, T. Hua, and T.-C. Wei, “Unsupervised learning of topological phase transitions using the Calinski-Harabaz index,” https://dx.doi.org/10.1103/PhysRevResearch.3.013074Phys. Rev. Res. 3 (Jan, 2021) 013074. https://link.aps.org/doi/10.1103/PhysRevResearch.3.013074
PhysRevResearch.3.033052
J. Arnold, F. Schäfer, M. Žonda, and A. U. J. Lode, “Interpretable and unsupervised phase classification,” https://dx.doi.org/10.1103/PhysRevResearch.3.033052Phys. Rev. Res. 3 (Jul, 2021) 033052. https://link.aps.org/doi/10.1103/PhysRevResearch.3.033052
Civitcioglu2022MachineModel
B. Çivitcioğlu, R. A. Römer, and A. Honecker, “Machine Learning the Square-Lattice Ising Model,” https://dx.doi.org/10.1088/1742-6596/2207/1/012058J. Phys: Conf. Ser. 2207 no. 1, (3, 2022) 12058. https://doi.org/10.1088/1742-6596/2207/1/012058
Basu2022MachineSpin-shuffling
P. Basu, J. Bhattacharya, D. P. S. Jakka, C. Mosomane, and V. Shukla, “Machine learning of Ising criticality with spin-shuffling,”. <http://arxiv.org/abs/2203.04012>
condmat8030083
D. W. Tola and M. Bekele, “Machine learning of nonequilibrium phase transition in an Ising model on square lattice,” https://dx.doi.org/10.3390/condmat8030083Condensed Matter 8 no. 3, (2023) 83. https://www.mdpi.com/2410-3896/8/3/83
PhysRevE.108.L032102
V. Chertenkov, E. Burovski, and L. Shchur, “Finite-size analysis in neural network classification of critical phenomena,” https://dx.doi.org/10.1103/PhysRevE.108.L032102Phys. Rev. E 108 (Sep, 2023) L032102. https://link.aps.org/doi/10.1103/PhysRevE.108.L032102
Naravane2023Semi-supervisedAutoencoders
A. Naravane and N. Mathur, “Semi-supervised learning of order parameter in 2D Ising and XY models using Conditional Variational Autoencoders,”. <http://arxiv.org/abs/2306.16822>
Pavioni2024MinimalistModels
G. L. G. Pavioni, M. Arlego, and C. A. Lamas, “Minimalist neural networks training for phase classification in diluted Ising models,” https://dx.doi.org/https://doi.org/10.1016/j.commatsci.2024.112792Comput. Mater. Sci. 235 (2024) 112792. https://www.sciencedirect.com/science/article/pii/S0927025624000132
McCoy1973TheModel
B. M. McCoy and T. T. Wu, https://dx.doi.org/10.4159/harvard.9780674180758The Two-Dimensional Ising Model.
Harvard University Press, 12, 1973.
https://www.degruyter.com/document/doi/10.4159/harvard.9780674180758/html
Rzadkowski2020DetectingLearning
W. Rza̧dkowski, N. Defenu, S. Chiacchiera, A. Trombettoni, and G. Bighin, “Detecting composite orders in layered models via machine learning,” https://dx.doi.org/10.1088/1367-2630/abae44New J. Phys. 22 no. 9, (9, 2020) 93026. https://dx.doi.org/10.1088/1367-2630/abae44
Fukushima2021CanModel
K. Fukushima and K. Sakai, “Can a CNN trained on the Ising model detect the phase transition of the q-state Potts model?,” https://dx.doi.org/10.1093/ptep/ptab057Prog. Theor. Exp. Phys. 2021 no. 6, (8, 2021) 061A01. https://doi.org/10.1093/ptep/ptab057
Giataganas2022NeuralModels
D. Giataganas, C.-Y. Huang, and F.-L. Lin, “Neural network flows of low q-state Potts and clock models,” https://dx.doi.org/10.1088/1367-2630/ac63daNew J. Phys. 24 no. 4, (4, 2022) 43040. https://dx.doi.o/Users/honecker/Downloads/IOPEXPORT_BIB.bibrg/10.1088/1367-2630/ac63da
Tirelli2022UnsupervisedModel
A. Tirelli, D. O. Carvalho, L. A. Oliveira, J. P. de Lima, N. C. Costa, and R. R. dos Santos, “Unsupervised machine learning approaches to the q-state Potts model,” https://dx.doi.org/10.1140/epjb/s10051-022-00453-3Eur. Phys. J. B 95 no. 11, (11, 2022) 189. https://doi.org/10.1140/epjb/s10051-022-00453-3
Wu1982TheModel
F. Y. Wu, “The Potts model,” https://dx.doi.org/10.1103/RevModPhys.54.235Rev. Mod. Phys. 54 no. 1, (1, 1982) 235–268. https://link.aps.org/doi/10.1103/RevModPhys.54.235
Zhang2019MachineModels
W. Zhang, J. Liu, and T.-C. Wei, “Machine learning of phase transitions in the percolation and XY models,” https://dx.doi.org/10.1103/PhysRevE.99.032142Phys. Rev. E 99 no. 3, (3, 2019) 032142. https://link.aps.org/doi/10.1103/PhysRevE.99.032142
Yu2020UnsupervisedPercolation
W. Yu and P. Lyu, “Unsupervised machine learning of phase transition in percolation,” https://dx.doi.org/10.1016/J.PHYSA.2020.125065Physica A 559 (12, 2020) 125065.
Shen2021SupervisedPercolation
J. Shen, W. Li, S. Deng, and T. Zhang, “Supervised and unsupervised learning of directed percolation,” https://dx.doi.org/10.1103/PhysRevE.103.052140Phys. Rev. E 103 no. 5, (2021) 052140. http://arxiv.org/abs/2101 http://dx.doi.org/10.1103/PhysRevE.103.052140
Bayo2022MachineModel
D. Bayo, A. Honecker, and R. A. Römer, “Machine learning the 2D percolation model,” https://dx.doi.org/10.1088/1742-6596/2207/1/012057J. Phys.: Conf. Ser. 2207 no. 1, (2022) 12057. https://iopscience.iop.org/article/10.1088/1742-6596/2207/1/012057
Bayo2023TheLearning
D. Bayo, A. Honecker, and R. A. Römer, “The percolating cluster is invisible to image recognition with deep learning,” https://dx.doi.org/10.1088/1367-2630/ad0525New J. Phys. 25 no. 11, (11, 2023) 113041. https://dx.doi.org/10.1088/1367-2630/ad0525
Patwardhan2022MachineNetworks
S. Patwardhan, U. Majumder, A. D. Sarma, M. Pal, D. Dwivedi, and P. K. Panigrahi, “Machine Learning as an Accurate Predictor for Percolation Threshold of Diverse Networks,”. <http://arxiv.org/abs/2212.14694>
Cheng2021MachineModel
S. Cheng, F. He, H. Zhang, K.-D. Zhu, and Y. Shi, “Machine Learning Percolation Model,”. <http://arxiv.org/abs/2101.08928>
Li2009TopologicalInsulator
J. Li, R. L. Chu, J. K. Jain, and S. Q. Shen, “Topological Anderson insulator,” https://dx.doi.org/10.1103/PHYSREVLETT.102.136806/FIGURES/4/MEDIUMPhys. Rev. Lett. 102 no. 13, (3, 2009) 136806. https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.102.136806
Balogh2014DiffusionAlloys
Z. Balogh and G. Schmitz, “Diffusion in Metals and Alloys,” https://dx.doi.org/10.1016/B978-0-444-53770-6.00005-8Physical Metallurgy: Fifth Edition 1 (1, 2014) 387–559.
Oswald2015a
J. Oswald and R. A. Römer, “Imaging of Condensed Quantum States in the Quantum Hall Effect Regime,” https://dx.doi.org/10.1016/j.phpro.2015.12.038Physics Procedia 75 (2015) 314–325. http://linkinghub.elsevier.com/retrieve/pii/S1875389215016776
Oswald2020
J. Oswald and R. A. Römer, “Microscopic details of stripes and bubbles in the quantum Hall regime,” https://dx.doi.org/10.1103/PhysRevB.102.121305Phys. Rev. B 102 no. 12, (9, 2020) 121305. http://arxiv.org/abs/2001.07542 https://link.aps.org/doi/10.1103/PhysRevB.102.121305
Haldane1988ModelAnomaly
F. D. Haldane, “Model for a Quantum Hall Effect without Landau Levels: Condensed-Matter Realization of the "Parity Anomaly",” https://dx.doi.org/10.1103/PhysRevLett.61.2015Phys. Rev. Lett. 61 no. 18, (10, 1988) 2015. https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.61.2015
Yu2010QuantizedInsulators
R. Yu, W. Zhang, H. J. Zhang, S. C. Zhang, X. Dai, and Z. Fang, “Quantized anomalous Hall effect in magnetic topological insulators,” https://dx.doi.org/10.1126/SCIENCE.1187485/SUPPL_FILE/YU.SOM.PDFScience 329 no. 5987, (7, 2010) 61–64. https://www.science.org/doi/10.1126/science.1187485
Pixley2015
J. Pixley, P. Goswami, and S. Das Sarma, “Anderson Localization and the Quantum Phase Diagram of Three Dimensional Disordered Dirac Semimetals,” https://dx.doi.org/10.1103/PhysRevLett.115.076601Phys. Rev. Lett. 115 no. 7, (2015) 076601. http://link.aps.org/doi/10.1103/PhysRevLett.115.076601
Meng2019
L. Meng, J. Wu, J. Zhong, and R. A. Römer, “A type of robust superlattice type-I Weyl semimetal with four Weyl nodes,” https://dx.doi.org/10.1039/C9NR04551ANanoscale 11 no. 39, (2019) 18358–18366. http://xlink.rsc.org/?DOI=C9NR04551A
Schulz-Baldes2020
E. Prodan, H. Schulz-Baldes, “Bulk and Boundary Invariants for Complex Topological Insulators: From K-Theory to Physics,” https://doi.org/10.1007/978-3-319-29351-6Mathematical Physics Studies Part F1117, 1–201 (2015).
Hasan2010
M. Z. Hasan and C. L. Kane, “Topological Insulators,” https://dx.doi.org/10.1103/RevModPhys.82.3045Physics 82 no. 4, (2010) 23. http://arxiv.org/abs/1002.3895
Rotenberg2011
E. Rotenberg, “Topological insulators: The dirt on topology,” Nat. Phys. 7 no. 1, (2011) 8–10. <http://dx.doi.org/10.1038/nphys1869>
Rodriguez2010
A. Rodriguez, L. J. Vasquez, K. Slevin, and R. A. Römer, “Critical Parameters from a Generalized Multifractal Analysis at the Anderson Transition,” https://dx.doi.org/10.1103/PhysRevLett.105.046403Phys. Rev. Lett. 105 no. 4, (7, 2010) 046403. https://link.aps.org/doi/10.1103/PhysRevLett.105.046403
Rodriguez2011
A. Rodriguez, L. J. Vasquez, K. Slevin, and R. A. Römer, “Multifractal finite-size scaling and universality at the Anderson transition,” https://dx.doi.org/10.1103/PhysRevB.84.134209Phys. Rev. B 84 no. 13, (10, 2011) 134209. https://link.aps.org/doi/10.1103/PhysRevB.84.134209
Ohtsuki2016DeepSystems
T. Ohtsuki and T. Ohtsuki, “Deep Learning the Quantum Phase Transitions in Random Two-Dimensional Electron Systems,” https://dx.doi.org/10.7566/JPSJ.85.123706J. Phys. Soc. Jpn. 85 no. 12, (12, 2016) 123706. https://journals.jps.jp/doi/10.7566/JPSJ.85.123706
Ohtsuki2019DrawingFunctions
T. Ohtsuki and T. Mano, “Drawing Phase Diagrams for Random Quantum Systems by Deep Learning the Wave Functions,”. https://arxiv.org/abs/1909.09821
Wichert2021MachineLearning
A. Wichert and L. Sa-Couto, https://dx.doi.org/10.1142/12201Machine Learning — A Journey to Deep Learning.
World Scientific, 2, 2021.
https://www.worldscientific.com/worldscibooks/10.1142/12201
Isola2017Image-to-imageNetworks
P. Isola, J. Y. Zhu, T. Zhou, and A. A. Efros, “Image-to-image translation with conditional adversarial networks,” https://dx.doi.org/10.1109/CVPR.2017.632Proceedings - 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017 2017-January (11, 2017) 5967–5976
Wang2018High-ResolutionGANs
T. C. Wang, M. Y. Liu, J. Y. Zhu, A. Tao, J. Kautz, and B. Catanzaro, “High-Resolution Image Synthesis and Semantic Manipulation with Conditional GANs,” https://dx.doi.org/10.1109/CVPR.2018.00917Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (12, 2018) 8798–8807
Kingma2013Auto-EncodingBayes
D. P. Kingma and M. Welling, “Auto-Encoding Variational Bayes,”. <http://arxiv.org/abs/1312.6114>
Ede2021AdvancesLearning
J. M. Ede, “Advances in Electron Microscopy with Deep Learning,”. http://arxiv.org/abs/2101.01178 http://dx.doi.org/10.5281/zenodo.4399748
Matinyan2023MachineData
S. Matinyan, B. Demir, P. Filipcik, J. P. Abrahams, and E. van Genderen, “Machine learning for classifying narrow-beam electron diffraction data,” https://dx.doi.org/10.1107/S2053273323004680Acta Crystallographica Section A Foundations and Advances 79 no. 4, (7, 2023) 360–368. https://scripts.iucr.org/cgi-bin/paper?S2053273323004680
Friedrich2023PhaseLearning
T. Friedrich, C.-P. Yu, J. Verbeeck, and S. Van Aert, “Phase Object Reconstruction for 4D-STEM using Deep Learning,” https://dx.doi.org/10.1093/micmic/ozac002Microsc. Microanal. 29 no. 1, (2, 2023) 395–407. https://academic.oup.com/mam/article/29/1/395/6985579
Liu2021MachineStructures
X. Liu, K. Amini, A. Sanchez, B. Belsa, T. Steinle, and J. Biegert, “Machine learning for laser-induced electron diffraction imaging of molecular structures,” https://dx.doi.org/10.1038/s42004-021-00594-zCommun. Chem. 4 no. 1, (11, 2021) 154. https://www.nature.com/articles/s42004-021-00594-z
Hall2006SpecificationCIF
S. R. Hall, J. D. Westbrook, N. Spadaccini, I. D. Brown, H. J. Bernstein, and B. McMahon, https://dx.doi.org/10.1107/97809553602060000728“Specification of the Crystallographic Information File (CIF),” in International Tables for Crystallography, pp. 20–36.
International Union of Crystallography, Chester, England, 10, 2006.
https://dx.doi.org/10.1107/97809553602060000728
IgorLevin2018NISTICSD
Igor Levin, “NIST Inorganic Crystal Structure Database (ICSD).” 2018.
<https://icsd.nist.gov/>
Chou2019GeneratedVAE
J. Chou, “Generated Loss and Augmented Training of MNIST VAE,”. <http://arxiv.org/abs/1904.10937>.
IanGoodfellow2016DeepLearning
Ian Goodfellow, Yoshua Bengio, and Aaron Courville, Deep Learning.
MIT Press, 2016.
<http://www.deeplearningbook.org>
Goodfellow2014GenerativeNets
I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative Adversarial Nets,” in Advances in Neural Information Processing Systems, Z. Ghahramani, M. Welling, C. Cortes, N. Lawrence, and K. Q. Weinberger, eds.
Curran Associates, Inc., 27 ed., 6, 2014.
https://proceedings.neurips.cc/paper_files/paper/2014/file/5ca3e9b122f61f8f06494c97b1afccf3-Paper.pdf
Kullback1951OnSufficiency
S. Kullback and R. A. Leibler, “On Information and Sufficiency,” https://dx.doi.org/10.1214/aoms/1177729694Ann. Math. Statist. 22 no. 1, (3, 1951) 79–86. http://projecteuclid.org/euclid.aoms/1177729694
Pathak2016ContextInpainting
D. Pathak, P. Krahenbuhl, J. Donahue, T. Darrell, and A. A. Efros, “Context Encoders: Feature Learning by Inpainting,”. <http://arxiv.org/abs/1604.07379>
Mirza2014ConditionalNets
M. Mirza and S. Osindero, “Conditional Generative Adversarial Nets,”. <http://arxiv.org/abs/1411.1784>.
Broadbent1957PercolationMazes
S. R. Broadbent and J. M. Hammersley, “Percolation processes: I. Crystals and mazes,” https://dx.doi.org/10.1017/S0305004100032680Math. Proc. Camb. Philos. Soc. 53 no. 3, (1957) 629–641. https://www.cambridge.org/core/journals/mathematical-proceedings-of-the-cambridge-philosophical-society/article/abs/percolation-processes/C00CC4943F48228F8AC8031092FE84EC
Stauffer2018
D. Stauffer and A. Aharony, https://dx.doi.org/10.1201/9781315274386Introduction To Percolation Theory.
Taylor & Francis, 12, 2018.
https://www.taylorfrancis.com/books/9781482272376
Elliott1960EquivalenceFerromagnetism
R. J. Elliott, B. R. Heap, D. J. Morgan, and G. S. Rushbrooke, “Equivalence of the Critical Concentrations in the Ising and Heisenberg Models of Ferromagnetism,” https://dx.doi.org/10.1103/PhysRevLett.5.366Phys. Rev. Lett. 5 no. 8, (10, 1960) 366–367. https://link.aps.org/doi/10.1103/PhysRevLett.5.366
Flory1953a
P. J. Flory, Principles of Polymer Chemistry.
Cornell University Press, 1953.
<https://books.google.fr/books/about/Principles_of_Polymer_Chemistry.html?id=CQ0EbEkT5R0C redir_esc=y>
Derrida1985
B. Derrida and D. Stauffer, “Corrections To Scaling and Phenomenological Renormalization for 2-Dimensional Percolation and Lattice Animal Problems.,” https://dx.doi.org/10.1051/jphys:0198500460100162300J. Phys. (Paris) 46 no. 10, (1985) 1623–1630.
https://doi.org/10.1051/jphys:0198500460100162300
Grimmett1989
G. Grimmett, Percolation.
Springer Verlag, Berlin, 1989.
<https://link.springer.com/book/10.1007/978-3-662-03981-6>
Jacobsen2014
J. L. Jacobsen, “High-precision percolation thresholds and Potts-model critical manifolds from graph polynomials,” https://dx.doi.org/10.1088/1751-8113/47/13/135001J. Phys. A: Math. Theor. 47 no. 13, (2014) 135001.
https://dx.doi.org/10.1088/1751-8113/47/13/135001
Hoshen1976a
J. Hoshen and R. Kopelman, “Percolation and cluster distribution. I. Cluster multiple labeling technique and critical concentration algorithm,” https://dx.doi.org/10.1103/PhysRevB.14.3438Phys. Rev. B 14 no. 8, (10, 1976) 3438–3445. https://link.aps.org/doi/10.1103/PhysRevB.14.3438
Paszke2019
A. Paszke, S. Gross, et al., “PyTorch: An imperative style, high-performance deep learning library,” Advances in Neural Information Processing Systems 32 no. NeurIPS, (2019). <http://arxiv.org/abs/1912.01703>
Schenk2008b
O. Schenk, M. Bollhöfer, and R. A. Römer, “On Large-Scale Diagonalization Techniques for the Anderson Model of Localization,” https://dx.doi.org/10.1137/070707002SIAM Review 50 no. 1, (1, 2008) 91–112. http://epubs.siam.org/doi/10.1137/070707002
Anderson1958a
P. W. Anderson, “Absence of Diffusion in Certain Random Lattices,” https://dx.doi.org/10.1103/PhysRev.109.1492Phys. Rev. 109 no. 5, (3, 1958) 1492–1505. https://link.aps.org/doi/10.1103/PhysRev.109.1492
Elsner1999
U. Elsner, V. Mehrmann, F. Milde, R. A. Römer, and M. Schreiber, “The Anderson Model of Localization: A Challenge for Modern Eigenvalue Methods,” https://dx.doi.org/10.1137/S1064827598332217SIAM Journal on Scientific Computing 20 no. 6, (1, 1999) 2089–2102. http://epubs.siam.org/doi/10.1137/S1064827598332217
Brandes2003AndersonRamifications
T. Brandes and S. Kettemann, https://dx.doi.org/10.1007/b13139Anderson Localization and Its Ramifications, vol. 630 of Lecture Notes in Physics.
Springer Berlin Heidelberg, Berlin, Heidelberg, 2003.
<http://link.springer.com/10.1007/b13139>
Evers2008
F. Evers and A. D. Mirlin, “Anderson transitions,” Rev. Mod. Phys. 80 no. 4, (2008) 1355–1417. <http://link.aps.org/doi/10.1103/RevModPhys.80.1355>.
Theveniaut2019NeuralLimitations
H. Théveniaut and F. Alet, “Neural network setups for a precise detection of the many-body localization transition: finite-size scaling and limitations,”https://dx.doi.org/10.1103/PhysRevB.100.224202Phys. Rev. B 100 no. 22, (2019) 224202. https://link.aps.org/doi/10.1103/PhysRevB.100.224202
Mano2017a
T. Mano and T. Ohtsuki, “Phase diagrams of three-dimensional Anderson and quantum percolation models using deep three-dimensional convolutional neural network,” https://dx.doi.org/10.7566/JPSJ.86.113704J. Phys. Soc. Jpn. 86 no. 11, (2017) . https://doi.org/10.7566/JPSJ.86.113704
Cadez2023MachinePhases
T. Cadez, B. Dietz, D. Rosa, A. Andreanov, K. Slevin, and T. Ohtsuki, “Machine learning wave functions to identify fractal phases,”. <http://arxiv.org/abs/2306.01402>.
Romer2022NumericalLocalization
R. A. Römer, https://dx.doi.org/10.1016/B978-0-323-90800-9.00099-8“Numerical methods for localization,” in Reference Module in Materials Science and Materials Engineering.
Elsevier, 7, 2022.
https://linkinghub.elsevier.com/retrieve/pii/B9780323908009000998
Bollhofer2007JADAMILU:Matrices
M. Bollhöfer and Y. Notay, “JADAMILU: a software code for computing selected eigenvalues of large sparse symmetric matrices,” https://dx.doi.org/https://doi.org/10.1016/j.cpc.2007.08.004Comput. Phys. Commun. 177 no. 12, (2007) 951–964. https://www.sciencedirect.com/science/article/pii/S0010465507003700
Swendsen1979MonteN2
R. H. Swendsen and S. Krinsky, “Monte Carlo Renormalization Group and Ising Models with n≥2,” https://dx.doi.org/10.1103/PhysRevLett.43.177Phys. Rev. Lett. 43 no. 3, (7, 1979) 177–180. <https://link.aps.org/doi/10.1103/PhysRevLett.43.177>
Landau1980PhaseInteractions
D. P. Landau, “Phase transitions in the Ising square lattice with next-nearest-neighbor interactions,” https://dx.doi.org/10.1103/PhysRevB.21.1285Phys. Rev. B 21 no. 3, (2, 1980) 1285–1297. https://link.aps.org/doi/10.1103/PhysRevB.21.1285
Binder1980PhaseInteractions
K. Binder and D. P. Landau, “Phase diagrams and critical behavior in Ising square lattices with nearest- and next-nearest-neighbor interactions,” https://dx.doi.org/10.1103/PhysRevB.21.1941Phys. Rev. B 21 no. 5, (3, 1980) 1941–1962. https://link.aps.org/doi/10.1103/PhysRevB.21.1941
Landau1985PhaseCouplings
D. P. Landau and K. Binder, “Phase diagrams and critical behavior of Ising square lattices with nearest-, next-nearest-, and third-nearest-neighbor couplings,” https://dx.doi.org/10.1103/PhysRevB.31.5946Phys. Rev. B 31 no. 9, (5, 1985) 5946–5953. https://link.aps.org/doi/10.1103/PhysRevB.31.5946
Moran-Lopez1993First-orderInteractions
J. L. Morán-López, F. Aguilera-Granja, and J. M. Sanchez, “First-order phase transitions in the Ising square lattice with first- and second-neighbor interactions,” https://dx.doi.org/10.1103/PhysRevB.48.3519Phys. Rev. B 48 no. 5, (8, 1993) 3519–3522. https://link.aps.org/doi/10.1103/PhysRevB.48.3519
Malakis2006MonteInteractions
A. Malakis, P. Kalozoumis, and N. Tyraskis, “Monte Carlo studies of the square Ising model with next-nearest-neighbor interactions,” https://dx.doi.org/10.1140/EPJB/E2006-00032-2Eur. Phys. J. B 50 no. 1, (2, 2006) 63–67. https://link.springer.com/article/10.1140/epjb/e2006-00032-2
Monroe2007PhaseModel
J. L. Monroe and S.-Y. Kim, “Phase diagram and critical exponent ν for the nearest-neighbor and next-nearest-neighbor interaction Ising model,” https://dx.doi.org/10.1103/PhysRevE.76.021123Phys. Rev. E 76 no. 2, (8, 2007) 021123. https://link.aps.org/doi/10.1103/PhysRevE.76.021123
dosAnjos2008PhaseLattice
R. A. dos Anjos, J. Roberto Viana, and J. Ricardo de Sousa, “Phase diagram of the Ising antiferromagnet with nearest-neighbor and next-nearest-neighbor interactions on a square lattice,” https://dx.doi.org/https://doi.org/10.1016/j.physleta.2007.09.059Phys. Lett. A 372 no. 8, (2008) 1180–1184. https://www.sciencedirect.com/science/article/pii/S0375960107013680
Kalz2008PhaseInteractions
A. Kalz, A. Honecker, S. Fuchs, and T. Pruschke, “Phase diagram of the Ising square lattice with competing interactions,” https://dx.doi.org/10.1140/epjb/e2008-00359-6Eur. Phys. J. B 65 no. 4, (10, 2008) 533. https://doi.org/10.1140/epjb/e2008-00359-6
Watanabe2023Non-monotonicSystems
H. Watanabe, Y. Motoyama, S. Morita, and N. Kawashima, “Non-monotonic behavior of the Binder parameter in discrete spin systems,” https://dx.doi.org/10.1093/ptep/ptad022Prog. Theor. Exp. Phys. 2023 no. 3, (8, 2023) 033A02. https://doi.org/10.1093/ptep/ptad022
Yoshiyama2023Higher-orderLattice
K. Yoshiyama and K. Hukushima, “Higher-order tensor renormalization group study of the J_1-J_2 Ising model on a square lattice,” https://dx.doi.org/10.1103/PhysRevE.108.054124Phys. Rev. E 108 no. 5, (11, 2023) 054124. https://link.aps.org/doi/10.1103/PhysRevE.108.054124
Li2024ALattice
S.-W. Li and F.-J. Jiang, “A Comprehensive Study of the Phase Transitions of the Frustrated J_1-J_2 Ising Model on the Square Lattice,” https://dx.doi.org/10.1093/ptep/ptae061Prog. Theor. Exp. Phys. 2024 no. 5, (8, 2024) 053A06. https://doi.org/10.1093/ptep/ptae061
Park2022FrustratedMachine
H. Park and H. Lee, “Frustrated Ising Model on D-wave Quantum Annealing Machine,” https://dx.doi.org/10.7566/JPSJ.91.074001J. Phys. Soc. Jpn. 91 no. 7, (6, 2022) . https://journals.jps.jp/doi/abs/10.7566/JPSJ.91.074001
Civitcioglu2024PhaseLearning
B. Çivitcioğlu, R. A. Römer, and A. Honecker, “Phase determination with and without deep learning,” <http://arxiv.org/abs/2403.09786>.
Metropolis1953EquationMachines
N. Metropolis, A. W. Rosenbluth, M. N. Rosenbluth, A. H. Teller, and E. Teller, “Equation of State Calculations by Fast Computing Machines,” https://dx.doi.org/10.1063/1.1699114J. Chem. Phys. 21 no. 6, (1953) 1087–1092. https://doi.org/10.1063/1.1699114
Newman1999MontePhysics
M. E. J. Newman and G. T. Barkema, Monte Carlo Methods in Statistical Physics.
Oxford University Press, 1999.
<https://global.oup.com/academic/product/monte-carlo-methods-in-statistical-physics-9780198517979>
Berg2004MarkovAnalysis
B. A. Berg, https://dx.doi.org/10.1142/5602Markov Chain Monte Carlo Simulations and Their Statistical Analysis.
World Scientific, 10, 2004.
https://www.worldscientific.com/worldscibooks/10.1142/5602
Landau2014APhysics
D. P. Landau and K. Binder, https://dx.doi.org/10.1017/CBO9781139696463A Guide to Monte Carlo Simulations in Statistical Physics.
Cambridge University Press, 4 ed., 2014.
https://doi.org/10.1017/CBO9781139696463
Acevedo2021PhaseLearning
S. Acevedo, M. Arlego, and C. A. Lamas, “Phase diagram study of a two-dimensional frustrated antiferromagnet via unsupervised machine learning,” https://dx.doi.org/10.1103/PhysRevB.103.134422Phys. Rev. B 103 no. 13, (2021) 134422. https://link.aps.org/doi/10.1103/PhysRevB.103.134422
Kramers1941
H. A. Kramers and G. H. Wannier, “Statistics of the two-dimensional ferromagnet. part I,” https://dx.doi.org/10.1103/PhysRev.60.252Phys. Rev. 60 (Aug, 1941) 252–262. https://link.aps.org/doi/10.1103/PhysRev.60.252
Spence1992ElectronMicrodiffraction
J. C. H. Spence and J. M. Zuo, https://dx.doi.org/10.1007/978-1-4899-2353-0Electron Microdiffraction.
https://doi.org/10.1007/978-1-4899-2353-0
Springer US, Boston, MA, 1992.
Tanaka1994Convergent-beamDiffraction
M. Tanaka, “Convergent-beam electron diffraction,” https://dx.doi.org/10.1107/S0108767393010426Acta Crystallogr. A 50 no. 3, (5, 1994) 261–286.
https://doi.org/10.1107/S0108767393010426
Beanland2013DigitalPicture
R. Beanland, P. J. Thomas, D. I. Woodward, P. A. Thomas, and R. A. Roemer, “Digital electron diffraction – seeing the whole picture,” https://dx.doi.org/10.1107/S0108767313010143Acta Crystallogr. A 69 no. 4, (7, 2013) 427–434. https://scripts.iucr.org/cgi-bin/paper?S0108767313010143
Kossel1939ElektroneninterferenzenBundel
W. Kossel and G. Möllenstedt, “Elektroneninterferenzen im konvergenten Bündel,” https://dx.doi.org/10.1002/andp.19394280204Annalen der Physik 428 no. 2, (1, 1939) 113–140. https://onlinelibrary.wiley.com/doi/10.1002/andp.19394280204
Buxton1976ThePatterns
B. F. Buxton, J. A. Eades, J. W. Steeds, and G. M. Rackham, “The symmetry of electron diffraction zone axis patterns,” https://dx.doi.org/10.1098/rsta.1976.0024Philosophical Transactions of the Royal Society of London. Series A, Mathematical and Physical Sciences 281 no. 1301, (3, 1976) 171–194. https://royalsocietypublishing.org/doi/10.1098/rsta.1976.0024
Tanaka1983Point-groupDiffraction
M. Tanaka, R. Saito, and H. Sekii, “Point-group determination by convergent-beam electron diffraction,” https://dx.doi.org/10.1107/S010876738300080XActa Crystallogr. A 39 no. 3, (5, 1983) 357–368.
https://doi.org/10.1107/S010876738300080X
Tanaka1983Space-groupDiffraction
M. Tanaka, H. Sekii, and T. Nagasawa, “Space-group determination by dynamic extinction in convergent-beam electron diffraction,” https://dx.doi.org/10.1107/S0108767383001695Acta Crystallogr. A 39 no. 6, (11, 1983) 825–837.
https://doi.org/10.1107/S0108767383001695
Saunders1999QuantitativeMeasurements
M. Saunders, A. G. Fox, and P. A. Midgley, “Quantitative zone-axis convergent-beam electron diffraction (CBED) studies of metals. II. Debye–Waller-factor measurements,” https://dx.doi.org/10.1107/S0108767398016316Acta Crystallogr. A 55 no. 3, (5, 1999) 480–488.
https://doi.org/10.1107/S0108767398016316
Zuo1998AMatching
J. M. Zuo, M. Kim, and R. Holmestad, “A new approach to lattice parameter measurements using dynamic electron diffraction and pattern matching,” https://dx.doi.org/10.1093/oxfordjournals.jmicro.a023568J. Electron Microsc. Tech. 47 no. 2, (1, 1998) 121–127.
https://doi.org/10.1093/oxfordjournals.jmicro.a023568
Kaiser1999ApplicationSubstrates
U. Kaiser, K. Saitoh, K. Tsuda, and M. Tanaka, “Application of the CBED method for the determination of lattice parameters of cubic SiC films on 6H SiC substrates,” https://dx.doi.org/10.1093/oxfordjournals.jmicro.a023672J. Electron Microsc. Tech. 48 no. 3, (1, 1999) 221–233.
https://doi.org/10.1093/oxfordjournals.jmicro.a023672
Armigliato2003ApplicationDevices
A. Armigliato, R. Balboni, G. P. Carnevale, G. Pavia, D. Piccolo, S. Frabboni, A. Benedetti, and A. G. Cullis, “Application of convergent beam electron diffraction to two-dimensional strain mapping in silicon devices,” https://dx.doi.org/10.1063/1.1565181Appl. Phys. Lett. 82 no. 13, (3, 2003) 2172–2174. https://pubs.aip.org/apl/article/82/13/2172/510371/Application-of-convergent-beam-electron
Kramer2000AnalysisCBED
S. Krämer, J. Mayer, C. Witt, A. Weickenmeier, and M. Rühle, “Analysis of local strain in aluminium interconnects by energy filtered CBED,” https://dx.doi.org/10.1016/S0304-3991(99)00191-6Ultramicroscopy 81 no. 3-4, (4, 2000) 245–262.
https://www.sciencedirect.com/science/article/pii/S0304399199001916
Cherns1989ConvergentMultilayers
D. Cherns and A. R. Preston, “Convergent beam diffraction studies of interfaces, defects, and multilayers,” https://dx.doi.org/10.1002/jemt.1060130204J. Electron Microsc. Tech. 13 no. 2, (10, 1989) 111–122.
https://onlinelibrary.wiley.com/doi/abs/10.1002/jemt.1060130204
Morniroli1996AnalysisDiffraction
J. Morniroli and D. Cherns, “Analysis of grain boundary dislocations by large angle convergent beam electron diffraction,” https://dx.doi.org/10.1016/0304-3991(95)00087-9Ultramicroscopy 62 no. 1-2, (1, 1996) 53–63.
https://www.sciencedirect.com/science/article/pii/0304399195000879
Midgley1996QuantitativeBonds
P. A. Midgley and M. Saunders, “Quantitative electron diffraction: From atoms to bonds,” https://dx.doi.org/10.1080/00107519608217535Contemp. Phys. 37 no. 6, (11, 1996) 441–456.
https://doi.org/10.1080/00107519608217535
Zuo1999DirectCu2O
J. M. Zuo, M. Kim, M. O'Keeffe, and J. C. H. Spence, “Direct observation of d-orbital holes and Cu–Cu bonding in Cu_2O,” https://dx.doi.org/10.1038/43403Nature 401 no. 6748, (9, 1999) 49–52. https://www.nature.com/articles/43403
Beanland2021RefinementDiffraction
R. Beanland, A. Hubert, and R. Roemer, “Refinement of crystal structure using ‘digital’ large angle convergent beam electron diffraction,” https://dx.doi.org/10.1017/S1431927621004803Microsc. Microanal. 27 no. S1, (8, 2021) 1282–1284. https://academic.oup.com/mam/article/27/S1/1282/6889110
Hubert2019StructurePatterns
A. Hubert, R. Römer, and R. Beanland, “Structure refinement from ‘digital’ large angle convergent beam electron diffraction patterns,” https://dx.doi.org/10.1016/j.ultramic.2018.12.007Ultramicroscopy 198 (3, 2019) 1–9.
https://www.sciencedirect.com/science/article/pii/S0304399118303498
Zuo1995OnMethod
J. Zuo and A. Weickenmeier, “On the beam selection and convergence in the Bloch-wave method,” https://dx.doi.org/10.1016/0304-3991(94)00190-XUltramicroscopy 57 no. 4, (3, 1995) 375–383.
https://www.sciencedirect.com/science/article/pii/030439919400190X
2018ElectronMicroscopy
B. G. Mendis, ed., https://dx.doi.org/10.1002/9781118696545Electron Beam‐Specimen Interactions and Simulation Methods in Microscopy.
Wiley, 5, 2018.
https://onlinelibrary.wiley.com/doi/book/10.1002/9781118696545
Cowley1957TheApproach
J. M. Cowley and A. F. Moodie, “The scattering of electrons by atoms and crystals. I. A new theoretical approach,” https://dx.doi.org/10.1107/S0365110X57002194Acta Crystallographica 10 no. 10, (10, 1957) 609–619. https://scripts.iucr.org/cgi-bin/paper?S0365110X57002194
VanDyck1984TheMicroscopy
D. Van Dyck and W. Coene, “The real space method for dynamical electron diffraction calculations in high resolution electron microscopy,” https://dx.doi.org/10.1016/0304-3991(84)90072-XUltramicroscopy 15 no. 1-2, (1, 1984) 29–40. https://linkinghub.elsevier.com/retrieve/pii/030439918490072X
Chuvilin2005OnSimulation
A. Chuvilin and U. Kaiser, “On the peculiarities of CBED pattern formation revealed by multislice simulation,” https://dx.doi.org/10.1016/j.ultramic.2005.03.003Ultramicroscopy 104 no. 1, (8, 2005) 73–82. https://linkinghub.elsevier.com/retrieve/pii/S0304399105000483
Kaiser2006ProspectsCalculation
U. Kaiser and A. Chuvilin, “Prospects of the multislice method for CBED pattern calculation,” https://dx.doi.org/10.3139/146.101319Int. J. Mater. Res. 97 no. 7, (7, 2006) 912–919. http://www.hanser-elibrary.com/doi/abs/10.3139/146.101319
Kirkland2010AdvancedMicroscopy
E. J. Kirkland, https://dx.doi.org/10.1007/978-1-4419-6533-2Advanced Computing in Electron Microscopy.
Springer US, Boston, MA, 2010.
https://link.springer.com/10.1007/978-1-4419-6533-2
BeanlandFelix:Software
R. Beanland, K. Evans, and R. A. Roemer, “Felix: Bloch wave method diffraction pattern simulation software.” 2023.
<https://github.com/WarwickMicroscopy/Felix>
2023Kaggle:Science
“Kaggle: Your Home for Data Science.” 2023.
<https://www.kaggle.com/>
Halevy2009TheData
A. Halevy, P. Norvig, and F. Pereira, “The Unreasonable Effectiveness of Data,” https://dx.doi.org/10.1109/MIS.2009.36IEEE Intelligent Systems 24 no. 2, (3, 2009) 8–12. http://ieeexplore.ieee.org/document/4804817/
Spasic2020ClinicalReview
I. Spasic and G. Nenadic, “Clinical Text Data in Machine Learning: Systematic Review,” https://dx.doi.org/10.2196/17984JMIR Medical Informatics 8 no. 3, (3, 2020) e17984. http://medinform.jmir.org/2020/3/e17984/
Webb2024GitHubWephy/ai-diffraction
J. J. Webb, “GitHub - wephy/ai-diffraction.” 2, 2024.
<https://github.com/wephy/ai-diffraction>
Webb2023FelixPatterns
J. J. Webb and R. Beanland, “Felix Diffraction Patterns.” V2, 2023.
<https://www.kaggle.com/datasets/wephys/felix-diffraction-patterns>
Saini2023TacklingReview
M. Saini and S. Susan, “Tackling class imbalance in computer vision: a contemporary review,” https://dx.doi.org/10.1007/s10462-023-10557-6Artif. Intell. Rev. 56 no. S1, (10, 2023) 1279–1335. https://link.springer.com/10.1007/s10462-023-10557-6
Edwards1975TheoryGlasses
S. F. Edwards and P. W. Anderson, “Theory of spin glasses,” https://dx.doi.org/10.1088/0305-4608/5/5/017J. Phys. F: Met. Phys. 5 no. 5, (5, 1975) 965.
https://iopscience.iop.org/article/10.1088/0305-4608/5/5/017/meta
Carleo2016
G. Carleo and M. Troyer, “Solving the quantum many-body problem with artificial neural networks”, https://dx.doi.org/10.1126/science.aag2302Science 355 no. 6325, (2, 2017) 602–606.
http://dx.doi.org/10.1126/science.aag2302 http://0-science.sciencemag.org/content/355/6325/602
Zhu2023HubbardNet:Networks
Z. Zhu, M. Mattheakis, W. Pan, and E. Kaxiras, “HubbardNet: Efficient predictions of the Bose-Hubbard model spectrum with deep neural networks,” https://dx.doi.org/10.1103/PhysRevResearch.5.043084Phys. Rev. Res. 5 no. 4, (10, 2023) 043084. https://link.aps.org/doi/10.1103/PhysRevResearch.5.043084
Pfau2024AccurateNetworks
D. Pfau, S. Axelrod, H. Sutterud, I. v. Glehn, and J. S. Spencer, “Accurate computation of quantum excited states with neural networks,” https://dx.doi.org/10.1126/SCIENCE.ADN0137Science 385 no. 6711, (8, 2024) . https://0-www-science-org/doi/10.1126/science.adn0137
PhDBayo24
D. Bayo, Phase Determination by Deep Learning for Disordered Systems.
PhD thesis, University of Warwick (home) & CY Cergy Paris Université (host), 2024.
PhDCivitcioglu24
B. Çivitcioğlu, Phase Determination with and without Deep Learning for the J_1-J_2 Ising Model.
PhD thesis, CY Cergy Paris Université, 2024.
|
http://arxiv.org/abs/2409.03722v1 | 20240905172426 | Kerr Geodesics in horizon-penetrating Kerr coordinates: description in terms of Weierstrass functions | [
"Zuzanna Bakun",
"Angelika Łukanty",
"Anastasiia Untilova",
"Adam Cieślik",
"Patryk Mach"
] | gr-qc | [
"gr-qc"
] |
Faculty of Physics, Astronomy and Applied Computer Science, Jagiellonian University, Łojasiewicza 11, 30-348 Kraków, Poland
Faculty of Physics, Astronomy and Applied Computer Science, Jagiellonian University, Łojasiewicza 11, 30-348 Kraków, Poland
Faculty of Physics, Astronomy and Applied Computer Science, Jagiellonian University, Łojasiewicza 11, 30-348 Kraków, Poland
Institute of Theoretical Physics, Jagiellonian University, Łojasiewicza 11, 30-348 Kraków, Poland
Institute of Theoretical Physics, Jagiellonian University, Łojasiewicza 11, 30-348 Kraków, Poland
§ ABSTRACT
We revisit the theory of timelike and null geodesics in the (extended) Kerr spacetime. This work is a sequel to a recent paper by Cieślik, Hackmann, and Mach, who applied the so-called Biermann–Weierstrass formula to integrate Kerr geodesic equations expressed in Boyer–Lindquist coordinates. We show that a formulation based on the Biermann–Weierstrass theorem can also be applied in horizon-penetrating Kerr coordinates, resulting in solutions that are smooth across Kerr horizons. Horizon-penetrating Kerr coordinates allow for an explicit continuation of timelike and null geodesics between appropriate regions of the maximal analytic extension of the Kerr spacetime. A part of this work is devoted to a graphic visualisation of such geodesics.
Kerr Geodesics in horizon-penetrating Kerr coordinates: description in terms of Weierstrass functions
Patryk Mach
September 9, 2024
=====================================================================================================
§ INTRODUCTION
The Kerr metric is usually written in Boyer–Lindquist coordinates <cit.>. They lead to a simple form of the metric tensor with a single off-diagonal term—an advantage allowed by the fact that the Kerr metric is circular (or orthogonally transitive)—but suffer a coordinate singularity at the black hole horizon. A system of coordinates on the Kerr spacetime regular at the horizon has already been proposed in the original paper of Kerr in 1963 <cit.>. Here we will work with its more popular variant, obtained by performing an additional transformation t = u - r (in the original notation of Kerr <cit.>) and a change in the convention regarding the black hole spin (a → -a). These coordinates are often referred to simply as Kerr or Kerr–Schild coordinates <cit.>. Since the latter term is usually reserved for a popular Cartesian-type coordinate system on the Kerr spacetime, we prefer to use the term “horizon-penetrating Kerr coordinates.”
Separability of the geodesic motion in the Kerr spacetime was discovered in 1968 by Carter <cit.> (see also <cit.>), who worked in the original coordinate system introduced by Kerr (save for the change a → - a). Nowadays, the majority of works on Kerr geodesics use Boyer–Lindquist coordinates (a sample of papers published after 2000 includes <cit.>). While algebraic simplicity is an obvious advantage, using Boyer–Lindquist coordinates for Kerr geodesics can be, in some cases, misleading, as it leads to a non-physical near-horizon behavior—generic geodesics tend to wind up around the horizon. Boyer–Lindquist coordinates were also used in a recent paper <cit.>, which introduced a uniform description of all generic timelike and null Kerr geodesics, based on Weierstrass functions. The advantage of this work was three-fold: Solutions were given in a form depending explicitly on the constants of motion and initial positions. No a priori knowledge of radial turning points was needed. Finally, geodesics with no radial turning points were described by explicitly real formulas.
In this paper we extend the analysis of <cit.> to horizon-penetrating coordinates, which allow us to remove the singular behavior of geodesics at the black hole horizon, depending on the direction of motion. Quite surprisingly, little of the original simplicity of the analysis presented in <cit.> and based on Boyer–Lindquist coordinates is lost. Radial and polar equations and solutions remain unchanged. Additional terms appear in the azimuthal and time equations, which can readily be integrated. All solutions can still be written in terms of standard Weierstrass functions, and a single set of formulas remains applicable to all generic geodesics. As in <cit.>, solutions are fully specified by the values of constants of motion—the energy, the angular momentum, the Carter constant—and the initial position.
We show explicit examples of Kerr geodesics, attempting to visualize the behavior of those geodesics that cross the horizons. Such a visualization is especially tricky for geodesics which continue to negative values of the radius, allowed in the maximal analytic extension of the Kerr spacetime.
We prove that a future-directed timelike geodesic originating outside the event horizon and plunging into the black hole can, in horizon-penetrating Kerr coordinates, be continued smoothly through the event horizon. We also show that if this coordinate system admits a smooth transition of a given geodesic trajectory through one of horizons in a given radial direction, such a smooth transition is not allowed in the opposite direction.
§ GEODESIC EQUATIONS IN THE KERR SPACETIME
§.§ Metric conventions
In this work, we employ standard geometric units in which the speed of light c and the gravitational constant G are set to 1. The metric signature is (-, +, +, +).
In Boyer–Lindquist coordinates x^μ = (t,r,θ,φ) the Kerr metric can be written as
g=-(1-2Mr/ρ^2) dt^2 -4Marsin^2θ/ρ^2 dtdφ+ ρ^2/Δ dr^2+ρ^2 dθ^2+(r^2+a^2+2Ma^2r sin^2θ/ρ^2) sin^2θ dφ^2,
where
Δ := r^2 - 2Mr + a^2,
ρ^2 := r^2 + a^2 cos^2θ.
We will only consider the case with a^2 < M^2, due to its physical relevance. Kerr horizons are located at the zeros of Δ, i.e., at r_± = M ±√(M^2 - a^2). We will refer to horizons at r = r_- and r = r_+ as the Cauchy and the event horizon, respectively.
The coordinate singularity at the horizons can be removed by a coordinate transformation to horizon-penetrating Kerr coordinates (t^',r,θ,φ^') defined in terms of one-forms
dt^' = dt+2Mr/Δdr,
dφ'=dφ+a/Δdr,
or more explicitly, by setting
t'=t+2M∫rdr/r^2-2Mr+a^2, φ'=φ+a∫dr/r^2-2Mr+a^2.
In horizon-penetrating Kerr coordinates the Kerr metric reads
g = -(1-2Mr/ρ^2)dt^'^2-4Mr/ρ^2asin^2θ dt^' dφ^' + 4Mr/ρ^2 dt^' dr - 2 a (1 + 2Mr/ρ^2)sin^2θ dr dφ^'
+ (1+2Mr/ρ^2) dr^2 + ρ^2 dθ^2 + [ (r^2+a^2)^2-a^2 Δsin^2θ]sin^2θ/ρ^2dφ^'^2.
In this paper we will restrict ourselves to a region of the maximally extended Kerr spacetime covered by horizon-penetrating Kerr coordinates with the ranges t^'∈ℝ, r ∈ℝ, θ∈ [0,π], φ∈ [0,2π). It spans across three causal diamonds or Boyer–Lindquist blocks, numbered as I, II, and III, according to a convention used in <cit.>. They are defined by the following ranges of the radius. Block I: r > r_+; Block II: r_- < r < r_+; Block III: r < r_-. We depict this region on a Penrose conformal diagram corresponding to the symmetry axis of the Kerr spacetime in Fig. <ref>. To show the properties of the time foliation defined by the horizon-penetrating Kerr coordinates, we also plot, in Fig. <ref>, hypersurfaces of constant time t^'. They remain regular across the event horizon r = r_+ joining Blocks I and II and across the Cauchy horizon r = r_-, joining Blocks II and III. Further details of the construction of the Penrose diagram shown in Fig. <ref> are given in Appendix <ref>.
§.§ Geodesic equations
Geodesic equations can be written in the Hamiltonian form as
dx^μ/dτ̃=∂ H/∂ p_μ, dp_ν/dτ̃=-∂ H/∂ x^ν,
where p^μ=dx^μ/dτ̃, H(x^α,p_β)=1/2g^μν(x^α)p_μ p_ν=-1/2m^2, and m is the particle rest mass. The four-velocity u^μ = d x^μ/dτ is normalised as g_μνu^μ u^ν=-δ_1, where
δ_1 = 1 for timelike geodesics,
0 for null geodesics.
The above conventions imply that the affine parameter τ̃ and the proper time τ are related by τ̃= τ/m.
Let the metric g be given by Eq. (<ref>) or (<ref>). A standard reasoning shows that H, E := - p_t and l_z := p_φ are constants of motion. The fourth constant, referred to as the Carter constant 𝒦, can be derived by a separation of variables in the corresponding Hamilton–Jacobi equation <cit.>. This separation can be performed both in Boyer–Lindquist and in horizon-penetrating Kerr coordinates. Moreover, in both coordinate systems constants H, E, l_z, and 𝒦 have the same numerical values. In particular, momentum components p_t and p_φ transform as p_t^' = p_t = -E and p_φ^' = p_φ = l_z.
In Boyer–Lindquist coordinates geodesic equations can be written as
ρ^2dr/dτ̃ = ϵ_r√(R(r)),
ρ^2dθ/dτ̃ = ϵ_θ√(Θ(θ)),
ρ^2dφ/dτ̃ = a[(r^2+a^2)E-al_z]/Δ+1/sin^2θ(l_z-aEsin^2θ),
ρ^2dt/dτ̃ = (r^2+a^2)[(r^2+a^2)E-al_z]/Δ+a(l_z-aEsin^2θ),
where we denoted
R(r) := [(r^2+a^2)E-al_z]^2-Δ(m^2r^2+𝒦),
Θ(θ) := 𝒦-m^2a^2cos^2θ- ( l_z/sinθ-asinθ E )^2.
We will refer to R(r) and Θ(θ) as the radial and polar effective potentials, respectively. The signs ϵ_r = ± 1 and ϵ_θ = ± 1 indicate the direction of motion with respect to the radial and polar coordinates.
There is a useful parametrization of Kerr geodesics, introduced by Mino in <cit.>, which allows to partially decouple Eqs. (<ref>). The so-called Mino time s̃ is defined by
ρ^2dx^μ/dτ̃=dx^μ/ds̃
or
τ̃= ∫_0^s̃ρ^2ds.
Using s̃ as the geodesic parameter, we obtain
dr/ds̃ = ϵ_r√(R(r)),
dθ/ds̃ = ϵ_θ√(Θ(θ)),
dφ/ds̃ = a[(r^2+a^2)E-al_z]/Δ+1/sin^2θ(l_z-aEsin^2θ),
dt/ds̃ = (r^2+a^2)[(r^2+a^2)E-al_z]/Δ+a(l_z-aEsin^2θ).
Geodesic equations in the horizon-penetrating Kerr coordinates can be obtained simply by the following vector transformation
dt'/ds̃ = ∂ t'/∂ tdt/d s̃+∂ t'/∂ rdr/d s̃=dt/ds̃+2Mr/Δdr/ds̃,
dφ'/ds̃ = ∂φ'/∂φd φ/d s̃+∂φ '/∂ rdr/d s̃=dφ/ds̃+a/Δdr/ds̃.
This gives
dr/ds̃ = ϵ_r√(R(r)),
dθ/ds̃ = ϵ_θ√(Θ(θ)),
dφ'/ds̃ = a[(r^2+a^2)E-al_z]/Δ+1/sin^2θ(l_z-aEsin^2θ)+a/Δϵ_r√(R(r)),
dt'/ds̃ = (r^2+a^2)[(r^2+a^2)E-al_z]/Δ+a(l_z-aEsin^2θ)+2Mr/Δϵ_r√(R(r)).
In the remainder of this paper we will use the following dimensionless variables:
a=Mα, t'=MT', r=Mξ, E=mε, 𝒦=M^2m^2κ, l_z=Mmλ_z, s̃=1/Mms.
Dimensionless radii corresponding to Kerr horizons will be denoted by ξ_± = 1 ±√(1 - α^2). For null geodesics, for which m = 0, the parameter m in the above equations can be replaced with any mass parameter m̃ > 0.
In dimensionless variables (<ref>), geodesic equations (<ref>) have the form
dξ/ds = ϵ_r√(R̃),
dθ/ds = ϵ_θ√(Θ̃),
dφ'/ds = α[(ξ^2+α^2)ε-αλ_z]/ξ^2-2ξ+α^2+1/sin^2θ(λ_z-αεsin^2θ)+α/ξ^2-2ξ+α^2ϵ_r√(R̃),
dT'/ds = (ξ^2+α^2)[(ξ^2+α^2)ε-αλ_z]/ξ^2-2ξ+α^2+α(λ_z-αεsin^2θ)+2ξ/ξ^2-2ξ+α^2ϵ_r√(R̃),
where
R̃ := [(ξ^2+α^2)ε-αλ_z]^2-(ξ^2-2ξ+α^2)(δ_1ξ^2+κ),
Θ̃ := κ-δ_1α^2cos^2θ-1/sin^2θ(λ_z-αεsin^2θ)^2.
Note that the coordinate transformation from Boyer–Lindquist to horizon-penetrating Kerr coordinates affects only the equations for φ^' and T^', while the radial and polar equations remain unchanged. Equations (<ref>) can also be obtained directly by working in the horizon-penetrating Kerr coordinates and by separating the variables in the Hamilton–Jacobi equation. We emphasise a connection with the Boyer–Lindquist form (<ref>), in order to make use of solutions to Eqs. (<ref>) derived in <cit.>. In the next section, we review the solutions of the radial and polar equations obtained in <cit.> and focus on equations for φ^' and T^'.
§ SOLUTIONS OF GEODESIC EQUATIONS
§.§ Biermann–Weierstrass formula
The form of solutions used in this work predominantly rely on the following result due to Biermann and Weierstrass <cit.>. Proofs of this theorem can be found in <cit.>.
Let f be a quartic polynomial
f(x) = a_0 x^4 + 4 a_1 x^3 +6 a_2 x^2 + 4a_3 x + a_4,
and let g_2 and g_3 denote Weierstrass invariants of f:
g_2 = a_0 a_4 - 4a_1 a_3 + 3 a_2^2,
g_3 = a_0 a_2 a_4 + 2a_1 a_2 a_3 -a_2^3 -a_0 a_3^2 - a_1^2 a_4.
Denote
z(x) = ∫^x_x_0dx^'/√(f(x^')),
where x_0 can be any constant. Then x can be expressed as
x = x_0 + - √(f(x_0))℘'(z)
+ 1/2 f'(x_0) [ ℘(z) - 1/24f”(x_0) ]
+ 1/24 f(x_0) f”'(x_0) /2 [ ℘(z)
- 1/24 f”(x_0) ]^2 - 1/48 f(x_0) f^(4)(x_0) ,
where ℘(z) =℘(z;g_2,g_3) is the Weierstrass function with invariants
(<ref>). In addition
℘(x) = √(f(x)f(x_0))+f(x_0)/2(x-x_0)^2+f'(x_0)/4(x-x_0)+f”(x_0)/24,
℘' (x) = -[f(x)/(x-x_0)^3-f'(x_0)/4(x-x_0)]√(f(x_0)) - [f(x)/(x-x_0)^3+f'(x_0)/4(x-x_0)]√(f(x)).
§.§ Radial motion
The dimensionless radial potential R̃ can be written as
R̃(ξ) = a_0 ξ^4 + 4 a_1 ξ^3 + 6 a_2 ξ^2 + 4a_3 ξ + a_4,
where
a_0 = ε^2 - δ_1,
a_1 = 1/2δ_1,
a_2 = -1/6(δ_1 α^2 + κ - 2α^2 ε^2 + 2 αελ_z) ,
a_3 = 1/2κ,
a_4 = α^4 ε^2 - α^2 κ - 2 α^3 ελ_z + α^2 λ_z^2 = -α^2 [κ - (αε - λ_z)^2].
Weierstrass invariants associated with the coefficients (<ref>) will be denoted by
g_R̃,2 = a_0 a_4 - 4 a_1 a_3 + 3 a_2^2,
g_R̃,3 = a_0 a_2 a_4 + 2 a_1 a_2 a_3 - a_2^3 - a_0 a_3^2 - a_1^2 a_4.
A direct application of the Biermann–Weierstrass theorem to Eq. (<ref>) yields the formula for ξ=ξ(s),
ξ(s)=ξ_0+-ϵ_r,0√(R̃(ξ_0))℘'_R̃(s)+1/2R̃'ξ_0)[℘_R̃(s)-1/24R̃”(ξ_0)]+1/24R̃(ξ_0)R̃”'(ξ_0)]/2[℘_R̃(s)-1/24R̃”(ξ_0)]^2-1/48R̃(ξ_0) R̃^(4)(ξ_0),
where ℘_R̃(s)=℘_R̃(s;g_R̃,2g_R̃,3) is the Weierstrass function with invariants (<ref>), and ξ_0 = ξ(0) is an arbitrarily selected initial radius corresponding to s = 0. In Equation (<ref>) the sign ϵ_r,0 = ± 1 denotes a value of ϵ_r corresponding to the initial location ξ_0. In other words, ϵ_r,0 is a part of initial data (initial parameter), while the sign ϵ_r can change along a given geodesic.
§.§ Polar motion
Equation (<ref>) can be transformed to the Biermann–Weierstrass form by a substitution μ=cosθ, which provides a one to one mapping for 0 ≤θ≤π. Defining g(μ) := sin^2 θΘ̃(θ), we get
dμ(s)/ds = - ϵ_θ√(g(μ)).
The function g(μ) is a polynomial with respect to μ given by
g(μ) = b_0 μ^4 + 6 b_2 μ^2 + b_4,
where the coefficients b_0, b_2, and b_4 can be expressed as
b_0 = - α^2 ( ε^2 - δ_1 ) = - α^2 a_0,
b_2 = 1/6( - α^2 δ_1 + 2 α^2 ε^2 - κ - 2 αελ_z ) = a_2,
b_4 = - α^2 ε^2 + κ + 2 αελ_z - λ_z^2 = κ - (αε - λ_z)^2 = - a_4/α^2,
and a_0, a_2, and a_4 are given by Eqs. (<ref>). Weierstrass invariants associated with coefficients b_0, b_2, and b_4 can be written as
g_g,2 = b_0 b_4 + 3 b_2^2,
g_g,3 = b_0 b_2 b_4 - b_2^3.
Again, using the Biermann–Weierstrass formula, one can write the expression for μ=μ(s) in the form
μ(s)=μ_0+ϵ_θ ,0√(g(μ_0))℘'_g(s)+1/2g'(μ_0)[℘_g(s)-1/24g”(μ_0)]+1/24g(μ_0)g”'(μ_0)]/2[℘_g(s)-1/24g”(μ_0)]^2-1/48g(μ_0) g^(4)(μ_0),
where ℘_g(s)=℘_g(s;g_g,2g_ g,3) and μ_0 = μ(0) = cosθ_0 =cosθ(0) represents an initial value corresponding to s=0. As in Eq. (<ref>), the sign ϵ_θ ,0 = ± 1 is a parameter equal to ϵ_θ at θ = θ_0, and it remains constant along the entire geodesic. A more detailed discussion of solutions (<ref>) and (<ref>) can be found in <cit.>.
§.§ Azimuthal motion
Equation (<ref>) consists of a component related to the azimuthal motion in Boyer–Lindquist coordinates and an additional term, dependent on ξ(s), arising from transformation (<ref>). We will first treat the two components separately and then discuss their sum, which may remain regular across the horizons. An integration with respect to the Mino time s yields
φ^'(s)-φ^'(0) = ∫_0^s{α [(ξ^2(s̅)+α^2)ε-αλ_z]/ξ^2(s̅)-2ξ(s̅)+α^2+1/sin^2θ(s̅)[ λ_z-αεsin^2θ(s̅)] } ds̅+∫_0^sαϵ_r √(R̃(s̅))/ξ^2(s̅)-2ξ(s̅)+α^2 ds̅
= α∫_0^s2 ξ(s̅) ε - αλ_z/ξ^2(s̅)-2ξ(s̅)+α^2 ds̅ + λ_z ∫_0^s d s̅/sin^2 θ(s̅) + α∫_0^sϵ_r √(R̃(s̅))/ξ^2(s̅)-2ξ(s̅)+α^2 ds̅
= J_BL(s) + J_ξ(s),
where J_BL(s) := J̃_BL(s) + J_θ(s) and
J̃_BL(s) := α∫_0^s2 ξ(s̅) ε - αλ_z/ξ^2(s̅)-2ξ(s̅)+α^2 ds̅,
J_θ(s) := λ_z ∫_0^s d s̅/sin^2 θ(s̅),
J_ξ(s) := α∫_0^sϵ_r √(R̃ (s̅))/ξ^2(s̅)-2ξ(s̅)+α^2 ds̅.
Integrals J̃_BL and J_θ were expressed in terms of Weierstrass functions in <cit.>. To obtain a solution for Eq. (<ref>), we write
J_ξ(s)= α∫_0^s d ξ(s̅)/d s̅/ξ^2(s̅)-2ξ(s̅)+α^2 ds̅.
For a segment of a geodesic along which ξ = ξ(s) is monotonic, one change the integration variable to ξ and write
J_ξ(s)= α∫_ξ_0^ξ(s)dξ̅/ξ̅^2-2 ξ̅+α^2=α[ arctan(ξ(s)-1/√(α^2 -1))/√(α^2 -1)- arctan(ξ_0 - 1/√(α^2 -1))/√(α^2 -1)],
which, together with Eq. (<ref>), provides the solution. Note that due to the symmetry of Eq. (<ref>), expression (<ref>) remains valid also for trajectories passing through radial turning points.
The integrals J_BL and J_ξ are, generically, divergent at the horizons, i.e., for s such that ξ(s) = ξ_±, but the sum J_BL + J_ξ can remain regular. A direct calculation shows that J_BL(s) and J_ξ (s) can be expressed in the form
J̃_BL(s)+J_ξ (s) = α∫_0^s{δ_1 ξ ^2(s̅)+κ/[ξ^2(s̅)+α^2]ε -αλ_z - d ξ(s̅)/d s̅ - ε}ds̅.
The integrand in Eq. (<ref>) can be divergent, if
[ξ^2(s)+α^2] ε -αλ_z - d ξ(s)/d s = 0
or, equivalently,
[ξ^2(s)+α^2]ε -αλ_z = ϵ_r √(R̃).
By computing the square of the above equation we see that this can only happen, if
[ξ^2(s) - 2 ξ(s) + α^2][δ_1 ξ^2(s) + κ] = 0,
that is at ξ(s) = ξ_±. On the other hand, the expression [ξ^2(s)+α^2]ε -αλ_z - d ξ(s)/d s can be clearly non-zero at the horizon, depending on the sign ϵ_r. In our examples discussed in Sec. <ref> this happens for ϵ_r = -1, i.e., for incoming geodesics. In general, if the sign of [ξ^2(s) + α^2] ε - αλ_z can be controlled, one can exclude the possibility that J̃_BL diverges at the horizon. For instance, for [ξ^2(s) + α^2] ε - αλ_z > 0 and ϵ_r = -1, the denominator in Eq. (<ref>) remains strictly positive. We discuss this problem in more detail in Sec. <ref>.
§.§ Time coordinate
Similarly to the equation for φ^', the right-hand side of Eq. (<ref>) also consist of a Boyer–Lindquist term and a term associated with the transformation to horizon-penetrating Kerr coordinates. Integrating Eq. (<ref>) one gets
T^'(s)-T^'(0) = ∫_0^s{[ξ^2(s̅)+α^2][(ξ^2(s̅)+α^2)ε-αλ_z]/ξ^2(s̅)-2ξ(s̅)+α^2 + α [λ_z-αεsin^2θ(s̅)] }ds̅
+∫_0^s 2ξ(s̅) ϵ_r √(R̃(s̅))/ξ^2(s̅)-2ξ(s̅)+α^2ds̅
= N_BL(s) + N_ξ(s).
Here
N_BL(s) := ∫_0^s{[ξ^2(s̅)+α^2][(ξ^2(s̅)+α^2)ε-αλ_z]/ξ^2(s̅)-2ξ(s̅)+α^2+α(λ_z-αεsin^2θ(s̅)) } ds̅
= Ñ_BL(s) - α^2 ε N_θ(s),
where
Ñ_BL(s) := ∫_0^s [ ξ^2(s̅) + α^2 ]^2 ε - 2 αλ_z ξ(s̅)/ξ^2(s̅) - 2 ξ(s̅) + α^2 d s̅,
N_θ(s) := ∫_0^s sin^2 θ(s̅) d s̅,
and
N_ξ(s) := 2∫_0^sξ(s̅) ϵ_r √(R̃(s̅))/ξ^2(s̅)-2ξ(s̅)+α^2ds̅.
The integrals Ñ_BL(s) and N_θ(s) were computed in <cit.>.
As before, we start by evaluating the integral in Eq. (<ref>), which we write in the form
N_ξ(s)= 2∫_0^sξ(s̅)/ξ^2(s̅)-2ξ(s̅)+α^2d ξ(s̅)/d s̅ ds̅.
Changing the integration variable form s to ξ=ξ(s) we obtain
N_ξ (s) = 2 ∫_ξ_0^ξ(s)ξ̅/ξ̅^2-2ξ̅+α^2dξ̅ = log[ ξ^2(s) - 2 ξ(s) +α^2/ξ_0^2-2 ξ_0+α^2] + 2 [arctan(ξ(s) - 1/√(α^2 - 1))/√(α^2 -1)- arctan(ξ_0 - 1/√(α^2 -1))/√(α^2 -1)],
which again together with Eq. (<ref>) provides the solution.
A regularization of the sum Ñ_BL(s) + N_ξ(s) can be done in many ways. One of them is, however, particularly convenient. Note that
(ξ^2 + α^2)^2 ε - 2 αλ_z ξ/ξ^2 - 2 ξ + α^2 = (ξ^2 + α^2) ε + 2 ξ[ (ξ^2 + α^2) ε - αλ_z ]/ξ^2 - 2 ξ + α^2.
Therefore,
Ñ_BL(s) + N_ξ(s) = ∫_0^s {2 ξ(s̅) [ (ξ^2(s̅) + α^2) ε - αλ_z + dξ(s̅) / ds̅]/ξ^2(s̅) - 2 ξ(s̅) + α^2 + [ξ^2(s̅) + α^2] ε} d s̅
= ∫_0^s {2 ξ(s̅) [ δ_1 ξ^2(s̅) + κ]/[ξ^2(s̅) + α^2] ε - αλ_z - d ξ(s̅) / d s̅ + [ξ^2(s̅) + α^2] ε} d s̅.
The advantage of this choice is that now a potentially problematic denominator in Eq. (<ref>) has the same form as in Eq. (<ref>).
§.§ Regularity at horizons
We see from preceding subsections that the regularity of the expressions for φ^' and T^' depends on the signs of
A := (ξ^2 + α^2) ε - αλ_z
and ϵ_r. It can be shown that for timelike future-directed geodesics at ξ > ξ_+ (in Boyer–Lindquist Block I), one has A > 0. Thus, by continuity, for ϵ_r = -1 (incoming geodesics) and R̃ > 0, we have A - d ξ(s)/ds = A - ϵ_r √(R̃) > 0 up to the horizon at ξ_+. Hence, for incoming future-directed timelike geodesics both φ^'
and T^' remain regular at the event horizon joining Blocks I and II.
A proof that for a timelike future-directed geodesic at ξ > ξ_+, one must have A > 0 can be found in <cit.>, but we repeat it here for completeness. Note first that the vector
N = ( 1 + 2 M r/ρ^2) ∂_t^' - 2 M r/ρ^2∂_r
is normal to hypersurfaces of constant time t^'. Lowering the indices in N one gets N_μ = (-1,0,0,0). Since g(N,N) = - (1 + 2 M r/ρ^2), it is also timelike, as long as r > 0. This vector defines a time orientation. Consider a vector
X := (r^2 + a^2) ∂_t^' + a ∂_φ^'.
O'Neill refers to X as one of “the canonical Kerr vector fields” (<cit.>, p. 60).
It satisfies g(X,X) = - ρ^2 Δ, and thus it is timelike for Δ > 0, i.e., for r < r_- or r > r_+ (in Blocks I and III). It is also future-directed, since g(N,X)= - dt^'(X) = -(r^2 + a^2) < 0. On the other hand,
p(X) = (r^2 + a^2) p_t^' + a p_φ^' = - [ (r^2 + a^2) E - a l_z ] = - M^2 m [ (ξ^2 + α^2) ε - αλ_z ],
where p denotes the momentum covector. A future-directed momentum p has to satisfy p(X) < 0, and hence A = (ξ^2 + α^2) ε - αλ_z > 0. We emphasise that negative values of ε are still allowed within the ergosphere.
In Block II, where Δ < 0, i.e., for r_- < r < r_+, Rioseco and Sarbach <cit.> propose to use the vector
Y := 2 M r ∂_t^' + Δ∂_r + a ∂_φ^'.
It satisfies g(Y,Y) = ρ^2 Δ and, consequently, it is timelike for r_- < r < r_+. Since g(N,Y) = -dt^'(Y) = - 2 M r, it is also future-directed in r_- < r < r_+. On the other hand,
p(Y) = 2 M r p_t^' + Δ p_r + a p_φ^' = - 2 M r E + Δ p_r + a l_z.
Since Δ p_r = ϵ_r √(R) + 2 M r E - a l_z, we get p(Y) = ϵ_r √(R). Thus in Block II, the momentum p can be future-directed only for ϵ_r = -1.
Turning to the “regularization” applied in Eqs. (<ref>) and (<ref>) note that it is based on the observation that
A^2 - R̃ = (ξ^2 - 2 ξ + α^2)(δ_1 ξ^2 + κ),
and hence A^2 - R̃ has two real zeros, precisely at ξ = ξ_±. On the other hand
A^2 - R̃ = ( A - ϵ_r √(R̃)) ( A + ϵ_r √(R̃)) = ( A - d ξ/ds) (A + d ξ/ds).
Thus, if A - ϵ_r √(R̃) remains non-zero at a given radius ξ = ξ_+ or ξ = ξ_-, then A + ϵ_r √(R̃) must be zero for the same radius, and vice versa. In other words, horizon-penetrating Kerr coordinates do not allow for a description in which a given trajectory can cross smoothly horizons at ξ = ξ_+ or ξ = ξ_- in both radial directions ϵ_r = ± 1 at the same time. This is, of course, consistent with the behavior shown in Fig. <ref>. Suppose that a given trajectory passes from Block I to Block II, and then to Block III, where it encounters a radial turning point and continues further with ϵ_r = +1. Such a trajectory will hit the Cauchy horizon at ξ = ξ_- with ϵ_r = +1, where both T^' and φ^' would diverge. We show this behavior in particular examples in Sec. <ref>. It would also be tempting to illustrate this situation directly in the Penrose diagram in Fig. <ref>, which is however plotted for points at the axis, and assuming that d θ = d φ = 0. An illustration of this kind should be possible in terms of projection diagrams defined in <cit.>. Unfortunately, from the computational point of view, such diagrams are much more difficult to draw exactly.
§ EXAMPLES
In this section we discuss a collection of sample solutions obtained with the help of formulas derived in preceding sections.
We perform our computations using Wolfram Mathematica <cit.>. The formulas for ξ(s) and μ(s), or equivalently θ(s), can be encoded directly. The formulas for the azimuthal angle φ^'(s) and the time T^'(s) require a regularization, if a geodesic crosses one of the horizons. In practice, we substitute in Eq. (<ref>) expressions for ξ(s) and θ(s) given by Eqs. (<ref>) and (<ref>), and evaluate the resulting integral numerically. In principle, the derivative d ξ(s)/ds in Eq. (<ref>) could be expressed as d ξ(s)/ds = ϵ_r √(R̃), by virtue of Eq. (<ref>). Using this form turns out to be problematic, as it requires knowledge of the sign ϵ_r, which changes at the radial turning points. To circumvent this difficulty, we use in Eq. (<ref>) the derivative d ξ(s)/ds obtained by a direct differentiation of Eq. (<ref>). Although the integral (<ref>) can be evaluated analytically, similarly to the calculation presented in <cit.>, we find computing it numerically to be quite effective. In a sense, we try to combine in our implementation the best of the two approaches—elegant formulas for the radial and polar coordinates and relatively straightforward numerical integrals providing φ^' and the time coordinate T^'.
Figures <ref> to <ref> depict our solutions obtained for various parameters, collected in Table <ref>. They show the orbits in ξ–θ and ξ–φ^' planes and in the three-dimensional space. We use Cartesian coordinates (x,y,z) defined as
x = ξcosφ^'sinθ,
y = ξsinφ^'sinθ,
z = ξcosθ.
In Table <ref> we provide also real zeros of R̃(ξ), corresponding to radial turning points, as well as zeros of Θ̃(θ). The latter define the angular ranges available for the motion along a given geodesic. We illustrate these ranges by drawing appropriate cones in three-dimensional plots or lines in ξ–θ plane plots. Kerr horizons at ξ = ξ_± are depicted as spheres or circles. Finally, as a double-check of our results, we plot solutions obtained by solving numerically the Kerr geodesic equations. These numerical solutions are depicted with dotted lines. For simplicity, we omit the prime in φ^' in labels of all figures in this paper.
Figure <ref> shows an unbound timelike trajectory, plunging into the black hole. The trajectory crosses smoothly both horizons at ξ = ξ_± and encounters a radial turning point located at 0 < ξ < ξ_-. The solution for φ^' can be continued smoothly up to the Cauchy horizon at ξ = ξ_-, where it diverges.
Figure <ref> shows a standard timelike bound geodesic, corresponding to a “Keplerian” motion. In Fig. <ref>, we depict an unbound timelike orbit, which does not plunge into the black hole.
A very interesting case is shown in Fig. <ref>, depicting a timelike unbound trajectory crossing smoothly both horizons at ξ = ξ_±. This trajectory does not encounter a radial turning point and consequently it continues, in the extended Kerr spacetime, to negative radii ξ (Boyer–Lindquist Block III, see also <cit.>, p. 163). In the terminology of <cit.> such orbits are referred to as “transits”. We illustrate the transition from ξ > 0 to ξ < 0 by plotting a segment of the geodesic corresponding to ξ > 0 in orange, and a segment corresponding to ξ < 0 in purple. Note that Cartesian coordinates defined by Eq. (<ref>) allow for a change from positive to negative values of the radius, which corresponds simply to the reflection x → -x, y → - y, z → -z, or equivalently, to a change of angular coordinates. To avoid confusion, apart from using two colors in the graphs, we also provide a plot of the radius ξ versus the Mino time s. Since no horizons occur for ξ < 0, the trajectory can be continued smoothly to ξ→ - ∞. Note that the apparent reflection of Cartesian coordinates associated with the transition from ξ > 0 to ξ < 0 creates (in the plots) a false impression of a reflection in the polar angle θ, as well as an impression that the trajectory leaves the bounds of the allowed polar motion, given by zeros of Θ̃(θ). In reality, θ remains safely within the allowed range along the entire geodesic trajectory.
Figure <ref> shows an example of an unbound null geodesic, scattered by the black hole. In Fig. <ref>, we also deal with a null geodesic, but it is much more interesting. This trajectory crosses smoothly both horizons and continues to negative values of ξ, where it encounters a radial turning point. It then re-enters the region with ξ > 0 and can be continued up to the Cauchy horizon at ξ = ξ_-. In Fig. <ref> a segment corresponding to negative radii is again marked in purple, while a segment with positive radii is plotted in orange.
Figure <ref> illustrates a behavior similar to the one depicted in Fig. <ref>, but obtained for a null geodesic. The trajectory plunges into the black hole and continues to negative radii. Figure <ref> shows another timelike geodesic with a behavior similar to the one illustrated in Fig. <ref>.
Finally, in Fig. <ref> we plot a collection of more or less random timelike geodesics plunging into the black hole. In this case we only show segments of geodesics located outside the black hole horizon (although they can be continued into the black hole). With this plot we aim to show that, when visualized in horizon-penetrating Kerr coordinates, the geodesics plunging into the black hole do not make an impression of swirling around the horizon—a behavior present in the Boyer–Lindquist coordinates. Our Fig. <ref> should be contrasted, e.g., with Fig. 6 in <cit.>.
§ SUMMARY
We provided a description of timelike and null Kerr geodesics in horizon-penetrating Kerr coordinates, extending a recent analysis of <cit.>. From the technical point of view, the change from Boyer–Lindquist coordinates used in <cit.> to the horizon-penetrating Kerr coordinate system only slightly affects our formalism. The radial and polar equations remain unchanged, which allows us to use relatively compact formulas (<ref>) and (<ref>) for the radial and polar coordinates. Additional terms appear in equations for the azimuthal and time coordinates, but they can be integrated, once the solution to the radial equation is known. Our formulas (<ref>) and (<ref>) provide, together with Eq. (<ref>), all necessary expressions.
The horizon-penetrating Kerr coordinate system allows for a continuation of geodesics across horizons within the region of the extended Kerr spacetime in which it is well-defined, i.e., within Boyer–Lindquist Blocks I, II, and III (Fig. <ref>). At the level of geodesic equations, regularity at the horizons depends explicitly on the radial direction of motion. A reasoning given in Sec. <ref> and our examples of Sec. <ref> provide the following generic picture. A future-directed timelike or null geodesic originating outside the black hole (in Block I) and moving inward (ϵ_r = -1) can encounter a radial turning point at r > r_+ or it can pass smoothly through the event (r = r_+) and the Cauchy (r = r_-) horizons, transiting through Block II to Block III. In Block III a trajectory may continue smoothly to r → - ∞ (the so-called transit orbit), or it may get reflected at a radial turning point. In the latter case, the Kerr coordinate system only allows for a continuation up to the Cauchy horizon at r = r_-, where both t^' and φ^' diverge. We leave aside an obvious case of a geodesic hitting the ring singularity at ρ^2 = 0. It is known (see a proof in <cit.>, p. 288) that a timelike or null trajectory can only hit the ring singularity, if it is located entirely within the equatorial plane.
A. C. and P. M. acknowledge a support of the Polish National Science Centre Grant No. 2017/26/A/ST2/00530.
§ PENROSE DIAGRAM AND HYPERSURFACES OF CONSTANT KERR TIME
The diagram shown in Fig. <ref> is computed by a nearly standard Kruskal procedure, but a few details should be given in order to explain the plots of hypersurfaces of constant time t^'.
The conformal diagram at the symmetry axis is constucted for the two dimensional metric
^(2)g = - . Δ/ρ^2|_cos^2 θ = 1 dt^2 + . ρ^2/Δ|_cos^2 θ = 1 dr^2 = - F(r) dt^2 + 1/F(r) dr^2,
where
F(r) := r^2 - 2 M r + a^2/r^2 + a^2.
The usual Kruskal construction starts by defining a new coordinate
r_∗ := ∫^r dr^'/F(r^') = ∫^r r^'^2 + a^2/r^'^2 - 2 M r^' + a^2 dr^'.
The above integral can be evaluated analytically. In terms of dimensionless quantities, we get,
ξ_∗ = ξ + 1/√(1 - α^2)log| (ξ - ξ_+)^ξ_+/(ξ - ξ_-)^ξ_-|,
where r_∗ = M ξ_∗. Expression (<ref>) is well defined on ξ∈ℝ, except at ξ = ξ_±, where it diverges.
We will discuss some of the details, working for simplicity in Block I. A standard construction assumes the following coordinate transformations:
u := t - r_∗, v := t + r_∗
and
û := - exp (- c u), v̂ := exp (c v),
where c is a constant. We choose c = F^'(r_+)/2. This gives
^(2)g = -F dt^2 + 1/F dr^2 = -F/c^2 exp (2 c r_∗) d û d v̂.
Variables û and v̂ can be compactified by setting
U := arctan(û), V := arctan(v̂).
Finally, one defines Cartesian type coordinates T and X by
U = T - X, V = T + X,
or, equivalently,
T := U + V/2, X := V - U/2.
so that
- d û d v̂ = 1/cos^2(T - X) cos^2(T + X)(-dT^2 + dX^2).
This yields the metric in the form
^(2)g = -F dt^2 + 1/F dr^2 = F/c^2 exp (2 c r_∗) cos^2(T - X) cos^2(T + X)(-dT^2 + dX^2).
Combining these transformations, we get
- tanh( c r_∗) = cos (2 X)/cos (2 T), tanh (c t) = sin (2 T)/sin (2 X).
Equations (<ref>) allow for drawing the lines of constant r_∗ and t. We use these formulas to plot Blocks I and III, assuming the following ranges of X and T: In Block I: X ∈ (0,π/2), T ∈ (-π/4,π/4). In Block III: X ∈ (-π/2,0), T ∈ (π/4,3π/4).
In Block II, formulas (<ref>) have to be defined in a slightly different way. We set
- tanh (c r_∗) = cos (2 T)/cos (2 X), tanh (c t) = sin (2 X)/sin (2 T),
while the ranges of X and T are X ∈ (-π/4,π/4), T ∈ (0,π/2).
Plotting the lines of constant t^' is more involved. From Eq. (<ref>), we have t^' = t - r + r_∗. We plot the lines t^' = const by inverting, numerically, the relation r_∗ = r_∗(r), defined by Eq. (<ref>).
99
BoyerLindquist1967 R. H. Boyer and R. W. Lindquist, Maximal Analytic Extension of the Kerr Metric, J. Math. Phys. 8, 265 (1967).
Kerr1963 R. Kerr, Gravitational Field of a Spinning Mass as an Example of Algebraically Special Metrics, Phys. Rev. Lett. 11, 237 (1963).
rezzolla O. Zanotti and L. Rezzolla, Relativistic hydrodynamics, Oxford University Press, Oxford 2013.
Carter1968 B. Carter, Global Structure of the Kerr Family of Gravitational Fields, Phys. Rev. 174, 1559 (1968).
carter_1968b B. Carter, Hamilton-Jacobi and Schrodinger Separable Solutions of Einstein's Equations, Comm. Math. Phys. 10, 280 (1968).
walker_penrose_1970 M. Walker and R. Penrose, On Quadratic First Integrals of the Geodesic Equations for Type { 22 } Spacetimes, Commun. Math. Phys. 18, 265 (1970).
schmidt_2002 W. Schmidt, Celestial mechanics in Kerr spacetime, Class. Quantum Grav. 19, 2743 (2002).
glampedakis_kennefick_2002 K. Glampedakis and D. Kennefick, Zoom and whirl: Eccentric equatorial orbits around spinning black holes and their evolution under gravitational radiation reaction, Phys. Rev. D 66, 044002 (2002).
Mino2003 Y. Mino, Perturbative approach to an orbital evolution around a supermassive black hole, Phys. Rev.D, 67 (2003).
teo_2003 E. Teo, Spherical Photon Orbits Around a Kerr Black Hole, Gen. Rel. Gravit. 35, 1909 (2003).
drasco_hughes_2004 S. Drasco and S. A. Hughes, Rotating black hole orbit functionals in the frequency domain, Phys. Rev. D 69, 044015 (2004).
slezakova_2006 G. Slezáková, Geodesic Geometry of Black Holes, PhD thesis, University of Waikato 2006.
levin_perez_giz_2008 J. Levin and G. Perez-Giz, A periodic table for black hole orbits, Phys. Rev. D 77, 103005 (2008).
fujita_hikida_2009 R. Fujita and W. Hikida, Analytical solutions of bound timelike geodesic orbits in Kerr spacetime, Class. Quantum Grav. 26, 135002 (2009).
levin_perez_giz_2009 J. Levin and G. Perez-Giz, Homoclinic orbits around spinning black holes, I. Exact solution for the Kerr separatrix, Phys. Rev. D 79, 124013 (2009).
perez_giz_levin_2009 G. Perez-Giz and J. Levin, Homoclinic orbits around spinning black holes II: The phase space portrait, Phys. Rev. D 79, 124014 (2009).
hackmann_2010 E. Hackmann, Geodesic equations in black hole space-times with cosmological constant, PhD thesis, Bremen 2010.
hod_2013 S. Hod, Marginally bound (critical) geodesics of rapidly rotating black holes, Phys. Rev. D 88, 087502 (2013).
grib_pavlov_vertogradov_2014 A. A. Grib, Yu. V. Pavlov, and V. D. Vertogradov, Geodesics with negative energy in the ergosphere of rotating black holes, Mod. Phys. Lett. A 29, 1450110 (2014).
vertogradov_2015 V. D. Vertogradov, Geodesics for Particles with Negative Energy in Kerr’s Metric, Gravitation Cosmol. 21, 171 (2015).
lammerzahl_hackmann_2016 C. Lämmerzahl and E. Hackmann, Analytical solutions for geodesic equation in black hole spacetimes, Springer Proc. Phys. 170, 43 (2016).
rana_mangalam_2019 P. Rana and A. Mangalam, Astrophysically relevant bound trajectories around a Kerr black hole, Class. Quantum Grav. 36, 045009 (2019).
tavlayan_tekin_2020 A. Tavlayan and B. Tekin, Exact formulas for spherical photon orbits around Kerr black holes, Phys. Rev. D 102, 104036 (2020).
vandemeent_2020 M. van de Meent, Analytic solutions for parallel transport along generic bound geodesics in Kerr spacetime, Class. Quantum Grav. 37, 145007 (2020).
stein_warburton_2020 L. C. Stein and N. Warburton, Location of the last stable orbit in Kerr spacetime, Phys. Rev. D 101, 064007 (2020).
gralla_lupsasca_2020 S. E. Gralla and A. Lupsasca, Null geodesics of the Kerr exterior, Phys. Rev. D 101, 044032 (2020).
teo_2021 E. Teo, Spherical orbits around a Kerr black hole, Gen. Rel. Gravit. 53, 10 (2021).
mummery_balbus_2022 A. Mummery and S. Balbus, Inspirals from the Innermost Stable Circular Orbit of the Kerr Balck Holes: Exact Solutions and Universal Radial Flow, Phys. Rev. Lett. 129, 161101 (2022).
mummery_balbus_2023 A. Mummery, S. Balbus,
Complete characterization of the orbital shapes of the noncircular Kerr geodesic solutions with circular orbit constants of motion, Phys. Rev. D 107, 124058 (2023).
Dyson2023 C. Dyson, M. van de Meent, Kerr-fully diving into the abyss: analytic solutions to plunging geodesics in Kerr, Class. Quantum Grav. 40, 195026 (2023).
gonzo_shi_2023 R. Gonzo and C. Shi, Boundary to bound dictionary for generic Kerr orbits, Phys. Rev. D 108, 084065 (2023).
CHM2023 A. Cieślik, E. Hackmann, and P. Mach, Kerr Geodesics in Terms of Weierstrass Elliptic Functions, Phys. Rev. D 108, 024056 (2023).
Neill1995 B. O'Neill, The Geometry of Kerr Black Holes (A. K. Peters, Ltd., Wellesley, Massachusetts 1995).
biermann_1865 W. Biermann, Problemata quaedam mechanica functionum ellipticarum ope soluta, PhD thesis, Berlin 1865.
Greenhill_1892 A. G. Greenhill, The applications of elliptic functions (Macmillan, London 1892).
Reynolds_1989 M. J. Reynolds, An exact solution in non-linear oscillations, J. Phys. A: Math. Gen. 22, L723 (1989).
CM2022 A. Cieślik and P. Mach, Revisiting timelike and null geodesics in the Schwarzschild spacetime: general expressions
in terms of Weierstrass elliptic functions, Class. Quantum Grav. 39, 225003 (2022).
Rioseco2024 P. Rioseco and O. Sarbach, Phase Space Mixing of a Vlasov Gas in the Exterior of a Kerr Black Hole, Commun. Math. Phys. 405, 105 (2024).
Chrusciel2012 P. T. Chruściel, Ch. R. Ölz, S. J. Szybka, Space-time diagrammatics, Phys. Rev. D 86, 124041 (2012).
wolfram Wolfram Research, Inc., Mathematica, Version 13.2.1, Champaign, IL (2023).
Hawking1973 S. W. Hawking and G. F. R. Ellis, The large scale structure of space-time (Cambridge Univeristy Press, Cambridge 1973).
|
http://arxiv.org/abs/2409.03434v1 | 20240905113516 | A Key-Driven Framework for Identity-Preserving Face Anonymization | [
"Miaomiao Wang",
"Guang Hua",
"Sheng Li",
"Guorui Feng"
] | cs.CR | [
"cs.CR",
"cs.CV"
] |
IEEEpubidpullup6.5/20.5pt
^ Corresponding authors: Sheng Li <[email protected]>, and Guorui Feng <[email protected]>
Network and Distributed System Security (NDSS) Symposium 2025
23 - 28 February 2025, San Diego, CA, USA
ISBN 979-8-9894372-8-3
https://dx.doi.org/10.14722/ndss.2025.23729
www.ndss-symposium.org
[]
A Key-Driven Framework for Identity-Preserving Face Anonymization
Miaomiao Wang2,
Guang Hua3,
Sheng Li4^, and
Guorui Feng2^
2School of Communication and Information Engineering, Shanghai University, China
3Infocomm Technology Cluster, Singapore Institute of Technology, Singapore
4School of Computer Science, Fudan University, China
September 9, 2024
==============================================================================================================================================================================================================================================================================================================
§ ABSTRACT
Virtual faces are crucial content in the metaverse. Recently, attempts have been made to generate virtual faces for privacy protection. Nevertheless, these virtual faces either permanently remove the identifiable information or map the original identity into a virtual one, which loses the original identity forever. In this study, we first attempt to address the conflict between privacy and identifiability in virtual faces, where a key-driven face anonymization and authentication recognition (KFAAR) framework is proposed. Concretely, the KFAAR framework consists of a head posture-preserving virtual face generation (HPVFG) module and a key-controllable virtual face authentication (KVFA) module. The HPVFG module uses a user key to project the latent vector of the original face into a virtual one. Then it maps the virtual vectors to obtain an extended encoding, based on which the virtual face is generated. By simultaneously adding a head posture and facial expression correction module, the virtual face has the same head posture and facial expression as the original face. During the authentication, we propose a KVFA module to directly recognize the virtual faces using the correct user key, which can obtain the original identity without exposing the original face image. We also propose a multi-task learning objective to train HPVFG and KVFA. Extensive experiments demonstrate the advantages of the proposed HPVFG and KVFA modules, which effectively achieve both facial anonymity and identifiability.
§ INTRODUCTION
§.§ Background
The virtual face is widely used in the metaverse, which can be generated by artificial intelligence content generation techniques. Research on virtual faces provides more possibilities for the development of the metaverse.
However, the widespread application of virtual faces has also brought about a series of issues regarding privacy, ethics, and security. Therefore, when discussing the application of virtual faces in the metaverse, we also need to seriously think about how to balance the relationship between technological innovation and the protection of personal rights and interests. In addition, to meet the needs of interactivity, virtual faces need to be able to respond to user actions and emotions in real time.
Early approaches protect the privacy of faces through masking techniques such as mosaic <cit.>, blurring <cit.> and pixelization <cit.>. These methods can anonymize digital face images, but the virtual face images often have poor visual quality, which hinders subsequent applications <cit.>.
With the recent advances in deep learning, researchers take advantage of generative adversarial networks <cit.> to generate virtual faces <cit.>. Despite the high quality and anonymization ability, these face images suffer from permanent identity loss and cannot be utilized for recognition.
To tackle the privacy-utility trade-off, recent efforts <cit.> have been made to generate the visual faces while maintaining the identifiability.
Li et al. <cit.> propose a method that adaptively locates identity-independent attributes in face images and generates the virtual faces using the original face and the located face attributes. They subsequently <cit.> propose an identity-preserving face disguise method by decoupling the appearance code and the identification code, and replacing the appearance code with a target one. Although these methods can ensure the identifiability of the virtual faces, the identity information is still revealed, posing privacy risks. The work in <cit.> proposes an identifiable virtual face generator (IVFG), which does not reveal any information about the original face during the authentication. Specifically, it uses different keys to control the identities of the virtual faces, which can be directly used for face authentication. However, the virtual faces cannot meet the requirements of posture synchronization, which is always the same regardless of the head posture of the original face images. In addition, face recognition is conducted in the virtual domain with the original identity lost.
In recent years, methods focused on the interactivity of virtual faces have been proposed. The generator of CIAGAN <cit.> is able to preserve the head posture after the anonymization. However, the stability of the image quality still faces certain challenges, where the generated images may be blurred, distorted, or have artifacts. In addition, the original identity of the virtual face is permanently lost and cannot be applied to downstream tasks like face recognition and authentication. Since the latent space of the StyleGAN2 contains rich semantic information, VFGM <cit.> performs style mixing in the latent space to generate virtual faces that maintain head posture. However, this method only provides privacy protection at the visual level, which does not offer any protection in terms of face recognition. That is to say, the protected face image is visually different from the original face, but the attacks can still obtain the original identity using a face recognition approach.
To summarize, existing research on facial anonymization faces the following challenges. (1) The conflict between anonymity and identifiability. The existing virtual faces either completely discard the identifiability or transform the original identity into a virtual one. Both permanently lose their original identity, which creates obstacles for face management. (2) Difficulties in maintaining head posture and facial expression. The virtual faces need to maintain a certain level of authenticity with the head posture and facial expression synchronized for face recognition tasks such as head turning, smiling, and blinking, to enhance the interaction between the real world and the virtual world.
§.§ Our Work
In order to solve the above problems, we aim to propose an identity-preserving framework with the following functionalities, (1) authorized and traceable virtual face authentication to balance between privacy and authenticity. It is required that the virtual face can be authenticated as the original identity with the correct key, and no visual information is leaked during the authentication.
And (2) significant appearance changes with the head posture maintained. In the metaverse, the virtual face has significant visual differences from the original face while maintaining the head posture and expression characteristics of the original face. These are achieved through a key-driven face anonymization and authentication recognition (KFAAR) multitasking mechanism, which contains a head posture-preserving virtual face generation (HPVFG) module and a key controllable virtual face authentication (KVFA) module.
Our HPVFG module leverages a user-specific key to project the latent vector of the original face into a virtual vector space, ensuring that the virtual face exhibits significant visual differences from the original face while maintaining the original head posture and expression. This is accomplished through a series of components, including an encoder, a projector, a mapping network, a generator, and a head posture correction module. These components jointly produce a virtual face that is anonymized with the original head posture and facial expression maintained.
The KVFA module, on the other hand, is designed for virtual face authentication. It is capable of extracting the original identity from the virtual face using a dedicated recognition network when the correct key is given. This network is trained through a multi-task learning objective, which can prevent misidentification and false rejection, and guarantees correct face recognition without revealing the visual information of the original face.
The flowchart of the proposed framework is presented in Figure <ref>. It can be seen that the virtual faces generated by our HPVFG have significant visual differences from the original face. Therefore neither human nor machine can determine its true identity. It should also be noted that our virtual faces can maintain the original head postures and facial expression. With the correct key given, face authentication can be performed through KVFA, allowing the virtual face to be identified as the original identity. Table <ref> summarizes the attributes of our virtual faces in comparison with the mainstream solutions, including CIAGAN <cit.>, VFGM <cit.> and IVFG <cit.>.
We summarize the main contributions as follows:
* We make the first attempt to solve the virtual face authentication challenge while offering privacy protection. The generated high-quality virtual faces are anonymized to both humans and machines, while the original identity can still be obtained with the correct key for authentication.
* We propose a new framework HPVFG to generate anonymous, synchronized, diverse, and high-quality virtual faces.
* We propose a dedicated recognition network i.e., the KVFA module, which can extract the original identities from our virtual faces using the correct key.
* Extensive experiments demonstrate the effectiveness of our proposed framework. Compared with SOTA, our method performs well in generating the virtual faces that maintain head posture and facial expression, which achieves face authentication through the KVFA module. In addition, we also conduct a security analysis of the potential attacks to demonstrate the security of our method.
§ RELATED WORK
We classify the face anonymization methods into the following three categories.
Anonymization by Visual Modification:
Early face anonymization methods <cit.> are usually conducted by visual modification, including blurring, mosaicing, and masking. They sacrifice image quality to remove information that can be used for human perception. However, the loss of identity information and visual quality limits its application in the field of computer vision.
To improve the quality of virtual faces, deep learning-based methods have been proposed. We can divide these schemes into two groups. One is identity preservation <cit.>. These methods alter the appearance of faces for visual protection. However, the identity features are maintained in the protected faces, which may lead to the risk of identity theft. The other is identity modification <cit.>. These anonymization methods are irreversible, as the identity of the original face is permanently lost, making it impossible to perform downstream tasks like face recognition.
Anonymization by Visual Preservation:
The virtual faces generated by such methods maintain the original visual information, i.e., not altering the appearance of the original face. Methods <cit.> decouple facial attributes and modify sensitive attributes to protect privacy, ensuring that the generated face have minimal visual changes.
Saheb et al. <cit.> use semi-adversarial networks to learn the feature representation of input face images through convolutional autoencoders, ensuring that the generated faces offer maximum privacy protection with minimal distortion.
Mirjalili et al. <cit.> utilize the method of generating adversarial perturbations to protect gender information, which can change gender information without affecting the discrimination of biometric matchers and human observers.
Shan et al. <cit.> propose a face anonymizationy method called Fawkes that can help users resist face recognition systems. This method adds imperceptible perturbations on photos for privacy protected visual content sharing. These methods typically anonymize certain visual or identity attributes while revealing important face information.
Anonymization by Visual Reversibility:
Visually recoverable anonymization methods <cit.> usually have to meet two requirements: one is anonymization and the other is de-anonymization.
Gu et al. <cit.> propose a new face identity conversion module, which can automatically perform key-based anonymization and de-anonymization of faces. Inspired by Gu et al. <cit.>, Pan et al. <cit.> propose a framework based on a conditional encoder and decoder, which can achieve diversity and controllability of the virtual faces according to a key and a privacy protection level. These anonymization methods realize the reversibility of identity information, allowing people with the correct key to view user data, and prevent the leakage of facial privacy to a certain extent.
§ PROBLEM STATEMENT
§.§ System Model
The system model considered in this paper is shown in Figure <ref>, in which the involved participants include user (U), face anonymization server (FAS), face recognizer (FR), virtual face authentication server (VFAS) and adversary (AD).
U is responsible for securely generating the user keys. Specifically, U calls the KeyGen algorithm to generate a key and sends the user key k and the original face image x to FAS. For authentication, U downloads virtual face images from the cloud and sends them along with the user key to VFAS to obtain the identity of the virtual face.
FAS receives the key k from U and the user's face image x, and generates the corresponding virtual face image x_v. After the anonymization, FAS clears user information and uploads x_v to the cloud.
FR is a universal face recognition algorithm that can be accessed by anyone.
VFAS is a server used for virtual face identity authentication, which receives the key k sent by U and the virtual face x_v to obtain the identity of the virtual face. If the key is correct, the identity of the virtual face is consistent with that of the corresponding original face.
AD represents all attackers attempting to obtain the original identity from the virtual faces. For AD, only the virtual faces stored in the cloud can be accessed. Therefore, AD will attempt to guess the key k and obtain access control permissions of VFAS to reveal original identity of the virtual face.
§.§ The Interaction Process
The interaction process among different participants can be further divided into the following six stages.
Step1: U sends user keys and original face images to FAS.
Step2: FAS conducts the anonymization of the face image, generates a virtual face, and uploads the virtual face to the cloud.
Step3: The virtual images saved in the cloud are shared, and AD can easily access and download virtual face images from the cloud.
Step4: AD can use a universal face recognition algorithm FR, but the obtained identity differs greatly from the original face.
Step5: U downloads virtual faces from the cloud for authentication purpose.
Step6: U sends the virtual face and user keys to VFAS to obtain the same identity as the original face for authentication.
§.§ Threat Model
We consider U to own the data itself and be trustworthy, and FAS as an honest entity. While VFAS is considered as an honest-but-curious entity, which follows the required protocol and performs the authentication.
Based on the mechanism of our model, we can determine that the identity of the original face can only be obtained if AD steals the user key and gains access to VFAS. AD may steal U's identity through the following means:
§.§.§ Key Guessing
AD may hack the user keys through the following methods:
* User Information Assisted Guess. AD studies the user's digital footprint and attempts to guess the user's key based on such information. AD may also try common keys to guess the user keys.
* Brute Force Attack. AD uses robots to repeatedly use random keys until the correct key is found.
§.§.§ Model Guessing
AD may obtain access to VFAS by guessing the model. AD may attempt to guess the model through various means, such as using brute force attack, social engineering, phishing emails, etc., to obtain credentials or information for accessing the model.
§ THE PROPOSED METHOD
§.§ Overview
The proposed KFAAR framework consists of an HPVFG module and a KVFA module. The HPVFG module utilizes user keys to convert the latent vectors of the original faces into virtual vectors, preserving important attributes like head postures and facial expression. The KVFA module aims to verify the identity of virtual faces using the correct key without the need to restore the original face, thereby ensuring privacy.
We use multi-task learning objectives to train HPVFG and KVFA modules, effectively achieving the dual goals of anonymization identifiability.
§.§ Virtual Face Generation
§.§.§ Properties of Virtual Faces
The goal of HPVFG is to generate a virtual face with a new appearance with the same head posture and facial expression as the original face. In addition, the virtual faces also need to be diverse, differentiated, and interactive.
Given two sets of face images 𝒳 and 𝒴, and the user keys 𝒦 which control the generation of virtual faces. The generation of a virtual face can be expressed as G( x, k ), and R( x ) represents the feature of x extracted by a general face recognizer. Given the original faces x_1, x_2 ∈ 𝒳, y ∈ 𝒴, and two distinct keys k_1, k_2 ∈ 𝒦, our goal is to generate the virtual faces with the following properties.
* Anonymity: The virtual face has a different identity from the original face, formulated as,
R( G( x_1,k_1) ) ≠ R( x_1).
* Synchronism: If x_1 and x_2 share the same original identity, the virtual faces derived from the same key should belong to the same virtual identity, formulated as,
R( G( x_1,k_1) ) = R( G( x_2,k_1) ).
* Diversity: The virtual identities generated by the same original face but using different keys should be different, formulated as,
R( G( x_1,k_1) ) ≠ R( G( x_1,k_2) ).
* Differentiation: For two original faces x and y which are from different identities, the generated virtual identities should be different even if they are derived from the same key, formulated as,
R( G( x,k_1) ) ≠ R( G( y,k_1) ).
* Interactivity: Virtual faces should be able to track the head posture and facial expression, thereby enhancing the user experience in the virtual environment.
* Realism: The authenticity of the virtual faces is essential, and the appearance of the virtual faces needs to have a high degree of realism.
§.§.§ Network Architecture of HPVFG
As shown in Figure <ref> (a), HPVFG consists of five parts, including: an encoder E, a projector P, a mapping network M, a face generator G and a head posture correction module G_f. Each part will be described in detail in this section.
Encoder: We use a pre-trained face recognizer as the encoder E. The face features are extracted from the original faces and represented as z= E (x).
Projector: We propose a projector P that modifies the face feature by combining the original face feature z and key k. It can be formulated as z'=P(z,k). The network consists of a concatenated operation and a multilayer perceptron (MLP).
Mapping Network: The main task of the mapping network is to generate style parameters, which expands the 512-dimensional latent z from the latent space Z to Z^+. The extended latent z^+ consists of 18 different z. It can be formulated as z^+=M(z).
Generator: This generator uses the pre-trained StyleGAN2 to reconstruct the face images. It maps style-mixed extended latent z^+ into virtual face images. It can be formulated as x' = G(z^+).
Head Posture Correction Module:
FaceVid2Vid is a method based on conditional generative adversarial network (CGAN) <cit.> for head posture replay of face images. The main idea is to map the input virtual face image and the original head posture to a shared latent variable space by learning the relationship between the face image and the corresponding head posture, thereby achieving the transformation of the head head posture. The FaceVid2Vid model can replay head postures by learning the relationship between face images and corresponding head posture. This model can be applied to fields like virtual reality, providing users with a more realistic experience.
In order to make the virtual faces have the posture and expression of the original faces, we use a pre-trained face reconstruction model FaceVid2Vid to obtain the virtual faces x_v. The facial reconstruction process can be described as: x_v=G_f (x', x), where G_f represents the FaceVid2Vid model.
The encoder E takes a face image tensor as input, the projector P combines k with the face feature and projects it to the latent space of StyleGAN2. G generates a realistic virtual face with virtual identity. The virtual face learns the head posture and facial expression features of the original face through x_v=G_f to generate a virtual face that maintains the head posture and face expression (see Algorithm 1 for the generation process).
§.§.§ Training of HPVFG
We use a multi-task learning approach to train HPVFG, as shown in Figure 4 (a), which can achieve the first four properties of virtual faces. In the training, the projector P is trainable, while others are frozen.
We use the cosine embedding loss L_cos to measure the similarity between the face features. It can be expressed as:
L_cos(f_1, f_2,l ) = {(1-cos(f_1, f_2) ) , l=1,
max( m,cos(f_1, f_2) ), l=-1,
.
where m represents a hyperparameter, l=1 means that f_1 and f_2 come from faces with the same identity, l=-1 means that f_1 and f_2 are from different faces. We optimize L_cos to maximize/minimize the difference between features.
* Anonymity Loss.
We hope that the original face and the virtual face are not belong to the same identity, and the ID-distance between them is increased by minimization L_ano, which can be expressed as:
L_ano = L_cos(R( G (x_1,k_1) ),R(x_1),-1 ).
* Synchronism Loss. Given two different face images x_1,x_2∈𝒳 corresponding to the same identity, the corresponding virtual faces generated, with the same key, have the same identity. We propose L_syn loss to represent reducing the identity distance between virtual faces:
L_syn = L_cos(R( G (x_1,k_1) ), R( G (x_2,k_1) ), 1 ).
* Diversity Loss. To ensure the diversity of virtual faces, we attempt to ensure that the virtual faces generated from the same face image controlled by different keys have different identities. We propose L_div, given by:
L_div = L_cos(R( G (x_1,k_1) ), R(G (x_1,k_2) ), -1 ).
* Differentiation Loss. To meet the differentiation property, the virtual faces generated from different original faces should belong to different identities. Given two face images x ∈𝒳 and y ∈𝒴, to generate virtual faces with different identities using the same key, it is necessary to reduce the decisive role of the key on the identity. We define L_dif and expand the differences between virtual faces by minimizing this loss, given by:
L_dif = L_cos(R( G (x,k_1) ), R( G (y,k_1) ), -1 ).
* Total Loss. Our overall optimization goal is the weighted average of the above losses, which is given below.
L_tot = λ_anoL_ano+λ_synL_syn + λ_divL_div +λ_difL_dif,
where λ_ano, λ_syn, λ_div and λ_dif are the weights of different losses.
§.§ Authentication
§.§.§ Properties of KVFA
To ensure that the virtual faces are traceable, we train the KVFA network to ensure that the virtual face can be authenticated by the key. We use I( x ) to denote the identity of x extracted by this network. It has the following properties:
Prevent Misidentification:
* The original face and the virtual face belong to different identities.
I(x) ≠ I( G( x,k_1) ).
* When using the wrong key for authentication, the identities of the original face and the virtual face are different.
I(x) ≠ I( G( x,k_1), k_2).
* When using the correct secret key for authentication, different faces have different identities.
I( G( x,k_1),k_1) ≠ I( G( y,k_1), k_1).
Prevent False Rejection:
* When using the correct key for authentication, the identities of the original face and the virtual face are consistent.
I(x) = I( G( x,k ),k ).
* When using the correct key for authentication, two different virtual faces from the same original identity belong to the same identity.
I( G( x_1,k_1),k_1) = I( G( x_1,k_2), k_2),
and
I( G( x_1,k_1),k_1) = I( G( x_2,k_2), k_2).
§.§.§ Network Architecture of KVFA
KVFA (as shown in Figure 3 (b)) consists of two parts, including a feature extractor (F) and a projector (P). When KVFA inputs an image, the output of the model is the feature representation of the image. When the input of HPVFG is an image and a key, the output is a projection of the feature representation of the image combined with the key.
* Feature Extractor: We use an end-to-end network as a feature extractor, which maps the input image directly to the feature representation. This network consists of multiple layers, including the input layer, hidden layer, and output layer, and we set the number of hidden layers as 4. Feature representations from input images can be described as z_v = F (x_v).
* Projector: We propose a projector P that modifies the face feature by combining the face feature z_v and key k. It can be formulated as z_v'=P(z_v,k). The network consists of an MLP.
Using KVFA, we can achieve traceability of virtual faces using the correct key without restoring the original faces, the authentication process is summarized in Algorithm 2.
§.§.§ Training of KVFA
We use a multi-task learning approach to train KVFA separately, as shown in Figure <ref> (b).
In KVFA, our goal is to reveal the original identity of the virtual face by the correct key, and the appearance of the original face is not exposed during the authentication. Since virtual faces are anonymous, universal face recognizers are not applicable. We train a specialized key-conditioned face recognizer I to extract features of virtual faces. When the input key is correct, the features of the virtual face and the original face are consistent. We use L_cos (5) to measure the similarity of face features.
Prevent Misidentification: First of all, the model should have the ability to prevent misrecognition, that is, to prevent two different faces from being recognized as the same identity. We design set of losses to meet the needs, the details of which are given below for different cases.
* Case 1: This model can distinguish the original face and the virtual face, i.e., their features of are different, the corresponding loss can be expressed as:
L_pmis1 = L_cos(I( G (x_1,k_1) ),I (x_1),-1 ).
* Case 2: The authentication of virtual faces is controlled by the correct key. When an incorrect key is used as input, their features should be different using I. The corresponding loss is defined as follows:
L_pmis2 = L_cos(I ( G (x_1,k_1) ,k_2),I (x_1),-1 ).
* Case 3: For virtual faces from different original identities, even if the correct secret key is used for authentication, their features should be different using I. The corresponding loss is defined as follows:
L_pmis3=L_cos(I( G (x,k_1) ,k_1),I ( G (y,k_1) ,k_1),-1 ).
* Total Loss to Prevent Misidentification. The total loss to prevent misidentification can be described as:
L_tot1 = λ_pmis1L_pmis1 +λ_pmis2L_pmis2 + λ_pmis3L_pmis3,
where λ_pmis1, λ_pmis2, and λ_pmis3 are the weights for different losses.
Prevent False Rejection: This model can prevent faces of the same identity from being falsely rejected. We design a set of different losses for different cases, which is detailed below.
* Case 1: When the correct key is used, the identities of the virtual face and the original face are consistent. We minimize the distance between their features by optimizing L_per1, which can be described as:
L_per1 = L_cos(I( G (x_1, k_1), k_1),I (x_1),1 ).
* Case 2: Different virtual faces generated from the same identity should have the same identity when the correct secret key is presented. We minimize the distance between the two virtual faces' features by optimizing L_per2, which can be described as:
L_per2 = L_cos(I( G (x_1,k_1) ,k_1),I(G (x_2,k_1) ,k_1),1 ).
* Total Loss to Prevent False Rejection: The total loss to prevent false rejection can be described as:
L_tot2 = λ_per1L_per1+λ_per2L_per2,
whereλ_per1 and λ_per2 are the weight for different losses.
Overall Objective. The overall objective for KVFA is formulated as follows:
L_tot = L_tot1+ L_tot2.
§ EXPERIMENTAL RESULTS
In this section, we conduct a series of experiments to evaluate the effectiveness of the proposed framework.
§.§ Experimental Setting
∙ Dataset:
We conduct experiments on the following two public datasets:
(1) LFW <cit.>, which contains a total of more than 13,000 face images from 13,233 individuals, of which 1,680 have more than two images. (2) CelebA <cit.>, which contains 202,599 face images of 10,177 celebrities.
∙ Pre-trained Models:
In the training phase, we use FaceNet Inception-ResNet-v1 <cit.> which is trained on VGGFace2 <cit.> for both encoder E and recognizer R.
We train the official release of StyleGAN2 on the LFW and CelebA datasets as our generator G.
We use the pre-trained FaceVid2Vid as the head posture correction module G_f.
During the testing and evaluation phase, we choose the pre-trained ArcFace <cit.> and Sphereface <cit.> as face recognizers R.
∙ Parameter Setting: The projector is trained for 10 epochs by the Adam optimizer with β_1 = 0.9 and β_2 = 0.999. Learning rate is set to 1 × 10^-4, and we set λ_ano=0.4, and λ_syn= λ_div= λ_dif=1. KVFA is also used by the Adam optimizer with β_1=0.9 and β_2=0.999, where the batch size is set as 2, the learning rate is set as 1 × 10^-4 and λ_pmis1=λ_pmis2=λ_pmis3=λ_per1=λ_per2=1.
The training on LFW takes four hours on a single NVIDIA GTX 3090 GPU.
∙ State of the Art:
We compare our anonymization framework HPVFG with three state-of-the-art (SOTA) methods, namely CIAGAN <cit.>, IVFG <cit.>, and VFGM <cit.>.
∙ Evaluation Metrics: To evaluate the anonymity of the virtual face, we use "anonymization" to measure the unsuccessful matching between the original face and the virtual faces. We use equal error rate (EER) and area under curve (AUC) to evaluate the synchronism of virtual faces. AUC measures the area under the
receiver operating characteristic (ROC) curve. Larger AUC and smaller EER indicate higher the accuracy of the face recognition system. We use the objective image quality assessment metrics, i.e., Frech’et inception distance (FID), to evaluate the visual quality of the virtual faces. In addition, to evaluate the performance of head posture and facial expression preservation, we use existing head posture estimation techniques to calculate the Euler angles (Yaw, Pitch, and Roll) within the human body.
§.§ Evaluation of Virtual Faces
We evaluate the performance of our proposed framework in terms of anonymity, diversity, synchronism, detection rate, interactivity, and visual quality, and compare these performances with SOTA methods.
§.§.§ Anonymity and Diversity
As shown in Figure <ref>, we give examples of the original faces and the corresponding virtual faces, where the first two rows are original faces from the same identity, and the last two rows correspond to virtual faces. It can be seen that there are significant visual differences between the original face and the virtual face, and the virtual face maintains the head posture and facial expression of the original face. We quantify the anonymity of the virtual face by calculating the mismatch rate between the virtual face and the original face. When the distance between features of the original face and the virtual face is less than the threshold of the face recognizer, we regard them to be from different identities. Table <ref> summarizes the performance of different methods on the LFW and CelebA datasets. Our method achieve a high anonymization of around 0.98, which significantly outperforms CIAGAN <cit.> and VFGM <cit.>, and is similar to IVFG <cit.>.
In addition, we explore the effects of using different keys to generate virtual face images. We calculate the unsuccessful matching rate between virtual face images generated using different keys. The results are shown in Table <ref>, the diversity is superior to SOTA on the LFW dataset, and is only 0.059 lower than IVFG <cit.> on the CelebA dataset. Figure <ref> shows eight virtual face images generated using eight different keys from four original face images, respectively. It can be seen that under different keys, virtual faces have significant differences in appearance.
§.§.§ Synchronism and Detection Rate
One of the goals of our method is to generate virtual face images that have high utility, including high synchronism (the original faces of the same identity are controlled by the same key to generate the virtual face images with the same virtual identity) and high detection rate. We use randomly generated 8-bit keys and the LFW testing set as inputs for HPVFG to generate the corresponding virtual face images. We can see from Figure <ref> that different original faces of the same identity are able to generate virtual faces with the same virtual identity by using the same key. At the same time, the virtual faces maintain the head posture and facial expression of the original face.
To demonstrate the superiority of our proposed method, we also conducted a quantitative analysis that compares the performance with SOTA on the LFW and CelebA datasets,
we choose AUC and EER as the criteria for measuring synchronism, and use the MTCNN model to detect the virtual faces for measuring the detection rate. The results of synchronism and detection rate are shown in Table <ref>. It can be seen that our proposed method outperforms SOTA in both aspects.
§.§.§ Interactivity
Interactivity is one of the important properties of our virtual faces, which requires the virtual face to have the same head posture and facial expression as the original face. As shown in Figures <ref> and <ref>, the virtual face maintains the head posture and facial expression of the original face, demonstrating the visual interactivity of our method.
In addition, we utilize the API of Face++ to measure the preservation of head posture and facial expression. Specifically, we detect the similarity between head posture angles (Yaw, Pitch, and Roll) and facial expression. As shown in Table <ref>, the offset angle between our virtual face and the original faces in each direction is the lowest among different schemes. The similarity of facial expression can reach 0.8, which is higher than that of SOTA.
§.§.§ Visual Quality
We use FID to quantitatively evaluate the visual quality of the generated images. Table <ref> shows FID of the virtual face images generated by different methods. It can be seen that our method achieves good image quality. It can also be seen from Figure <ref> and Figure <ref> that our virtual face images have high perceptual quality.
§.§.§ Summary
Compared with SOTA, our virtual faces achieves better performance in anonymity, diversity, synchronism and detection rate, as well as the ability in preservation of head posture and facial expression. Although IVFG <cit.> achieves higher visual quality, it still has the following limitations. (1) Its virtual faces have random head postures and expressions. (2) The recognition performance of IVFG <cit.> is only reflected in determining whether the virtual faces come from the same original identity, and cannot be used for source tracking tasks.
CIAGAN <cit.> has made some progress in preserving head posture, but its virtual faces are often suffer from poor visual quality and the original identity is permanently lost, making it impossible to perform downstream recognition tasks. VFGM <cit.> visually protects the original face while maintaining the head posture. It satisfies the need for interactivity. However, the lack of feature-level protection results in the original identity being arbitrarily and illegally used and recognized without authorization. Our HPVFG generates the virtual faces with high visual quality while maintaining head posture and facial expression. In addition, our proposed KVFA can achieve authorized recognition of the original identity, which prevents the virtual faces from illegal access and identity abuse. This breakthrough protects the original face in both visual and machine perception, which mitigates the contradictory between privacy and face authentication.
§.§ Identity Authentication
In this section, we evaluate the recognition accuracy and identity authentication ability of the KVFA module separately. We use cosine similarity as a measure of the similarity between anonymous face features and original face features. We set the threshold as 0.7, which means when the similarity is greater than 0.7, the original and the virtual faces belong to the same identity.
§.§.§ Recognition Accuracy of KVFA
We use a series of evaluation indicators: correct recognition rate (CRR), false acceptance rate (FAR), and AUC to evaluate the recognition accuracy of KVFA. We conduct tests on the LFW and CelebA datasets respectively, with the inputs of KVFA being the original faces, or the virtual faces with correct keys. The experimental results are shown in Table <ref>. It can be observed that our KVFA module performs well on the two datasets, which demonstrates its ability in tracing the original identities of the virtual faces.
§.§.§ Different Authentication Scenarios
During the authentication, the following scenarios may occur:
Scenario 1: The adversary has the correct key, but does not have access to KVFA. They input the correct key and virtual face into HPVHG to recover the real face. Then they use a universal face recognizer to obtain face feature and attempt to match it with the feature of original face.
Scenario 2: The adversary bypasses the key and directly input the virtual face into the KVFA module to obtain the face feature, and attempts to match it with the original face.
Scenario 3: The adversary inputs the wrong key and the virtual face into KVFA, to obtain the face feature, and attempts to match it with the original face.
Scenario 4: The adversary/user holds the correct key, inputs the virtual face and key into KVFA to obtain the feature and matches it with the original face.
Table <ref> gives the similarity between the virtual face and the original face under different authentication scenarios. It can be observed that, when adversary does not have access to KVFA, the cosine similarity between features of the recovered face and the original face is low, indicating a failed authentication. When adversary attempts to bypass the key or conduct the authentication with the wrong key, the cosine similarity between the features of the original face and the virtual face is lower in both LFW and CelebA, indicating a failure of authentication. Therefore, our method can resist illegal authentication by bypassing keys or using incorrect keys. When using the correct key for authentication, the cosine similarity between the features of the original and virtual face is over 0.8, meaning that successful authentication can be carried out by using the correct keys.
§.§ "In-the-wild" Experiment
To validate the generalization ability of our model, we further select the FFHQ dataset for evaluation. FFHQ is a high-quality face image dataset that contains 70,000 high-resolution face images with a resolution of 1024×1024. We randomly select 100 images from the FFHQ dataset as the validation set.
We resize the face images to the same size as the inputs of our model trained on LFW. After resizing, we input the 100 face images into our HPVFG (trained on LFW) to obtain the virtual faces. For each original face image, we assign eight different keys to generate eight virtual faces. As such, we have 800 virtual faces in total. Table <ref> gives the performance of the virtual faces as well as the recognition accuracy of KVFA (trained on LFW). It can be seen that we can obtain lower FID on the FFHQ dataset thanks to the high quality face images in FFHQ. For the other performance indicators, however, the results on the FFHQ are slightly lower than those on the LFW dataset. This may be attributed to the more diverse face features and more complex backgrounds in the FFHQ dataset. But the overall performance is still higher than SOTA, which demonstrates the good generalization ability of our method among different datasets.
§.§ Ablation Studies
§.§.§ Head Posture Correction Module
Here, we demonstrate the importance of the posture correction module in maintaining head posture and facial expression. Without this module, the identity, posture, and the expression of the virtual face generated by the generator are highly controlled by the key. As shown in Figure 7 that the head posture and facial expression of the virtual face and the original face are inconsistent without the attitude correction module. That is, under the same key, the virtual faces generated from different face images of the same identity not only belong to the same virtual identity but may also have the same head posture and facial expression. This causes the virtual face to lose the diversity of head posture and expression, which cannot meet the interactivity of the metaverse.
§.§.§ Each Loss of HPVFG
We remove each loss component from the overall training objective of HPVFG and summarize the corresponding performance in Table <ref>. It can be seen that, by using all the loss functions, we can achieve optimal performance.
If there is no L_ano loss, the performance of virtual face recognition will decrease, which results poor synchronism. L_div plays a crucial role in the diversity of virtual faces. Without L_dif, the virtual faces of different identities generated under the same key may belong to the same identity, leading to identity collisions.
§.§.§ Each Loss of KVFA
Similarly, we remove each loss component from the overall training objective of KVFA and summarize the recognition accuracy of the KVFA in Table <ref>. When either L_tot1 or L_tot2 is removed, the recognition accuracy of the KVFA module will significantly decrease. When L_tot1 is removed, KVFA will lose its ability to prevent misidentification, faces of different identities will be recognized as the same identity. When L_tot2 is removed, KVFA will lose its ability to prevent false rejection and faces with the same identity will be recognized as different identities. When both losses participate in the training of KVFA, the model achieves the best recognition performance.
§.§.§ Threshold Setting for KVFA
To evaluate the impact of different threshold setting on identity authentication, we use the validation set of LFW to see how CRR changes under different thresholds, the results of which are shown in Figure <ref>(a). It can be seen that we can obtain the highest CRR when the threshold is set as 0.7.
§.§.§ Weight Setting for the Loss Functions of KVFA and HPVFG
We evaluate the performance of our method on the validation set of LFW by varying the weights of different loss components in the total losses of KVFA and HPVFG. For the weight setting of the loss function of the KVFA, we gradually change each weight (i.e., λ_pmis1, λ_pmis2, λ_pmis3, λ_per1 and λ_per2) from 0.1 to 1 with the rest four weights being 1, to see how the AUC changes, the results of which are shown in Figure <ref>(b). It can be seen that we can achieve the best performance by setting all the weights as 1.
For the same token, we gradually change each weight (i.e., λ_ano, λ_syn, λ_div and λ_dif) from 0.1 to 1 in the total loss of HPVFG to see how the performance of the virtual faces would be changed. Figure <ref> plots the AUC, anonymity, diversity and FID of the virtual faces under different weight setting. It can be seen that, when we set the λ_ano as 0.4, and the other weights as 1, we can achieve a good balance among the AUC, anonymity, diversity and FID for the virtual faces.
§ SECURITY ANALYSIS AND DISCUSSIONS
§.§ Privacy Leakage
We note that successful extraction of the original identity from the virtual face requires two conditions: obtaining the user key that controls the generation of the virtual face and the authentication server VFAS. Therefore, privacy breaches may mainly occur during the face image upload stage and the virtual face authentication stage.
§.§.§ Face Image Upload Stage
U sends a key and the original face image to FAS, which generates a virtual face image based on the user key. As an honest server, FAS will automatically delete the original face image and key after the generation of virtual faces. At this stage, the adversary can only obtain the virtual face whose virtual identity is different from the original identity.
§.§.§ Virtual Face Authentication Stage
In the process of verifying the original identity of the virtual faces, the adversary needs to hold the correct key and obtains the usage permission of the KVFA model to perform the face authentication. We here discuss two scenarios where only the key is leaked or the KVFA model is leaked.
∙ User Key Leakage: When the adversary holds the correct key but does not have access to the KVFA model. The adversary may input the virtual face and key into the virtual face generation model HPVFG to recover the original face. As shown in Figure <ref>, the first row is the original face, the second row is the virtual face, and the third row is the recovered face. It can be observed that the recovered face is visually different from the original face.
It can also be seen from Scenario 1 in Table <ref> that the similarity between the recovered and the original faces are very low.
Therefore, from the perspective of visual perception and machine recognition, even if the key is leaked, the face authentication cannot be performed correctly.
∙ VFAS Access Violation: When the adversary obtains access to VFAS but does not obtain the correct user key, they are unable to obtain the original identity from the virtual faces. Please refer to Scenario 3 in Section 5-C-2) for details.
§.§ Security Analysis of Key
We consider the proposed framework to be secure if the adversary cannot obtain the original identity of our virtual face.
* Key Generation: The generation of keys uses a secure random number generator, KeyGen, to ensure that each key is unique. In addition, the key generation is done on the user side to prevent it from being obtained by the adversaries.
* Key Storage: The key is only stored on the user side. After the virtual face generation, FAS will delete the user key.
* The Key Length: The longer the key is, the harder it is to be guess. We discuss below the impact of key length on the performance of the virtual faces.
§.§.§ Impact of the Key Length on the Virtual Faces
We select randomly generated 8-bit, 128-bit, and 256-bit keys to generate virtual facesfrom the LFW testing set. Then we quantitatively evaluate the anonymity of the virtual faces. The experimental results are shown in Table <ref>. It can be seen that regardless of the key-length, the anonymity of the virtual face is around 0.96, with FID around 7. Among them, the 256-bit key has the best anonymity, and the 128-bit key has the best image quality. Therefore, it can be concluded that different the key-length has a relatively small impact on the anonymity and image quality of the virtual faces.
§.§.§ Fault Tolerance of the Keys With Different Length
We further use the above generated virtual faces for identity authentication, where 0 bit, 1bit, 3 bits, 5 bits, and 16 bits of the key are wrong. The feature (i.e., the output of KVFA) similarity between the virtual face and the original face is given in Table <ref>.
We believe that if the feature similarity between the virtual face and the original face is less than 0.7, they are considered to be not matched to result a failure of identity authentication. From Table <ref>, it can be seen that when the length of the key is 8 bits with 1 bit being incorrect, the feature similarity between the original face and the virtual face is 0.708. When the length of the key is 128 bits and 256 bits with 1, 3, and 5 bits being incorrect, the feature similarity between the original face and the virtual face is less than 0.7, which results in a failure of identity authentication.
Therefore, in practical applications, we suggest to set the key length to be over 128 bits to have a small fault tolerance of the incorrect key bits.
§.§ Discussions
§.§.§ Potential Threats
In practical applications, it is difficult for the adversary to obtain the correct key and the virtual face generation/identity authentication model simultaneously. Therefore, the adversary may engage in illegal identity authentication through some other means.
∙ Train a Surrogate Model: The adversary can train a surrogate KVFG model, which can produce similar features for the original face and the virtual face without the use of a key.
∙ Train an Inversion Model: The adversary can train an inversion model, the process of which is similar to the inverse process of HPVFG. Specifically, the adversary inputs the the virtual faces into this model to output the original faces.
By using such a model, the adversary can obtain both the original identity and appearance of the virtual faces.
However, the above two cases are difficult to be launched in practice. Both of them require collecting sufficient original-virtual face pairs corresponding to the same key for training. Such a behavior can be detected by the service provider. Besides, obtaining diverse
data is challenging as the users are only able to produce original-virtual face pairs with their identities.
§.§.§ Limitations of Randomly Selected Keys
In Section 4-B, we discuss the properties of virtual faces in theory, where the key is randomly chosen. One of our goals is to generate a virtual face which is different from the original face visually and statistically (by using ordinary face recognizers) to achieve high anonymity. In practice, however, some edge cases may happen. First of all, the original and virtual faces may be far away in the feature space but have similar visual appearance in the image space, or vise versa. Secondly, the original and the virtual faces may be close in both the feature space and the image space. These will result in a failure of privacy protection.
To deal with such edge cases, one possible solution is to carry out a similarity check between the original and virtual faces. If the virtual face falls into one of the edge cases, we revoke it and use a different key to generate another virtual face. Such a process can be repeatedly done until we generate a virtual face with satisfied properties.
§ CONCLUSION
In this paper, we propose a KFAAR framework for identifying-preserving face anonymization. In this framework, the HPVFG module can generate the virtual faces controlled by a key, while the KVFA module accomplishes the goal of extracting the virtual face's original identity without exposing the visual content of the original face. We propose and incorporate multi-task learning strategies to train for each module. Experimental results show that our method achieves good performance in generating head posture and facial expression preservation virtual faces with high anonymity, synchronism and diversity, which can be also used to obtain the original identity when the correct key is given.
§ ACKNOWLEDGMENT
This work was supported by the National Natural Science Foundation of China under Grant 62072295 and 62072114.
IEEEtranS
|
http://arxiv.org/abs/2409.02176v1 | 20240903180002 | The Magnetic Maze: A System With Tunable Scale Invariance | [
"Tian-Gang Zhou",
"Michael Winer",
"Brian Swingle"
] | hep-th | [
"hep-th",
"cond-mat.dis-nn",
"cond-mat.stat-mech"
] |
Quasi-periodic X-ray eruptions years after a nearby tidal disruption event
M. Nicholl^10000-0002-2555-3192,
D. R. Pasham^20000-0003-1386-7861,
A. Mummery^3,
M. Guolo^40000-0002-5063-0751,
K. Gendreau^5,
G. C. Dewangan^6,
E. C. Ferrara^7,8,50000-0001-7828-7708,
R. Remillard^2,
C. Bonnerot^9,10,
J. Chakraborty^20000-0002-0568-6000,
A. Hajela^11,
V. S. Dhillon^12,13,
A. F. Gillan^10000-0003-4094-9408,
J. Greenwood^1,
M. E. Huber^14,
A. Janiuk^150000-0002-1622-3036,
G. Salvesen^160000-0002-9535-4914,
S. van Velzen^17,
A. Aamer^1,
K. D. Alexander^18,
C. R. Angus^1,
Z. Arzoumanian^5,
K. Auchettl^19,20,
E. Berger^21,
T. de Boer^14,
Y. Cendes^21,22,
K. C. Chambers^14,
T.-W. Chen^230000-0002-1066-6098,
R. Chornock^24,
M. D. Fulton^1,
H. Gao^14,
J. H. Gillanders^25,
S. Gomez^26,
B. P. Gompertz^9,10,
A. C. Fabian^27,
J. Herman^14,
A. Ingram^28,
E. Kara^2,
T. Laskar^29,30,
A. Lawrence^31,
C.-C. Lin^14,
T. B. Lowe^14,
E. A. Magnier^14,
R. Margutti^24,
S. L. McGee^9,10,
P. Minguez^14,
T. Moore^1,
E. Nathan^32,
S. R. Oates^33,
K. C. Patra^24,
P. Ramsden^1,9,100009-0009-2627-2884,
V. Ravi^32,
E. J. Ridley^9,10,
X. Sheng^1,
S. J. Smartt^25,1,
K. W. Smith^1,
S. Srivastav^25,1,
R. Stein^340000-0003-2434-0387,
H. F. Stevance^25,1,
S. G. D. Turner^350000-0002-8641-7231,
R. J. Wainscoat^14,
J. Weston^1,
T. Wevers^26,
D. R. Young^1
^ 1Astrophysics Research Centre, School of Mathematics and Physics, Queens University Belfast,
Belfast BT7 1NN, UK
^ 2Kavli Institute for Astrophysics and Space Research, Massachusetts Institute of Technology,
Cambridge, MA, USA
^ 3Oxford Theoretical Physics, Beecroft Building, Clarendon Laboratory, Parks Road, Oxford,
OX1 3PU, UK
^ 4Department of Physics and Astronomy, Johns Hopkins University, 3400 N. Charles St.,
Baltimore MD 21218, USA
^ 5NASA Goddard Space Flight Center, Code 662, Greenbelt, MD 20771, USA
^ 6Inter-University Centre for Astronomy and Astrophysics (IUCAA), PB No.4, Ganeshkhind,
Pune-411007, India
^ 7Department of Astronomy, University of Maryland, College Park, MD, 20742, USA
^ 8Center for Research and Exploration in Space Science & Technology II (CRESST II),
NASA/GSFC, Greenbelt, MD 20771, USA
^ 9School of Physics and Astronomy, University of Birmingham, Birmingham B15 2TT
^ 10Institute for Gravitational Wave Astronomy, University of Birmingham, Birmingham
B15 2TT
^ 11DARK, Niels Bohr Institute, University of Copenhagen, Jagtvej 155, 2200 Copenhagen,
Denmark
^ 12Department of Physics and Astronomy, University of Sheffield, Sheffield, S3 7RH,
United Kingdom
^ 13 Instituto de Astrofísica de Canarias, E-38205 La Laguna, Tenerife, Spain
^ 14Institute for Astronomy, University of Hawaii, 2680 Woodlawn Drive, Honolulu HI
96822, USA
^ 15Center for Theoretical Physics, Polish Academy of Sciences, Al. Lotnikow 32/46,
02–668, Warsaw, Poland
^ 16Center for Theoretical Astrophysics, Los Alamos National Laboratory, Los Alamos,
NM 87545, USA
^ 17Leiden Observatory, Leiden University,
Postbus 9513, 2300 RA Leiden, The Netherlands
^ 18Department of Astronomy and Steward Observatory, University of Arizona, 933 North
Cherry Avenue, Tucson, AZ 85721-0065, USA
^ 19School of Physics, The University of Melbourne, VIC 3010, Australia
^ 20Department of Astronomy and Astrophysics, University of California, Santa Cruz,
CA 95064, USA
^ 21Center for Astrophysics, Harvard & Smithsonian, 60 Garden Street, Cambridge,
MA 02138-1516, USA
^ 22Department of Physics, University of Oregon, Eugene, OR 97403, USA
^ 23Graduate Institute of Astronomy, National Central University, 300 Jhongda Road,
32001 Jhongli, Taiwan
^ 24Department of Astronomy, University of California, Berkeley, CA 94720-3411, USA
^ 25Astrophysics sub-Department, Department of Physics, University of Oxford, Keble Road,
Oxford, OX1 3RH, UK
^ 26Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218, USA
^ 27Institute of Astronomy, University of Cambridge, Madingley Road, Cambridge
CB3 0HA, UK
^ 28School of Mathematics, Statistics and Physics, Newcastle University, Herschel Building,
Newcastle upon Tyne, NE1 7RU, UK
^ 29Department of Physics & Astronomy, University of Utah, Salt Lake City, UT 84112, USA
^ 30Department of Astrophysics/IMAPP, Radboud University, P.O. Box 9010, 6500 GL,
Nijmegen, The Netherlands
^ 31Institute for Astronomy, University of Edinburgh, Royal Observatory, Blackford Hill,
Edinburgh EH9 3HJ, UK
^ 32Cahill Center for Astronomy and Astrophysics, California Institute of Technology,
Pasadena, CA 91125, USA
^ 33Department of Physics, Lancaster University, Lancaster LA1 4YB, UK
^ 34Division of Physics, Mathematics, and Astronomy, California Institute of Technology,
Pasadena, CA 91125, USA
^ 35Department of Applied Mathematics and Theoretical Physics, University of
Cambridge, Wilberforce Road, Cambridge, CB3 0WA, UK
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
§ INTRODUCTION
The fate of randomness in physical systems has long been an intriguing topic. There are many well-known examples that consider quantum particles moving in random scalar or vector potentials. Such randomness often leads to non-ergodic dynamics, manifesting in behaviors like glassiness and localization. One classic model is the p-spherical model <cit.>, in which a quantum particle moves in high dimensions in a random scalar potential. This is a standard solvable model of glassy physics with quenched disorder. Another related model is Parisi's hypercube model <cit.>, in which a particle jumps among the vertices of a high-dimensional hypercube with random magnetic flux through each face. The hypercube model was motivated by random Josephson junctions.
Still another class of models is motivated by various two-dimensional devices with magnetic impurities that exhibit spatially random vector potentials. These models typically result in electron localization physics <cit.>, although these two-dimensional results are not easily generalized to higher dimensions.
However, certain models can evade non-ergodic behavior and exhibit novel forms of low-energy dynamics. A well-known example is the Sachdev-Ye model, along with its extension, the Sachdev-Ye-Kitaev model <cit.>. Recently, a spin-based version of the SYK model was studied as part of a search for bosonic models with possible SYK-like behavior <cit.>. Although there is ample evidence that these systems can avoid a glass-like phase with replica symmetry breaking at low temperature, we wanted a system that would be manifestly replica symmetric. We then wanted to understand the potential for exotic quantum dynamics that might emerge at low energy.
Hence, in this paper we study a system—the “magnetic maze” (MM)—consisting of a single non-relativistic charged quantum particle moving in many spatial dimensions under the influence of a random magnetic vector potential. The vector potential is time-independent but varies randomly in space with translation-invariant spatial correlations. Focusing on a special case of power-law decaying vector potential correlations, we study the thermodynamics, real-time dynamics, and Lyapunov physics as a function of temperature and coupling. The large spatial dimensionality, N, provides a control parameter for our analysis. Our findings are summarized as follows; see also Figure <ref>.
At high temperature, the system is effectively classical, and we do not expect the physics to depend sensitively on the precise choice of vector potential correlations. We show that its thermodynamics are those of a free particle and that it exhibits chaotic dynamics in which the large-scale motion of the particle is diffusive, velocity auto-correlations decay exponentially in time, and there is a positive Lyapunov exponent[These results are obtained primarily for a specific choice of vector potential correlations discussed below. We consider a more general class of correlations in Appendix <ref>, primarily in the Euclidean context. Within this more general class, we have also studied a few examples of real-time dynamics in which we obtained the same signatures of chaotic dynamics.].
At low temperature, quantum effects manifest and the physical properties depend sensitively on the choice of vector potential correlations. We show that, for a special choice of power-law spatial correlations, the system develops an emergent scaling symmetry at low energy. Whereas the high-temperature thermodynamics is identical to that of a particle with energy, E, related to momentum, p, by the usual non-relativistic dispersion, E ∼ p^2, the low-temperature thermodynamics is instead determined by an effective dispersion, E ∼ p^z with a dynamical exponent z which is tunable via the strength of the magnetic field.
The low temperature dynamics is also chaotic, and for the same special choice of power-law spatial correlations, the emergent scaling symmetry determines the dynamical properties. In particular, there is a single timescale β = k_B T/ħ and a single length scale ℓ_β∼β^1/z which set the dynamical properties. The decay rate of the velocity auto-correlation function and the Lyapunov exponent determined from an out-of-time-order correlator are both of order β^-1, and the diffusivity is of order D∼ℓ_β^2/β∼β^2/z-1. Henceforth, we set k_B = ħ = 1; Appendix <ref> discusses the physical scales in the MM.
For other choices of vector potential correlations, the low-energy physics changes significantly. For example, when the spatial correlations decay more rapidly than the special case with scaling symmetry, the low temperature properties are still chaotic but more closely resemble those of a weakly perturbed free theory, with a relaxation rate and chaos exponent that vanish more rapidly than β^-1 at large β. We investigate this case and the opposite case of uniform magnetic field (corresponding to very long-ranged correlations) in Appendices <ref> (uniform field) and <ref> (more rapid decay of correlations).
These results are obtained via large N path integral methods <cit.>. We formulate the system on a thermal contour to access thermodynamics, on a Schwinger-Keldysh contour <cit.> to access dynamics, and on a contour with two time-folds to access out-of-time-order correlations and Lyapunov physics. Because we use large N as a control parameter, our results correspond to taking the limit N →∞ first before the low temperature limit. It is of course interesting to study 1/N corrections and the finite dimensional fate of the phenomena we find, but we do not pursue that here.
Central to our study is the result, discussed below, that the MM cannot have static correlations between different replicas and thus cannot have an equilibrium glass phase. The dynamics which emerges at low energy is instead that of a scale-invariant chaotic quantum system, one which features a tunable dynamical exponent. Moreover, at the lowest temperature and largest couplings we were able to access, we found a chaos exponent that comes within 15 % of the MSS bound <cit.>. We have not been able to analytically show that the model becomes maximally chaotic at infinite coupling, but we see no evidence of sub-maximal saturation in the chaos exponent. Hence, this system may provide a new example of maximal chaos.
In the remainder of the introduction, we will introduce the model in more detail and give a guide to the structure of the paper. We will also throughout the paper compare and contrast our findings with the properties of the Sachdev-Ye-Kitaev model.
§.§ Setup and Overview
The Magnetic Maze consists of a single particle in N-dimensional space subject to a random vector potential A_i(x). In order to keep our partition functions finite, we will sometimes consider a confining potential V_conf=ϵ/2∑_i x_i^2, but most of our results are most interesting when ϵ is far smaller than any other scale. The Hamiltonian is thus
H=∑_i=1^N [ 1/2m(p_i-A_i(x))^2+ϵ/2x_i^2 ].
The elements of A are picked from a Gaussian distribution with mean zero. The covariance of A falls off as A_i(x)A_j(y)=f(|x-y|^2/N) δ_ij. We are particularly interested in the case where f(x^2/N)∼1/x^2/N for large x, which is the special power-law form highlighted in the introduction. We take the function to be
f(x^2/N)=J^2/ℓ^2+x^2/N,
where ℓ is some short-distance scale and J is a dimensionless measure of the field strength.
Although our focus is (<ref>), we note that the model makes sense for other choices of f. We address some of these in Appendices: the case of a uniform field (Appendix <ref>), which is an integrable system with f(x^2/N)=C-1/2^2x^2/N, and the case of power-law correlations with different power-law exponent at large x (Appendix <ref>). As we show, (<ref>) is the most interesting choice because it leads a host of interesting zero- and low-temperature properties including emergent scaling symmetry and near-maximal chaos. By contrast, the high temperature properties are less sensitive to the form of f.
This paper focuses on studying the classical and quantum behavior of (<ref>) with f given by (<ref>). Our main quantity of interest will be Δ(t_1,t_2)=1/N∑_i𝒯 (x_i(t_1)-x_i(t_2))^2, which can be interpreted as the squared distance traveled between t_1 and t_2, and we will study both the imaginary-time and real-time behavior of this quantity. The physics is conveniently discussed in terms of T/J and J as sketched in Figure <ref>. Both quantities are dimensionless in units where ℓ, m, and ħ are set to one (see Appendix <ref>). T≫ J is the classical regime and T ≪ J is the quantum regime. Moreover, J ≪ 1 is weak coupling and J ≫ 1 is strong coupling.
At zero temperature, T=0, we find the imaginary time-dependence Δ(τ) ∼τ^α with α determined by J through
2tanπα/2πα/(α+1)=J^-2
which takes us from the free value α=1 at J^2=0 to the confined α=0 as J^2 goes to infinity. In terms of the previously introduced dynamical exponent, we have z= 2/α. This identification arises because Δ scales like two powers of length and length scales like time to the 1/z power.
Moreover, we find that the system has reparameterization symmetry like the SYK model, and power-law decay in the velocity correlations. There is also a rescaling symmetry in space, completely separate from the rescaling in time included in the reparameterization mode. This is in contrast with the SYK model, where the fermion modes stretch in a way controlled by the reparameterization mode.
Turning on a non-zero but low temperature, T ≪ J, the thermodynamics is controlled by the modified dispersion with non-trivial dynamical exponent, leading to a heat capacity equal to N/z = N α/2. For the real time dynamics, we find that the system exhibits properties characteristic of a quantum chaotic system. The velocity auto-correlation now decays exponentially in time with a time-scale set by β = 1/T. Out-of-time-order correlators also feature an initial period of exponential growth with chaos exponent κ∼ 1/β. As J increases, κ gets closer to the chaos bound, κ^max = 2π/β. Both the relaxation time and the Lyapunov time depend on J, so we have in fact an infinite family of ensembles of non-relativistic scale-invariant quantum systems.
At higher temperatures, T ≫ J, the system becomes effectively classical and the thermodynamics reduces to that of a free particle, corresponding to z=2 and α=1. The real time dynamics is still chaotic and we compute the diffusivity, the velocity relaxation time, and the largest Lyapunov exponent.
Our detailed analysis of the Magnetic Maze proceeds as follows. In Section <ref> we use the large-N limit to derive a mean-field action in terms of collective displacement variables. We then derive a saddle-point equation. Throughout Section <ref> we assume a general correlation function f for the As.
In Section <ref> we specialize to the situation in equation (<ref>), and work at low temperature. We derive the power-law behavior of the correlation function, as well as equation (<ref>). We study the low temperature thermodynamics, showing that the low-temperature heat capacity is α N/2, which is consistent with the picture of power-law dispersion with exponent E∝ p^z with z= 2/α. Finally, we show that at low temperatures and long times, the system has a reparameterization symmetry and a rescaling symmetry.
Section <ref> covers the real time dynamics of the system. We derive Schwinger-Dyson mean-field equations, and numerically solve them. We show that a particle in the Magnetic Maze exhibits diffusive transport, with the squared displacement after time t growing in proportion to t. More precisely, the displacement grows at 2Dt, where the diffusion constant D scales as β^α-1 at low temperature. In the same temperature regime, the velocity auto-correlation decays exponentially with a time scale set by β. We also note that the system has a somewhat unusual structure on the Schwinger-Keldysh (SK) contour, as we discuss in detail.
Section <ref> covers the Lyapunov physics. We setup a ladder calculation of an out-of-time-order correlator (OTOC) and carry out a detailed numerical analysis. We find that the low temperature chaos exponent is of order the bound, 2π/β, and comes close to it as J increases.
Section <ref> discusses directions for further work and open questions. Following it are several technical appendices.
Appendix <ref> provides a crash course on the physics of magnetic fields in high dimensions, analyzes the uniform field problem (which is integrable and exactly solvable at finite N), and explains how the dimensionless parameters T/J and J arise.
The other appendices are as follows. Appendix <ref> reviews the exact solution for J=0. Appendix <ref> discusses other choices of f. Appendix <ref> presents details of derivation of the real-time equations of motion. Finally, Appendix <ref> discusses the numerical method used to solve the equations of motion.
§ ACTION AND EQUATIONS OF MOTION
We are going to perform a mean-field calculation in which the collective variable
G(_1,_2)=1/N𝒫∑ x_i(_1)x_i(_2)
will play a central role. Here, 𝒫 denotes an appropriate path- or time-ordering as specified below. Our goal is to find a mean-field expression for the free energy of the system, expressed as a path integral over G.
We start with the partition function
Z=∫exp(∫_0^β-m/2 v^2-iA· v-ϵ/2 x^2d)𝒟 x.
We then introduce the Lagrange multiplier Σ enforcing (<ref>),
Z=∫exp(∫_0^β-m/2 v^2-iA· v-ϵ/2 x^2d-1/2∫_0^βΣ(_1,_2){ N G(_1,_2) -x(_1)· x(_2)}d_1 d_2)𝒟 x 𝒟 G 𝒟Σ.
Next we perform the average over As to get
Z̅ =∫exp(-S)𝒟 x 𝒟 G 𝒟Σ
S =∫_0^βm/2ẋ^2+ϵ/2 x^2d+1/2∫_0^βΣ(_1,_2){NG(_1,_2) -x(_1)· x(_2)}+f((x(_1)-x(_2))^2)v(_1)· v(_2)d_1 d_2.
Note this leads to an “annealed” free energy ∝lnZ, whereas the more physical “quenched” free energy is ∝ln Z. However, as we will see shortly, these free energies always coincide at large N in the Magnetic Maze.
Using the delta function setting G to (<ref>), we can replace (x(τ_1) - x(τ_2))^2/N = Δ(τ_1,τ_2) where
Δ(_1,_2)=G(_1,_1)+G(_2,_2)-2G(_1,_2).
Similarly, we can replace v(τ_1) · v(τ_2) with ∂_τ_1∂_τ_2 G(τ_1,τ_2). The resulting functional integral is now quadratic in the xs, and performing the x integral gives
S/N=1/2log(-m∂_^2+ϵ-Σ)+1/2∫Σ G+f(Δ(_1,_1))∂__1∂__2G(_1,_2)d_1d_2.
This is the sought-for mean-field action which can be used to compute the free energy.
The equations of motion can now be obtained as functional derivatives of (<ref>) with respect to G and Σ. Using the functional derivative
δΔ(_1,_2)/δ G(_1',_2') = δ(_1 - _1')δ(_1 - _2') + δ(_2 - _1')δ(_2 - _2') - 2 δ(_1 - _1')δ(_2 - _2')
and the identity -2∂__1∂__2 G = ∂__1∂__2Δ, we get the following equations:
(-m∂_^2+ϵ-Σ)*G=δ(_1-_2)
Σ(_1,_2)=∂__1∂__2(f(Δ(_1,_2))+f'(Δ)∂__1∂__2Δ-δ(_1-_2)∫_0^βf'(Δ(_1,'))∂__1∂_'Δd'.
It is the last term in the equation for Σ that is most unfamiliar. It is a consequence of the fact that the interaction term depends not just on G(_1,_2), but also on G(_1,_1) and G(_2,_2). This special dependence of the interaction on G only through its derivatives and through Δ means that adding an overall constant to G (corresponding to translating the entire path by a random vector) doesn't affect the magnetic part of the action, a consequence of the statistical translational symmetry of our magnetic field. Another consequence is that Σ_ω=0 is exactly zero.
It is the first equation in (<ref>) that illustrates why we included ϵ. Without it, the zero-frequency mode of -m∂_^2+ϵ-Σ would be zero, and the equation wouldn't be satisfiable. The weak potential from ϵ saves us from a diverging position by limiting the motion of the particle to a sphere of squared radius N/ϵβ. This also cuts out divergences in the partition function and entropy due to the infinite volume of space, instead replacing them with a contribution to the entropy
Entropy from volume of box=-N/2log(βϵ).
It is also useful to write out equations (<ref>) assuming time-translation invariance. In this case, cutting out G entirely, we have
Δ_ω≠ 0=-2/mω^2+ϵ-Σ_ω≠0
Σ()=-2∂_^2 Δ f'(Δ)-(∂_Δ)^2 f”(Δ)+δ()∫_0^βf'(Δ('))∂_'^2Δ(') d'.
Equations (<ref>) illustrate why the quenched and annealed free energies are equal in the Magnetic Maze. The quenched free energy can be obtained from a replica trick involving n replicas of the system in the limit n → 0. The mean-field variables would be generalized to G_ab, Δ_ab, and Σ_ab, where a,b=1,⋯,n are replica indices. Replica symmetry breaking requires correlations between the replicas, but the inter-replica correlations are typically time-independent owing to the separate time-translation symmetry in each replica. The generalization of (<ref>) to the multi-replica case would relate inter-replica terms in the self-energy, Σ_a ≠ b, to time derivatives of the inter-replica terms in the displacement, Δ_a ≠ b. Since the inter-replica correlators are time-independent, we can conclude from the generalization of the second equation of (<ref>) that Σ_a ≠ b = 0. Self-consistency from the multi-replica generalization of the first equation of (<ref>) then implies that Δ_a ≠ b=0. Hence, multi-replica saddle points are always replica diagonal in the Magnetic Maze.
§ LOW-TEMPERATURE LIMIT WITH F∼J^2/Δ
In this section, we study the thermodynamics of the Magnetic Maze with f(Δ) ∼J^2/Δ and reveal its fascinating low-temperature physics. In the IR limit, where Δ is large and ∂_ is small, we can simplify our action. Equation (<ref>) becomes
S/N=1/2log(Σ)-1/4∫( ΣΔ +J^2∂__1∂__2Δ(_1,_2)/Δ(_1,_1))) d_1d_2.
This IR action behaves in a simple way under two important classes of transformations: time reparameterizations and scaling transformations.
For reparameterizations of time, →σ(), the transformations are
G(_1,_2) → G(σ(_1),σ(_2))
Δ(_1,_2) →Δ(σ(_1),σ(_2))
Σ(_1,_2) →Σ(σ(_1),σ(_2)) |σ'(_1) σ'(_2)|.
It is straightforward to verify that this transformation preserves the last two terms in (<ref>). The lnΣ term is not manifestly symmetric, but by writing the transformation as
Σ(_1,_2) →∫ d_1' d_2' M(_1,_1') Σ(_1',_2') M(_2',_2)
for a “matrix” M(_1,_1') = δ(_1' - σ(_1)) | σ'(_1)|, we can argue that the determinant is invariant by arguing that the determinant of M is unity.[For example, if σ is a time shift, σ()=+a, then M = δ(_1' - _1-a). This is effectively a permutation, which has determinant one. For a general infinitestisimal transformation, σ = + ϵ(), one can directly show that the determinant is one by using the single-valuedness of ϵ().]
For scalings, the time is unchanged but we send
G →λ G
Δ →λΔ
Σ →λ^-1Σ.
We can also extend this transformation to the underlying x() variables as x() →√(λ) x(). The last two terms in (<ref>) are again manifestly invariant, but the first term transforms by a field-independent shift that depends on λ. As a consequence, the equations of motion in the IR limit are invariant under the scaling transformation but the action is not. This situation is slightly unusual but it can be understood as follows. Going back to (<ref>) before the x variables were integrated out, we see that the action is fully invariant under the scaling transformation provided we drop the corresponding terms (the x kinetic term, the confining potential, and the short-distance part of f), but the 𝒟x measure is not invariant (while the combined 𝒟G 𝒟Σ measure is). The scaling symmetry is also explicitly broken by the aforementioned terms we neglected in the IR limit.
Both of these transformations are approximate symmetries of the action in the IR limit, (<ref>), with the caveat about scalings discussed just above. However, we will see in the next subsection that the ground state spontaneously breaks these symmetries to a special subgroup.
§.§ Power-Law Behavior of Ground State Correlations
We now show that a power-law ansatz solves the equations of motion arising from (<ref>) in the zero temperature limit. We consider a translation invariant ansatz in which Δ(_1,_2) = Δ(_1-_2) and similarly for Σ.
Suppose the IR behavior of Δ(τ) is
Δ(τ)= k|τ|^α.
Then the IR value of Σ is given by
Σ(τ)=-J^2 2α/k |τ|^-α-2.
This gives
Δ_ω=-2sinπα/2Γ(α+1)k |ω|^-α-1
Σ_ω=2sinπα/2Γ(-α-1)J^2 2α/k |ω|^α+1,
as well as
Δ_ωΣ_ω=-8sin^2 πα/2Γ(-α-1)Γ(+α+1)α J^2=-2.
We can use the reflection formula,
Γ(z)Γ(-z)=-π/zsin(π z),
to get
2tanπα/2πα/(α+1)=J^-2.
It turns out that for any J we pick, there is a unique 0<α<1 which solves equation (<ref>). We see in figure <ref> that equation (<ref>) aligns perfectly with numerics for reasonably small J^2 at a certain large fixed β. For larger J^2, one would need correspondingly larger β to see this agreement.
From the explicit form Δ∼τ^α, we learn that time reparameterizations are broken down to translations. Moreover, the scaling symmetry is also broken. However, the combination of a time reparameterization that rescales time and a scaling transformation leaves the form of Δ invariant. In this sense, the symmetries discussed in the previous subsection are spontaneously broken in the ground state to a subgroup of translations and combined rescalings. Of course, these symmetries are also explicitly broken by the non-IR terms in the full action.
§.§ Imaginary Time Numerics
We saw in the previous subsection that at zero temperature, Δ() is proportional to ^α with a specific α given by the solution to the transcendental equation (<ref>). By analogy with the SYK model, one might expect the finite-temperature analog to be Δ()∝(sinπ/β)^α. In fact, we see from the graphs in Figure <ref> that for small J^2 the solution looks like
Δ()∝((β-))^α.
When J^2=0 and α=1, this ansatz becomes exact (see appendix <ref>). At large J^2, the magnetic length becomes much smaller than ℓ. This maps onto the constant field limit discussed in Appendix <ref>, where the large-β solution is found to be proportional to logβsinπ/β for large β. This is not the small-α limit of equation (<ref>), suggesting that the function takes the more general form
Δ()=(β F_α(/β))^α
where F_α is some α dependent function satisfying F_α(x)=F_α(1-x) (by the KMS condition <cit.>) and F_α(x)∼ x for small x. For α∼ 1 F_α(x) seems to take the form x(1-x), while for α∼ 0 we have F_α(x)=sinπ x.
§.§ The Free Energy at Low Temperature
To cap off our discussion of thermodynamics, we will calculate the free energy of our system at low temperatures.
As already discussed in equation (<ref>), there is a contribution to the entropy of -N/2log(βϵ), and a corresponding contribution to the free energy of N/2β^-1log(βϵ), essentially the log-volume of the "box" the particle is in.
The most physically meaningful quantities are the ones that don't depend on the box volume: energy and heat capacity. Even more meaningful is kinetic energy. The potential energy is an artifact of our mathematically convenient choice to confine our system with a quadratic potential; other shapes such as quartic or infinite well would give different potential energies, but the kinetic energy would be invariant so long as the box is large.
One can see by examination that Hamiltonian (<ref>) is always positive. Furthermore, at high temperatures the As become negligible compared to the ps, and our particle becomes essentially classical. A classical particle in a magnetic field will have kinetic energy Nβ^-1/2 by the equipartition theorem.
At low temperatures, there is an equally natural interpretation of the thermodynamics. The imaginary-time distance function Δ()∼^α corresponds to a particle with dispersion E∼|p|^2/α. Working out the thermodynamics of such a particle, we see that the entropy goes as S=αN/2logβ+const, and the heat capacity is precisely αN/2. We see in Figure <ref> that the intensive heat capacity is indeed α/2 at low temperatures.
§ REAL-TIME DYNAMICS
In this section, we study the real-time dynamics of the magnetic maze. We firstly set up the Schwinger-Keldysh formalism and derive the saddle point equations. Secondly, we solve these equations for specific vector potential distribution Eq. (<ref>). The system shows a universal long-time limit in which the particle motion is diffusive.
§.§ Schwinger-Keldysh Formulism
§.§.§ Derivation of the Saddle Point Equations
To study the two-point correlation function in real-time, it is convenient to represent the time-evolved density matrix using the Keldysh contour [Kamenev, Altland]
Z = (U_-∞,∞ U_∞,-∞ρ_0)=∫𝒟 x exp(∑_j ∑_a (-1)^a∫_-∞^∞ t ( m/2ẋ_a,j^2 + A(x_a,j)·ẋ_a,j -ϵ/2 x_a,j^2 ) ),
where a = +, - indicates the forward and backward contours, respectively, with the phase (-1)^a representing the direction of unitary evolution. By convention, we assume a = 0, 1 corresponds to the contour labels +, -. Here, ρ_0 = e^-β H is the thermal density matrix. For the real-time dynamics, the corresponding Green's function is defined as follows:
G_ab(t_1,t_2)=1/N𝒫∑_i x_a,i(t_1)x_b,i(t_2).
We introduce the Lagrange multiplier Σ_ab to enforce Eq. (<ref>)
Z =∫𝒟 x exp(∑_j ∑_a (-1)^a∫_-∞^∞ t ( m/2ẋ_a,j^2 + A(x_a,j)·ẋ_a,j -ϵ/2 x_a,j^2 )
- ∑_ab (-1)^a+b1/2∫ t_1 t_2 [ NΣ_ab(t_1,t_2)G_ab(t_1,t_2) + ∑_j Σ_ab(t_1,t_2)x_a,j(t_1)· x_b,j(t_2) ] )
Similar to the imaginary-time derivation, we average over the A fields to obtain
Z̅= ∫exp( S)𝒟 x
S/N = ∑_a (-1)^a∫ t_1 t_2 x_a(t_1) ( -m/2∂_t_1∂_t_2δ(t_1-t_2)δ_ab -ϵ/2δ(t_1-t_2)δ_ab - ∑_b(-1)^bΣ_ab(t_1,t_2)/2) x_b(t_2)
- ∑_ab (-1)^a+b∫ t_1 t_2 ( 1/2Σ_ab(t_1,t_2)G_ab(t_1,t_2) + 1/2 f((x_a(t_1)-x_b(t_2))^2/N) ẋ_a(t_1)·ẋ_b(t_2)/N )
We can perform the quadratic integral over the x_a, which leads to the effective action.
S/N= - 1/2log( (-1)^a(- m ∂_t_1∂_t_2-ϵ)δ(t_1-t_2)δ_ab - (-1)^a+bΣ_ab)
- ∑_ab (-1)^a+b∫ t_1 t_2 ( 1/2Σ_ab(t_1,t_2)G_ab(t_1,t_2) + 1/2 f(Δ_ab(t_1,t_2) ) ∂_t_1∂_t_2 G_ab(t_1, t_2) )
where we defined
Δ_ab(t_1,t_2)=(G_aa(t_1,t_1)+G_bb(t_2,t_2)-G_ab(t_1,t_2) - G_ba(t_2,t_1))
to be the squared distance between the positions at t_1 and t_2. In Eq. (<ref>), all fields are defined on the original Keldysh contour. It is convenient to obtain the relation between the self-energy and Green's function by using the first set of saddle point equations, ∂ S/∂ G_ab = 0, which reads
Σ_ab(t_1,t_2) =-∂_t_1∂_t_2f(Δ_ab(t_1,t_2)) - f'(Δ_ab(t_1,t_2))∂_t_1∂_t_2Δ_ab(t_1,t_2) + δ(t_12)δ_ab∫t_3(
f'(Δ_ab(t_1,t_3))∂_t_1∂_t_3Δ_ab(t_1,t_3) -1/2 f'(Δ_+-(t_1,t_3))∂_t_1∂_t_3Δ_+-(t_1,t_3) - 1/2 f'(Δ_-+(t_1,t_3))∂_t_1∂_t_3Δ_-+(t_1,t_3) )
where G_ab≡[ G_T G_<; G_> G_T̃; ]_ab and the same definition applies for Σ_ab. Here, T and T̃ denote time-ordering and anti-time-ordering, respectively, while <,> label the lesser and greater Green's functions, as the two time variables are on different contours.
The Schwinger-Dyson equations are obtained by computing ∂ S/∂Σ_ab, with a,b either in the basis +,- or cl,q. In the basis x_cl, x_q after the Keldysh rotation, the Schwinger-Dyson equations read
[ 0 -δ(t_12) (m∂_t_2^2+ϵ)-Σ_A; -δ(t_12) (m∂_t_2^2+ϵ)-Σ_R -Σ_K; ]∘[ G_K G_R; G_A 0; ] = 𝕀
Here, the ∘ symbol denotes convolution: A(t_1, t_2) ∘ B(t_2, t_3) ≡∫ t_2 A(t_1, t_2) B(t_2, t_3). It is also useful to express Eq. (<ref>) under the assumption of time-translation invariance and perform a Fourier transformation. The detailed derivation is provided in Appendix <ref>, and the self-energy is summarized in Table <ref>. We obtain the retarded component of the Schwinger-Dyson equation as follows:
(m(ω+ η)^2 - ϵ-(Σ̅_R(ω)-Σ̅_R(ω=0))) G_R(ω)=1
Σ̅_R(t)= Θ(t)(Σ_>(t)-Σ_<(t))
The central result from the derivation is that Σ_R(ω) = Σ̅_R(ω) - Σ̅_R(ω=0). This exactly satisfies the condition Σ_R(ω=0)=0, as expected from the imaginary time calculation. The retarded Green's function generally requires a small cutoff η to retain the retarded causal structure. In principle, η should be taken as an infinitesimal number, η = 0^+, but a finite η is necessary in numerics to help with the convergence of the system, thus introducing artificial dissipation. In this case, we take ϵ to be zero while keeping the retarded cutoff η. Consequently, we focus only on the real-time Green's function where the time variable t ≪ 1/η to eliminate unphysical artifacts. This sets a lower bound for the low-temperature region, requiring the inverse temperature β≪ 1/η.
To self-consistently solve the equation, we also require the fluctuation-dissipation theorem to determine the relation between G_R and G_≷,
ρ_G(ω) = -1/2π (G_R(ω) - G_R(ω)^†)
G_<(ω) = -2π n_B(ω) ρ(ω)
G_>(ω) = +2π n_B(-ω) ρ(ω),
where n_B(ω) = 1/(e^βω - 1) is the bosonic distribution function and β is the inverse temperature.
Note that Green's function G is not well-defined without a trapping potential ϵ. An example of this is the analytical solution for free particles at J=0, as shown in Appendix <ref>. For the real-time Green's functions G_≷(t) and G_K(t), as well as the imaginary-time-ordered Green's function G(τ), all of them diverge in the limit ϵ→ 0. However, Δ_≷(t), Δ_K(t), or Δ(τ), which represent the squared displacement, are well-defined. Therefore, it is convenient to relate everything to the correlation function Δ.
From the definition of Δ_ab in Eq. (<ref>), we can assume time-translation invariance and transform to the frequency domain. Using the symmetry Δ_ab(t) = Δ_ba(-t), we obtain:
Δ_ab(ω) = (const.) δ(ω) -2 G_ab(ω).
Hence, up to a constant shift at zero frequency, Δ_ab and G_ab(ω) are related by -2. The constant shift can be ignored since we always enforce the condition Δ_ab(t=0)=0 by taking Δ_ab(ω = 0) = - ∑_ω≠ 0Δ_ab(ω) in the numerics.
Consequently, we can define ρ_Δ = - 1/2π(Δ_R(ω) - Δ_A(ω)) for ω≠ 0, and the corresponding relation between the retarded and advanced components is given by
Δ_R^*(ω) ≡ -Δ_A(ω), Δ_R^*(t) ≡ -Δ_A(-t).
Here, the * symbol denotes taking the complex conjugate, and the interchange of time variables has been carried out explicitly. The spectral function ρ_Δ also satisfies the corresponding fluctuation-dissipation theorem for the correlator Δ.
In summary, we have the full set of equations for Δ in Eq. (<ref>)-(<ref>). These can, in principle, be self-consistently solved to yield all real-time Green's functions by providing an initial guess for Δ_R(ω).
[0.98]
ρ_Δ(ω) = -1/2π (Δ_R(ω) - Δ_A(ω)), ω≠ 0
Δ_<(ω) = -2π n_B(ω) ρ_Δ(ω), ω≠ 0
Δ_>(ω) = +2π n_B(-ω) ρ_Δ(ω), ω≠ 0
Σ_≷(t) = ∂_t^2 f(Δ_≷(t)) + f'(Δ_≷(t))∂_t^2 Δ_≷(t)
Σ̅_R(t) = Θ(t) (Σ_>(t)- Σ_<(t))
(m(ω+ η)^2 - ϵ-(Σ̅_R(ω)-Σ̅_R(ω=0) ) ) Δ^R(ω)=-2 , ω≠ 0
§.§.§ Numerical Methods to Obtain the Solution
Here we explain how the real-time equations can be solved numerically. Since the value of Δ_≷(ω = 0) is undetermined in the self-consistent process, using the simple mixing method Δ_R^(n)=(1-ζ) Δ_R^(n-1)+ζΔ_R,new^(n-1) is numerically unstable. Here, n is the iteration step and Δ_R,new^(n-1) is calculated through the Schwinger-Dyson equation Eq. (<ref>) based on the solution Δ_R^(n-1). Instead, we choose to use a gradient descent protocol to achieve self-consistency.
If we only consider the retarded component, we will find that the gradient being zero is the same as the Schwinger-Dyson equation Eq. (<ref>). Leaving the details to the supplementary material, we choose the update program to be:
Δ(ω)_R^(n)) = Δ(ω)_R^(n-1) + ζ/4( (-2)(Δ_R^(n-1)))^-1(ω) - (G_0,R^-1(ω) - Σ_R^(n-1)(ω)) ) (Δ_R^(n-1)(ω) )^2
Here, the second term /4( (-2)Δ_R^-1(ω) - (G_0,R^-1(ω) - Σ_R(ω)) ) = ∂ S[Δ]/∂Δ_R is the gradient of the action with respect to the variable Δ_R. We also introduce an extra Δ_R(ω) as a numerical trick, which stabilizes the iteration process. This is because Δ_R^-1, Σ_R, and G_0,R^-1 all approach zero as ω→ 0, and Δ_R^2 enhances the numerical difference around ω≈ 0. We have checked that convergence is reached when ||Δ_R^(n)(ω)-Δ_R^(n-1)(ω)||_2 < 10^-6 ||Δ_R^(n)(ω)||_2.
After obtaining the numerical solution, we can check the real-time dynamics by directly comparing them with the imaginary time dynamics as a benchmark. The correlator can be related to the spectral function by the spectral representation
Δ_R(ω) = ∫ω'ρ_Δ(ω')/ω' - (ω + η).
In real-time dynamics, we can directly sample the imaginary time Green's function through the analytical continuation
Δ(ω_n) = ∫ω'ρ_Δ(ω')/ω' - ω_n,
with the Fourier transformation
Δ(τ) = ∑_ω_n=2π n/βΔ(ω_n) e^ω_n τ.
§.§ Long-Time Limit of The Dynamics
Here we describe the structure of the real-time dynamics in equilibrium at inverse temperature β. We demonstrate two typical numerical results in Fig. <ref>, with (a1,a2,c1,c2) in the high temperature regime and (b1,b2,d1,d2) in the low temperature regime. From this data, it is apparent that the equilibrium real-time dynamics exhibits a rich structure. Here we show that, for all parameters we studied, the dynamics eventually enters into a late-time “hydrodynamic” regime in which the large scale motion is diffusive, Δ_<(t) → 2D t, with D the diffusivity. Moreover, the system relaxes to this diffusive behavior exponentially with a time constant t_rel. The time-scale for the onset of the late-time hydrodynamic regime is also of order t_rel.
Before proceeding, we note that the dynamics at early time is quite complex. For example, the Euclidean scaling form, Eq. (<ref>), has an imprint on the early time dynamics at very low temperature (see (b1,b2) of Fig. <ref>). Nevertheless, we focus on the long-time dynamics as it is the most universal and it is closely related to existence of a non-vanishing chaos exponent (discussed in Sec. <ref>).
The main goal of the rest of this subsection is to explain our results for D and t_rel at very high and very low temperature. At low temperature, we find t_rel∼β and D ∼β^2/z-1. These low temperature forms are determined by the emergent scaling symmetry and the associated dynamical exponent z, but we have not been able to give an analytic derivation of the coefficients. At high temperature, we find t_rel∼β^-1/2 J^-2 and D ∼β^-3/2 J^-2. These we will derive analytically below.
The values for D and t_rel just quoted, and the remainder of the discussion in this subsection, all consider f ∼ J^2/Δ. For other choices of f, we expect the system will still exhibit exponential relaxation to diffusive dynamics at long time (apart from the special case of uniform magnetic field or zero coupling). It is also interesting to consider the possibility that the dynamics could localize in some fashion, but we have not seen evidence for this in our numerics.
§.§.§ Theory of Low-Temperature Parameters
Here we discuss the low-temperature parameters, relying heavily on the emergent scaling symmetry at low temperature. In Fig. <ref>, we show the numerics of real-time dynamics in the low-temperature region. We focus on two quantities to describe the late-time behavior.
* The diffusion constant D is obtained by fitting the slope of the late-time correlator Δ_<(t) ∼ 2Dt + const. The fitting region is chosen such that ^2/ t^2Δ_<(t) is small enough, ensuring that the system has entered the diffusion region.
* The relaxation times are obtained by fitting the slope of log| ^2/ t^2Δ_<(t) | ∼ t (or log| ^2/ t^2Δ_<(t) | ∼ t), as illustrated in the insets in Fig. <ref>(c1, c2) and (d1, d2). We can exactly prove that real and imaginary part fitting leads to the same relaxation time in the high-temperature case, as indicated by Eq. (<ref>). However, in low-temperature case, the numerical extraction of for the real and imaginary parts is slightly different, which we attribute to numerical limitations. We find that both methods give a relaxation time that scales linearly with β. Therefore, we extract from the slope fitting of log| ^2/ t^2Δ_<(t) | ∼ t.
Remarkably, our model exhibits power-law scaling with D ∼β^2/z-1, as shown in Fig. <ref>(a). Here we use the exponent 2/z-1 to label the slope on the D, β log-log plot, where z is the dynamical exponent. This scaling arises from dimensional analysis of the diffusion behavior Δ_<(t) ∼ 2Dt + ⋯, where [Δ_<(t)] ∼ [⟨ x^2 ⟩] ∼ [t]^2/z, and therefore [D] ∼ [t]^2/z-1∼ [β]^2/z-1. The brackets, [], indicate the unit of the physical quantity.
We observe that the dynamical exponent is continuously tunable and depends on J. In Fig. <ref>(b), the fitted exponent 2/z from the diffusion constant generally shows quantitative agreement with α_Theory from the IR analysis in Eq. (<ref>). When J^2 ≤ 0.1, the error increases. This artificial numerical effect, also encountered in the high-temperature region, occurs because smaller J leads to larger t_, causing the system to potentially not enter the diffusion region during the fitting range t ∈ [0, 1/η], which results in a larger numerical error. In conclusion, in low-temperature regions, the dynamical exponent is precisely determined by the IR theory α_Theory with z = 2/α_Theory.
We are also interested in the relaxation time to the diffusion region in the low-temperature system. In the numerical data of Fig. <ref>(c), is obtained by linearly fitting log| ^2/ t^2Δ_<(t)| in the time region t∈ [1.4β, 1.55β], with an R-square value R^2>0.999. The fitting regions ensure that the fast decaying mode has already vanished and only one late-time decaying mode exists. Furthermore, we verify the linear behavior of ∼β by checking the R-square value R^2>0.9998 for all different J^2 in Fig. <ref>(c).
We can extract the coefficient /β (and an analogous quantity from fitting the imaginary part) for different J^2. The relaxation time is an O(1) value as a function of J^2. Both /β and the analogous quantity extracted from the imaginary part show similar behavior regarding J^2, while they differ by a constant value. With a larger temporal fitting window, we expect the time constants of the exponential decay of both the real and imaginary parts to converge.
We can unify the behavior of these two quantities, and D, at low temperatures within one framework. In the low temperature regime, the largest characteristic time scale is β. Therefore, we expect the relaxation time to be proportional to β since it has the unit of time. By virtue of the dynamical exponent relating space and time scaling, the diffusion constant has the unit of [time]^2/z-1, and therefore we expect that D ∼β^α - 1 using the relation 2/z=α.
§.§.§ Theory of High-Temperature Parameters
Here we discuss the high-temperature parameters, relying on some useful approximations to give an analytical treatment. At high temperature, the approximation for Δ_<(t) ≈Δ_>(t) ≈t^2/m β works at a relatively large time scale t∈ [β, ], where the imaginary part of Δ_≷(t) can be ignored due to the condition | t/m |≪t^2/m β when t≫β. We obtain Σ_K by referring to the self-energy formula in Table <ref>,
Σ_K(t) = Σ_<(t) + Σ_>(t) = 8 β J^2 m (t^2-l^2 β m)/(l^2β m+t^2)^3.
After Fourier transformation, the corresponding Σ_K(ω) is
Σ_K(ω) = -2 i π J^2 e^-l ω√(β m)(l ω(β l m ω +√(β m))+1)/l^3 √(β m).
In the small ω region, it can be expanded as the following equation
Σ_K(ω) = -2 i π J^2/l^3 √(β m)-i πβ J^2 m ω ^2/l √(β m) + O(ω^3).
We aim to obtain the relaxation time, so we consider the time scale when | ω | ^2≪(1/l^2β m), or effectively t^2≫ l^2β m. We can approximate Σ_K as a constant by dropping the quadratic term in ω,
Σ_K(ω) ≈ - 2 i π J^2 /l^3√(β m).
By the fluctuation-dissipation theorem in Table <ref>, we obtain the retarded self-energy term,
Σ_R(ω) = 1/2 Σ_K(ω) tanhβω/2≈ - βω/2π J^2 /l^3√(β m),
where we approximate tanh(βω/2) as βω/2 in the high temperature limit. In principle, the real part of Σ_R should be determined from the Kramers-Kronig relation ReΣ_R(ω) = 1/π𝒫∫ImΣ_R(ω')/ω' - ωω', but the analytic continuation can be read off as
Σ_R(ω) ≈ - ωβ/2π J^2 /l^3√(β m).
Notice that Σ_R(ω)∝ω also automatically satisfies the condition Σ_R(ω=0)=0 shown in Table <ref>.
With all the approximations above, the retarded component of the Schwinger-Dyson equation in Eq. (<ref>) can be further simplified to
(m(ω+ η)^2 - ϵ) Δ_R(ω) + ωπ J^2 √(β)/2l^3√(m)Δ_R(ω) =-2 , ω≠ 0 .
In the time domain, we set ϵ to zero and Eq. (<ref>) becomes
-m ^2 Δ_R(t)/ t^2 - π J^2 √(β)/2l^3√(m)Δ_R(t)/ t = -2δ(t).
We solve the ordinary differential equation Eq. (<ref>) with the boundary condition Δ_R(0^+)=0, Δ_R'(0^+)=2/m, Δ_R(0^-)=0, Δ_R'(0^-)=0. Here Δ_R(0^+)=0, Δ_R'(0^+)=2/m can be directly obtained from the free theory J=0. The solution for the retarded Green's function reads
Δ_R(t) = Θ (t) 2/m(1-e^-|t|/),
where the relaxation time is
= 2 m^3/2 l^3/πβ^1/2 J^2.
Similarly, the advanced component can be obtained as
Δ_A(t) = Θ (-t) 2/m(1-e^-|t|/).
Next, Δ_K(t) can be evaluated using the FDT,
tanhβω/2Δ_K(ω) ≈βω/2Δ_K(ω) = (Δ_R(ω) - Δ_A(ω) ) .
Using the high-temperature approximation, we can further replace ω as ∂_t and obtain Δ_K(t) from a first-order ordinary differential equation,
Δ_K(t)/ t=2/β(Δ_R(t)-Δ_A(t)).
Consequently, we obtain Δ_K(t) and then the real-part Δ_≷, which leads to the final result
Δ_K(t) = 4(e^-|t|/-1 + |t|/)^2 /(mβ)
Δ_≷(t) = 2(e^-|t|/-1 + |t|/)^2 /(mβ) ±(t) /m(1-e^-|t|/).
From this result, the system displays diffusive behavior at late time with Δ_<(t) ≈ 2Dt + C, with constant C=-2^2/(mβ). The diffusion constant D can be simply related to the by
D = /mβ = 2 m^2 l^3/πβ^3/2 J^2 .
To benchmark our theory, we directly perform numerical calculations based on the Schwinger-Dyson equation. In Fig. <ref>, we provide concrete evidence for our high-temperature solution.
Firstly, we can check that the high-temperature theory, Eq. (<ref>), perfectly matches the numerics, as indicated by the overlap between the dashed lines and solid lines in Fig. <ref>. Additionally, in the derivation of the self-energy, Eq. (<ref>), we assume the real part of Δ_<(t) is significantly larger than the imaginary part by using the free theory solution, and the numerics show that this approximation remains true for the exact dynamics.
Secondly, after verifying the functional form of Δ_<(t), we focus on the characteristic relaxation time , or equivalently, the diffusion constant D in the high-temperature region. In Fig. <ref>(b,c,d), the diffusion constant (or ) is obtained by directly fitting the formula, Eq. (<ref>), in the reliable time region t ∈ [0, 1/η]. We choose different cutoffs, η = [0.007, 0.01, 0.015], in the numerics and extrapolate the data to η = 0.
In Fig. <ref>(b), we quantitatively verify the numerically obtained diffusion constant against the theoretically predicted value in Eq. (<ref>). In the small β→ 0 limit, the ratio D_Fit/D_Theory≈ 1 within an error of 5%. We note that the error increases as J approaches zero. This is an artificial numerical effect because smaller J leads to larger t_, and the system might not enter the diffusion region during the fitting region t ∈ [0, 1/η], which causes larger numerical error.
In Fig. <ref>(c) and Fig. <ref>(d), we illustrate that the diffusion constant follows a power-law dependence on β and J by observing the linearity in the log-log plot. Most importantly, we extract the scaling law by comparing with D ∼β^-3/2 and D ∼ J^-2, which again confirms the correctness of our prediction in the high-temperature region as given by Eq. (<ref>).
§ LYAPUNOV PHYSICS
In this section, we setup the calculation of an appropriate out-of-time-order correlator (OTOC) <cit.> in order to extract a quantum Lyapunov exponent. We then report the results of a numerical study of the derived equations. In particular, at low temperature we find a chaos exponent which is proportional to 1/β and grows with increasing J. We do not observe any satuaration of this growth and the chaos exponent comes close to 90% of maximal at the largest Js we studied.
We study the 4-point function of xs by performing a perturbation around the saddle point of the full effective action Eq. (<ref>)
G(τ_1,τ_2) = G̃(τ_1,τ_2) + δ G(τ_1,τ_2)
Σ(τ_1,τ_2) = Σ̃(τ_1,τ_2) + δΣ(τ_1,τ_2).
Then the effective action becomes
I_eff[δ G,δΣ]/N = 1/4∫τ_1τ_2τ_3τ_4δΣ(τ_1, τ_2) ( G̃(τ_1, τ_3) G̃(τ_2, τ_4) ) δΣ(τ_3, τ_4) + 1/2∫τ_1τ_2
[δ G(τ_1,τ_2)δΣ(τ_1,τ_2)
+ (δ G(τ_1,τ_2) )^2 f”(Δ̃)/2 (2(δ(τ_1-τ_2)-1))^2 ∂_τ_1∂_τ_2G̃(τ_1,τ_2)
+ (δ G(τ_1,τ_2) ) f'(Δ̃) (2(δ(τ_1-τ_2)-1)) ∂_τ_1∂_τ_2(δ G(τ_1,τ_2) ) ]
After integrating out δΣ, the effective action reads
I_eff[δ G]/N = ∫τ_1τ_2τ_3τ_4δ G(τ_1,τ_2) (1/4K^-1 - 1/2(I∘Λ) ) δ G(τ_3,τ_4),
where the kernel K is
K(τ_1,τ_2,τ_3,τ_4) = - G̃(τ_1, τ_3) G̃(τ_2, τ_4) ,
the interaction 2-point vertex induced by the magnetic field is
Λ(τ_1,τ_2) = [ f”(Δ̃)/2 (2(δ(τ_1-τ_2)-1))^2 ∂_τ_1∂_τ_2G̃(τ_1,τ_2) + f'(Δ̃) (2(δ(τ_1-τ_2)-1)) (∂_τ_1∂_τ_2) ],
and the identity 4-point vertex for the boson is
I(τ_1,τ_2,τ_3,τ_4) = 1/2( δ(τ_1-τ_3)δ(τ_2-τ_4) + δ(τ_1-τ_4)δ(τ_2-τ_3)).
We can check that ∫τ_1τ_2τ_3τ_4δ G(τ_1,τ_2) ( 1/2(I∘Λ) ) δ G(τ_3,τ_4) is the same magnetic interaction term in Eq. (<ref>).
The 4-point function of x can be regarded as the 2-point function of the perturbed Green's function in the effective action Eq. (<ref>). We introduce
F(τ_1,τ_2,τ_3,τ_4) = ⟨δ G(τ_1,τ_2) δ G(τ_3,τ_4) ⟩
= 1/2(I - 2K ∘ (I∘Λ)))^-1 K∘I1/N
Then we can directly prove that the 4-point correlation function is generated by a series of ladder diagrams. To simplify the above equation we notice that
(K ∘I)(τ_1,τ_2,τ_5,τ_6) = ∫τ_3τ_4K(τ_1,τ_2,τ_3,τ_4) I(τ_3,τ_4,τ_5,τ_6)
= - 1/2G̃(τ_1, τ_5) G̃(τ_2, τ_6) - 1/2G̃(τ_1, τ_6) G̃(τ_2, τ_5)
where we can further define the decoupled 4-point function
F_0(τ_1,τ_2,τ_3,τ_4)=-1/N(G̃(τ_1,τ_4)G̃(τ_2,τ_3)+G̃(τ_1,τ_3)G̃(τ_2,τ_4) ).
Combining all the results in Eq. (<ref>), (<ref>), (<ref>) we get the final result
F = (I - 2 K ∘ (I∘Λ)))^-1 F_0
= ∑_n=0(2 K ∘ (I∘Λ)) )^n F_0
Therefore 2 K ∘ (I∘Λ)) is the effective ladder diagram in our model, which reads
K_lad(τ_1,τ_2,τ_3,τ_4) ≡(2 K ∘ (I∘Λ)(τ_1,τ_2,τ_3,τ_4))
= -2G̃(τ_1, τ_3) G̃(τ_2, τ_4)
[(f”(Δ̃)/2 (2(δ(τ_3-τ_4)-1))^2 ∂_τ_3∂_τ_4G̃(τ_3,τ_4)
+ (f'(Δ̃) (2(δ(τ_3-τ_4)-1)) (∂_τ_3∂_τ_4) ] .
Finally, we consider the OTOC as a special kind of 4-point correlator on the double Keldysh contour with the parameterization.
OTOC(t_1,t_2,t_3,t_4) = 1/N F(τ_1,τ_2,τ_3,τ_4),
with τ_1=β/2+ t_1,τ_2= t_2,τ_3=β/4+t_3,τ_4=-β/4+t_4
It is known that any 4-point function satisfies the Bethe-Salpeter equation even for the deformed contour <cit.>,
F(τ_1,τ_2,τ_3,τ_4) = ∫_𝒞τ_5τ_6 K_lad(τ_1,τ_2,τ_5,τ_6)F(τ_5,τ_6,τ_3,τ_4),
where we ignore the inhomogeneous term F_0, which means the ladder with no rungs, since it's much smaller than the OTOC contribution. In Eq. (<ref>), 𝒞 refers to the deformed double Keldysh contour.
Following the same technique used in the analysis of SYK, the imaginary variable τ_5 can be represented as t_5 + β/2 and t_5, with each single t_5 having two points in the backward and forward contour. These points are denoted by the blue dots 1, 2, 3, and 4, respectively. For τ_1= t + β/2, the contributions of blue points 3 and 4 cancel out, but the contribution from points 1 and 2 is non-vanishing, leading to the contribution Θ(t_1-t_5)( G_<(t_1,t_5)- G_>(t_1,t_5))=- G_R(t_1, t_5) <cit.>. Similarly, we find that τ_6 only contributes from blue points 3 and 4, leading to - G_R(t_2, t_6). This automatically leads to the simplification that τ_5 and τ_6 differ by an imaginary time β/2, which means the delta function in Eq. (<ref>) has no contribution. Additionally, F(τ_5,τ_6,τ_3,τ_4) will be invariant when considering points 1 and 2, since the causal relation between the blue points and τ_3 and τ_4 is fixed with certain imaginary time. After considering these structures, the real-time kernel reads.
K_lad,R(t_1,t_2,t_5,t_6) = -2G̃_R(t_1,t_3) G̃_R(t_2, t_5)
[f”(Δ̃_W(t_5,t_6)) ∂_t_5∂_t_6G̃_W(t_5,t_6) + 2 f'(Δ̃_W(t_5,t_6)) (∂_t_5∂_t_6) ]
where the Wightman Green's functions are defined by
G_W(t) ≡⟨ x(t-β/2) x(0) ⟩, Δ_W(t) ≡⟨(x(t-β/2)- x(0) )^2 ⟩
and their relation to the spectral function can be found in the Appendix <ref>.
Finally, we take an ansatz for OTOC
OTOC(t_1, t_2, 0, 0) = e^ϰ (t_1+t_2)/2ℱ(t_1-t_2),
and plug it into the Bethe-Salpeter equation
OTOC(t_1,t_2,0,0) = ∫_ℝt_5t_6 K_lad,R(t_1,t_2,t_5,t_6)OTOC(t_5,t_6,0,0).
These manipulations yield at last an eigenvalue problem for the OTOC,
ℱ(ω) = -2∫ω'/2π|G_R(ω+ϰ/2)|^2 ( Λ_W,1(ω-ω') + Λ_W,2(ω-ω') ( ϰ^2/4+(ω')^2)) ℱ(ω'),
where the Wightman rung diagrams are
Λ_W,1(ω) = ∫t f”(Δ̃_W(t)) (-∂_t^2 Δ̃_W(t)) e^ω t
Λ_W,2(ω) = 2 ∫t f'(Δ̃_W(t)) e^ω t.
Additionally, the analytical continuation to G_R(ω+ϰ/2) can be represented using spectral function, which reads
G_R(ω + ϰ/2) =1/-2∫ω'ρ_Δ(ω')/ω' - (ω+ ϰ/2).
We can diagonalize the RHS of Eq. (<ref>) to find the largest eigenvalue. As a sanity check of the numerics, we find that all the eigenvalues are real. We first discuss the numerical results for the high-temperature Lyapunov physics in Fig. <ref>. We find the Lyapunov exponent ϰ/(2π/β) exhibits the same power law for all the J^2 in Fig. <ref>(a). Via linear fitting on a log-log plot, we extract the power-law behavior ϰ/(2π/β) ∼β^0.8. In the high-temperature region, the Lyapunov exponent is far from the chaos bound, i.e., ϰ/(2π/β) ≪ 1. We also provide the eigenvalue h_i and the corresponding eigenfunction in the frequency domain ℱ_i(ω) by solving Eq. (<ref>) for a typical high-temperature setting, where β=0.06, J^2=0.6. As shown in Fig. <ref>(b), the largest eigenfunction corresponds to a positive and symmetric function centered at ω=0. Other eigenfunctions are negative in some regions, and the cancellation leads to smaller eigenvalues for those modes.
We show the numerical results for the low-temperature Lyapunov physics in Fig. <ref>.
In Fig. <ref>(a), the Lyapunov exponent ϰ is seen to be set by 1/β, albeit with some mild temperature dependence. The smaller the J, the weaker the temperature dependence of ϰ/(2π/β) in this range of β. The results are all consistent with the chaos bound, ϰ≤ 2π/β. In Fig. <ref>(b), the Lyapunov exponent increases monotonically when J^2 increases. Due to numerical limitations, we are only able to probe the parameter region β < 800, J^2 ∼ O(1) with controlled numerical error. Hence, it is still open whether the system approaches maximal chaotic behavior at the largest Js and βs. But we observe no saturation with increasing J. Additionally, we also plot the eigenfunction ℱ_i(ω) associated with the eigenvalue in the low-temperature region. We find that the eigenfunctions ℱ_1, ⋯, ℱ_5 are qualitatively the same in the high-temperature and low-temperature regions. In the intermediate large ω region, ℱ_1(ω) shows exponential decaying behavior. However, the high-temperature eigenfunction in the small ω region will significantly deviate from the exponential decay, as illustrated in Fig. <ref>(b). But this phenomenon is not significant in Fig. <ref>(c).
§ DISCUSSION
In this paper, we presented a systematic study of the Magnetic Maze (MM), including its Euclidean dynamics, real-time dynamics, and Lyapunov physics. The structure of the large-N path integral framework, specifically the presence of certain time derivatives in the equations of motion, guarantees the absence of a glassy equilibrium state in this model. We formulated all the governing equations for general vector potential correlations, but we focused our analysis on the case where those correlations have a special power-law falloff, f(Δ) ∼ J^2/Δ. We found a rather rich theory in the low-energy regime, one with tunable dynamical exponent, which gives a new kind of controllable scale invariant quantum system.
We conclude by highlighting some open questions and directions for further work:
Low Energy EFT There is a lot we know about the Magnetic Maze at low temperature. We know the low-energy equations of motion have two emergent symmetries: reparameterization and rescaling. However, it is unclear what signatures these symmetries have in Lorentzian time, or what form any Goldstone bosons for these symmetries might take.
Maximal chaos? We did not observe any signature that the chaos exponent saturated with increasing J, therefore it is possible that the Magnetic Maze approaches maximal chaos in the limit of large J and low temperature. It would be very interesting to better understand this regime, possibly by making use of the Low Energy EFT.
Gravity dual? It is interesting to ask if the low-energy physics of the Magnetic Maze could have a gravity dual. This question is particularly interesting if it can be shown that the Magnetic Maze is maximally chaotic at low temperature in the strong coupling limit. The dynamical exponent z is a key feature to match, along with the pattern of symmetries.
Generalizations There are interesting generalizations of the Magnetic Maze, including the possibility of adding random electric fields (which could be equivalent to adding random magnetic fields to the p-spherical model), the possibility of considering externally imposed time-dependent fields (analogous to a random circuit model), and various many-body generalizations. On the latter point, we note that the variable dynamical exponent allows one to tune the particle interactions to be relevant or marginal even in high dimensions. We can also generalize to other observables, such as the spectral form factor which probes statistical properties of the energy spectrum.
Finite N It is also interesting to explore finite N corrections to our analysis. A systematic 1/N expansion would be useful as well as an analysis at small finite N, even N=2. The problem of random magnetic fields is well-studied in 2d, but the special power-law correlations we studied may not have been considered before. Finally, we note that cosmic magnetic fields can have a random character <cit.>, and our work might provide an alternative perspective on such systems via a 1/N expansion.
We thank Xiao Chen, Ping Gao, Martin Sasieta
and Pengfei Zhang for useful discussions.
T.-G. Z. acknowledges support from the Tsinghua Visiting Doctoral Students
Foundation. MW acknowledges support from the Joint Quantum Institute. BGS acknowledges support from the U.S. Department of Energy through DE-SC0009986.
§ UNIFORM FIELD PROBLEM AND PHYSICAL SCALES
In this appendix, we review the basic physical scales in the magnetic maze. Throughout the paper, we absorb the charge into the vector potential and set k_B= 1, but we keep ħ explicit in this appendix. It is useful to approach this analysis by first discussing the case of a uniform magnetic field. This clarifies the physical meaning of the dimensionless parameters in the more general problem.
In N dimensions, a random spatially uniform magnetic field is an antisymmetric tensor B_ij which we take to be an N × N antisymmetric matrix with matrix elements given by Gaussian random variables with zero mean and variance equal to ^2/N. Antisymmetric matrices don't have real eigenvalues, but they do have eigenplanes in which the matrix takes the form [ 0 λ; - λ 0 ] for some field strength λ. It can further be shown that the field strengths, which vary from plane to plane, have a semi-circle distribution at large N,
ρ(λ) = N/2π^2√(4 ^2 - λ^2),
for 0 < λ < 2. When N is odd, there is one special direction that does not experience any magnetic field.
§.§ Review of the 2d Uniform Field Problem
The N-dimensional problem thus breaks up into N/2 copies of the 2-dimensional problem. In the 2d classical problem with a uniform field λ, a particle with speed v undergoes uniform circular motion with angular frequency ω_c = λ/m and radius r_c = v/ω_c. In the corresponding 2d quantum problem, we also have the magnetic length, ℓ_B^2 ∼ħ/λ, and the characteristic energy of the Landau levels, ħω_c ∼ħ^2/m ℓ_B^2.
For a particle with a thermal velocity, v ∼√(T/m), the ratio of the classical orbit size to the magnetic length is
r_c/ℓ_B∼√(T/ħω_c).
Hence, at high temperature the problem is effectively classical and at low temperature quantum effects can appear, as signaled by the size of the classical orbit approaching the magnetic length.
We are primarily interested in the squared displacement. At the classical level, the trajectory is
x(t) = r_c cos( ω_c t ), y(t) = r_c sin( ω_c t) .
The squared displacement, averaged over a thermal distribution for the velocity at temperature T = 1/β, is
⟨ (x(t)-x(0))^2 + (y(t)-y(0))^2 ⟩ = ⟨ v^2 ⟩4/ω_c^2sin^2 ω_c t/2 = 8/ m βω_c^2sin^2 ω_c t/2.
Quantum effects modify this result when ħω_c ≳ T.
Working in Landau gauge, the full quantum result is obtained by expressing the position operator in terms of the Landau level ladder operator, a, and momentum in the y-direction, p_y,
x = √(ħ/2 m ω_c) (a + a^†) + p_y/m ω_c.
The corresponding Heisenberg operator is
x(t) = √(ħ/2 m ω_c) (a e^- i ω_c t + a^† e^i ω_c t) + p_y/m ω_c.
From these, we get the thermally averaged correlator,
⟨ x(t)x(0) ⟩ = ħ/2 m ω_c( [n_B +1] e^-iω_c t + n_B e^i ω_c t) + ∑_k_yħ^2 k_y^2/m^2 ω_c^2,
where we note that the final term does not depend on time.
The average square displacement is
⟨ (x(t) - x(0))^2 ⟩ = 2 ⟨ x(0)^2 ⟩ - ⟨ x(t) x(0) ⟩ - ⟨ x(0) x(t) ⟩ = 2 ħ/ m ω_c (2 n_B + 1)sin^2 ω_c t /2.
In the limit that T ≫ħω_c, we have n_B ∼ (βħω_c)^-1 and (<ref>) approaches half of (<ref>); the other half comes from including the average of (y(t)-y(0))^2.
§.§ Uniform Field in N Dimensions
Returning now the N-dimensional problem, the key new effect is that we have a distribution of the field strengths λ. The N-dimensional averaged square displacement is thus
Δ(t) = 1/N∫^2 _0 dλρ(λ) 4 ħ/λ(2 n_B(λ/m) + 1) sin^2 λ t/2m .
At zero temperature, with n_B=0, we can obtain a slow logarithmic increase with t.
At non-zero temperature, the dominant effect comes from the classical contribution, which can be estimated as follows. We adjust the upper limit to take into account the freezing out of the classical motion for planes with a high cyclotron frequency. Then at large t, all but the smallest frequencies contribute an erratic amount proportional to sin^2 λ t/2m. Replacing this by its value averaged over one cycle, we estimate
Δ(t) ∼1/N∫^m T/ħ_2m/t dλρ(λ) 4 m/βλ^2∼4 m/βπ∫^m T/ħ_2m/t d λ1/λ^2 = 2/βπ t + ⋯.
The first ∼ is replacing sin^2 λ t/2m with 1/2 and putting the lower cutoff where the erratic contributions to Δ begin. The second ∼ is approximating the density of λ by its central value, which is appropriate since the low field planes are the ones which contribute significantly to Δ. The modes with λ < 2m/t give a contribution of the same order. The final result shows that the particle moves diffusively at large t (cutoff only when t ∼ m/λ_min∼ N). It is also possible to perform the integral exactly using Bessel functions.
§.§ Imaginary Time Dynamics Within Mean Field Theory
Strictly speaking, the mean field theory developed in section <ref> doesn't apply to the case of constant field. This is because it is impossible to choose a gauge where the A-field has statistical translation symmetry for constant fields. This, however, is a matter of fine print. One can get arbitrarily close to constant field by choose f(Δ) to be a positive function whose slope is a constant f'(Δ)=-^2 in the range of Δs we care about. In this limit, equation
(<ref>) for the imaginary-time dynamics becomes quadratic in the Fourier transform G_ω, for all ω≠ 0. We have the imaginary-time equation
(mω^2 +^2ω^2 G_ω)G_ω=1.
It can be solved exactly, giving us
G_ω=2/mω^2+√(m^2ω^4+4^2ω^2).
If we are interested in large βs and small ωs, we can simplify further to
G_ω=1/ |ω|
Δ() ∼2/π^2 logβsinπ/β+𝒪(1).
§.§ Spatially Varying Field
Now consider the problem of a spatially varying field. With our conventions, the vector potential has units of momentum and its correlations are translation invariant and given by
A_i(x) A_j(0) = δ_ij f(x^2/N).
We will focus on the case in which
f(u) = ħ^2 J^2/u + ℓ^2,
although other choices of f can be similarly analyzed. J is a dimensionless coupling which dials the strength of the local magnetic and ℓ is a characteristic length scale which sets the correlation length of the random vector potential.
The magnetic field is obtained from A_i(x) as
B_ij = ∂_i A_j - ∂_j A_i,
and its correlations can be straightforwardly obtained from those of f. At large N, the leading contribution to the magnetic field correlation is
∂_i A_j(x) ∂_k A_l(0) = - δ_ikδ_j l/N f'(x^2/N) + ⋯,
which in our case is
∂_i A_j(x) ∂_k A_l(0) = δ_ikδ_j l/Nħ^2 J^2/(x^2/N+ℓ^2)^2 + ⋯.
To analyze the dimensionless parameters in the problem, it is convenient to choose ℓ as a unit of length. Combining this length with the mass m and ħ we get a characteristic energy
E_0 = ħ^2/mℓ^2.
The characteristic scale of the magnetic field is
∼ħ J/ℓ^2.
As discussed just above, the magnetic field is a tensor with a characteristic semi-circle spectrum of field strengths (which now vary in space), and sets the overall scale of the spectrum. The corresponding magnetic length is
ℓ_B/ℓ∼1/√(J)
and the magnetic energy is
ħω_c/E_0∼ J .
Putting this all together, the physics can be classified in terms of two dimensionless ratios,
r_c/ℓ_B∼√(T/ħω_c)∼√(1/JT/E_0)
and
ℓ_B/ℓ∼1/√(J).
In units where ℓ, m, and ħ are all set to one, we see that the problem is classical for T ≫ J and quantum for T ≪ J. In addition to setting the quantum-classical boundary, J also plays the role of the coupling, with larger J leading to larger deviations from the physics in no magnetic field.
From the analysis in this appendix, we can form some intuition as to how the physics depends on T and J. When T ≫ J, all the local eigen-planes of B_ij are effectively classical, whereas when T ≪ J, only the low field eigen-planes remain classical. It is this smaller number of “active” planes which should be responsible for the low-temperature dynamics. Moreover, according to the uniform field analysis, (<ref>), the average square displacement after a thermal time, t ∼β, is 1/J. Hence, the larger J is, the slower the particle is moving. In the main text, we indeed find that larger J leads to slower motion in many senses, including by increasing the dynamical exponent at low temperature.
§ ANALYTICAL SOLUTION AT J=0
To simplify the formulas, we set m=1. We also include the confining potential as a convenient regulator and only set it to zero at the end of the calculation.
§.§ Imaginary Time
At limit J→ 0, the imaginary time Green's function in freqeuncy domain is
G(ω_n) = 1/ω_n^2 + ω_0^2,
where ω_0^2 ≡ϵ.
We can perform the Matsubara frequency summation to obtain
G(τ) = 1/β∑_n G(ω_n)e^-ω_n τ
= e^-τω_0((e^2 τω_0+1) n_B(ω_0)+1)/2 ω_0.
From Δ(τ) =2G(0) - 2G(τ) we get
Δ(τ) = 2 e^-βω_0csch(βω_0/2) sinh(τω_0/2) sinh(1/2ω_0 (β -τ ))/ω_0
= τ (β -τ )/β+ τ (τ -β )ω_0 + O(ω_0^2),
at which point it is safe to take ω_0 → 0.
§.§ Real Time
We start with the basic formulas
G_R(ω) = 1/(ω + 0^+)^2 - ω_0^2,
ρ_G(z) = 1/2ω_0δ(z-ω_0) + 1/-2ω_0δ(z+ω_0),
G_>(t) = -/2ω_0 n_B(ω_0) ( e^-ω_0 t e^βω_0 - e^ω_0 t),
G_<(t) = -/2ω_0 n_B(ω_0)( e^-ω_0 t - e^ω_0 t e^βω_0).
The zero-temperature version is actually given by
G_R(t) = ∫ω/2π1/2 ω _0( 1/-ω _0+(ω +i η )-1/ω _0+(ω +i η ))
= Θ(t) e^i t ω _0-e^-i t ω _0/2 ω _0
Notice that G_R(t) is actually well defined in the ω_0 → 0 limit, where G_R(t)=Θ(t) t. But for G_>,G_<,G_K,G_W, the limit is not well defined.
After subtracting the appropriate zero point, Δ(t) becomes
Δ_>(t) = 2 csch(βω_0/2) sin(t ω_0/2) sin(1/2ω_0 (t+i β ))/ω_0
We find that Δ_>(t) > 0. If we take ϵ→ 0, we obtain the same formula via analytic continuation,
Δ_>(t) = t (t+i β )/β
Δ_<(t) = t (t-i β )/β.
§.§ Wightman Green's Function
The Wightman Green's function is defined by
G_W(t) ≡⟨ x(t-β/2) x(0) ⟩ .
Therefore we can derive it using non-equilibrium Green's function
G_W(t) = G_>(t-β/2)
= ∫ω/2π 2π n_B(-ω) ρ_G(ω) e^-βω/2 e^-ω t
⇒ G_W(ω) = -π1/sinh(βω/2)ρ_G(ω)
= 1/2sinh(βω/2)(G_R(ω) - G_R^†(ω) ) .
For Δ, we have Δ_W(ω)= -2 G_W(ω) for ω≠ 0. The ω=0 zero point needs special treatment due to the condition
Δ_>(t-β/2,0) = 2(G_>(t-β/2,t-β/2)+G_>(0,0) - 2 G_>(t-β/2,0)).
With translation invariance, the correlator is
Δ_>(t-β/2) = 2(G_>(0) - G_>(t-β/2)),
which leads to the Wightman Green's function at t=0
Δ_W(0) = 2(G_>(0) - G_>(-β/2)) = (-i)Δ_τ(β/2) ,
Δ_τ is imaginary time Green's function. This is explicitly true for the J=0 case, where from the analytic continuation we get
Δ_W(t) = Δ_>(t-β/2) = t^2 + β^2/4/β.
§ OTHER CHOICES OF VECTOR POTENTIAL CORRELATIONS
It is an interesting question to ask what's special about the 1/Δ form of the vector potential covariance. To answer this question, we can generalize the vector potential correlations to
⟨ A_i(x_1) A_j(x_2)⟩=δ_ij J^2 ℓ^2(ξ-1) / ( ℓ^2+|x_1-x_2|^2/N)^ξ,
with arbitrary parameter ξ >0. We can examine the behavior by performing imaginary time calculations for different ξ and trying to compare to the scaling ansatz in Eq. (<ref>). We try to extract an effective value of α and coefficient c_α from Δ(τ) using the ansatz
Δ(τ)=c_α( τ(β-τ)/β)^α
In Fig. <ref>, we demonstrate the result for different ξ. Only ξ=1 gives a result consistent with the IR value α_Theory obeying Eq. <ref>. When ξ>1, we expect the system will flow to the free theory, which corresponds to α=1 and a J dependent coefficient c_α. This is consistent with the data in Fig. <ref>.
§ SCHWINGER-KELDYSH PATH INTEGRAL
§.§ Definition of Keldysh Rotation
To establish the Schwinger-Keldysh formalism, we introduce the double contour with x_+, x_- fields. In order to construct other useful Green's function, we perform a Keldysh rotation on the original basis and introduce the new basis x_cl, x_q:
[ x_cl; x_q; ] = M
[ x_+; x_-; ]
Here the transformation matrix is M= 1/√(2)[ 1 1; 1 -1; ].
Correspondingly, the new Green's function or self-energy is defined on such basis, which reads
[ G_K G_R; G_A 0; ]≡
-<[ x_cl; x_q; ][ x_cl x_q; ]>
= M [ G_++ G_+-; G_-+ G_–; ] M^T
[ 0 Σ_A; Σ_R Σ_K; ]≡ (M^T)^-1[ (-1)^0+0Σ_++ (-1)^0+1Σ_+-; (-1)^1+0Σ_-+ (-1)^1+1Σ_–; ] (M)^-1.
The rationale behind defining Σ_R, Σ_A, and Σ_K is to maintain the structure of the Schwinger-Dyson equation (G_0^-1 - Σ) ∘ G = 𝕀 invariant after the Keldysh rotation.
It's noticed that the statistical translational symmetry of the magnetic field still appears in our real-time formalism, even though δ(t_12) term doesn't appear in the Σ_≷. This is because the analog of the Schwinger-Dyson equation in real-time involves the retarded (or advanced) component of G and Σ after the Keldysh rotation. The retarded component Σ_R should, in principle, be represented as Σ_+-, Σ_+-, Σ_++, Σ_++, where the delta function term could enter the self-energy through Σ_++ and Σ_–.
Similar to the imaginary time formalism, we should also expect Σ_R(ω=0)=0 in real-time dynamics. This is consistent with the fluctuation dissipation theorem 2Σ_R(ω) = tanh(βω/2)Σ_K(ω), which is always true in the Keldysh formalism.
§.§ Calculation of the Rotated Self-Energy
Since the effective action Eq. (<ref>) involves the correlator Δ_ab, it leads to the unusual delta function term in the diagonal component of the self-energy in Eq. (<ref>). We need to prove that it still leads to the correct structure of self-energy. The structure of self-energy after and before Kelydsh rotation is
[ 0 Σ_A; Σ_R Σ_K; ]=1/2[ Σ _++-Σ _+--Σ _-++Σ _– Σ _+++Σ _+--Σ _-+-Σ _–; Σ _++-Σ _+-+Σ _-+-Σ _– Σ _+++Σ _+-+Σ _-++Σ _–; ]
To begin with, we show all central results in Tab. <ref>
In the following subsection, we will derive the first four results below. The lesser and greater self-energy are by definition. The fluctuation-dissipation theorem for the self-energy can be a consistent check for our special Schwinger-Keldysh structure, which requires Σ_R(ω=0) - Σ_A(ω=0)=0.
§.§.§ Constraint on Σ
Before we discuss each component, we can obtain some useful relations. First, we recall the definition of time-ordered and anti-time-ordered correlator
Δ_++(t_1,t_2) = Θ(t_12)Δ_-+(t_1,t_2) + Θ(-t_12)Δ_+-(t_1,t_2)
Δ_–(t_1,t_2) = Θ(t_12)Δ_+-(t_1,t_2) + Θ(-t_12)Δ_-+(t_1,t_2).
Then we calculate the summation of time-ordered and anti-time-ordered self-energy
Σ_+++Σ_–
= -∂_t_1∂_t_2f(Δ_++(t_1,t_2)) - f'(Δ_++(t_1,t_2))∂_t_1∂_t_2Δ_++(t_1,t_2) -∂_t_1∂_t_2f(Δ_–(t_1,t_2)) - f'(Δ_–(t_1,t_2))∂_t_1∂_t_2Δ_–(t_1,t_2)
+ δ(t_12) ∫t_3( f'(Δ_++(t_1,t_3))∂_t_1∂_t_3Δ_++(t_1,t_3) + f'(Δ_–(t_1,t_3))∂_t_1∂_t_3Δ_–(t_1,t_3)
- f'(Δ_+-(t_1,t_3))∂_t_1∂_t_3Δ_+-(t_1,t_3) - f'(Δ_-+(t_1,t_3))∂_t_1∂_t_3Δ_-+(t_1,t_3) )
After we expand the time-ordered and anti-time-ordered correlator using Eq. (<ref>), we find the delta function term is exactly canceled, and the regular self-energy terms can be simplified as Σ_+-+Σ_-+. Therefore it immediately leads to the first result:
1/2(Σ_+++Σ_–-Σ_+--Σ_-+)= 0.
§.§.§ Keldysh Self-Energy Σ_K
After using the constrain relation Eq. (<ref>), Keldysh Green's function reads
Σ_K≡1/2(Σ_+++Σ_–+Σ_+-+Σ_-+) = Σ_+-+Σ_-+.
§.§.§ Retarded Self-Energy Σ_R
To deal with the retarded self-energy component, we further assume time translation invariance and simplify the formula Σ_++-Σ_–, which reads
Σ_++-Σ_–
= ∂_t_1^2f(Δ_++(t_12)) + f'(Δ_++(t_12))∂_t_1^2Δ_++(t_12) -∂_t_1^2f(Δ_–(t_12)) - f'(Δ_–(t_12))∂_t_1^2Δ_–(t_12)
+ δ(t_12) ∫t_3( - f'(Δ_++(t_13))∂_t_3^2Δ_++(t_13) + f'(Δ_–(t_13))∂_t_3^2Δ_–(t_13) )
= (Σ̅_++ - Σ̅_–) + δΣ_1 + δΣ_2.
The result can be separated into a regular part Σ̅_++ - Σ̅_– and the δΣ_1,δΣ_2 part corresponding to the delta function term.
Firstly we consider δΣ_2, which is contributed by the explicit delta function term in the self-energy Σ_++,Σ_–, where
δΣ_2(t_12) =δ(t_12)∫t_3(- f'(Δ_++(t_13))(∂_t_3^2)Δ_++(t_13) + f'(Δ_–(t_13))(∂_t_3^2)Δ_–(t_13) )
It can be further simplified using the following tricks
δΣ_2(t_12)
=δ(t_12)∫t_3( - f'(Δ_++(-t_3))∂_t_3^2Δ_++(-t_3) + f'(Δ_–(-t_3))∂_t_3^2Δ_–(-t_3) )
=δ(t_12)∫t_3( -Θ(-t_3) f'(Δ_-+(-t_3))∂_t_3^2Δ_-+(-t_3) - Θ(t_3) f'(Δ_+-(-t_3))∂_t_3^2Δ_+-(-t_3)
+Θ(-t_3) f'(Δ_+-(-t_3))∂_t_3^2Δ_+-(-t_3) + Θ(t_3) f'(Δ_-+(-t_3))∂_t_3^2Δ_-+(-t_3) )
=-2 δ(t_12)∫t_3Θ(t_3)( f'(Δ_-+(t_3))∂_t_3^2Δ_-+(t_3) - f'(Δ_+-(t_3))∂_t_3^2Δ_+-(t_3) )
In the first equation we absorb variable t_1 into the t_3 due to the integration is carried out from -∞→∞. In the second equation, we expand the time-order and anti-time-ordered correlator. In the third equation, we want to deal with the minus sign on the t_3 variable. For the terms with Θ(-t_3), we can change the integration variable t_3 → - t_3. Besides, for the terms with Θ(t_3), we can remove the minus sign in Δ by using the condition Δ_ab(t)=Δ_ba(-t), which origins from the fact that displacement operator x is a Hermitian operator.
Secondly, the δΣ_1 denotes the delta function origins from the kinks in the time-ordered or anti-time-ordered correlator Δ_++,Δ_–. This contribution only appears in the combination of ∂_t_1^2f(Δ_++(t_12))-∂_t_1^2f(Δ_–(t_12)).
By expanding Δ_++,Δ_– using Eq. (<ref>), we can obtain the kink contribution
∂_t_1^2 f(Δ_++(t_12)) - ∂_t_1^2 f(Δ_–(t_12))
= ∂_t_1^2 ( Θ(t_12) f(Δ_-+(t_12))+Θ(-t_12) f(Δ_+-(t_12)) ) - ∂_t_1^2 ( Θ(t_12) f(Δ_+-(t_12))+Θ(-t_12) f(Δ_-+(t_12)) )
= 2δ(t_12)( ∂_t_12 f(Δ_-+(t_12))- ∂_t_12 f(Δ_+-(t_12)) ) + (t_12) (∂_t_1^2 f(Δ_-+(t_12)) - ∂_t_1^2 f(Δ_+-(t_12)) )
From the second line to the third line, we have considered both the kink contribution using ∂_t Θ(t) = δ(t), and the regular part with a sign function. Therefore, the δΣ_1(t_12) can be defined as simplified as
δΣ_1(t_12) = 2δ(t_12)( ∂_t_12 f(Δ_-+(t_12))- ∂_t_12 f(Δ_+-(t_12)) )
= -2δ(t_12) ∫t_3(Θ(t_3)∂_t_3^2 f(Δ_-+(t_3)) - Θ(t_3)∂_t_3^2 f(Δ_+-(t_3)) )
Finally, the regular part is
Σ̅_++-Σ̅_– = (t_12) (∂_t_1^2 f(Δ_-+(t_12)) - ∂_t_1^2 f(Δ_+-(t_12)) + f'(Δ_-+(t_12))∂_t_1^2Δ_-+(t_12) - f'(Δ_+-(t_12))∂_t_1^2Δ_+-(t_12))
=(t_12) (Σ_-+(t_12) -Σ_+-(t_12)),
which immediately leads to the regular part contribution to the retarded self-energy
Σ̅_R(t_12) ≡1/2(Σ̅_++ - Σ̅_– +Σ _-+ -Σ _+-) = Θ(t_12)(Σ_-+(t_12) -Σ_+-(t_12)).
This is the structure that we expect for the regular retarded self-energy term. Remarkably, the delta function contribution to the retarded self-energy can also be greatly simplified to
1/2(δΣ_1 +δΣ_2)= - δ(t_12) ∫t_3Θ(t_3) (Σ_-+-Σ_+-) = -δ(t_12) Σ̅_R(ω=0).
This leads to the final result for retarded self-energy:
Σ_R(t) = 1/2(Σ _++-Σ _+-+Σ _-+-Σ _–) = Σ̅_R(t) + 1/2(δΣ_1 +δΣ_2)
=Σ̅_R(t) - δ(t) Σ̅_R(ω=0).
§.§.§ Advanced Self-Energy Σ_A
The advanced self-energy can be easily obtained in a similar way:
Σ̅_A(t) ≡Θ(t)(Σ_+-(t) -Σ_-+(t))
Σ_A(t) = Σ̅_A(t) - δ(t) Σ̅_A(ω=0).
§ NUMERICAL METHODS FOR THE REAL-TIME SADDLE POINT EQUATIONS
As we discuss in the main text section <ref>, due to the statistical spatial translational invariance of the system, the real-time saddle point cannot be solved self-consistently using a simple mixing method since it's numerically unstable. Instead, we choose to use a generalized gradient descent protocol with a mask function.
From equation Eq. (<ref>), we can insert the saddle point equation and represent all self-energies via Δ,
S/N= - 1/2log
(G^-1) - ∫ t_1 t_2 ( 1/2Σ∘ G + 1/2 I_0(G) )
= - 1/2log
(G^-1) - ∫ t_1 t_2 ( 1/2( G_0^-1∘ G - 𝕀δ(t_12) ) + 1/2 I_0(G) )
= - 1/2log
(-2Δ(ω)^-1) - (const+ 1/2∫ω/2π G_0(ω)^-1Δ(ω)/-2 + ∫ t_1 t_2 1/2 I_0(G) ).
The last step is to decompose ∫ t_1 t_2 I_0(G) and take its derivative:
δ/δΔ(ω)∫ t_1 t_2 ( 1/2 I_0(G) )
= ∫ t δ G(t)/δΔ(ω)δ/δ G(t)∫ t_1 t_2 ( [∂_t_1∂_t_2 f(Δ_ab(t_12)) ] G(t_12) )
= ∫ t 1/∫ t' δΔ(t')/δ G(t)e^-ω t' (-Σ(t))
= ∫ t 1/∫ t' (-2δ(t'-t) - 2δ(t)) e^-ω t' (-Σ(t))
= ∫ t 1/-2 e^-ω t- 2δ(t)δ(ω) (-Σ(t)).
Since we're working in the ω≠ 0 sector, the δ(ω) term can be ignored. Summing all results we arrive at
S/Δ(ω)/N
= 1/2Δ^-1(ω) + ( -1/2( Δ_0^-1) + 1/2Σ(ω)/-2)
= /4( (-2)Δ^-1(ω) - (G_0^-1 - Σ) ).
If we only consider the retarded component, we will find the gradient to be zero is the same as the Schwinger-Dyson equation(<ref>). In the numerics, we choose the update rule to be
Δ(ω)_new = Δ(ω)_old + ζ/4( (-2)Δ^-1_R(ω) - (G_0,R^-1(ω) - Σ_R(ω)) ) Δ_R(ω)^2.
Since the gradient can diverge near small ω, we introduce an extra Δ_R(ω)^2 factor as a mask function. This numerical trick stabilizes the iteration process. We consider convergence to be achieved when ||Δ_R,new(ω)-Δ_R,old(ω)|| < 10^-6.
JHEP
|
http://arxiv.org/abs/2409.03475v1 | 20240905123700 | An Effective Current Limiting Strategy to Enhance Transient Stability of Virtual Synchronous Generator | [
"Yifan Zhao",
"Zhiqian Zhang",
"Ziyang Xu",
"Zhenbin Zhang",
"Jose Rodriguez"
] | eess.SY | [
"eess.SY",
"cs.SY"
] |
UTF8gbsn
An Effective Current Limiting Strategy to Enhance Transient Stability of Virtual Synchronous Generator
This work is financially supported by Chinese National Natural Science Foundation(52277192), Chinese National Natural Science Foundation(52277191)。
Yifan Zhao^1, Zhiqian Zhang^1, Ziyang Xu^2, Zhenbin Zhang^1*, Jose Rodriguez^3.
^1 School of Electrical Engineering, Shandong University, Jinan, China
^2 Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China
^3 Faculty of Engineering, Universidad San Sebastian Santiago, 8420524, Chile.
[email protected], [email protected], [email protected], [email protected], [email protected]
September 9, 2024
==================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
§ ABSTRACT
VSG control has emerged as a crucial technology for integrating renewable energy sources. However, renewable energy have limited tolerance to overcurrent, necessitating the implementation of current limiting (CL)strategies to mitigate the overcurrent. The introduction of different CL strategies can have varying impacts on the system. While previous studies have discussed the effects of different CL strategies on the system, but they lack intuitive and explicit explanations. Meanwhile, previous CL strategy have failed to effectively ensure the stability of the system.
In this paper, the Equal Proportional Area
Criterion (EPAC) method is employed to intuitively explain how different CL strategies affect transient stability. Based on this, an effective current limiting strategy is proposed. Simulations are conducted in MATLAB/Simulink to validate the proposed strategy. The simulation results demonstrate that, the proposed effective CL strategy exhibits superior stability.
Effective current limiting strategies, transient stability,
Equal proportional area
criterion (EPAC)
§ INTRODUCTION
As the integration of Distributed Energy Resources (DERs) in power systems continues to grow, they present significant operational challenges due to their inherent weak damping and low inertia. These characteristics can lead to frequency fluctuations and voltage instability <cit.>, affecting the overall stability and reliability of the grid. To address these issues, VSG control strategy has been introduced <cit.>. The VSG emulates the dynamics of traditional synchronous generators by mimicking the swing equations, thereby providing synthetic inertia and damping to enhance system stability. This strategy allows DERs to dynamically adjust their output in response to grid conditions, helping to maintain frequency and voltage within acceptable ranges. The VSG's flexibility and adaptability are key advantages, as it can quickly increase power output during frequency drops to support system recovery. Moreover, the VSG can be integrated with advanced control techniques such as model predictive control and optimization scheduling <cit.>, further improving the efficiency and stability of the power system. The VSG control strategy offers an effective technical solution to the challenges posed by DERs, contributing to the sustainable and efficient operation of power systems.
During disturbances in VSG system, overcurrent may occur. However, devices have limited capability to withstand high current <cit.>, prompting the proposal of various current limiting strategies to mitigate overcurrent. Previous studies have introduced current limter, Virtual impedance and Voltage limter control to suppress overcurrent <cit.>. For current limter control, there are typically three types: d-axis priority, q-axis priority and angle priority. The transient stability of a VSG is significantly affected by different current limiting strategies. It has been shown in other paper that taking the q-axis current prioritized strategy exhibits improved transient stability performance. In <cit.>, it merely conducted experiments for validation without performing theoretical analysis. In <cit.>, it adopted Liapunov function method to analysis, but lacked intuitiveness. Although the q-axis priority CL strategy has been verified to have better transient stability compared to other strategies, it still cannot guarantee the transient stability of the system under large disturbances.
In this paper, the Equal Proportional Area Criterion (EPAC) method is employed to intuitively and qualitatively explain how different current limiting strategies affect transient stability. Based on this analysis, an effective current limiting strategy is proposed to effectively address transient stability issues encountered by VSG during disturbances. the simulation conducted in MATLAB/Simulink demonstrated that the proposed effective CL strategy enhances transient stability, thus offering a promising solution to improve the performance of VSG under large disturbances.
§ VSG CONTROL PRINCIPLE AND PREVIOUS CURRENT LIMITING STRATEGY
This section describes the basic control strategy of VSG, then analyzes the existing current limiting strategies.
§.§ VSG topology
The basic topology of the grid-tied VSG control strategy is given by Fig. <ref>, U_d c is the dc-side voltage, I and
U are the VSG output current and voltage, and L_f is the filter
inductance. C_f is the filter capacitance, the I and U are the VSG
output current and voltage, L_g is the grid inductance and V_g is the grid voltage.
The VSG control loop consists of swing equation (<ref>)and an automatic voltage regulator (AVR) (<ref>).
As shown in (<ref>), Swing equations are used to mimic the electromagnetic characteristics of synchronous generators. P_m and P_e are the active power reference and output active power. J is the virtual interia, D is damping coefficient. w_m and w_0 are virtual rotor frequency and nominal frequency. δ is the power angle.
P_m-P_e=J ω_m d ω_m/d l+D(ω_m-ω_0)
E=E_ref+k(Q_ref-Q_e)
δ=∫ω_m-ω_0 d t
As shown in (<ref>). AVR is used to mimic characteristics of the voltage regulation in synchronous generators. E and E_ref are the voltage rated and the amplitude of reference voltage voltage rated. k is the Q-U coefficient value.
§.§ Previous Current Limiting Strategy
When the VSG experiences transient state, overcurrent occurs, necessitating the implementation of a CL, previous paper have proposed various current limiting strategy to reduce the current, including Current limter, Virtual impedance, Voltage limiter. In this paper, only current limiter is discussed.
§.§.§ Angle Priority CL
As shown in Fig. <ref>(a), The angle priority CL strategy ensure the power angle of i^r e f remains unchanged after passing through the CL controller, i_d^ref※ and i_q^ref※ are the new reference current after CL. They are determined by (<ref>).
i_d^*=i_d^ref/|i_d^ref|×min(|i_d^ref|,|i_d^ref| ×I_max/√((i_d^ref)^2+(i_q^ref)^2))
i_q^*=i_q^ref/|i_q^ref|×min(|i_q^ref|,|i_q^ref| ×I_max/√((i_d^ref)^2+(i_q^ref)^2))
§.§.§ d-axis priority CL
As shown in Fig. <ref>(b), d-axis priority CL keeps i_d^r e f as constant as possible and achieves current limiting by cutting i_q^r e f, i_d^ref※ and i_q^ref※ are determined by (<ref>).
i_d^*=i_d^ref/|i_d^ref|×min(|i_d^ref|, I_max)
i_q^*=i_q^ref/|i_q^ref|×min(|i_q^ref|, √((I_max)^2-(i_d^*)^2))
§.§.§ q-axis priority CL
As shown in Fig. <ref>(c), q-axis priority CL keeps i_q^r e f as constant as possible and achieves current limiting by cutting i_d^r e f, i_q^ref※ and i_d^ref※ are determined by (<ref>).
i_q^*=i_q^ref /|i_q^ref |×min(|i_q^ref|, I_max)
i_d^*=i_d^ref /|i_d^ref |×min(|i_d^ref |, √((I_max)^2-(i_q^*)^2))
§ CASE STUDY OF DIFFERENT CURRENT LIMITING STRATEGIES ON TRANSIENT
STABILITY
The introduction of different current limiting strategies can have varying impacts on the transient stability of the system, <cit.> pointed out that taking the q-axis priority CL possesses a better transient stability, but none of them performed a mechanism analysis, <cit.> adopted the method of Liyapunov's function but it is not enough to be intuitive, here we adopt the method of EPAC.
§.§ Active Power Analysis under Different CL
Under the operating conditions where the inverter currents do not reach their limits, from <cit.>, the active power can be represent as:
P_e=E/R_v^2+X_v^2(X_v V sinδ+R_v(E-V cosδ))
(<ref>) can be simplified as:
P_e=E V/X_vsinδ .
§.§.§ Angle Priority
Combined (<ref>) and (<ref>).
The implementation of angle priority CL results in a new active power curve.
P_e={[ E V/X_vsinδ,|i^ref |<I_max; I_maxcosδ/2, Otherwise ].
§.§.§ d-axis Priority
Combined (<ref>) and (<ref>).
The implementation of d-axis priority CL results in a new active power curve.
P_e={[ E V/X_rev sinδ,|i^ref |<I_max; i_d^ret /|i_d^red | V I_maxcosδ,|i_d^ref | ≥ I_max; V^2/2 X_vsin 2 δ-V sinδ√(I_max^2-(V/Xsinδ)^2), OW ].
§.§.§ q-axis Priority
Combined (<ref>) and (<ref>).
The implementation of q-axis priority CL results in a new active power curve.
P_e={[ E V/X_vsinδ,|i^ref |<I_max; -V sinδ I_maxi_q^ref /|i_q^ref |,|i_q^ref | ≥ I_max; V cosδ√(I_max^2-(V/X_v(cosδ-1))^2); -V^2/X_vsinδ(cosδ-1), OW ].
§.§ Transient Stability Analysis Using EPAC Method
The Equal Proportional Area Criterion (EPAC) is commonly utilized in the transient stability analysis of traditional power systems. Given the adoption of the VSG control strategy in this context, the network exhibits characteristics akin to those of conventional power systems, thus justifying the application of the EPAC here. The EPAC informs us that, upon the occurrence of a transient disturbance in the power system, due to characteristic (<ref>), the rotor angle will gradually increase, corresponding to an area of acceleration. After the fault is cleared, the acceleration persists, causing the rotor angle to continue to rise but at a diminishing rate, corresponding to an area of deceleration. To ensure the system's transient stability, the area of acceleration must be less than the area of deceleration. It indicates that the rotor gains more energy during the acceleration process than it loses during the deceleration process, finaly resulting in instability.
As shown in Fig. <ref>, The red curve represents P-δ curve with CL, δ_o is the fault occur time. δ_c is the fault cleared time. According to EPAC, to keep transient stability of the VSG, the acceleration area must smaller than the deceleration area. Fig. <ref>(a)(b)(c) represents the transient stability analysis of angle priority CL, d-axis priority CL, q-axis priority CL, respectively. We can intuitively see that the q-axis prioritized current limiting strategy has a significantly smallest acceleration area, thus, compared to other CL, it has better stability.
§ PROPOSED EFFECTIVE CURRENT LIMITING STRATEGY
Based on the above EPAC method, different current limiting strategies affect the transient stability of VSG by affecting the acceleration area, here, an effective current limiting strategy is proposed to obtain a smallest acceleration area for optimal transient stability.
§.§ Mathematical Analysis
As shown in Fig. <ref>, Unlike other literature that adopts a prioritization approach, we achieve current limiting by adaptively adjusting the current angle after passing through the current limiter.
It is easy to get d-axis and q-axis components of the PCC voltage.
V_d=V cosδ V_q=-V sinδ
When the current is not influenced by the current limiting strategy, it can be represented as:
I=E ∠θ_m-V ∠θ_g/R_v+j X_v .
By combining with (<ref>) and assuming E ≈ V, it can be known that:
|I| ≈2 V/X_vsinδ/2 ∠ I ≈θ_g+δ/2
When proposed adaptive current limiting control strategies are implemented, we can obtain the magnitude of the current affected by the current limiting strategies.
i_d^*=I_maxcos(δ/2+φ)
i_q^*=-I_maxsin(δ/2+φ)
By substituting (<ref>) and (<ref>) into (<ref>), we can derive the expression for the active power that has been influenced by the adaptive current limiting strategy.
P_e^*=V I_maxcosδcos(δ/2+φ)+V I_maxcosδsin(δ/2+φ)
Which can be simplified as follows:
P_e^* =V I_max[cosδcos(δ/2+φ)+sin(δ/2+φ) sinδ]
=V I_maxcos(φ-δ/2)
§.§ Transient Stability Analysis
To enhance the transient stability of the VSG, it is essential to minimize the acceleration area. This can be achieved by continuously adjusting the parameter φ to ensure that P_e-P_e^* remains at its minimum value, If precision is not critical, a simplification can be made by adjusting φ=δ/2 to minimize P_e-P_e^*, thereby obtaining a smallest acceleration area.
Fig. <ref> presents a transient stability analysis under an adaptive current limiting strategy. In comparison with previous current limiting strategies, it is evident that there is a smaller area of acceleration while the area of deceleration remains constant. This implies that when the system encounters the same transient disturbances, the proposed current limiting strategy exhibits superior transient stability.
§ SIMULATION RESULTS
To validate the effective CL proposed in Section 4, the VSG system in Fig. <ref> is implemented in MATLAB/Simulink and the data provided in Table <ref>. Given that the q-axis priority CL has been demonstrated to be optimal than angle priority, d-axis priority in <cit.>, the subsequent comparison focuses on the proposed effective CL against the q-axis priority CL. Faults are introduced at 0.5 seconds and cleared at 0.8s.
Fig. <ref> illustrates the voltage and current waveforms for q-axis priority CL, it can be seen in Fig. <ref>, voltage drops at 0.5 seconds and recover at 0.8 seconds. While the current increases at 0.5 seconds, due to the current limiter, the amplitude of the current keep at 2.4pu.
Fig. <ref> illustrates the voltage and current waveforms for proposed adaptive priority CL, it can be seen in Fig. <ref>, voltage drops at 0.5 seconds and recover at 0.8 seconds. While the current increases at 0.5 seconds, due to the current limiter, the amplitude of the current keep at 2.4pu.
Fig. <ref> shows the power angle of VSG under proposed adaptive priority CL, it can be found the power angle increases at 0.5 seconds. After fault cleared at 0.8 seconds, the power angle decreased, it indicates the system maintain its transient stability.
While the power angle of VSG under q-axis priority CL, it can be found the power angle increases at 0.5 seconds. After fault cleared at 0.8 seconds, the power angle keeps increasing, it indicates the system lose its transient stability.
§ CONCLUSIONS
In this paper, regarding different current limiting strategies, previous studies lacked intuitive theoretical analyses. We utilize the EPAC method to provide an intuitive explanation of why different current limiting strategies exhibit varying impacts on transient stability. Additionally, to enhance the system's transient stability, an effective current limiting strategy is proposed based on this analytical approach. MATLAB/Simulink simulation results indicate that the proposed adaptive CL outperforms other strategies in terms of transient stability.
IEEEtran
|
http://arxiv.org/abs/2409.02287v1 | 20240903205448 | Silicon Nitride Photonic Waveguide-Based Young's Interferometer for Molecular Sensing | [
"Sahar Delfan",
"Mohit Khurana",
"Zhenhuan Yi",
"Alexei Sokolov",
"Aleksei M. Zheltikov",
"Marlan O. Scully"
] | physics.optics | [
"physics.optics",
"physics.app-ph",
"physics.ins-det"
] |
1 Department of Physics and Astronomy, Texas A&M University, College Station, TX 77843, USA
2Institute of Quantum Science and Engineering, Texas A&M University, College Station, TX 77843, USA
3Baylor University, Waco, Texas 76704, USA
4Princeton University, Princeton, New Jersey 08544, USA
*[email protected]
Devices based on photonic integrated circuits play a crucial role in the development of low-cost, high-performance, industry-scale manufacturable sensors. We report the design, fabrication, and application of a silicon nitride waveguide-based integrated photonic sensor in Young's interferometer configuration combined with Complementary Metal-Oxide-Semiconductor (CMOS) imaging detection. We use a finite-difference time-domain method to analyze the performance of the sensor device and optimize the sensitivity of the fundamental transverse-electric (TE) mode. We develop a low-cost fabrication method for the photonic sensor chip, using photolithography-compatible dimensions, and produce the sensing region with wet-etching of silicon dioxide. We demonstrate the sensor's functioning by measuring the optical phase shift with glucose concentration in an aqueous solution. We obtain consistent interference patterns with fringe visibility exceeding 0.75 and measure the phase differences for glucose concentrations in the 10 μ g/ml order, corresponding to the order of 10^7 molecules in the sensing volume. We envision extending this work to functionalized surface sensors based on molecular binding. Our work will impact biosensing applications and, more generally, the fabrication of interferometric-based photonic devices.
§ INTRODUCTION
Biosensors play an important role in metrological studies of biological and chemical elements in clinical health, agricultural, and environmental applications <cit.>. Detection of ultra-low concentration solutes is a challenging problem with the additional requirements of low-cost, high sensitivity, high-speed testing, portable sensor device, and label-free detections on the sensing scheme <cit.>. Clinical applications demand the detection of low concentrations (c) of analytes (biomolecules, chemicals, particles, etc.) in the range of μg/ml, corresponding to refractive index (n_sol) changes in the order of 10^-7 RIU (δn_sol/ δc∼ 10^-1 RIU/(g/ml)), where RIU is refractive index unit <cit.>. Optical biosensors provide robust ultra-high sensitivity technology in the measurement of solute concentrations in μg/ml order <cit.>. Optical or photonic devices that manipulate the light waves, such as interferometers, achieve phase shifts, and intensity variations are the fundamental building blocks in many applications like sensors, modulators, and optical switches <cit.>. Photonic biosensors can integrate all biosensing components onto a CMOS-compatible chip, enabling multiplexing with compact electronic readouts and offering the advantage of a system with smaller noise levels <cit.>.
Various types of sensors are capable of sensing tiny concentrations of solute or analyte. Surface plasmon resonance (SPR) and microcavity sensors depend on resonance effects, which are sensitive to environmental fluctuations and necessitate precise resonance measurements <cit.>. Photonic crystal sensors consist of structures spatially arranged periodic dielectric materials that uniquely interact with light and are sensitive to the changes in environment refractive index<cit.>. Inverse design methods use computational algorithms to optimize photonic structures for specific functionalities, such as refractive index sensing, often resulting in unconventional geometries tailored for maximum performance <cit.>. However, photonic waveguides, while perhaps less optimized than inverse-designed structures, offer several practical advantages. Waveguides are based on well-established fabrication techniques, making them more straightforward to manufacture with high yield and reproducibility. They also tend to be more robust to fabrication imperfections, whereas inverse-designed structures can be susceptible to small deviations from the ideal design, potentially leading to performance degradation. Inverse-design-based sensors necessitate heavy computational optimizations and high-resolution electron-beam lithography (EBL) methods in fabrication. In contrast, photonic waveguides can be easily simulated, optimized, and fabricated using the photolithography method, and the measurements are simpler to execute, as reported here.
Photonic biosensors, employing configurations such as Young's interferometer (YI) in photonic waveguides do not require the EBL method in fabrication and enable accurate measurements of solute concentrations. However, external temperature and pressure, laser wavelength, linewidth and stability, low precision or noisy detection schemes, and fabrication errors can increase total noise in the sensor signal, therefore reduce the sensor's sensitivity and limit detection capabilities. The lowest concentrations that can be reliably detected in the employed sensing scheme are commonly referred to as the limit of detection (LoD) and depend both on the sensor's sensitivity and the read-out system's noise floor. Common ways to increase the sensitivity of sensors are by increasing the interaction length, optimizing the sensing arm waveguide, reducing the instrument noise floor in the system, and making high precision measurements of the phase shift <cit.>. New techniques to counter such effects have also been demonstrated, such as temperature-independent MZI-based biosensors and coherent interferometric sensors <cit.>.
Thin film silicon nitride (Si_3N_4) offers a prominent platform for integrated photonic circuits due to their low propagation loss in visible-near infrared wavelengths, high refractive index, low-cost and compatibility with CMOS fabrication processes <cit.>. MZI-based sensors require its operation near the quadrature condition for high sensitivity measurement of phase differences, but Young's interferometer is applicable for arbitrary phase shifts. In addition, the maturity of the CMOS industry enables low-cost instrumentation of image-based sensing, whereas MZI-based sensors still require an expensive photodetection scheme. Here, we implement Young's interferometer in Si_3N_4 photonic waveguide platform, where Si_3N_4 is a core waveguide material and silicon dioxide (SiO_2) serves as cladding on silicon (Si) substrate. We confine the light in the waveguide's fundamental transverse electric (TE) mode and build a photonic sensor. We demonstrate its capability of sensing refractive index change through fringe shifts with different concentrations of glucose introduced to the sensing arm of the interferometer. Our sensor's sensitivity performs better than that of the works reported by Wang et al. <cit.>, Zhou et al. <cit.>, and Wong et al. <cit.>.
§ DESIGN AND SIMULATIONS
The operating principle of waveguide sensors is based on the evanescent electromagnetic (EM) wave of confined mode in the waveguide interacting with the environment in the vicinity of the waveguide surface. The propagation of EM wave is susceptible to the environment refractive index due to the mode fields' presence outside the waveguide, which causes changes in the effective index of mode due to small perturbations in the environment. This interaction allows us to develop a technology to measure tiny changes in analyte concentration in the aqueous solution. The change in phase of an EM wave is detected by interfering it with another non-interacting beam in the interferometer <cit.>.
Waveguide-based YI splits a waveguide into two waveguides via a 50:50 Y-splitter; one waveguide is the sensing arm, and the other acts as the reference arm. The 50:50 Y-splitter divides the incoming guided light intensity equally into each waveguide, as well as prevents optical losses and conversion to other modes. The chip configuration of waveguide-based YI is shown in Fig. <ref>. The two confined modes in the sensing and reference waveguides come out of the chip end, form diverged beams, and interfere in the free space. The interference pattern is captured by a CMOS camera. The reference waveguide is buried in SiO_2, and the sensing arm is open to the aqueous medium as shown in Fig. <ref>(b). The propagating mode in the sensing arm interacts with the solution, causing an additional phase shift with respect to the mode propagating in the reference arm. When the solution's refractive index changes, the phase of the propagating mode in the reference arm remains constant, while the phase of the propagating mode in the sensing arm changes. This phase shift of sensing arm mode is given by, δϕ = 2π L Δ n_eff/λ, where L is the length of sensing window, Δ n_eff is the change in the effective index of sensing arm waveguide mode, λ is the wavelength of the light. The ratio of the variation in the effective index of the sensing arm mode to the variation in the solution's refractive index is defined as the bulk sensitivity of the sensing arm waveguide or the sensor, S_wg,
S_wg = δn_eff/δn_sol
where δn_eff is change in effective index of mode and δn_sol is change in solution's refractive index.
The effective indices of modes confined in the waveguides are obtained using numerical solutions of Maxwell's equations. We use finite-difference time-domain (FDTD) analysis to estimate the effective indices of modes and mode profiles. At the operating wavelength of our laser, 633 nm, the refractive index of SiO_2, Si_3N_4 and pure water are taken as 1.457, 2.01 and 1.33, respectively. Fig. <ref> shows the effective index of TE modes dependence on Si_3N_4 core waveguide thickness for buried waveguide in SiO_2 and sensing arm waveguide in air and water. Since the cladding SiO_2 index is 1.457, any mode with an effective index lower than the cladding index would not be guided in the core waveguide, and as the core waveguide's thickness increases, the modes' effective indices increase. The plot also shows that as the index of surrounding material on top of the core waveguide decreases, a larger thickness of the core waveguide is required for the existence of the guided modes. For instance, 40 nm thick 4 μm wide Si_3N_4 core waveguide doesn't support a guided mode with surrounding material air on top, thus this configuration is not plotted in Fig. <ref>. To confine the mode in sensing arm waveguide covered with water, a thicker core waveguide is needed compared to the thickness required for single-mode (SM) operation in the buried waveguide. This is because the effective indices of the confined modes in the buried waveguide are higher than those in the sensing arm waveguide covered with water.
Fig. <ref> shows the fundamental mode profiles confined in three different thicknesses: 40 nm, 60 nm, and 80 nm, with 4 μm wide core waveguide covered by 1 μm SiO_2, bulk water or air, respectively. First, the modes are well confined within the 1 μm SiO_2 at all core thicknesses, and they are quite symmetric with respect to the horizontal symmetry axis of the core. The mode profile is asymmetric for sensing arm waveguide due to asymmetry of refractive indices of water and SiO_2 as shown in Fig. <ref>. The fraction of mode fields in solution, i.e., the portion above the red dotted line indicating the interface, decreases for thinner waveguides, thus in order to increase the sensitivity, the core waveguide's thickness must be kept larger than SM criteria in general <cit.>. The sensitivity dependence on the core waveguide thickness is shown in Fig. <ref>. The relationship between δn_sol and δn_eff is shown in Fig. <ref> for three out of 36 data points in Fig. <ref>, corresponding to 36 nm, 60 nm and 80 nm thick and 4 μm wide core waveguide. Fig. <ref> shows that the thickness of the core waveguide for maximum sensitivity is larger than the thickness satisfying the SM condition; this can be seen from Fig. <ref> where TE1 mode starts to exist. Furthermore, sensitivity decreases for the larger thickness of the core waveguide than the thickness for the maximum sensitivity due to the increase of mode confinement in the core waveguide and the decrease in mode field overlap with water or solution.
The design of the curved waveguide and interferometer implemented on the chip is adjusted such that the output from the chip is free of scattered light and unguided cladding modes that may occur at the input fiber-chip interface and sensing window area as shown in Fig. <ref>. For a demonstration of the sensor's sensing application, we utilize the following dimensions for our waveguide-based interferometer: the width of Si_3N_4 core waveguide is 4 μm. The gap between two waveguides of the interferometer is 84 μm (edge to edge) as shown in Fig. <ref>(b). The sensing window measures 12 mm long, 80 μm wide, centered on the sensing arm waveguide and 1 μm deep. We use 54 nm thick, 4 μm wide Si_3N_4 core waveguide in the demonstration experiment as discussed in the next sections. Since the coupling of light from SM fiber to waveguide (experimental setup is described in section 4) is mostly into fundamental TE mode of the core waveguide as shown in Fig. <ref> and only TE0 mode propagates as depicted in Fig. <ref>, the higher modes are not excited in the sensor and do not play any role in our sensor measurements. The simulated n_eff of TM0 mode for 54 nm thick, 4 μm wide Si_3N_4 sensing arm waveguide is 1.4556 which is below SiO_2 cladding index 1.457 at 633 nm, therefore TM0 mode is not confined in sensing arm waveguide and do not play a role in our experiment.
In Fig. <ref>(a), light propagates from the buried waveguide into the sensing window and couples back into the buried waveguide. Fig. <ref> shows a total of four guided modes in the sensing window for 54 nm thick, 4 μm wide Si_3N_4 core waveguide, including a fundamental and three higher TE modes. The propagating TE0 mode from the buried waveguide does not couple to any of those three higher modes in the sensing window. Similarly, the propagating TE0 mode in the sensing window does not couple to any higher mode confined in the buried waveguide. In general, it is advised to use the adiabatic waveguide design at the transition regions of the sensing window (buried waveguide to sensing arm waveguide and sensing arm waveguide to buried waveguide) to improve the mode conversion from buried waveguide to sensing arm waveguide and vice versa. Typically, a triangular shape with dimensions estimated by suitable simulations can be implemented at both ends of the sensing window to reduce light scattering and coupling to any mode other than the fundamental TE mode. However, this adiabatic design was not used in the demonstrated chip presented here.
§ FABRICATION OF PHOTONIC SENSOR CHIP
The fabrication of a photonic sensor chip begins with a 3-inch silicon (Si) wafer. A layer of silicon dioxide (SiO_2) with a thickness of 2 μm is thermally grown on the wafer's surface to provide a base isolation layer. Next, a thin layer of Si_3N_4 approximately 73 nm is deposited using the Low-Pressure Chemical Vapor Deposition (LPCVD) technique. This Si_3N_4 layer forms the core of the waveguide. The deposited Si_3N_4 layer is then etched down to the desired waveguide thickness using a plasma etching process, i.e., 54 nm thick Si_3N_4 in the demonstration experiment. The wafer is baked on a hot plate for 10 minutes at 160 ^∘C to dehumidify the surface of the wafer. A positive photoresist, S1818, is then spin-coated onto the wafer at 3500 revolutions per minute (rpm) for 40 seconds, resulting in a resist layer of about 2 μm thick. This resist layer acts as a mask during the following patterning process. The resist-coated wafer undergoes a soft bake on a hot plate at 115 ^∘C for 2 min to solidify the resist and improve its patterning properties. After cooling down to room temperature, the wafer is then transferred to the photo-lithography system (Heidelberg MLA150 Maskless) to pattern the interferometer and waveguide designs on photoresist with ultraviolet (UV) light exposure. The exposure dose is set to 160 mJ/cm^2 to achieve optimal resist patterning. The exposed resist is then developed in a developer solution, MF-319, for 90 seconds. This development process removes the unexposed regions of the resist, leaving behind the designed waveguide pattern. The patterned resist layer acts as a mask for the subsequent reactive-ion etching (RIE) process, which precisely etches away uncovered Si_3N_4 layer. Finally, the remaining resist mask is removed using acetone, completing the fabrication of the core waveguide structure. After that, 1 μm thick SiO_2 is deposited on the fabricated chip using the PECVD technique. To create a sensing window on the sensing arm waveguide, positive photoresist S1818 is spin-coated onto the wafer at 3500 rpm for 40 seconds to form a resist layer around 2 μm, then soft baking at 115 ^∘C for 2 min is performed. The wafer is then transferred to EVG 610 Double-sided Mask Aligner photo-lithography system to align to markers fabricated waveguides during the previous lithography step and pattern the sensing windows on photoresist with UV exposure dosage 160 mJ/cm^2. The pattern is developed in the MF-319 developer solution for about 90 seconds. After development, a wet etching method, Buffered oxide etch (BOE) chemical is used for 4 min to etch SiO_2 from patterned sensing areas down to Si_3N_4 core waveguide. Following the etching, the resist mask is lifted off with acetone, leaving behind the completed photonic waveguide with a defined sensing region ready for further characterization and integration as shown in Fig. <ref> and <ref>. The chip is then cleaved with a diamond cutter and gentle pressure by a clip to propagate the cleave cut.
§ EXPERIMENTAL SETUP
The schematic diagram of the photonic chip characterization experiment is shown in Fig. <ref>. A continuous-wave (CW) laser (Thorlabs HRS015B) operating at a wavelength of 633 nm is launched to an optical isolator (Newport ISO-04-650-MP) to minimize any reflection back to the laser and is then coupled into a SM fiber (Thorlabs P1-630Y-FC-2) with a core diameter of approximately 4 μm. One end of the SM fiber is cleaved at a 0-degree angle to have a perfectly Gaussian fundamental Transverse EM (TEM) mode at the output. A photonic sensor chip is placed under an optical microscope equipped with 5× and 20× objective lenses. The cleaved fiber end is then launched from a control stage towards the photonic chip and butt-coupled to the Si_3N_4 core waveguide to achieve the coupling of light. The output from the chip is captured by a CMOS camera and monitored on a computer with a Labview application, allowing us to visualize the interference pattern and apply further data analysis.
§ RESULTS AND DISCUSSIONS
For the glucose solution concentration measurement, we fabricate a photonic chip with 54 nm thick Si_3N_4 core waveguide, which is close to the optimum thickness for sensitivity as discussed in Fig. <ref>. The simulated coupling of TE0 waveguide mode to higher modes at sensing window interfaces is shown in Fig. <ref>, illustrating our sensor's operation in fundamental TE mode. Drops of solution are placed on the sensing window using a syringe from prepared glucose (D-glucose, Sigma-Aldrich) solutions to fill the sensing area of the sensor device. We wait for 5 min after putting solution for each measurement to guarantee the equilibrium state of the interaction of solution with sensing arm waveguide and monitor fringe shifts with live setup of camera output on Labview application with only less than 100 ms integration time for each output image captured. Interfered beam output from the chip end is a diverging beam, and the camera collects a small region of light output for about 80 ms integration time, as shown in Fig. <ref>. The camera is placed away from the chip to capture only 4-5 fringes on the camera to get enough pixels of data points per fringe along the fringe shift axis (x-axis). The fringes produced by our photonic chip have visibility ((I_max- I_min)/(I_max+ I_min)) more than 0.75 in all measurements. We use the Asin(ω x +ϕ) + b function to fit the fringe data along the fringe shift axis (x-axis) to estimate the output phase. We fit the data with a fitting function using the Python scipy module and extract the fit parameters. Based on the data fittings of multiple fringes, the estimated phases' standard deviation is σ_ϕ ∼ 0.03 radians. Phase differences, δϕ, are calculated for different glucose conc. solutions using the pure water solution as a reference, i.e., δϕ = ϕ_conc. - ϕ_0, where ϕ_0 and ϕ_conc. are estimated phases in radians for pure water and glucose conc. solutions respectively. The measured values δϕ vs conc. of glucose are plotted in Fig. <ref>, and the data fits a linear function relation between conc. and phase differences. Based on the work of Tan et al. <cit.>, at room temperature ∼300 K, δn_sol/ δc for glucose conc. solution is ∼ 1.56 × 10^-1 RIU/(g/ml). The smallest glucose conc. used in this experiment is 68 μ g/ml, which corresponds to bulk refractive index change of ∼ 1.06 × 10^-5 RIU from pure water.
To determine the number of molecules around the waveguide contributing to phase shift in interference, first, we estimate the effective mode area <cit.> using the following equation:
A_eff = ∬n^2(x,y)|E(x,y)|^2 dx dy/max[n^2(x,y)|E(x,y)|^2] ,
where, A_eff is the effective mode area <cit.>, |E(x,y)| is the magnitude of electric field and n(x,y) is the refractive index. For a mode spans waveguide and solution, the effective mode area A_eff, mode is
A_eff, mode = ∬_moden^2(x,y)|E(x,y)|^2 dx dy/max[n^2(x,y)|E(x,y)|^2]_mode ,
while the effective mode area in the solution is
A_eff, solution = ∬_soln^2(x,y)|E(x,y)|^2 dx dy/max[n^2(x,y)|E(x,y)|^2]_mode .
For the TE0 mode confined in the 54 nm thick, 4 μm wide Si_3N_4 sensing arm, the simulated values are:
max[n^2(x,y)|E(x,y)|^2]_mode = 4.0401,
∬_moden^2(x,y)|E(x,y)|^2 dx dy = 1.3192× 10^-12 m^2,
and
∬_soln^2(x,y)|E(x,y)|^2 dx dy = 2.112×10^-13 m^2.
Molar mass (M_m) of glucose (C_6H_12O_6) is 180 g/mol, given the concentration, c in g/m^3, the number of molecules contributing to change in phase shift over the sensing window length (L) is
N_molecules = c/M_mN_ALA_eff, solution,
where N_A is Avogadro's constant.
For concentration in the order of 10 μg/ml, number of molecules contributing to measurable phase difference is ∼ 10^7. Using δn_sol/ δc∼ 1.56 × 10^-1 RIU/(g/ml) for glucose solution and eqn. <ref>, the relation between change in phase and glucose concentration can be found: δϕ = S_wg(2 πδn_solL)/λ,
or δϕ /δ c = S_wg(2π)(0.156L)/λ. Given that the length of the sensing window is 12 mm, the wavelength of the laser is 633 nm and the calculated slope, δϕ /δ c = 0.01062 × 10^6 rad/(g/ml) from Fig. <ref>, we estimate the experimental value of S_wg = 0.57.
The phase LoD is defined as Δϕ_limit = k×σ_ϕ, where k is a constant that depends on the confidence level or detection criterion. Common values of k are: for a 68.3% confidence level (1-sigma detection), k = 1; for a 95.4% confidence level (2-sigma detection), k = 2; for a 99.7% confidence level (3-sigma detection), k = 3. Since σ_ϕ∼ 0.03 rad, therefore, for k=3, we have Δϕ_limit = 0.09 rad, which corresponds to bulk refractive index change ∼ 1.32 × 10^-6 RIU and the capability of measuring 10 μg/ml glucose concentration.
§ CONCLUSION
We present design, simulations and fabrication for operating waveguide-based Young's interferometer in Si_3N_4 photonic platform for applications in molecules concentration measurement. We present numerical results of mode indices in different configurations of the core waveguide. We showed the optimum thickness of the strip waveguide in the fundamental TE mode operation of the sensing arm waveguide. We demonstrate bulk sensing measurements using glucose solution with different concentrations and the chip producing high-quality fringes in output. The phase estimations from each fitted interference pattern have a low error due to a minimum 0.75 visibility of fringes and less noisy interference in general. The experimental sensitivity of the sensor is found to be 0.57. The sensor's refractive index change detection limit is ∼ 1.32 × 10^-6 RIU. Currently, CMOS imaging sensors enable low-cost detection schemes combined with Young's interferometer for arbitrary phase shift detection.
This work can easily be extended to measuring the bulk concentration of antibodies and biomolecules in an aqueous solution and a thin layer of biomolecules captured on the functionalized surface of a core waveguide.
§ FUNDING
S.D. is supported by Herman F. Heep and Minnie Belle Heep Texas A&M University Endowed Fund held/administered by the Texas A&M Foundation. We want to thank the Robert A. Welch Foundation (grants A-1261 and A-1547), the DARPA PhENOM program, the Air Force Office of Scientific Research (Award No. FA9550-20-10366), and the National Science Foundation (Grant No. PHY-2013771). This material is also based upon work supported by the U.S. Department of Energy, Office of Science, Office of Biological and Environmental Research under Award Number DE-SC-0023103, DE-AC36-08GO28308.
§ ACKNOWLEDGMENTS
Fabrication of photonic chips was performed at the Aggiefab facility of Texas A&M University.
§ DISCLOSURES
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
§ DATA AVAILABILITY
Data underlying the results presented in this paper are not publicly available but can be obtained from the authors upon reasonable request.
|
http://arxiv.org/abs/2409.02669v1 | 20240904125326 | Causality-Aware Transformer Networks for Robotic Navigation | [
"Ruoyu Wang",
"Yao Liu",
"Yuanjiang Cao",
"Lina Yao"
] | cs.RO | [
"cs.RO",
"cs.AI",
"cs.LG"
] |
Causality-Aware Transformer Networks for Robotic Navigation
Wang et al.
University of New South Wales Macquarie University Commonwealth Scientific and Industrial Research Organisation, Australia
Causality-Aware Transformer Networks for Robotic Navigation
Ruoyu Wang 1 Yao Liu 2 Yuanjiang Cao 2 Lina Yao 1,3
September 9, 2024
===========================================================
§ ABSTRACT
Recent advances in machine learning algorithms have garnered growing interest in developing versatile Embodied AI systems. However, current research in this domain reveals opportunities for improvement. First, the direct adoption of RNNs and Transformers often overlooks the specific differences between Embodied AI and traditional sequential data modelling, potentially limiting its performance in Embodied AI tasks. Second, the reliance on task-specific configurations, such as pre-trained modules and dataset-specific logic, compromises the generalizability of these methods. We address these constraints by initially exploring the unique differences between Embodied AI tasks and other sequential data tasks through the lens of Causality, presenting a causal framework to elucidate the inadequacies of conventional sequential methods for Embodied AI. By leveraging this causal perspective, we propose Causality-Aware Transformer (CAT) Networks for Navigation, featuring a Causal Understanding Module to enhance the models's Environmental Understanding capability. Meanwhile, our method is devoid of task-specific inductive biases and can be trained in an End-to-End manner, which enhances the method's generalizability across various contexts. Empirical evaluations demonstrate that our methodology consistently surpasses benchmark performances across a spectrum of settings, tasks and simulation environments. Extensive ablation studies reveal that the performance gains can be attributed to the Causal Understanding Module, which demonstrates effectiveness and efficiency in both Reinforcement Learning and Supervised Learning settings.
§ INTRODUCTION
Navigation is a fundamental task in the research of Embodied AI, and the methods for Navigation can be broadly grouped into Supervised Learning methods and Reinforcement Learning methods, distinguished by the nature of their training processes <cit.>. Supervision in Navigation generally refers to a demonstration of a possible solution to a problem, and the methods are typically focused on matching the behaviour of their demonstrator. In Reinforcement Learning (RL) settings, an agent learns the policy by interactions with an environment. In both settings, while some methods have been proposed in recent years, many of them share similar limitations.
First, many methods involve the incorporation of task-specific inductive bias thus lack of generalizability <cit.>. For example, they usually require dataset-specific hand-crafted logics <cit.>, pre-trained modules to construct semantic maps <cit.>, additional large dataset <cit.>, or multi-stage training <cit.>. These limitations render them deficient in terms of generalizability and reproducibility, precluding their extension to alternative datasets or tasks.
Second, most existing methods directly adopt the traditional sequential data modeling methods such as RNN or Transformer as a core module. While employing these strategies from analogous tasks involving sequential data might seem expedient, it is crucial to acknowledge that Embodied AI tasks exhibit distinct characteristics from a causal perspective (Section <ref>). However, existing methods haven't adequately engaged with these distinctions thus limiting their effectiveness on Embodied AI tasks.
While a few existing works <cit.> tackled the generalizability problem by proposing an End-to-End method without task-specific architectures and inductive biases, the second weakness introduced above remains unaddressed. Therefore, we introduce a novel method to address these identified shortcomings in this paper.
Firstly, to enhance the generalizability and reproducibility of the approaches, we present an end-to-end method for visual navigation. This framework eschews any task-specific architecture and inductive biases, aligning with the principles demonstrated in EmbCLIP <cit.>. Secondly, we conduct a thorough analysis of the disparities between Embodied AI tasks and other types of sequential tasks through the lens of causality. By encapsulating these distinctions within a causal framework, we elucidate the inadequacies of conventional sequential methods, such as RNNs and Transformers, in addressing Embodied AI tasks. Consequently, we propose an innovative solution incorporating a Causal Understanding Module, designed to significantly enhance model performance in these contexts. In a nutshell, our contributions are three-folded:
* We propose Causality-Aware Transformer (CAT), a novel End-to-End framework for Navigation tasks without task-specific inductive bias, and experimentally show that it outperforms the baseline methods by a significant margin across various tasks and simulators.
* We introduce a causal framework for Navigation tasks,
offering a comprehensive rationale for the limitations in existing methods that potentially hinder agent performance. Without incurring additional computational cost, we design a Causal Understanding Module to substantially enhance the methods' effectiveness and efficiency, demonstrating the framework's ability to optimize performance through a more nuanced understanding of causality.
* We conduct comprehensive ablation studies and demonstrate the efficiency, effectiveness and necessity of the Causal Understanding Module. Besides, while we primarily focus on the tasks in the Reinforcement Learning setting, experiments show that the proposed Causal Understanding Module is also generally beneficial in the Supervised Learning setting.
R0.6
< g r a p h i c s >
Our method encourages the model to understand the environment by highlighting the direct causal relationships and diminishing the non-direct causal associations.
§ BACKGROUND
§.§ Motivation
Across diverse settings and paradigms within Embodied AI tasks, many necessitate the modelling of sequential data accumulated from preceding time steps. Predominantly, research endeavours utilize Recurrent Neural Networks (RNNs) or Transformers for this purpose, given their established efficacy in handling sequential data. Nevertheless, these methods were primarily proposed for other scenarios such as NLP, our question is, are these methods well-suited for the Navigation tasks?
We argue that directly adopting methods such as RNNs or Transformers to navigation tasks may result in specific limitations. This is due to the intrinsic differences between navigation tasks and conventional sequential data modelling tasks such as NLP, particularly when viewed through the framework of Causality. For instance, if we consider the following sentence in NLP:
The doctor asked the nurse a question, “...?”, She said, ....
It is obvious that the token She refers to the nurse, which was mentioned several time steps earlier. This indicates that direct causal relationships in such scenarios can be long-term or short-term, and do not adhere to a universal pattern.
In Navigation tasks, however, based on our empirical understanding of the physical world, the direct causal relationship can only be one-step, because it is generally believed that the transition from one state to another is typically attributed to the actions undertaken in the interim. For example, the current standing position of the agent and the action performed in that position serves as the direct determinants of its subsequent position, as demonstrated in Figure <ref>.
This fundamental difference underscores why methods such as RNNs or Transformers may not be ideally suited for Embodied AI tasks, because they are inherently designed to capture long-term associations across time steps. While this feature is advantageous for tasks that exhibit long-term causal connections, it proves counterproductive for navigation tasks, where such associations may not be desirable. Therefore, modifications to the architecture are necessary to address these disparities.
§.§ Motivation Formulation from a Causal Perspective
We formalize the motivation introduced above into a causal framework. First, based on the nature of the Navigation tasks, we make the following assumptions, which closely align with our empirical observations in the real world, as elaborated in Section <ref>.
At any given time step t, the state S_t and the action a_t are the only direct causal parent influencing the subsequent state at time step t+1, denoted as S_t+1.
At any given time step t, the state S_t and the Objective are the only causal parents of the action a_t.
Differences with Markov Property While the intuition of these assumptions may appear similar to the Markov Property <cit.>, they are distinct concepts from three perspectives: 1) Our assumptions pertain to causality, whereas the Markov property concerns transitional probability. 2) The Markov property underpins problem formulations such as Markov Decision Processes (MDPs), rather than serving as a specific modeling technique. However, our focus is on the practical aspects, and we discuss the appropriate architecture for modelling the scenario. 3) Our method is broadly applicable in both Reinforcement Learning and Supervised Learning contexts, thus not restricted to MDPs.
We depict these causal assumptions in a causal graph, as illustrated in the enlarged Causal Understanding Module in Figure <ref>, where the edge S_t-1→ S_t← a_t-1 corresponds to the Assumption <ref>, and the edge S_t→ a_t corresponds to the Assumption <ref>. Further, we derive Proposition <ref> based on these assumptions, which suggests that there exist NO direct causal relationships between S_t-1 and S_t+1, so S_t-1→ S_t+1 is marked by a light dotted line, indicating the causation between them is indirect and weak.
At any given time step t, and for any integer δ≥ 2, there exist no direct causal relationships between S_t and S_t-δ, the causal relationships between states S_t and S_t-δ are indirect and must be mediated by states S_t' for all t' where t-δ≤ t' ≤ t.
As discussed in Section <ref>, we identify Proposition <ref> as a crucial distinction between Navigation tasks and other sequential data tasks such as NLP. Therefore, we aim to make the direct causal relationship S_t-1→ a_t-1, S_t-1→ S_t and a_t-1→ S_t stand out from the associations in other forms. While S_t-1→ a_t-1 is inherently highlighted by the data collection process in the task, S_t-1→ S_t and a_t-1→ S_t need to be addressed in the historical data modelling step. Building on these concepts, we propose our method to make these causal associations stand out from the associations in other forms.
§ CAUSALITY-AWARE TRANSFORMER NETWORKS
The architecture of our proposed methodology is depicted in Figure <ref> and encompasses multiple components: Visual Encoder, Goal/Action Embedding, Feature Post-Processing, Multi-modal Transformer, and the Causal Understanding Module. The functions and operations of these individual modules are delineated in the subsequent sections.
Visual Encoder We fine-tune the pre-trained CLIP ResNet-50 model <cit.> as the Visual Encoder to convert the visual signal S_i into features z_s_i because CLIP is proven to be effective for the Embodied AI tasks <cit.>.
Objective & Action Embedding The Objective and Actions are given in the text format, such as “Find the laptop”, “Move Forward”, and “Turn Left”. We utilize a simple Embedding layer to transform the Objective and previous actions a_i into features h_o and h_a_i.
Feature Post-Processing After obtaining the representations of the visual signal of each state z_s_i and the representation of the goal h_o, we pass these features to a Feature Post-Processing Module, which is designed to integrate objective-related information into the visual representation of each state serves to reinforce the causal relationship between the objective and the subsequent action (Assumption <ref>). Consequently, the Feature Post-Processing Module generates a distinct set of features h_s_i for each state.
Multi-modal Transformer All the features obtained in the earlier steps are passed into a Multi-modal Transformer Module, as illustrated in Figure <ref>. We use h_visual and h_action to denote the sequence of h_v_i and h_a_i for simplicity.
First, we concatenate h_visual and h_action along the direction of the time step, as demonstrated by Equation <ref>, and apply the Positional Encoding on the concatenated features h_concat. Then, the features will be passed to the transformer encoder <cit.>, which is built with causal attention that prevents visual and action embeddings from attending to subsequent time steps. Consequently, the transformer produces a set of updated features h'_concat.
h_concat = [ h_visual; h_actions]
Similarly, the output of the transformer contains the updated features of visual states and previous actions, as illustrated by Equation <ref>. Then, we take the updated visual features h'_visual and feed them to an Actor layer, which predicts the proper action to be taken at the given state, as illustrated by Equation <ref>. Finally, the agent will interact with the environment through a_t to get the visual signal of the next state, then the framework proceed to the next time step.
h'_concat = [ h'_visual; h'_actions]
a_t = Actor(h'_visual)
Causal Understanding Module Following the idea in Assumption <ref>, Assumption <ref> and Proposition <ref>, we use a Causal Understanding Module to encourage the one-step direct causation to stand out from the associations in other forms. In particular, we claim that for any time step t, the state S_t and the corresponding action a_t should be the only direct causes of the next state S_t+1. And any states in earlier time steps only hold indirect causation with S_t+1, thus the association between them should be weaker than the direct causation between adjacent states. To align the model architecture with these assumptions, we aim to make the direct causations stand out from other forms of associations. Therefore, we introduce a Causal Understanding Module, which predicts the representation of the next state S_t+1 by taking the representation of S_t and a_t as input. Ideally, a model demonstrating accurate predictions on this auxiliary task signifies a thorough understanding and effective encoding of the underlying environment. Mathematically, the Causal Understanding Module is trained by minimizing the Causal Loss as defined in Equation <ref>.
ℒ_causal(θ) = 𝔼_t[ (Causal(h_v_t, h_a_t) - h_v_t+1 )^2 ]
Training Process For experiments in the Reinforcement Learning setting, we train our model with Proximal Policy Optimization (PPO) <cit.>. So the overall objective of our framework becomes Equation <ref>, where ℒ_PPO denotes the original objective of PPO <cit.> which is to be maximized, ℒ_causal is our proposed Causal Loss which is to be minimized, so we subtract the causal loss from the PPO objective.
ℒ_total(θ) = ℒ_PPO - αℒ_causal
We also conduct experiments in the Supervised Learning setting, where the training process differs from that in the Reinforcement Learning setting. This will be elaborated in Section <ref>.
§ EXPERIMENTS
In Section <ref>-<ref>, we evaluate the performance of our method in the Reinforcement Learning setting. In Section <ref>, we conduct ablation studies and found the Causal Understanding Module contributes most of the performance gain. In Section <ref>, we demonstrate that the Causal Understanding Module is also generally applicable and effective in the Supervised Learning setting.
§.§ Experiment Setting
§.§.§ Task Descriptions
We evaluate our method over three tasks in the Reinforcement Learning setting:
(1) Object Navigation in RoboTHOR <cit.>, which requires an agent to navigate through its environment and find an object of a given category. For example, “Find an apple". The task consists of 12 possible goal object categories, and the agent is allowed to MoveAhead, RotateRight, RotateLeft, LookUp, and LookDown. The agent is considered to have completed the task if it takes a special Stop action and one goal object category is visible within 1 meter of the agent.
(2) Object Navigation in Habitat, which is defined similarly to the task in RoboTHOR, but Habitat has 21 objects and does not require the agent to be looking at a target object to succeed. Habitat uses scenes from the MatterPort3D <cit.> dataset of real-world indoor spaces.
(3) Point Navigation in Habitat, which requires an agent to navigate from a random initial position to polar goal coordinates. For example, “Navigate to (X, Y)". The agent is allowed to do three actions, which include MoveAhead, RotateRight, and RotateLeft. The agent should perform a special Done action when it reaches its goal coordinates. We train within the Gibson Database <cit.>.
§.§.§ Evaluation Metrics
Various metrics are employed to assess an agent's performance across different tasks. Following the previous works, we evaluate the performance of an agent on Object Navigation in RoboTHOR by Success Rate (SR) and Success weighted by Path Length (SPL).
Success Rate measures the frequency with which an agent successfully completes a task, while Success weighted by Path Length (SPL) takes into account the length of the path traversed by the agent to accomplish the task. On the other hand, for the two tasks in Habitat, apart from SR and SPL, we also evaluate the agent's performance on Goal Distance (GD), which quantifies the agent's proximity to the goal upon task completion. In general, an agent is considered to be better if it achieves a higher SR and SPL, or lower GD.
§.§.§ Baselines
(1) We compare our method with EmbCLIP, because both methods do not contain any task-specific designs and can be trained in an End-to-End manner, thus lying in the same category and can be compared directly. For both methods, we train the model for 20M steps. And during the training process, we select the checkpoint with the highest SR for evaluation and comparison.
(2) We compare our method with baseline methods specifically designed for each task. However, as discussed earlier, it is worth noting that these methods are not directly comparable with us because they are not in the same setting as our method. For example, these methods employ either task-specific hand-crafted logic, undergo training across multiple stages, or leverage extensive offline datasets to facilitate the training process. But for a comprehensive understanding of the performance of our method, we report the validation results of these methods as a reference.
Specifically, we compare our method with Action Boost, RGB+D, ICT-ISIA, and ProcTHOR <cit.> on RoboTHOR ObjNav; and compare with Stubborn <cit.>, TreasureHunt <cit.>, Habitat on Web (IL-HD) <cit.>, Red Rabbit <cit.>, PIRLNav<cit.> and RIM on Habitat ObjNav; and compare with DD-PPO <cit.>, Monocular Predicted Depth, Arnold and SRK AI on Habitat PointNav.
§.§.§ Environment and Parameters
For all the experiments, we trained our method for 20M steps. and select the checkpoint with the highest Success Rate (SR) for evaluation on other metrics. Regarding the model architecture, we use an Embedding layer for Action and Objective Embedding and a linear layer for the Feature Post-Processing, the Casual Understanding and the Actor Module. The Multi-modal Transformer Encoder is a single-layer transformer encoder with 4 self-attention heads and the dimension for each self-attention head is 392 thus the total dimension is 1568. The episode length is 128. The weight α for Causal Loss is 1. We train our framework using Adam <cit.> with a learning rate of 1e-4, and schedule the learning rate to linearly Decay to 0 when the training is finished. Our code is based on AllenAct <cit.>, a learning framework designed for Embodied-AI research. Any settings not specified here remain the same as <cit.>.
§.§ Results
We provide the results of our experiments for each task in Table <ref> and Table <ref>. As introduced in Section <ref>, we compare our method with two types of baselines, thus our findings similarly fall into two aspects:
(1) Compared with EmbCLIP, our method outperforms the baseline significantly on all the tasks across all evaluation metrics, as illustrated in Table <ref>-<ref>. This shows the effectiveness of our method. In particular, our method achieves more than a doubling of EmbCLIP on SPL and SR, in RoboTHOR Object Navigation and Habitat Object Navigation tasks.
(2) Compared with other methods with stronger assumptions and inductive bias, we observe that our method also achieves better results than these methods. While a few of these methods achieve similar performance to our method on some evaluation metrics, these method lacks generalizability and reproducibility due to various reasons, as elaborated earlier. For example, in Table <ref>, while ProcTHOR<cit.> also performs well, it requires extremely large offline data to train a large model first, and then fine-tune the pre-trained model as a second stage. Similarly, in Table <ref>, while RIM and PIRLNav <cit.> achieve a similar result to us, they also require multi-stage training and task-specific designs.
R6cm
Results on Habitat ObjNav.
SPL SR GD
CAT (Ours) 0.16 0.41 6.76
Emb-CLIP 0.07 0.15 7.13
RIM 0.15 0.37 6.80
PIRLNav 0.14 0.35 6.95
Stubborn 0.10 0.22 9.17
TreasureHunt 0.09 0.21 9.20
Habitat on Web 0.08 0.24 7.88
Red Rabbit 0.06 0.24 9.15
Besides, we conducted some case studies to investigate the benefits of implementing our method. In Figure <ref>a-b, we tested our method and EmbCLIP with the task “Find an AlarmClock” in RoboTHOR ObjNav, and found that our method encouraged the agent to stop at a spot closer to the objective, thus benefiting the performance. In Figure <ref>c-d, we test the methods in Habitat PointNav with the Nuevo scenario, where the green dot denotes the starting point and the red star denotes the target point. We found that our agent can directly navigate to the target point with an optimal route, while EmbCLIP failed to find the shortest path. We speculate that this is because our method enables the agent to understand the environment more efficiently, so the agent can navigate to a place that was less explored before.
In summary, both quantitative results and the qualitative sample check demonstrated the effectiveness and generalizability of our proposed method.
§.§ Ablation Studies
Our proposed method differs from EmbCLIP mainly from three perspectives: (1) We propose a Causal Understanding Module to reinforce the model's capability for environmental understanding; (2) We implement a Multi-modal transformer for feature encoding; (3) We fine-tune the CLIP visual encoder in the training process instead of holding it fixed. Therefore, we conduct ablation studies to examine the impact of these components. We conduct the studies on all three tasks and present the result in Table <ref>.
§.§.§ Impact of Causal Understanding Module
We first examine the impact of the Causal Understanding Module by implementing this module on EmbCLIP <cit.>. We keep all the configurations in EmbCLIP unchanged and compare the cases with/without the causal understanding module. We refer to this model as Causal-RNN in Table <ref> for simplicity.
R0.6
< g r a p h i c s >
Average Success Rate for EmbCLIP and Causal-RNN on RoboTHOR ObjNav. Our proposed Causal Understanding Module can: 1) significantly benefit the performance; and 2) significantly reduce the training time by 10 times.
Besides, we also plot a curve of Success Rate in Figure <ref> for direct comparison of the training progress, where the x-axis denotes the steps of training, and the y-axis denotes the Success Rate. We ran each case 10 times randomly and plotted the average Success Rate at each step. Due to the space limitation, we only plotted the curve for the first 10M steps.
We observe two clear benefits of implementing the Causal Understanding Module: (1) It significantly improves the performance of the baseline model without computational overhead. Specifically, we improve the success rate by more than 50% by adding one simple linear layer; (2) It significantly reduces the time required for training a satisfactory agent. Specifically, we trained the Causal-RNN model for 20M steps and achieved a success rate of 0.48. In contrast, as claimed in <cit.>, EmbCLIP achieved a success rate of 0.47 after training for 200M steps. Therefore, we significantly reduce the training time 10 times by adding one simple Linear layer.
§.§.§ Impact of Multi-modal Transformer
To examine the effectiveness of the Multi-modal Transformer Encoder, we simplify the architecture by removing the Causal Understanding Module from Figure <ref>, and holding CLIP fixed across the training process while keeping all other settings unchanged. In other words, we replace the encoder in EmbCLIP with our Multi-modal Transformer. We directly compare this method with EmbCLIP to understand the benefits of employing a multi-modal Transformer. We refer to this method as Transformer in Table <ref>, and observe that the Multi-modal Transformer Encoder is beneficial to the model's performance.
§.§.§ Impact of CLIP Fine-tuning
To examine the impact of a tunable visual encoder, we modify the architecture of EmbCLIP slightly by making the CLIP visual encoder trainable, and all other settings remain unchanged. We refer to this method as EmbCLIP (f.t.) in Table <ref>, and find that a tunable visual encoder is beneficial to the outcome.
§.§.§ Summary
Our ablation studies reveal that all three components are necessary for our method to achieve optimal outcomes. However, while a Multi-modal Transformer and a tunable Visual Encoder contribute to the agent's performance, the majority of the observed improvements in Section <ref> can be attributed to the Causal Understanding Module.
§.§ Causal Understanding Module in Supervised Learning
While our primary focus in the earlier sections is on experiments within the Reinforcement Learning setting, we also assess the effectiveness of our proposed method in Supervised Learning. Specifically, we implemented the Causal Understanding Module on existing methods including Seq2Seq <cit.>, Speaker Follower <cit.> and EnvDrop <cit.>, while maintaining all other settings as proposed in the respective papers. The experiments are conducted on the R2R dataset <cit.>, and we evaluate the performance of the methods by the commonly used metrics for the task including Navigation Error (NE), Oracle Success Rate (OSR), and SR introduced in earlier sections. The results of the validation unseen split are presented in Table <ref>. Overall, the results have demonstrated the effectiveness, consistency and generalizability of the proposed Causal Understanding Module.
§.§ Computational Cost
Our proposed architecture requires more computational resources than the baseline method. However, our experiments reveal that our method is not computationally expensive. We trained the model on our machine with a single NVIDIA TITAN X GPU, and it occupies only 5GB of Memory to train the model. It takes around 40 GPU hours to train the model for 20M steps, which is nearly identical to the baseline (EmbCLIP). Besides, our Causal-RNN model in Section <ref> is proven to be effective without any extra resources.
§ RELATED WORK
Many simulators and tasks have been proposed in recent years for Embodied AI. Simulators such as <cit.> enables the interaction between agents and their environments, and tasks such as Navigation <cit.>, Rearrangement <cit.>, Interpreting Instructions <cit.> are proposed to evaluate the performance of the agent. While fast progress has been made on some of the tasks, most of these tasks remain challenging.
While various methods have been proposed in recent years, most methods are specifically designed for certain tasks aiming for better results for the challenges <cit.>. However, although these methods achieve state-of-the-art performances, they cannot be generalized to other scenarios due to the dataset-specific architecture, inductive bias and hand-crafted logic <cit.>. Recently, <cit.> addressed these issues. They found that CLIP <cit.> makes an effective visual encoder and proposed a novel method that does not require any task-specific architectures and inductive bias.
§ CONCLUSION
In conclusion, this paper addresses key challenges in the domain of Navigation tasks in Embodied AI, particularly the limitations associated with prevalent vanilla sequential data modelling methods and task-specific designs, which often hinder generalizability and performance. By elucidating the intrinsic disparities between Navigation tasks and conventional sequential data modelling tasks, we introduce a novel causal framework to explain the necessity of the causal environment understanding module and proposed Causality-Aware Transformer (CAT), an End-to-End transformer-based method that exhibits notable performance improvements across diverse tasks and simulators, surpassing baseline approaches on multiple evaluation metrics. Furthermore, comprehensive ablation studies reveal that most of the performance gain of our method can be attributed to the Causal Understanding Module, which is proven to be effective and can be implemented in other methods across the paradigm of Reinforcement Learning and Supervised Learning without computational overhead.
splncs04
|
http://arxiv.org/abs/2409.03246v1 | 20240905045108 | A priori and a posteriori error bounds for the fully mixed FEM formulation of poroelasticity with stress-dependent permeability | [
"Arbaz Khan",
"Bishnu P. Lamichhane",
"Ricardo Ruiz-Baier",
"Segundo Villa-Fuentes"
] | math.NA | [
"math.NA",
"cs.NA",
"65N30, 65N15, 65J15, 76S05, 35Q74"
] |
Tensor network square root Kalman filter
for online Gaussian process regression
[
Received 16 July 2024; accepted 04 September 2024
=================================================================================
§ ABSTRACT
We develop a family of mixed finite element methods for a model of nonlinear poroelasticity where, thanks to a rewriting of the constitutive equations, the permeability depends on the total poroelastic stress and on the fluid pressure and therefore we can use the Hellinger–Reissner principle with weakly imposed stress symmetry for Biot's equations. The problem is adequately structured into a coupled system consisting of one saddle-point formulation, one linearised perturbed saddle-point formulation, and two off-diagonal perturbations. This system's unique solvability requires assumptions on regularity and Lipschitz continuity of the inverse permeability, and the analysis follows fixed-point arguments and the Babuška–Brezzi theory. The discrete problem is shown uniquely solvable by applying similar fixed-point and saddle-point techniques as for the continuous case. The method is based on the classical PEERS_k elements, it is exactly momentum and mass conservative, and it is robust with respect to the nearly incompressible as well as vanishing storativity limits. We derive a priori error estimates, we also propose fully computable residual-based a posteriori error indicators, and show that they are reliable and efficient with respect to the natural norms, and robust in the limit of near incompressibility. These a posteriori error estimates are used to drive adaptive mesh refinement. The theoretical analysis is supported and illustrated by several numerical examples in 2D and 3D.
Mixed finite elements,
stress-based formulation, nonlinear poroelasticity, fixed-point operators, error estimates.
65N30, 65N15, 65J15, 76S05, 35Q74.
§ INTRODUCTION
Nonlinear interaction between flow and the mechanical response of saturated porous media is of a great importance in many applications in biophysics, geomechanics, and tissue engineering, for example. One of such models is the equations of nonlinear poroelasticity, whose mathematical properties were studied in great detail, for example, in the references <cit.>. In these works, it becomes clear that a distinctive property of nonlinear poroelasticity models targeted for, e.g., soft tissue (cartilage, trabecular meshwork, brain matter, etc.), is that the nonlinear permeability (the hydraulic conductivity, defined as how easily pore fluid escapes from the compacted pore spaces) that depends on the evolving total amount of fluid, does not entail a monotone operator, and therefore one cannot readily apply typical tools from monotone saddle-point problems.
Our interest is in deriving mixed finite element (FE) formulations (solving also for other variables of interest), and for this we can cite in particular <cit.>, where fully mixed formulations based on the Hu–Washizu principle are studied. Writing the poroelasticity equations in terms of the strain tensor was motivated in particular in <cit.> because the permeability – at least in the regime we focus here – depends nonlinearly on the total amount of fluid, which is a function of strain.
The upshot here compared to <cit.> is that we are able to rewrite the constitutive equation for permeability to depend on the total poroelastic stress and on the fluid pressure (similarly as in, e.g., <cit.>). This allows us to revert to the more popular Hellinger–Reissner type of mixed formulations for poroelasticity <cit.> (without solving explicitly for the strain). Consequently, another appealing advantage with respect to the formulation in <cit.> is that, as in the Hellinger–Reissner formulation, the model becomes robust with respect to the Lamé constants. Also in contrast to <cit.>, in this work we use a mixed form for the fluid flow (adding the discharge flux as additional unknown), which gives the additional advantage of mass conservativity.
Regarding the well-posedness analysis, the aforementioned non-monotonicity of the permeability suggest, for example, to use a fixed-point argument. We opt for freezing the arguments of permeability, turning the double saddle-point structure with three perturbations coming from the stress trace operator and from the L^2 pressure blocks, into two decoupled saddle-point problems whose separate solvability can be established from the classical literature for weakly symmetric elasticity and mixed reaction-diffusion equations. Banach fixed-point theorem is then used to show well-posedness of the overall problem. This analysis needs to verify conditions of ball-mapping and contraction of the fixed-point map, and this imposes a small data assumption, which can be carried over to the external load, mass source, boundary displacement, and boundary fluid pressure. Compared to <cit.>, these conditions are less restrictive and imply also a less restrictive discrete analysis (which follows closely the continuous one), due to the analysis being performed using the inverse of the Hooke tensor, which allows us to achieve robustness with respect to the first Lamé parameter λ.
Note that at the discrete level we can simply use conforming FE spaces. Discrete inf-sup conditions are already well-known for the chosen FE families of PEERS_k and Raviart–Thomas elements used for the solid and fluid sub-problems (but several other inf-sup stable spaces that satisfy a discrete kernel characterisation are also possible). We emphasise that, similarly to <cit.>, all estimates hold uniformly in the limit of nearly incompressibility (implying that the formulation is Poisson locking-free) as well as when the constrained storage coefficient vanishes (poroelastic locking-free), and therefore they are free of non-physical pressure oscillations.
An additional goal of this work is to derive efficient and reliable residual a posteriori error estimators for the nonlinear poroelasticity equations. The approach follows a similar treatment as that of <cit.> (which focuses on mixed formulations of stress-assisted diffusion equations), with the difference that here we do not need to include augmentation terms for the mixed form of the mixed diffusion problem. The main ingredients in the analysis of these estimates are Helmholtz decompositions and a global inf-sup (together with boundedness and Lipschitz continuity of coupling terms), local inverse and trace estimates, bubble-based localisation arguments, and properties of Clément and Raviart–Thomas interpolators. See also <cit.> for estimators in a similar multiphysics context and, e.g., <cit.> for mixed linear elasticity. Note that for the reliability of the estimator the aforementioned Helmholtz decompositions – for both tensor-vector and vector-scalar cases – should be valid for mixed boundary conditions. For this we follow <cit.> and <cit.>, from which we inherit an a convexity assumption on the Neumann sub-boundary (where we impose traction and flux boundary conditions).
Outline. The rest of the paper is organised as follows. The remainder of this Section has a collection of preliminary definitions and notational convention, as well as the statement of the governing partial differential equations. The weak formulation and proofs of the uniform boundedness of the bilinear forms and suitable inf-sup conditions are shown in Section <ref>. The fixed-point analysis of the coupled problem is carried out in Section <ref>. Section <ref> then focuses on the Galerkin discretisation, including its well-posedness analysis and definition of specific FE subspaces that provide momentum and mass conservativity. In Section <ref> we show a Céa estimate and using appropriate approximation properties we derive optimal a priori error bounds including also the higher order case. The definition of a residual a posteriori error estimator and the proofs of its reliability and efficiency are presented in Section <ref>. We conclude in Section <ref> with some numerical tests that both validate and underline the theoretical properties of the proposed discretisations.
Notation and preliminaries.
Let ^2(Ω) be the set of all square-integrable functions in Ω⊂^d where d ∈{2,3} is the spatial dimension, and denote by ^2(Ω)=^2(Ω)^d its vector-valued counterpart and by ^2(Ω)=^2(Ω)^d× d its tensor-valued counterpart. We also write
:={∈^2(Ω): = -^ t},
to represent the skew-symmetric tensors in Ω with each component being square-integrable. Standard notation will be employed for Sobolev spaces ^m(Ω) with m≥ 0 (and we note that ^0(Ω)=^2(Ω)). Their norms and seminorms are denoted as ·_m,Ω and |·|_m,Ω, respectively (as well as for their vector and tensor-valued counterparts ^m(Ω), ^m(Ω)) see, e.g., <cit.>.
As usual 𝕀 stands for the identity tensor in ^d× d,
and |·| denotes the Euclidean norm in ^d. Also, for any vector field =(v_i)_i=1,d we set the gradient and divergence operators as
:= (∂ v_i/∂ x_j)_i,j=1,d and ÷ := ∑_j=1^d ∂ v_j/∂ x_j.
In addition, for any tensor fields =(τ_ij)_i,j=1,d
and = (ζ_ij)_i,j=1,d, we let be the divergence operator ÷ acting along the rows of , and define the transpose, the trace, the tensor inner product, and the deviatoric tensor as
^ := (τ_ji)_i,j=1,d,
() := ∑_i=1^dτ_ii,
: := ∑_i,j=1^nτ_ijζ_ij, and
^ := - 1/n (), respectively.
We also recall the Hilbert space
(÷;Ω) := {∈^2(Ω): ÷ ∈L^2(Ω)},
with norm _÷;Ω^2:=_0,Ω^2+÷ _0,Ω^2, and introduce its tensor-valued version
(;Ω) := {∈^2(Ω): ∈^2(Ω)}.
Governing equations.
Let us consider a fully-saturated poroelastic medium (consisting of a mechanically isotropic and homogeneous fluid-solid mixture) occupying the open and bounded domain Ω in ^d, the Lipschitz boundary ∂Ω is partitioned into disjoint sub-boundaries ∂Ω:= Γ_∪Γ_, and it is assumed for the sake of simplicity that both sub-boundaries are non-empty |Γ_|·|Γ_|>0.
The symbol will stand for the unit outward normal vector on the boundary. Let ∈^2(Ω) be a prescribed body force per unit of volume (acting on the fluid-structure mixture) and let g ∈ L^2(Ω) be a net volumetric fluid production rate.
The balance of linear momentum for the solid-fluid mixture is written as
- = in Ω,
with being the total Cauchy stress tensor of the mixture (sum of the effective solid and fluid stresses), whose dependence on strain and on fluid pressure is given by the constitutive assumption (or effective stress principle)
= () -α p 𝕀 in Ω.
Here the skeleton displacement vector from the position ∈Ω is an unknown, the tensor () := 1/2 ( + []^ t ) is the infinitesimal strain, by we denote the fourth-order elasticity tensor, also known as Hooke's tensor (symmetric and positive definite and characterised by := λ()𝕀 + 2μ), 𝕀 is the identity second-order tensor, λ and μ are the Lamé parameters (assumed constant and positive), 0≤α≤ 1 is the Biot–Willis parameter, and p denotes the Darcy fluid pressure (positive in compression), which is an unknown in the system.
We also consider the balance of angular momentum, which in this context states that the total poroelastic stress is a symmetric tensor
= ^ t.
To weakly impose it, it is customary to use the rotation tensor
= 1/2 ( - []^ t ) = - ().
The fluid content (due to both fluid saturation and local volume dilation) is given by
ζ = c_0 p + α÷,
where c_0 ≥ 0 is the constrained specific storage
coefficient.
Using Darcy's law to describe the discharge velocity in terms of the fluid pressure gradient, the balance of mass for the total amount of fluid is ∂_tζ - ÷ (κ∇ p) = g in Ω× (0,t_end), where κ is the intrinsic permeability
of
the medium, a nonlinear function of the porosity. In turn, in the small strains limit the porosity can be approximated by a linear function of the fluid content ζ (see for example <cit.>), and so, thanks to
(<ref>), we can simply write κ( (),p). Furthermore, after a backward Euler semi-discretisation in time with a constant time step and rescaling appropriately, we only consider the type of equations needed to solve at each time step and therefore we will concentrate on the form
c_0 p + α () - ÷ (κ( (),p) ∇ p) = g in Ω.
Typical constitutive relations for permeability are, e.g., exponential or Kozeny–Carman type (cf. <cit.>)
κ( (),p) = k_0/μ_f𝕀 + k_1/μ_fexp(k_2 (c_0p + α ()))𝕀, κ( (),p) = k_0/μ_f𝕀 + k_1(c_0p + α ())^3/μ_f(1-(c_0p + α ()))^2𝕀,
where μ_f denotes the viscosity of the interstitial fluid and k_0,k_1,k_2 are model constants. We note that in the case of incompressible constituents one has c_0 = 0 and α = 1, indicating that permeability depends only on the dilation () = ÷ (see, e.g., <cit.>). We also note that even in such a scenario (of incompressible phases) the overall mixture is not necessarily incompressible itself.
More precise assumptions on the behaviour of the permeability are postponed to Section <ref>.
Next we note that from (<ref>) we can obtain
= (dλ+2μ)÷ -dα p
^-1 + αdλ+2μ p 𝕀 = () in Ω.
Then, from the first equation in
(<ref>) we get
() = 1dλ + 2μ + dαdλ+2μ p,
and therefore
the dependence of κ on () and p (cf. (<ref>)) can be written in terms of and p as follows
[ κ(,p) = k_0/μ_f𝕀 + k_1/μ_fexp(k_2dλ+2μ( (c_0(dλ+2μ)+dα^2) p + α) )𝕀,; κ(,p) = k_0/μ_f𝕀 + k_1( (c_0(dλ+2μ)+dα^2) p + α)^3/(dλ+2μ)μ_f(dλ+2μ-((c_0(dλ+2μ)+dα^2) p + α))^2𝕀. ]
In addition, putting together the second equation in (<ref>) and (<ref>) we obtain:
^-1 + αdλ+2μ p 𝕀 = - in Ω.
Finally, we introduce the discharge flux as an unknown defined by the constitutive relation
κ(,p)^-1 = ∇ p,
and combining (<ref>) and (<ref>), we are able to rewrite the mass balance equation as
c_0 p + αdλ+2μ + dα^2dλ+2μ p - ÷ = g in Ω.
To close the system, we consider mixed boundary conditions for a given _∈^1/2(Γ_) and p_∈^1/2(Γ_):
= _ p = p_Γ_, · = 0 = 0Γ_.
§ WEAK FORMULATION AND PRELIMINARY PROPERTIES
§.§ Derivation of weak forms
Let us define the following spaces
_(;Ω) := {∈(;Ω): =0Γ_}, _(÷;Ω) := {∈(÷;Ω): ·=0 Γ_}.
We test equation (<ref>) against ∈^2(Ω), equation (<ref>) against ∈_(;Ω), impose the symmetry of weakly, test equation (<ref>) against q∈^2(Ω), equation (<ref>) against ∈_(÷;Ω), integrate by parts and using the boundary conditions (<ref>) naturally, and then reorder the resulting equations. Then we arrive at
∫_Ω^-1: + αdλ+2μ∫_Ω p + ∫_Ω· + ∫_Ω: = ⟨,_⟩_Γ_ ∀ ∈_(;Ω),
∫_Ω· = -∫_Ω· ∀ ∈^2(Ω),
∫_Ω: =0 ∀ ∈,
∫_Ωκ(,p)^-1· + ∫_Ω p ÷ = ⟨·,p_⟩_Γ_ ∀ ∈_(÷;Ω),
(c_0 +dα^2dλ+2μ)∫_Ω p q + αdλ+2μ∫_Ω q - ∫_Ωq÷ = ∫_Ω g q ∀ q∈^2(Ω),
where ⟨·,·⟩_Γ_ denotes the duality pairing between ^-1/2(Γ_) and its dual ^1/2(Γ_) with respect to the inner product in L^2(Γ_), and we use the same notation, ⟨·,·⟩_Γ_, in the vector-valued case.
Introducing bilinear forms a:_(;Ω)×_(;Ω)→, b:_(;Ω)×^2(Ω) ×→, c:_(;Ω)×^2(Ω)→, the nonlinear form a_, p:_(÷;Ω)×_(÷;Ω)→, and the bilinear weak forms b:_(÷;Ω) ×^2(Ω) → and c:^2(Ω) ×^2(Ω) →, defined by
[ a(,) := ∫_Ω^-1:,
b(,(,)) := ∫_Ω· + ∫_Ω:, c(,q):=αdλ+2μ∫_Ω q ,; [3ex]
a_, p(,) := ∫_Ωκ(, p)^-1· , b(,q) := ∫_Ω q ÷, c(p,q):=(c_0 +dα^2dλ+2μ)∫_Ω p q , ]
respectively, and linear functionals H∈_(;Ω)', F∈ (^2(Ω)×)', H∈_(÷;Ω)', G∈^2(Ω)'
H(): = ⟨,_⟩_Γ_,
F(,):= -∫_Ω·, H(): = ⟨·,p_⟩_Γ_,
G(q):= -∫_Ω g q ,
we arrive at:
find (,,,,p)∈_(;Ω)×^2(Ω)××_(÷;Ω)×^2(Ω), such that:
a(,) + b(,(,)) + c(,p) = H() ∀ ∈_(;Ω),
b(,(,)) = F(,) ∀ ∈^2(Ω), ∀ ∈,
a_,p(,) + b(,p) = H() ∀ ∈_(÷;Ω),
b(,q) - c(p,q) - c(,q) = G(q) ∀ q∈^2(Ω).
§.§ Stability properties and suitable inf-sup conditions
For the sake of the analysis, we allow the permeability κ(,p) to be anisotropic but still require κ(,p)^-1 to be uniformly positive definite in ^∞(Ω) and Lipschitz continuous with respect to p∈^2(Ω). That is, there exist positive constants κ_1,κ_2 such that
κ_1||^2 ≤^ tκ(·,·)^-1,
κ(·,p_1)^-1 - κ(·,p_2)^-1_^∞(Ω)≤κ_2 p_1 -p_2_0,Ω,
for all ∈ℝ^d∖{}, and for all p_1,p_2∈^2(Ω).
We start by establishing the boundedness of the bilinear forms a, b, c, b, c:
|a(,)|≤1μ_;Ω_;Ω, |b(,(,))|≤_;Ω(_0,Ω + _0,Ω),
|c(,q)|≤γ_;Ωq_0,Ω,
| b(,q)|≤_÷;Ωq_0,Ω , | c(p,q)|≤γp_0,Ωq_0,Ω,
where
γ:=α√(d)dλ+2μγ:=c_0 +dα^2dλ+2μ.
On the other hand, using Hölder's and trace inequalities we can readily observe that the right-hand side functionals are all bounded
|H()| ≤__1/2,Γ__;Ω,
|F(,)| ≤_0,Ω_0,Ω≤_0,Ω(_0,Ω + _0,Ω),
| H()| ≤p__1/2,Γ__÷;Ω, |G(q)| ≤g_0,Ωq_0,Ω.
Let us now denote by and the kernels of b and b, respectively. They are characterised, respectively, as
= {∈_(;Ω) : = = ^ tΩ},
= {∈_(÷;Ω) : ÷ = 0 in Ω}.
From <cit.> we easily deduce that there exists c_a>0 such that
a(,) ≥ c_a _;Ω^2 ∀ ∈.
The following inf-sup conditions are well-known to hold (see, e.g., <cit.>):
sup_≠∈_(;Ω) b(,(,))/_;Ω ≥β(_0,Ω +_0,Ω)
∀ (,)∈^2(Ω)×,
sup_≠∈_(÷;Ω) b(,q)/ψ_÷;Ω ≥βq_0,Ω ∀ q∈^2(Ω).
Finally, we observe that c is elliptic over ^2(Ω)
c(q,q) ≥γ q_0,Ω^2.
§ ANALYSIS OF THE COUPLED PROBLEM
We now use a combination of the classical Babuška–Brezzi and Banach fixed-point theorems to establish the well-posedness of (<ref>) under appropriate assumptions on the data.
§.§ A fixed‑point operator
We adopt a similar approach to, e.g., <cit.>.
Firs, we define a closed ball of ^2(Ω) centred at the origin and of given radius r>0
:= { p ∈^2(Ω) : p_0,Ω≤ r }.
Then, for a given (, p)∈_(;Ω)×, thanks to the assumptions on the nonlinear permeability, we can infer that the form a_, p (cf. (<ref>)) is continuous, as well as coercive over
| a_, p(,)| ≤ C_ a _÷;Ω_÷;Ω,
a_, p(,) ≥κ_1 _÷;Ω^2 ∀ , ∈.
Then, we define the auxiliary operators
:⊆^2(Ω)→_(;Ω)×(^2(Ω)×) and : _(;Ω)×→_(÷;Ω)×^2(Ω), given by
( p):=(R_1( p),(R_2( p),R_3( p)))=(,(,)) ∀ p∈,
with (,(,))∈_(;Ω)×(^2(Ω)×) satisfying
[ a(,) + b(,(,)) = H() - c(, p) ∀ ∈_(;Ω),; [1ex]
b(,(,)) = F(,) ∀ (,)∈^2(Ω)×, ]
and
(, p):=(S_1(, p),S_2(, p) )=(,p) ∀ (, p)∈_(;Ω)×,
where (,p) is such that
[ a_, p(,) + b(,p) = H() ∀ ∈_(÷;Ω),; [1ex]
b(,q) - c(p,q) = G(q) + c(,q) ∀ q∈^2(Ω) . ]
By virtue of the above, by defining the operator :⊆^2(Ω)→^2(Ω) as
( p):=S_2(R_1( p), p),
it is clear that (,,,,p) is a solution to (<ref>) if and only if p∈ solves the fixed-point problem
(p)=p.
Thus, in what follows, we focus on proving the unique solvability of (<ref>).
§.§ Well‑definedness of Lg
From the definition of in (<ref>) it is evident that its well-definedness requires the well-posedness of problems (<ref>) and (<ref>). We begin by analysing that of (<ref>).
Let p ∈ (cf. (<ref>)). Then, there exists a unique (,(,)) ∈_(;Ω)×^2(Ω)× solution to (<ref>).
In addition, there exist C_1, C_2>0, such that
[ _;Ω≤ C_1 (__1/2,Γ_ + _0,Ω) + 1c_a γ p_0,Ω,; _0,Ω + _0,Ω≤ C_2 (__1/2,Γ_ + _0,Ω) + 1β(1 + 1μ c_a) γ p_0,Ω. ]
It is a direct consequence of the Babuška–Brezzi theory <cit.>, using (<ref>) and (<ref>) with
C_1:=(1c_a + 1β)(1 + 1μ c_a) C_2:= 1β(1 + 1μ c_a)(1 + 1μβ);
we omit further details.
Next, we provide the well-definedness of , or equivalently, the well-posedness of (<ref>).
Let (, p)∈_(;Ω)×. Then, there exists a unique (,p) ∈_(÷;Ω)×^2(Ω) solution to (<ref>).
In addition, there exist C>0 such that
_÷;Ω + p_0,Ω≤ C (g_0,Ω + p__1/2,Γ_ + γ_;Ω ).
The existence of a unique solution (,p) to (<ref>) is straightforward given the properties of the forms a, b, and c. By examining (<ref>), (<ref>), and (<ref>), we can confirm that the assumptions of <cit.> are satisfied. In addition, and p satisfy the following bounds
_÷;Ω ≤( 1κ_1 + C_1 + C_1√(γ))(2max{ C_2, C_3})^1/2 (g_0,Ω + p__1/2,Γ_ + γ _;Ω ),
p_0,Ω ≤1β(1 + κ_2 r (1κ_1 + C_1 + C_1√(γ)) (2max{ C_2, C_3})^1/2) (g_0,Ω + p__1/2,Γ_ + γ _;Ω ),
where
C_1:=1β(1 + C_ aκ_1), C_2:= 1κ_1 + C_1 + γ C_1^2 C_3:= C_1 ( 1 + C_ aβ + C_1 C_ a^2 γβ^2);
and the above implies (<ref>). We leave out additional minor details.
Given r>0, let us assume that
C (1 + γ C_1 )(
g_0,Ω+ p__1/2,Γ_ + __1/2,Γ_ + _0,Ω) + C c_a γ^2 r≤ r,
where C_1, C_1 and γ are defined in (<ref>), (<ref>) and (<ref>), respectively. Then, for a given p∈ (cf. (<ref>)), there exists a unique p ∈ such that (p) = p.
From Lemmas <ref> and <ref>, we ascertain that the operators and , respectively, are well-defined, thereby ensuring the well-definition of . Furthermore, from (<ref>) and (<ref>), for each p ∈, we deduce that
( p)_0,Ω = S_2(R_1( p), p)_0,Ω
≤ C (g_0,Ω + p__1/2,Γ_ ) + C γ (R_1( p)_;Ω
≤ C (g_0,Ω + p__1/2,Γ_ ) + C γ C_1 (__1/2,Γ_ + _0,Ω) + Cc_a γ^2 p_0,Ω,
this, combined with assumption (<ref>), implies ()⊆, which concludes the proof.
Another option for defining the operator (see (<ref>)) is to introduce the perturbation c on the right-hand side of the system, given by
[ a_, p(,) + b(,p) = H() ∀ ∈_(÷;Ω),; [1ex]
b(,q) = G(q) + c(,q) + c( p,q) ∀ q∈^2(Ω). ]
But in this case, the assumption of small data in (<ref>) (as well as in other instances, later on) would also involve the storativity parameter c_0, making the analysis slightly more restrictive.
§.§ Existence and uniqueness of weak solution
We begin by establishing two lemmas deriving conditions under which the operator is a contraction.
Given p_1, p_2, ∈, the following estimate holds
R_1( p_1) - R_1( p_2)_;Ω≤1c_a γ p_1- p_2_0,Ω.
Let (_1,(_1,_1)), (_2,(_2,_2)) ∈_(;Ω)×(^2(Ω)×), such that ( p_1)=(_1,(_1,_1)) ad ( p_2)=(_2,(_2,_2)). Then, from the definition of (cf. (<ref>)), we have
[ a(_1-_2,) + b(,(_1-_2,_1-_2)) = - c(, p_1- p_2) ∀ ∈_(;Ω),; [1ex]
b(_1-_2,(,)) = 0 ∀ (,)∈^2(Ω)×. ]
Since _1-_2 ∈ (cf. (<ref>)), taking = _1-_2 in (<ref>), and utilising the ellipticity of a on (cf. (<ref>)) along with the bound of c (cf. (<ref>)), we obtain:
c_a _1-_2_;Ω^2 ≤ a(_1-_2,_1-_2) = - c(_1-_2, p_1- p_2) ≤ γ _1-_2_;Ω p_1- p_2_0,Ω,
which concludes the proof.
Given (_1, p_1), (_2, p_2), ∈_(;Ω)×, the following estimate holds
[ S_2(_1, p_1) - S_2(_2, p_2)_0,Ω; ≤2κ_2 Cmin{γ, κ_1} (g_0,Ω + p__1/2,Γ_ + γ_2_;Ω) p_1 - p_2_0,Ω + 2min{γ, κ_1} γ _1-_2_;Ω. ]
Let (_1,p_1), (_2,p_2)∈_(÷;Ω)×^2(Ω), such that (_1, p_1)=(_1,p_1) and (_2, p_2)=(_2, p_2). Then, from the definition of (cf. (<ref>)), and employing similar arguments to those in Lemma <ref>, we have
a__1, p_1(_1,_1-_2) - a__2, p_2(_2,_1-_2) + c(p_1-p_2,p_1-p_2) = - c(_1-_2,p_1-p_2),
by adding ± a__1, p_1(_2,_1-_2) in the last equation, we obtain
a__1, p_1(_1-_2,_1-_2) + c(p_1-p_2,p_1-p_2)
= a__2, p_2(_2,_1-_2) - a__1, p_1(_2,_1-_2) - c(_1-_2,p_1-p_2).
Then, using the the first assumption for κ (cf. (<ref>)), the ellipticity of c (see (<ref>)), the definition of a_, p (cf. (<ref>)) and the continuity of the form c (see (<ref>)), we deduce
[ κ_1 _1-_2_0,Ω^2 + γp_1-p_2_0,Ω^2 ≤ a__1, p_1(_1-_2,_1-_2) + c(p_1-p_2,p_1-p_2); = ∫_Ω (κ(_2, p_2)^-1-κ(_1, p_1)^-1) _2 · (_1-_2) - c(_1-_2,p_1-p_2); ≤κ(_2, p_2)^-1- κ(_1, p_1)^-1_^∞(Ω) _2_0,Ω_1-_2_0,Ω + γ _1-_2_;Ωp_1-p_2_0,Ω. ]
From the last equation, by utilising the second assumption regarding κ (see (<ref>)), we obtain
[ 1/2min{γ,κ_1}(_1-_2_0,Ω + p_1-p_2_0,Ω)^2 ≤κ_1 _1-_2_0,Ω^2 + γp_1-p_2_0,Ω^2; ≤κ_2 p_2- p_1_0,Ω_2_0,Ω_1-_2_0,Ω + γ _1-_2_;Ωp_1-p_2_0,Ω; ≤(κ_2 p_2- p_1_0,Ω_2_0,Ω + γ _1-_2_;Ω) (_1-_2_0,Ω + p_1-p_2_0,Ω), ]
the last, together with the fact that _2 satisfies (<ref>), leads to the following bound
[ 1/2min{γ,κ_1}(_1-_2_0,Ω + p_1-p_2_0,Ω) ≤κ_2 p_2- p_1_0,Ω_2_0,Ω + γ _1-_2_;Ω; ≤κ_2 p_2- p_1_0,Ω C (g_0,Ω + p__1/2,Γ_ + γ_2_;Ω ) + γ _1-_2_;Ω, ]
and this yields (<ref>), concluding the proof.
The following theorem presents the main result of this section, establishing the existence and uniqueness of the solution to the fixed-point problem (<ref>), or equivalently, the well-posedness of problem (<ref>).
Given r>0, assume that ∈^2(Ω), g ∈ L^2(Ω), _∈^1/2(Γ_), p_∈^1/2(Γ_) and γ satisfies
[ 2 max{1, κ_2}min{γ, κ_1 , r} { C (1 + C_1γ)(g_0,Ω + p__1/2,Γ_ + __1/2,Γ_ + _0,Ω)+γ^2c_a(1κ_2 + C r)} < 1. ]
Then, (cf. (<ref>)) has a unique fixed point p∈. Equivalently, (<ref>) has a unique solution (,,,,p)∈_(;Ω)×^2(Ω)××_(÷;Ω)×. In addition,
there exists C>0, such that
_;Ω + _0,Ω + _0,Ω + _÷;Ω + p_0,Ω≤ C (g_0,Ω + p__1/2,Γ_ + __1/2,Γ_ + _0,Ω + γ r ).
Recall that (<ref>) ensures the well-definedness of . Let p_1, p_2, p_1, p_2 ∈, such that ( p_1)=p_1 and ( p_2)=p_2. From the definition of (see (<ref>)), and the estimates (<ref>) and (<ref>), we deduce
p_1-p_2_0,Ω = ( p_1)-( p_2)_0,Ω = S_2(R_1( p_1), p_1)-S_2(R_1( p_2), p_2)_0,Ω
≤2κ_2 Cmin{γ, κ_1} (g_0,Ω + p__1/2,Γ_ + γR_1( p_2)_;Ω) p_1 - p_2_0,Ω+ 2min{γ, κ_1} γ R_1( p_1)-R_1( p_2)_;Ω
≤2κ_2 Cmin{γ, κ_1} (g_0,Ω + p__1/2,Γ_) p_1 - p_2_0,Ω + 2κ_2 Cmin{γ, κ_1}γR_1( p_2)_;Ω p_1 - p_2_0,Ω
+ 2c_a min{γ, κ_1} γ^2 p_1- p_2_0,Ω,
the above, along with the fact that R_1( p_2) satisfies (<ref>) and p_2∈, implies
p_1-p_2_0,Ω≤2κ_2 Cmin{γ, κ_1} (g_0,Ω + p__1/2,Γ_) p_1 - p_2_0,Ω+ 2c_a min{γ, κ_1} γ^2 p_1- p_2_0,Ω
+2κ_2 Cmin{γ, κ_1}γ( C_1 (__1/2,Γ_ + _0,Ω) + 1c_a γ r ) p_1 - p_2_0,Ω
≤2min{γ, κ_1} {κ_2 C (1 + C_1γ)(g_0,Ω + p__1/2,Γ_ + __1/2,Γ_ + _0,Ω)+γ^2c_a(1+κ_2 C r)} p_1- p_2_0,Ω,
which together with (<ref>) and the Banach fixed-point theorem yields that has a unique fixed point in .
Finally, (<ref>) is derived analogously to the estimates in (<ref>) and (<ref>), which completes the proof.
The operator (see (<ref>)) could be also defined, for example :→, with := { (, p) ∈_(;Ω)×^2(Ω) : _;Ω + p_0,Ω≤ r } and (, p):=(R_1( p),S_2(, p))=(,p), with R_1 and S_2 defined as in (<ref>) and (<ref>), respectively.
§ FINITE ELEMENT DISCRETISATION
In this section, we present and analyse the Galerkin scheme for problem (<ref>). It is worth mentioning upfront that the well-posedness analysis can be straightforwardly extended from the continuous problem to the discrete case. Therefore, we omit many of the details.
§.§ Finite element spaces and Galerkin scheme
Let us consider a regular partition 𝒯_h of Ω̅ made up of triangles K (in ℝ^2) or tetrahedra K (in ℝ^3) of diameter h_K, and denote the mesh size by h := max{ h_K: K ∈𝒯_h}.
Given an integer ℓ≥ 0 and K ∈𝒯_h, we first let P_ℓ(K) be the space of polynomials of degree ≤ℓ defined on K, whose vector and tensor versions are denoted _ℓ(K) := [P_ℓ(K)]^d and ℙ_ℓ(K)
= [P_ℓ(K)]^d × d, respectively. Also, we let 𝐑𝐓_ℓ(K) := _ℓ(K) ⊕P_ℓ(K) be the local Raviart–Thomas space of order ℓ defined on K, where stands for a generic vector in ^d, and denote by ℝ𝕋_k(K) the tensor-valued counterpart of this space.
For each K∈𝒯_h we consider the bubble space of order k, defined as
𝐁_k(K):=
𝐜𝐮𝐫𝐥^t(b_KP_k(K)) in ℝ^2,
∇× (b_K𝐏_k(K)) in ℝ^3,
where b_K is a suitably normalised cubic polynomial on K, which vanishes on the boundary of K (see <cit.>).
We recall the classical PEERS_k elements (cf. <cit.>) to define the discrete subspaces for the stress tensor , the displacement , and the rotation tensor
^_h :={_h ∈_(;Ω): _h|_K∈ℝ𝕋_k(K)⊕[𝐁_k(K)]^d ∀ K∈𝒯_h},
^_h := {_h ∈^2(Ω): _h|_K∈P_k(K) ∀ K∈𝒯_h},
^_h :={_h∈∩ℂ(Ω) and _h|_K∈ℙ_k+1(K) ∀ K∈𝒯_h},
and the following estimates are proven for the PEERS_k elements (cf. <cit.>)
sup_≠_h∈^_hb(_h,(_h,_h))/_h_;Ω ≥β^*(_h_0,Ω +_h_0,Ω)
∀ (_h,_h)∈^_h×^_h,
a(_h,_h) ≥ c_a _h_;Ω^2 ∀ _h∈_h,
where _h denotes the discrete kernel of b, that is
_h := {_h∈^_h : b(_h,(_h,_h)) = 0 ∀ (_h,_h)∈^_h×^_h}.
Additionally, for and the pressure p, we define the FE subspaces
^_h :={_h∈_(÷;Ω): _h|_K∈𝐑𝐓_k(K) ∀ K∈𝒯_h},
^p_h := {q_h ∈^2(Ω): q_h|_K∈P_k(K) ∀ K∈𝒯_h},
and it is well known that b satisfies the inf-sup condition (see, e.g., <cit.>)
sup_≠_h∈^_h b(_h,q_h)/_h_÷;Ω≥β^* q_h_0,Ω ∀ q_h∈^p_h.
Note that it is of course possible to consider other conforming and inf-sup stable spaces such as Arnold–Falk–Winther and Brezzi–Douglas–Marini instead of (<ref>) and (<ref>), respectively.
The Galerkin scheme for (<ref>) reads: find (_h,_h,_h,_h,p_h)∈^_h×^_h×^_h×^_h×^p_h, such that:
[ a(_h,_h) + b(_h,(_h,_h)) + c(_h,p_h) = H(_h) ∀ _h∈^_h,; [1ex]
b(_h,(_h,_h)) = F(_h,_h) ∀(_h,_h)∈^_h×^_h ,; [1ex]
a__h,p_h(_h,_h) + b(_h,p_h) = H(_h) ∀ _h∈^_h,; [1ex]
b(_h,q_h) - c(p_h,q_h) - c(_h,q_h) = G(q_h) ∀ q_h∈^p_h. ]
§.§ Analysis of the discrete problem
In this section, we analyse the Galerkin scheme (<ref>). It's worth noting that establishing well-posedness can be readily achieved by extending the results derived for the continuous problem to the discrete setting.
Firstly, and similarly to the continuous case, we define the following set
_h := { p_h ∈^p_h : p_h_0,Ω≤ r }.
Next, for a fixed p_h in _h, we have that the bilinear form a__h,p_h satisfies
a__h, p_h(_h,_h) ≥κ_1 _h_÷;Ω^2 ∀ _h∈_h,
where _h is the discrete kernel of b
[ _h := {_h∈^_h : b(_h,q_h) = 0 ∀ q_h∈^p_h}. ]
Additionally, we define the discrete operators _h:_h⊆^p_h→^_h×(^_h×^_h) and _h: ^_h×_h →^_h×^p_h, respectively, by
_h( p_h):=(R_1,h( p_h),(R_2,h( p_h),R_3,h( p_h)))=(_h,(_h,_h)) ∀ p_h∈_h,
where (_h,(_h,_h))∈^_h×(^_h×^_h) is the unique solution of
[ a(_h,_h) + b(_h,(_h,_h)) = H(_h) - c(_h, p_h) ∀ _h∈^_h,; [1ex]
b(_h,(_h,_h)) = F(_h,_h) ∀ (_h,_h)∈^_h×^_h, ]
and
_h(_h, p_h):=(S_1,h(_h, p_h),S_2,h(_h, p_h) )=(_h,p_h) ∀ (_h, p_h)∈^_h×_h,
where (_h,p_h) is the unique tuple in ^_h××^p_h such that
[ a__h, p_h(_h,_h) + b(_h,p_h) = H(_h) ∀ _h∈^_h,; [1ex]
b(_h,q_h) - c(p_h,q_h) = G(q_h) + c(_h,q_h) ∀ q_h∈^p_h. ]
Employing properties (<ref>), (<ref>), (<ref>), (<ref>) and (<ref>) and proceeding exactly as for the continuous case (Lemmas <ref> and <ref>), it can be
easily deduced that both operators are well-defined. Then, analogously to the continuous case, we define the following fixed-point operator
_h:_h⊆^p_h→^p_h, p_h↦_h( p_h):=S_2,h(R_1,h( p_h), p_h),
which is clearly well-defined (since R_h and S_h are). Further, it can be easily deduced that _h(_h)⊆_h if
C^* (1 + γ C_1^* )(
g_0,Ω+ p__1/2,Γ_ + __1/2,Γ_ + _0,Ω) + C^* c_a γ^2 r≤ r,
where C^* and C_1^* (depending on c_a, μ, κ_1, κ_2, C_ a, β^*, β^*) are the discrete versions of the constants C and C_1 (cf. (<ref>) and (<ref>)). Finally, it is clear that (_h,_h,_h,_h,p_h) is a solution to (<ref>) if and only if p_h satisfies
_h(p_h)=p_h.
The main outcome of this section is presented in the following theorem, establishing the existence and uniqueness of a solution to the fixed-point problem (<ref>), equivalently proving the well-posedness of problem (<ref>).
Given r>0, assume that
the data and γ satisfy
2 max{1, κ_2}/min{γ, κ_1 , r} { C^* (1 + C_1^*γ)(g_0,Ω + p__1/2,Γ_ + __1/2,Γ_ + _0,Ω)+γ^2c_a(1/κ_2 + C^* r)} < 1.
Then, _h (cf. (<ref>)) has a unique fixed point p_h∈_h. Equivalently, problem (<ref>) has a unique solution (_h,_h,_h,_h,p_h)∈^_h×^_h×^_h×^_h×_h.
In addition,
there exists C^*>0, such that
_h_;Ω + _h_0,Ω + _h_0,Ω + _h_÷;Ω + p_h_0,Ω
≤ C^* (g_0,Ω + p__1/2,Γ_ + __1/2,Γ_ + _0,Ω + γ r ).
First, we observe that, similar to the continuous case (as seen in the proof of Theorem <ref>), assumption (<ref>) ensures the well-definedness of _h and that _h(_h)⊆_h. Now, by adapting the arguments used in Section <ref> (cf. Lemmas <ref> and <ref>), one can derive the following estimates
R_1,h( p_1) - R_1,h( p_2)_;Ω ≤1c_a γ p_1- p_2_0,Ω,
S_2,h(_1, p_1) - S_2,h(_2, p_2)_0,Ω ≤2κ_2 C^*/min{γ, κ_1} (g_0,Ω + p__1/2,Γ_ + γ_2_;Ω) p_1 - p_2_0,Ω
+ 2/min{γ, κ_1} γ _1-_2_;Ω,
for all p_1, p_2∈_h and _1, _2 ∈^_h, which together with the definition of _h (see (<ref>)), yield
_h( p_1)-_h( p_2)_0,Ω ≤2min{γ, κ_1} {κ_2 C^* (1 + C_1^*γ)(g_0,Ω + p__1/2,Γ_ + __1/2,Γ_ + _0,Ω)
+ γ^2c_a(1+κ_2 C^* r)} p_1- p_2_0,Ω,
for all p_1, p_2∈_h. In this way, using estimate (<ref>) we obtain that _h is a contraction
mapping on _h, thus problem (<ref>), or equivalently (<ref>) is well-posed.
Finally, analogously to the proof of Theorem <ref> (see also Lemmas <ref> and <ref>) we can obtain (<ref>), which concludes the proof.
§ A PRIORI ERROR ESTIMATES
In this section, we aim to provide the convergence of the Galerkin scheme (<ref>) and derive the corresponding rate of convergence. From now on we assume that the hypotheses of Theorem <ref> and Theorem <ref> hold.
§.§ Preliminaries
Let the tuples (,,,,p)∈_(;Ω)×^2(Ω)××_(÷;Ω)×^2(Ω) and (_h,_h,_h,_h,p_h)∈^_h×^_h×^_h×^_h×^p_h be the unique solutions of (<ref>) and (<ref>), respectively.
Let us write _ = - _h, _ = - _h, _ = - _h, _ = - _h and _p = p - p_h.
As usual, for a given (_h,(_h,_h) )∈^_h×(^_h×^_h) and (_h,q_h)∈^_h×^p_h, we shall then decompose these errors into
_ = _ + _, _ = _ + _, _ = _ + _, _ = _ + _, _p = ξ_p + χ_p,
with
_ := - _h,
_ := _h - _h,
_ := - _h,
_ := _h - _h,
_ := - _h,
_ := _h - _h,
_ := - _h,
_ := _h - _h,
ξ_p := p - q_h, and
χ_p := q_h - p_h.
Considering the first two equations of problems (<ref>) and (<ref>), the following identities hold
[ a(,) + b(,(,)) = H() - c(,p) ∀ ∈_(;Ω),; [1ex]
b(,(,)) = F(,) ∀ ∈^2(Ω), ∀ ∈, ]
and
[ a(_h,_h) + b(_h,(_h,_h)) = H(_h) - c(_h,p_h) ∀ _h∈^_h,; [1ex]
b(_h,(_h,_h)) = F(_h,_h) ∀(_h,_h)∈^_h×^_h. ]
From these relations we can obtain that for all (_h,(_h,_h))∈^_h×(^_h×^_h), there holds
[ a(_,_h) + b(_h,(_,_)) = - c(_h,_p),; [1ex]
b(_,(_h,_h)) = 0 ]
which together with the definition of the errors in (<ref>), implies that
[ a(_,_h) + b(_h,(_,_)) + b(_,(_h,_h)); = -a(_,_h) - b(_h,(_,_)) - b(_,(_h,_h)) - c(_h,_p) - c(_h,_p) ]
for all (_h,(_h,_h))∈^_h×(^_h×^_h).
Next, considering the last two equations of both problems (<ref>) and (<ref>), we obtain
[ a_,p(,) + b(,p) = H() ∀ ∈_(÷;Ω),; [1ex]
b(,q) - c(p,q) = G(q) + c(,q) ∀ q∈^2(Ω). ]
and
[ a__h,p_h(_h,_h) + b(_h,p_h) = H(_h) ∀ _h∈^_h,; [1ex]
b(_h,q_h) - c(p_h,q_h) = G(q_h) + c(_h,q_h) ∀ q_h∈^p_h. ]
Then, using arguments similar to those in Lemma <ref>, by adding ± a__h, p_h(,_h), we have
[ a__h,p_h(__h,_h) + b(_h,_p_h) + b(__h,q_h) - c(_p_h,q_h)= - ∫_Ω(κ(,p)^-1-κ(_h,p_h)^-1) ·_h + c(_,q_h), ]
which together with (<ref>), implies that
[ a__h,p_h(_,_h) + b(_h,χ_p_h) + b(_,q_h) - c(χ_p,q_h)+ a__h,p_h(_,_h); =- b(_h,ξ_p) - b(_,q_h) + c(ξ_p,q_h) - ∫_Ω(κ(,p)^-1-κ(_h,p_h)^-1) ·_h + c(_,q_h). ]
§.§ Derivation of Céa estimates
There exist C_3^*, C_4^*>0, independent of h, such that
__;Ω + __0,Ω + __0,Ω≤ C_3^* (__;Ω + __0,Ω + __0,Ω + _p_0,Ω ) + C_4^* γχ_p_0,Ω.
From the properties of a and b (refer to (<ref>) and (<ref>)), and <cit.>, we derive the following discrete global inf-sup condition
__;Ω + __0,Ω + __0,Ω
≤ (C_1^* +C_2^*) sup_≠(_h,_h,_h)∈^_h×^_h×^_ha(_,_h) + b(_h,(_,_)) + b(_,(_h,_h))/_h_;Ω + _h_0,Ω + _h_0,Ω,
with C_1^*, C_2^*>0 independent of h, are the discrete version of the constants C_1, C_2 defined in (<ref>). Then, combining the last inequality with (<ref>), and the continuity properties of a and b (see (<ref>)),
we obtain
[ __;Ω + __0,Ω + __0,Ω; ≤ (C_1^* + C_2^* )(1μ__;Ω + __0,Ω + __0,Ω + __;Ω + γχ_p_0,Ω + γξ_p_0,Ω ), ]
which implies (<ref>) with C_3^*:=(C_1^* + C_2^* )(1/μ+1+γ) and C_4^*:=C_1^* + C_2^* , and
concludes the proof.
There exist C_5^*, C_6^*>0, independent of h, such that
[ __÷;Ω + _p_0,Ω≤ C^*_5 (__÷;Ω + ξ_p_0,Ω + __;Ω); + C^*_6 ( (g_0,Ω + p__1/2,Γ_ + __1/2,Γ_ + _0,Ω + γ r ) χ_p_0,Ω + γ__;Ω). ]
Similarly to Lemma <ref>, using the properties of a, b and c (refer to (<ref>), (<ref>) and (<ref>)), and <cit.>, we derive the following discrete global inf-sup condition
__÷;Ω + χ_p_0,Ω≤ 2 C^* sup_≠(_h,q_h)∈^_h×^p_h a__h,p_h(_,_h) + b(_h,χ_p) + b(_,q_h) - c(χ_p,q_h)/_h_÷;Ω + q_h_0,Ω,
with C^* defined as in (<ref>). Then, using the equation (<ref>), the bound properties of a, b and c (see (<ref>) and (<ref>)), and the second assumption for κ (cf. (<ref>)), we obtain
__÷;Ω + χ_p_0,Ω
≤ 2 C^* (C_ a__÷;Ω + ξ_p_0,Ω + __÷;Ω + γξ_p_0,Ω + κ(,p)^-1 - κ(_h,p_h)^-1_^∞(Ω)_÷;Ω + γ__;Ω)
≤ 2 C^* (C_ a__÷;Ω + ξ_p_0,Ω + __÷;Ω + γξ_p_0,Ω + κ_2_p_0,Ω_÷;Ω + γ__;Ω),
hence, using the fact that satisfies (<ref>) and the error decomposition (<ref>), we have
__÷;Ω + χ_p_0,Ω ≤ 2 C^* (C_ a__÷;Ω + ξ_p_0,Ω + __÷;Ω + γξ_p_0,Ω + κ_2ξ_p_0,Ω_÷;Ω + γ__;Ω)
+ 2 C^* ( κ_2χ_p_0,Ω_÷;Ω + γ__;Ω)
≤ C^*_5 (__÷;Ω + ξ_p_0,Ω + __;Ω)
+ 2 C^* ( κ_2 C (g_0,Ω + p__1/2,Γ_ + __1/2,Γ_ + _0,Ω + γ r ) χ_p_0,Ω + γ__;Ω),
the last equation implies (<ref>), with C^*_5 := 2 C^* ( C_ a+1 +γ +γ + κ_2 C (g_0,Ω + p__1/2,Γ_ + __1/2,Γ_ + _0,Ω + γ r ) ) and C^*_6 := 2 C^*(κ_2 C + 1), and concludes the proof.
Assume that
(C_4^* + C^*_6 + C^*_6 r) γ + C^*_6 (g_0,Ω + p__1/2,Γ_ + __1/2,Γ_ + _0,Ω ) ≤12,
with C_4^* and C^*_6 being the constants in Lemmas <ref> and <ref>. Furthermore, assume that the hypotheses of Theorem <ref> and Theorem <ref> hold. Let (,,,,p)∈_(;Ω)×^2(Ω)××_(÷;Ω)×^2(Ω) and (_h,_h,_h,_h,p_h)∈^_h×^_h×^_h×^_h×^p_h be the unique solutions of (<ref>) and (<ref>), respectively.
Then, there exists C_Céa>0, independent of h, such that
[ __;Ω + __0,Ω + __0,Ω + __÷;Ω + _p_0,Ω; ≤ C_Céa ( (,,,,p), ^_h×^_h×^_h×^_h×^p_h). ]
Combining (<ref>) and (<ref>), and using the assumption (<ref>), we deduce
[ __;Ω + __0,Ω + __0,Ω + __÷;Ω + χ_p_0,Ω≤ C_3^* (__;Ω + __0,Ω + __0,Ω + ξ_p_0,Ω ); + C^*_5 (__÷;Ω + ξ_p_0,Ω + __;Ω) + 12χ_p_0,Ω + 12__;Ω. ]
And from the latter inequality we obtain
[ __;Ω + __0,Ω + __0,Ω + __÷;Ω + χ_p_0,Ω; ≤ 2(C_3^* + C^*_5) (__;Ω + __0,Ω + __0,Ω + __÷;Ω + ξ_p_0,Ω ). ]
In this way, from (<ref>), (<ref>) and the triangle inequality we obtain
[ __;Ω + __0,Ω + __0,Ω + __÷;Ω + _p_0,Ω≤__;Ω + __;Ω + __0,Ω + __0,Ω; + __0,Ω + __0,Ω + __÷;Ω + __÷;Ω + χ_p_0,Ω+ ξ_p_0,Ω; ≤ (2C_3^* + 2 C^*_5 + 1) (__;Ω + __0,Ω + __0,Ω + __÷;Ω + ξ_p_0,Ω ), ]
which combined with the fact that (_h,(_h,_h) )∈^_h×(^_h×^_h) and (_h,q_h)∈^_h×^p_h are arbitrary (see (<ref>)), concludes the proof.
§.§ Rates of convergence
In order to establish the rate of convergence of the Galerkin scheme (<ref>), we first recall the following approximation properties
associated with the FE spaces specified in Section <ref>.
For each 0 < m ≤ k+1 and for each ∈^m(Ω)∩_(;Ω) with ∈^m(Ω),
∈^m(Ω), ∈^m(Ω)∩^2_(Ω), ∈^m(Ω)∩_(;Ω) with ÷ ∈^m(Ω), and q∈^m(Ω),
there holds
(,^_h) := inf__h ∈^_h - _h_;Ω≲ h^m {_m,Ω + _m,Ω},
(,^_h) := inf__h ∈^_h - _h_0,Ω≲ h^m _m,Ω,
(,^_h) := inf__h ∈^_h - _h_0,Ω≲ h^m _m,Ω,
(,^_h) := inf__h ∈^_h - _h_÷;Ω≲ h^m {_m,Ω + ÷ _m,Ω},
(q,^p_h) := inf_q_h ∈^p_hq - q_h_0,Ω≲ h^m q_m,Ω.
For (<ref>), (<ref>) and (<ref>) we refer to <cit.>, whereas (<ref>) and (<ref>) are provided in <cit.> and <cit.>, respectively.
With these steps we are now in position to state the rates of convergence associated with the Galerkin scheme (<ref>).
Assume that the hypotheses of Theorem <ref> hold and let (,,,,p)∈_(;Ω)×^2(Ω)××_(÷;Ω)×^2(Ω) and (_h,_h,_h,_h,p_h)∈^_h×^_h×^_h×^_h×^p_h be the unique solutions of the continuous and discrete problems (<ref>) and (<ref>), respectively.
Assume further that ∈^m(Ω), ∈^m(Ω), ∈^m(Ω), ∈^m(Ω), ∈^m(Ω), ÷ ∈^m(Ω) and p∈^m(Ω), for 1≤ m ≤ k+1.
Then there exists C_rate>0, independent of h, such that
[ __;Ω + __0,Ω + __0,Ω + __÷;Ω + _p_0,Ω; ≤
C_rate h^m{_m,Ω + _m,Ω + _m,Ω + _m,Ω + _m,Ω + ÷ _m,Ω + p_m,Ω}. ]
The result follows from Céa estimate (<ref>)
and the approximation properties (<ref>).
Similarly to <cit.>, the analysis developed in Sections <ref>-<ref> can be adapted to a formulation without the variable (_h in the discrete problem), imposing symmetry of by taking ∈ := {∈: ∈^2(Ω) } and :={∈^2(Ω): = ^ t}, utilizing results from <cit.> (<cit.> for the discrete problem), and adapting the strategy used in, e.g., <cit.>.
§ A POSTERIORI ERROR ESTIMATES
In this section we derive residual-based a posteriori error estimates and demonstrate the reliability and efficiency of the proposed estimators. Mainly due to notational convenience, we confine our analysis to the two-dimensional case. The extension to three-dimensional case should be quite straightforward (see, e.g., <cit.>).
Similarly to <cit.>, we introduce additional notation. Let ℰ_h be the set of edges of 𝒯_h, whose corresponding diameters are denoted h_E, and define
ℰ_h(Ω) := { E∈ℰ_h : E⊆Ω }, ℰ_h(Γ_) := { E∈ℰ_h : E⊆Γ_ }, ℰ_h(Γ_) := { E∈ℰ_h : E⊆Γ_ } .
On each E∈ℰ_h, we also define the unit normal vector _E:=(n_1,n_2)^ and the tangential vector _E:=(-n_2,n_1)^. However, when no confusion arises, we will simply write and instead of _E and _E, respectively. Also, by d/d we denote the tangential derivative. The usual jump operator [[ · ]] across internal edges are defined for piecewise continuous matrix, vector, or scalar-valued functions. For sufficiently smooth scalar ψ, vector := (v_1,v_2)^, and tensor fields := (τ_ij)_1≤ i,j≤2, we let
(ψ):=(∂ψ/∂ x_2 , -∂ψ/∂ x_1)^ t , ():=∂ v_2/∂ x_1-∂ v_1/∂ x_2 , ()=[ (v_1)^; (v_2)^ ]()=[ (_1); (_2) ].
In addition, we denote by Π_h the Raviart–Thomas interpolator and by I_h the Clément operator (see, e.g., <cit.> for their properties). In what follows, we denote by _h the tensor version of Π_h, which is defined row-wise by Π_h and by _h the corresponding vector version of I_h which is defined componentwise by I_h.
In what follows, we will assume that the hypotheses of Theorems <ref> and <ref> are satisfied. Let _h, _h, _h, _h, p_h denote the FE solutions of (<ref>). We define the residual-based and fully computable local contributions to the error estimator Ξ_K^2, defined as the sum of Ξ_s,K^2 and Ξ_f,K^2, where Ξ_s,K and Ξ_f,K pertain to the solid (mixed elasticity) and fluid (mixed Darcy) components, respectively:
Ξ_s,K^2 := +_h _0,K^2+_h-_h^ t^2_0,K+h_K^2𝒞^-1_h + αdλ+2μ p_h 𝕀 - _h +_h_0,K^2
+h_K^2𝐜𝐮𝐫𝐥(𝒞^-1_h + αdλ+2μ p_h 𝕀 +_h)_0,K^2+∑_E∈∂ K ∩ℰ_h(Ω)h_E [[(𝒞^-1_h + αdλ+2μ p_h 𝕀 +_h)]]_0,E^2
+∑_E∈∂ K ∩ℰ_h(Γ_)h_E (
𝒞^-1_h + αdλ+2μ p_h 𝕀 +_h) -d_/d_0,E^2
+∑_E∈∂ K ∩ℰ_h(Γ_)h_E _ - _h _0,E^2,
Ξ_f,K^2 := -g+c_0 p_h + αdλ+2μ_h + dα^2dλ+2μ p_h - ÷_h_0,K^2
+h_K^2κ(_h,p_h)^-1_h- ∇ p_h_0,K^2
+h_K^2(κ(_h,p_h)^-1_h)_0,K^2+∑_E∈∂ K ∩ℰ_h(Ω)h_E [[(κ(_h,p_h)^-1_h)·]]_0,E^2
+ ∑_E∈∂ K ∩ℰ_h(Γ_)h_E κ(_h,p_h)^-1_h· -d p_/d_0,E^2
+ ∑_E∈∂ K ∩ℰ_h(Γ_)h_E p_ - p_h_0,E^2.
Then, we define the global estimator
Ξ^2:= ∑_K∈𝒯_hΞ_s,K^2 + Ξ_f,K^2.
§.§ Reliability of the a posteriori error estimator
First we prove preliminary results that will be key in showing the reliability of the global estimator.
There exists C_1>0, such that
-_h_;Ω + -_h_0,Ω + -_h_0,Ω≤ C_1 (_1+ +(_h)_0,Ω + _h-_h^t_0,Ω),
where
_1 ():=a(-_h,) + b(,(-_h,-_h)),
with _1 (_h)=-c(_h,p-p_h) for all _h ∈^_h, and ||_1||=sup_≠∈_(;Ω)ℛ_1()/_;Ω.
Using the properties of bilinear forms a and b, as outlined in equations (<ref>) and (<ref>), along with the insight from <cit.>, there exists C_1>0 depending on μ, c_a, β such that
e__;Ω + e__0,Ω + e__0,Ω ≤ C_1 sup_≠(,,)
∈_(;Ω)×^2(Ω)×a( e_,) + b(,( e_, e_)) + b( e_,(,))/_;Ω + _0,Ω + _0,Ω
≤ C_1 (sup_≠∈_(;Ω) _1()/_;Ω+ sup_≠(,)∈^2(Ω)×b( e_,(,))/_0,Ω + _0,Ω).
Then, recalling the definitions of the bilinear form b (cf. (<ref>)), using the equation (<ref>), along with the fact that ∫_Ω_h: =1/2∫_Ω (_h-_h^ t): for ∈, the following estimate holds
|b( e_,(,))|≤ ( ||+ _h||_0,Ω + ||_h-_h^ t||_0,Ω)(_0,Ω + _0,Ω),
and this gives the asserted inequality.
There exists C_2>0 such that
-_h_÷;Ω + p-p_h_0,Ω≤ C_2 (_2+ g-c_0 p_h - αdλ+2μ_h - dα^2dλ+2μ p_h + ÷_h _0,Ω + γ-_h _;Ω),
where
_2 ():= a_,p(-_h,) + b(,p-p_h),
satisfies _2 (_h)=0 for all _h ∈^_h, and ||_2||=
sup_≠∈_(÷;Ω) _2()/_÷;Ω.
Similarly to Lemma <ref>, using the properties of bilinear forms a_,p and b (as outlined in equations (<ref>), (<ref>) and (<ref>)), along with the insight from <cit.>, we establish that there exists C_2>0 depending on κ_1, κ_2, C_ a, γ, γ, β such that
e__÷;Ω + e_p_0,Ω ≤ C_2 sup_≠(,q)∈_(÷;Ω)×^2(Ω) a_,p( e_,) + b(, e_p) + b( e_,q) - c( e_p,q)/_÷;Ω + q_0,Ω
≤ C_2 (sup_≠∈_(÷;Ω) _2()/_÷;Ω+ sup_0≠ q∈^2(Ω) b( e_,q) - c( e_p,q)/q_0,Ω).
Hence, recalling the definitions of b, c, adding ± c (_h,p), and using (<ref>), we arrive at
| b( e_,q) - c( e_p,q)|≤g-c_0 p_h - αdλ+2μ_h - dα^2dλ+2μ p_h + ÷_h _0,Ωq_0,Ω + γ-_h _;Ωq_0,Ω,
and therefore, we obtain the desired result.
Throughout the rest of this section, we provide suitable upper bounds for _1 and _2. We begin by establishing the corresponding estimates for _1, which are based on a suitable Helmholtz decomposition of the space _(;Ω) (see <cit.> for details), along with the following two technical results.
There exists a positive constant C_3, independent of h, such that for each ∈^1(Ω) there holds
[ |_1(-_h())| ≤ C_3 γp-p_h_0,Ω_1,Ω; + C_3(∑_K∈𝒯_h h_K^2||^-1_h + αdλ+2μ p_h 𝕀 - _h +_h||_0,K^2 + ∑_E ∈ℰ_h(Γ_)h_E||_-_h||_0,E^2 )^1/2_1,Ω. ]
From the definition of _1 (cf. (<ref>)), adding ± c (-_h(),p_h), and using equation (<ref>), we have
[ _1(-_h())= H(-_h()) - c(-_h(),p) -a(_h,-_h())
-b(-_h(),(_h,_h)); = ⟨ (-_h()),_⟩_Γ_ - αdλ+2μ∫_Ω (p-p_h) (-_h()) - αdλ+2μ∫_Ω p_h (-_h()); - ∫_Ω^-1_h:(-_h()) - ∫_Ω_h:(-_h()) - ∫_Ω_h·(-_h()), ]
then, applying a local integration by parts to the last term above, using the identity
∫_E _h()·=∫_E ·, for all ∈_k(E), for all edge E of 𝒯_h, the fact that _∈^2(Γ_), and the Cauchy-Schwarz inequality, we obtain
[ _1(-_h()) = ∑_K∈𝒯_h∫_K (-^-1_h - αdλ+2μ p_h 𝕀 + _h -_h):(-_h()); +∑_E ∈ℰ_h(Γ_)⟨ (-_h()),_ - _h⟩_E - αdλ+2μ∫_Ω (p-p_h) (-_h()); ≤∑_K∈𝒯_h^-1_h + αdλ+2μ p_h 𝕀 - _h +_h_0,K-_h()_0,K; +∑_E ∈ℰ_h(Γ_)_ - _h_0,E-_h()_0,E + γp-p_h_0,Ω-_h()_0,Ω, ]
with γ defined as in (<ref>). Therefore, using the approximations properties of _h (see, e.g., <cit.>) and the Cauchy–Schwarz inequality, we obtain the desired result.
Let ∈^1_Γ_(Ω):= {∈^1(Ω): = on Γ_} and assume that _∈^1(Γ_). Then, there exists C_4>0, independent of h, such that
[ |_1( ( - _h ) )|
≤ C_4 γ p- p_h||_0,Ω_1,Ω; + C_4( ∑_K∈𝒯_h h_K^2||𝐜𝐮𝐫𝐥(^-1_h + αdλ+2μ p_h 𝕀 +_h)||_0,K^2 +∑_E∈ℰ_h(Ω)h_E ||[[(^-1_h + αdλ+2μ p_h 𝕀 +_h)]]||_0,E^2; +∑_E∈ℰ_h(Γ_)h_E ||(C^-1_h + αdλ+2μ p_h 𝕀 +_h) -d_d||_0,E^2)^1/2_1,Ω. ]
Similarly to Lemma <ref>, adding ± c ( ( - _h ),p_h), we have
[ _1( ( - _h )); = H( ( - _h )) - c( ( - _h ),p) -a(_h, ( - _h )) - b( ( - _h ),(_h,_h)); = ⟨ ( ( - _h )),_⟩_Γ_ - αdλ+2μ∫_Ω (p-p_h) ( ( - _h )); - ∫_Ω^-1_h: ( - _h ) - ∫_Ω_h: ( - _h ) - αdλ+2μ∫_Ω p_h ( ( - _h )). ]
Then, applying a local integration by parts, using that _∈^1(Γ_), the identity ⟨(-_h ) ,_⟩_Γ_= -⟨-_h , d_/d⟩_Γ_, and the Cauchy–Schwarz inequality, we obtain
[ _1( ( - _h )) = -∑_K∈𝒯_h∫_K (^-1_h + αdλ+2μ p_h 𝕀 +_h):( - _h ); +∑_E∈ℰ_h(Ω)∫_E [[(^-1_h + αdλ+2μ p_h 𝕀 +_h)]]·( - _h ) - αdλ+2μ∫_Ω (p-p_h) ( ( - _h )); +∑_E∈ℰ_h(Γ_)∫_E (^-1_h + αdλ+2μ p_h 𝕀 +_h -_)·( - _h ); ≤∑_K∈𝒯_h(^-1_h + αdλ+2μ p_h 𝕀 +_h)_0,K - _h _0,K; +∑_E∈ℰ_h(Ω)[[(^-1_h + αdλ+2μ p_h 𝕀 +_h)]]_0,E - _h _0,E +γp-p_h_0,Ω ( - _h )_0,Ω; +∑_E∈ℰ_h(Γ_) (^-1_h + αdλ+2μ p_h 𝕀 +_h) -d_d_0,E - _h _0,E. ]
As in the previous result, the approximation properties of the Clément interpolation (see, e.g., <cit.>) in conjunction with the Cauchy–Schwarz inequality, produces the desired result.
The following lemma establishes the desired estimate for _1.
Assume that there exists a convex domain B such that Ω⊆ B and Γ_⊆∂ B, and that _∈^1(Γ_). Then, there exists a constant C_5>0, independent of h, such that
-_h_;Ω + -_h_0,Ω + -_h_0,Ω ≤ C_5 { ∑_T∈𝒯_hΞ_s,K^2}^1/2 + C_5γp-p_h_0,Ω.
Let ∈_(;Ω). From <cit.>, there exist ∈^1(Ω) and ∈_Γ_^1(Ω), such that
= + _1,Ω+_1,Ω≤ C_Helm_; ,
Using that _1(_h)=c(_h,p_h-p) for all _h∈^_h, we have
_1()=_1(-_h) + c(_h,p-p_h) ∀ _h∈^_h.
In particular, this holds for _h defined as _h=_h+(_h ), whence
_1()=_1(-_h)+_1((-_h )) + c(_h,p-p_h).
Hence, the proof follows from Lemmas <ref>, <ref> and <ref>, and estimate (<ref>).
The following lemma establishes the estimate for _2.
Assume that there exists a convex domain B such that Ω⊆ B and Γ_⊆∂ B, and that p_∈^1(Γ_). Then, there exists a constant C_8>0, independent of h, such that
-_h_÷;Ω + p-p_h_0,Ω ≤ C_8 { ∑_T∈𝒯_hΞ_f,K^2}^1/2 + C_8(_h_0,Ωp-p_h_0,Ω + γ-_h _0,Ω).
It follows the steps of Lemma <ref>. From <cit.>, we have that for all ∈_(÷;Ω) there exist ∈^1(Ω) and ϕ∈_Γ_^1(Ω), such that = + ϕ and _1,Ω+ϕ_1,Ω≤ C_Helm_;.
Thus, proceeding similarly to Lemmas <ref> and <ref>, applying local integration by parts and the approximation properties of Π_h and I_h along with the Cauchy–Schwarz inequality, we can obtain the following estimates
[ |_2(-Π_h())| ≤ C_6 _h_0,Ωp-p_h_0,Ω_1,Ω; + C_6(∑_K∈𝒯_h h_K^2κ(_h,p_h)^-1_h- ∇ p_h_0,K^2 + ∑_E ∈ℰ_h(Γ_)h_E p_ - p_h_0,E^2 )^1/2_1,Ω,; |_2( (ϕ - I_h ϕ) )|
≤ C_7 _h_0,Ωp-p_h_0,Ωϕ_1,Ω; + C_7( ∑_K∈𝒯_h h_K^2(κ(_h,p_h)^-1_h)_0,K^2 +∑_E∈ℰ_h(Ω)h_E [[(κ(_h,p_h)^-1_h)·]]_0,E^2; +∑_E∈ℰ_h(Γ_)h_E κ(_h,p_h)^-1_h· -d p_/d_0,E^2)^1/2ϕ_1,Ω. ]
Then, noting that _2(_h)=0 for all _h∈^_h, and defining _h as _h=Π_h+(I_h ϕ), we have
_2()=_2(-_h)=_2(-Π_h)+_2((ϕ-I_h ϕ)).
Hence, the proof follows from Lemma <ref>, estimates (<ref>), and the Helmholtz decomposition of _(÷;Ω).
Finally we state the main reliability bound for the proposed estimator.
Assume that the hypotheses stated in Theorem <ref> and Lemmas <ref>-<ref> are satisfied. Let (,,,,p)∈_(;Ω)×^2(Ω)××_(÷;Ω)×^2(Ω) and (_h,_h,_h,_h,p_h)∈^_h×^_h×^_h×^_h×^p_h be the unique solutions of
(<ref>) and (<ref>), respectively.
Assume further that
(C_5+C_8)γ + C_8 C^* (g_0,Ω + p__1/2,Γ_ + __1/2,Γ_ + _0,Ω + γ r ) ≤1/2.
Then, there exists C_rel>0, independent of h, such that
__;Ω + __0,Ω + __0,Ω + __÷;Ω + _p_0,Ω≤ C_rel Ξ.
It follows directly from Lemmas <ref> and <ref>, using the fact that _h satisfies the estimate (<ref>), and the assumption (<ref>).
§.§ Efficiency of the a posteriori error estimator
In this section we derive the efficiency estimate of the estimator defined in (<ref>). The main result of this section is stated as follows.
There exists C_eff>0, independent of h, such that
C_eff Ξ ≤ -_h_;Ω + -_h_0,Ω + -_h_0,Ω + -_h_÷;Ω + p-p_h_0,Ω + h.o.t,
where h.o.t. stands for one or several terms of higher order.
We begin with the estimates for the zero order terms appearing in the definition of Ξ_s,K (cf. (<ref>)).
For all K∈_h there holds
+_h _0,K≲-_h_;K _h-^ t_h_0,K≲-_h_;K.
By employing the same arguments as in <cit.>, we can conclude that = -, which, together with the symmetry of , implies the desired result. Further details are omitted.
In order to derive the upper bounds for the remaining terms
defining the error estimator Ξ_s,K, we use results from <cit.>, inverse inequalities, and the localisation technique based on element-bubble and edge-bubble functions. The main properties that we will use can be found in <cit.>.
For all K∈_h there holds
[ h_K𝒞^-1_h + αdλ+2μ p_h 𝕀 - _h +_h_0,K; ≲ h_K(-_h_;K + -_h_0,K + p-p_h_0,K) + -_h_0,K. ]
It follows from an application of <cit.> with = 𝒞^-1_h + αdλ+2μ p_h 𝕀 - _h +_h, using that ^-1 + αdλ+2μ p 𝕀 = - and
<cit.>. We refer to <cit.> for further details.
For all K∈_h and E∈ℰ_h(Ω), there holds
[ h_K𝐜𝐮𝐫𝐥(𝒞^-1_h + αdλ+2μ p_h 𝕀 +_h)_0,K≲-_h_;K + -_h_0,K + p-p_h_0,K,; h_E^1/2[[(𝒞^-1_h + αdλ+2μ p_h 𝕀 +_h)]]_0,E≲-_h_;ω_E + -_h_0,ω_E + p-p_h_0,ω_E, ]
where the patch of elements sharing the edge E is denoted as ω_E:=∪{K'∈𝒯_h E∈ℰ_h(K')}.
It suffices to apply <cit.> with :=^-1 + αdλ+2μ p 𝕀 + = and _h:=^-1_h + αdλ+2μ p_h 𝕀 +_h.
Assume that _ is piecewise polynomial. Then, for all E∈ℰ_h(Γ_), there holds
[ h_E^1/2(
𝒞^-1_h + αdλ+2μ p_h 𝕀 +_h) -d_/d_0,E≲-_h_;K_E + -_h_0,K_E + p-p_h_0,K_E,; h_E^1/2_-_h_0,E≲ h_K_E(-_h_;K_E + -_h_0,K_E + p-p_h_0,K_E) + -_h_0,K_E, ]
where K_E is a triangle in 𝒯_h that contains E on its boundary.
The first estimate follows as in <cit.>, defining and _h as in Lemma <ref>. On the other hand, the second estimate follows from an application of the discrete trace inequality (see <cit.>), using that ^-1 + αdλ+2μ p 𝕀 = -, and the fact that =_ on Γ_. See also <cit.>.
A direct application of Lemmas <ref>-<ref> yields
∑_K∈𝒯_hΞ_s,K≲__;Ω + __0,Ω + __0,Ω + __÷;Ω + _p_0,Ω.
Similarly, using the same arguments as in Lemmas <ref>-<ref>, along with algebraic manipulations as in Section <ref>, assuming that p_ is piecewise polynomial, together with the Lipschitz continuity of κ (cf. (<ref>)), we can bound each of the terms that appear in the estimator Ξ_f,K and obtain the following result
∑_K∈𝒯_hΞ_f,K≲__;Ω + __0,Ω + __0,Ω + __÷;Ω + _p_0,Ω.
We remark that the efficiency of Ξ (cf. (<ref>)) in Theorem <ref> is now a straightforward consequence of estimates (<ref>) and (<ref>). In turn, we emphasize that the resulting constant, denoted by C_eff>0 is independent of h.
For simplicity, we have assumed that _ and p_ are piecewise polynomial in the derivation of (<ref>) and (<ref>). However, similar estimates can also be obtained by assuming _ and p_ are sufficiently smooth (taking, for example, _∈^1(Γ_) and p_∈^1(Γ_), as in Lemmas <ref>-<ref>), and proceeding as in <cit.>. In such cases, higher-order terms, stemming from errors in the polynomial approximations, would appear in (<ref>) and (<ref>), accounting for the presence of h.o.t in (<ref>).
We conclude this section by noting that the a posteriori error estimation analysis developed here can be adapted to the three-dimensional case. In particular, in <cit.> and <cit.>, one can find the suitable Helmholtz decompositions for the spaces _(;Ω) and _(÷;Ω), respectively.
§ NUMERICAL RESULTS
The computational examples in this section verify the theoretical properties (optimal convergence, conservativity, and robustness) of the proposed schemes. The implementation has been carried out using the FE library FEniCS <cit.>. The nonlinear systems were solved with Newton–Raphson's method with a residual tolerance of 10^-7, and the linear systems were solved using the sparse LU factorisation of MUMPS <cit.>.
§.§ Optimal convergence to smooth solutions and conservativity in 2D
We first consider a simple planar problem setup with manufactured exact solution. We take the unit square domain Ω = (0,1)^2, the bottom and left segments represent Γ_ and the top and right sides are Γ_. We choose the body load f, mass source g, boundary displacement _, boundary pressure p_, as well as (not necessarily homogeneous, but standard arguments can be used to extend the theory to the inhomogeneous case) boundary data · = φ_ and = σ_, such that the exact displacement and fluid pressure are
(x,y) = 1/20[ cos[3π/2(x+y)]; sin[3π/2(x-y)] ], p(x,y) = sin(π x) sin (π y).
These exact primary variables are used to construct exact mixed variables of stress, rotation, and discharge flux. We choose the second constitutive relation for the permeability in (<ref>) and choose the following arbitrary model parameters (all adimensional)
k_0 = k_1 = c_0 = α = 0.1, λ = μ = μ_f = 1.
These values indicate a mild permeability variation and it is expected that the nonlinear solver (in this case, Newton–Raphson) converges in only a few iterations. We construct six levels of uniform mesh refinement of the domain, on which we compute approximate solutions and the associated errors for each primal and mixed variable in their natural norms. Convergence rates are calculated as usual:
rate =log(e/e)[log(h/h)]^-1 ,
where e and e denote errors produced on two consecutive meshes of sizes h and h, respectively. Table <ref> reports on this error history focusing on the methods defined by the PEERS_k family with k=0 and k=1, showing a O(h^k+1) convergence for all unknowns as expected from the theoretical error bound of Theorem <ref> (except for the rotation approximation that shows a slight superconvergence for the lowest-order case and only in 2D – a well-known phenomenon associated with PEERS_k elements). With the purpose of illustrating the character of the chosen manufactured solution and the parameter regime, we show sample discrete solutions in Figure <ref>.
We also exemplify the momentum and mass conservativity of the formulation. To do so we
represent the loss of momentum and mass as
_h := 𝒫_h[ (_h) + f]_ℓ^∞, _h := 𝒫_h[(c_0+ dα^2/dλ + 2μ) p_h + α/dλ + 2μ_h + ÷(_h) + g]_ℓ^∞,
where 𝒫_h: L^2(Ω)→P_k(𝒯_h) is the scalar version of 𝒫_h.
They are computed at each refinement level and tabulated in Table <ref> together with the total error := e() + e() + e() + e() + e(p), and its experimental convergence rate. We report on the nonlinear iteration count as well. The expected optimal convergence of the total error, and the announced local conservativity are confirmed. We also note that, at least for this parameter regime, for all the refinements and polynomial degrees the nonlinear solver takes three iterations to get a residual below the tolerance. In the last column of the same table we report on the efficiency of the global a posteriori error estimator designed in Section <ref> (Ξ) = /Ξ, which – in this case of smooth solutions – is asymptotically constant (approximately 0.98 for k=0 and 1.52 for k=1).
§.§ Convergence in 3D using physically relevant parameters
Next we investigate the behaviour of the proposed numerical methods in a 3D setting and taking model parameters more closely related to applications in tissue poroelasticity. We still use manufactured solutions to assess the accuracy of the formulation, but take an exact displacement that satisfies ÷→ 0 as λ→∞. The domain is the 3D box Ω=(0,L)×(0,L)× (0,2L) with L=0.01 m, and mixed boundary conditions were taken analogously as before, separating the domain boundary between Γ_ defined as the planes x=0, y=0 and z=0, and Γ_ as the remainder of the boundary. The manufactured displacement and pressure head are
(x,y,z) = L/4[ sin(x/L)cos(y/L)sin(z/(2L)) + x^2/λ; -2cos(x/L)sin(y/L)cos(z/(2L)) + y^2/λ; 2cos(x/L)cos(y/L)sin(z/(2L)) - 2z^2/λ ], p(x,y,z) = sin(x/L)cos(y/L)sin(z/(2L)).
First we set again the model parameters to mild values λ = μ = c_0=k_0=α=μ_f =1, k_1=k_2=0.1, and we compare them against the following values (from, e.g., <cit.>)
k_0 = 2.28× 10^-11 m^3, k_1 = 5×10^-12 m^3, λ =1.44×10^6 Pa, μ = 9.18×10^3 Pa, μ_f = 7.5×10^-4 Pa·s,
and c_0 = 0, α = 0.99. Table <ref> reports on the convergence of the method. While the magnitude of the stress errors is much higher for the second parameter regime, the discharge flux error magnitude is smaller than in the first case and the displacement, rotation, and fluid pressure errors remain roughly of the same magnitude. In any case, the table confirms that the optimal slope of the error decay is not affected by a vanishing storativity nor large Lamé constants. The numerical solutions are displayed in Figure <ref>.
§.§ Convergence in the case of adaptive mesh refinement
We continue with a test targeting the recovery of optimal convergence through adaptive mesh refinement guided by the a posteriori error estimator proposed in Section <ref>.
We employ the well-known adaptive mesh refinement approach of solving, then computing the estimator, marking, and refining. Marking is done as follows <cit.>: a given K∈𝒯_h is added to the marking set ℳ_h⊂𝒯_h whenever the local error indicator Ξ_K satisfies
∑_K ∈ℳ_hΞ^2_K ≥ζ∑_K∈𝒯_hΞ_K^2,
where ζ is a user-defined bulk density. All elements in ℳ_h are marked for refinement and also some neighbours are marked for the sake of closure. For convergence rates we use the alternative form
rate = -2log(e/e)[log(DoFs/ DoF)]^-1.
Let us consider the non-convex rotated L-shaped domain Ω = (-1,1)^2∖ (-1,0)^2 and use manufactured displacement and fluid pressure with sharp gradients near the domain re-entrant corner (see, e.g., <cit.> for the displacement and <cit.> for the fluid pressure)
(r,θ) =r^χ/2μ[ -(χ + 1) cos([χ+1]θ) + (M_2 - χ - 1)M_1cos([χ-1]θ); (χ + 1) sin([χ+1]θ) + (M_2 + χ - 1)M_1sin([χ-1]θ) ],
p(r,θ) = r^1/3sin(1/3(π/2 + θ)),
with polar coordinates r = √(x_1^2 + x_2^2), θ= arctan(x_2,x_1), χ≈ 0.54448373, M_1 = -cos([χ+1]ω)/cos([χ-1]ω), and M_2 = 2(λ + 2μ)/(μ + λ). The boundary conditions (taking as Γ_ the segments at x=±1 and y =± 1 and Γ_ the remainder of the boundary) and forcing data are constructed from these solutions and the model parameters are λ = 10^3, μ = 10, k_0 = 1/2, μ_f =c_0=k_1= 0.1, α = 1/4, where for this test we consider a Kozeny–Carman permeability form. As in <cit.>, a sub-optimal rate of convergence is expected for the mixed elasticity sub-problem in its energy norm. Note that since the exact pressure is in ^4/3-ϵ(Ω) for any
ϵ > 0 (cf. <cit.>), it is still regular enough to have optimal convergence. However its gradient (and therefore also the exact discharge flux ) has a singularity located at the reentrant corner and therefore we expect an order of convergence of approximately O(h^1/3).
The numerical results of this test are reported in Table <ref>. We observe the expected sub-optimal convergence under an uniform mesh refinement while the optimal convergence in all variables is attained as the mesh is locally refined (the first three rows are very similar as most of the elements are refined in the first three steps. This can be controlled by the bulk density, here taken as ζ = 9.5·10^-5). We also note that the individual errors are approximately of the same magnitude in the last row of each section of the table, but for the adaptive case this is achieved using approximately 5.5% of the number of degrees of freedom needed in the uniformly refined case. The last column of the table again confirms the reliability and efficiency of the a posteriori error estimator. Note that for this case we compute the divergence part of the error norm in the stress and fluxes as projections of the momentum and mass residuals onto the displacement and pressure discrete spaces, respectively. We plot in Figure <ref> the approximate displacement and pressure as well as sample triangulations obtained after a few adaptive refinement steps that confirm the expected agglomeration of vertices near the reentrant corner.
§.§ Adaptive computation of cross-sectional flow and deformation in a soft tissue specimen
Finally, we apply the proposed methods to simulate the localisation of stress, deformation, and flow patterns in a multi-layer cross-section of cervical spinal cord. We follow the setup in <cit.>. The geometry and unstructured mesh have been generated using GMSH <cit.> from the images in <cit.>. The heterogeneous porous material consists of white and grey matter surrounded by the pia mater (a thin layer, also considered poroelastic. See Figure <ref>, top two left panels). All components are assumed fully saturated with cerebospinal fluid. The transversal cross-section has 1.3 cm in maximal diameter and the indentation region is a curved subset of the anterior part of the pia mater (a sub-boundary of length 0.4 cm). Boundary conditions are of mixed load-traction type, but slightly different than the ones analysed in the previous section. We conduct an indentation test applying a traction = (0,-P)^- t, with P a constant solid pressure of 950 dyne/cm^2. The posterior part of the pia mater acts as a rigid posterior support where we prescribe zero displacement. The remainder of the boundary of the pia mater is stress-free. For the fluid phase we impose a constant inflow pressure of cerebrospinal fluid of 1.1 dyne/cm^2 and zero outflow pressure at the stress-free sub-boundary, as well as zero normal discharge flux at the posterior support. For the three different layers of the domain we use the following values for Young modulus, Poisson ratio, and lower bound for permeability (some values from <cit.>)
E^pia = 23'000 dyne/cm^2, ν^pia = 0.3, E^white = 8'400 dyne/cm^2, ν^white = 0.479, E^grey = 16'000 dyne/cm^2, ν^grey = 0.49, k_0^grey = 1.4·10^-9 dyne/cm^2, k_0^white = 1.4·10^-6 dyne/cm^2, k_0^pia = 3.9·10^-10 dyne/cm^2. Further, we take =0, g=0, μ_f = 70 dyne/cm^2· s (for cerebrospinal fluid at 37^∘), k_1 = 1/2k_0, α = 1/4, and c_0 = 10^-3.
The initial and the final adapted mesh, together with samples of solutions are shown in Figure <ref>, where we have used the mesh density parameter ζ = 5.5·10^-4. After each adaptation iteration guided by the a posteriori error indicator (<ref>), a mesh smoothing step was included. The figure indicates that most of the refinement occurs near the interface between the heterogeneous components of the porous media, and the plots also confirm a flow pattern moving slowly from top to bottom, consistently with a typical indentation test.
siam
|
http://arxiv.org/abs/2409.02963v1 | 20240904001340 | Fair Minimum Representation Clustering via Integer Programming | [
"Connor Lawless",
"Oktay Gunluk"
] | math.OC | [
"math.OC",
"cs.CY",
"cs.LG"
] |
1]Connor Lawless
2]Oktay Günlük
[1]Management Science and Engineering, Stanford University
[2]Industrial and Systems Engineering, Georgia Institute of
Technology
Fair Minimum Representation Clustering via Integer Programming A preliminary version of this paper appeared in CPAIOR 2024.
[
===========================================================================================================================
§ ABSTRACT
Clustering is an unsupervised learning task that aims to partition data into a set of clusters. In many applications, these clusters correspond to real-world constructs (e.g., electoral districts, playlists, TV channels) whose benefit can only be attained by groups when they reach a minimum level of representation (e.g., 50% to elect their desired candidate). In this paper, we study the k-means and k-medians clustering problems with the additional constraint that each group (e.g., demographic group) must have a minimum level of representation in at least a given number of clusters. We formulate the problem through a mixed-integer optimization framework and present an alternating minimization algorithm, called MiniReL, that directly incorporates the fairness constraints. While incorporating the fairness criteria leads to an NP-Hard assignment problem within the algorithm, we provide computational approaches that make the algorithm practical even for large datasets. Numerical results show that the approach is able to create fairer clusters with practically no increase in the clustering cost across standard benchmark datasets.
§ INTRODUCTION
Clustering is an unsupervised learning task that aims to partition data points into sets of similar data points called clusters <cit.>. Clustering is widely used due to its broad applicability in domains such as customer segmentation <cit.>, grouping content together for entertainment platforms <cit.>, and identifying subgroups within a clinical study <cit.> amongst others. However the wide-spread application of clustering, and machine learning broadly, to human-centric applications has raised concerns about its disparate impact on minority groups and other vulnerable demographics. Motivated by a flurry of recent results highlighting bias in many automated decision making tasks such as facial recognition <cit.> and criminal justice <cit.>, researchers have begun focusing on mechanisms to ensure machine learning algorithms are fair to all those affected. One of the challenges of fairness in an unsupervised learning context, compared to the supervised setting, is the lack of ground truth labels.
Consequently, instead of enforcing approximately equal error rates across groups, fair clustering generally aims to ensure that composition of the clusters or their centers (for settings like k-means and k-medians clustering) represent all groups fairly.
A common approach to fair clustering is to require each cluster to have a fair proportion of itself represented by each group
(i.e., via balance <cit.> or bounded representation <cit.>). This approach aims to balance the presence of each group in each cluster and therefore tries to spread each group uniformly across the clusters.
Notice that this approach might not be desirable in settings where a group only gains a significant benefit from the cluster when they reach a minimum level of representation in that cluster. Consider the problem of clustering a set of media (e.g., songs, tv shows) into cohesive segments (e.g., playlists, channels). A natural fairness consideration in designing these segments would be to ensure that there is sufficient representation for different demographic groups. In these settings the benefit of the representation is only realized when a large percentage of the segment is associated with a demographic group (i.e., so that listeners can consistently watch or hear programming that speaks to them). This is even legislated in some countries, for example Canadian television channels are required to have at least 50% Canadian programming <cit.>. Note that, in this setting, spreading a minority demographic group across all clusters ensures that the demographic group will never have majority representation in any segment.
As another example, consider a simple voting system for a committee where the goal is to first cluster voters (e.g., employees, faculty) into different constituencies that can then elect a committee representative. Here, a proportionally fair clustering would assign a minority group that represents 30% of the vote equally among each cluster. However, the minority group only gets a benefit (i.e., the ability to elect a candidate of their choice) if they have at least 50% representation in the cluster. In this paper we introduce a new notion of fairness in clustering that addresses this issue. Specifically, we introduce minimum representation fairness (MR-fairness) which requires each group to have a certain number of clusters where they cross a given minimum representation threshold (i.e., 50% in the voting example).
Arguably the most popular algorithms for clustering is Lloyd's algorithm for k-means clustering <cit.>, and the associated alternating minimization approach for k-medians <cit.>.
These iterative algorithms alternate between fixing cluster centers and assigning each point to the closest cluster center.
Both algorithms are guaranteed to return a locally optimal
solution (i.e., no perturbation of the cluster centers around the solution leads to a better clustering cost). However, using these algorithms can lead to clusters that violate MR-fairness. As an example, consider the Adult dataset which contains census data for 48842 individuals in 1994 <cit.>. Suppose we wanted to cluster these individuals into groups that represent different stakeholder groups for a local committee and geographic contiguity was not a concern. A natural fairness criteria in this setting would be to ensure that there are a sufficient number of groups where minority groups (i.e., non-white in this dataset) have majority voting power. Despite the fact that only approximately 85% of the dataset is white, every cluster produced by Lloyd's algorithm is dominated by white members even when the number of clusters is as high as twenty. This highlights the need for a new approach to address fair minority representation.
In this paper we introduce a modified version of Lloyd's algorithm for k-means and k-medians that ensures minimum representation fairness, henceforth referred to as MINIimum REpresentation fair Lloyd's algorithm (MiniReL for short). The key modification behind our approach is to replace the original greedy assignment step with a new optimization problem that finds the minimum cost assignment while ensuring fairness. In contrast to the standard clustering setting, we show that finding a minimum cost clustering that respects MR-fairness is NP-Hard even when the cluster centers are already fixed. We show that this optimization problem can be solved via integer programming (IP) in practice and introduce a number of computational approaches to improve the run-time. We empirically show that our approach is able to construct fair clusters which have nearly the same clustering cost as those produced by Lloyd's algorithm.
§.§ Minimum Representation Fair Clustering Problem
The input to the standard clustering problem is a set of n m-dimensional data points 𝒳 = {x^i ∈R^m}_i=1^n. Note that assuming the data points to have real-valued features is not a restrictive assumption as categorical features can be converted to real-valued features through the one-hot encoding scheme. The goal of the clustering problem is to partition the data points into a set of K clusters 𝒞 = {C_1, …, C_K } in such a way that some measure of clustering cost is minimized.
Let K = {1, …, K} be the set of clusters.
In the fair clustering setting, each data point x^i has a (small) number of sensitive features such as gender and race associated with it. Each sensitive feature can take a finite number of possible values (e.g., male, female, or non-binary for gender).
We denote the set of sensitive features with F and possible values of a feature f ∈ F with the set G_f.
We use G = ∪_f ∈ F G_f to be the set of all possible values of all features, and call each g∈ G a group.
With this notation, each data point x^i is associated with | F| many groups, one for each f∈ℱ (i.e., gender, race).
Let X_g be the set of data points associated with group g∈𝒢 and note that unlike other fair machine learning work, we do not assume that {X_g}_g∈ G form a partition of 𝒳 when | F|>1.
Instead, {X_g}_g∈ G_f forms a partition of 𝒳 for each sensitive feature f ∈ F.
The key intuition behind MR-fair clustering is that individuals belonging to a group gain material benefit only when they have a minimum level of representation in their cluster. We denote this minimum representation threshold α∈ (0,1], and define the associated notion of an α-representation as follows:
A group g∈𝒢 is said to be α-represented in a cluster C_k if
| C_k ∩ X_g | ≥α |C_k|
Note that α represents the minimum threshold needed for a given group to receive benefit from a cluster and thus depends on the application. For instance, most voting systems require majority representation (i.e., α = 0.5). Our framework also allows for α to be group-dependent (i.e., α_g for each group g), however in most applications of interest α is a fixed threshold regardless of group.
For a given clustering 𝒞, group g∈ G, and α∈ (0,1], let Λ(𝒞, X_g, α) be the number of clusters where group g has α-represention. In MR-fairness, each group g has a parameter β_g that specifies a minimum number of clusters where that group should have α-represention.
A given clustering 𝒞 = {C_1, …, C_K} is said to be an (α,β)-minimum representation fair clustering if for every group g ∈𝒢:
Λ(𝒞, X_g, α) ≥β_g
for a given β = {β_g ∈ℤ^+}_g ∈𝒢.
The definition of MR-fairness is flexible enough that the choice of β can (and should) be specialized to each application as well as the choice of α. Also note that up to ⌊ 1/α⌋ groups corresponding to a single feature f can be α-represented in a cluster.
In the remainder of the paper we explore two different natural choices for β that mirror fairness definitions in the fair classification literature. The first sets β_g to be equal for all groups corresponding to the same feature:
β_g = ⌊1/|𝒢_f|⌊α^-1⌋ K ⌋ ∀ g ∈𝒢, f ∈ F
which we denote cluster statistical parity. The second set choice sets β_g to be proportional to the size of the group:
β_g = ⌊|X_g|/n⌊α^-1⌋ K ⌋ ∀ g ∈𝒢
which we denote cluster equality of opportunity. Note that in both cases, β_g should be at most K (i.e., if α is very small, we set β_g = K).
For a given α∈ (0,1] and β = {β_g ∈ℤ^+}_g ∈𝒢, the minimum representation fair k-means and k-medians problems are:
min_𝒞, c_1,…,c_K ∈ L∑_k ∈𝒦∑_x^i ∈ C_k D(x^i, c_k)
s.t. Λ(𝒞, X_g, α) ≥β_g ∀ g ∈𝒢
where L = R^m, D(x^i, c) = x^i - c_2^2 for k-means, and L = X, D(x^i, c) is a distance function for k-medians problem. can the center not belong to the cluster? In the unfair formulation it never wouldn't, not sure what's meaningful in our setting (I think our implementation requires it).
For a given α∈ (0,1] and β = {β_g ∈ℤ^+}_g ∈𝒢, the minimum representation fair k-means problem is:
min_𝒞, c_1,…,c_K∑_k ∈𝒦∑_x^i ∈ C_kx^i - c_k^2
s.t. Λ(𝒞, X_g, α) ≥β_g ∀ g ∈𝒢
The main difference between the fair and the standard versions of the k-means clustering problem is that greedily assigning data points to their closest cluster center may no longer be feasible for the fair version (i.e., assigning a data point to a farther cluster center may be necessary to meet the fairness criteria). Thus the problem can no longer be viewed simply as an optimization problem over cluster centers.
We integrate MR-fairness into two popular clustering paradigms: k-means and k-medians. In both settings the aim is to find a clustering together with a set of cluster centers {c_k ∈ L_k }_k=1^K. In the k-means setting the center of a cluster can be located anywhere in the feature space (i.e., L_k = R^m ∀ k), whereas in k-medians it must be at one of the data points assigned to the cluster (i.e., L_k = C_k). Note that in the standard k-medians setting it is not necessary to explicitly require c_k∈ C_k as any (local) optimal solution would satisfy this. However, this is no longer true in the MR-fairness setting and thus we explicitly constraint the center to belong to the cluster. We denote the cost of assigning a data point x^i to a cluster with center c with D(x^i, c) and the objective of the problem is to minimize the total cost of the clustering over all data points.
In the k-means setting the cost is equal to the squared distance between the data point and the center (i.e., D(x^i, c) = x^i - c^2), whereas in k-medians one can use any distance function.
Combining the standard clustering problem with MR-fairness requirement leads to the following optimization problem:
For a given α∈ (0,1] and β = {β_g ∈ℤ^+}_g ∈𝒢, the minimum representation fair k-means and k-medians problems are:
min_𝒞, c_1,…,c_K∑_k ∈𝒦∑_x^i ∈ C_k D(x^i, c_k)
s.t. Λ(𝒞, X_g, α) ≥β_g ∀ g ∈𝒢,
c_k ∈ L_k ∀ k ∈ K
where L_k = R^m ∀ k, D(x^i, c) = x^i - c_2^2 for k-means, and L_k = C_k, D(x^i, c) is any given distance function for k-medians problem.
The main difference between the fair and the standard versions of the k-means and k-medians clustering problem is that greedily assigning data points to their closest cluster center may no longer be feasible for the fair version (i.e., assigning some data points to farther cluster centers may be necessary to meet the fairness criteria). Thus the problem can no longer be viewed as an optimization problem over cluster centers.
§.§ Related Work
A recent flurry of work in fair clustering has given rise to a number of different notions of fairness. One broad line of research, started by the seminal work of <cit.>, puts constraints on the proportion of each cluster that comes from different groups. This can be in the form of balance <cit.> which ensures each group has relatively equal representation, or a group specific proportion such as the bounded representation criteria <cit.> or maximum fairness cost <cit.>. MR-fairness bares a resemblance to this line of work as it puts a constraint on the proportion of a group in a cluster, however instead of constraining a fixed proportion across all clusters it looks holistically across all clusters and ensures that threshold is met in a baseline number of clusters. Another line of work tries to minimize the worst case average clustering cost (i.e., k-means cost) over all the groups, called social fairness <cit.>.
Most similar to our algorithmic approach is the Fair Lloyd algorithm introduced in <cit.> which studies social fairness. They also present a modified version of Lloyd's algorithm that converges to a local optimum. As a consequence of the social fairness criterion their approach requires a modified center computation step that can be done in polynomial time. MR-fairness, however, requires a modified cluster assignment step that is NP-hard which we solve via integer programming.
Most similar to MR-fairness is diversity-aware fairness introduced in <cit.> and the related notion of fair summarization <cit.>. These notions of fairness require that amongst all the cluster centers selected, a minimum number comes from each group. MR-fairness differs in that our criteria is not tied to the group membership of the cluster center selected but the proportion of each group in a given cluster. Our notion of fairness
is more relevant in settings where the center cannot be prescribed directly, but is only a function of its composition (i.e., in voting where members of a `cluster' elect an official).
There is also a long line of research that looks at fairness in the context of gerrymandering <cit.>. While our notion of fairness shares some similarity with different notions of fairness in gerrymandering, the gerrymandering problem places different constraints on the construction of the clusters such as contiguity. Consequently the algorithmic approaches to tackle gerrymandering generally require more computationally intensive optimization procedures that do not readily transfer to the machine learning setting.
§.§ Main Contributions
We summarize our main contributions as follows:
* We introduce a novel definition of fairness for clustering of practical importance called MR-fairness, which requires that a specified number of clusters should have at least α percent members from a given group.
* We formulate the problem of finding a MR-fair k-means or k-medians clustering in a mixed integer optimization framework, and introduce a new heuristic algorithm MiniReL, based on Lloyd's algorithm for clustering, to find a local optimum.
* We show that unlike other notions of proportional fairness, this problem can not be approximated by adjusting unfair cluster centers.
* We show that incorporating MR-fairness into Lloyd's algorithm leads to a NP-Hard sub-problem to assign data points to fixed cluster centers, which we call the Fair Minimum Representation Assignment (FMRA) Problem.
* We introduce a two-stage decomposition approach to solving the FMRA problem that includes a polynomial time bi-criteria approximation algorithm based on a network flow formulation. We also introduce a polynomial time heuristic for setting the first-stage variables.
* We present numerical results to demonstrate that MiniReL is able to construct MR-fair clusterings with only a modest increase in run-time and with little to no increase in clustering cost compared to the standard k-means or k -medians clustering algorithm.
An initial version of this paper was published in a conference publication <cit.> which introduced MR-fairness and a basic version of the MiniReL algorithm with pre-fixing. In this work we extend MiniReL to the k-medians setting and present new computational studies to show its efficacy. In addition, we build upon MiniReL's algorithmic framework and introduce a two-stage decomposition framework to solve the FMRA including a polynomial time heuristic for solving the FMRA under pre-fixing built around a network flow model. We also provide additional theoretic results on the total unimodularity of the heuristic pre-fixing IP presented in the conference version, and additional NP-hardness results on variants of the FMRA problem.
The remainder of the paper is organized as follows. In Section <ref> we present a mixed integer optimization formulation for the MR-fair clustering problem and introduce MiniReL. In Section <ref> we introduce computational approaches to help our algorithm scale to large datasets. Finally Section <ref> presents a numerical study of MiniReL compared to the standard Lloyd's algorithm.
§ MIXED INTEGER OPTIMIZATION FRAMEWORK
We start by formulating the MR-fair k-means clustering problem as a mixed-integer program with a non-linear objective. We use binary variable z_ik to denote if data point x^i is assigned to cluster k, and variable c_k ∈ℝ^d to denote the center of cluster k. The binary variable y_gk indicates whether group g is α-represented in cluster k. Let L_k(z) be the set of allowable cluster center locations for cluster k as a function of the current cluster assignments, (i.e., L_k(𝐳) = R^m for k-means). To represent L_k in the k-medians case, we introduce additional binary decision variables d_ik to denote if data point i is selected as the center for cluster k. The set L_k(𝐳) is the set defined by the following constraints:
L_k(𝐳) = {c_k∈ℛ^m : c_k = ∑_x^i ∈ X d_ik x^i,
d_ik≤ z_ik ∀ x^i ∈ X , ∑_x^i ∈ X d_ik = 1,
d_ik∈{0,1} ∀ x^i ∈ X}
We can now formulate the MR-fair clustering problem as follows:
min ∑_x^i ∈𝒳∑_k ∈𝒦 D(x^i,c_k) z_ik
s.t. ∑_k ∈𝒦 z_ik = 1 ∀ x^i ∈𝒳
∑_x^i ∈ X_g z_ik + M (1 - y_gk) ≥α∑_x^i ∈𝒳 z_ik ∀ g ∈𝒢, k ∈𝒦
∑_k ∈𝒦 y_gk ≥β_g ∀ g ∈𝒢
u≥∑_x^i ∈𝒳 z_ik ≥ l ∀ k ∈𝒦
z_ik ∈{0,1} ∀ x^i ∈𝒳, k ∈𝒦
y_gk ∈{0,1} ∀ g ∈𝒢, k ∈𝒦
c_k ∈ L_k(z) ∀ k ∈𝒦
The objective (<ref>) is to minimize the cost of the clustering. Constraint (<ref>) ensures that each data point is assigned to exactly one cluster. Constraint (<ref>) tracks whether a cluster k is α-represented by a group g, and includes a big-M which can be set to α n. Finally, constraint (<ref>) tracks that each group g is α-represented in at least β_g clusters. In many applications of interest, it might also be worthwhile to add a constraint on the size of the clusters to ensure that each cluster has a minimum/maximum number of data points. Constraint (<ref>) captures this notion of a cardinality constraint where l and u represent the lower and upper bound for the cardinality of each cluster respectively. Note that in cases where the cardinality constraint is used, the big-M in constraint (<ref>) can be reduced to α u. In all our experiments we make sure that l ≥ 1 so that exactly k clusters are returned by the algorithm. This ensures that each group is α-represented in non-trivial clusters. Note that every group would be trivially α-represented in an empty cluster according to Definition <ref> but would provide little practical use.
We can now formulate the MR-fair clustering problem as follows:
min ∑_x^i ∈𝒳∑_k ∈𝒦x^i - c_k_2^2 z_ik
s.t. ∑_k ∈𝒦 z_ik = 1 ∀ x^i ∈𝒳
∑_x^i ∈ X_g z_ik + M (1 - y_gk) ≥α∑_x^i ∈𝒳 z_ik ∀ g ∈𝒢, k ∈𝒦
∑_k ∈𝒦 y_gk ≥β_g ∀ g ∈𝒢
u≥∑_x^i ∈𝒳 z_ik ≥ l ∀ k ∈𝒦
z_ik, y_gk ∈{0,1} ∀ x^i ∈𝒳, g ∈𝒢, k ∈𝒦
c_k ∈R^m ∀ k ∈𝒦
The objective (<ref>) is to minimize the sum of the squared distances between the data points and the centers of the clusters they are assigned to. Constraint (<ref>) ensures that each data point is assigned to exactly one cluster. Constraint (<ref>) tracks whether a group g is α-represented in cluster k, and includes a big-M which can be set to α n. Finally, constraint (<ref>) enforces that each group g is α-represented in at least β_g clusters. In many applications of interest, it might also be necessary to add a constraint on the size of the clusters. Constraint (<ref>) captures this notion where l and u represent the lower and upper bounds on the cardinality of each cluster respectively. Note that in cases where the cardinality constraint is used, the big-M in constraint (<ref>) can be reduced to α u. For all our experiments we set l = 1 to ensure that exactly k clusters are returned by the algorithm. Adding a lower bound also ensures that each group is α-represented in non-trivial clusters. Note that without this lower bound every group would be trivially α-represented in an empty cluster according to Definition <ref>.
To solve problem (<ref>)-(<ref>) in practice, we introduce a modified version of Lloyd's algorithm called MiniReL, which requires solving a fair assignment problem, in Section <ref>. In Section <ref> we show that,
unlike other notions of fairness in clustering, first solving the clustering problem without fairness constraints and then finding a fair assignment of the data points to these centers can lead to arbitrarily poor results under MR-fairness which further justifies the use of an iterative approach such as MiniReL.
§.§ MiniReL Algorithm for Fair Clustering
Solving the optimization problem outlined in the preceding section to optimality is computationally challenging as it is a large scale integer optimization problem with a non-convex objective function. To solve the problem in practice, we introduce a modified version of Lloyd's algorithm which we call the Minimum Representation Fair Lloyd's Algorithm (MiniReL) that alternates between adjusting cluster centers and fairly assigning data points to clusters to converge to a local optimum, see Algorithm <ref>.
Note that the only difference between Lloyd's algorithm for k-means and k-medians is how centers are computed. Given a fixed set of cluster assignments (i.e., when variables z are fixed in (<ref>)-(<ref>)) the optimal choice of c_k is the mean value of data points assigned to C_k for k-means whereas it is found by searching across all data points in the cluster for k-medians <cit.>. In the case of multiple centers with the same cost in the k-medians setting we use deterministic tie-breaking and chose the center corresponding to the data point that comes first in the lexicographic ordering of the dataset which we set-up at the beginning of the algorithm. For simplicity, we will refer to both algorithms (with separate center computation approaches) as Lloyd's algorithm for the remainder of the paper.
For a given (fixed) set of cluster centers c_k we denote the problem (<ref>)-(<ref>) the fair minimum representation assignment (FMRA) problem which is a linear integer program. While the optimal assignment step in Lloyd's algorithm can be done in polynomial time, the following result shows that the FMRA problem is NP-Hard.
The fair minimum representation assignment problem is NP-Hard.
See Appendix <ref> for proof. Note that if the FMRA problem is infeasible (for any given collection of cluster centers), it provides a certificate that no MR-fair clustering with the given α, β exists. While integer programs do not always scale well to large datasets, in our computational experiments we observed that FMRA can be solved to optimality in a reasonable amount of time even for datasets with tens of thousands of data points. In Section <ref> we describe the computational techniques that help scale our algorithm.
A natural question is whether MiniReL converges to a locally optimal solution as Lloyd's algorithm does. When discussing local optimality, it is important to formally define the local neighborhood of a solution. In the absence of fairness constraints, data points must be assigned to the closest centers to minimize cost. Consequently, a clustering is locally optimal if perturbing the centers does not improve the clustering cost.
In our setting, we define a local change as any perturbation to a cluster center, or an individual change to cluster assignment (i.e., moving a data point from one cluster to another). With this notion of a local neighborhood, the following result shows that the MiniReL converges to a local optimum in finite time. Note that while MiniReL converges to a local optimum, the solution may be arbitrarily worse than the global optimum as is the case for Lloyd's algorithm.
MiniReL converges to a local optimum in finite time.
For proof please see Appendix <ref>.
§.§ A Natural Solution Approach and an Inapproximability Result
One natural approach <cit.> used for other notions of fairness in clustering is to first obtain cluster centers by solving the clustering problem without fairness constraints and then find a fair assignment of the data points to these centers. This has been shown to provide an overall approximation guarantee for the full optimization problem in some settings even when the centers are only approximately optimal for the clustering problem without fairness constraints <cit.>.
Unfortunately, we show that in our MR-fairness setting this approach, which we call one-shot fair adjustment, can lead to arbitrarily bad solutions. Let z^*, c^* be the optimal solution to the MR-fair clustering problem. Let c^*_UF be the optimal centers for the clustering problem without fairness constraints and z^*_FA be the optimal (fair) assignment of data points to centers c^*_UF. Note both z^* and z^*_FA are feasible fair assignments. Let COST(z, c) = ∑_x^i ∈𝒳∑_k ∈𝒦 D(x^i, c_k) z_ik be the objective for the clustering problem.
There does not exist a constant M > 0 such that:
COST(z^*_FA, c^*_UF) ≤ M · COST(z^*, c^*)
In other words, fairly assigning data points to (approximately) optimal unfair centers can lead to arbitrarily bad performance relative to the optimal solution of the MR-fair clustering problem.
Proof of Theorem <ref>
To prove the claim we will construct an instance of the MR-fair k-means problem in two dimensions with four data points and three groups (Red, Blue, and Yellow). The first two points are located at (0,0) and belong to the red and blue groups, respectively. The next two data points belong to the yellow group and are located at (γ,0) and (γ,ϵ) for some γ,ϵ>0. For the fair clustering problem, we set K=3, α > 0.5, and β_R=β_Y=β_B=1.
Note that any optimal solution to the clustering problem without fairness constraints will place the cluster centers c^*_UF at (0,0), (γ,0), (γ,ϵ) with cost 0. It is straightforward to see that fairly assigning points to these fixed centers requires assigning either the red or blue data point to the center at (γ, 0) and assigning the yellow data point at (γ, 0) to the center at (γ, ϵ) yielding a cost of γ^2 + ϵ^2. Now consider the optimal solution to the fair clustering problem which selects centers at (0,0), (0,0), (γ,ϵ/2). It assigns the red data point to the first center at (0,0), the blue data point to the second center at (0,0), and both yellow data points to the center at (γ,ϵ/2) which satisfies the fairness constraints and gives a total cost of ϵ^2/2. The ratio of the costs of the two solutions is γ^2 + ϵ^2/ϵ^2/2 which can be made arbitrarily large by increasing γ, completing the proof. Note that the same construction, with the same unfair centers and optimal fair centers at (0,0), (0,0), (γ,0) proves the same result in the k-medians setting. Figure <ref> shows both the bad instance and the optimal centers for the fair and unfair problems.
Note that using the simple example constructed above, one can argue that the cost of the optimal fair clustering can be arbitrarily larger than the cost of the optimal clustering without fairness. Also note that the above proof also shows that one can get an arbitrarily bad solution when the (unfair) cluster centers are chosen using a constant factor approximation algorithm.
§ SCALING MINIREL: TWO-STAGE DECOMPOSITION
The main computational bottleneck of the MiniReL algorithm is solving the FMRA problem, which simultaneously selects which groups are α-represented in which clusters (i.e., set y_gk variables) and assigns data points to clusters in a way that satisfies said constraints (i.e. sets z_ik variables). It is a computationally demanding problem to solve in practice both due to its scale (i.e., as the number of binary variables scales with the number of data points), symmetry, and its use of big-M constraints (i.e., that lead to weak linear relaxations <cit.>). To alleviate these computational challenges, we introduce a heuristic two-stage decomposition scheme that separates setting the y variables and z variables into two sequential problems.
The first stage problem, which we call the (), sets the y_gk variables.
The second stage problem, which we call the (), sets the z_ik variables. One approach to solve the first-stage problem is to solve the FMRA problem with relaxed z variables (i.e., problem (<ref>)-(<ref>) with fixed centers, y_gk binary and z_ik∈ [0,1]). This dramatically reduces the number of binary variables (relaxing nK binary variables), making the problem much faster to solve than FMRA in practice. While this approach does not guarantee the feasibility of an integral second-stage solution, in Section <ref> we show that this fractional assignment can be rounded to an integer assignment with only small violation to the fairness constraints. In Section <ref> we present an alternative heuristic approach to find a solution to the first-stage problem in polynomial time without the accompanying feasible fractional assignment of data points.
The second stage problem () is akin to solving the FMRA problem with a fixed set of y_gk variables.
This removes the need for the y variables and the associated big-M constraints in the model formulation and breaks symmetry in the IP (i.e., removes permutations of feasible cluster assignments), dramatically improving the computation time. However, despite this simplification, the following result shows that the second-stage problem is still NP-Hard.
The Problem () is NP-Hard.
See Appendix <ref> for proof. Despite this negative result, in Section <ref> we present a polynomial time algorithm that solves with a small additive fairness violation. We emphasize that this two-stage decomposition scheme is a heuristic which is not guaranteed to solve the FMRA problem to optimality. However, we found in practice this two-stage approach is able to find solutions that have similar objectives in a fraction of the time of solving the full IP (see Section <ref> for an empirical evaluation). While both stages of the two-stage decomposition scheme can be solved sequentially at every iteration of the MiniReL algorithm, we found in practice the first stage solution changed infrequently during the execution of the algorithm. To further improve the computation time, we also present a pre-fixed version of the algorithm where we only set the y_gk variables once at the beginning of the algorithm. In problems where only a single group can have α-represention in a cluster (i.e., a data point can only be part of one group and α > 0.5), pre-fixing preserves an optimal solution to the full problem. However, in more complicated settings pre-fixing may remove all optimal solutions and becomes a heuristic for improving run-time. It is worth noting that the MiniReL algorithm is itself a heuristic, and thus the pre-fixing scheme has ambiguous effects on the cost of the solution as it may cause the algorithm to converge to a better local optimum. Figure <ref> presents a visual summary of the three different control flows for the MiniReL algorithm.
§.§ Solving via Polynomial Time Heuristic
Recall that the goal of the first stage problem is to find an assignment of groups to be α-represented in clusters to meet the MR-fairness constraints. In this section we present a polynomial time heuristic to find good performing settings of the y_gk variables.
To find a good pre-fixing of the y variables, we use a fixed set of cluster centers and formulate a small integer program to find the lowest cost way to greedily meet the MR-fairenss constraints. There are multiple ways to formulate the cost of setting the y variables - we look at the myopic increase in clustering cost needed to meet the constraints (see Appendix <ref> for a computational comparison of different objectives). For a given cluster k and group g, let q_kg≥0 be the additional number of points from group g needed to make this group α-represented in cluster k (i.e., smallest integer q_kg≥0 that satisfies q_kg + |C_k∩ X_g| ≥α(q_kg + |C_k|)). Let c_(x) = _ c ∈{c_1, …, c_K}{x - c_2^2} denote the closest center for point x ∈ X.
To find a good pre-fixing, we estimate the (myopic) increase in cost m_gk to make group g α-represented in cluster k as follows:
m_gk = min_X ⊂ X_g ∖ C_K: |X| = q_kg∑_x ∈ X(D(x^i, c_k) - D(x^i, c_(x))))
Note that this objective is a heuristic that does not take into account the impact of moving data points on the satisfaction of α-representation constraints from the cluster in which it is being moved from. We can now formulate the problem of performing the pre-fix assignment as follows:
1.5cmmin ∑_(g,k) ∈𝒲 m_gk y_gk
s.t. ∑_k ∈𝒦:(g,k) ∈𝒲 y_gk ≥β_g ∀ g ∈𝒢
∑_g ∈𝒢_fy_gk ≤⌊1/α⌋ ∀ k ∈𝒦, f ∈ F
y_gk ∈{0,1} ∀ g ∈𝒢,k ∈𝒦
where y_g,k is a binary variable indicating whether group g will be α-represented in cluster k.
The objective (<ref>) is a proxy for the cost of pre-fixing. Constraint (<ref>) ensures enough clusters are allocated to each group to meet the MR-fairness constraint. Finally constraint (<ref>) ensures no cluster is assigned more groups corresponding to the same feature than can simultaneously have α-representation.
Note that the size of this formulation does not depend on the size of the dataset and it has only |𝒦| × |𝒢| variables. As mentioned earlier, when α > 0.5 and the groups are disjoint (i.e., no group can be in multiple groups) then this pre-fixing scheme simply removes symmetry. In other cases, this operation is a heuristic that leads to substantial speedups to the overall solution time. We now show that the constraint matrix for the pre-fixing problem (<ref>)-(<ref>) is totally unimodular, and thus can be solved in polynomial time.
The constraint matrix for the pre-fixing problem is totally unimodular.
Proof of Theorem <ref>
Note that every variable y_gk only appear in exactly two constraints - one constraint <ref> for g and one constraint <ref> for k. Thus it suffices to show that there exists an equitable row bi-coloring for the full constraint matrix <cit.>. Set all constraints <ref> to one color, and all constraints <ref> to another color. Note that this is an equitable bi-coloring as the difference in row sums between the two colors will all be 0, completing the proof.
§.§ Solving via Network Flow Rounding
In this section, we introduce a polynomial time algorithm to solve the second-stage problem while approximately meeting the fairness constraints. The key intuition behind our approach is that we take the fractional assignment of data points coming from the first stage problem (or from solving the linear relaxation of the second stage problem) and round it to an integer assignment using a min-cost network flow model. Recall that in the problem, clusters have already been assigned groups that must be α-represented in them (i.e., the y_gk variables are fixed).
Given a fractional solution z^LP, we construct a graph G=(V,E) as follows. V consists of three sets of vertices V_x, V_HK, V_K which we define below. For every data point x^i ∈ X we construct a vertex v_i ∈ V_X with a supply of 1. For every cluster k we create a set of vertices V_HK that partition the dataset (i.e., every data point can be mapped to exactly one vertex) and allow us to track how many data points of each group with y_gk=1 are assigned to cluster k. Let G_k be the set of groups g such that y_gk=1. We create one vertex for every possible combination of attributes in the dataset such that the attribute corresponds to a group g ∈ G_k plus one `remainder' vertex that corresponds to any other data point. For instance, consider a dataset with two sensitive features : Gender (Male, Female, Non-binary) and Age (Youth, Adult, Senior), and a cluster k that was pre-fixed to have the Female, Youth, and Adult groups α-represented. A set of combinations for the cluster are (Female, Youth), (Female, Adult), (Female, NOT {Youth or Adult}), (NOT Female, Youth), (NOT Female, Adult), and (NOT Female, NOT {Youth or Adult}). Let H_k be the set of such combinations for a cluster k. For each combination of groups c ∈ H_k we create create one node v_ck∈ V_HK with demand d_ck = ⌊∑_x^i ∈⋂_g ∈ c X_g z_ik^LP⌋. We create an edge with capacity one between data point x^i and every node v_ck if x^i ∈∩_g ∈ c X_g. The cost for these edges is equal to the clustering cost of assigning x^i to the fixed center c_k (i.e., D(x^i, c_k)). The costs of all other edges in this graph are 0.
For every cluster k we also create a node v_k ∈ V_K with demand d_k = ⌊∑_x^i ∈ X z_ik^LP⌋ - ∑_c ∈ H_k d_ck. We create an edge with capacity one between every node v_k and every node v_ck corresponding to the same cluster. Finally, we add a sink note v_t with demand d= | X| - ∑_k (d_k + ∑_c ∈ H_k d_ck) to capture any residual supply. For every node v_k ∈ V_k with fractional weight assigned to it in the LP solution (i.e., ∑_x^i ∈ Xz^LP_ik∉ℤ) we create an edge with capacity one between it and the node v_t. Figure <ref> shows a sample network flow formulation. Note that this min-cost flow problem can be solved in polynomial time via standard min-cost flow algorithms <cit.>. We denote the minimum cost flow problem associated with this graph the Flow- problem. We now show that by solving the Flow- problem we can generate a good assignment of data points to clusters:
Solving the Flow- problem yields a binary assignment of data points to clusters that respects the cardinality constraints and has a cost that does not exceed the cost of the LP relaxation of the problem.
Proof Recall z^LP is the optimal fractional assignment of the problem (found by solving the LP relaxation or as the output of the problem). We now construct an integer solution z̅ via solving the Flow- problem. Note that since all capacities and demands in the Flow- problem are integer, the optimal solution to the FMRA-problem is an integer flow. We can interpret this flow as a binary assignment of data points to clusters by setting z̅_ik = 1 if there is a unit of flow between vertex v_i ∈ V_ X and any vertex v ∈ V_Hk, and 0 otherwise. By construction of the Flow- problem and the integrality of the optimal flow, we know every data point will be assigned to exactly one cluster. Since the LP solution is feasible for the network flow problem, we know that the cost of the solution z̅ is lower or equal to the cost of the LP solution.
Note that at least ⌊∑_x^i ∈ X z_ik⌋ and at most ⌈∑_x^i ∈ X z_ik⌉ data points are assigned to cluster k. This follows from the fact that a total of d_k + ∑_c ∈ H_k d_ck = ⌊∑_x^i ∈ X z_ik⌋ demand is consumed in nodes corresponding to cluster k and the edge capacity of the edge of 1 between v_k and v_t ensures at most one additional point is allotted to cluster k. We now claim that the solution to the FMRA-flow problem satisfies the cardinality constraints. Suppose it did not, then ⌊∑_x^i ∈ X z_ik⌋ < l or ⌈∑_x^i ∈ X z_ik⌉ > u. This implies that ∑_x^i ∈ X z_ik < l or ∑_x^i ∈ X z_ik > u, which would violate constraints (<ref>) and contradicts z^LP being a solution to the LP relaxation of the FMRA problem.
It remains to show the impact of this rounding scheme on the fairness of the final solution. Specifically, we look at the impact of this network flow rounding procedure on the satisfaction of the α-representation fairness constraints (i.e., constraints (<ref>) for group g and cluster k where y_gk=1). Recall the following definition for an additive constraint violation:
For a given constraint g(x) ≤ 0, a solution x̅ is said to have an additive constraint violation of δ for δ≥ 0 such that
g(x̅) ≤δ.
We now show that the assignment produced by the Flow- problem has a small additive violation of the MR-fairness constraints.
Let γ = min(⌈α^-1⌉, max_f ∈ F| G_f|). The binary assignment from the Flow- problem has at most an additive fairness violation of:
γ^| F| - 1 + α𝕀(γ > 2)
Proof
Let δ≥ 0 be the additive fairness violation of the constraint for group g in cluster k by binary assignment z̅. We can re-write this violation as:
δ = α∑_x^i ∈ Xz̅_ik - ∑_x^i ∈ X_gz̅_ik =α∑_x^i ∉ X_gz̅_ik - (1-α)∑_x^i ∈ X_gz̅_ik
The worst-case scenario when rounding the LP solution, with respect to the fairness constraint, is to round down the number of data points in g assigned to the cluster and to round up the number of data points outside g assigned to the cluster. Let f_g, f_g'≥ 0 be the difference between the fractional and integer assignment for the group g and data points outside g respectively:
f_g = ∑_x^i ∈ X_g z_ik^LP - ∑_x^i ∈ X_gz̅_ik
f_g' = ∑_x^i ∉ X_gz̅_ik - ∑_x^i ∉ X_g z_ik^LP
Rewriting the fairness constraint we get:
δ = α∑_x^i ∉ X_g z_ik^LP + α f_g' -
(1-α)∑_x^i ∈ X_g z_ik^LP - (1-α)f_g
= (1-α)f_g + α f_g' + ( α∑_x^i ∉ X_g z_ik^LP - (1- α)∑_x^i ∈ X_g z_ik^LP)
≤ (1-α)f_g + α f_g'
with the final inequality coming from the feasibility of the LP solution for the fairness constraint. Let F_k be the set of sensitive features f with at least one group g ∈ G_f α-represented in cluster k, and let η_fk = max(1+∑_g ∈ G_f y_gk, | G_f|). Note that there are a total of ∏_f ∈ F_kη_fk vertices for each cluster k. Let V_gk⊂ V_HK be the set of nodes v_ck such that g ∈ c, and note |V_gk| = ∏_f ∈ F_k:
g ∉ G_fη_fk. Note that the V_gk sets do not for a partition of V_HK as a node v_ck can correspond to multiple groups. Similarly let V_g'k be the set of nodes v_ck such that g ∉ c. Note that V_gk≤ V_g'k, with the comparison being strict only when | G_f| > 2. By construction we know that for every node v_ck∈ V_gk at least d_ck = ⌊∑_x^i ∈∩_g ∈ c X_g z_ik^LP⌋ units of flow are routed to it. Thus f_g ≤∑_v_ck∈ V_gk∑_x^i ∈∩_g ∈ c X_g z_ik^LP - d_ck≤ |V_gk|. By a similar argument we also have f_g'≤ |V_g'k|. By construction the total number of data points assigned to cluster k is at most ⌈∑_x^i ∈ X z_ik^LP⌉, which means f_g' - f_g≤ 1. Therefore the worst-case fairness violation from rounding can be seen as the following optimization problem:
max_f_g, f_g'≥ 0 (1-α)f_g + α f_g' s.t. f_g' - f_g≤ 1, f_g'≤ |V_g'k|, f_g≤ |V_gk|
Recall that V_gk≤ V_g'k and thus by inspection, the optimal solution is (1-α)|V_gk| + αmin(1+|V_gk|, |V_g'k|). Note that |V_g'k| > |V_gk| only if max_f ∈ F | G_f| > 2 AND ⌈α^-1⌉ > 2 (i.e., there can be ≥ 3 nodes corresponding to the same sensitive feature). This condition is equivalent to saying γ > 2. Finally, we complete the proof by bounding |V_gk|. Recall that by construction:
|V_gk| = ∏_f ∈ F_k: g ∉ G_fη_fk ≤∏_f ∈ F_k: g ∉ G_fγ ≤ γ^| F|-1
We now show that this result generalizes the earlier result of <cit.> and gives an additive fairness violation of 1 in the special case of two disjoint groups.
In the special case when | G| = 2 and | F|= 1, the FMRA-flow problem guarantees of an additive fairness violation of at most 1.
Proof of Corollary <ref>
Follows from the fact that in the two group case | F| = 1 and γ = 2.
One limitation of Theorem <ref> is the the worst-case fairness violation scales exponentially in the number of sensitive features | F|. In the following result, we show that the fairness violation is also upper bounded by K and the number of α-representation constraints and thus scales linearly with | F|:
The binary assignment from the Flow- problem has at most an additive fairness violation of:
K + ∑_g ∈ Gβ_g
≤ K + K γ | F|
Proof We prove this result via a counting argument based on the linear relaxation of the problem. Start by solving this linear relaxation to generate z^LP. Take all z_ik variables with values in {0,1} and fix them, then re-solve the LP. We now have a LP where all basic z_ik variables are fractional. We now bound the number of data points with fractional variables, denoted X̂. Let n_frac be the number of fractional basic z_ik variables. Note that for every data point x^i ∈ X there must be at least two basic fractional variables z_ik associated with it, implying 2 |X̂| ≤ n_frac.
Also note that the LP has at most the following tight constraints:
* |X̂| constraints corresponding to constraint (<ref>).
* ∑_g ∈ Gβ_g ≤ K γ | F| constraints of type (<ref>) corresponding to the pre-fixed α-representation constraints.
* K active upper or lower bound constraints of type (<ref>).
In total there are at most K + K γ_MAX + X̂ tight constraints, implying n_frac≤ K + K γ_MAX + X̂.
Combining both the upper and lower bound for n_frac and re-arranging terms we get that X̂≤ K + ∑_g ∈ Gβ_g. The worst-case fairness violation is upper-bounded by the number of fractional variables, completing the result.
§ NUMERICAL RESULTS
To benchmark our approach, we evaluate it on three datasets that have been used in recent work in fair clustering: adult (n=48842, m=14) <cit.>, default (n=30000, m=24) <cit.>, and Brunswick County voting data (n=49190, m=2) <cit.> where n denotes the number of data points and m denotes the initial number of features. We pre-process the Brunswick County voting data by geocoding the raw addresses to latitude and longitude using the US census bureau geocoding tool and retain all data with a successful geocoding that belong to white and Black voters. For each dataset we use one sensitive feature to represent group membership - namely gender for both adult (67% Male, 33% Female) and default (60% Male, 40% Female), and race for voting (92% white, 8% Black). We also report results for the Adult dataset with two sensitive features (Adult-2F) where the second sensitive feature is marital status (47% married, 33% never married, 20% divorced or widowed). For all datasets we normalize all real-valued features to be between [0,1] and convert all categorical features to be real-valued via the one-hot encoding scheme. For datasets that were originally used for supervised learning, we remove the target variable and do not use the sensitive attribute as a feature for the clustering itself. The k-medians algorithms require computing a distance matrix between all pairs of points leading to a large memory requirement (O(n^2)). To circumvent memory issues for large datasets, we sub-sample all datasets to have 10,000 data points. We use the same random sub-sample for all algorithms to provide a fair comparison.
We implemented MiniReL in Python with Gurobi 10.0 <cit.> for solving all IPs. We warm-start MiniReL with the output from the baseline version of Lloyd's algorithm (details and evaluation of this warm-starting is included in Appendix <ref>). All experiments were run on a computing environment with 16 GB of RAM and 2.7 GHz Quad-Core Intel Core i7 processor. For the following experiments we set α = 0.51 to represent majority representation in a cluster. We experiment with different settings of α in Appendix <ref>. We also set ℓ = 1, u = n to provide a fair comparison to Lloyd's algorithm with no cardinality constraints. We provide some additional experiments with balanced clusters (i.e., l ≈n/k) in Appendix <ref>.
We benchmark MiniReL against Lloyd's algorithm for k-means and its associated alternating minimization algorithm for k-medians <cit.>. For the k-means setting we use the implementation available in scikit-learn <cit.> with a k-means++ initialization. For the k-medians setting we compare MiniReL against the alternating minimization approach for k-medians using the implementation available in scikit-learn extra package <cit.> with a k-mediods++ initialization. We use euclidean distance to compute distances between points. For both settings, we run the algorithm with 100 different random seeds. We report the clustering with the lowest clustering cost, which we denote k-means/k-medians respectively, and the fairest clustering with respect clustering statistical parity (k-means-SP/k-medians-SP) and cluster equality of opportunity( k-means-EqOp/ k-medians-EqOp).
§.§ Comparing different variants of the MiniReL
We evaluate the following 5 different variants of the MiniReL algorithm to show the impact of different algorithmic components on its performance:
* MiniReL: The baseline algorithm that solves the full FMRA at each iteration.
* MiniReL-TwoStage: Decomposes the FMRA by sequentially solving the , by relaxing the z variables and solving the problem via IP, and , using IP.
* MiniReL-TwoStageFlow: Uses the two-stage decomposition and solves the second stage using the network flow approach introduced in section <ref>.
* MiniReL-PrefixFlow: Uses the two-stage decomposition with pre-fixing (i.e., solves the via IP once) and solves the second stage using the network flow approach introduced in section <ref>.
* MiniReL-PrefixHeurFlow: Uses the two-stage decomposition with pre-fixing with the polynomial time heuristic introduced in section <ref> and solves the second stage using the network flow approach introduced in section.
For the sake of brevity we report results for this ablation study in the k-means setting, but the results were similar in the k-medians setting. Figure <ref> shows the performance of these five different variants with respect to cluster quality, computation time, and average time to solve the assignment stage of the algorithm. All the amounts presented are normalized to show the percent change with respect to the base MiniReL algorithm, and averaged over k ∈ [2,14]. Negative values indicate an improvement over the baseline algorithm. Overall, the results show that the two-stage decomposition (MiniReL-TwoStage) approach leads to a decrease in computation time over the baseline algorithm leading to as much as a 20% percentage reduction in computation time on the Adult data set with two sensitive features. Solving the second stage problem via network flow (MiniReL-TwoStageFlow) also gives a small additional reduction in computation time over the the two-stage decomposition alone. Both approaches have no large impact on the objective (i.e., clustering cost) of the final solutions). Adding pre-fixing (Minirel-PrefixFlow) while still solving the problem via IP yields an additional reduction in computation time in most of the datasets - the exception being the Default that had on average only 1.2 iterations that needed to use IP. Solving the problem via the polynomial time heuristic led to a large additional reduction in computation time (an additional 30% in the adult dataset) but comes with a slight reduction to the quality of the clusters found. In most datasets this increase in cluster cost is small (under 1% on average) but in the voting dataset this led to a 7% point increase.
§.§ k-means results
We compare the full version of MiniReL with heuristic pre-fixing and network flow assignment in the k-means setting with the standard Lloyd's algorithm for k-means. We evaluate both algorithms with respect to cluster statistical parity (i.e., every group must be α-represented in the same number of clusters), and cluster equality of opportunity (i.e., every every group must be α-represented in a number of clusters proportional to the size of the group).
Figure <ref> (top 2 rows) shows the maximum deviation from the fairness parameters β_g (i.e., max_g max(β_g - Λ (C, X_g, α), 0)) for the k-means algorithms and the variants of MiniReL that use IP to strictly enforce fairness. Figure <ref> (bottom 2 rows) shows the total additive fairness violation normalized by the size of the dataset for the k-means algorithms and the variant of MiniReL that uses network flow to approximately enforce fairness.
Across all three datasets we can see that k-means can lead to outcomes that violate MR-fairness constraints significantly, and that selecting the fairest clustering has only marginal improvement. This is most stark in the default dataset where there is as much as an 11 cluster gap (6 cluster fairness violation) between the two groups despite having similar proportions in the dataset. In contrast, the MiniReL algorithm is able to generate fair clusters under both notions of fairness and for all three datasets. This result also applies in the additive fairness violation setting, where the k-means algorithms generate large constraint violations (as much as 20% of the dataset size for the voting dataset) whereas the MiniReL algorithms with flow assignment have normalized additive fairness violations close to 0%.
Table <ref> shows the average computation time in seconds for Lloyd's algorithm and MiniReL-PrefixHeurFlow under cluster statistical parity. We display results for the cluster statistical parity as it was the more computationally demanding setting requiring more MiniReL iterations than cluster equality of opportunity. As expected, the harder assignment problem in MiniReL leads to higher overall computation times when compared to the standard Lloyd's algorithm. However, MiniReL is still able to solve large problems in under 200 seconds demonstrating that the approach is of practical use.
One remaining question is whether fairness comes at the expense of the cost of the clustering. Figure <ref> shows the k-means clustering cost for Lloyd's algorithm and MiniReL under both definitions of fairness. Although there is a small increase in the cost when using MiniReL the overall cost closely matches that of the standard k-means algorithm in 5 out of 6 instances, with the sole exception being the voting dataset under statistical parity, showing that we can gain fairness at practically no additional increase to clustering cost.
§.§ k-medians results
In this section we benchmark MiniReL against the Lloyd-style k-medians algorithm <cit.>. The results closely parallel those in the k-means section and thus in the interest of brevity we present a subset of results for the algorithm under statistical parity and include the extended results in Appendix <ref>. Figure <ref> shows the maximum fairness violation and normalized additive fairness violation for MiniReL and the baseline k-medians algorithm. Similar to the k-means setting, the k-medians algorithm without fairness constraints can lead to unfair outcomes across all datasets, whereas MiniReL constructs fair clusters by design. Figure <ref> shows the cluster cost for both algorithms in the same setting. Adding fairness in these settings again leads to only moderate increase in cluster cost, with the only notable increase coming on the voting dataset under statistical parity. Table <ref> shows the runtime of both algorithms under statistical parity. MiniReL still leads to a modest increase in runtime compared to the baseline algorithm but is able to solve all datasets (10,000 data points after sub-sampling) in under 60 seconds in most instances.
§ CONCLUSION
In this this paper we introduce a novel definition of group fairness for clustering that requires each group to have a minimum level of representation in a specified number of clusters. This definition is a natural fit for a number of real world examples, such as voting and entertainment segmentation. To create fair clusters we introduce a modified version of Lloyd's algorithm called MiniReL that solves an assignment problem in each iteration via integer programming. While solving the integer program remains a computational bottleneck, we note that our approach is able to solve problems of practical interest, including datasets with tens of thousands of data points, and provides a mechanism to design fair clusters when Lloyd's algorithm fails.
apalike
§ PROOF OF THEOREM <REF>
Proof
The reduction is from the exact cover by 3 sets (X3C) problem,
with 3q elements U = {u_1, …, u_3q} and t = q + r subsets W = {W_1, …, W_t} where each subset W_i ⊆ U and |W_i| = 3.
Recall that X3C is one of Karps 21 NP-Complete problems and its goal is to select a collection of subsets W^* ⊆ W such that all elements of U are covered exactly once.
Given an instance of the X3C problem, we first construct an undirected bipartite graph G = (∪, E) which we then convert into a FMRA instance following a similar construction as <cit.>.
Graph G has a node for each u_i∈ and a node w_j for each W_j∈. There is an edge {u_i,w_j} between nodes
u_i and w_j provided that u_i ∈ W_j. Note that with slight abuse of notation, we use both for the subsets in the X3C instance and the nodes corresponding to them in the graph G.
Using graph G, we construct an instance of the FMRA problem with two groups by interpreting each vertex in the graph as a data point where points corresponding to vertices in U and W belong to separate groups.
We set K = t (i.e., one cluster for each subset), α = 0.75, β_W = t - q and β_U = q (minimum number of α-represented clusters for each group).
We place the (fixed) K centers at the data points associated with W.
The distances between pairs of data points are set to be the length of the shortest path distances between the corresponding vertices in the graph.
We now argue that the optimal solution to the FMRA instance has a cost of 3q if and only if there exists a feasible solution to the X3C instance.
First note that any feasible solution to the FMRA instance has a cost of at least 3q as each point u_i∈ incurs a cost of at least 1 and there are 3q such points. Next, note that given a feasible solution ^*⊆ to the X3C instance, we can construct a solution to the corresponding FMRA instance with cost 3q by creating one cluster for each W_j∈^* in the solution.
These q clusters contain w_j together with the 3 elements u_i∈ W_j. The remaining t-q clusters contain a single point w_j, one for for each W_j∈∖^*. Note that these clusters satisfy the fairness constraints. Furthermore, letting the center of each cluster to be the w_j that it contains gives a cost of 3q.
Finally we argue that if the FMRA instance has a cost of 3q, then the X3C instance is feasible. If the solution to the FMRA has cost 3q, then (i) each point u_i∈ must be assigned to a center w_j∈ such that u_i∈ W_j, and, (ii) each point w_j∈ must be assigned to the center w_j and consequently, each cluster must contain exactly one w_j∈.
In addition, the fairness constraint for the group requires that at least t-q clusters must have 75% of its points belonging to the group. As there is exactly one w_j∈ in each cluster, this can only happen if at least t-q clusters have no points from group in them.
Consequently all 3q points form the group are assigned to at most q clusters. As all u_i∈ must be assigned to a center w_j∈ such that u_i∈ W_j, and |W_j|=3, the solution to the FMRA instance must have exactly t-q clusters with no points from group and q clusters with one w_j and three u_i∈ W_j, which gives a feasible solution to the X3C instance.
§ PROOF OF THEOREM <REF>
Proof.
We start by noting that given an assignment of data points to clusters, the optimal cluster centers for this assignment are precisely the ones computed as in Step <ref> (<ref>).
Consequently, if improved_cost = current_cost in Step <ref>, then the cluster centers used in Step <ref> and the ones computed in Step <ref> or <ref> (in the last iteration) must be identical. This follows from the uniqueness of the optimal centers in the k-means setting and the deterministic tie-breaking in the k-medians setting. Therefore, if Algorithm 1 terminates, then the current assignment of the data points to the current cluster centers is optimal and the cost cannot improve by changing their assignment due to Step <ref>.
In addition, Steps <ref> and <ref> guarantees that perturbing cluster centers cannot not improve the cost for the current assignment.
It remains to show that the algorithm will terminate after a finite number of iterations.
Similar to the proof for Lloyd's algorithm, we leverage the fact that there are only a finite number of partitions of the data points. By construction, the objective value decreases in each iteration of the algorithm, and thus can never cycle through any partition multiple times as for a given partition we use the optimal cluster centers when computing improved_cost in Step <ref>. Thus the algorithm can visit each partition at most once and thus must terminate in finite time.
§ PROOF OF THEOREM <REF>
Proof Given an instance of a 3-SAT problem, one of Karp's 21 NP-Complete problems, we construct an instance of the problem as follows. We start with a 3-SAT problem with n variables and m clauses K_1, …, K_m. Each clause K_i takes the form v_i1∨ v_i2∨ v_i3 where v_ij is either one of the original variables or its negation.
To construct the instance of the problem, start by creating two new data points x_v_i, x_v̅_i, corresponding to each original variable v_i and its negation respectively. We construct two clusters C_1 and C_2 that each variable can be assigned to. Let W represent the set of allowable group-cluster assignments (i.e., (g, k) ∈ W if y_gk = 1). For each original variable v_i we create one group g_i = {x_v_i, x_v̅_i} that can be α-represented in either cluster (i.e. (g_i, 1), (g_i,2) ∈𝒲). For each group g_i we set β_g_i = 2 - ensuring that both clusters must be α-represented by the group. We also create one group for each clause K_i corresponding to its three conditions g_K_i = {x_v_i1, x_v_i2, x_v_i3} that must be α-represented in cluster 1 (i.e. (g_K_i,1) ∈𝒲, (g_K_i,2) ∉𝒲). For these groups we set β_K_i = 1. Finally, we set α = 1/2n - this ensures that any assignment of a group's data point to a cluster will satisfy the α-representation constraint (as there are 2n data points and thus at most 2n data points in a cluster). We also add a cardinality lower bound on both clusters of 1. Clearly the above scheme can be set-up in polynomial time.
We now claim that a feasible solution to the aforementioned problem corresponds to a solution to the original 3-SAT instance. We start by taking the variable settings by looking at C_1. We start by claiming that for each variable exactly one of x_v_i, x_v̅_i are included in C_1. Suppose this weren't true, either both variables were included in C_1 or C_2. However, whichever cluster has neither of the variables would not be α-represented by group g_i, thereby contradicting the fairness constraints. Since C_1 contains either x_v_i or x_v̅_i we set v_i = T if x_v_i is included and v_i = F otherwise. We now claim that such a setting of the variables satisfies all the clauses. Assume it did not, then there exists a clause K_i such that none of x_v_i1, x_v_i2, x_v_i3 are included in C_1. However, this violates the α-representation constraint for g_K_i providing a contradiction to the feasibility of the fair assignment problem. Note that since the feasibility problem does not consider the objective or the location of the centers the proof applies for both the k-means and k-medians setting.
§ EXPERIMENTS WITH DIFFERENT OBJECTIVES FOR HEURISTIC
To test the efficacy of the myopic cost for pre-fixing we perform a computational study. We experiment with three different choices of cost function:
* Proportion: Set the cost to the proportion of the cluster that needs to be changed for g to be α-represented in cluster k:
c_gk = max(α - p_gk,0)
where p_gk is the current proportion of cluster k that belongs to group g.
* Weighted Proportion: Set the cost to the proportion weighted by the size of the cluster:
c_gk = |C_k|max(α - p_gk,0).
* Local Cost: The myopic (i.e. one-shot) increase in cost of moving additional data points to cluster k to meet the fairness constraints. Let q be the additional number of points from group g needed for g to be α-represented in this cluster (i.e. q + p_gk|C_k| ≥α(q + |C_k|)). Let c_(x) = _ c ∈{c_1, …, c_K}{D(x, c)} be the closest center for x ∈ X. The myopic cost c_gk associated with α-representing group g in cluster k then becomes:
c_gk = min_X ⊂ X_g ∖ C_K: |X| = q∑_x ∈ X(D(x,c_k) - D(x, c_(x)))
Figure <ref> shows the impact of different choices of the objective in IP model (8)-(11) on run-time for MiniReL on three different datasets. For these experiments we focused on the k-means setting (though k-medians showed a similar result) and ran 10 trials with different random initial seeds, and warm-start the algorithm with the standard Lloyd's algorithm. We benchmark using the IP model to perform pre-fixing with naively pre-fixing the group cluster assignments (i.e. random assignment). In the small 150 data point iris dataset, the pre-fixing scheme has little impact on the total run-time of the algorithm as the overhead of running the IP model outweighs any time savings from a reduced number of iterations. However, for larger datasets (i.e. adult and default which both have over 30K data points), using the IP model to perform pre-fixing outperforms the naive approach - leading to as large as a 3x speed-up. However, there is a relatively small difference in performance between the three choices for the objective, with the local cost objective reducing the speed by approximately 0.1% compared to the other two when averaged across all three datasets.
Figure <ref> shows the impact of pre-fixing strategy on the number of iterations needed for MiniReL to converge. For iris, pre-fixing has practically no impact on on the number of iterations, however for larger datasets like adult and default using the pre-fix IP model with any objective leads to substantially fewer iterations. The same holds for cluster cost as shown in Figure <ref> where the IP model leads to solutions with slightly better clustering cost in both adult and default. Both results show that the choice of objective function has relatively little impact on the performance of pre-fixing, but outperform random assignment.
§ WARM-STARTING MINIREL
To reduce the number of iterations needed to converge in MiniReL, we warm-start the initial cluster centers with the final centers of the unfair variants of Lloyd's algorithm. The key intuition behind this approach is that it allows us to leverage the polynomial time assignment problem for the majority of iterations, and only requires solving the fair assignment problem to adjust the locally optimal unfair solution to a fair one. To incorporate warm-starting into MiniReL, we replace step 1 in Algorithm (<ref>) with centers generated from running Lloyd's algorithm. For the sake of brevity, we report results in the k-means setting. We benchmark this approach against two baselines: (i) randomly sampling the center points, and (ii) using the k-means++ initialization scheme without running Lloyd's algorithm afterwards. We also compare using the k-means warm-start with 1 initialization and 100 initializations. Figure <ref> shows the impact of these initialization schemes on the total computation time including time to perform the initialization. Each initialization scheme was tested on three datasets. For each dataset we randomly sub-sample 2000 data points (if n > 2000), and re-run MiniReL with 10 random seeds. The results show that using Lloyd's algorithm to warm-start MiniReL can lead to a large reduction in computation time, even taking into account the cost of running the initialization. However, there are diminishing returns. Namely running 100 different initialization for k-means and selecting the best leads to slightly larger overall run-times.
§ EXPERIMENTS WITH DIFFERENT VALUES OF Α
In this section we explore the impact of α on the fairness and runtime of the different algorithms. For these experiments we replicate the experimental set-up of Section <ref> but vary α. For brevity we focus on the k-means setting, but k-medians showed similar results. Figures <ref> and <ref> show the normalized fairness violation (the number of α-represented clusters away from meeting the required amount β_g normalized by K) and normalized additive fairness violation (the number of data points away from meeting the fairness constraints normalized by n). In both cases, lower value of α lead to much laxer fairness requirements that are easier to meet. Both the standard k-means and the heuristic fair k-means algorithms are able to generate fair clusters for α≤ 0.3 for both statistical parity and equality of opportunity settings of β_g. However for larger values of α both struggle to generate fair clusters having normalized (additive) violations close to 100% (25%). In constrast, by design MiniReL with IP-based assignment (MiniRel-Prefix) is able to generate fair clusters at all levels of α. While the version of MiniReL with network flow rounding (MiniReL-PrefixFlow) can lead to fairness violations, the size of the violations are consistently small (in line with our theoretical results) - violating the constraints by close to 0% of the size of the dataset. Figure <ref> shows the impact of α and K on the computation time of all algorithms. As expected, both k-means and fair k-means run very quickly regardless of K (and α evidently has no impact on the runtime). In contrast, the runtime of MiniReL increases with both K and α showing that harder assignment problems lead to larger runtimes overall.
§ EXPERIMENTS WITH BALANCED CLUSTERS
In this section we explore the impact of constructing balanced clusters (i.e., clusters with approximately equal size) on the objective and computation time of MiniReL. For these experiments we require each cluster to have at least 80% of the balanced allocation of data points to clusters (i.e., ℓ = 0.8*n/K. Figure <ref> shows the k-means clustering cost of both Lloyd's algorithm (with no cardinality constraints) and the cost of MiniReL with balanced clusters. The first thing to note is that under balanced clusters, not every dataset and setting for α, β remains feasible. For the voting dataset under statistical parity, MiniReL certifies that no fair clustering exists (and thus has no associated objective or runtime). Overall, requiring balance leads to a modest increase in cluster cost across all instances - exceeding the deviation that occurs from enforcing fairness alone. Table <ref> shows the average runtime over 10 random seeds for K-Means and MiniReL-HeurFlow under balanced clusters for statistical parity. Adding the requirement that clusters are balanced leads to a small increase in runtime for MiniReL on all datasets. The one exception is the voting dataset, which is an infeasible problem, where MiniReL can certify infeasibility in under 30 seconds for all settings of K.
§ FULL K-MEDIANS RESULTS
The following section includes the extended empirical results for MiniReL in the k-medians section. Figure <ref> shows both the maximum fairness violation and normalized additive fairness violation of both algorithms under both notions of fairness. Regardless of the setting, the baseline k-medians algorithm and its fair heuristic analog are unable to generate fair clusters - whereas MiniReL can construct fair clusters or clusters with negligible additive fairness violations by design. Figure <ref> shows the clustering objective of both sets of algorithms. With the exception of the voting dataset under statistical parity, MiniReL is able to generate fair clusters with practically no increase in the cluster cost - showing that in practical settings achieving minimum representation fairness comes at little cost over Lloyd's algorithm.
|
http://arxiv.org/abs/2409.03232v1 | 20240905035741 | Strategy for mitigation of systematics for EoR experiments with the Murchison Widefield Array | [
"Chuneeta D. Nunhokee",
"Dev Null",
"Cathryn M. Trott",
"Christopher H. Jordan",
"Jack B. Line",
"Randall Wayth",
"Nichole Barry"
] | astro-ph.CO | [
"astro-ph.CO",
"astro-ph.IM"
] |
§ ABSTRACT
Observations of the 21 cm signal face significant challenges due to bright astrophysical foregrounds that are several orders of magnitude higher than the brightness of the hydrogen line, along with various systematics. Successful 21 cm experiments require accurate calibration and foreground mitigation. Errors introduced during the calibration process such as systematics, can disrupt the intrinsic frequency smoothness of the foregrounds, leading to power leakage into the Epoch of Reionisation (EoR) window. Therefore, it is essential to develop strategies to effectively address these challenges. In this work, we adopt a stringent approach to identify and address suspected systematics, including malfunctioning antennas, frequency channels corrupted by radio frequency interference (RFI), and other dominant effects. We implement a statistical framework that utilises various data products from the data processing pipeline to derive specific criteria and filters. These criteria and filters are applied at intermediate stages to mitigate systematic propagation from the early stages of data processing. Our analysis focuses on observations from the Murchison Widefield Array (MWA) Phase I configuration. Out of the observations processed by the pipeline, our approach selects 18%, totalling 58 hours, that exhibit fewer systematic effects. The successful selection of observations with reduced systematic dominance enhances our confidence in achieving 21 cm measurements.
§ INTRODUCTION
The Epoch of Reionisation (EoR) marked a significant transition in the history of the Universe, about 400 million years after the Big Bang, when the first galaxies along with other cosmic structures formed, and the intergalactic medium transitioned from a neutral state to an ionised state. This epoch unveils a wealth of information including formation and ionisation of the first cosmic structures aiding us to gain insights into the early universe's physical processes. One way to study this era is through mapping of neutral hydrogen and subsequently tracing its evolution. Neutral hydrogen emits and absorbs radiation at the specific wavelength of 21 cm. The 21 cm hydrogen line (HI) corresponds to the transition between two energy states of the hydrogen atom, specifically the spin-flip transition of the electron in ground state.
The 21 cm HI can be mapped through its spatial fluctuations, measured from the difference in brightness temperatures across the intergalactic medium, encoding information about the density and ionisation state of neutral hydrogen, as well as the clustering and growth of cosmic structures.
The underlying physical processes driving the EoR can be inferred through statistical properties of these fluctuations using power spectrum analysis <cit.>. Experiments such as the Giant Metrewave Radio Telescope Epoch of Reionisation <cit.>, the Hydrogen Epoch of Reionisation <cit.>, the Murchison WideField Array <cit.> and the LOw Frequency ARray <cit.> are currently focused at the statistical detection of the 21 cm HI. The aforementioned instruments are a combination of both first and second-generation telescopes focused at studying large-scale structures before and during reionisation between redshifts of 6 to 11. HERA has recently reported the most stringent upper limits of Δ^2 ≤ (21.4 mK)^2 at k=0.34 h Mpc^-1 and Δ^2 ≤ (59.1 mK)^2 at k=0.36 hMpc^-1 at redshifts of 7.9 and 10.4 respectively, placing new constraints on astrophysical parameters of reionisation. These results suggest heating of the intergalactic medium above the adiabatic cooling limit must have occurred by at least z=10.4 <cit.>.
While 21 cm experiments hold great potential to enhance our understanding on the evolution of the early Universe, they are challenged by strong astrophysical foregrounds, both Galactic and extraGalactic, several orders of magnitude higher than the 21 cm HI. To date, two fundamental approaches have been applied solely or in a hybrid system to mitigate foreground contamination: 1) the subtraction method whereby foregrounds are modelled and subtracted from the data <cit.>; and 2) the avoidance scheme where foregrounds are constrained to lower spatial scales and an `EoR window' is defined <cit.>. However, both techniques are prone to calibration errors <cit.>, uncertainties in foreground and primary beam <cit.>, and systematics such as Radio Frequency Interferences (RFI), instrumental polarisation leakage and mutual coupling between antennas, etc <cit.>. These errors, if not treated can potentially lead to biases and leakages in our EoR measurements. Our work presents a strategy to mitigate these systematics by quantifying them for each observation through a set of metrics to prevent them from propagating to the power spectrum.
The paper is broken down such that Section <ref> introduces the methodology followed by details of observation setup in Section <ref>. The data processing pipeline is discussed in Section <ref> where detailed explanation of each stage is provided. Section <ref> describes the power spectrum analysis and section <ref> talks on the data quality assessment. Results are interpreted in Section <ref> and conclusions are drawn in Section <ref>.
§ METHODOLOGY
Astronomers have been grappling with systematic propagation to avoid biases in the 21 cm power spectra measurements for years. Two fundamental methods have been employed to date:
* Identify potential systematics and discard or flag them.
* Identify potential systematics and apply mitigation techniques to address them.
Systematics can arise from the presence of corrupted timestamps, corrupted frequency channels, RFI sources, instrumental leakages, as well as from unknows origins. Efforts have been dedicated towards detecting the known systematic sources and alleviating them. However, we are not confident about the goodness of data points surrounding the corrupted ones. We could potentially extend the flags to neighbours, but this avenue may turn into an indefinite process <cit.>. A quantitative approach to how much the identified flags could leak into the non-identified is required and this demands precise understanding of the systematic source. Further, mitigation techniques struggle to remove systematics of unknown origins which ultimately leave some traces behind. These residual systematics, even with low intensities could potentially harm 21 cm measurements <cit.>. The data might also be prone to uncertainties associated with the mitigation techniques, adding to the existing systematics.
This work embraces the first method where we reject any outlying observations. We developed a statistical framework that interrupts the data processing pipeline such that the output data products are thoroughly inspected before they proceed to the next step. It is designed such that any dysfunctional antennas, bad timestamps or frequency channels are discarded before the filtering process. After successful passing through the preliminary gateways, a set of filters are formulated from the derived metrics to identify outlying observations. However, the derived metrics may not be sufficiently robust to capture faint systematics <cit.>.
The data processing pipeline is shown in Figure <ref> whereby statistical metrics are derived and administered at the intermediate steps, with some of the main ones highlighted in red. The caveat of this strict approach is reduction in the number of observations that survives. Nevertheless we believe it is better to prevent systematics from escaping into the final measurements that would induce biases. We used observations from the MWA to implement the statistical framework. Details of the observational setup is presented in the next section.
§ OBSERVATIONS
The MWA is a radio telescope, located at Inyarrimanha Ilgari Bundara, the
Commonwealth Scientific and Industrial Research Organisation (CSIRO) Murchison Radio-astronomy Observatory, in the mid-west of Western Australia, about 300 kilometres inland from the coastal town of Geraldton. The location is considered pristine to study the evolution of our Universe for its low-level radio frequency interference <cit.>. The instrument serves as a precursor for the Low-Frequency Square Kilometre Array telescope, currently under construction on the same site.
The development of MWA is split into several phases. The instrument kicked off with 32 tiles in 2009 <cit.>, extending to 128 tiles in December 2012 <cit.>. It was further upgraded to Phase II through the addition of 72 new tiles arranged in two compact hexagons along with 56 existing tiles pseudo-randomly spread (for detailed information refer to <cit.>). However, this work is restricted to observations from Phase I configuration. Each tile is made up of 4x4 dual polarized dipoles, optimized to operate between 80-300 MHz.
The telescope was steered at seven pointings listed in Table <ref>. In this paper, we will be targeting Phase I high band frequency observations between 167–197 MHz from EoR0 field centred at (RA=0 h, DEC=-27^∘). The field consists of foregrounds contributed by the setting of the Galactic plane <cit.> on the western horizon. The EoR experiment has observations spanning from 2013 to 2015 for the Phase I configuration. Each observation lasted for 2 minutes, totalling to about 322 hours (9655 in number). A breakdown of the data with respect to pointings are illustrated in Figure <ref>.
§ EOR DATA PIPELINE
The data described in Section <ref> were first downloaded from the MWA All-Sky Virtual Observatory (ASVO) database as raw files produced by the correlator. The downloaded data were then passed through the EoR pipeline. The boxes highlight the processes that include flagging, calibration, foreground subtraction, delay transform analysis, imaging, and power spectrum analysis. The data quality assessment procedures are denoted by the red rhombuses.
§.§ Flagging and Pre-processing
We applied https://aoflagger.readthedocs.io/en/latest/AOFlagger <cit.> with the default MWA RFI strategy settings to the data. In addition to RFI identification and mitigation, AOFlagger flags known corrupted frequency channels namely the edges and centre of the coarse bands. Pre-processing was then conducted by https://github.com/MWATelescope/BirliBirli where the data was transformed from the correlator output format to a UVFITS format. The frequency channels were averaged to 40 kHz and the time intervals to 2 s.
The receiving signal chain undergoes a state change at the start of each observation, occurring 2 seconds after the initiation of the GPS time. Additionally, the ending timestamps may potentially be affected by pointing, frequency or attenuation changes, rounded up to the next correlator dump time. Therefore, all timestamps 2 seconds after the start and 1 second before the end of each observation were flagged. The time flags vary across observations for several reasons: 1) only common timestamps of the coarse frequency channels were used; 2) some observations had late starting or early ending times; 3) averaging setting across time were different due to a mix of time resolutions in our observations, resulting in different weights being assigned. While averaging in frequency, the centre channel and 80 kHz edge channels were flagged per 1.28 MHz coarse band.
We then focused on the autocorrelated visibilities, where the signal from one antenna is correlated with itself. Since we expect the bandpass gains to behave similarly across antennas, potential outliers can be identified from the autocorrelations before calibration. As the gains are stable across each observation (2 minutes interval), we averaged the autocorrelations in time. It is important to note that the autocorrelated visibilities were normalised by a reference antenna, which was taken to be last antenna in a non-flagged instrument configuration. Modified z-scores were evaluated on the amplitudes of the averaged autocorrelations. Antennas with modified z-scores greater that 3.5 were identified as outliers or dysfunctional antennas. The z-score analysis was iterated until no outliers were found. Subsequently, the dysfunctional antennas were flagged during calibration.
§.§ Calibration
In a two-element radio interferometer, the correlation of the signal received at each element, termed as visibility, is measured. Assuming a flat sky, the relationship between the sky brightness distribution S and the visibility V across a baseline is given by
V( b, ν) = ∫_Ω A (r̂, ν) S(r̂, ν) e^-2π i ν b. ŝ/c dΩ.
Here, b = (u,v, w) represents the baseline projection, r̂ is the unit vector representing the direction cosines on the celestial sphere and ν is the observing frequency.
The primary beam response of the antenna is denoted by the 2× 2 matrix:
A=
[ A_EW D_EW; D_NS A_NS ]
where A_x and A_y represent the antenna response along the East-West (EW) and North-South (NS) directions respectively, and D_EW and D_NS are the terms that describe any instrumental leakage resulting from the signal from one polarisation escaping into the other <cit.>.
The signal received at the antenna gets corrupted along its propagation path by both direction independent and direction dependent antenna gains, thereby corrupting the visibilities. In this work, we solved for only the direction independent gains with https://mwatelescope.github.io/mwa_hyperdrive/Hyperdrive <cit.>, leaving the direction dependent gains for future.
We used the MWA Long Baseline Epoch of Reionisation Survey (LoBES) catalogue as the foreground model, derived from the EoR0 field targeting EoR experiments <cit.>. Given that the model contains information only for the Stokes I parameter, the remaining three Stokes components in equation <ref> were assumed to be zero. We utilised the Full Embedded Element (FEE) primary beam model generated with Hyperbeam <cit.>.
The calibration process involves applying least square minimization, iteratively solving for per-antenna complex gains for each frequency and time, enabling the capture of spectral structures. With the number of iterations set to 50 and the convergence threshold at 10^-6, the solutions encapsulated most spectral features. Ideally, a convergence closer to zero is desirable, but this would require increasing the number of iterations, thus increasing computational requirements. As this work focuses on the diagnosis side of the data processing pipeline, the convergence aspect falls beyond the scope of this paper. For each antenna, frequency channels that did not reach the specified threshold were flagged within the calibration process, possibly due to RFI or systematics.
We then investigated the calibration solutions for anomalies. The gain solutions were normalised with respect to the last unflagged antenna. We began by evaluating the RMS of the gain amplitudes and antennas and a three sigma thresholding was applied for identifying misbehaving antennas. However, as depicted in Figure <ref>, we were not able to spot poor behaviour such as fast fringing of the phases at low or high frequencies from the amplitudes. These behaviours were hence, manually identified and removed.
The solutions were then applied to form calibrated visibilities that were fed into foreground mitigation step.
§.§ Foreground Subtraction
In this work, we employed a foreground subtraction approach, wherein we modelled and subtracted foreground visibilities for 4000 sources within the field of view. These model visibilities were constructed following the methodology outlined in Section <ref>, utilizing the LoBES catalogue and FEE primary beam. This method effectively decreased foreground contamination at low k modes by approximately an order of magnitude in the EoR window.
However, our analysis revealed shortcomings in the results obtained through standard subtraction techniques, manifesting as either under-subtraction or over-subtraction. This behavior can primarily be attributed to direction-dependent effects caused by the ionosphere, leading to positional phase shifts. Such phenomena have been previously investigated in MWA observations and addressed through the `peeling' technique, described in <cit.>.
Following suit, we incorporated this peeling approach into our analysis, correcting phase offsets for the 1000 brightest sources in the field of view. The selection of sources for peeling was determined based on the minimal requirement of approximately 50-60 sources for an image size of ∼ 30^∘, as evaluated in <cit.>. However, the computational resources imposed a cap on the number of sources that could be included.
We evaluated the efficacy of this peeling technique through imaging improvements, which will be discussed in the following subsection.
§.§ Imaging
The next step in our data processing pipeline is generating images to reinforce the quality assurance through visual inspection and add to the statistical metrics. Images with angular resolutions of 40 arcseconds were formed by Fourier transforming the visibilities along the East-West and North South directions (EW and NS polarisations) using WSClean <cit.>. All the sub bands were combined together using the multifrequency synthesis algorithm. Briggs weighting with robustness of -1 was used such that we emphasised more on the resolution and reduction of the sidelobes but at the same time increasing the signal to noise ratio for quality assurance <cit.>.
Cotton–Schwab algorithm was employed and the images were deconvolved down to a threshold of 1 Jy, chosen to reduce computational cost as deeper cleaning was not required for diagnostic purpose. Example of a pseudo-Stokes I image is shown in plot (a) of Figure <ref>. Stokes V images were also created in the same fashion.
Panel (b) of Figure <ref> were formed from the subtracted visibilities that were corrected for ionospheric phase offset. The overall RMS in Stokes I drops from 1.66 Jy beam^-1 to 0.5 Jy beam^-1. We found that subtraction across EW polarisation performed better with a decrease in RMS by a factor of 60 while for NS, the factor is 23 because of the galactic plane aliasing being more prominent along this direction. Stokes V plotted in panel (c) showed marginal difference after subtraction. The improvement made to the subtraction after accounting for phase offsets is illustrated in the difference image in (d). It is observed that apart from a better subtraction of the brightest source in the field of view PKS0026-23 resulting into reduced sidelobe intensities, there are other visible sources that were under subtracted. We also found a flux difference of 900 mJy in PKS0026-23. The overall RMS for this observation drops by an order of magnitude for EW polarisation while marginal difference is found in NS and no difference in Stokes V. The RMS across all the pixels for each observation is plotted in Figure <ref>. The mean is seen to be shifted slightly to the left after correction and the standard deviation is more constrained for both polarisations. These pointers indicate a significant refinement in the foreground removal.
After foreground subtraction, observations were fed into the power spectrum machinery.
§ POWER SPECTRUM ANALYSIS
The power spectrum defines the power of the signal as a function of k modes. In k space, (u, v) represents the Fourier modes of the measured
visibility points, (l, m) are the angular modes k_⊥ and k_|| are modes paralell to the line-of -sight mapped from the spectral channels. The power spectrum can then be given by
P(k) = P(√(k_⊥^2 + k_∥^2))
= 1/Ω⟨Ṽ(k) Ṽ^*(k)⟩
where Ω is the observing volume. The visibilities in Equation <ref> are gridded onto a uv grid and Fourier transformed along the frequency axis. The one-dimensional power spectra can hence be defined as the integrated power over k space:
Δ^2 = k^3/2π^2 P(k).
In our work, we utilised the Cosmological HI Power Spectrum estimator (CHIPS) pipeline for generating power spectra from MWA observations <cit.>. While we did not perform a direct comparison or validation of our results with other approaches, we believe it is valuable to explain our choice.
Among the active pipelines for MWA data power spectrum generation, including Fast Holographic Deconvolution <cit.> and simpleDS <cit.>, we opted for CHIPS due to several key considerations. Firstly, we ruled out the simpleDS pipeline because it is primarily designed for redundant configurations, targeting Phase II MWA data, which did not align with our observational setup.
CHIPS constructs power spectra from a discrete uv-plane, whereas FHD/eppsilon adopts an image-based procedure. By utilizing the uvw plane, CHIPS effectively circumvents aliasing issues that FHD/eppsilon encounters. Moreover, CHIPS applies an inverse variance weighting, to account for the frequency-dependent weights in an optimal way.
Previous studies <cit.> have demonstrated that the results obtained from both CHIPS and FHD/eppsilon pipelines exhibit consistency and follow a general trend. <cit.> also demonstrates that the Hyperdrive-CHIPS pipeline does not suffer from signal loss. This further supports our decision to utilise CHIPS for our analysis, ensuring robust and reliable power spectrum estimation from MWA observations.
§ DATA QUALITY ASSURANCE
A data quality assessment was conducted at the stages outlined in Figure <ref>, enabling us to identify and filter visibilities exhibiting anomalous behaviour and patterns. These anomalies were identified from various data products, as discussed in the following subsections. The cutoff thresholds and number of successful observations that passed through the various stages are presented in Table <ref>.
§.§ Data Quality Issues
The archival data from Section <ref> were triaged before pre-processing as described in Section <ref>, to ensure that no data quality uncertainties were associated with them. Elements considered in this process included:
* Errors in beamformer communication on individual tiles.
* Discrepancies in attenuation settings of the receiver.
* Recorded events from the Monitor and Control System.
* Presence of two or more disabled dipoles along the same polarisation, indicating a flagged tile.
Additionally, observations with high levels of ionospheric activity, capable of distorting our measurements, were identified and discarded. This was achieved using the ionospheric metric developed by <cit.>, which incorporates the median source offset and source offset anisotropy derived from the measured versus expected source positions of 1000 point sources in the field of view. Observations yielding an ionospheric metric greater than the cut-off threshold estimated in <cit.> were excluded, resulting in 4943 observations (equivalent to 165 hours), as depicted by the green area in Figure <ref>. The ionospheric distributions are illustrated in Figure <ref>, with mean values ranging from 3.8 to 4.5 arcminutes across pointings. This metric serves as a proxy for ionospheric activity in an observation, with lower values indicating less ionospheric activity. The Kolmogorov-Smirnov test indicates that these distributions are not normally distributed.
§.§ Flagging Occupancy
The flagging occupancy was calculated based on flags generated from data quality issues outlined in Section 6.1, as well as flags obtained through the application of AOFlagger and autocorrelation analysis on the observations (refer to Section <ref>). The left panel of Figure Figure <ref> illustrates the flagging occupancy for a set of MWA observations. Observations with a flagging occupancy greater than 25% were discarded.
However, studies by <cit.> and <cit.> revealed that AOFlagger may overlook faint systematics, particularly faint RFI residing below thermal noise. Hence, we further evaluated the flagging occupancy on the flagged visibilities to assess the quality of our data. Notably, flags produced by SSINS were used solely for the filtering process.
The SSINS flagger provides occupancies for four classes of RFI: faint broadband streak, narrow broadband interference, DTV Signal, and total occupancy. Faint broadband refers to systematics of unknown origins occupying a wide band, while narrow interference relates to systematics at a specific frequency or a narrow band. DTV signal represents the full propagation of the DTV interference across all baselines, partly identified by AOFlagger, and total occupancy evaluates the underlying faint systematics identified by the algorithm over the entire observation, as illustrated in the right panel of Figure <ref>.
Observations exhibiting any broadband, narrow interferences, or DTV signal were discarded. The averaged flagging occupancy for each night was evaluated for both flaggers, shown in Figure <ref>. It serves as another comparison between the occupancies flagged by AOFlagger and SSINS, highlighting faint systematics overlooked by AOFlagger, but identified by SSINS. These faint systematics may originate from faint RFI, sources at the horizon attenuated enough by the primary beam to evade detection by AOFlagger, or other unknown RFI origins. Observations with SSINS occupancy exceeding 25% were rejected, resulting in only one quarter of observations remaining (2161; 72 hours). This outcome aligns with the findings of <cit.>, who reported that one third of the data used for the power spectrum in <cit.> was contaminated by DTV RFI. It is also noteworthy that the difference in occupancies may partly be attributed to the way both flaggers operate: AOFlagger identifies RFI on an antenna basis, while SSINS performs its analysis on a per-baseline mode.
§.§ Calibration Solutions
The quality of each observation was further assessed concerning the calibration solutions derived in Section <ref>. The least-squares minimization algorithm yielded convergence values, indicating the degree to which the solutions approached zero. We employed the variance of these convergence values across frequencies as a deviation metric. Observations with deviations surpassing the square root of the specified stopping threshold were flagged as outliers and treated accordingly.
§.§ Delay-transformed power spectrum
The delay transformed power spectrum estimator <cit.> were used to derive:
* Power Spectra Wedge Power P_wed: The power in Jy^2 confined within the area underneath the boundary set by the chromaticity of the instrument for baselines < 100λ and averaged by the number of contributing cells.
* Power Spectra Window Power P_win: The power in Jy^2 in the EoR window up till the first coarse channel, corresponding to k_|| < 0.4 h Mpc^-1. Baselines < 100 λ were included in the averaging and the power was normalized by the number of contributing cells <cit.>.
We then constructed four data quality metrics with the above quantities to assess our observations and these are:
* P_win (unsub): Window power P_win evaluated from the delay-transformed visibilities before foreground subtraction.
* P_win/P_wed (unsub): Ratio of window power to wedge evaluated from the delay-transformed visibilities before foreground subtraction.
* P_win (sub/unsub): Ratio of window power evaluated from the delay-transformed visibilities after foreground subtraction to before subtraction.
* P_wed(sub/unsub): Ratio of wedge power evaluated from the delay-transformed visibilities after foreground subtraction.
Given the widely spread distribution of the above mentioned quantities, illustrated in Figure <ref>, particularly the distribution of pointing -3, we adopted the interquartile range as it is known for being resilient to extreme values. The derived cut-off thresholds, and the corresponding number of successful observations are presented in Table <ref>. Observations that portray sufficient leakage into the window, with a maximum power value of 17 Jy^2 were thrown.
If the ratio of the maximum power in the window to the wedge were greater than 5.7%, the observation were treated as an anomaly.
The power removed by our subtraction methodology were quantified using the ratio of maximum power measured after foreground subtraction in both wedge and window regions to the power measured before subtraction. If the ratio resulted in values greater than 21% and 73% for the wedge and window respectively, the observation was excluded. As a result, all observations from pointing -3 were filtered out.
§.§ Images
The images generated from the imaging process described in Section <ref> were used to determine the following:
* RMS of Stokes V image: The root mean square over a small area (100 by 100 pixels area) sitting at one corner of the image, away from the concentration of source emissions.
* PKS0026-23 flux density S: PKS0026-23 is the brightest source in the field of view with a flux density of 17.47 Jy at 150 MHz as reported in the GLEAM survey <cit.>. Here, a naive extraction of the flux density was done. It was evaluated as the integration over the area centred at the source within a radius spanning two synthesized beams. The radius was chosen to account for any remaining phase offset. The extraction was done separately on both polarisations.
Using the aforementioned quantities, five data quality assessment metrics were constructed:
* V_rms (unsub) : RMS across a selected pixel box in Stokes V image.
* S_V/(S_EW + S_NS): Ratio of PKS0023-026 flux density S extracted from Stokes V to sum of flux density extracted from EW and NS polarisations.
* (S_EW , S_NS): Difference between PKS0023-026 flux densities across EW and NS polarisations.
* S_EW(sub/unsub): Ratio of PKS0023-026 flux density from subtracted image to unsubtracted image along EW polarisation.
* S_NS(sub/unsub): Ratio of PKS0023-026 flux density from subtracted image to unsubtracted image along NS polarisation.
The cutoff thresholds for the derived quantities were evaluated using the 3σ-rule. The results are provided in Table <ref>. Since the Murchison Widefield Array (MWA) consists of linearly polarized dipoles, we anticipate minimal circularly polarized visibilities, as there are no bright Stokes V sources within our primary beam. The distribution of the pixels in Stokes V should, in principle, be noise-like; therefore, the mean is expected to be around zero, with no skewness. Observations not adhering to this criterion and bearing an RMS value greater than 9.5 mJy are filtered out. The top panel of Figure <ref> presents the RMS values in Stokes V for observations that passed this criterion as a function of local sidereal time. We observed that this filter excludes all observations from pointing -3. It is evident that the East-West negative pointings exhibit higher RMS values than the positive ones. Pointings -2 and -1 have the Galactic Centre in the second sidelobe of the primary lobe, and the emission could be leaking into Stokes V. The increase in RMS towards the positive pointings may be attributed to the influence of Fornax A, as it moves into the first sidelobe of the primary beam.
The flux density ratio of the source PKS0023-026 in Stokes V to the sum of EW and NS polarisations (equivalent to pseudo-Stokes I) was calculated. This quantity informs us about the percentage of instrumental leakage from (EW+NS) → V that could also occur in the reverse direction, V → (EW+NS). Observations with an estimated leakage greater than 0.3% were discarded. Fractional ratios for the successful observations are shown in the bottom panel of Figure <ref>, following a similar trend as the RMS, except for pointings 2 and 3. The flux densities of PKS0026-23 from Stokes V images conform to the RMS trend, except for pointings 2 and 3. This behavior might be attributed to the position of PKS0026-23, such that Fornax A has a negligible contribution.
The ratio of the flux density after to before foreground removal was also scanned, such that observations with values below 13% were allowed to proceed.
The aforementioned metrics aimed to mitigate observations dominated by RFI, ionosphere, or contamination from nearby bright emissions. They also identified observations for which calibration and foreground subtraction did not perform well. This under-performance may be attributed to unidentified or unclassified systematics.
§ RESULTS
The filtering process discussed in Section <ref> yielded a set of 1734 observations (58 hours) from six pointings. Before delving into constructing the power spectrum from the observations, we compared our pipeline with the traditional MWA Real Time System pipeline <cit.> used in <cit.>.
§.§ Pipeline vs RTS
The RTS pipeline constitutes the following: conversion of raw MWA files to uvfits using https://github.com/MWATelescope/cottercotter; 2) use of AOFlagger for flagging; 3) DI calibration using a sky model ; direction dependent calibration on the five brightest sources and 4) catalogue subtraction for foreground removal. As observed, there are quite a few differences between the two pipelines. Our pipeline uses the latest LoBES catalogue as the sky model while RTS used the catalogue created from GLEAM along with cross-matched sources from TGSS GRMT described in <cit.>. To reduce comparative complexities, we used the same sky model and propagated the same flags to the observation, hence the comparison here is mainly between the calibration implementation.
We computed the power spectrum for a single observation from both pipelines.
Figure <ref> presents the spherically averaged power spectra resulting from both pipelines. The k bins that went into the averaging are k_⊥ < 0.04 hMpc^-1 to exclude low contaminated modes and k_|| > 0.15 hMpc^-1 to exclude any possible left over leakage. The spectra was generated across the whole frequency band (167–197 MHz). Both strategies perform similarly, with hyperdrive performing slightly better at some k modes. This result is useful to us as it informs us that any improvements to the power spectrum would be primarily attributed to the data quality assurance strategy.
§.§ Improvements to the power spectrum
We now compare the power spectrum formed before implementing our data quality assurance strategy described in Section <ref> to our current pipeline. The first set of observations includes 300 observations randomly picked from the 9655 datasets we started off with while the second set includes 300 observations chosen from the filtered set. All pointings are included. The comparison of the one-dimensional power spectra averaged over the k bins stated in Section <ref> is shown in Figure <ref>. Our filtering strategy does show an improvement in the power level particularly at low k modes indicating that excluding unreliable observations helps in preventing power to leak beyond the wedge region. This behaviour is observed along both polarisations.
§.§ Power Spectra for each pointing
We also compared the power spectra generated under the same conditions as mentioned in Section <ref> for different pointings. After the filtering process discussed in Section <ref>, we were left with observations for only six pointings (3, 2, 1, 0, -1, -2). Each of these pointings carry different number of observations, therefore to avoid noise bias in our results, we used the same number of observations, amounting to about 4.3 hours.
The two-dimensional power spectra for EW polarisation constructed across the full band for the six pointings are displayed in Figure <ref>. The black dotted line marks the horizon limit marking the baseline-dependent boundaries. The harmonics are due to the flags applied on the edges and centre frequency channels. The foreground subtraction performed a decent job in reducing the overall foreground power.
The bright foreground emission constrained at low k-modes are mostly from the Galactic emission that were not included in the subtraction model.
The EoR window we are interested in lies above the black dotted lines with a buffer of 0.05 hMpc^-1, bounded by 0.01 hMpc^-1 <k_⊥ <0.04 hMpc^-1 and
Mpc^-1 0.15 h < k_|| < 3.4 hMpc^-1, region enclosed by the blue dashed lines in Figure <ref>. We chose these k modes to escape potential foreground spilling into the window.
It is hard to provide a quantitative difference between the pointings as they all are behaving at different k modes. The most obvious pattern, analogous to top panel of Figure <ref>, are the modes corresponding to 0.01 h,Mpc^-1 < k_⊥ < 0.015 hMpc^-1.
The cleanest window within the stated boundary is produced by pointings 0, 1 and 2. Power leakage beyond the horizon limit is more prominent in pointings -2 and -1. Even though the Galactic plane in these pointing is past the second sidelobe of the primary beam response of MWA, it still impacts the foregrounds via aliasing as it sets over the horizon <cit.>.
We also generated power spectra at z=6.5 using visibilities across (182, 197) MHz band. The results are similar to the full band in Figure <ref>. The dimensionless power spectra Δ^2, averaged over 0.01 <k_⊥ < 0.04 h Mpc^-1 and k_|| > 0.15 h Mpc^-1, is plotted in Figure <ref>. The positive pointings have a lower floor compared to the negative ones at large angular scales for both polarisations supporting our statement about the Galactic plane contribution across these modes. The inset shows a clear representation of the power level for each pointing at low k modes. At high k modes the pointings seem to converge.
Although some of the pointings performs better than other at specific k modes, the distribution of the power spectra do not indicate any far-flung behaviours. Therefore, we proceeded with the power spectrum analysis using observations from all six pointings.
§.§ Validating our strategy
We analyzed our systematic strategy on a set of observations across all pointings, chosen arbitrarily. Our aim was to construct and compare the power spectra at each of the filtering steps. However, we were limited by our pipeline settings and computational resources, preventing us from constructing the power spectra after data quality issues were reported in the database. Therefore, we began with a set of 300 observations that had passed data quality inspection.
Out of these 300 observations, 62 were found to have an ionospheric metric greater than 5. Discarding highly-ionospherically active observations produced a difference of about an order in power at most of the k-modes, as illustrated by the one-dimensional power spectra in Figure <ref>. Applying the flagging occupancy left us with 130 observations, raising the power level higher. This rise is due to the reduction in the number of observations, resulting in a higher noise level, delineated by the thermal noise in dashed lines. The metrics from the delay-transformed visibilities rejected 30 observations, slightly improving the results. The image statistics did not spot any misbehaving observations for this set.
Since it is not straightforward to validate the power spectra due to the varying number of observations, we use the gap between the power and the thermal noise to infer any improvements. As mentioned in the previous paragraph, the power spectra evaluated after filtering out using the ionoQA show major improvements. Comparing power spectra produced after accounting for flagging occupancy and metrics evaluated from delay spectra also indicate an improvement when the contaminated observations indicated by the delay spectra metrics are removed.
At this point, it is challenging to discuss the specific improvements of flagging occupancy over ionospheric filtering using the same principle. However, the distance of the power from the corresponding thermal noise level does exhibit an improvement. After discarding observations ruled out by flagging occupancy, the power is closer to the thermal noise compared to before filtering these observations, indicating a cleaner dataset.
§.§ Combined Power spectra
In the final step, we combined the filtered observations (58 hours) described in Section <ref> and formed power spectra from the individual polarisations for diagnosis purposes as this paper is intended to present the improved pipeline, discussing explicitly the systematic mitigation approach.
The resulting one-dimensional power spectra at z=6.5 obtained by averaging over 0.01 <k_⊥ < 0.04 h Mpc^-1 and k_|| > 0.15 h Mpc^-1, is shown in Figure <ref>. The lowest measurements for EW and NS polarisations are Δ^2= (57.2 mK)^2 and Δ^2= (74.6 mK)^2 at k=0.19 h Mpc^-1 respectively.
We overlaid upper limits from <cit.> and <cit.>. However, these upper limits cannot be directly compared with our measurements due to differences in calculation conditions. <cit.> utilised only phase I data and focused on a different observing field. On the other hand, <cit.> used observations from the same field but included both phase I and phase II data, employing a different sky model. Incorporating the sky model from <cit.> yielded marginal differences. To enhance compatibility, averaging was performed on the same k modes as in <cit.>.
A back-of-the-envelop comparison of the sensitivity levels between a single phase I and phase II observation yielded an improvement of about 0.6. Applying this factor to our current measurements does indicate an improvement to our current results which is promising.
§ CONCLUSION AND FUTURE WORK
In this paper, we presented a statistical framework whereby metrics were derived at intermediate stages to prevent systematic errors from propagating to the power spectrum. These metrics were used to assess the quality of an observation, informing the pipeline whether it should proceed to the next stage. We found that without these metrics, many systematics, such as bad frequency channels, malfunctioning antennas, and corrupted observations, would not have been identified. When compared to observations from the EoR0 field used in the estimation by <cit.>, about one third of the observations in common were flagged by our methodology.
Our strategy filtered out 82% of our initial observations, leaving approximately 58 hours of data (half the number used in <cit.>). We achieved a comparable lowest floor of Δ^2= (57.2 mK)^2 at k=0.19,h Mpc^-1 at z=6.5 along the EW polarisation. These results were obtained from observations less dominated by systematic errors, as determined by our statistical framework, increasing the reliability and confidence in our power spectrum results.
We also evaluated the accuracy and reliability of the calibration software Hyperdrive <cit.>. Furthermore, this work explores the latest sky model LoBES, designed specifically for EoR experiments targeting the EoR0 field. Overall, our current processing pipeline implemented in NextFlow <cit.> is efficient, with most integrated software components working harmoniously with minimal human intervention, thereby reducing errors.
The methodology and results presented in this paper can be improved upon and we have identified few avenues:
* Including observations from the Phase II compact configuration would help in obtaining lower power levels as demonstrated by our rough estimations.
* Increasing the number of iterations for direction-independent calibration. This would enhance convergence of the algorithm producing more accurate complex antenna gains.
* Improving on the current calibration algorithm using gain solutions from the autocorrelations <cit.>.
* Strengthening our current data quality framework, by automating the anomaly detection from the phases of the complex antenna gains and incorporating machine learning.
* Clustering observations sharing similar features using the derived statistical metrics to identify the optimal cluster of observations for power spectrum estimation.
Some of the aforementioned strategies are already in development and will be featured in <cit.>.
§ ACKNOWLEDGEMENTS
This research was supported
by the Australian Research Council Centre of Excellence for
All Sky Astrophysics in 3 Dimensions (ASTRO 3D), through
project number CE170100013.
This scientific work uses data obtained from Inyarrimanha Ilgari Bundara / the Murchison Radio-astronomy Observatory. We acknowledge the Wajarri Yamaji People as the Traditional Owners and native title holders of the Observatory site. Establishment of CSIRO's Murchison Radio-astronomy Observatory is an initiative of the Australian Government, with support from the Government of Western Australia and the Science and Industry Endowment Fund. Support for the operation of the MWA is provided by the Australian Government (NCRIS), under a contract to Curtin University administered by Astronomy Australia Limited. This work was supported by resources provided by the Pawsey Supercomputing Research Centre with funding from the Australian Government and the Government of Western Australia. The International Centre for Radio Astronomy Research (ICRAR) is a Joint Venture of Curtin University and The University of Western Australia, funded by the Western Australian State government. Some of these data and the pipeline development was undertaken with industry partners, Downunder Geosolutions, and we acknowledge their role in the progress of this project. We thank Michael Wilensky for his useful suggestions on the paper draft. N.B. acknowledges the support of the Forrest Research Foundation, under a postdoctoral research fellowship.
§ DATA AVAILABILITY
The data used in this paper is publicly available on MWA Archival System.
|
http://arxiv.org/abs/2409.03619v1 | 20240905152413 | Solving bilevel problems with products of upper- and lower-level variables | [
"Sina Hajikazemi",
"Florian Steinke"
] | math.OC | [
"math.OC"
] |
Photoelectron – residual-ion entanglement in streaked shake-up ionization of helium
Hongyu Shi and Uwe Thumm
September 9, 2024
===================================================================================
§ ABSTRACT
Bilevel programming problems frequently arise in real-world applications across various fields, including transportation, economics, energy markets and healthcare.
These problems have been proven to be NP-hard even in the simplest form with linear upper and lower-level problems.
This paper addresses a specific type of bilevel programming problem where the upper-level is linear, and the lower level includes bilinear terms involving product of variables from both levels.
We propose a new iterative algorithm that addresses this specific class of bilevel problems by penalizing the duality gap and linearizing the bilinear terms.
The effectiveness of the algorithm is argued and demonstrated through a numerical example.
§ INTRODUCTION
Bilevel programming problems frequently arise in real-world applications across various fields, including transportation, economics, energy markets and healthcare.
This problem is NP-hard even in the simplest form with linear upper and lower-level problems<cit.>.
A common approach to solve the LP-LP bilevel programming problem to the global optimum is to replace the lower-level optimization problem with its KKT conditions, which turns it into a single-level optimization problem.
However, the complementary slackness constraints involve nonlinear terms.
These constraints can be linearized using either binary variables and a big-M value <cit.> or SOS1 constraints <cit.>.
Both approaches are typically very slow to solve and are not suitable for large problems.
In addition, finding an appropriate big-M value is very challenging;
<cit.> shows that finding an appropriate big-M value is as hard as solving the original problem.
Another method to solve this problem is the Penalty Alternating Direction Method (PADM) <cit.>. Unlike the previous methods
it only converges to a partial optimum, but it is computationally feasible for large problems and is shown to yield solutions close to the global optimum in a large fraction of sample problems.
The problem studied in this research, is linear in the upper level and nonlinear in the lower level, with bilinear terms of the upper and lower-level variables in the constraints.
It is easy to prove that any LP-LP bilevel problem can be reduced to this problem by defining and fixing some lower level variables. Thus, the studied problem is harder than an LP-LP bilevel problem.
To the best of our knowledge, there is no prior research that addresses this specific class of problems.
To solve the proposed, computationally hard problem,
we design a scalable, iterative algorithm called Penalty Adaptive Linearization Method (PALM).
This iterative algorithm penalizes the optimality gap of the lower-level problem
similar to PADM <cit.>, balancing the optimality of the upper- and lower-level problems with increasing weight on the lower-level optimality gap during the course of the algorithm execution.
During this process, it also linearizes the bilinear terms in the constraints and iteratively readjusts the linearization.
The rest of the paper is organized as follows.
Section <ref> formulates the problem.
Section <ref> introduces the proposed algorithm.
Section <ref> demonstrates the algorithm with a small numerical example.
Finally, section <ref> discusses the results and outlines possible future work.
§ PROBLEM FORMULATION
The studied problem consists of two levels.
The upper-level problem is given in (<ref>), where A ∈ℝ^p× mn, B ∈ℝ^p× n, c ∈ℝ^mn, d ∈ℝ^n and a ∈ℝ^p are the coefficients.
The variables X ∈ℝ^m × n and y ∈ℝ^n correspond to the upper and lower-level problems, respectively.
Additionally, function vec is used to vectorize the elements of a matrix,
and S(X) denotes the point to set mapping from X to the optimal solutions of the lower-level problem.
min_X,y c^T vec(X) + d^Ty
A vec(X) + B y ≥ a
y ∈ S(X)
The lower-level problem is given in (<ref>), where C ∈ℝ^m× n, b ∈ℝ^m and e ∈ℝ^n are the coefficients.
Constraint (<ref>) in the upper-level problem ensures that y attains the optimal value of the lower-level problem.
min_y e^T y
s.t. (C+X) y ≥ b
§ PROPOSED SOLUTION METHOD
To obtain a single-level formulation of the bilevel problem, the lower-level problem can be replaced by a set of optimality conditions.
For a linear programming problem, these can be primal feasibility, dual feasibility, and strong duality constraints. The strong duality constraint forces the objective function of the primal minimization problem to be less than or equal to the objective function of the dual problem.
Since the lower-level problem (<ref>) is linear with respect to the primal variables y, its dual is given by (<ref>), where λ∈ℝ^m denotes the dual variables.
max_λ b^T λ
s.t. (C+X)^T λ = e
λ≥ 0
Replacing (<ref>) with the above mentioned optimality conditions yields the following single-level reformulation of (<ref>):
min_X,y,λ c^T vec(X) + d^Ty
s.t. (<ref>) upper-level constraints
(<ref>) primal constraints
(<ref>)-(<ref>) dual constraints
e^Ty ≤ b^T λ strong duality constraint
Constraint (<ref>) is the strong duality constraint.
Including this constraint along with the primal and dual feasibility constraints strictly enforces the optimality of the lower-level problem.
Problem (<ref>) is a non-convex quadratically constrained problem which are NP-hard to solve globally in general <cit.>.
Even finding a feasible point is difficult due to the strictness of the strong duality constraint (<ref>).
Hence, we proceed iteratively and move the strong duality constraint into the objective function as a penalty term, gradually increasing its weight to progressively enforce the optimality of the lower-level problem, similar to the PADM method.
Shifting the strong duality constraint to the objective function as a penalty term with weight μ results in the following formulation:
min_X,y,λ c^T vec(X) + d^Ty + μ[ e^Ty - b^T λ]
s.t. (<ref>) upper-level constraints
(<ref>) primal constraints
(<ref>)-(<ref>) dual constraints
Constraints (<ref>) and (<ref>) both contain bilinear terms which makes problem (<ref>) non-convex and difficult to solve.
Our approach is to linearize the bilinear terms in (<ref>) and solve the problem iteratively.
To linearize the Xy term in (<ref>), we approximate it around a point (X̅,y̅).
Let dX ∈ℝ^m× n and dy ∈ℝ^n be defined as dX = X - X̅ and dy = y -y̅.
Then the approximation is as follows:
X y = X̅ y̅ + X̅ dy + dX y̅ + dX dy
≈X̅ y + dX y̅
Assuming that dX and dy are small, the term dX dy is ignored in the second step.
This approximation helps to linearize the bilinear terms in (<ref>).
The same approximation can be applied to linearize the X^Tλ terms in (<ref>).
Replacing the bilinear terms with the linear approximation changes the primal form of the lower-level problem (<ref>) to (<ref>) and the dual form of the lower-level problem (<ref>) to (<ref>).
min_y e^T y
(C+X̅) y + dX y̅≥ b
max_λ b^T λ
(C+X̅)^T λ + dX^T λ̅= e
λ≥ 0
Substituting the primal and dual feasibility constraints (<ref>) and (<ref>) in problem (<ref>) with (<ref>) and (<ref>),
yields the following formulation:
min_dX,y,λ c^T vec(X̅+dX) + d^Ty + μ[ e^Ty - b^T λ]
s.t. (<ref>)
(<ref>)
(<ref>)-(<ref>)
The problem given in (<ref>) is linear. Let S^L(X̅,y̅,λ̅) denote the set of optimal values of (<ref>).
The proposed algorithm is presented in <ref> and consists of two loops.
The first (outer) loop (starting at line <ref>) evaluates the optimality of the lower-level problem.
At the end of each iteration, the penalty coefficient μ is doubled, thereby increasing the priority of the optimality of the lower-level problem over the upper-level objective function.
This loop terminates when the optimality of the lower-level problem is satisfied, that means a feasible solution to the upper-level problem is found.
If it does not terminate after a certain number of iterations, it means that either the problem is infeasible or the algorithm is not able to find a feasible solution.
The second loop (inner) (starting at line <ref>), solves the linear problem <ref>, updates the values of X̅, y̅ and λ̅ and reapeats until these values converge.
Fixing the variable X to X̅ in problem (<ref>) results in a linear problem. Moreover, the constraints involving y and λ in are separable in (<ref>), so the problem can be decomposed into two subproblems (<ref>) and (<ref>), which are solved independently.
min_y d^Ty + μ e^Ty
s.t. A vec(X̅) + B y ≥ a
(C+X̅) y ≥ b
max_λ b^T λ
s.t. (C+X̅)^T λ = e
λ≥ 0
So for a given X̅, the solutions to the subproblems (<ref>) and (<ref>) determine the values of y̅ and λ̅, respectively.
Let S^P(X̅) and S^D(X̅) be the optimal solution sets of the subproblems (<ref>) and (<ref>), respectively.
If the solutions of (<ref>) and (<ref>) are not unique for all X̅, then y̅ and λ̅ may jump between different alternative solutions within the sets S^P(X̅) and S^D(X̅).
This can cause the algorithm to fail to converge in the inner loop. To mitigate this, we select the optimal solution that is closest to the previous one.
To find it we first compute the optimal objective value by solving the LP. We then resolve it with the objective value fixed to the optimal value and minimizing the distance to the prior solution. Depending on whether the 1-norm or 2-norm is used, this results in a linear or convex quadratic programming problem, respectively. The values of y̅ and λ̅ are then updated according to lines <ref> and <ref>.
Similarly, when choosing among the alternative optimal solutions in S^L(X̅,y̅,λ̅), we choose the solution that results in the smallest change in X̅.
The algorithm needs an initial value X_0 for the upper-level variable, for which a feasible primal and dual solution exists to start with.
Let Ω be the set of feasible solutions of problem (<ref>).
Then the initial values for y and λ are computed as in line <ref>.
Note that this problem is linear since X is fixed to X̅.
Since the bilinear terms are omitted in the linearization of both the primal and dual constraints, the solution to the linear problem (<ref>) may not be feasible for the original problem (<ref>).
Nonetheless, the step size dX^* tends to increase as the penalty parameter μ is raised, allowing indirect control over dX^* through adjustments to μ.
If the solution to the linear problem is not feasible for the original problem, then slowing the rate of increase of the penalty parameter μ can help reduce the step size dX^* and improve the likelihood of achieving a feasible solution.
§ A MINIMAL EXAMPLE
In this section a minimal numerical example is presented to show how the algorithm works.
The notation used here aligns with that in Section <ref> except the upper-level variables which is denoted by x instead of a matrix X.
The upper-level problem is as follows:
min_x,y_1,y_2 |x|
s.t. y_2 ≤ 1.5
y_1, y_2 ∈ S(x)
The absolute value function can easily be replaced with an auxiliary variable and two constraints.
The lower-level problem is as follows:
min_y_1,y_2 y_1 + y_2
s.t. 0.5y_1 + y_2 + x y_1 ≥ 3
y_1 + 0.5 y_2 - x y_1 ≥ 3
y_1, y_2 ≥ 0
The problem is solved to the global optimum using a non-convex QCQP solver.
The optimal solution is y^*=(2.5, 1.5) and x^*=0.1.
Then the Penalty Adaptive Linearization Method is applied to the problem.
The initial value of the upper-level variable is set to zero and the optimal solution of the lower-level problem is y^*=(2.0, 2.0) for this inital value.
The algorithm successfuly converges to the optimal solution of the problem, which is y^*=(2.5, 1.5) and x^*=0.1.
Figure <ref> shows the convergence of the duality gap of the lower-level problem to zero which means that the algorithm converges to a feasible solution of the main problem.
As expected, the optimal value y̅_̅2̅ remain constant at 1.5 throughout the iterations.
§ CONLCUSION
In this paper we designed a new algorithm for solving bilevel programming problems with product of upper and lower-level variables in the lower-level problem.
We showed that the algorithm was able to iteratively approximate the bilinear terms in the sample problem and solve it to the global optimum.
The most crucial aspect missing in this algorithm is to prove that the inner loop is guaranteed to converge.
It’s possible that convergence only occurs under specific conditions.
In such cases, identifying non-convergent scenarios can help clarify the algorithm’s limitations, highlight areas for improvement, and specify the conditions under which convergence to a partial minimum is guaranteed.
From a practical perspective, exploring the algorithm’s performance in solving a wide range of real-world problems is of great interest.
However, unlike LP-LP bilevel programming problems, using KKT conditions or strong duality conditions along with the big-M method does not linearize the problem, making it challenging to determine the global optimal solution.
§ ACKNOWLEDGEMENTS
This research was funded by the German Federal Ministry of Education and Research Infrastructure in project RODES (grant number 05M22RDA).
|
http://arxiv.org/abs/2409.02387v2 | 20240904023012 | Large Language Models and Cognitive Science: A Comprehensive Review of Similarities, Differences, and Challenges | [
"Qian Niu",
"Junyu Liu",
"Ziqian Bi",
"Pohsun Feng",
"Benji Peng",
"Keyu Chen"
] | cs.AI | [
"cs.AI",
"cs.CL"
] |
Large Language Models and Cognitive Science: A Comprehensive Review of Similarities, Differences, and Challenges
Qian Niu12,
Junyu Liu2,
Ziqian Bi3,
Pohsun Feng4,
Benji Peng5,
Keyu Chen5
2Kyoto University
3Indiana University
4National Taiwan Normal University
5Georgia Institute of Technology
Corresponding Email: [email protected]
September 9, 2024
===========================================================================================================================================================================================================================================================================================================================================================================================
§ ABSTRACT
This comprehensive review explores the intersection of Large Language Models (LLMs) and cognitive science, examining similarities and differences between LLMs and human cognitive processes. We analyze methods for evaluating LLMs cognitive abilities and discuss their potential as cognitive models. The review covers applications of LLMs in various cognitive fields, highlighting insights gained for cognitive science research. We assess cognitive biases and limitations of LLMs, along with proposed methods for improving their performance. The integration of LLMs with cognitive architectures is examined, revealing promising avenues for enhancing artificial intelligence (AI) capabilities. Key challenges and future research directions are identified, emphasizing the need for continued refinement of LLMs to better align with human cognition. This review provides a balanced perspective on the current state and future potential of LLMs in advancing our understanding of both artificial and human intelligence.
Large Language Models, Cognitive Science, Cognitive Psychology, Neuroscience
§ INTRODUCTION
The emergence of Large Language Models (LLMs) has sparked a revolution in artificial intelligence (AI), challenging our understanding of machine cognition and its relationship to human cognitive processes. As these models demonstrate increasingly sophisticated capabilities in language processing, reasoning, and problem-solving, they have become a focal point of interest for cognitive scientists seeking to unravel the mysteries of human cognition. This intersection of LLMs and cognitive science has given rise to a new frontier of research, offering unprecedented opportunities to explore the nature of intelligence, language, and thought.
The relationship between LLMs and cognitive science is multifaceted and bidirectional. On one hand, insights from cognitive science have informed the development and evaluation of LLMs, inspiring new architectures and training paradigms that aim to more closely mimic human cognitive processes. On the other hand, the remarkable performance of LLMs on various cognitive tasks has prompted researchers to reevaluate existing theories of cognition and consider new perspectives on how intelligence emerges from complex systems.
This review aims to provide a comprehensive overview of the current state of research at the intersection of LLMs and cognitive science. We explore the similarities and differences between LLMs and human cognitive processes, examining how these models perform on tasks traditionally used to study human cognition. We also delve into the methods developed for evaluating LLMs cognitive abilities, highlighting the challenges and opportunities in assessing AI through the lens of cognitive science. Furthermore, we investigate the potential of LLMs to serve as cognitive models, discussing their applications in various domains of cognitive science research and the insights they provide into human cognition. The review also addresses the cognitive biases and limitations of LLMs, as well as the ongoing efforts to improve their performance and align them more closely with human cognitive processes. We examine recent developments in this area, discussing the potential synergies and challenges that arise from combining these approaches.
As LLMs continue to evolve and their capabilities expand, it becomes increasingly important to critically assess their relationship with human cognition and their potential impact on cognitive science research. This review offers a balanced and comprehensive examination of these issues, presenting insights into the current state of the field. It identifies key areas for future research and discusses the challenges and opportunities at the exciting intersection of LLMs and cognitive science. By bridging AI with cognitive science, this line of inquiry promises to deepen our understanding of human cognition and inform the development of more sophisticated, ethical, and human-centric AI systems. This comprehensive and critical examination not only highlights the current achievements but also maps out a path forward in this dynamic area of study.
§ COMPARISON OF LLMS AND HUMAN COGNITIVE PROCESSES
LLMs have revolutionized our understanding of AI and its potential to mimic human cognitive processes. These models have shown capabilities that resemble human cognition in various tasks, including language processing, sensory judgments, and reasoning. However, despite these similarities, there are fundamental differences between LLMs and human cognitive processes that merit close examination. This section explores these similarities and differences, evaluates the methods used to assess LLMs cognitive abilities, and discusses the potential of LLMs as cognitive models. By comparing LLMs with human cognition, we can better understand the strengths and limitations of these models in emulating human thought processes.
§.§ Similarities and differences between LLMs and human cognitive processes
LLMs have demonstrated remarkable capabilities in various cognitive tasks, often exhibiting human-like behaviors and performance. One of the key similarities observed is in the domain of language processing. LLMs can achieve human-level word prediction performance in natural contexts, suggesting a deep connection between these models and human language processing <cit.>. Studies have shown that LLMs represent linguistic information similarly to humans, enabling accurate brain encoding and decoding during language processing <cit.>. This similarity extends to the neural level, where larger neural language models exhibit representations that are increasingly similar to neural response measurements from brain imaging <cit.>.
LLMs also demonstrate human-like cognitive effects in certain tasks. For instance, GPT-3 exhibits priming, distance, SNARC, and size congruity effects, which are well-documented phenomena in human cognition <cit.>. Additionally, LLMs show content effects in logical reasoning tasks similar to humans, particularly in challenging tasks like syllogism validity judgments and the Wason selection task <cit.>. Research has shown that LLMs can capture aspects of human sensory judgments across multiple modalities. Marjieh et al. <cit.> demonstrated that similarity judgments from GPT models are significantly correlated with human data across six sensory modalities, including pitch, loudness, colors, consonants, taste, and timbre. This suggests that LLMs can extract significant perceptual information from language alone.
However, significant differences exist between LLMs and human cognitive processes. Humans generally outperform LLMs in reasoning tasks, especially with out-of-distribution prompts, demonstrating greater robustness and flexibility <cit.>. LLMs struggle to emulate human-like reasoning when faced with novel and constrained problems, indicating limitations in their ability to generalize beyond their training data. Lamprinidis <cit.> found that LLMs' cognitive judgments are not human-like in limited-data inductive reasoning tasks, with higher errors compared to Bayesian predictors. This suggests that LLMs may not model basic statistical principles that humans use in everyday scenarios as effectively as previously thought.
Moreover, while LLMs exhibit near human-level formal linguistic competence, they show patchy performance in functional linguistic competence <cit.>. This suggests that LLMs may excel at surface-level language processing but struggle with deeper, context-dependent understanding and reasoning. Another notable difference lies in the memory properties of LLMs compared to human memory. Although LLMs exhibit some human-like memory characteristics, such as primacy and recency effects, their forgetting mechanisms and memory structures differ from human biological memory <cit.>. Suresh et al. <cit.> found that human conceptual structures are robust and coherent across different tasks, languages, and cultures, while LLMs produce conceptual structures that vary significantly depending on the task used to generate responses. This highlights a fundamental difference in the stability and consistency of conceptual representations between humans and LLMs.
§.§ Methods for evaluating LLMs cognitive abilities
Researchers have developed various methods to evaluate the cognitive abilities of LLMs, often drawing inspiration from cognitive science and psychology. These methods aim to provide a comprehensive assessment of LLMs' capabilities and limitations in comparison to human cognition.
One prominent approach is the use of cognitive psychology experiments adapted for LLMs. For example, CogBench, a benchmark with ten behavioral metrics from seven cognitive psychology experiments, has been developed to evaluate LLMs <cit.>. This benchmark allows for a systematic comparison of LLMs performance across various cognitive tasks. Another method involves using neuroimaging data to compare LLMs representations with human brain activity. Studies have employed Functional magnetic resonance imaging (fMRI) and magnetoencephalography (MEG) recordings to analyze the similarity between LLMs activations and brain responses during language processing tasks <cit.>. This approach provides insights into the neural-level similarities and differences between LLMs and human cognition.
Researchers have also adapted traditional psychological tests for use with LLMs. For instance, cognitive reflection tests and semantic illusions have been used to evaluate the reasoning capabilities of LLMs <cit.>. These tests help reveal the extent to which LLMs exhibit human-like biases and reasoning patterns. Additionally, methods from developmental psychology have been proposed to understand the capacities and underlying abstractions of LLMs <cit.>. These approaches focus on testing generalization to novel situations and using simplified stimuli to probe underlying abstractions.
In an effort to create more comprehensive evaluation tools, Zhang et al. <cit.> introduced MulCogBench, a multi-modal cognitive benchmark dataset for evaluating Chinese and English computational language models. This dataset includes various types of cognitive data, such as subjective semantic ratings, eye-tracking, fMRI, and MEG, allowing for a comprehensive comparison between LLMs and human cognitive processes. Ivanova <cit.> provided a set of methodological considerations for evaluating the cognitive capacities of LLMs using language-based assessments. The paper highlights common pitfalls and provides guidelines for designing high-quality cognitive evaluations, contributing to best practices in AI Psychology.
Delving deeper into specific cognitive abilities, Srinivasan et al. <cit.> proposed novel methods based on cognitive science principles to test LLMs' common sense reasoning abilities through prototype analysis and proverb understanding. These methods offer new ways to assess LLMs' cognitive capabilities in more nuanced and context-dependent tasks. Binz and Schulz <cit.> used tools from cognitive psychology to study GPT-3, assessing its decision-making, information search, deliberation, and causal reasoning abilities. Their approach demonstrates the potential of cognitive psychology in studying AI and demystifying how LLMs solve tasks.
In summary, Large Language Models exhibit remarkable parallels with human cognitive processes, particularly in language and sensory tasks, yet they fall short in several critical areas, such as reasoning under novel conditions and functional linguistic competence. The diverse methodologies employed to evaluate LLMs' cognitive abilities highlight both their potential and limitations as models of human cognition. As LLMs continue to evolve, they provide a valuable tool for exploring the nature of human intelligence, but their differences from human cognitive processes must be carefully considered. Future research should aim to refine these models further, improving their alignment with human cognition and addressing the gaps that currently exist. Understanding the complex interplay between LLMs and human cognitive processes will advance both AI and cognitive science, bridging the divide between machine and human intelligence.
§ APPLICATIONS OF LLMS IN COGNITIVE SCIENCE
The integration of LLMs into cognitive science research has opened up new avenues for understanding human cognition and developing more sophisticated AI systems. This section explores the multifaceted applications of LLMs in cognitive science, examining their role as cognitive models, their contributions to theoretical insights, and their specific applications in various cognitive domains. By synthesizing recent research, we aim to provide a comprehensive overview of the current state and future potential of LLMs in advancing our understanding of human cognition.
§.§ LLMs as Cognitive Models
The potential of LLMs to serve as cognitive models has gained significant attention in recent research. Studies have demonstrated that LLMs can be turned into accurate cognitive models through fine-tuning on psychological experiment data, offering precise representations of human behavior and often outperforming traditional cognitive models in decision-making tasks <cit.>. These models have shown promise in capturing individual differences in behavior and generalizing to new tasks after being fine-tuned on multiple tasks, suggesting their potential to become generalist cognitive models capable of representing a wide range of human cognitive processes. Versatility of LLMs in various cognitive domains have been explored. Wong et al. <cit.> introduced a computational framework called rational meaning construction, integrating neural language models with probabilistic models for rational inference. This approach demonstrates LLMs' ability to generate context-sensitive translations and support commonsense reasoning across various cognitive domains. Piantadosi and Hill <cit.> highlighted LLMs' capacity to capture essential aspects of meaning through conceptual roles, challenging skepticism about their ability to possess human-like concepts.
In the realm of language processing, Schrimpf et al. <cit.> conducted a systematic integrative modeling study, revealing that transformer-based ANN models can predict neural and behavioral responses in human language processing. Their findings support the hypothesis that predictive processing shapes language comprehension mechanisms in the brain, aligning with contemporary theories in cognitive neuroscience. Kallens et al. <cit.> demonstrated that LLMs can produce human-like grammatical language without an innate grammar, providing valuable computational models for exploring statistical learning in language acquisition and challenging traditional views on language learning. Lampinen's <cit.> research further challenges our understanding of human language processing, demonstrating that with minimal prompting, LLMs can outperform humans in processing recursively nested grammatical structures. This raises questions about the cognitive mechanisms underlying both human and artificial language comprehension. Nolfi <cit.> explored the unexpected cognitive abilities developed by LLMs through indirect processes, including dynamical semantic operations, theory of mind, affordance recognition, and logical reasoning. These findings suggest that LLMs can develop integrated cognitive skills that work synergistically, despite being primarily trained on next-word prediction tasks. This research highlights the importance of understanding these emergent capabilities in relation to human cognition. Sartori and Orrú <cit.> provided empirical evidence that LLMs perform at human levels in a wide variety of cognitive tasks, including reasoning and problem-solving. Their findings support associationism as a unifying theory of cognition and demonstrate the potential for significant impact on cognitive psychology, suggesting new avenues for modeling human cognitive processes. Li and Li <cit.> proposed an intriguing duality between LLMs and Tulving's theory of memory, suggesting that consciousness may be an emergent ability based on this duality. This perspective offers a novel approach to understanding the relationship between LLMs and human cognition, potentially bridging artificial and biological intelligence research.
However, it is important to note that while LLMs can serve as plausible models of human language understanding, there are ongoing debates about the extent to which they truly capture human-like cognitive abstractions <cit.>. Some researchers argue that it is premature to make definitive claims about the abilities or limitations of LLMs as models of human language understanding, emphasizing the need for further empirical testing. Katzir <cit.> provided a balanced assessment of the strengths and weaknesses of LLMs, highlighting their sophisticated inductive learning capabilities while also addressing significant limitations such as opacity, data requirements, and differences from human cognitive processes. Besides, the use of LLMs as cognitive models offers new opportunities for understanding human cognition. By analyzing the internal representations and processes of these models, researchers can gain insights into potential mechanisms underlying human cognitive abilities. However, caution is necessary when interpreting these findings, as the fundamental differences in architecture and learning processes between LLMs and the human brain must be considered. Ren et al. <cit.> investigated how well LLMs align with human brain cognitive processing signals using Representational Similarity Analysis (RSA). Their findings suggest that factors such as pre-training data size, model scaling, and alignment training significantly impact the similarity between LLMs and brain activity, providing insights into how LLMs might be improved to better model human cognition.
In conclusion, while LLMs show great promise as cognitive models, further research is needed to fully understand their capabilities and limitations in representing human cognitive processes. The ongoing exploration of LLMs as cognitive models continues to provide valuable insights into both artificial and human cognition, potentially reshaping our understanding of language, reasoning, and cognitive processes.
§.§ Insights from LLMs for cognitive science research
LLMs have provided valuable insights for cognitive science research, challenging existing theories and offering new perspectives on human cognition. Veres <cit.> argued that while LLMs challenge rule-based theories, they do not necessarily provide deeper insights into the nature of language or cognition. This perspective highlights the need for careful interpretation of LLMs capabilities in the context of cognitive science and cautions against overinterpretation of model performance. Shanahan <cit.> emphasized the importance of understanding the true nature and capabilities of LLMs to avoid anthropomorphism and ensure responsible use and discourse around AI in cognitive science research. This cautionary approach underscores the need for precise language and philosophical nuance in AI discourse, particularly when drawing parallels between artificial and human cognition. Blank <cit.> explored whether LLMs can be considered computational models of human language processing, discussing different interpretations and implications for future research. This work highlights the ongoing debate about whether LLMs process language like humans and the significance of this question for cognitive science, emphasizing the need for rigorous empirical investigation. Grindrod <cit.> argued that LLMs can serve as scientific models of E-languages (external languages), providing insights into the nature of language as a social entity. This perspective offers a novel approach to using LLMs in linguistic inquiry and cognitive science research, potentially bridging computational linguistics and sociolinguistics.
The application of LLMs in cognitive science research has opened up new avenues for exploring human behavior and decision-making processes. Horton <cit.> demonstrated the potential of using LLMs as simulated economic agents to replicate classic behavioral economics experiments. This innovative approach suggests new possibilities for using LLMs to explore human behavior and decision-making processes in cognitive science, offering a cost-effective method for piloting studies and generating hypotheses. Connell and Lynott <cit.> evaluated the cognitive plausibility of different types of language models, emphasizing the importance of learning mechanisms, corpus size, and grounding in assessing their relevance to human cognition. Their work provides a framework for critically evaluating the applicability of LLMs to cognitive modeling. Mitchell and Krakauer <cit.> surveyed the debate on whether LLMs understand language in a humanlike sense, advocating for an extended science of intelligence to explore diverse modes of cognition. This perspective highlights the need for a broader understanding of intelligence and cognition in the context of LLMs, encouraging interdisciplinary collaboration in AI and cognitive science research. Buttrick <cit.> proposed using LLMs to study cultural distinctions by analyzing the statistical regularities in their training data, offering new avenues for exploring cultural cognition and representation. This approach demonstrates the potential of LLMs as tools for investigating complex sociocultural phenomena in cognitive science. Finally, Demszky et al. <cit.> reviewed the potential of LLMs to transform psychology by enabling large-scale analysis and generation of language data. They emphasized the need for further research and development to address ethical concerns and harness the full potential of LLMs in psychological research, highlighting both the opportunities and challenges in this emerging field.
In conclusion, LLMs have demonstrated significant potential as cognitive models and have provided valuable insights for cognitive science research. However, their limitations and the need for careful interpretation of their capabilities underscore the importance of continued research and interdisciplinary collaboration in this rapidly evolving field. Future work should focus on refining LLMs to better align with human cognitive processes, developing more rigorous evaluation methods, and addressing ethical considerations to ensure responsible and productive integration of LLMs in cognitive science research.
§.§ Application of LLMs in specific cognitive fields
LLMs have demonstrated significant potential in various cognitive domains, including causal reasoning, lexical semantics, and creative writing. In the realm of causal inference, Liu et al. <cit.> conducted a comprehensive survey exploring the mutual benefits between LLMs and causal inference, highlighting how causal perspectives can enhance LLMs' reasoning capacities, fairness, and safety. Similarly, Kıcıman et al. <cit.> benchmarked the causal capabilities of LLMs, finding that they outperform existing methods in generating causal arguments across various tasks, while also noting their limitations in critical decision-making scenarios. In the field of lexical semantics, Petersen and Potts <cit.> utilized LLMs to conduct a detailed case study of the English verb "break," demonstrating that LLM representations can capture known sense distinctions and identify new sense combinations. Their findings suggest a reconsideration of the commitment to discreteness in semantic theory, favoring a more fluid, usage-based approach. Extending to creative domains, Chakrabarty et al. <cit.> investigated the utility of LLMs in assisting professional writers through an empirical user study. Their research revealed that writers find LLMs most helpful for translation and review tasks rather than planning, while also identifying significant weaknesses in current models, such as reliance on clichés and lack of nuance.
These studies collectively underscore the diverse applications of LLMs in cognitive fields, from enhancing causal reasoning to supporting creative processes, while also highlighting areas for improvement and future research directions. In conclusion, the application of Large Language Models in cognitive science research represents a significant advancement in our ability to model and understand human cognition. LLMs have demonstrated remarkable potential as cognitive models, offering insights into language processing, reasoning, and decision-making that challenge and expand existing theories. Their versatility in addressing diverse cognitive tasks, from causal inference to creative writing, underscores their value as research tools across multiple domains of cognitive science.
However, the integration of LLMs into cognitive research is not without challenges. Researchers must navigate issues of interpretability, ethical considerations, and the potential for overinterpretation of model capabilities. The ongoing debate about the nature of LLMs "understanding" and its relationship to human cognition highlights the need for continued critical examination and empirical investigation. As the field progresses, interdisciplinary collaboration will be crucial in refining LLMs to better align with human cognitive processes, developing more rigorous evaluation methods, and addressing ethical concerns. The future of LLMs in cognitive science research holds promise for transformative insights into the nature of intelligence, both artificial and biological, potentially bridging gaps between computational models and human cognition. By carefully leveraging the strengths of LLMs while acknowledging their limitations, researchers can continue to push the boundaries of our understanding of the mind and pave the way for more advanced AI systems that complement and enhance human cognitive abilities.
§ LIMITATIONS AND IMPROVEMENT OF LLMS CAPABILITIES
The rapid advancement of LLMs has necessitated a comprehensive evaluation of their capabilities and limitations. This section examines the cognitive biases and constraints inherent in LLMs, as well as proposed methods for enhancing their performance. By critically analyzing these aspects, researchers aim to develop more robust and reliable AI systems that can better emulate human-like cognition and language understanding.
§.§ Cognitive biases and limitations of LLMs
Recent studies have extensively explored the cognitive biases and limitations of LLMs. Ullman <cit.> demonstrated that LLMs fail on trivial alterations to Theory-of-Mind tasks, suggesting a lack of robust Theory-of-Mind capabilities. Talboy and Fuller <cit.> identified multiple cognitive biases in LLMs similar to those found in human reasoning, highlighting the need for increased awareness and mitigation strategies. Thorstad <cit.> advocated for cautious optimism about LLMs performance while acknowledging genuine biases, particularly framing effects. Singh et al. <cit.> investigated the confidence-competence gap in LLMs, revealing instances of overconfidence and underconfidence reminiscent of the Dunning-Kruger effect. Marcus et al. <cit.> argued that LLMs currently lack deeper linguistic and cognitive understanding, leading to incomplete and biased representations of human language. Macmillan-Scott and Musolesi <cit.> evaluated seven LLMs using cognitive psychology tasks, finding that they display irrationality differently from humans and exhibit significant inconsistency in their responses. Jones and Steinhardt <cit.> presented a method inspired by human cognitive biases to systematically identify and test for qualitative errors in LLMs, uncovering predictable and high-impact errors. Smith et al. <cit.> proposed using the term "confabulation" instead of "hallucination" to more accurately describe inaccurate outputs of LLMs, emphasizing the importance of precise metaphorical language in understanding AI processes.
§.§ Methods for improving LLMs performance
Researchers have proposed various methods to improve LLMs performance and address their limitations. Nguyen <cit.> introduced the bounded pragmatic speaker model to understand and improve language models by drawing parallels with human cognition and suggesting enhancements to reinforcement learning from human feedback (RLHF). Lv et al. <cit.> developed CogGPT, an LLM-driven agent with an iterative cognitive mechanism that outperforms existing methods in facilitating role-specific cognitive dynamics under continuous information flows. Prystawski et al. <cit.> demonstrated that using chain-of-thought prompts informed by probabilistic models can improve LLMs' ability to understand and paraphrase metaphors. Aw and Toneva <cit.> found that training language models to summarize narratives improves their alignment with human brain activity, indicating deeper language understanding. Du et al. <cit.> reviewed recent developments addressing shortcut learning and robustness challenges in LLMs, suggesting the combination of data-driven schemes with domain knowledge and the introduction of more inductive biases into model architectures.
These studies collectively highlight the importance of understanding and addressing cognitive biases and limitations in LLMs while exploring innovative methods to enhance their performance and alignment with human cognition. Future research should focus on developing more robust evaluation techniques, integrating insights from cognitive science, and creating LLMs that exhibit deeper linguistic and cognitive understanding.
In conclusion, the assessment and improvement of LLM capabilities remain critical areas of research in the field of AI. The studies reviewed in this section collectively highlight the importance of understanding and addressing cognitive biases and limitations in LLMs while exploring innovative methods to enhance their performance and alignment with human cognition. Future research should focus on developing more robust evaluation techniques, integrating insights from cognitive science, and creating LLMs that exhibit deeper linguistic and cognitive understanding. By addressing these challenges, researchers can pave the way for more advanced and reliable AI systems that can better serve human needs and contribute to various domains of knowledge and application.
§ INTEGRATION OF LLMS WITH COGNITIVE ARCHITECTURES
Recent research has explored various approaches to integrate LLMs with cognitive architectures, aiming to enhance AI systems' capabilities. This synergistic approach leverages the strengths of both LLMs and cognitive architectures while mitigating their respective weaknesses. Romero et al. <cit.> presented three integration approaches: modular, agency, and neuro-symbolic, each with its own theoretical grounding and empirical support. Kirk et al. <cit.> explored the direct extraction of task knowledge from GPT-3 by cognitive agents, using template-based prompting and natural-language interaction. They proposed a six-step process for knowledge extraction and integration into cognitive architectures. Joshi and Ustun <cit.> proposed a method to augment cognitive architectures like Soar and Sigma with generative LLMs, using them as prompt-able declarative memory within the architecture. González-Santamarta et al. <cit.> integrated LLMs into the MERLIN2 cognitive architecture for autonomous robots, focusing on enhancing reasoning capabilities and human-robot interaction.
Several studies have demonstrated the potential benefits of combining LLMs with cognitive architectures in various domains. Zhu and Simmons <cit.> presented a framework that combines LLMs with cognitive architectures to create an efficient and adaptable agent for performing kitchen tasks. Their approach demonstrated improved efficiency and fewer required tokens compared to using LLMs alone. Nakos and Forbus <cit.> discussed the integration of BERT into the Companion cognitive architecture, showing improvements in disambiguation and fact plausibility prediction for natural language understanding tasks. Wray et al. <cit.> reviewed the capabilities of LMs for cognitive systems and proposed a research strategy for integrating LMs into cognitive agents to improve task learning and performance. They emphasized the need for effective prompting, interpretation, and verification strategies. Zhou et al.<cit.> proposed a Cognitive Personalized Search (CoPS) model that integrates LLMs with a cognitive memory mechanism inspired by human cognition to enhance user modeling and improve personalized search results.
These studies collectively demonstrate the potential of integrating LLMs with cognitive architectures to create more robust, efficient, and adaptable AI systems. However, challenges remain, including ensuring the accuracy and relevance of extracted knowledge, managing computational costs, and addressing the limitations of both LLMs and cognitive architectures. Future research directions include exploring more sophisticated integration methods, improving the efficiency of LLM-based reasoning, and investigating the application of these integrated systems in various domains.
§ DISCUSSION
The intersection of LLMs and cognitive science has opened up a fascinating new frontier in AI and our understanding of human cognition. This review has highlighted the significant progress made in comparing LLMs and human cognitive processes, developing methods for evaluating LLMs cognitive abilities, and exploring the potential of LLMs as cognitive models. However, it also reveals several important areas for future research and consideration.
One of the most striking findings is the remarkable similarity between LLMs and human cognitive processes in certain domains, particularly in language processing and some aspects of reasoning. The ability of LLMs to exhibit human-like priming effects, content effects in logical reasoning, and even capture aspects of human sensory judgments across multiple modalities suggests a deep connection between these artificial systems and human cognition. This similarity extends to the neural level, with larger neural language models showing representations increasingly similar to neural response measurements from brain imaging.
However, the review also underscores significant differences between LLMs and human cognitive processes. Humans generally outperform LLMs in reasoning tasks, especially with out-of-distribution prompts, demonstrating greater robustness and flexibility. The struggle of LLMs to emulate human-like reasoning when faced with novel and constrained problems indicates limitations in their ability to generalize beyond their training data. Moreover, while LLMs exhibit near human-level formal linguistic competence, they show patchy performance in functional linguistic competence, suggesting a gap in deeper, context-dependent understanding and reasoning.
These findings highlight the need for future research to focus on enhancing the generalization capabilities of LLMs and improving their performance in functional linguistic competence. Developing methods to imbue LLMs with more robust and flexible reasoning abilities, particularly in novel and constrained problem spaces, could significantly advance their cognitive capabilities.
The review also reveals the potential of LLMs as cognitive models, with studies demonstrating that fine-tuned LLMs can offer precise representations of human behavior and often outperform traditional cognitive models in decision-making tasks. This suggests a promising avenue for using LLMs to gain insights into human cognitive processes. However, caution is necessary when interpreting these findings, as the fundamental differences in architecture and learning processes between LLMs and the human brain must be considered.
§ FUTURE CHALLENGE
Future research should focus on developing more sophisticated methods for aligning LLMs with human cognitive processes. This could involve integrating insights from cognitive science into the architecture and training of LLMs, as well as exploring novel ways to evaluate and compare LLMs performance with human cognition across a wider range of cognitive tasks.
The application of LLMs in specific cognitive fields, such as causal reasoning, lexical semantics, and creative writing, demonstrates their potential to contribute to various areas of cognitive science research. However, it also highlights the need for continued refinement and specialization of LLMs for specific cognitive domains. Future work could focus on developing domain-specific LLMs that more accurately model human cognition in particular areas of expertise.
The review also addresses the cognitive biases and limitations of LLMs, revealing that these models can exhibit biases similar to those found in human reasoning. This finding presents both challenges and opportunities. On one hand, it underscores the need for increased awareness and mitigation strategies to address these biases in AI systems. On the other hand, it offers a unique opportunity to study cognitive biases in a controlled, artificial environment, potentially leading to new insights into the nature and origins of these biases in human cognition.
The integration of LLMs with cognitive architectures represents a promising direction for future research. This approach aims to leverage the strengths of both LLMs and cognitive architectures while mitigating their respective weaknesses. Future work in this area could focus on developing more sophisticated integration methods, improving the efficiency of LLM-based reasoning within cognitive architectures, and exploring the application of these integrated systems in various real-world domains.
In conclusion, the intersection of LLMs and cognitive science offers exciting possibilities for advancing our understanding of both artificial and human intelligence. However, it also presents significant challenges that require careful consideration and further research. As we continue to explore this frontier, it is crucial to maintain a balanced perspective, acknowledging both the remarkable capabilities of LLMs and their current limitations. By doing so, we can work towards developing AI systems that not only perform well on specific tasks but also contribute to our understanding of cognition itself.
IEEEtran
*
|
http://arxiv.org/abs/2409.02848v1 | 20240904161943 | Subspace-thermal discrete time crystals from phase transitions between different n-tuple discrete time crystals | [
"Hongye Yu",
"Tzu-Chieh Wei"
] | quant-ph | [
"quant-ph",
"cond-mat.dis-nn"
] |
UTF8gbsn
C. N. Yang Institute for Theoretical Physics,
State University of New York at
Stony Brook, Stony Brook, NY 11794-3840, USA
Department of Physics and Astronomy, State University of New York at
Stony Brook, Stony Brook, NY 11794-3800, USA
C. N. Yang Institute for Theoretical Physics,
State University of New York at
Stony Brook, Stony Brook, NY 11794-3840, USA
Department of Physics and Astronomy, State University of New York at
Stony Brook, Stony Brook, NY 11794-3800, USA
§ ABSTRACT
We propose a new Floquet time crystal model that responds in arbitrary multiples of the driving period. Such an n-tuple discrete time crystal is theoretically constructed by permuting spins in a disordered chain and is well suited for experiment implementations. Transitions between these time crystals with different periods give rise to a novel phase of matter that we call subspace-thermal discrete time crystals, where states within subspaces are fully thermalized at an early time. However, the whole system still robustly responds to the periodic driving subharmonically, with a period being the greatest common divisor of the original two periods. Existing theoretical analysis from many-body localization fails to understand the rigidity of such subspace-thermal time crystal phases. To resolve this, we develop a new theoretical framework from the perspective of the robust 2π/n quasi-energy gap. Its robustness is analytically proved, under a reasonable conjecture, by a new perturbation theory for unitary operators. The proof applies beyond the models considered here to other existing discrete time crystals realized by kicking disordered systems, thus offering a systematic way to construct new discrete time crystal models. We also introduce the notion of DTC-charges that allow us to probe the observables that spontaneously break the time-translation symmetry in both the regular discrete time crystals and subspace-thermal discrete time crystals. Moreover, our discrete time crystal models can be generalized to higher spin magnitudes or qudits, as well as higher spatial dimensions.
Subspace-thermal discrete time crystals from phase transitions between different n-tuple discrete time crystals
Tzu-Chieh Wei
Accepted Sep 3 2024 to ApJ Letters
===============================================================================================================
§ INTRODUCTION
A time crystal was originally proposed <cit.> by Wilczek as a phase of matter that spontaneously breaks the continuous time translation symmetry (τ). Despite that breaking it in equilibrium was later proved <cit.> not possible, various models <cit.> that spontaneously break the discrete time translation symmetry have been proposed. These discrete time crystals (DTC) are realized in periodic-driven (Floquet) systems, with some observables responding with periods that are multiples of the driving period. Comprehensive reviews of time crystals can be found in Refs. <cit.>.
A major feature and challenge of a DTC system is its stability of subharmonic response against perturbations. A typical Floquet system's state can be thermalized to infinite temperature, whereas a DTC system contains a mechanism that robustly breaks ergodicity, thus preventing thermalization. The robust ergodicity breaking can be induced in several different settings, such as many-body localization <cit.> (MBL), Floquet prethermalization <cit.>, and others <cit.>.
Recently, various DTC phases have been observed in experiments <cit.>. Despite several higher n-tuple DTC models <cit.> having been theoretically proposed, there are rarely experimentally realized DTC models that exhibit periods higher than doubling. In this work, we propose a new model for an arbitrary integer n-tuple DTC feasible for experimental realizations, based on permutations on disordered one-dimensional spin-1/2 chains. For n=2, our model is reduced to a recently proposed swapping Floquet DTC model <cit.>. As our n-DTC model is based on “kicking" a disordered spin chain, it shares similar properties to existing MBL-DTC models, such as kicked Ising models <cit.>.
A remarkable property of an MBL-DTC system is the robust
2π/n gap in its quasi-spectrum (note we absorb the gap dependence on driving frequency into the effective Hamiltonian for simplicity). For typical MBL-DTC systems <cit.> with size L, when local period-T perturbations are added, the deviations from the 2π/n are exponentially <cit.> small e^-O(L) in L. Most proofs for this property rely on corollaries from the MBL <cit.>, such as the existence of a quasi-local unitary relating the perturbed eigenstates and unperturbed ones. In this work, we offer a new proof from the perturbative perspective that does not rely on the system being MBL. In our proof, a specific form of eigenstates with minimum quasi-spectral gap not closing severely, together with a conjecture assuming effects from globally different states can be neglected, suffice for the robust 2π/n gap against k-local period-T perturbations (with k finite). Such properties appear not just in MBL systems, hinting that MBL may not be a necessary condition for this type of DTC.
Indeed, during the phase transition among our n-DTCs, we identify a new form of DTC that is fully thermalized but still exhibits robust subharmonic oscillations for some observables. The thermalization saturates quickly but is restricted in subspaces of the whole Hilbert space and remains there for an expentially long time ∼ e^O(L). Such restriction is due to the symmetries of our model in the unperturbed case but is robust against perturbations even if they break the symmetries. The subharmonic oscillations can be viewed as a thermal state restricted in one subspace, cyclically jumping to another subspace without mixing with other subspaces, which can be observed by measuring the symmetry charges that divide those subspaces. We call this new phase a subspace-thermal DTC (ST-DTC). The ST-DTC is different from MBL-DTC: In MBL-DTC <cit.>, the system retains the memory of its initial state forever, whereas, in ST-DTC, the memory is lost at the early time due to the thermalization in subspaces. The ST-DTC is also different from prethermal-DTC: in the prethermal-DTC, the lifetime scale exponentially <cit.> with the driving frequency e^O(ω_0/J) (J is the local energy scale), whereas the lifetime of ST-DTC is not directly related with ω_0, but with the system size e^O(L) similar to MBL-DTC. In addition, the robust subharmonic response in the prethermal-DTC only holds for low-energy initial states <cit.>, whereas in the ST-DTC it holds for almost every initial state.
Our perturbative analysis for the robustness of 2π/n gap can explain the existence of the ST-DTC well. In addition, the analysis offers a systematic way to construct potential n-DTC models, which is much broader than MBL systems, and is not limited by the dimension or interaction range.
Moreover, as shown in previous works <cit.>, a non-DTC phase is inevitable during the phase transition between different DTCs. In this work, we show that the transition can be done through an ST-DTC instead of a fully thermalized phase.
The rest of the paper is organized as follows: In Sec. <ref>, we briefly outline the proof for robust 2π/n gap in DTC systems and leave details to Appendix. <ref> and <ref>, where we introduce a new unitary perturbation theory and apply it to the Floquet operator. In Sec. <ref>, we present the n-tuple DTC model by permuting a disordered 1-D spin-1/2 chain. In Sec <ref>, we show that the subspace-thermal DTC emerges, oscillating with period n_G≡(n_1,n_2), during the phase transition between n_1-DTC and n_2-DTC. With perturbation, the thermalization is robustly confined in the perturbed subspaces due to the emergent symmetries, and one can observe the robust subharmonic oscillations by measuring DTC-charges. In Sec. <ref>, we briefly discuss the potential experimental realization of our models, such as in current noisy intermediate-scale quantum (NISQ) computers. We conclude in
Sec. <ref>.
§ SKETCH OF PROVING THE ROBUST 2Π/N GAP OF DTCS
In this section, we briefly outline the proof for the robustness of 2π/n gap of disordered DTCs against period-T local perturbations, and refer the readers to Appendix. <ref> and <ref> for further details.
For a Floquet system driven by a periodic Hamiltonian H_0(t+T)=H_0(t), the evolution of the system is governed by U_0(t)≡𝒯 e^-i ∫_0^t H_0(t') d t' (where 𝒯 denotes time ordering), whose stroboscopic properties can be fully described by the Floquet operator U_F^0≡ U_0(T). We define the quasi-energy ε of the U_F^0 by writing the eigenvalues of U_F^0 as e^-i ε. We are interested in how the quasi-spectrum of U_F^0 changes in the presence of perturbations.
Supposing a periodic perturbation λV̂(t+T)=λV̂(t) is added, the new Floquet operator becomes
U_F(λ)=𝒯exp(-i ∫_0^T(H_0(t)+λV̂(t)) d t).
The perturbed Floquet operator U_F(λ) can be alternatively <cit.> written as
U_F(λ)≡ U_F^0 U_λ,
where U_λ= 𝒯exp(-i ∫_0^T U_0^†(t) λV̂(t) U_0(t)d t)=: e^-iλ V and we have defined an effective Hermitian term λ V for U_λ.
To analytically study the stability of an L-site DTC system, we must accurately obtain the perturbed quasi-energies up to O(L)-th order. Existing perturbation theories <cit.> either have convergence problems for unitary operators or are hard to analyze in arbitrarily high order (more detailed discussions are in Appendix <ref>). Thus, we require a new perturbation theory for unitary operators that can be explicitly calculated to arbitrary order and rigorously converged for small enough perturbations.
§.§ Unitary perturbation theory
The goal of the unitary perturbation theory is to find quasi-energies and eigenstates for the perturbed unitary operator U_F(λ)≡ U_F^0 U_λ. The unitaries can be generic, not necessarily the Floquet operators for periodic-driven systems that we focus on here. For convenience, we define effective Hamiltonians for these unitary operators e^-i H_F(λ) ≡ U_F(λ) , e^-i H_F(0) ≡ U_F^0 and e^-i λ V ≡ U_λ (the last was already defined above). Then we obtain the relation
e^-i H_F(λ) =e^-i H_F(0) e^-i λ V .
Compared to the standard definition of effective Hamiltonians for Floquet systems, we absorb the T in effective Hamiltonians for simplicity. Suppose we are interested in an eigenstate |ψ_i⟩ with quasi-energy E_i of U_F(λ), which is close to the eigenstate |i⟩ with quasi-energy ε_i of the unperturbed unitary U_F^0. This is equivalent to solving
e^i [E-H_F(0)]|ψ⟩= e^ i λ V |ψ⟩.
Directly applying conventional perturbation theories to this equation can lead to convergence problems. To remedy this, we introduce a novel trick by rewriting the equation as
tan(E-H_F(0)/2) |ψ⟩= tan(λ V/2) |ψ⟩,
which is equivalent to Eq. (<ref>), as they share exactly the same solutions [If we also regard the infinity as one point for tanπ/2, we can construct a bijective map between e^i A and tanA/2.]. Since eigenvalues of tanλ V/2 are small when λ→ 0, we can apply conventional perturbation theories to Eq. (<ref>) without convergence problems. Below, we use a similar procedure to the Brillouin-Wigner perturbation theory <cit.>. To begin with, we look for a solution |ψ_i⟩ close to the original eigenstate |i⟩ and define the projector P_i≡|i⟩⟨i|, as well as the associated matrix R_i ≡∑_m≠ i(E-ε_m/2)|m⟩⟨m|.
Applying R_i to the Eq. (<ref>), we obtain (1-P_i)|ψ_i⟩=R_i tan(λ V/2) |ψ_i⟩,
which leads to |ψ_i⟩=1/1-R_i tan(λ V/2)P_i|ψ_i⟩, which can be expanded as follows,
|ψ_i⟩=(1+R_i tan(λ V/2)+⋯) P_i|ψ_i⟩,
with the convergence condition tanλ V/2 < tanE_i-ε_m/2, for m≠ i. Using the relation ⟨i| tan(E-H_F(0)/2)| ψ_i|=⟩⟨i|tan(λ V/2) |ψ_i⟩, we obtain the expansion for the perturbed quasi-energy E_i:
tan(E_i (λ)-ε_i/2)= ⟨i|tanλ V/2|i|+⟩
∑_m_1 ≠ i⟨i|tanλ V/2| m_1|⟨%s|%s⟩⟩ m_1|tanλ V/2| i/tanE_i (λ)-ε_m_1/2+…,
where we see that the j-th term on the RHS is of order O(λ^j). Note that the perturbed energy E_i appears on both sides of the equation, so it is hard to solve it directly. Instead, one can solve it recursively. Let E_i^(j)(λ) be the j-th order approximation of E_i(λ) (i.e., |E_i^(j)(λ)-E_i(λ)|∼ o(λ^j) ), then one can obtain E_i^(j+1)(λ) by setting E_i(λ)=E_i^(j)(λ) on the RHS of Eq. (<ref>), and repeat the procedure recursively.
So far, we have introduced a new perturbation theory for unitary operators based on the “tangent trick" applied to the Brillouin-Wigner perturbation theory. This theory works for general quasi-spectral perturbation by unitary operators, including Floquet operators of periodic-driven systems, which paves the way for our perturbative analysis for DTC models. We present in Appendix <ref> more rigorous and detailed discussions, along with the unitary perturbation theory near degeneracy.
§.§ Differences in quasi-energy corrections are O(L)-order
In this section, we discuss a class of DTC systems with solvable points, where the Floquet operator U_F exactly transforms one state |χ_m⟩ to another globally different orthogonal state |χ_m+1⟩, in a cyclic way. In addition, we also require the system to exhibit a “local spectral gap”: states that differ locally have non-vanishing quasi-energy gaps, which are usually induced by random interactions. We remark that these conditions are satisfied by most existing MBL-DTC models <cit.>.
For an n-DTC, the state returns to itself after n periods of driving. One can verify that if U_F acts in this way for the whole Hilbert space, all {|χ_m⟩} form an orthogonal basis, and thus, we can divide the Hilbert space into dynamically disjoint subspaces. Within the α-th subspace sector that has period n, the action of U_F is
U_F|χ_α,m⟩= e^-i φ_α,m+1|χ_α,m+1⟩,
where 1≤ m ≤ n for n-DTC, and |χ_α,n+1⟩≡|χ_α,1⟩. In such a case, the quasi-energies ε_α,j and eigenstates |Φ_α,j⟩ can be exactly solved as follows,
|Φ_α,j⟩ =1/√(n)∑_m=1^n e^i (θ_α,m+m j2π/n )|χ_α,m⟩,
ε_α,j =1/n∑_m=1^n φ_α,m + j 2π/n,
where θ_α,m=1/n∑_i=1^n (m-i n) φ_α,i and U_F|Φ⟩=e^-iε|Φ⟩. Thus, the quasi-energies are separated with exact 2π/n spacing, and the eigenstates are all composed of an equal-weight superposition of |χ_α,m⟩'s, with different relative phases among them for different eigenstates. When |χ_α,m⟩'s are globally different from each other, the eigenstates {|Φ_α,j⟩} are generalized Schrödinger's cat states. We define l_α being the minimum distance among states in the α-th subspace, and define |χ_α,m⟩'s to be globally different if l_α is O(L). The distance can be defined in various way, e.g., Hamming distance for Z-basis states. For general states, one can define the distance of |χ_α,m⟩ and |χ_α,m'⟩ being l if ⟨χ_α,m|𝒱|χ_α,m'|$⟩ is non-zero only if𝒱contains interactions involving at leastlsites. In DTC systems,l_αis typicallyO(L).
A conjecture. Before proceeding to include perturbations, we introduce a conjecture we need in the proof. Suppose when a finite-body period-Tperturbation of strengthO(λ)is added to a (sufficiently strong) disordered DTC system defined above, the original eigenstate|Φ_α,i(0)⟩is perturbed to|Φ_α,i(λ)⟩. For a sufficiently small but non-vanishing perturbation strengthλ>0, we conjecture that there exists a finite and sufficiently largeμ∼log1/λ, so that for distancel≥l_α/4andl_α>γL(withγconstant), one can find an operator𝒱^(<l)involving at mostl-body interactions, satisfying
|Φ_α,i(λ)⟩-
𝒱^(<l)|Φ_α,i(0)⟩<C e^-μ l,
where0<C<1is a constant. We also give an intuitive argument for its validness in disordered DTC, in Appendix <ref>. We remark that the conjecture is a weaker version of the “local perturbations perturb locally” (LPPL) <cit.>, an analog of a similar property in gapped systems <cit.>. The LPPL can be equivalently written in a similar form as Eq. (<ref>), with𝒱^(<l)limited to short-range interactions, whereas our conjecture has no restriction on the interaction length. Moreover, LPPL requires that Eq. (<ref>) holds forlgreater than a finite number, whereas we only require that Eq. (<ref>) holds forlgreater than a portion of system sizeL. The LPPL is usually satisfied and well accepted in MBL-DTC systems <cit.> with local spectral gaps and served as a stepping stone to prove their stability.
When applying our unitary perturbation theory to disordered DTC systems, eigenstates (<ref>) ofU_F^0have a special property against local perturbations: the magnitudes of terms in the quasi-energy perturbation (<ref>) for different eigenstates in the same subspace ({|Φ_α,j⟩}with differentj's) are almost identical, with deviations at the order ofλ^O(L). The terms can have relative phases of the forme^i c 2π/n, withcbeing an integer, but they can be eliminated if states involved are not too far away from{|Φ_α,j⟩}. In addition, using the above conjecture, contributions from globally distinct states to the quasi-energy changes can be shown to be exponentially small. Furthermore, the relation holds up toO(L)-order perturbation theory. Thus, the overall quasi-energy changes for states in the same sector are almost identical, with deviations at the order ofλ^O(L). Thus, the difference in the perturbed quasi-energiesE_α,j+1-E_α,jremains almost2π/n, withλ^O(L)corrections, despite that the change from original quasi-energyE_α,j-ε_α,jcan be still as large asO(λ). We leave the rigorous statements and details in Appendix <ref>.
Moreover, the analytical analysis also works for many existing model (see discussions in Appendices <ref> and <ref>), and offers a systematic way to construct newn-tuple DTC models. To do this, one can firstly consider a solvable model, where the Floquet operatorU_Fexactly transforms one state|χ_m⟩to another globally different orthogonal state|χ_m+1⟩, in a cyclical way with periodn>1. Then one can add disorders that do not mix|χ_m⟩'s, to break the quasi-spectral degeneracies. Note that such disorder may not always lead to MBL, but is usually sufficient to construct a DTC model. According the proof, the2π/ngap for such model is exponentially robust against local perturbations, which leads to a DTC phase.
§ N-TUPLE DTC MODEL FROM PERMUTATION
We define the period-nTdiscrete time crystal on a 1-D length-Lspin-1/2chain with open boundary condition [different boundary conditions will not bring qualitative differences in this model], whose evolution is governed by a period-THamiltonianH(t+T)=H(t). Within one period (T=t_1+t_2+t_3), the Hamiltonian is (see Fig. <ref>)
H^(n)(t)=
H_1^(n), for 0 ≤ t<t_1 ,
H_2^(n), for t_1 ≤ t<t_1+t_2,
H_ int, for t_1+t_2 ≤ t<t_1+t_2+t_3,
whereH_1^(n)andH_2^(n)both consist of swap gates acting as permutations and the interaction HamiltonianH_intincludes disordered couplings and fields
H_ int=∑_i<j J_ijσ_i^z σ_j^z+∑_i=1^L h_i^z σ_i^z+∑_i=1^L ϵ_i^x σ_i^x,
where the parametersh_i^z∈[0,2h̅^̅z̅]are randomly chosen from uniform distributions, andϵ_i^x∈[-ϵ^x,ϵ^x]'s are small uniformly random perturbations. We choose the couplingJ_ijto be a disordered power-law interaction obtained by
J_ij=1/ℒ_L, κJ̃_i, j/|i-j|^κ,
whereJ̃_ij∈[1/2J̅,3/2J̅]is uniformly and randomly chosen, andℒ_L, κis a coefficient to make the energy extensive <cit.>
ℒ_L, κ= 1 if κ⩾ 1,
ln L if κ=1,
L^1-κ if κ<1.
We remark that power-law interaction is not the only choice [For example, one can randomly choose J_ij uniformly for |i-j|≤ 2n, and set J_ij=0 for |i-j| > 2n, where n is the length of the spatial permutation units.] for our model being DTC; however, nearest-neighbor interactions are not sufficient <cit.> to break all the degeneracies in the quasi-spectrum. For the even periodn≥2, theH_1^(n)andH_2^(n)are
H_1^(n) = π/2 t_1(1-ϵ_1)∑_i=1^L/2-11/2(σ̂_2i-1·σ̂_2i-1),
H_2^(n) = π/2 t_2(1-ϵ_2)( ∑_i=1^L/2-11/2 (σ̂_2i·σ̂_2i+1-1).
+.∑_i=1^L/n-11/2(1-σ̂_ni·σ̂_ni+1) ),
whereσ̂_i·σ̂_j=σ_i^x σ_j^x+σ_i^y σ_j^y+σ_i^z σ_j^zis the Heisenberg type of interaction. For odd periodn≥3, theH_1^(n)andH_2^(n)are
H_1^(n) = π/2 t_1(1-ϵ_1)( ∑_i=1^L/2-11/2(σ̂_2i-1·σ̂_2i-1)+∑_i=1^L/(2n)-11/2(1-σ̂_(2i-1)n·σ̂_(2i-1)n+1) ),
H_2^(n) = π/2 t_2(1-ϵ_2)( ∑_i=1^L/2-11/2( σ̂_2i·σ̂_2i+1-1)+∑_i=1^L/(2n)-11/2(1- σ̂_2ni·σ̂_2ni+1) ),
whereϵ_1andϵ_2are close to 0, implying small imperfections. For convenience, in all the following discussions, we assume the system lengthLis divisible [This assumption is for convenience and is not essential for DTC properties. If L is not divisible by n or 2n, one can always set the H_1^(n) and H_2^(n) to be zero at the remainder sites.] bynfor evenLor2nfor oddL, respectively. The Floquet evolution operator for a single period is thenU_F^(n)(ϵ̂)=U_intU_2^(n)U_1^(n)≡e^-i H_int t_3e^-i H_2^(n)t_2e^-i H_1^(n)t_1.
To grasp thenTperiods of such evolution, it is helpful to look into the solvable, unperturbed case (ϵ_i^x=ϵ_1=ϵ_2=0). We define the unperturbed Floquet operator asU_F^(n)≡U_F^(n)(ϵ̂=0)for simplicity. Firstly, we notice that the unitariesU_1^(n), U_2^(n)are groups of swap gate, asSWAP_i,j≡1/2(I+σ̂_i·σ̂_j)=e^-i π/4 (σ̂_i·σ̂_j-1) . Thus,U_1^(n)andU_2^(n)effectively perform swap gates on odd and even bonds, respectively (see Fig. <ref>). The gate structure is periodic in space, with each smallest spatial permutation unit consisting ofnspins [For odd period n, the minimum permutation unit consisting of 2n spins, but the left half and the right half are symmetrical (see Fig. <ref>) and do not have qualitative differences in analytical analysis.]. Within each unit, it can be proved (see Appendix <ref>) that such spin permutation has a period ofn. In addition, theH_intonly adds a phase forZ-basis states. Thus, the whole system follows annT-period evolution forZ-basis initial states. One may notice that someZ-basis states can oscillate with a smaller periodk<n, wherekdividesn, i.e.,k|n. Taken=L=4, for example. While all of the states satisfy(U_F^(4))^4 |ψ^(4)⟩=|ψ^(4)⟩, some of the states (e.g.,|↑↓↓↑⟩) also satisfy(U_F^(4))^2|ψ^(2)⟩=|ψ^(2)⟩and the ferromagnetic states (e.g.,|↑↑↑↑⟩) satisfyU_F^(4)|ψ^(1)⟩=|ψ^(1)⟩, where we label thek-period state as|ψ^(k)⟩. We prove that such lower-period states are exponentially rare, and most of them also have robustk-period subharmonic oscillation and will not be mixed withn-period states even if perturbations are added (see discussions in Appendices <ref> and <ref>). Thus, we will mostly considern-period states in the rest of the discussion in the main text.
When local perturbations are added, we expect that the subharmonic response will be preserved. However, different initial states may have different robustness. As shown in Sec. <ref>, the robustness is determined by the minimum distance among states in the same subspace. If ann-period state has ak-period unit (withk<n), then this unit does not contribute to the distance. Thus, the robustness only relies on the number ofn-period units for states inα-th subspace, which is denoted byl_αin our model. Note that thel_αhere may differ from the definition for general DTC models in Sec. <ref> by a factor, but will not change the overall scaling. In Appendix <ref>, we prove that for the vast majority of the Hilbert space,l_αisO(L/n). Thus, for most states, the subharmonic oscillation will maintain up to ane^O(L/n)time scale. In the thermodynamic limit, the time scale approaches infinity.
In the following discussions on numerics, we set, for convenience,J̅=4, h̅^̅z̅=12, κ=0.5, and fix the evolution time for each Hamiltoniant_1=t_2=1/2andt_3=1. For simplicity, we set allϵproportional to a parameterλ, controlling the overall perturbation strength, whereϵ^x=λ, ϵ_1=0.9λand ϵ_2=1.1λ.
§.§ Symmetries and DTC-charges in solvable case
In the unperturbed case (allϵ=0), the Floquet operatorU_F^(n)exactly permutes spins to different sites. For convenience, we relabel them asσ_i,1≡σ_(i-1)n+1and
σ^z_i,j≡(U_F^(n))^j-1∘ (σ^z_i,1),
whereU∘σ≡U^†σUand we useU_F^(n)to denote the unperturbed Floquet operator forn-DTC throughout the paper. The actions ofU_F^(n)on them are
U_F^(n)∘ (σ^z_i,j)=σ^z_i,j+1,
[U_F^(n)]^n ∘ (σ^z_i,j)=σ^z_i,j.
A slightly different picture arises from the perspective of states (see Fig. <ref>b). ForZ-basis states , the Floquet operator cyclically transforms oneZ-basis state|z⟩to anotherU_F^(n)|z_m⟩=e^-i E_int(z_m+1)|z_m+1⟩, whereE_int(z)is the energy of theH_intfor the state|z⟩. Thus, the whole Hilbert space is divided into exponentially many dynamically disjoint fragments, with each subspace consisting ofkZ-basis states, wherek(with k| n) is the period of them. Forn-period states in theα-th subspace, the corresponding eigenstates|ϕ_α,j^(n)⟩and quasi-energiesε_α,j^(n)are (see Eq. (<ref>) and Appendix <ref> for more details)
|ϕ_α,j^(n)⟩ =1/√(n)∑_m=1^n e^i (θ_α,m^(n)+m j2π/n )|z^(n)_α,m⟩,
ε_α,j^(n) =1/n∑_m=1^n E_ int(z^(n)_α,m) + j 2π/n,
whereθ_α,m^(n)=1/n∑_i=1^n (m-in) E_int(z_α, i^(n)). We thus see that the quasi-energies within each subspace are separated by exactly2π/nspacing. Similarly, one can obtain2π/kspacing fork-period states.
With the exact2π/k(wherek|n) quasi-spectral separations, one can identify a symmetry groupℤ_nfor the system using the group generator
S ≡ ∑_α∑_j=1^k_α e^-i j2π/k_α|ϕ^(k_α)_α,j⟩⟨ϕ^(k_α)_α,j|,
wherek_α(which dividesn) is the dimension of the subspaceα.
By construction, it is easy to see thatS^n=1and thatScommutes withU_F^(n). The non-trivial relation of theU_F^(n)andSis
.U_F^(n)|_α= .e^-i E_α/k_α S|_α,
whereE_α≡∑_m=1^k_α E_int(z^(k_α)_α,m)andU|_αis the operatorUrestricted in theα-th subspace. This indicates that, when acting within theα-th subspace, the Floquet operatorU_F^(n)has exactly the same effects as theℤ_nsymmetry generatorS, up to a global phasee^-i E_α/k_α. This immediately leads to[U_F^(n)]^n |_α= e^-i E_αn/k_αbeing a pure phase in theα-th sector.
Thus, allZ-basis states within the same sector have the same quasi-energies w.r.t. then-th power ofU_F^(n), i.e.,[U_F^(n)]^n, which gives rise to more symmetries thanU_F^(n); see Eq. (<ref>). We are interested in symmetries that only appear in[U_F^(n)]^nbut not inU_F^(n)so that they naturally exhibit subharmonic oscillations underU_F^(n). As expectation values of symmetry operators are conserved charges for[U_F^(n)]^n, but they oscillate underU_F^(n)within eachn-period evolution, so we also refer to them as DTC-charges. One can verify that allσ^z's , along with most their linear combinations, are DTC-charges ofU_F^(n). By measuring those DTC-charges, we can observe the subharmonic oscillations, which can be seen by Eq. (<ref>).
§.§ Emergent symmetries and robust subharmonic oscillation against local perturbations
We now show that the subharmonic oscillations remain robust against local perturbations. In the unperturbed case, forZ-basis states in theα-th subspace, their components within one spatial unit are distinct if they are oscillating with period-nin the unit. We usel_αto count the number of period-nunits in the whole chain. From the discussion in Sec. <ref>, the correction for the2π/nquasi-spectral separation isΔ^(n) ∼λ^O(l_α). Thus, the time-crystalline structure is exponentially robust to local perturbations and becomes exact in the thermodynamic limit. We have also numerically simulated the quasi-spectral gap dependence withλfor 4-DTC (see Fig. <ref>). With sorted quasi-energies in full space{ε_i}andα-th subspace{ε_α,i}, we define the quasi-spectral spacing and the deviation from2π/nquasi-spectral separation, respectively
Δ^(0)_i =ε_i+1-ε_i,
Δ^(n)_α,i =|ε_α,i+1-ε_α,i-2 π / n|.
We then average them as
log_10Δ^(0) ≡𝐚𝐯𝐠.[1/𝒩∑_ilog_10Δ^(0)_i],
log_10Δ^(n) ≡𝐚𝐯𝐠.[max_α,i(log_10Δ^(n)_α,i)],
where the symbol𝐚𝐯𝐠.[⋯]means averaging over disorders,𝒩counts the number of gaps, and the maximum is only picked fromn-period sectors. Sincel_αaffects the scaling behavior of2π/ngap, in numerical simulations, we average theΔ^(0)_iover the whole Hilbert space, but only pick the maximum ofΔ^(n)_α,iin a subspace with the samel_α's for the consistency of the scaling. In the unperturbed caseλ=0, the subspace is chosen in theZ-basis, where each spatial unit contains exactly one spin down. Forλ>0, the subspace is induced by applying𝒱(λ)to the unperturbed subspace, where𝒱(λ)relates the unperturbed eigenstates to perturbed ones. Forn=4, we see from Fig. <ref> that the average quasi-spectral spacing is almost fixedΔ^(0) ∼O(1/2^L), whereas the average changes of the2π/n=π/2(forn=4) quasi-spectral separation show a clear exponential scalingΔ^(4)∼O(λ^L/4). The scaling can also be analytically obtained (see Appendix <ref>).
With robust2π/ngaps, the symmetryℤ_nand the DTC-chargesσ^z's are perturbed into approximate symmetries and DTC-charges, where the deviations from their exact definitions are exponentially small. Suppose that the original eigenstates ofU_F^(n)(0)are{|ϕ_α,j⟩}and the perturbed eigenstates ofU_F^(n)(λ)are{|ψ_α,j(λ)⟩}, respectively. We define a unitary operator𝒱that connects the original eigenstates to the perturbed ones:𝒱(λ)|ϕ_α,j⟩=|ψ_α,j(λ)⟩. Defining|z̃(λ)⟩≡𝒱(λ)|z⟩, one can verify that (see Appendix <ref> for details)
U_F^(n)(λ)|z̃_α,m(λ)⟩=e^-i Ẽ_α,m+1|z̃_α,m+1(λ)⟩+λ^O(l_α).
Theℤ_nsymmetry is then perturbed toS̃(λ)≡𝒱(λ)S𝒱^†(λ).
One can verify thatS̃(λ)is still an exactℤ_ngenerator satisfyingS̃^n=1, and remains an exact symmetry ofU_F^(n)(λ). However, the relation Eq. (<ref>) no longer holds exactly; instead, it becomes an approximate relation
.U_F^(n)(λ)|_α= .e^-i Ẽ_α/k_αS̃|_α + λ^O(l_α),
where theα-th subspace is induced by the perturbed eigenstates|ψ_α⟩'s. These results are a direct consequence of the fact that the2π/k_αquasi-energy separation is perturbed at the order ofλ^O(l_α)(see Appendix <ref> for details). Therefore,U_F^(n)(λ)approximately acts like aℤ_nsymmetry generator in the new basis, up to aλ^O(l_α)correction.
This leads to the perturbed DTC-chargesτ_i^z's ofU_F^(n)(λ), which can be obtained from similar transformations onσ^z's. We can defineτ_i^a≡𝒱(λ)σ_i^a𝒱^†(λ)witha ∈{x,y,z}. It is easy to check that |z̃⟩'s are eigenstates ofτ_i^z's. From Eq. (<ref>), we see thatτ^z's become approximate emergent symmetries of[U_F^(n)(λ)]^n, and act like exact DTC-charges in the thermodynamic limit, i.e.,
[(U_F^(n)(λ))^n,τ^z]|_α ∼λ^O(l_α),
U_F^(n)(λ) ∘ (τ^z_i,j)|_α =τ^z_i,j+1|_α+λ^O(l_α).
In addition, we expect𝒱(λ)to be close to identity whenλis small; we can then expandσ^z's inτ-basis
σ^z_i,j=𝒱(λ)^†τ^z_i,j𝒱(λ)=[1-O(λ)]τ^z_i,j+O(λ).
Thus, by measuringσ^z's, we will observe that their major components (dominated byτ^z's) have robust subharmonic oscillations, up to a time scalee^O(L/n), which tends to infinity in the thermodynamic limit. |
http://arxiv.org/abs/2409.02378v1 | 20240904015656 | Bayesian Dynamic Generalized Additive Model for Mortality during COVID-19 Pandemic | [
"Wei Zhang",
"Antonietta Mira",
"Ernst C. Wit"
] | stat.AP | [
"stat.AP"
] |
Nodeless superconductivity and topological nodal states in molybdenum carbide
Toni Shiroka
=============================================================================
§ ABSTRACT
While COVID-19 has resulted in a significant increase in global mortality rates, the impact of the pandemic on mortality from other causes remains uncertain. To gain insight into the broader effects of COVID-19 on various causes of death, we analyze an Italian dataset that includes monthly mortality counts for different causes from January 2015 to December 2020. Our approach involves a generalized additive model enhanced with correlated random effects. The generalized additive model component effectively captures non-linear relationships between various covariates and mortality rates, while the random effects are multivariate time series observations recorded in various locations, and they embody information on the dependence structure present among geographical locations and different causes of mortality. Adopting a Bayesian framework, we impose suitable priors on the model parameters. For efficient posterior computation, we employ variational inference, specifically for fixed effect coefficients and random effects, Gaussian variational approximation is assumed, which streamlines the analysis process. The optimisation is performed using a coordinate ascent variational inference algorithm and several computational strategies are implemented along the way to address the issues arising from the high dimensional nature of the data, providing accelerated and stabilised parameter estimation and statistical inference.
Keywords: generalized additive model, state space model, variational inference
§ INTRODUCTION
The COVID-19 pandemic has had profound impacts across the globe. Researches focus on various aspects such as health disparities linked to racial and socio-economic factors and the healthcare system's adaptation in terms of testing, contact tracing, and vaccine rollouts <cit.>. A key area of investigation is excess mortality, which provides an overarching view of the pandemic's impact on human health. This includes factors such as government lockdown measures and disruptions to non-COVID healthcare services <cit.>. While excess mortality offers a general perspective, examining the consequences of specific causes of death as a result of these factors during the pandemic is crucial for developing more targeted future mitigation strategies. For example, the pandemic has contributed to increases in deaths from chronic conditions as observed by <cit.>. There also has been a rise in accidental deaths, homicides or suicides <cit.>. Contributing to this research field, we analyze Italian monthly death counts recorded in 21 Italian regions from 2015 to 2020, categorized according to the International Classification of Diseases, 10th Revision (ICD-10) <cit.>, to understand the effects on specific human mortality during the pandemic better.
We apply the generalized additive model (GAM) <cit.> to study non-linear relationships between response, cause-specific mortality rate, and continuous covariates, including lockdown intensity level and age, in terms of smooth functions. GAMs have demonstrated considerable potential to study COVID-19 mortality <cit.>. In the more general context of spatial-temporal data modeling, GAMs are well-suited for analyzing changes over time and across different geographic regions, crucial in understanding the differentiating consequences. One way of employing GAMs to model the change of spatial pattern over time is via a tensor product smoother <cit.>. Alternatively, <cit.> introduce dynamic GAMs where extra dynamic spatial random effects are incorporated into the mean of response variable, offering a solution to forecasting discrete time series while estimating relevant nonlinear predictor associations that conventional generalized linear models (GLM) plus spatial temporal random effects are not able to account for <cit.>. In our study, we follow <cit.>'s approach and assume that the mortality rate of specific cause in a region for a certain month combines a GAM regression component and random effects. The random effects are further assumed to be correlated as opposed to being independent, facilitating statistical inference on dependence structure between geographical regions and various causes of death <cit.>. We adopt a Bayesian approach and assign suitable priors on model parameters to better infer correlation structure from the data.
The model framework is a special case of non-Gaussian state space models where the observation equation is Poisson distribution. Inference with state space models includes filtering to estimate the current state given past observations and smoothing to estimate past states given all observations up to the current time <cit.>. Filtering and smoothing is exact in linear Gaussian state space models where Kalman Filter can be applied. When the model is non-Gaussian, particle filter approximates the states using a set of weighted particles <cit.>. The posterior samples in <cit.> are drawn either in the Gibbs sampling software JAGS <cit.> or with the Hamiltonian Monte Carlo in Stan <cit.>. However, the approximation for the target deteriorates as the dimension increases <cit.>. Due to the high dimensionality inherent in the data and model, these sampling algorithms are impractical. To address the issue, we use a variational inference algorithm for fast approximation <cit.>; more specifically, the joint variational density of fixed effect coefficients and random effects takes the Gaussian form. Various methods exist for optimizing the mean and variance of this approximating Gaussian distribution. For instance, <cit.> propose to parameterize the density in terms of its mean and a lower triangular scale matrix whereas <cit.> incorporate sparsity in the Cholesky factors of the precision matrix. Both developed stochastic gradient methods for optimization. Our approach instead involves Newton's method and fixed point iteration as in <cit.> to achieve faster convergence in fewer iterations. This aligns with recent advancements with recent development in Gaussian variational approximations for high-dimensional state space models <cit.>, but distincts in that we do not define structure of the proposed variational approximation thanks to our chosen optimization technique.
The rest of the paper is organized as follows. In Section <ref>, we formulate the model and specify the priors imposed on parameters. In Section <ref>, we demonstrate how to derive the ELBO in this model setup and present the variational algorithm for posterior inferences. We then apply the model and the algorithm to the Italian monthly mortality data in Section <ref>. Finally, Section <ref> gives some concluding remarks and points to future work.
§ BAYESIAN DYNAMIC GAM
Let Y_n,t be the mortality count of instance n at time t, n=1,…,N, t=1,…,T. For each n, covariates such as ordinal a_n for age categorical g_n for gender, l_n for L geographical locations and k_n for K different causes of death are available. Additionally, we are interested in the lockdown effect on outcome variable Y_n,t, therefore we include a stringency index that quantifies the intensity of government restriction policies for each geographical location over time and we denote it by r_n,t, which depends on n through l_n. We assume that Y_n,t is Poisson random variable with rate equal to
ϵ_n,texp[𝐱^β_nβ + f^r(r_n,t) + f^a(a_n) + f^k,r(k_n, r_n,t) + f^k,a(k_n, a_n) + f^g,a(g_n, a_n) + z^*_n,t]
where ϵ_n,t is the offset, 𝐱^β_nβ are parametric terms which include gender effect. f^r(r_n,t) and f^a(a_n) are natural cubic splines for government intervention effect and age effect respectively and they take the form
f^r(r) = ∑_j=1^J u^r_j(r)β^r_j, f^a(a) = ∑_j=1^J u^a_j(a)β^a_j,
where J is the number of knots, u^r_j, u^a_j are natural cubic spline bases, β^r_j and β^a_j are coefficients to be estimated. The smoothness penalization terms associated with the bases are defined as
λ^r_1(β^r)' S^r_1β^r + λ^r_2(β^r)' S^r_2β^r, λ^a_1(β^a)' S^a_1β^a + λ^a_2(β^a)' S^a_2β^a.
Here S^r_1, S^r_1, S^a_1, S^a_2 contain known coefficients. β^r=(β^r_1, …, β^r_J)' and β^a=(β^a_1, …, β^a_J)' are vectors of spline coefficients. From a Bayesian perspective, this is equivalent to imposing the following multivariate normal priors on β_r and β_a
β^r ∼𝒩(0, (λ^r_1S^r_1+λ^r_2 S^r_2)^-1)
β^a ∼𝒩(0, (λ^a_1 S^a_1+λ^a_2 S^a_2)^-1).
λ^r_1, λ^r_2, λ^a_1 and λ^a_2 control the level of roughness.
As for the two-way interaction terms, f^k,r(k, r) models interactions between cause of death and stringency index in a non-parametric manner. f^k,a(k, a) accounts for interactions between causes of death and age while f^g,a(g, a) captures interactions between gender and age. The three interactions terms are constructed in the following way. For each level k of causes of death or each g of gender, a unique spline is specified <cit.>. We formulate f^k,r(k, r) as an example, f^k,a(k, a) and f^g,a(g, a) are modeled in a similar way. For each k other than the baseline, f^k,r(k, r) is assumed to be natural cubic spline such that
f^k,r(k, r) = ∑_j=1^J u^k,r_k,j(r)β^k,r_k,j.
The J(K-1) dimensional vector β^k,r=(β^k,r_2,1,…,β^k,r_2,J,β^k,r_3,1…,β^k,r_K,J)' is jointly penalised by S^k,r_1 and S^k,r_2 with smoothing parameters λ^k,r_1 and λ^k,r_2 and the penalisation term is λ^k,r_1(β^k,r)' S^r_1β^k,r + λ^k,r_2(β^k,r)' S^r_2β^k,r,
which, in terms of Bayesian prior, translates to
β^k,r∼𝒩(0, (λ^k,r_1S^k,r_1+λ^k,r_2 S^k,r_2)^-1).
The last term z^*_n,t embeds the spatial-temporal structure in the data; in fact, z^*_n,t depends on n through l_n and k_n, therefore in total, we have LK time series of length T. z^*_n,t can be modeled with great flexibility. We make the following latent state assumption. Let 𝐳^*_t=(z^*_1,t, …, z^*_N,t)' and 𝐳_t=(z_1,t, …, z_LK,t)' such that
𝐳^*_t = (I_LK⊗1)𝐳_t,
where 1 stands for a vector whose elements are all equal to 1. The dimension of 1 is determined by the number of age groups and gender. The assumption implies that residual mortality rates z^*_n,t of the same causes of death in the same region are identical regardless of age and gender, which is reasonable since we have already taken into account age and gender effect in GAM component. We further assume that marginally the latent state 𝐳_t∼𝒩(μ,Σ) and P(𝐳_t - μ) follows a simple autoregressive (AR) process such that
P(𝐳_t - μ) = Φ P(𝐳_t-1 - μ) + ϵ_t, ϵ_t∼𝒩(0, I_LK-ΦΦ'),
where P is an upper triangular matrix such that P'P=Ω=Σ^-1, i.e. P are the Cholesky factors of the precision matrix Ω.
Φ contains multivariate autoregressive coefficients. For simplicity, we require Φ to be diagonal with diagonal entries ϕ=c(ϕ_1, …, ϕ_LK)' such that -1<ϕ_1, …, ϕ_LK<1. The pre-multiplying of 𝐳_t by P is to ensure that marginally 𝐳_t ∼𝒩(0, Σ), t=0, …, T where Σ incorporates dependence structure of geographical locations as well as mortality causes. Reorganizing 𝐳_t as matrices Z_t of size L× K, we assume that
Z_t ∼MN(μ, Σ^k, Σ^l),
which is a matrix normal distribution equivalent to
𝐳_t ∼𝒩(μ, Σ^k ⊗Σ^l),
where ⊗ stands for the Kronecker product between Σ^k and Σ^l, the covariance matrices of the causes of death dependence and regional dependence respectively. We impose the following priors on the model parameters. Firstly, μ is assigned a normal prior 𝒩(0, σ^2_μ I_LK). Ω^k = (Σ^k)^-1 is the precision matrix that underlines the conditional independence of mortality causes. For simplicity, we stick to the usual Wishart distribution
Ω^k ∼Wishart(δ^k, θ^k I_K).
The same reasoning applies to the precision matrix Ω^l=(Σ^l)^-1 and it follows that
Ω^l ∼Wishart(δ^l, θ^l I_L).
We complete the model with a hierarchy of prior specification on the linear regression coefficients, on penalization parameters that control smoothness of splines and on the autoregressive coefficients, on the cause specific mean. Denoting by 𝐱_n the vector containing 𝐱^β_n and all bases in GAMs for n, the hierarchical model can be summarized as
Y_n,t∼Poisson[ ϵ_n,texp(𝐱_nβ^* + z^*_n,t)], β^* = (β', (β^r)', (β^a)', (β^k,r)', (β^k,a)', (β^g,a)')',
β∼𝒩(0, σ^2_β I), β^r ∼𝒩(0, (λ^r_1S^r_1+λ^r_2 S^r_2)^-1), β^a ∼𝒩(0, (λ^a_1 S^a_1+λ^a_2 S^a_2)^-1),
β^k,r∼𝒩(0, (λ^k,r_1S^k,r_1+λ^k,r_2 S^k,r_2)^-1), β^k,a∼𝒩(0, (λ^k,a_1S^k,a_1+λ^k,a_2 S^k,a_2)^-1),
β^g,a∼𝒩(0, (λ^g,a_1S^g,a_1+λ^g,a_2 S^g,a_2)^-1),
λ^r_1, λ^r_2, λ^a_1, λ^a_2, λ^k,r_1, λ^k,r_2, λ^k,a_1, λ^k,a_2, λ^g,a_1, λ^g,a_2 ∼Gamma(α_λ, β_λ),
𝐳^*_t = (I_LK⊗1)𝐳_t,
P(𝐳_t-μ) = Φ P(𝐳_t-1-μ) + ϵ_t, ϵ_t∼𝒩(0, I_LK-ΦΦ'), μ∼𝒩(0,σ^2_μ I_LK),
Φ = diag(ϕ_1,…,ϕ_LK), ϕ_1+1/2, …, ϕ_LK+1/2i.i,d∼Beta(α_ϕ, β_ϕ),
P'P = Ω = Ω^k ⊗Ω^l,
Ω^k ∼Wishart(δ^k, θ^k I_K), Ω^l ∼Wishart(δ^l, θ^lI_L).
§ POSTERIOR INFERENCE VIA VARIATIONAL APPROXIMATION
To make posterior inference, one may devise suitable Markov Chain Monte Carlo (MCMC) algorithms to obtain posterior samples. However, the algorithm may fail to deliver desirable convergent output within reasonable time when it is difficult to explore the geometry of the target distribution due to the sheer dimension of the data. Therefore we resort to variational inference approach for fast approximation. The target posterior distribution is
p(. β^*,λ,𝐳_0:T, .ϕ, μ, Ω^k, Ω^l |𝐲)
∝ p(𝐲|β^*, 𝐳_0,T)p(β^* |λ)p(λ)p(𝐳_0:T|ϕ, μ, Ω^k, Ω^l)p(μ)p(ϕ)p(Ω^k)p(Ω^l),
and the goal is to find the optimal q^* (β^*,λ,𝐳_0:T, μ,ϕ, Ω^k, Ω^l ) from a pre-specified family of distributions such that the Kullback–Leibler (KL) divergence, defined as
KL[q (·) || p(·|𝐲)] = E_q[logq(·)/p(·|𝐲)],
with expectation taken with respect to q(·), is minimized. This is equivalent to maximizing the evidence lower bound (ELBO) given by
!ELBO[ p(·|𝐲)]
= E_q [logp(𝐲|β^*, 𝐳_0,T)p(β^* |λ)p(𝐳_0:T|ϕ, μ, Ω^k, Ω^l)p(λ)p(μ)p(ϕ)p(Ω^k)p(Ω^l)/q (β^*,𝐳_0:T, λ,μ,ϕ, Ω^k, Ω^l )].
We further assume that the family of candidate approximation q(·) can be factorized as
q (β^*,𝐳_0:T, λ,μ,ϕ, Ω^k, Ω^l ) = q (β^*,𝐳_0:T)q(λ)q(μ)q(ϕ)q(Ω^k)q(Ω^l),
where q (β^*,𝐳_0:T) is multivariate normal distribution with mean 𝐦 and covariance matrix M. This is essentially the variational Gaussian approximation (VGA) that has been widely implemented in literature and its theoretical properties when applied to Poisson data are studied
by <cit.>. M can be full or it can be sparse block diagonal matrix so that there is no correlation between β^* and 𝐳_0:T, which means that q (β^*,𝐳_0:T) can be further factorized as the product of two multivariate normal densities, q (β^*) and q (𝐳_0:T). The sparse version of M greatly reduces algorithm complexity. However, in our application, since the dimension of 𝐳_0:T is overwhelmingly larger than β^*, the block diagonal assumption does not produce much gain; therefore we stick to the full matrix characterisation of M. Furthermore, we choose the following variational densities implied by mean field approximation: for each element in λ, it is a point mass at λ^q,r_1, λ^q,r_2, λ^q,a_1, λ^q,a_2, λ^q,k,r_1, λ^q,k,r_2, λ^q,k,a_1, λ^q,k,a_2, λ^q,g,a_1, λ^q,g,a_2. Together they are λ^q. The Dirac measure choice as a variational density avoids evaluating expectation with respect to otherwise non-trivial variational densities. For q(μ), we assume independent normal approximation densities q(μ)=𝒩(μ^q, diag[(σ^q)^2]). For q(ϕ), even though it is analytically possible to compute the expectation when assuming that (ϕ_1+1)/2i.i,d∼Beta(α^q,ϕ_1, β^q,ϕ_1), …, (ϕ_LK+1)/2 i.i,d∼Beta(α^q,ϕ_LK, β^q,ϕ_LK), the optimization scheme we adapt diverges as only the means of beta variational distributions is identifiable, therefore we use Dirac measures on q(ϕ) so they are ϕ^q. Finally, we set q(Ω^k) = Wishart(δ^q,k, D^q,k) and q(Ω^l) = Wishart(δ^q,l, D^q,l).
§.§ ELBO calculation
With (<ref>), the ELBO can be written as the sum of expectations
ELBO[ p(·|𝐲)]
= E_q [logp(𝐲|β^*, 𝐳_0,T)p(β^* |λ)p(𝐳_0:T|ϕ, μ, Ω^k, Ω^l)/q(β^*,𝐳_0:T)] + E_q(λ)[logp(λ))/q(λ))]
+ E_q(μ) [logp(μ)/q(μ)] + E_q(ϕ) [logp(ϕ)/q(ϕ)] + E_q(Ω^k) [logp(Ω^k)/q(Ω^k)] + E_q(Ω^l ) [logp(Ω^l )/q(Ω^l )].
The first term on the right hand side of the equation bears most importance as it connects likelihood with prior and it is also the most computationally heavy part to optimize due to the dimension of 𝐳_0,T. Denote the mean and covariance matrix of joint multivariate normal prior on β^* and 𝐳_0:T by 𝐦_0 and M_0.
M_0 is block diagonal, whose upper diagonal block corresponds to the covariance of β^* and lower diagonal block is the vector autoregressive matrix that derives from the assumed 𝐳_t dynamics. The lower diagonal block is
(I_T⊗ P^-1)[ I_LK Φ Φ^2 ⋯ Φ^T-1; Φ I_LK Φ ⋯ Φ^T-2; Φ^2 Φ I_LK ⋯ Φ^T-3; ⋮ ⋮ ⋮ ⋱ ⋮; Φ^T-1 Φ^T-2 Φ^T-3 ⋯ I_LK; ](I_T⊗(P^-1)').
It is more convenient to use the precision matrix as the term repeatedly appears in the object function that we aim to optimize. The precision matrix can be expressed as (I_T⊗ P')R(I_T⊗ P) with
!R= [ (I_LK - Φ^2)^-1 -(I_LK - Φ^2)^-1Φ 0 ⋯ 0; -(I_LK - Φ^2)^-1Φ (I_LK - Φ^2)^-1(I_LK + Φ^2) -(I_LK - Φ^2)^-1Φ ⋯ 0; 0 -(I_LK - Φ^2)^-1Φ (I_LK - Φ^2)^-1(I_LK + Φ^2) ⋯ 0; ⋮ ⋮ ⋮ ⋱ ⋮; 0 0 0 ⋯ (I_LK - Φ^2)^-1; ].
The first terms is hence
E_q [logp(𝐲|β^*, 𝐳_0,T)p(β^* |λ)p(𝐳_0:T|μ, ϕ, Ω^k, Ω^l)/q(β^*,𝐳_0:T)]
= 𝐲'X𝐦-∑_i=1^NTϵ_iexp(𝐱'_i𝐦+1/2𝐱'_i M 𝐱_i) -1/2[𝐦-E_q(𝐦_0)]'E_q(M_0^-1)[𝐦-E_q(𝐦_0)]
-1/2tr[E_q(M_0^-1)M] + 1/2log|M| - 1/2tr[E_q(M_0^-1))COV_q(𝐦_0)] + 1/2E_q(log|M_0^-1|)
+ constant
Here, X is a two-block design matrix. The left block matrix consists of all regressors while the right block is the kronecker product I_LKT⊗1. The length of vector 1 relies on specific data configuration; in our case, it equals the product between number of age groups and gender. 𝐱_i is the column vector of i-th row of the design matrix X. The expected value E_q(𝐦_0)=(0', (1_T⊗μ^q)')' and E_q(M_0^-1) is
!E_q(M_0^-1)=[ 1/σ_β^2I 0 ⋯ 0 0; 0 λ_1^q,r S_1^r+λ_2^q,r S_2^r ⋯ 0 0; ⋮ ⋮ ⋱ 0 0; 0 0 0 λ_1^q,g,a S_1^g,a+λ_2^q,g,a S_2^g,a 0; 0 0 0 0 E_q[(I_T⊗ P')E_q(R)(I_T⊗ P)] ],
where
E_q( R ) =[ E^1_q( R ) E^2_q( R ) 0 ⋯ 0; E^2_q( R ) E^3_q( R ) E^2_q( R ) ⋯ 0; 0 E^2_q( R ) E^3_q( R ) ⋯ 0; ⋮ ⋮ ⋮ ⋱ ⋮; 0 0 0 ⋯ E^1_q( R ) ],
with diagonal matrices
E^1_q( R ) = diag(1/1-(ϕ^q)^2), E^2_q( R ) = diag(-ϕ^q/1-(ϕ^q)^2), E^3_q( R ) = diag(1+(ϕ^q)^2/1-(ϕ^q)^2).
Here we vectorize the function by writing its arguments in terms of vectors ϕ^q. The expectation is therefore
!E_q[(I_T⊗ P')E_q(R)(I_T⊗ P)] = [ E_q[P'E^1_q( R )P] E_q[P'E^2_q( R )P] 0 ⋯ 0; E_q[P'E^2_q( R )P] E_q[P'E^3_q( R )P] E_q[P'E^2_q( R )P] ⋯ 0; 0 E_q[P'E^2_q( R )P] E_q[P'E^3_q( R )P] ⋯ 0; ⋮ ⋮ ⋮ ⋱ ⋮; 0 0 0 ⋯ E_q[P'E^1_q( R )P] ].
Recall that P'P=Ω = Ω^k ⊗Ω^l, when the variational densities of precision matrices Ω^k and Ω^l are Wishart(δ^q,k, D^q,k) and Wishart(δ^q,l, D^q,l), the derived Cholesky upper triangular matrix P can be written as P = (A^kV^q,k) ⊗(A^lV^q,l) with (V^q,kA^k)'A^kV^q,k = Ω^k, (V^q,lA^l)'A^lV^q,l = Ω^l. V^q,k, V^q,l are Cholesky factors of D^q,k and D^q,l and
A^k = [ c^k_1 n^k_1,2 n^k_1,3 ⋯ n^k_1,K; 0 c^k_2 n^k_2,3 ⋯ n^k_2,K; 0 0 c^k_3 ⋯ n^k_3,K; ⋮ ⋮ ⋮ ⋱ ⋮; 0 0 0 ⋯ c^k_K ],
A^l = [ c^l_1 n^l_1,2 n^l_1,3 ⋯ n^l_1,K; 0 c^l_2 n^l_2,3 ⋯ n^l_2,K; 0 0 c^l_3 ⋯ n^l_3,K; ⋮ ⋮ ⋮ ⋱ ⋮; 0 0 0 ⋯ c^l_K ],
with c^k_i∼χ^2_δ^q,k-i+1, c^l_i∼χ^2_δ^q,l-i+1, n^k_i,j, n^l_i,j∼𝒩(0,1) independently. This is known as the Bartlett decomposition <cit.>. Thanks to the decomposition and diagonal assumption on Φ, we are able to work out the expectation of the quadratic forms. In the following, we demonstrate the details of deriving E_q[P'E^1_q( R )P]. E_q[P'E^2_q( R )P] and E_q[P'E^3_q( R )P] have similar structures.
E_q[P'E^1_q( R )P] = E_q[(V^q,k⊗ V^q,l)'(A^k⊗ A^l)'E^1_q( R )(A^k⊗ A^l)(V^q,k⊗ V^q,l)]
= (V^q,k⊗ V^q,l)'E_q[(A^k⊗ A^l)'E^1_q( R )(A^k⊗ A^l)](V^q,k⊗ V^q,l).
Now the problem simplifies to take the expectation of (A^k⊗ A^l)'E^1_q( R )(A^k⊗ A^l) with
A^k⊗ A^l = [ c^k_1 A^l n^k_1,2A^l n^k_1,3A^l ⋯ n^k_1,KA^l; 0 c^k_2 A^l n^k_2,3A^l ⋯ n^k_2,KA^l; 0 0 c^k_3 A^l ⋯ n^k_3,KA^l; ⋮ ⋮ ⋮ ⋱ ⋮; 0 0 0 ⋯ c^k_K A^l ].
Since c^k_i,n_i,j^k are independent, the expected value is a block diagonal matrix with diagonal entries E[(c_1^k)^2]E{(A^l)'[E^1_q( R )]_1:LA^l}, E[(n_1,2^k)^2]E{(A^l)'[E^1_q( R )]_1:LA^l} +E[(c_2^k)^2]E{(A^l)'[E^1_q( R )]_(L+1):2LA^l}, …, E[(n_1,K^k)^2]E{(A^l)'[E^1_q( R )]_1:LA^l} +⋯+
E[(c_K^k)^2]E{(A^l)'[E^1_q( R )]_((K-1)L+1):KLA^l}
where [E^1_q( R )]_1:L denotes the diagonal blocks indexed from 1 to L and so on. By the same reasoning, E{(A^l)'[E^1_q( R )]_1:LA^l} is a diagonal matrix whose diagonal entries are E[(c_1^l)^2][E^1_q( R )]_1, E[(n_1,2^l)^2][E^1_q( R )]_1+E[(c_2^l)^2][E^1_q( R )]_2, …, E[(n_1,L^l)^2][E^1_q( R )]_1+⋯+E[(c_L^l)^2][E^1_q( R )]_L. The same structure also holds for the remaining matrix expectations. The last two quantities in the expectation E_q [logp(𝐲|β^*, 𝐳_0,T)p(β^* |λ)p(𝐳_0:T|μ, ϕ, Ω^k, Ω^l)/q(β^*,𝐳_0:T)] are
COV_q(𝐦_0)= [ 0 0; 0 I_T ⊗diag[(σ^q)^2] ],
and
E_q(log|M_0^-1|) =
log|λ_1^q,rS_1^r+λ_2^q,rS_2^r| +
log|λ_1^q,aS_1^a+λ_2^aS_2^q,a| + log|λ_1^q,k,rS_1^k,r+λ_2^q,k,rS_2^k,r|
+ log|λ_1^q,k,aS_1^k,a+λ_2^q,k,aS_2^k,a| + log|λ_1^q,g,aS_1^g,a+λ_2^q,g,aS_2^g,a|
+(T-1)∑_i=1^LKlog(1/1-(ϕ_i^q)^2)+LT[log|D^q,k|+∑_k=1^K ψ(δ^q,k-k+1/2)]
+ KT[log|D^q,l|+∑_l=1^L ψ(δ^q,l-l+1/2)] + constant,
with ψ(·) representing the digamma function.
The remaining terms in the ELBO are
E_q(λ) [logp(λ)/q(λ)] = (α - 1)logλ^q - βλ^q + constant
E_q(μ) [logp(μ)/q(μ)] = -1/2σ_μ^2μ^q'μ^q - 1/2σ_μ^2∑(σ^q)^2
+ 1/2∑log[(σ^q)^2] + constant
E_q(ϕ) [logp(ϕ)/q(ϕ)] = ∑_i=1^LK (α_ϕ-1)log(1+ϕ_i^2) + (β_ϕ-1)log(1-ϕ_i^2) + constant
E_q(Ω^k) [logp(Ω^k)/q(Ω^k)] = δ^k/2log|D^q,k| + δ^k-δ^q,k/2∑_i=1^K ψ(δ^q,k-i+1/2) - δ^q,k/2θ^ktr(D^q,k)
+δ^q,kK/2+logΓ_K(δ^q,k/2) + constant
E_q(Ω^l) [logp(Ω^l)/q(Ω^l)] = δ^l/2log|D^q,l| + δ^l-δ^q,l/2∑_i=1^L ψ(δ^q,l-i+1/2) - δ^q,l/2θ^ltr(D^q,l)
+δ^q,lL/2+logΓ_L(δ^q,l/2) + constant
Now that we have formulated the ELBO, a coordinate ascent variational inference (CAVI) algorithm is devised and employed to maximize the target; that is we optimize the ELBO with respect to 𝐦, M, λ^q, ϕ^q, δ^q,k, D^q,k, δ^q,l, D^q,l sequentially while keeping the other parameters fixed. This is outlined in Algorithm <ref>.
§.§ optimizing with respect to m and M
The first order conditions of optimal 𝐦, M are
∂ELBO/∂𝐦 = X' 𝐲 - ∑_i=1^NTϵ_i exp(𝐱'_i𝐦+ 1/2𝐱'_i M𝐱_i)𝐱_i - E_q(M_0^-1)[𝐦-E_q(𝐦_0)]
∂ELBO/∂ M = 1/2{ -X'diag[ϵ_1exp(𝐱'_1𝐦+1/2𝐱'_1 M 𝐱_1), …,
ϵ_NTexp(𝐱'_NT𝐦+1/2𝐱'_NT M 𝐱_NT)]X
-E_q(M_0^-1)+M^-1}
We follow the numeric algorithm proposed by <cit.> and update 𝐦 using Newton's method, which requires the computation of Hessian matrix
H^ELBO_𝐦 = -X'diag [ϵ_1exp(𝐱'_1𝐦+1/2𝐱'_1 M 𝐱_1), …, .
. ϵ_NTexp(𝐱'_NT𝐦+1/2𝐱'_NT M 𝐱_NT)]X - E_q(M_0^-1),
and the new 𝐦^(h+1) at iteration h+1 is updated based on the previous value 𝐦^(h)
𝐦^(h+1) = 𝐦^(h) -(H^ELBO_𝐦)^-1∂ELBO/∂𝐦.
To obtain the optimal M, a fixed-point method is employed and at iteration h+1, it updates M according to
M^(h+1) = g(M^(h)) = (X'diag [ϵ_1exp(𝐱'_1𝐦+1/2𝐱'_1 M^(h)𝐱_1), …, .
. ϵ_NTexp(𝐱'_NT𝐦+1/2𝐱'_NT M^(h)𝐱_NT)]X +E_q(M_0^-1))^-1,
where M^(h) stands for the value at iteration h.
In practice, it is possible that the fixed-point iteration algorithm takes many iterations to converge or it oscillates between two values. Since to compute g(M^(h)) is the most expensive step in the whole algorithm due to the matrix inversion operation of the potentially high dimensional matrix, we wish to avoid such situations from happening, therefore we employ the Anderson acceleration technique as outlined in <cit.>. Essentially, the Anderson acceleration algorithm uses a combination of S previous iterates to form the next iterate, improving convergence rate. The pseudo code of the algorithm is given in Algorithm <ref>.
§.§ optimizing with respect to lambda
We optimize λ^q jointly through Newton's method as well. The steps to obtain first order conditions and Hessian matrices with respect to λ_1^q,r and λ_2^q,r are illustrated as examples and the rest of the smoothing parameters have similar formulas. The first order conditions of λ^q,r_1, λ^q,r_2 are
∂ELBO/∂λ_1^q,r = -1/2(𝐦^r)'S_1^r𝐦^r-1/2tr(S_1^rM^r)+1/2tr[(λ_1^q,rS_1^r+λ_2^q,rS_2^r)^-1S_1^r]+ α-1/λ_1^q,r - β,
∂ELBO/∂λ_2^q,r = -1/2(𝐦^r)'S_2^r𝐦^r-1/2tr(S_2^rM^r) +1/2tr[(λ_1^q,rS_1^r+λ_2^q,rS_2^r)^-1S_2^r]+ α-1/λ_2^q,r - β,
where 𝐦^r and M^r take corresponding entries in 𝐦 and M that are associated with the spline of stringency index. The Hessian matrix block associated with λ^q,r_1, λ^q,r_2 is
!H_λ_1^q,r,λ_2^q,r^ELBO = [ -1/2tr[(λ_1^q,rS_1^r+λ_2^q,rS_2^r)^-1S_1^r(λ_1^q,rS_1^r+λ_2^q,rS_2^r)^-1S_1^r] - α-1/(λ_1^q,r)^2 -1/2tr[(λ_1^q,rS_1^r+λ_2^q,rS_2^r)^-1S_1^r(λ_1^q,rS_1^r+λ_2^q,rS_2^r)^-1S_2^r]; -1/2tr[(λ_1^q,rS_1^r+λ_2^q,rS_2^r)^-1S_1^r(λ_1^q,rS_1^r+λ_2^q,rS_2^r)^-1S_2^r] -1/2tr[(λ_1^q,rS_1^r+λ_2^q,rS_2^r)^-1S_2^r(λ_1^q,rS_1^r+λ_2^q,rS_2^r)^-1S_2^r] - α-1/(λ_2^q,r)^2 ].
Similarly we can derive first order conditions and Hessian matrices with respect to λ_1^q,a, λ_2^q,a, λ_1^q,a, λ_2^q,a, λ_1^q,k,r, λ_2^q,k,r, λ_1^q,k,a, λ_2^q,k,a, λ_1^q,g,a, λ_2^q,g,a.
The joint updating of smoothing parameters λ^q at iteration h+1 is then
(λ^q)^(h+1) = (λ^q)^(h) - (H^ELBO_λ^q)^-1∂ELBO/∂λ^q.
Here H^ELBO_λ^q is a block diagonal matrix with block entries H_λ_1^q,r,λ_2^q,r^ELBO, H_λ_1^q,a,λ_2^q,a^ELBO,
H_λ_1^q,k,r,λ_2^q,k,r^ELBO,
H_λ_1^q,k,a,λ_2^q,k,a^ELBO and
H_λ_1^q,g,a,λ_2^q,g,a^ELBO. Note that λ^q is subject to positive constraints, therefore in each step, we apply a projected Newton's method with backtracking line search to find the optimal solution <cit.>.
§.§ optimizing with respect to mu and sigma
First order conditions with respect to μ_i^q, the i-th element of μ^q is
∂ELBO/∂μ_i^q = E_q(M_0^-1)[𝐦-E_q(𝐦_0)]∂E_q(𝐦_0)/∂μ_i^q-1/σ^2_μμ_i^q.
The second derivatives are
∂^2 ELBO/∂μ_i^q∂μ_j^q = (∂E_q(𝐦_0)/∂μ_i^q)'E_q(M_0^-1)∂E_q(𝐦_0)/∂μ_j^q-1/σ^2_μ
when i=j and
∂^2 ELBO/∂μ_i^q∂μ_j^q = (∂E_q(𝐦_0)/∂μ_i^q)'E_q(M_0^-1)∂E_q(𝐦_0)/∂μ_j^q
when i≠ j. Newton's method is then directly applicable to obtain optimal μ^q.
First order conditions to update (σ^q_i)^2, the i-th element of (σ^q)^2 are
∂ELBO/∂(σ^q_i)^2 = - 1/2tr[E_q(M_0^-1) COV_q(𝐦_0)/∂(σ^q_i)^2] - 1/2σ_μ^2
+ 1/2(σ^q_i)^2.
(σ^q)^2 can be updated by setting first order conditions equal to 0 and then solve the linear system. The positivity constraint is automatically satisfied here.
§.§ optimizing with respect to phi
To update a specific ϕ^q_i in the l-th position of sequence 1,…,L and k-th position of sequence 1,…,K, first note that the term containing ϕ^q_i in the expectation of quantity [(A^k)'⊗(A^l)']E^1_q( R )(A^k⊗ A^l) is
1/(1-(ϕ^q_i)^2)diag[(0,…,0,δ^q,k-k+1,1,…,1)'⊗(0,…,0,δ^q,l-l+1,1,…,1)'],
with l-th entry of vector (0,…,0,δ^q,k-k+1,1,…,1)' equal to δ^q,k-k+1 and k-th entry of vector (0,…,0,δ^q,l-l+1,1,…,1)' equal to δ^q,l-l+1. Similarly, in the expectation of [(A^k)'⊗(A^l)']E^2_q( R )(A^k⊗ A^l) and [(A^k)'⊗(A^l)']E^3_q( R )(A^k⊗ A^l), we change 1/(1-(ϕ^q_i)^2) to -ϕ^q_i/(1-(ϕ^q_i)^2) and (1+(ϕ^q_i)^2)/(1-(ϕ^q_i)^2) accordingly. The first order condition with respect to ϕ^q_i is therefore
∂ELBO/∂ϕ^q_i = -1/2tr{ (I_T ⊗ V^q,k⊗ V^q,l) [𝐦-E_q(𝐦_0)] [𝐦-E_q(𝐦_0)]'
(I_T ⊗ V^q,k⊗ V^q,l)'∂E^*(ϕ,δ^q,l, δ^q,k)/∂ϕ^q_i}
-1/2tr{ (I_T ⊗ V^q,k⊗ V^q,l)M(I_T ⊗ V^q,k⊗ V^q,l)'∂E^*(ϕ,δ^q,l, δ^q,k)/∂ϕ^q_i}
+ T-1/2(1/1-ϕ^q_i - 1/1+ϕ^q_i) + α_ϕ-1/1+ϕ^q_i - β_ϕ-1/1-ϕ^q_i
with E^*(ϕ,δ^q,l, δ^q,k) = E_q[(I_T ⊗ A^k⊗ A^l)'E_q( R )(I_T ⊗ A^k⊗ A^l) ].
We can further derive the second order condition
∂^2 ELBO/∂ (ϕ^q_i)^2 = -1/2tr{ (I_T ⊗ V^q,k⊗ V^q,l) [𝐦-E_q(𝐦_0)] [𝐦-E_q(𝐦_0)]'
(I_T ⊗ V^q,k⊗ V^q,l)'∂^2 E^*(ϕ,δ^q,l, δ^q,k)/∂ (ϕ^q_i)^2}
-1/2tr{ (I_T ⊗ V^q,k⊗ V^q,l)M(I_T ⊗ V^q,k⊗ V^q,l)'∂^2 E^*(ϕ,δ^q,l, δ^q,k)/∂ (ϕ^q_i)^2}
+ T-1/2(1/(1-ϕ^q_i)^2 + 1/(1+ϕ^q_i)^2) - α_ϕ-1/(1+ϕ^q_i)^2 - β_ϕ-1/(1-ϕ^q_i)^2
Newton's method is then applied to update ϕ^q_i for i=1,…,LK sequentially. If the number LK is large in the application, ϕ^q_i can also be updated in parallel as the second order condition involves only ϕ^q_i.
§.§ optimizing with respect to deltal and Vl
δ^q,l and V^q,l are updated jointly. First order conditions are
∂ELBO/∂δ^q,l = -1/2tr{ (I_T ⊗ V^q,k⊗ V^q,l) [𝐦-E_q(𝐦_0)] [𝐦-E_q(𝐦_0)]'
(I_T ⊗ V^q,k⊗ V^q,l)' ∂E^*(ϕ,δ^q,l, δ^q,k)/∂δ^q,l}
-1/2tr{ (I_T ⊗ V^q,k⊗ V^q,l)M(I_T ⊗ V^q,k⊗ V^q,l)'∂E^*(ϕ,δ^q,l, δ^q,k)/∂δ^q,l}
+KT/4 ∑_l=1^L ψ_1(δ^q,l-l+1/2) + δ^l - δ^q,l/4∑_l=1^Lψ_1(δ^q,l-l+1/2)-1/2θ^ltr(D^q,l) + L/2
for degree of freedom δ^q,l,
∂ELBO/∂ V^q,l_i,i = -tr{ [𝐦-E_q(𝐦_0)] [𝐦-E_q(𝐦_0)]' (I_T ⊗ V^q,k⊗ V^q,l)'
E^*(ϕ,δ^q,l, δ^q,k)∂(I_T ⊗ V^q,k⊗ V^q,l)/∂ V^q,l_i,i}
-tr{ M (I_T ⊗ V^q,k⊗ V^q,l)'E^*(ϕ,δ^q,l, δ^q,k)∂(I_T ⊗ V^q,k⊗ V^q,l)/∂ V^q,l_i,i}
+ KT+δ^l/V^q,l_i,i - δ^q,lV^q,l_i,i/θ^l
for diagonal elements V^q,l_i,i, and
∂ELBO/∂ V^q,l_i,j = -tr{ [𝐦-E_q(𝐦_0)] [𝐦-E_q(𝐦_0)]' (I_T ⊗ V^q,k⊗ V^q,l)'
E^*(ϕ,δ^q,l, δ^q,k)∂(I_T ⊗ V^q,k⊗ V^q,l)/∂ V^q,l_i,j}
-tr{ M (I_T ⊗ V^q,k⊗ V^q,l)'E^*(ϕ,δ^q,l, δ^q,k)∂(I_T ⊗ V^q,k⊗ V^q,l)/∂ V^q,l_i,j}-δ^q,lV^q,l_i,j/θ^l,
for off-diagonal elements V^q,l_i,j with i<j.
In the Hessian matrix, the second derivatives are
∂^2 ELBO/(∂δ^q,l)^2 = KT/8∑_l=1^L ψ_2(δ^q,l-l+1/2) - 1/4∑_l=1^L ψ_1(δ^q,l-l+1/2)+δ^l - δ^q,l/8∑_l=1^L ψ_2(δ^q,l-l+1/2),
∂^2 ELBO/(∂ V^q,l_i,i)^2 = -tr{ [𝐦-E_q(𝐦_0)] [𝐦-E_q(𝐦_0)]' (I_T ⊗ V^q,k⊗ V^q,l)'/∂ V^q,l_i,i
E^*(ϕ,δ^q,l, δ^q,k)∂(I_T ⊗ V^q,k⊗ V^q,l)/∂ V^q,l_i,i}
-tr{ M (I_T ⊗ V^q,k⊗ V^q,l)'/∂ V^q,l_i,iE^*(ϕ,δ^q,l, δ^q,k)∂(I_T ⊗ V^q,k⊗ V^q,l)/∂ V^q,l_i,i}-KT+δ^l/(V^q,l_i,i)^2 - δ^q,l/θ^l,
and
∂^2 ELBO/(∂ V^q,l_i,j)^2 = -tr{ [𝐦-E_q(𝐦_0)] [𝐦-E_q(𝐦_0)]' (I_T ⊗ V^q,k⊗ V^q,l)'/∂ V^q,l_i,j
E^*(ϕ,δ^q,l, δ^q,k)∂(I_T ⊗ V^q,k⊗ V^q,l)/∂ V^q,l_i,j}
-tr{ M (I_T ⊗ V^q,k⊗ V^q,l)'/∂ V^q,l_i,jE^*(ϕ,δ^q,l, δ^q,k)∂(I_T ⊗ V^q,k⊗ V^q,l)/∂ V^q,l_i,j}- δ^q,l/θ^l.
Off-diagonal entries in the Hessian matrix are
∂^2 ELBO/∂δ^q,l∂ V^q,l_i,j = -tr{ E^*(ϕ,δ^q,l, δ^q,k)/∂δ^q,l[𝐦-E_q(𝐦_0)] [𝐦-E_q(𝐦_0)]'
(I_T ⊗ V^q,k⊗ V^q,l)'∂(I_T ⊗ V^q,k⊗ V^q,l)/∂ V^q,l_i,j}
-tr{ E^*(ϕ,δ^q,l, δ^q,k)/∂δ^q,lM (I_T ⊗ V^q,k⊗ V^q,l)'∂(I_T ⊗ V^q,k⊗ V^q,l)/∂ V^q,l_i,j}- V^q,l_i,j/θ^l,
with i≤ j, and
∂^2 ELBO/∂ V^q,l_i_1,j_1∂ V^q,l_i_2,j_2 = -tr{ [𝐦-E_q(𝐦_0)] [𝐦-E_q(𝐦_0)]' ∂(I_T ⊗ V^q,k⊗ V^q,l)'/∂ V^q,l_i_2,j_2
E^*(ϕ,δ^q,l, δ^q,k)∂(I_T ⊗ V^q,k⊗ V^q,l)/∂ V^q,l_i_1,j_1}
-tr{ [𝐦-E_q(𝐦_0)] [𝐦-E_q(𝐦_0)]' (I_T ⊗ V^q,k⊗ V^q,l)'
E^*(ϕ,δ^q,l, δ^q,k)∂^2 (I_T ⊗ V^q,k⊗ V^q,l)/∂ V^q,l_i_1,j_1∂ V^q,l_i_2,j_2}
-tr{ M ∂(I_T ⊗ V^q,k⊗ V^q,l)'/∂ V^q,l_i_2,j_2E^*(ϕ,δ^q,l, δ^q,k)∂(I_T ⊗ V^q,k⊗ V^q,l)/∂ V^q,l_i_1,j_1}
-tr{ M (I_T ⊗ V^q,k⊗ V^q,l)'E^*(ϕ,δ^q,l, δ^q,k)∂^2 (I_T ⊗ V^q,k⊗ V^q,l)/∂ V^q,l_i_1,j_1∂ V^q,l_i_2,j_2}
for i_1≤ j_1, i_2≤ j_2, i_1≠ i_2,j_1≠ j_2.
By definition δ^q,l>L-1 and the diagonal entries are real positive numbers. The constrained optimization problem is again solved using projected method with backtracking line search algorithm. The Newton's method has the merit to converge faster than linear rate to the optimal solution when the current value is in the neighborhood of the solution, however, when the current value is far away from the optimal point, the performance is less satisfying. Therefore, we perform both Newton's method and Cauchy's method in each iteration. The Cauchy's method simply uses the gradient to construct the updating rule and it has superior convergence radius compared to Newton's method.
§.§ optimizing with respect to deltak and Vk
The procedure to update δ^q,k and V^q,k is essentially identical to the one outlined in Section <ref>. The optimization is performed using a hybrid of Cauchy and Newton's methods while the constraints are satisfied with projected method with backtracking line search. Below, we formulate first order conditions and the Hessian matrix.
∂ELBO/∂δ^q,k = -1/2tr{ (I_T ⊗ V^q,k⊗ V^q,l) [𝐦-E_q(𝐦_0)] [𝐦-E_q(𝐦_0)]'
(I_T ⊗ V^q,k⊗ V^q,l)' ∂E^*(ϕ,δ^q,l, δ^q,k)/∂δ^q,k}
-1/2tr{ (I_T ⊗ V^q,k⊗ V^q,l)M(I_T ⊗ V^q,k⊗ V^q,l)'∂E^*(ϕ,δ^q,l, δ^q,k)/∂δ^q,k}
+LT/4 ∑_k=1^K ψ_1(δ^q,k-k+1/2) + δ^k - δ^q,k/4∑_k=1^Kψ_1(δ^q,k-k+1/2)-1/2θ^ktr(D^q,k) + K/2
for degree of freedom δ^q,k,
∂ELBO/∂ V^q,k_i,i = -tr{ [𝐦-E_q(𝐦_0)] [𝐦-E_q(𝐦_0)]' (I_T ⊗ V^q,k⊗ V^q,k)'
E^*(ϕ,δ^q,l, δ^q,k)∂(I_T ⊗ V^q,k⊗ V^q,l)/∂ V^q,l_i,i}
-tr{ M (I_T ⊗ V^q,k⊗ V^q,l)'E^*(ϕ,δ^q,l, δ^q,k)∂(I_T ⊗ V^q,k⊗ V^q,l)/∂ V^q,k_i,i}
+ LT+δ^k/V^q,k_i,i - δ^q,kV^q,k_i,i/θ^k
for diagonal elements V^q,k_i,i, and
∂ELBO/∂ V^q,k_i,j = -tr{ [𝐦-E_q(𝐦_0)] [𝐦-E_q(𝐦_0)]' (I_T ⊗ V^q,k⊗ V^q,l)'
E^*(ϕ,δ^q,l, δ^q,k)∂(I_T ⊗ V^q,k⊗ V^q,l)/∂ V^q,k_i,j}
-tr{ M (I_T ⊗ V^q,k⊗ V^q,l)'E^*(ϕ,δ^q,l, δ^q,k)∂(I_T ⊗ V^q,k⊗ V^q,l)/∂ V^q,k_i,j}-δ^q,kV^q,k_i,j/θ^k,
for off-diagonal elements V^q,k_i,j with i<j.
In the Hessian matrix, the second derivatives are
∂^2 ELBO/(∂δ^q,k)^2 = LT/8∑_k=1^K ψ_2(δ^q,k-k+1/2) - 1/4∑_k=1^K ψ_1(δ^q,k-k+1/2)+δ^k - δ^q,k/8∑_k=1^K ψ_2(δ^q,k-k+1/2),
∂^2 ELBO/(∂ V^q,k_i,i)^2 = -tr{ [𝐦-E_q(𝐦_0)] [𝐦-E_q(𝐦_0)]' (I_T ⊗ V^q,k⊗ V^q,l)'/∂ V^q,k_i,i
E^*(ϕ,δ^q,l, δ^q,k)∂(I_T ⊗ V^q,k⊗ V^q,l)/∂ V^q,k_i,i}
-tr{ M (I_T ⊗ V^q,k⊗ V^q,l)'/∂ V^q,k_i,iE^*(ϕ,δ^q,l, δ^q,k)∂(I_T ⊗ V^q,k⊗ V^q,l)/∂ V^q,k_i,i}-LT+δ^k/(V^q,k_i,i)^2 - δ^q,k/θ^k,
and
∂^2 ELBO/(∂ V^q,k_i,j)^2 = -tr{ [𝐦-E_q(𝐦_0)] [𝐦-E_q(𝐦_0)]' (I_T ⊗ V^q,k⊗ V^q,l)'/∂ V^q,k_i,j
E^*(ϕ,δ^q,l, δ^q,k)∂(I_T ⊗ V^q,k⊗ V^q,l)/∂ V^q,k_i,j}
-tr{ M (I_T ⊗ V^q,k⊗ V^q,l)'/∂ V^q,k_i,jE^*(ϕ,δ^q,l, δ^q,k)∂(I_T ⊗ V^q,k⊗ V^q,l)/∂ V^q,k_i,j}- δ^q,k/θ^k.
Off-diagonal entries in the Hessian matrix are
∂^2 ELBO/∂δ^q,k∂ V^q,k_i,j = -tr{ E^*(ϕ,δ^q,l, δ^q,k)/∂δ^q,k[𝐦-E_q(𝐦_0)] [𝐦-E_q(𝐦_0)]'
(I_T ⊗ V^q,k⊗ V^q,l)'∂(I_T ⊗ V^q,k⊗ V^q,l)/∂ V^q,k_i,j}
-tr{ E^*(ϕ,δ^q,l, δ^q,k)/∂δ^q,kM (I_T ⊗ V^q,k⊗ V^q,l)'∂(I_T ⊗ V^q,k⊗ V^q,l)/∂ V^q,k_i,j}- V^q,k_i,j/θ^k,
with i≤ j, and
∂^2 ELBO/∂ V^q,k_i_1,j_1∂ V^q,k_i_2,j_2 = -tr{ [𝐦-E_q(𝐦_0)] [𝐦-E_q(𝐦_0)]' ∂(I_T ⊗ V^q,k⊗ V^q,l)'/∂ V^q,k_i_2,j_2
E^*(ϕ,δ^q,l, δ^q,k)∂(I_T ⊗ V^q,k⊗ V^q,l)/∂ V^q,k_i_1,j_1}
-tr{ [𝐦-E_q(𝐦_0)] [𝐦-E_q(𝐦_0)]' (I_T ⊗ V^q,k⊗ V^q,l)'
E^*(ϕ,δ^q,l, δ^q,k)∂^2 (I_T ⊗ V^q,k⊗ V^q,l)/∂ V^q,k_i_1,j_1∂ V^q,k_i_2,j_2}
-tr{ M ∂(I_T ⊗ V^q,k⊗ V^q,l)'/∂ V^q,k_i_2,j_2E^*(ϕ,δ^q,l, δ^q,k)∂(I_T ⊗ V^q,k⊗ V^q,l)/∂ V^q,k_i_1,j_1}
-tr{ M (I_T ⊗ V^q,k⊗ V^q,l)'E^*(ϕ,δ^q,l, δ^q,k)∂^2 (I_T ⊗ V^q,k⊗ V^q,l)/∂ V^q,k_i_1,j_1∂ V^q,k_i_2,j_2}
for i_1≤ j_1, i_2≤ j_2, i_1≠ i_2,j_1≠ j_2.
§ ITALIAN MORTALITY ANALYSIS
To analyze non-linear relationship between mortality rates and covariates as well as evolving mortality trends of COVID-19 and other causes before and during the pandemic, we applied our method to the provisional monthly mortality data from Italy. The data, spanning from January 2015 to December 2020 (a total of T=72 months), includes declarations of 18 different causes of death reported by physicians for all deaths in Italy. These causes of death are listed in Table <ref>. Additionally, the death counts are compiled across 420 categories, derived from combining 10 age groups, 2 genders, and L=21 Italian regions, which together with mortality causes, gives 7,560 samples. In total, there are 544,320 observations. A detailed description of this mortality data is available at https://www.istat.it/it/archivio/240401. In our analysis, we set K=17 and exclude cause No.14 complications of pregnancy, childbirth and the puerperium. The reasons for discarding this category are the following. First, death counts are almost all 0's (30,165 out of 30,240); second, estimation with this category is unstable and the convergence rate of the CAVI algorithm drastically slows down; third, excluding one category does not affect the inference on other causes of death. Consequently, we examine N=7,140 time series Y_n,t for n=1,…,N, t=1,…,T. In terms of non-Gaussian state space models, the dimension of observations is 7,140 while the dimension of latent states is 357. Due to high-dimensional structure of the data and the model, conventional sampling algorithms are not applicable, which motivates to approximate the posterior distribution in a variational approach.
We have briefly described covariates included in the model in Section <ref> when formulating the GAM component. Now we elaborate more on one key variable, the Italian Stringency Index (ISI) developed by <cit.>, analogous to the Oxford Stringency Index (OSI) by <cit.>, which measures non-pharmaceutical interventions implemented in Italy to combat COVID-19. The index tracks both national and regional intervention intensity and it is ideal for studying mortality rate in regional level. We focus on non-linear interactions f^k,r(k, r) between ISI and different causes of death, acknowledging the potential varying impacts of the pandemic on other mortality causes as highlighted in existing studies. As for the offsets ϵ_n,t, we consider days per month, monthly aggregated COVID-19 cases by region, and regional population figures for other causes of death. Notably, for external causes like trauma and poisoning, we incorporate a mobility index from the Google COVID-19 Community Mobility Reports (Google LLC "Google COVID-19 Community Mobility Reports". https://www.google.com/covid19/mobility/), to model mortality rate changes per mobility unit. Parameters in the prior distributions are
α_λ = 1, β_λ = 1000, α_ϕ = 10, β_ϕ = 10,
δ^k = K = 17, δ^l = L, θ^k = K-2 = 15, θ^l = L-2 = 19, σ^2_β = 10,
σ^2_μ = 1. To check whether the proposed CAVI method achieves convergence, we start the algorithm from different initialization. The trajectories of ELBO are shown in Figure <ref>; an optimal ELBO value of 6,564,738 is reached within reasonable steps despite that the initial values are randomly generated.
We first discuss the non-linear relationship between mortality rates and ISI. Figure <ref> shows that GAMs predict similar smoothing patterns for the following mortality causes, endocrine, nutritional and metabolic diseases in Figure <ref>, psychic and behavioral disorders in Figure <ref>, diseases of the nervous system and sense organs in Figure <ref>, diseases of the circulatory system in Figure <ref>, diseases of the respiratory system in Figure <ref>, and external causes of trauma and poisoning in Figure <ref>. When the Italian government implemented loose intervention measures, the mortality rates went down as the intervention became more strict until intervention reached certain median-to-low level (ISI between 20 and 40), the mortality rates took a reverse trend and went up as more strict policies were carried out. In general, early and mild interventions like social distancing and improved hygiene practices not only reduced the spread of COVID-19 but also other infectious diseases, which could benefit individuals with a wide range of health conditions. The initial stages of the pandemic also led to increased health awareness and changes to more regular lifestyle. However, these benefits subsided as healthcare resources were increasingly diverted to treat COVID-19 patients, individuals with other health conditions might have faced delayed or reduced access to necessary health care. More specifically, for some individuals with psychic and behavioral disorders, when the intervention measures were mild, staying at home might have initially reduced stressors such as workplace pressures or social anxiety, potentially benefiting mental health, however, the potential benefits were replaced by increased isolation and loneliness, heightened anxiety and stress as well as substance abuse when situation deteriorated and higher lockdown levels were enforced. As for diseases of the circulatory system and diseases of the respiratory system, initial reduction of mortality rates at low ISI levels could also be attributed to factors like reduction in air pollution, reduced exposure to allergens, less physical strain from commuting as a result of early intervention procedures. Lastly, we comment on external causes of trauma and poisoning. In the early stages of the pandemic, there was rising public vigilance and a focus on safety, which might have extended beyond COVID-19 precautions to general safety practices, potentially reducing accidents. Meanwhile, people were adapting to new routines which might have included safer practices at home, reducing the risk of domestic accidents or poisonings. When the stringency measures prolonged, people started to be impacted by mental health issues as well as social and economic hardship, which could potentially contribute to the upward trend.
Another common non-linear pattern of predicted mortality rates by ISI shared among some mortality causes is an overall downward trend. The mortality causes exhibiting such behaviors are some infectious and parasitic diseases in Figure <ref>, tumors in Figure <ref>, diseases of the digestive system in Figure <ref>, diseases of the skin and sub-cutaneous tissue in Figure <ref> and congenital malformations and chromosomal anomalies in Figure <ref>. For these causes, benefits of social distancing due to high stringency levels dominates factors such as disrupted health care access, delayed diagnoses and treatments. We also note that credible intervals are wider when ISI is close to 100, suggesting distinct mortality patterns appearing in different regions and age groups.
We selected to display three representative mortality patterns between age and causes of death in Figure <ref>. Almost all mortality causes show the same feature that the mortality increases with age, similar to the pattern of some infectious and parasitic diseases shown in Figure <ref>. The mortality rate of COVID-19, however, peaked around slightly over 80 years old. Although there have been studies pointing out that age is a positive predictor of the morality rate <cit.>, their stratification usually sets 80 years old and above as a group. The age groups in the data set that we analyse are more refined with 4 separate age groups for age 80+. Our result offers new insights on the relationship between COVID-19 mortality and age.
As for temporal evolving z_n,t^* in the Poisson rates, we focus on three aspects. The approximated posterior distribution of μ indicates the average mortality levels of each cause of death in each region unexplained by covariates in the model. From Figure <ref>, we can see that both tumors in Figure <ref> and diseases of the circulatory system in Figure <ref> contribute most to death counts whereas some morbid conditions that originate in the perinatal period shown in Figure <ref> have the lowest mortality rate on average. Figure <ref> also reveals certain geographical disparity. For instance, Piedmont, Liguria, Apulia and Abruzzo are the regions mostly affected by COVID-19 according to Figure <ref>, however this conclusion is in contradiction with the observation that regions such as Lombardy, Veneto, and Campania experienced many cases and deaths. Recall that μ measures unexplained mortality rate per 1,000 cases since we include COVID-19 cases in offset ϵ_n,t, the discrepancy could be attributed to factors including healthcare system quality, timing of the outbreak, testing capacity and so on. For example, regions with less comprehensive testing may report fewer mild or asymptomatic cases, resulting in a higher mortality rate, as the total cases are underestimated. Nevertheless, to answer this question, we need access to more covariates. Certain regional gap is more in line with conventional knowledge. Figure <ref> shows that endocrine, nutritional and metabolic diseases are more deadly in the south as southern regions have higher rates of obesity and diabetes. Temperature is another risk factor behind the phenomenon.
We now discuss the other two aspects associated with z^*_n,t, the dependence structure captured by (V^q,l)'V^q,l for the scale matrix D^q,l of the variational Wishart density and the correlation captured by (V^q,k)'V^q,k for the scale matrix D^q,k. Since D^q,l contains spatial information that controls the dynamic of z^*_n,t, we compute the partial correlation matrix, that is the normalized inverse, of D^q,l and then set the entries in the inverse matrix whose absolute value are below 0.1 to 0 while replace the remaining entries with 1. Figure <ref> maps the resulting adjacency matrix into Italy where red edges between two regions stand for 1's in the adjacency matrix, indicating a direct statistical relationship between them that is not explained by their relationships with other regions. There edges form four clusters; northern Italian regions are connected, and Lombardy appears to be the central hub for disease spread with many edges connected to it, possibly due to higher mobility or being a major transportation center. The remaining three clusters lie in the center and the southern Italy. The behavior indicates that regions inside each cluster might experience similar trends in mortality rates due to migration patterns, economic ties, climate conditions, or population behaviors. On the other hand, the absence of an edge between two regions implies conditional independence, meaning that once the influence of all other regions is controlled for, there is no direct statistical relationship between these two regions. This is the case with most regions.
Lastly, we turn to the correlation structure among various mortality causes. Figure <ref> shows strong positive correlation inside the following 5 causes of death, endocrine, nutritional and metabolic diseases, psychic and behavioral disorders, diseases of the nervous system and sense organs, diseases of the circulatory system, and diseases of the respiratory system. In terms of the relationship between COVID-19 and other causes of death, we only observe weak negative relationship between COVID-19 and symptoms, signs, abnormal results and ill-defined causes, which suggests that improved detection of COVID-19 death likely reduced the number of deaths classified under ill-defined causes. The lack of correlation between between COVID-19 and other causes of death indicates that after controlling government intervention intensity during the pandemic, COVID-19 itself does not have strong influence on moralities in other death categories.
§ SUMMARY AND FUTURE WORK
We develop an effiecient variational inference procedure for Bayesian dynamic GAMs where the dependent variable follows a Poisson distribution. The GAM component in the Poisson rate captures non-linear relationship between the outcome and covariates while the dynamics in the rate has a state space representation. The latent states in the model are assumed to be stationary with covariance matrix defined using a Kronecker product that disentangles one large covariance matrix into smaller covariance matrices. High-dimensional setting is considered, which prohibits the use of exact sampling algorithms, therefore we adopt variational algorithm to achieve fast convergence. The approach is applied to Italian mortality data to facilitate our understanding of mortality pattern changes during the COVID-19 outbreak.
Even though we concentrate on explanatory power of the model, it is potentially suitable for forecasting as the state space component models dynamics in mortality patterns. Another extension makes more sophisticated assumptions on the covariance matrix of the latent states. For instance, it is interesting to impose hierarchical G-Wishart distribution prior instead of Wishart prior on the precision matrices so that conditional dependence structure can be directly inferred from the model. Alternatively, the covariance matrix can be time-varying, allowing for more flexibility to account for changes in the dependence structure, especially before the COVID-19 pandemic and after the pandemic. We leave these discussion to future work.
apalike
|
http://arxiv.org/abs/2409.02185v1 | 20240903180011 | Radio signatures of cosmic-ray showers with deep in-ice antennas | [
"Simon Chiche",
"Nicolas Moller",
"Abby Bishop",
"Simon de Kockere",
"Krijn D. de Vries",
"Uzair Latif",
"Simona Toscano"
] | astro-ph.HE | [
"astro-ph.HE",
"astro-ph.IM"
] |
Impact Evaluations in Data Poor Settings: The Case of Stress-Tolerant Rice Varieties in BangladeshCorrespondence to mailto:[email protected]@arizona.edu. A pre-analysis plan for this research has been filed with Open Science Framework (OSF): https://doi.org/10.17605/OSF.IO/YE7PVhttps://osf.io/ye7pv. We gratefully acknowledge funding from the Standing Panel on Impact Assessment (SPIA), the Bill and Melinda Gates Foundation (BMGF), and the CGIAR Research Program on Rice. We are especially grateful to Aileen Maunahan, Jorrel Aunario, Pavan Yeggina, and Renaud Mathieu for their work on the early stages of EO data generation, model building, and creating training data sets. We also greatly appreciate the work on Donald Villanueva and Humnath Bhandari in the 2022 data collection effort as well as Donald, Rose San Valentin, and Rowell Dikitanan for initial data cleaning and database construction. This paper has been shaped by conversations with participants at the Center for Environmental Economics and Sustainability Policy (CEESP) seminar at Arizona State University, the 7^th African Conference of Agricultural Economists in Durban, the 6^th International Rice Congress in Manila, and the the 32^nd International Conference of Agricultural Economists in New Delhi. An earlier version of this paper was presented at the AAEA annual meeting in Anaheim.
[
August 2024
=================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
§ INTRODUCTION
In-ice radio detection of high-energy astroparticles is a promising technique to detect the first ultra-high-energy neutrinos (E>10^17 eV) and open a new astronomical window, thanks to the gigantic effective volume it can instrument. The method was pioneered by experiments such as RICE <cit.>, ANITA <cit.> and ARIANNA <cit.>, that probed its feasibility while setting limits on the neutrino flux. The interaction of a neutrino with the ice creates a cascade of secondary particles resulting in a radio emission that can reach antennas located either at the ice's surface or buried deep in the ice. ARA <cit.> and RNO-G <cit.> in particular, are two in-ice experiments with close detection concepts that rely on strings of antennas buried at various depths in the ice to detect UHE neutrino. While RNO-G is still under construction, it will build on the knowledge gained from previous in-ice experiments, feature more stations than ARA, and combine both surface and deep antennas, making it one of the most sensitive next-generation UHE neutrino detectors. Yet, in-ice experiments should also detect the radio emission from cosmic-ray air showers, which can propagate into the ice and reach the deep antennas, acting as one of the main backgrounds for neutrino searches. The cosmic-ray flux is much larger than the neutrino flux, which implies that: (1) cosmic-ray detection should be more readily attainable to neutrino detection and could thus help calibrate in-ice radio detectors and validate their detection principle, (2) neutrino/cosmic-ray discrimination is needed to ensure a successful neutrino detection. In this work, we use the FAERIE Monte-Carlo code <cit.> that simulates the radio emission from cosmic-ray air showers for in-ice observers to characterize this emission and identify preliminary cosmic-ray signatures.
§ RADIO EMISSION FROM COSMIC-RAY AIR SHOWERS FOR IN-ICE OBSERVERS
A typical sketch of the radio emission from a cosmic-ray air shower seen by an in-ice observer is displayed in Fig. <ref>. The emission can be divided into two parts. First, the emission from the in-air cascade: as the shower develops, radio waves emitted in the air propagate without attenuation, a part of this emission is transmitted to the ice and can reach the deep antennas. Second, the emission from the in-ice cascade: if the shower is energetic enough, some particles can penetrate the ice and induce a secondary particle cascade emitting radio waves that can also reach the deep antennas. For the emission from the in-air cascade, we expect that both geomagnetic and charge-excess mechanisms contribute to the radio signal. However, for the in-ice cascade only a charge-excess emission is expected due to the higher density of the medium in which the shower develops. This yields different polarization signatures between the emissions from the in-air and the in-ice cascade. The geomagnetic emission is linearly polarized in the 𝐯 × 𝐁 direction, where 𝐯 is the direction of propagation of the shower and 𝐁 is the direction of the local Earth magnetic field. On the other hand the charge-excess emission is radially polarized in a plane perpendicular to the shower propagation axis <cit.>.
The relative contribution between the in-air and the in-ice emission is highly dependent on the shower inclination (zenith angle). For vertical showers, the shower maximum X_ max, is reached close to the ice surface and we expect that many energetic particles can penetrate the ice and contribute to the in-ice emission. On the opposite, for inclined showers, the propagation distance between X_ max and the ground is large, hence most particles lose their energy before reaching the ice which yields a higher in-air/in-ice ratio.
Additionally, the Cerenkov cone, which defines the region where the electric field amplitude is highest, is observed at different angles for in-air and in-ice emissions. The Cerenkov angle is given by θ_c = 1/arccos(n β), where n is the index of refraction of the medium and where β = v/c, with the v the velocity of the shower particles. Hence, we expect a larger Cerenkov angle for the in-ice emission, with θ_c^ air∼ 1^∘ and θ_c^ ice∼ 50^∘. Eventually, both the emission from the in-air and the in-ice cascades should be bent while propagating in the ice, due to the rapidly varying density of the medium.
§ LIBRARY OF COSMIC-RAY SHOWERS
To study in-ice radio signatures of cosmic-ray air showers we use the FAERIE simulation code. In this section we discuss the different steps to set up the simulations and build a library of cosmic-ray events.
§.§ FAERIE simulation code
FAERIE, the Framework for the simulation of Air shower Emission of Radio for in-Ice Experiments <cit.>, is a numerical tool that combines CORSIKA 7.7500 <cit.> and GEANT4 <cit.> Monte-Carlo codes to simulate both the cosmic-ray induced in-air and in-ice cascades respectively. The radio emission is then generated using the CoREAS radio extension <cit.> for the in-air cascade, while a code from the T-510 experiment is used for the in-ice cascade <cit.>. In both cases, the computation of the radio emission relies on the endpoint formalism <cit.>, which derives the emission from any single charge in the particle cascades using straight-line propagation of rays. For a charged particle at a position x moving along a track at a speed β^⋆ = v^⋆/c during a time Δ t, the electric field seen by an observer at a distance R reads <cit.>
E_±(x, t) = ±1/Δ tq/c(r× [r×β^⋆]/|1-n β^⋆·r|R) ,
where the plus/minus sign is applied to the start/end point of the track, q is the particle charge, n the medium refractive index and r the unit vector that goes from the emission point to the observer.
Eventually, to account for the changing refractive index profile in both air and ice, as well as the transition of the radiation from air to ice, FAERIE relies on ray-tracing and the endpoint formalism needs to be adapted. One of the main modifications being that in the ice the variable R in Eq. <ref> must be re-interpreted as the geometrical path length connecting the emitter and the receiver. This length can be derived with ray-tracing using Snell's law n_1sini_i = n_2sini_2, which links the incident and refracted angles i_1,2 of a given ray at the transition boundary between two medium with refractive indices n_1,2. For a given point-like emitter and receiver we find two solutions, a direct and a reflected path, as shown in Fig. <ref>.
§.§ Simulation setup
Using the FAERIE simulation code we then build a library of cosmic-ray events. To propagate the radio emission we first need to define an ice-model describing the evolution of the ice refractive index as a function of depth. The ice density can usually be well modeled using an exponential profile following n(z) = A - Bexp-C|z|, with A, B, C, being three fit parameters and |z| the depth. For the South Pole ice we use A =1.775, B =0.43, C =0.0132 <cit.>. For Greenland however, the ice profile is such that at a depth of |z| = 14.9 m the ice reaches a critical density and becomes more compact so using a double exponential profile is more suitable. Hence we find A = 1.775, B =0.5019, C =0.03247 for |z| < 14.9 m and A = 1.775, B =0.448023, C =0.02469 for |z| > 14.9 m <cit.>.
For the antennas layout we consider several square grids of antennas at depths following the position of the antennas along ARA's strings and RNO-G's main triggering string (power line). For ARA we consider 12 depths for each grid, evenly spaced between 145 m and 200 m. For RNO-G we consider 5 depths [0, 40, 60, 80, 100] m. To build our cosmic-ray library we generate showers for 4 energies E = [10^16.5, 10^17, 10^17.5, 10^18] eV, one azimuth angle φ = 0^∘ (shower propagating towards magnetic north) and 9 zenith angles evenly spaced in cosθ, θ = [0^∘, 10^∘, 20^∘, 28^∘, 34^∘, 39^∘, 43^∘, 47^∘, 50^∘].
§ SIMULATION RESULTS
Once the simulation inputs are set, we generate the library of cosmic-ray showers and discuss some of the simulation results below.
§.§ Electric field maps
In Fig. <ref>, we show the electric field maps simulated with FAERIE for an antenna layer at a depth |z| = 40 m, corresponding to a shower with zenith angle θ = 0^∘ (top plots) and a shower with zenith angle θ = 50^∘ at the same depth (bottom plots). For the top plots we separate between the emission from the in-air cascade (left-hand panel) and the one from the in-ice cascade (right-hand panel). For the in-air emission, we retrieve a bean-shaped pattern as expected from the interference between the charge-excess and the geomagnetic emission. On the other hand, for the in-ice emission the radio signal is focused along an annulus-shaped region corresponding to the Cerenkov angle of the in-ice cascade. It should be noted that though the Cerenkov angle of the in-ice emission is larger than the one from the in-air emission, at a depth of |z|=40 m both in-air and in-ice emissions are spread over a similar area, as the in-air emission point is located much further away from the antenna grid.
For the bottom plots (zenith angle θ = 50^∘),the left-hand plot shows a bean-shaped pattern for the in-air emission, similar to the previous case. However, this emission is spread over a larger area compared to the vertical shower, as inclined showers propagate through the atmosphere over longer distances than vertical ones and have an ellipsoidal projection on the ground. Finally, the bottom right-hand plot displays the coherent sum of the in-air and in-ice emissions at a zenith angle of θ = 50 ^∘. The resulting electric field map is very similar to that of the in-air emission alone. As mentioned in Section <ref>, for inclined showers, only a few energetic particles reach the ground, so that the expected contribution to the total radio emission from the in-ice cascade is negligible.
§.§ Cosmic-ray signatures
Using FAERIE simulations, we can then identify cosmic-ray signatures that would help us to design a discriminant between cosmic-rays and neutrinos. In Fig. <ref> we show the cosmic-ray induced emission measured by surface antennas, i.e., antennas located at the top of the ice sheet, as used in RNO-G stations. Since neutrino showers usually develop deep in the ice, no neutrino signal is expected at the surface antennas which should therefore act as a veto for cosmic-rays. This means that, the cosmic-ray induced emission detected by the surface antennas should come solely from the in-air cascade. In the left-hand panel of Fig. <ref>, we show that the electric field map of the total radio emission displays a bean-shaped pattern, as expected for an in-air emission. In the right-hand panel of Fig. <ref>, we also show that the polarisation vector at the antenna level points dominantly towards the -𝐯 × 𝐁 direction, as expected from a dominant geomagnetic emission.
We can also use the electric field time traces to search for cosmic-ray signatures. In the left-hand panel of Fig. <ref> we show a time trace at a given antenna with a typical double pulse signature. This feature is expected if both the in-air and the in-ice emission reach the same antenna and thus could be used to identify cosmic-ray induced showers. In the right-hand panel of Fig. <ref>, we also show the time trace of two antennas along the same line of sight, one antenna at a depth of 0 m (blue pulse) and the other at a depth of 100 m (orange pulse), considering X_ max as emission point. We can see that the radio emission arrives earlier at the surface antenna and with a higher amplitude since this antenna is closer to X_ max. Additionally, we could link the delay between the two pulses to the path length of the radio emission between the antennas. Approximating this path length by the depth difference between the antennas, we get Δ t ∼ 100 m × n_ ice /c ∼ 470 ns if we assume n_ ice =1.4. These signatures should further help us to identify the cosmic-ray induced emission seen by in-ice observers.
§ CONCLUSION
Using the FAERIE Monte-Carlo code we simulated the radio emission induced by cosmic-ray showers deep in the ice, as targeted by experiments such as ARA and RNO-G. We studied the dependency of this emission with the shower geometry and, using raw traces, we identified preliminary cosmic-ray signatures based on the signal spatial distribution, polarization and timing. The successful detection of the first cosmic-ray event with deep antennas would be a breakthrough, as it would provide the first proof of concept for detecting particle cascades in nature using the in-ice radio technique. Such a detection would also allow in-ice experiments to calibrate their detectors and the identification of cosmic-ray signatures will be crucial to ensure the design of a cosmic-ray/neutrino discriminant.
99
Kockere_2024 S.D. Kockere et al, Physical Review D1100230102024.
Kravchenko_2003I. Kravchenko et al, Astroparticle Physics1915–362003.
Gorham_2009P.W. Gorham et al, Astroparticle Physics3210–412009.
Gorham_2019P.W. Gorham et al, Physical Review D992019.
Gorham_2021P.W. Gorham et al, Physical Review Letters1262021.
AriannaS. W. Barwick et al, arxivhttps://arxiv.org/abs/1410.73692014.
Anker_2019A. Anker et al, Advances in Space Research642595–26092019.
Miller_2012T. Miller et al, Icarus220877–8882012.
Allison_2016P. Allison et al, Physical Review D932016.
Aguilar_2021J.A. Aguilar et al, Journal of Instrumentation16P030252021.
Schr_der_2017F. Schroeder, Progress in Particle and Nuclear Physics931–682017.
Heck:1998vtD. Heck et al, CORSIKA: a Monte Carlo code to simulate extensive air showers1998.
ALLISON2016186J. Allison et al, Nuclear Instruments and Methods in Physics Research835186-2252016.
HuegeCOREAST. Huege et al, AIP Conference Proceedings1535128-1322013.
T510K. Bechtol et al, Phys. Rev. D1050630252022.
PhysRevE.84.056602C.W. James et al, Phys. Rev. E840566022011.
Kelley:2017wsJ. Kelley et al, PoSICRC201710302017.
RNOiceC. Deaconu et al, Physical Review D980430102018.
|
http://arxiv.org/abs/2409.03734v1 | 20240905174501 | Safety vs. Performance: How Multi-Objective Learning Reduces Barriers to Market Entry | [
"Meena Jagadeesan",
"Michael I. Jordan",
"Jacob Steinhardt"
] | cs.LG | [
"cs.LG",
"cs.CY",
"econ.GN",
"q-fin.EC",
"stat.ML"
] |
: Assessing Contextual Integrity Norms in Language Models
Yan Shvartzshnaider
York University
Vasisht Duddu
University of Waterloo
John Lacalamita
York University
=======================================================================================================================
§ ABSTRACT
Emerging marketplaces for large language models and other large-scale machine learning (ML) models appear to exhibit market concentration, which has raised concerns about whether there are insurmountable barriers to entry in such markets. In this work, we study this issue from both an economic and an algorithmic point of view, focusing on a phenomenon that reduces barriers to entry. Specifically, an incumbent company risks reputational damage unless its model is sufficiently aligned with safety objectives, whereas a new company can more easily avoid reputational damage. To study this issue formally, we define a multi-objective high-dimensional regression framework that captures reputational damage, and we characterize the number of data points that a new company needs to enter the market. Our results demonstrate how multi-objective considerations can fundamentally reduce barriers to entry—the required number of data points can be significantly smaller than the incumbent company's dataset size. En route to proving these results, we develop scaling laws for high-dimensional linear regression in multi-objective environments, showing that the scaling rate becomes slower when the dataset size is large, which could be of independent interest.
§ INTRODUCTION
Large language models and other large-scale machine learning (ML) models have led to an important shift in the information technology landscape, one which has significant economic consequences. Whereas earlier generations of ML models provided the underpinnings for platforms and services, new models—such as language models—are themselves the service. This has led to new markets where companies offer language models as their service and compete for user usage. As in other markets, it is important to reason about market competitiveness: in particular, to what extent there are barriers to entry for new companies.
A widespread concern about these markets is that new companies face insurmountable barriers to entry that drive market concentration <cit.>. The typical argument is that incumbent companies with high market share can purchase or capture significant amounts of data and compute,[Large companies can afford these resources since the marketplace is an economy of scale (i.e., fixed costs of training significantly exceed per-query inference costs). They also generate high volumes of data from user interactions.] and then invest these resources into the training of models that achieve even higher performance <cit.>. This suggests that the company's market share would further increase, and that the scale and scope of this phenomenon would place incumbent companies beyond the reach of new companies trying to enter the market. The scale is in fact massive—language assistants such as ChatGPT and Gemini each have hundreds of millions of users <cit.>. In light of the concerns raised by policymakers <cit.> and regulators <cit.> regarding market concentration, it is important to investigate the underlying economic and algorithmic mechanisms at play.
While standard arguments assume that market share is determined by model performance, the reality is that the incumbent company risks reputational damage if their model violates safety-oriented objectives. For example, incumbent companies face public and regulatory scrutiny for their model's safety violations—such as threatening behavior <cit.>, jailbreaks <cit.>, and releasing dangerous information <cit.>—even when the model performs well in terms of helpfulness and usefulness to users. In contrast, new companies face less regulatory scrutiny since compliance requirements often prioritize models trained with more resources <cit.>, and new companies also may face less public scrutiny given their smaller user bases.
In this work, we use a multi-objective learning framework to show that the threat of reputational damage faced by the incumbent company can reduce barriers to entry. For the incumbent, the possibility of reputational damage creates pressure to align with safety objectives in addition to optimizing for performance. Safety and performance are not fully aligned, so improving safety can reduce performance as a side effect. Meanwhile, the new company faces less of a risk of reputational damage from safety violations. The new company can thus enter the marketplace with significantly less data than the incumbent company, a phenomenon that our model and results formalize.
Model and results.
We analyze a stylized marketplace based on multi-objective linear regression (Section <ref>). The performance-optimal output and the safety-optimal output are specified by two different linear functions of the input x. The marketplace consists of two companies: an incumbent company and a new company attempting to enter the market. Each company receives their own unlabelled training dataset, decides what fraction of training data points to label according to the performance-optimal vs. safety-optimal outputs, and then runs ridge regression. The new company requires a less stringent level of safety to avoid reputational damage than the incumbent company. We characterize the market-entry threshold (Definition <ref>) which captures how much data the new company needs to outperform the incumbent company.
First, as a warmup, we characterize when the new company faces no safety constraint and the incumbent company has infinitely many data points (Section <ref>). Our key finding is that the new company can enter the market with finite data, even when the incumbent company has infinite data (Theorem <ref>; Figure <ref>). Specifically, we show that the threshold is finite; moreover, it is increasing in the correlation (i.e., the alignment) between performance and safety, and it is decreasing in a problem-specific scaling law exponent.
Next, we turn to more general environments where the incumbent has finite data < ∞ (Section <ref>). We find that the threshold scales sublinearly with the incumbent's dataset size , as long as is sufficiently large. In fact, the threshold scales at a slower rate as increases: that is, = Θ(^c) where the exponent c is decreasing in (Theorem <ref>; Figure <ref>). For example, for concrete parameter settings motivated by language models <cit.>, the exponent c decreases from 1 to 0.75 to 0 as increases. In general, the exponent c takes on up to three different values depending on , and is strictly smaller than 1 as long as is sufficiently large.
Finally, we turn to environments where the new company also faces a nontrivial safety constraint, assuming for simplicity that the incumbent company again has infinite data (Section <ref>). We find that is finite as long as the new company faces a strictly weaker safety constraint than the incumbent. When the two safety thresholds are closer together, the new company needs more data and in fact needs to scale up their dataset size at a faster rate: that is, = Θ(D^-c), where D measures the difference between the safety thresholds and where the exponent c increases as D decreases
(Theorem <ref>; Figure <ref>). For the parameter settings in <cit.>, the exponent c changes from -2.94 to -3.94 to an even larger value as D decreases. In general, the exponent c takes on up to three different values.
Technical tool: Scaling laws.
To prove our results, we derive scaling laws for multi-objective high-dimensional linear regression, which could be of independent interest (Section <ref>; Figure <ref>). We study optimally-regularized ridge regression where some of the training data is labelled according to the primary linear objective (capturing performance) and the rest is labelled according to an alternate linear objective (capturing safety).
We characterize data-scaling laws for both the loss along the primary objective and the excess loss along the primary objective relative to an infinite-data ridgeless regression. Our scaling laws quantify the rate at which the loss (Theorem <ref>; Figure <ref>) and the excess loss (Theorem <ref>; Figure <ref>) decay with the dataset size N, and how this rate is affected by the fraction of data labelled according to each objective and other problem-specific quantities. Our analysis improves upon recent works on scaling in multi-objective environments <cit.> by allowing for non-identity covariances and problem-specific regularization, which leads to new insights about scaling laws as we describe below.
Our results reveal that the scaling rate becomes slower as the dataset size increases, illustrating that multi-objective scaling laws behave qualitatively differently from classical single-objective environments. While a typical scaling exponent in a single-objective environment takes on a single value across all settings of N, the scaling exponent for multi-objective environments decreases as N increases. In particular, the scaling exponent takes on three different values depending on the size of N relative to problem-specific parameters.
The intuition is that the regularizer must be carefully tuned to N in order to avoid overfitting to training data labelled according to the alternate objective, which in turn results in the scaling exponent being dependent on N (Section <ref>).
Discussion.
Altogether, our work highlights the importance of looking beyond model performance when evaluating market entry in machine learning marketplaces. Our results highlight a disconnect between market entry in single-objective environments versus more realistic multi-objective environments. More broadly, a company's susceptibility to reputational damage affects how they train their model to balance between different objectives. As we discuss in Section <ref>, these insights have nuanced implications for regulators who wish to promote both market competitiveness and safety compliance, and also generalize beyond language models to online platforms.
§.§ Related work
Our work connects to research threads on competition between model providers as well as scaling laws and high-dimensional linear regression.
Competition between model providers. Our work contributes to an emerging line of work studying how competing model providers strategically design their machine learning pipelines to attract users. Model-provider actions range from choosing a function from a model class <cit.>, to selecting a regularization parameter <cit.>, to choosing an error distribution over user losses <cit.>, to making data purchase decisions <cit.>, to deciding whether to share data <cit.>, to selecting a bandit algorithm <cit.>. While these works assume that model providers win users solely by maximizing (individual-level or population-level) accuracy, our framework incorporates the role of safety violations in impacting user retention implicitly via reputational damage. Moreover, our focus is on quantifying the barriers to market entry, rather than analyzing user welfare or the equilibrium decisions of model providers.
Other related work includes the study of competition between algorithms <cit.>, retraining dynamics under user participation decisions <cit.>, the bargaining game between a foundation model company and a specialist <cit.>, and the market power of an algorithmic platform to shape user populations <cit.>.
Our work also relates to platform competition <cit.>, the emerging area of competition policy and regulation of digital marketplaces <cit.>, the study of how antitrust policy impacts innovation in classical markets <cit.>, and industrial organization more broadly <cit.>. For example, recent work examines how increased public scrutiny from inclusion in the S&P 500 can harm firm performance <cit.>, how privacy regulation impacts firm competition <cit.>, how regulatory inspections affect incentives to comply with safety constraints <cit.>, and how data-driven network effects can reduce innovation <cit.>.
Scaling laws and high-dimensional linear regression. Our work also contributes to an emerging line of work on scaling laws which study how model performance changes with training resources. Empirical studies have demonstrated that increases to scale often reliably improve model performance <cit.>, but have also identified settings where scaling behavior is more nuanced <cit.>. We build on a recent mathematical characterization of scaling laws based on high-dimensional linear regression
<cit.>. However, while these works focus on single-objective environments where all of the training data is labelled with outputs from a single predictor, we consider multi-objective environments where some fraction of the training data is labelled according to an alternate predictor.
We note that a handful of recent works similarly move beyond single-objective environments and study scaling laws where the training data comes a mixture of different data sources. <cit.> study high-dimensional ridge regression in a similar multi-objective environment to our setup. However, these results assume an identity covariance and focus on fixed regularization or no regularization. In contrast, we allow for richer covariance matrices that satisfy natural power scaling (Section <ref>), and we analyze optimally tuned regularization. Our analysis of these problem settings yields new insights about scaling behavior: for example, the scaling rate becomes slower with dataset size (Theorems <ref>-<ref>). Other related works study scaling laws under mixtures of covariate distributions
<cit.>, under data-quality heterogeneity <cit.>, under data addition <cit.>, under mixtures of AI-generated data and real data <cit.>, and with respect to the contribution of individual data points <cit.>.
More broadly, our work relates to collaborative learning <cit.>, federated learning <cit.>, optimizing data mixtures <cit.>, and adversarial robustness <cit.>.
Finally, our work relates to non-monotone scaling laws in strategic environments <cit.>, where increases to scale can worsen equilibrium social welfare.
§ MODEL
We define our linear-regression-based marketplace (Section <ref>), justify the design choices of our model (Section <ref>), and then delineate our statistical assumptions (Section <ref>).
§.§ Linear regression-based marketplace
We consider a marketplace where two companies fit linear regression models in a multi-objective environment.
Linear regression model. To formalize each company's machine learning pipeline, we consider the multi-objective, high-dimensional linear regression model described below. This multi-objective environment aims to capture how ML models are often trained to balance multiple objectives which are in tension with each other, and we consider linear regression since it has often accurately predicted scaling trends of large-scale machine learning models (see Section <ref> for additional discussion).
More concretely, given an input x ∈ℝ^P, let ⟨β_1, x⟩ be the output that targets performance maximization, and let ⟨β_2, x⟩ be the output that targets safety maximization. Given a linear predictor β, the performance loss is evaluated via a population loss, () = 𝔼_x ∼[(⟨, x⟩ - ⟨β, x⟩)^2], and the safety violation is captured by a loss () = 𝔼_x ∼[(⟨, x⟩ - ⟨β, x⟩)^2], where is the input distribution.
The model provider implicitly determines how to balance β_1 and β_2 when determining how to label their training dataset. In particular, each model provider is given an unlabelled training dataset X ∈ℝ^N × P with N inputs drawn from . To generate labels, they select the fraction α∈ [0,1] of training data to label according to each objective. They then sample a fraction of the training data uniformly from X and label it as Y_i = ⟨β_1, X_i⟩; the remaining 1 - fraction is labelled as Y_i = ⟨β_2, X_i⟩. The model provider fits a ridge regression on the labelled training dataset with least-squares loss ℓ(y, y') = (y-y')^2, and thus solves: (, , X) = _β(1/N∑_i=1^N (Y_i - ⟨β, X_i ⟩)^2 + ||β||^2_2 ).
Marketplace.
The marketplace contains two companies, an incumbent company I already in the market and a new (entrant) company E trying to enter the market. At a high level, each company C ∈{I, E}
faces reputational damage if their safety violation exceeds their safety constraint τ_C. Each company company C is given N_C unlabelled data points sampled from , and selects a mixture parameter _C and regularizer _C to maximize their performance given their safety constraint τ_C. We assume that the incumbent company I faces a stricter safety constraint, <, due to increased public or regulatory scrutiny (see Section <ref> for additional discussion).
When formalizing how the model providers choose hyperparameters, we make the following simplications. First, rather than work directly with the performance and safety losses of the ridge regression estimator, we assume for analytic tractability that they approximate these losses by ^* := ^*(β_1, β_2, , , N, α) and ^* := ^*(β_1, β_2, , α) defined as follows.
* Performance: We define ^* to be a deterministic equivalent ^(β_1, β_2, Σ, , N, α) which we derive in Lemma <ref>. The deterministic equivalent <cit.> is a tool from random matrix theory that is closely linked to the Marčenko-Pastur law <cit.>. Under standard random matrix assumptions (Assumption <ref>), the deterministic equivalent asymptotically approximates the loss L_1(β̂(α, λ, X)) when X is constructed from N i.i.d. samples from (see Appendix <ref> for additional discussion).
* Safety: For analytic simplicity, in the main body of the paper, we define ^* to be the safety violation of the infinite-data ridgeless regression estimator with mixture parameter α.[The infinite-data ridgeless regression estimator is
_β(α·𝔼_x ∼[⟨β - β_1, x⟩^2] + (1-α)·𝔼_x ∼[⟨β - β_2, x⟩^2] ). For this specification, the dataset size N and the regularization parameter only affect ^* and not ^*, which simplifies our analysis in Sections <ref>-<ref> and enables us to obtain tight characterizations.] In Appendix <ref>, we instead define ^* analogously to ^*—i.e., as a deterministic equivalent L_2^det(β_1, β_2, , λ, N, α)—and extend our model and results to this more complex setting.[We directly extend our results in Section <ref>, and we also show relaxed versions of our results in Section <ref>.]
Second, we assume that (β_1, β_2) ∼ for some joint distribution and that the model providers take expectations when choosing hyperparameters, since it will be easier to specify assumptions in Section <ref> over distributions of predictors.
Within this setup, a company C faces reputational damage if the safety violation exceeds a certain threshold:
𝔼_(β_1, β_2) ∼ [L_2^*(β_1, β_2, , α_C)] > τ_C.
We assume that the safety thresholds for the two companies satisfy the following inequalities:
>_(A)≥_(B)𝔼_(β_1, β_2) ∼[^*(β_1, β_2, , 0.5)].
Here, inequality (A) captures the notion that the incumbent needs to achieve higher safety to avoid reputational damage. Inequality (B) guarantees that both companies, C ∈{I, E }, can set the mixture parameter _C ≥ 0.5 without facing reputational damage, and thus ensures that the safety constraint does not dominate the company's optimization task.[More specifically, inequality (B) ensures that the safety constraint still allows both companies to label 50% of their training data according to the performance-optimal outputs.]
The company selects ∈ [0.5,1] and ∈ (0,1) to maximize their performance subject to their safety constraint, as formalized by the following optimization program:[Technically, the optimum might be achieved at = 0 or = 1, and the min should be replaced by a inf.]
(_C, _C) = _∈ [0.5, 1], ∈ (0,1)𝔼_[^*(β_1, β_2, , , N_C, α)] s.t. 𝔼_[^*(β_1, β_2, , α)] ≤τ_C.
Market-entry threshold. We define the market-entry threshold to capture the minimum number of data points that the new company needs to collect to achieve better performance than the incumbent company while avoiding reputational damage.
The market-entry threshold (, , , , )
is the minimum value of ∈ℤ_≥ 1 such that 𝔼_[^*(β_1, β_2, , , , )] ≤𝔼_[^*(β_1, β_2, , , , )].
The goal of our work is to analyze the function (,, , , ).
§.§ Model discussion
Now that we have formalized our statistical model, we discuss and justify our design choices in greater detail. We defer a discussion of limitations to Section <ref>.
Presence of competing objectives. Our multi-objective formulation is motivated by how ML models are often trained to balance multiple objectives which are in tension with each other. In some cases, the pretraining objective is in tension with the finetuning objective <cit.>. For example, the fine-tuning of a language model to be more aligned with user intent can degrade performance—e.g., because the model hedges too much—which creates an “alignment tax” <cit.>. In other cases, fine-tuning approaches themselves balance multiple objectives such as helpfulness (which can be mapped to performance in our model) and harmlessness (which can be mapped to safety in our model) <cit.>. These objectives can be in tension with one another, for example if the user asks for dangerous information.
High-dimensional linear regression as a statistical model. We focus on high-dimensional linear regression due to its ability to capture scaling trends observed in large-scale machine learning models such as language models, while still retaining analytic tractability. In particular, in single-objective environments, scaling trends for high-dimensional linear regression recover the empirically observed power-law scaling of the loss with respect to the dataset size <cit.>. Moreover, from an analytic perspective, the structural properties of high-dimensional linear regression make it possible to characterize the loss using random matrix machinery (see Appendix <ref>).
Impact of market position on model provider constraint τ. Our assumption that > (inequality (A) in (<ref>)) is motivated by how large companies face greater reputational damage from safety violations than smaller companies. One driver of this unevenness in reputational damage is regulation: for example, recent regulation and policy <cit.> places stricter requirements on companies that use significant amounts of compute during training. In particular, these companies face more stringent compliance requirements in terms of safety assessments and post-deployment monitoring. Another driver of uneven reputational damage is public perception: we expect that the public is more likely to uncover safety violations for large companies, due to the large volume of user queries to the model. In contrast, for small companies, safety violations may be undetected or subject to less public scrutiny.
§.§ Assumptions on linear regression problem
To simplify our characterization of scaling trends, we follow prior work on high-dimensional linear regression <cit.> and make the following empirically motivated power-law assumptions. Let Σ = 𝔼_x ∼[xx^T] be the covariance matrix, and let λ_i and v_i be the eigenvalues and eigenvectors, respectively. We require the eigenvalues to decay with scaling exponent γ > 0 according to λ_i = i^-1-γ for 1 ≤ i ≤ P.
For the alignment coefficients ⟨β_j, v_i⟩, it is cleaner to enforce power scaling assumptions in expectation, so that we can more easily define a correlation parameter. We require that for some δ > 0, the alignment coefficients satisfy 𝔼_[⟨β_j, v_i ⟩^2] = i^-δ, where v_i is the ith eigenvector of Σ, for j ∈{1,2} and 1 ≤ i ≤ P. We also introduce a similar condition on the joint alignment coefficients, requiring that for some ρ∈ [0,1), it holds that 𝔼_[⟨β_1, v_i ⟩⟨β_2, v_i ⟩ ] = ρ· i^-δ. Finally, we assume an overparameterized limit where the number of parameters P →∞ approaches infinity. Below, we provide an example which satisfies these assumptions.
Suppose that the covariance Σ is a diagonal matrix with diagonal given by λ_i = i^-1-γ. Let the joint distribution over β_1 and β_2 be a multivariate Gaussian such that:
𝔼_[ (β_j_1)_i_1 (β_j_2)_i_2] =
0 if 1 ≤ j_1, j_2 ≤ 2, 1 ≤ i_1 ≠ i_2 ≤ P
i_1 ^-δ if 1 ≤ j_1 = j_2 ≤ 2, 1 ≤ i_1 = i_2 ≤ P
ρ· i_1 ^-δ if 1 ≤ j_1 ≠ j_2 ≤ 2, 1 ≤ i_1 = i_2 ≤ P.
This implies that 𝔼_[⟨β_j, v_i ⟩^2] = i^-δ and 𝔼_[⟨β_1, v_i ⟩⟨β_2, v_i ⟩ ] = ρ· i^-δ.
We adopt the random matrix theory assumptions on the covariance matrix and linear predictors from <cit.> (see Assumption <ref> in Appendix <ref>), which guarantee that the Marčenko-Pastur law holds <cit.>. That is, the covariance (Σ̂ + λ I)^-1 of the samples can be approximated by a deterministic quantity (see Appendix <ref> for a more detailed discussion). We leverage this Marčenko-Pastur law to derive a deterministic equivalent L_1^ for the performance loss L_1(β̂(α, λ, X)) of the ridge regression estimator (Lemma <ref>).
§ WARM UP: INFINITE-DATA INCUMBENT AND UNCONSTRAINED ENTRANT
As a warmup, we analyze the market entry threshold in a simplified environment where the incumbent has infinite data and the new company faces no safety constraint. In this result, we place standard power-law scaling assumptions on the covariance and alignment coefficients (Section <ref>) and we characterize the threshold up to constants (Theorem <ref>; Figure <ref>).
Suppose that power-law scaling holds for the eigenvalues and alignment coefficients, with scaling exponents γ, δ > 0 and correlation coefficient ρ∈ [0,1), and suppose that P = ∞.
Suppose that the incumbent company has infinite data (i.e., = ∞), and that the entrant faces no constraint on their safety (i.e., = ∞). Suppose that the safety constraint satisfies (<ref>). Then, it holds that:[Throughout the paper, we allow Θ() and O() to hide implicit constant which depends on the scaling exponents γ, δ.]
(∞, , ∞, , ) = Θ((√(L^*(ρ)) - √(min(, L^*(ρ))))^-2/),
where L^*(ρ) = 𝔼_[(β_1 - β_2)^T Σ (β_1 - β_2)] = Θ(1 - ρ), and where := min(2(1+γ), δ + γ).
The intuition is as follows. The safety constraint forces the incumbent company to partially align their predictor with the safety objective β_2. Since β_1 and β_2 point in different directions, this reduces the performance of the incumbent along β_1 as a side effect, resulting in strictly positive loss with respect to performance. On the other hand, since the new company faces no safety constraint, the new company can optimize entirely for performance along β_1. This means that the new company can enter the market as long as their finite data error is bounded by the incumbent's performance loss. We formalize this intuition in the following proof sketch.
The incumbent chooses the infinite-data ridgeless estimator β(α, 0) with mixture parameter α∈ [0,1] tuned so the safety violation is (Lemma <ref>). The resulting performance loss is √(L^*(ρ)) - √(min(, L^*(ρ))). Since the new company has no safety constraint, they choose the single-objective ridge regression estimator where α = 1 and where is chosen optimally.[We formally rule out the possibility that α≠ 1 using our multi-objective scaling law in Theorem <ref>.] Theorem <ref> (or alternatively,
existing analyses of high-dimensional linear regression <cit.>) demonstrate the loss follows a scaling law of the form inf_ > 0 L_1(β̂(1, , X)) = Θ(
N^-) where :=min(2(1+γ), δ +γ). The full proof is in Appendix <ref>.
Theorem <ref> reveals that the market-entry threshold is finite as long as (1) the safety constraint places nontrivial restrictions on the incumbent company and (2) the safety and performance objectives are not perfectly correlated. This result captures the notion that the new company can enter the market even after the incumbent company has accumulated an infinite amount of data.
Theorem <ref> further illustrates how the market-entry threshold changes with other parameters (Figure <ref>). When safety and performance objectives are more correlated (i.e., when ρ is higher), the market-entry threshold increases, which increases barriers to entry. When the safety constraint for the incumbent is weaker (i.e., when is higher), the market-entry threshold also increases. Finally, when the power scaling parameters of the covariance and alignment coefficients increase, which increases the scaling law exponent , the market-entry threshold decreases.
§ GENERALIZED ANALYSIS OF THE MARKET-ENTRY THRESHOLD
To obtain a more fine-grained characterization of the market-entry threshold, we now consider more general environments. Our key technical tool is multi-objective scaling laws, which capture the performance of ridge regression in high-dimensional, multi-objective environments with finite data (Section <ref>). Using these scaling laws, we characterize the market-entry threshold when the incumbent has finite data (Section <ref>) and when the new company has a safety constraint (Section <ref>).
Our results in this section uncover the following conceptual insights about market entry. First, our main finding from Section <ref>—that the new company can enter the market with significantly less data than the incumbent—applies to these generalized environments. Moreover, our characterizations of exhibit a power-law-like dependence with respect to the incumbent's dataset size (Theorem <ref>) and the difference in safety requirement for the two companies (Theorem <ref>). Interestingly, the scaling exponent c is not a constant across the full regime and instead takes on up to three different values. As a consequence, the new company can afford to scale up their dataset at a slower rate as the incumbent's dataset size increases, but needs to scale up their dataset at a faster rate as the two safety constraints become closer together. Proofs are deferred to Appendix <ref>.
§.§ Technical tool: Scaling laws in multi-objective environments
In this section, we give an overview of multi-objective scaling laws (see Section <ref> for a more formal treatment and derivations). Our scaling laws capture how the ridge regression loss L_1(β̂(α, λ, X)) along the primary objective β_1 scales with the dataset size N, when the regularizer is optimally tuned to both N and problem-specified parameters. We show scaling laws for both the loss inf_∈ (0, 1)𝔼[L_1(β̂(α, λ, X))] and the excess loss inf_∈ (0, 1) (𝔼[L_1(β̂(α, λ, X)) - L_1(β(α, 0))]) where β(α, 0) is the infinite-data ridgeless regression estimator.
Scaling law for the loss.
We first describe the scaling law for inf_∈ (0, 1)𝔼[L_1(β̂(α, λ, X))] (Theorem <ref>; Figure <ref>).
Suppose that the power-law scaling assumptions from Section <ref> hold with exponents γ, δ > 0 and correlation coefficient ρ∈ [0,1). Suppose also that P = ∞ and α≥ 0.5. Then, a deterministic equivalent for the expected loss under optimal regularization inf_∈ (0, 1)𝔼[L_1(β̂(α, λ, X))] scales according to N^-^*, where the scaling exponent ^* is defined to be:
^* =
if N ≤ (1-α)^-1/(1-ρ)^-1/
/ + 1 if (1-α)^-1/(1-ρ)^-1/≤ N ≤ (1-α)^-2+/ (1-ρ)^-1/
0 if N ≥ (1-α)^-2+/ (1-ρ)^-1/,
for := min(2(1+γ), δ+γ).
Theorem <ref> (Figure <ref>) illustrates that the scaling rate becomes slower as the dataset size N increases. In particular, while the scaling exponent in single-objective environments is captured by a single value, Theorem <ref> illustrates that the scaling exponent ^* in multi-objective environments takes on three different values, depending on the size of N relative to other parameters.
When N is small (the first regime), the scaling exponent ^* = is identical to that of the single-objective environment given by β_1. When N is a bit larger (the second regime), the scaling exponent reduces to ^* = /( + 1) <. To make this concrete, if we take = 0.34 to be an empirically estimated scaling law exponent for language models <cit.>, this would mean that ^* ≈ 0.34 in the first regime and ^* ≈ 0.25 in the second regime. Finally, when N is sufficiently large (the third regime), the scaling exponent reduces all the way to ^* = 0 and the only benefit of additional data is to improve constants on the loss.
Scaling law for the excess loss.
We next turn to the excess loss, inf_∈ (0, 1) (𝔼[L_1(β̂(α, λ, X)) - L_1(β(α, 0))]), which is normalized by the loss of the infinite-data ridgeless predictor β(α, 0). We show that the excess loss exhibits the same scaling behavior as the loss when N is sufficiently small, but exhibits different behavior when N is sufficiently large (Theorem <ref>; Figure <ref>).
Suppose that the power-law scaling assumptions from Section <ref> hold with exponents γ, δ > 0 and correlation coefficient ρ∈ [0, 1). Suppose also that P = ∞ and α≥ 0.75. Then, a deterministic equivalent for the expected loss under optimal regularization inf_∈ (0, 1) (𝔼[L_1(β̂(α, λ, X)) - L_1(β(α, 0))]) scales according to N^-^*, where the scaling exponent ^* is defined to be:
^* =
if N ≤ (1-α)^-1/(1-ρ)^-1/
/ + 1 if (1-α)^-1/(1-ρ)^-1/≤ N ≤ (1-α)^-'+1/ - '(1-ρ)^-'+1/ - '
'/' + 1 if N ≥ (1-α)^-'+1/ - '(1-ρ)^-'+1/ - ',
for := min(2(1+γ), δ+γ) and ' := min(1+γ, δ+γ).
Theorem <ref> (Figure <ref>) again shows that the scaling rate can become slower as the dataset size N increases, and again reveals three regimes of scaling behavior. While the first two regimes of Theorem <ref> resemble the first two regimes of Theorem <ref>, the third regime of Theorem <ref> (where N ≥ (1-α)^-'+1/ - '(1-ρ)^-'+1/ - ') behaves differently. In this regime, the scaling exponent for the excess loss is '/' + 1, rather than zero—this captures the fact that additional data can nontrivially improve the excess loss even in this regime, even though it only improves the loss up to constants. In terms of the magnitude of the scaling exponent '/' + 1, it is strictly smaller than the scaling exponent ν/ν +1 when δ > 1 and equal to the scaling exponent ν/ν +1 when δ≤ 1.
§.§ Finite data for the incumbent
We compute when the incumbent has finite data and the new company has no safety constraint (Theorem <ref>; Figure <ref>). The market-entry threshold depends on the incumbent's dataset size , the incumbent's performance loss G_I if they were to have infinite data but face the same safety constraint, the scaling exponents γ, δ, and the correlation coefficient ρ.
Suppose that the power-law scaling holds for the eigenvalues and alignment coefficients with scaling exponents γ, δ > 0 and correlation coefficient ρ∈ [0, 1), and suppose that P = ∞. Assume that = ∞. Suppose that the safety constraint satisfies (<ref>). Then we have that = (, , ∞, , ) satisfies:
:=
Θ() if ≤ G_I^-1/2 (1-ρ)^-1/2
Θ(^1/+1· G_I^-1/2(+1) (1-ρ)^-1/2(+1)) if G_I^-1/2 (1-ρ)^-1/2≤≤ G_I^-1/2 - 1/(1-ρ)^1/2
Θ(G_I^-1/) if ≥ G_I^-1/2 - 1/(1-ρ)^1/2,
where L^*(ρ) = 𝔼_[(β_1 - β_2)^T Σ (β_1 - β_2)] = Θ(1 - ρ), where G_I := (√(L^*(ρ)) - √(min(, L^*(ρ))))^2, and where = min(2(1+γ), δ + γ).
The market-entry threshold in Theorem <ref> exhibits three regimes of behavior depending on . In particular, the market-entry threshold takes the form = Θ(^c) where c decreases from 1 (in the first regime) to 1/ν + 1 (in the second regime) to 0 (in the third regime) as increases. To connect this to large-language-model marketplaces, we directly set = 0.34 to be the empirically estimated scaling law exponent for language models <cit.>; in this case, the scaling exponent c ranges from 1 to 0.75 to 0. The fact that there are three regimes come from the scaling law derived in Theorem <ref>, as the following proof sketch illustrates.
The key technical tool is the scaling law for the loss inf_∈ (0, 1)𝔼[L_1(β̂(α, λ, X))] (Theorem <ref>), which has three regimes of scaling behavior for different values of N. We apply the scaling law to analyze the performance of the incumbent, who faces a safety constraint and has finite data. Analyzing the performance of the new company—who faces no safety constraint—is more straightforward, given that the new company can set = 1. We compute as the number of data points needed to match the incumbent's performance level. The full proof is deferred to Appendix <ref>.
Theorem <ref> reveals that the new company can enter the market with = o() data, as long as the incumbent's dataset is sufficiently large (i.e., ≥ G_I^-1/2 (1-ρ)^-1/2). The intuition is when there is sufficient data, the multi-objective scaling exponent is worse than the single-objective scaling exponent (Theorem <ref>).
The incumbent thus faces a worse scaling exponent than the new company, so the new company can enter the market with asymptotically less data.
The three regimes in Theorem <ref> further reveal that the market-entry threshold scales at a slower rate as the incumbent's dataset size increases (Figure <ref>). The intuition is that the multi-objective scaling exponent ^* faced by the incumbent decreases as dataset size increases, while the single-objective scaling exponent faced by the new company is constant in dataset size (Theorem <ref>). The incumbent thus becomes less efficient at using additional data to improve performance, while the new company's efficiency in using additional data remains unchanged.
Theorem <ref> also offers finer-grained insight into the market-entry threshold in each regime. In the first regime, where the incumbent's dataset is small, the threshold matches the incumbent dataset size—the new company does not benefit from having a less stringent safety constraint. In the second (intermediate) regime, the new company can enter with a dataset size proportional to ^1/(+1). This polynomial speedup illustrates that the new company can more efficiently use additional data to improve performance than the incumbent company. A caveat is that this regime is somewhat restricted in that the ratio of the upper and lower boundaries is bounded. In the third regime, where the incumbent's dataset size is large, the market-entry threshold matches the market-entry threshold from Theorem <ref> where the incumbent has infinite data.
§.§ Safety constraint for the new company
We compute when the new company has a nontrivial safety constraint and the incumbent has infinite data. For this result, we strengthen the conditions on and from (<ref>), instead requiring:
>_(A)≥_(B)𝔼_(β_1, β_2) ∼[^*(β_1, β_2, , 0.75)],
where (<ref>) replaces the 0.5 with a 0.75 in the right-most quantity.[Inequality (B) in (<ref>) requires that the safety constraint still allows both company to label 75% of their training data according to performance-optimal outputs. We make this modification, since our analysis of multi-objective scaling laws for the excess loss assumes α≥ 0.75 (see Section <ref>).]
We state the result below (Theorem <ref>; Figure <ref>). The market-entry threshold depends on the incumbent's safety constraint , the performance loss G_I (resp. G_E) if the incumbent (resp. new company) had infinite data and faced the same safety constraint, the difference D = G_I - G_E in infinite-data performance loss achievable by the incumbent and new company, the scaling exponents γ, δ, and the correlation coefficient ρ.
Suppose that the power-law scaling holds for the eigenvalues and alignment coefficients with scaling exponents γ, δ > 0 and correlation coefficient ρ∈ [0, 1), and suppose that P = ∞. Suppose that the safety constraints and satisfy (<ref>).
Then it holds that = (∞, , , , ) satisfies:
:=
Θ(D^-1/ ) if D ≥ G_E^1/2 (1-ρ)^1/2
Θ(D^-+1/ G_E^1/2 (1-ρ)^1/2) if
G_E^/2( - ') (1-ρ)^/2( - ')≤ D ≤ G_E^1/2 (1-ρ)^1/2
Θ((D · G_E^-1/2 (1-ρ)^-1/2)^-'+1/') if D ≤ G_E^/2( - ') (1-ρ)^/2( - '),
where L^*(ρ) = 𝔼_[(β_1 - β_2)^T Σ (β_1 - β)] = Θ(1 - ρ), where = min(2(1+γ), δ + γ) and ' = min(1+γ, δ + γ), where
G_I := (√(L^*(ρ)) - √(min(, L^*(ρ))))^2 and G_E := (√(L^*(ρ)) - √(min(, L^*(ρ))))^2, and where D := G_I - G_E.
The market-entry threshold in Theorem <ref> also exhibits three regimes of behavior depending on the difference D in the infinite-data performance loss achievable by the incumbent and new company. In particular, the market-entry threshold takes the form = Θ(D^-c) where c increases from 1/ to +1/ to '+1/' as D decreases. (The third regime only exists when δ > 1.) To connect this to large-language-model marketplaces, if we take = 0.34 to be the empirically estimated scaling law exponent for language models <cit.>, then c would range from 2.94 to 3.94 to potentially even larger. The fact that there are three regimes come from the scaling law derived in Theorem <ref>, as the following proof sketch illustrates.
The key technical tool is the scaling law for the excess loss inf_∈ (0, 1) (𝔼[L_1(β̂(α, λ, X)) - L_1(β(α, 0))]) (Theorem <ref>), which has three regimes of scaling behavior for different values of N. We apply the scaling law to analyze the performance of the new company, who faces a safety constraint and has finite data. Analyzing the performance of the incumbent—who has infinite data—is more straightforward, and the incumbent's performance loss is G_I = D + G_E. We compute the number of data points needed for the new company to achieve an excess loss of D. The full proof is deferred to Appendix <ref>.
Theorem <ref> illustrates that the new company can enter the market with finite data , as long as the safety constraint placed on the new company is strictly weaker than the constraint placed on the incumbent company (inequality (A) in (<ref>)). This translates to the difference D being strictly positive. The intuition is that when the new company faces a weaker safety constraint, it can train on a greater number of data points labelled with the performance objective β_1, which improves performance.
The three regimes in Theorem <ref> further reveal that the market-entry threshold scales at a faster rate as the difference D between the two safety constraints decreases (Figure <ref>). The intuition is since the new company needs to achieve an excess loss of at most D, the new company faces a smaller multi-objective scaling exponent ^* as D decreases (Theorem <ref>). The new company thus becomes less efficient at using additional data to improve performance.
§ DERIVING SCALING LAWS FOR MULTI-OBJECTIVE ENVIRONMENTS
We formalize and derive our multi-objective scaling laws for the loss (Theorem <ref>) and excess loss (Theorem <ref>). Recall that the problem setting is high-dimensional ridge regression when a fraction α of the training data is labelled according to β_1 and the rest is labelled according to an alternate objective β_2. First, following the style of analysis of single-objective ridge regression <cit.>, we first compute a deterministic equivalent of the loss (Section <ref>). Then we derive the scaling law under the power scaling assumptions on the eigenvalues and alignment coefficients in Section <ref>, both for the loss (Section <ref>) and for the excess loss (Section <ref>). Proofs are deferred to Appendix <ref>.
§.§ Deterministic equivalent
We show that the loss of the ridge regression estimator can be approximated as a deterministic quantity. This analysis builds on the random matrix tools in <cit.> (see Appendix <ref>). Note that our derivation of the deterministic equivalent does not place the power scaling assumptions on the eigenvalues or alignment coefficients; in fact, it holds for any linear regression setup which satisfies a standard random matrix theory assumption (Assumption <ref>).
We compute the following deterministic equivalent (proof deferred to Appendix <ref>).[Following <cit.>, the asymptotic equivalence notation u ∼ v means that u/v tends to 1 as N and P go to ∞.]
Suppose that N ≥ 1, P ≥ 1, , β_1, and β_2 satisfy Assumption <ref>. Let Σ be the covariance matrix of , and let ∈ [0,1] and ∈ (0,1) be general parameters. Let Σ_c = (Σ + c I) for c ≥ 0, let 1 = β_1 β_1^T, let = (β_1 - β_2)(β_1 - β_2)^T, and let 1 = (β_1 - β_2) β_1^T. Let κ = κ(λ, N, Σ) from Definition <ref>. Then, it holds that
L_1(β̂(α, λ, X)) ∼ L_1^(β_1, β_2, , , N, α) =: T_1 + T_2 + T_3 + T_4 + T_5/,
where:
T_1 := κ^2 ·(ΣΣ_κ^-21) , T_2 := (1-α)^2 ( (Σ_κ^-2Σ^3 ))
T_3 := 2 (1-α) κ·(Σ_κ^-2Σ^2 1) , T_4 := -2 (1-α) κ1/N(Σ^2 Σ_κ ^-2) ·(Σ_κ^-1Σ1) ,
T_5 := (1-α) 1/N(Σ^2 Σ_κ ^-2) ·((Σ)- 2 (1-α) ( Σ_κ^-1Σ^2 ) ), := 1 - 1/N(Σ^2 Σ_κ ^-2).
Lemma <ref> shows that the loss can be approximated by a deterministic quantity L_1^(β_1, β_2, , , N, α) which is sum of five terms, normalized by the standard degrees of freedom correction ^-1 <cit.>. The sum T_1 + T_2 +T_3 is the loss of infinite-data ridge regression with regularizer κ. Terms T_4 and T_5 capture additional error terms.
In more detail, term T_1 / captures the standard single-objective environment error for N data points <cit.>: i.e., the population error of the single-objective linear regression problem with regularizer λ where all of the N training data points are labelled with β_i. Term T_2 is similar to the infinite-data ridgeless regression error but is slightly smaller due to regularization. Term T_3
is a cross term which is upper bounded by the geometric mean of term T_1 and term T_2. Term T_4 is another cross term which is subsumed by the other terms. Term T_5 captures an overfitting error which increases with the regularizer κ and decreases with the amount of data N.
From deterministic equivalents to scaling laws. In the following two subsections, using the deterministic equivalent from Lemma <ref>, we derive scaling laws. We make use of the the power scaling assumptions on the covariance and alignment coefficients described in Section <ref>, under which the deterministic equivalent takes a cleaner form (Lemma <ref> in Appendix <ref>).
We note that strictly speaking, deriving scaling laws requires controlling the error of the deterministic equivalent relative to the actual loss; for simplicity, we do not control errors and instead directly analyze the deterministic equivalent.
§.§ Scaling law for the loss
We derive scaling laws for the loss L_1^ := L_1^(β_1, β_2, , , N, α). We first prove the following scaling law for a general regularizer (proof deferred to Appendix <ref>).
Suppose that the power-law scaling assumption holds for the eigenvalues and alignment coefficients with scaling exponents γ, δ > 0 and correlation coefficient ρ∈ [0, 1), suppose that P = ∞. Assume that α≥ 0.5 and λ∈ (0,1).
Let L_1^ := L_1^(β_1, β_2, , , N, α) be the deterministic equivalent from Lemma <ref>. Let := min(2(1+γ), δ+γ).
Then, the expected loss satisfies:
𝔼_[L_1^] =Θ( max(λ^/1+γ, N^-)_finite data error + (1-α)^2 · (1 - ρ)_mixture error + (1-α) (
min(λ^-1/1+γ, N)/N)
(1-ρ)
_overfitting error).
Theorem <ref> illustrates that the loss is the sum of a finite data error, an overfitting error, and a mixture error. The finite data error for L_1^ matches the loss in the single-objective environment for N data points labelled with objective β_1. The mixture error equals the loss of the infinite-data ridgeless regression predictor β(α, 0). The overfitting error for L_1^ equals the error incurred when the regularizer is too small. This term is always at most (1-α)^-1 times larger than the mixture error, and it is smaller than the mixture error when is sufficiently large relative to N.
Due to the overfitting error, the optimal loss is not necessarily achieved by taking → 0 for multi-objective linear regression. In fact, if the regularizer decays too quickly as a function of N (i.e., if = O(N^-1-γ)), then the error would converge to (1-α)(1-ρ), which is a factor of (1-α)^-1 higher than the error of the infinite-data ridgeless predictor β(α, 0).
The fact that → 0 is suboptimal reveals a sharp disconnect between the multi-objective setting and the single-objective setting where no explicit regularization is necessary to achieve the optimal loss <cit.>.[Tempered overfitting <cit.> can similarly occur in single-objective settings with noisy observations. In this sense, labelling some of the data with the alternate objective β_2 behaves qualitatively similarly to noisy observations. ]
In the next result, we compute the optimal regularizer and derive a scaling law under optimal regularization as a corollary of Theorem <ref>.
Consider the setup of Theorem <ref>.
Then, the loss under optimal regularization can be expressed as:
inf_∈ (0,1)𝔼_[L_1^] = Θ(N^-) if N ≤ (1-α)^-1/(1-ρ)^-1/
Θ((N/(1-α)(1-ρ))^-/ + 1) if (1-α)^-1/(1-ρ)^-1/≤ N ≤ (1-α)^-2+/ (1-ρ)^-1/
Θ((1-α)^2(1-ρ)) if N ≥ (1-α)^-2+/ (1-ρ)^-1/,
where := min(2(1+γ), δ+γ).
The scaling law exponent ^* ranges from , to /(+1), to 0 (Figure <ref>). To better understand each regime, we provide intuition for when error term from Theorem <ref> dominates, the form of the optimal regularizer, and the behavior of the loss.
* Regime 1: N ≤ (1-α)^-1/(1-ρ)^-1/. Since N is small, the finite data error dominates regardless of . As a result, like in a single-objective environment, taking = O(N^-1-γ) recovers the optimal loss up to constants. Note that the loss thus behaves as if all N data points were labelled according to β_i: the learner benefits from all of the data, not just the data is labelled according to β_i.
* Regime 2: (1-α)^-1/(1-ρ)^-1/≤ N ≤ (1-α)^-2+/ (1-ρ)^-1/. In this regime, the finite error term and overfitting error dominate. Taking = Θ(((1-α) (1-ρ)/N)^1+γ/ + 1), which equalizes the two error terms, recovers the optimal loss up to constants. The loss in this regime improves with N, but at a slower rate than in a single-objective environment.
* Regime 3: N ≥ (1-α)^-2+/ (1-ρ)^-1/.
Since N is large, the mixture and the overfitting error terms dominate. Taking = Θ((N(1-α))^-1-γ), which equalizes the two error terms, recovers the optimal loss up to constants. The loss behaves (up to the constants) as if there were infinitely many data points from the mixture distribution with weight α. This is the minimal possible loss and there is thus no additional benefit for data beyond improving constants.
The full proof of Corollary <ref> is deferred to Appendix <ref>.
§.§ Scaling law for the excess loss
Now, we turn to scaling laws for the excess loss 𝔼_[L_1^(β_1, β_2, , , N, α) - L_1(β(α, 0))], , which is normalized by the loss of the infinite-data ridgeless predictor β(α, 0). We first prove the following scaling law for a general regularizer , assuming that α≥ 0.75 (proof deferred to Appendix <ref>).[The assumption that α≥ 0.75 simplifies the closed-form expression for the deterministic equivalent of the excess loss in Lemma <ref>. We defer a broader characterization of scaling laws for the excess loss to future work.]
Suppose that the power-law scaling assumption holds for the eigenvalues and alignment coefficients with scaling exponents γ, δ > 0 and correlation coefficient ρ∈ [0, 1), suppose that P = ∞. Assume that α≥ 0.75 and λ∈ (0,1).
Let L_1^ := L_1^(β_1, β_2, , , N, α) be the deterministic equivalent from Lemma <ref>. Let := min(2(1+γ), δ+γ) and let ' = min(1+γ, δ+γ)
Then, the expected loss satisfies:
𝔼_[L_1^ - L_1(β(α, 0))]
= Θ( max(λ^/1+γ, N^-)_finite data error + (1-ρ) (1-α) max(λ^'/1+γ, N^-')_mixture finite data error + (1-α) (
min(λ^-1/1+γ, N)/N)
(1-ρ)
_overfitting error).
Theorem <ref> illustrates that the loss is the sum of a finite data error, an overfitting error, and a mixture finite data error. In comparison with Theorem <ref>, the difference is that the mixture error is replaced by the mixture finite data error. Interestingly, the mixture finite data error exhibits a different asymptotic dependence with respect to λ and N than the finite data error: the asymptotic rate of decay scales with ' rather than . In fact, the rate is slower for the mixture finite data error than the finite data error as long as δ > 1 (since this means that ' <).
Since the optimal excess loss is also not necessarily achieved by taking → 0, we compute the optimal regularizer for the excess loss and derive a scaling law under optimal regularization as a corollary of Theorem <ref>.
Consider the setup of Theorem <ref>.
The excess loss under optimal regularization can be expressed as:
inf_∈ (0,1) (𝔼_[L_1^ - L_1(β(α, 0))])
= Θ(N^-) if N ≤ (1-α)^-1/(1-ρ)^-1/
Θ((N/(1-α)(1-ρ))^-/ + 1) if
(1-α)^-1/(1-ρ)^-1/≤ N ≤ (1-α)^-'+1/ - '(1-ρ)^-'+1/ - '
Θ((1-α) (1-ρ) N^-'/'+1) if N ≥ (1-α)^-'+1/ - '(1-ρ)^-'+1/ - ',
where := min(2(1+γ), δ+γ) and ' = min(1+γ, δ + γ).
The scaling law exponent ^* ranges from , to /(+1), to '/('+1) (Figure <ref>). The first two regimes behave similarly to Corollary <ref>, and the key difference arises in the third regime (when N is large). In the third regime (N ≥ (1-α)^-'+1/ - '(1-ρ)^-'+1/ - '), the mixture finite data error and the overfitting error terms dominate. Taking = Θ(N^-1+γ/'+1)—which equalizes these two error terms—recovers the optimal loss up to constants. The resulting scaling behavior captures that in this regime, additional data meaningfully improves the excess loss, even though additional data only improves the loss in terms of constants. The full proof of Corollary <ref> is deferred to Appendix <ref>.
§ DISCUSSION
We studied market entry in marketplaces for machine learning models, showing that pressure to satisfy safety constraints can reduce barriers to entry for new companies. We modelled the marketplace using a high-dimensional multi-objective linear regression model. Our key finding was that a new company can consistently enter the marketplace with significantly less data than the incumbent. En route to proving these results, we derive scaling laws for multi-objective regression, showing that the scaling rate becomes slower when the dataset size is large.
Potential implications for regulation. Our results have nuanced design consequences for regulators, who implicitly influence the level of safety that each company needs to achieve to avoid reputational damage. On one hand, our results suggest that placing greater scrutiny on dominant companies can encourage market entry and create a more competitive marketplace of model providers. On the other hand, market entry does come at a cost to the safety objective: the smaller companies exploit that they can incur more safety violations while maintaining their reputation, which leads to a race to the bottom for safety. Examining the tradeoffs between market competitiveness and safety compliance is an important direction for future work.
Barriers to market entry for online platforms. While we focused on language models, we expect that our conceptual findings about market entry also extend to recommendation and social media platforms.
In particular, our motivation and modeling assumptions capture key aspects of these online platforms. Policymakers have raised concerns have been raised about barriers to entry for social media platforms <cit.>, motivated by the fact that social media platforms such as X and Facebook each have over a half billion users <cit.>. Incumbent companies risk reputational damage if their model violates safety-oriented objectives—many recommendation platforms have faced scrutiny for promoting hate speech <cit.>, divisive content <cit.>, and excessive use by users <cit.>, even when recommendations perform well in terms of generating user engagement. This means that incumbent platforms must balance optimizing engagement with controlling negative societal impacts <cit.>. Moreover, new companies face less regulatory scrutiny, given that some regulations explicitly place more stringent requirements on companies with large user bases: for example the Digital Services Act <cit.> places a greater responsibility on Very Large Online Platforms (with over 45 million users per month) to identify and remove illegal or harmful content.
Given that incumbent platforms similarly face more pressure to satisfy safety-oriented objectives, our results suggest that multi-objective learning can also reduce barriers to entry for new online platforms.
Limitations. Our model for interactions between companies and users makes several simplifying assumptions. For example, we focused entirely whether the new company can enter the market, which leaves open the question of whether the new company can survive in the long run. Moreover, we assumed that all users choose the model with the highest overall performance. However, different users often care about performance on different queries; this could create an incentive for specialization, which could also reduce barriers to entry and market concentration. Finally, we focused on direct interactions between model providers and users, but in reality, downstream providers sometimes build services on top of a foundation model. Understanding how these market complexities affect market entry as well as long-term concentration is an interesting direction for future work.
Furthermore, our model also made the simplifying assumption that performance and safety trade off according to a multi-objective regression problem.
However, not all safety objectives fit the mold of linear coefficients within linear regression. For some safety objectives such as privacy, we still expect that placing greater scrutiny on dominant companies could similarly reduce barriers to entry. Nonetheless, for other safety or societal considerations, we do expect that the implications for market entry might be fundamentally different. For example, if the safety objective is a multi-group performance criteria, and there is a single predictor that achieves zero accuracy on all distributions, then a dominant company with infinite data would be able to retain all users even if the company faces greater scrutiny. Extending our model to capture a broader scope of safety objectives is a natural direction for future work.
§ ACKNOWLEDGMENTS
We thank Alireza Fallah, Jiahai Feng, Nika Haghtalab, Andy Haupt, Erik Jones, Jon Kleinberg, Ben Laufer, Neil Mallinar, Judy Shen, Alex Wei, Xuelin Yang, and Eric Zhao for useful feedback on this project. This work was partially supported by an Open Philanthropy AI fellowship and partially supported by the European Union (ERC-2022-SYG-OCEAN-101071601).
plainnat
§ PROOFS FOR SECTION <REF>
In this section, we prove Theorem <ref>. First, we state relevant facts (Appendix <ref>) and prove intermediate lemmas (Appendix <ref>), and then we use these ingredients to prove Theorem <ref> (Appendix <ref>). Throughout this section, we let
L^*(ρ) = 𝔼_[(β_1 - β_2) Σ (β_1 - β_2)^T].
Moreover, let
β(α, λ) = _β(α·𝔼_X ∼[(⟨β - β_1, X ⟩)^2] + (1-α) ·𝔼_X ∼[(⟨β - β_2, X ⟩)^2] + λβ_2^2 )
be the infinite-data ridge regression predictor.
§.§ Facts
We can explicitly solve for the infinite-data ridge regression predictor
β(α, λ) = _β(α·𝔼_x ∼[⟨β - β_1, x⟩^2] + (1-α)·𝔼_x ∼[⟨β - β_2, x⟩^2] + ||β||^2_2 )
= Σ (Σ + λ I)^-1 (αβ_1 + (1-α) β_2).
A simple calculation shows that 𝔼_[L_1(β(α, 0))] = (1-α)^2 L^*(ρ) and 𝔼_[L_2(β(α, 0))] = α^2 L^*(ρ). Thus, it holds that:
α𝔼_[L_1(β(α, 0))] + (1-α) 𝔼_[L_2(β(α, 0))] = α (1-α) L^*(ρ).
Moreover, by the definition of the ridge regression objective, we see that:
α𝔼_[L_1(β(α, λ))] + (1-α) 𝔼_[L_2(β(α, λ))] ≥α𝔼_[L_1(β(α, 0))] + (1-α) 𝔼_[L_2(β(α, 0))].
§.§ Lemmas
The first lemma upper bounds the performance loss when there is regularization.
Suppose that power-law scaling holds for the eigenvalues and alignment coefficients with scaling exponents γ, δ > 0 and correlation coefficient ρ∈ [0, 1) and suppose that P = ∞. Let L^*(ρ) = (β_1 - β_2)^T Σ (β_1 - β_2)^T. Let
β(α, λ) = _β(α·𝔼_X ∼[(⟨β - β_1, X ⟩)^2] + (1-α) ·𝔼_X ∼[(⟨β - β_2, X ⟩)^2] + λβ_2^2)
be the infinite-data ridge regression predictor. Assume that α≥ 1/2. Then it holds that
𝔼_[L_1(β(α, λ))] ≥ (1-α)^2 L^*(ρ)
and
𝔼_[L_1(β(α, λ))]/𝔼_[L_2(β(α, λ))]≥(1-α)^2/α^2.
We define the quantities:
A := λ^2 ∑_i=1^P λ_i/(λ_i + λ)^2 i^-δ
B := (1-α)^2 (1-ρ)^2 ∑_i=1^P λ^3_i/(λ_i + λ)^2 i^-δ
C := λ (1-ρ) ∑_i=1^P λ^2_i/(λ_i + λ)^2 i^-δ.
We compute the performance loss as follows:
𝔼_[L_1(β(α, λ))]
= 𝔼_[(Σ (β_1 - β(α, λ)) (β_1 - β(α, λ))^T) ]
= 𝔼_[((Σ + λ I)^-2Σ( λβ_1 + Σ· (1-α) (β_1 - β_2) ) ( λβ_1 + Σ· (1-α)(β_1 - β_2) )^T)]
= 𝔼_[((Σ + λ I)^-2Σ·( λβ_1 + Σ· (1-α) (β_1 - β_2) ) ( λβ_1 + Σ· (1-α) (β_1 - β_2) )^T)]
= λ^2 𝔼_[((Σ + λ I)^-2Σ·β_1 β_1^T)] + (1-α)^2 𝔼_[((Σ + λ I)^-2Σ^3 · (β_1 - β_2) (β_1 - β_2)^T) ]
+ λ (1-α) 𝔼_[((Σ + λ I)^-2Σ^2 ·β_1 (β_1 - β_2)^T )]
= λ^2 ∑_i=1^P λ_i/(λ_i + λ)^2𝔼_[⟨β_1, v_i ⟩^2] + (1-α)^2 ∑_i=1^P λ^3_i/(λ_i + λ)^2𝔼_[⟨β_1 - β_2, v_i ⟩^2]
+ λ (1-α) ∑_i=1^P λ^2_i/(λ_i + λ)^2𝔼_[⟨β_1, v_i ⟩⟨β_1 - β_2, v_i⟩]
= λ^2 ∑_i=1^P λ_i/(λ_i + λ)^2 i^-δ + (1-α)^2 (1-ρ)^2 ∑_i=1^P λ^3_i/(λ_i + λ)^2 i^-δ + λ (1-α) (1-ρ) ∑_i=1^P λ^2_i/(λ_i + λ)^2 i^-δ
= A + (1-α)^2 B + (1-α) C.
An analogous calculation shows that the safety violation can be written as:
𝔼_[L_2(β(α, λ))] = A + α^2 B + α C
Since α≥ 1/2, then it holds that:
𝔼_[L_1(β(α, λ))]/𝔼_[L_2(β(α, λ))] = A + (1-α)^2 B + (1-α) C/A + α B + α C≥(1-α)^2/α^2.
Combining this with the facts from Appendix <ref>—which imply that α𝔼_[L_1(β(α, λ))] + (1-α) 𝔼_[L_2(β(α, λ))] ≥α (1-α) L^*(ρ)—we have that 𝔼_[L_1(β(α, λ))] ≥ (1-α)^2 L^*(ρ) as desired.
The following lemma computes the optimal values of and for the incumbent.
Suppose that power-law scaling holds for the eigenvalues and alignment coefficients with scaling exponents γ, δ > 0 and correlation coefficient ρ∈ [0, 1) and suppose that P = ∞. Let L^*(ρ) = 𝔼_[(β_1 - β_2)^T Σ (β_1 - β_2)^T]. Suppose that = ∞, and suppose that the safety constraint satisfies (<ref>). Then it holds that α_I = √(min(, L^*(ρ))/L^*(ρ)), and _I = 0 is optimal for the incumbent. Moreover, it holds that:
𝔼_[L^*_1(β_1, β_2, , _I, ∞, α_O)] = (√(L^*(ρ)) - √(min(, L^*(ρ)))^2.
First, we apply Lemma <ref> with N = ∞ to see that:
𝔼_[L^*_1(β_1, β_2, , , ∞, α)] = 𝔼_[L_1(β(α, λ))]
and apply the definition of L_2^* to see that:
𝔼_[L^*_2(β_1, β_2, , α)] = 𝔼_[L_2(β(α, 0))].
Let α^* =√(min(, L^*(ρ))/L^*(ρ)). By the assumption in the lemma statement, we know that:
α^* ≥√(𝔼_[^*(β_1, β_2, , 0.5)]/L^*(ρ)) = 0.5.
We show that (α_I, _I) = (α^*, 0). Assume for sake of contradiction that (α, ) ≠ (α^*, 0) satisfies the safety constraint 𝔼_[^*(β_1, β_2, , α)] ≤ and achieves strictly better performance loss:
𝔼_[^*(β_1, β_2, , , ∞, )] < 𝔼_[^*(β_1, β_2, , 0, ∞, ^*)].
We split into two cases: α^* = α, ≠ 0 and α^* ≠α.
Case 1: α^* = α, ≠ 0.
By Lemma <ref>, we know that
𝔼[L^*_1(β_1, β_2, , , ∞, α^*)] = 𝔼_[L_1(β(α^*, λ))] ≥ (1-α^*)^2 L^*(ρ).
Equality is obtained at = 0, which is a contradiction.
Case 2: α≠α^*.
By Lemma <ref>, it must hold that α > α^* in order for the performance to beat that of (α^*, 0). However, this means that the safety constraint
𝔼_[^*(β_1, β_2, , α)] = α^2 L^*(ρ) > (α^*)^2 L^*(ρ) =
is violated, which is a contradiction.
Concluding the statement. This means that(α_I, _I) = (α^*, 0), which also means that:
𝔼_[L^*_1(β_1, β_2, , _I, ∞, α_I)] = 𝔼_[L_1(β(α_I, λ_I))]
= (1-α_I)^2 𝔼_[(β_1 - β_2)^T Σ (β_1 - β_2)]
= (√(L^*(ρ)) - √(min(, L^*(ρ)))^2.
The following claim calculates 𝔼_[(β_1 - β_2)^T Σ (β_1 - β_2)].
Suppose that the power-law scaling assumption holds for the eigenvalues and alignment coefficients with scaling exponents γ, δ > 0 and correlation coefficient ρ∈ [0, 1), suppose that P = ∞. Then it holds that:
𝔼_[(β_1 - β_2)^T Σ (β_1 - β_2)] = 2 (1-ρ) (∑_i=1^P i^-δ-1-γ) = Θ(1-ρ).
Let Σ = V Λ V^T be the eigendecomposition of Σ, where Λ is a diagonal matrix consisting of the eigenvalues. We observe that
𝔼_[⟨β_1 - β_2, v_i⟩^2] = 𝔼_[⟨β_1 , v_i⟩^2] + 𝔼_[⟨β_2, v_i⟩^2] - 2 𝔼_[⟨β_1 , v_i⟩⟨β_2, v_i⟩] = i^-δ + i^-δ - 2 ρ i^-δ = 2(1-ρ) i^-δ.
This means that:
𝔼_[(β_1 - β_2)^T Σ (β_1 - β_2)] = (Σ𝔼_[(β_1 - β_2) (β_1 - β_2)^T])
= (Λ𝔼_[V^T (β_1 - β_2) (β_1 - β_2)^T V] )
= ∑_i=1^P i^-1-γ𝔼_[⟨β_1 - β_2, v_i⟩^2]
= 2 (1 - ρ) ∑_i=1^P i^-δ-1-γ
= Θ(1-ρ).
§.§ Proof of Theorem <ref>
We prove Theorem <ref> using the above lemmas along with Corollary <ref> (the proof of which we defer to Appendix <ref>).
We analyze (α_C, _C) first for the incumbent C = I and then for the entrant C = E.
Analysis of the incumbent C = I.
To compute α_I and _I, we apply Lemma <ref>.
By Lemma <ref>, we see that:
𝔼_[L^*_1(β_1, β_2, , _I, ∞, α_I)] = (√(L^*(ρ)) - √(min(, L^*(ρ)))^2.
Analysis of the entrant C = E. Since the entrant faces no safety constraint, the entrant can choose any α∈ [0.5, 1]. We apply Corollary <ref> to see that:
𝔼_[L^*_1(β_1, β_2, , _E, N, α_E)] = inf_α∈ [0.5, 1]inf_ > 0𝔼_[L^*_1(β_1, β_2, , , N, α)] = Θ(
N^-),
which means that:
^*(∞, , ∞, , ) = Θ((√(L^*(ρ)) - √(min(, L^*(ρ)))^-2/)
as desired.
We can further apply Claim <ref> to see that L^*(ρ) = Θ(1-ρ).
§ PROOFS FOR SECTION <REF>
§.§ Proofs for Section <ref>
We prove Theorem <ref>. The main technical tool is Theorem <ref>, the proof of which we defer to Appendix <ref>.
We analyze (α_C, _C) first for the incumbent C = I and then for the entrant C = E. Like in the theorem statement, let L^*(ρ) = 𝔼_[(β_1 - β_2)^T Σ (β_1 - β_2)] = Θ(1 - ρ) (Claim <ref>) and G_I := (√(L^*(ρ)) - √(min(, L^*(ρ))))^2, and = min(2(1+γ), δ + γ).
Analysis of the incumbent C = I. Recall from the facts in Appendix <ref> that:
L_1^*(β_1, β_2, , α) = α^2 L^*(ρ).
This means that the safety constraint is satisfied if and only if _I ≤√(min(, L^*(ρ))/L^*(ρ)) =: α^*. The bound in Corollary <ref> implies that:
𝔼_[L^*_1(β_1, β_2, , _I, , α_I)]
= inf_α∈[0.5, α^* ]inf_ > 0𝔼_[L^*_1(β_1, β_2, , , , α)]
= Θ(
inf_ > 0𝔼_[L^*_1(β_1, β_2, Σ, , , α^* )] )
= Θ(^-) if ≤ (1-α^* )^-1/(1-ρ)^-1/
Θ((/(1-α^* )(1-ρ))^-/ + 1) if (1-α^* )^-1/(1-ρ)^-1/≤≤ (1-α^* )^-2+/ (1-ρ)^-1/
Θ((1-α^* )^2(1-ρ)) if ≥ (1-α^* )^-2+/ (1-ρ)^-1/,
= Θ( ^-) if ≤ G_I^-1/2 (1-ρ)^-1/2
Θ(^-/+1· G_I^/2(+1) (1-ρ)^/2(+1)) if G_I^-1/2 (1-ρ)^-1/2≤≤ G_I^-1/2 - 1/(1-ρ)^1/2
Θ(G_I) if ≥ G_I^-1/2 - 1/(1-ρ)^1/2.
Analysis of the entrant C = E. Since the entrant faces no safety constraint, the entrant can choose any α∈ [0.5, 1]. We apply Corollary <ref> to see that:
𝔼_[L^*_1(β_1, β_2, , _E, N, α_E)] = inf_α∈ [0.5, 1]inf_ > 0𝔼_[L^*_1(β_1, β_2, , , N, α)] = Θ(
N^-),
which means that:
^*(, , ∞, , ) = Θ() if ≤ G_I^-1/2 (1-ρ)^-1/2
Θ(^1/+1· G_I^-1/2(+1) (1-ρ)^-1/2(+1)) if G_I^-1/2 (1-ρ)^-1/2≤≤ G_I^-1/2 - 1/(1-ρ)^1/2
Θ(G_I^-1/) if ≥ G_I^-1/2 - 1/(1-ρ)^1/2.
as desired.
§.§ Proofs for Section <ref>
We prove Theorem <ref>. When the the safety constraints of the two firms are sufficiently close, it no longer suffices to analyze the loss up to constants for the entrant, and we require a more fine-grained analysis of the error terms than is provided in the scaling laws in Corollary <ref>. In this case, we turn to scaling laws for the excess loss as given by Corollary <ref>.
We analyze (α_C, _C) first for the incumbent C = I and then for the entrant C = E. Like in the theorem statement, let L^*(ρ) = 𝔼_[(β_1 - β_2)^T Σ (β_1 - β_2)] = Θ(1 - ρ) (Claim <ref>), G_I = (√(L^*(ρ)) - √(min(, L^*(ρ))))^2, G_E = (√(L^*(ρ)) - √(min(, L^*(ρ))))^2,
D = G_I - G_E, and = min(2(1+γ), δ + γ).
Analysis of the incumbent C = I. Since the incumbent has infinite data, we apply Lemma <ref> to see that:
𝔼_[L^*_1(β_1, β_2, , _I, ∞, _I)] = (√(L^*(ρ)) - √(min(, L^*(ρ))))^2
= D + G_E.
Analysis of the entrant C = E. Recall from the facts in Appendix <ref> that:
L_1^*(β_1, β_2, , α) = α^2 L^*(ρ).
This means that the safety constraint is satisfied if and only if ≤√(min(, L^*(ρ))/L^*(ρ)) =: α^*. The bound in Corollary <ref> implies that:
𝔼_[L^*_1(β_1, β_2, , _E, N, α_E)]
= inf_α∈[0.5, α^* ]inf_ > 0𝔼_[L^*_1(β_1, β_2, , , N, α)]
= inf_α∈[0.5, α^* ](inf_ > 0(𝔼_[L^*_1(β_1, β_2, , , N, α) - L_1(β(α, 0))]) + 𝔼_[L_1(β(α, 0))] )
= inf_α∈[0.5, α^* ](inf_ > 0(𝔼_[L^*_1(β_1, β_2, , , N, α) - L_1(β(α, 0))]) + (1-α)^2 L^*(ρ) )
= Θ(
inf_ > 0(𝔼_[L^*_1(β_1, β_2, , , N, α) - L_1(β(α^*, 0))]) ) + (1-α^*)^2 L^*(ρ)
=
(1-α^*)^2 L^*(ρ) + Θ(N^-) if N ≤ (1-α^*)^-1/(1-ρ)^-1/
(1-α^*)^2 L^*(ρ) + Θ((N/(1-α^*)(1-ρ))^-/ + 1) if
(1-α^*)^-1/(1-ρ)^-1/≤ N ≤ (1-α^*)^-'+1/ - '(1-ρ)^-'+1/ - '
(1-α^*)^2 L^*(ρ) + Θ((1-α^*) (1-ρ) N^-'/'+1) if N ≥ (1-α^*)^-'+1/ - '(1-ρ)^-'+1/ - ',
=
G_E + Θ(N^-) if N ≤ (1-α^*)^-1/(1-ρ)^-1/
G_E + Θ((N/(1-α^*)(1-ρ))^-/ + 1) if
(1-α^*)^-1/(1-ρ)^-1/≤ N ≤ (1-α^*)^-'+1/ - '(1-ρ)^-'+1/ - '
G_E + Θ((1-α^*) (1-ρ) N^-'/'+1) if N ≥ (1-α^*)^-'+1/ - '(1-ρ)^-'+1/ - ',
.
Using this, we can compute the market-entry threshold as follows:
(∞, , , , )
= Θ(D^-1/ ) if D ≥ (1-α^*) (1-ρ)
Θ(D^-+1/ (1-α^*) (1-ρ) ) if
(1-α^*)^/ - '(1-ρ)^/ - '≤ D ≤ (1-α^*) (1-ρ)
Θ((D/(1 - α^*)(1-ρ))^-'+1/') if D ≤ (1-α^*)^/ - '(1-ρ)^/ - '
= Θ(D^-1/ ) if D ≥ G_E^1/2 (1-ρ)^1/2
Θ(D^-+1/ G_E^1/2 (1-ρ)^1/2) if
G_E^/2( - ') (1-ρ)^/2( - ')≤ D ≤ G_E^1/2 (1-ρ)^1/2
Θ((D/G_E^1/2 (1-ρ)^1/2)^-'+1/') if D ≤ G_E^/2( - ') (1-ρ)^/2( - ')
§ PROOFS FOR SECTION <REF>
In this section, we derive a deterministic equivalent and scaling laws for high-dimensional multi-objective linear regression. Before diving into this, we introduce notation, derive a basic decomposition, and give an outline for the remainder of the section.
Notation. Recall that (X_i, Y_i) denotes the labelled training dataset. Let the sample covariance be:
Σ̂ = 1/N∑_i=1^N X_i X_i^T.
We also consider the following reparameterization where we group together inputs according to how they are labelled. For j ∈{1,2}, we let X_1,j, …, X_N_j,j be the inputs labelled by _j.
We let
Σ̂_1 = 1/N_1∑_i=1^N_1 X_i, 1 X_i,1^T
Σ̂_2 = 1/N_2∑_i=1^N_2 X_i, 2 X_i,2^T.
It is easy to see that Σ = αΣ̂_1 + (1-α) Σ̂_2. Moreover, 𝔼[Σ̂] = 𝔼[Σ̂_1] = 𝔼[Σ̂_2] = Σ. Furthermore, Σ̂_1 and Σ̂_2 are fully independent. We let ∼ denote asymptotic equivalence following <cit.>.
Basic decomposition. A simple calculation shows that the solution and population-level loss of ridge regression takes the following form.
Assume the notation above. Let 1 = β_1 β_1^T, let = (β_1 - β_2)(β_1 - β_2)^T, and let 1 = (β_1 - β_2) β_1^T.
The learned predictor takes the form:
β̂(α, λ, X) = (Σ̂ + λ I)^-1 (αΣ̂_1 β_1 + (1-α) Σ̂_2 β_2).
Moreover, it holds that:
L_1(β̂(α, λ, X)) = λ^2 ((Σ̂ + λ I)^-1Σ (Σ̂ + λ I)^-11)_(T1) + (1-α)^2 (Σ̂_2 (Σ̂ + λ I)^-1Σ (Σ̂ + λ I)^-1Σ̂_2 )_(T2)
+ 2 λ (1-α) ·((Σ̂ + λ I)^-1Σ (Σ̂ + λ I)^-1Σ̂_2 1)_(T3).
For 1 ≤ i ≤ N, let Y_i be the label for input X_i in the training dataset. For i ∈{1, 2} and 1 ≤ i ≤ N_i, let Y_i,j := ⟨β_i, X_i, j⟩ be the label for the input X_i, j according to β_i.
For the first part, it follows from standard analyses of ridge regression that the learned predictor takes the form:
β̂(α, λ, X) = (Σ̂ + λ I)^-1(1/N∑_i=1^N X_i Y_i )
= (Σ̂ + λ I)^-1(1/N∑_i=1^N X_i, 1 Y_i,1 + 1/N∑_i=1^N X_i, 2 Y_i,2)
= (Σ̂ + λ I)^-1(αΣ̂_1 β_1 + (1-α)Σ̂_2 β_2)
as desired.
For the second part, we first observe that the difference β_1 - β̂(α, λ, X) takes the form:
β_1 - β̂(α, λ, X) = β_1 - (Σ̂ + λ I)^-1(αΣ̂_1 β_1 + (1-α)Σ̂_2 β_2)
= (Σ̂ + λ I)^-1(λβ_1 + (1-α) Σ̂_2 (β_1 - β_2) ).
This means that:
L_1(β̂(α, λ, X))
= (β_1 - β̂(α, λ, X)^T Σ (β_1 - β̂(α, λ, X)
= (λβ_1 + (1-α) Σ̂_2 (β_1 - β_2) )^T (Σ̂ + λ I)^-1Σ (Σ̂ + λ I)^-1(λβ_1 + (1-α) Σ̂_2 (β_1 - β_2) )
= λ^2 ·β_1^T Σ̂ + λ I)^-1Σ (Σ̂ + λ I)^-1β_1 + (1-α)^2 · (β_1 - β_2)^T Σ̂_2 Σ̂ + λ I)^-1Σ (Σ̂ + λ I)^-1Σ̂_2 (β_1 - β_2)
+ 2 λ (1-α) ·β_1^T Σ̂ + λ I)^-1Σ (Σ̂ + λ I)^-1Σ̂_2 (β_1 - β_2)
= λ^2 ((Σ̂ + λ I)^-1Σ (Σ̂ + λ I)^-11)+ (1-α)^2 (Σ̂_2 (Σ̂ + λ I)^-1Σ (Σ̂ + λ I)^-1Σ̂_2 )
+ 2 λ (1-α) ·((Σ̂ + λ I)^-1Σ (Σ̂ + λ I)^-1Σ̂_2 1).
as desired.
Outline for the rest of this Appendix. The bulk of our analysis in this section boils down to analyzing Term 1 (T1), Term 2 (T2), and Term 3 (T3) in Claim <ref>. Our main technical tool is the random matrix machinery from Appendix <ref>. In Appendix <ref>, we provide useful sublemmas about intermediate deterministic equivalents that we apply to analyze Terms 2 and 3. We then analyze Term 1 (Appendix <ref>), Term 2 (Appendix <ref>), and Term 3 (Appendix <ref>), and use this to prove Lemma <ref> (Appendix <ref>).
We apply the power scaling assumptions to derive a simpler expression for the deterministic equivalent (Lemma <ref> in Appendix <ref>). We then apply Lemma <ref> to prove Theorem <ref> (Appendix <ref>), and we prove Corollary <ref> (Appendix <ref>). We also apply Lemma <ref> to prove Theorem <ref> (Appendix <ref>), and we prove Corollary <ref> (Appendix <ref>). We defer auxiliary calculations to Appendix <ref>.
§.§ Useful lemmas about intermediate deterministic equivalents
The results in this section consider Z_1 := α/1-αΣ̂_1 + /1-α I, which we introduce when conditioning on the randomness of Σ̂_1 when analyzing (T2) and (T3). We derive several properties of Z_1 and the effective regularizer κ_1 = κ(1, N(1-α), Z_1^-1/2Σ Z_1^-1/2) below.
The first set of lemmas relate the trace of various matrices involving κ_1 and Z_1 to deterministic quantities. A subtlety is that κ_1 and Z_1 are correlated, so we cannot directly apply Marčenko-Pastur, and instead we must indirectly analyze this quantity.
Consider the setup of Lemma <ref>, and assume the notation above. Assume α < 1. Let Z_1 = α/1-αΣ̂_1 + /1-α I, and let κ_1 = κ(1, N(1-α), Z_1^-1/2Σ Z_1^-1/2). Suppose that B has bounded operator norm.
κ_1 ((Σ + κ_1 Z_1)^-1 B ) ∼(1-α) κ/λ((Σ + κ I)^-1 B )
By Claim <ref>, we know that:
(1-α) ((Σ̂ + λ I)^-1 B ) = ((Σ̂_2 + Z_1 )^-1 B )
∼_(A)κ_1 ((Σ + κ_1 I )^-1 B ).
where (A) applies Lemma <ref> and Claim <ref>.
Furthermore, by Lemma <ref>, it holds that:
λ((Σ̂ + λ I)^-1 B ) ∼κ((Σ + κ I )^-1 B ) .
Putting this all together yields the desired result.
Consider the setup of Lemma <ref>, and assume the notation above. Assume α < 1. Let Z_1 = α/1-αΣ̂_1 + /1-α I, and let κ_1 = κ(1, N(1-α), Z_1^-1/2Σ Z_1^-1/2). Suppose that A and B have bounded operator norm. Then it holds that:
(κ_1)^2 (((Σ + κ_1 Z_1 )^-1 A (Σ + κ_1 Z_1)^-1 B ) + E_1 )
∼(1-α)^2 κ^2/λ^2(((Σ + κ I )^-1 A (Σ + λ I )^-1 B ) + E_2 )
where
κ = κ(λ, N, Σ)
E_1 = 1/N (1-α)(A (Σ + κ_1 Z_1)^-1Σ (Σ + κ_1 Z_1)^-1)/1 - 1/N (1-α)( (Σ + κ_1 Z_1)^-1) Σ (Σ + κ_1 Z_1)^-1Σ·((Σ + κ_1Z_1)^-1Σ (Σ + κ_1Z_1)^-1 B )
E_2 = 1/N(A Σ (Σ + κ I)^-2)/1 - 1/N(Σ^2 (Σ + κ I)^-2)·((Σ + κ I)^-1Σ (Σ + κ I)^-1 B )
By Claim <ref>, we know that:
(1-α)^2 ((Σ̂ + λ I)^-1 A (Σ̂ + λ I)^-1 B ) = ((Σ̂_2 + Z_1 )^-1 A (Σ̂_2 + Z_1 )^-1 B )
∼_(A)κ_1^2 ( ((Σ + κ_1 Z_1 )^-1 A (Σ + κ_1 Z_1 )^-1 B ) + E_1).
where (A) applies Lemma <ref> and Claim <ref>.
Furthermore, by Lemma <ref>, it holds that:
λ^2((Σ̂ + λ I)^-1 A (Σ̂ + λ I)^-1 B ) ∼κ^2 (((Σ + κ I )^-1 A (Σ + κ I )^-1 B ) + E_2 ).
Putting this all together yields the desired result.
Consider the setup of Lemma <ref>, and assume the notation above. Assume α < 1. Let Z_1 = α/1-αΣ̂_1 + /1-α I, and let κ_1 = κ(1, N(1-α), Z_1^-1/2Σ Z_1^-1/2). Then it holds that:
κ_1^2 ((Σ + κ_1 Z_1)^-1Σ (Σ + κ_1 Z_1)^-1Σ)/1 - 1/N(1-α)((Σ + κ_1 Z_1)^-1Σ (Σ + κ_1 Z_1)^-1Σ)∼(1-α)^2 κ^2/λ^2 (Σ^2 (Σ + κ I)^-2)/1 - 1/N(Σ^2 (Σ + κ I)^-2)
By Claim <ref>, we know that:
(1-α)^2 ((Σ̂ + λ I)^-1Σ(Σ̂ + λ I)^-1Σ)
= ((Σ̂_2 + Z_1 )^-1Σ(Σ̂_2 + Z_1 )^-1Σ)
∼_(A)κ_1^2 (1 + 1/N(1-α)((Σ + κ_1 Z_1)^-1Σ (Σ + κ_1 Z_1)^-1Σ)/1-1/N(1-α)((Σ + κ_1 Z_1)^-1Σ (Σ + κ_1 Z_1)^-1Σ)) ((Σ + κ_1 Z_1 )^-1Σ(Σ + κ_1 Z_1 )^-1Σ)
= κ_1^2 ((Σ + κ_1 Z_1)^-1Σ (Σ + κ_1 Z_1)^-1Σ)/1 - 1/N(1-α)((Σ + κ_1 Z_1)^-1Σ (Σ + κ_1 Z_1)^-1Σ)
where (A) applies Lemma <ref> and Claim <ref>.
Furthermore, by Lemma <ref>, it holds that:
λ^2 ((Σ̂ + λ I)^-1Σ(Σ̂ + λ I)^-1Σ)
∼_(A)κ^2 (1 + 1/N(Σ^2(Σ + κ I)^-2)/1-1/N(Σ^2(Σ + κ I)^-2) ((Σ + κ I )^-1Σ(Σ + κ I )^-1Σ)
= κ^2 ((Σ^2 (Σ + κ I )^-2)/1-1/N(Σ^2(Σ + κ I)^-2) .
where (A) applies Lemma <ref>.
Putting this all together yields the desired result.
Next, we relate the random effective regularizer κ_1 to the deterministic effective regularizer κ(, N, Σ).
Consider the setup of Lemma <ref>, and assume the notation above. Assume α < 1. Let Z_1 = α/1-αΣ̂_1 + /1-α I, and let κ_1 = κ(1, N(1-α), Z_1^-1/2Σ Z_1^-1/2). Let κ = κ(, N, Σ). Then, it holds that κ_1 ∼κ.
Recall that κ_1 = κ(1, N (1-α), Z_1^-1/2Σ Z_1^-1/2) is the unique value such that:
1/κ_1 + 1/ N (1-α)((Z_1^-1/2Σ Z_1^-1/2 + κ_1 I)^-1 Z_1^-1/2Σ Z_1^-1/2) = 1.
We can write this as:
1 + κ_1/ N (1-α)((Σ + κ_1 Z_1)^-1Σ) = κ_1.
Now we apply Lemma <ref> to see that:
κ_1 = 1 + κ_1/ N (1-α)((Σ + κ_1 Z_1)^-1Σ) ∼ 1 + 1/ N (1-α)(1-α) κ/λ((Σ + κ I)^-1Σ).
We can write this to see that:
κ_1 ∼κ/λ(λ/κ + 1/N((Σ + κ I)^-1Σ) ) = κ/λ.
This implies that λκ_1 ∼κ as desired.
The proofs of these results relied on the following facts.
Consider the setup of Lemma <ref>, and assume the notation above. Assume α < 1. Let Z_1 = α/1-αΣ̂_1 + /1-α I. Then it holds that:
(Σ̂ + λ I)^-1 = (1-α)^-1 (Σ̂_2 + Z_1)^-1.
We observe that:
(1-α) (Σ̂ + I)^-1 = (1-α) (αΣ̂_1 + (1-α) Σ̂_2 + I)^-1
= (1-α) (1-α)^-1(Σ̂_2 + α/1-αΣ̂_1 + /1-α I)^-1
= (Σ̂_2 + Z_1 )^-1,
where Z_1 = α/1-αΣ̂_1 + /1-α I.
Consider the setup of Lemma <ref>, and assume the notation above. Assume α < 1. Let Z_1 = α/1-αΣ̂_1 + /1-α I. Then it holds that Z_1 and Z_1^-1 both have bounded operator norm.
Since Σ̂_1 is PSD, we observe that:
Z_1_op = α/1-αΣ̂_1_op + /1-α.
The fact that Σ̂_1_op is bounded follows from the boundedness requirements from Assumption <ref>. This proves that Z_1_op is bounded.
To see that Z_1^-1 is also bounded, note that:
Z_1^-1_op≥1-α/
§.§ Analysis of Term 1 (T1)
We show the following deterministic equivalent for term 1. This analysis is identical to the analysis of the deterministic equivalent for single-objective linear regression <cit.>, and we include it for completeness.
Consider the setup of Lemma <ref>, and assume the notation above. Then it holds that:
λ^2 ((Σ̂ + I)^-1Σ (Σ̂ + I)^-11) ∼κ^2/1 - 1/N(Σ^2 (Σ + κ I)^-2)·(Σ (Σ + κ I)^-21)
We apply Lemma <ref> to see that:
λ^2 ((Σ̂ + I)^-1Σ (Σ̂ + I)^-11)
∼κ^2 ((Σ + κ I)^-1Σ (Σ + κ I)^-11) + 1/N(Σ^2 (Σ + κ I)^-2)/1 - 1/N(Σ^2 (Σ + κ I)^-2)
= κ^2/1 - 1/N(Σ^2 (Σ + κ I)^-2)·(Σ (Σ + κ I)^-21),
as desired.
§.§ Analysis of Term 2 (T2)
We show the following deterministic equivalent for term 2.
Consider the setup of Lemma <ref>, and assume the notation above. Then it holds that:
(1-α)^2 (Σ̂_2 (Σ̂ + I)^-1Σ (Σ̂ + I)^-1Σ̂_2 )
∼(1-α)^2/1 - 1/N(Σ^2 (Σ + κ I)^-2)( (( Σ + κ I )^-1Σ( Σ + κ I )^-1ΣΣ))
+ (1-α)1/N(Σ^2 (Σ + κ I)^-2)/1 - 1/N(Σ^2 (Σ + κ I)^-2)·((Σ)- 2 (1-α) ( (Σ+ κ I)^-1ΣΣ))
The key idea of the proof is to unwrap the randomness in layers. First, we condition on Σ̂_1 and replace the randomness Σ̂_2 with a deterministic equivalent where the effective regularizer κ_1 depends on Σ̂_1 (Lemma <ref>). At this stage, we unfortunately cannot directly deal with the randomness Σ̂_1 with deterministic equivalence due to the presence of terms κ_1 which depend on Σ̂_1, and we instead apply the sublemmas from the previous section.
The following lemma replaces the randomness Σ̂_2 with a deterministic equivalent.
Consider the setup of Lemma <ref>, and assume the notation above. Assume that < 1. Let Z_1 = α/1-αΣ̂_1 + /1-α I, and let κ_1 = κ(1, N(1-α), Z_1^-1/2Σ Z_1^-1/2). Then it holds that:
(1-α)^2 ( Σ̂_2 (Σ̂ + I)^-1Σ (Σ̂ + I)^-1Σ̂_2 )
∼(( Σ + κ_1 Z_1 )^-1Σ( Σ + κ_1 Z_1 )^-1ΣΣ)/1 - 1/N (1-α)((Σ + κ_1 Z_1)^-1Σ (Σ + κ_1 Z_1)^-1Σ)
+ 1/N (1-α)((Σ + κ_1 Z_1)^-1Σ (Σ + κ_1 Z_1)^-1Σ)/1 - 1/N (1-α)((Σ + κ_1 Z_1)^-1Σ (Σ + κ_1 Z_1)^-1Σ)·((Σ) - 2 (Σ (Σ + κ_1 Z_1)^-1Σ) ).
By Claim <ref> we have that:
(1-α)^2 ( Σ̂_2 (Σ̂ + I)^-1Σ (Σ̂ + I)^-1Σ̂_2 ) = (Σ̂_2 (Σ̂_2 + Z_1 )^-1Σ(Σ̂_2 + Z_1 )^-1Σ̂_2 )
∼_(A)(Σ (Σ + κ_1 Z_1)^-1Σ (Σ + κ_1 Z_1)^-1Σ) + E
= ((Σ + κ_1 Z_1)^-1Σ (Σ + κ_1 Z_1)^-1ΣΣ) + E
where (A) follows from Lemma <ref> and Claim <ref>, and E is defined such that
E :=1/N (1-α)((Σ + κ_1 Z_1)^-1Σ (Σ + κ_1 Z_1)^-1Σ)/1 - 1/N (1-α)(Σ + κ_1 Z_1)^-1Σ (Σ + κ_1 Z_1)^-1Σ)· (κ_1)^2 (Z_1 ( Σ + κ_1 Z_1 )^-1Σ( Σ + κ_1 Z_1 )^-1 Z_1 ).
and κ_1 = κ(λ, N(1-α), Z_1^-1/2Σ Z_1^-1/2).
Note that:
(κ_1)^2 (Z_1 ( Σ + κ_1 Z_1 )^-1Σ( Σ + κ_1 Z_1 )^-1 Z_1 )
= ((κ_1 Z_1) ( Σ + κ_1 Z_1 )^-1Σ( Σ + κ_1 Z_1 )^-1 (κ_1 Z_1) )
= (( I - Σ (Σ + κ_1 Z_1)^-1) Σ( I - Σ (Σ + κ_1 Z_1)^-1)^T )
= (Σ) - 2 ((Σ + κ_1 Z_1)^-1ΣΣ) + ((Σ + κ_1Z_1)^-1Σ (Σ + κ_1Z_1)^-1ΣΣ).
Note that:
((Σ + κ_1Z_1)^-1Σ (Σ + κ_1Z_1)^-1ΣΣ)
+ ((Σ + κ_1Z_1)^-1Σ (Σ + κ_1Z_1)^-1ΣΣ) ·1/N (1-α)((Σ + κ_1 Z_1)^-1Σ (Σ + κ_1 Z_1)^-1Σ)/1 - 1/N (1-α)(Σ + κ_1 Z_1)^-1Σ (Σ + κ_1 Z_1)^-1Σ)
= ((Σ + κ_1Z_1)^-1Σ (Σ + κ_1Z_1)^-1ΣΣ)/1 - 1/N (1-α)(Σ + κ_1 Z_1)^-1Σ (Σ + κ_1 Z_1)^-1Σ)
Now we are ready to prove Lemma <ref>.
The statement follows trivially if α = 1. By Lemma <ref>, it holds that:
(1-α)^2 ( Σ̂_2 (Σ̂ + I)^-1Σ (Σ̂ + I)^-1Σ̂_2 )
∼(( Σ + κ_1 Z_1 )^-1Σ( Σ + κ_1 Z_1 )^-1ΣΣ)/1 - 1/N (1-α)((Σ + κ_1 Z_1)^-1Σ (Σ + κ_1 Z_1)^-1Σ )
+ 1/N (1-α)((Σ + κ_1 Z_1)^-1Σ (Σ + κ_1 Z_1)^-1Σ )/1 - 1/N (1-α)((Σ + κ_1 Z_1)^-1Σ (Σ + κ_1 Z_1)^-1Σ )·((Σ)- 2 ((Σ + κ_1 Z_1)^-1ΣΣ))
∼_(A) (1-α)^2 ( (( Σ + κ I )^-1Σ( Σ + κ I )^-1ΣΣ))
+ 1/N(Σ^2 (Σ + κ I)^-2)/1 - 1/N(Σ^2 (Σ + κ I)^-2)· (1-α)^2 ·((Σ + κ I)^-1Σ (Σ + κ I)^-1ΣΣ)
+ 1/N (1-α)((Σ + κ_1 Z_1)^-1Σ (Σ + κ_1 Z_1)^-1Σ )/1 - 1/N (1-α)((Σ + κ_1 Z_1)^-1Σ (Σ + κ_1 Z_1)^-1Σ )·((Σ)- 2(1-α) ( (Σ+ κ I)^-1ΣΣ))
= (1-α)^2/1 - 1/N(Σ^2 (Σ + κ I)^-2)( (( Σ + κ I )^-1Σ( Σ + κ I )^-1ΣΣ))
+ 1/N (1-α)(Σ^2 (Σ + κ_1 Z_1)^-2)/1 - 1/N (1-α)(Σ^2 (Σ + κ_1 Z_1)^-2)·((Σ)- 2(1-α) ( (Σ+ κ I)^-1ΣΣ))
∼ _(B)(1-α)^2/1 - 1/N(Σ^2 (Σ + κ I)^-2)( (( Σ + κ I )^-1Σ( Σ + κ I )^-1ΣΣ))
+ (1-α) 1/N(Σ^2 (Σ + κ I)^-2)/1 - 1/N(Σ^2 (Σ + κ I)^-2)·((Σ)- 2(1-α) ( (Σ+ κ I)^-1ΣΣ))
where (A) applies Lemma <ref>, Lemma <ref>, and (B) uses Lemma <ref> and Lemma <ref>.
§.§ Analysis of Term 3 (T3)
We show the following deterministic equivalent for term 3.
Consider the setup of Lemma <ref> and assume the notation above. Let 1 = (β_1 - β_2) β_1^T, and let κ = κ(λ, N, Σ). Then it holds that:
2 λ (1-α) ( (Σ̂ + I)^-1Σ (Σ̂ + I)^-1Σ̂_21)
∼2 (1-α) κ/1 - 1/N(Σ^2 (Σ + κ I)^-2)((Σ + κ I)^-1Σ (Σ + κ I)^-1Σ1)
- 2 (1-α) 1/N(Σ^2 (Σ + κ I)^-2)/1 - 1/N(Σ^2 (Σ + κ I)^-2)·κ( (Σ + κ I)^-1Σ1)
The analysis follows a similar structure to the analysis of (T2); we similarly unwrap the randomness in layers.
Consider the setup of Lemma <ref> and assume the notation above. Assume < 1. Let Z_1 = α/1-αΣ̂_1 + /1-α I, and let κ_1 = κ(1, N(1-α), Z_1^-1/2Σ Z_1^-1/2). Then it holds that:
2 λ (1-α)^2 ( (Σ̂ + I)^-1Σ (Σ̂ + I)^-1Σ̂_2 1)
∼ 2 λκ_1/(1-α)((Σ + κ_1 Z_1)^-1Σ (Σ + κ_1 Z_1)^-1Σ1)/1 - 1/N (1-α)((Σ + κ_1 Z_1)^-1Σ(Σ + κ_1 Z_1)^-1Σ )
- 2 λκ_1/(1-α)·1/N (1-α)(Σ^2 (Σ + κ_1 Z_1)^-2)/1 - 1/N (1-α)((Σ + κ_1 Z_1)^-1Σ(Σ + κ_1 Z_1)^-1Σ )·(Σ(Σ + κ_1 Z_1)^-1Σ1).
By Claim <ref> we have that:
2 λ (1-α) ( (Σ̂ + I)^-1Σ (Σ̂ + I)^-1Σ̂_2 1) = 2 λ/(1-α)( (Σ̂_2 + Z_1 )^-1Σ(Σ̂_2 + Z_1 )^-1Σ̂_2 1)
∼_(A) 2 λ/(1-α)(κ_1 ( (Σ + κ_1 Z_1)^-1Σ (Σ + κ_1 Z_1)^-1Σ1) - E )
where (A) follows from Lemma <ref> and Claim <ref>, and E is defined such that
E :=1/N (1-α)((Σ + κ_1 Z_1)^-1Σ (Σ + κ_1 Z_1)^-1Σ )/1 - 1/N (1-α)((Σ + κ_1 Z_1)^-1Σ (Σ + κ_1 Z_1)^-1Σ )· (κ_1)^2 (( Σ + κ_1 Z_1 )^-1Σ( Σ + κ_1 Z_1 )^-1 Z_1 1).
and κ_1 = κ(λ, N(1-α), Z_1^-1/2Σ Z_1^-1/2).
Note that:
(κ_1)^2 ( ( Σ + κ_1 Z_1 )^-1Σ( Σ + κ_1 Z_1 )^-1 Z_1 1)
= κ_1 (( Σ + κ_1 Z_1 )^-1Σ( Σ + κ_1 Z_1 )^-1 (κ_1 Z_1) 1)
= κ_1 ( (Σ + κ_1 Z_1)^-1Σ( I - (Σ + κ_1 Z_1)^-1Σ) 1)
= κ_1 ( (Σ + κ_1 Z_1)^-1Σ1) - κ_1 ((Σ + κ_1Z_1)^-1Σ (Σ + κ_1Z_1)^-1Σ1)
Moreover, note that:
2 λκ_1 /(1-α)( (Σ + κ_1 Z_1)^-1Σ (Σ + κ_1 Z_1)^-1Σ1)
+ 2 λ/(1-α)·1/N (1-α)((Σ + κ_1 Z_1)^-1Σ (Σ + κ_1 Z_1)^-1Σ )/1 - 1/N (1-α)((Σ + κ_1 Z_1)^-1Σ (Σ + κ_1 Z_1)^-1Σ )·κ_1 ((Σ + κ_1Z_1)^-1Σ (Σ + κ_1Z_1)^-1Σ1)
= 2 λ/(1-α)((Σ + κ_1Z_1)^-1Σ (Σ + κ_1Z_1)^-1Σ1)/1 - 1/N (1-α)((Σ + κ_1 Z_1)^-1Σ (Σ + κ_1 Z_1)^-1Σ )·κ_1.
Now we are ready to prove Lemma <ref>.
The statement follows trivially if α = 1.
By Lemma <ref>, it holds that:
2 λ (1-α)^2 ( (Σ̂ + I)^-1Σ (Σ̂ + I)^-1Σ̂_2 1)
∼ 2 λκ_1/(1-α)((Σ + κ_1 Z_1)^-1Σ (Σ + κ_1 Z_1)^-1Σ1)/1 - 1/N (1-α)((Σ + κ_1 Z_1)^-1Σ (Σ + κ_1 Z_1)^-1Σ )
- 1/N (1-α)((Σ + κ_1 Z_1)^-1Σ (Σ + κ_1 Z_1)^-1Σ )/1 - 1/N (1-α)((Σ + κ_1 Z_1)^-1Σ (Σ + κ_1 Z_1)^-1Σ )· 2 λκ_1/(1-α)((Σ + κ_1 Z_1)^-1Σ1)
∼_(A) 2 (1-α) κ((Σ + κ I)^-1Σ (Σ + κ I)^-1Σ1)
+ 2 (1-α) κ1/N(Σ^2 (Σ + κ I)^-2)/1 - 1/N(Σ^2 (Σ + κ I)^-2)((Σ + κ I)^-1Σ (Σ + κ I)^-1Σ1)
- 2 1/N (1-α)((Σ + κ_1 Z_1)^-1Σ (Σ + κ_1 Z_1)^-1Σ )/1 - 1/N (1-α)((Σ + κ_1 Z_1)^-1Σ (Σ + κ_1 Z_1)^-1Σ )·κ( (Σ + κ I)^-1Σ1)
= 2 (1-α) κ/1 - 1/N(Σ^2 (Σ + κ I)^-2)((Σ + κ I)^-1Σ (Σ + κ I)^-1Σ1)
- 2 1/N (1-α)((Σ + κ_1 Z_1)^-1Σ (Σ + κ_1 Z_1)^-1Σ )/1 - 1/N (1-α)((Σ + κ_1 Z_1)^-1Σ (Σ + κ_1 Z_1)^-1Σ )·κ( (Σ + κ I)^-1Σ1)
∼_(B) 2 (1-α) κ/1 - 1/N(Σ^2 (Σ + κ I)^-2)((Σ + κ I)^-1Σ (Σ + κ I)^-1Σ1)
- 2 (1-α) 1/N(Σ^2 (Σ + κ I)^-2)/1 - 1/N(Σ^2 (Σ + κ I)^-2)·κ( (Σ + κ I)^-1Σ1)
where (A) applies Lemma <ref>, Lemma <ref>, and Lemma <ref>, and (B) uses Lemma <ref> and Lemma <ref>.
§.§ Proof of Lemma <ref>
Lemma <ref> follows from the sublemmas in this section.
We apply Claim <ref> to decompose the error in terms (T1), (T2), and (T3). We replace these terms with deterministic equivalents using Lemma <ref>, Lemma <ref>, and Lemma <ref>. The statement follows from adding these terms.
§.§ Reformulation of Lemma <ref> using assumptions from Section <ref>
Under the assumptions from Section <ref>, we show the following:
Suppose that power scaling holds for the eigenvalues and alignment coefficients with scaling γ, δ > 0 and correlation coefficient ρ∈ [0, 1), and suppose that P = ∞. Suppose that ∈ (0,1), and N ≥ 1. Let L_1^ := L_1^(β_1, β_2, , , N, α) be the deterministic equivalent from Lemma <ref>. Let κ = κ(λ, N, Σ) from Definition <ref>. Let L^*(ρ) = 𝔼_[(β_1 - β_2)^T Σ (β_1 - β_2)]. Then it holds that:
Q ·𝔼_[L_1^] = κ^2 (1 - 2(1-α)^2 (1-ρ)) ∑_i=1^P i^-δ -1- γ/(i^-1-γ + κ)^2 + (1-α)^2 L^*(ρ)
+ 2κ (1-ρ) (1-α) (1 - 2 (1-α)) ∑_i=1^P i^-δ - 2(1+γ)/(i^-1-γ+κ)^2
+ 2 (1-α) (1-ρ) 1/N(∑_i=1^P i^-2-2γ/(i^-1-γ + κ)^2) · (1 - 2(1-α)) ∑_i=1^P i^-δ - 2-2γ/i^-1-γ + κ,
where Q = 1 - 1/N∑_i=1^P i^-2-2γ/(i^-1-γ + κ)^2.
Before proving Lemma <ref>, we prove a number of sublemmas where we analyze each of the terms in Lemma <ref> using the assumptions from Section <ref>. In the proofs in this section, we use the notation F ≈ F' to denote that F = Θ(F') where the Θ is allowed to hide dependence on the scaling exponents γ and δ. Moreover let Σ = V Λ V^T be the eigendecomposition of Σ, where Λ is a diagonal matrix consisting of the eigenvalues.
Suppose that the power-law scaling holds for the eigenvalues and alignment coefficients with scaling exponents γ, δ > 0 and correlation coefficient ρ∈ [0, 1), suppose that P = ∞. Assume the notation from Lemma <ref>. Let ν = min(2(1+γ), γ + δ). Then it holds that:
𝔼_[T_1] := κ^2 ·(ΣΣ_κ^-2𝔼_[1])= κ^2 ∑_i=1^P i^-δ-1-γ/(i^-1-γ + κ)^2 .
Observe that:
(ΣΣ_κ^-2𝔼_[1]) = (Λ (Λ + κ I)^-2𝔼_[V^Tβ_1 β_1^T V])
= ∑_i=1^P i^-1-γ/(i^-1-γ + κ)^2·𝔼_[⟨β_1, v_i⟩^2]
= ∑_i=1^P i^-δ-1-γ/(i^-1-γ + κ)^2
Suppose that the power-law scaling holds for the eigenvalues and alignment coefficients with scaling exponents γ, δ > 0 and correlation coefficient ρ∈ [0, 1), suppose that P = ∞. Assume the notation from Lemma <ref>. Then it holds that:
𝔼_[T_2] := (1-α)^2 ( (Σ_κ^-2Σ^3 𝔼_[] )) = 2 (1-α)^2 (1-ρ) ∑_i=1^P i^-δ - 3(1+γ)/(i^-1-γ + κ)^2.
First, we observe that
𝔼_[⟨β_1 - β_2, v_i⟩^2] = 𝔼_[⟨β_1 , v_i⟩^2] + 𝔼_[⟨β_2, v_i⟩^2] - 2 𝔼_[⟨β_1 , v_i⟩⟨β_2, v_i⟩] = i^-δ + i^-δ - 2 ρ i^-δ = 2(1-ρ) i^-δ.
It is easy to see that:
(Σ_κ^-2Σ^3 𝔼_[] ) = (Λ^3 (Λ + κ I)^-2𝔼_[V^T (β_1 - β_2) (β_1 - β_2)^T V])
= ∑_i=1^P i^-3(1+γ)/(i^-1-γ + κ)^2·𝔼_[⟨β_1 - β_2, v_i⟩^2]
= 2 (1-ρ) ∑_i=1^P i^-δ - 3(1+γ)/(i^-1-γ + κ)^2.
Suppose that the power-law scaling holds for the eigenvalues and alignment coefficients with scaling exponents γ, δ > 0 and correlation coefficient ρ∈ [0, 1), suppose that P = ∞. Assume the notation from Lemma <ref>. Then it holds that:
𝔼_[T_3] := 2 (1-α) κ·(Σ_κ^-2Σ^2 1) = 2(1-α) κ (1-ρ) ∑_i=1^P i^-δ-2-2γ/(i^-1-γ + κ)^2.
First, we observe that
𝔼_[⟨β_1 - β_2, v_i⟩⟨β_1, v_i⟩ ] = 𝔼_[⟨β_1 , v_i⟩^2] - 𝔼_[⟨β_1 , v_i⟩⟨β_2, v_i⟩] = i^-δ - ρ i^-δ = (1-ρ) i^-δ.
Observe that:
(Σ_κ^-2Σ^2 1) = (Λ^2 (Λ + κ I)^-2𝔼_[V^T (β_1 - β_2) β_1^T V])
= ∑_i=1^P i^-2(1+γ)/(i^-1-γ + κ)^2·𝔼_[⟨β_1 - β_2, v_i⟩⟨β_1, v_i⟩ ]
= (1-ρ) ∑_i=1^P i^-δ-2-2γ/(i^-1-γ + κ)^2.
This means that:
𝔼_[T_3] = 2(1-α) κ (1-ρ) ∑_i=1^P i^-δ-2-2γ/(i^-1-γ + κ)^2.
Suppose that the power-law scaling holds for the eigenvalues and alignment coefficients with scaling exponents γ, δ > 0 and correlation coefficient ρ∈ [0, 1), suppose that P = ∞. Assume the notation from Lemma <ref>. Then it holds that:
|𝔼_[T_4]| := 2 κ (1-α) 1/N(Σ^2 Σ_κ ^-2) ·(Σ_κ^-1Σ𝔼_[1] )
= 2 κ (1-α) (1-ρ) 1/N(∑_i=1^P i^-2-2γ/(i^-1-γ + κ)^2) (∑_i=1^P i^-δ-1-γ/i^-1-γ + κ)
First, we observe that
𝔼_[⟨β_1 - β_2, v_i⟩⟨β_1, v_i⟩ ] = 𝔼_[⟨β_1 , v_i⟩^2] - 𝔼_[⟨β_1 , v_i⟩⟨β_2, v_i⟩] = i^-δ + - ρ i^-δ = (1-ρ) i^-δ.
Observe that:
(Σ_κ^-1Σ𝔼_[1] ) = (Λ (Λ + κ I)^-1𝔼_[V^T (β_1 - β_2) β_1^T V])
= ∑_i=1^P i^-1-γ/i^-1-γ + κ·𝔼_[⟨β_1 - β_2, v_i⟩⟨β_1, v_i⟩]
= (1-ρ) ∑_i=1^P i^-δ-1-γ/i^-1-γ + κ.
Now, apply Lemma <ref>, we see that:
|𝔼_[T_4]| := 2 κ (1-α) 1/N(Σ^2 Σ_κ ^-2) ·(Σ_κ^-1Σ1)
=_(A) 2 κ (1-α) (1-ρ) 1/N(∑_i=1^P i^-2-2γ/(i^-1-γ + κ)^2) (∑_i=1^P i^-δ-1-γ/i^-1-γ + κ)
where (A) follows from Lemma <ref>.
Suppose that the power-law scaling holds for the eigenvalues and alignment coefficients with scaling exponents γ, δ > 0 and correlation coefficient ρ∈ [0, 1), suppose that P = ∞. Assume the notation from Lemma <ref>, and similarly let
𝔼_[T_5] := (1-α) 1/N(Σ^2 Σ_κ ^-2) ·((Σ𝔼_[] )- 2 (1-α) ( Σ_κ^-1Σ^2 𝔼_[] ) )
= 2 (1-α) (1-ρ) 1/N( ∑_i=1^P i^-2-2γ/(i^-1-γ + κ)^2) ·(∑_i=1^P i^-δ - 1-γ - 2 (1-α) ·∑_i=1^P i^-δ-2-2γ/(i^-1-γ + κ)).
First, we observe that
𝔼_[⟨β_1 - β_2, v_i⟩^2] = 𝔼_[⟨β_1 , v_i⟩^2] + 𝔼_[⟨β_2, v_i⟩^2] - 2 𝔼_[⟨β_1 , v_i⟩⟨β_2, v_i⟩] = i^-δ + i^-δ - 2 ρ i^-δ = 2(1-ρ) i^-δ.
Now, observe that:
𝔼_[T_5] := (1-α) 1/N(Σ^2 Σ_κ ^-2) ·((Σ𝔼_[] )- 2 (1-α) ( Σ_κ^-1Σ^2 𝔼_[] ) )
= (1-α) 1/N(Σ^2 Σ_κ ^-2) ·((Λ𝔼_[V^T (β_1 - β_2) (β_1 - β_2)^T V] ) )
- (1-α) 1/N(Σ^2 Σ_κ ^-2) ·(2 (1-α) ( (Λ + κ I)^-1Λ^2 𝔼_[V^T (β_1 - β_2) (β_1 - β_2)^T V] ) )
= (1-α) 1/N(Σ^2 Σ_κ ^-2) ·(∑_i=1^P i^-1-γ⟨β_1 - β_2, v_i⟩^2 - 2 (1-α) ·∑_i=1^P i^-2-2γ/(i^-1-γ + κ)⟨β_1 - β_2, v_i⟩^2 )
= 2 (1-α) (1-ρ) 1/N(Σ^2 Σ_κ ^-2) ·(∑_i=1^P i^-δ - 1-γ - 2 (1-α) ·∑_i=1^P i^-δ-2-2γ/(i^-1-γ + κ))
=_(A) 2 (1-α) (1-ρ) 1/N( ∑_i=1^P i^-2-2γ/(i^-1-γ + κ)^2) ·(∑_i=1^P i^-δ - 1-γ - 2 (1-α) ·∑_i=1^P i^-δ-2-2γ/(i^-1-γ + κ)).
where (A) uses Lemma <ref>.
The proofs of these sublemmas use the following fact.
Suppose that the power-law scaling holds for the eigenvalues and alignment coefficients with scaling exponents γ, δ > 0 and correlation coefficient ρ∈ [0, 1), suppose that P = ∞. Assume the notation from Lemma <ref>. Then it holds that:
(Σ^2 (Σ + κ I)^-2) = ∑_i=1^P i^-2-2γ/(i^-1-γ + κ)^2.
We see that:
(1-α) 1/N(Σ^2 Σ_κ ^-2) = (1-α) 1/N(V Λ^2 (Λ + κ I)^-2 V^T)
= (1-α) 1/N(Λ^2 (Λ + κ I)^-2)
= ∑_i=1^P i^-2-2γ/(i^-1-γ + κ)^2.
Now, we are ready to prove Lemma <ref>.
By Lemma <ref>, we know:
Q = 1 - 1/N(Σ^2 (Σ + κ I)^-2)= 1 - 1/N∑_i=1^P i^-2-2γ/(i^-1-γ + κ)^2.
Moreover, we have that:
Q ·𝔼_[L_1^] =_(A)𝔼_[T_1 + T_2 + T_3 + T_4 + T_5]
=_(B)κ^2 ∑_i=1^P i^-δ - 1- γ/(i^-1-γ + κ)^2 + 2 (1-α)^2 (1-ρ) ∑_i=1^P i^-δ - 3 (1+γ)/(i^-1-γ + κ)^2
+ 2κ (1-ρ) (1-α) ∑_i=1^P i^-δ - 2(1+γ)/(i^-1-γ+κ)^2
- 2κ (1-ρ) (1-α) 1/N(∑_i=1^P i^-2-2γ/(i^-1-γ + κ)^2) (∑_i=1^P i^-δ - 1-γ/i^-1-γ + κ)
+ 2 (1-α) (1-ρ) 1/N( ∑_i=1^P i^-2-2γ/(i^-1-γ + κ)^2) ·(∑_i=1^P i^-δ - 1-γ - 2 (1-α) ·∑_i=1^P i^-δ-2-2γ/(i^-1-γ + κ)).
where (A) follows from Lemma <ref>, and (B) follows from Lemmas <ref>-<ref>.
By Claim <ref>, we know that:
L^*(ρ) = 2 (1-ρ) ∑_i=1^P i^-δ-1-γ= 2 (1-ρ) ∑_i=1^P i^-δ-3(1+γ)/(i^-1-γ)^2.
This means that:
L^*(ρ) - 2 (1-ρ) ∑_i=1^P i^-δ - 3 (1+γ)/(i^-1-γ + κ)^2
= 2 (1-ρ) ∑_i=1^P (i^-δ-3(1+γ)/(i^-1-γ)^2 - i^-δ - 3 (1+γ)/(i^-1-γ + κ)^2)
= 2 (1-ρ) ∑_i=1^P (i^-δ-3(1+γ)· ((i^-1-γ + κ)^2 - (i^-1-γ)^2)/(i^-1-γ)^2 · (i^-1-γ + κ)^2)
= 2 κ^2 (1-ρ) ∑_i=1^P (i^-δ-3(1+γ)/(i^-1-γ)^2 · (i^-1-γ + κ)^2) + 4 κ (1-ρ) ∑_i=1^P (i^-δ-3(1+γ)· i^-1-γ/(i^-1-γ)^2 · (i^-1-γ + κ)^2)
= 2 κ^2 (1-ρ) ∑_i=1^P (i^-δ-1-γ/(i^-1-γ + κ)^2) + 4 κ (1-ρ) ∑_i=1^P (i^-δ-2(1+γ)/(i^-1-γ + κ)^2)
Applying this and some other algebraic manipulations, we obtain that:
Q · L_1^ = κ^2 (1 - 2(1-α)^2 (1-ρ)) ∑_i=1^P i^-δ - 1-γ/(i^-1-γ + κ)^2 + (1-α)^2 L^*(ρ)
+ 2κ (1-ρ) (1-α) (1 - 2 (1-α)) ∑_i=1^P i^-δ - 2(1+γ)/(i^-1-γ+κ)^2
- 2 (1-α) (1-ρ) 1/N(∑_i=1^P i^-2-2γ/(i^-1-γ + κ)^2) (∑_i=1^P i^-δ - 1-γ - ∑_i=1^P i^-δ - 2-2γ/i^-1-γ + κ)
+ 2 (1-α) (1-ρ) 1/N( ∑_i=1^P i^-2-2γ/(i^-1-γ + κ)^2) ·(∑_i=1^P i^-δ - 1-γ - 2 (1-α) ·∑_i=1^P i^-δ-2-2γ/(i^-1-γ + κ))
= κ^2 (1 - 2(1-α)^2 (1-ρ)) ∑_i=1^P i^-δ - γ/(i^-1-γ + κ)^2 + (1-α)^2 L^*(ρ)
+ 2κ (1-ρ) (1-α) (1 - 2 (1-α)) ∑_i=1^P i^-δ - 2(1+γ)/(i^-1-γ+κ)^2
+ 2 (1-α) (1-ρ) 1/N(∑_i=1^P i^-2-2γ/(i^-1-γ + κ)^2) · (1 - 2(1-α)) ∑_i=1^P i^-δ - 2-2γ/i^-1-γ + κ.
§.§ Proof of Theorem <ref>
We now prove Theorem <ref>. In the proof, we again use the notation F ≈ F' to denote F = Θ(F'). The main ingredient is Lemma <ref>, coupled with the auxiliary calculations in Appendix <ref>.
The proof boils down to three steps: (1) obtaining an exact expression, (2) obtaining an up-to-constants asymptotic expression in terms of κ and Q, and (3) substituting in κ and Q.
Step 1: Exact expression.
We apply Lemma <ref> to see that:
Q · L_1^ = κ^2 (1 - 2(1-α)^2 (1-ρ)) ∑_i=1^P i^-δ - 1-γ/(i^-1-γ + κ)^2 + (1-α)^2 L^*(ρ)
+ 2κ (1-ρ) (1-α) (1 - 2 (1-α)) ∑_i=1^P i^-δ - 2(1+γ)/(i^-1-γ+κ)^2
+ 2 (1-α) (1-ρ) 1/N(∑_i=1^P i^-2-2γ/(i^-1-γ + κ)^2) · (1 - 2(1-α)) ∑_i=1^P i^-δ - 2-2γ/i^-1-γ + κ,
where Q = 1 - 1/N∑_i=1^P i^-2-2γ/(i^-1-γ + κ)^2, where L^*(ρ) = 𝔼_[(β_1 - β_2)^T Σ (β_1 - β_2)], and where κ = κ(Σ, N, λ) as defined in Definition <ref>.
Step 2: Asymptotic expression in terms of κ and Q. We show that
Q · L_1^≈κ^/1+γ + (1-α)^2 (1-ρ) +
(1-α) (1-ρ) κ^-1/1+γ/N.
We analyze this expression term-by-term and repeatedly apply Lemma <ref>. We see that:
κ^2 (1 - 2(1-α)^2 (1-ρ)) ∑_i=1^P i^-δ - 1- γ/(i^-1-γ + κ)^2≈_(A)κ^/1+γ (1 - 2(1-α)^2 (1-ρ)) ≈_(B)κ^/1+γ,
where (A) uses Lemma <ref> and (B) uses that α≥ 0.5. Moreover, we observe that:
(1-α)^2 L^*(ρ) ≈_(C) (1-α)^2 (1-ρ),
where (C) uses Claim <ref>. Moreover, we see that:
2κ (1-ρ) (1-α) (1 - 2 (1-α)) ∑_i=1^P i^-δ - 2(1+γ)/(i^-1-γ+κ)^2 ≈_(D) (1-α) (1-ρ) (1 - 2 (1-α)) max( κ, κ^δ + γ/1+γ)
=_(E) O((1-α) √(1-ρ)max( κ, κ^δ + γ/2(1+γ)))
= O( √((1-α)^2 (1-ρ) ·κ^min(2(1+γ), γ + δ)/1+γ))
=_(F) O( κ^min(2(1+γ), γ + δ)/1+γ + (1-α)^2 (1-ρ) )
= O( κ^/1+γ + (1-α)^2 (1-ρ) )
where (D) uses Lemma <ref>, (E) uses that 1-ρ≤ 1 and that κ = O(1) (which follows from Lemma <ref> and the assumption that λ∈ (0,1)) and (F) follows from AM-GM. Finally, observe that:
2 (1-α) (1-ρ) 1/N(∑_i=1^P i^-2-2γ/(i^-1-γ + κ)^2) · (1 - 2(1-α)) ∑_i=1^P i^-δ - 2-2γ/i^-1-γ + κ
≈ (1 - 2(1-α)) · (1-α) (1-ρ) 1/N(∑_i=1^P i^-2-2γ/(i^-1-γ + κ)^2) ∑_i=1^P i^-δ - 2-2γ/i^-1-γ + κ
≈_(G) (1 - 2(1-α)) ·
(1-α) (1-ρ) κ^-1/1+γ/N
where (G) uses Lemma <ref> twice.
Putting this all together, we see that:
Q · L_1^≈κ^/1+γ + (1-α)^2 (1-ρ) + (1 - 2(1-α)) ·
(1-α) (1-ρ) κ^-1/1+γ/N.
We split into two cases based on α. When α≥ 0.75, we observe that
(1 - 2(1-α)) ·
(1-α) (1-ρ) κ^-1/1+γ/N≈ (1-α) (1-ρ) κ^-1/1+γ/N,
and when α∈ [0.5, 0.75], we observe that
(1 - 2(1-α)) ·
(1-α) (1-ρ) κ^-1/1+γ/N = O( (1-α) (1-ρ) κ^-1/1+γ/N)
and
(1-α)^2 (1-ρ) ≈_(H) (1-α) (1-ρ) κ^-1/1+γ/N
where (H) follows from the fact that κ = Ω(N^-1-γ) by Lemma <ref>. Altogether, this implies that:
Q · L_1^≈κ^/1+γ + (1-α)^2 (1-ρ) +
(1-α) (1-ρ) κ^-1/1+γ/N,
as desired.
Step 2: Substitute in κ and Q. Finally, we apply Lemma <ref> to see that:
Q^-1 = (1 - 1/N∑_i=1^P i^-2-2γ/(i^-1-γ + κ)^2)^-1 = Θ(1).
We apply Lemma <ref> to see that
κ = κ(Σ, N, Σ) = max(N^-1-γ, λ).
Plugging this into the expression derived in Step 2, we obtain the desired expression.
§.§ Proof of Corollary <ref>
We prove Corollary <ref> using Theorem <ref>.
We apply Theorem <ref> to see that:
𝔼_[L_1^] = Θ( max(λ^/1+γ, N^-)_finite data error + (1-α)^2 · (1 - ρ)_mixture error + (1-α) (
min(λ^-1/1+γ, N)/N)
(1-ρ)
_overfitting error).
We split into three cases: N ≤ (1-α)^-1/(1-ρ)^-1/, (1-α)^-1/(1-ρ)^-1/≤ N ≤ (1-α)^-2+/ (1-ρ)^-1/, and N ≥ (1-α)^-2+/ (1-ρ)^-1/.
Case 1: N ≤ (1-α)^-1/(1-ρ)^-1/. We observe that the finite data error dominates regardless of . This is because the condition implies that
max(λ^/1+γ, N^-) ≥ (1-α)(1-ρ),
which dominates both the mixture error and the overfitting error.
Case 2: (1-α)^-1/(1-ρ)^-1/≤ N ≤ (1-α)^-2+/ (1-ρ)^-1/. We show that the finite error term and overfitting error dominate. Let Ñ = min(λ^-1/1+γ, N). We can bound the sum of the finite data error and the overfitting error as:
max(λ^/1+γ, N^-) + (1-α) (
min(λ^-1/1+γ, N)/N)
(1-ρ) = Ñ^- + (1-α) (1-ρ) Ñ/N.
Taking a derivative (and verifying the second order condition), we see that this expression is minimized when:
·Ñ^- - 1 = (1-α) (1-ρ)/N
which solves to:
Ñ = Θ(((1-α) (1-ρ)/N)^-1/1+).
The lower bound on N guarantees that:
Ñ = Θ(((1-α) (1-ρ)/N)^-1/1+) = O (((1-α)^1 + 1/ (1-ρ)^1 + 1/)^-1/1+)= O ((1-α)^-1/ (1-ρ)^-1/) = O(N)
which ensures that Ñ can be achieved by some choice of λ. In particular, we can take = Θ(((1-α) (1-ρ)/N)^1+γ/ + 1).
The resulting sum of the finite error and the overfitting error is:
max(λ^/1+γ, N^-) + (1-α) (
min(λ^-1/1+γ, N)/N) = Θ(((1-α) (1-ρ)/N)^/ + 1).
The upper bound on N guarantees that this dominates the mixture error:
Θ(((1-α) (1-ρ)/N)^/ + 1) = Ω(((1-α)^1 + 2+/ (1-ρ)^1 + 1/)^/ + 1) = Ω((1-α)^2 (1-ρ))
as desired.
Case 3: N ≥ (1-α)^-2+/ (1-ρ)^-1/.
We show that the mixture and the overfitting error terms dominate. First, we observe that the sum of the mixture error and the finite data error is:
(1-α)^2 (1-ρ) + (1-α) (
min(λ^-1/1+γ, N)/N)
(1-ρ) = Θ( (1-α)(1-ρ) (1 - α + min(λ^-1/1+γ, N)/N) ).
This is minimized by taking = Θ((N(1-α))^-1-γ), which yields Θ((1-α)^2 (1-ρ)).
The upper bound on N and the setting of guarantees that this term dominates the finite data error:
max(λ^/1+γ, N^-) = O((N(1-α))^-) ≤ O((1-α)^- (1-α)^2 + (1-ρ) ) = O((1-α)^2 (1-ρ),
as desired.
§.§ Proof of Theorem <ref>
We prove Theorem <ref>.
Like the proof of Theorem <ref>, the proof boils down to three steps: (1) obtaining an exact expression, (2) obtaining an up-to-constants asymptotic expression in terms of κ, and (3) substituting in κ.
Step 1: Exact expression.
We first apply Lemma <ref> to obtain the precise loss:
Q ·𝔼_[L_1^*(β_1, β_2, , λ_E, N, α_E)] =κ^2 (1 - 2(1-α)^2 (1-ρ)) ∑_i=1^P i^-δ -1- γ/(i^-1-γ + κ)^2 + (1-α)^2 L^*(ρ)
+ 2κ (1-ρ) (1-α) (1 - 2 (1-α)) ∑_i=1^P i^-δ - 2(1+γ)/(i^-1-γ+κ)^2
+ 2 (1-α) (1-ρ) 1/N(∑_i=1^P i^-2-2γ/(i^-1-γ + κ)^2) · (1 - 2(1-α)) ∑_i=1^P i^-δ - 2-2γ/i^-1-γ + κ,
where Q = 1 - 1/N∑_i=1^P i^-2-2γ/(i^-1-γ + κ)^2 and where κ = κ(Σ, N, λ) as defined in Definition <ref>.
This can be written as:
𝔼_[L_1^*(β_1, β_2, , λ_E, N, α_E)] - (1-α)^2 L^*(ρ)
=Q^-1·κ^2 (1 - 2(1-α)^2 (1-ρ)) ∑_i=1^P i^-δ - 1- γ/(i^-1-γ + κ)^2
+ Q^-1· 2κ (1-ρ) (1-α) (1 - 2 (1-α)) ∑_i=1^P i^-δ - 2(1+γ)/(i^-1-γ+κ)^2
+ Q^-1· 2 (1-α) (1-ρ) 1/N(∑_i=1^P i^-2-2γ/(i^-1-γ + κ)^2) · (1 - 2(1-α)) ∑_i=1^P i^-δ - 2-2γ/i^-1-γ + κ
+ 1-Q/Q (1-α)^2 L^*(ρ).
Step 2: Asymptotic expression in terms of κ.
We use the notation F ≈ F' to denote that F = Θ(F'). We obtain:
𝔼_[L_1^*(β_1, β_2, , λ_E, N, α_E)] - (1-α)^2 L^*(ρ)
≈_(A)κ^2 (1 - 2(1-α)^2 (1-ρ)) ∑_i=1^P i^-δ -1- γ/(i^-1-γ + κ)^2
+ κ (1-ρ) (1-α) (1 - 2 (1-α)) ∑_i=1^P i^-δ - 2(1+γ)/(i^-1-γ+κ)^2
+ (1-α) (1-ρ) 1/N(∑_i=1^P i^-2-2γ/(i^-1-γ + κ)^2) · (1 - 2(1-α)) ∑_i=1^P i^-δ - 2-2γ/i^-1-γ + κ
+ (1-Q) (1-α)^2 L^*(ρ)
≈_(B)κ^2 ∑_i=1^P i^-δ - 1- γ/(i^-1-γ + κ)^2 + κ (1-ρ) (1-α) ∑_i=1^P i^-δ - 2(1+γ)/(i^-1-γ+κ)^2
+ (1-α) (1-ρ) 1/N(∑_i=1^P i^-2-2γ/(i^-1-γ + κ)^2) ∑_i=1^P i^-δ - 2-2γ/i^-1-γ + κ
+ (1-α)^2 L^*(ρ) ·1/N(∑_i=1^P i^-2-2γ/(i^-1-γ + κ)^2).
where (A) uses that Q^-1 is a constant by Lemma <ref> and (B) uses that α≥ 0.75 and the definition of Q. Now, using the bounds from Lemma <ref>, and the bound from Claim <ref>, we obtain:
𝔼_[L_1^*(β_1, β_2, , λ_E, N, α_E)] - (1-α)^2 L^*(ρ)
≈κ^min(2(1+γ), γ + δ)/1+γ + (1-ρ) (1-α) max( κ, κ^γ + δ/1+γ) + (1-α) (1-ρ) κ^-1/1+γ/N + κ^-1/1+γ/N (1-α)^2 (1-ρ)
≈κ^/1+γ + (1-ρ) (1-α) κ^'/1+γ + (1-α) (1-ρ) κ^-1/1+γ/N.
Step 3: Substituting in κ. Finally, we apply Lemma <ref> to see that
κ = κ(Σ, N, Σ) = max(N^-1-γ, λ).
Plugging this into the expression derived in Step 2, we obtain the desired expression.
§.§ Proof of Corollary <ref>
We prove Corollary <ref> using Theorem <ref>.
We apply Theorem <ref> to see that:
𝔼_[L_1^ - L_1(β(α, 0))]
= Θ( max(λ^/1+γ, N^-)_finite data error + (1-ρ) (1-α) max(λ^'/1+γ, N^-')_mixture finite data error + (1-α) (
min(λ^-1/1+γ, N)/N)
(1-ρ)
_overfitting error).
We split into three cases: N ≤ (1-α)^-1/(1-ρ)^-1/, (1-α)^-1/(1-ρ)^-1/≤ N ≤ (1-α)^-'+1/ - '(1-ρ)^-'+1/ - ', and N ≥ (1-α)^-'+1/ - '(1-ρ)^-'+1/ - '.
Case 1: N ≤ (1-α)^-1/(1-ρ)^-1/. We observe that the finite data error dominates regardless of . This is because the condition implies that
max(λ^/1+γ, N^-) ≥ (1-α)(1-ρ),
which dominates both the mixture finite data error and the overfitting error.
Case 2: (1-α)^-1/(1-ρ)^-1/≤ N ≤ (1-α)^-'+1/ - '(1-ρ)^-'+1/ - '. We show that the finite error term and overfitting error dominate. Let Ñ = min(λ^-1/1+γ, N). We can bound the sum of the finite data error and the overfitting error as:
max(λ^/1+γ, N^-) + (1-α) (
min(λ^-1/1+γ, N)/N)
(1-ρ) = Ñ^- + (1-α) (1-ρ) Ñ/N.
Taking a derivative (and verifying the second order condition), we see that this expression is minimized when:
·Ñ^- - 1 = (1-α) (1-ρ)/N
which solves to:
Ñ = Θ(((1-α) (1-ρ)/N)^-1/1+).
The lower bound on N guarantees that:
Ñ = Θ(((1-α) (1-ρ)/N)^-1/1+) = O (((1-α)^1 + 1/ (1-ρ)^1 + 1/)^-1/1+)= O ((1-α)^-1/ (1-ρ)^-1/) = O(N)
which ensures that Ñ can be achieved by some choice of λ. In particular, we can take = Θ(((1-α) (1-ρ)/N)^1+γ/ + 1).
The resulting sum of the finite error and the overfitting error is:
max(λ^/1+γ, N^-) + (1-α) (
min(λ^-1/1+γ, N)/N) = Θ(((1-α) (1-ρ)/N)^/ + 1).
The upper bound on N and the choice of λ guarantees that this dominates the mixture finite data error, as shown below:
(1-ρ) (1-α) max(λ^'/1+γ, N^-')
= Θ((1-ρ) (1-α) ((1-α) (1-ρ)/N)^'/ + 1)
= Θ(((1-α) (1-ρ)/N)^/ + 1 (1-α) (1-ρ) ((1-α) (1-ρ)/N)^' - / + 1)
= Θ(((1-α) (1-ρ)/N)^/ + 1 (1-α)^'+1/ + 1 (1-ρ)^'+1/ + 1 N^ - '/ + 1)
= O(((1-α) (1-ρ)/N)^/ + 1 (1-α)^'+1/ + 1 (1-ρ)^'+1/ + 1 (1-α)^-'+1/ + 1 (1-ρ)^-'+1/ + 1)
= O(((1-α) (1-ρ)/N)^/ + 1)
as desired.
Case 3: N ≥ (1-α)^-'+1/ - '(1-ρ)^-'+1/ - '.
We show that the mixture finite data error and the overfitting error terms dominate. First, we observe that the sum of the mixture error and the finite data error is:
(1-ρ) (1-α) max(λ^'/1+γ, N^-') + (1-α) (
min(λ^-1/1+γ, N)/N)
(1-ρ)
= Θ( (1-α)(1-ρ) (λ^'/1+γ + min(λ^-1/1+γ, N)/N) )
This is minimized by taking = Θ(N^-1+γ/'+1), which yields Θ((1-α) (1-ρ) N^-'/'+1).
The upper bound on N and the setting of guarantees that this term dominates the finite data error:
max(λ^/1+γ, N^-) = Θ(N^-/'+1)
≤Θ((1-α) (1-ρ) N^-'/'+1 (1-α)^-1 (1-ρ)^-1 N^- - '/'+1)
= O((1-α) (1-ρ) N^-'/'+1 (1-α)^-1 (1-ρ)^-1 (1-α) (1-ρ) )
= O((1-α) (1-ρ) N^-'/'+1)
as desired.
§.§ Auxiliary calculations under power scaling assumptions
We show the following auxiliary calculations which we use when analyzing the terms in Lemma <ref> under the power scaling assumptions. Throughout this section, we again use the notation F ≈ F' to denote that F = Θ(F').
Suppose that power-law scaling holds for eigenvalues and alignment coefficients with scaling exponents γ, δ >0 and correlation coefficient ρ∈ [0, 1), and suppose that P = ∞. Let κ = κ(λ, N, Σ) be defined according to Definition <ref>. Then the following holds:
∑_i=1^P i^-δ -1 - γ/(i^-1-γ + κ)^2 ≈κ^-2κ^min(2(1+γ), γ + δ)/1+γ
∑_i=1^P i^-δ - 3(1+γ)/(i^-1-γ + κ)^2 ≈ 1
∑_i=1^P i^-δ - 2-2γ/i^-1-γ + κ ≈ 1
∑_i=1^P i^-δ - 2(1+γ)/(i^-1-γ + κ)^2 ≈max(1, κ^δ - 1/1+γ)
∑_i=1^P i^-δ - 1 - γ/i^-1-γ + κ ≈max(1, κ^δ - 1/1+γ)
∑_i=1^P i^- 2 - 2 γ/(i^-1-γ + κ)^2 ≈κ^-1/1+γ
∑_i=1^P i^- 1 - γ/i^-1-γ + κ ≈κ^-1/1+γ
∑_i=1^P i^-1 - γ/(i^-1-γ + κ)^2 ≈κ^-2κ^γ/1+γ
To prove the first statement, observe that:
∑_i=1^P i^-δ-1-γ/(i^-1-γ + κ)^2 = ∑_i ≤κ^-1/1+γi^-δ-1-γ/(i^-1-γ + κ)^2 + ∑_i ≥κ^-1/1+γi^-δ-1-γ/(i^-1-γ + κ)^2
≈∑_i ≤κ^-1/1+γ i^1+γ-δ + κ^-2∑_i ≥κ^-1/1+γ i^-δ-1-γ
≈max(1, κ^-2+γ-δ/1+γ) + κ^-2κ^δ+γ/1+γ
= κ^-2max(κ^2, κ^γ+δ/1+γ) + κ^-2κ^δ+γ/1+γ
≈κ^-2max(κ^2, κ^γ+δ/1+γ)
≈κ^-2κ^min(2(1+γ), γ + δ)/1+γ.
To prove the second statement, we use Lemma <ref> and the assumption that λ∈ (0,1) to see κ = Θ(max(λ, N^-1-γ)) = O(1). This means that
∑_i=1^P i^-δ - 3(1+γ)/(i^-1-γ + κ)^2 = Ω(∑_i=1^P i^-δ - 3(1+γ)) = Ω(1).
Moreover, we see that:
∑_i=1^P i^-δ - 3(1+γ)/(i^-1-γ + κ)^2 = O (∑_i=1^P i^-δ - 3(1+γ)/(i^-1-γ)^2)= O (∑_i=1^P i^-δ - 1 -γ)) = Ω(1).
To prove the third statement, we use Lemma <ref> and the assumption that λ∈ (0,1) to see κ = Θ(max(λ, N^-1-γ)) = O(1). This means that
∑_i=1^P i^-δ - 2-2γ/i^-1-γ + κ = Ω(∑_i=1^P i^-δ - 2(1+γ)) = Ω(1).
Moreover, we see that:
∑_i=1^P i^-δ - 2(1+γ)/i^-1-γ + κ = O (∑_i=1^P i^-δ - 2(1+γ)/i^-1-γ) = O (∑_i=1^P i^-δ - 1 - γ) = O(1).
To prove the fourth statement, observe that:
∑_i=1^P i^-δ-2-2γ/(i^-1-γ + κ)^2 ≈∑_i ≤κ^-1/1+γi^-δ-2-2γ/(i^-1-γ + κ)^2 + ∑_i ≥κ^-1/1+γi^-δ-2-2γ/(i^-1-γ + κ)^2
≈∑_i ≤κ^-1/1+γ i^-δ + κ^-2∑_i ≥κ^-1/1+γ i^-δ-2-2γ
≈max(1, κ^-1-δ/1+γ) + κ^-2κ^δ+1+2γ/1+γ
≈max(1, κ^δ-1/1+γ).
To prove the fifth statement, observe that:
∑_i=1^P i^-δ-1-γ/i^-1-γ + κ = ∑_i ≤κ^-1/1+γi^-δ-1-γ/i^-1-γ + κ + ∑_i ≥κ^-1/1+γi^-δ-1-γ/i^-1-γ + κ
≈∑_i ≤κ^-1/1+γ i^-δ + κ^-1∑_i ≥κ^-1/1+γ i^-δ-1-γ
≈max(1, κ^-1-δ/1+γ) + κ^-1κ^δ+γ/1+γ
≈max(1, κ^δ-1/1+γ).
To prove the sixth statement, observe that:
∑_i=1^P i^-2-2γ/(i^-1-γ + κ)^2 = ∑_i ≤κ^-1/1+γi^-2-2γ/(i^-1-γ + κ)^2 + ∑_i ≥κ^-1/1+γi^-2-2γ/(i^-1-γ + κ)^2
≈∑_i ≤κ^-1/1+γ 1 + κ^-2∑_i ≥κ^-1/1+γ i^-2-2γ
≈κ^-1/1+γ + κ^-2κ^1 + 2γ/1+γ
≈κ^-1/1+γ.
To prove the seventh statement, observe that:
∑_i=1^P i^-1-γ/i^-1-γ + κ = ∑_i ≤κ^-1/1+γi^-1-γ/i^-1-γ + κ + ∑_i ≥κ^-1/1+γi^-1-γ/i^-1-γ + κ
≈∑_i ≤κ^-1/1+γ 1 + κ^-1∑_i ≥κ^-1/1+γ i^-1-γ
≈κ^-1/1+γ + κ^-1κ^γ/1+γ
≈κ^-1/1+γ.
To prove the eighth statement, observe that:
∑_i=1^P i^-1-γ/(i^-1-γ + κ)^2 = ∑_i ≤κ^-1/1+γi^-1-γ/(i^-1-γ + κ)^2 + ∑_i ≥κ^-1/1+γi^-δ-1-γ/(i^-1-γ + κ)^2
≈∑_i ≤κ^-1/1+γ i^1+γ + κ^-2∑_i ≥κ^-1/1+γ i^-1-γ
≈max(1, κ^-2+γ/1+γ) + κ^-2κ^γ/1+γ
= κ^-2max(κ^2, κ^γ/1+γ) + κ^-2κ^γ/1+γ
≈κ^-2max(κ^2, κ^γ/1+γ)
≈κ^-2κ^γ/1+γ
Suppose that the power-law scaling holds for the eigenvalues and alignment coefficients with scaling exponents γ, δ > 0 and correlation coefficient ρ∈ [0, 1), suppose that P = ∞. Assume the notation from Lemma <ref>, and similarly let
Q := 1 - 1/N(Σ^2 Σ_κ ^-2).
Then it holds that Q^-1 = Θ(1).
Let Σ = V Λ V^T be the eigendecomposition of Σ, where Λ is a diagonal matrix consisting of the eigenvalues. By Definition <ref>, we see that:
λ/κ + 1/N(ΣΣ_κ ^-1) = 1.
This implies that:
Q = 1- 1/N(ΣΣ_κ ^-1) + 1/N( (ΣΣ_κ ^-1) - (Σ^2 Σ_κ^-2) )
= λ/κ + 1/N( (ΣΣ_κ ^-1) - (Σ^2 Σ_κ^-2) ).
Observe that:
(ΣΣ_κ ^-1) - (Σ^2 Σ_κ ^-2) = (Λ (Λ + κ I)^-1) - (Λ^2 (Λ + κ I)^-2)
= ∑_i=1^P (i^-1-γ/i^-1-γ + κ - i^-2-γ/(i^-1-γ + κ)^2)
= κ∑_i=1^P i^-1-γ/(i^-1-γ + κ)^2.
This means that:
Q = λ/κ + κ/N∑_i=1^P i^-1-γ/(i^-1-γ + κ)^2
≈_(A)λ/κ + Θ((κ/Nκ^-2κ^γ/1+γ))
=λ/κ + Θ(κ^-1/1+γ/N).
where (A) uses Lemma <ref>.
Case 1: κ = Θ(λ). In this case, we see that
Q = λ/κ + Θ(κ^-1/1+γ/N) = Θ(1).
This means that Q^-1 = Θ(1).
Case 2: κ = Θ(N^-1-γ). In this case, we see that
Q = λ/κ + Θ(κ^-1/1+γ/N) = Ω(κ^-1/1+γ/N) = Ω(1).
This means that Q^-1 = Θ(1).
Suppose that power-law scaling holds for the eigenvalues with scaling exponent γ, and suppose that P = ∞. Then it holds that κ(λ, M, Σ) = Θ(max(λ, M^-1-γ)).
Let Σ = V Λ V^T be the eigendecomposition of Σ, where Λ is a diagonal matrix consisting of the eigenvalues.
Observe that:
((Σ + κ I)^-1Σ) = (Λ (Λ + κ I)^-1)
= ∑_i=1^P i^-1-γ/i^-1-γ + κ
≈_(A)κ^-1/1+γ.
where (A) follows from Lemma <ref>. Using Definition <ref>, we see that for κ = κ(λ, M, Σ), it holds that:
λ/κ + 1/MΘ(κ^-1-γ) = 1.
This implies that κ = Θ(max(λ, M^-1-γ)) as desired.
§ MACHINERY FROM RANDOM MATRIX THEORY
In this section, we introduce machinery from random matrix theory that serves as the backbone for our analysis of multi-objective scaling laws in Appendix <ref>. In Appendix <ref>, we give a recap of known Marčenko-Pastur properties. In Appendix <ref>, we use these known properties to derive random matrix theory results which are tailored to our analysis.
§.§ Recap of Marčenko-Pastur properties
We introduce Marčenko-Pastur properties, following the treatment in <cit.>. Informally speaking, Marčenko-Pastur laws show that a random matrix (Σ̂ + λ I)^-1 (where Σ̂ is a sample covariance) behaves similarly to a deterministic matrix of the form (Σ̂ + κ I)^-1, where κ = κ(λ, M, Σ) is an effective regularizer.
Deriving this formally requires placing several structural assumptions on number of data points N ≥ 1, the number of parameters P ≥ 1, the distribution , and the vectors β_1 and β_2. We adopt assumptions from <cit.> which guarantee that a Marčenko-Pastur law holds for Σ, and we further introduce a boundedness assumption for technical reasons.
We assume that: (1) X ∼ takes the form X = Z Σ^1/2 where Z has bounded subgaussian i.i.d components with mean zero and unit variance, (2) N and P approach ∞ with P/N tending to γ > 0, (3) the spectral measure 1/P∑_i=1^P δ_λ_i of Σ converges to a probability measure with compact support, and Σ is invertible and bounded in operator norm, and (4) for j ∈{1,2}, the measure ∑_i=1^P ⟨ v_i, β_j ⟩^2 converges to a measure with bounded mass, and β_j has bounded ℓ_2 norm.
The effective regularizer κ(λ, M, Σ) is defined as follows.
For λ≥ 0, M ≥ 1, and a P-dimensional positive semidefinite matrix Σ with eigenvalues λ_i for 1 ≤ i ≤ P, the value κ(λ, M, Σ) is the unique value κ≥ 0 such that:
λ/κ + 1/N∑_i=1^P λ_i/λ_i + κ = 1.
We are now ready to state the key random matrix theory results proven in <cit.>. Following <cit.>, the asymptotic equivalence notation u ∼ v means that u/v tends to 1 as N and P go to ∞.
Let Σ̂ = 1/M∑_i=1^M X_i X_i^T be the sample covariance matrix from M i.i.d. samples from X_1, …, X_M ∼. Let κ = κ(λ, N, Σ). Suppose that A and B have bounded operator norm.
Then it holds that:
((Σ̂ + λ I)^-1 A ) ∼κ((Σ + κ I)^-1 A )
κ^2 ((Σ̂ + λ I)^-1 A (Σ̂ + λ I)^-1 B ) ∼((Σ + κ I)^-1 A (Σ + κ I)^-1 B )
+ κ^2 1/N(A Σ (Σ + κ I)^-2)/1 - 1/N(Σ^2 (Σ + κ I)^-2)((Σ + κ I)^-1Σ (Σ + κ I)^-1 B ).
We note that the requirement that B has bounded operator norm in Lemma <ref> is what forces us to require that β_1 and β_2 are bounded. However, <cit.> showed that the norm can be unbounded in several real-world settings, and thus instead opt to assume a local Marčenko-Pastur law and derive scaling laws based on this assumption. We suspect it may be possible to derive our scaling law with an appropriate analogue of the local Marčenko-Pastur law, which would also have the added benefit of allowing one to relax other requirements in Assumption <ref> such as gaussianity. We view such an extension as an interesting direction for future work.
§.§ Useful random matrix theory facts
We derive several corollaries of Lemma <ref> tailored to random matrices that arise in our analysis of multi-objective scaling laws.
Assume that satisfies the Marčenko-Pastur property (Assumption <ref>). Let Z be a positive definite matrix such that Z^-1 has bounded operator norm, and let A be a matrix with bounded operator norm. Let Σ̂ = 1/M∑_i=1^M X_i X_i^T be the sample covariance matrix from M i.i.d. samples from X_1, …, X_M ∼. Then it holds that:
λ·((Σ̂ + λ Z)^-1 A) ∼κ·((Σ + κ Z)^-1 A).
If A also has bounded trace and Z has bounded operator norm, then it holds that:
(Σ̂ (Σ̂ + λ Z)^-1 A) ∼(Σ· (Σ + κ Z)^-1 A)
where κ = κ(λ, M, Z^-1/2Σ Z^-1/2).
For (<ref>), observe that:
λ·((Σ̂ + λ Z)^-1 A)
= λ·(Z^-1/2 (Z^-1/2Σ̂ Z^-1/2 + λ I)^-1 Z^-1/2 A)
= λ·((Z^-1/2Σ̂ Z^-1/2 + λ I)^-1 Z^-1/2 A Z^-1/2)
∼_(A)κ·((Z^-1/2Σ Z^-1/2 + κ I)^-1 Z^-1/2 A Z^-1/2)
= κ·(Z^-1/2 (Z^-1/2Σ Z^-1/2 + κ I)^-1 Z^-1/2 A)
= κ·((Σ+ κ Z)^-1 A).
where (A) applies Lemma <ref> (using the fact that since A and Z^-1 have bounded operator norm, it holds that Z^-1/2 A Z^-1/2 has bounded operator norm).
For (<ref>), observe that:
(Σ̂ (Σ̂ + λ Z)^-1 A) =_(A)((I - λ Z^1/2( Z^-1/2Σ̂ Z^-1/2 + λ I)^-1 Z^-1/2) A )
=_(B)(A) - λ·(( Z^-1/2Σ̂ Z^-1/2 + λ I)^-1 Z^-1/2 A Z^1/2)
∼_(C)(A) - κ·(( Z^-1/2Σ Z^-1/2 + κ I)^-1 Z^-1/2 A Z^1/2)
=_(D)((I - κ Z^1/2( Z^-1/2Σ Z^-1/2 + κ I)^-1 Z^-1/2) A )
=_(E)(Σ (Σ + κ Z)^-1 A)
where (A) and (E) follows from Claim <ref>, (B) and (D) use the fact that (A) is bounded, and (C) follows from Lemma <ref> (using the fact that since A, Z, and Z^-1 have bounded operator norm, it holds that Z^-1/2 A Z^1/2 has bounded operator norm).
Assume that satisfies the Marčenko-Pastur property (Assumption <ref>). Let Z be any positive definite matrix such that Z and Z^-1 have bounded operator norm, and let A and B have bounded operator norm. Let Σ̂ = 1/M∑_i=1^M X_i X_i^T be the sample covariance matrix from M i.i.d. samples from X_1, …, X_M ∼. Then it holds that:
λ^2 ((Σ̂ + λ Z)^-1 A (Σ̂ + λ Z)^-1 B)
= λ^2 (Z^-1/2 (Z^-1/2Σ̂ Z^-1/2 + λ I)^-1 Z^-1/2 A Z^-1/2 (Z^-1/2Σ̂ Z^-1/2 + λ I)^-1 B)
∼κ^2 ((Σ + κ Z)^-1 A (Σ + κ Z)^-1 B)
+ κ^21/M((Σ + κ Z)^-1Σ (Σ + κ Z)^-1 A)/1 - 1/M((Σ + κ Z)^-1Σ (Σ + κ Z)^-1Σ)((Σ + κ Z)^-1Σ (Σ + κ Z)^-1 B)
where κ = κ(λ, M, Z^-1/2Σ Z^-1/2).
Let q = 1/M(Z^-1/2Σ Z^-1/2 (Z^-1/2Σ Z^-1/2 + κ I)^-2 Z^-1/2 A Z^-1/2)/1 - 1/M(Z^-1/2Σ Z^-1/2 (Z^-1/2Σ Z^-1/2 + κ I)^-2 Z^-1/2Σ Z^-1/2).
Observe that:
λ^2 ((Σ̂ + λ Z)^-1 A (Σ̂ + λ Z)^-1 B)
λ^2 (Z^-1/2( Z^-1/2Σ̂ Z^-1/2 + λ I)^-1 Z^-1/2 A Z^-1/2( Z^-1/2Σ̂ Z^-1/2 + λ I)^-1 Z^-1/2 B)
= λ^2 (( Z^-1/2Σ̂ Z^-1/2 + λ I)^-1 Z^-1/2 A Z^-1/2( Z^-1/2Σ̂ Z^-1/2 + λ I)^-1 Z^-1/2 B Z^-1/2)
∼_(A)κ^2 (( Z^-1/2Σ Z^-1/2 + κ I)^-1 Z^-1/2 A Z^-1/2( Z^-1/2Σ Z^-1/2 + κ I)^-1 Z^-1/2 B Z^-1/2)
+ κ^2 q (( Z^-1/2Σ Z^-1/2 + κ I)^-1 Z^-1/2Σ Z^-1/2( Z^-1/2Σ Z^-1/2 + κ I)^-1 Z^-1/2 B Z^-1/2)
= κ^2 (Z^-1/2( Z^-1/2Σ Z^-1/2 + κ I)^-1 Z^-1/2 A Z^-1/2( Z^-1/2Σ Z^-1/2 + κ I)^-1 Z^-1/2 B)
+ κ^2 q (Z^-1/2( Z^-1/2Σ Z^-1/2 + κ I)^-1 Z^-1/2Σ Z^-1/2( Z^-1/2Σ Z^-1/2 + κ I)^-1 Z^-1/2 B )
= κ^2 ((Σ + κ Z )^-1 A (Σ + κ Z )^-1 B) + q κ^2 ((Σ + κ Z )^-1Σ(Σ + κ Z )^-1 B ),
where (A) follows from Lemma <ref> (using the fact that since A, B, Z, and Z^-1 have bounded operator norm, it holds that Z^-1/2 A Z^1/2, Σ, and Z^-1/2 B Z^1/2 have bounded operator norm).
We can simplify q as follows:
q = 1/M(Z^-1/2Σ Z^-1/2 (Z^-1/2Σ Z^-1/2 + κ I)^-2 Z^-1/2 A Z^-1/2)/1 - 1/M(Z^-1/2Σ Z^-1/2 (Z^-1/2Σ Z^-1/2 + κ I)^-2 Z^-1/2Σ Z^-1/2)
= 1/M((Z^-1/2Σ Z^-1/2 + κ I)^-1
Z^-1/2Σ Z^-1/2 (Z^-1/2Σ Z^-1/2 + κ I)^-1 Z^-1/2 A Z^-1/2)/1 - 1/M((Z^-1/2Σ Z^-1/2 + κ I)^-1
Z^-1/2Σ Z^-1/2 (Z^-1/2Σ Z^-1/2 + κ I)^-1 Z^-1/2Σ Z^-1/2)
= 1/M(Z^-1/2(Z^-1/2Σ Z^-1/2 + κ I)^-1
Z^-1/2Σ Z^-1/2 (Z^-1/2Σ Z^-1/2 + κ I)^-1 Z^-1/2 A)/1 - 1/M(Z^-1/2(Z^-1/2Σ Z^-1/2 + κ I)^-1
Z^-1/2Σ Z^-1/2 (Z^-1/2Σ Z^-1/2 + κ I)^-1 Z^-1/2Σ)
= 1/M((Σ + κ Z)^-1Σ (Σ + κ Z)^-1 A)/1 - 1/M((Σ + κ Z)^-1Σ (Σ + κ Z)^-1Σ).
Assume that satisfies the Marčenko-Pastur property (Assumption <ref>). Let Z be any positive definite matrix such that Z and Z^-1 have bounded operator norm. Let A and B have bounded operator norm, and suppose also that (A B) is bounded. Let Σ̂ = 1/M∑_i=1^M X_i X_i^T be the sample covariance matrix from M i.i.d. samples from X_1, …, X_M ∼. Then it holds that:
(Σ̂ (Σ̂ + λ Z)^-1 A (Σ̂ + λ Z)^-1Σ̂ B) ∼(Σ (Σ + κ Z)^-1 A (Σ + κ Z)^-1Σ B) + E,
where:
E := 1/M((Σ + κ Z)^-1Σ (Σ + κ Z)^-1 A)/1 - 1/M((Σ + κ Z)^-1Σ (Σ + κ Z)^-1Σ)·κ^2 (( Σ + κ Z )^-1Σ( Σ + κ Z )^-1 Z B Z ),
and κ = κ(λ, M, Z^-1/2Σ Z^-1/2).
Observe that:
(Σ̂ (Σ̂ + λ Z)^-1 A (Σ̂ + λ Z)^-1Σ̂ B)
= (Σ̂ (Σ̂ + λ Z)^-1 A (Σ̂ (Σ̂ + λ Z)^-1)^T B)
=_(A)((I - λ Z^1/2( Z^-1/2Σ̂ Z^-1/2 + λ I)^-1 Z^-1/2) A (I - λ Z^1/2( Z^-1/2Σ̂ Z^-1/2 + λ I)^-1 Z^-1/2)^T B )
=_(B)(A B) - λ(A (Z^1/2( Z^-1/2Σ̂ Z^-1/2 + λ I)^-1 Z^-1/2)^T B )
- λ(Z^1/2( Z^-1/2Σ̂ Z^-1/2 + λ I)^-1 Z^-1/2 A B )
+ λ^2 (Z^1/2( Z^-1/2Σ̂ Z^-1/2 + λ I)^-1 Z^-1/2 A ( Z^1/2( Z^-1/2Σ̂ Z^-1/2 + λ I)^-1 Z^-1/2)^T B )
= (A B) - λ(A Z^-1/2( Z^-1/2Σ̂ Z^-1/2 + λ I)^-1 Z^1/2 B )
- λ(Z^1/2( Z^-1/2Σ̂ Z^-1/2 + λ I)^-1 Z^-1/2 A B )
+ λ^2 (Z^1/2( Z^-1/2Σ̂ Z^-1/2 + λ I)^-1 Z^-1/2 A Z^-1/2( Z^-1/2Σ̂ Z^-1/2 + λ I)^-1 Z^1/2 B )
= (A B) - λ(( Σ̂ + λ Z )^-1 Z B A )_(1) - λ(( Σ̂ + λ Z )^-1 A B Z )_(2)
+ λ^2 ((Σ̂ + λ Z )^-1 A (Σ̂ + λ Z )^-1 Z B Z )_(3)
where (A) follows from Claim <ref>, (B) uses that (AB) is bounded,
For term (1) and term (2), we apply Lemma <ref> to see that:
λ(( Σ̂ + λ Z )^-1 Z B A ) ∼κλ(( Σ + κ Z )^-1 Z B A )
λ(( Σ̂ + λ Z )^-1 A B Z ) ∼κ(( Σ + κ Z )^-1 A B Z ).
For term (3), we apply Lemma <ref> to see that
λ^2 ((Σ̂ + λ Z )^-1 A (Σ̂ + λ Z )^-1 Z B Z )
∼κ^2 (( Σ + κ Z )^-1 A ( Σ + κ Z )^-1 Z B Z )
+ κ^2 1/M((Σ + κ Z)^-1Σ (Σ + κ Z)^-1 A)/1 - 1/M((Σ + κ Z)^-1Σ (Σ + κ Z)^-1Σ)((Σ + κ Z)^-1Σ (Σ + κ Z)^-1 Z B Z )
∼κ^2 (( Σ + κ Z )^-1 A ( Σ + κ Z )^-1 Z B Z ) + E
This means that:
(Σ̂ (Σ̂ + λ Z)^-1 A (Σ̂ + λ Z)^-1Σ̂) ∼(A B) - κ((Σ + κ Z )^-1 Z B A ) - κ(( Σ + κ Z )^-1 A B Z )
+ κ^2 ((Σ + κ Z )^-1 A (Σ + κ Z )^-1 Z B Z ) + E
=_(C)(Σ(Σ + κ Z)^-1 A (Σ + κ Z)^-1Σ B) + E,
where (C) uses an analogous analysis to the beginning of the proof.
Assume that satisfies the Marčenko-Pastur property (Assumption <ref>). Let Z be any positive definite matrix such that Z and Z^-1 have bounded operator norm, and let A and B have bounded operator norm. Let Σ̂ = 1/M∑_i=1^M X_i X_i^T be the sample covariance matrix from M i.i.d. samples from X_1, …, X_M ∼. Then it holds that:
λ( (Σ̂ + λ Z)^-1 A (Σ̂ + λ Z)^-1Σ̂ B ) ∼κ((Σ + κ Z)^-1 A (Σ + κ Z)^-1Σ B ) - E,
where:
E := 1/M((Σ + κ Z)^-1Σ (Σ + κ Z)^-1 A)/1 - 1/M((Σ + κ Z)^-1Σ (Σ + κ Z)^-1Σ)·κ^2 (( Σ + κ Z )^-1Σ( Σ + κ Z )^-1 Z B )
and κ = κ(λ, N, Z^-1/2Σ Z^-1/2).
Observe that:
λ((Σ̂ + λ Z)^-1 A (Σ̂ + λ Z)^-1Σ̂ B )
=_(A)λ(Z^-1/2(Z^-1/2Σ̂ Z^-1/2 + λ I)^-1 Z^-1/2 A (I - λ Z^-1/2( Z^-1/2Σ̂ Z^-1/2 + λ I)^-1 Z^1/2) B )
= λ(Z^-1/2 (Z^-1/2Σ̂ Z^-1/2 + λ I)^-1 Z^-1/2 A B )
- λ^2 ( Z^-1/2( Z^-1/2Σ̂ Z^-1/2 + λ I)^-1 Z^-1/2 A Z^-1/2 (Z^-1/2Σ̂ Z^-1/2 + λ I)^-1 Z^1/2 B )
= λ((Σ̂ + λ Z)^-1 A B )_(1) - λ^2 (( Σ̂ + λ Z )^-1 A (Σ̂ + λ Z)^-1 Z B )_(2)
where (A) follows from Claim <ref>.
For term (1), we apply Lemma <ref> see that:
λ((Σ̂ + λ Z)^-1 A B ) ∼κ((Σ + κ Z)^-1 A B ).
For term (2), we apply Lemma <ref> to see that
λ^2 (( Σ̂ + λ Z )^-1 A (Σ̂ + λ Z)^-1 Z B )
∼κ^2 (( Σ + κ Z )^-1 A ( Σ + κ Z )^-1 Z B )
+ κ^2 1/M((Σ + κ Z)^-1Σ (Σ + κ Z)^-1 A)/1 - 1/M((Σ + κ Z)^-1Σ (Σ + κ Z)^-1Σ)((Σ + κ Z)^-1Σ (Σ + κ Z)^-1 Z B )
∼κ^2 (( Σ + κ Z )^-1 A ( Σ + κ Z )^-1 Z B ) + E.
This means that:
λ((Σ̂ + λ Z)^-1 A (Σ̂ + λ Z)^-1Σ̂ B )
∼κ((Σ + κ Z)^-1 A B ) + κ^2 (( Σ + κ Z )^-1 A ( Σ + κ Z )^-1 Z B ) - E
= κ(Z^-1/2 (Z^-1/2Σ Z^-1/2 + κ I)^-1 Z^-1/2 A (I - κ Z^1/2( Z^-1/2Σ Z^-1/2 + κ I)^-1 Z^-1/2) B ) - E
=_(A)κ((Σ + κ Z)^-1 A (Σ + κ Z)^-1Σ B ) - E,
where (A) uses an analogous analysis to the beginning of the proof.
The proofs of these results relied on the following basic matrix fact.
Let A be any matrix and let B be any symmetric positive definite matrix. Then it holds that:
A (A + λ B)^-1 = I - λ B^1/2( B^-1/2 A B^-1/2 + λ I )^-1 B^-1/2.
Observe that:
A (A + λ B)^-1
= A B^-1/2( B^-1/2 A B^-1/2 + λ I)^-1 B^-1/2
= B^1/2(B^-1/2 A B^-1/2) ( B^-1/2 A B^-1/2 + λ I)^-1 B^-1/2
= B^1/2(B^-1/2 A B^-1/2 + λ I ) ( B^-1/2 A B^-1/2 + λ I)^-1 B^-1/2 - B^1/2λ( B^-1/2 A B^-1/2 + λ I)^-1 B^-1/2
= I - λ B^1/2( B^-1/2 A B^-1/2 + λ I)^-1 B^-1/2.
§ EXTENSION: MARKET-ENTRY THRESHOLD WITH RICHER FORM FOR L_2^*
In this section, we modify the safety requirement to take into account the impact of dataset size N and regularization parameter λ, and we extend our model and analysis of the market-entry threshold accordingly. We show that the characterization in Theorem <ref> directly applies to this setting, and we also show relaxed versions of Theorem <ref> and Theorem <ref>. Altogether, these extended results illustrate that our qualitative insights from Sections <ref>-<ref> hold more generally.
We define a modified approximation of the safety violation L̃_2(β_1, β_2, , , N, α). This modified approximation is defined analogously to ^*(β_1, β_2, , , N, α). To formalize this, we define a deterministic equivalent L_2^ for the safety violation to be
L_2^(β_1, β_2, , , N, α) := L_1^(β_2, β_1, , , N, 1-α).
It follows from Lemma <ref> that L_2(β̂(α, λ, X)) ∼ L_2^(β_1, β_2, , , N, α): here, we use the fact that L_2(β̂(α, λ, X)) is distributed identically to L_1(β̂(1-α, λ, X)). Now, using this deterministic equivalent, we define L̃_2(β_1, β_2, , , N, α) = L_2^(β_1, β_2, , , N, α).
Using this formulation of L̃_2, we define a modified market entry threshold where we replace all instances of original approximation L_2^* with the modified approximation L̃_2. In particular, a company C faces reputational damage if:
𝔼_(β_1, β_2) ∼L̃_2(β_1, β_2, , α_C) ≥τ_C.
The company selects ∈ [0.5,1] and ∈ (0,1) to maximize their performance subject to their safety constraint, as formalized by the following optimization program:[Unlike in Section <ref>, there might not exist ∈ [0.5,1] and ∈ (0,1) which satisfy the safety constraint, if N_C is too small.]
(α̃_C, λ̃_C) = _∈ [0.5, 1], ∈ (0,1)𝔼_[^*(β_1, β_2, , , N_C, α)] s.t. 𝔼_[L̃_2(β_1, β_2, , α)] ≤τ_C.
We define the modified market-entry threshold as follows.
The modified market-entry threshold (, , , , )
is the minimum value of ∈ℤ_≥ 1 such that 𝔼_[^*(β_1, β_2, , λ̃_E, , α̃_E)] ≤𝔼_[^*(β_1, β_2, , λ̃_I, , α̃_I)].
In this section, we analyze the modified market entry threshold (,, , , ). We show an extension of Theorem <ref> (Appendix <ref>). We then derive a simplified version of the deterministic equivalent L_2^ (Appendix <ref>). Finally, we show a weakened extension of Theorem <ref> (Appendix <ref>) and a weakened extension of Theorem <ref> (Appendix <ref>). These weakened extensions derive upper bounds (rather than tight bounds) on the modified market entry threshold, and also assume that δ≤ 1.
§.§ Extension of Theorem <ref>
We study the market entry threshold in the environment of Theorem <ref> where the incumbent has infinite data and the new company faces no safety constraint. We show that the modified market entry threshold takes the same form as the market entry threshold in Theorem <ref>.
Suppose that power-law scaling holds for the eigenvalues and alignment coefficients, with scaling exponents γ, δ > 0 and correlation coefficient ρ∈ [0, 1), and suppose that P = ∞.
Suppose that the incumbent company has infinite data (i.e., = ∞), and that the entrant faces no constraint on their safety (i.e., = ∞). Suppose that the safety constraint satisfies (<ref>). Then, it holds that:
(∞, , ∞, , ) = Θ((√(L^*(ρ)) - √(min(, L^*(ρ))))^-2/),
where L^*(ρ) = 𝔼_[(β_1 - β_2)^T Σ (β_1 - β_2)] = Θ(1 - ρ), and where := min(2(1+γ), δ + γ).
Theorem <ref> shows that the qualitative insights from Theorem <ref>—including that the new company can enter with finite data—readily extend to this setting.
To prove Theorem <ref>, we build on the notation and analysis from Appendix <ref>. It suffices to show that each company C will select _C = _C and λ_C = λ̃_C. This follows trivially for the entrant C = E since they face no safety constraint, and there is no different between the two settings. The key ingredient of the proof is to compute _I and _I for the incumbent (i.e., an analogue of Lemma <ref> in Appendix <ref>).
To do this, we first upper bound the following function of the safety loss and performance loss for general parameters and .
For any α and , it holds that:
√(𝔼_[L_1(β(α, λ))]) + √(𝔼_[L_2(β(α, λ))])≥√(𝔼_[(β_1 - β_2)^T Σ (β_1 - β_2)^T]).
Note that:
T := √(𝔼_[L_1(β(α, λ))]) + √(𝔼_[L_2(β(α, λ))])
= √((β_1 - β(α, λ))^T Σ (β_1 - β(α, λ))) + √((β_2 - β(α, λ))^T Σ (β_2 - β(α, λ)))
= √((λβ_1 + (1-α) Σ (β_1 - β_2))^T Σ (Σ + λ I)^-2 (λβ_1 + (1-α) Σ (β_1 - β_2)))
+ √((λβ_2 + αΣ (β_2 - β_1))^T Σ (Σ + λ I)^-2 (λβ_2 + αΣ (β_2 - β_1)))
= √((λβ_1 + (1-α) Σ (β_1 - β_2))^T Σ (Σ + λ I)^-2 (λβ_1 + (1-α) Σ (β_1 - β_2)) )
+ √((-λβ_2 + αΣ (β_1 - β_2))^T Σ (Σ + λ I)^-2 (-λβ_2 + αΣ (β_1 - β_2))).
Now note that for any PSD matrix Σ' and any distribution, note that the following triangle inequality holds:
√(𝔼[(X_1 + X_2)^T Σ' (X_1 + X_2)])≤√(𝔼[X_1^T Σ' X_1]) + √(𝔼[X_2^T Σ' X_2]).
We apply this for X_1 = λβ_1 + (1-α) Σ (β_1 - β_2), X_2 = -λβ_2 + αΣ (β_1 - β_2), and distribution . This means that we can lower bound:
T ≥√(𝔼_[((Σ + λ I) (β_1 - β_2))^T Σ (Σ + λ I)^-2 ((Σ + λ I) (β_1 - β_2))])
= √(𝔼_[(β_1 - β_2))^T Σ (β_1 - β_2))])
as desired.
Now, we are ready to compute _I and _I for the incumbent.
Let L^*(ρ) = 𝔼_[(β_1 - β_2)^T Σ (β_1 - β_2)^T]. Suppose that = ∞, and suppose that the safety constraint satisfies (<ref>). Then it holds that α_I = √(min(, L^*(ρ))/L^*(ρ)), and _I = 0 is optimal for the incumbent. Moreover, it holds that:
𝔼_[L^*_1(β_1, β_2, , _I, ∞, α̃_I)] = (√(L^*(ρ)) - √(min(L^*(ρ), )))^2.
First, we apply Lemma <ref> with N = ∞ to see that:
𝔼_[L^*_1(β_1, β_2, , , ∞, α)] = 𝔼_[L_1(β(α, λ))]
and
𝔼_[L^*_2(β_1, β_2, , , ∞, α)] = 𝔼_[L_2(β(α, λ))].
Let α^* =√(min(, L^*(ρ))/L^*(ρ)). By the assumption in the lemma statement, we know that:
α^* ≥√(𝔼_[^*(β_1, β_2, , 0.5)]/L^*(ρ)) = 0.5.
Observe that:
√(𝔼_[ L_1(β(α^*, 0))]) + √(min(, L^*(ρ)))
=√(𝔼_[ L_1(β(α^*, 0))]) + √(𝔼_[ L_2(β(α^*, 0))])
= √((1-α^*)^2 𝔼_[(β_1 - β_2)^T Σ (β_1 - β_2)^T]) + √( (α^*)^2 𝔼_[(β_1 - β_2)^T Σ (β_1 - β_2)^T])
= √(𝔼_[(β_1 - β_2)^T Σ (β_1 - β_2)^T])
We show that (α̃_I, _I) = (α^*, 0). Assume for sake of contradiction that (α, ) ≠ (α^*, 0) satisfies the safety constraint 𝔼_[L̃_2(β_1, β_2, , α)] ≤min(, L^*(ρ)) and achieves strictly better performance loss:
𝔼_[^*(β_1, β_2, , , ∞, )] < 𝔼_[^*(β_1, β_2, , 0, ∞, ^*)].
Then it would hold that:
√(𝔼_[ L_1(β(, ))]) + √(𝔼_[ L_2(β(, ))]) < √(𝔼_[ L_1(β(α^*, 0))]) + √(min(, L^*(ρ)))
= √(𝔼_[(β_1 - β_2)^T Σ (β_1 - β_2)^T]),
which contradicts Lemma <ref>.
To analyze the loss, note that:
𝔼_[L^*_1(β_1, β_2, , _I, ∞, α̃_I)]
= 𝔼_[L_1(β(α̃_I, λ̃_I))]
= (1-α̃_I)^2 L^*(ρ)
= (√(L^*(ρ)) - √(min(L^*(ρ), )))^2
We now prove Theorem <ref>.
We analyze (α̃_C, _C) first for the incumbent C = I and then for the entrant C = E.
Analysis of the incumbent C = I.
By Lemma <ref>, we see that:
𝔼_[L^*_1(β_1, β_2, , _I, ∞, α̃_I)] = (√(L^*(ρ)) - √(min(, L^*(ρ))))^2.
Analysis of the entrant C = E. This analysis follows identically to the analogous case in the proof of Theorem <ref>, and we repeat the proof for completeness. Since the entrant faces no safety constraint, the entrant can choose any α∈ [0.5, 1]. We apply Corollary <ref> to see that:
𝔼_[L^*_1(β_1, β_2, , _E, N, α_E)] = inf_α∈ [0.5, 1]inf_ > 0𝔼_[L^*_1(β_1, β_2, , , N, α)] = Θ(
N^-),
which means that:
^*(∞, , ∞, , ) = Θ((√(L^*(ρ)) - √(min(, L^*(ρ)))^-2/)
as desired.
We can further apply Claim <ref> to see that L^*(ρ) = Θ(1-ρ).
§.§ Bounds on the excess loss for safety
We bound the excess loss α^2 L^*(ρ) - 𝔼_[L_2^]. We assume that α≥ 0.5 and we further assume that δ≤ 1.
Suppose that power scaling holds for the eigenvalues and alignment coefficients with scaling γ > 0 and δ∈ (0,1], and correlation coefficient ρ∈ [0, 1), and suppose that P = ∞. Suppose that α≥ 0.5, ∈ (0,1), and N ≥ 1. Let L_2^ := L_2^(β_1, β_2, , , N, α) be defined according to (<ref>). Let L^*(ρ) = 𝔼_[(β_1 - β_2)^T Σ (β_1 - β_2)]. Then it holds that:
α^2 L^*(ρ) - 𝔼_[L_2^] = O( max(λ^/1 + γ, N^-)
)
and
𝔼_[L_2^] - α^2 L^*(ρ) = O( max(λ^/1 + γ, N^-) + (1-α) (1-ρ) min(λ^-1/1 + γ, N)/N),
where = min(2(1+γ), δ + γ) = δ + γ.
To prove Lemma <ref>, we first simplify the deterministic equivalent L_2^(β_1, β_2, , , N, α) using the assumptions from Section <ref>.
Suppose that power scaling holds for the eigenvalues and alignment coefficients with scaling γ, δ > 0 and correlation coefficient ρ∈ [0, 1), and suppose that P = ∞. Suppose that ∈ (0,1), and N ≥ 1. Let L_2^ := L_2^(β_1, β_2, , , N, α) be defined according to (<ref>). Let κ = κ(λ, N, Σ) from Definition <ref>. Let L^*(ρ) = 𝔼_[(β_1 - β_2)^T Σ (β_1 - β_2)]. Then it holds that:
𝔼_[L_2^] - L^*(ρ) =
Q^-1·κ^2 ∑_i=1^P i^-δ -1- γ/(i^-1-γ + κ)^2 + Q^-1 2κα (1-α) (1-ρ) ∑_i=1^P i^-δ - 2(1+γ)/(i^-1-γ+κ)^2
+ Q^-1 2 α (1-α) (1-ρ) 1/N(∑_i=1^P i^-2-2γ/(i^-1-γ + κ)^2) ·∑_i=1^P i^-δ - 2-2γ/i^-1-γ + κ
- 2 α^2 κ (1-ρ) ∑_i=1^P i^-δ - 1 -γ/i^-1-γ + κ,
where Q = 1 - 1/N∑_i=1^P i^-2-2γ/(i^-1-γ + κ)^2.
First, we apply Lemma <ref>, coupled with the fact that L_2^(β_1, β_2, , , N, α) := L_1^(β_2, β_1, , , N, 1-α), to see that:
Q ·𝔼_[L_2^] = κ^2 (1 - 2 α^2 (1-ρ)) ∑_i=1^P i^-δ - 1- γ/(i^-1-γ + κ)^2 + α^2 L^*(ρ)
+ 2κ (1-ρ) α (1 - 2 α) ∑_i=1^P i^-δ - 2(1+γ)/(i^-1-γ+κ)^2
+ 2 α (1-ρ) 1/N(∑_i=1^P i^-2-2γ/(i^-1-γ + κ)^2) · (1 - 2α) ∑_i=1^P i^-δ - 2-2γ/i^-1-γ + κ,
where Q = 1 - 1/N∑_i=1^P i^-2-2γ/(i^-1-γ + κ)^2. Using that (Q^-1 - 1) α^2 L^*(ρ) = Q^-11/N(∑_i=1^P i^-2-2γ/(i^-1-γ + κ)^2) 2 α^2 (1-ρ) ( ∑_i=1^P i^-δ-1-γ), this means that:
𝔼_[L_2^] - α^2 L^*(ρ) = Q^-11/N(∑_i=1^P i^-2-2γ/(i^-1-γ + κ)^2) 2 α^2 (1-ρ) ( ∑_i=1^P i^-δ-1-γ)
+
Q^-1·κ^2 (1 - 2 α^2 (1-ρ)) ∑_i=1^P i^-δ - 1- γ/(i^-1-γ + κ)^2
+ Q^-1 2κ (1-ρ) α (1 - 2 α) ∑_i=1^P i^-δ - 2(1+γ)/(i^-1-γ+κ)^2
+ Q^-1 2 α (1-ρ) 1/N(∑_i=1^P i^-2-2γ/(i^-1-γ + κ)^2) · (1 - 2α) ∑_i=1^P i^-δ - 2-2γ/i^-1-γ + κ
By expanding some of these terms, we see that:
𝔼_[L_2^] - α^2 L^*(ρ) =
Q^-1 2 α^2 (1-ρ) 1/N(∑_i=1^P i^-2-2γ/(i^-1-γ + κ)^2) ·∑_i=1^P i^-δ-1-γ
+ Q^-1·κ^2 ∑_i=1^P i^-δ - 1-γ/(i^-1-γ + κ)^2 - Q^-12 α^2 (1-ρ) ·κ^2 ∑_i=1^P i^-δ -1- γ/(i^-1-γ + κ)^2
+ Q^-1 2κ (1-ρ) α (1 - α) ∑_i=1^P i^-δ - 2(1+γ)/(i^-1-γ+κ)^2 - Q^-1 2κ (1-ρ) α^2 ∑_i=1^P i^-δ - 2(1+γ)/(i^-1-γ+κ)^2
+ Q^-1 2 α (1 - α)(1-ρ) 1/N(∑_i=1^P i^-2-2γ/(i^-1-γ + κ)^2) ·∑_i=1^P i^-δ - 2-2γ/i^-1-γ + κ
- Q^-1 2 α^2 (1-ρ) 1/N(∑_i=1^P i^-2-2γ/(i^-1-γ + κ)^2) ·∑_i=1^P i^-δ - 2-2γ/i^-1-γ + κ .
When we collect terms, we obtain:
𝔼_[L_2^] - α^2 L^*(ρ)
= Q^-1·κ^2 ∑_i=1^P i^-δ - 1-γ/(i^-1-γ + κ)^2 + Q^-1 2κ (1-ρ) α (1 - α) ∑_i=1^P i^-δ - 2(1+γ)/(i^-1-γ+κ)^2
+ Q^-1 2 α (1 - α)(1-ρ) 1/N(∑_i=1^P i^-2-2γ/(i^-1-γ + κ)^2) ·∑_i=1^P i^-δ - 2-2γ/i^-1-γ + κ
- Q^-1 2κ (1-ρ) α^2 (∑_i=1^P i^-δ - 2(1+γ)/(i^-1-γ+κ)^2 + ∑_i=1^P κ· i^-δ -1- γ/(i^-1-γ + κ)^2)
+
Q^-1 2 α^2 (1-ρ) 1/N(∑_i=1^P i^-2-2γ/(i^-1-γ + κ)^2) ·(∑_i=1^P i^-δ-1-γ - ∑_i=1^P i^-δ - 2-2γ/i^-1-γ + κ)
= Q^-1·κ^2 ∑_i=1^P i^-δ - 1-γ/(i^-1-γ + κ)^2 + Q^-1 2κ (1-ρ) α (1 - α) ∑_i=1^P i^-δ - 2(1+γ)/(i^-1-γ+κ)^2
+ Q^-1 2 α (1 - α)(1-ρ) 1/N(∑_i=1^P i^-2-2γ/(i^-1-γ + κ)^2) ·∑_i=1^P i^-δ - 2-2γ/i^-1-γ + κ
- Q^-1 2κ (1-ρ) α^2 (∑_i=1^P i^-δ - 1 - γ/(i^-1-γ+κ))
+
Q^-1 2 κα^2 (1-ρ) 1/N(∑_i=1^P i^-2-2γ/(i^-1-γ + κ)^2) ·i^-δ - 1-γ/i^-1-γ + κ.
Combining the last two terms gives us the desired statement.
Now, we are ready to prove Lemma <ref>.
For the first bound, we observe that:
α^2 L^*(ρ) - 𝔼_[L_2^]
≤_(A) 2 α^2 κ (1-ρ) ∑_i=1^P i^-δ - 1 -γ/i^-1-γ + κ
=_(B) O(α^2 (1-ρ) κ^min(1+γ, δ+γ)/1+γ)
=_(C) O(κ^/1+γ)
=_(D) O(max(λ^/1+γ, N^-) )
where (A) uses Lemma <ref>, (B) uses Lemma <ref>, (C) uses that δ≤ 1 and ρ∈ [0, 1), and (D) uses Lemma <ref>.
For the second bound, we observe that:
𝔼_[L_2^] - α^2 L^*(ρ)
≤_(A)
Q^-1·κ^2 ∑_i=1^P i^-δ -1- γ/(i^-1-γ + κ)^2
+ Q^-1 2κα (1-α) (1-ρ) ∑_i=1^P i^-δ - 2(1+γ)/(i^-1-γ+κ)^2
+ Q^-1 2 α (1-α) (1-ρ) 1/N(∑_i=1^P i^-2-2γ/(i^-1-γ + κ)^2) ·∑_i=1^P i^-δ - 2-2γ/i^-1-γ + κ
=_(B)
O( κ^min(2(1+γ), γ + δ)/1+γ + α (1-α) (1-ρ) κ^min(1+γ, γ + δ)/1+γ + α (1-α) (1-ρ) κ^-1/1+γ/N)
=_(C)
O( κ^γ + δ/1+γ + (1-α) (1-ρ) κ^γ + δ/1+γ + (1-α) (1-ρ) κ^-1/1+γ/N)
= O( κ^γ + δ/1+γ + (1-α) (1-ρ) κ^-1/1+γ/N)
=_(D) O( max(λ^/1+γ, N^-) + (1-α) (1-ρ) min(λ^-1/1+γ, N)/N)
where (A) uses Lemma <ref>, (B) uses Lemma <ref> and Lemma <ref>, (C) uses that δ≤ 1 and α≥ 0.5, and (D) uses Lemma <ref>.
§.§ Extension of Theorem <ref>
We next study the market entry threshold in the environment of Theorem <ref> where the incumbent has finite data and the new company faces no safety constraint. We place the further assumption that δ≤ 1. We compute the following upper bound on the modified market entry threshold.
Suppose that the power-law scaling holds for the eigenvalues and alignment coefficients with scaling exponents γ > 0, δ∈ (0, 1] and correlation coefficient ρ∈ [0, 1), and suppose that P = ∞. Assume that = ∞. Suppose that the safety constraint satisfies (<ref>). Then we have that = (, , ∞, , ) satisfies:
:=
O() if ≤G̃_I^-1/2 (1-ρ)^-1/2
O(^1/+1·G̃_I^-1/2(+1) (1-ρ)^-1/2(+1)) if G̃_I^-1/2 (1-ρ)^-1/2≤≤G̃_I^-1/2 - 1/(1-ρ)^1/2
O(G̃_I^-1/) if ≥G̃_I^-1/2 - 1/(1-ρ)^1/2,
where L^*(ρ) = 𝔼_[(β_1 - β_2)^T Σ (β_1 - β_2)] = Θ(1 - ρ), where α^* = √(min(, L^*(ρ))/L^*(ρ)), where α̃ := √((1-α^*) + (α^*)^2), where G̃_I = (1-α̃)^2 (1-ρ), and where = min(2(1+γ), γ + δ) = γ + δ.
Theorem <ref> shows that the key qualitative finding from Theorem <ref>—that the new company can enter with = o() data as long as the incumbent's dataset size is sufficiently large—readily extends to this setting. We note that the bound in Theorem <ref> and the bound in Theorem <ref> take slightly different forms: the term G_I = (√(L^*(ρ))- √(min(L^*(ρ), )))^2 = Θ((1-α^*)^2 (1-ρ)) is replaced by G̃_I = (1-α̃)^2(1-ρ). We expect some of these differences arise because the bound in Theorem <ref> is not tight, rather than fundamental distinctions between the two settings. Proving a tight bound on the modified market entry threshold is an interesting direction for future work.
To prove this, we compute a lower bound on the incumbent's loss 𝔼_[^*(β_1, β_2, , λ̃_I, , α̃_I)].
Suppose that the power-law scaling holds for the eigenvalues and alignment coefficients with scaling exponents γ > 0, δ∈ (0, 1] and correlation coefficient ρ∈ [0, 1), and suppose that P = ∞. Assume that = ∞. Suppose that the safety constraint satisfies (<ref>).
Then we have that:
𝔼_[L^*_1(β_1, β_2, , _I, , α̃_I)]
= Ω( ^-) if ≤G̃_I^-1/2 (1-ρ)^-1/2
Ω(^-/+1·G̃_I^/2(+1) (1-ρ)^/2(+1)) if G̃_I^-1/2 (1-ρ)^-1/2≤≤G̃_I^-1/2 - 1/(1-ρ)^1/2
Ω(G̃_I) if ≥G̃_I^-1/2 - 1/(1-ρ)^1/2.
where L^*(ρ) = 𝔼_[(β_1 - β_2)^T Σ (β_1 - β_2)] = Θ(1 - ρ), where α^* = √(min(, L^*(ρ))/L^*(ρ)), where α̃ := √((1-α^*) + (α^*)^2), where G̃_I = (1-α̃)^2 (1-ρ)
and where = min(2(1+γ), γ + δ) = γ + δ.
By Corollary <ref> and Lemma <ref>, we know that:
𝔼_[^*(β_1, β_2, , λ̃_I, , α̃_I)] = Ω(κ^/1+γ) = Ω(max(λ^/1+γ, ^-)).
Let C_δ, γ be an implicit constant[We need to introduce an implicit constant because of O() is permitted to hide constants that depend on δ and γ.] such that:
𝔼_[^*(β_1, β_2, , λ̃_I, , α̃_I)] ≥ C_δ, γmax(λ^/1+γ, ^-)
By Lemma <ref>, there also exists an implicit constant C'_δ, γ such that:
α^2 L^*(ρ) - 𝔼_[L_2^(β_1, β_2, , λ, , α)] ≤ C'_δ, γmax(λ^/1+γ, ^-).
We now split into two cases: (1) C'_δ, γ/C_δ, γ𝔼_[^*(β_1, β_2, , λ̃_I, , α̃_I)] ≥ (1-α^*)L^*(ρ), and (2) C'_δ, γ/C_δ, γ𝔼_[^*(β_1, β_2, , λ̃_I, , α̃_I)] ≤ (1-α^*)L^*(ρ).
Case 1: C'_δ, γ/C_δ, γ𝔼_[^*(β_1, β_2, , λ̃_I, , α̃_I)] ≥ (1-α^*)L^*(ρ). It follows from (<ref>) that:
𝔼_[^*(β_1, β_2, , λ̃_I, , α̃_I)] ≥ C_δ, γmax(λ^/1+γ, ^-) ≥ C_δ, γ^-.
Using the condition for this case, this implies that:
≤(1/C_δ, γ𝔼_[^*(β_1, β_2, , λ̃_I, , α̃_I)])^-1/
≤(1/C'_δ, γ (1-α^*) L^*(ρ) )^-1/
= O(((1-α̃) (1-ρ) )^-1/)
= O(G̃_I^-1/2 (1-ρ)^-1/2 ).
This proves that is up to constants within the first branch of the expression in the lemma statement. Since the bound in the lemma statement only changes by constants (that depend on δ and γ) between the first branch and second branch, this proves the desired expression for this case.
Case 2: C'_δ, γ/C_δ, γ𝔼_[^*(β_1, β_2, , λ̃_I, , α̃_I)] ≤ (1-α^*)L^*(ρ). Note that α^* = √(min(, L^*(ρ))/L^*(ρ)) is the mixture parameter that achieves the safety constraint in the infinite-data ridgeless setting. The incumbent's safety constraint means that:
𝔼_[L_2^(β_1, β_2, , λ̃_I, , α̃_I)] ≤ (α^*)^2 L^*(ρ).
By (<ref>), this implies that
(α̃_I)^2 L^*(ρ) ≤ C'_δ, γ·max(λ^δ+ γ/1+γ, ^-δ-γ) + (α^*)^2 L^*(ρ).
Now, applying (<ref>) and the assumption for this case, we see that:
(α̃_I)^2 L^*(ρ) ≤C'_δ, γ/C_δ, γ·𝔼_[^*(β_1, β_2, , λ̃_I, , α̃_I)] + (α^*)^2 L^*(ρ)
≤ (1-α^*)L^*(ρ) + (α^*)^2 L^*(ρ).
This implies that:
α̃_I ≤√((1-α^*) + (α^*)^2).
Let α̃ := √((1-α^*) + (α^*)^2). Plugging this into Corollary <ref>, we see that:
𝔼_[L^*_1(β_1, β_2, , _I, , α̃_I)]
≥inf_α∈[0.5, α̃]inf_ > 0𝔼_[L^*_1(β_1, β_2, , , , α)]
= Θ(
inf_ > 0𝔼_[L^*_1(β_1, β_2, Σ, , , α̃)] )
= Θ(^-) if ≤ (1-α̃ )^-1/(1-ρ)^-1/
Θ((/(1-α̃ )(1-ρ))^-/ + 1) if (1-α̃ )^-1/(1-ρ)^-1/≤≤ (1-α̃ )^-2+/ (1-ρ)^-1/
Θ((1-α̃ )^2(1-ρ)) if ≥ (1-α̃ )^-2+/ (1-ρ)^-1/,
= Θ( ^-) if ≤G̃_I^-1/2 (1-ρ)^-1/2
Θ(^-/+1·G̃_I^/2(+1) (1-ρ)^/2(+1)) if G̃_I^-1/2 (1-ρ)^-1/2≤≤G̃_I^-1/2 - 1/(1-ρ)^1/2
Θ(G̃_I) if ≥G̃_I^-1/2 - 1/(1-ρ)^1/2.
The statement follows in this case.
We are now ready to prove Theorem <ref>.
We analyze (α̃_C, _C) first for the incumbent C = I and then for the entrant C = E. Like in the theorem statement, let L^*(ρ) = 𝔼_[(β_1 - β_2)^T Σ (β_1 - β_2)] = Θ(1 - ρ) (Claim <ref>) and G_I := (√(L^*(ρ)) - √(min(, L^*(ρ))))^2, and = min(2(1+γ), δ + γ).
Analysis of the incumbent C = I.
We apply Lemma <ref> to see that:
𝔼_[L^*_1(β_1, β_2, , _I, , α̃_I)]
= Ω( ^-) if ≤G̃_I^-1/2 (1-ρ)^-1/2
Ω(^-/+1·G̃_I^/2(+1) (1-ρ)^/2(+1)) if G̃_I^-1/2 (1-ρ)^-1/2≤≤G̃_I^-1/2 - 1/(1-ρ)^1/2
Ω(G̃_I) if ≥G̃_I^-1/2 - 1/(1-ρ)^1/2.
.
Analysis of the entrant C = E. Since the entrant faces no safety constraint, the entrant can choose any α∈ [0.5, 1]. We apply Corollary <ref> to see that:
𝔼_[L^*_1(β_1, β_2, , _E, N, α̃_E)] = inf_α∈ [0.5, 1]inf_ > 0𝔼_[L^*_1(β_1, β_2, , , N, α)] = Θ(
N^-),
which means that:
^*(, , ∞, , ) =
O() if ≤G̃_I^-1/2 (1-ρ)^-1/2
O(^1/+1·G̃_I^-1/2(+1) (1-ρ)^-1/2(+1)) if G̃_I^-1/2 (1-ρ)^-1/2≤≤G̃_I^-1/2 - 1/(1-ρ)^1/2
O(G̃_I^-1/) if ≥G̃_I^-1/2 - 1/(1-ρ)^1/2
as desired.
§.§ Extension of Theorem <ref>
We next study the market entry threshold in the environment of Theorem <ref> where the incumbent has infinite data and the new company faces a nontrivial safety constraint. We place the further assumption that δ≤ 1. We compute the following upper bound on the modified market entry threshold.
Suppose that the power-law scaling holds for the eigenvalues and alignment coefficients with scaling exponents γ > 0, δ∈ (0,1], and correlation coefficient ρ∈ [0, 1), and suppose that P = ∞. Suppose that the safety constraints and satisfy (<ref>).
Then it holds that = (∞, , , , ) satisfies:
:=
O(max(D̃^-1/, D̃^- + 1/( G_E^1/2 (1-ρ)^1/2 + 1/2 G_I - 1/2 G_E ) ) ),
where L^*(ρ) = 𝔼_[(β_1 - β_2)^T Σ (β_1 - β)] = Θ(1 - ρ), where = min(2(1+γ), δ + γ) = δ + γ, where
G_I := (√(L^*(ρ)) - √(min(, L^*(ρ))))^2 and G_E := (√(L^*(ρ)) - √(min(, L^*(ρ))))^2, and where:
D̃ := α^*_E · (G_I - G_E) - (G_I - G_E)^2/4 · L^*(ρ) .
Theorem <ref> shows that the key qualitative finding from Theorem <ref>—that the new company can enter with finite data, as long as they face a strictly weaker safety constraint than the incumbent company—readily extends to this setting. We note that the bound in Theorem <ref> and the bound in Theorem <ref> take slightly different forms. Some of these differences are superficial: while the bound in Theorem <ref> contains two—rather than three—regimes, the third regime in Theorem <ref> does not exist in the case where δ≤ 1. Other differences are more substantial: for example, the bound in Theorem <ref> scales with D̃ while the bound in Theorem <ref> scales with D. However, we expect some of this difference arises because the bound in Theorem <ref> is not tight, rather than fundamental distinctions between the two settings. Proving a tight bound on the modified market entry threshold is an interesting direction for future work.
We compute an upper bound on the number of data points that the new company needs to achieve at most loss (√(L^*(ρ)) - √(min(, L^*(ρ))))^2 on performance.
Suppose that the power-law scaling holds for the eigenvalues and alignment coefficients with scaling exponents γ > 0, δ∈ (0, 1] and correlation coefficient ρ∈ [0, 1), and suppose that P = ∞. Suppose that the safety constraints and satisfy (<ref>). For sufficiently large constant C_δ, γ, if
≥
C_δ, γ·max(D̃^-1/, D̃^- + 1/( G_E^1/2 (1-ρ)^1/2 + 1/2 G_I - 1/2 G_E ) ),
then it holds that:
𝔼_[^*(β_1, β_2, , _E, , α̃_E)] ≤ G_I,
where L^*(ρ) = 𝔼_[(β_1 - β_2)^T Σ (β_1 - β)] = Θ(1 - ρ), where = min(2(1+γ), δ + γ) = δ + γ, where
G_I := (√(L^*(ρ)) - √(min(, L^*(ρ))))^2 and G_E := (√(L^*(ρ)) - √(min(, L^*(ρ))))^2, and where:
D̃ := α^*_E · (G_I - G_E) - (G_I - G_E)^2/4 · L^*(ρ) .
It suffices to construct α̃ and λ̃ such that
𝔼_[L̃_2(β_1, β_2, , , , α̃)] ≤
and
𝔼_[^*(β_1, β_2, , , , α̃)] ≤ G_I
for = Ω(max(D̃^-1/, D̃^- + 1/( G_E^1/2 (1-ρ)^1/2 + 1/2 G_I - 1/2 G_E ) )).
To define α̃ and , it is inconvenient to work with the following intermediate quantities. Let α^*_E = (√(L^*(ρ)) - √(min(, L^*(ρ))))^2 and let α^*_I = (√(L^*(ρ)) - √(min(, L^*(ρ))))^2. We define an error function:
f(, α, λ) := max(λ^/1 + γ, ^-) + (1-α) (1-ρ) min(λ^-1/1 + γ, )/
We define:
α̃ := α^*_E + 1/2 (1 - α^*_E)^2 - 1/2 (1 - α^*_I)^2 = α^*_I + α^*_E - α^*_I/2.
and
:= inf_∈ (0,1) f(, α̃, λ).
At these values of α̃ and and under the condition on , observe that:
f(, α̃, ) = Θ(max(^-, (/(1-α̃)(1-ρ))^-/+1) )
= Θ(max(^-, (/G_E^1/2 (1-ρ)^1/2 + 1/2G_I + 1/2 G_E)^-/+1) )
= O(D̃),
where the implicit constant can be reduced by increasing the implicit constant on .
The remainder of the analysis boils down to showing that 𝔼_[L̃_2(β_1, β_2, , , , α̃)] ≤ and 𝔼_[^*(β_1, β_2, , , , α̃)] ≤ G_I. To show this, we first derive an error function and bound these losses in terms of the error function.
Bounding 𝔼_[L̃_2(β_1, β_2, , , , α̃)] ≤. Observe that:
𝔼_[L̃_2(β_1, β_2, , , , α̃)]
=_(A)α̃^2 L^*(ρ) + O( max(λ^/1 + γ, ^-) + (1-α) (1-ρ) min(λ^-1/1 + γ, )/)
= (α^*_E + 1/2 (1-α^*_E)^2 - 1/2 (1-α^*_I)^2) L^*(ρ) + O(f(, α̃))
≤((α^*_E)^2 L^*(ρ) + ((1-α^*_I)^2 - (1-α^*_E)^2)^2/4 - α^*_E ((1-α^*_I)^2 - (1-α^*_E)^2) ) L^*(ρ) + D̃
= + (G_I - G_E)^2/4 · L^*(ρ) - α^*_E(G_I - G_E) α^*_E · (G_I - G_E) - (G_I - G_E)^2/4 · L^*(ρ)
=
where (A) follows from Lemma <ref>. This gives us the desired bound.
Bounding 𝔼_[^*(β_1, β_2, , , , α̃)].
Observe that:
𝔼_[^*(β_1, β_2, , , , α̃)]
=_(A) (1-α̃)^2 L^*(ρ) + O( max(λ^/1 + γ, ^-) + (1-α) (1-ρ) min(λ^-1/1 + γ, )/)
≤ (1- α^*_E - 1/2 (1-α^*_E)^2 + 1/2 (1-α^*_I)^2)^2 L^*(ρ) + O(f(, α̃))
≤((1-α^*_E)^2 + ((1-α^*_I)^2 - (1-α^*_E)^2)^2/4 - (1-α^*_E) ((1-α^*_I)^2 - (1-α^*_E)^2) ) L^*(ρ) + D̃
≤
G_E + (G_I - G_E) (1-α^*_E) + (G_I - G_E)^2/4 L^*(ρ) + α^*_E · (G_I - G_E) - (G_I - G_E)^2/4 · L^*(ρ)
= G_I.
where (A) uses Theorem <ref>, coupled with the fact that δ≤ 1 (which means that ' =, so the mixture finite data error is subsumed by the finite data error) and coupled with Lemma <ref>. This gives us the desired bound.
We are now ready to prove Theorem <ref>.
We analyze (α̃_C, _C) first for the incumbent C = I and then for the entrant C = E. Like in the theorem statement, let L^*(ρ) = 𝔼_[(β_1 - β_2)^T Σ (β_1 - β)] = Θ(1 - ρ), let = min(2(1+γ), δ + γ) = δ + γ, let
G_I := (√(L^*(ρ)) - √(min(, L^*(ρ))))^2 and G_E := (√(L^*(ρ)) - √(min(, L^*(ρ))))^2, and let:
D̃ := α^*_E · (G_I - G_E) - (G_I - G_E)^2/4 · L^*(ρ).
Analysis of the incumbent C = I.
To compute α̃_I and _I, we apply Lemma <ref>.
The assumption ≥𝔼_[(β_1, β_2, Σ, 0.5)] in the lemma statement can be rewritten as ≥ 0.25 L^*(ρ), which guarantees the assumptions in Lemma <ref> are satisfied. By Lemma <ref>, we see that:
𝔼_[L^*_1(β_1, β_2, , _I, ∞, α̃_I)] = (√(L^*(ρ)) - √(min(, L^*(ρ)))^2 = G_I.
Analysis of the entrant C = E.
We apply Lemma <ref> to see
for sufficiently large constant C_δ, γ, if
≥
C_δ, γ·max(D̃^-1/, D̃^- + 1/( G_E^1/2 (1-ρ)^1/2 + 1/2 G_I - 1/2 G_E ) ),
then it holds that:
𝔼_[^*(β_1, β_2, , _E, , α̃_E)] ≤ G_I = 𝔼_[L^*_1(β_1, β_2, , _I, ∞, α̃_I)].
This means that:
= O( max(D̃^-1/, D̃^- + 1/( G_E^1/2 (1-ρ)^1/2 + 1/2 G_I - 1/2 G_E ) ))
as desired.
|
http://arxiv.org/abs/2409.02431v1 | 20240904041825 | Adversarial Learning for Neural PDE Solvers with Sparse Data | [
"Yunpeng Gong",
"Yongjie Hou",
"Zhenzhong Wang",
"Zexin Lin",
"Min Jiang"
] | cs.LG | [
"cs.LG"
] |
Adversarial Learning for Neural PDE Solvers with Sparse Data
Yunpeng Gong
School of Informatics
Xiamen University
Yongjie Hou
School of Informatics
Xiamen University
Zhenzhong Wang
Department of Computing
The Hong Kong Polytechnic University
Zexin Lin
School of Informatics
Xiamen University
Min JiangCorresponding author
School of Informatics
Xiamen University
September 9, 2024
=======================================================================================================================================================================================================================================================================================================================================================================================================================================
§ ABSTRACT
Neural network solvers for partial differential equations (PDEs) have made significant progress, yet they continue to face challenges related to data scarcity and model robustness. Traditional data augmentation methods, which leverage symmetry or invariance, impose strong assumptions on physical systems that often do not hold in dynamic and complex real-world applications. To address this research gap, this study introduces a universal learning strategy for neural network PDEs, named Systematic Model Augmentation for Robust Training (SMART). By focusing on challenging and improving the model's weaknesses, SMART reduces generalization error during training under data-scarce conditions, leading to significant improvements in prediction accuracy across various PDE scenarios. The effectiveness of the proposed method is demonstrated through both theoretical analysis and extensive experimentation. The code will be available.
§ INTRODUCTION
Partial differential equations (PDEs) have a long-standing history of application across science and engineering, providing a formal mathematical framework to describe and solve dynamic systems involving multiple variables. These systems span across fields such as quantum mechanics, fluid dynamics, and electromagnetism. In the modern realms of science and engineering, optimizing system performance governed by physical laws is a common task across disciplines, including image processing <cit.>, shape optimization <cit.>, drug transport <cit.>, and finance <cit.>. We are witnessing an increasing application of PDEs in predicting capabilities from structural analysis of high-rise buildings and tunnels to the design of cars and rockets, and even to the thermal management and electromagnetic interference shielding in the latest smartphones. Specifically, we investigate PDEs of the following form:
u_t + A[u] = 0, x ∈Ω, t ∈ [0, T]
u(x,0) = h(x), x ∈Ω
u(x,t) = g(x,t), x ∈∂Ω
where u is the solution to the PDEs, A[u] is an operator acting on u, which can be linear or nonlinear. Ω is the domain, typically a subset of ℝ^D. T is the termination time, h(x) is the initial condition that defines the state of the solution at t = 0 at position x, and g(x,t) is the boundary condition that prescribes the value of the solution at the domain boundary ∂Ω at each time point.
Over the past few decades, various numerical methods such as finite difference, finite element, and spectral methods have gradually replaced analytical approximations for linear and coupled items. These numerical methods provide effective tools for addressing complex and atypical PDEs systems. However, traditional numerical methods face significant challenges when dealing with nonlinear features, multi-scale characteristics, uncertain boundary conditions, and the complexity of high-dimensional data processing. The revolutionary progress in machine learning offers a new approach to PDEs solving, with increasing applications in solving partial differential equations. Particularly, the idea of learning a computationally cheap but sufficiently accurate substitute for classical solvers has proved very effective. Neural networks, as powerful function approximators, have been introduced into PDEs solving <cit.>, demonstrating substantial potential in handling complex problems. These neural network PDE solvers are laying the groundwork and rapidly becoming a fast-growing and impactful research area.
In the field of scientific machine learning (SciML), particularly in deep learning, where large volumes of data are typically required, data resources are often scarce and costly. Faced with these challenges, data augmentation <cit.> has become a cost-effective strategy for expanding training datasets <cit.>. This approach not only increases the model's exposure to diverse data features but also acts as a regularization technique, helping to reduce overfitting to noise and atypical features. Although research on data augmentation in SciML is gradually increasing, the literature remains sparse, with only a few preliminary studies exploring innovative methods.
In the field of neural PDEs, current data augmentation techniques primarily rely on principles of symmetry and invariance <cit.>. A representative method based on symmetry, Brandstetter et al. <cit.> utilize Lie point symmetry for data augmentation. Lie point symmetry <cit.> is a concept in mathematics and physics that involves exploiting the symmetry of a system to generate new solutions, thereby expanding the solution space. By leveraging this symmetry in solving PDEs, the diversity of the training dataset can be increased, improving the generalization ability of the neural network model. However, the effectiveness of the Lie point symmetry method depends on the PDEs having identifiable symmetries. Not all PDEs exhibit sufficient or apparent symmetry for this method to be viable, which limits its universality.
A representative method based on invariance, Fanaskov <cit.> employs generalized covariance for data augmentation. Generalized covariance refers to the property that physical laws remain invariant under different coordinate systems or frames of reference. This method enhances data diversity by generating different perspectives of data instances through transformations of the PDE coordinate system, thereby training the neural network to better understand and adapt to the fundamental laws and structural changes of the physical system. The effectiveness of this method is highly contingent upon the PDE system's ability to maintain its form invariant across different coordinate descriptions, which is a strong assumption for many practical problems.
Different from traditional data augmentation methods, this paper explores the application of adversarial learning in PDE solvers, specifically addressing data scarcity and enhancing practicality. By targeting and rectifying underfitted areas within the model, this approach not only effectively augments limited data resources but also substantially enhances the model's robustness and generalization capabilities. Adversarial samples are designed to reveal and exploit the vulnerabilities of models by applying meticulously designed minute perturbations to the input data, causing model predictions to deviate from true values <cit.>. Traditionally, adversarial samples have been widely used in domains such as image classification and image retrieval to evaluate and improve the robustness of models. <cit.>.
Interestingly, our research shows that adversarial samples can also play a unique role in the field of physical modeling, particularly in revealing the vulnerabilities of models when fitting physical problems. In this paper, we extend this concept to the domain of physical modeling, especially for neural network models used to solve PDEs. By incorporating adversarial samples, our method can effectively simulate various disturbances and uncertainties that might be encountered in real physical systems, and force the model to maintain stable predictive performance under these disturbances, thereby expanding the model's application range and improving its adaptability to unknown situations.
In this research, we employ neural network-based PDE solvers, focusing on using adversarial learning strategies to enhance their robustness and generalization capabilities in scenarios characterized by data sparsity. We have developed an efficient model training strategy that allows the solver to adapt to complex partial differential equations even under conditions of data scarcity. By introducing carefully designed adversarial samples during the training process, our method effectively expands the learning range of the model and enhances its ability to handle uncertainties and potential anomalies. The effectiveness of this method is validated through experiments in various PDEs application scenarios, including equations with different physical and mathematical characteristics. In many test scenarios, the proposed method performed excellently, demonstrating significant improvements in prediction accuracy over traditional methods. These results showcase the potential of deep learning in the field of traditional scientific computing and provide empirical evidence for deploying similar technologies across a broader range of engineering and technological applications.
Our contributions are summarized as follows:
∙ Our work is the first to discuss how to design adversarial sample generation strategies tailored for the domain of physical modeling, and theoretically analyzes how the proposed method can effectively reduce overall generalization errors by integrating adversarial samples into the training process.
∙ By introducing an adversarial learning strategy, we have proposed an enhanced neural network PDEs solver that significantly improves the model's generalization capability and robustness under conditions of data scarcity and complex physical problems.
∙ Extensive experimental validation shows that our model significantly outperforms traditional methods in various complex PDEs scenarios, demonstrating the application potential of deep learning in traditional scientific computing.
§ PDE DATA AUGMENTATION EXAMPLES
§.§ General Covariance Data Augmentation
This method increases the diversity and coverage of the training dataset through coordinate transformations. Consider a simple one-dimensional elliptic equation problem:
d/dx(a(x) du(x)/dx) = f(x),
x ∈ [0, 1], u(0) = u(1) = 0.
where a(x) and f(x) are known functions representing the coefficients and source terms of the equation.
Coordinate Transformation Enhancement: To apply data augmentation, a simple coordinate transformation such as y(ξ) = ξ^3 is chosen, which is a monotonic function from [0, 1] to [0, 1] satisfying y(0) = 0 and y(1) = 1.
Under the new coordinate system ξ, the original PDE transforms into:
d/dξ(a(y(ξ)) dy(ξ)/dξdu(y(ξ))/dξ) = f(y(ξ)) (dy(ξ)/dξ)^2,
where dy(ξ)/dξ = 3ξ^2. With the specific transformation, the equation becomes:
d/dξ(a(ξ^3) 3ξ^2 du(ξ^3)/dξ) = f(ξ^3) (3ξ^2)^2.
Thus, we can generate new input-output pairs through the original solution u(x) and the transformed solution u(y(ξ)). These augmented data will be used to train the neural network, improving its generalization ability and prediction accuracy of PDEs solutions.
§.§ Lie Point Symmetry Data Augmentation
Lie point symmetry is a core concept in mathematics and physics, involving identifying symmetries of partial differential equations (PDEs) that preserve solutions. By determining all possible Lie point symmetries of a PDE, we can discover multiple transformations that do not alter the fundamental structure of the equation. This method allows us to generate new solutions from known ones, thereby expanding the training dataset's size and diversity. These newly generated solutions are mathematically valid and do not require additional costly physical simulations. A crucial preparatory step before implementing Lie point symmetry-based data augmentation is to derive all the Lie point symmetry transformations associated with a specific PDE. This step is vital as it determines the types and ranges of symmetry transformations that can be applied for data augmentation.
Here, using the Korteweg-de Vries (KdV) equation as an example, we show how to generate data augmentation samples using Lie point symmetry. The KdV equation describes a single scalar field u varying over space x and time t with the equation:
u_t + uu_x + u_xxx = 0,
where u_t is the first derivative of u with respect to time, uu_x is the product of u and its first derivative with respect to space, and u_xxx is the third derivative of u with respect to space.
Lie point symmetry enhances the dataset through the following transformations:
* Time Translation g_1(ϵ):
g_1(ϵ)(x, t, u) = (x, t + ϵ, u),
This transformation shifts the solution along the time axis by ϵ.
* Space Translation g_2(ϵ):
g_2(ϵ)(x, t, u) = (x + ϵ, t, u),
This transformation shifts the solution along the spatial axis by ϵ.
* Galilean Transformation g_3(ϵ):
g_3(ϵ)(x, t, u) = (x + ϵ t, t, u + ϵ),
This transformation involves dynamic adjustments in both space and the solution itself.
* Scaling Transformation g_4(ϵ):
g_4(ϵ)(x, t, u) = (e^ϵ x, e^3ϵ t, e^-2ϵ u),
This transformation adjusts the scales of space, time, and the solution.
When training the neural network solver, by randomly selecting one or more transformation parameters ϵ, we start with a solution u from the training set and apply the above transformations sequentially:
u' = g_4(ϵ_4) g_3(ϵ_3) g_2(ϵ_2) g_1(ϵ_1) u.
In this way, the newly generated solutions u' not only expand the size of the dataset but also enhance the model's understanding of the dynamics of physical systems and its generalization capabilities. This method requires precise symmetry derivation of the PDEs being processed before augmentation can be applied. Symmetries identified for specific PDEs may not apply to others, necessitating individual symmetry analysis and validation for each new PDEs problem.
§ PROPOSED METHODS
Here we demonstrate our method using examples of a one-dimensional heat conduction equation and a two-dimensional incompressible Navier-Stokes equation.
§.§ One-dimensional Burgers’ Equation Adversarial Sample Generation Example
We use the one-dimensional Burgers’ equation as an example to describe how to generate adversarial samples for PDE equations, described as follows:
∂ u/∂ t + u ∂ u/∂ x = ν∂^2 u/∂ x^2,
where u(x,t) represents the velocity field at position x and time t, and ν is the kinematic viscosity. The model f(x,t;θ) approximates the solution u(x,t). The loss function L is defined as:
L(f(x,t;θ), u(x,t)) = f(x,t;θ) - u(x,t)^2 .
The goal of generating adversarial samples is to maximize the loss function through minimal perturbations to the input.
§.§ Adversarial Sample Generation Steps
Initial Gradient Calculation. For a given initial input data point (x,t), compute the gradient of the loss function with respect to the inputs:
∇_x,t L(f(x,t;θ), u(x,t)) = ( ∂ L/∂ x).
Adversarial Sample Initialization. Set the initial adversarial sample as the original input data point:
(x_adv^0, t) = (x,t).
Iterative Update of Adversarial Samples. Gradually generate adversarial samples through multiple small-step iterations. In the k-th iteration, the adversarial sample updates as follows:
(x_adv^k+1, t) = (x_adv^k, t)
+ α·sign(∇_x L(f(x_adv^k, t; θ), u(x,t))),
where α is the step size parameter, and sign(·) operation determines the sign of the gradient to maximize the loss function.
Physical Reasonableness Check.
The goal of the physical reasonableness check is to ensure that the perturbation is within a reasonable range and meets boundary conditions. For convenience, let δ_k represent the perturbation obtained in the k-th iteration:
δ_k = α·sign(∇_x L(f(x_adv^k, t; θ), u(x,t))).
We implement the physical reasonableness check by clipping the perturbation, expressed with the following formula:
(x_adv^k+1, t) = clip((x_adv^k, t) + δ_k,
(-ϵ, ϵ), (x_min, t_min), (x_max, t_max)),
where (- ϵ, ϵ) and (x_min, t_min), (x_max, t_max) are respectively the clipping ranges for the perturbation amplitude and the boundaries for the physical quantities.
(- ϵ, ϵ) ensures that the perturbation is clipped, preventing it from exceeding the maximum allowable magnitude ϵ. If (x_adv^k+1) exceeds ϵ or - ϵ after the perturbation, the clipping operation ensures they are constrained within this range. (x_min, t_min) and (x_max, t_max) are the upper and lower boundaries of the physical quantities, ensuring that the generated adversarial samples remain within a reasonable physical range. This clipping ensures that the model does not output physically unreasonable values during the generation of adversarial samples.
In traditional numerical methods, the grid granularity is the smallest unit for spatial and temporal division when discretizing a PDE, and the perturbation size ϵ can be determined based on the grid granularity of traditional numerical methods, ensuring that the perturbation is within a reasonable range while maintaining physical consistency. Assuming that during the discretization process, the spatial variable x has a grid granularity Δ x, then the perturbation size ϵ can be set to a value proportional to the grid granularity:
ϵ = κΔ x,
where κ is a proportion coefficient less than 1, depending on the sensitivity of the physical problem and the robustness requirements of the model, ensuring that the perturbation does not exceed the grid resolution.
By matching the perturbation size with the grid granularity, the generation process of adversarial samples can be better controlled, ensuring the perturbations are physically reasonable and that the model remains stable in response to these perturbations.
Final Adversarial Sample Generation.
After k iterations, the final adversarial sample is obtained:
(x_adv, t) = (x_adv^k, t).
Through the above steps and formulas, it is ensured that the generated adversarial samples are physically reasonable and do not exceed the preset boundary conditions.
§.§ Two-dimensional Incompressible Navier-Stokes Equation Adversarial Sample Generation Example
We use the two-dimensional incompressible Navier-Stokes equation as an example to describe how to generate adversarial samples for PDE equations, described as follows:
∂ u/∂ t + u ·∇ u = -∇ p + ν∇^2 u, ∇· u = 0,
where:
* u = (u, v) is the velocity field of the fluid, with u and v representing the velocity components in the x and y directions, respectively.
* p is the pressure field of the fluid.
* ν is the viscosity of the fluid.
The neural network model f(x, y, t; θ) is used to approximate the solution of the velocity and pressure fields. The loss function L is defined as::
L(f(x, y, t; θ)) = u_pred - u_true^2
+ p_pred - p_true^2 ,
where:
* u_pred - u_true^2 represents the squared error between the predicted and true velocity fields.
* p_pred - p_true^2 represents the squared error between the predicted and true pressure fields.
§.§ Adversarial Sample Generation Steps
During the process of generating adversarial attacks for PDEs, different physical quantities might have different magnitudes, so applying the same magnitude of perturbation could lead to overly large perturbations for some quantities and too small for others. To address this issue, we introduce a normalization step, which normalizes different physical quantities to unify their dimensions. This aims to ensure a more uniform and regulated application of perturbations, avoiding inconsistencies due to differences in physical quantity scales.
Specifically, normalization converts the values of different physical quantities into a dimensionless standardized form, aligning them within the same numerical range. Using this method, we can apply a uniform perturbation step α during the generation of adversarial samples without needing to adjust the perturbation magnitude for each physical quantity individually. For this purpose, each physical quantity q is normalized to obtain its dimensionless standardized form q_norm:
q_norm = q - q_min/q_max - q_min,
where q_min and q_max are the minimum and maximum values of the physical quantity q. This transformation ensures that all physical quantities are normalized within the range [0, 1].
Initial Gradient Calculation. For a given initial input data point (x, y, t), compute the gradient of the loss function with respect to the spatial variables:
∇_x, y L(f(x, y, t; θ)) = ( ∂ L/∂ x, ∂ L/∂ y).
Adversarial Sample Initialization: Set the initial adversarial sample as the original input data point:
(x_adv^0, y_adv^0, t) = (x, y, t).
Iterative Update of Adversarial Samples. Gradually generate adversarial samples through multiple small-step iterations. In the k-th iteration, the adversarial sample updates as follows:
(x_adv^k+1, y_adv^k+1, t) = (x_adv^k, y_adv^k, t)
+ α·sign(∇_x, y L(f(x_adv^k, y_adv^k, t; θ))),
where α is the step size parameter, and sign(·) operation determines the sign of the gradient to maximize the loss function.
Physical Reasonableness Check. After each update, the adversarial sample undergoes a physical reasonableness check to ensure that the perturbation is within a reasonable range and does not disrupt the continuity of time-series data. Specifically, perturbations should only be applied in the spatial dimensions, not involving the time dimension. The size of the perturbation ϵ can be determined based on the grid granularity to ensure that the generated adversarial sample is physically reasonable:
(x_adv^k+1, y_adv^k+1, t) = clip((x_adv^k, y_adv^k, t) + δ_k,
(x_min, y_min, t_min), (x_max, y_max, t_max)),
where:
δ_k = α·sign(∇_x, y L),
represents the perturbation obtained in the k-th iteration.
Final Adversarial Sample Generation. After k iterations, the final adversarial sample is obtained:
(x_adv, y_adv, t) = (x_adv^k, y_adv^k, t).
Through these steps, adversarial samples that are physically reasonable can be generated, maintaining the continuity and reasonableness of time-series data. These adversarial samples will be used to test the robustness and consistency of the model, ensuring that the model can effectively cope with complex physical conditions in real-world applications.
§.§ Theoretical Analysis
The purpose of generating adversarial samples S_adv is to challenge the model by exposing it to regions in the input space where its predictions may be weak. By identifying and strengthening the model's fitting ability in these critical regions, we aim to reduce the generalization error.
We introduce a coverage measure C(f_θ, S), which represents the total error of the model f_θ over the entire data distribution S. For the original data distribution S, the coverage measure is defined as:
C(f_θ, S) = ∫_S f_θ(x) - u(x)^2 dx,
where f_θ(x) represents the model's prediction and u(x) represents the true solution or target value.
After introducing adversarial samples S_adv, which are perturbations of the original data points, the new coverage measure can be expressed as:
C(f_θ, S ∪ S_adv) = ∫_S ∪ S_advf_θ(x) - u(x)^2 dx.
By including adversarial samples, S_adv, the coverage measure now accounts for potential vulnerabilities in the model, and the errors associated with these adversarial samples are explicitly minimized during training. This leads to an overall reduction in the error:
C(f_θ, S) ≥ C(f_θ, S ∪ S_adv).
This indicates that adversarial training reduces the model's generalization error.
§.§ Example Pseudocode
The supplementary materials provide the pseudocode for the proposed SMART method. This approach systematically generates adversarial samples to challenge the model's performance in areas where predictions may be vulnerable. Initially, the model generates predictions based on the input data and calculates the loss function. Then, slight perturbations are applied to the input data using the gradient of the loss, creating adversarial samples that are used for further training. By incorporating these adversarial samples during training, the model's generalization ability is enhanced, effectively reducing overall generalization error. For more details, please refer to the supplementary materials.
§ EXPERIMENTS
As shown in Fig. <ref>, by comparing panels (b), (c), and (d), one can visually observe the impact of adversarial perturbations on the accuracy of the model's predicted solutions. Both random noise and adversarial noise were set to 8% of the grid size. Compared to random noise, the impact of adversarial noise on the model's predictions is significantly more pronounced. The effects of random and adversarial noise on the model across various metrics are presented in Tab. 1 of the supplementary materials.
These results demonstrate that small adversarial perturbations can significantly degrade the model's predictive accuracy, revealing vulnerabilities at specific data points. These adversarial perturbations, which form adversarial examples, are crucial for further optimizing and enhancing the model's robustness.
As shown in Supplementary Material Fig. 1, compared to traditional standard training methods, our SMART training strategy proposed in this paper achieves a faster decrease in training loss across datasets with different numbers of training points, especially during the initial stages of training. Our SMART strategy requires fewer iterations to converge in all training point configurations, thereby demonstrating its superiority in both efficiency and effectiveness.
§.§ Evaluation Criteria
The experiments evaluate several metrics <cit.>, including RMSE, normalized RMSE (N RMSE), RMSE of conserved variables (RMSE C), RMSE at boundaries (RMSE B), and Max Error. These metrics provide insights into the accuracy and physical consistency of the model's predictions. Lower values in these metrics generally indicate higher prediction accuracy, better adherence to physical laws, and greater robustness. Additionally, we calculate data gain using g = (1 - E_method/E_test) × 100% to provide a more intuitive demonstration of the method's effectiveness, where E_method is the test error of the proposed method, and E_test is the test error of the comparison target.
§.§ Comparison Experiments
In our experiments on the 1D Burgers equation, Tab. <ref> presents the performance comparison between the standard training method and our proposed approach across different numbers of training points. The results indicate that our method demonstrates significant advantages in scenarios with limited training data (sparse data scenarios). As the number of training points increases, the overall accuracy of the model improves, leading to a substantial reduction in error values. For instance, when the number of training points is 32, our method achieves a RMSE gain of 43.87%; however, this gain decreases to 5.75% as the number of training points reaches 512. This trend suggests that as the model’s accuracy improves, further enhancements become more challenging, reflecting a common phenomenon in machine learning where gains diminish as performance approaches optimal levels. In the 1D Advection equation, it can be observed that the experimental results are similar to those obtained with the 1D Burgers equation. The experimental results for the 2D CFD equation can be found in Tab. 2 of the Supplementary Material. Our approach shows considerable potential for improving model performance in data-limited scenarios, while still providing robustness as data volume increases.
The results in Tab. <ref> compare the performance of our SMART method against General Covariance Data Augmentation (GCDA) in solving two components of the fluid velocity field for the two-dimensional Navier-Stokes equation across various models, including Fourier Neural Operator (FNO) <cit.>, Dilated Residual Network (DilResNet) <cit.>, Multilayer Perceptron (MLP) <cit.>, and Structured Neural Operator (SNO) <cit.>. A particularly notable improvement is observed in the SNO model for the v2 component, where the error reduction using SMART jumps from 21.43% with GCDA to 50.00%, more than doubling the performance gain. This significant increase clearly demonstrates the superior ability of the SMART method to enhance model accuracy, particularly in complex physical modeling scenarios. Overall, the consistent performance improvements across all models underscore the effectiveness of SMART in reducing errors and enhancing the robustness of predictions. Additional comparative experiments are provided in the supplementary materials.
Our method is complementary to existing approaches and demonstrates greater adaptability. Table <ref> shows the percentage gain in solving the Burgers' equation when combining Lie Point Symmetry Data Augmentation (LPSDA) with our method. The column labeled “+ Ours” represents the additional gain achieved by integrating our method with LPSDA. Notably, when training the FNO(NO) model, the application of the g1 symmetry transformation in LPSDA resulted in a negative gain, revealing some limitations of the approach in certain scenarios. However, with the introduction of our method, this negative gain was significantly mitigated, indicating that our approach effectively compensates for the shortcomings of LPSDA and significantly enhances model performance under challenging conditions.
§ CONCLUSION
This paper introduced an innovative adversarial learning approach called Systematic Enhancement with Adversarial Robust Training (SMART), aimed at enhancing the robustness and generalization of neural network PDE solvers in sparse data scenarios. Through extensive experiments, our method demonstrated superior performance in complex PDE scenarios compared to traditional data augmentation techniques. Moreover, our theoretical analysis supports the empirical findings, indicating that adversarial learning effectively expands the exploration range of PDE solutions and reduces the model's generalization error. Future work will focus on integrating adversarial learning into broader areas of scientific computing to address challenges posed by data scarcity.
ieee_fullname
|
http://arxiv.org/abs/2409.03350v1 | 20240905085120 | Enhancing the performance of Variational Quantum Classifiers with hybrid autoencoders | [
"G. Maragkopoulos",
"A. Mandilara",
"A. Tsili",
"D. Syvridis"
] | quant-ph | [
"quant-ph"
] |
quantikz2,backgrounds,fit,decorations.pathreplacing
ε
theoremTheorem[section]
lemma[theorem]Lemma
proposition[theorem]Proposition
corollary[theorem]Corollary
proof[1][Proof]
#1
definition[1][Definition]
#1
example[1][Example]
#1
remark[1][Remark]
#1
APS/123-QED
Department of Informatics and Telecommunications, National and Kapodistrian
University of Athens, Panepistimiopolis, Ilisia, 15784, Greece
Department of Informatics and Telecommunications, National and Kapodistrian
University of Athens, Panepistimiopolis, Ilisia, 15784, Greece
Eulambia Advanced Technologies, Agiou Ioannou 24, Building Complex C, Ag. Paraskevi, 15342, Greece
Department of Informatics and Telecommunications, National and Kapodistrian
University of Athens, Panepistimiopolis, Ilisia, 15784, Greece
Department of Informatics and Telecommunications, National and Kapodistrian
University of Athens, Panepistimiopolis, Ilisia, 15784, Greece
Eulambia Advanced Technologies, Agiou Ioannou 24, Building Complex C, Ag. Paraskevi, 15342, Greece
§ ABSTRACT
Variational Quantum Circuits (VQC) lie at the forefront of quantum machine learning research. Still, the use of quantum networks for real data processing remains challenging as the number of available qubits cannot accommodate a large dimensionality of data –if the usual angle encoding scenario is used. To achieve dimensionality reduction, Principal Component Analysis is routinely applied as a pre-processing method before the embedding of the classical features on qubits. In this work, we propose an alternative method which reduces the dimensionality of a given dataset by taking into account the specific quantum embedding that comes after. This method aspires to make quantum machine learning with VQCs more versatile and effective on datasets of high dimension. At a second step, we propose a quantum inspired classical autoencoder model which can be used to encode information in low latent spaces. The power of our proposed models is exhibited via numerical tests. We show that our targeted dimensionality reduction method considerably boosts VQC's performance and we also identify cases for which the second model outperforms classical linear autoencoders in terms of reconstruction loss.
Enhancing the performance of Variational Quantum Classifiers with hybrid autoencoders
Dimitris Syvridis
MOX - Dipartimento di Matematica “F. Brioschi”, Politecnico di
Milano, via Bonardi 9, 20133 Milan, Italy
^1 [email protected]
^2 [email protected]
^3 [email protected]
September 9, 2024
===============================================================================================================================================================================================================================================
§ INTRODUCTION
Quantum hardware reaches new milestones faster than expected and IBM's roadmap indicates that quantum computers will be capable of running 1 billion gates beyond 2033 <cit.>.
Still, Quantum Machine Learning (QML) faces numerous challenges not only at hardware level but also at design level, with the domain trying to compete with, the more established in real world applications, classical approaches <cit.>. One of the challenges concerns the encoding of high-dimensional classical data into qubit states. In literature one mainly encounters four types of encodings: basis, amplitude, Hamiltonian and angle encoding <cit.> –also known as time-evolution or rotation encoding. While amplitude encoding is the most efficient in terms of qubit resources, popular packages <cit.> reveal that at this moment, angle encoding is the most widely used for QML models built with Variational Quantum Circuits (VQCs).
However, considering angle encoding, the number of qubits is, in practice, as large as the dimension of the embedded data <cit.>, making the task intractable both for simulations and experiments.
One way to reduce the number of qubits is to increase the number of features per qubit by performing
repetitions of layers across the quantum circuit <cit.>. In a second approach Principal Component Analysis (PCA) method is used to reduce the dimensionality before embedding the extracted components to a VQC <cit.>. Still the question whether more adapted methods exist for reducing the dimension of classical data before quantum embedding remains.
To address this question, different suggestions have been put forward and tested during the last years. In <cit.> pre-trained models are used as a preface for quantum circuits, enhancing the ability of the last to classify images. In <cit.>, a supervised hybrid system optimizes quantum embedding with the help of classical deep learning. In <cit.>, the authors integrate classical vision transformers in a VQC to create hybrid units which are then jointly trained. This hybrid model is a supervised learning model, demonstrating promising results.
In recent works <cit.>, explore different flavours of angle encoding in various datasets, providing further evidence on angle encoding being the most prevalent encoding method.
In this work, we propose a new method for enhancing the encoding of high dimensional classical data into qubits by adjusting the idea of classical autoencoders (AE) to quantum variational methods, presented in Section <ref>. We call the model PCA-embedded quantum autoencoder (PQAE), since this consists from a classical autoencoding structure, the quantum data-embedding block of the VQC
under study and a kernel PCA (KPCA) method that uses the generated quantum kernels.
In contrast to existing encoding methods for QML, the proposed PQAE works with datasets of arbitrary size and complexity, by creating an autoencoder which re-generates the dataset with a bottleneck layer that has the same structure as the data embedding scheme of the VQC. A classical KPCA method is interjected after the bottleneck enabling this hybrid method to treat a big number of qubits.
In addition, we propose a quantum inspired classical model for dimesionality reduction as a secondary model, named quantum-inspired autoencoder (QAE).
QAEs shares many similarities with PQAEs, yet their advantages come from a different scope. QAEs are classical models which regenerate the dataset using a quantum inspired transformation in the `bottleneck' layer which changes the data representation.
The numerical results of Section <ref> show that PQAEs can give an important boost in the VQCs performance as compared to PCA method. QAE also does achieve lower reconstruction error than common linear AEs in four typical classification tasks: Iris, Wines, Seed and (binary) MNIST datasets. Let us finally note that both proposed methods are unsupervised learners, meaning that the encoding is indifferent to the target of the task, thus the same encoded data could also be used for a range of classification tasks or regression.
Various methods in literature have claimed the name `quantum autoencoder', some of which we list in the following. In <cit.>, a quantum autoencoder is trained to compress quantum states, a task beyond classical capabilities. The compression occurs inside the quantum circuit, making it a quantum input to quantum output autoencoder, which differs from our classical input to classical output approach. In <cit.>, the authors directly follow the proposed scheme in <cit.>, and the quantum autoencoder is followed-up by a quantum classifier. This has a similar flavour to our approach, as they use a classifier after the encoder. In <cit.>, the authors construct a fully quantum variational autoencoder, which is applicable for both classical and quantum data compression. Their proposed model is a direct translation of classical autoencoders in a quantum computer, in contrast, the models that we propose offer a hybrid approach that is not associated with an existing classical model. In <cit.>, a model where a classical autoencoder is dressed with a quantum circuit is introduced. In this model, the trainable weights are present in both the classical and quantum parts of the model, in our approach we only train
`classical weights' and use KPCA as well, with the goal of passing the data in the VQC. Finally, quantum inspired activation functions are used in <cit.>, as the authors demonstrated that quantum activation functions are efficient in feature selection of input images, which showcase the potential of applying quantum inspired methods as we do with the QAE model.
§ BASIC CONCEPTS AND METHODS
In order to make this work self-contained, we briefly describe here the basic classical and quantum methods
which we use to build the new models.
§.§ PCA method and Classical Autoencoders
PCA is a statistical technique used for dimensionality reduction, widely applied in data analysis and Machine Learning (ML) <cit.>. It aims to transform a dataset with a high-number of features into a lower-dimensional form while retaining as much variability as possible.
Given n, N-dimensional samples, 𝐱_i∈ℝ^N with i=1, …,n, one first forms the data matrix 𝐗∈ℝ^n × N. PCA begins by computing the covariance matrix 𝐂∈ℝ^N × N of 𝐗, given by:
𝐂 = 1/n𝐗^⊤𝐗 .
The eigenvectors 𝐯_1, 𝐯_2, …, 𝐯_N ∈ℝ^N of 𝐂 capture the directions of maximum variance in the data while the corresponding eigenvalues λ_1, λ_2, …, λ_N
are assumed in descending order. One chooses the first k eigenvectors, with k<N as the principal components and uses them
to reduce the dimension of the initial data from N to k by projecting
the data matrix 𝐗 on the space spanned by the k N-dimensional principal components.
KPCA <cit.> offers a flexible approach as compared to traditional PCA, when the relationships between the data variables 𝐱_i are nonlinear. It operates by mapping the data into a higher-dimensional feature space using a kernel function k(𝐱_i, 𝐱_j) that computes the similarity between the data points 𝐱_i and 𝐱_j. In KPCA, instead of computing the covariance matrix directly, the kernel matrix 𝐊∈ℝ^n with elements K_ij = k(𝐱_i, 𝐱_j) is computed. The eigenvectors 𝐚_1, 𝐚_2, …, 𝐚_n and corresponding eigenvalues α_1, α_2, …, α_n of 𝐊 capture the principal components in the higher-dimensional space. To reduce dimensions using KPCA, one simply selects the eigenvectors of the kernel matrix with the highest eigenvalues.
AEs are a type of Neural Network (NN) used for unsupervised learning, particularly employed in the field of representation learning and dimensionality reduction <cit.>. The basic function of an AE is to learn a compressed representation of the input data. This is achieved by first reducing the dimensionality of the input, then reconstructing the original input from this representation, and train the weights in the model so that a loss function between input and output is minimized, see Fig. <ref>.
In more detail, the encoder part of an AE network takes an input data 𝐱_i ∈ℝ^N and maps it to a lower-dimensional latent space representation 𝐳_i ∈ℝ^k. This process involves a series of transformations, usually implemented through L layers of NN units, represented as:
𝐳_i = E(𝐱_i) = E_L(E_L-1(… E_1(𝐱_i) … )) .
where E_j with j=1,…,L denotes the j-th layer of the encoder.
The decoder part of the AE network takes the compressed representation 𝐳_i (output of the encoder) and attempts to reconstruct the original input data 𝐱_i from it. The reconstruction process involves a series of inverse transformations, typically implemented through layers that mirror the encoder, represented as:
𝐱_𝐢 = D(𝐳_i) = D_0(D_1(… D_L-1(𝐳_i) … ))
where D_j where j=1,…,L denotes the j-th layer of the decoder, and 𝐱_𝐢 is the reconstructed output. The goal is to make 𝐱_𝐢 as close to 𝐱_i as possible by minimizing a loss function. For the analysis in this work we employ the loss function of Mean Squared Error (MSE) defined as
MSE = 1/n∑_i=1^n (𝐱_i - 𝐱_i)^2 .
AEs can be used as an alternative method to PCA for dimensionality reduction. Interestingly in <cit.> is proven that PCA is equivalent to shallow AEs consisting of only linear activation functions.
§.§ VQCs and Quantum Kernels
VQCs stand for quantum circuits built under specific architectural ansatzes and with parametrized/variational quantum gates within.
VQCs were introduced with the conception of Variational Quantum Algorithms (VQAs) <cit.> which are hybrid (quantum-classical) algorithms designed to address a big variety of computational tasks by performing optimization in the parameters of a VQC. A VQA adjusts the design of the VQC, the measurements process on the output qubit states, the classical cost function built from these outputs and the classical optimization techniques to the specific problem under study.
In this work, we deal with the complex issue of encoding classical data to be processed in a VQC in the context of classification <cit.>, a supervised ML task. For such tasks, the simplest form of a VQC consists of two components: the data-embedding or encoding block and the variational or parametrised block. The data encoding block is the part of the quantum circuit where a classical data 𝐱 is encoded into the qubits' state |ϕ(𝐱) ⟩, generating the quantum feature map: 𝐱→ |ϕ(𝐱) ⟩ from the real space to the Hilbert space. In the variational block a sequence of parameterized quantum gates is applied on the encoded quantum state, and then the parameters, θ⃗ of these gates are optimized through a classical optimization process. The unitary action of the encoding block on the quantum states, followed by the variational part of the circuit can be described as |ψ_out⟩ = V̂(θ⃗) Û(ϕ(𝐱)) |ψ_in⟩ where habitually |ψ_in⟩=|00…0 ⟩. By repeated measurements on the quantum state |ψ_out⟩ for a selected observable, one can derive probabilities for each label as for an NN.
Since in our model we mostly occupy with the encoding block of the VQC, let us focus on the encoding methods. Generally speaking, the most “qubit” efficient encoding method is the amplitude encoding method where a 2^n dimensional normalized vector 𝐱 is mapped on the state of n qubits as |ϕ(𝐱)⟩ = ∑_i=1^2^n x_i |i⟩. However while there are algorithmic procedures for preparing the quantum state |ϕ(𝐱)⟩ the number of gates in the related circuits is increasing exponentially with n <cit.>. In this work, we assume angle encoding in VQCs as this is the most `physical' encoding method, using classical features to parametrise angles of rotation gates which are in turn experimentally tunable parameters.
More specifically, we implement Pauli Feature Maps, meaning that the classical data are embedded in the quantum circuit via angle encoding in single-qubit gates using the Pauli operators σ̂_x, σ̂_y, σ̂_z. For example, the trivial case of this encoding method leverages the product of a feature x ∈[0,1] with the Pauli operator σ̂_y as R_y(x π)=e^-i(x πσ̂_y/2). This expresses the incorporation of the feature x in the gate R_y(ϕ) that rotates the Bloch vector of a state around the y axis for an angle ϕ. Such angle encodings can be performed using multiple features and Pauli gates, as e^-i∑x_iσ̂_i, increasing the complexity of the feature map <cit.>.
In addition to single qubit gates used for the embedding of data, the encoding circuit also includes entangling gates for exploiting in full extend the Hilbert space and further increasing the complexity of the quantum feature map.
Finally, another flavour of KPCA can emerge if classical kernels are replaced by quantum kernels. The latter are expressed as
k(x, x') = |⟨ϕ(x) | ϕ(x') ⟩|^2
and can be understood as a similarity measure between data points 𝐱, 𝐱' in the space which emerges from the quantum feature map of an embedding circuit: 𝐱→ | ϕ(𝐱) ⟩, <cit.>.
The elements of a quantum kernel matrix can be experimentally estimated
via the SWAP or Inversion tests and then have the same use as classical ones <cit.>. Quantum kernels become particularly interesting when
the quantum feature map is not classically computable and in consequence quantum kernels are speculated to provide quantum advantage <cit.>.
In this work we employ KPCA with quantum kernels for another aim. We employ them as a method of extracting information from the data-embedding part of the VQC and the input data. In addition, under the assumption that there is access to the quantum circuit and possibility for performing SWAP tests KPCA with quantum kernels places our proposed method tractable even for a large number of qubits.
§ THE PQAE AND QAE MODELS
In this section we present PQAE and QAE models. The first is a pre-processing method of classical data
which is adjusted to the data-embedding part of a VQC. This has a hybrid classical-quantum structure and
requires access to the specialized hardware of the quantum circuit. QAE consists of a feature map emulating a quantum feature map, embedded in the bottleneck layer of a classical AE. QAE runs exclusively on classical hardware.
§.§ PQAE for encoding on qubit states
The goal of PQAE is to transform classical data into a format that quantum circuits can efficiently use. The PQAE learns how to compress and encode classical data into this format by passing it through the feature map and then reconstructing it back to the original dataset. To do this effectively, it uses the same feature map as the VQC that will process the data later.
Given the data-encoding block of a VQC of n_q qubits and an N-dimensional train dataset, PQAE is an algorithmic procedure that employs a classical network and a quantum circuit, see Fig. <ref>. The goal of the discussed machine learning method is to adjust the input parameters of the classical network, so that the overall loss – in this case given by the MSE – is minimised across the dataset. The procedure comprises (iteratively) repeated epochs and this stops when convergence in the value of MSE is achieved. For each epoch the train dataset is randomly divided in batches of M data, and for each batch the updating of the classical weights is performed as follows:
* Every data point of the batch is passed via the same classical linear layers, which have the structure of the encoding layer of an AE, see Fig. <ref>. The aim of this step is to adjust the dimensionality of data, N, to the number of qubits, n_q.
* The values on the n_q neurons are then used as features for the data-encoding layer of the VQC under study.
Each i data point produces a quantum state |ψ_i⟩, with i=1,… M.
* A SWAP or Inversion test is used to estimate the M × M quantum kernel matrix k_i,j= |⟨ψ_i | ψ_j ⟩|^2.
* Afterwards KPCA is performed using the quantum kernel k_i,j and this outputs n_d ≤ M principal components of dimension M.
* The elements of principal components are attributed to the related data of the batch and one proceeds by passing this n_d-dimensional vector via a series of classical hidden layers until the original dimensionality N is restored.
* The mean squared error (MSE) is calculated for the batch, Eq.(<ref>), and the weights in classical encoding and decoding layers are updated according to a gradient optimization method.
One repeats the procedure for all batches and then proceed to the next epoch.
We note here that practically the number of qubits n_q can be close to the number of features N, when N is small, or when technological advances allow for more qubits. Moreover, the latent dimension n_d can be considered greater than n_q the emdedding is performed in
in the Hilbert space of dimension 2^n_q. Therefore, without loss of generality we assume that n_q ≤ n_d < N. Finally in this work we restrict the structure of classical encoding and decoding layers to linear ones.
Let us describe a case example in order to further clarify the PQAE algorithmic procedure. Suppose that a dataset with N = 100 features is given and they want to use a VQC with n_q = 4 qubits and a ZZ feature map. The ZZ feature map was introduced in <cit.> and has been used successfully in <cit.> for solving classification tasks . In order to efficiently encode these 100 features into 4 qubits the PQAE method is used. They start with a hidden layer that maps 100 nodes to 32 nodes, and then another hidden layer that maps 32 nodes to 4 nodes. This could be alternatively done through multiple hidden layers or directly from 100 to 4. The 4 nodes are then passed to a quantum computer using the ZZFeatureMap for encoding. The output data are now encoded on qubits, and they calculate the kernel matrix using the quantum algorithmic procedure of SWAP or Inversion test. They then perform KPCA with the quantum kernels to reduce the data in the transformed space. Let one select the top 4 principal components (n_d=4). Finally, they use linear layers to decode from n_d=4 to 32, and then from 32 to 100, reproducing the input data. The weights of the connections between nodes are trained to reduce MSE. After training, the PQAE model can be used to encode new unseen test data into VQC, effectively serving as a “qubit encoding” black box.
§.§ QAEs for dimensionality reduction
In the PQAE method the KPCA is used to extract information from a large Hilbert space and bring it back to
the classical space in a compact way. Thus this component is necessary for the accessibility of the method in the case
of many qubits. Here, we re-consider the model omitting the KPCA. Unavoidably the model
that we build is not useful for encoding in many qubits, but on the other hand this can be seen as an unconventional classical AE
where its bottleneck layer is `dressed' with a quantum feature map. In the next section, we investigate its advantages numerically.
QAEs begin by reducing the dimensionality of the input N-dimensional data to a smaller set of L neurons. This reduction step is analogous to the initial compression phase in classical AEs and the encoded neurons are represented by Eq.(<ref>). The operator that creates the next layer of neurons, which is derived from the output of the feature map, is represented with Eq.(<ref>)
e^-i ∑_j=1^L z_j ĝ_j .
where the variables, z_j with j=1,…,L are used to parametrize an SU(n) rotation and the ĝ_j operators are a subset of the n^2 - 1 generators of the related Lie algebra <cit.>. For simplicity let us set here L=n^2-1 and the application of the method to the more general case L<n^2-1 is, then, straightforward. The features of the initial data points are encoded in a unitary matrix of the form
[ a b; -b^* a^* ]
with a,b ∈ℂ satisfying |a|^2+|b|^2=1, which is the SU(2) case (or single qubit case). We extract the elements of the first column of this n× n matrix, which can is equivalent to the action of operating the unitary matrix on the ground state of an n-dimensional quantum system.
Alternatively one can use the average values over each row's elements, that corresponds to the operation on a state that is in equal superposition over all states of the computational basis. Either way, one obtains 2n real numbers, which for n>2 are parametrically dependent to each other
and adhere to the normalization condition. These real numbers are encoded on 2n neurons and then the decoding starts until the original dimensionality is reached. We note that, the dimensionality of the bottleneck of any autoencoder, is determined by the layer with the fewest neurons. Only in the SU(2) case, this occurs in the layer before the feature map. In all other cases, where n>2, the bottleneck is in the layer after the feature map. The whole procedure is schematically presented in Fig. <ref>.
Since QAEs aim to offer a fruitful representation in the latent space, batched QAEs form a variant where input data is processed in m batches, enhancing scalability and performance. Following Fig. <ref>, segments of the reduced representation of the data is passed through the feature map. For example, starting with an 100 dimensional input, one can reduce the dimension to 12 with classical layers and separate these features in m=4 batches. Then, using four Pauli feature maps with 3 generators per batch, 16 real outputs are produced and processed for decoding via the classical layers. In this way, the feature map is essentially used more than once, offering greater encoding capabilities. The non-batched case presented earlier is a special case of batched QAEs with m=1.
§ NUMERICAL RESULTS
In order to exhibit the efficacy of the proposed methods we first apply the PQAE model for encoding classical data on a VQC and compare the results with those obtained via PCA. We then apply QAE methodology for dimensionality
reduction of data and we compare its reconstruction capability with the one of classical linear AEs. For all tasks, PyTorch <cit.> is used to construct the AEs and the ADAM and RMSprop optimizers are used for training the PQAE and QAE models accordingly. The VQCs for the Iris, Wines and Seed dataset, are implemented in IBM's Qiskit <cit.>. All programs and datasets are available on GitHub <cit.>.
§.§ Encoding VQCs with PQAE method
VQCs in IBM's Qiskit are composed of a Pauli feature map in the data-encoding block and a parametrized ansatz in the variational block. In this approach, we use a PQAE to encode the classical data into qubits for four different datasets available in scikit-learn <cit.>, namely Iris (N=4), Wines (N=13), Seed (N=7) and MNIST (N=64). The transformed data are then passed to a VQC in order to solve the corresponding classification task. For all datasets, we set-up a n_q=4 qubit VQC using Qiskit's ZZFeatureMap with 2 repetitions and a Real Amplitudes ansatz with 3 repetitions. The classical encoding part of the PQAE consists of one linear layer which transforms the data from ℝ^N to ℝ^n_q and the decoding part consists of a linear layer which transforms the reduced space ℝ^n_d to ℝ^N. The optimization procedure is repeated using 10 different random seeds and we collect the results with best accuracy for both PQAE and PCA methods. In both cases, we train the VQC for 100 epochs. The results are reported in Table <ref>. PQAEs increased the VQC's accuracy across three out of four datasets, as compared to PCA or Raw encoding, with the most remarkable achievement in accuracy being 97% in the Iris dataset, even surpassing the accuracy of qutrit-based quantum neural networks, which aspire to offer an improvement over qubit methods <cit.>.
The advantage exhibited by PQAE stems from the fact that this pre-processing model is trained to minimize the reconstruction MSE, ensuring that the encoded data retain the essential features of the original data while filtering out multicollinearity and irrelevant information. This training acts as an unsupervised feature extraction mechanism, resulting in a cleaner and more manageable data representation. Furthermore, by utilizing the same quantum feature map in both the PQAE and the VQC ensures consistent data transformation, aligning the features learned during the autoencoder's training with those used by the VQC. This consistency allows the VQC to focus on fine-tuning its decision boundaries rather than representing the data itself. Consequently, the dimensionality reduction achieved by PQAE enables the VQC to operate on a more compact and informative data representation, leading to faster convergence and better generalization.
The combination of classical and quantum methods in the approach that we present leverages the strengths of both.
The classical autoencoder excels at unsupervised feature extraction and multicollinearity reduction, while the quantum feature map and VQC capitalize on the expressive power of quantum computations for classification.
§.§ QAEs vs classical linear AE for dimensionality reduction
QAEs provide an exploration primitive of the synergy between quantum and classical ML techniques. By integrating quantum feature maps into classical autoencoder architectures, we investigate the improvement prospects of data processing tasks based on quantum mechanics over classical alternatives, while providing numerical results. The comparison of reconstruction error between QAE and AEs provides a framework for analyzing the expressive power of given quantum transformations.
The output of the exponential map used in QAE produces features which follow an l^2-normalization, and thus it is sensible to apply the method on data having this same property. For specific types of data, e.g. wavefunctions, this property naturally occurs, for others this can be imposed by taking the extra step of normalization. All models in this study are compared with classical linear AEs which follow the exact same architecture as the QAEs, except for the feature map, see for instance Fig. <ref>, subfig. (a).
We start our numerical investigations with the Iris dataset, that is first normalized. We use QAE
to reduce its feature space from ℝ^4 to ℝ^3 at the bottleneck, and in Table <ref> we compare the results, i.e, MSE, with the one achieved by a classical AE of same architecture. The Seed dataset is treated in a similar way, and its feature space is reduced from ℝ^7 to ℝ^3.
In Table <ref> we exhibit that QAE can offer improvement to non normalized data, as well, using the Wines dataset, with the dimensionality of data reduced from ℝ^13 to ℝ^12. We achieve this result by using a batched QAE with m = 4 batches. As depicted in Fig. <ref>, subfig. (b), QAE starts with a `classical' layer which reduces the dimensionality from ℝ^13 to ℝ^12. This is followed up by four feature maps. The output of the feature maps provides the values for a layer of 16 neurons. A decoding layer follows reducing the dimensions back to ℝ^13. This procedure achieves a reduction greater than 50% on the MSE achieved by the corresponding classical AE of Fig. <ref>, subfig. (a).
Furthermore, we implement a QAE model with an alternative feature map to the one presented in Section <ref>. In the latter case, the feature map described by the Eqs.(<ref>)-(<ref>) induces an encoding from ℝ^3 to ℝ^4. Here, we experiment with a quantum-inspired map that we name Bloch encoding, which gets a 2-dimensional input and outputs a 3-dimensional vector, i.e.,
ℝ^2→ℝ^3. This is achieved via the Bloch vector representation of a qubit:
|ψ⟩ = cos(θ/2) |0⟩ + e^iϕsin(θ/2) |1⟩
where as input we consider the angles θ, ϕ and as output
the three real numbers: {cos(θ/2), sin(θ/2) cos(ϕ), sin(θ/2) sin(ϕ) }.
The latent space offered by such a feature map is thus further reduced from 3 to 2.
We apply the QAE model equipped with the Bloch feature map to the normalized Iris dataset so as to reduce the bottleneck dimension to 2. The improvements over AE can be found in Table <ref> where one may see that this type of QAE bottleneck reduces the MSE by almost 2.5 times.
Finally in Fig. <ref> we compare the QAEs' results presented in Tables <ref>-<ref> with those reached by AEs equipped with polynomial feature map. Notably, the latter exhibit very good performance as compared to other kernels <cit.>. In more details we use the map {x_1, x_2}→{x_1^2, x_2^2, x_1x_2} for the ℝ^2→ℝ^3 case and the slightly altered version of the habitual polynomial feature map {x_1, x_2, x_3}→{x_1^2, x_2^2, x_3^2, x_1x_2+x_1x_3+x_2x_3}, to match the ℝ^3→ℝ^4 case. In every dataset, the comparisons show an advantage for the QAE method over the polynomial feature maps.
§ DISCUSSION
In this work we have introduced PQAE, a method for reducing the dimensions of features of classical data sets that is tailored to the
quantum feature map of a VQC. The method uses a hybrid structure: a classical linear AE with the bottleneck layer made of the quantum feature map
of the data-embedding circuit of the VQC.
The input of the quantum feature map is a batch of train data which have passed through the encoding layer of the AE. Its output is the quantum kernel matrix for the batch that is consequently used for KPCA. The vector comprising the principal components of the quantum kernel matrix
is the output treated by the decoding layer of the AE.
The numerical results of Section <ref>
support the claim that this hybrid method of pre-processing can considerably boost the performance of a VQC in solving classification tasks.
In addition, the quantum-inspired model QAEs and batched QAEs, create new possibilities for dimensionality reduction of classical data, since we have identified cases where they greatly outperform simple linear AEs of the same latent dimension.
There are several points which can be further investigated in order to potentially improve the performance of the proposed models. In this work, we employ MSE as a measure of reproducibility of datasets but other measures could be proven more fit to our methods. We also exclusively use a linear structure for the classical layers in the models as well as for the classical AEs with which we compare the results. An extension to nonlinear activation functions seems necessary for completeness. Furthermore, all numerical examples in this work concern quantum circuits made of qubits even though the methodology supports an extension to qudits. In a future work where more
complicated data sets and multi-class problems are investigated, we would be interested in employing the methods for qudit circuits. Finally, PQAE has been developed and exhibited for variational quantum classifiers but the methodology could be adapted to other variational algorithmic procedures, such as variational quantum regressors, which also require uploading of classical data on quantum circuits.
§ ACKNOWLEDGEMENTS
This work was supported by the project Hellas QCI co-funded by the European Union under the Digital Europe Programme grant agreement No.101091504. A.M. and D.S. acknowledge partial support from the European Union’s Horizon
Europe research and innovation program under grant agreement No.101092766 (ALLEGRO Project).
|
http://arxiv.org/abs/2409.02820v1 | 20240904153827 | Semiclassical instanton theory for reaction rates at any temperature: How a rigorous real-time derivation solves the crossover temperature problem | [
"Joseph E. Lawrence"
] | physics.chem-ph | [
"physics.chem-ph",
"cond-mat.stat-mech",
"hep-th"
] |
⟨#|1<#1|
|#⟩1|#1>
#1#2⟨#2| #1 |#2⟩
#1⟶^__X#1__X
Department of Chemistry, New York University, New York, NY 10003, USA
[email protected]
§ ABSTRACT
Instanton theory relates the rate constant for tunneling through a barrier to the periodic classical trajectory on the upturned potential energy surface whose period is τ=ħ/(k_ BT).
Unfortunately, the standard theory is only applicable below the “crossover temperature”, where the periodic orbit first appears.
This paper presents a rigorous semiclassical (ħ→0) theory for the rate that is valid at any temperature. The theory is derived by combining Bleistein's method for generating uniform asymptotic expansions with a real-time modification of Richardson's flux-correlation function derivation of instanton theory.
The resulting theory smoothly connects the instanton result at low temperature to the parabolic correction to Eyring transition state theory at high-temperature.
Although the derivation involves real time, the final theory only involves imaginary-time (thermal) properties, consistent with the standard theory.
Therefore, it is no more difficult to compute than the standard theory.
The theory is illustrated with application to model systems, where it is shown to give excellent numerical results.
Finally, the first-principles approach taken here results in a number of advantages over previous attempts to extend the imaginary free-energy formulation of instanton theory. In addition to producing a theory that is a smooth (continuously differentiable) function of temperature, the derivation also naturally incorporates hyperasymptotic (i.e. multi-orbit) terms, and provides a framework for further extensions of the theory.
Semiclassical instanton theory for reaction rates at any temperature: How a rigorous real-time derivation solves the crossover temperature problem
Joseph E. Lawrence
September 9, 2024
=====================================================================================================================================================
§ INTRODUCTION
The cornerstone of chemical reaction rate theory is Eyring transition state theory (TST).<cit.> Developed in the 1930s,<cit.> TST is still widely used today to estimate the rate of chemical reactions. The advantage of the theory is its simplicity, allowing for the estimation of the rate constant from only local knowledge of the Born-Oppenheimer potential in the reactant minimum and at the saddle point separating the reactants and products. The TST approximation to the rate has the simple form
k_ TST = κ/2πβħZ^/Z_r e^-β V^
where β=1/k_ B T is the inverse temperature, V^ is the height of the potential barrier, κ is the transmission coefficient, Z_r is the reactant partition function and Z^ is the transition state partition function for the modes orthogonal to the unstable coordinate. Typically, both Z_r and Z^ are treated quantum mechanically, using a harmonic approximation for the vibrations and rigid-rotors for the rotations. In contrast, in the basic version of the theory, motion over the barrier is assumed to be classical, which in the absence of dynamical recrossing leads to a transmission coefficient of κ=1.
Already in the 1930s, it was clear to the developers of TST that quantum tunneling along the reaction coordinate could result in a transmission coefficient, κ>1.
When the effect of tunneling is small, a reasonable approximation is given by Wigner's famous tunneling correction<cit.>
κ_ W = 1 + (βħω)^2/24,
where ω is the barrier frequency along the unstable coordinate. This is, in fact, just the first term in an expansion of the exact tunneling correction for the parabolic barrier<cit.>
κ_ PB = βħω /2 /sin(βħω/2).
Nevertheless, κ_ W is typically preferred for practical purposes over κ_ PB as the parabolic barrier approximation diverges at the “crossover temperature” T_ c=ħω/2π k_ B.
At low temperatures, where tunneling can enhance the rate by many orders of magnitude, such simple corrections are insufficient. Therefore, a commonly used approach is to retain the separable approximation of TST and compute a one-dimensional tunneling correction, e.g. by fitting the barrier to a function for which the result is known analytically,<cit.> or by using a semiclassical tunneling probability based on Wentzel–Kramers–Brillouin (WKB) theory.<cit.> These one-dimensional approximations, however, fail to capture non-separable effects such as “corner cutting”, where the system tunnels through higher but narrower regions of the potential.<cit.>
Instanton theory provides a rigorous way to go beyond these simple one-dimensional tunneling corrections. The instanton is the dominant tunneling path in a full-dimensional path-integral description of the reaction.<cit.>
Hence, it is not constrained to follow the minimum energy path and can, therefore, capture key nonseparable
effects on the rate.
In fact, rather than being a minimum energy path, the instanton is a stationary action path. Following Lagrange's principle, the instanton can thus be interpreted as a classical trajectory. Specifically, it is a periodic imaginary-time trajectory (which is equivalent to a real-time trajectory on the upturned potential) whose period is the thermal time, τ=βħ. The instanton can be found practically by optimisation of the action for a discretised path,<cit.>
which is not significantly more computationally challenging than locating the transition state.<cit.>
This approach has been used to apply instanton theory to a wide range of systems in full dimensions, both with pre-computed potential energy surfaces and also on-the-fly using high-level electronic structure theory.<cit.>
Having located the instanton path, the instanton approximation to the rate constant can be written in a similar form to Eyring-TST<cit.>
k_ inst = 1/√(2πħ)(- d^2 S_ inst/dτ^2)^1/2Z_ inst/Z_r e^-S_ inst(τ)/ħ
where S_ inst(τ) is the instanton action and Z_ inst the instanton partition function. Here, Z_ inst generalises Z^ by capturing the effect of the changing vibrational frequencies and rotational constants along the instanton path. This can be seen explicitly in the formal expression for Z_ inst, which in the absence of rotations is<cit.>
Z_ inst(τ) = ∏_j=1^f-11/2sinh[u_j(τ)/2],
here u_j(τ) is the j^ th stability parameter for the instanton orbit, which for a separable system reduces to u_j(τ)=τω_j^ vib.
Part of the power of instanton theory is that it is a rigorous semiclassical theory.<cit.>
In particular instanton theory is the first term in an asymptotic series expansion of the exact quantum rate as ħ→0.
Asymptotic expansions can be thought of as generalising the idea of a perturbative expansion to problems where a simple power series may not be applicable.<cit.> An important example is obtaining the expansion in terms of ε of an integral of the form
I(ε) = ∫_-∞^∞ A(x) e^-f(x)/ε dx
for ε→0. The asymptotic expansion for this integral can be found by Laplace's method (or equivalently steepest descent integration), resulting in an asymptotic series of the form<cit.>^,[Where it is assumed A(x^⋆)≠0.]
I(ε) ∼ A(x^⋆)√(2πε/f”(x^⋆)) e^-f(x^⋆)/ε ( 1+ a_1 ε + a_2 ε^2 + … ),
where x^⋆ is the global minimum of f(x), i.e. f'(x^⋆)=0. We see that, at leading order, this corresponds to approximating the integrand by expanding f(x) to second order about x^⋆ and treating A(x) as constant.
The apparent simplicity of this procedure belies the power and rich complexity of asymptotic analysis.
For example, although asymptotic series are generally not convergent, their first few terms typically give a very good approximation to the exact result, and when both are available are usually much more numerically efficient than a Taylor series representation.<cit.> Furthermore, despite the apparently approximate nature of asymptotics, in principle a thorough asymptotic analysis in combination with modern resummation methods allows exact results to be recovered.<cit.>
One might at this stage reasonably ask, what one means by an expansion in ħ? In particular, there is an apparent ambiguity in whether one defines the inverse temperature in terms of the thermal time, β=τ/ħ, or vice-versa τ=βħ.
The ambiguity is resolved by a more formal definition, such that, ħ→0 is actually shorthand for introducing a perturbation parameter ε next to the ħ in the path-integral exponent
⟨q_ f| e^-Ĥτ/ħ|q_ i⟩ = ∫^q(τ)=q_ f_q(0)=q_ i𝒟q e^-S_τ[q(τ')]/(εħ)
and considering the behaviour as ε→0.
This prescription is equivalent to a WKB analysis in one dimension. By analogy with Eq. (<ref>) one sees that the resulting asymptotic expansion of the path integral will be around a classical trajectory (where the action is stationary).
There exist two qualitatively different approaches to the derivation of instanton theory: one based on reactive scattering theory,<cit.> and the other on the concept of imaginary free-energy (the “F premise”).<cit.>
The F premise was first proposed by Langer in the context of droplet formation,<cit.> and then later by Coleman in the context of quantum field theory.<cit.>
Proposed heuristically, the F principle says that the rate of decay of a metastable state is related to the imaginary part of its free-energy as k= -2/ħ F. This is then evaluated using analytic continuation to define the integral over the unstable mode in an asymptotic (ħ→0) evaluation of the path-integral expression for the partition function. Separately, Miller arrived at instanton theory by starting from the flux-correlation formulation for the thermal rate, using arguments based on Weyl correspondence in combination with semiclassical results from Gutzwiller.<cit.>
Although the prefactor in Miller's theory appears qualitatively different to that of the F formulation, the two theories have been proven to be equivalent.<cit.>^,[We note that it has been argued recently that there should be an additional term that does not appear in the standard expressions for the instanton rate considered in these works, and that this additional term arises due to non-separability.<cit.> However, we note that these arguments were not based on a rigorous semiclassical (ħ→0) analysis. Differences to the standard expressions may, therefore, simply be explained as arising from subdominant terms.]
More recently Richardson has derived instanton theory from first principles by evaluating the exact flux-correlation expression for the quantum rate asymptotically as ħ→0.<cit.> Unlike the F derivation, which is rather subtle and thus difficult to generalise or extend, the flux-correlation formalism provides a rigorous framework for extending the theory, as has been exploited in recent years in the study of electronically nonadiabatic chemical reactions.<cit.> For this reason this is the approach taken in the present study.
Despite its success in describing deep tunneling processes, instanton theory has a major issue: it breaks down at high temperature.
This can be understood by noting that the shortest period for an instanton orbit is determined by the barrier frequency, τ_ c=2π/ω.
Hence, Eq. (<ref>) cannot be applied for temperatures above the same “crossover temperature”, T_ c=ħ/(k_ Bτ_ c), that appears when considering the parabolic barrier rate.
This is not a coincidence, as we will see later, it is because the parabolic barrier rate is closely related to the instanton result and is the rigorous semiclassical result for T>T_ c.
As one approaches the crossover temperature from below, instanton theory becomes less accurate (although does not diverge).
This is clearly undesirable, as we would like to be able to accurately describe the onset of tunneling in chemical reactions.
The goal of the present paper is therefore to derive a rigorous uniform semiclassical theory that is valid for all values of the thermal time τ=βħ>0. Here, uniform is a technical term that refers to an expression that is valid for a range of values of an additional (non-asymptotic) parameter that spans two (or more) regions exhibiting qualitatively different asymptotic behaviour.
That instanton theory breaks down at the crossover temperature was, of course, known by its initial proponents.<cit.> As such, there have been many previous attempts to both analyse and ameliorate this problem.<cit.> In particular,
in 1981 Affleck argued based on WKB that the F principle should satisfy: k=-2/ħ F for T≤ T_c and k=-ωτ/πħ F for T≥ T_c.<cit.>
These ideas have been developed further by several different authors<cit.> resulting in a piecewise theory.<cit.>
However, there are a number of drawbacks to this approach.
Being based on a heuristic combination of the F principle and WKB it is difficult to see how to rigorously extend the theory, for example by incorporating higher order asymptotic (i.e. perturbative) corrections.<cit.>
Furthermore, the derivation is based on the fundamental assumption that the instanton smoothly collapses to the transition state with increasing temperature, and hence cannot be generalised to more complex systems.
An alternative approach that has been suggested is to start from the uniform WKB approximation to the 1D microcanonical transmission probability,<cit.> and then calculate the thermal rate by computing the integral over energy numerically.<cit.> However, these approaches suffer from similar issues to those based on F, and require ad-hoc approximations to treat non-separable multidimensional systems.
Recently Upadhyayula and Pollak have proposed a theory they call “uniform semiclassical instanton theory” that replaces this numerical integration with an analytical approximation. However, despite the method's name (as is shown in the supplementary material) the resulting theory is not a rigorous uniform semiclassical approximation in the sense described above, and actually becomes less accurate near τ=τ_c as ħ→0.
§ THEORY
In order to derive our general semiclassical rate theory and solve the crossover problem we return to a first-principles derivation of instanton theory. The basis for our approach is the derivation of Richardson from Ref. , but with a small change to the flux operators. After a preliminary recap of the quantum flux-flux formulation of rate theory, Sec. <ref> introduces key definitions and derives the asymptotic form of the flux-flux correlation function. Section <ref> then discusses how, below the crossover temperature, the change made to the flux operators results in a new real-time perspective on instanton theory. Using this new perspective, Sec. <ref> provides a unified understanding of the breakdown of both instanton theory and the parabolic barrier approximation at the crossover temperature. Section <ref> then discusses how this unified understanding can be combined with modern methods for deriving uniform asymptotics expansions.
Completing the derivation, Sec. <ref> introduces the key result of the paper and discusses its behaviour.
The starting point for our derivation is the exact expression for the rate in terms of the time integral of a flux-flux correlation function<cit.>
k Z_r = ∫_0^∞ c_ ff(t) dt,
where we make use of the most general form of the correlation function, in which the two flux operators can be chosen to be different<cit.>
c_ ff(t) = [e^-(τ/2 - it)Ĥ/ħF̂_p e^-(τ/2 + it)Ĥ/ħF̂_r].
The flux operators are formally defined as Heisenberg time derivatives of projection operators onto reactants (P̂_r) and products (P̂_p) as
F̂_r = -i/ħ[Ĥ,P̂_r]
F̂_p = i/ħ[Ĥ,P̂_p]
where the projection operators are given by
P̂_r = 1-h(σ(q̂)-s_r)
P̂_p = h(σ(q̂)-s_p)
here h(x) is the Heaviside step function, such that σ(q)<s_r defines the reactants and σ(q)>s_p the products.
Thus F̂_r measures the flux out of the reactant states (through the “dividing surface” σ(q)=s_r) and F̂_p measures the flux into the product states (through the “dividing surface” σ(q)=s_p).
In Richardson's derivation<cit.> he took both flux operators to be the same and defined the dividing surface to pass through the saddle point of the potential.
Such a choice ensures that the flux correlation function has its maximum at t=0.
This is a natural choice when considering instanton theory as an extension of transition state theory, as integrating over time asymptotically (as ħ→0) then straightforwardly leads to a theory that involves no real-time quantities.
However, in the present work we choose instead to place the dividing surfaces well away from the barrier (out in the reactant and product asymptotes respectively) such that the correlation function reaches its maximum for finite real time.
The reader might be worried that the presence of real time will mean that we must contend with the infamous sign problem.
This concern would be realised if we tried to evaluate the resulting path-integral expressions exactly or via a quantum instanton approximation<cit.> (an idea discussed in Ref. ).
However, as we will evaluate all integrals analytically using asymptotic analysis our semiclassical theory will have no such issue.
In fact, instead of making the problem more difficult, the presence of real time will actually be the key to solving the problem at the crossover temperature.
§.§ Asymptotic expression for the flux-flux correlation function
Before we evaluate the integral over time, we begin by evaluating the flux-flux correlation function asymptotically. The steps we follow in this subsection follow closely those of Ref. , and are included here for completeness. However, for the sake of notational simplicity we restrict the discussion here to a system in one dimension.
To further simplify notation we define the forward and backward times, t_± =-iτ_± =± t-iτ/2, and introduce the imaginary time propagator K̂(θ)=e^-θĤ/ħ such that
c_ ff(t) = [K̂(τ_-)F̂_p K̂(τ_+) F̂_r].
Note that τ_± are therefore complex numbers, the imaginary part of which is (±) the “real time”.
In one dimension the flux operators can be written in the form
F̂_α = p̂/2mδ(q̂-s_α)+δ(q̂-s_α)p̂/2m
for α=r or p, where δ(x) is the Dirac delta function.
By first inserting resolutions of the identity 1=∫dq |q⟩⟨q| and then making use of the relation ⟨q|p̂=-iħ∂/∂ q⟨q| followed by integration over the resulting Dirac delta functions, it is straightforward to show that
c_ ff(t) = (-iħ/2m)^2 ( ∂ K(τ_-,s_p,s_r)/∂ s_r ∂ K(τ_+,s_r,s_p)/∂ s_p
- ∂^2 K(τ_-,s_p,s_r)/∂ s_r ∂ s_p K(τ_+,s_r,s_p)
- K(τ_-,s_p,s_r)∂^2 K(τ_+,s_r,s_p)/∂ s_r ∂ s_p
+ ∂ K(τ_-,s_p,s_r)/∂ s_p ∂ K(τ_+,s_r,s_p)/∂ s_r).
where the position representation of the propagator is defined as
⟨q”|K̂(θ)|q'⟩=K(θ,q',q”) .
Now we turn to the asymptotic evaluation of this exact expression as ħ→0. This can be done numerically by first writing the propagator as a discretised path integral and then evaluating the integrals over position by steepest descent [i.e. using Eq. (<ref>)]. For the present purpose, we note that in the continuum limit this is formally equivalent to using the semiclassical (imaginary-time) van-Vleck propagator
⟨q”|K̂(θ)|q'⟩=K(θ,q',q”) ∼∑_ paths(C/2πħ)^1/2 e^-S(θ,q',q”)/ħ
where S(θ,q',q”) is the Euclidean action calculated along the classical trajectory from q' to q” in total imaginary time θ,
S(θ,q',q”) = ∫_0^θ1/2m q̇^2(θ')+V(q(θ')) dθ',
the prefactor is defined as
C = -∂^2 S/∂ q' ∂ q”,
and the sum is over all classical paths that go from q' to q” in time θ. Note the branch of the square root in the prefactor is chosen so that the prefactor is continuous along the trajectory.[The prefactor is of course a property of the entire trajectory. The prefactor being continuous along the trajectory should be understood as corresponding to considering the set of prefactors that are given by fixing q' and then varying θ and q” such that they follow the trajectory, and ensuring that this set is continuous for some parameterisation of the trajectory.]
In addition to the asymptotic expression for the propagator we will also need to make use of the asymptotic expressions for its derivatives
∂K/∂ q' ∼∑_ paths -1/ħ∂ S/∂ q'(C/2πħ)^1/2 e^-S(θ,q',q”)/ħ
∂K/∂ q” ∼∑_ paths-1/ħ∂ S/∂ q”(C/2πħ)^1/2 e^-S(θ,q',q”)/ħ
∂^2K/∂ q' ∂ q” ∼∑_ paths1/ħ^2∂ S/∂ q'∂ S/∂ q”(C/2πħ)^1/2 e^-S(θ,q',q”)/ħ.
Combining Eq. (<ref>) with Eqs. (<ref>)-(<ref>) and retaining just the dominant path we then obtain the following asymptotic expression for the correlation function (valid as ħ→0)
c_ ff(t) ∼ (∂ S_ -/∂ s_r∂ S_ +/∂ s_p-∂ S_ -/∂ s_p∂ S_ -/∂ s_r
-∂ S_ +/∂ s_p∂ S_ +/∂ s_r+∂ S_ -/∂ s_p∂ S_ +/∂ s_r)
×-1/4m^2√(C_+C_-/(2πħ)^2) e^-[S_ +(τ_+,s_r,s_p)+S_ -(τ_-,s_p,s_r)]/ħ,
where we have labelled the action (and associated quantities) ± to distinguish between the forward and backward paths. Hence, defining the total Euclidean action as the sum of the forward and backward parts
S(t) = S_ +(τ/2+it,s_r,s_p) + S_ -(τ/2-it,s_p,s_r)
and the ħ independent part of the prefactor as
A(t) =(C_+C_-)^1/2/8m^2π(∂ S_ -/∂ s_p∂ S_ -/∂ s_r
-∂ S_ -/∂ s_r∂ S_ +/∂ s_p-∂ S_ -/∂ s_p∂ S_ +/∂ s_r+∂ S_ +/∂ s_p∂ S_ +/∂ s_r)
we see that the correlation function has the simple asymptotic form
c_ ff(t) ∼A(t)/ħ e^-S(t)/ħ.
Note that, while the preceding analysis was specific to a one-dimensional system,
the flux-flux correlation function for multidimensional systems also has the same form as Eq. (<ref>).
§.§ A real-time derivation of instanton theory
Combining Eq. (<ref>) with Eq. (<ref>) we can recover the standard instanton theory below the crossover temperature by integrating over time asymptotically using Eq. (<ref>) to obtain
k Z_r ∼A(t^⋆)/ħ√(2πħ/S”(t^⋆))e^-S(t^⋆)/ħ as ħ→ 0,
where t^⋆ satisfies the steepest descent condition S'(t^⋆)=0. We will now argue that, S(t^⋆)=S_ inst(τ), and that Eq. (<ref>) is equivalent to the standard instanton theory.
First note that S'(t^⋆)=0 implies
∂ S_ +/∂τ_+ - ∂ S_ -/∂τ_- = E_+ - E_- =0
i.e. the energy of the forward and backward paths, E_±, must be the same. Further, since K(θ,q',q”)=K^*(θ^*,q”,q') it follows that, on the real t axis, the action for the forward and backward paths are always complex conjugates S_ +(τ_+,s_r,s_p)=S^*_-(τ_-,s_p,s_r) making S(t) real. Differentiating this with respect to τ then gives E_+=E_-^* which combined with Eq. (<ref>) shows that the energy of the forward and backward trajectories at the stationary time are also real.
Before we discuss finding a trajectory that satisfies these conditions, note that there is a freedom we have not yet discussed: the contour of integration (the time path) in the definition of the action [Eq. (<ref>)]. This freedom corresponds to the order of the real- and imaginary-time propagators in the path-integral discretisation of Eq. (<ref>). Of course the exact expression is independent of this choice because real- and imaginary-time propagators commute. The semiclassical propagator must necessarily retain this property. This can be seen explicitly in Eq. (<ref>) as being a result of Cauchy's integral theorem. We are therefore free to choose the most convenient time path for our purposes (under the assumptions of the theorem).
With a careful choice of time path it is
trivial to find a trajectory that satisfies S(t^⋆)=S_ inst(τ). Figure <ref> depicts the corresponding trajectory and time path.
The forward and backward time paths each consist of three segments; two pure real-time segments either side of a pure imaginary-time segment of length τ/2.
Rather than defining the lengths of the real time segments (t_r and t_p) we instead define the trajectory along the imaginary-time segment to be half of the instanton orbit from one turning point to the other.
This then uniquely determines the real-time sections of the trajectories along with the times t_r and t_p. To see this note that, because the momentum is continuous along the trajectory it must be zero at the points connecting the real-time and imaginary-time segments. Hence, t_r (t_p) must be the time it takes for the system to roll from the reactant (product) end of the instanton to the reactant (product) dividing surface. This, therefore, uniquely determines the stationary time as t^⋆=t_r+t_p. Since the forward and backward trajectories are entirely real and follow the same path it is clear that E_+=E_- and hence [cf. Eq. (<ref>)] the trajectory corresponds to a stationary time, t^⋆.
The stationary trajectory gives an intuitive picture of
the reaction process.
The system starts at the reactants and moves in real time towards the barrier. Upon reaching the turning point of the trajectory at the barrier, instead of bouncing off the barrier in real time it switches to imaginary time. This effectively “turns the barrier upside-down” allowing the system to tunnel through to the product side. The system then switches back to real time and carries on to the products. Because of the cyclic nature of the trace in Eq. (<ref>) the system then retraces its steps, moving backwards in real time to the barrier before tunneling through the barrier in (positive) imaginary time and then back to the reactants again in negative real time. As the forward and backward real-time segments follow the same paths, their contributions to the action exactly cancel one another leaving only the imaginary time contribution, and hence S(t^⋆)=S_ inst(τ).
Importantly, the uniqueness of asymptotic series means that the prefactor must also be equivalent to the usual instanton prefactor.
Hence, we have that
A(t^⋆)/ħ√(2πħ/S”(t^⋆)) = Z_ inst(τ)/√(2πħ)(-d^2S_inst/dτ^2)^1/2.
For the inquisitive reader, a short aside: There are two obvious questions based on the preceding discussion. First, what would happen to the trajectory if we kept t^⋆ (and τ) fixed but deformed the time path? The answer is that the resulting trajectory must move into the complex position plane. This can be understood by noting that, if the momentum of the system is real then propagation in anything other than real time will lead to a change in the position that is complex. As the initial and final momenta are fixed changing the direction of the time path at any point (other than turning points) will, therefore, result in a complex trajectory. The second question is, when t is not t^⋆ can a time path still be found that keeps the trajectory real? The answer is that while this is possible for some t and τ in one dimension, it is generally not possible in multiple dimensions. Finally, the observant reader may note that there is another stationary trajectory that swaps the sign of the real time segments on the product side of the barrier to give t^⋆=t_r-t_p. One might, therefore, wonder why this trajectory doesn't also contribute. A careful consideration, however, shows that this corresponds to a different branch of the correlation function and hence does not need to be included.
§.§ Diagnosing the problem at the crossover temperature
With this new real time perspective we can now obtain a simple intuitive picture of what happens as we approach the crossover temperature, and hence why the standard instanton theory stops working there. First consider approaching the crossover temperature from below.
Figure <ref> shows how the steepest descent time, t^⋆, varies with τ for a typical system close to the crossover. We see that as τ→τ_ c the stationary time approaches infinity, t^⋆→∞. This can be understood intuitively by noting that as τ→τ_ c the turning points, where the real-time and imaginary-time parts of the path meet, get closer and closer to the top of the barrier. As this happens, the force at the turning points approaches zero. Hence, the real-time segments have to become longer and longer to give time for the system roll away from (or come to a stop at) the barrier.
Now we consider the behaviour above the crossover temperature. Based on our preceding discussion it is clear that in this regime the correlation function must be dominated by the behaviour near t=∞. For a one-dimensional barrier we show in the supplementary material (following Ref. ) that for large values of real time the action behaves like
S(t) ∼ S_∞ + a sin(ωτ/2) e^-ω t as t→∞
where S_∞=τ V^ and a is a constant with units of action. Furthermore, the pre-exponential factor varies as
A(t) ∼aω^2/4π e^-ω t as t→∞.
Hence, the correlation function obeys
c_ ff(t) ∼aω^2/4πħ e^-ω texp(-[S_∞ + a sin(ωτ/2) e^-ω t]/ħ)
as ħ→0 and t→∞.
Upon making the substitution v=e^-ω t we can see that (above the crossover temperature) the rate is asymptotic to[Note that after making the substitution we also extend the integration range from 0 to 1 to be from 0 to ∞. This is the correct thing to do because as ħ→0 the integrand becomes more and more narrowly peaked at the boundary v=0.]
k Z_r ∼∫_0^∞aω/4πħexp(-[S_∞ + a sin(ωτ/2) v]/ħ) dv
= ω/4πsin(ωτ/2)e^-S_∞/ħ≡ω/4πsin(βħω/2) e^-β V^
which is the well-known parabolic barrier rate.<cit.> Note that this expression differs qualitatively in its dependence on (the asymptotic parameter associated with) ħ from the instanton result evaluated below the crossover temperature, Eq. (<ref>).
We can now gain a unified perspective on the behaviour both above and below crossover. To do so we begin by making the same variable transformation as above, v=e^-ω t, such that the rate can be expressed as
k Z_r ∼∫_0^1 G(v)/ħ e^-S(v)/ħd v as ħ→0
where we define G(v):=A(t(v))/(vω) and S(v):=S(t(v)).[Importantly, with this definition G(v) is a smooth function of v for all v≥0.] It is important to recognise that we will still recover the standard instanton result if we perform the integral over v asymptotically below the crossover temperature [i.e. using Eq. (<ref>)].
This can be seen explicitly by first integrating asymptotically over v to give
k Z_r ∼G(v^⋆)/ħ√(2πħ/S”(v^⋆))e^-S(v^⋆)/ħ as ħ→ 0
and then using the chain rule -vω S'(v)=S'(t) to show that S(t^⋆)=S(v^⋆) and (v^⋆ω)^2 S”(v^⋆)=S”(t^⋆).
The advantage of making this transformation is that the behaviour at the crossover temperature becomes very simple. It just corresponds to the temperature at which the stationary point, v^⋆, moves from being inside to outside of the integration range.
Figure <ref> depicts e^-S(v)/ħ for three temperatures, one above crossover, one below crossover, and the other exactly at crossover. We see that, well below the crossover temperature, e^-S(v)/ħ is peaked far away from the boundary, and hence approximating the integrand as a Gaussian is a reasonable approximation. Equally, well above the crossover temperature we see that only the tail of the function e^-S(v)/ħ appears inside the integration region, and hence approximating the integrand with an exponential [as is done in Eq. (<ref>)] is also a reasonable approximation. However, exactly at the crossover temperature the stationary point S'(v^⋆)=0 occurs exactly on the integration boundary, v^⋆=0.
This means that essentially half of the peak is outside of the integration range.
Hence, instanton theory (which assumes the full peak in inside the integration bounds) is approximately a factor of two too large. When approaching the crossover temperature from above things are much worse, as the first derivative at the boundary approaches zero and hence an exponential approximation will give a divergent integral.
§.§ Uniform asymptotics
To obtain a rate theory that is valid both above and below the crossover temperature we need to make use of ideas from uniform asymptotics. A uniform asymptotic expression is one that is continuous in a parameter (distinct from the asymptotic parameter) that connects two or more regions of different asymptotic behaviour. We have seen in the previous section that, as τ varies the stationary point moves from inside to outside of the integration range, resulting in a qualitative change in the form of the asymptotic approximation for the rate. We are, therefore, interested in finding an asymptotic expression for the rate that is uniform in τ.
A common approach is to construct uniform asymptotic approximations in a piecewise manner. Following this approach one might be tempted to suggest that, below the crossover temperature, the integral over v should be approximated by the Gaussian integral
k Z_r ∼G(v^⋆)/ħ∫_0^∞ e^-[S(v^⋆)+(v-v^⋆)^2S”(v^⋆)/2]/ħdv,
and that this should then be combined with an expression valid above crossover that has the same value as τ→τ_ c.
This kind of uniform approximation is employed for example in Ref. .
The resulting expression has lots of the behaviour that one expects, it reduces the prediction by a factor of 1/2 at the crossover temperature and smoothly approaches the standard result as one lowers the temperature for fixed ħ (and also as one lowers ħ for fixed τ).
However, this is not the correct approach.
To see what is wrong with Eq. (<ref>), we can evaluate the integral and make use of the chain rules given earlier to show that it is equivalent to
kZ_r ∼A(t^⋆)/ħ√(2πħ/S”(t^⋆) ) e^-S(t^⋆)/
ħerfc(- √(S”(t^⋆) /2ω ^2ħ))/2
where erfc(x) is the complementary error function. Crucially, we see that the difference between this and the standard instanton result is the factor,
1/2erfc(- √(S”(t^⋆) /2ω ^2ħ)).
While this factor takes on values between 1/2 and 1, the precise value is dependent on the ratio S”(t^⋆) /2ω ^2ħ. This is a problem because the actual value of S”(t^⋆) is determined by the location of the dividing surfaces. This is clearly unphysical as the true quantum rate is independent of the choice of dividing surface. [As mentioned earlier, the standard instanton result is also independent of dividing surface.] Clearly we want our uniform asymptotic result to also have this property.
Fortunately, if one consults a textbook on uniform asymptotics<cit.> one will see that Eq. (<ref>) is not the recommended approach.
Instead,
one should use Bleistein's method<cit.> which has become the standard method for generating uniform asymptotic series.<cit.> Our final result will then be the first term in this series, just as the standard instanton result is
the first term in a regular asymptotic series. This has the advantage that the theory will not only be continuous in τ, but it will also be smooth and continuously differentiable.
Following Bleistein's method we begin by defining a new variable transformation
S(v) = u^2/2 - bu + c
and choosing the constants b and c so that u(v=0)=0 and u(v^⋆)=b. This then implies that
b = sgn(v^⋆)√(2S_∞-2S(v^⋆))
c = S(v=0) = S_∞.
Note that we assume here that S(v) has been analytically continued to v<0 such that for τ<τ_ c we can find v^⋆<0.
Solving for u we then obtain
u = b +sgn(v-v^⋆) √(2S(v)-2S(v^⋆))
where the sgn(v-v^⋆) ensures that the variable transform is single valued. After this variable transformation we have
k Z_r ∼ e^-c/ħ∫_0^∞g(u)/ħ e^-(u^2/2-bu)/ħd u as ħ→0
where g(u) = G(v(u)) dv/du.
The final step in the Bleistein method is to write this pre-exponential term as a linear function that passes through the function at the boundary and the stationary point (u=b) plus a remainder term
g(u) = g(b) + (u-b)g(b)-g(0)/b + r(u).
As the remainder r(u) is zero at both the stationary point and the boundary it can be ignored at leading order in the uniform asymptotic expansion. With this we can then perform all integrals analytically to obtain
kZ_r ∼ e^-c/ħg(b)/ħ√(2πħ) e^|b^2/2ħ1/2erfc(-b/√(2ħ))
+e^-c/ħg(b)-g(0)/b
To analyse this result we need to express it explicitly in terms of S(t) and A(t).
We begin by defining S_∞-S(t^⋆)=Δ S and noting that sgn(v^⋆) can be rewritten in terms of τ to give
b =sgn(τ-τ_ c)√(2Δ S).
Second, we note that b^2/2 can be combined with c to give the instanton action
-b^2/2 + c = S(t^⋆) = S_ inst(τ).
To complete our simplifications we need to determine explicit expressions for g(b) and g(0). This can be achieved by observing that
du/dv = sgn(v-v^⋆)S'(v)/√(2 S(v)-2S(v^⋆))
which can then be combined with the definition of g(u) to give
g(u) = G(v)dv/du = -A(t)dt/dvdv/du
=sgn(v^⋆-v)A(t) √(2S(t)-2S(t^⋆))/S'(t).
Hence, evaluating this at u=0 and taking the limit as u→ b from above or below gives
g(0) = sgn(τ-τ_ c) √(2Δ S)lim_t→∞A(t)/S'(t)
and
g(b) = A(t^⋆)/√(S”(t^⋆)).
Combining these results we obtain
k Z_r ∼ e^-S_∞/ħ( sgn(τ-τ_ c) A(t^⋆)/√(2 S”(t^⋆)Δ S )-lim_t→∞A(t)/S'(t))
+e^-S(t^⋆)/ħA(t^⋆)/ħ√(2πħ/S”(t^⋆)) 1/2erfc(sgn(τ_ c-τ)√(Δ S/ħ))
which is a rigorous uniform asymptotic expression for the thermal rate constant that correctly bridges between the parabolic barrier result above the crossover temperature and the instanton result below crossover.
Although we motivated the derivation in terms of a one-dimensional system, this expression is valid for any system in which the instanton collapses smoothly to the transition state. Furthermore, while we have used the real-time formulation to derive the theory, it can immediately be rewritten in terms of quantities that involve only imaginary time. Hence, it is clearly independent of the choice of dividing surface.
Making use of the relations
A(t^⋆)/√(S”(t^⋆)) = Z_ inst(τ)/2π(-d^2S_inst/dτ^2)^1/2
and
-lim_t→∞A(t)/S'(t) = ω/4πsin(τω/2)Z^(τ)
allows us to write
k Z_r ∼ e^-τ V^/ħsgn(τ-τ_ c)Z_ inst(τ)/2π√(2Δ S)(-d^2S_inst/dτ^2)^1/2
+e^-τ V^/ħω/4πsin(τω/2)Z^(τ)
+e^-S_inst(τ)/ħZ_ inst(τ)/√(2πħ)(-d^2S_inst/dτ^2)^1/21/2erfc(sgn(τ_ c-τ)√(Δ S/ħ))
which is equivalent to the more compact expression
k ∼ k_m-pb +sgn(τ-τ_ c)k_ inste^-Δ S/ħ/√(4πΔ S/ħ)
+k_ inst 1/2erfc(sgn(τ_ c-τ)√(Δ S/ħ))
where k_m-pb is the multidimensional parabolic barrier rate.
We are very nearly done with our derivation, however, the keen-eyed reader will note that Eqs. (<ref>), (<ref>), and (<ref>) are still not valid at all temperatures. This can be seen by noting that (sin(τω/2))^-1 in the parabolic barrier rate, k_m-pb, not only diverges at τ_ c=2π/ω, but also at all τ_ c,n=2π n/ω for n=1,2,…,∞, i.e. half the crossover temperature, a third of the crossover temperature, and so on. Again the real-time formulation makes it easy to understand the cause of these divergences. Each one corresponds to a different stationary point of the action passing through the v=0 boundary.
We can give a physical interpretation to these stationary points by observing that τ_c,n corresponds to the shortest possible time for a trajectory which completes n orbits on the upturned potential. Hence, we can attribute these stationary points to the trajectories that involve multiple orbits.
For example, when τ_c,3>τ>τ_c,2 (6π/ω>τ>4π/ω) there exists not one but two periodic trajectories on the upturned potential with a period τ. One is the usual instanton, that orbits just once, and the other is a trajectory that orbits twice in the same time, with each half of the trajectory following the same path as the standard instanton at twice the temperature.
These multi-orbit instantons (typically referred to as periodic instantons) have a higher action and hence are exponentially suppressed compared to the one-orbit instanton. Within standard (Poincaré) asymptotics these terms are, therefore, not included as they are smaller than every term in the one-orbit instanton's asymptotic series (as ħ→0). However, we now see that in order to obtain a rigorous uniform theory valid at any temperature we are naturally led to include them.
§.§ Semiclassical instanton theory valid at any temperature
Given the preceding discussion it is clear that in order to cancel the divergences in k_m-pb we must modify Eq. (<ref>) by including multi-orbit instanton terms. The form of these terms can easily be determined by inspection. However, we do not have to rely on inspection alone.
As shown in the Appendix, in one dimension we can rigorously derive the desired uniform thermal rate theory in an entirely different way. There, we start from the uniform energy dependent WKB transmission probability and then obtain the thermal rate by integrating over energy asymptotically using Bleistein's method.<cit.>
The resulting theory contains exactly the multi-orbit terms we were expecting. Comparing this one-dimensional result [Eq. (<ref>)] with Eq. (<ref>) it is trivial to generalise to the multidimensional multi-orbit case.
We thus arrive at the central result of the present paper, a uniform asymptotic expression for the thermal rate in multiple dimensions
k ∼ k_m-pb+ ∑_n=1^∞ (-1)^n+1sgn(τ-nτ_ c) k_n, inste^-Δ S_n/ħ/√(4πΔ S_n/ħ)
+∑_n=1^∞ (-1)^n+1 k_n, inst 1/2erfc(sgn(nτ_ c-τ)√(Δ S_ n/ħ))
where the n-orbit instanton action is defined as
S_n,inst(τ) = n S_ inst(τ/n)
and the difference to the collapsed/classical action is given by
Δ S_n = τ V^ - S_n,inst(τ).
The only remaining terms to define are the effective n-orbit instanton rate constants that are given by
k_n,instZ_r = e^-S_n,inst(τ)/ħZ_n,inst(τ)/√(2πħ)(-d^2S_n,inst/dτ^2)^1/2
with Z_n,inst(τ) the instanton partition function for the n-orbit instanton.
In the absence of rotations this is given by
Z_n, inst(τ) = ∏_j=1^f-11/2sinh[n u_j(τ/n)/2],
where again u_j(τ) is the j^ th stability parameter for the 1-orbit instanton of period τ.
Note that, just as with the standard instanton theory, Eq. (<ref>) is rigorously independent of dividing surface and satisfies detailed balance.
We stress again that, although the theory is applicable to a wide range of multidimensional systems, our analysis, and hence Eq. (<ref>), assumes that the instanton collapses smoothly to the transition state as the temperature is increased. Notable exceptions, such as quartic barriers, where there are multiple (interacting) instantons with the same τ are important exceptions<cit.> and will be the subject of future work.
§.§.§ Understanding the theory
Before demonstrating the accuracy of the theory numerically, we begin
with a qualitative discussion the terms that appear and how they interact with one another.
Perhaps the first thing one notices, is that the n-orbit terms appear with alternating signs. In the derivation from WKB presented in the Appendix, these alternating signs occur as a direct consequence of the form of the uniform transmission probability. However, we note that the necessity of the sign alternation is also evident from Eqs. (<ref>) and (<ref>)
as it is required to match the alternating divergences of (sin(ωτ/2))^-1 in k_m-pb. To understand this cancellation explicitly, we consider the behaviour of k_m-pb about τ→ nτ_ c. Using the following expansion around τ=nτ_ c
ω/4πsin(τω/2)∼(-1)^n/2π(τ-nτ_ c) +𝒪(τ-nτ_ c)
one can show that the parabolic barrier rate behaves like
e^τ V^/ħ k_m-pbZ_r∼Z^(nτ_ c)(-1)^n/2π (τ-nτ_ c) +Z^'(nτ_ c)(-1)^n/2π ,
again with an error of 𝒪(τ-nτ_ c).
To see how this divergent behaviour is cancelled, we expand the terms appearing in the first sum about τ=nτ_ c, retaining terms up to 𝒪(τ-nτ_ c), to give
sgn(τ-nτ_ c)/√(4πΔ S_n/ħ)∼√(nħ/2π/- S”_ inst(τ_ c))(1/τ-nτ_ c-S”'_ inst(τ_ c)/6n S”_ inst(τ_ c)).
Using this result, and noting that Z_n,inst(nτ_ c)=Z^(nτ_c) it can then be shown that
sgn(τ-nτ_ c) k_n, instZ_re^+S_n,inst/ħ/√(4πΔ S_n/ħ) ∼Z^(nτ_ c)/2π (τ-nτ_ c)
+ Z^(nτ_c)/2πS”'_ inst(τ_ c)/3n S”_ inst(τ_ c)
+ Z_n,inst'(nτ_c)/2π
again to 𝒪(τ-nτ_ c).
Comparing Eq. (<ref>) and Eq. (<ref>) we see that [once we combine Eq. (<ref>) with the factor of (-1)^n+1] the divergent terms exactly cancel leaving just a constant as τ→ nτ_c. From this we can see that, exactly at the crossover temperature, our theory predicts a correction to the rough “factor of 2 error” of instanton theory discussed earlier. Specifically, (ignoring the hyperasymptotic multi-orbit terms) we find that
k(τ_c)- 1/2k_ inst(τ_ c) ∼1/2πe^-τ_ c V^/ħ( Z^(τ_c)/Z_r(τ_c)S”'_ inst(τ_ c)/3S”_ inst(τ_ c)
+ Z_ inst'(τ_c)/Z_r(τ_c)-Z^'(τ_c)/Z_r(τ_c)).
Having considered the behaviour of the theory close the crossover temperature(s) where Δ S_n=0 let us know consider the behaviour of the new terms for Δ S_n/ħ≫ 1, both when τ>nτ_ c and τ<nτ_ c. The simpler of these two cases is τ>nτ_ c, where (half) the complementary error function approaches one and hence we have
k_n, inste^-Δ S_n/ħ/√(4πΔ S_n/ħ) + k_n, inst 1/2erfc(-√(Δ S_n/ħ)) ∼ k_n, inst.
The slightly more complicated case is τ<nτ_ c. Here we can make use of the standard asymptotic result
erfc(x)∼e^-x^2/√(π x^2)(1-1/2x^2+…) as x→∞
from which we observe that
-k_n, inste^-Δ S_n/ħ/√(4πΔ S_n/ħ) + k_n, inst 1/2erfc(√(Δ S_n/ħ)) ∼ -k_n, inste^-Δ S_n/ħ/4√(π(Δ S_n/ħ)^3)
which as we would expect is subdominant to k_m-pb even for n=1.
The derivation from the WKB transmission probability given in the Appendix suggests a natural separation of the theory into three parts: classical above barrier transmission, quantum above barrier reflection, and quantum tunnelling. Making this separation we can write Eq. (<ref>) as
k ∼ k_ TST - k_ reflect + k_ tunnel
where
k_ TST = 1/2πτZ^(τ)/Z_r(τ)e^-τ V^/ħ
is the above barrier transmission contribution (equivalent to the TST rate with κ=1). The reflection rate can then be expressed as
k_ reflect = 1/2πZ^(τ)/Z_r(τ)e^-τ V^/ħ∑_λ=1^∞ (-1)^λ+11/λτ_ c+τ
where to avoid potential confusion in the following sections we have used λ rather than n as the dummy index in the sum.
Note that as defined k_ reflect is always positive, and can be expressed as k_ reflect=ϕ(τ/τ_ c)k_ TST where ϕ(x) is a system independent function, with ϕ(0)=0, ϕ(1)=1-ln(2), and ϕ(∞)=1/2. Finally the tunneling contribution, which is system dependent can be expressed as
k_ tunnel = 1/2πZ^(τ)/Z_r(τ)e^-τ V^/ħ∑_λ=1^∞ (-1)^λ+11/λτ_ c-τ
+ ∑_n=1^∞ (-1)^n+1sgn(τ-nτ_ c) k_n, inste^-Δ S_n/ħ/√(4πΔ S_n/ħ)
+∑_n=1^∞ (-1)^n+1 k_n, inst 1/2erfc(sgn(nτ_ c-τ)√(Δ S_ n/ħ)).
§.§.§ Numerical Considerations
Having given a qualitative interpretation of the theory we now turn to some practical considerations about the numerical implementation of the theory. First, the presence of infinite sums in Eq. (<ref>) might appear daunting, and raises the obvious question: How many terms are needed to reach numerical conversion? Clearly, one requires enough terms to avoid the divergence of k_m-pb at the temperature of interest. We will see in the next section that this is the dominant consideration and that for 0<τ<2τ_ c excellent convergence is obtained with only n=2. It should be stressed that if implementing the theory using Eqs. (<ref>) and (<ref>) one must include a sufficient number of terms in the sum over λ to recover the parabolic barrier rate above the crossover temperature, and hence the maximum value of λ may be higher than n. Note that including an arbitrarily large number of terms in this sum is trivial.
One remaining aspect that we have not yet addressed is the meaning of S_ inst(τ) [and Z_ inst(τ)] when τ<τ_ c.
Because there exist no real-position periodic orbits for these values of imaginary time, the resulting trajectories must move into the complex position plane. Finding such trajectories is clearly impractical for realistic chemical applications. However, since the terms containing S_ inst(τ<τ_ c) are always subdominant we can develop numerical approximations of the action in this region that require only real positions without affecting the key asymptotic behaviour of the theory.
In the following section we will give an example of how this can be for one-dimensional systems, and compare to the results obtained using the exact action.
§ NUMERICAL RESULTS
§.§ Symmetric Eckart Barrier
To illustrate the accuracy of the new theory we consider the prototypical one-dimensional model of reactive scattering: the symmetric Eckart barrier. The potential for the symmetric Eckart barrier is defined as
V(q) = V^^2(q/L).
For this simple model system the instanton action can be evaluated analytically as
S_ inst(τ) = 2 τ_ cV^ - τ_ c^2 V^/τ
where the barrier frequency is given by
ω = √(2 V^/m L^2).
To aid comparison with previous work<cit.> we consider the following parameters L=0.66047 a_0, m=1836 m_e, V^=72ħ^2/(m π^2 L^2). The exact result was calculated for comparison by numerical integration of
k Z_r = 1/2πħ∫_0^∞ P(E)e^-τ E/ħdE
using the exact analytical result for the transmission probability, P(E).<cit.>
Figure <ref> compares the rate calculated with the new theory against the exact, classical, parabolic barrier, and standard (old) instanton results for the symmetric Eckart barrier.
The top panel shows the logarithm of the rate for each theory (multiplied by Z_rτ_ c to ensure it is dimensionless), and the bottom panel shows the ratio of the approximate theories to the exact result (plotted on a logarithmic scale so that overestimation and underestimation are treated on an equal footing).
One immediately notices the divergence of the parabolic barrier rate at the crossover temperature (β/β_ c=1) and the (approximate) factor of two error of the standard instanton result.
In contrast to both the standard theory and parabolic barrier rate, the new theory smoothly connects the low and high temperature limits. The accuracy of the new theory is as good as one could have hoped, with the error no greater in the vicinity of the crossover temperature than it is at low temperature. It is important to stress that quantum effects are not insignificant in this regime. The quantum rate more than a factor of 6 larger than the classical rate at T_ c and still more than factor 2 larger than the classical rate even at T=1.5× T_ c. This highlights the importance of the new theory in accurately describing the onset of quantum tunneling.
At lower temperatures, we see that the error of both the new theory and the standard instanton have approximately plateaued at τ=2τ_ c (half the crossover temperature). The close agreement between the standard theory and the new theory at τ=2τ_ c indicates that in this regime the multi-orbit terms do not significantly affect the rate. However, it is important to remember that if one were to neglect these terms and try to use Eq. (<ref>) instead of Eq. (<ref>) the result would diverge at τ=2τ_ c. One could of course consider approximating the full theory [Eq. (<ref>)] by removing both the multi-orbit terms and making appropriate modifications to k_m-pb to remove the divergences. This would break the formal asymptotic properties of the theory, but may nevertheless provide a useful approximation to the rate.
§.§ Asymmetric Eckart Barrier
Next we consider the asymmetric version of the Eckart barrier, for which the potential can be written as
V(q) = (√(V_1)+√(V_2))^2/4cosh^2(q/L) -V_2-V_1/1+exp(-2q/L),
note that V_1=V^.
For this system evaluating the action analytically is slightly more involved. Rather than give the expression for S_ inst(τ) explicitly we instead note it can be computed from knowledge of the reduced action, W(E). Note the reduced action is the Legendre transform of the action, S_ inst(τ)=W(E)+τ E.
The reduced action can be expressed as<cit.>
W(E) = A ( 1+√(V_2/V_1) - √(E/V_1) - √(E/V_1-1+V_2/V_1))
where
A = 4π/1+√(V_2/V_1)√(V_1 V_2/ω^2).
One can then obtain S_ inst(τ), by noting that the period of the orbit, τ, is related to the energy by τ(E)=W'(E). Inverting this to find E(τ) then allows one to calculate S_ inst(τ).
While for these simple models one can obtain analytical expressions for the action, in general it must be found numerically e.g. using the ring-polymer instanton method.<cit.> For τ>τ_ c this is precisely what is already done in standard instanton calculations.
However, for τ<τ_ c there does not exist a real periodic orbit. One might reasonably be concerned that finding a trajectory in complex positions would be impractical, however, we will now argue that this is unnecessary.
Note first that the terms in Eq. (<ref>) containing S_ inst(τ) for τ<τ_ c are subdominant.
This allows us to approximate the action in this region without changing the key asymptotic properties of the theory.
For example, when S”'_ inst(τ_ c)>0, we can use
S_ inst(τ) ≈[ S_ inst(τ) if τ≥τ_ c; S_ inst(τ_ c) + 1/τ∑_j=1^3 s_j/j!(τ-τ_ c)^j if τ<τ_ c ]
where
s_1 = S'_ inst(τ_ c)τ_ c≡ V^τ_ c≡ S_ inst(τ_ c)
s_2 = 2S'_ inst(τ_ c)+S”_ inst(τ_ c)τ_ c
s_3 = 3S”_ inst(τ_ c) + S”'_ inst(τ_ c)τ_ c.
Under the condition that S”'_ inst(τ_ c)>0, this approximation satisfies a number of key properties. First, it matches the first three derivatives of the action at τ_ c. Second, it correctly predicts that S_ inst(τ)→-∞ as τ→0. Finally, it also satisfies S_ inst(τ)<τ V^, and hence it is guaranteed that Δ S_ n>0. Importantly, as this approximation only involves information at τ≥τ_ c it is straightforward to evaluate using standard techniques.
As with the symmetric Eckart barrier the parameters for the model are chosen to correspond to previous studies<cit.> and are given in reduced units as
m=1, L=8/√(3π),
V_1=(6/π)^1/4,
V_2=(24/π)^1/4,
and ħ=1. The exact results are again calculated using Eq. (<ref>), with the exact analytical P(E).<cit.>
Figure <ref> compares the rate calculated with the new theory against the exact, classical, parabolic barrier, and standard (old) instanton results for the asymmetric Eckart barrier. We see that the new theory performs just as well for the asymmetric barrier as it does for the symmetric one. This is an important property of the standard instanton result that the present theory also retains, and stands in contrast to centroid based path-integral methods that are known to breakdown at low temperatures for asymmetric systems.<cit.> Pleasingly, we find that the results of the new theory using either the exact action, or the approximate action given by Eq. (<ref>) are graphically indistinguishable at all values of τ for this system, differing by at most 0.02%. This illustrates that the new theory can be accurately applied using just information that is available from standard ring-polymer instanton calculations.
The lower panel of Fig. <ref> also shows how the results converge with increasing number of instanton orbits included in the sums in Eq. (<ref>). One can immediately see that, for this range of temperatures, including terms up to n=2 already agrees almost perfectly with the full sum: the largest deviation being just over 0.2%. In fact, even including only a single orbit (n=1) is practically sufficient for nearly all temperatures considered, with the only significant deviation occurring in a very narrow range near the divergence at β/β_ c=2 [almost hidden by the frame of the graph]. Away from this divergence the largest deviation occurs close to the crossover temperature, where the full sum is just over 1% larger than the single orbit result. This is a significant result as it indicates that Eq. (<ref>), which only involves information available from a standard instanton calculation, is already sufficient for studying the behaviour of the rate for a wide range of temperatures near crossover. Finally, the narrow range of temperatures affected by the divergence at β/β_ c=2 may seem surprising when compared to the broad divergence of the parabolic barrier rate. However, this difference can be understood by noting that in the case of the parabolic barrier rate the diverging term is asymptotically dominant, whereas at β/β_ c=2 the term that is divergent is formally subdominant and has to overcome an exponential suppression.
§ DISCUSSION
Despite the limited numerical significance of the multi-orbit terms for the systems studied here, their natural appearance in the theory is theoretically interesting. For a fixed value of τ these multi-orbit terms are formally subdominant (i.e. negligible in comparison) to every term in the asymptotic series for the single orbit. Therefore, they do not contribute within a basic asymptotic analysis, be it either Poincaré asymptotics, where a fixed number of terms in the asymptotic series are included, or superasymptotics, where the number of terms is chosen to minimise the error.<cit.> The multi-orbit terms, instead, correspond to what are often referred to as hyperasymptotic contributions.<cit.>
These terms are significant because
the error made by the original asymptotic series can be written recursively in terms of them.
Hyperasymptotic series and their resummation are a central part of the closely related areas of resurgent asymptotics, exponential asymptotics, and transseries.<cit.> Such approaches allow one to go beyond the accuracy of superasymptotics and in principle to even obtain exact results. While these techniques have found a wide range of uses in modern physics, with applications in string theory, quantum field theory, and cosmology,<cit.> they have as yet not found wide use in chemistry or chemical physics.
It will therefore be interesting in the future to make connection to these approaches, for example by using the method proposed by Berry and Howls<cit.> to derive the multi-orbit contributions to the theory.
It should be noted that Miller's original derivation of the standard thermal instanton theory proceeded via a microcanonical expression involving multi-orbit instantons.<cit.> This raises the obvious question, how is the present theory related to Miller's original result?
While Miller's microcanonical result is equivalent in one dimension to the uniform WKB transmission probability, Miller noted immediately that it did not recover the correct result in a separable system. Despite this, once integrated asymptotically over energy it recovers the thermal instanton, which does correctly describe both separable and non-separable systems.
One might, therefore, wonder, can Eq. (<ref>) be derived from Miller's microncanonical expression following the method used in the Appendix in one dimension.
Interestingly, the answer is no. One finds that, the resulting theory is incorrect, failing to recover the separable result except at low temperature.
Miller's original expression is not the only microcanonical instanton theory. Shortly after his first paper on instanton theory, Miller, along with Chapman and Garrett, suggested an ad-hoc correction to the original microcanonical formula designed to correctly recover the separable limit.<cit.>
More recently an alternative (easier to implement) method, known as the density of states microcanonical instanton, has been proposed.<cit.>
Notably, thermalising either of these microcanonical theories by integrating asymptotically (ħ→0) over energy does recover Eq. (<ref>).[This is trivially true starting from the density of states instanton. The connection to Chapman, Garrett, and Miller's expression can be made by noting that it differs from the density of states instanton by a term of order ħ that can, therefore, be discarded.]
It is important to stress, however, that the present theory should not be considered as an approximation to these microcanonical results. This can be seen most clearly for escape from a metastable well, for which the thermal instanton correctly describes the plateau in the rate at low temperature, whereas exact integration of the semiclassical transmission probability approaches zero.
It should also be emphasised that, for the calculation of thermal rates the present approach is also more practical than thermalising a microcanonical theory.<cit.>
This is because integrating over energy requires information from instanton calculations at a wide range of imaginary times, including instantons whose period is greater than the thermal time. In contrast, Eq. (<ref>) only requires calculations at the temperature of interest, T, and a small number of integer multiples, nT.
Instantons are, of course, not the only way to incorporate the effects of tunneling into the calculation of reaction rate constants. In particular path-integral theories, such as ring-polymer molecular dynamics (RPMD)<cit.> and related quantum transition state theories,<cit.> are capable of describing the transition between the high and low temperature regimes.
However, it is important to recognise that, as these approaches involve path-integral sampling, they are practically very different methods.
In particular, numerical determination of free-energy differences, and the associated need to compute the potential energy at a large number of configurations typically makes sampling methods more expensive than instanton theory. The present theory and RPMD rate theory thus have different use cases. RPMD is most useful in liquid systems where instanton theory cannot be applied,<cit.> and instanton theory is most useful for gas phase and surface reactions in combination with high-level ab-initio electronic structure theory.<cit.>
One might reasonably suggest that making a harmonic approximation to RPMD-TST would avoid the need to perform path-integral sampling, and hence give a competitor to the present theory. However, to evaluate harmonic RPMD-TST in the vicinity of the crossover temperature one would need to derive a uniform theory that would likely be very similar to Eq. (<ref>). This is because, above the crossover temperature a basic harmonic approximation to RPMD-TST recovers the parabolic barrier rate,<cit.> and below the crossover temperature it is closely related to the F formulation of the thermal instanton.<cit.>
§ CONCLUSIONS AND FUTURE WORK
The breakdown of instanton theory at the crossover temperature has been a long standing problem in reaction rate theory. Although suggestions had been made to overcome this problem,<cit.> none were entirely satisfactory. Here the problem has been rigorously solved for the general class of multidimensional problems in which the instanton collapses smoothly to the transition state. To derive this result we have combined a new real-time version of Richardson's flux-correlation function derivation of instanton theory<cit.> with the modern method for developing uniform asymptotic expansions due to Bleistein.<cit.>
The resulting theory is a rigorous semiclassical theory for the rate that is uniformly valid as a function of τ=βħ. Unlike previous approaches<cit.> the present result is a smooth function of temperature.
The new theory is also rigorously independent of dividing surface, and obeys detailed balance.
Although the derivation was motivated by considering real-time trajectories, the final result only involves imaginary-time quantities (just like the original instanton theory).
Importantly, close to the crossover temperature the new theory only requires information already available from a standard instanton calculation, meaning that the theory can immediately be applied within existing instanton codes.
There are a number of exciting avenues for further the theoretical development. First, the present theory is an important step towards the development of more accurate microcanonical instanton theories. In particular, as the present theory is a globally valid (and smooth) function of temperature, it is now possible to use the steepest descent approximation to the inverse Laplace transform to obtain the microcanonical rate at any energy.<cit.> [Note in this approach the asymptotic parameter is not associated with ħ.]
The real-time formalism developed here also opens up exciting opportunities for future research. Although the real-time components of the trajectories do not contribute to the current theory, they could be used to extract extra information in other contexts, such as vibrationally state-resolved or electronically nonadiabatic reaction rates.
Another interesting direction for further development lies in connecting the present results to the F formulation. While the derivation used here was based on reactive scattering, the final result should be equally applicable to escape from a metastable well—the basis of the F derivation of instanton theory. Exploring this connection further would nevertheless be useful to link the current work more closely to the high-energy physics literature.
A key advantage of the first-principles derivation of the new theory is that it provides a rigorous framework for future work.
One such area is the generalisation of the theory to more complex systems in which the instanton does not collapse smoothly to the transition state with increasing temperature.
While these systems have not received much attention so far in the chemical literature, this is likely because there has not existed a theory that can treat them.
Another advantage of the present theory is that it can be systematically improved via the inclusion of higher order terms in the asymptotic series. These perturbative corrections have already been incorporated into the ring-polymer instanton (RPI) formalism (to give RPI+PC) for the calculation of molecular tunneling splittings.<cit.>
Future work will look to implement these corrections in the context of reaction rates to help describe systems where there is a significant change in anharmonicity along the reaction coordinate.
Such a combination of RPI+PC with the present theory would be a strong competitor to SCTST,<cit.> with the important advantage of providing a rigorous description of deep tunneling.
§ SUPPLEMENTARY MATERIAL
The supplementary material contains derivations of Eqs. (<ref>) and (<ref>). It also contains a comparison of the present approach to the recent proposal of Upadhyayula and Pollak that illustrates that their result is not a rigorous uniform asymptotic approximation to the rate constant in τ as ħ→0.
§ DATA AVAILABILITY STATEMENT
The data that support the findings of this study are available from the corresponding author upon reasonable request.
§ ACKNOWLEDGEMENTS
I would like to thank Jeremy Richardson and George Trenins for helpful discussions.
This work was supported by an Independent Postdoctoral Fellowship at the Simons Center for Computational Physical Chemistry, under a grant from the Simons Foundation (839534, MT).
§ APPENDIX: DERIVATION OF MAIN RESULT IN ONE DIMENSION FROM KEMBLE'S UNIFORM SEMICLASSICAL TRANSMISSION PROBABILITY
In the main text we derive our uniform expression in the time domain, using methods for the asymptotic evaluation of integrals.
In one dimension an alternative but equivalent approach is to use WKB analysis. These two approaches are equivalent as the asymptotic parameter is formally the same. To arrive at a uniform expression for the thermal rate we can, therefore, begin with the uniform WKB expression for the transmission probability as a function of energy
P_ SC(E) = 1/1+e^W(E)/ħ.
This expression was originally proposed by Kemble<cit.> in 1935 and was later rigorously derived by Fröman and Fröman.<cit.>
It involves the reduced Euclidean action, W(E), which is related to S_ inst(τ) via a Legendre transform as
S_ inst(τ) = W(E) + τ E
with
τ = -W'(E)
and
E = S_ inst'(τ).
When E>V^, then W(E)<0 and we can write
P_ SC(E) = ∑_n=0^∞ (-1)^n e^ n W(E)/ħ.
Similarly when E<V^ then W(E)>0 and we can write
P_ SC(E) = e^- W(E)/ħ∑_n=0^∞ (-1)^n e^-n W(E)/ħ.
To obtain the correct uniform asymptotic expression for the thermal rate we therefore begin by writing
k Z_r ∼1/2πħ∫_0^∞ P_ SC(E) e^-τ E/ħdE.
Then separating this into two parts
kZ_r∼1/2πħ ∫_0^V^ P_ SC(E) e^-τ E/ħdE
+1/2πħ ∫_V^^∞ P_ SC(E) e^-τ E/ħdE
allows us to insert the expansions Eqs. (<ref>) and (<ref>) to give
kZ_r∼1/2πħ ∫_0^V^∑_n=1^∞ (-1)^n+1 e^-n W(E)/ħ-τ E/ħdE
+1/2πħ ∫_V^^∞∑_n=0^∞ (-1)^n e^ n W(E)/ħ-τ E/ħdE.
Consider first the integral up to the barrier height
I_n(ħ)=1/2πħ∫_0^V^ e^-(n W(E)+τ E)/ħdE.
Integrating by steepest descent gives the stationary condition as
n W'(E_n^⋆(τ)) = -τ.
Now for τ/n<τ_ c this stationary point will move outside of the integration range, E_n^⋆(τ<nτ_ c)>V^. Hence to obtain a uniform expression valid when E_n^⋆>V^, we again make use of Bleistein's method.<cit.> Hence, defining
S_n,inst(τ) = n W(E_n^⋆)+τ E_n^⋆ = n S_inst(τ/n)
and
Δ S_n = τ V^ - S_n,inst(τ)
application of Bleistein's method results in the following expression as ħ→0
I_n(ħ) ∼ k_n,instZ_r1/2erfc(sgn(nτ_ c-τ)√(Δ S_n/ħ))
+1/2πe^-τ V^/ħ1/nτ_ c - τ
- 1/2πe^-τ V^/ħsgn(nτ_ c-τ) √(-S_n,inst”(τ)/2Δ S_n)
where
k_n,instZ_r = e^-S_n,inst(τ)/ħ1/√(2πħ)(-d^2 S_n,inst/dτ^2)^1/2.
Next we consider the integral above the barrier height
J_n(ħ)=1/2πħ∫_V^^∞ e^(n W(E)-τ E)/ħdE.
When evaluating this asymptotically, we note that W'(E) < 0.
Hence, for τ>0 then there is no stationary point inside the integration region and the integrand is peaked about E=V^. The integral is, therefore, approximated asymptotically by expanding the exponent linearly. Using,
W'(V^)=-τ_ c=-2π/ω we thus have
J_n(ħ) ∼1/2πħ∫_V^^∞ e^-(nτ_ c(E-V^)+τ E)/ħdE
= 1/2π e^-τ V^/ħ1/nτ_ c+τ
as ħ→0.
Combining these two results together we then obtain
k Z_r ∼∑_n=1^∞ (-1)^n+1 k_n,instZ_r1/2erfc(sgn(nτ_ c-τ)√(Δ S_n/ħ))
-∑_n=1^∞ (-1)^n+11/2πe^-τ V^/ħsgn(nτ_ c-τ) √(-S_n,inst”(τ)/2Δ S_n)
+∑_n=1^∞ (-1)^n+11/2π e^-τ V^/ħ1/nτ_ c-τ
+∑_n=0^∞ (-1)^n1/2π e^-τ V^/ħ1/nτ_ c+τ.
This can then be simplified by noting that the final two sums can be combined to give
∑_n=1^∞ (-1)^n+11/2π n/ω-τ + ∑_n=0^∞ (-1)^n1/2π n/ω+τ=∑_n=-∞^∞(-1)^n/τ+2π n/ω
which is equivalent to
∑_n=-∞^∞(-1)^n/τ+2π n/ω = ω/2sin(ωτ/2).
Hence, we arrive at our final expression for the uniform thermal rate in one dimension
k Z_r ∼∑_n=1^∞ (-1)^n+1 k_n,instZ_r1/2erfc(sgn(nτ_ c-τ)√(Δ S_n/ħ))
-∑_n=1^∞ (-1)^n+11/2πe^-τ V^/ħsgn(nτ_ c-τ) √(-S_n,inst”(τ)/2Δ S_n)
+ω/4πsin(ωτ/2) e^-τ V^/ħ.
|
http://arxiv.org/abs/2409.03730v1 | 20240905173517 | Likelihood Geometry of the Squared Grassmannian | [
"Hannah Friedman"
] | math.ST | [
"math.ST",
"math.PR",
"stat.TH"
] |
: Assessing Contextual Integrity Norms in Language Models
Yan Shvartzshnaider
York University
Vasisht Duddu
University of Waterloo
John Lacalamita
York University
=======================================================================================================================
§ ABSTRACT
We study projection determinantal point processes and their connection to the squared Grassmannian.
We prove that the log-likelihood function of this statistical model has (n - 1)!/2 critical points, all of which are real and positive, thereby settling a conjecture of Devriendt, Friedman, Reinke, and Sturmfels.
§ INTRODUCTION
Determinantal point processes (DPPs) on a finite ground set are a family of statistical models whose state space is the power set of the ground set.
The key feature of DPPs is negative correlation, meaning that DPPs select for diverse subsets of the ground set, for some notion of diversity.
DPPs elegantly model negative correlation and consequently have many applications in physics, random matrix theory, and machine learning <cit.>.
DPPs were previously studied from an algebraic statistics perspective in <cit.>.
We continue in this vein, motivated by the connection between the Grassmannian and a special class of DPPs called projection DPPs which is studied by Devriendt, Friedman, Reinke, and Sturmfels in <cit.>.
We go beyond the work in <cit.>, giving a complete picture of the likelihood geometry of projection DPPs.
Our main result is the resolution of <cit.>.
A DPP is a random variable Z on the power set 2^[n] of the finite set [n] such that
[I ⊆ Z] = (K_I),
where K is an n × n symmetric matrix with eigenvalues in [0,1] and K_I is the principal submatrix of K whose rows and columns are indexed by I.
The matrix K is the kernel of the DPP and encodes the similarity of the elements of the ground set [n].
We study maximum likelihood estimation for determinantal point processes.
Given some data (u_I)_I ∈ 2^[n], we seek the maximizer of the log-likelihood function
L_u(q) = ∑_I ∈ 2^[n] u_I log(q_I) - ( ∑_I ∈ 2^[n] u_I ) log ( ∑_I ∈ 2^[n] q_I )
where the vector q = (q_I)_I ∈ 2^[n] is a probability distribution coming from a DPP, up to scaling.
The maximizer of L_u, called the maximum likelihood estimate (MLE), is the point on the DPP model that best explains the data u.
The number of complex critical points of (<ref>) for generic data u is the maximum likelihood degree (ML degree) of the model.
The ML degree gives an algebraic bound on the complexity of maximum likelihood estimation.
To count the critical points of (<ref>), we need a precise description of the constraints on (q_I)_I ∈ 2^[n].
In particular, we need to know the probability q_I of observing a set I, rather than [I ⊆ Z].
We discuss two scenarios in which there are nice formulae for q_I.
When the eigenvalues of K lie in (0, 1), there exists a positive definite matrix Θ such that q_I = (Θ_I)/(Θ + Id_n); see <cit.> for details.
The maximum likelihood estimation problem can then be solved by maximizing (<ref>) subject to the constraint that q satisfies the hyperdeterminantal relations; see <cit.>.
Practitioners use the unconstrained, parameteric log-likelihood function
L_u(Θ) = ∑_I ∈ 2^[n] u_I log((Θ_I)) - ( ∑_I ∈ 2^[n] u_I ) log((Θ + Id_n)).
In <cit.>, the authors express the number of critical points of the parametric log-likelihood function in terms of the ML degree of the hyperdeterminantal variety.
In this paper, we take the approach of <cit.> and restrict the eigenvalues of K to be in {0, 1}.
If K has rank d, then K is the unique orthogonal projection matrix onto a d-dimensional subspace and we denote K by P.
DPPs whose kernels are projection matrices are called projection DPPs <cit.>.
By <cit.>, a projection DPP with kernel P has state space [n]d and the probability of observing a d-subset I is q_I = (P_I).
The column space of P is a d-dimensional subspace of ^n and can be represented by its vector of Plücker coordinates (p_I)_I ∈[n]d; see <cit.>.
There is a particularly nice relationship between the principal minors of P and the Plücker coordinates of its column span.
The principal minors of an orthogonal projection matrix P are proportional to the corresponding squared Plücker coordinates:
q_I = (P_I) = p^2_I/∑_J ∈[n]d p_J^2.
An immediate consequence of Lemma <ref> is that every probability distribution arising from a projection DPP can be written as the entry-wise square of a vector of Plücker coordinates.
This observation motivates the definition of the squared Grassmannian, denoted sGr(d,n), as the Zariski closure of the model for projection DPPs.
The squared Grassmannian is the image of the squaring map from the Grassmannian Gr(d, n) ⊂ℙ^nd - 1 in its Plücker embedding given by (p_I)_I ∈[n]d↦ (p_I^2)_I ∈[n]d; see <cit.>.
Thus the MLE of a projection DPP is found by maximizing the implicit log-likelihood function (<ref>) over sGr(d, n).
We focus on the case where d = 2, so our model has state space [n]2.
We determine the number of critical points of the implicit log-likelihood function:
The ML degree of sGr(2,n) is (n - 1)!/2 for n ≥ 3.
In Section <ref>, we prove that all of these critical points are relevant from a statistical perspective.
This is reminiscent of Varchenko's Theorem <cit.>.
For u ∈^n2, each critical point of the log-likelihood function (<ref>) over sGr(2,n) is real, positive, and a local maximum when restricted to real points of sGr(2,n).
Another consequence of Lemma <ref>, is that the model of projection DPPs is parameterized by d × n matrices M_d,n:
q_I = (M_d,n)_I^2/∑_J ∈[n]d (M_d,n)_J^2
where (M_d,n)_I is the maximal minor of M_d,n whose columns are indexed by I.
We then have the parametric log-likelihood function for projection DPPs:
L_u(M_d,n) = ∑_I ∈[n]d u_I log(((M_d,n)_I)^2) -
( ∑_I ∈[n]d u_I ) log ( ∑_I ∈[n]d((M_d,n)_I)^2 ).
If we restrict to the matrices M_d,n whose first d × d square is the identity, we get a 2^n-1-degree parameterization; see <cit.>.
Therefore the number of critical points of (<ref>) is 2^n-1 times the ML degree of the squared Grassmannian.
[d = 2, n = 3]
The implicit log-likelihood function is
L_u(q) = u_12log(q_12) + u_13log(q_13) + u_23log(q_23) - (u_12 + u_13 + u_23) log(q_12 + q_13 + q_23).
Because sGr(2,3) = ^2, the MLE is u and the ML degree is one.
We can find the same answer parametrically: if M_2,3 = [ 1 0 x_3; 0 1 y_3 ], then
L_u(M_2,3) = u_12log(1) + u_13log(y_3^2) + u_23log(x_3^2) - (u_12 + u_13 + u_23) log(1 + x_3^2 + y_3^2)
whose partial derivatives are
∂ L_u/∂ x_3 = 2u_23/x_3 - 2(u_12 + u_13 + u_23)x_3/1 + x_3^2 + y_3^2 = 0, ∂ L_u/∂ y_3 = 2u_13/y_3 - 2(u_12 + u_13 + u_23)y_3/1 + x_3^2 + y_3^2 = 0.
We can solve these equations by hand to find x_3 = ±√(u_23/u_12) and y_3 = ±√(u_13/u_12) for a total of four solutions.
Our parameterization sends the parametric solution to the implicit one:
[ 1 0 ±√(u_23)/√(u_12); 0 1 ±√(u_13)/√(u_12). ]↦ (1: u_13/u_12 : u_23/u_12).
Likelihood inference on the Grassmannian has been studied in other contexts as well; see <cit.> for an overview and comparison of three different models.
Of particular interest is the configuration space of X(d, n) which is modeled by the Grassmannian modulo a torus action.
When d = 2, X(2, n) is the moduli space ℳ_0,n of n marked points in ^1.
Similar to our situation, the number of critical points of the likelihood function on ℳ_0,n grows exponentially and all (n - 3)! critical points are real <cit.>.
This paper is organized as follows.
In Section <ref>, we outline the approach for the proof of Theorem <ref> and study the topology of the parametric model arising from (<ref>).
In Section <ref>, we study the stratification of the parametric model by a deletion map and prove Theorem <ref>.
In Section <ref>, we turn to real critical points and prove Theorem <ref>.
We finish by computing the MLE for random data for 4≤ n≤ 9 and we record the runtimes.
§ PARAMETRIC MODEL
Our approach to computing the ML degree of sGr(2,n) is topological.
If X ⊆^n is a projective variety, then ℋ = {(x_0 : ⋯ : x_n) ∈ X | (∑^n_i=0x_i) ·∏_i=0^n x_i = 0} is the set of points of X where the log-likelihood function is undefined.
The set X \ℋ⊆ (^*)^n+1 is a very affine variety, meaning that it is a closed subvariety of the torus (^*)^n+1.
We use the following theorem, which first appeared as <cit.> under stronger hypotheses.
If the very affine variety X \ℋ is smooth of dimension d, then the ML degree of X equals the signed Euler characteristic (-1)^dχ(X \ℋ).
The Euler characteristic is a topological invariant that can be defined as the alternating sum of the Betti numbers of the space.
We will often make use of the excision property of the Euler characteristic: if Z = X ⊔ Y, then χ(Z) = χ(X) + χ(Y).
We also rely on the fibration property of the Euler characteristic; if f: X → Y is a fibration, then χ(X) = χ(Y) χ(F), where F is the fiber of a point in Y.
For the maps we consider, proving that the fibers of a map are homeomorphic suffices to show that the map is a fibration.
We apply Theorem <ref> to the squared Grassmannian.
The open squared Grassmannian is the very affine variety
sGr(d,n)^∘ = {(q_I)_I ∈[n]d∈ sGr(d, n) | ( ∑_I ∈[n]d q_I ) ·∏_I ∈[n]d q_I≠ 0 }⊆ (^*)^nd.
The variety sGr(d,n)^∘ is smooth because Gr(d,n) is smooth and the Jacobian of the squaring map has full rank on points in the preimage of sGr(d,n)^∘.
Because ( sGr(d,n)) = d(n-d), by Theorem <ref>, the ML degree of sGr(d,n) is (-1)^d(n-d)χ( sGr(d,n)^∘).
We now turn to the case d = 2 and parameterize the open squared Grassmannian by
X_n → sGr(2,n)^∘,
M_n =
[ 1 0 x_3 x_4 ⋯ x_n; 0 1 y_3 y_4 ⋯ y_n ]↦
(p_ij^2)_1 ≤ i < j ≤ n
where p_ij is the 2-minor formed from columns i and j of M_n and X_n is the subset of ^2(n-2) such that the image of the parameterization is sGr(2,n)^∘.
To be specific, we define
Q_n = ∑_1 ≤ i < j ≤ n p_ij^2 and
X_n = {
M_n
∈ℂ^2(n-2) Q_n · ( ∏_1 ≤ i< j ≤ n p_ij ) ≠ 0
}.
We can explicitly write the parametric log-likelihood function in the d = 2 case as
L_u(M_n) = ∑_i = 3^n (u_1ilog(y_i^2) + u_2ilog(x_i^2) ) + ∑_3 ≤ i < j ≤ nu_ijlog((x_iy_j - y_ix_j)^2) - ∑_1 ≤ i < j ≤ n u_ijlog(Q_n).
The map in (<ref>) has degree 2^n-1 with no ramification points because we can flip the signs of any row and column of the matrix M_n independently; see <cit.>.
By multiplicativity of the Euler characteristic, the implicit and parametric models are related by:
χ(X_n) = 2^n-1χ( sGr(2, n)^∘) = 2^n-1 ML Deg( sGr(2,n)) = #{critical points of (<ref>)}.
By (<ref>), we can compute the number of critical points of both the implicit and parametric log-likelihood functions from χ(X_n).
We compute χ(X_n) in Section <ref> by an argument similar to that in <cit.>.
In particular, we wish to define a map X_n + 1→ X_n which deletes the last column and use the fact that this map is a stratified fibration to compute the Euler characteristic inductively.
However, this deletion map is not well defined, because Q_n vanishes on points in X_n+1, so we define this map only on the open subset of X_n + 1 where Q_n does not vanish, denoted X_n + 1^∘ = { M_n+1∈ X_n + 1 Q_n ≠ 0 }.
We define the projection
π_n + 1: X_n + 1^∘→ X_n, [ 1 0 x_3 x_4 ⋯ x_n x_n+1; 0 1 y_3 y_4 ⋯ y_n y_n+1 ]↦[ 1 0 x_3 x_4 ⋯ x_n; 0 1 y_3 y_4 ⋯ y_n ]
and argue that the change from X_n + 1 to X_n + 1^∘ does not affect the Euler characteristic.
To do so, we observe that the fiber of a point M_n ∈ X_n under π_n + 1 is the complement in (^*)^2 of the lines p_i(n + 1) and conic Q_n + 1.
The lines p_i(n+1) all pass through the origin, so the key to understanding the geometry of the fibers is to understand the conic Q_n+1.
We can write
Q_n + 1 = [ 1 x_n+1 y_n+1 ][ Q_n 0 0; 0 1 + ∑_i = 3^n y_i^2 - ∑_i = 3^n x_iy_i; 0 - ∑_i = 3^n x_iy_i 1 + ∑_i = 3^n x_i^2 ][ 1; x_n+1; y_n+1 ].
Taking the determinant of the matrix, we find that the discriminant of Q_n + 1 as a conic in x_n + 1 and y_n + 1 is Q_n^2.
We prove that restricting to X_n + 1^∘ does not change the Euler characteristic.
For n ≥ 3, we have χ(X_n+1) = χ(X_n + 1^∘).
By the excision property, χ(X_n+1) = χ(X_n + 1^∘) +χ(X_n+1\ X_n + 1^∘).
We shall prove that χ(X_n+1\ X_n + 1^∘) = 0 by showing that the map below is a fibration on its image:
X_n + 1\ X_n + 1^∘→ (^*)^2(n-2), [ 1 0 x_3 x_4 ⋯ x_n x_n+1; 0 1 y_3 y_4 ⋯ y_n y_n+1 ]↦[ 1 0 x_3 x_4 ⋯ x_n; 0 1 y_3 y_4 ⋯ y_n ].
It suffices to show that the fibers are homeomorphic.
Every nonempty fiber F is the complement in (^*)^2 of n - 1 lines through the origin: each minor p_i(n + 1) for i ≥ 3 contributes one line and Q_n + 1 degenerates into a double line since Q_n = 0.
The Euler characteristic of finitely many lines through the origin in (^*)^2 is zero because each line is a ^1 with two points removed and the lines do not intersect.
Because χ((^*)^2) = 0, the Euler characteristic of the complement of n -1 lines through the origin in (^*)^2 is zero, too.
By the fibration property of the Euler characteristic and because χ(F) = 0, we conclude that χ(X_n + 1\ X_n + 1^∘) = 0.
§ STRATIFICATION OF THE PARAMETRIC MODEL
We now prove that the map π_n + 1 is a stratified fibration and compute χ(X_n).
A map f: X → Y between complex algebraic varieties with a stratification {S_α}_α=1^m of Y by closed sets is a
stratified fibration if the restriction of f to each open stratum S^∘_α = S_α\⋃_S_β⊊ S_α S_β is a fibration with fiber denoted F_S_α.
We use Möbius inversion with the fibration property of the Euler characteristic, to compute Euler characteristics along stratified fibrations.
Let f: X → Y be a stratified fibration.
Let 𝒮 be the poset of closed strata S_α of f, let F_S_α be the fiber of a generic point in S_α, and let μ be the Möbius function of 𝒮.
Then
χ(X)
= χ(Y) ·χ(F_Y) + ∑_S_α∈𝒮χ(S_α) ·∑_S_β∈𝒮, S_β⊇ S_αμ(S_α, S_β) · (χ(F_S_β) - χ(F_Y)).
In our situation, only the generic fiber contributes: the sums vanish and χ(X) = χ(Y) χ(F_Y).
We first describe the stratification of X_n and the fibers of π_n + 1.
The map π_n + 1 is a stratified fibration with stratification
𝒮 = { X_n }∪{S_i | 1 ≤ i ≤ n}∪{S_i ∩ S_j | 1 ≤ i < j ≤ n},
where S_i = {M_n ∈ X_n |∑_j = 1^n p_ij^2 = 0}.
The fibers of π_n + 1 are the complements of n lines p_i(n+1) and a conic Q_n + 1 in ^2 where the intersections of the lines with the conic are
Q_n + 1∩ p_k(n+1) = two points for all k in F_X_n.
Q_n + 1∩ p_k(n+1) = two points for k ≠ i
∅ for k = i. in F_S_i.
Q_n + 1∩ p_k(n+1) = two points for k ≠ i,j
∅ for k ∈{i,j} in F_S_i ∩ S_j.
To prove π_n + 1 is a fibration, it suffices to show that the fibers of π_n + 1 are homeomorphic on each open stratum S^∘.
The fiber in ^2 of a point M_n ∈ X_n is the complement of the lines p_i(n + 1) and the conic Q_n + 1.
Since all the lines p_i(n + 1) pass through the origin, their intersection is the same for any M_n ∈ X_n.
The variation in the fibers comes from the intersection with the conic.
We claim that every line p_i(n+1) either intersects the conic Q_n + 1 in two distinct points or
not at all.
In the intersection locus of p_i(n+1) = 0 and Q_n + 1 = 0,
Q_n + 1 = Q_n + ∑_j = 1^n p_j (n + 1)^2 = Q_n + x_n+1^2/x_i^2∑_j = 1^n p_ij^2 = 0.
If ∑_j = 1^n p_ij^2 ≠ 0, then there are two distinct intersection points.
If ∑_j = 1^n p_ij^2 = 0, then (<ref>) becomes Q_n = 0, which is a contradiction, so the intersection of Q_n + 1 and p_i(n + 1) is empty.
Thus, in the fiber F_X_n, every line intersects the conic in two points and in the fiber F_S_i, every line intersects the conic in two points, except p_i(n+1) which does not intersect the conic.
It remains to be shown that there can be at most two lines p_i(n+1) which do not intersect Q_n + 1.
When p_i(n+1) does not intersect Q_n + 1 in ^2, the line and conic intersect at a double point at infinity.
In other words, p_i(n+1) is tangent to the projectivization of Q_n + 1 at infinity.
In projective space, p_i(n + 1) intersects Q_n + 1 tangentially if and only if p_i(n + 1)^∨ lies in the dual conic Q_n + 1^∨, i.e., [ 0 -y_i x_i ] A_n^-1[ 0 -y_i x_i ]^T =∑_j = 1^n p_ij^2 = 0.
Since at most two lines passing through the origin can be tangent to a conic, only two of the lines p_i(n + 1) can have an empty intersection with the conic Q_n + 1 at once.
Thus, in the fiber F_S_i ∩ S_j, every line intersects the conic in two points, except p_i(n + 1) and p_j(n + 1) which do not intersect the conic.
The combinatorics of the stratification 𝒮 is simple, which allows us to evaluate the formula for the Euler characteristic in Lemma <ref>.
To apply the lemma, we need the Möbius function of the poset of each closed stratum; see Figure <ref> for the posets of each of the special strata along with the values of their Möbius functions.
The remaining computations necessary to prove Theorem <ref> are Euler characteristics of the fibers and of the strata.
The Euler characteristics of the fibers are
χ(F_X_n) = 2n χ(F_S_i) = 2n - 2 χ(F_S_i ∩ S_j) = 2n - 4 for 1 ≤ i < j ≤ n.
In ^2\{(0,0)}, we have χ(Q_n + 1) = 0, since the conic Q_n + 1 has nonzero constant term Q_n and is therefore topologically equivalent to a ^1 with two points at infinity removed.
Similarly, in ^2\{(0,0)}, we have χ(p_i(n + 1)) = 0, since the line p_i(n + 1) is topologically equivalent to a ^1 with the origin and a point at infinity removed.
Suppose that M_n ∈ X_n.
Because χ(^2\{(0,0)}) = 0, by the excision property,
χ(π^-1(M_n)) = - χ(Q_n + 1) - ∑_i = 1^n χ(p_i(n + 1)) + ∑_i = 1^n χ(p_i(n + 1)∩ Q_n + 1)
= ∑_i = 1^n χ(p_i(n + 1)∩ Q_n + 1) = | ⋃_i = 1^n (p_i(n + 1)∩ Q_n + 1) |
where the last equality holds because the intersection of a line with a conic is zero dimensional.
We computed these intersections in Lemma <ref>.
In the generic fiber, each of the n lines intersects the conic in two points, for a total of 2n intersection points.
The intersections for S_i, S_i ∩ S_j are the same, but with n - 1 and n - 2 lines, respectively.
We now show that the Euler characteristics of the closed, codimension one strata vanish.
We have χ(S_i) = 0 for all i.
It is sufficient to prove the claim for S_n by symmetry.
We first remark that S_n ⊂ X_n^∘.
We argue that S_n has Euler characteristic zero by showing that π_n|_S_n is a fibration whose fiber is the union of two lines through the origin in (^*)^2.
Since the fiber has Euler characteristic zero, the result follows from the fibration property.
Because ∑_j = 1^n-1 p_jn^2 is a homogeneous polynomial in x_n and y_n, the fiber of a point in π_n(S_n) is a degenerate conic in the variables x_n, y_n, so each fiber is the union of two lines through the origin in the complement of the lines p_jn and conic Q_n in (^*)^2.
Since the p_jn all pass through the origin, they do not intersect the degenerate conic in (^*)^2.
The conic Q_n also does not intersect ∑_j = 1^n-1 p_jn^2, because Q_n - ∑_j = 1^n-1 p_jn^2 = Q_n - 1 = 0 on the intersection, contradicting S_n ⊂ X_n^∘.
Since the conic ∑_j = 1^n-1 p_jn^2 does not intersect the lines p_jn or the conic Q_n, the fiber is the union of two lines through the origin in (^*)^2.
We do not need to compute χ(S_i ∩ S_j), because the inside sum in Lemma <ref> vanishes for S_i ∩ S_j.
In fact, we can count the number of critical points of the parametric log-likelihood function.
Theorem <ref> then follows from Proposition <ref> and (<ref>).
The parametric log-likelihood function (<ref>) has 2^n-2 (n-1)! critical points.
By (<ref>), the number of critical points is equal to the Euler characteristic χ(X_n), so it suffices to show that χ(X_n) = 2^n-2 (n-1)!.
We proceed by induction on n.
The base case is furnished by Example <ref>.
By Lemmas <ref>, <ref>, and <ref>, we have
χ(X_n + 1) = χ(X_n + 1^∘) =
χ(F_X_n) χ(X_n)
+ ∑_i = 1^n χ(S_i) ∑_S' ∈{S_i, X_n}μ(S_i, S') (χ(F_S') - F_X_n )
+ ∑_1 ≤ i < j ≤ nχ(S_i∩ S_j) ∑_S' ∈{S_i ∩ S_j, S_i, S_j, X_n}μ(S_i ∩ S_j, S') (χ(F_S') - F_X_n )
By Lemma <ref>, χ(S_i) = 0, so the first sum is zero.
By Lemma <ref>
and the values of the Möbius function in Figure <ref>, the second sum is zero:
∑_S' ∈{S_i ∩ S_j, S_i, S_j, X_n}μ(S_i ∩ S_j, S') (χ(F_S') - χ(F_X_n)) =
(1)(-4) + (-1)(-2) + (-1)(-2) = 0.
By induction and Lemma <ref>, χ(X_n + 1) = χ(F_X_n) χ(X_n) = 2n(2^n-2(n-1)!) = 2^n-1n!.
A formula for the ML degree of sGr(d, n) for d > 2 has not yet been conjectured.
In our calculation of the ML degree of sGr(2,n), we relied heavily on the nice geometry of conics in the complex projective plane, which we do not have for higher d.
In <cit.>, the ML degrees for d = 3 and n = 5, 6, 7 were found to be 12, 552, and 73440 by numerical methods.
These numbers don't follow an obvious pattern, so it is likely that the combinatorics of the stratification is complicated and that we do not have the nice cancellation we get in the proof of Proposition <ref>.
The ML degrees of the configuration space X(d, n) behave similarly.
The ML degree of X(2, n) = ℳ_0,n is (n - 3)! <cit.>, but when d ≥ 3 the combinatorics of the stratification is more complicated and the Euler characteristics are more difficult to compute.
The ML degrees of X(3,n) are known for n ≤ 9 <cit.>.
The ML degree of X(4, 8) was numerically shown to be 5211816 <cit.>.
§ REAL SOLUTIONS
From a statistical perspective, the only relevant critical points of the implicit log-likelihood function (<ref>) are real, nonnegative ones.
Because q_I is a square, this condition is equivalent to the critical points of the parameteric log-likelihood function (<ref>) being real.
We will prove that when d = 2, all critical points of the parametric log-likelihood function (<ref>) are real and are local maxima.
This is not necessarily true for greater d.
[d=3, n = 6]
This parametric log-likelihood function has 17664 critical points <cit.>.
Generically, 11904 of them are real and local maxima.
These numbers were computed using the package <cit.>.
For real solutions we turn from the log-likelihood function to the likelihood function
∏_1 ≤ i < j ≤ np_ij^2u_ij/ (∑_1 ≤ i < j ≤ np_ij^2 )^∑_i,ju_ij,
which shares its critical points with the log-likelihood function.
We optimize (<ref>) on the real open set X_n^ = X_n ∩^2(n-2).
Since the quadric Q_n has no real solutions, we are left with the complement of the real algebraic variety ⋃_1 ≤ i < j ≤ nV_(p_ij).
The irreducible hypersurfaces V_(p_ij) all pass through the origin and divide X_n^ into unbounded regions.
It is known that every bounded region contains at least one critical point, because the function is either positive or negative on the region and must therefore achieve a local minimum or maximum; see <cit.>.
The following result extends this idea to unbounded regions.
Let f_0, f_1, …, f_n ∈[x_1, …, x_d] such that f_0 = 1 and f_0 + f_1 + ⋯ + f_n is positive on ^d.
If u_0, u_1, …, u_n ∈_> 0, then
#{regions of ^d \⋃_i=1^n {f_i = 0}}
≤#{critical points of f_0^u_0f_1^u_1⋯ f_n^u_n/(f_0 + ⋯ + f_n)^u_0 + ⋯ + u_n in ^d}
≤ML degree of V(f_0, …, f_n)
Further, L = f_0^u_0f_1^u_1⋯ f_n^u_n/(f_0 + ⋯ + f_n)^u_0 + ⋯ + u_n has a local maximum for every region of ^d \⋃_i=1^n {f_i = 0} where L > 0 and a local minimum for every region where L < 0.
Since f_0 + ⋯ + f_n > 0, the function L is smooth on ^d.
Because the denominator of L has larger degree than the numerator, L approaches 0 at infinity and is therefore bounded on all regions.
Because L is smooth and bounded, it attains a local maximum or minimum on each region, depending on the sign of L.
This proves the first inequality and second statement.
The second inequality follows from the definition of ML degree.
We use this result to show that all critical points of the parameteric likelihood function (<ref>) are real.
Theorem <ref> is an immediate corollary, since squares of real numbers are nonnegative.
A similar technique was used to show that all critical points of the likelihood function on the moduli space ℳ_0,n are real in <cit.>.
All critical points of the parametric likelihood function (<ref>) are real and each critical point is a local maximum.
We will prove that X_n^ has exactly 2^n-2(n-1)! regions.
By Lemma <ref>, with f_0 = p_12 = 1, L_u(M_n) has at least 2^n-2(n-1)! real solutions.
Since the total number of solutions is 2^n-2(n-1)! by Theorem <ref>, all solutions are real.
Every solution is a local maximum by Lemma <ref>, because (<ref>) is nonnegative on ^2(n-2), so it is positive on every region of X_n^.
We first argue that the number of regions of X_n^ is equal to the number of possible sign vectors of (p_ij)_1 ≤ i< j ≤ n.
Every region has a fixed sign vector, since it is impossible to change the sign vector without crossing a p_ij.
Conversely, given a sign vector (s_ij)_1 ≤ i < j ≤ n, Y_s = {M_n ∈ X_n^ : sgn(p_ij) = s_ij} is contractible: for any M_n ∈ Y_s, G(M_n', t) = tM_n + (1-t)M_n' ∈ Y_s for all t ∈ [0,1] and M_n' ∈ Y_s, so G is a deformation retract onto M_n.
To count the sign vectors, we fix 2 ≤ k ≤ n and begin with a matrix
M_n = [ 1 0 -x_3 ⋯ -x_k x_k + 1 ⋯ x_n; 1 0 y_3 ⋯ y_k y_k + 1 ⋯ y_n ]
in X_n such that x_3,…, x_n, y_3, …, y_n > 0.
We take any permutation of the last n - 2 columns and flip the sign of any of the last n - 2 columns.
This process yields 2^n - 2 (n - 2)! different matrices.
We argue that given a fixed M_n, each of these matrices has a distinct sign vector.
Because every matrix in X_n arises in this way, there are 2^n - 2 (n - 2)! possible sign vectors for a fixed k.
Hence, in total, we have 2^n - 2 (n - 1)! total possible sign vectors.
Let p be the Plücker vector of M_n and q be the Plücker vector of a matrix produced from M_n by the permutation and sign changes described above.
We identify the permutation and columns whose signs were flipped.
The columns i with q_1i < 0 had their signs flipped.
Assuming that q_1i > 0 for all i, we uniquely identify the permutation from its inversions: for i, j ≥ 3 the signs of p_ij and q_ij agree if and only if the pair ij is not an inversion.
The practical implication of this result is that likelihood inference for projection DPPs is difficult.
In particular, for any data u, the number of local maximizers of both the implicit and parametric log-likelihood function grows exponentially.
In constrast, the linear model on ℳ_0,n in <cit.>, has only one positive critical point.
Computational Experiments.
To compute the true MLE for some data u, one needs to compute all critical points of the parametric log-likelihood function (<ref>), evaluate (<ref>) at the critical points, and select the one which yields the largest value.
We give runtimes for computing the MLE of our model for data selected uniformly at random from [1000].
We use the numerical algebraic geometry software <cit.> to compute the critical points of (<ref>).
We use the strategy outlined in <cit.> to find solutions to the rational equations ∇ L_u(M_n) = 0.
We first use the monodromy method to compute the solutions to ∇ L_u'(M_n) = 0 for some complex start parameters u'.
We then use a coefficient parameter homotopy to move the start parameters u' to the target parameters u ∈ [1000]^n2, simultaneously moving each solution of ∇ L_u'(M_n) = 0 to a solution of ∇ L_u(M_n) = 0.
[ n = 4 n = 5 n = 6 n = 7 n = 8 n = 9; Runtime 180 ms 235 ms 962 ms 14.287 s 330 s 2 h; Number of Solutions 24 192 1920 23040 322560 5160960; ]
The runtime grows exponentially, as expected.
The bulk of the time is spent computing the solutions to ∇ L_u'(M_n) = 0 with the monodromy method.
Because uses a heuristic stopping criterion, knowing the number of solutions a priori makes the computation significantly faster than if we did not know the number of solutions.
The computations were run with 64 threads on a 2 × 8-Core Intel Xeon Gold 6144 at 3.5 GHz.
Acknowledgments
We thank Simon Telen for proving the count of critical points for n = 4, which inspired the proof of Theorem <ref>.
We also thank Claudia Fevola, Bernhard Reinke, and Claudia Yun for helpful conversations.
We thank the Max Planck Institute for Mathematics in the Sciences for its stimulating research environment and for its computing resources.
Hannah Friedman, UC Berkeley
<[email protected]>
|
http://arxiv.org/abs/2409.03225v1 | 20240905034535 | Enhancing Healthcare LLM Trust with Atypical Presentations Recalibration | [
"Jeremy Qin",
"Bang Liu",
"Quoc Dinh Nguyen"
] | cs.CL | [
"cs.CL"
] |
Theory of Turbulent Equilibrium Spheres with Power-Law Linewidth-Size Relation
[
Accepted XXX. Received YYY; in original form ZZZ
==============================================================================
§ ABSTRACT
Black-box large language models (LLMs) are increasingly deployed in various environments, making it essential for these models to effectively convey their confidence and uncertainty, especially in high-stakes settings. However, these models often exhibit overconfidence, leading to potential risks and misjudgments. Existing techniques for eliciting and calibrating LLM confidence have primarily focused on general reasoning datasets, yielding only modest improvements. Accurate calibration is crucial for informed decision-making and preventing adverse outcomes but remains challenging due to the complexity and variability of tasks these models perform. In this work, we investigate the miscalibration behavior of black-box LLMs within the healthcare setting. We propose a novel method, Atypical Presentations Recalibration, which leverages atypical presentations to adjust the model's confidence estimates. Our approach significantly improves calibration, reducing calibration errors by approximately 60% on three medical question answering datasets and outperforming existing methods such as vanilla verbalized confidence, CoT verbalized confidence and others. Additionally, we provide an in-depth analysis of the role of atypicality within the recalibration framework. The code can be found at <https://github.com/jeremy-qin/medical_confidence_elicitation>
§ INTRODUCTION
Despite recent successes and innovations in large language models (LLMs), their translational value in high-stakes environments, such as healthcare, has not been fully realized. This is primarily due to concerns about the trustworthiness and transparency of these models, stemming from their complex architecture and black-box nature. Recent studies <cit.> have begun to explore methods for eliciting confidence and uncertainty estimates from these models in order to enhance trustworthiness and transparency. The ability to convey uncertainty and confidence is central to clinical medicine <cit.> and plays a crucial role in facilitating rational and informed decision-making. This underscores the importance of investigating and utilizing calibrated confidence estimates for the medical domain.
Previous work on confidence elicitation and calibration of large language models (LLMs) has mainly focused on general reasoning and general knowledge datasets for tasks such as logical reasoning, commonsense reasoning, mathematical reasoning, and scientific knowledge <cit.>. Few studies have investigated tasks that require expert knowledge, and these have shown considerable room for improvement. Moreover, with the success of many closed-source LLMs, such as GPT-3.5 and GPT-4, which do not allow access to token-likelihoods and text embeddings, it has become prevalent to develop tailored methods for eliciting confidence estimates. However, most approaches developed consist of general prompting and sampling strategies without using domain-specific characteristics.
Traditionally, clinicians are taught to recognize and diagnose typical presentations of common illnesses based on patient demographics, symptoms and signs, test results, and other standard indicators <cit.>. However, the frequent occurrence of atypical presentations is often overlooked <cit.>. Failing to identify atypical presentations can result in worse outcomes, missed diagnoses, and lost opportunities for treating common conditions. Thus, awareness of atypical presentations in clinical practice is fundamental to providing high-quality care and making informed decisions. Figure <ref> depicts a simplistic example of how atypicality plays a role in diagnosis. Incorporating the concept of atypicality has been shown to improve uncertainty quantification and model performance for discriminative neural networks and white-box large language models <cit.>. This underscores the importance of leveraging atypical presentations to enhance the calibration of LLMs, particularly in high-stakes environments like healthcare.
Our study aims to address these gaps by first investigating the miscalibration of black-box LLMs when answering medical questions using non-logit-based uncertainty quantification methods. We begin by testing various baseline methods to benchmark the calibration of these models across a range of medical question-answering datasets. This benchmarking provides a comprehensive understanding of the current state of calibration in LLMs within the healthcare domain and highlights the limitations of existing approaches.
Next, we propose a new recalibration framework based on the concept of atypicality, termed Atypical Presentations Recalibration. This method leverages atypical presentations to adjust the model's confidence estimates, making them more accurate and reliable. Under this framework, we construct two distinct atypicality-aware prompting strategies for the LLMs, encouraging them to consider and reason over atypical cases explicitly. We then compare the performance and calibration of these strategies against the baseline methods to evaluate their effectiveness.
Finally, our empirical results reveal several key findings. First, black-box LLMs often fail to provide calibrated confidence estimates when answering medical questions and tend to remain overconfident. Second, our proposed Atypical Presentations Aware Recalibration method significantly improves calibration, reducing calibration errors by approximately 60% on three medical question answering datasets and consistently outperforming existing baseline methods across all datasets. Third, we observe that atypicality interacts in a complex manner with both performance and calibration, suggesting that considering atypical presentations is crucial for developing more accurate and trustworthy LLMs in healthcare settings.
§ BACKGROUND AND RELATED WORK
§.§ Confidence and Uncertainty quantification in LLMs
Confidence and uncertainty quantification is a well-established field, but the recent emergence of large language models (LLMs) has introduced new challenges and opportunities. Although studies have shown a distinction between confidence and uncertainty, we will use these terms interchangeably in our work.
Research on this topic can be broadly categorized into two areas: approaches targeting closed-source models and those focusing on open-source models. The growing applications of commercial LLMs, due to their ease of use, have necessitated particular methods to quantify their confidence. For black-box LLMs, a natural approach is to prompt them to express confidence verbally, a method known as verbalized confidence, first introduced by <cit.>. Other studies have explored this approach specifically for language models fine-tuned with reinforcement learning from human feedback (RLHF) <cit.>. Additionally, some research has proposed new metrics to quantify uncertainty <cit.>.
Our work aligns most closely with <cit.>, who presented a framework that combines prompting strategies, sampling techniques, and aggregation methods to elicit calibrated confidences from LLMs. While previous studies primarily benchmarked their methods on general reasoning tasks, our study focuses on the medical domain, where accurate uncertainty quantification is critical for diagnosis and decision-making. We evaluate LLM calibration using the framework defined by <cit.> and propose a framework consisting of a new prompting strategy and aggregation method, termed Atypicality Presentations Recalibration, which shows significant improvements in calibrating LLM uncertainty in the medical domain.
§.§ Atypical Presentations
Atypical presentations have garnered increasing attention and recognition in the medical field due to their critical role in reducing diagnostic errors and enhancing problem-based learning in medical education <cit.>. Atypical presentations are defined as "a shortage of prototypical features most frequently encountered in patients with the disease, features encountered in advanced stages of the disease, or features commonly listed in medical textbooks" <cit.>. This concept is particularly important in geriatrics, where older patients often present atypically, and in medical education, where it prompts students to engage in deeper reflection during diagnosis.
Given the increasing emphasis on atypical presentations in medical decision-making, it is pertinent to explore whether this concept can be leveraged to calibrate machine learning models. <cit.> were the first to incorporate atypicality into model calibration for classification tasks. Our work extends this approach to generative models like LLMs, integrating atypical presentations to achieve more accurate and calibrated confidence estimates.
§ METHOD
In this section, we describe the methods used to elicit confidence from large language models (LLMs) as well as our recalibration methods. Calibration in our settings refers to the alignment between the confidence estimates and the true likelihood of outcomes <cit.>. Our experiments are based on the framework described by <cit.>, which divides the approaches into three main components: prompting, sampling, and aggregation, and uses it as baselines. In their framework, they leverage common prompting strategies such as vanilla prompting and Chain-of-Thoughts while also leveraging the stochasticity of LLMs. In contrast, we propose an approach, Atypical Presentation Recalibration, that retrieves atypicality scores and use them as a recalibration method in order to have more accurate confidence estimates. Our framework is mainly divided into two parts: Atypicality Prompting and Atypicality Recalibration. We explain how each of the three components are applied to our tasks and how we integrate atypicality to develop hybrid methods that combine these elements.
§.§ Prompting methods
Eliciting confidence from LLMs can be achieved through various methods, including natural language expressions, visual representations, and numerical scores <cit.>. We refer to these methods collectively as verbalized confidence. While there are trade-offs between these methods, we focus on retrieving numerical confidence estimates for better precision and ease of calibration. We design a set of prompts to elicit confidence estimates from LLMs.
Vanilla Prompting
The most straightforward way to elicit confidence scores from LLMs is to ask the model to provide a confidence score on a certain scale. We term this method as vanilla prompting. This score is then used to assess calibration.
Chain-of-Thought (CoT)
Eliciting intermediate and multi-step reasoning through simple prompting has shown improvements in various LLM tasks. By allowing for more reflection and reasoning, this method helps the model express a more informed confidence estimate. We use zero-shot Chain-of-Thought (CoT) <cit.> in our study.
Atypicality Prompting
Inspired by the concept of atypical presentations in medicine, we aim to enhance the reliability and transparency of LLM decision-making by incorporating atypicality into the confidence estimation process. We develop two distinct prompting strategies to achieve this goal:
* Atypical Presentations Prompt: This strategy focuses on identifying and highlighting atypical symptoms and features within the medical data. The prompt is designed to guide the LLM to assess the typicality of each symptom presented in the question. By systematically evaluating which symptoms are atypical, the model can better gauge the uncertainty associated with the diagnosis. For example, the prompt might ask the model to rate the typicality of each symptom on a scale from 0 to 1, where 1 represents a typical symptom and 0 represents an atypical symptom. In the following sections of the paper, we will refer to these scores as atypicality scores where the lower the score is the more atypical it is. This information is then used to adjust the confidence score accordingly.
* Atypical Scenario Prompt: This strategy evaluates the typicality of the question itself. It is based on the notion that questions which are less familiar or more complex may naturally elicit higher uncertainty. The prompt asks the LLM to consider how common or typical the given medical scenario is. For instance, the model might be prompted to rate the overall typicality of the scenario on a similar scale. This approach helps to capture the inherent uncertainty in less familiar or more complex questions.
§.§ Sampling and Aggregation
While verbalized confidences provide a straightforward way to assess the uncertainty of LLMs, we can also leverage the stochasticity of LLMs <cit.> by generating multiple answers for the same question. Different aggregation strategies can then be used to evaluate how aligned these sampled answers are. We follow the framework defined by <cit.> for the sampling and aggregation methods and uses them as baselines to our Atypical Presentations Recalibration framework.
Self-Random Sampling
The simplest strategy to generate multiple answers from an LLM is by repeatedly asking the same question and collecting the responses. These responses are then aggregated to produce a final confidence estimate.
Consistency We use the consistency of agreement between different answers from the LLM as the final confidence estimate <cit.>. For a given question with a reference answer Ỹ, we generate a sample of answers Ŷ_k. The aggregated confidence C_consistency is defined as:
C_consistency = 1/K∑_k=1^K 1{Ŷ_k = Ỹ}
Weighted Average
Building on the consistency aggregation method, we can use a weighting mechanism that incorporates the confidence scores elicited from the LLM. This method weights the agreement between the different answers by their respective confidence scores. The aggregated confidence C_average is defined as:
C_average = ∑_k=1^K 1{Ŷ_k = Ỹ} * C_k/∑_k=1^K C_k
Atypicality Recalibration
To integrate the atypicality scores elicited with Atypicality Presentations Prompting into our confidence estimation framework, we propose a non-linear post-hoc recalibration method that combines the initial confidence score with an aggregation of the atypicality assessments. This method draws inspiration from economic and financial models where expert judgments are combined with varying weights and exponential utility functions to address risk aversion. Formally, for an initial confidence C_i of a given question and atypical scores A_k, the calibrated confidence CC_i is computed as follows:
CC_i = C_i * (1/K∑_k=1^K e^A_k - 1)
where A_k takes values in [0,1] and a value of 1 corresponds to a typical value. For the Atypical Scenario Prompt, this equation translates to having K equal to 1. Thus, the final confidence estimate will equal the initial confidence score only if all the atypical scores are 1.
§ EXPERIMENTS
§.§ Experimental Setup
Datasets
Our experiments evaluate the calibration of confidence estimates across three different english medical question-answering datasets. For our experiments, we restricted on evaluating on only the development set of each dataset. MedQA <cit.> consists of 1272 questions based on the United States Medical License Exams and collected from the professional medical board exams. MedMCQA <cit.> is a large-scale multiple-choice question answering dataset with 2816 questions collected from AIIMS & NEET PG entrance exams covering a wide variety of healthcare topics and medical subjects. PubMedQA <cit.> is a biomedical question answering dataset with 500 samples collected from PubMed abstracts where the task is to answer research question corresponding to an abstract with yes/no/maybe.
Models We use a variety of commercial LLMs that includes GPT-3.5-turbo <cit.>, GPT-4-turbo <cit.>, Claude3-sonnet <cit.> and Gemini 1.0 Pro <cit.>.
Evaluation Metrics
To measure how well the confidence estimates are calibrated, we will report multiple metrics across the different datasets, methods and models. Calibration is defined as how well a model's predicted probability is aligned with the true likelihoods of outcomes <cit.>. We measure this using Expected Calibration Error (ECE) <cit.> and Brier Score <cit.>.
To evaluate the quality of confidence estimates using ECE, we group the model's confidence into K bins and estimate ECE by taking the weighted average of the difference between confidence and accuracy in each bin <cit.>. Formally, let N be the sample size, K the number of bins, and I_k the indices of samples in the k^th bin, we have:
ECE_K = ∑_k=1^K |I_k|/N |acc(I_k) - conf(I_k)|
Brier score is a scoring function that measures the accuracy of the predicted confidence estimates and is equivalent to the mean squared error. Formally, it is defined as:
BS = 1/N∑_n=1^N (conf_n - o_n)^2
where conf_n and o_n are the confidence estimate and outcome of the n^th sample respectively.
Additionally, to evaluate if the LLM can convey higher confidence scores for correct predictions and lower confidence scores for incorrect predictions, we use the Area Under the Receiver Operating Characteristic Curve (AUROC). Finally, to assess any significant changes in performance, we also report accuracy on the different tasks.
§.§ Results and Analysis
To assess the ability of LLMs to provide calibrated confidence scores and explore the use of atypical scores for calibration, we experimented with each mentioned method using four different black-box LLMs across three medical question-answering datasets. The main results and findings are reported in the following section.
LLMs are miscalibrated for Medical QA.
To evaluate the reliability and calibration of confidence scores elicited by the LLMs, we examined the calibration curves of GPT-3.5-turbo in Figure <ref>, where the green dotted line represents perfect calibration. The results indicate that the confidence scores are generally miscalibrated, with the LLMs tending to be overconfident. Although the Atypical Scenario and Atypical Presentations methods show improvements with better alignment, there is still room for improvement. Introducing recalibration methods with atypicality scores results in more variation in the calibration curves, including instances of underestimation. Additional calibration curves for the other models are provided in Appendix <ref>.
Leveraging atypical scores greatly improves calibration.
We analyzed the calibration metrics for each method and found that leveraging atypical scores significantly reduces ECE and Brier Score across all datasets, as shown in Figure <ref> and Table <ref>. In contrast, other methods show minor changes in calibration errors, with some even increasing ECE. The Consistency and Average methods do not show improvement, and sometimes degrade, due to the multiple-choice format of the datasets, which shifts confidence estimates to higher, more overconfident values. However, the Atypical Scenario method, which elicits an atypical score describing how unusual the medical scenario is, outperforms all other methods and significantly lowers ECE compared to vanilla confidence scores. It is very interesting that the level of atypicality considered seems to make a significant difference. It is a hallmark of reasoning that how the LLM aggregates the atypicality from a lower level when prompted for a scenario is superior to simply aggregating the symptoms atypicality. This opens for further investigation into how LLMs reason about atypicality. We discuss and analyze the role of atypicality in calibration in the following sections. Detailed results of our experiments are reported in Table <ref>.
Atypicality distribution varies between Atypical Scenario and Atypical Presentations.
To better understand the gap between the calibration errors of Atypical Scenario and Atypical Presentations, we first examine the distribution of the atypicality scores. In Figure <ref>, we observe that the distribution of Atypical Presentations is much more right-skewed, indicating a prevalence of typical scores. This is largely due to the nature of the approach. Not all questions in the datasets are necessarily diagnostic questions; for example, some may ask for medical advice, where there is no atypicality associated with symptoms or presentations. In our framework, we impute the atypicality score to 1 for such cases, so it does not affect the original confidence estimate. In contrast, Atypical Scenario shows a more evenly distributed spread over the scores. This suggests that the LLMs can identify that some questions and scenarios are more atypical, which allows this atypicality to be considered when calibrating the confidence estimates.
Typical samples do not consistently outperform atypical samples.
We now question the performance of atypical versus typical samples. The intuitive answer is that performance should be better on typical samples, which are common scenarios or symptoms, making the question easier to answer. However, as shown in Figure <ref>, there is no consistent pattern between accuracy and atypicality for GPT-3.5-turbo. While accuracy increases as atypicality decreases in some cases like MedQA and PubMedQA, in other cases, the accuracy remains unchanged or even decreases. This performance variation across typicality bins provides insights into how LLMs use the notion of atypicality in their reasoning process. Higher accuracy for atypical samples could suggest that unique, easily identifiable features help the LLM. Conversely, high atypicality can indicate that the question is more difficult, leading to lower accuracy. To understand this better, we also experimented with prompts to retrieve difficulty scores and analyzed their relationship with atypicality. Our results show no clear correlation between difficulty and atypicality scores. Most atypicality scores are relatively high across all difficulty levels. Although some atypical samples are deemed more difficult, the results are inconsistent and hard to interpret. Associated graphs are in Appendix <ref>. Briefly, this inconsistent performance behavior shows there is more to explore about how LLMs use atypicality intrinsically.
Atypicality does not predict LLM's calibration error.
Another question we explored was whether calibration errors correlate with atypicality. We used the same approach as our performance analysis, binning the samples by atypicality scores and examining the ECE within each bin. This allowed us to evaluate how well the model's predicted confidence level aligned with actual outcomes across varying levels of atypicality. For both Atypical Scenario and Atypical Presentations, we assessed GPT-3.5-turbo's calibration. As shown in Figure <ref>, there are no clear patterns between atypicality scores and calibration errors. The high fluctuation of ECE across different levels of atypicality suggests that the model experiences high calibration errors for both typical and atypical samples. This indicates that calibration performance is influenced by factors beyond just atypicality. Similar to the previous performance analysis in terms of accuracy, how LLMs interpret and leverage atypicality may vary between samples, leading to inconsistent behavior.
Atypicality helps in failure prediction.
While ECE and Brier Score provide insights into the reliability and calibration of confidence estimates, it is also important for the model to assign higher confidences to correct predictions and lower confidences to incorrect predictions. To assess this, we used AUROC. In Table <ref>, we observe that incorporating atypicality into our model improves its performance across most experiments compared to the vanilla baseline. However, these improvements do not consistently outperform all other methods evaluated. This indicates that, while incorporating atypicality can improve the model's failure prediction, there remain specific scenarios where alternative methods may be more effective.
§ CONCLUSION
In our study, we have demonstrated that LLMs remain miscalibrated and overconfident in the medical domain. Our results indicate that incorporating the notion of atypicality when eliciting LLM confidence leads to significant gains in calibration and some improvement in failure prediction for medical QA tasks. This finding opens the door to further investigate the calibration of LLMs in other high-stakes domains. Additionally, it motivates the development of methods that leverage important domain-specific notions and adapting our method for white-box LLMs. We hope that our work can inspire others to tackle these challenges and to develop methods for more trustworthy, explainable and transparent models, which are becoming increasingly urgent.
Limitations This study present a first effort into assessing black-box LLMs calibration and the use of atypicality in the healthcare domain. Several aspects of the study can further be improved for a better assessment. While we restricted ourselves to three modical question-answering datasets, we can expand it to more datasets with questions that are more open-ended or even different tasks such as clinical notes summarization which could also benefit a lot from having trusted confidence estimates. Next, we limited to use only commercial LLMs as they sit better in the medical context because of their ease of use and availability. Since our approach is also applicable to open-source LLMs, testing and assessing our approach to these other models will allow for a more comprehensive review of calibration in LLMs for the medical setting. Morever, our approach is still dependent on a prompt, and since LLMs are quite sensitive to how we prompt them, there could be even more optimal prompts for retrieving atypicality scores. Lastly, the notion of atypicality is not only seen and leverage in healthcare, but it is also present in other domains such as law. Adapting our methodology for other domains could further improve LLMs calibration performance.
Ethical considerations In our work, we focus on the medical domain with the goal of enhancing the calibration and accuracy of confidence scores provided by large language models to support better-informed decision-making. While our results demonstrate significant improvements in calibration, it is imperative to stress that LLMs should not be solely relied upon without the oversight of a qualified medical expert. The involvement of a physician or an expert is essential to validate the model's recommandations and ensure a safe and effective decision-making process.
Moreover, we acknowledge the ethical implications of deploying AI in healthcare. It is crucial to recognize that LLMs are not infallible and can produce erroneuous outputs. Ensuring transparency in how these models reach their conclusions, and incorporating feedback from healthcare professionals are vital steps in maintaining the integrity and safety of medical practice. Thus, our work is a step towards creating reliable tools, but it must be integrated thoughtfully within the existing healthcare framework to truly benefit patient outcomes.
§ ADDITIONAL RESULTS
In the main sections of the paper, we presented figures for GPT3.5-turbo. Here we provide additional results for GPT3.5-turbo and the other three models to support the claims and findings discussed above. We show calibration and performance metrics for all methods used and across all three datasets: MedQA, MedMCQA and PubmedQA. Furthermore, we provide additional graphs to support the analysis of the distributions of atypicality scores across the different datasets as well as the distribution of atypicality scores by difficulty levels.
The findings and conclusions from these additional figures are already discussed in the main sections of the paper. These supplementary figures are included here to demonstrate that the findings are consistent across multiple models, ensuring that the conclusions drawn are robust and not based solely on one model.
§ PROMPT TEMPLATES
We provide the full prompt used for Atypical Scenario and Atypical Presentations. Note that for completeness, the version of prompts provided contains the component of difficulty scores. This component is optional and is only used for analyzing the relationship between difficulty and atypicality. The prompt templates can be found at Table <ref>.
|
http://arxiv.org/abs/2409.02645v1 | 20240904122205 | A Survey on Emergent Language | [
"Jannik Peters",
"Constantin Waubert de Puiseau",
"Hasan Tercan",
"Arya Gopikrishnan",
"Gustavo Adolpho Lucas De Carvalho",
"Christian Bitter",
"Tobias Meisen"
] | cs.MA | [
"cs.MA",
"cs.CL"
] |
A Survey on Emergent Language]A Survey on Emergent Language
[1]Jannik Peters [email protected]
1]Constantin Waubert de Puiseau [email protected]
1]Hasan Tercan [email protected]
2]Arya Gopikrishnan [email protected]
Work done during and after a DAAD RISE internship at Institute of Technologies and Management of Digital Transformation.
3]Gustavo Adolpho Lucas De Carvalho [email protected]
Work done during and after a DAAD RISE internship at Institute of Technologies and Management of Digital Transformation.
1]Christian Bitter [email protected]
1]Tobias Meisen [email protected]
[1]University of Wuppertal, Institute of Technologies and Management of the Digital Transformation, Rainer-Gruenter-Str. 21, Wuppertal, 42119, NRW, Germany
[2]Drexel University, College of Engineering, 3141 Chestnut St, Philadelphia, 19104, PA, USA
[3]University of Southern California, Department of Computer Science, 941 Bloom Walk, Los Angeles, 90089, CA, USA
The field of emergent language represents a novel area of research within the domain of artificial intelligence, particularly within the context of multi-agent reinforcement learning. Although the concept of studying language emergence is not new, early approaches were primarily concerned with explaining human language formation, with little consideration given to its potential utility for artificial agents. In contrast, studies based on reinforcement learning aim to develop communicative capabilities in agents that are comparable to or even superior to human language. Thus, they extend beyond the learned statistical representations that are common in natural language processing research. This gives rise to a number of fundamental questions, from the prerequisites for language emergence to the criteria for measuring its success. This paper addresses these questions by providing a comprehensive review of scientific publications on emergent language in artificial intelligence. Its objective is to serve as a reference for researchers interested in or proficient in the field. Consequently, the main contributions are the definition and overview of the prevailing terminology, the analysis of existing evaluation methods and metrics, and the description of the identified research gaps.
[
[
September 4, 2024
=====================
[MARL]
el[EL]emergent language
ec[EC]emergent communication
nl[NL]natural language
nlp[NLP]natural language processing
rl[RL]reinforcement learning
marl[MARL]multi-agent reinforcement learning
llm[LLM]large language model
hci[HCI]human-computer interaction
§ INTRODUCTION
Communication between individual entities is based on conventions and rules that emerge from the necessity or advantage of coordination. Accordingly, Lewis <cit.> formalized settings that facilitate the emergence of language as coordination problems <cit.> and introduced a simple signaling game. This game, in which a speaker describes an object and a listener confronted with multiple options has to identify the indicated one, extensively shaped the field of el research in computer science. Early works examined narrowly defined questions regarding the characteristics of ec via hand-crafted simulations <cit.>. These approaches mostly utilized supervised learning methods and non-situated settings, limiting them in their ability to examine the origins and development of complex linguistic features <cit.>. However, el research recently experienced an upsurge <cit.> with a focus on marl approaches <cit.> to enable the examination of more complex features.
One fundamental goal of el research from the marl perspective is to have agents autonomously develop a communication form that allows not only agent-to-agent but also agent-to-human communication in nl style fashion <cit.>. Therefore, rl methods are attractive from two points of view. First, successful communication settings might lead to agents that are more flexible and useful in everyday life <cit.>. Furthermore, they may provide insights into the evolution of nl itself <cit.>.
el is the methodological attempt to enable agents to not only statistically understand and use nl, like nlp models that learn on text alone <cit.>, but rather to design, acquire, develop, and learn their own language <cit.>.
The autonomy and independent active experience of rl learning settings is a crucial difference to the data-driven approaches in the field of nlp <cit.> and its llm. According to Browning and Lecun, we should not confuse the shallow understanding llm possess for the deep understanding humans acquire <cit.> through their experiences in life.
In el settings, the agents experience the benefits of communication through goal-oriented tasks <cit.> just like it happens naturally <cit.> and therefore have the opportunity to develop a deeper understanding of the world <cit.>. Hence, advances in el research enable novel applications of multi-agent systems and a considerably advanced form of human-centric AI <cit.>.
In the current state of el research, numerous different methods and metrics are already established but they are complex to structure and important issues remain regarding the analysis and comparison of achieved results <cit.>. Therefore, we see a need for a taxonomy to prevent misunderstandings and incorrect use of established metrics. In this paper, we address these issues by providing a comprehensive overview of publications in el research and by introducing a taxonomy for discrete el that encompasses key concepts and terminologies of this field. Additionally, we present established and recent metrics for discrete el categorized according to the taxonomy and discuss their utility. Our goal is to provide a clear and concise description that researchers can use as a shared resource for guidance. Finally, we create a summary of el research that highlights its achievements and provides an outlook on future research directions.
We base our work on a comprehensive and systematic literature search with reproducible search terms on well-known databases. We follow the PRISMA <cit.> specifications and show a corresponding flow diagram in Figure <ref> in Appendix <ref>. The literature search and review process as well as its results are described in detail in Section <ref>. All identified work has been reviewed and categorized according to an extensive list of specific characteristics, e.g. regarding communication setting, game composition, environment configuration, language design, language metrics, and more.
Previous surveys of el in computer science focused only on a subgroup of characteristics or specific parts of this research area.
Some of these earlier surveys focus on specific learning settings <cit.>, on methodological summaries and criticism <cit.>, or provide a more general overview <cit.>. The most similar ones to our work are <cit.> and <cit.>. <cit.> gives an introduction and overview of the el field before 2021, however, it is mostly a summary of previous work and does not provide a taxonomy or review of existing metrics in the field as we do. <cit.> focuses on common characteristics in ec research and the development of emergent human-machine communication strategies. They discuss distinctions and connections of ec research to linguistics, cognitive science, computer science, and sociology, while we focus on emergent language and its analysis. We describe and discuss all relevant surveys in more detail in Section <ref>.
Based on this preliminary work, the current state of research on el misses an overarching review and a comprehensive compilation and alignment of proposed quantification and comparability methods. Accordingly, the key contributions of the present survey are:
* A taxonomy of the el field, in particular regarding the properties of discrete el, see Section <ref>.
* An analysis of quantification approaches and metrics, including their categorization, see Section <ref>.
* A summary of open questions and an outlook on potential future work, see Section <ref>.
In addition, we introduce the fundamental concepts of nl and ec that underlie our survey in Section <ref>. As mentioned, we provide a detailed summary of related surveys in Section <ref>. Section <ref> describes our study methodology, including the keywords and terms of our systematic literature search. Finally, Section <ref> offers a concluding discussion and final remarks.
§ BACKGROUND
To contextualize the presented taxonomy and analysis, this section summarizes the key concepts of communication and linguistics and provides an overview of el research.
§.§ Communication
Communication at its very basis is the transfer or exchange of signals, which can be interpreted to form some information. These signals include both intended, such as deliberate utterances, and unintended, such as uncontrolled bodily reactions, and include both explicit and implicit parts <cit.>. According to Watzlawik's Interactional View <cit.>, one cannot not communicate. In this regard, communication is ubiquitous and necessary, occurring through various channels and modes <cit.>. Depending on the specific channel and purpose, communication can be roughly divided into the five forms depicted in Figure <ref>.
In the context of el, two of these forms are actively studied (see Section <ref>), namely interpersonal communication and group communication. Interpersonal communication is communication between entities that mutually influence each other, and its general setting is depicted in Figure <ref>. This form of communication is based on individual entities, each within its perceivable environment. Although these environments are agent-specific, they overlap and allow communication through a common channel. In addition, there may be noise in this process that affects the perception of the environment or the communication itself.
Group communication, on the other hand, differs only in the number of entities involved and the communication goal. Usually, group communication is more formal and focuses on a common goal or group task while interpersonal communication has a social character and might only relate to a goal or task of one of the participants. Accordingly, the group communication setting can be found in most population-based el research.
Intrapersonal communication (e.g., internal vocalization), public communication (e.g., lectures), and mass communication (e.g., blog entries) are not currently examined in the el literature.
Generally, communication can be seen as a utility to coordinate with others, as motivated by linguistic and computer science research alike <cit.>. Conversely, the necessity for collaboration within a collective may be a fundamental precursor to the evolution and sustained functionality of explicit communication <cit.>. This theory leads to an essential differentiation regarding context-dependent communication. Meaningful communication might emerge in a cooperative but not in a fully competitive or manipulative setting. However, a partially competitive setting might be vital for the emergence of resilient and comprehensive communication, e.g. to enable the detection and use of lies <cit.>. Accordingly, the level of cooperation is a defining element of the communication setting in el research, as outlined in Section <ref>.
nl is one major accomplishment of humanity that is utilized in all forms of communication (see Section <ref>). It is a tool that allows us to encode very complex information within a discrete and humanly manageable amount of utterances. A lot of artificial intelligence research aims to develop nl models, with applications ranging from translation to coherent full-text generation based on single-word input <cit.>.
However, a theory with many supporters from the AI community <cit.> states that current intelligently designed statistical models trained on large static datasets do not produce an understanding of language that can lead to productive cooperation with humans <cit.>.
Correspondingly, the recently evolved field of el research in AI aims to enable agents to utilize intended communication in the same way humans use it to increase cooperation, performance, and generalization and, in the long run, enable direct meaningful communication between humans and artificial systems <cit.>. In line with this, multiple explicit forms of ec in artificial intelligence research have been investigated as shown in Section <ref>. In contrast, work focusing on implicit communication, like the information content of spatial positioning of agents in a multi-agent setting <cit.>, is not part of the present survey.
§.§ Natural Language
nl is a prime example of a versatile and comprehensive form of communication designed to convey meaning <cit.>. The flexibility of nl allows humans to be exact but also deliberately ambiguous in their communication <cit.>. It is a vital feature that distinguishes us from other species and gives us a great advantage in terms of knowledge storage, sharing, and acquisition <cit.>. However, the origin and evolution of language is still a mystery <cit.>. In the field of linguistics, many conflicting theories have been introduced so far <cit.>, ranging from behavioral to biological explanations. Additionally, accompanying research in the field of computer science has a long history <cit.> with a comparable range of theories. Even though there is still a debate around this topic, it is commonly agreed upon that a very intricate evolutionary process was involved <cit.>. This evolution most likely took place in two different areas simultaneously, biologically and linguistically. On the biological side, the human brain most likely developed specific areas and functionalities specifically for more complex language-based communication, that are studied in the scientific field of neurolinguistics <cit.>. On the linguistics side, this evolution can be seen in language development itself, which is a constantly ongoing process <cit.>. Similarly, el is concerned with the research of suitable model structures for the processing of language, while concurrently developing and evaluating language.
While the exact origin of language is highly debatable, the actual communication process via nl is generally easier to conceptualize. For example, it can be modeled by the semiotic cycle depicted in Figure <ref> <cit.>. This depiction applies to multiple expressive channels, e.g. speech and writing.
It assumes at least two involved parties, a speaker and a listener. The speaker produces an utterance based on the meaning to be conveyed. This meaning results from the combined conceptualization of the speaker's goal and model of the world. On the other hand, the listener receives the utterance and comprehends it to derive a meaning, which is not a direct copy of the initial one by the speaker but it still refers to the shared world. The interpretation of the meaning, which the listener's world model informs, leads to some action by the listener.
At the center of this process are the shared world and the respective world models of speaker and listener that function as grounding for the information exchange via language. Further, both linguistic level components of production and comprehension allow the respective agent to participate in the language process.
The semiotic cycle puts the utterance as an externalized information carrier into focus. While the other components are internalized and thus difficult to define and measure, the utterance itself is external and available for analysis. Fundamentally, this specific utterance is based on the underlying communication process and specifically, the language used. Accordingly, most research papers investigate characteristics of the utilized language to analyze the communication possibilities and capabilities of users.
To this end, linguistics subdivides the language structure into six major levels <cit.>, as illustrated in Figure <ref>. This structure was originally developed for spoken language, as indicated by the terms *phonetics and *phonology derived from the Greek word *phon meaning *sound. However, the levels are also applicable to written language in the context of el. Therefore, the following description will address both spoken and written language within this framework.
The phonetics level includes the entire bandwidth of the chosen, often continuous, language channel. For example, it comprises the full range of possible speech sounds available to humans. Consequently, it is fundamental for the general transfer range and describes it without any limitation.
At the phonology level are the atomic building blocks of the spoken or written language, defined as phonemes or graphemes. A phoneme or grapheme enables the creation of meaning as well as the necessary distinction at the lowest level of language. However, in a nl with an alphabetic writing system, phonemes and graphemes, which in this case correspond to letters, are often not a direct match and are only roughly related. Nevertheless, these individual units comprise the set of used elements from the continuous channel range for a specific language.
These are used and combined at the morphology level to create and assign meaning by making words, in linguistics called lexemes. In this context, word-forming rules and underlying structures are of interest.
Utilizing these meaningful building blocks, sentences can be realized at the syntax level. This level only concerns the structure of sentences and in particular, their assembly rules and the word categories that are used. The meaning of these sentences is relevant at the next level, semantics. At this level, the literal meaning of language constructions is of interest while the final level, pragmatics, focuses on how context contributes to the meaning. Accordingly, it analyzes how language is used in interactions and the relationship between the involved parties. Overall, the presented levels are not only important to describe language functionally and structurally but also to distinguish language characteristics and metrics. Thus, we use them to organize parts of the taxonomy in Section <ref> and the metrics in Section <ref>.
§.§ Emergent Language
el refers to a form of communication that develops among artificial agents through interaction, without being explicitly pre-programmed. Thus, it is a bottom-up approach, arising from the agents' need to cooperate and solve tasks within a given environment <cit.>. This process involves the agents creating, adapting, and refining linguistic structures and meanings to enhance their ability to exchange information effectively and efficiently <cit.>. el research aims to understand the principles and mechanisms underlying this spontaneous development of communication. It explores how linguistic elements such as syntax <cit.>, semantics <cit.>, and pragmatics <cit.> can arise from the interaction of artificial agents and how these elements contribute to the agents' performance and cooperation.
A nl-like communication form would make artificial agents and computer systems, in general, more accessible, simpler to comprehend, and altogether more powerful <cit.>. el research originally focused on the question of language origin <cit.>. Recently, this focus shifted to the more functional aspect of el, focusing on how to enable agent systems to benefit from a mechanism that helped humanity thrive and how to achieve communication capabilities as close as possible to nl <cit.>. Today, el within computer science is about self-learned <cit.>, reusable <cit.>, teachable <cit.>, interpretable <cit.>, and powerful <cit.> communication protocols. In the long run, el aims to enable machines to communicate with each other and with humans in a more seamless and extendable manner <cit.>.
Accordingly, various research questions and areas were derived.
For example, recent papers have addressed issues around the nature of the setting, which can be semi-cooperative <cit.>, include adversaries <cit.>, have message-influencing noise <cit.>, or incorporate social structures <cit.>.
Moreover, some are concerned with the challenge of grounding el, e.g. using representation learning as basis <cit.>, combining supervised learning and self-play <cit.>, or utilizing el agents as the basis for nl finetuning approaches <cit.>.
Others tackle the direct emergence of language with nl characteristics, e.g. looking at internal and external pressures <cit.>, evaluating factors to enforce semantic conveyance <cit.>, looking at compositionality <cit.>, generalization <cit.>, or expressivity <cit.>, or questioning the importance of characteristics like compositionality <cit.> and the connection between compositionality and generalization <cit.>.
Based on these examples and the introduced goals and approaches, the difference in comparison to nlp research becomes apparent. Current approaches in nlp, namely llm, learn language imitation via statistics, but they do not capture the functional aspects and the purpose of communication itself <cit.>. In contrast, el uses language not as the sole objective but as a means to achieve something with meaning <cit.>. Accordingly, agents have to learn their own el to enable functionality beyond simple statistical reproduction. Specifically, agents should learn communication by necessity or benefits <cit.> and they need a setting that rewards or encourages communication, e.g., an at least partially cooperative setting <cit.>.
While the el concept sounds simple, it comes with many challenges. Encouraging communication alone can lead to simple gibberish that helps with task completion but does not represent the intended natural language characteristics <cit.>. Providing the right incentives for language development is therefore crucial. In addition, it is important to examine how agents use communication and the opportunity to send and receive information, raising the question of how to measure successful communication <cit.>. The measurability of language properties such as syntax, semantics, and pragmatics is also important for assessing the emergence of desirable language properties <cit.>. The following sections explore these challenges and related constructs and approaches in detail.
§ RELATED SURVEYS
L>X
As briefly mentioned in Section <ref>, our literature review identified 19 publications that we classified as surveys. We adopted a broad definition of what constitutes a survey, categorizing any publication as a survey if it either explicitly described itself as such or provided a particularly comprehensive and structured review of previous research. These publications conduct similar investigations on el research but with different scopes. We focus on discrete language emergence, associated taxonomy, characteristics, metrics, and research gaps. In contrast, in our review of the existing survey work, three distinct interpretive directions emerge, which we categorize as summarized in Table <ref>: Surveys that focus on the learning settings <cit.>, surveys that summarize and review utilized methods <cit.>, and surveys that provide a general discussion or overview of the el field <cit.>. The following section briefly summarizes these surveys within these categories.
*Settings
Surveys are classified within the settings category when the primary focus is on the general learning problem, the environment, and the design of the language learning setting.
van Eecke and Beuls <cit.> provided a comprehensive overview of the language game paradigm and lined out common ideas in marl research. They categorized distinct types of experiments in this paradigm and identified properties that should be considered in marl research, e.g. symmetric agents taking all roles or fully autonomous behavior. Our survey, on the other hand, addresses approaches beyond the language game paradigm with increased detail (see Section <ref>). Similarly, Lipowska and Lipowski <cit.> reported on the state of the art in el based on marl, focusing on the language game. Their paper discussed the explainability of protolanguages developed in the surveyed work as well as sociocultural approaches, such as migration or teachability, with an emphasis on the naming game and simple one-word communication. While these aspects are part of our analysis, our review goes further by placing them in a common context. Denamganaï and Walker <cit.> reviewed literature related to referential games to generate a nomenclature, leading to the development of the ReferentialGym framework. Their paper included some well-known el metrics, such as positive signaling and positive listening <cit.>, primarily aiming to introduce ReferentialGym as a comprehensive research framework. Although referential games are part of our analysis, they represent only a small part of our survey. Additionally, our work discusses multiple metrics implemented in their framework.
*Methods
The methods category contains surveys that primarily address learning methods and methods of evaluation.
Korbak et al. <cit.> discussed existing compositionality metrics and highlighted different types of compositionality, which are not fully addressed in the current literature. The authors argued that most el research and metrics emphasize the communication aspect of learned representations, like symbol sets and simple concatenation. At the same time, nl are non-trivially compositional <cit.> and require an analysis of the semantic perspective. Hence, they introduced a metric called tree reconstruction error. Our discussion of compositionality (see Section <ref>) is shorter, but we refer interested readers to <cit.> as we also include the proposed metric (see Section <ref>).
LaCroix <cit.> argued that there is an overemphasis on compositionality in el research, noting that no evolutionary precursor could be identified so far that supports this focus. The author suggested that approaches should rather focus on reflexivity - the ability to take advantage of previously evolved communicative dispositions to shape future dispositions. However, reflexivity metrics have not been established yet, so they are not included in our survey.
Lemon <cit.> reviewed language grounding, specifically the combination of symbolic grounding and conversational grounding. Symbolic grounding, as further lined out in Section <ref>, describes simple symbol-to-concept connections, while conversational grounding enables agents to adapt their language and learn new concepts <cit.>. The author argues for better data collections to resolve disagreements and clarify ambiguities. As no metrics were proposed, we do not delve further into this topic in our survey.
Lowe et al. <cit.> focused on language utility rather than semantic characteristics, reviewing metrics related to the usefulness of emergent protocols. The authors also proposed metrics such as positive signaling and positive listening, arguing for causal relationships over reward-based metrics when investigating ec. This work is one of the main inspirations for our Section <ref> focusing on language utility.
Mihai and Hare <cit.> motivated the exploration of factors that convey semantics rather than low-level hashes of an environment or task. In their review, they criticized the lack of insights into the semantics of emerged languages and highlighted the importance of disentangling and measuring semantics as a future research direction. As they do not propose additional metrics, we include their work here as a general reference without discussing it more extensively.
In their review of language pressures and biases employed in el research, Galke and Raviv <cit.> sought to resolve mismatches between neural agent el and human nl. They identify four pressures and discuss their utility as well as their inclusion into the el process, either because they are inherent to the objective or through inductive biases. These pressures facilitate the generation of nl phenomena in el that are also part of this survey. However, our focus is on the measurability of these phenomena and characteristics, rather than on the training biases that potentially give rise to them.
Vanneste et al. <cit.> analyzed and compared discretization methods for communication learning with marl. They concluded that methods like discretize regular unit (DRU), straight through DRU, and straight through Gumbel-Softmax are suitable for general use but emphasized that the best method depends on the specific environment. Discretization methods are essential for discrete el learning, but they are not the focus of our survey, so we refer interested readers to <cit.> for further details.
*General
Surveys in this general category do not fit into the other categories as they either have a focus that does not fit the settings or methods classification or are providing a general overview of the field.
Hernandez-Leal et al. <cit.> provided an extensive overview of the field of multi-agent deep reinforcement learning in general, proposing four categories to group recent work: emergent behavior, learning communication, learning cooperation, and agents modeling agents. In the segment on learning communication, the authors introduced multiple approaches through short paper summaries without going into too much detail or critical discussion. Overall, the paper is a compelling source in regard to the historical development of the field and to get an overview of recent developments in multi-agent learning in general. Additionally, the authors provided a conclusive list of lessons learned, practical challenges, and open questions. Our survey includes the work mentioned in the learning communication part of <cit.> but also several additional sources, and more importantly, we examine and review these with a different focus. Nevertheless, we explicitly recommend <cit.> as an extensive survey of the state of multi-agent deep reinforcement learning in general.
Brandizzi and Iocchi <cit.> advocated for a general human-in-the-loop concept within ec research to emphasize the interaction between humans and artificial intelligence (AI). The authors argued that, so far, human interaction is extremely underrepresented in the field. To substantiate this claim, they compared common characteristics of el research and the modeling of aspects of human-human interactions, focusing on a categorization of interaction types and the theory of mind approach. Accordingly, they elaborated in depth on the possible interaction and communication settings, e.g. types of cooperation and competitiveness, but did not provide a comprehensive categorization of existing papers in the field as we do.
Moulin-Frier and Oudeyer <cit.> provided a short review of marl research to put it into perspective with historical linguistic research and theories. To do so, they summarized established theories on the formation of language and formulate future challenges for marl, e.g. decentralized learning, plausible constraints, and intrinsic motivation. The authors have provided an interesting and inspiring non-technical view, but they do not discuss appropriate metrics, nor do they provide a detailed explanation of the characteristics of emergent language.
Galke et al. <cit.> gave a survey of ec approaches using rl, by examining 15 papers in detail. Additionally, the authors focused on the perceived mismatch between el and human nl. The compositionality of nl was used as an exemplary feature to accentuate the shortcomings of el so far. The authors concluded that key cognitive and communicative constraints, which essentially form nl, are still missing in the simulations utilized for el learning, e.g. memory constraints and role alternation. These findings are also discussed in our work and put into a wider context of publications.
Fernando et al. <cit.> also provided a brief review of language-based ec approaches. The shortcomings of those were used to motivate a drawing-based communication approach and accompanying communication game variants. While this work exhibits an interesting proposal it does not provide enough implementation details and is thematically outside of the focus group of our survey. Nevertheless, we also report on and discuss drawing-based communication approaches.
Suglia et al. <cit.> provided a categorization and analysis of visually grounded language games, datasets, and models. The focus of their survey is the multimodal grounding approach associated with visual language games. Visually grounded language games are categorized into discriminative, generative, and interactive tasks. ec is categorized as interactive and thus part of the most relevant class of language games to study the problem of grounded language learning <cit.> according to the authors.
Zhu et al. <cit.> derived nine dimensions to structure ec works. These dimensions include controlled goals, communication constraints, communicatee type, communication policy, communicated messages, message combination, inner integration, learning methods, and training schemes. Their work focused on learning tasks with communication rather than the el itself and is thus a recommended complement to our work.
Lazaridou and Baroni <cit.>, as mentioned earlier, is similar to our work. The paper included a concise introduction and overview of the el field, featuring the different types of communication, language understanding, language characteristics, and settings. However, it was mostly a summary of previous work and does not focus on the metrics and quantification of el as much as we do (see Section <ref>). Additionally, we provide an extensive taxonomy to provide a structured overview of the concepts and wording in the field (see Section <ref>).
Brandizzi <cit.> is also similar to our work. This survey reviewed ec literature to establish common characteristics within the field, resulting in four categories: game environment, learning paradigm, interaction types, and theory of mind. It tries to draw parallels to fields like linguistics, cognitive science, computer science, and sociology to derive open challenges for emergent human-machine communication. It also discussed some of the metrics we include in this survey. Even though it included a linguistics view, it did not use established frameworks from this field to derive a comprehensive taxonomy like we do (see Section <ref>). Additionally, it is based on the review of 73 publications which were found via cross-referencing and https://www.connectedpapers.com/Connected Papers while we include publications identified in a systematic literature search.
§ STUDY METHODOLOGY
The literature search that resulted in the body of work surveyed in this paper was conducted on the . The used libraries and databases are:
https://www.sciencedirect.com/ScienceDirect, https://ieeexplore.ieee.org/IEEE Xplore, https://dl.acm.org/ACM Digital Library, https://www.webofscience.com/WebOfScience, https://arxiv.org/arXiv, and https://www.semanticscholar.org/SemanticScholar. https://www.semanticscholar.org/SemanticScholar is a special case, due to the nature of the provided search machine that does not allow complex queries and filtering like the others. Consequently, we hand-picked suitable papers from the first 50 entries of the search result list. A PRISMA <cit.> flow diagram of the publication selection process is provided in Figure <ref> in Appendix <ref>. Additionally, the individual queries and results of all services are summarized in Table <ref>.
The queries delivered hits in total which resulted in unique papers. A first quick read of these papers led to additional papers, referenced by some of the originally found work. Accordingly, the literature review started with a corpus consisting of individual papers.
Of the papers, were sorted out due to the substantial divergence from the searched topic, often focusing on domains like 5G, networking, and radio. Of the remaining papers, directly address the field of interest, while are only partially relevant. Papers were deemed partially relevant if they mentioned the surveyed topic but primarily focused on different areas such as datasets, language theory, simulation, or unrelated case studies. In conclusion, this survey mainly reviews papers that directly discuss or contribute to the topic of el in computer science.
Figure <ref> presents the distribution of the relevant publications over the years, categorized by publication type. The topic of el has maintained a steady presence in conference publications, peaking in 2020. The subsequent decline in total publications may be attributed to the absence of recent topic-specific workshops. Additionally, the surge in interest in llm technologies might have diverted attention from el research. It is also worth noting that some recent studies may not have been openly published at the time of our literature search. We therefore expect the publication count to increase by 2024.
§ TAXONOMY OF EMERGENT LANGUAGE
In the course of our comprehensive literature review, we identified recurrent instances of taxonomic inconsistencies due to missing standardization <cit.> and ill-adapted metrics <cit.>. Particular concern arises from the discrepancy between the concepts intended for measurement and their corresponding metrics, or the absence of such metrics <cit.>. This section is dedicated to the formulation of a systematic taxonomy aimed at enhancing comparability and mitigating confusion within the field. This taxonomy forms the basis for the following sections and is designed to ensure consistent representation throughout the survey. It is created with the hope that it will serve as a cornerstone for future research, promoting the use of standardized terminology, particularly in the domain of language characteristics.
The taxonomy first describes the main factors influencing the el, before categorizing the language characteristics. These influencing factors have a significant impact on the investigative possibilities of el research and are therefore of particular importance when analyzing el.
Thus, the taxonomy introduces a classification system for the communication setting (Section <ref>) and communication games (Section <ref>) that agents encounter during language emergence. The communication setting encompasses factors such as the number of agents and the type of communication available to them. The communication game involves the environmental configuration and crucial factors influencing challenges and the complexity of multi-task learning. Furthermore, a short discussion on the concept of language priors is provided in Section <ref>, considering that the presence of a prior significantly influences the characteristics of the emerging language <cit.>. We conclude this section with a comprehensive overview of the concepts and characteristics examined within el research (Section <ref>). The taxonomy adheres to the six major linguistic structural levels introduced in Section <ref> and illustrated in Figure <ref>.
§.§ Communication Setting
In the literature, several communication settings are represented. One distinguishing factor is the number of agents involved. We derived three classes - the single agent, dual agent, and population setting. While the single agent setting is rare, the other two are well represented in the examined literature, as shown in Table <ref>. A single agent is typically used to train human-machine interfaces <cit.> or fine-tune existing models <cit.>. In contrast, dual-agent settings are more common and often involve a pair of speaker-listener agents, with one agent designated as the speaker and the other as the listener exclusively <cit.>. The population setting involves larger groups of agents in the language emergence process. This requires more computational resources but also enables more possibilities for regularization <cit.> and language evolution <cit.>. Accordingly, the population setting offers more opportunities to actively shape the process <cit.>.
An additional factor that shapes the communication setting is the type of cooperation inherent in the setup. Determining the level of cooperation or competition feasible within the setting is a fundamental decision and closely related to the choice of the language game. We derived three options - the cooperative, semi-cooperative, and competitive type. In the literature reviewed, the majority of studies adopted a fully cooperative setting approach, where agents fully share their rewards and lack individual components. The emphasis on strongly cooperative settings is justified given that AI agents utilize a common language to coordinate and will not learn to communicate if they dominate without communication <cit.>. Only a few publications explore semi-cooperative settings that incorporate individual rewards alongside shared rewards, introducing the challenge of balancing tasks and rewards <cit.>. A semi-cooperative setup can be compared to a simplified social scenario with overarching societal objectives, while also encompassing additional individual interests and goals. In contrast, investigations of fully competitive settings are rare, with only one work in which agents compete for rewards without a common goal <cit.>. This scarcity likely arises from the fact that such settings inherently favor deceptive language as the only advantageous strategy, making its emergence improbable without any cooperative element. <cit.>.
The third important factor in communications settings is symmetry. Agents should treat messages similarly to regular observations; otherwise, they risk devolving into mere directives <cit.>. Building on this premise, the symmetry is important for promoting robust language emergence, as opposed to languages that consist primarily of directives. An illustrative example of asymmetric settings is the commonly used, and aforementioned, speaker-listener paradigm <cit.>. Languages developed in such settings are severely limited compared to nl, lacking the capacity for diverse discourse or even basic information exchange beyond directives <cit.>. Contrary to promoting informed choices by the listener, the speaker-listener approach emphasizes obedience to commands. Conversely, a symmetric setting facilitates bi-directional communication, thereby allowing for more comprehensive language development <cit.>. For instance, symmetry may result from agents being randomly assigned roles within the interaction <cit.>. Additionally, symmetry can emerge from tasks that are inherently balanced, such as negotiations between equal partners where both parties have equivalent roles and objectives <cit.>.
At the population level, another important consideration is the choice of recipients, i.e., between targeted and broadcast communication. While broadcast communication facilitates broader information dissemination across the agent group, targeted communication promotes the development of social group dynamics and regularization <cit.>. For example, targeted communication strategies can be learned through mechanisms such as attention <cit.>, and agents can develop minimized communication strategies that optimize group performance <cit.>.
Table <ref> provides a summary of these settings and their variations. The setting categories presented and their implementation are not inherently tied to the language itself but are crucial in determining the likelihood of meaningful language emergence and in shaping the features and experimental possibilities. These initial choices dictate the options for the language development process, the opportunities for regularization <cit.>, and the requirements regarding computational resources.
§.§ Language Games
Distinct communication settings are implemented through different communication games. In this section, we provide an overview of the games used in el literature. Specifically, we focus on a subset of these games known as language games, that emphasize explicit communication via a predefined language channel.
The literature identifies several categories of language games, such as referential games, reconstruction games, question-answer games, grid-world games, among others. Our review indicates that these categories represent the most commonly used game types. To give a comprehensive view, Table <ref> lists the publications that focus on these game types. In the following, we offer a concise overview of each category to provide a clearer understanding of their characteristics.
Referential Game: Generally, a referential game, also called signaling game, consists of two agents, a sender and a receiver <cit.>. The objective of this game is for the receiver to correctly identify a particular sample from a set, which may include distractors, solely based on the message received from the sender. This set can consist of images <cit.>, object feature vectors <cit.>, texts <cit.>, or even graphs <cit.>. To accomplish this selection task, the sender must first encode a message that contains information about the correct sample. In game design, a fundamental decision arises regarding whether the sender should only view the correct sample or also some distractors that may differ from those presented to the receiver <cit.>. Another design decision concerns the receiver's side, specifically the number of distractors and whether to provide the original sample shown to the sender or only a similar one for selection <cit.>. However, only the encoded message is transmitted to the receiver, who then selects an item from their given collection.
Reconstruction Game: The reconstruction game is similar to the referential game, but with a key difference: the receiver does not have a collection to choose from. Instead, the receiver must construct a sample based on the message from the sender, aiming to replicate the original sample shown to the sender as closely as possible <cit.>. Consequently, this game setup resembles an autoencoder-based approach, with a latent space tailored to mimic or facilitate language <cit.>. Therefore, the key distinction between reconstruction and referential games, often used interchangeably in early literature, lies in the collection's presence (referential) or absence (reconstruction) for the receiver to select from <cit.>.
Question-Answer Game: The question-answer game is a variant of the referential game, but without strict adherence to previously established rules. It operates as a multi-round referential game, allowing for iterative and bilateral communication <cit.>. Unlike referential and reconstruction games, the question-answer game explicitly incorporates provisions for multiple rounds with follow-up or clarifying queries from the receiver <cit.>. Question-answer games have introduced intriguing inquiries and avenues for exploring the symmetry of el, although they are not as widely adopted <cit.>.
Grid World Game: Grid world games use a simplified 2D environment to model various scenarios like warehouse path planning <cit.>, movement of objects <cit.>, traffic junctions <cit.>, or mazes <cit.>. They offer design flexibility, allowing agents to be part of the environment or act as external supervisors. Design choices also include environment complexity and the extent of agents' observations. Although common in the literature surveyed, implementations of grid world games vary widely in their design choices and are thus a very heterogeneous group.
Continuous World Game: Continuous environments add complexity to the learning process <cit.>. In el approaches, the learning landscape involves multi-task settings where one task is tackled directly within the environment while another involves language formation. Playing continuous world games, whether in two or three dimensions, presents challenges and adds a greater sense of realism and intricacy. These environments have the potential to make it more feasible to deploy el agents in real-world scenarios compared to discrete environments <cit.>.
Other: The literature on el also covers various other game types besides those mentioned earlier, such as matrix communication games <cit.>, social deduction games <cit.>, or lever games <cit.>. These game types contribute to the creation of new language emergence settings, often designed to target specific aspects or characteristics of language development. They are valuable tools to explore and understand the complexities of el in different contexts.
In summary, although many language games have been developed, comparing different games can be complex and understanding the nuances of each game can prove challenging. A promising direction would be for the research community to collectively agree on a standardized subset of these games as benchmarks. By focusing on a representative set of games from different categories, researchers could systematically explore different settings, ensuring that new approaches are rigorously tested and their results are directly comparable across studies. This would accelerate the maturation of the field of el research, foster collaboration, and enable the community to better identify and address key challenges.
§.§ Language Prior
el research occasionally utilizes a concept known as a language prior to incorporate structures from human nl into the emerging language. A language prior is used to impose specific linguistic structures on the emerging language, making it easier to align with human nl and improve interpretability and performance. This prior can be implemented through supervised learning <cit.>, also known as injection, or through divergence estimation <cit.>. An overview of prior usage in the literature surveyed is given in Table <ref> in Appendix <ref>.
Given this context, research on el can be divided into two main areas. The first area focuses on independent situated learning and does not use priors, so that communication and language emerge spontaneously <cit.>. The second area explores imitation learning-based approaches, which aim to replicate nl behavior in artificial agents using priors <cit.>. However, it is important to note that these approaches differ from llm because language acquisition in el is generally task-oriented. In academic literature, the independent situated learning environment is often referred to as the evolution-based approach, while the imitation learning-related approach is commonly known as the acquisition-based approach. The term evolution implies starting from scratch, while acquisition involves learning an existing language <cit.>. The terminology and different approaches are depicted in Figure <ref>.
In addition, the concepts of community and generational learning are closely related <cit.>. In these methods, language emerges through iterative learning across and within agent sub-groups called communities. Generational learning additionally involves older generations of agents training younger ones using previously developed communication as a foundation <cit.>. Language transfer across groups or generations can be interpreted as an iterative prior. However, this method remains a fully evolutionary approach in the absence of a deliberately designed prior.
§.§ Language Characteristics
As discussed in Section <ref>, language is a complex, multifaceted system <cit.>. Therefore, it is essential to establish a comprehensive taxonomy of its properties to provide a unified framework for el research. This taxonomy will not only facilitate the unambiguous categorization of metrics used in el studies (cf. Section <ref>) but will also enhance the comparability and comprehensibility of approaches and results within the field.
As shown previously in Figure <ref>, nl can be divided hierarchically into distinct characteristics <cit.>. The following sections provide a categorization of the reviewed publications along these characteristics, occasionally breaking them down into smaller sub-characteristics if relevant.
§.§.§ Phonetics
The phonetics of a language inherently represents its medium, delineating the constraints of the specific communication channel <cit.>. These media or channels can be either discrete or continuous; for example, an audio channel is continuous, while a symbolic channel is typically discrete. Regardless of the type, they lay the foundation for the nature of communication.
However, for el research the discrete case is of particular importance, as it closely mirrors nl as we understand it <cit.>. Although humans use a continuous phonetic medium for communication, some degree of discretization is essential to establish a common ground for efficient communication <cit.>.
Table <ref> provides an overview of the reviewed papers, categorized according to the continuous or discrete approach. Notably, some papers explore both approaches, providing valuable insights for researchers interested in the basic aspects of phonetics research in el.
§.§.§ Phonology
Phonology encompasses the actively used vocabulary and determines the part of the medium that is utilized for communication. We identified five different types of vocabulary actively researched, however, some of them are rare to find in the literature. Table <ref> summarizes the results of our survey regarding vocabulary types in el research.
One commonly used phonological type in el is a binary encoding, while an even more prominent type is a token-based vocabulary. However, these two phonological classes are not always distinct, as a token-based vocabulary often builds upon a binary encoded representation <cit.>.
The other three types, which are distinct from the two most prominent, are rarely mentioned in the literature reviewed.
One of these types involves using nl vocabulary, such as all the words from an English dictionary. While this approach enforces the nl resemblance of the el, it also drastically limits the emergence and associated benefits <cit.>. Essentially, this phonological preset strips the agents of the possibility to shape phonology and morphology.
The other two vocabulary types being referred to are sound and graphics. The former enables agents to produce and process sound <cit.>, while the latter focuses on enabling agents to draw and analyze graphical representations <cit.>. Both mediums present challenges in ensuring discretization, which may be the reason why they are not as extensively researched in el.
§.§.§ Morphology
Morphology governs the rules for constructing words and sentences, meaning the overall ability to combine individual elements, also called tokens, into words and to combine those words into sentences <cit.>. This is particularly relevant in the field of el due to the prominent division of existing work based on morphological setup and options. The most significant differentiation is between the use of a fixed or flexible message length. Table <ref> demonstrates that much of the existing work employs fixed message lengths, despite this setup not being comparable to nl <cit.>. For instance, nl users, such as humans, have the ability to adjust the length of their message to fit their intention, which may vary depending on the audience, medium, or communicative goal. When communicating with colleagues, they may use shorter sentences to be efficient, while more detailed explanations may be used when conversing with friends.
Accordingly, this characteristic can be measured using metrics that assess word formation and vocabulary. Based on the metrics found in the literature, distinct features of language morphology can be quantified. Specifically, this refers to the compression of language and the presence of redundancy or ambiguity.
Compression
Compression <cit.>, also known as combinatoriality <cit.>, refers to the ability of a communication system to combine a small number of basic elements to create a vast range of words that can carry meaning. This feature of discrete communication is crucial in producing comprehensive and flexible communication with limited resources, and is an essential characteristic of nl. We assume that using compressed language is generally favorable for language learners as it reduces the burden of learning <cit.>.
Redundancy or Ambiguity
In nl, words and phrases can have redundant or ambiguous meanings. Redundancy occurs when multiple words convey the same meaning, while ambiguity arises from a limited vocabulary <cit.>. The addition of this characteristic in the morphology subsection rather than the semantics subsection may be controversial. We argue that any metric measuring redundancy or ambiguity provides more useful information about the morphology, encompassing the form and size of the vocabulary, than it does about the semantic range and capabilities of the language. However, to quantify redundancy or ambiguity, we must establish semantic meaning first.
§.§.§ Syntax
The syntax of a language establishes the grammatical rules that govern sentence formation. Consequently, syntax plays a central role in establishing a functional correspondence between emerged language and nl <cit.>. This specific characteristic of a discrete language is underrepresented in current el literature.
However, we found two examples in the body of literature discussing syntax in el. Ueda et al. <cit.> introduced a method to examine the syntactic structure of an el using categorial grammar induction (CGI), which is based on the induction of categorial grammars from sentence-meaning pairs. This method is straightforward in simple referential games. Additionally, van der Wal et al. <cit.> introduced unsupervised grammar induction (UGI) techniques for syntax analysis in el research. We discuss the methods they use to measure and analyze syntax in an el briefly in Section <ref>.
§.§.§ Semantics
Semantics is concerned with the literal meaning of language constructs and is a dominant topic in current el research, as shown in Table <ref> in Appendix <ref>. el studies often focus on establishing useful and meaningful communication between agents, making semantics a central feature <cit.>. It serves as a crucial tool for distinguishing actual information exchange from mere noise utterances <cit.>. Given the complexity of capturing the meaning of literal language in a single metric, several features have been introduced to measure the semantics of el. In particular, these features include grounding, compositionality, consistency, and generalization, as shown in Figure <ref>. Table <ref> provides an overview of the literature addressing the individual semantic features in el.
Grounding
A language is considered grounded when it is deeply intertwined with the environment, for example, when it is tightly bound to environmental concepts and objects <cit.>. Grounding is essential for the interoperability of individuals and is particularly important in nl communication, where meaningful interaction requires shared understanding <cit.>. While in theory, an el can establish a unique form of grounding using self-emerged concepts distinct from those in nl, deriving a useful metric for such a scenario proves challenging. This difficulty arises from the need to compare el to existing and comprehensible grounding principles typically found in nl <cit.>.
Compositionality
When a language exhibits compositionality, its components can be rearranged or replaced by conceptually equivalent words without changing the overall meaning <cit.>. Compositionality facilitates the construction of higher-level concepts, using conceptual foundations to enable efficient language expression <cit.>. For example, nl partition concepts such as objects and their attributes to allow compositional constructions <cit.>. As a result, we can describe variations of a single object using different words from the same semantic concept, such as ‘blue towel and ‘red towel for the object towel and the semantic concept of color. Similarly, we can attribute specific properties to different objects using the same phrase, as in ‘green towel and ‘green car. Ultimately, compositionality is beneficial for the learning process <cit.> and promotes efficient and rich language use, even in systems with limited memory capacity <cit.>.
Consistency
Merely having grounded words in a language does not necessarily guarantee its semantic quality. In addition, consistency is essential for a language to convey meaningful and practical information effectively <cit.>. If the words within a language lack consistency in their literal meanings, they will not facilitate effective communication. Therefore, even if a language is semantically grounded and compositional, its utility is compromised if the words exhibit inconsistent literal meanings <cit.>. While words can change their general meaning to fit the context, their literal meaning should remain consistent to keep their usefulness <cit.>.
Generalization
Generalization serves as a cornerstone of nl, allowing humans to communicate about topics ranging from simple to complex, broad to specific, and known to unknown, all with a relatively limited vocabulary <cit.>. A language that excels at generalization enables its users to navigate different levels of complexity, facilitating hierarchical descriptions of concepts and relationships <cit.>. Consequently, generalization and compositionality are closely related, as they both contribute to the flexibility and expressiveness of language <cit.>. This ability to generalize not only enriches communication but also underscores the adaptability and robustness of human language.
§.§.§ Pragmatics
The final dimension of el research is pragmatics. This field of study encompasses metrics that examine how language is employed in context, particularly in interactions, and how it conveys information <cit.>. By examining the pragmatics of the linguistic structure, we can ascertain whether el is itself useful and utilized effectively. While this assessment may be feasible based on rewards in a standard rl setting, integrating communication into such environments increases the complexity. This is because most setups do not separate the agent's environment interaction from its communication capabilities, thereby expanding the network's capacity, and making it difficult to attribute an increase in reward directly to el <cit.>.
As outlined in Table <ref> and depicted in Figure <ref>, five distinct features have been identified for which metrics have been proposed: predictability, efficiency, positive signaling, positive listening, and symmetry. These features are essential for assessing the constructive impact and utilization of el. Understanding how agents employ language is crucial in evaluating its effectiveness and overall benefit.
Predictability
Predictability is concerned with the assessment of the complexity of the context, including the action space within the environment. When actions exhibit less diversity, it becomes more feasible to coordinate without communication <cit.>. For instance, in a simple grid-based environment where agents have only two possible actions — moving left or right — agents can often achieve their objectives without the need for communication. In such a scenario, the limited action space reduces the necessity for el, as agents can predict each other's movements based on past behavior or simple rules. However, in a more complex environment where agents have multiple actions, such as navigating a maze with numerous paths and obstacles, the need for effective communication increases. Here, el can significantly enhance coordination by allowing agents to share information about their positions, plans, or discoveries, thus improving their overall performance in navigating the maze. Therefore, it is essential to compare the diversity of signaling and context attributes to evaluate the potential benefit of el.
Efficiency
Efficiency is a critical aspect considered whenever communication entails a cost. This is particularly true in the context of modeling the emergence of nl and the broader objective of employing el for hci. In el settings, the achievement of concise communication is contingent upon the presence of an opportunity cost <cit.>. Without such a cost, there is no incentive to communicate concisely, making el ineffective as an intermediary for hci. When communication is accompanied by a cost the necessity for efficiency in communication becomes paramount. In such scenarios, the objective is to minimize the cost while maximizing the effectiveness of communication within a given task.
Positive Signaling
The concept of positive signaling is concerned with the degree of alignment between the observations of the message producer and their communication output <cit.>. The objective is to guarantee the transmission of useful information, or at the very least, information that the speaker can discern through observation <cit.>. This assessment operates on the premise that all communication should be relevant to something observable by the speaker.
Positive Listening
Positive listening focuses on the role of the message receiver, seeking to quantify the usefulness and application of incoming information <cit.>. It seeks to quantify the impact and correspondence between the received message and subsequent actions taken. This evaluation operates under the assumption that the agent engages with the message only when it significantly influences the decision-making process, for example, in the form of chosen actions <cit.>.
Symmetry
Symmetry in el is defined as the consistency in language usage among participating agents <cit.>. This concept applies to marl settings where agents can assume multiple roles, such as message producer and message receiver. Symmetry plays a crucial role in achieving convergence on a shared and aligned el. For instance, if an agent employs language differently depending on whether it is sending or receiving messages so that words have varying meanings based on the assigned role the el setting is considered asymmetric. In such instances, rather than learning a collectively grounded language, agents develop individual protocols specific to their respective roles <cit.>.
§.§ Summary of the Taxonomy
Our proposed taxonomy systematically categorizes the key features of el systems, including communication settings, language games, language priors, and language characteristics. The latter is particularly detailed, with sub-characteristics and their features aligned with the major levels of linguistic structure, as previously illustrated in Figure <ref>. This comprehensive taxonomy enables a standardized comparison of approaches in the el literature, highlighting the opportunities and properties associated with individual options and topics in el research. Specifically, by applying this taxonomy, especially in terms of language characteristics, we can uncover the capabilities and potentials of various el approaches. This facilitates a more detailed, comparable, and insightful analysis of el.
§ METRICS
This section provides a comprehensive categorization and review of existing metrics used in EL research. The section is organized along the same categorization used in Section <ref>. Note that the categories of phonetics and phonology are excluded from this discussion, as these aspects are predetermined settings in the current el literature and thus not yet targeted by metrics.
We begin by introducing the notational system used for all metrics to ensure consistency and facilitate ease of use. We then describe the metrics within each category, detailing the individual metric and adapting it to our notation. For each metric, we provide references to both original sources and additional literature, if available, to enable further exploration beyond the scope of this work. Figure <ref> provides a visual summary of the existing metrics and their correspondence to the language characteristics. An extended version including all references for the individual metrics is provided in Figure <ref> in Appendix <ref>
octagon/.style=regular polygon,regular polygon sides=8,
§.§ Notation
Given the complexity and variability within the el field, it is crucial to establish a unified and coherent notation system. In this section we present a standardized mathematical notation designed to be consistent across the various aspects of el research, thereby facilitating clearer communication and comparison of results within the community. This approach aligns with our broader goal of advancing the field through a common taxonomy that supports the development of measurable and interpretable el. Throughout this section we focus on finite and discrete languages, although some of the definitions and metrics discussed here are also applicable to continuous languages. These languages offer a more straightforward mapping to nl, making them particularly relevant to the study of el systems.
§.§.§ Definition
In alignment with the semiotic cycle introduced in Section <ref>, our notation is organized into three interconnected spaces: setting, meaning, and language. The setting space encompasses the typical elements of rl, providing the foundational environment in which agents operate. The meaning space incorporates a representation learning endeavor, whereby sensory input is integrated with decision-relevant information to generate a coherent internal representation. Finally, the language space encompasses both the production and comprehension of discrete messages, encapsulating the communication process. These components, illustrated in Figure <ref>, will be introduced and explored in detail in the following paragraphs.
Setting
The overall setting, consisting of the environment, actions, goals, and other typical rl elements, is denoted by Ω. Let ξ denote the set of all entities in the system, with an individual entity represented as ξ_i ∈ξ. Each entity can assume specific roles, such as the sender (S) or receiver (R) in a communication scenario. An entity can assume several roles over the course of the entire communication scenario. However, for an individual message exchange, an entity assumes one specific role. We represent the role of an individual entity i by ξ_i,j∈ξ_i, where j specifies the role (e.g., j = S or j = R).
Entities interact with their environment ℰ through actions, denoted as a, which belong to the set of possible actions A, such that a ∈ A. The action taken by a specific entity ξ_i is represented as a_ξ_i. The state of the environment at any given time is denoted by s, which is an element of the state space S, so that s ∈ S. As the system progresses over time, denoted by discrete points in time [ 0, …, t ], the sequence of states and actions forms a trajectory τ, generally expressed as τ = { s^0, a^0, …, s^t, a^t }. It is important to note that the entities described here do not necessarily correspond to autonomous agents in the traditional sense; they could also represent ground truth models, human participants, or abstract constructs that lack the direct interaction capabilities typically associated with agents. Despite this distinction, for the sake of clarity and consistency, we will refer to these entities as agents in the following sections.
Given the importance of partial observability in el research <cit.>, it is essential to consider that agents only have access to their own observations, denoted as o_ξ, which are derived from the underlying state s. An individual observation o_ξ is an element of the collection of observations of an agent O_ξ, which is a subset of the observation space O, so that o_ξ∈ O_ξ⊆ O. In our framework, an observation o_ξ effectively replaces the *world model component from the traditional semiotic cycle, highlighting the localized and subjective nature of an agent's perception in partially observable environments.
Referential games (cf. Table <ref>) are frequently employed in el literature. They often operate on individual, static samples that are drawn from a corresponding dataset or distribution. In doing so, they differ from traditional rl setups that emphasize sequential decision-making and environmental interactions over time. In such cases, rather than speaking of a state s or an observation o, we use the term sample k, which is an element of the collection of all samples K, so that k ∈ K. The specific nature of a sample depends on the environment; for example, in an image-based sender-receiver game, the sample would be an image. Each sample is represented by its feature vector f, which belongs to the feature space F, so that f ∈ F. The feature vector corresponding to a specific sample k is denoted by f_k.
In el settings, the communicative goal g of an agent may differ from the (reinforcement) learning task goal. In addition, depending on the game, the sender and receiver may have distinct goals. These are important factors to consider when evaluating the communicative behavior.
Meaning
In our notation, the meaning space, denoted by Φ, serves as the critical intermediary between the setting space and the language space. The meaning space represents the semantic connections derived from the provided information. Each element within this space, represented by a specific meaning vector φ∈Φ, captures the essence of concepts or objects as understood by the agent. These meaning vectors are critical to the processes of language comprehension and production, as well as to the processes of conceptualization and interpretation, that allow an agent to effectively use inputs and generate outputs in the setting space (cf. Figure <ref>).
The representation mappings Ψ within the meaning space are agent-specific and referred to as Ψ_con and Ψ_int, given in Equation <ref>. These mappings enable the transition between an arbitrary space χ, such as sensory inputs or raw data, and the meaning space, where the data acquires semantic meaning. Ψ_con refers to the conceptualization process that transforms raw, uninterpreted data into meaningful representations within Φ. Conversely, Ψ_int denotes the interpretation process that translates these meaning vectors back into the arbitrary space that can represent any external or internal stimuli. These mappings are critical to the agent's ability to both understand its environment and communicate effectively within it through language that is both grounded in and reflective of the underlying reality with which the agents interact.
Ψ =
Ψ_con : χ→Φ
Ψ_int : Φ→χ
Language
In our proposed framework, a message m belongs to the message space M, such that m ∈ M. Each message encapsulates semantic and pragmatic content, serving as a vehicle for meaningful communication between agents. A message is composed of individual words w, which are elements of a finite collection W, commonly referred to as vocabulary, lexicon, or dictionary. In this context, each word is considered a semantic unit that carries (intrinsic) meaning. At the lowest level, a word is composed of characters or symbols υ∈Υ. These atomic characters, while essential for constructing words, do not independently carry semantic meaning. Instead, they function as elements of a finite set Υ from which any number of meaningful words can be composed.
Building on the formalization from <cit.>, we describe the message space M_ξ of an agent ξ, which represents the agent's language capabilities from a compositional standpoint. The message space M_ξ⊆ M is composed of a set of messages or strings m_ξ, each constructed from words within W_ξ, as shown in Equation <ref>. Further, each w_ξ∈ m_ξ is composed of a set of characters υ_ξ∈Υ_ξ⊆Υ utilized by the agent, given by Equation <ref>.
m_ξ ⊆ M_ξ
=
{
w_ξ|
w_ξ∈ W_ξ⊆ W
| w_ξ|≥ 0
}
w_ξ ⊆ W_ξ
=
{υ_ξ|υ_ξ∈Υ_ξ⊆Υ|υ_ξ|≥ 0
}
A language ℒ encompasses a set of mapping functions that facilitate the transformation between the message space M and other arbitrary spaces χ. These mappings are agent-specific and enable both the production of messages, denoted as ℒ_prod, and the comprehension of messages, denoted as ℒ_comp. This framework aligns with the linguistic level description of the semiotic cycle presented in Figure <ref>. Within this context, we formally define a language ℒ in Equation <ref>.
ℒ =
ℒ_prod : χ→ M
ℒ_comp : M →χ
These emerging mapping functions are not necessarily injective, meaning that distinct inputs from the space χ could potentially be mapped to an identical message within M <cit.>. Conversely, distinct messages within M could also be mapped to the same value in χ. While this non-injectivity adds a layer of complexity to the expressiveness of the language, it also introduces a degree of flexibility that can be advantageous in certain communication scenarios. For example, it allows for synonymy (where different messages convey the same meaning), which can provide redundancy and flexibility in communication, and homonymy (where the same message may have multiple interpretations depending on context), which can facilitate more nuanced and context-dependent communication. These natural phenomena, though challenging, are well-documented in nl and are of particular interest in the design and evaluation of artificial communication systems <cit.>. However, managing these complexities effectively is crucial, as unchecked non-injectivity could lead to ambiguities that complicate communication rather than simplifying it.
§.§.§ Important Notes
The notation presented here is designed to be comprehensible and thorough; however, it may not be directly applicable in all cases to existing works, as these employ different wordings. a lot of existing work uses the term *word, which in our notation describes element carrying semantic meaning, and *symbols, which in our notation serve as fundamental building blocks without inherent semantic meaning, interchangeably <cit.>. Furthermore, a considerable proportion of existing literature utilizes a multitude of different definitions for concepts such as *meaning space <cit.>, *ground-truth oracle <cit.>, and other pivotal elements. In our endeavor to establish a unified framework, we have occasionally adopted terminology that differs from that used by the original authors. While this may initially lead to some confusion, we intend to mitigate this by providing transparent and detailed descriptions. Our objective is a consistent application of these concepts across the field of el research, thereby promoting coherence between different studies. The following sections attempt to align existing research and metrics with the proposed framework. While this alignment has required some linguistic adjustments to existing terminology and procedures, it is important to note that no substantive changes have been made to the underlying methodologies.
§.§ Morphology
Morphological metrics aim to evaluate the structure and formation of words within a language, as well as the richness and diversity of its vocabulary. The identified metrics focus on aspects such as language compression, redundancy, and ambiguity. The morphology of a language significantly influences the complexity of language based tasks <cit.>. Therefore, the evaluation of morphological features is a crucial component for understanding and evaluating the effectiveness of el.
§.§.§ Compression
The concept of compression within a language refers to its ability to efficiently combine and reuse a limited set of characters to generate a large collection of words or meanings <cit.>. Several metrics can be used to quantify compression in el. A straightforward approach for these metrics is to use statistical measures, as shown in the following paragraphs. These metrics provide insight into the efficiency of the language, indicating how well it minimizes redundancy while maximizing expressiveness. Efficient compression is a key indicator of a communication system, especially in scenarios where resources (such as memory or bandwidth) are constrained.
Distinct Appearances
The metric of distinct appearances (DA) was proposed by Loreto et al. <cit.>. It is formalized in Equation <ref> and designed to quantify the capacity of a communication system to name a diverse set of objects or categories using its available symbols <cit.>. Specifically, this metric evaluates how frequently characters υ∈Υ are reused across different words or names w within the lexicon W. By examining the set W_υ, which includes all words containing a given character υ, we can assess the system's flexibility in recombining basic units to generate a broad spectrum of expressions.
A high DA value, approaching 1, indicates that the characters are highly versatile and reused extensively across different words, thereby reflecting a flexible communication system. Conversely, a low DA value suggests limited reuse of characters, which may imply constraints in the system's expressiveness or a less efficient use of its symbolic resources. This metric provides insights into how efficiently a system can balance the trade-off between a compact character set and the richness of its vocabulary.
DA
= ∑_υ∈Υ( | W_υ| - 1 )/( | W | - 1 ) |Υ| with
W_υ = { w |υ∈ w w ∈ W }
Average Message Length
Another way to assess the degree of compression achieved by agents in their communication is to analyze the average message length <cit.>. This metric, which appears for the first time in Choi et al. <cit.>, captures the typical length of generated messages and provides insight into the efficiency of the el in terms of information density <cit.>. By tracking the average number of words in the messages, we can quantify how effectively the agents compress their language. This metric is computed at the word level, meaning each word within a message is counted. The average message length | m | for a set of messages M is calculated as follows:
| m | = 1/| M |∑_m ∈ M| m | with | m | = ∑_w ∈ m 1
Active Words
The active words metric, introduced by Lazaridou et al. <cit.>, complements the average message length by quantifying the diversity of word usage within the vocabulary <cit.>. Specifically, this metric measures the variety and utilization of distinct words in a communication system. A high number of active words indicates a diverse vocabulary, reflecting a more complex or redundant el. Conversely, a lower number suggests that the communication system relies on a limited set of words, which may indicate a more efficient and compressed language with less synonyms <cit.>. This metric is widely used in the literature <cit.>. Mathematically, the active word value AW for an agent ξ_i can be defined as the size of the collection of words actively used by the agent W_ξ_i, as given in Equation <ref>. In multi-agent setups, this metric can be averaged across all agents to provide a collective measure of vocabulary diversity within the joint system.
AW( ξ_i) = | W_ξ_i| with W_ξ_i⊂ W
§.§.§ Redundancy or Ambiguity
Redundancy in language occurs when multiple words are associated with the same meaning, providing alternative expressions for the same concept. Conversely, ambiguity occurs when a single word is associated with multiple meanings, creating the potential for different interpretations depending on the context. Both redundancy and ambiguity are characteristic features of nl, reflecting the complexity and flexibility inherent in human communication <cit.>.
Perplexity
Perplexity, introduced by Havrylov and Titov <cit.>, measures how often a word was used in a message to describe the same object <cit.>. A lower perplexity shows that the same words are consistently used to describe the same objects. <cit.>.
Mathematically, P ( w | φ) represents the probability or score of a word for a specific concept or meaning, e.g., derived from an affine transformation of the sender's hidden state <cit.> or from a ground truth label <cit.>. Thus, perplexity, given in Equation <ref>, quantifies the predictability of word usage, with lower values reflecting a less redundant communication system. It is usually calculated based on a sampled set of meanings Φ_test for which the word probability can be generated.
Ppl
= exp(
- ∑_w ∈ W[ P ( w | φ) ·log( P ( w | φ) ) ]
)
∀φ∈Φ_test⊆Φ
Singular Value Decomposition
Another approach to quantitatively assess the redundancy of the vocabulary used in a communication system is outlined by Lazaridou et al. <cit.>. This method involves constructing a matrix where the rows correspond to distinct meanings, the columns represent individual words, and the matrix entries indicate the frequency with which each word is used for a given meaning. The rows are thus constructed based on a predefined ground truth classification. By applying Singular Value Decomposition (SVD) to this matrix, we can examine the dimensionality of the underlying communication strategy. If the communication system relies on a limited set of highly synonymous words, we would expect the SVD to reveal a low-dimensional structure. Conversely, a higher-dimensional decomposition would indicate a more diverse use of vocabulary, reflecting a potentially less synonymous and more redundant language.
Message Distinctness
Message distinctness evaluates the linguistic representation of distinct features and thus aims to quantify ambiguity <cit.>. The metric, first suggested in Lazaridou et al. <cit.> and Choi et al. <cit.>, quantifies the diversity of messages generated by the agent by assessing how well it differentiates between various inputs. Specifically, message distinctness MD is calculated as the ratio of the number of unique messages generated within a batch (cf. <ref>) to the batch size (cf. <ref>). A higher message distinctness indicates less ambiguity of the language.
M_unique =
{
m_i| m_i∈ M_test m_i≠ m_j ∀ m_j∈ M_test, i ≠ j
}
MD
= | M_unique|/| M_test|
§.§ Syntax
Despite the significance of structural properties in el, particularly regarding their syntax and its relation to semantics, research in this area remains limited <cit.>. Recurrent syntactical patterns are central to the robustness and versatility of nl <cit.>. Exploring these properties within the context of el could provide valuable insights into their development and alignment with nl.
Syntax Tree
Van der Wal et al. <cit.> introduced unsupervised grammar induction (UGI) techniques for syntax analysis in el research, describing a two-stage approach to deriving grammar and syntax. The first phase involves the induction of unlabeled constituent tree structures, explained below, and the labeling of these structures. The second phase extracts a probabilistic context-free grammar (PCFG) from the labeled data. Two methods were compared for constituency structure induction: the Common Cover Link (CCL), a pre-neural statistical parser that makes assumptions about nl such as the Zipfian distribution, and the Deep Inside-Outside Recursive Auto-encoder (DIORA), a neural parser. For the labeling process, Van der Wal et al. <cit.> used Bayesian Model Merging (BMM), to consolidate probabilistic models to label the induced syntax trees.
In syntax trees, the structure of the language is represented in a hierarchical manner, where nodes represent grammatical constructs (such as sentences, phrases, and words) and edges represent the rules or relationships that connect these constructs. Analysis of these trees helps to understand how well grammar induction methods match the true syntactic nature of el. There are several metrics associated with syntax trees that are used to measure the complexity of the grammar <cit.>. First, tree depth measures the maximum distance from the root of the tree to its deepest leaf. Tree depth reflects the hierarchical complexity of the grammar. Shallow trees indicate a simpler grammar, while deeper trees suggest a more complex syntactic structure. Second, the number of unique preterminal groups is a metric that counts the different sets of preterminals (intermediate symbols) that appear to the right of production rules in a grammar. A larger number of unique preterminal groups indicates a richer and more diverse syntactic organization, suggesting that the grammar can generate a greater variety of structures.
Categorical Grammar Induction
Ueda et al. <cit.> proposed a novel approach for analyzing the syntactic structure of el using Categorial Grammar Induction (CGI). This technique focuses on deriving categorial grammars from message-meaning pairs, making it particularly well-suited for simple referential or signaling games.
In this method, derivation trees are constructed using lexical entries and application rules, mapping messages to atomic syntactical representations. Given that multiple derivations might exist for a single message, the most likely derivation [is selected] using a log-linear model <cit.>. CGI is particularly valuable for assessing the syntactic structure of an el using the generated trees.
§.§ Semantics
Capturing the semantic properties of el is inherently complex, making it difficult to encapsulate nuances in a single metric. To address this, several key features have been introduced, including grounding, compositionality, consistency, and generalization. These are important because agents can develop representations that are well aligned with task performance but fail to capture the underlying conceptual properties <cit.>. Thus, an el might enable successful task completion without truly encoding semantic meaning. Therefore, evaluating these semantic features is essential to evaluate the value and validity of the el.
§.§.§ Grounding
Grounding is essential for the development of meaning and for systematic generalization to novel combinations of concepts <cit.>. It forms the basis of human-agent communication <cit.>, and without proper grounding, meaningful communication cannot be effectively learned <cit.>. However, in general dialog settings, grounding does not emerge naturally without specific regularization techniques <cit.>. The grounding problem, which concerns how words acquire semantic meaning, is central to this challenge <cit.>.
Thus, grounding metrics are vital as they largely define the usability of a language. However, a significant limitation of these metrics is their reliance on some form of oracle or a nl-grounded precursor <cit.>.
Divergence
Havrylov and Titov <cit.> proposed a weak form of grounding. Weak grounding means that the same word can correspond to completely different concepts in the induced el and nl. They used the Kullback-Leibler divergence D_KL (cf. Equation <ref>) of an el and a nl distribution to ensure that the statistical properties of el messages resemble those of nl. They introduced this approach as an indirect supervision measure during training but it can also serve as a metric for evaluating the alignment between el and nl. For a given sample k and the message m_ξ_S produced by the sender, the grounding divergence G_Div calculation is shown in Equation <ref>. Since the true nl distribution P_NL( m_ξ_S) is inaccessible, a language model is trained to approximate this distribution. The KL divergence yields a value in the range [ 0, ∞), with lower values indicating a closer resemblance between the generated messages and nl.
D_KL(P||Q)= ∑_x P(x) log( P(x)/Q(x))
G_Div
= D_KL(
P ( m_ξ_S| k ) ∥ P_NL( m_ξ_S)
)
Purity
Purity, proposed by Lazaridou et al. <cit.>, is a metric used to assess the alignment between predefined semantic categories and those observed in an el. It measures the effectiveness of a communication system in consistently mapping signals or words to specific concepts <cit.>. Thus, purity quantifies the extent to which the clustering of words reflects meaningful and coherent categories, as determined by ground-truth labels. To assess purity, we first form clusters by grouping samples based on the most frequently activated words to describe them. The quality of these clusters is then evaluated using the purity metric, which calculates the proportion of labels in each cluster that match the majority category of that cluster. A higher purity score indicates that the sender is producing words that are semantically aligned with predefined categories, as opposed to arbitrary or agnostic symbol usage, as demonstrated in <cit.>. However, this metric requires the existence of predefined ground-truth labels, limiting its applicability in scenarios where such labels are unavailable or ambiguous.
Formally, given a set of clusters { C_k} where each cluster of samples C_k has a corresponding majority ground-truth label c_k, the purity of a cluster C_k is defined as:
purity( C_k) = |{ w_c| w_c∈ C_k w_c = c_k}|/|{ w | w ∈ C_k}|
Here, { w | w ∈ C_k} is the collection of all words used to describe the samples in the cluster and { w_c| w_c∈ C_k w_c = c_k} is the collection of words within the cluster that fit the majority label of that cluster. The purity metric ranges from 0 to 1, where a value of 1 indicates perfect alignment with the ground-truth categories.
Representational Similarity Analysis
Representational Similarity Analysis (RSA) emerged in the field of neuroscience and was proposed by Kriegeskorte et al. <cit.>. It has since been adapted for the evaluation of the similarity of neural representations across different modalities, including computational models and brain activity patterns. This technique has been effectively applied in el research <cit.>, where the focus shifts from analyzing neural activity to exploring the structural relationships between different embedding spaces. For example, RSA has been employed to compare the similarity of embedding space structures between input, sender, and receiver in a referential game <cit.>. By calculating pairwise cosine similarities within these spaces and then computing the Spearman correlation between the resulting similarity vectors, we can calculate an RSA score that measures the global agreement between these spaces, independent of their dimensionality. The agreement of an agent's embedding space with the input embedding space as such provides an intuitive measure of the grounding of the el.
This approach offers the advantage of being applicable to heterogeneous agents and arbitrary input spaces. In our framework, this corresponds to any ground truth structured embedding e ( o_ξ) of an agent's observation o_ξ and its internal meaning representation φ_ξ. Nevertheless, a significant limitation is the necessity for an embedding, which provides a structured description of the observation oriented towards a ground truth, for example, based on a nl model. Furthermore, RSA is not directly applicable to the language itself, particularly for discrete languages. Instead, it operates at the level of earlier meaning representations. Despite this, RSA provides valuable insights into whether the el can be grounded by evaluating the grounding of the meaning space.
The methodology of <cit.> utilizes a collection K of samples, comprising k observations, images, or feature vectors, to compute representational similarities between input and meaning space. First, we generate input or ground truth embeddings e_GT = e ( o_ξ) using an appropriate model and generate the corresponding internal representations φ_ξ from the appropriate architecture part of agent ξ. Next, we compute pairwise similarities within each embedding space, denoted as S_e for the ground truth embeddings and S_φ for the agent representations, typically using cosine similarity S_cos as defined in Equation <ref>. This yields a similarity vector of size N · (N - 1) for each embedding space. The vectors are converted into rank vectors R( S_e) and R( S_φ). Finally, we calculate the Spearman rank correlation ρ <cit.> between the ranked similarity vectors, using the covariance cov and standard deviation σ, to assess the alignment between the input and agent representation spaces (cf. Equation <ref>). The correlation coefficient ρ takes on values between -1 and 1. A high absolute value of this coefficient indicates a strong alignment between the two variables.
S_e
= S_cos(
e_i, φ_i
)
∀ i, j ∈ k, i ≠ j
with S_cos( a, b )
= a · b/|| a ||·|| b ||
ρ = cov( R( S_e), R( S_φ) )/σ_R( S_e)σ_R( S_φ)
§.§.§ Compositionality
In el research, achieving compositionality often requires deliberate guidance, as it does not naturally arise without specific interventions <cit.>. For instance, training models on diverse tasks and varying environmental configurations can facilitate the development of compositional structures. This occurs as atomic concepts, learned in simpler contexts, are recombined in more complex scenarios <cit.>. When a language is truly compositional, its components can be systematically rearranged or substituted with conceptually equivalent components without altering the overall meaning <cit.>.
The formalization of compositionality can be framed using the comprehension ℒ_comp or production ℒ_prod function that map expressions from a language ℒ to a space of meanings Φ or vice versa <cit.>. For example, the function ℒ_comp : ℒ→Φ reflects all the things that the language can denote <cit.>. A language is compositional if these functions act as a homomorphism, e.g., there exist binary operators ∘ on ℒ_comp and × on Φ such that for any expression composed of two constituents m_1 and m_2 in ℒ, the following condition holds:
ℒ_comp(m_1 ∘ m_2) = ℒ_comp(m_1) ×ℒ_comp(m_2)
Topographic similarity
Topographic similarity (topsim), originally proposed by Brighton and Kirby <cit.> and first applied to el by Lazaridou et al. <cit.>, is a metric designed to quantify the structural alignment between the internal representations of meanings and the corresponding generated messages in a communication system. Unlike RSA (cf. Section <ref>), which compares the meaning space against a ground truth, topsim focuses on the internal alignment within an agent's meaning and message spaces. The intuition behind this measure is that semantically similar objects should have similar messages <cit.>. It has become a widely used metric in the study of el, as depicted in Figure <ref> in Appendix <ref>).
To compute topsim, we start by sampling k meaning representations denoted by φ, typically embedded feature vectors, from the meaning space Φ. Let ϕ = {φ_1, … , φ_k} denote the collection of these samples, with φ∈Φ. Using the sender's policy π_ξ_S^M, we generate corresponding messages m_i = π_ξ_S^M(φ_i) for each sample φ_i∈ϕ. We then compute distances within the meaning and language spaces using suitable distance functions for language Δ_ℒ and meaning Δ_Φ space.
The choice of distance function Δ depends on the nature of the spaces involved. For discrete communication, typical choices include Hamming <cit.> or Levenshtein <cit.> distance, whereas for continuous spaces, cosine or Euclidean distance are often used <cit.>. Finally, we compute the Spearman rank correlation ρ <cit.> using the ranked distances to get the topsim value of the language:
ρ =
cov(
R( Δ_ℒ( m_i) ),
R( Δ_Φ( φ_i) )
)
/σ_R( Δ_ℒ( m_i) )σ_R( Δ_Φ( φ_i) ) ∀φ_i∈ϕ
Positional Disentanglement
Positional Disentanglement (posdis) was introduced by Chaabouni et al. <cit.> as a metric to evaluate the extent to which words in specific positions within a message uniquely correspond to particular attributes of the input. This metric operates on an order-dependent strategy, which is normalized by the message length and calculated as the ratio of mutual information to entropy. The underlying assumption is that the language leverages positional information to disambiguate words, such that each position of the message should only be informative about a single attribute <cit.>. Thus, posdis assumes a message whose length equals the number of attributes in the input object, and where each message token, in a specific position, represents a single attribute <cit.>. This order-dependence is a characteristic feature of nl structures and is essential for the emergence of sophisticated syntactic patterns <cit.>.
The metric begins by identifying each word w_p at position p in a message m, where f represents the feature vector of the ground truth. The mutual information I(w_p, f_i) between w_p and a specific feature f_i is calculated to determine how informative the position p is about the attribute f_i (cf. Equation <ref>). The two most informative features f_i^1 and f_i^2 are then identified based on the mutual information value (cf. Equation <ref>). To quantify positional disentanglement, the mutual information difference between the two most informative features is normalized by the entropy H(w_p) of the word at position p, as defined in Equation <ref> and Equation <ref>.
Finally, the overall posdis value for a language is calculated by averaging the posdis scores across all positions in the messages within the dataset. For messages of varying lengths, the posdis score is normalized by the average message length | m |, as given in Equation <ref>.
I ( w_p, f_i)
= ∑_ w_p∈ m ∑_ f_i∈ f P ( w_p, f_i)
log( P ( w_p, f_i) / P (w_p) P(f_i) ) )
f_i^1 = _ f_i∈ f
I ( w_p, f_i)
and
f_i^2 = _ f_i∈ f f_i≠ f_i^1
I ( w_p, f_i)
H ( w_p) = - ∑_ w_p∈ m P ( w_p) log( P ( w_p) )
posdis_p
= I ( w_p, f_i^1) - I ( w_p, f_i^2) / H ( w_p)
posdis = 1 /| m |∑_pposdis_p with | m | = 1 /| M |∑_m ∈ M| m |
Bag of Symbols Disentanglement
Bag of Symbols Disentanglement (bosdis) is a metric introduced by Chaabouni et al. <cit.> to assess the degree to which words in a language unambiguously correspond to different input elements, regardless of their position within a message. While positional disentanglement (posdis) relies on the assumption that positional information is crucial for disambiguating words (cf. Section <ref>), bosdis relaxes this assumption and captures the intuition behind a permutation-invariant language. In such a language, the order of words is irrelevant, and only the frequency of words carries meaning <cit.>. The metric normalizes the mutual information between symbols and input features by the entropy summed over the entire vocabulary.
This approach maintains the requirement that each symbol uniquely refers to a distinct meaning, but shifts the focus to symbol counts as the primary informative element.
I ( w, f_i) = ∑_w ∈ m∑_f_i∈ f P ( w, f_i)
log( P ( w, f_i) / P ( w ) P ( f_i) )
f_i^1 = _ f_i∈ f
I ( w, f_i)
and
f_i^2 = _ f_i∈ f f_i≠ f_i^1
I ( w, f_i)
H ( w ) = - ∑_ w ∈ m P ( w ) log( P ( w ) )
bosdis_w
= I ( w, f_i^1) - I ( w, f_i^2) / H ( w )
bosdis = 1 /| W |∑_ w ∈ W bosdis_w
Tree Reconstruct Error
Tree Reconstruct Error (TRE) assumes prior knowledge of the compositional structure within the input data, enabling the construction of tree-structured derivations <cit.>. As defined by Andreas <cit.>, a language is considered compositional if it functions as a homomorphism from inputs to their representations. The compositionality of a language should be evaluated by identifying representations that allow an explicitly compositional language to closely approximate the true underlying structure <cit.>. One metric for this assessment is TRE, which quantifies the discrepancy between a compositional approximation and the actual structure, using a composition function and a distance metric. A TRE value of zero indicates perfect reproduction of compositionality.
The compositional nature of a sender's language is affirmed if there exists an assignment of representations to predefined primitives (e.g., categories, concepts, or words) such that for each input, the composition of primitive representations according to the oracle's derivation precisely reproduces the sender's prediction <cit.>. TRE specifically measures the accuracy with which a given communication protocol can be reconstructed while adhering to the compositional structure of the derivation or embedding of the input e ∈ E <cit.>.
One of the key advantages of the TRE framework is its flexibility across different settings, whether discrete or continuous. It allows for various choices of compositionality functions, distance metrics, and other parameters. However, this flexibility comes with challenges, including the requirement for an oracle-provided ground truth and the necessity of pre-trained continuous embeddings.
It is defined in a way that allows the choice of the distance metric δ and the compositionality function ∘ to be determined by the evaluator <cit.>. When the exact form of the compositionality function is not known a priori, it is common to define ∘ with free parameters, as suggested by Andreas <cit.>, treating these parameters as part of the learned model and optimizing them jointly with the other parameters η. However, care must be taken when learning the compositional function to avoid degenerate solutions <cit.>.
Given a data sample k from the dataset K (k ∈ K) and a corresponding message m from the set of all possible messages M (m ∈ M), TRE requires a distance function δ and learnable parameters η. Additionally, it employs a compositionality function ∘ and pre-trained embeddings of ground truth, denoted by e ∈ E, which can be obtained using models like word2vec.
The functions involved in the TRE calculation are as follows:
* Pre-trained ground truth oracle (e.g., word2vec): ℰ : K → E
* Learned language speaker: ξ_S : K → M
* Learnable approximation function for TRE: f_η : E → M
In the discrete message setting, which is the focus here, a discrete distance metric such as L_1 is typically chosen, along with a compositional function ∘ defined by a weighted linear combination <cit.>:
m_1∘ m_2 = A m_1 + B m_2 with η = { A , B }
To compute the TRE, an optimized approximation function f_η is required. This function must satisfy two key properties: embedding consistency, meaning that the learned parameters η are specific to an embedding, and compositionality, which ensures that the function behaves according to:
f_η( e_i) = η_i and f_η( ⟨ e_i, e_j⟩) = f_η( e_i) ∘f_η( e_j)
The optimization process involves minimizing the distance between the output of the learned language speaker ξ_S(k_i) and the approximation function f_η(e_i), based on the ground truth:
η^∗ = _η∑_iδ( ξ_S( k_i),
f_η( e_i) )
with ℰ( k_i) = e_i
With the optimized parameters η^∗, TRE can be calculated at two levels: the datum level, which assesses individual instances:
TRE(k_i) = δ( ξ_S(k_i) , f_η^∗(e_i) )
with ℰ(k_i) = e_i
and the dataset level, which measures the overall communication performance across the dataset:
TRE(K) = 1/|K|∑_k ∈ KTRE(k)
Conflict Count
Conflict count, introduced by Kuciński et al. <cit.>, is designed to quantify the extent to which the assignment of features to words in a language deviates from the word's principal meaning. This metric is particularly useful in scenarios where the language employs synonyms, as it accounts for the possibility of multiple words referring to the same concept.
The conflict count metric operates under the assumption that the number of concepts or features f_i given in a feature vector f of a sample k in the collection of samples K is equal to the message length | m |, and that there exists a one-to-one mapping between a concept f_i ∈ f and a word w ∈ W. The metric counts how frequently this one-to-one mapping is violated, with a value of 0 indicating no conflicts and, therefore, high compositionality. An advantage of this metric is its ability to accommodate redundancy in the language. However, it also has limitations, such as the assumption that the number of features or attributes equals the message length, i.e., | f | = | m |. Additionally, because conflict count assumes the number of concepts in a derivation to be equal to the message length, it becomes undefined for languages or protocols that violate this assumption, such as those involving negation or context-sensitive constructions presented in <cit.>.
The primary objective of conflict count is to quantify the number of times the mapping from a word w to its principal meaning φ_w is violated. This requires the assumption that a mapping α exists from the position p of word w in message m to an individual feature in feature vector f, such that:
α =
{
1, …, | m |}→{
1, …, | f |}
In this framework, the meaning of a word, denoted by φ_w, is determined by both the word w itself and its position p within the message. This meaning corresponds to a specific instance j of a particular feature i within the feature vector f, such that f_i,j = φ(w,p).
The process of calculating the conflict count begins by identifying the principal meaning of each word-position pair:
φ(w,p:α) = _f_i,jcount(w,p,f_i,j:α)
∀ f_i,j∈ f
using the count function:
count(w,p,f_i,j:α)
= ∑_k ∈ K|{
w | w ∈ m ( k ) pos_m( w ) = p f_i,j∈ k
}|
where m ( k ) is the message produced for sample k and pos_m( w ) computes the position of word w in message m.
Finally, the conflict count value conf is determined by finding the mapping α that minimizes the score:
conf = _α∑_w,pscore( w,p:α)
where the score function is defined as:
score( w,p:α) = ∑_f_i,j≠φ( w , p )count( w,p,f_i,j:α)
§.§.§ Consistency
For a language to be effective, the meaning of each word must be consistent across different contexts. Inconsistent word meanings can render a language practically useless, even if the language is semantically grounded and exhibits compositional properties <cit.>. In dialogue settings, particularly in the absence of explicit regularization mechanisms, words often fail to maintain consistent groundings across different instances, leading to ambiguity and reduced communicative effectiveness <cit.>. Thus, it is crucial to carefully monitor this language characteristic in el settings.
Mutual Information
Consistency in language can be quantitatively assessed by examining the mutual information between messages and their corresponding input features. Ideally, a consistent language will exhibit a high degree of overlap between messages and features, leading to a high mutual information value, indicating strong correspondence <cit.>.
Formally, mutual information between two random variables, say X and Y, with joint distribution P_(X,Y) and marginal distributions P_X and P_Y, is defined as the Kullback–Leibler divergence D_KL (see Equation <ref>) between the joint distribution and the product of the marginals:
I(X;Y) = D_KL( P_(X,Y)∥ P_X⊗ P_Y)
In the context of discrete communication, where both messages and sample features are represented as discrete variables, the mutual information between the set of messages M and the set of features F is computed using a double summation over all possible message-feature pairs:
I( M ; F )
= ∑_m ∈ M∑_f ∈ F
P_( M , F ) ( m , f )
log(
P_( M , F ) ( m , f ) /P_M( m ) P_F( f ))
where P_( M , F ) ( m , f ) is the joint probability of message m and feature f, and P_M( m ) and P_F( f ) are the marginal probabilities of m and f, respectively.
Correlation
Various studies employ different statistical techniques to measure consistency using correlations <cit.>.
For example, consistency within a language system can be quantified by analyzing the variability of words produced for a given sample k. Specifically, given the set of all words representing k, a heatmap is generated using the mean of this set. The sharpness of the heatmap is then quantified by computing the Variance of the Laplacian (VoL). The average consistency score is obtained by dividing the VoL of the heatmap by the count of all samples considered, as introduced by Verma and Dhar <cit.>.
Additionally, Mul et al. <cit.> explored the correlation between messages and actions as well as between messages and salient properties of the environment. The analysis reveals correlations by examining the conditional probability distribution of actions given the messages produced by a pretrained or fine-tuned receiver. This distribution, denoted as P (a | m ), was visualized using bin bar plots to highlight the prominent correlations <cit.>. Similarly, the relationship between input and messages is analyzed by examining the conditional distribution of a pretrained sender's messages given the observational input, represented as P ( m | o ) <cit.>.
Coherence
Coherence is often assessed through context independence, a metric initially proposed by Bogin et al. <cit.>. Context independence examines whether words within a language maintain consistent semantics across varying contexts. However, context independence may be considered restrictive, particularly in languages where synonyms are prevalent <cit.>. The context independence metric aims to measure the alignment between words w ∈ W and features f ∈ F of the input samples by analyzing their probabilistic associations. Specifically, P (w | f ) denotes the probability that a word w is used when a feature f is present, while P ( f | w ) represents the probability that a feature f appears when a word w is used. For each feature f, we identify the word w_f most frequently associated with it by maximizing P ( f | w ):
w_f_w P ( f | w )
The context independence or coherence metric CI is then computed as the average product of these probabilities across all features:
CI( w_f , f )
= 1/| F |∑_f ∈ F P ( w_f| f ) P ( f | w_f)
This metric ranges from 0 to 1, with 1 indicating perfect alignment, meaning that each word retains its meaning consistently across different contexts and is thus used coherently.
Entropy
Entropy metrics are instrumental in analyzing the variability and predictability within linguistic systems. The most fundamental use of entropy involves marginal probabilities, which capture the variability in the number of words in a language <cit.>. More advanced applications of entropy focus on sender language entropy, which examines the conditional entropy of messages given features and vice versa <cit.>. Specifically, low conditional entropy H ( M | F ) indicates that a unique message is used for a specific feature, whereas high H ( M | F ) reflects the generation of synonyms for the same feature <cit.>.
Recent approaches further extend this analysis by combining conditional entropies <cit.>. For example, H ( M | F ) quantifies the uncertainty remaining about messages after knowing the concepts, while H ( F | M ) measures the uncertainty about concepts given the messages. A negative correlation between these measures and agent performance is expected <cit.>. However, a notable limitation of these entropy-based methods is that they focus on complete messages rather than individual words, which can limit the evaluation of more complex languages.
For example, Ohmer et al. <cit.> provide the following comprehensive evaluation approach.
First, the conditional entropy of messages given features H (M | F ), see Equation <ref>, and H (F | M ) are calculated. Additionally, the marginal entropies are calculated using Equation <ref>, where X represents either messages M or features F.
H (M | F ) = - ∑_m ∈ M∑_f ∈ F P ( f , m )
log(
P ( f , m ) / P ( f ) )
H ( X ) = - ∑_x ∈ X P ( x ) log( P ( x ) )
Using these entropies, consistency, see Equation <ref>, measures how much uncertainty about the message is reduced when the feature is known, with lower values indicating more consistent message usage. effectiveness, on the other hand, see Equation <ref>, evaluates the reduction in uncertainty about the feature when the message is known, with lower values reflecting more unique messages for individual features.
consistency(F, M) = 1 - H(M | F)/H(M)
effectiveness(F, M) = 1 - H(F | M)/H(F)
Finally, the normalized mutual information NI provides a combined score:
NI(F, M) = H(M) - H(M | F)/0.5 · (H(F) + H(M))
A high NI score indicates a strong predictive relationship between messages and features, reflecting high consistency.
Similarity
The Jaccard similarity coefficient is a another metric for evaluating the consistency of language usage among agents <cit.>. It quantifies the similarity between two sets by comparing the size of their intersection to the size of their union <cit.>. To measure language consistency, the Jaccard similarity is computed by sampling messages for each input and averaging the similarity scores across the population <cit.>. This approach reflects how consistently words are used across different messages. Specifically, Jaccard similarity J(M_ξ_i, M_ξ_j) is defined in Equation <ref>, where M_ξ_i and M_ξ_j represent sets of messages generated by different agents based on the same input. The similarity ranges from 0 to 1, with 1 indicating complete overlap and thus perfect similarity.
In practice, Jaccard similarity helps to assess the coherence of languages emerging from agent-based systems. For instance, in referential game experiments, high perplexity (cf. Section <ref>) and low Jaccard similarity have been observed, suggesting that agents assign unique but incoherent strings to object types to gain an advantage in the game without producing a consistent language <cit.>. However, Jaccard similarity is only applicable to scenarios where multiple agents generate messages about the same set of objects. Thus, its application is limited to cases where the goal is to compare the overlap of message sets between agents attempting to convey similar meanings.
J(M_ξ_i, M_ξ_j)
= | M_ξ_i∩ M_ξ_j|/| M_ξ_i∪ M_ξ_j|
= | M_ξ_i∩ M_ξ_j|/| M_ξ_i| + | M_ξ_j| - | M_ξ_i∩ M_ξ_j|
§.§.§ Generalization
A language's ability to generalize is crucial for describing objects and concepts at different levels of complexity, allowing for effective clustering and hierarchical representation. Generalization in el reflects their ability to extend beyond specific training instances to novel situations. If the emergent languages can be generalised, we then could say that these languages do capture the structure of meaning spaces <cit.>. Research shows that languages capable of generalization tend to emerge only when the input is sufficiently varied <cit.>. In contrast, a large dictionary size often indicates a lack of generalization <cit.>.
Human languages have evolved under the pressure of a highly complex environment, fostering their generalization capabilities <cit.>. However, deep learning models often exploit dataset-specific regularities rather than developing systematic solutions <cit.>. To address this, much research is being done on the systematic generalization abilities of el.
Zero Shot Evaluation
Zero-shot evaluation, which assesses the ability of an agent to generalize to novel stimuli <cit.>, has become a standard metric in the study of el as illustrated in Figure <ref> in Appendix <ref>. This evaluation is critical to understand the generalization capabilities of an agent. Zero-shot evaluation can be done in two different scenarios, one with unseen input and the other with an unseen partner.
In the unseen input scenario, models are tested on a zero-shot test set consisting of samples with feature combinations not encountered during training. Performance, such as accuracy, is reported for these unseen samples <cit.>. Different methods for constructing novel inputs include exposing models to objects that resemble training data but have unseen properties or entirely novel combinations of features <cit.>. Moreover, a more drastic approach may involve moving to entirely new input scenarios, such as testing the ability of agents to generalize across different game types <cit.>.
The unseen partner scenario, also known as cross-play or zero-shot coordination, evaluates models by pairing agents that did not communicate during training. Again, performance is measured, typically in terms of accuracy <cit.>.
However, these approaches also have drawbacks. The unseen input scenario requires a ground truth oracle to withhold feature combinations, which is necessary to accurately define novel combinations. Meanwhile, the unseen partner setup can introduce inefficiencies by requiring additional resources to train novel communication partners for testing.
Ease and Transfer Learning
Ease and Transfer Learning (ETL), as proposed by Chaabouni et al. <cit.>, evaluates how easily new listeners can adapt to an el on distinct tasks. ETL extends the concept of ease-of-teaching <cit.> by assessing how effectively a deterministic language, developed by a fixed set of speakers, can be transferred to new listeners who are trained on tasks different from the original one for which the language was optimized <cit.>. This metric not only gauges the language's generality but also its transferability across tasks <cit.>.
To measure ETL, after convergence, a fixed number of speakers produce a deterministic language by selecting symbols using an operation over their distributions. This language is then used to train newly initialized listeners on a new task. The training curve is tracked to observe how quickly and accurately the listeners learn the task, which may involve more challenging objectives than the former training tasks <cit.>.
§.§ Pragmatics
Pragmatics is a critical aspect of language that examines how context influences meaning <cit.>. It goes beyond the literal interpretation of words and requires the listener to infer the speaker's intentions, beliefs, and mental states, an ability known as Theory of Mind (ToM) <cit.>. In human interactions, this contextual reasoning is essential for predicting and understanding behavior. In the context of el, pragmatics focuses on how effectively agents use the communication ability in their environment. Empirical studies have shown that agents may initially fail to use communication meaningfully, but, once they do communicate, they can reach a locally optimal solution to the communication problem <cit.>. Thus, evaluating the pragmatics of el is essential to determining its utility and effectiveness in real-world applications.
§.§.§ Predictability
Predictability evaluates the complexity of an environment and its effect on the need for communication. Thus, it is a central metric for the probability of emergence and the use of el. In simple environments with limited actions, agents can often coordinate without communication <cit.>.
Behavioral Divergence
Behavioral divergence, introduced by Dubova et al. <cit.>, posits that less diversity in actions or messages correlates with more predictable behavior, potentially reducing the need for communication. To quantify this, we calculate Behavioral Action Predictability BAP and Behavioral Message Predictability BMP. Both use the Jensen-Shannon Divergence (JSD) (see Equation <ref>) which itself uses the Kullback-Leibler Divergence D_KL (cf. Equation <ref>).
D_JS( P ∥ Q ) = 1/2D_KL( P ∥ M ) + 1/2D_KL( Q ∥ M )
with M = P + Q / 2
BAP (see Equation <ref>) and BMP (see Equation <ref>) both use a uniform distribution Q for comparison. BAP further uses the distribution of actions by the agent P ( a_ξ) while BMP uses the distribution of messages by the agent P ( m_ξ). Based on that, these metrics provide a robust measure of how predictable agent behaviors and messages are, with higher values indicating less predictability and greater need for beneficial communication <cit.>.
BAP = D_JS( P ( a_ξ) ∥ Q )
BMP = D_JS( P ( m_ξ) ∥ Q )
§.§.§ Efficiency
In el settings, efficient communication arises only when there is an opportunity cost <cit.>. Without such a cost, there is no drive towards brevity, which limits the effectiveness and efficiency of el in hci.
Sparsity
Sparsity, as proposed by Kalinowska et al.<cit.>, measures the extent to which agents minimize their communication during task execution. This metric requires only the collection of messages exchanged per episode for computation. However, its applicability is limited to scenarios where communication is not strictly necessary for task completion, i.e., agents have the option to send no messages at all or to send messages that contain no meaningful information.
A sparsity value of 0 indicates that an agent can solve the task using only a single message throughout an episode, reflecting a highly efficient communication strategy. Conversely, higher sparsity values indicate more frequent or verbose communication, which may indicate inefficiencies in the el.
Communication sparsity ComSpar is mathematically defined as:
ComSpar = 1/n_ep·∑_ M_ep, i∈ M
- log(
|{ m | m ∈ M_ep, i m ≠ 0 }|)
In this equation, M_ep, i represents the set of all messages exchanged during episode i, and n_ep is the total number of episodes observed. The collection { m | m ∈ M_ep, i m ≠ 0 } consists of all messages m of episode i that are non-zero and thus contributing.
§.§.§ Positive Signaling
Positive signaling evaluates the alignment between an agent's observations and its communication output <cit.>. The goal is to ensure that the outgoing transmitted information is both relevant and observable by the agent <cit.>.
Speaker Consistency
Speaker Consistency (SC), introduced by Jaques et al. <cit.>, measures how effectively an agent's messages reflect its state or trajectory, thereby ensuring the communication is meaningful. This is quantified using mutual information. For an agent ξ_i, the trajectory τ_ξ_i^t represents the sequence of states and actions up to time step t. The message produced at time t is denoted by m_ξ_i^t. The mutual information I(m_ξ_i^t, τ_ξ_i^t) between the message and trajectory is calculated as:
I(m_ξ_i^t, τ_ξ_i^t)
= H(m_ξ_i^t) - H(m_ξ_i^t | τ_ξ_i^t)
= - ∑_m ∈ M_ξ_iP_ξ_i(m) logP_ξ_i(m)
+ 𝔼_τ_ξ_i^t[
∑_m ∈ M_ξ_i P_ξ_i(m|τ_ξ_i^t) log P_ξ_i(m|τ_ξ_i^t)
]
Here, H(m_ξ_i^t) is the entropy of the message distribution, H(m_ξ_i^t | τ_ξ_i^t) is the conditional entropy given the trajectory, P_ξ_i(m) as marginal distribution of message m over all trajectories, and P_ξ_i(m|τ_ξ_i^t) as conditional distribution of message m given the trajectory τ_ξ_i^t. This way, the mutual information value reflects how much information the message carries about the agent's trajectory.
Lowe et al. <cit.> built on this concept and provided the following formula for Speaker Consistency (SC):
SC
= ∑_a ∈ A∑_m ∈ M P ( a, m )
log P ( a, m ) / P ( a ) P ( m )
In this equation, P ( a, m ) is the joint probability of action a and message m, calculated empirically by averaging their co-occurrences across episodes. In general, SC is a valuable metric for evaluating whether the el is both informative and aligned with the behavioral patterns of the sender.
§.§.§ Positive Listening
Positive listening evaluates the effectiveness of how a message receiver utilizes and applies incoming information <cit.>. However, agents should not simply process messages similarly to other observations to avoid treating them as mere directives <cit.>. Nevertheless, the metrics presented in this section focus on evaluating the receiver's ability to effectively integrate and use the information received, rather than evaluating the receiver's ability to do more than just follow instructions.
Instantaneous Coordination
Instantaneous Coordination (IC), also referred to as listener consistency <cit.>, was introduced by Jaques et al. <cit.> as a metric to evaluate how effectively an agent's message influences another agent's subsequent action. IC is computed similarly to Speaker Consistency (cf. Section <ref>), but differs in that it measures the mutual information between one agent's message and the other agent's next action, averaged over episodes. This metric directly captures the receiver's immediate reaction to an incoming message, making it a measure of positive listening. However, it primarily captures situations where the receiver's action is directly changed by the sender's message, without considering the broader context or long-term dependencies <cit.>. Accordingly, IC can miss many positive listening relationships <cit.>.
Jaques et al. <cit.> proposed two specific measures for IC: One that quantifies the mutual information between the sender's message and the receiver's next action (see Equation <ref>), and another one that measures the mutual information between the sender's current action and the receiver's next action (see Equation <ref>). These measures are calculated by averaging over all trajectory steps and taking the maximum value between any two agents, focusing on short-term dependencies between consecutive timesteps.
IC_m_ξ_S→ a_ξ_R=I(m_k^t ;a_j^t+1)
IC_a_ξ_S→ a_ξ_R=I(a_k^t ;a_j^t+1)
A unified equation for IC is provided by Lowe et al. <cit.>:
SC
= ∑_m_ξ_S^t∈ M_ξ_S∑_a_ξ_R^t+1∈ A_ξ_R
P ( a_ξ_R^t+1, m_ξ_S^t)
log P ( a_ξ_R^t+1, m_ξ_S^t) / P ( a_ξ_R^t+1) P ( m_ξ_S^t)
Here, P ( a_ξ_R^t+1, m_ξ_S^t) is the empirical joint probability of the sender's message and the receiver's subsequent action, averaged over episodes within each epoch.
Message Effect
The Message Effect (ME) metric, introduced by Bouchacourt and Baroni <cit.>, quantifies the influence of a message sent by one agent on the subsequent actions and messages of another agent. This metric explicitly considers bidirectional communication, so in the following we use generic agents ξ_A and ξ_B instead of sender and receiver. A notable challenge of this metric is the requirement for counterfactual analysis.
Given an agent ξ_A at timestep t sending a message m_ξ_A^t, we define z_ξ_B^t+1 as the combination of the action and message produced by agent ξ_B at the following timestep. Accordingly, the conditional distribution P ( z_ξ_B^t+1| m_ξ_A^t ) represents the response of ξ_B to the message from ξ_A. To account for counterfactuals, which encode what might have happened had ξ_A sent a different message m_ξ_A^t, we define the counterfactual distribution P( z_ξ_B^t+1) (see Equation <ref>).
The ME is then measured by the Kullback-Leibler divergence between the actual response and the counterfactual response (see Equation <ref>). The computation involves sampling z_ξ_B^t+1, k from the conditional distribution for the actual message and sampling counterfactuals m_ξ_A^t to estimate P( z_ξ_B^t+1, k) (see Equation <ref>). The final ME is calculated as the average KL divergence over the collection of samples K (see Equation <ref>).
P( z_ξ_B^t+1)
= ∑_m_ξ_A^t P ( z_ξ_B^t+1|m_ξ_A^t )
P( m_ξ_A^t )
ME_ξ_A→ξ_B^t
= D_KL(
P ( z_ξ_B^t+1| m_ξ_A^t )
∥P( z_ξ_B^t+1)
)
P( z_ξ_B^t+1, k)
= ∑_j=1^J P ( z_ξ_B^t+1, k|m_ξ_A^t )
P( m_ξ_A^t )
ME_ξ_A →ξ_B^t = 1/| K |∑_k ∈ Klog P ( z_ξ_B^t+1, k| m_ξ_A^t ) /P( z_ξ_B^t+1, k)
Causal Influence of Communication
The Causal Influence of Communication (CIC) metric, introduced independently by Jaques et al. <cit.> and Lowe et al. <cit.>, provides a direct measure of positive listening by quantifying the causal effect that one agent's message has on another agent's behavior. Traditional methods of evaluating communication often fall short, as simply testing for a decrease in reward after removing the communication channel does not adequately capture the utility of communication <cit.>.
CIC is computed using the mutual information between an agent's message and the subsequent action of the receiving agent. Unlike Instantaneous Coordination (cf. Section <ref>), CIC considers the probabilities P ( a , m ) = π_ξ_R( a | m ) π_ξ_S( m ) that represent changes in the action distribution of the receiver ξ_R when the message m from the sender ξ_S is altered. These probabilities are normalized within each game to accurately reflect the influence of messages on actions within the same context <cit.>.
For multi-time-step causal influence, the CIC metric is defined as the difference between the entropy of the receiver's actions with and without communication:
CIC(τ_ξ_R) = H(a_ξ_R^t | τ_ξ_R) - H(a_ξ_R^t | τ_ξ_R^+M)
Here, τ_ξ_R denotes the standard trajectory of the receiver, comprising state-action pairs, while τ_ξ_R^+M includes the communicated messages. The CIC is estimated by learning an approximate policy function π(· | τ_ξ_R). For more details on the multistep version, refer to Eccles et al. <cit.>, and for the single-step version, see Jaques et al. <cit.>.
§.§.§ Symmetry
Symmetry in el refers to consistent language use across agents in settings, where agents alternate between roles such as message sender and receiver <cit.>. Thus, symmetry ensures convergence to a common language rather than distinct dialects <cit.>.
Inter-Agent Divergence
Inter-Agent Divergence (IAD), introduced by Dubova et al. <cit.>, quantifies the similarity in how different agents map messages to actions. Let a_ξ_i denote the action of agent ξ_i. The first step involves computing the marginal action distributions for each agent given a message m, represented as P(a_ξ_i|m).
P ( a_ξ_i| m ) ∀ξ_i ∈ξ m ∈ M
The divergence between two agents, ξ_i and ξ_j, based on their responses to the same message, is then calculated using the Jensen-Shannon Divergence (JSD) as follows:
D_JS( ξ_i, ξ_j, m )
= D_JS(
P(a_ξ_i|m) ∥ P(a_ξ_j|m)
)
= 1/2[
D_KL( P ( a_ξ_i| m ) ∥ M)
+
D_KL( P ( a_ξ_j| m ) ∥ M )
]
where M = P ( a_ξ_i| m ) + P ( a_ξ_j| m )/2
Finally, the overall IAD is computed by averaging these divergences across all possible agent pairs ( ξ_i, ξ_j ) ∈ξ_comb and messages m ∈ M:
IAD =
1/|ξ_comb|1/|M|∑_( ξ_i, ξ_j ) ∈ξ_comb∑_m ∈ MD_JS(ξ_i,ξ_j,m)
While IAD effectively captures the consistency of inter-agent communication, it may have limitations when applied to more complex languages where message-level comparisons become difficult.
Within-Agent Divergence
Within-Agent Divergence (WAD), proposed by Dubova et al. <cit.>, measures the consistency of an agent's communication behavior when it changes roles, such as from sender to receiver. This metric captures the internal symmetry in an agent's behavior and is crucial in complex systems where agents can assume different roles within the same environment. To compute WAD, we again first consider the action distribution P ( a_ξ_i| m ) for each agent ξ_i over a set of messages m ∈ M. This distribution reflects how an agent's actions are conditioned on receiving or sending a specific message.
P ( a_ξ_i| m ) ∀ξ_i ∈ξ m ∈ M
Given this, the Jensen-Shannon Divergence (JSD) is used to assess the divergence between an agent's behavior when acting as a sender ξ_i,S versus as a receiver ξ_i,R:
D_JS( ξ_i,S , ξ_i,R , m )
= D_JS(
P ( a_ξ_i,S| m ) ∥ P ( a_ξ_i,R| m )
)
= 1/2[
D_KL( P ( a_ξ_i,S| m ) ∥ Q )
+
D_KL( P ( a_ξ_i,R| m ) ∥ Q )
]
with Q
= P ( a_ξ_i,S| m ) + P ( a_ξ_i,R| m ) /2
Finally, the overall WAD is computed by averaging this divergence across all agents ξ_i ∈ξ based on the WAD for individual agents and their messages m ∈ M_ξ_i:
WAD =
1/|ξ|∑_ξ_i ∈ξ1/|M_ξ_i|∑_m ∈ M_ξ_iD_JS(ξ_i,S,ξ_i,R,m)
§.§ Summary of the Metrics
While some el features are quantifiable by multiple metrics and have been investigated in multiple studies, others remain underexplored, as illustrated in Figure <ref> in Appendix <ref>. Metrics such as topographic similarity and zero shot evaluation, both of which assess semantic properties, are well established and widely utilized across multiple studies. In contrast, metrics related to pragmatics, such as speaker consistency and instantaneous coordination, are fairly well established but are less frequently used. Morphology metrics, particularly active words and average message length, are more commonly used, whereas syntax remains a peripheral concern, with only two isolated metrics proposed and not adopted in subsequent research. This imbalance indicates that while semantic metrics dominate el research, morphology and pragmatics receive moderate attention, and syntax is mostly neglected.
Furthermore, the optimality of these metrics is not straightforward. Rather than being simply minimized or maximized, their ideal values are likely to lie at a nuanced balance point that varies depending on the specific el system and application. This uncertainty leaves the critical question of what constitutes a *good el system largely unanswered. Addressing this gap will require a deeper exploration of underrepresented metrics and a more refined understanding of how to evaluate el systems holistically.
§ FUTURE WORK
In this section, we outline potential future directions for the research field of el, based on our vision outlined in Section <ref>. We present major research opportunities, organized along key research dimensions, in Section <ref>.
Along with future research directions, we have summarized a list of open source code repositories in Table <ref> in Appendix <ref> that can serve as convenient starting points for experimenting with these directions, for example, comprehensive frameworks such as the EGG toolkit <cit.> and BabyAI <cit.> are included.
§.§ Vision
Our vision for el research is grounded in a functional perspective, aiming to achieve significant breakthroughs in human-agent interaction <cit.>. This means developing communication systems that enable hci at the human level, addressing the purpose, cost, and value of communication with intuitive and effective interfaces <cit.>. A key goal is to ensure that el are grounded in real-world contexts, allowing agents to understand and interact with human-like comprehension and vice versa <cit.>. This includes creating hierarchical, compositional conceptualization capabilities that allow agents to discuss and understand novel concepts in a structured, human-relevant manner <cit.>. In addition, exploring the potential for AI explainability through communication is an exciting area <cit.>. Finally, in the long term, creation and creativity through el comparable to human capabilities would be a milestone. This would allow agents to truly communicate on a human level and enhance their ability to perceive and adapt to their environment through the use of language <cit.>.
§.§ Dimensions and Opportunities
The development, evaluation, and application of el in communication systems can be analyzed along several critical dimensions. We identified eight key dimensions that, to the best of our knowledge, represent the primary areas of focus in el research.
Evaluation Metrics
Evaluation metrics are essential for rigorously assessing the characteristics and effectiveness of el. As detailed in our taxonomy (cf. Section <ref>), we have identified key characteristics and their associated metrics. While some el features are quantifiable through multiple metrics and have been examined in multiple studies, others remain underexplored, as illustrated in Figure <ref> in Appendix <ref>. We emphasize the need to develop comprehensive and quantitative metrics that accurately capture these features, which are critical to determining the practical utility of el. Previous studies have similarly highlighted this need <cit.>.
In addition, further research is needed to systematically investigate existing metrics, especially with respect to their sensitivity to variations in settings, algorithms, and agent architectures <cit.>. It is imperative that these metrics be subjected to more rigorous investigation to ensure that they enable meaningful quantitative comparisons and support well-founded conclusions about the capabilities and utility of el. Thus, we endorse more comprehensive studies, more edge case testing and, in particular, more analysis of actual human-agent interaction. We see this as a critical priority for advancing the field.
Emergent Language and Natural Language Alignment
This dimension addresses the convergence and divergence between el and nl. A key approach to this challenge, discussed in Section <ref>, involves leveraging language priors to guide this alignment. Achieving robust el-nl alignment is essential for advancing human-agent interaction. Thus, future research should explore the integration of nl-centered metrics and regularization techniques to enhance this alignment <cit.>.
However, this alignment presents a fundamental dilemma. On the one hand, agents need the autonomy to develop languages organically, tailored to their specific interactions and requirements. On the other hand, to facilitate seamless human-agent communication, these el must closely resemble nl, which imposes significant constraints on their development. This tension creates what we call the Evolution-Acquisition Dilemma, where the evolutionary process fosters intrinsically motivated language emergence, while the acquisition process necessitates alignment with nl. Balancing these competing needs is a critical challenge for future research in this area.
Representation Learning
el can be viewed as a complex representation learning task, focusing on how agents encode, interpret, and construct internal representations of observations and linguistic data. While representation learning is a well-established area in artificial intelligence research, its application in the context of el remains underexplored. This dimension is central to the analysis of meaning and language space as outlined in our framework, which is based on the semiotic cycle (cf. Figure <ref>). Advancing this dimension requires advanced latent space analyses to elucidate the relationships between el, underlying world models, and nl structures. In addition, evaluating the impact of discrete versus continuous representations is critical to refining our understanding of el dynamics.
Future research directions include developing methodologies to ensure that agent representations more accurately reflect the input they receive <cit.>, exploring efficient representation of (multimodal) information <cit.>, conducting in-depth analyses to uncover and mitigate influencing factors and biases in learned representations <cit.>, and assessing the efficacy of these representations for downstream tasks <cit.>.
Agent Design
Agent design is a critical aspect in el research, directly influencing the linguistic capabilities and adaptability of artificial agents. Prominent research directions include the investigation of advanced neural network architectures tailored for el <cit.>, the creation of architectures optimized for heterogeneous and dynamic agent populations, and the refinement of structures that enhance language emergence and linguistic properties <cit.>. In addition, modular designs rather than monolithic ones potentially offer advantages by separating language processing from other task-specific computations. Addressing these design challenges is critical to advancing both el research and broader artificial intelligence goals.
Setting Design
The environment in which agents operate is central to shaping the el, encompassing interaction rules, agent goals, and communication dynamics (cf. Table <ref>). This dimension is integral to the setting space outlined in our framework (cf. Figure <ref>).
Important future research directions include scaling up experimental settings to include larger and more complex tasks <cit.> with a focus on realistic perceptually grounded game environments <cit.>. In addition, the study of the impact of populations as such <cit.> and the use of heterogeneous agent populations <cit.> are crucial areas of research. While some benchmarks have been established and utilized <cit.>, there remains a significant need for the development and widespread dissemination of comprehensive benchmarks in area of research.
Communication Design
The design of the communication channel in el systems is critical, focusing on how agents exchange and structure information through the channels available to them. This aspect is directly related to the phonetics and phonology components outlined in our taxonomy (cf. Section <ref> and Section <ref>). For discrete el, it is essential to establish channels that support word-based communication, with considerations such as vocabulary size and variable message length being fundamental to enabling effective and scalable human-agent interaction. Future research directions in this area include the exploration of topology-aware variable communication channels, the integration of heterogeneous channels within multi-agent systems, and the evolution of communication channels over time. Moreover, the incorporation of multimodal communication channels could provide more realistic and contextually rich stimuli, which may significantly enhance the sophistication and applicability of el in nl-oriented human-agent coordination <cit.>.
Learning Strategies
Learning strategies focus on how agents acquire, adapt, and refine their linguistic capabilities over time, including the development of language rules and adaptation through interactions with other agents. While marl serves as the foundational framework, there is significant potential to enhance the learning process through strategic design choices. Future research directions include the exploration of advanced regularization techniques <cit.>, the adoption of tailored optimization strategies <cit.>, and the integration of supervised or self-supervised learning objectives using appropriate loss designs <cit.>. Additionally, the application of meta-learning <cit.>, decentralized learning approaches <cit.>, and curriculum learning methodologies <cit.> offer promising avenues for optimizing the el learning process.
Human-Agent Interaction
The final dimension focuses on the interpretability of el by humans and the degree to which humans can shape their development. This aspect is critical for creating human-agent interaction systems where communication is intuitive and effective <cit.>. To advance this dimension, future research should prioritize the integration of human-in-the-loop feedback mechanisms to ensure that el are not only practical, but also comprehensible to human users <cit.>. This will improve the usability and adoption of these systems in real-world applications.
Key research directions include designing experiments that create incentives for agents to develop communication strategies more closely aligned with human language <cit.>. Additionally, exploring the resilience of communication protocols to deception through training with competing agents can lead to more robust and realistic interactions <cit.>. Exploring adaptive communication strategies to optimize the sparsity and clarity of messages based on individual or group needs within human-agent teams is another promising direction <cit.>.
§ LIMITATIONS AND DISCUSSION
In this section, we critically evaluate the limitations of our survey and identify areas for future improvement. Through our review, we aimed to develop a detailed taxonomy for the field of el, focusing on its key properties (cf. Section <ref>), and to analyze as well as categorize quantification approaches and metrics (cf. Section <ref>). In addition, we curated a summary of open questions and suggestions for future research (cf. Section <ref>). Despite considerable efforts to establish a viable taxonomy and framework in the most systematic and unbiased manner, there are several potential limitations to our research approach and methodology.
First, while we have provided an extensive overview of scientific publications in el research, it is important to acknowledge that our search process, despite being thorough, may have overlooked significant contributions. Consequently, we do not claim completeness. However, we are very confident that our review represents a fair and well-balanced reflection of the existing body of work and the current state of the art.
Second, our review includes sources that are not peer-reviewed, such as preprints from https://arxiv.org/arXiv, to ensure that our work captures the most recent developments and diverse perspectives, including those that might be controversial. While we have carefully examined each paper included in this review, we cannot guarantee that every detail in non-peer-reviewed papers is entirely accurate. Consequently, we focused on concepts, findings, and metrics that are supported by multiple studies.
Third, we have introduced a taxonomy and a comprehensive metrics categorization for el research, a field that is still in its early stages. This effort comes with inherent challenges, and while we have addressed many of these, it is important to note that our proposed framework does not represent a consensus within the wider research community. We are transparent about this limitation and encourage further discussion and validation.
Fourth, in order to maintain focus and conciseness, we have deliberately excluded ideas that lack associated metrics. As a result, some conceptual ideas from the reviewed research literature that are difficult to quantify in this early stage may not be fully explored in this survey.
Finally, we have incorporated several existing metrics into our proposed framework. While many of these metrics are well established in the field, we acknowledge that a more rigorous and critical experimental evaluation of these metrics would be beneficial. We strongly recommend that future research conduct such evaluations to further refine and validate the tools and methods used in el research.
§ CONCLUSION
In this paper, we present a comprehensive taxonomy of el, an overview of applicable metrics, and a summary of open challenges and potential research directions. Additionally, we provide a list of open source code repositories of the field in Table <ref> in Appendix <ref>. Our overall goal is to create a standardised yet dynamic framework that not only facilitates progress in this area of research, but also stimulates further interest and exploration.
Section <ref> introduces the foundational linguistic concepts that underpin our taxonomy. Section <ref> offers a comprehensive taxonomy of el based on the review of scientific publications. Section <ref> presents a unified categorization and notation for various metrics, depicted in Figure <ref>, ensuring consistency and clarity. Section <ref> provides a summary of current achievements and outlines research opportunities.
By providing a structured overview and systematic categorization of linguistic concepts relevant to el we have created a common ground for research and discussion. The detailed presentation of metrics and their unified notation ensures readability and usability, making it easier for researchers to navigate related topics and identify potential research opportunities and blind spots of future publications and the research field as a whole. This survey provides a valuable perspective on the development and analysis of el, serving as both a guide and a resource for advancing this area of study.
el is a fascinating and promising way to achieve grounded and goal-oriented communication among agents and between humans and agents. Despite its significant progress in recent years, the field faces many open questions and requires further evaluation methods and metrics. Critical questions remain about the measurability of linguistic features, the validity of proposed metrics, their utility, and their necessity. Aligning el with nlp for hci presents additional opportunities and challenges. We encourage continued contributions and interdisciplinary research to address these issues and advance the field.
§ DECLARATIONS
* Funding: We acknowledge the funding of the internships of Arya Gopikrishnan and Gustavo Adolpho Lucas De Carvalho by the German Academic Exchange Service (DAAD) project https://www.daad.de/rise/en/rise-germany/*RISE Germany.
* Conflict of interest/Competing interests (check journal-specific guidelines for which heading to use): Not applicable.
* Ethics approval and consent to participate: Not applicable.
* Consent for publication: Not applicable.
* Data availability: Not applicable.
* Materials availability: Not applicable.
* Code availability: Not applicable.
* Author contribution: J.P., H.T. and T.M. had the idea for the article. J.P. performed the literature search and data analysis. The first draft of the manuscript was written by J.P. with the continuous support of C.W.d.P. and H.T. The first draft of the metrics section was written by A.G. All authors commented on earlier versions of the manuscript and critically revised the final manuscript.
§ ADDITIONAL TABLES
[
caption = List of code repositories for the literature reviewed.,
label = tab:code_repositories,
]
colspec = rX, width = ,
rowhead = 1,
Paper Link
<cit.> <https://github.com/agakshat/visualdialog-pytorch>
<cit.> <https://github.com/jacobandreas/tre>
<cit.> <https://github.com/facebookresearch/EGG/tree/main/egg/zoo/compo_vs_generalization_ood >
<cit.> <https://github.com/arski/LEW>
<cit.> <https://github.com/proroklab/adversarial_comms>
<cit.> <https://github.com/benbogin/emergence-communication-cco/ >
<cit.> <https://github.com/brendon-boldt/filex-emergent-language>
<cit.> <https://github.com/brendon-boldt/filex-emergent-language>
<cit.> <https://github.com/DianeBouchacourt/SignalingGame >
<cit.> <https://github.com/facebookresearch/fruit-tools-game>
<cit.> <https://github.com/nicofirst1/rl_werewolf>
<cit.> <https://github.com/facebookresearch/EGG/blob/master/egg/zoo/channel/README.md>
<cit.> <https://github.com/facebookresearch/brica>
<cit.> <https://github.com/facebookresearch/EGG/blob/master/egg/zoo/compo_vs_generalization/README.md>
<cit.> <https://github.com/deepmind/emergent_communication_at_scale>
<cit.> <https://github.com/mila-iqia/babyai/tree/master>
<cit.> <https://github.com/AriChow/EL>
<cit.> <https://github.com/mcogswell/evolang>
<cit.> <https://github.com/flowersteam/Imagine>
<cit.> <https://github.com/DylanCope/zero-shot-comm >
<cit.> <https://github.com/gautierdag/cultural-evolution-engine>
<cit.> <https://github.com/batra-mlp-lab/visdial-rl>
<cit.> <https://github.com/Near32/ReferentialGym>
<cit.> <https://github.com/Near32/ReferentialGym/tree/master/zoo/referential-games%2Bst-gs>
<cit.> <https://github.com/Near32/Regym/tree/develop-ETHER/benchmark/ETHER>
<cit.> <https://github.com/Near32/ReferentialGym/tree/develop/zoo/referential-games%2Bcompositionality%2Bdisentanglement>
<cit.> <https://github.com/facebookresearch/EGG/tree/main/egg/zoo/emcom_as_ssl >
<cit.> <https://github.com/CLMBRs/communication-translation>
<cit.> <https://github.com/blinodelka/Multiagent-Communication-Learning-in-Networks>
<cit.> <https://github.com/nyu-dl/MultimodalGame>
<cit.> <https://github.com/jacopotagliabue/On-the-plurality-of-graphs>
<cit.> <https://github.com/alshedivat/lola >
<cit.> <https://github.com/Shawn-Guo-CN/EmergentNumerals>
<cit.> <https://github.com/Shawn-Guo-CN/GameBias-EmeCom2020>
<cit.> <https://github.com/uoe-agents/Expressivity-of-Emergent-Languages>
<cit.> <https://github.com/SonuDixit/gComm >
<cit.> <https://github.com/Meta-optimization/emergent_communication_in_agents>
<cit.> <https://fringsoo.github.io/pragmatic_in2_emergent_papersite/ >
<cit.> <https://github.com/facebookresearch/EGG >
<cit.> <https://github.com/facebookresearch/EGG/tree/master/egg/zoo/language_bottleneck>
<cit.> <https://github.com/facebookresearch/EGG/tree/master/egg/zoo/compositional_efficiency>
<cit.> <https://github.com/tomekkorbak/compositional-communication-via-template-transfer>
<cit.> <https://github.com/tomekkorbak/measuring-non-trivial-compositionality>
<cit.> <https://github.com/batra-mlp-lab/lang-emerge>
<cit.> <https://github.com/facebookresearch/translagent>
<cit.> <https://github.com/MediaBrain-SJTU/ECISQA>
<cit.> <https://github.com/cambridgeltl/ECNMT>
<cit.> <https://github.com/pliang279/Competitive-Emergent-Communication>
<cit.> <https://github.com/ToruOwO/marl-ae-comm>
<cit.> <https://github.com/olipinski/rl_werewolf>
<cit.> <https://anonymous.4open.science/r/TPG-916B>
<cit.> <https://github.com/facebookresearch/measuring-emergent-comm>
<cit.> <https://github.com/backpropper/s2p>
<cit.> <https://github.com/Ddaniela13/LearningToDraw>
<cit.> <https://github.com/jayelm/emergent-generalization>
<cit.> <https://github.com/mnoukhov/emergent-compete >
<cit.> <https://github.com/XeniaOhmer/hierarchical_reference_game>
<cit.> <https://github.com/XeniaOhmer/language_perception_communication_games>
<cit.> <https://github.com/saimwani/CoMON>
<cit.> <https://github.com/asappresearch/compositional-inductive-bias>
<cit.> <https://github.com/evaportelance/emergent-shape-bias>
<cit.> <https://github.com/Joshua-Ren/Neural_Iterated_Learning >
<cit.> <https://github.com/backpropper/cbc-emecom>
<cit.> <https://github.com/MathieuRita/Population>
<cit.> <https://github.com/wilrop/communication_monfg>
<cit.> <https://github.com/Homagn/MultiAgentRL>
<cit.> <https://github.com/david-simoes-93/A3C3 >
<cit.> <https://github.com/david-simoes-93/A3C3 >
<cit.> <https://github.com/shanest/function-words-context>
<cit.> <https://github.com/CLMBRs/communication-translation>
<cit.> <https://github.com/facebookarchive/CommNet>
<cit.> <https://github.com/mynlp/emecom_SignalingGame_as_betaVAE>
<cit.> <https://github.com/thomasaunger/babyai_sr >
<cit.> <https://github.com/i-machine-think/emergent_grammar_induction>
<cit.> <https://github.com/TonghanWang/NDQ>
<cit.> <https://github.com/jimmyyhwu/spatial-intention-maps>
<cit.> <https://github.com/wildphoton/Compositional-Generalization>
<cit.> <https://github.com/ysymyth/ec-nl >
<cit.> <https://github.com/geek-ai/Magent>
§ ADDITIONAL FIGURES
octagon/.style=regular polygon,regular polygon sides=8,
|
http://arxiv.org/abs/2409.02212v1 | 20240903182715 | LSTM-QGAN: Scalable NISQ Generative Adversarial Network | [
"Cheng Chu",
"Aishwarya Hastak",
"Fan Chen"
] | quant-ph | [
"quant-ph",
"eess.SP"
] |
A Novel Audio-Visual Information Fusion System for Mental Disorders Detection
Yichun Li, Shuanglin Li, Syed Mohsen Naqvi
Intelligent Sensing and Communications Research Group, Newcastle University, UK
September 9, 2024
===============================================================================================================================
§ ABSTRACT
Current quantum generative adversarial networks (QGANs) still struggle with practical-sized data. First, many QGANs use principal component analysis (PCA) for dimension reduction, which, as our studies reveal, can diminish the QGAN's effectiveness. Second, methods that segment inputs into smaller patches processed by multiple generators face scalability issues.
In this work, we propose , a QGAN architecture that eliminates PCA preprocessing and integrates quantum long short-term memory (QLSTM) to ensure scalable performance.
Our experiments show that significantly enhances both performance and scalability over state-of-the-art QGAN models, with visual data improvements, reduced Fréchet Inception Distance scores, and reductions of 5× in qubit counts, 5× in single-qubit gates, and 12× in two-qubit gates.
NISQ, Quantum Generative Adversarial Network, Long Short-Term Memory
§ INTRODUCTION
Current QGANs.
Recent advancements in Noisy Intermediate-Scale Quantum (NISQ) platforms <cit.> have catalyzed intense research on Quantum Generative Adversarial Networks (QGANs) <cit.>, which are well-suited to the constraints of NISQ systems, such as limited qubit counts and shallow circuit depths<cit.>.
Building on the foundational work <cit.> that established the theoretical superiority of QGANs over classical counterparts,
early QGAN implementations <cit.> only focused on low-dimensional inputs like single-bit data.
Subsequent research introduced innovations such as Wasserstein loss <cit.>
and novel architectures <cit.> to improve training stability.
More recent work <cit.> expanded QGANs to high-dimensional data, like the 28×28 MNIST dataset, by employing dimensionality reduction techniques like Principal Component Analysis (PCA).
The state-of-the-art (SOTA) PatchGAN <cit.> further segments inputs into smaller patches, enabling efficient processing on practical NISQ devices.
Limitations.
Despite recent developments, QGANs continue to face challenges in managing practical-sized data.
First, while pre- and post-processing with PCA and inverse PCA <cit.> enable QGANs to handle large-dimensional data, PCA often dominates the process, diminishing the contributions of the QGANs themselves.
Second, although PatchGAN <cit.> facilitates the direct processing of practical-sized inputs through multiple small patches, its architectural limitations demand an increasing number of quantum resources as input size grows, leading to serious scalability challenges.
For instance, generating a single MNIST image requires a prohibitively high 56 sub-quantum generators and 280 qubits.
Third, and more concerning, our preliminary study shows a significant decline in output quality as PatchGAN scales from its original 5-qubit design <cit.> to 8 qubits, severely limiting its effectiveness at larger scales.
Contributions.
We introduce , a novel architecture that eliminates the need for PCA when processing large-dimensional data.
The design allows for the use of a constant amount of NISQ computing resources as input size increases.
However, as additional hardware resources become available, the architecture scales efficiently, ensuring consistent and reliable performance.
Our contributions include:
* Preliminary Analysis.
We conduct experiments on the SOTA QGANs <cit.>, revealing previously undisclosed limitations in PCA pre-processing and model scalability.
* Scalable Architecture.
We present , a scalable QGAN architecture inspired by recent advances in quantum long-short memory (QLSTM) <cit.>.
eliminates the need for PCA, maintains constant NISQ resources as input size grows, and efficiently scales with increasing quantum computing resources.
* Enhanced Performance.
We conduct evaluations on NISQ computers. Experimental results show that significantly enhances generative performance and improves scalability compared to SOTA QGANs.
§ BACKGROUND
QGAN Basics.
Figure <ref>
illustrates a standard QGAN with two parameterized models: the Generator, G(θ_g), which generates synthetic data, and the Discriminator, D(θ_d), which evaluates the generated data against real data.
G is implemented using a quantum neural network (QNN), typically composed of a data encoder E(·) and repeated layers of a variational quantum circuit (VQC) with one-qubit rotations (i.e., Rot.) and two-qubit entanglement (i.e., Ent.).
D in SOTA QGANs <cit.> can be implemented with either classical or quantum models.
The objective is to optimize the predefined minmax loss ℒ, as outlined in Equation <ref>, where z represents the latent variable.
The specific loss function can be implemented using various specified functions <cit.>.
The overall goal is to enable G to generate data indistinguishable from real data, while D improves its ability to differentiate between them.
min_θ_gmax_θ_dℒ{D_θ_d(G_θ_g(z)), D_θ_d(x) }
SOTA QGANs.
To manage larger-dimensional data with limited qubits on NISQ computers, SOTA QGANs <cit.>) primarily utilize the following two techniques:
* Pre- and Post-Processing.
Several recent QGANs <cit.> utilize principal component analysis to reduce input dimensions (e.g., from 784 to 4 in <cit.>) to fit within the limitations of NISQ computers with constrained qubits.
The key steps in PCA involve:
(1) standardizing the data to have zero mean and unit variance, and
(2) calculating the covariance matrix 𝐂 and the matrix 𝐕_k, which contains the top k eigenvectors (principal components).
For any data matrix 𝐗 with mean μ, the data can be reduced to the top k principal components by 𝐙=𝐗𝐕_k.
The reduced-dimensional data 𝐙^* can then be reconstructed to approximate the original data through inverse PCA: 𝐗^*=𝐙^*𝐕^⊤_k + μ.
* Patched Input.
PatchGAN <cit.> segments the input into small regional patches and trains a dedicated sub-generator for each, capable of generating synthesized data that follows the pattern of the corresponding patch.
This approach makes it a resource-efficient QGAN framework.
The number of sub-generators scales with the input size; for instance, the 5-qubit design in the original work <cit.> requires 56 sub-generators to process the 784-pixel MNIST dataset, and doubling the input size would proportionally increase the number of sub-generators needed.
QLSTM.
Long short-term memory <cit.> effectively captures spatiotemporal information, enabling task-specific regulation of data flow.
Recent work <cit.> has introduced quantum LSTM, extended to various sequential learning tasks <cit.>.
As shown in Figure <ref>, QLSTM retains the classical LSTM gating mechanism, with the key distinction being the integration of QNNs.
Due to page limit, we refer readers to <cit.> for detailed insights into QLSTM.
LSTM has already been applied in classical GANs <cit.>, demonstrating enhanced generative power and reduced computational cost.
Building on this, we aim to leverage LSTM's ability to selectively retain relevant patterns within a QGAN by training a QLSTM-based generator using different patched inputs, rather than separate sub-generators for different patches as in <cit.>.
§ PRELIMINARY STUDY AND MOTIVATION
§.§ Preliminary Study
PCA Overshadows QGANs.
QGANs <cit.> on MNIST utilize PCA and inverse PCA for dimensionality reduction and reconstruction.
To evaluate the impact of PCA, we reduced 28×28 MNIST images to 1×2 vectors using , generating the corresponding 𝐂, 𝐕_2, and μ.
We then randomly generated 1×2 vectors, applied inverse PCA, and present the reconstructed images in Figure <ref>.
These reconstructions closely resemble the original MNIST data and are comparable to those produced by QGANs <cit.>. This suggests that PCA pre- and post-processing may play a dominant role, potentially overshadowing the effectiveness of QGANs.
These findings raise concerns about the independent validity of QGANs when PCA is involved, underscoring the need to evaluate QGANs using unprocessed data.
Scalability for PatchGAN.
PatchGAN <cit.> claims effectiveness with patch-based processing of high-dimensional inputs, but its original work only reports results using 5 qubits.
To assess scalability, we increasing the qubit count from 5 to 8. Since PatchGAN employs amplitude encoding and processes one patch at a time, we adjusted the number of sub-generators to cover all 784 pixels in an MNIST image.
As show in Figure <ref>,
the generated images reveal a rapid deterioration in quality as qubit count increases, with sub-figure titles indicating the number of qubits and required sub-generators (i.e., sub-gens).
This highlights the poor scalability of the PatchGAN architecture and suggests that even with more qubits, the architecture is unlikely to improve or effectively handle larger-scale inputs.
§.§ Motivation
Our preliminary results highlight the critical need for a QGAN model capable of directly processing real-world data without PCA preprocessing, as well as a more scalable architecture to overcome the limitations of existing QGANs. Motivated by these findings, we are exploring the integration of patched inputs inspired by PatchGAN <cit.> to enable direct input processing without PCA. Specifically, we are investigating a scalable QGAN framework that leverages QLSTM as the generator's backbone, utilizing QLSTM's ability to capture spatiotemporal information across patches with a single generator, rather than separate sub-generators as in <cit.>. Additionally, we are reengineering the quantum circuit ansatz within the QLSTM structure to improve hardware efficiency, fully addressing the NISQ constraints overlooked in previous QLSTM studies like <cit.>.
§
§.§ Overall Architecture
As illustrated in Figure <ref>(b), utilizes QLSTM at the core of the generator to enhance scalability and resource efficiency.
Like PatchGAN <cit.>, the discriminator in can be implemented using either a classical or quantum neural network, depending on the available quantum computing resources.
The following outlines the key components and configurations in .
* Patch Inputs without PCA.
In line with <cit.>, processes patched inputs to generate corresponding output patches, which are then recombined into a complete output. Unlike <cit.>, eliminates the need for PCA and inverse PCA, processing the original data directly. This introduces a trade-off between resources (i.e., qubit number N) and processing latency (i.e., steps T).
With an N-qubit implementation, generates 2^N measured probabilities at each step as the output vector for each synthetic patched output. These vectors are then compared to the real patched input data in the discriminator. The total number of steps, T, is determined by D/2^N, where D represents the size of the real data.
* Scalable QGAN with LSTM.
The generator in consists of four QLSTM cells, as shown in Figure <ref>(b).
The process starts by feeding normally distributed noise z into the generator to produce the initial sub-image, G_θ_g(z). The discriminator then assesses both the synthetic and real input patches, calculating the loss ℒ.
Unlike PatchGAN <cit.>, which requires a separate generator for each patch—leading to a significant increase in NISQ resource overhead as input size grows— leverages the QLSTM's ability to learn and retain relevant patterns while discarding irrelevant information, regardless of the index of patches.
To achieve this, gradients from all patches within a single input are averaged and applied to update the model parameters collectively, resulting in an image-adaptive generator that scales with increasing data dimensions while maintaining fixed resource usage.
* Training Optimization.
Convergence in QGAN training is a critical challenge, significantly influenced by the choice of quantum loss function.
Within the framework, we evaluated both the conventional binary cross entropy loss <cit.> and the Wasserstein loss <cit.>.
The specific Wasserstein loss used for is detailed in Equation <ref>,
where
𝕃_x̂ = x̂∼ P_x̂𝔼 [ ( ∇ _x̂D_θ_d(x̂) _2 -1 ) ^2],
P_r and P_g represent the real data (i.e., x) and
generated data (i.e., x̃∈D_θ_d(G_θ_g(z))) distributions, respectively.
The distribution
P_x̂ is uniformly sampled between P_r and P_g, and
λ is a constant.
Experimental results on the impact of QGAN loss functions are discussed in Section <ref>.
min_θ_gmax_θ_dx∼ P_r𝔼 [ D_θ_d(x) ] - x̃∼ P_g𝔼 [ D_θ_d(x̃) ] -
λ𝕃_x̂
§.§ NISQ Implementation
offers flexibility in implementing G and D.
For fair comparison, D is implemented as a classical neural network, as in PatchGAN <cit.>.
In the QLSTM cells for G, we employed a hardware-efficient ansatz inspired by recent QNNs <cit.>, instead of the generic circuit from <cit.>.
Figure <ref>(a) shows the QNN circuit, which utilizes seven qubits.
Each VAC block includes , , and layers, followed by 2-qubit entanglement layer, with the VQC layers repeated twice.
The measurement layer converts the quantum state into classical vectors.
Although the gate count matches that in <cit.>, our circuit uses native gates, while the (α, β, γ) gate in <cit.> requires synthesis into multiple native gates.
Design Overhead.
Table <ref> compares the hardware resources required by PatchGAN and for the MNIST dataset. Due to architectural differences, a QNN in PatchGAN refers to the quantum generator used for each input patch, while in , it refers to the quantum module within the QLSTM.
The last three rows of Table <ref> highlight that achieves a significant reduction:
a 5× decrease in qubit counts, a 5× decrease in one-qubit gates (1QG), and a 12× decrease in two-qubit gates (2QG).
§ EXPERIMENTS AND RESULTS
§.§ Experimental Setup
Schemes and Benchmarks.
We compare with PatchGAN <cit.> using the MNIST dataset, which consists of 28×28 grayscale images of handwritten digits 0∼9.
PatchGAN is implemented according to its original design <cit.>, utilizing 5 qubits and 56 sub-generators. Each sub-generator produces a 14-pixel patch, and together, the 56 sub-generators generate the entire 784-pixel MNIST image.
For , we implement the generator with two QLSTM layers, each containing 4 QNNs with 7 qubits. At each time step, the LSTM-QGAN generates a 196-pixel patch, requiring 4 time steps to produce a complete MNIST image.
Simulation.
All QGANs are implemented with the PennyLane and Torchquantum libraries.
PatchGAN and are trained using the ADAM optimizer with a 2e-4 learning rate, a 128 batch size, and 1000 epochs. Quantum circuits are run on the NISQ computer <cit.>.
Evaluation metrics.
We evaluate the generated images using both qualitative (e.g., visual inspection) and quantitative methods. For quantitative assessment, we employ the Fréchet Inception Distance (FID), a widely recognized metric for measuring image similarity in GANs <cit.>. A lower FID score indicates a closer feature distance between real and generated images, signifying higher quality. In our experiments, we randomly select 500 real images and 500 generated images for comparison.
§.§ Results and Analysis
Comparison of Image Visual Quality.
Figure <ref>(a)
presents a visual comparison between the images generated by PatchGAN and .
PatchGAN demonstrates limited generation capabilities, as the outlines of the digits (0∼9) are only vaguely identifiable, with noticeable white noise in the background.
Additionally, the clarity of more complex digits, such as 4, 5, and 9, is particularly low, further highlighting its deficiencies.
In contrast, demonstrates superior image generation, producing sharper and more distinct digits with minimal noise, underscoring its enhanced capability in generating high-quality images.
Comparison of Image FID Scores.
Figure <ref>(b)
compares the FID scores of images generated by PatchGAN and across different digit classes.
The FID scores vary between the two models depending on the complexity and distinctiveness of each digit.
Overall, achieves lower FID scores than PatchGAN, indicating higher quality in the generated images.
Specifically, PatchGAN shows significant variability, with its highest FID score at 445.22 (class 0) and its lowest at 246.56 (class 4).
In contrast, consistently outperforms PatchGAN, with its highest FID score at 275.58 (class 7) and its lowest at 134.31 (class 1).
On average, PatchGAN's FID score is 318.02, while achieves a significantly lower average FID score of 193.28.
Impact of Loss Function.
Figure <ref>
illustrates the impact of Wasserstein loss and binary cross-entropy (BCE) loss on the training convergence of , comparing both generator loss (i.e., GL) and discriminator loss (i.e., DL).
With Wasserstein loss, the DL initially decreases while GL increases, ultimately leading to convergence as training progresses.
Conversely, with BCE loss, GL rapidly increases after several training cycles and stabilizes around 100, while DL drops sharply—indicating mode collapse, a known issue in GAN training.
Although with BCE loss can stabilize, achieving full convergence may require more sophisticated techniques.
In contrast, Wasserstein loss offers greater training stability, resulting in smoother convergence.
§ CONCLUSION
This work presents , a quantum generative adversarial network (QGAN) architecture that overcomes key limitations in existing models. By eliminating reliance on principal component analysis (PCA) and integrating quantum long short-term memory (QLSTM), achieves scalable performance with efficient resource use. As the first QGAN to incorporate QLSTM, this approach represents a significant advancement likely to inspire further research.
IEEEbib
|
http://arxiv.org/abs/2409.02443v1 | 20240904044115 | Exploring the applicability of Large Language Models to citation context analysis | [
"Kai Nishikawa",
"Hitoshi Koshiba"
] | cs.DL | [
"cs.DL"
] |
Exploring the applicability of Large Language Models to citation context analysis]
Exploring the applicability of Large Language Models to citation context analysis
1,2]NISHIKAWA [email protected]
2]KOSHIBA Hitoshi
[1]Institute of Library, Information and Media Science, University of Tsukuba, 1-2, Kasuga, Tsukuba, 305-8550, Ibaraki, Japan
[2]National Institute of Science and Technology Policy (NISTEP), Ministry of Culture, Science and Sports (MEXT), 3-2-2,Kasumigaseki, Chiyoda-ku, 100-0013, Tokyo, Japan
Unlike traditional citation analysis—which assumes that all citations in a paper are equivalent—citation context analysis considers the contextual information of individual citations.
However, citation context analysis requires creating large amounts of data through annotation, which hinders the widespread use of this methodology.
This study explored the applicability of Large Language Models (LLMs)—particularly ChatGPT—to citation context analysis by comparing LLMs and human annotation results.
The results show that the LLMs annotation is as good as or better than the human annotation in terms of consistency but poor in terms of predictive performance.
Thus, having LLMs immediately replace human annotators in citation context analysis is inappropriate.
However, the annotation results obtained by LLMs can be used as reference information when narrowing the annotation results obtained by multiple human annotators to one, or LLMs can be used as one of the annotators when it is difficult to prepare sufficient human annotators.
This study provides basic findings important for the future development of citation context analyses.
[
[
Received ... / Accepted ...
===============================
§ INTRODUCTION
Quantitative analysis focusing on citation relationships among papers assumes that all citations are essentially and implicitly equivalent <cit.>.
In contrast, citation context analysis has been proposed to consider the contextual information of individual citations, such as the location of the citation and the semantic content of the text containing the citation.
Although citation context analysis is expected to provide complementary findings to the traditional quantitative citation analysis, it has the drawback that the cost of creating the data necessary for the analysis is significant.
Therefore, it is difficult to conduct studies that require a large amount of data, for example, analyzing differences in citation context trends among multiple disciplines.
In the citation context analysis, data are created by determining the contextual characteristics of each citation using the text surrounding the citation in the citing paper.
There are two ways to create data: manual data processing, in which a human annotator manually creates data, and automatic data processing, in which data are created using machine learning and other techniques <cit.>.
However, because the latter method often uses supervised learning—which requires training data—it is necessary to create large datasets using human annotators.
The high cost of this annotation work is an obstacle to the development of citation context analysis.
However, with the recent development of Large Language Models (LLMs) such as GPT <cit.>, some studies have attempted to perform general annotation tasks on behalf of human annotators <cit.>.
These studies clarify that LLMs sometimes outperform human annotators hired through crowdsourcing and can produce more data in a more time- and cost-efficient manner.
However, the performance of LLMs annotation varies depending on the specific task, even for the same text classification, and it is not necessarily clear whether LLMs can immediately automate the annotation process.
To the best of our knowledge, no study has focused on whether LLMs can substitute for human annotators in scientific papers.
Because a scientific paper is a specialized text with its own formatting and writing style and contains a large amount of specialized terminology, annotations seem different from general annotations that can be easily crowdsourced, as focused on in previous studies.
In fact, in citation context analysis, annotations are often performed by researchers or graduate students employed as research assistants (RA), who are accustomed to reading articles and are required to be familiar with a schema and manual for annotation through a certain amount of training.
Therefore, it is unclear whether the findings of previous studies can be applied to the citation context analysis.
This study aims to explore the applicability of LLMs to citation context analysis.
Specifically, we will examine the following by having LLMs perform annotation tasks similar to those performed by human annotators in <cit.>, a previous study on citation context analysis:
* Can LLMs replace humans for annotations in citation context analysis?
* How can LLMs be effectively utilized in citation context analysis?
The results of this study indicate that the annotation results of LLMs are comparable to or better than those of humans in terms of consistency but poor in terms of predictive performance.
Therefore, it is not appropriate to allow current LLMs to perform annotations associated with citation context analysis on behalf of humans.
However, the annotation results obtained by LLMs can be used as reference information when narrowing the annotation results obtained by multiple human annotators to one, or LLMs can be used as one of the annotators if securing a sufficient number of human annotators is difficult.
This study provides basic findings important for the future development of citation context analyses.
In the following section, we provide a literature review followed by the methods and results of the experiments.
Subsequently, based on the experimental results, we discuss whether LLMs can replace human annotators.
Next, we examine whether LLMs can be applied to citation context analysis beyond replacing humans.
Finally, we present our conclusions.
§ LITERATURE REVIEW
§.§ Citation Context Analysis
Citation context analysis is also referred to as citation content analysis.
Although some studies distinguish between both <cit.>, we use the term citation context analysis as including citation content analysis.
When conducting citation context analysis, the first step is to set a schema that defines the categorization of citations.
Categories and possible values (also called classes) for each category are often set arbitrarily by researchers according to their research purposes.
<cit.> divided these into syntactic and semantic categories, with the former represented by citation location and the latter by citation purpose (also called citation function or citation motivation) and citation sentiment.
Next, a dataset is created by classifying the citations to be analyzed based on the schema.
This stage corresponds to a task known as annotation, coding, or citation classification.
As mentioned previously, there are two dataset creation methods: human annotators (coders), machine learning, and other techniques <cit.>.
The former is a costly method in terms of both time and money, whereas supervised learning is often used in the latter, especially for semantic categories that require human annotation <cit.>.
In addition, the distribution of classes has been reported to be highly skewed for many categories <cit.>, which is another reason for the need for larger datasets for analysis.
Finally, the created dataset is analyzed for individual research purposes.
However, because both human and machine methods require costly human annotations, studies requiring large datasets, such as comparisons of citation relationships among multiple disciplines, are not well developed.
The few exceptions that make inter-discipline comparisons are achieved by limiting the number of categories, topics, or disciplines focused on <cit.>.
In other words, the cost of annotation must be reduced to allow a more flexible or large-scale study design for citation context analysis.
§.§ Annotation by LLMs
Annotation refers to the text classification in natural language processing tasks.
Many previous studies have compared the results of multiple models and versions of LLMs to evaluate their performance in text classification <cit.>.
Several studies have focused on the potential of LLMs as substitutes for annotation, comparing the annotation results obtained by multiple models and versions of LLMs <cit.>, or more directly comparing human and LLMs annotations <cit.>.
The specific tasks performed in these studies are as diverse as classifying topics for social networking posts <cit.>, classifying websites as news <cit.>, and classifying article genres for news article headlines <cit.>.
In studies comparing human and LLM annotations, several texts that are the subject of these tasks are common and do not require expertise to read.
Therefore, crowd workers are often employed as human annotators in addition to trained annotators, such as graduate students, when making comparisons between humans and LLMs.
Although the cost of annotation is significantly lower with LLMs than with crowd workers <cit.>, the question remains as to whether the quality of annotation results with LLMs is high enough to be used for analysis.
The results of annotation tasks are often evaluated in terms of their consistency (also known as reliability) and prediction performance.
Inter-coder agreement is often used as a consistency metric, whereas accuracy or F1 is often used as a performance metric.
The findings of previous studies indicate that LLMs are superior or comparable to human annotators, particularly cloud workers, in terms of consistency and performance.
Meanwhile, it has also been noted that the consistency and performance of LLMs annotation work can vary depending on the attributes of texts and categories <cit.>.
Thus, no consensus has been reached on whether current LLMs, such as ChatGPT, can immediately replace human annotators.
Additionally, few studies have applied LLMs to citation classification.
<cit.> argued that LLMs can contribute to citation context analysis by automating citation classification; however, the applicability of LLMs classification is currently unclear and should be addressed in the future.
<cit.> compared the performance of LLMs citation classification when performing parameter updating using multiple methods on the public datasets ACL-ARC <cit.> and ACT2 <cit.>.
The results show high performance when using some of the methods and that the zero-shot performance of GPT3.5 is high when targeting multiple fields (ACT2) but low when targeting a single field (ACL-ARC).
However, <cit.> did not compare human annotators to LLMs, nor did it focus on the applicability of LLM-generated data for citation context analysis.
To the best of our knowledge, it is not yet clear whether LLMs can replace humans in annotating paper, a special type of text that requires expertise in reading and understanding.
In other words, it remains to be seen whether LLMs can be used in applied citation context analysis research, in which researchers create and analyze data that categorize individual citations for their own research purposes.
§ METHODS
§.§ Task and Data
Many categories are used in citation context analysis, and annotations vary widely according to the categories used.
We let LLMs perform annotation on the two categories used in <cit.>: citation purpose and citation sentiment.
Descriptions are presented in Table 1.
This study focuses on the same tasks as those in <cit.> for the following three reasons.
First, <cit.> simply organizes categories and their classes (also called values) after reviewing previous studies on citation context analysis.
Second, the manual used for human annotators is publicly available and can be annotated using LLMs under the same conditions as humans.
Third, because the gold standard data used in the analysis are publicly available <cit.>, the predictive performance of the annotation results from the LLMs can be evaluated.
Regarding the last reason, in <cit.>, the data used in the analysis were prepared using the following procedure:
* Two annotators, a researcher and a graduate student employed as a research assistant, independently annotated all data according to the manual.
* After the initial annotation, the annotators explained why they had determined a value for a citation in which the results did not match[This phase is called "discussion" <cit.>.].
* Finally, data from the annotator with the lowest number of value modifications for each category were used in the analysis.
While <cit.> set six categories, this study focuses only on citation purpose and sentiment for the following reasons.
First, citation purpose and sentiment are the main categories addressed in many previous studies that have conducted citation context analyses <cit.>.
In addition, automating their annotations is difficult because it is necessary to understand the semantic content of the surrounding text in which the target-cited paper is mentioned to determine the class.
However, the classification of other categories that correspond to syntactic categories <cit.> can be automated relatively easily because they can be processed without understanding the meaning of the text.
Therefore, we believe that citation purpose and sentiment would particularly benefit from automation, and we address only these two categories in this study.
In this study, we allowed the LLMs to annotate the same texts that were the subject of annotation for citation purpose and sentiments in <cit.>.
<cit.> created 1,174 data by human annotation for citation purpose and citation sentiment, respectively.
However, we annotated 181 of them with LLMs for which the text of the citing papers could be obtained in the Journal Article Tag Suite (JATS) -XML format.
We have limited the annotation target to papers in JATS-XML format because they are expected to be almost free from errors and labor involved in text extraction.
Therefore, if LLMs annotation is successful, it can be easily applied on a large scale.
In addition, many of the papers that are the target of annotation in <cit.> are available only in PDF format.
In such cases, it is often difficult to accurately and automatically extract text and citation information from PDFs.
Therefore, when evaluating the annotation results, the performance of the LLMs and the extraction accuracy of the target data must be considered.
Manual text extraction from PDFs can avoid this problem but at a higher cost.
For these reasons, we limited the scope of this study to articles that could be collected in JATS-XML format.
§.§ Type of LLMs
The LLMs annotation was performed using the API provided by OpenAPI Inc.
The LLM model used in this study was .
The , which is the parameter for creativity or the randomness of replies, was set to 0.5
[According to the official explanation (<https://platform.openai.com/docs/api-reference/chat/create>, Last access:2023/May/06), it takes a value between 0 and 2, and defaults to 1. A value of about 0.2 always returns almost the same response, while a value of about 0.8 returns a random result.
Considering these factors, we set an intermediate value this time.].
§.§ Prompts
In LLMs, any task can be executed by providing instructions (prompts) in a natural language, and the task results can differ depending on the expression of the prompt.
For example, to have LLMs prepare a summary of a certain paper, there are multiple patterns of possible prompts, such as
“Please summarize the paper given below,”
“Please give a brief summary of the paper,”
and “Please summarize the paper in about 200 words,”
and the results may differ among them.
In addition to these differences in expression, using certain techniques such as few-shot <cit.> or chain-of-thought <cit.> can change the results[Furthermore, depending on the parameters of the LLMs and the nature of the data, the results may differ even for the same prompt].
Therefore, in this study, we set up multiple patterns of prompts based on the manual used by human annotators in <cit.> but with almost the same content.
Because specific prompts differ depending on the citation purpose and sentiment, the following describes the prompt patterns for each.
The prompts are included in Online Resource 1.
§.§.§ Prompt patterns for citation purposes
The manual for citation contains the following elements:
1. types of possible classes,
2. definitions for each class,
3. procedures for annotation,
and 4. Keywords and example sentences based on class determination <cit.>.
Correspondingly, in addition to the basic instructions, four patterns of prompts for citation purpose were established, including the following elements:
* Types of class only (Simple)
* Types of class and their definition (Basic)
* Types of class, their definitions, and procedures for annotation (Precise)
* Types of class, definitions, annotation procedures, and keywords and example sentences (Full)
The fourth pattern (Full), which includes all elements, is almost identical to the manual used in <cit.>.
However, the original manual included instructions that did not directly affect the annotation results, such as the handling of the files to be worked on.
This study excluded such instructions from the prompts.
In addition, although the original manual instructs annotators to consider “the title of the section in which the citation in question is being made,” in this experiment, we excluded that part of the annotation to reduce the time and effort required to extract the title.
Moreover, although the published versions of the manual and target papers were written in English, the original manual was written in Japanese.
Therefore, eight prompt patterns were established by writing the above four patterns in Japanese and English.
For example, the simplest pattern of prompts (Simple, EN) containing only the type of class is shown in fig:prompt.
§.§.§ Prompt patterns for citation sentiment
Although the elements included in the original manual for citation sentiment are the same as those for citation purpose, there are no instructions regarding procedures for annotation.
Therefore, three patterns of prompts are set in citation sentiment, except “Precise,” as follows:
* Types of class only (Simple)
* Types of class and their definition (Basic)
* Types of class, their definitions, and keywords and example sentences (Full)
Finally, six prompt patterns were set by writing the above in Japanese and English.
Patterns containing all the elements (Full) are generally identical to those in the original manual; however, as in the case of citation purpose, it excludes instructions that do not directly affect the annotation results contained in the original manual.
§.§ Evaluation Metrics
As mentioned in the Literature Review, several studies have evaluated the results of LLMs annotations in terms of consistency and predictive performance.
These two perspectives were also used in this study, where we use <cit.> as the gold standard.
In addition, multiple applicable metrics exist for both perspectives.
We used the simple agreement rate between annotators and Cohen’s kappa for consistency and the metrics shown in fig:math for performance.
§ EXPERIMENTS
§.§ Distribution of Data
tab:cite_pattern and tab:sdgs show the distributions of the targets annotated in this study.
In <cit.>, the analysis unit was a citation pair, which is a pair of citing papers and one of the papers cited by it, all of which were related to either renewable energy (SDG7) or climate change (SDG13).
Both the citing and cited papers were classified as Natural Sciences (NS) or Social Sciences and Humanities (SSH), and the following four patterns of relationships between disciplines were set: NS citing NS (NS-NS), NS citing SSH (NS-SSH), SSH citing SSH (SSH-SSH), and SSH citing NS (SSH-NS).
As mentioned in the Methods section, this study used some of these as targets for annotation.
tab:cite_pattern shows the distribution of citation pairs for each of the patterns used in this study, and tab:sdgs shows the distribution of the citing papers by research topic, that is SDG7 or 13.
tab:pp and tab:st present the gold standards for annotation used in this study.
In other words, they are a selection of annotation results for citation purpose and sentiments from <cit.>, and their citing papers are available in the JATS-XML format.
§.§ Consistency
First, we compared the annotation results of the human annotators in <cit.> and ChatGPT in this study in terms of consistency.
ChatGPT was given a prompt (Full, EN) that was nearly identical to those in the manuals used in <cit.> and was asked to annotate twice each of the citation purpose and citation sentiments for all 181 data.
tab:compare shows the simple agreement rate and Cohen’s kappa for each of the annotation results by ChatGPT and the results at the time the two annotators independently annotated in <cit.>, i.e., before the “discussion.”
It can be seen in tab:compare that ChatGPT is more consistent than humans with respect to both citation purpose and citation sentiment.
Next, we compared the consistency of the annotation results using ChatGPT with all the patterns of prompts described in the Methods section.
Using eight patterns for citation purpose and six for citation sentiment prompts, we had ChatGPT annotate all 181 data points twice each.
tab:pp_agree shows the number of cases in which the results at each prompt did not agree, the simple agreement rate, and Cohen’s kappa for citation purpose.
As shown in tab:pp_agree, for citation purpose, the highest consistency was found in the prompt (Precise, EN) that provided types of classes, their definitions, and annotation procedures in English, with eight cases (4.4%) differing between the first and second prompts and 95.6% remaining consistent.
The lowest consistency was for the simple prompt in English (Simple, EN), with 30 cases (33.1%) differing and a simple agreement rate of 66.9%.
Prompts other than this pattern exceeded the agreement rates of the human annotators.
Interestingly, the prompt, including all elements (full), which is similar to the original manual for humans, was less consistent.
This finding suggests that consistency does not necessarily increase with more detailed instruction.
tab:se_agree summarizes the consistency of the annotation results for each prompt for citation sentiment.
The table shows that the highest consistency was for the prompt that gave only the types of classes in Japanese (Simple, JP), with one case (0.6%) differing, and the lowest consistency was found in the prompt (Full, EN), with 16 cases (8.8%) differing.
However, all patterns outperform the agreement rate by human annotators, and the consistency of the prompt Simple is significantly higher than for citation purpose.
In addition, focusing only on the prompts written in English, the more detailed and specific the instructions, the less consistent they become.
§.§ Predictive Performance
First, we see the overall performance of annotation using ChatGPT.
As mentioned above, ChatGPT was asked to annotate each prompt pattern twice, but here, we have taken the results of the first annotation.
The accuracy of the results obtained by ChatGPT when given a prompt with the same content as the manual used in <cit.> was 61.3% for citation purpose and 64.6% for citation sentiment.
Next, we look at the performance of the annotation results from the prompts with the highest consistency for each citation purpose and citation sentiment.
For citation purpose, because the prompt that gave types of classes, their definitions, and annotation procedures in English (Precise, EN) were the most consistent, the relationship between the results of the first annotation with this pattern (Predict) and the gold standard (Actual) is summarized in tab:pp_main.
tab:pp_main shows, for example, that when the correct answer is “Background (BKG),” ChatGPT correctly predicted BKG, i.e., True Positive, in 107 instances.
However, a consistent number of errors are observed, including 16 cases where ChatGPT incorrectly predicted “Evidence (EVS)” instead of the correct BKG and 22 cases where ChatGPT predicted BKG, but the correct answer was EVS.
Moreover, none of the “Compare (CMP),” “Criticize (CRT),” and “Use” were correctly predicted, even though they were originally infrequent classes.
Similarly, tab:st_main summarizes the relationship between the results of the first annotation and the prompt (Simple, EN), which provided only types of classes in English and was most consistent in the case of citation sentiment and the gold standard.
The table shows, for example, that there are 120 cases where the correct answer is “Neutral (NT)” and it is correctly predicted as NT.
However, there are some discrepancies, such as 24 cases where the correct answer was “Positive (PG),” even though it was predicted as NT.
Thus far, we have examined the results of the first annotation based on a single prompt.
However, in the case of human annotation, it is common to create a single dataset for analysis, or in other words, a gold standard, by narrowing down the annotation results from multiple annotators through some means.
Based on this, we had ChatGPT annotate using prompts of all types explained in the Methods section; then, we created a single dataset by integrating the results and compared the dataset with the gold standard.
A majority vote was employed as the method of integration.
In tab:pp_multi, for citation purpose, the relationship between the ChatGPT dataset and the gold standard is summarized.
Although there was a slight increase in the number of correct answers for the EVS in tab:pp_multi, the results were not significantly different from those shown in tab:pp_main.
Similarly, for citation sentiment, tab:st_multi shows the relationship between the results of ChatGPT’s annotation, merged into one by majority vote, and the gold standard.
tab:st_multi shows that the performance for the prompt (Simple, EN) in tab:st_main and the performance when integrating the annotation results from all prompts are nearly comparable.
§.§ Discussion of Experimental Results
The results of the experiments indicate that while ChatGPT outperforms human annotators in terms of consistency, it does not produce high-quality data in terms of predictive performance.
Based on the results presented in tab:pp_main to <ref>, it can be said that ChatGPT does not predict the correct answer well, even though the gold standard created in <cit.> was originally highly skewed from class to class.
For example, of the 148 cases in tab:pp_main predicted to be BKG, the number of cases that were actually BKG was 107 (72.3%).
Considering that the proportion of BKG in the correct data was 70.7% as shown in tab:pp_main, the improvement rate is 1.6% compared to the hypothetical case where ChatGPT always predicts any class as a BKG.
In addition, it should be noted that there were a certain number of cases that were predicted as BKG but were not actually BKG, that is, false negatives, and those that were predicted as a class other than BKG but were actually BKG, that is false negatives.
As for citation sentiment, tab:st_main shows that all those predicted as PG are actually PG, which is a good prediction in terms of the small number of false negatives.
However, this number is only 4 out of 28 total actual PG, which is not large.
The same was true for NG.
For NT, which accounts for the majority of cases, 120 of the 173 cases predicted as NT are actually NT, but the improvement rate is 1.5% compared with the hypothetical case in which ChatGPT always predicts the class as NT.
It should also be noted that approximately 30% of the cases were falsely predicted to be PG or NG.
Even if we consider the predicted results as PG or NG, this percentage is only approximately 4.4% of the total.
In other cases, the annotation results are unreliable; therefore, human review is inevitable.
Conversely, the results of the experiments in this study, which showed poor performance while maintaining a certain level of consistency, may indicate differences in how ChatGPT and human annotators “interpret” texts.
Thus, we considered how ChatGPT interprets texts by examining the texts that were actually the target of the annotation in cases where ChatGPT failed to predict the classes correctly or where the results were inconsistent across multiple annotations.
The results of the examination suggest that there is no explicit expression of the relationship between the target cited paper and its surrounding sentences in the texts for which ChatGPT makes erroneous or inconsistent annotations.
For example, ChatGPT predicted the class as “Criticism (CRT)” for the following text <cit.>[It is the citaion pair (NS-SSH) in SDG13.], but the correct class was "Background (BKG)".
(...)
This use of technical devices in an attempt to suppress political debates has been widely documented elsewhere (e.g. Latour, 2004; Lupton and Mather, 1997).
At an extreme this use of GIS and mapping reinforces and aggravates existing divides and inequalities.
In this text, the target-cited paper was <cit.>.
Although the text includes words that would provide a basis for predicting the class as CRT (“widespread” and “reinforces and exacerbates existing divisions and inequalities”), it is “this use of technical devices/GIS and mapping,” not <cit.>, that is the target of “criticism” in this text.
Because <cit.> seems to have been cited to provide background information on the research topic of the citing paper, the correct class here is the BKG.
This would have been relatively easy for human annotators to determine, but it would have been difficult for ChatGPT to do so because the relationship between the cited and citing papers was not explicitly stated as words.
In other words, it is suggested that ChatGPT interprets text using only explicit words and does not consider implied contexts.
Note that this pattern is often seen in texts where ChatGPT fails to correctly predict the class, but there are exceptions in which this pattern does not successfully explain the reason for the interpretation.
From the above, it can be said that the annotation results of ChatGPT are inadequate in terms of performance, and it is problematic to use the dataset created by ChatGPT for analysis.
In other words, the experimental results of this study clarify that it is difficult to use the current ChatGPT as a substitute for human annotators in citation context analyses.
§ CONSIDERATION OF LLM USE CASE IN CITATION CONTEXT ANALYSIS
§.§ Support for Human Annotators
Thus far, we have evaluated the LLM from the viewpoint of consistency and predictive performance when instructions are given based on a manual for humans.
Consequently, the annotation results of the LLM are not likely to be as good as those of humans in terms of performance, and it was suggested that human annotation is necessary for unknown data.
However, it may be possible to use LLMs not as a complete replacement for human annotation, but as a support for human annotators.
If we can believe that “what LLMs predict as PG is actually PG,” as in the case of PG in tab:pp_main—i.e., if the prediction performance for at least some classes is sufficiently high—we can let LLMs annotate them on behalf of human annotators.
Moreover, as with human annotations, it is also possible that some of those predicted to be in the same class include those predicted with confidence, whereas others do not.
In this case, it may be possible to reduce the cost of annotation by having low-confidence prediction results annotated again by a human and adopting LLMs’ annotation results of the LLMs for high-confidence prediction results.
In general, the number of possible classes affects the text classification performance, and the annotation performance by LLMs depends on the type of class predicted <cit.>.
Therefore, the performance of LLMs annotation can be improved by changing the number of classes and their characteristics.
In light of the above, we reviewed prompts for using LLMs to support human annotators and examined the possibility of their use.
§.§.§ Citation Purpose
“Background (BKG)” accounts for about 70% of the gold standard, followed by “Evidence” at about 16%.
We thus reconfigured the annotation for citation purpose as a three-class classification task, adding “Other” to these two classes.
tab:pp_c_01 presents the annotation results for ChatGPT[The prompts used for the following tasks in this section are shown in Online Resource 2.].
As shown in the table, none of the cases were classified as “Other,” and the predicted results are either “Background” or “Evidence.”
Compared to tab:pp_main and tab:pp_multi, the number of cases predicted as “Evidence” increased, and the performance of annotation generally worsened.
In addition, the consistency (simple agreement rate) of the two annotations is 91.7%.
One possible reason for this change in annotation trends is that ChatGPT was influenced by the literal meaning of the name of the newly created class “Other” and avoided annotating that broad and ambiguous class.
We thus replaced “Other” with “General” or “Pending,” respectively, and let ChatGPT annotate again.
As shown in tab:pp_c_02 and <ref>, although some cases belonging to the third class can be observed, the overall trend is the same as that shown in tab:pp_c_01.
In addition, the consistency (simple agreement rate) of the results of the two annotations was 86.2% and 91.7%.
The above experiments were conducted hoping that reducing the number of classes would improve the annotation performance; however, the results showed that the performance degraded.
We also changed the third-class names to account for the possibility that the name might affect annotation; however, performance did not improve.
However, if the literal meaning of the class name affects the annotation, it is possible that the existing classes “Background” and “Evidence” also affected the annotation trend of ChatGPT, apart from their operative definition.
Therefore, we let ChatGPT annotate again by replacing “Background” and “Evidence” with just “BKG” and “EVS,” which are sequences of symbols without meaning as words.
Moreover, the name of the third class was changed to UKN.
The annotation results for these changes are listed in tab:pp_c_04.
The overall trend was the same as that shown in tab:pp_c_01.
In addition, the consistency (simple agreement rate) of the results for the two annotations is 86.2%.
The experiments thus far have shown that reducing the number of classes or changing class names does not improve the annotation performance.
Therefore, we attempted a different strategy, having GPT perform a binary classification for each class.
Specifically, we let ChatGPT predict whether it was a BKG (PB or NB) or an EVS (PE or NE) and examined the relationship between their combination and the gold standard.
Although we replaced the original annotation with a binary classification, there was no relationship, as shown in tab:pp_c_05.
In addition, the consistency (simple agreement rate) of the results of the two annotations was 90.6% and 91.7%.
Finally, we reorganized the annotation into the simplest binary classification: BKG (BKG or UKN).
tab:pp_c_06 shows that of the 43 cases predicted as BKG, 38 (88.4%) were BKG, which is highly accurate.
Although the number of cases in which ChatGPT answered correctly was small, these data could potentially be used in the analysis.
In this case, the consistency (simple agreement rate) between the two annotations is 91.2%.
§.§.§ Citation Sentiment
Unlike citation purpose, citation sentiment was originally a three-class classification task, and it was difficult to reduce the number of classes any further.
We thus attempted a different approach: adding a class.
As mentioned previously, some of the predicted results from the LLMs may contain different confidence levels.
If so, the annotation performance could be improved by distinguishing between those predicted with high and low confidence.
Therefore, we first added a class named “Pending (PD)” and let those with low confidence be classified there.
As shown in tab:st_c_02, some are classified as “Pending,” but the number is small, 30.
Furthermore, what is actually a “Positive” may be classified as the opposite, “Negative,” which means that the performance is deteriorating.
Next, we had both those that were actually neutral and those predicted with low confidence be classified in the current “Neutral,” and we have renamed only this class to UKN, a name without meaning.
As shown in tab:st_c_03, while the number of those predicted as “Positive” and “Negative” has increased, the number of errors was too large to use the results for analysis.
§.§ Other Use Cases
The results of the experiments thus far indicate that it is difficult to use LLMs to perform annotations partially on behalf of humans.
However, they also suggest that there is room for LLMs to be utilized in citation context analysis.
<cit.> categorizes the use cases of LLMs in general annotation work as follows:
* Confirming the quality of human-labeled data
* Identifying cases to prioritize for human review
* Producing labeled data to finetune and validate a supervised classifier
* Classifying the entire corpus directly
This paper shows that cases other than Case 1 are difficult to apply to citation context analysis.
For Case 4, as we have seen in the Experiments section, poor predictive performance makes it problematic to use the LLM-generated data for the analysis.
For the same reason, the use of LLMs in Case 3 should be avoided.
In addition, Case 2 is a use case related to the partial substitution of annotation work, but as discussed thus far in this section, there are also concerns about the use of LLMs for this purpose.
However, Case 1 seems open for consideration.
This use case implies examining the quality of human annotation results by comparing the annotation results of humans and the LLM.
Putting this into the context of citation context analysis, a possible use of LLMs is to use the results of the LLMs annotation as reference information when narrowing down the data produced by multiple human annotators to a single set of data to be used in the analysis.
As shown in the Experiments section, although the performance of the annotation results by the LLM was low, the consistency was more stable than that of the human annotators.
The LLM can also output reasons for its decisions.
Because of these characteristics, the LLM can be viewed as an annotator with criteria and tendencies different from those of human annotators.
In other words, the annotation results of LLMs and the reasons for their decisions can be utilized as reference information in the process of the aforementioned “discussion”<cit.>, which narrows down multiple datasets to a single one.
In “discussion,” human annotators try to maintain the objective correctness of the data by explaining the reasons for their decisions of annotations to each other, taking care not to persuade the other, and voluntarily revising their own work results if necessary.
At this time, using LLMs data as reference information from a third-party standpoint may reduce the possibility of a particular human annotator’s subjectivity having a significant impact on the “discussion.”
Additionally, hiring human annotators is generally expensive in terms of time and money, which sometimes forces the use of data from a single annotator for analysis <cit.>.
Particularly in the case of citation context analysis, annotators must have more advanced skills because the text to be annotated is a special one, a scientific paper.
This makes it more challenging to secure a sufficient number of annotators compared with general annotations.
In these situations, one option would be to introduce LLMs as one of the annotators to avoid using the data generated by a single annotator.
§ CONCLUSION
This study aimed to explore the applicability of LLMs to citation context analysis.
The results revealed that ChatGPT, at least in its current version, cannot annotate with sufficiently high performance to replace human annotators for the major categories in citation context analysis: citation purpose and sentiment.
It was also found to be difficult to have ChatGPT partially annotated on behalf of humans, such as by using CatGPT annotation results for specific classes and having a human annotate the rest.
However, because the ChatGPT annotation results have a certain consistency, it may be possible to view LLMs as annotators who interpret differently than humans.
This suggests the following two possible use cases of LLMs in citation context analysis.
First, the annotation results obtained by LLMs can be used as reference information when narrowing the annotation results obtained by multiple human annotators to one.
Second, it is possible to use LLMs as the Nth annotators when securing the number of human annotators is difficult.
Future researchers attempting to utilize LLMs for citation context analysis can refer to LLMs' limitations and use cases clarified in this study.
In contrast to previous studies that verified the performance of LLMs in general annotation tasks and proposed use cases, the findings of this study are novel in that they examined their applicability and use cases in the specific task of citation context analysis.
However, this study had several limitations.
In the present study, we focused on , which exhibited the best performance at the time of the experiment.
However, , which is believed to perform better than , is now available to the public.
We also tried an experiment using this new model and found that it is worse than in terms of predictive performance, as shown in Online Resource 3; however, further new models may emerge in the future that will allow LLMs to perform in a way that overturns the conclusions of this study.
Therefore, the findings of this study represent a snapshot of the potential applications of LLMs and should be analyzed continuously following future technological trends.
§ ACKNOWLEDGEMENTS
This preprint has not undergone peer review (when applicable) or any post-submission improvements or corrections. The Version of Record of this article is published in Scientometrics, and is available online at https://doi.org/10.1007/s11192-024-05142-9
§ DECLARATIONS
§.§ Competing Interests
The authors have no competing interests to declare that are relevant to the content of this article.
§.§ Funding
No funding was received for conducting this study.
|
http://arxiv.org/abs/2409.02307v1 | 20240903213459 | NEW-MUSIC: The Next-generation Extended-Wavelength Multiband Sub/millimeter Inductance Camera | [
"Sunil R. Golwala",
"Andrew D. Beyer",
"Daniel Cunnane",
"Peter K. Day",
"Fabien Defrance",
"Clifford F. Frez",
"Xiaolan Huang",
"Junhan Kim",
"Jean-Marc Martin",
"Jack Sayers",
"Shibo Shu",
"Shiling Yu"
] | astro-ph.IM | [
"astro-ph.IM"
] |
Black holes of type D revisited:
relating their various metric forms
Marco Astorino
September 9, 2024
======================================================================
§ ABSTRACT
The Next-generation Extended Wavelength-MUltiband Sub/millimeter Inductance Camera (NEW-MUSIC) on the Leighton Chajnantor Telescope (LCT) will be a first-of-its-kind, six-band, transmillimeter-wave (“trans-mm”) polarimeter covering 2.4 octaves of spectral bandwidth to open a new window on the trans-mm time-domain frontier, in particular new frontiers in energy, density, time, and magnetic field. NEW-MUSIC's broad spectral coverage will also enable the use of the Sunyaev-Zeldovich effects to study accretion, feedback, and dust content in the hot gaseous haloes of galaxies and galaxy clusters. Six-band spectral energy distributions, with polarization information, will yield new insights into stellar and planetary nurseries. NEW-MUSIC will employ hierarchical, phased arrays of polarization-sensitive superconducting slot-dipole antennas, coupled to photolithographic bandpass filters, to nearly optimally populate LCT's 14field-of-view with six spectral bands over 80–420 GHz (1:5.25 spectral dynamic range; 2.4 octaves). Light will be routed to Al or AlMn microstripline-coupled, parallel-plate capacitor, lumped-element kinetic inductance detectors (MS-PPC-LEKIDs), an entirely new KID architecture that substantially enhances design flexibility while providing background-limited performance. Innovative, wide-bandwidth, etched silicon structures will be used to antireflection-treat the back-illuminated focal plane. NEW-MUSIC will cost-effectively reuse much of the MUSIC instrument, initially deploying a quarter-scale focal plane capable of the bulk of NEW-MUSIC science followed later by a full-FoV focal plane needed for NEW-MUSIC wide-area survey science.
§ INTRODUCTION
The time-domain sky at submillimeter and millimeter wavelengths is only just now beginning to be explored thanks to advances in observing facilities at these wavelengths. Sources include various explosions associated with stellar death, outbursts from and accretion onto the remnants, young stars growing their mass, flaring stars, and active galactic nuclei. This spectral range traces some of the most energetic phenomena, penetrates deep into some of the densest environments, accesses the earliest times and shortest-timescale variability, and probes the highest magnetic field environments. Critical to understanding these sources are multi-band spectral energy distribution (SED) data over a large spectral range. For synchrotron sources, the SED spectral slopes and breaks help to constrain the energies of the emitting electrons and thus the shocks that accelerate them, and the SED time evolution constrains dynamics of the explosion and the outflow. For local dusty sources, the spectral slope and curvature constrain the temperature and grain emissivity, and temporal information tells us about the episodic nature of evolution.
Accretion and feedback play central roles in the evolution of galaxies and galaxy clusters, impacting the hot circumgalactic medium (CGM) and intracluster medium (ICM). Via Sunyaev-Zeldovich effect observations yielding total thermal content, pressure, and density, it is possible to probe deviations from equilibrium due to accretion and feedback processes: non-thermal pressure and bulk motions in galaxy clusters, and deviations of the CGM from self-similar scaling. The complex nature of the spectral signature of the SZ effects, combined with the presence of contaminating foreground and background sources, necessitates multi-band data through the trans-mm (0.7–3.8 mm) regime.
Six-band spectral energy distributions, with polarization information, will yield new insights into stellar and planetary nurseries.
The very broadband trans-mm SED information needed for these various applications will be provided by the Next generation Extended Wavelength MUltiband Sub/millimeter Inductance Camera (NEW-MUSIC) on the Leighton Chajnantor Telescope. NEW-MUSIC will provide 2.4 octaves of spectropolarimetric coverage in six spectral bands from 80 to 420 GHz (0.7–3.8 mm). It will be deployed on the Leighton Chajnantor Telescope (LCT), the re-siting of the 11 rms, 10.4 m Leighton Telescope of the Caltech Submillimeter Observatory to Cerro Toco in the Atacama Desert in Chile.
A number of key elements enable this broad spectral coverage with fundamental-noise-limited performance. Hierarchical phased arrays of slot-dipole antennas using low-loss, hydrogenated amorphous silicon (a-Si:H) dielectric make it possible to couple incoming light to detectors across the 2.4-octave bandwidth while also matching pixel size to the diffraction spot size so the detector count and sensitivity requirements are not unnecessarily demanding. The antennas are also inherently polarization-selective. We couple light into the antennas through the silicon substrate using metamaterial, silicon, antireflective structures. We sense the light from the antennas with Al or AlMn microstrip-coupled, parallel-plate capacitor, lumped-element KIDs (MS-PPC-LEKIDs), an innovative new KID design that combines the low two-level-system noise of a-Si:H with a flexible KID design that is also inherentely shielded against direct absorption. These KIDs provide sensitivity limited only by photon statistics and generation-recombination noise (the sum being what we term “fundamental noise”). This revolutionary focal plane technology will be integrated into the existing MUSIC cryostat and relay optics and make use of existing KID readout systems to enable quick deployment.
§ SCIENTIFIC MOTIVATION
§.§ The Trans-mm Time-Domain Frontier
NEW-MUSIC on LCT will have transformative impact via simultaneous observations in six spectral bands from 80 to 420 GHz, covering the critical spectral range where transient and time-domain synchrotron emission shows peaks and spectral breaks and where dust thermal emission is accessible in most weather conditions at an excellent site. NEW-MUSIC/LCT will build on enormous investments in O/IR time-domain surveys and the growing transient alert capacity of mm-wave CMB surveys.
§.§.§ Explosive Stellar Death
*The Nearby Universe
Death omens
Core-collapse supernovae (CCSNe) explode in a circumstellar medium (CSM) sculpted by mass loss during the star’s life. There is increasing evidence that many, or perhaps most, massive stars undergo intense eruptive mass loss within days to years of core collapse, producing a dense CSM. However,
emission from supernova-driven
shocks in dense CSM (“Interacting SNe”) is absorbed at early times (when the explosion is still close to the source; days to weeks) except in the trans-mm. NEW-MUSIC/LCT's simultaneous six-band observations with mJy sensitivity (Table <ref>) will track the explosion size and the structure of the surrounding medium (CSM density and magnetic pressure) via the time evolution of the synchrotron self-absorption (or free-free absorption) peak. A program to study nearby (≲100 Mpc) events would comprehensively assess pre-supernova mass-loss rates across the zoo of CCSNe. Insight into the CSM structure may also shed light on the nature of the progenitors.
Relativistic outflows into the CSM? Fast blue optical transients (FBOTs) are a recently discovered class of transients, remarkable because of their blue colors and short (≈ 10 day) durations. Many models have been put forth (see, e.g., <cit.> for a review); one is that they are a particularly rare (0.1%) class of interacting SNe, interesting because the newly formed compact object launches a mildly relativistic outflow into the CSM that enhances the luminosity by 100 times.
Seven have been discovered to-date, informing the rate estimate. NEW-MUSIC/LCT should detect of order ten FBOTs per year (Table <ref>) to substantially enhance the sample with trans-mm SEDs well-sampled in time, which will reveal the dynamics of the outflow and the explosion environment and directly probe the particle acceleration process in relativistic shocks.
Low-luminosity gamma-ray bursts LLGRBs are thought to be cases of the fairly common broad-line SNe Ic that emit gamma-rays like gamma-ray bursts (discussed below) but with lower luminosity, perhaps due to smothering by a dense CSM — effectively, another case of CCSNe exploding into a dense CSM, but now adding gamma-rays. NEW-MUSIC/LCT should detect a handful a year, adding to our understanding of the LLGRB emission mechanism and CSM.
*The Distant Universe
Gamma-ray bursts Long-duration gamma-ray bursts (LGRBs) are thought to arise from relativistic jets launched in the collapse of massive stars. Thousands of GRBs have been discovered, but only a handful have the trans-mm data that provide unique information about the jet. In particular, the reverse shock (RS) dominates the trans-mm at early times (≲1 day <cit.>) and provides information about the baryon content and magnetization of the ejecta, while the forward shock (FS) dominates from 𝒪(1) day onward and provides information about the CSM, including its magnetic field geometry via polarization <cit.>. Figure <ref> shows a typical spectrum at 3.9 days and
Table <ref> shows luminosities and rates for different viewing angles. CSM information may be critical to testing whether LGRBs are a special case of
LLGRBs, one in which the CSM is tenuous and the full gamma-ray emission can escape <cit.>.
Jets from shredded stars Relativistic tidal disruption events (TDEs) are a rare case of TDEs in which a synchrotron-emitting jet is launched <cit.> when the supermassive black hole shreds a passing star. Only 1–10% of all TDEs are relativistic, so only four have been observed, with Swift J1644+57 being an exemplar (Figure <ref>). NEW-MUSIC/LCT could detect a relativistic TDE once every year or two (Table <ref>), yielding trans-mm SEDs well-sampled in time that, like for FBOTs, will reveal the environment and outflow dynamics and directly probe relativistic shock particle acceleration. Non-relativistic TDEs are 100 times less luminous <cit.>, so their 10–100× higher rate does not compensate the smaller detection volume.
Black Hole Accretion Black holes accreting from companion stars power transient jets when in outburst. Trans-mm monitoring near the spectral power-law break probes jet power variability that is more difficult to observe at lower frequencies due to synchrotron self-absorption <cit.>. As these sources evolve over the weeks following the outbursts, mm flares are still observed, likely tracing discrete jet ejections.
For the handful of such systems studied, this field has been plagued by sparse data relative to the minimum variability timescales of ≪ hours, with recent dedicated observations revealing just how significant the activity is on short timescales and at high frequencies (see Figure <ref>). NEW-MUSIC/LCT minute-by-minute measurements, with ≈10 mJy rms at 345 GHz, would be sufficient to track the variability detected by ALMA, to do so in many more sources, and to provide substantial SED information to characterize the time evolution of the spectral break.
Novae These thermonuclear explosions on accreting white dwarfs yield shocks that accelerate particles to radiate in the radio, X-rays, and gamma-rays. Internal shocks determine the morphology of nova ejecta and eventually lead to dust and molecule formation in the interstellar medium. Nova synchrotron emission should be brighter in the trans-mm early on, but data are lacking. V1324 Sco, one of the most gamma-ray-luminous novae, showed mJy fluxes at 33 GHz with a spectral index F_ν∼ν^2 at tens of days <cit.>. NEW-MUSIC/LCT would have been able to provide a full SED with mJy noise in 3 minutes, and these SEDs and their time evolution would be much more diagnostic of the electron spectrum and the gamma-ray source than radio data <cit.>.
§.§.§ Outbursts and Pulses from Stellar Remnants
Magnetars Magnetars, incredibly magnetized neutron stars with 10^13-15 G fields, can have pulsed trans-mm emission synchronized to their radio pulsar activity, but the trans-mm emission mechanism is not known. It is even conceivable the pulsed flux, and the SED shape and position of the break, can vary with time <cit.>. Of the thirty known magnetars, six are radio-loud, but only two have been detected in the trans-mm: XTE J1810-197 (Figure <ref>), which exhibits a spectral break in the trans-mm, and the galactic center magnetar SGR J1745-2900, 2–290 GHz with a ν^0.4 spectrum. NEW/MUSIC can both monitor the spectral shape of these known sources and search for trans-mm emission from others, even those with past non-detections given potential variability.
§.§.§ Mass Buildup in Infant Stars
Young stellar objects (YSOs) are generally underluminous compared to expected mass accretion rates. The wide range of YSO variability (e.g., Figure 3 of <cit.>), with some outbursts showing 100× increase in luminosity lasting for decades, support the idea of episodic accretion. Trans-mm variability can be cleanly used to measure the change in T_dust driven by stellar luminosity variations <cit.>. The JCMT Transient Survey <cit.> has monitored eight nearby star-forming regions monthly since 2015/2016, observing 10–30% variations in ≳ 20% of sources at a rms sensitivity per half-hour monthly observation of 14 mJy/beam at 345 GHz. NEW-MUSIC/LCT will complement and expand on this work by accessing the southern sky and achieving a factor of 2 better sensitivity (see Table <ref> caption). Six-band SEDs will enable a search for time-variable free-free emission.
§.§.§ Active Stars and Exoplanet Habitability
The number of millimeter flares from across the stellar landscape is growing, mainly via serendipitous searches in wide-area CMB surveys <cit.>. The flares range over six orders of magnitude in luminosity, entirely unexpected from prior radio data. Several flares are in binary systems (including YSOs) and may be triggered by reconnection events in the interacting magnetospheres. Only one flare has spectral information above 300 GHz. The short flare timescales of several minutes to several hours, and extremely low duty cycles, motivate rapid (≲1 hr) follow-up with NEW-MUSIC/LCT. The flares seen to date, with flux changes of tens to hundreds of mJy at 90–220 GHz, are eminently detectable with NEW-MUSIC/LCT's 1–10 mJy rms across this specral band in 3 minutes (Table <ref> caption), and NEW-MUSIC/LCT will add three more bands of data, especially in the poorly sampled region above 300 GHz. Multi-band lightcurves will reveal the flare energetics, critical for understanding the emission mechanism. Cyclotron emission is likely and will be circularly polarized. Such flares have deep implications for exoplanet habitability: life on Earth relies on the relatively modest solar flaring activity and protection by Earth's magnetic field.
§.§.§ Active Galactic Nuclei: A View Deep into the Jet
Most radio galaxies have compact cores with flat spectra (∼ν^0).
The flux at a given frequency is dominated by a radius r ∝ν^-1 for a conical jet. In low-luminosity sources such as M87, the innermost parts of the jet (at about 5–10 Schwarzschild radii) emit at ∼200 GHz, while for higher luminosity sources, and more strongly relativistically beamed ones, the innermost parts emit at 400 GHz or above.
As shocks propagate down the jets and compress the magnetic field, there are flux outbursts and swings of polarization correlated with inverse-Compton-scattered high-energy gamma-ray emission.
Trans-mm monitoring of ∼100 “interesting” sources on a few-day cadence would complement the long-running 15 GHz monitoring program on the Caltech OVRO 40 m telescope <cit.>, more completely characterizing variability over a range of length scales deep into the jet. Especially exciting would be correlations with gamma-rays (CTA) and neutrinos (IceCube, Baikal, KM3NeT). Many of these sources are bright (> 30 mJy, so SNR > 30 in three minutes (Table <ref> caption)) and used as pointing calibrators, ensuring a large database of observations.
§.§ Using Hot Gas Haloes to Study Accretion and Feedback in Galaxy and Galaxy Cluster Evolution
Astro2020 <cit.> highlighted the emerging study of accretion and feedback via the hot, ionized CGM's thermal energy density (= pressure) and electron density distributions as measured by, respectively, the thermal and kinetic Sunyaev-Zeldovich (tSZ, kSZ) effects[CMB distortions due to scattering with free electrons <cit.>]. kSZ can also reveal coherent flows of ionized gas (non-thermal gas motions).
§.§.§ The Dynamics of Accretion from the IGM onto the ICM in Galaxy Clusters
Galaxy clusters serve as high-mass, high-SNR galaxy analogues for studying accretion from the intergalactic medium (IGM). The accretion shock heats infalling gas to near-virial temperatures, but significant support against gravity is also provided by residual coherent motions of order 500 km/s <cit.>.
Such support should coincide with a tSZ thermal pressure deficit relative to hydrostatic equilibrium <cit.>, for which there is modest evidence <cit.> (Figure <ref>).
kSZ measurements of such motions have been sensitivity-limited to extreme mergers (relative velocity ∼3000 km/s <cit.>; Figure <ref>), though the recently fielded TolTEC/LMT <cit.> will have 5–10× better sensitivity and 5× finer angular resolution.
With six bands and better sensitivity than prior instruments on CSO, NEW-MUSIC/LCT will be able to spectrally separate tSZ, kSZ, and dust/synchrotron contaminants in a large program on 20 well-studied galaxy clusters. The ≳20× improvement in tSZ SNR will enable %-level constraints on non-thermal pressure, yielding a high-significance detection and distinguishing among simulations (Figure <ref>). The 10× kSZ uncertainty improvement to 100 km/s will enable mapping of bulk motion in typical clusters, complementing TolTEC/LMT in field size. Future X-ray spectroscopic imaging (XRISM, , X-ray probe mission) of such motions will also be complementary, mainly probing core regions at lower z. The high-frequency bands will constrain the ICM dust content
and deliver SZ constraints on the mass-weighted temperature via relativistic corrections.
§.§.§ Feedback and the Relation between Galaxies and their Circumgalactic Medium
Simulations show that, on galaxy scales, deviations from self-similar, gravity-only predictions for the radial tSZ profile of stacks of galaxies reflect feedback mechanisms including supernova winds <cit.>, AGN-driven outflows <cit.>, and cosmic-ray pressure <cit.>. Both deficits and excesses have been observed and there are varying degrees of consistency with simulations that do or do not incorporate AGN feedback <cit.>. Current work focuses only on quiescent, high-mass galaxies rather than comparing star-forming and quiescent samples, and thus conclusions rely on comparison to imperfect, incompletely calibrated simulations. Empirical comparison of different samples may be more conclusive, but deeper maps are needed. Dust contamination is also a serious problem <cit.>.
Relative to prior work with , SPT, and ACT, NEW-MUSIC/LCT's angular resolution, broader spectral coverage, and narrower, deeper survey will reach to 3–10× lower mass. Its high-frequency bands will better constrain dust than the 220 GHz data used to date. These improvements will enable differential measurements between galaxy stacks with different star-formation rates and potentially a detection of a deviation of the tSZ-mass relation from self-similarity.
§.§ Dust in Stellar and Planetary Nurseries
NEW-MUSIC/LCT will provide six-band SEDs and polarimetry of dust thermal emission in star-forming regions, protostellar cores, and protoplanetary disks, with many potential applications. Magnetic field orientation measurements on scales between ALMA (sub-arcminute) and (degree) can help quantify the role of magnetic fields in regulating star formation. The frequency dependence of dust polarization on small scales can reveal whether different dust temperatures and populations are needed <cit.>, impacting shielding in many environments.
SEDs of protoplanetary disks can yield sizes and environments of large dust grains, testing if protoplanetary dust forms in situ and how grain size affects shielding and clumping, impacting the speed of protoplanetary disk evolution.
§ INSTRUMENT PARAMETERS
Table <ref> summarizes the spectral bands we are targeting for NEW-MUSIC, motivated by a combination of the science goals outlined above, atmospheric transmission windows, and appropriateness of the technologies being developed.
The prime driver for this specific frequency range is science using the Sunyaev-Zeldovich effects, as this spectral range overlaps the atmospheric windows where the effect is bright and spectrally distinguishable from contaminating sources such as the primary anisotropy of the cosmic microwave background, radio galaxies, and, most importantly, dusty, star-forming galaxies. As outlined in the prior section, this frequency range is also an excellent match to unfulfilled needs for the study of time-domain sources in this spectral band. In particular, the large proposed spectral range (1:5.25) will provide a large lever arm for: measuring spectral indices and looking for spectral breaks in non-thermal sources that constrain their engines, emission mechanisms, and explosion and outflow dynamics; for constraining T/(1+z) for extragalactic dusty sources; and, for separating synchrotron, free-free, and thermal dust emission for local sources.
The specific choice of bands is driven by the available atmospheric windows (see Figure <ref>), with the additional requirement that we split the very wide 190–310 GHz window into two bands to obtain spectral information across that window. Practical considerations limit the spectral bands at high and low frequencies. Above 420 GHz, there are no atmospheric windows with useful fractional bandwidths until the 650 and 850 GHz windows, which approach the Nb gap and thus require a fundamentally different optical coupling technology. The rarity of good observing conditions at these higher frequencies also argue for different instrumentation. At low frequencies, there is again a large gap down to the next atmospheric window below 45 GHz. The combination of degraded angular resolution and the need for a much lower T_c material render other technical approaches more appropriate.
We note that we evaluate the atmospheric optical load at the approximate 25th percentile for the site, 0.55 mm PWV, because we expect that, under better conditions, LCT will be in use for 650 and 850 GHz observations. (Comprehensive weather statistics are actually not available for Cerro Toco. The one existing study of Cerro Toco <cit.> suggests that the PWV there is, on average, 90% of that on the plateau. Thus, we use the percentile for 0.6 mm PWV on the plateau <cit.>.) We use the best planned observing conditions for our calculations to obtain the most stringent requirements on instrument sensitivity so that, under all conditions, the instrument is background limited.
Table <ref> also summarizes the focal plane parameters: pixel size in mm and (f/#)λ, beam FWHM, and number of pixels. The f/# of the optics is chosen so that the pixel size is between 1(f/#)λ and 2(f/#)λ at all frequencies. The band centers work out such that the pixel size is closer to 1(f/#)λ at high frequency, where angular resolution is important for dusty, star-forming galaxies, while the pixel size is more conservative (closer to 2(f/#)λ) at low frequencies where sidelobes and stray light are more of a concern.
The total number of pixels in each band is chosen to approximately fill the LCT 14field of view. (Gaps between pixels for KIDs and the readout feedline make the focal plane larger than ℓ_pix√(N_pix).) Initial deployment will use a quarter-scale focal plane, and the final instrument will consist of four copies of the quarter-scale focal plane.
§ TECHNICAL APPROACH
§.§ Focal Plane Architecture – Design
§.§.§ Polarized, Hierarchical Antennas using Low-Loss Hydrogenated Amorphous Silicon (a-Si:H)
Phased Arrays of Slot-Dipole Antennas
Light is received at the focal plane by a superconducting phased-array antenna <cit.>, back-illuminated through the silicon substrate, as shown in Figure <ref>. The fundamental element is a 1.664 mm long, 18 wide slot in a niobium ground plane. An incoming EM wave polarized normal to the slot excites a voltage across it, which excites waves on capacitively shunted microstripline (“microstrip”) <cit.> feeds crossing the slot. The feeds have a 54 impedance given the microstripline geometry. The capacitors are 37 × 10 and have 40 reactance at 100 GHz. The 1 wide microstrip comprises the ground plane (190 nm thick) and a Nb wiring layer (160 nm thick) sandwiching a 1070 nm thick hydrogenated amorphous silicon (a-Si:H) dielectric layer. The ground plane prevents direct excitation of the microstrip by incoming light, both by geometrical blocking and by imposing a zero electric field boundary condition ≪λ away from the top conductor. A binary summing tree combines the trans-mm wave from 16 feeds along a slot and from 16 such slots with equal path lengths. The feeds and the slots are all spaced by 104 center-to-center. After each summing junction, the now-widened microstripline is tapered back down in width so the summing tree occupies minimal space between the slots, but then the microstripline exiting the summing tree is allowed stay at the final summing junction output width, 5 . This wider microstripline is both more ideal (width large compared to the 1.07 dielectric thickness) and more robust against fabrication defects as it travels to the filters and KIDs. A backshort 150 from the vacuum side improves the antenna forward efficiency averaged over its entire band, and it does this for a very wide bandwidth because of the high permittivity substrate (whose thickness is also optimized in the calculation). The inductance it adds is tuned out by the capacitive shunts at the feeds. The intrinsic bandwidth of the antenna is calculated to be very wide, 1:7.5 above 90% efficiency.
To the extent that the slot impedance is the same for all 16 feeds along a slot, the illumination of the antenna is uniform and the far-field beam will be sinc-like. Variations of the slot impedance with position may cause the illumination to be tapered along the slots. At low frequency, the impedance oscillates significantly along the slot (varying by ±50% at 100 GHz) but the variation is over the entire slot, while, as the frequency increases, the slot impedance is stable over the bulk of the slot and then shows similar ±50% variations only near the ends. The resulting coupling variation may result in a small E-H plane beam asymmetry. Fortunately, even these significant impedance mismatches should cause ≲10% variations in power coupling from the slot to the feed. It remains to be determined whether any such effects are observed (see <ref>). If so, they may be corrected by adjusting the dimensions of the phased array.
Hierchical Summing
The main innovation in the phased-array antennas for NEW-MUSIC is to hierarchically sum the slot antennas in a frequency-selective way so that the pixel size grows with wavelength to roughly track the diffraction FWHM = (F/#)λ. Figure <ref> shows how photolithographic low-pass and band-pass filters (LPFs and BPFs) will be used to do this, starting with 16×16, 1.664 mm antennas. The main advantage is the nearly optimal trade-off between optical efficiency and angular resolution.
For λ for which a fixed antenna size is smaller than (F/#)λ, its efficiency is <50%, enhancing the requirement on detector noise. For λ for which the fixed antenna size is larger than 2(F/#)λ, angular resolution is degraded (FWHM > λ/D), which degrades point source sensitivity and confusion limit.
In implementing hierarchical summing, we allow gaps between the fundamental elements, which could have been avoided by use of microstripline crossovers. While such crossovers have been demonstrated at trans-mm wavelengths<cit.>, we deemed it simpler, more robust, and easier to model to instead array the 16× 16 slot antennas with gaps between them, using the space between them for filtering and microstripline routing without crossovers. The gaps are 208 between the level 0 (fundamental) antennas and 312 between the level 1 (one level of summing) antennas; the gap grows because the level 1 gap must permit space for the summing trees. Both are multiples of the feed/slot spacing of 104 .
Low-Loss a-Si:H for Trans-mm Microstripline
An enabling technology for hierarchical phased-array antennas is low-loss dielectric for the trans-mm-wave microstripline, which permits the detectors for even the highest-frequency bands of interest to be at the outer edge of the low-frequency antenna. We use hydrogenated amorphous silicon (a-Si:H), for which we have demonstrated recipes with RF loss tangent δ as low as 7 × 10^-6<cit.>. For fabrication convenience here, we used somewhat lossier recipes, with δ≈ 3 × 10^-5 (see <ref> for details). The dielectric is 1070 nm thick to provide a microstripline impedance comparable to that obtained with more typical dielectrics (≈ 12 for a-Si:H vs. ≈ 4 for and ≈ 7 for , μ = for all). For the material used here and assuming a trans-mm loss tangent equal to the RF loss tangent, the microstripline dielectric loss for the 420 GHz band would be 0.6%. It is expected the loss tangent will increase between RF and trans-mm frequencies<cit.>, but even a factor 10 degradation would yield an acceptable 94% microstripline transmission at 420 GHz.
Polarization Coverage
While the antennas are inherently polarization-selective, it is well understood that polarization sensitivity must be modulated quickly to mitigate systematic uncertainties in differencing complementary polarizations on the sky. The first level of such differencing will be provided by rotating the antennas 90between adjacent level 2 (B1/B2) pixels, separated by approximately 1.5on the sky. At scan speeds of 0.5–1/s, there will be only a 25–50 ms time separation, fast enough to freeze atmospheric emission variations (“sky noise”) <cit.>. The sky noise will thus provide a relative calibration and will difference away well. Parallactic angle rotation will yield full coverage in Q and U Stokes parameters. Another, potentially more effective level of differencing, pending evaluation of its consistency with the final optical design, would be provided by placing a polarizing grid between the final lens and the focal plane, feeding complementary polarizations to two focal planes situated at right angles with respect to each other. Additional modulation could be provided by a rotating or stepped broadband, multi-layer, sapphire half-wave plate <cit.>, cryogenically situated at an internal Lyot stop, though obtaining the necessary 2.4-octave bandwidth may be challenging <cit.>. A more practical option may be a rotating or stepped broadband, reflective metal-mesh half-wave plate <cit.> placed at one flat relay mirror outside the cryostat. If circular polarization sensitivity is desired (interesting for stellar flaring cyclotron emission, <ref>), a variable-delay polarization modulator <cit.> could be used at the second flat relay mirror, employing a λ/8 air gap rather than the conventional λ/4 to make it a quarter-wave rather than half-wave plate. This approach is narrow-band, but observations could be taken sequentially with multiple spacings.
§.§.§ Microstrip-Coupled, Parallel-Plate Capacitor, Lumped-Element KIDs (MS-PPC-LEKIDs) using Low-Noise a-Si:H
Parallel-Plate Capacitor, Lumped-Element KIDs (PPC-LEKIDs)
The trans-mm microstripline exiting the BPFs couple to a novel KID design illustrated in Figure <ref> <cit.>. The two ends of a meandered inductor connect to two plates, all 100 nm thick Al or AlMn. The structure sits on top of the ground plane and a first 800 nm thick a-Si:H layer. The top plates form two parallel-plate capacitors (PPCs) with the ground plane, connected in series. The symmetric KID design makes the shared PPC electrode a virtual ground, obviating isolating it from the ground plane. While the two plates could in principle couple to an incoming electric field normal to the slot between them, the ground plane shields the KID inductor and capacitor in the same way as it does the trans-mm microstrip. To prevent out-diffusion of the quasiparticles into the inductor, we deposit Nb (via liftoff to prevent etching damage to the KID material) on the PPC top plates to raise the pair-breaking energy to 3 meV. This energy corresponds to ν = 2 Δ_Nb/h ≈ 740 GHz, so the Nb is also a poor direct absorber for in-band light (as well as being highly reflective even above 740 GHz). We couple the KID to a 2 wide microstripline feedline (impedance 32.5 ) via a PPC coupling capacitor attached to one end of the inductor. Though it is difficult to impedance-match this type of feedline to 50 , it ensures the ground plane is uninterrupted except at the antenna slots. A traditional coplanar-waveguide (CPW) feedline inherently breaks the ground plane into two halves, which can at best be coupled intermittently by ground bridges across the CPW.
Microstrip Coupling
The trans-mm microstripline deposits power in the inductor via a unique capacitive coupling. For B4/B5/B6, the design is similar to the one we detailed earlier for a inductor <cit.>, illustrated in Figure <ref>. The microstripline from the BPF encounters a 50-50 splitter. One of the outputs is delayed by a half-wavelength so that the two complementary microstriplines, now with nearly equal magnitude but opposite sign trans-mm voltages on their top electrodes, present a trans-mm voltage difference. The KID inductor meanders back and forth in the space between the two microstriplines, with small pads extending from the end of each meander under pads that extend outward from the microstripline top electrode, separated by 270 nm of a-Si:H (the total thickness of 1070 nm less the 800 nm thickness between the KID inductor and the ground plane). The overlapping pads and each meander of the inductor thus form a trans-mm C-R-C network, through which the trans-mm voltage on the two microstriplines drives a trans-mm current. This current dissipates the trans-mm power in the meanders of the KID inductor. Any single C-R-C element has a lumped element impedance much larger than that of the microstripline, and the microstripline continues onward, so each meander can be consider a high parasitic impedance between the two microstriplines rather than a terminating impedance. With many such meanders, the trans-mm power is adiabatically absorbed as the wave propagates down the microstripline pair. The coupling C can be increased along the microstripline so that equal power, rather than equal fractional power, is absorbed per unit length, ensuring the trans-mm power is absorbed uniformly over the entire inductor rather than with an exponential profile. The coupling C and the meander width, thickness, and length are all free parameters, providing a great deal of design space to simultaneously obtain a high enough KID responsivity (set by the KID inductance, capacitance, and inductor volume) that generation-recombination (and thus photon noise) dominates over amplifier and TLS noise while also obtaining high optical absorptance. The details of the optimization have been previously provided for a inductor<cit.> and for an Al inductor<cit.>. We term this design the “adiabatic lumped-element” coupler.
For the lower frequency bands B1/B2/B3, it proved difficult for the above design to provide high optical absorptance with a meander of a reasonable length and width given the low frequency combined with the low resistivity of Al (or AlMn) in comparison to the for which the design was originally intended<cit.>. (A narrower Al linewidth could have addressed this challenge, but we worried about fabrication yield given that the Al sits on somewhat rough a-Si:H rather than optically polished crystalline silicon.) We therefore developed an alternative “traveling wave” coupler design, illustrated in Figure <ref>. First, we taper the microstripline to a very large width (low impedance) to match the low impedance of the coupler structure. Again, there is a 50-50 splitter, but this time with no delay added. Instead, where the two microstriplines run in parallel, we extend the capacitive pads from each microstripline to become fingers, almost touching between the two microstriplines. The KID inductor is again sandwiched between the ground plane and the microstripline top electrode, with the same 800 nm/270 nm split of the a-Si:H thicknesses. In this case, however, the inductor meanders along the same direction as the microstripline, and the microstripline's capactive tabs extend almost completely over the inductor's meanders. We believe this geometry effectively makes a capacitive voltage divider between the microstripline top electrode, the inductor, and the ground plane, imposing a spatially varying voltage along the KID meanders. This coupling excites a microstripline mode between the meanders and the ground plane, and the mode's energy is dissipated in the trans-mm-lossy KID inductor material. Each long KID meander is connected to one of its neighbors at either end in order to form one continuous inductor. The reflections imposed by these shorts effectively make the excitation in the KID inductive meander microstripline a standing wave that is excited by the incoming wave on the Nb microstripline. Like the adiabatic lumped-element coupler, the traveling wave coupler is adiabatic in the sense that power is deposited gradually along the coupler's microstripline rather than by terminating the microstripline in a matched impedance. Again, the details of the optimization have been previously been provided<cit.>.
Since there is no voltage difference between the 50-50 split Nb microstriplines in the traveling-wave design, the split may be an unnecessary residual feature of its origin in the adiabatic lumped element coupler. The use of capacitive fingers is, however, important: it provides the necessary voltage divider coupling to the KID meander while maintaining a higher microstripline impedance than would be obtained by simply widening the microstripline to cover the KID inductor. The capacitance per unit length is increased by about half as much as would have been obtained by widening the microstripline, while the inductance per unit length is largely unchanged (very little current flows along the fingers). Therefore, rather than the impedance decreasing by a factor ≈ w_b/w_c, where w_b and w_c are the microstripline width exiting the BPF and in the coupler, it decreases by a more modest factor, ≈√(2 w_b/w_c). The increase in microstripline width between the BPF and the coupler can be smaller by the same factor.
Contrasts with Prior KID Designs
Our design contrasts with the other standard approaches for coupling mm/submm power to KIDs: direct absorption in the inductor (unmediated by microstripline) and termination of microstripline directly in the KID inductor. In both cases, the KID inductor must match the impedance of the incoming wave, which may be in vacuum (direct illumination), a vacuum cavity (horn-coupled direct illumination), dielectric (silicon or alumina lens coupling), or microstripline (horn, lens, or antenna coupling), while simultaneously providing high enough responsivity (deriving from the volume and resonant frequency) for photon-background-limited sensitivity. Impedance depends on film thickness and line width, so yield and uniformity can be fabrication challenges. The designs used here are quite robust against film thickness and linewidth variations because the adiabatic coupling to the KID avoids dimension-sensitive impedance matching.
Low-Noise a-Si:H for PPC-LEKIDs
This design is only feasible because of our development of low-loss a-Si:H, based on <cit.> but with 10× lower loss tangent <cit.> and at least 2× lower noise <cit.>. Our first work<cit.> presented two a-Si:H deposition recipes for two different machines at two sites and found that the loss tangent is stable, both across films fabricated months or years apart <cit.> and in a given film over time (d/dt ≈ 0.35 × 10^-6/mo in a typical lab environment). Our KID design optimization<cit.> used measurements of TLS noise for recipe A from <cit.> taken at = 100 mK. In this work, we make our measurements at ≈ 250 mK. Assuming the measured ^-1.7 scaling of TLS noise power spectral density with temperature <cit.> and a naive (and experimentally unconfirmed) linear scaling with , we should observe TLS noise about 1.5× lower than the design.
AlMn KIDs
Al, especially in thin films, generally has T_c too high to be suitable for B1, whose lower edge is ≈75 GHz: even bulk T_c ≈ 1.2 K and thus 2 Δ_Al/h ≈ 88 GHz would substantially decrease the B1 bandwidth (Table <ref>), and thin films generally have higher T_c<cit.>. We will therefore use AlMn, an alloy in which Mn suppresses Al T_c <cit.> and that has previously yielded Q > 2 × 10^5 resonators down to T_c = 0.69 K <cit.>.
Control and Trimming of
There is reason to believe PPC-LEKID resonant frequencies will be more well controlled than for IDC-based KID designs: because the electric field is so well confined in the PPC, there should be negligible frequency scatter due to parasitic capacitances and inter-resonator couplings. Fabrication non-uniformities such as variations of inductor linewidth and thickness (both affecting geometrical as well as kinetic inductance), dielectric thickness, or PPC plate dimensions should be smooth functions of position. Regardless, it is sensible to have a trimming mechanism <cit.>. We will etch the edge of the PPC furthest away from the inductor (etching through a-Si:H, Nb, and Al) to avoid collateral damage to the inductor by etch chemicals permeating between layers transversely to the inductor, which was a problem in the past for some of the buffered oxide etches used to clean various layers before or after the second a-Si:H deposition (<ref>).
§.§ Focal Plane Architecture – Fabrication
The devices are fabricated on double-side polished, high-resistivity, float-zone silicon wafers, 100 mm in diameter and 375 thick. We do fine-scale photolithography using a Canon EX3 stepper mask aligner and some coarser features (e.g., a-Si:H trench etch) using a Heidelberg MLA 150 maskless aligner. Steps:
* Nb ground plane: After a buffered oxide etch (BOE) dip of the wafer to remove native oxide, we ion mill and then deposit a 190 nm thick ground plane via RF magnetron sputtering using a 6-inch target at 900 W RF power. It is patterned with the antenna slots and the BPF windows using a fluorine etch in an ICP RIE machine.
* First a-Si:H layer (microstripline, KID capacitor): Another BOE dip is done (to remove oxides on the Nb) and then a 1070 nm a-Si:H layer is deposited by CVD. The machines and recipes available are described in detail elsewhere<cit.>. The lowest loss recipes require deposition in a machine at JPL with 350 C substrate temperature, which can cause flaking if the deposition chamber is not properly conditioned. For the first device studied here, we deposited at 350 C but with a machine at the Caltech Kavli Nanoscience Institute that has previously yielded δ≈ 1.2 × 10^-5<cit.>, and for the second device, we used the JPL machine but at 150 C, which previous work had shown yielded δ≈ 3 × 10^-5. We then etch away 270 nm of a-Si:H in windows where the resonators, coupling capacitors, and microwave feedline will reside.
* KID inductor and capacitor top plates: After a BOE dip to remove that forms on the first a-Si:H layer, we ion mill the a-Si:H and then sputter-deposit the 100 nm Al layer using a 6-inch target and 750 W RF power. If doing AlMn, we instead use a dedicated machine that co-sputters AlMn (2500 ppm Mn doping) and pure Al, aiming for 750 ppm. We use chlorine etch in an ICP RIE machine to pattern the Al (AlMn) to obtain the KID inductor and capacitor, coupling capacitor outer (bottom) plates, and readout feedline. We then ion mill the Al (AlMn) and deposit/pattern, using liftoff, 50 nm Nb and 30 nm Al over the patterned Al (AlMn) except over the KID inductor (same 6-inch target sputter tool and powers). The Nb is intended to prevent outdiffusion of quasiparticles from the Al (AlMn) inductor and to increase the threshold frequency for photon absorption in the capacitor (already mitigated by its high reflectivity and its placement within 1.1 of the Nb ground plane). It also ensures the feedline has transmission at 4 K, which is useful for device screening. The Al top layer protects the thin Nb layer from the later “Nb wiring layer” etch (step 5 below).
* Second a-Si:H layer (trans-mm coupler): A second layer of a-Si:H is then deposited, 270 nm thick, now always at 150 C to prevent damage to the underlying Al film (formation of AlSi at ≈166 C) and always using the JPL ICP PECVD (both for convenience and because the KNI PECVD does not make good a-Si:H at 150 C). No BOE dip is done before this deposition to avoid damage to the now-patterned KID Al (AlMn); any oxide at the interface will only be present in the trans-mm coupler, not in the KID or the trans-mm microstripline. This a-Si:H layer is etched away almost everywhere except in the window where the prior a-Si:H layer was etched. The result is an approximately constant thickness of a-Si:H over the entire wafer, but with the KID inductor and capacitor films (and KID coupling capacitor outer plates and readout feedline) residing between the 800 nm and 270 nm a-Si:H layers. The one exception is that the 2nd a-Si:H layer is etched away over the microwave feedline bondpads so they are accessible.
* Nb wiring layer: We complete the KID, antenna, and bandpass filters, and also the KID coupling capacitor by ion milling the a-Si:H and depositing a last Nb layer, 160 nm thick, which is now patterned using fluorine-chlorine ICP RIE. This etch is not highly selective against Si, hence the Al protect layer for the microwave feedline in step 3.
Ideally, because it is more effective at removing , we would do a BOE dip prior to the Nb layer, but there have been cases in which BOE at this step reduced KID yield, presumably because BOE can permeate through microfissures or pinholes in the a-Si:H to the Al (AlMn) KID layer. To achieve the best trans-mm loss, we may try to resolve this problem so we may implement a BOE dip at this step.
* Borders for good electrical and thermal connection: To ensure good RF and thermal coupling to the device box, we etch a border through all the layers (fluorine-SF_6 ICP RIE) to expose the Nb ground plane. On three sides of the device, we additionally etch through part of the exposed Nb (same fluorine-chlorine ICP RIE as Nb wiring layer) and deposit 10 nm of Ti (sticking layer) and 350 nm of Au by electron-beam evaporation through a liftoff mask that leaves openings over both Si and Nb. When the device is mounted, Au wirebonds connect the Au pads to the copper box to provide good thermal connection to the silicon substrate (see discussion of substrate heating vs. direct absorption in the KIDs in <ref>) as well as electrical connection to Nb on three sides, and Al wirebonds connect the Nb ground plane on the fourth side to the box for RF grounding.
§.§ Focal Plane Architecture – Experimental Validation
We undertook extensive dark and optical tests of two prototype devices. Both devices incorporate a two-scale analogue of the three-scale antenna planned for NEW-MUSIC. For B3–B5, the two-scale prototype uses a fundamental 3.328 mm wide, 32× 32 slot array antenna, analogous to the 16× 16 slot array antenna presented in <ref> and Figure <ref>, and it sums four such antennas for B2, so it tests the critical feature: summing of fundamental elements with gaps between. Both devices also incorporate a limited set of bands as a first step in the complexity of the filter banks.[In fact, each fundamental element has its own four-band BPF at its output, following by summing of the B2 outputs. The long-term scheme would not use BPFs for the bands to be summed but only LPFs, with a single BPF to follow after summing. This latter approach prevents BPF variation among the summed elements from causing beam asymmetries.] Each antenna feeds four KIDs in each of B3–B5 and one KID in B2. We additionally have four dark KIDs (no connection to an antenna). We thus expect to see 56 KID resonances on each die.
The second device incorporates some pixels where an output feeds not one but rather two KIDs for loss and impedance/wave-speed testing. For the former, a 50-50 splitter is followed immediately by a KID on one leg and by a long length of microstripline terminated in a KID on the other leg. The relative optical efficiency of the two KIDs measures the transmittance, and thus the loss, of the length of microstripline. For the latter, the second arm of the splitter is instead followed by a Fabry-Perot cavity consisting of a widened (and thus impedance-mismatched) length of microstripline. The resulting standing wave pattern's frequency measures the microstripline wave-speed and its amplitude contrast measures the ratio of the widened to standard microstripline impedance. The total number of KIDs is unchanged, but 14 more are “dark” due to the microstrip routing required for the test structures. Measurements of these test structures are not yet available, so we report only on the non-test-structure KIDs.
For optical testing, the devices incorporated a 2-layer metamaterial-structured silicon antireflection wafer (1:1.6 bandwidth, 190–310 GHz) <cit.> and a niobium backshort (<ref>). They were tested in a pulse-tube-cooled 4 K dewar with a 240 mK Chase ^3He/^3He/^4He sorption refrigerator<cit.>. The dewar has a UHMWPE window (with single-layer Porex[<https://www.porex.com/products/porous-sheets/>] PM23DR 0.25 mm thick AR coating) with ≈30 cm clear aperture, permitting very wide angle beam measurements (up to 40off axis). For blackbody radiation filtering, the dewar has five 3 mm thick Zotefoam[<https://www.zotefoams.com>] HD-30 sheets behind the vacuum window, two PTFE filters (25 and 10 mm thick, with single-layer Porex PM23DR 0.25 mm thick AR coatings) at 50 K, and, at 4 K, a nylon filter (10 mm thick, with single-layer Porex PMV30 0.25 mm thick AR coating) and a 420 GHz low-pass cutoff metal mesh filter <cit.>. To ensure good heat sinking of the devices, we bond the Au border on three sides of the device to the copper device box every 0.5–1 mm using Au wirebonds. The remaining side has Al wirebonds to the Nb ground plane and microwave feedline of similar density. Figure <ref> shows the backside of a device mounted for testing.
A magnetic shield, residing at 4 K and consisting of two layers of Amuneal A4K material, enclosed the devices to limit the impact of Earth's magnetic field. The shield incorporates an aperture at the top to permit optical access for one device. The device under optical test incorporates an additional single-layer A4K shield, also with an aperture, to improve its shielding as it sits near the aperture in the larger shield. A combination of stainless steel and NbTi semi-rigid coaxial cables carried the readout signal to the devices, with 30 dB and 10 dB in-line attenuators at 4 K and 0.35 K, respectively, to block 300 K thermal noise. Similar NbTi coax carried the signal exiting each device to a cryogenic SiGe low-noise amplifier (LNA) at 4 K (ASU 10 MHz–2 GHz or Cosmic Microwave Technologies CITLF2), with a noise temperature of approximately 5 K, followed by stainless steel coax back to 300 K. Additional LNAs at 300 K ensured the cryogenic LNA dominated the system noise. We monitored the device temperature using a Stanford Research System (SRS) SIM921 reading a Lakeshore Germanium Resistance Thermometer (GRT) located next to the devices. The temperature was controlled using a SRS SIM960 analog PID controller supplying a current to a 10 kΩ heater on the mechanical stage holding the devices.
Measurements relying on S_21(f) scans used a Copper Mountain Technologies SC5065 Vector Network Analyzer. We used the Python module SCRAPS <cit.> to fit the S_21(f) data to standard forms (e.g., <cit.>) to extract the resonance frequency and quality factors , , and . Measurements of small signal response such as beam maps and Fourier Transform Spectroscopy used an Ettus X310 USRP with a UBX 160 daughter card. Measurements of noise relied on a standard homodyne mixing setup. The resonator drive signal and local oscillator for mixing was provided by an Anritsu MG3694A synthesizer, and the mixer was followed by Stanford Research Systems SR560 voltage preamplifiers and a National Instruments NI-9775 ADC.
§.§.§ KID Parameters and Yield
We use a vector network analyzer (VNA) to measure (f) over the frequency range containing the resonances at a range of values under dark conditions. We fit these resonance scans using standard techniques (e.g., <cit.>). Figure <ref> shows frequency scans, δ/ and vs. , and kinetic inductance fraction α and gap parameter Δ inferred from fits of the δ/ vs. data to Mattis-Bardeen theory <cit.>.
The different trans-mm spectral bands also have different design ranges to enable disambiguation of bands without FTS data. Yield and uniformity is fairly high already. The two devices for which data are shown have yields of 55/56 and 47/56 resonators. Two additional devices show yields of >50/56 and 40/56 resonators. A detailed inspection has not yet been done, but, given the uniformity of the KIDs that do appear, we suspect catastrophic fabrication defects caused the failed resonators. Better process control will likely enable regular achievement of >95% yield.
From the Mattis-Bardeen fits, we infer Δ≈ 0.20–0.22 meV, which would imply 2 Δ_Al/h ≈ 100 GHz. We find α≈ 0.25–0.40, fairly consistent with the design value α = 0.26, which assumed 100 nm Al with a normal state sheet resistance of 0.069 /<cit.>. We speculate that the systematic differences in Δ and α between wafers are due to slight differences in Al film thickness: the penetration depth in particular, which determines α, is a strong function of thickness for d ≲ 100 nm<cit.>. Fortunately, these variations in α only affect as 1/√(1 + α/(1-α)), so the bands are very similar, 209–406 GHz and 213–417 MHz. (In the first device, two B3 resonators moved below 200 GHz, but the remainder did not move, so these shifts are likely due to a defect specific to those two resonators.) The design frequency range was 180–336 MHz, so the shift relative to design varies from 16% to 24% across the octave band. Given the good match of α to expectations, this shift seems likely to be due to unmodeled parasitic reactances.
Each device has a handful (3 and 6) of resonators that have anomalous δ/ vs. curves. These all happen to be dark and also seem to have moved from their design values (374–388 MHz for nominal darks, mixed with the other bands for the darks due to the test structures) by a larger factor than the optically sensitive KIDs, up to ∼500–600 MHz. They all seem to have low α, too, which causes the anomalous δ/ vs. . Lower α can explain part of the upward shift in , but the films are the same thickness as those of the other KIDs, so the kinetic inductance L_k should not have changed. One potential explanation for all of this behavior is that the removal of the microstripline in the trans-mm coupler (unnecessary since these resonators are intended to be dark) increases the geometrical inductance L_g (decreases α) while also reducing C by a larger factor, making higher ∝ 1/√(LC) possible in spite of the larger L_g. We will re-implement the trans-mm coupler for these dark KIDs (with no incoming microstripline) to eliminate this systematic difference.
For the majority of resonators, we find ≳ 10^5 and still rising at the lowest temperatures for which we have data, ≈ 250 mK. There is good uniformity in the behavior. Thus, in spite of the unconventional device structure, with the Al inductor material on a thick a-Si:H layer and with many parasitic capacitive and inductive couplings, the loss is still dominated by quasiparticles at /T_c ≈ 0.15–0.2.
One challenge to be addressed is large variation of , over an order of magnitude (not shown). We suspect impedance mismatches arising from the use of the 32.5 microstripline feedline, causing reflections at the interfaces to the 50 readout wiring and standing waves on the feedline. Our initial choice of feedline width was conservative due to the catastrophic impact of feedline failure, but we have not had a single device with a failed feedline, we will try narrowing the feedline in future devices to increase the impedance.
§.§.§ Hierarchical Antenna Beams
Figure <ref> shows experimental validation of beams. The measurements were done using a chopped hot blackbody source[A commercial ceramic heater source (e.g., <https://www.amazon.com/Infrared-Ceramic-Heater-Forming-Element/dp/B0C394KWJB>), coated in Bock black<cit.>.] behind a 18.35 mm aperture at a distance of 189 mm. The main expected features are visible. Most importantly, the beam FWHM scales with frequency as expected, accounting for summing for B2, rendering the B2 FWHM similar to the B4 FWHM. The B3 and B4 beams show sidelobes similar in shape to the sinc expectation, though the nulls are insufficiently deep and the level is 2–5 dB too high. The B2 and B5 beams show shoulder-like features, as if the sinc function null were filled in. The B2 beam may show some asymmetry between the E and H planes, which we are confirming before taking corrective action. While we are still trying to understand some details, it is clear that hierarchical summing works well in spite of the gaps between the fundamental elements. Moreover, these sidelobes will be terminated on a cold Lyot stop in the NEW-MUSIC optical configuration (<ref>).
§.§.§ Spectral Bandpasses
Figure <ref> shows bandpass measurements using a Martin-Puplett Fourier Transform Spectrometer fed by a chopped ≈1100 C cavity blackbody[CI-Systems SR-200N]. The measurements are overlaid on expectations from Sonnet for the BPF banks alone and expected atmospheric transmission for 1 mm and 2 mm PWV at the LCT site. There is good qualitative agreement of the measurements with expectations in terms of band centers and edges. The measured spectra show higher contrast ripples, and the B5 upper edge approaches the upper edge of its atmospheric window too closely (a design error). As noted above, the optical train includes a 190–310 GHz silicon antireflection wafer and UHWMPE, PTFE, and nylon plastic windows/filters coated with single-layer AR coatings, and an uncoated metal-mesh filter, so some reflections are to be expected. More detailed modeling is in process. Nevertheless, the basic elements of the design appear work, with refinement of the design and the measurement setup necessary to reduce non-idealities.
§.§.§ Optical Efficiency
We measure optical efficiency using beam-filling cold (liquid nitrogen) and hot (room temperature) blackbody loads[61 cm × 61 cm pieces of WAVASORB® VHP (<https://www.ecanechoicchambers.com/pdf/WAVASORB%20-%20VHP.pdf>). For the cold load, we immerse the blackbody in liquid nitrogen contained in a closed-cell ethylene vinyl acetate (EVA) foam container that we assembled from individual layers cut by Rapid Die Cut (<https://rapiddiecut.com>). The layers were provided with adhesive on one side to aid assembly.]. We use an air knife[Exair Super Air Knife, <https://www.exair.com/products/air-knives/super-air-knives.html>] to prevent condensation on the large clear-aperture window during measurements. We measured for the two values of and a range of . We then fit the δ/ data to a model that incorporates resonator-specific measurements of (α, Δ) from dark data (<ref>) and determines () and for each resonator, where is the optical power absorbed in the KID quasiparticle system (i.e., that can affect and ) and is the “excess load” due to emission from the dewar (especially the dewar windows) converted to a Rayleigh-Jeans load temperature outside the dewar. If is the power incident on the KID from the microstripline, then = where is the efficiency with which incoming trans-mm photons break Cooper pairs (with the remainder of the energy lost to sub-2Δ phonons). Figure <ref> shows an example of these data and fits. To infer (), we use as given in Table <ref>.
We obtain from () two quantities: d/d and the optical efficiency, . To do so, we make a typical set of assumptions: 1) the antennas are single-moded and thus have throughput A Ω = λ^2 at 100% efficiency; 2) the blackbody loads are in the Rayleigh-Jeans limit at the frequencies of interest; and, 3) the antenna is sensitive to a single polarization. These assumptions imply = k_B Δν where Δν is the spectral bandpass width and is the end-to-end optical efficiency between the blackbody source and the KID, excluding . We may thus calculate d/d = / trivially. As always, there is a degeneracy between Δν and . Peak normalization of the spectral bandpasses is one choice that is frequently made. Another choice would be to take Δν to be equal to the design values from Table <ref> so that both incorporates non-idealities in the spectral bandpasses and absorption. Figure <ref> provides plots of the unambigous d/d quantity, and Table <ref> provide minimum, maximum, and mean values for each band and device for d/d and for under the two different assumptions for Δν. We find the two choices for normalization differ fractionally at the ≲ 10–15% level, resulting in no sigificant difference in interpretation of the results.
In B2, the d/d values are comparable to the best typically achieved for sub-Kelvin detectors given the significant blackbody filtering required, 0.24 pW/K <cit.>. B3 has comparable performance to B2 in both d/d and . There is a significant difference between mean B4 performance for the two devices. The second device again yields d/d and performance comparable to B2 (and B3). The first device's best detectors also match this performance, but it has a number of detectors with response lower by about 1/3 fractional. The B5 detectors show appreciably lower than all the other bands, also down by 40% relative to B2 and B3 but now uniformly. (Comparison of B5's d/d other bands is not useful because of its lower design bandwidth.)
While we have not modeled the optical efficiency in detail, we note at least two potential causes for variations in optical efficiency among bands (and especially in B5): loss in the plastic windows/filters (expected transmittances of 0.76, 0.71, 0.66, and 0.62 in the four bands) and reflection loss due to insufficiently wide-band AR coatings on optical elements (for the silicon AR wafer, <0.5% reflectance in B3 and B4 but ≈ 10% and ≈ 5% reflectance in B2 and B5, respectively <cit.>). Together, these effects result in an expected transmittance relative to B3 of 0.96, unity, 0.93, and 0.83. Motivated by the dispersion among detectors noted above, which is not due to measurement uncertainty but rather to individual detectors suffering performance non-idealities, Table <ref> compares these expected relative transmittances to B3-normalized maximum efficiencies. The B2–B4 relative transmissions are generally in line with expectations while the B5 relative transmission is not. A more extensive analysis is underway (<ref>) and may explain this discrepancy. We also plan measurements with a blackbody at the 4 K stage of the cryostat to eliminate filter transmission uncertainty.
We have developed a broader-bandwidth three-layer AR structure <cit.> and are finalizing the development of an even broader-bandwidth four-layer AR design, and we anticipate implementing lower-loss windows and filters with better AR coatings, so we expect significant improvement in B5's performance as well as modest gains in the other bands.
§.§.§ a-Si:H Loss Tangent
Every device includes lumped-element Nb LC resonators, identical in design to those used in our prior demonstrations of a-Si:H low-power TLS loss tangent as low as ^0 ≈ 7 × 10^-6 near 1 GHz<cit.>, so that we may measure independently this loss tangent for the deposited material in the KIDs and microstripline on each device. There are two sets of resonators, one incorporating the 800 nm a-Si:H in the KID capacitors and the other the 1070 nm a-Si:H in the microstripline. (See <ref> and <ref> for an explanation of the distinction between the two.) As in that prior work, we infer ^0 by measuring δ/ vs. in the 250–450 mK range, well below the temperatures at which quasiparticles cause any shift in in Nb, and fitting for ^0 as the normalization for the known TLS loss dependence on temperature (e.g., <cit.>). We show in Figure <ref> loss tangents of 2 and 3.7 × 10^-5 for the 800 nm KID a-Si:H for the two devices studied here. (The resonators for the 1070 nm microstripline a-Si:H do not seem to be present.) These results are 70% and 25% poorer than expected (<ref>), respectively, and worth tracking in future devices, but they do not significantly impact expectations for device optical efficiency or KID TLS noise.
We are in the midst of explicitly measuring the trans-mm wave loss of our a-Si:H using the dedicated test devices. We may set a conservative upper limit using the data in hand by assuming the lowest efficiency resonators in each band for the second device (Figure <ref> (bottom right)) are part of a loss-test pair for that band and the highest efficiency resonators have twice the efficiency of the loss-test pair's reference detector. The loss-test devices were designed to match the 1/e length for a loss tangent of ≈10^-3 and yield 10% loss for a loss tangent of ≈10^-4. The inferred efficiency ratio is approximately 0.6≈ln 0.5 in all of B3, B4, and B5, so the loss tangent is at most 5 × 10^-4. From this, we estimate a lower limit on the transmission for the three-scale antenna of 0.93–0.94, approximately independent of frequency. This value is certainly good enough that it is subdominant to many other optical losses (<ref>). It is likely the constraints on δ will be improved by our ongoing measurements using the paired KIDs. If δ≈ 10^-4 is obtained, comparable to the best achieved with a-Si:H<cit.> and a-SiC:H<cit.>, then the transmission would be approximately 0.985.
§.§.§ Direct Absorption Limits
A key feature of the device design is the many measures taken to limit direct absorption of trans-mm light by the KIDs (<ref>). We are able to place limits on this absorption by including in our optical efficiency analysis the dark KIDs. Their data are treated in the same way as those of the other KIDs, yielding an effective and d/d that can be compared to those of the optically sensitive KIDs. ( can be calculated too by assuming Δν = 420 - 96 = 304 GHz, the bandwidth between the metal mesh filter cutoff and 2Δ_Al/h, but and d/d are more important for determining what fraction of the signal observed by a given detector arrives via the antenna vs. being directly coupled.)
Because it is physically allowed, to improve the fit quality, and to ensure we interpret any apparent response of the dark KIDs to varying optical load correctly, the model allows for the substrate to heat up due to broadband absorption in the silicon wafer itself. We use a standard conductance power law, P = g (^n - ^n), where is the substrate temperature and we fit for g and n. The measurement of δ/ under dark, cold, and hot loads as a function of breaks the degeneracy between optical efficiency and substrate heating. We find that the data prefer substrate heating, but the modeling is problematic. The preferred value is inconsistent among KIDs, even among the dark detectors, with values of up to - ≈ 15 mK at low for optical detectors but values of 60–70 mK for dark KIDs under the same conditions. We also find that the value depends far more on than on , which is unphysical.
We can obtain a more useful constraint on using the a-Si:H RF loss tangent diagnostic resonators discussed in <ref>. The dependence of δ/ on due to TLS can be used as a differential thermometer by seeing how much changes between cold, mirror, and hot loads. Figure <ref> shows those data. While the desired difference is clearly limited by systematics, we can infer a conservative upper limit of about 10^-7 on (^hot - ^cold)/, and we observed d(δ/)/d≈ 5 × 10^-8/mK from the same data. We may therefore conclude the substrate changes in temperature by no more than about 2 mK between exposure to cold and hot loads. Thus, it is clear that the fitted deviation of from for the KIDs is indeed unphysical. We suspect the model's preference for substrate heating reflects non-idealities in our Mattis-Bardeen modeling of δ/ vs. for dark data.
Fortunately, the fitted () is not significantly affected by whether substrate heating is included or not, so we use these data to set an upper limit ^dark/^opt≲ 1%.
§.§.§ AlMn
We have recently demonstrated devices with the Al KID material replaced by AlMn and no other changes. The yield on these first arrays appears to be around 60%, a good starting point. Figure <ref> shows δ/ and vs. over a limited range of . The low reflects the high value of /T_c and may make multiplexing and achieving fundamental-noise-limited performance challenging. That said, the large B1 pixels imply very few B1 resonators are needed for the three-scale architecture (see <ref>), and will be close to unity, so the low may be acceptable.
Because the data are not highly constraining, fits of δ/ vs. to Mattis-Bardeen theory yield a significant α–Δ degeneracy, seen in Figure <ref>.
We may narrow the range for Δ_AlMn by constraining α_AlMn using α_Al and the observed ranges for Al and AlMn and the common resonator geometry:
if f_g is the resonant frequency assuming only the geometrical inductance L_g and is the resonant frequency including the kinetic inductance L_k, then
= f_g √(L_g)/√(L_g+L_k) = f_g√(1 - α) ⟹ α_AlMn = 1 - (1 - α_Al) (^AlMn/^Al)^2
Figure <ref> shows the range for Al is 210-420 MHz (both devices) and the α_Al range is 0.25–0.40 (over both devices). The observed ranges for the two AlMn devices are 136–265 MHz and 158–315 MHz. Combining these ranges yields α_AlMn = 0.58–0.75, which narrows Δ_AlMn to 0.125–0.135 meV, or 2 Δ_AlMn/h = 60–65 GHz.
We will use index-test KIDs, which incorporate no BPFs, to explicitly measure 2 Δ_AlMn/h.
To improve at the current , we could tune the AlMn T_c to obtain higher Δ_AlMn≈ 0.165 meV (2 Δ_AlMn/h ≈ 80 GHz). If remains inconveniently low, it may be possible to deposit AlMn only for the B1 resonators, suffering low for only 64/(2× 64 + 2× 256 + 2× 1024) = 2.3% of the KIDs, by using coarse liftoff masks for Al and AlMn depositions: i.e., prepare a liftoff mask to only permit Al deposition in windows over the non-B1 resonators, deposit Al, remove the liftoff mask, prepare a second liftoff mask to only permit AlMn deposition in windows over the B1 resonators, deposit AlMn, and then proceed with the rest of the fabrication as before. There is no need for the Al and AlMn to be in galvanic contact with each other, so this two-step deposition process would be acceptable.
§.§.§ Noise and Sensitivity
Generation-Recombination-Dominated Detector Noise
Figure <ref> shows an example of the noise power spectral densities and measured in a dark run at = 310 mK, a temperature at which the quasiparticle density is similar to what we expect under optical load at a telescope on the sky for the most demanding bands (B1–B3), ≈ 30–40 K (Table <ref>). We focus on the region above 100 Hz, where electronics 1/f noise is negligible. The noise in the frequency direction is far in excess of that in the dissipation direction but is flat from 100 Hz until it begins to roll off just above 1 kHz. The noise in the dissipation direction is approximately flat from 100 Hz to 100 kHz. (The slight dip seen below a few kHz is likely due to slight miscalibration of the frequency and dissipation directions. Correcting this miscalibration would have little effect on the frequency-direction noise given the logarithmic vertical scale.) The resonator ring-down bandwidth is indicated. We interpret the approximately white dissipation-direction noise as amplifier noise given that it maintains approximately the same level above the resonator bandwidth. We interpret the higher flat noise in the frequency direction as generation-recombination (GR) noise, rolling off with the quasiparticle lifetime (3db ≈ 3.3 kHz, ≈ 50 ) and the resonator bandwidth (3db ≈ 5 kHz). The plot thus shows that, under dark conditions at = 310 mK, the device responsivity is large enough for GR noise to dominate over amplifier noise by about a factor of about 3, the square root of the ratio of the flat noise spectral densities in the frequency and dissipation directions.
The above conclusion that the noise seen in the frequency direction is GR noise is reinforced by the observation that, when plotted in for various temperatures under dark conditions (Figure <ref>), the flat level is approximately independent of temperature. We would expect this behavior in the recombination-dominated limit, where ^GR = 4 ≈ 2 V/R with V the inductor volume and R the recombination constant, while the rolloff frequency, given by 3db = (2 π )^-1, moves up with temperature as decreases. With V ≈ 3500
for this B3 detector and the flat noise level, we infer R ≈ 6.5–9.5 /sec, not too different from the canonical value for Al, R = 10 /sec.
Photon-Noise-Dominated Performance Under Optical Load
We have also measured the noise under optical load. We used 77 K and room-temperature (295 K) blackbody loads outside the dewar window as for the optical efficiency measurements, and we also put a reflective cover in front of the dewar window (“mirror”), which we calibrated to yield an effective outside-the-dewar optical load ≈ 150 K. The fits of the dark, cold, and hot load data also allow us to calibrate the excess loading from the dewar to be ≈ 150 K in B3, which adds to the loads applied outside the dewar window. We show under cold load (≈ 77 K + ≈ 180 K) in Figure <ref>. While the amplifier noise level in increases substantially due to the decrease in ∂ V/∂(δ/) = 2 ^2/ under increased optical loading, the frequency direction noise increases by about the same factor. We interpret this increase as due to the addition of photon noise, which we corroborate below via its dependence on .
To more clearly establish that the increase in the flat noise level observed under optical load is indeed due to photon noise, it is particularly useful to plot as a function of because, in the recombination-dominated regime, it satisfies
^tot = ^GR + S_N_qp^shot + S_N_qp^Bose
= V/R[ 2 + ( h ν/2 Δ + k_B /2 Δ) ]
That is, we expect a simple linear dependence of on , with the intercept providing the sum of the GR and photon shot noise terms. (Beyond this simple behavior, another benefit of the () analysis is that it does not depend on converting to noise-equivalent power at the input to the KID, NEP_opt = (Δ/)√(), which relies on theoretical calculations for and may suffer systematic uncertainties if cannot be measured well, such as when it is too similar to the resonator ring-down time constant (as is seen in Figure <ref>).) Figure <ref> shows the expected linear dependence, demonstrating that the additional noise observed under optical load is indeed photon noise.
We can determine from the vs. plot, by empirical extrapolation rather than calculation, the expected under the expected optical load on sky at a telescope and the relative contributions of GR, shot, and Bose noise. For the B3 detector shown, we expect = 40 K on sky, yielding an expected total noise of ≈ 9000/Hz. Given ≈ 750/Hz measured under dark conditions, it would appear the photon noise is well in excess of the GR noise. However, a detailed comparison reveals the dark and optical data do not yield a consistent recombination constant. Using = 0.49 as discussed earlier, estimated values of ν = 225 GHz (B3) and 2 Δ_Al = 96 GHz, V = 3500 , and R = 8 /sec, we find the shot noise term should be 60% of the GR noise term, or about 450/Hz. The fit yields an offset, corresponding to the sum of the GR and shot noise terms, of ≈4800/Hz, about four times larger than the expected 750 + 450 = 1200/Hz. Similarly, using = 0.5 (<ref>), we find k_B/2 Δ_Al≈ 0.11 and thus the slope should be 23/(K Hz) while the fit yields a slope of 104/(K Hz), 4.5 times larger than expected. We can only reconcile the dark and optical data by positing that somehow R is reduced when trans-mm-generated quasiparticles are present, a hypothesis we will return to momentarily. Assuming R = 1.8–2 /sec is more appropriate for optical data, then we infer ^GR≈ 3000–3400/Hz, ^shot≈ 1800–2000/Hz, and ^Bose( = 40 K) ≈ 3500–4000/Hz. The result is that the GR noise is about 60% of the photon noise and about 1/3 of the total noise — our detectors will be quite close to photon-noise-limited at expected optical loads.
As to how R can decrease under optical load, one hypothesis would be that: 1) the phonons arising from pair recombination of trans-mm-generated quasiparticles have a distinctly different spectrum than the Wien tail of thermal phonons that thermally generate quasiparticles; and, (2) this different spectrum may be more easily trapped by acoustic mismatch in the Al KID film than thermal phonons. It is known that the recombination constant and quasiparticle lifetime measured in a purely thermal environment for thin films already reflect some level of phonon trapping — lifetimes calculated from BCS theory <cit.> are much faster than those observed — so we hypothesize that our data can be explained by an enhancement of this effect. The effect may have be enhanced by the 800 nm a-Si:H layer between the KID film and the susbtrate. With more exhaustive data over all the frequency bands, we can test for a systematic dependence of R on ν, which would impact the trans-mm-generated phonon spectrum and which varies by a factor of 2.5 over B2–B5.
While we have already demonstrated performance limited by GR+photon noise, it is useful to convert to noise-equivalent power, NEP_opt (), because it is the standard performance metric for detectors of this type. Under the expected on-sky optical loads from
Table <ref>, the photon-noise-limited sensitivity, ^γ_opt, varies from 6.4 to 16 × 10^-17 for B2–B6. (B1 requires AlMn, which will have different .) If we map from the
expected under optical load (≈ 30 ) to the with matching , then = 310 mK and we see that ^GR_opt≤^γ_opt. The margin is not as great as the one from
the empirical extrapolation above because it fails to account for the decrease in R observed under optical load. Both and are inversely proportional to R, so ^GR_opt∝√(R), decreasing as R decreases. The factor of 4 reduction in R between dark and optical data implies a factor of √(4) = 2 in ^GR_opt, implying ^GR_opt≤^γ_opt/2 and ensuring photon noise is dominant.
TLS Noise
Electronics 1/f noise currently makes it difficult to measure the noise below 100 Hz, but we may estimate the expected TLS noise level using existing measurements<cit.>, which yield = 0.8, 0.4, and 0.25 × 10^-18/Hz at 10, 100, and 1000 Hz, an electric field of 130 V/m, at = 250 mK, and for a capacitor area A_C = 0.3 (single top plate). (We do not correct the TLS noise for its T^-1.7 temperature dependence<cit.> because the capacitor dielectric would be at = 250 mK even if the quasiparticle density under load is comparable to what is seen thermally at = 330 mK.) Comparing to the dark data in Figure <ref> (upper left), we scale to the expected B3 KID capacitor area A_C = 0.55 and the electric field E = 1430 V/m at the applied feedline power of –81 dBm to obtain = 3.9, 2.0, and 1.2 × 10^-20/Hz at 10, 100, and 1000 Hz, below the amplifier noise and well below the GR noise. Comparing to the cold load data in Figure <ref> (upper right), the same feedline power now yields E = 990 V/m because of the reduced under load. The TLS noise scales to =5.6, 2.8, and 1.8 × 10^-20/Hz at 10, 100, and 1000 Hz, slightly higher but now even further below the amplifier noise and photon noise because the latter have increased substantially in units while the TLS noise has not. Extrapolating to lower frequencies more relevant for astronomical observations, assuming a conservative ∝ 1/f scaling, yields = 3.9–5.6 × 10^-19/Hz at 1 Hz and 3.9–5.6 × 10^-18/Hz at 0.1 Hz. Since the amplifier and fundamental (GR+photon) noise under sky loading will likely be somewhere between the dark and cold load noise PSDs from Figure <ref>, TLS noise will likely be subdominant to amplifier noise and certainly well below fundamental noise. Actual measurements of TLS noise under optical load close to expectations will likely motivate reducing the capacitor area to reduce focal plane dead area.
§.§ Supporting Subsystems
NEW-MUSIC builds on the heritage of the MUSIC instrument <cit.>, and we will reuse much of MUSIC for NEW-MUSIC. We review the extant MUSIC sub-systems and describe the necessary modifications.
*Cryostat
The MUSIC cryostat <cit.> consists of a 1.5 m tall, 0.6 m diameter dewar with two internal stages cooled by a Cryomech PT-415 pulse-tube cooler, providing 40 W at 50 K and 1.35 W at 4 K. A Chase Cryogenics ^4He/^3He/^3He closed-cycle sorption cooler provides 3 at 240 mK and 100 at 350 mK, sufficient to accommodate conductive and radiative heat loads while providing well in excess of 24 hrs hold time after a 6 hr cycle. It incorporates a two-layer A4K shield at 4 K to provide magnetic shielding against Earth's field, which varies with time as the telescope moves. The cryostat originally included eight 2-8 GHz HEMT amplifiers, which will be replaced as needed with SiGe amplifiers that provide excellent performance below 2 GHz. The cryostat is already equipped with the necessary RF coaxes as well as DC wiring for thermometry.
*Optical Train
The MUSIC optical train consists of one powered mirror and two flat mirrors at 300 K followed by a cold Lyot stop and cold lens at 4 K; see Figure <ref>. The original cold optics reimaged the f/12.6 Cassegrain focus of the CSO to f/3.45 with a 15diameter (135 mm) field-of-view and a flat focal surface using a single-layer Porex-coated HDPE lens. A second version of the cold optics instead reimaged to f/2.19, making the field-of-view 85 mm in diameter. We will revise the 4 K lens slightly to provide a focal ratio of f/1.72 to match our antenna pixels, making the 14field-of-view roughly 60 mm across. We will also AR-coat the lens with two layers of Porex to provide broader bandwidth. Ongoing development work on broadband AR-structured metamaterial silicon gradient-index lenses <cit.> may be implemented later to enhance optical efficiency. At the focal plane, NEW-MUSIC will use a proven three-layer silicon AR structure <cit.>, or an in-development four-layer AR.
In terms of windows and filtering, the existing MUSIC design is quite similar to the one used for the work presented here, with the differences being that MUSIC used a HDPE rather than UHMWPE window, metal-mesh 14 THz and 3.75 THz low-pass blackbody filters instead of Zotefoam sheets to limit 300 K radiation incident on 50 K, and PTFE instead of Nylon at 4 K. As noted earlier, we plan to revisit this stack to increase its efficiency without degrading the optical load on 4 K or the sub-Kelvin cooler.
The f/2.19 optical train incorporated significant baffling near the Lyot stop, which ensures that, in a time-reversed sense, rays emitted by the focal plane have many opportunities to intersect cold, absorbing surfaces before exiting the cryostat. See Figure <ref>. The absorber used was “steelcast” <cit.>. Without these baffles, rays that reflect off the Lyot stop can reflect off the focal plane and exit through the optical path. In spite of this baffling, MUSIC suffered a significant (∼50%) loss of beam to wide angles <cit.>, suggesting the baffling was not completely effective in absorbing wide-angle beam. We will incorporate more sophisticated absorber materials that have been developed in recent years <cit.>, and we plan an extensive baffling test campaign.
*KID Readout and Data Acquisition
Our initial deployment plan (for a quarter-scale focal plane; see <ref>) is to reuse the MUSIC readout but without the the up/down-converting IF system. Each MUSIC readout module<cit.> consists of three boards: a ROACH-1 board <cit.> providing a Virtex5 FPGA, a PowerPC for a simple linux operating system, and an ethernet transceiver; an ADC/DAC board with two pairs of TI DAC5618 1000 MHz 16-bit DACs and TI ADS5463 500-MHz 12-bit ADCs, which we operate at 491.52 MHz; and, an “IF” (intermediate frequency) board that provides 300 K amplification, a variable attenuator, IQ mixers, baseband amplification, and anti-alias filtering. The board set accepts signals from a Rb frequency standard locked to GPS to provide a stable frequency reference (10 MHz) for generation of the CPU and ADC/DAC clocks and for absolute time synchronization (1 Hz). Each module has firmware <cit.> that handles 192 readout tones across 450 MHz of usable bandwidth, using a 2^16-sample FFT to output data at 7.5 kHz for each tone. On-board filtering and decimation reduce the data rate to 100 Hz, matching the telescope pointing stream, for transport off the board. The entire planned MUSIC focal plane required 16 boards: 144 KIDs per board with 48 monitor tones for removal of low-frequency gain and phase noise. The boards were housed in two fan-cooled electronics crates that mount on the telescope near the cryostat. The vented air is ducted away from the telescope. DC power supplies sat in the CSO dome, away from the telescope.
The 100-400 MHz readout band motivates replacing the MUSIC IQ mixer (“IF”) board with a much simpler one that eliminates the IQ mixers and uses diplexers at the Nyquist frequency to separately populate the first and second Nyquist zones of different ADC/DAC pairs. It would still include additional gain, variable attenuation, and anti-alias filtering. The new boards will be compatible with the MUSIC FPGA command interface to obviate firmware changes.
RF system-on-chip (RFSoC) implementations for KID readout are making fast progress <cit.>, so we anticipate switching to such a system for a full NEW-MUSIC focal plane.
*Data Acquisition
MUSIC used an industrial server running Linux from internal SSDs with a hard-drive RAID for data. CASPER <cit.> provided python tools for communicating with the ROACHs, which we used wholesale. We wrote our own Matlab-based data acquisition software to accept the streams of data from the 16 ROACHs, combine it with pointing and slow monitoring data, and package it into the HDF5 format. Initially, we plan to port this existing DAQ to modern computing hardware and a Python platform and use SSDs for storage as well as the operating system. When we switch to a RFSoC system, we may adapt DAQ software built for such systems and only replace the low-level DAQ-KID readout drivers.
*Data Reduction An IDL-based pipeline was used for reduction of MUSIC data for extensive instrument characterization <cit.> and science publications <cit.>. This pipeline was an extension of the pipeline developed for Bolocam, which was the mm-wave facility camera on the CSO for over a decade and resulted in tens of publications. We will either port this code base to Python or adopt data reduction packages developed over the past decade for KID-based instruments (e.g., <cit.> for TolTEC).
§.§ Sensitivity and Mapping Speed
Given uncertainties on the recombination constant R, we take a semi-empirical approach to calculating the expected sensitivity and mapping speed of NEW-MUSIC, making heavy use of Equation <ref> and the measurements in <ref>. We calculate ^GR from V = 3500 and the empirical noise-based value of R = 1.9 /sec from <ref>. We calculate ^shot using the same V and R as for ^GR along with Δ_AlMn = 165 for B1 (<ref>) and Δ_Al = 200 (Figure <ref>) for the other bands along with from<cit.> (listed in Table <ref>; we take = 1 for B1 because h ν≈ 2 Δ_AlMn). For ^Bose, we start with the measurement of the ^Bose() slope in Equation <ref> shown in Figure <ref>. Because of the factor in that term, we correct it by the ratio between the optical efficiency expected for NEW-MUSIC (which accounts for the Lyot stop) and the optical efficiency we measure here (the average of the two devices' “max” values of ^design, with the use of the maximum motivated as in <ref>). We then multiply by given in Table <ref>. We convert these three values to noise-equivalent power (NEPs) using another recombination-limit relation:
d/d = √(Δ / R/V)
This approach is mathematically equivalent to calculating from , R, V, and Δ using the GR equation, calculating from , R, and V, and using d/d = Δ/, but it obviates the intermediate result for and makes it clear how the derivative depends on input parameters. We calculate for the expected and assuming Δν from Table <ref> and for the instrument as given in Table <ref>. Consistent with expectations from <ref>, Table <ref> shows this semi-empirical model indicates NEW-MUSIC will be quite close to photon-noise-limited.
Table <ref> also shows calculations of noise-equivalent flux density (NEFD) and mapping speed (N_pixΩ_beam/NEFD^2). We calculate NEFD from NEP_tot using the 10.4 m diameter of the Leighton Telescope along with a degradation factor to account for primary illumination, including the effect of the Lyot stop. We then calculate mapping speed from the NEFD, N_pix, and Ω_beam = FWHM^2 π/(4 ln 2) (N_pix and FWHM from Table <ref>).
§ CONCLUSION AND FUTURE PLANS
We have motivated, described, and provided significant technology validation for the Next-generation, Extended Wavelength, MUlti-band Sub/mm Inductance Camera, NEW-MUSIC, a six-band, polarization-sensitive, trans-mm camera for the 10.4 m Leighton Chajnantor Telescope. NEW-MUSIC will provide SEDs from 80 to 420 GHz for: a variety of time-domain sources to probe new frontiers in energy, density, time, and magnetic field: the Sunyaev-Zeldovich effects in galaxies and galaxy clusters to study accretion, feedback, and dust content in their hot gaseous haloes; and to provide new insights into stellar and planetary nurseries via dust thermal emission and polarization.
Hierarchical summing of our slot-dipole, phased-array antennas works as expected and that microstripline loss is acceptable. We have demonstrated reasonable spectral bandpasses and competitive optical efficiency in four of NEW-MUSIC's six bands, including validation of NEW-MUSIC's groundbreaking microstripline-coupled, parallel-plate-capacitor, lumped-element KIDs. The detectors' generation-recombination noise dominates over amplifier and two-level-system noise above 100 Hz, and scaling predictions indicate this performance should continue to hold down to 0.1 Hz. The NEW-MUSIC detectors are demonstrably photon-background-limited. Direct absorption is at the ≲ 1% level. We have fabricated and observed reasonable first-try yield for AlMn KIDs necessary for B1.
In the near term, we expect to provide the following key remaining demonstrations: beams for the three-scale antenna; similar sensitivity down to the 0.1–1 Hz audio frequencies necessary for astronomical scanning observations; AlMn KIDs with photon-background-limited sensitivity for B1; improved optical efficiency and spectral bandpasses; explicit measurements of loss and wave-speed for our a-Si:H microstripline; and, resonator yield and collision statistics. These results will enable the final design of NEW-MUSIC. Funding-permitting, NEW-MUSIC will deploy with a quarter-scale focal plane on LCT in 2027, with a focus on time-domain sources and object-oriented science. On-sky validation will motivate construction of the full focal-plane and readout system to enable wide-area surveys.
This work has been supported by the JPL Research and Technology Development Fund, the National Aeronautics and Space Administration under awards 80NSSC18K0385 and 80NSSC22K1556, the Department of Energy Office of High-Energy Physics Advanced Detector Research program under award DE-SC0018126, and the Wilf Foundation. The research was carried out in part at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration (80NM0018D0004). The authors acknowledge the work of numerous former students and collaborators in the development of the technologies presented here and thank Liam Connor, Anna Ho, Mansi Kasliwal, Shri Kulkarni, Sterl Phinney, and Vikram Ravi for development of the time-domain astronomy science targets.
|
http://arxiv.org/abs/2409.02552v2 | 20240904091830 | Cointegration test in time series analysis by global optimisation | [
"Alvey Qianli Lin",
"Zhiwen Zhang"
] | math.NA | [
"math.NA",
"cs.NA",
"65K05"
] |
OMSzplmmn
thmTheorem[section]
aspnAssumption
conjConjecture[section]
lem[thm]Lemma
prop[thm]Proposition
cor[thm]Corollary
ex[thm]Example
*reRemark
defn[thm]Definition
notationNotation
|
http://arxiv.org/abs/2409.03113v1 | 20240904223715 | On the complexity of the Eulerian path problem for infinite graphs | [
"Nicanor Carrasco-Vargas",
"Valentino Delle Rose",
"Cristóbal Rojas"
] | math.LO | [
"math.LO",
"math.CO",
"03C57 (primary), 03D45, 05C45, 05C85, 68R10 (secondary)"
] |
§ ABSTRACT
We revisit the problem of algorithmically deciding whether a given infinite connected graph has an Eulerian path, namely, a path that uses every edge exactly once. It has been recently observed that this problem is D_3^0-complete for graphs that have a computable description, whereas it is Π_2^0-complete for graphs that have a highly computable description, and that this same bound holds for the class of automatic graphs. A closely related problem consists of determining the number of ends of a graph, namely, the maximum number of distinct infinite connected components the graph can be separated into after removing a finite set of edges. The complexity of this problem for highly computable graphs is known to be Π_2^0-complete as well. The connection between these two problems lies in that only graphs with one or two ends can have Eulerian paths. In this paper we are interested in understanding the complexity of the infinite Eulerian path problem in the setting where the input graphs are known to have the right number of ends. We find that in this setting the problem becomes strictly easier, and that its exact difficulty varies according to whether the graphs have one or two ends, and to whether the Eulerian path we are looking for is one-way or bi-infinite. For example, we find that deciding existence of a bi-infinite Eulerian path for one-ended graphs is only Π_1^0-complete if the graphs are highly computable, and that the same problem becomes decidable for automatic graphs. Our results are based on a detailed computability analysis of what we call the Separation Problem, which we believe to be of independent interest. For instance, as a side application, we observe that König's infinity lemma, well known to be non-effective in general, becomes effective if we restrict to graphs with finitely many ends.
GraphEx: A Graph-based Extraction Method for Advertiser Keyphrase Recommendation
Kamesh Madduri
September 9, 2024
================================================================================
§ INTRODUCTION
The study of infinite graphs from a computability point of view dates back to the seminal work of Manaster and Rosenstein <cit.>, and of Bean <cit.>, who initiated a research program devoted to understand the algorithmic complexity of deciding classical graph properties for families of graphs that have an algorithmic description.
How hard is to decide a given property depends on how strong is the given computable description of the graph, and the program consists on characterizing this relationship, for instance according to the hierarchies of algorithmic complexity provided by computability theory.
A central notion in these works is that of a computable graph. Intuitively, an infinite graph is computable if there is an algorithm to decide whether or not two vertices are connected by an edge. These works also introduced the idea of considering stronger notions of computability for graphs, namely highly computable graphs. Highly computable graphs correspond to the subclass of locally finite computable graphs for which, in addition, one can effectively compute the degree of each vertex. Intuitively, one can think of a highly computable graph as a graph for which there is an algorithm capable of drawing a certified picture of any of its finite portions. A third class of graphs, introduced much more recently <cit.>, is that of automatic graphs. These are graphs whose vertex set and adjacency relation can be described by finite automaton, and their algorithmic properties have been sistematically compared with those of computable and highly computable graphs (see <cit.>).
The point of considering these stronger notions is that some problems may become easier for graphs with stronger algorithmic descriptions, and an important goal is to understand for which problems this is indeed the case.
An interesting example is given by the coloring problem. While Bean constructed an example of a computable graph with chromatic number 3 that cannot be computably colored with any finite number of colors <cit.>, Schmerl proved that any highly computable graph with chromatic number k admits a computable coloring with at most 2k-1 colors (and that such bound is tight) <cit.>.
Another particularly interesting example is the Eulerian path problem, which has a very long history in mathematics as it can be dated back to the famous problem of the seven bridges of Könisberg. In 1736, Euler solved this problem <cit.> by providing a characterization of those finite graphs possessing (what we now call) an Eulerian path, namely a path which visits every edge exactly once. Around two-hundred years later, Euler's characterization was extended to countably infinite graphs by Erdős, Grünwald and Vázsonyi <cit.>. From a computability perspective, it turned out that such result is non-effective for computable graphs, while it admits an effective version in the highly computable case. Let us call a graph Eulerian when it admits an Eulerian path: Bean <cit.> showed that, while there are computable Eulerian graphs for which every Eulerian path is non computable, a highly computable graph is Eulerian if and only if it has a computable Eulerian path –moreover, one can effectively compute (an index for) such a path given (an index for) the graph. See also <cit.> for a recent generalization of Bean's result.
It is important to notice that Bean's result is rather unusual, being in stark contrast with other fundamental results in graph theory which are proven to be non-effective even when restricted to highly computable graphs. Notable examples are the Hall's marriage theorem <cit.>, Ramsey's theorem <cit.> and the existence of vertex or edge colorings of a certain size <cit.>.
A related question is that of deciding whether a given graph is Eulerian. This problem was shown to be D_3^0-complete (where D_3^0 denotes the class of sets which can be expressed as differences of two Σ_3^0 sets) for computable graphs, whereas the same question is only Π_2^0-complete in the case of both highly computable graphs or automatic graphs <cit.>[Notice that in his survey <cit.>, Gasarch states, without proof, that the existence of an Eulerian path in computable graphs was Π_3^0-hard and Σ_4^0, while the precise complexity was still unknown. Also, Π_2^0-completeness of the same problem in the case of highly computable graphs is stated, without proof, in <cit.>.].
The proof of Π_2^0-completeness from <cit.> strongly relies on the classical characterization by Erdős, Grünwald and Vázsonyi <cit.>, which relates the existence of Eulerian paths to a certain topological property of graphs, namely, the number of ends of a graph. Indeed, according to this characterization, only graphs with one end can have one-way Eulerian paths, while only graphs with one or two ends can have bi-infinite (or two-ways) Eulerian paths. Notably, Kuske and Lohrey also showed that the problem of counting the number of ends of a graph is itself Π_2^0-complete. Even if we know that the number of ends of a graph is at most two, deciding whether is has one or two ends is already a Π_2^0-complete problem.
§.§ Main results
In this paper, motivated by a deeper understanding of how the complexity of these different problems interact, we fix the number of ends of a graph and ask whether the complexity of the Eulerian path problem changes. Interestingly, we find that leaving the number of ends out of the decision task makes the Eulerian path problem significantly easier.
Theorem A. Determining whether a highly computable graph with one end has a one-way Eulerian path is Σ_2^-1-complete, whereas for two-ways Eulerian paths it is only Π_1^0-complete. On the other hand, for highly computable graphs with two ends, the problem attains precisely the m-degrees of Δ_2^0 sets.
The situation for highly computable graphs is summarized in the Table <ref>.
We also ask the question for the class of automatic graphs, for which the same Π_2^0-completeness bound holds in the general case. Surprisingly, we find that the complexity in this case decreases dramatically when restricted to graphs with only one end.
Theorem B. Determining whether an automatic graph with one end has an Eulerian path is a decidable problem.
These results are based on a detailed analysis of what we call the Separation Problem. This problem consists of determining the number of different connected components a graph gets separated into after removing a finite number of edges. Our main tool is a result stating the computability of the Separation Problem from a description of a highly computable graph plus some additional, non-uniform, finite information about it. This result, together with some other related observations, constitute our main technical contribution. As an amusing consequence of this analysis, we show that König's lemma is effective for graphs with finitely many ends:
Theorem C. A connected, highly computable, and locally finite graph with finitely many ends is infinite if and only if it admits a computable infinite path.
On the other hand, we also provide an example of a highly computable infinite graph with only one end, for which every geodesic infinite path is uncomputable, showing some subtleties in this kind of questions.
§.§ Related work
Besides the Eulerian Path problem, in his work <cit.> Bean also considered the case of Hamiltonian paths (those paths visiting each vertex of a given graph exactly once) and showed that Hamiltonian paths may be all uncomputable even for highly computable graphs.
More recently, Kuske and Lohrey in <cit.> characterized the exact complexity of both the Eulerian path problem and the Hamiltonian path problem: these problems are, respectively, Π_2^0-complete and Σ_1^1-complete, and these bounds apply to highly computable as well as automatic graphs. As further evidence of the relevance of this kind of problems in computability theory, we would like to point out that a recent work by Jura, Levin and Markkanen <cit.>, extending the seminal work by Gasarch and Lee <cit.>, considers a notion of effectivity for graphs that lies between computable and highly computable graphs. In particular, they show that for every non-computable Δ_2^0 oracle A, the ability to compute non-trivial neighborhoods in a graph (that is, A can compute the degree function of a graph which is not highly computable) is equivalent to the existence of A-computable graphs (i.e., computable graphs whose degree function is computable in A) which admit Eulerian paths but no computable Eulerian path.
§.§ Paper organization
In <Ref> we briefly recall all the notions that will be relevant for our study. In <Ref> we introduce the Separation Problem, and develop a first set of technical results about its computability, which we directly use to prove Theorem C. Then in <Ref> we put this analysis together with some observations specific to the Eulerian path problem and prove Theorems A and B. Finally, in <Ref> we carry on a more in-depth analysis of the Separation Problem and establish a fairly complete classification result for it.
§.§ Acknowledgements
N. Carrasco-Vargas was partially supported by ANID Doctorado Nacional/2020-21201185, ANID/ Basal National Center for Artificial Intelligence CENIA FB210017, and the European Union's Horizon 2020 research and innovation program under the Marie Sklodowska-Curie grant agreement No 731143. V. Delle Rose was supported by the project PRIN 2022 Logical Methods in Combinatorics No. 2022BXH4R5 of the Italian Ministry of University and Research (MIUR). C. Rojas was supported by ANID/FONDECYT Regular 1230469 and ANID/Basal National Center for Artificial Intelligence CENIA FB210017.
§ PRELIMINARIES
§.§ Graph theory
We consider undirected, simple graphs. That is, edges are non-ordered pairs of vertices and no loops or multiple edges between vertices are allowed. The set of vertices of a graph G is denoted by V(G), and its set of edges by E(G). Each edge joins a pair of vertices, and is said to be incident to each of the vertices it joins. Two vertices joined by an edge are called adjacent or neighbors. The degree of a vertex v, denoted _G(v), is the number of edges in G incident to v. A graph is said to be even when every vertex has even degree.
We will consider subgraphs that are induced by sets of edges or vertices. We will often write A ⋐ B as an abbreviation for A is finite and A ⊆ B. Let V⊂ V(G). The induced subgraph G[V] is the subgraph of G whose vertex set is V, and whose edge set is {e∈ E(G) : e joins vertices in V}. We write G∖ V=G[V(G)∖ V]. Now let E⊂ E(G). The induced subgraph G[E] is the subgraph of G whose edge set is E, and whose vertex set is {v∈ V(G) : v is incident to an edge in E}. We write G∖ E = G[E(G)∖ E].
A path p in a graph is a sequence of adjacent vertices v_0,…,v_n. We call v_0 and v_n the initial and final vertices of p, respectively. We say that p visits the set of vertices {v_i| 0≤ i≤ n}, and the set of edges {e_i| i∈{0,…,n-1}, e_i joins v_i to v_i+1}. A graph is connected if every pair of vertices can be joined by a path. In a connected graph G, the distance d_G(v,w) between two vertices v and w is the length of the shortest path joining them, where the length of a path is the number of edges that it visits. A connected component of a graph is a connected subgraph that is maximal for the subgraph relation.
The following notion will be central for us.
The number of ends of a connected graph G is the supremum of the number of infinite connected components of G∖ E, where E ranges over all finite subsets of E(G).
Observe that the number of ends of a graph G may be infinite. We note that other ways of defining the number of ends of a graph exist, but they all coincide for locally finite graphs <cit.>.
We remark that, for the sake of clarity, in what follows we will restrict ourselves to study the case of graphs that are connected and locally finite. However, our definitions apply to connected graphs which are not necessarily locally finite, and many of our results extend straightforwardly to this more general case. We will explicitly point this out when appropriate.
§.§ Computability theory
Let us review some computability theoretical terminology which will be used throughout the paper. The notation we use is quite standard and follows mostly the textbook <cit.>. Note that we will often identify a set A ⊆ with its characteristic function A →{0, 1 } so that A(n)=1 n∈ A. For a set A we will write A to denote its complement.
Given a partial computable function φ, it is understood that it is computed by some Turing machine, which we also denote by φ. We write φ_s to denote the outcome of (running the Turing machine) computing φ for s many steps: if such computation is successful, namely φ halts within s many steps, we write φ_s ↓, otherwise we write φ_s ↑. We also write φ↓ in case there is an s for which φ_s ↓, and φ↑ otherwise. Similarly, for a c.e. set W, we denote by W_s the set of its elements enumerated within s steps.
It is well known that one can give a computable enumeration of all partial computable functions, as well as of all c.e. sets. Also, one can consider a computable enumeration (φ_e)_e ∈ of Turing machines without input and define the halting set = {e φ_e ↓}.
In this paper we will make an extensive use of two notions of reducibilities among sets, both of them measuring, intuitively, the relative complexity of those sets.
The first one is defined in terms of oracle Turing machines, namely Turing machines with an additional one-way read only tape: when φ is equipped with oracle Y, we write φ^Y and we mean that, for each n, the n-th cell of such tape contains a 1 if n ∈ Y, and a 0 otherwise.
Let A, B ⊆.
A is Turing-reducible to B, denoted by A ≤_T B, if there is an oracle Turing machine φ such that φ^B(n) = A(n) for every n ∈. Moreover, if both A ≤_T B and B ≤_T A holds, we write A ≡_T B and say that A and B are Turing-equivalent.
Notice that, by means of oracle Turing machines, one can define the halting set relative to any set Y, i.e. the set Y'={e: φ_e^Y ↓}. Therefore, it is possible to consider, for example, ∅”, the halting set relative to ∅' and, similarly, ∅^(n) for every n>0: in this way, one obtains a strictly increasing chain with respect to ≤_T, namely ∅ <_T ∅ ' <_T ∅”<_T …
Let us turn to the second notion of reducibility:
Given A, B ⊆, we say that A is many-one-reducible (or, simply, m-reducible) to B, and write A ≤_m B, if there is a computable function f → such that, for every n,
n ∈ A if and only if f(n) ∈ B.
If both A ≤_m B and B ≤_m A holds, A and B are said to be m-equivalent and we write A ≡_m B.
It is immediate to verify that both ≡_T and ≡_m are equivalence relations: the Turing-degree (respectively, the m-degree) of A ⊆ is the equivalence class of A with respect to ≡_T (respectively, ≡_m).
Reducibilities are closely related to the core idea of hierarchies, which group sets in increasing levels of difficulty. One which is widely used in computability theory is the so-called arithmetical hierarchy, whose levels are defined as follows.
Let A ⊆ and n≥0.
* A is Σ_n^0 if there is a computable relation R ⊆^n+1 such that, for all x,
x ∈ A if and only if ∃ y_1 ∀ y_2 … Q y_n R(x,y_1, …, y_n),
where Q is ∃ if n is odd, and ∀ if n is even.
* A is Π_n^0 if A is Σ_n^0.
* A is Δ_n^0 if it is both Σ_n^0 and Π_n^0.
Moreover, A set is said to be Σ_n^0-complete (respectively, Π_n^0-complete) if it is Σ_n^0 (Π_n^0) and for every Σ_n^0 (Π_n^0) set X it holds that X ≤_m A.
The arithmetical hierarchy is tied to the Turing degrees of the form ∅^(n) by the well-known Post's Theorem.
For every n > 0, ∅^(n) is Σ_n^0-complete and ∅' is Π_n^0-complete.
Moreover, for every n ≥ 0 and A ⊆:
* A is Σ_n+1^0 if and only if A is c.e. relative ∅^(n);
* A is Δ_n^0 if and only if A ≤_T ∅^(n).
We will also use the fact that Inf = {e |W_e|= ∞} and Fin = {e |W_e| < ∞} are, respectively, Π_2^0-complete and Σ_2^0-complete (see, e.g., <cit.>).
Another notion we will use extensively is that of computable approximation of a set.
A computable approximation of A ⊆ is a computable function f ^2 →{0, 1 } such that, for all n:
* lim_s f(n,s) exists (i.e. |{f(n,s) f(n,s+1)}| < ∞), and
* A(n) = lim_s f(n,s).
The following well-known result characterizes the Δ_2^0 sets precisely as those admitting computable approximations:
A set A ⊆ is Δ_2^0 if and only if A has a computable approximation.
Using the characterization given by the Limit Lemma, it is possible to provide a hierarchy of the Δ_2^0 sets based on the number of mind changes in their computable approximations. This is known as the Ershov hierarchy: as proven in <cit.>, the whole hierarchy exhausts all Δ_2^0 sets. We will define only the initial levels of such hierarchy, which are the only ones playing a role in what follows, and refer the interest reader to the excellent survey <cit.>.
Let n ≥ 0 and A⊆ be a Δ_2^0 set. We say that:
* A is Σ_n^-1 (or n-c.e.) if there is a computable approximation f of A such that, for every x ∈, |{s f(x,s) f(x,s+1)| ≤ n.
* A is Π_n^-1 if A is Σ_n^-1.
* A is Δ_n^-1 if it is both Σ_n^-1 and Π_n^-1.
The terminology n-c.e. is motivated by the following characterization, whose proof is straightforward.
Let A ⊆. Then:
* A is 0-c.e. if and only if A is computable.
* A is n-c.e., with n=2k, if and only if there exist c.e. sets A_1 ⊇ A_2 ⊇…⊇ A_2k such that A = (A_1 ∖ A_2) ∪…∪ (A_2k-1∖ A_2k).
* A is n-c.e., with n=2k+1, if and only if there exist c.e. sets A_1 ⊇ A_2 ⊇…⊇ A_2k+1 such that A = (A_1 ∖ A_2) ∪…∪ (A_2k-1∖ A_2k) ∪ A_2k+1.
The m-degrees of n-c.e. sets have been extensively studied in <cit.>: in particular, for each n > 0, one can prove the existence of a Σ_n^-1-complete set, namely a n-c.e. set C such that A ≤_m C whenever A is n-c.e.
§.§ Computability for Graphs
We now recall some standard computability notions for countably infinite graphs.
A graph G is computable when V(G) is a decidable subset of , and the adjacency relation is a decidable subset of ^2. A computable graph G is highly computable when it is locally finite and the vertex degree function V(G)→ is computable.
The notion of highly computable graphs can be naturally extended to graphs which are not necessarily locally finite by requiring the vertex degree function to be computable as a function from V(G) →∪{∞}.
The notion of computable graph is strictly weaker than that of highly computable. In particular, decidability of the adjacency relation only allows to enumerate the set of neighbors of a vertex, and we need to know the degree of this vertex to be sure that no more neighbors can be added. Indeed, an easy example of a (connected and locally finite) graph which is computable but whose degree function is uncomputable is the following: let V(G) = and E(G) = { (x,x+1) | x ∈}∪{ (m, -n) | n = ⟨ m,s ⟩ and φ_m,s-1↑ and φ_m,s↓}, where (φ_e)_e ∈ is some fixed enumeration of all Turing machines without input and ⟨ · , · ⟩ is a fixed computable bijection of ^2 to . Notice that the degree of each vertex is either 2 or 3, but, for m >0, we have that d_G(m) = 3 if and only if φ_m halts.
Notice that, equivalently, we could define a highly computable (locally finite) graph as one for which there exists a computable function capable of drawing any finite portion of the graph: more precisely, G is highly computable if and only if there is a computable function f V(G) ×→ which maps any pair (v,r) to (an index of) the finite graph G[{w ∈ V(G) | d_G(v,w) ≤ r}]. We also note that this characterization does not apply to the case of graphs which are not locally finite.
Finally, we say that a sequence (G_e)_e ∈ of highly computable graphs is uniformly highly computable if there are computable functions V, E and with two inputs, such that
* V(e,v)=1 v ∈ V(G_e),
* E(e,⟨ v,w⟩)=1 {v,w}∈ E(G_e) and
* (e,v)=n v ∈ V(G_e) and _G_e(v)=n.
§ THE SEPARATION PROBLEM
Where does the hardness of the Eulerian path Problem comes from? From the classical result by Erdős, Grünwald and Vázsonyi <cit.>, we expect the number of ends of a graph to play a central role and, in fact, the proof behind the upper bound given in <cit.> shows that it is precisely deciding whether the graph has the right number of ends where the complexity lies.
Recall that the number of ends of a graph corresponds to the maximal number of different infinite connected components the graph can be separated into by removing a finite set of edges. The focus of this Section will be on characterizing the algorithmic complexity of what we call the Separation Problem: given a graph G and a finite set of edges E, to count the number of different infinite connected components of G∖ E. Let us therefore start by introducing some vocabulary.
Let G be a locally finite connected graph, and let E⊂ E(G)
be a finite set of edges. We denote by _G(E) the number
of distinct infinite connected components in G∖ E. We say
that E is separating or that it separates G when _G(E)≥2. We denote by G the collection of all separating finite sets of edges, and by G the collection of all finite sets of edges E⊂ E(G) such that _G(E) equals the number of ends of G.
We note that a finite set of edges E belongs to G precisely when all the connected components of G∖ E have either 0 or 1 end. In particular, when G has infinitely many ends the set G is empty, and if G has only one end, then any finite collection of edges (including the empty one) belongs to G.
Our goal will be to establish the algorithmic complexity of computing _G(E). As we shall see, while as a function of both G and E this is an upper-computable function in general, it becomes computable when we fix a graph G with finitely many ends and see it as a function of E alone. The results we present here will be crucial in <Ref>, where we study the complexity of the Eulerian Path Problem for infinite graphs.
We start by showing that for graphs with finitely many ends, the quantity _G(E) can be computed in a non-uniform way, that is, by an algorithm which in addition to G and E, receives some suitable extra finite information about the graph G.
There is a partial computable function which, when receiving as inputs a (program for a) highly computable graph G with finitely many ends, a finite set E ⊂ V(G), the number of ends of G and an arbitrary element W ∈G, outputs _G(E).
We start by exhibiting an algorithm, called 𝒜, that will be used several times in the course of this proof. The algorithm 𝒜 takes as input a program for an infinite and highly computable graph G with finitely many ends, the number k of ends of G, and an element W∈G. Thus G∖ W has exactly k infinite connected components. The algorithm computes the set V of vertices in G∖ W that are incident to some edge from W, and then outputs a partition V=V_1⊔…,⊔ V_k⊔ F, where V_i is the set of vertices in V that lie in the i-th infinite connected component of G∖ W, and F is the set of vertices in V that lie in a finite connected component of G∖ W.
We now explain the process executed by 𝒜. It is clear how to compute the set V from the input, as the graph is highly computable. For each vertex v∈ V, we denote by C_v the connected component of G∖ W containing v. Now observe the following:
* For every v∈ V, the property “C_v is finite" is c.e event, as this can be verified by computing finite portions of the graph G.
* For every u≠ v in V, the property “C_u=C_v" is a c.e event, as this can be verified by finding a finite path in G from u to v, and which visits no edge from W.
Our algorithm 𝒜 executes the individual algorithms for the finitely many c.e. events indicated above. That is: for each v∈ V, we wait to see whether C_v is finite, and for each pair of vertices u v∈ V, we wait until eventually we see that C_u=C_v. Knowing the number of ends of G, and since W ∈G, eventually these algorithms will provide enough information to write a partition V=V_1⊔…,⊔ V_k⊔ F as desired. At this point the algorithm 𝒜 stops and outputs this partition.
We now proceed to prove the claim in the statement.
Assume we receive as inputs some description of the highly computable graph G, a finite set of edges W ∈G, the number k of ends of G, and a finite set E ⊂ E(G). We will compute a finite set of edges U satisfying the following conditions:
* U contains E and W.
* G[U] is connected.
* For each pair of vertices u,v in G∖ U that are incident to edges from U, and such that u,v lie in the same infinite connected component of G∖ U, there is a path from u to v that is completely contained in G[U∖ E].
* G∖ U has exactly k infinite connected components, and no finite conected component.
Computing a set with these properties requires some care, and we need to introduce some notation. We fix a vertex v_0, and for each r∈ we denote by U_r the edge set of the induced graph G[{v∈ V(G)| d_G(v,v_0)≤ r}]. We let L_r be the set of vertices in G∖ U_r that are incident to some edge from U_r. Observe that we can compute U_r and L_r from r and the other input parameters, as the graph G is highly computable.
We now define r_0 as the smallest natural number such that both E and W are contained in U_r_0. Observe that r_0 exists, and that U_r_0 lies in G. It is straightforward how to compute r_0. Next, we define r_1 as the smallest natural number having the following property: for every pair of vertices u,v∈ L_r_0 that lie in the same infinite connected component of G∖ U_r_0, there is a path from u to v contained in G[U_r_1∖ U_r_0].
The number r_1 is well defined, and can be computed from the inputs. We provide the details for the interested reader. Observe that for every pair of vertices u and v in L_r_0 that lie in the same infinite connected component of G∖ U_r_0, there is a path from u to v in G∖ U_r_0, since such graph is connected. Moreover, this path is contained in G[U_r∖ U_r_0] for some r big enough. As there are finitely many paths to consider, it follows that r_1 exists. In order to compute r_1 it suffices to show that the defining condition for r_1 is decidable given an arbitrary number r≥ r_0. For this we need the algorithm 𝒜. In order to check whether r≥ r_0 satisfies the condition, we first use the algorithm 𝒜 to list all pairs of vertices in L_r_0 that lie in the same infinite connected component from G∖ U_r_0. Then, for each pair of vertices of this list, we check whether they lie in the same connected component of the finite graph G[U_r∖ U_r_0].
After computing r_1, we compute the set U_f of all edges in a finite connected component of G∖ U_r_1. It is clear how to do this with the algorithm 𝒜. We define U=U_r_1∪ U_f, and we claim that this set verifies conditions (1), (2), (3) and (4) defined above. Conditions (1), (2) and (4) are clear. We verify condition (3), so let u and v be vertices in L_r_1 as in the definition of condition (3). Observe that there is a path completely contained in G[U_r_1∖ U_r_0], from the vertex u to some vertex u' in L_r_0. The existence of this path can be seen by taking any path with minimal length from u to v_0. Similarly, there is a path completely contained in G[U_r_1∖ U_r_0] from the vertex v to some vertex v' in L_r_0. But there is also a path in G[U_r_1∖ U_r_0] from u' and v' (this was the defining condition for r_1). We can use these three paths to obtain a new path in G[U_r_1∖ U_r_0] from u to v, so our claim follows.
We are almost ready to count the number of infinite connected components in G∖ E. Let V be the set of all vertices in G∖ U that are incident to some edge from U. We use the algorithm 𝒜 to compute a partition V=V_1⊔…⊔ V_k as indicated before. Our task now is to understand for which i j, V_i and V_j are in the same connected component of G∖ E.
We claim that the infinite connected component of G∖ E that contains V_i equals the one that contains V_j if and only if there is a path within G[U] that visits no edge from E, and that joins some vertex in V_i to some vertex in V_j. The backward implication is obvious. For the forward implication, suppose that V_i and V_j lie in the same connected component of G∖ E. Then there is a path p in G whose initial vertex is in V_i, whose final vertex is in V_j, and which visits no edge from E.
Some segments of p may escape from the graph G[U∖ E] through a vertex in V, and then enter again to G[U∖ E] through another vertex in V. Condition (3) in the definition of U ensures that we can replace these segments by segments completely contained in G[U∖ E]. After replacing all these segments, we end up with a path with the same initial and final vertex as p, but contained in G[U∖ E]. This shows the forward implication.
Finally, we define an equivalence relation ∼ in {1,…,k} as follows: i∼ j when there is a path in G[U] that visits no edge from E, and that joins some vertex in V_i to some vertex in V_j. This relation is computable because G[U] is a finite graph, and we can compute the number of equivalence classes of ∼. The paragraph above shows that this number equals _G(E). This finishes our computation.
A direct consequence of this result is that for each graph G there is an algorithm that computes _G(E) for any given E, and thus both G and G are (non-uniformly) decidable sets.
If G is a connected highly computable graph with finitely many ends, then the function E↦_G(E) is computable. In particular, G and G are decidable.
Let us denote by S(G,E,n,W)=_G(E) the computable function from <Ref>, where G is a highly computable graph with n ends, E is a finite set of edges and W is some element in G. When seen as a function of E, that is, when G,n,W are fixed, this is a computable function that allows to decide both G and G, as E∈G_G(E)≥2
and E∈G_G(E)=n.
We now show that in the uniform setting and without any extra information, we have that the function (G,E)↦_G(E) is only upper semicomputable. Observe that now there is no assumption on the number of ends of G.
There is an algorithm which given a description for a highly computable graph G and E⋐ E(G), computes a sequence of natural numbers (_G,n(E))_n∈ whose minimum equals _G(E).
Let G and E be as in the statement. We proceed as in the first part of the proof of <Ref> and let V be the set
of vertices in G∖ E that are incident to some edge
in E. Further, for each v∈ V, we let C_v be the connected
component of G∖ E that contains v. Now, for each vertex v∈ V and each n∈, we define E_v,n
as the set edges in some path in G of length at most n, starting at v, and that visit no edge from E. Observe that these sets can be uniformly computed (from G, E,
v and n). It is then clear that for all u,v∈ V:
* C_v is finite when for some n, we have E_v,n=E_v,n+1.
* C_u=C_v when for some n, we have E_u,n∩ E_v,n∅.
For every n∈ we let _G,n(E) be the
number of infinite connected components of G∖ E that
we can distinguish with the collection of sets {E_v,n| v∈ V}
and the observations above. By this we mean the following.
We let 𝒜 _n={E_v,n| v∈ V, E_v,n+1 E_v,n}. We define a relation ∼ on A_n by setting E∼ E' E∩ E'∅, and we extend ∼ to its transitive closure. Finally, we let _G,n(E) be the number of equivalence classes of ∼. It is clear that the sequence (_G,n(E))_n∈ converges monotonously to _G(E), so our claim follows.
As a corollary we obtain the corresponding upper bound on the complexity of deciding G, uniformly in G.
The problem of deciding membership
in G with an algorithm uniform on G, is Π_1^0.
That is, there is an algorithm which, on input a connected and highly
computable graph G and a finite set of edges E, halts if and
only if E∉G.
In <Ref> we will show that
this problem is in fact Π_1^0-complete in its uniform version. We will also study the difficulty of computing G, from a description of G.
All the notions and results of this section can be extended to the case of graphs which are not necessarily locally finite (recall that in this case the vertex degree function is assumed to be computable as a function of V(G)→∪{∞}, see <Ref>). The intuitive idea is that given E⋐ E(G), the set of vertices in G∖ E incident to edges from E is finite and computable, even if G is not locally finite. We can use this finite set of vertices as labels for the connected components of G∖ E, and reproduce the same computations (see for instance <cit.>). It is not hard then to adapt the proof of <Ref> to this more general case. Observe that given a graph G that is not locally finite, the sets U_r defined in the proof may be infinite. However, it suffices to replace these sets by finite approximations to them; the rest of the argument remains unchanged.
Let us end this section by illustrating an interesting connection between the separation problem and the problem of computing infinite paths in a graph. In particular, we will see that as a consequence of our results, infinite graphs with finitely many ends always admit computable infinite paths.
§.§ Computing infinite paths by separation
The word path receives different meanings in graph theory, according to whether one admits repeated vertices or not. We say that a finite or infinite path is simple when every vertex is visited at most once.
A fundamental result in graph theory, known as König's Infinity Lemma,
asserts that a connected and locally finite graph admits an infinite
simple path if and only if it is infinite. It is well known that this result is non effective in general, in the sense that there exists highly computable
graphs satisfying these hypotheses but where all infinite simple paths are uncomputable, see for instance <cit.> and <cit.>.
Here we establish a close connection between the separation problem, and the problem of determining whether a finite simple path in a graph can be extended to an infinite one. Let us denote by G the set of finite words v_0… v_n∈ V(G)^∗ such that v_0,…,v_n is a simple path in G that can be extended to an infinite simple path (v_n)_n∈. Our goal is to prove:
Let G be a highly computable and connected infinite
graph. Then G≥_TG. If G is a tree, then
we have an equivalence G≡_TG. The reductions
are uniform on G.
Putting this result together with <Ref>, we obtain the following amusing consequence.
If G is a highly computable infinite graph with finitely many ends, then G is a decidable set. In particular, G admits computable infinite simple paths.
In other words, the property of “having infinite simple paths but no computable one" is exclusive to graphs with infinitely many ends. The proof of <Ref> will be based on the following lemma.
Let G be a highly computable graph. Then G is Turing equivalent to the set of pairs (E,U) where E∈G and U is the (finite) subset of vertices that are adjacent to E that belong to an infinite connected component of G∖ E.
Moreover, the reduction is uniform on the graph G.
We prove the nontrivial reduction. For this we describe an effective procedure which given E∈ and with oracle access to G, computes the set of
vertices U as in the statement.
On input E, we start by computing the set {u_1,…,u_n} of vertices in ∖ E
which are incident to E. For each i, we let _i be the connected component
of ∖ E containing u_i. In this notation, the set U that we are trying to compute equals {u_i : G_i is infinite}. For each i∈{1,…,n} and r∈, we define
E_i,r as the set of edges that lie in a path of length at most
r, with initial vertex u_i, and which visits no edge from E.
As is highly computable, we can compute E_i,r from E,
i and r. Observe now that _i is infinite if and only if for some r, the set of edges E_i,r lies in .
This is a Σ_1^0 condition with oracle access to . But “G_i is finite” is also a Σ_1^0 condition. It follows that given E we can compute the set {i∈{1,…,n} : G_i is infinite}, which is what we wanted.
Let v_0… v_n be a word in V(G)^∗. As the graph G is highly computable, we can check whether it corresponds to a finite simple path, that is, whether v_i is adjacent to v_i+1 and whether no vertices are repeated. Let E be the set of all edges incident to some of these v_i. Now observe that (v_i)_i=0^i=n can be extended to an infinite simple path in G if and only if v_n is adjacent to some vertex from G that lies in an infinite connected component from G∖{v_0,…,v_n}=G∖ E. This is decidable with oracle G by <Ref>. This proves the reduction G≥_TG.
We now assume that is a tree, and prove the reduction G≥_TG.
We exhibit a procedure which, given a finite set E⊂ E(), a
vertex u in ∖ E that is incident to some edge
in E, and using G as oracle, determines if the connected
component of ∖ E containing u is infinite. It
is clear how to use this information to check whether E∈G. The procedure is as follows. We denote by C be the connected component
of ∖ E containing u. We start by computing the
set V of vertices adjacent to u in the graph ∖ E.
This is possible as the graph is highly computable. If this set is
empty, then we conclude that C is certainly finite. If V is
nonempty, then we observe that, by König's infinity Lemma, C
is infinite if and only if it has some infinite path starting at u. Moreover, these paths are in correspondence
with those infinite paths in that start at u and then visit
some vertex v∈ V. This is true because both C and are trees. Thus in order to decide whether
C is infinite, it suffices to check whether some of the finitely
many elements {uv| v∈ V} lies in G.
The following question raises naturally from <Ref>.
Is it true that G≡_TG for every locally finite graph G?
We also mention that it is not clear if G allows to determine whether a finite path can be extended to a two-way infinite path. It can be proved that for a highly computable infinite tree G we have that
{p∈G : p can be extended to a two-way infinite path}
is Turing equivalent to G, with a reduction uniform on the tree G. This is a straightforward adaptation of the proof for <Ref>. However, it is not clear whether any of these reductions can be adapted to graphs that are not necessarily trees.
We now provide an example illustrating the limitations of <Ref>. A finite path (v_i)_i=n^i=m in a graph is called geodesic when its length is minimal among all finite paths with the same initial and final vertex, and an infinite path is geodesic when its restriction to {n,…,m} is geodesic for all n,m∈, n<m. These concepts are fundamental in combinatorial and geometric group theory, see for instance <cit.>.
There is a highly computable graph Λ with one end, where the degree of every vertex is at most 6, such that all infinite geodesic paths are uncomputable. Moreover, the set
{p∈Λ : p can be extended to an infinite geodesic path}
is not c.e.
Let T be a subtree of the rooted binary tree that is connected, highly computable, infinite, and with no computable infinite simple path. The existence of T is proved in <cit.>. We define a new graph Λ as follows. The vertex set of Λ is the cartesian product V(T)× V(T), and two vertices (u_1,v_1) and (u_2,v_2) are neighbors when one of the following conditions occurs:
* u_1=u_2 and v_1 is adjacent to v_2 in T.
* v_1=v_2 and u_1 is adjacent to u_2 in T.
It is clear that Λ is highly computable, and that every vertex has degree at most 6. Observe that a path p in Λ can be seen as a combination of two paths in T, p_1 and p_2, that are “visited" by p one node at a time, jumping between p_1 and p_2 from time to time. Importantly, different orders of visiting p_1 and p_2 correspond to different paths in Λ. Moreover, p can be made simple even if the paths p_1 and p_2 are not. In particular, it is not hard to adapt the argument used to show that the grid has one end, to prove that so does Λ. We leave these details to the reader.
Let us now examine the distance d_Λ, and geodesics in the graph Λ. We claim that given a pair of vertices (u,v) and (u',v') in Λ, their distance d_Λ satisfies the following:
d_Λ((u,v),(u',v'))=d_T(u,v)+d_T(u',v').
This simply follows by examining the definition of Λ, but we review the details for the interested reader. Consider geodesics u=u_0,…,u_n=u' and v=v_0,…,v_m=v' in the graph T. Then (u_i,v_0)_i=0^i=n is a path in Λ from (u_0,v_0) to (u_n,v_0), and (u_n,v_i)_i=0^i=m is a path in Λ from (u_n,v_0) to (u_n,v_m). Thus we can “paste” them at the vertex (u_n,v_0) to obtain a path with length n+m, whose initial vertex is (u,v) and whose final vertex is (u',v'). This proves the inequality ≤ in <Ref>. For the other inequality, observe that given a path (u,v)=(u_0,v_0),(u_1,v_1),…,(u_k,v_k)=(u',v') in Λ, we can project to the first and second coordinate and recover two paths in T, one from u to u' and another one from v to v'. Indeed, we can partition
{0,…,k-1}={i:u_i u_i+1}⊔{i:v_i v_i+1}.
The fact that these sets are disjoint follows from the fact that for i< k and u_i u_i+1, we have v_i=v_i+1 by the definition of the edge relation of Λ.
After reordering, the first set can be used as index set for a path from u to u', while the second can be used as index set for a path from v to v'. This proves the inequality ≥ in <Ref>.
Now let (u_n,v_n)_n∈ be a geodesic infinite path in Λ, and suppose that it is computable to reach a contradiction. Thus for each n we have
d_Λ((u_0,v_0),(u_n,v_n))=n.
For n≥ 1 arbitrary, it is not possible to have d_T(u_0,u_n)<d_T(u_0,u_n+1) and d_T(v_0,v_n)<d_T(v_0,v_n+1), this follows from the definition of the edge relation for Λ. Joining this fact with <Ref>, we conclude that (d_T(u_0,u_n))_n∈ and (d_T(v_0,v_n))_n∈ are nondecreasing sequences, and at least one of them tends to infinity. We suppose without loss of generality that (d_T(u_0,u_n))_n∈ tends to infinity. Thus the sequence (u_n)_n∈ may have some repeated elements, but each element can be repeated only finitely many times, and with a set of indices that is a connected interval of . Thus we can define a new sequence (u'_n)_n∈ by removing all repetitions from (u_n)_n∈. It is clear that this sequence is an infinite simple path in T, and it is computable, contradicting our hypothesis on T.
Finally, observe that if the set
C={p∈Λ : p can be extended to an infinite geodesic}
was c.e. then we could compute an infinite geodesic in a recursive manner. That is, we run an enumeration of C until a first element p_1 is enumerated, then we wait until an element p_2 that extends p_1 appears, and so on.
Notice that even though the graph Λ above does not have any computable geodesic infinite path, it must have some computable infinite simple path, by <Ref>. What do such paths look like then? The key observation is that, if we think of a path in Λ as a combination of two paths p_1 and p_2 in the tree T, then one can search for an infinite path in T using p_1 while keeping p_2 at a fixed node, and in case we chose the wrong branch and end up in a leaf of T, we can change the node from p_2 and then “go back" with p_1 to try a different branch, without formally repeating nodes in Λ.
We finish this section recalling other known results about decision problems for finite paths on infinite graphs. It is proved in <cit.> that there is a highly computable graph with one end and which admits infinite Hamiltonian paths, but no computable one. Thus for this graph the problem of determining whether a simple path can be extended to an infinite Hamiltonian path is undecidable. On the other hand, it is proved in <cit.> that this problem becomes decidable when we instead consider 3-paths and Hamiltonian 3-paths, where a sequence of vertices (v_i)_i=0^i=n is a 3-path in a graph G when the distance between consecutive vertices is at most 3. Finally, it is proved in <cit.> that given a highly computable Eulerian graph, the problem of determining whether a path can be extended to an infinite Eulerian path is decidable.
Let us observe that given a highly computable graph G with finitely many ends, G may have some
Let G be a highly computable and connected graph with finitely many ends. We have proved that G is a decidable set ( However, it may have interesting subsets with different complexities. For instance, it is proved in <cit.> that the set of paths in that can be extended to a one-way (or two-way) Eulerian path is decidable. In contrast, it is proved in <cit.> that this is false for Hamiltonian paths. The author constructs an example of a graph G with one end which admits one-way infinite Hamiltonian paths, but in such a manner that none of them is computable. An easy argument shows that for this graph, the set
{p∈G : p can be extended to a one-way Hamiltonian path}
is not even recursively enumerable. More recently, it
We have proved that if G has finitely many ends then G is a decidable set. However, some subsets of interest may still be undecidable. For instance, it is proved in <cit.> that there is a highly computable graph G with one end which admits infinite Hamiltonian paths, but no computable Hamiltonian path. An easy argument shows that then
{v_0… v_n∈G : v_0… v_ncan be extended to an infinite Hamiltonian path}
is not even recursively enumerable. Here we provide another example in the same direction. We say that an infinite path (v_n)_n∈ in a graph G is geodesic when for all n,m∈ we have d_G(v_n,v_m)=|m-n|.
Proposition. There is a highly computable graph with one end, and thus the set of infinite paths is computable by our results, but where no geodesic infinite path is computable.
Proof. Let T be a subtree of the infinite binary tree that is highly computable, and that has no computable infinite path [cita]. Then the product graph T× T satisfies the claim.
§ THE COMPLEXITY OF THE EULERIAN PATH PROBLEM
The celebrated work by Erdős, Grünwald and Vázsonyi <cit.> established necessary and sufficient conditions for a countably infinite graph to admit either a one-way or a two-way Eulerian path. We start this section by recalling this characterization:
Let G be a graph.
* G admits a one-way infinite Eulerian path if and only if it satisfies the following set of conditions, called :
* E(G) is countable and infinite.
* G is connected.
* Either G has exactly one vertex with odd degree, or G has at least one vertex with infinite degree and no vertex with odd degree.
* G has one end.
* G admits a two-way infinite Eulerian path if and only if it satisfies the following set of conditions, called :
* E(G) is countable and infinite.
* G is connected.
* The degree of each vertex is either even or infinite.
* G has one or two ends.
* If E is a finite set of edges
which induces a subgraph where all vertices have even degree, then
G∖ E has one infinite connected component.
A proof of this result can also be found on <cit.>. Using the characterization above, it has been proven in <cit.> that deciding if a highly computable graph admits a one-way Eulerian path (i.e. if a graph satisfies the set of conditions ) is Π_2^0-complete. This turns out to be the same difficulty of deciding whether a highly computable graph as exactly one end: therefore, it is natural to ask whether the condition about having one end encapsulates most of the hardness of this question, making this decision task easier if we consider only graphs with the correct number of ends.
In the rest of this section, we show that, indeed, deciding whether a highly computable graph admits a one-way Eulerian path provided that this graph has one end is only Σ_2^-1-complete. Moreover, to complement the work done in <cit.>, we also consider two-way infinite Eulerian paths and show that deciding their existence for highly computable graphs is, again, Π_2^0-complete in general, but it becomes only Π_1^0-complete knowing that this graph has one end. Finally, we show that, whenever G is a highly computable graph with two ends, then we can decide whether G has a two-way Eulerian path using ∅ ' as an oracle, meaning that, again, the problem becomes strictly simpler than in the general case. In terms of many-one degrees, we show that every Δ_2^0 set can be realized as an index set of the form {e: G_e admits a two-way Eulerian path }, for a uniformly highly computable sequence (G_e)_e ∈ of graphs with two ends.
The results of this section will therefore give a complete picture of the hardness of the problem of deciding whether a given highly computable graph is Eulerian (see <Ref>).
§.§ Computable and highly computable multigraphs
For simplicity and clarity, so far we have proved all our results in the setting of simple graphs, that is, with no loops or multiple edges between vertices allowed. However, when considering Eulerian paths it is natural to allow multigraphs, since for instance the graph with a single vertex and infinitely many loops, admits both a one-way and a two-way infinite Eulerian path. For this reason, in this section we shall consider highly computable multigraphs. Let us then recall that in a multigraph G, the degree of the vertex v equals the number of edges incident to v, where loops are counted twice. A multigraph G=(V,E) is computable when V(G) is a decidable subset of , and we have a computable function E(G) V(G)^2→ such that E(G)(u,v) is the number of edges joining u to v in G. The definitions of highly computable multigraph, and highly computable sequence of multigraphs are a straightforward generalization of the ones for simple graphs. It is easy to check that all the results proved in the previous section are valid in this more general context, so we will use them without further clarifications.
§.§ One-way Eulerian paths
We begin by revisiting the result from <cit.> on the Π_2^0-completeness of deciding whether a highly computable graph admits a one-way Eulerian path. Recall that Eulerian paths can exists only for graphs with one or two ends. Thus, the complexity of this problem is related to the complexity of counting ends, which was shown in <cit.> to also be Π_2^0-complete. Let us start by giving an alternative proof of the latter fact by using a construction that we will employ, with some modifications, several times in what follows.
Let k∈, k≥ 1. The problem of deciding whether a highly computable graph G has at most k ends is Π_2^0-complete. Moreover, there is a highly computable sequence of graphs (G_e^k)_e ∈ where every G_e^k has either k or k+1 ends and the set {e∈: G_e has k ends} is Π_2^0-complete.
Let k∈. We start by observing that the problem of determining whether a highly computable graph G has at most k ends is at most Π_2^0. Indeed, we can write such property as follows:
(∀ E ⋐ E(G))(∃ n∈)(_G,n(E)≤ k)
Where _G,n(E), as defined in <Ref>, denotes the number of infinite connected components that we can distinguish by inspecting a ball of radius n. We now prove completeness. We construct a uniformly highly computable sequence of graphs (G_e^1)_e ∈ with either one or two ends, such that {e | G_e^1 has one end } is Π_2-complete.
Let us fix a computable enumeration (W_e)_e ∈ of c.e. sets. Recall that Inf= {e | |W_e|= ∞} is a Π_2^0-complete set (and is complement Fin is Σ_2^0-complete): hence, our goal is to construct (G_e^1)_e ∈ in such a way that {e | G_e^1 has one end }≡_m Inf. With these elements at hand, we define a total computable function f: ^2 → by recursion. We set f(e,0)=0, and then
f(e,s) =
1 - f(e,s-1), if W_e,s W_e,s-1
f(e,s-1) , otherwise.
It is clear that f(e, · ) is eventually constant if and only if e ∉Inf. Now, for each e ∈, we define G_e by letting V(G_e^1) = and
E(G_e^1) = {(s,s+1) | s ∈}∪{(-s-1,-s) | f(e,s) = f(e,s-1) }
∪{(-s,s), (-s-1,s) | f(e,s) f(e,s-1) }.
Now that we have defined the family (G^1_e)_e∈, we verify that {e | G_e^1 has one end } is equal to the Π_2-complete set Inf. Let us describe how to draw the graph G^1_e for fixed e. We compute f(e,s) for each s and in increasing order. As long as f(e,s)=f(e,s-1), we put edges between consecutive integers in {-s-1,…,s+1}, so the graph looks like a line:
at (-1,0) (-1) ∙;
at (0,0) (0) ∙;
at (1,0) (1) ∙;
at (-4,0) (-s) ∙;
at (4,0) (s) ∙;
at (-5,0) (-s-1) ∙;
at (5,0) (s+1) ∙;
at (-1,0.3) -1;
at (0,0.3) 0;
at (1,0.3) 1;
at (-4,0.3) -s;
at (4,0.3) s;
at (-5,0.3) -s-1;
at (5,0.3) s+1;
(-1) – (0);
(0) – (1);
(-1) – (-2,0);
[dashed] (-2.1,0) – (-2.9,0);
(-3,0) – (-s);
(1) – (2,0);
[dashed] (2.1,0) – (2.9,0);
(3,0) – (s);
[blue, thick] (-s) – (-s-1);
[blue, thick] (s) – (s+1);
[dashed] (-s-1) – (-6,0);
[dashed] (s+1) – (6,0);
But, as soon as we see that f(e,s) f(e,s-1), we omit the edge between -s and -s-1 and, instead, connect s to -s and also to both s+1 and -s-1.
at (-1,0) (-1) ∙;
at (0,0) (0) ∙;
at (1,0) (1) ∙;
at (-4,0) (-s) ∙;
at (4,0) (s) ∙;
at (-5,0) (-s-1) ∙;
at (5,0) (s+1) ∙;
at (-1,0.3) -1;
at (0,0.3) 0;
at (1,0.3) 1;
at (-4,0.3) -s;
at (4,0.3) s;
at (-5,0.3) -s-1;
at (5,0.3) s+1;
(-1) – (0);
(0) – (1);
(-1) – (-2,0);
[dashed] (-2.1,0) – (-2.9,0);
(-3,0) – (-s);
(1) – (2,0);
[dashed] (2.1,0) – (2.9,0);
(3,0) – (s);
[blue, thick] (s) – (s+1);
[blue, thick] (-s) .. controls (-3,-1) and (3,-1) .. (s);
[blue, thick] (-s-1) .. controls (-2.7,-2) and (2.7,-2) .. (s);
[dashed] (-s-1) – (-6,0);
[dashed] (s+1) – (6,0);
Observe that in this manner, we create a cycle graph with vertex set {-s,…,s}. This process is iterated, and more cycles will be created according to the values of f(e,·).
at (0,0) (s) ∙;
[below = 0 cm of s] s;
(-2,0) – (s);
(2,0) – (s);
[dashed] (-2.1,0) – (-2.6,0);
[dashed] (2.1,0) – (2.6,0);
(s) .. controls (-1, 1) and (1, 1) .. (s);
By definition of f, we have that exactly one cycle is created for each element |W_e|.
at (0,0) (0) ∙;
at (1,0) (s0) ∙;
at (2,0) (sk) ∙;
at (3,0) (s*) ∙;
(0) .. controls (0.25, 0.5) and (0.75, 0.5) .. (s0);
(0) .. controls (0.25, -0.5) and (0.75, -0.5) .. (s0);
[dashed] (s0) – (sk);
(sk) .. controls (2.25, 0.5) and (2.75, 0.5) .. (s*);
(sk) .. controls (2.25, -0.5) and (2.75, -0.5) .. (s*);
(s*) – ++(20:0.75cm);
[dashed] (s*)++(20:0.75cm) – ++(20:0.75cm);
(s*) – ++(-20:0.75cm);
[dashed] (s*)++(-20:0.75cm) – ++(-20:0.75cm);
[below = 0 cm of 0] 0;
[below = 0 cm of s0] s_0;
[below = 0 cm of sk] s_k;
[below = 0 cm of s*] s^*;
(U) at ((s*)+(20:1.5cm));
(B) at ((s*)+(-20:1.5cm));
(F) at (0.5*(U)+0.5*(B)) ;
(L) at (0.5*(0)+0.5*(F)) ;
[below = 0.75cm of L] G_e^1 for e ∈Fin;
at (0,0) (0) ∙;
at (1,0) (s0) ∙;
at (2,0) (s1) ∙;
at (3,0) (s2) ∙;
(0) .. controls (0.25, 0.5) and (0.75, 0.5) .. (s0);
(0) .. controls (0.25, -0.5) and (0.75, -0.5) .. (s0);
(s0) .. controls (1.25, 0.5) and (1.75, 0.5) .. (s1);
(s0) .. controls (1.25, -0.5) and (1.75, -0.5) .. (s1);
(s1) .. controls (2.25, 0.5) and (2.75, 0.5) .. (s2);
(s1) .. controls (2.25, -0.5) and (2.75, -0.5) .. (s2);
[dashed] (s2) – (4,0);
[below = 0 cm of 0] 0;
[below = 0 cm of s0] s_0;
[below = 0 cm of s1] s_1;
[below = 0 cm of s2] s_2;
[below = 0.65 cm of s1] G_e^1 for e ∈Inf;
If e ∈Fin, then there is a step s^* after which no new element enters W_e, meaning that f(e, s)=f(e,s^*) for all s ≥ s*. In this case, G_e^1 has two ends, as G_e^1 ∖{v | |v| < s^* } is isomorphic to (, {(x,x+1) | x ∈}) via
x ↦
x + s^*, if x≥ 0
x - s^*, otherwise.
On the other hand, if e ∈Inf, G_e^1 has one end. Indeed, for any E ⋐ E(G_k^1), we can consider n= max{|v| | v is adjacent to some edge in E }: since E is contained in the set of edges of G_k^1[{s | |s| ≤ n} ], it is enough to observe that, by construction, G_e^1 ∖{s | |s| ≤ n} = G_e^1[{s | |s| > n} ] is an infinite connected graph.
Thus, we have proved that the family of graphs (G^1_e)_e∈ has the desired properties. The claim for k > 1 follows easily by considering the highly computable sequence of graphs (G_e^k)_e ∈ defined by attaching k-1 copies of the graph (∖{0}, {(s,s+1) | s ∈∖{0}}) to the vertex 0 of each G_e^1 defined above. Namely, we consider graphs which look as follows.
at (0,0) (0) ∙;
at (1,0) (s0) ∙;
at (2,0) (sk) ∙;
at (3,0) (s*) ∙;
(0) .. controls (0.25, 0.5) and (0.75, 0.5) .. (s0);
(0) .. controls (0.25, -0.5) and (0.75, -0.5) .. (s0);
[dashed] (s0) – (sk);
(sk) .. controls (2.25, 0.5) and (2.75, 0.5) .. (s*);
(sk) .. controls (2.25, -0.5) and (2.75, -0.5) .. (s*);
(s*) – ++(20:0.75cm);
[dashed] (s*)++(20:0.75cm) – ++(20:0.75cm);
(s*) – ++(-20:0.75cm);
[dashed] (s*)++(-20:0.75cm) – ++(-20:0.75cm);
[below = 0 cm of 0] 0;
[below = 0 cm of s0] s_0;
[below = 0 cm of sk] s_k;
[below = 0 cm of s*] s^*;
(U) at ((s*)+(20:1.5cm));
(B) at ((s*)+(-20:1.5cm));
(F) at (0.5*(U)+0.5*(B)) ;
(U') at ((0)+(160:1.5cm));
(B') at ((0)+(-160:1.5cm));
(F') at (0.5*(U')+0.5*(B')) ;
(L) at (0.5*(F')+0.5*(F)) ;
(0) – ++(160:0.75cm);
[dashed] (0)++(160:0.75cm) – ++(160:0.75cm);
(0) – ++(-160:0.75cm);
[dashed] (0)++(-160:0.75cm) – ++(-160:0.75cm);
(T) at ((0)+(158:1.5cm));
(B) at ((0)+(-158:1.5cm));
[decorate,decoration=brace,amplitude=5pt,mirror,raise=1ex] (T) – (B);
(C) at (0.5*(T)+0.5*(B)) ;
[rotate=90] at ((C)+(0.3,0.025)) …;
[above = 0.5 cm of C] (label) (∖{ 0 })^k;
[->] (C)++(-0.5,0.1) to[bend left] (label);
[below = 0.75cm of L] G_e^k for e ∈Fin;
at (0,0) (0) ∙;
at (1,0) (s0) ∙;
at (2,0) (s1) ∙;
at (3,0) (s2) ∙;
(0) .. controls (0.25, 0.5) and (0.75, 0.5) .. (s0);
(0) .. controls (0.25, -0.5) and (0.75, -0.5) .. (s0);
(s0) .. controls (1.25, 0.5) and (1.75, 0.5) .. (s1);
(s0) .. controls (1.25, -0.5) and (1.75, -0.5) .. (s1);
(s1) .. controls (2.25, 0.5) and (2.75, 0.5) .. (s2);
(s1) .. controls (2.25, -0.5) and (2.75, -0.5) .. (s2);
[dashed] (s2) – (4,0);
[below = 0 cm of 0] 0;
[below = 0 cm of s0] s_0;
[below = 0 cm of s1] s_1;
[below = 0 cm of s2] s_2;
(0) – ++(160:0.75cm);
[dashed] (0)++(160:0.75cm) – ++(160:0.75cm);
(0) – ++(-160:0.75cm);
[dashed] (0)++(-160:0.75cm) – ++(-160:0.75cm);
(T) at ((0)+(158:1.5cm));
(B) at ((0)+(-158:1.5cm));
[decorate,decoration=brace,amplitude=5pt,mirror,raise=1ex] (T) – (B);
(C) at (0.5*(T)+0.5*(B)) ;
[rotate=90] at ((C)+(0.3,0.025)) …;
[above = 0.5 cm of C] (label) (∖{ 0 })^k;
[->] (C)++(-0.5,0.1) to[bend left] (label);
(U') at ((0)+(160:1.5cm));
(B') at ((0)+(-160:1.5cm));
(F') at (0.5*(U')+0.5*(B')) ;
(L) at (0.5*(F')+(2,0)) ;
[below = 0.75cm of L] G_e^k for e ∈Inf;
It is clear that G_e^k has k ends if G_e has one end, while it has k+1 ends if G_e has two ends.
Using the construction developed in the proof above, we can also give an alternative proof of the Π_2^0-completness of deciding existence of one-way Eulerian paths.
The problem of deciding whether a highly computable graph admits a a one-way Eulerian path is Π_2^0-complete.
Since we consider only countable, connected and locally finite graphs, the set of conditions in our case reduces to the following condition:
G has exactly one vertex of odd degree and has one end.
The statement G has exactly one vertex of odd degree is a conjunction of a Σ_1^0 statement and a Π_1^0 statement, which makes it Σ_2^-1. It follows from <Ref> that the property of having one end is Π_2^0. Hence, <Ref> is a Π_2^0 property.
Now we show Π_2^0-completeness. To do so, it is enough to slightly modify the construction in the proof of <Ref>. Indeed, we define the highly computable sequence of multigraphs (G_e)_e ∈ by letting V(G_e) = and
E(G_e)(n,m) =
2, if n > m ≥ 0 and n=m+1
1, if 0 ≤ m < n and n=m+1 and f(e,s) = f(e,s-1)
1, if (n=-m or (n > m and m=-n-1)) and f(e,s) f(e,s-1)
0, otherwise.
In other words, we perform the same algorithm that we used to construct the sequence in <Ref>, with the only difference that now each vertex with non-negative index is joined to its successor by two edges. Thus, at any stage s where f(e,s) = f(e,s-1), we add edges as shown in the picture below.
at (-1,0) (-1) ∙;
at (0,0) (0) ∙;
at (1,0) (1) ∙;
at (-4,0) (-s) ∙;
at (4,0) (s) ∙;
at (-5,0) (-s-1) ∙;
at (5,0) (s+1) ∙;
[above = 0cm of -1] -1;
[above = 0cm of 0] 0;
[above = 0cm of 1] 1;
[above = 0cm of -s] -s;
[above = 0cm of s] s;
[above = 0cm of -s-1] -s-1;
[above = 0cm of s+1] s+1;
(-1) – (0);
(0) .. controls (0.25, 0.5) and (0.75,0.5) .. (1);
(0) .. controls (0.25, -0.5) and (0.75,-0.5) .. (1);
(-1) – (-2,0);
[dashed] (-2.1,0) – (-2.9,0);
(-3,0) – (-s);
(1) .. controls (1.25, 0.5) and (1.75,0.5) .. (1.9,0.2);
(1) .. controls (1.25, -0.5) and (1.75,-0.5) .. (1.9,-0.2);
[dashed] (2.1,0) – (2.9,0);
(3.1,0.2) .. controls (3.25,0.5) and (3.75,0.5) .. (s);
(3.1,-0.2) .. controls (3.25,-0.5) and (3.75,-0.5) .. (s);
[blue, thick] (-s) – (-s-1);
[blue, thick] (s) .. controls (4.25, 0.5) and (4.75,0.5) .. (s+1);
[blue, thick] (s) .. controls (4.25, -0.5) and (4.75,-0.5) .. (s+1);
[dashed] (-s-1) – (-6,0);
[dashed] (s+1) – (6,0);
On the other hand, whenever f(e,s) f(e,s-1), the situation will look as in the following picture.
at (-1,0) (-1) ∙;
at (0,0) (0) ∙;
at (1,0) (1) ∙;
at (-4,0) (-s) ∙;
at (4,0) (s) ∙;
at (-5,0) (-s-1) ∙;
at (5,0) (s+1) ∙;
[above = 0cm of -1] -1;
[above = 0cm of 0] 0;
[above = 0cm of 1] 1;
[above = 0cm of -s] -s;
[above = 0cm of s] s;
[above = 0cm of -s-1] -s-1;
[above = 0cm of s+1] s+1;
(-1) – (0);
(0) .. controls (0.25, 0.5) and (0.75,0.5) .. (1);
(0) .. controls (0.25, -0.5) and (0.75,-0.5) .. (1);
(-1) – (-2,0);
[dashed] (-2.1,0) – (-2.9,0);
(-3,0) – (-s);
(1) – (2,0);
[dashed] (2.1,0) – (2.9,0);
(3.1,0.2) .. controls (3.25,0.5) and (3.75,0.5) .. (s);
(3.1,-0.2) .. controls (3.25,-0.5) and (3.75,-0.5) .. (s);
[blue, thick] (s) .. controls (4.25, 0.5) and (4.75,0.5) .. (s+1);
[blue, thick] (s) .. controls (4.25, -0.5) and (4.75,-0.5) .. (s+1);
[blue, thick] (-s) .. controls (-3,-2) and (3,-2) .. (s);
[blue, thick] (-s-1) .. controls (-2.7,-2.5) and (2.7,-2.5) .. (s);
[dashed] (-s-1) – (-6,0);
[dashed] (s+1) – (6,0);
Therefore, G_e for e ∈Fin will look like this:
at (0,0) (0) ∙;
[below = 0 cm of 0] 0;
at (1,0) (1) ∙;
[below = 0 cm of 1] 1;
(0) .. controls (0.25, 0.5) and (0.75,0.5) .. (1);
(0) .. controls (0.25, -0.5) and (0.75,-0.5) .. (1);
[dashed] (1) – (1.8,0);
at (2,0) (2) ∙;
at (3,0) (3) ∙;
[below = 0 cm of 3] s_0;
(2) .. controls (2.25, 0.5) and (2.75,0.5) .. (3);
(2) .. controls (2.25, -0.5) and (2.75,-0.5) .. (3);
at (0.5,0.75) (a) ∙;
[above = 0 cm of a] -1;
(0) to[bend left] (a);
at (2.5,.75) (b) ∙;
[above = 0 cm of b] -s_0;
(b) to[bend left] (3);
(a) – (1,.75);
[dashed] (1.2,.75) – (1.8,.75);
(2,.75) – (b);
at (4,0) (4) ∙;
at (5,0) (5) ∙;
(4) .. controls (4.25, 0.5) and (4.75,0.5) .. (5);
(4) .. controls (4.25, -0.5) and (4.75,-0.5) .. (5);
[dashed] (5) – (5.8,0);
at (6,0) (6) ∙;
at (7,0) (7) ∙;
[below = 0 cm of 7] s^*;
(6) .. controls (6.25, 0.5) and (6.75,0.5) .. (7);
(6) .. controls (6.25, -0.5) and (6.75,-0.5) .. (7);
at (4.5,0.75) (c) ∙;
(4) to[bend left] (c);
at (6.5,.75) (d) ∙;
[above = 0 cm of d] -s^*;
(d) to[bend left] (7);
(c) – (5,.75);
[dashed] (5.2,.75) – (5.8,.75);
(6,.75) – (d);
[dashed] (3) – (4);
at (8,.5) (e) ∙;
at (8,-.5) (e') ∙;
(7) – (e);
(7) to[bend left] (e');
(7) to[bend right] (e');
[dashed] (e) – (8.7,.9);
[dashed] (e') – (8.7,-.9);
While G_e for e ∈Inf will be as follows:
at (0,0) (0) ∙;
[below = 0 cm of 0] 0;
at (1,0) (1) ∙;
[below = 0 cm of 1] 1;
[<-, red] (0) .. controls (0.25, 0.5) and (0.75,0.5) .. (1);
[->, blue] (0) .. controls (0.25, -0.5) and (0.75,-0.5) .. (1);
[dashed] (1) – (1.8,0);
at (2,0) (2) ∙;
at (3,0) (3) ∙;
[below = 0 cm of 3] s_0;
[<-, red] (2) .. controls (2.25, 0.5) and (2.75,0.5) .. (3);
[->, blue] (2) .. controls (2.25, -0.5) and (2.75,-0.5) .. (3);
at (0.5,0.75) (a) ∙;
[above = 0 cm of a] -1;
[->] (0) to[bend left] (a);
at (2.5,.75) (b) ∙;
[above = 0 cm of b] -s_0;
[->] (b) to[bend left] (3);
(a) – (1,.75);
[dashed] (1.2,.75) – (1.8,.75);
(2,.75) – (b);
at (4,0) (4) ∙;
at (5,0) (5) ∙;
[<-, red] (4) .. controls (4.25, 0.5) and (4.75,0.5) .. (5);
[->, blue] (4) .. controls (4.25, -0.5) and (4.75,-0.5) .. (5);
[dashed] (5) – (5.8,0);
at (6,0) (6) ∙;
at (7,0) (7) ∙;
[below = 0 cm of 7] s_k;
[<-, red] (6) .. controls (6.25, 0.5) and (6.75,0.5) .. (7);
[->, blue] (6) .. controls (6.25, -0.5) and (6.75,-0.5) .. (7);
at (4.5,0.75) (c) ∙;
[->] (4) to[bend left] (c);
at (6.5,.75) (d) ∙;
[above = 0 cm of d] -s_k;
[->] (d) to[bend left] (7);
(c) – (5,.75);
[dashed] (5.2,.75) – (5.8,.75);
(6,.75) – (d);
[dashed] (3) – (4);
[dashed] (7) – (7.8,0);
Observe that, for every e, the only vertex with odd degree in G_e is 0: then G_e admits an infinite Eulerian path if and only if G_e has one end. An example of such path is shown in the picture above: this path starts at the vertex 0, reaches vertex s_0 using, say, the blue edges, proceeds backwards through the red edges and then changes direction again passing through the black edges, and so on. By the same reasoning used in the proof of <Ref>, this happens if and only e ∈Inf.
It should be noticed that the proof in <cit.> relies on the construction of automatic graphs, and hence the above statement actually holds already in the case of automatic graphs. In the next result we show that the problem becomes simpler, in fact, Σ_2^-1-complete, when restricted to graphs with one end.
The problem of deciding whether a highly computable graph with one end admits a a one-way Eulerian path is Σ_2^-1-complete.
We already observed that the the statement G has exactly one vertex of odd degree is Δ_2^0: in particular, this statement is a conjunction of a Σ_1^0 statement and a Π_1^0 statement, which makes it Σ_2^-1. It follows that the problem of deciding that a graph satisfies , provided it has one end and it is connected, is at most Σ_2^-1. We now prove completeness.
Let W be a Σ_2^-1-complete set (the existence of such set as been proved in <cit.>) and f: ^2 → be a computable approximation of W: since W is 2-c.e., we can assume that f changes mind on each element at most twice. Moreover, we can assume that f(e,0)=0 for every e.
For every e, we define G_e as follows. Let V(G_e) = and
E(G_e)(n,m) =
2, if n=s, m=s+1 and f(e,s)=0
1, if n=s, m=s+1 and f(e,s)=1
0, otherwise.
Again, we explain how one can define the edge set of E(G_e) in stages. At every stage s, we look at the value of f(e,s).
While f(e,s)=0, we keep adding 2 edges between s and s+1.
at (0,0) (0) ∙;
[below = 0 cm of 0] 0;
at (1,0) (1) ∙;
[below = 0 cm of 1] 1;
(0) .. controls (0.25, 0.5) and (0.75,0.5) .. (1);
(0) .. controls (0.25, -0.5) and (0.75,-0.5) .. (1);
[dashed] (1) – (1.8,0);
at (2,0) (2) ∙;
[below = 0 cm of 2] s;
at (3,0) (3) ∙;
[below = 0 cm of 3] s+1;
[thick, blue] (2) .. controls (2.25, 0.5) and (2.75,0.5) .. (3);
[thick, blue] (2) .. controls (2.25, -0.5) and (2.75,-0.5) .. (3);
[dashed] (3) – (3.8,0);
But whenever f(e,s)=1, we instead start adding only one edge between s and s+1.
at (0,0) (0) ∙;
[below = 0 cm of 0] 0;
at (1,0) (1) ∙;
[below = 0 cm of 1] 1;
(0) .. controls (0.25, 0.5) and (0.75,0.5) .. (1);
(0) .. controls (0.25, -0.5) and (0.75,-0.5) .. (1);
[dashed] (1) – (1.8,0);
at (2,0) (2) ∙;
at (3,0) (3) ∙;
(2) .. controls (2.25, 0.5) and (2.75,0.5) .. (3);
(2) .. controls (2.25, -0.5) and (2.75,-0.5) .. (3);
at (4,0) (4) ∙;
[below = 0 cm of 3] s;
[below = 0 cm of 4] s+1;
[thick, blue] (3) – (4);
[dashed] (4) – (4.8,0);
Finally, it might also happen that f changes its mind again at stage s: hence, we go back to adding 2 edges between s and s+1.
at (0,0) (0) ∙;
[below = 0 cm of 0] 0;
at (1,0) (1) ∙;
[below = 0 cm of 1] 1;
(0) .. controls (0.25, 0.5) and (0.75,0.5) .. (1);
(0) .. controls (0.25, -0.5) and (0.75,-0.5) .. (1);
[dashed] (1) – (1.8,0);
at (2,0) (2) ∙;
at (3,0) (3) ∙;
(2) .. controls (2.25, 0.5) and (2.75,0.5) .. (3);
(2) .. controls (2.25, -0.5) and (2.75,-0.5) .. (3);
at (4,0) (4) ∙;
(3) – (4);
[dashed] (4) – (4.8,0);
at (5,0) (5) ∙;
at (6,0) (6) ∙;
[below = 0 cm of 6] s;
(5) – (6);
at (7,0) (7) ∙;
[below = 0 cm of 7] s+1;
[thick, blue] (6) .. controls (6.25, 0.5) and (6.75,0.5) .. (7);
[thick, blue] (6) .. controls (6.25, -0.5) and (6.75,-0.5) .. (7);
[dashed] (7) – (7.8,0);
Notice that all G_e's constructed in this way have one end.
Moreover, if f never changes its mind on e, then all vertices in G_e have even degree, hence G_e cannot have a one-way infinite Eulerian path. On the other hand, if f changes its mind on e at step s and never changes its mind again, we have that s has odd degree, while all other vertices have even degree. In this case, which is exactly when e ∈ W, G_e admits a one-way infinite Eulerian path. For example, we have the path described in the picture below: such a path starts at the vertex pointed by the double arrow, crosses our graph backwards going through the red edges until it reaches the vertex 0 and then proceeds forever visiting each black edge in order.
at (0,0) (0) ∙;
at (1,0) (1) ∙;
[->] (0) .. controls (0.25, 0.5) and (0.75,0.5) .. (1);
[<-, red] (0) .. controls (0.25, -0.5) and (0.75,-0.5) .. (1);
[dashed] (1) – (1.8,0);
at (2,0) (2) ∙;
at (3,0) (3) ∙;
[->] (2) .. controls (2.25, 0.5) and (2.75,0.5) .. (3);
[<-, red] (2) .. controls (2.25, -0.5) and (2.75,-0.5) .. (3);
at (4,0) (4) ∙;
[->] (3) – (4);
at (5,0) (5) ∙;
[->] (4) – (5);
[dashed] (5) – (5.8,0);
[->>] (3,-.5) – (3);
Finally, if f changes its mind twice on e, say at stages s and s', both vertices s and s' have odd degree: hence G_e does not admit Eulerian paths.
§.§ Two-way Eulerian paths
We begin by observing that <Ref> is also valid for two-way Eulerian paths.
The problem of deciding whether a highly computable graph admits a a two-way Eulerian path is Π_2^0-complete.
Since we consider only countable, connected and locally finite graphs, the set of conditions reduce to the following:
* (∀ v ∈ V(G)) (_G(v) is even)
* G has at most two ends
* (∀ E ⋐ E(G))(∃ n∈) ((∃ v ∈ G[E]) _G[E](v) is odd) (_G,n(E)≤ 1).
Condition (3) is clearly a Π_1^0 statement. By <Ref>, condition (4) is Π_2^0. Let us recall that E→_G,n(E) was defined in <Ref>, where we also proved that this function is computable. This fact shows that condition (5) is also Π_2^0. Thus, overall, the complexity of is at most Π_2^0.
To prove completeness, it is enough to slightly modify the sequence of graphs (G_e^1)_e ∈ constructed in the proof of <Ref>. Namely, for each e we define H_e as the graph obtained by duplicating each edge in G_e^1. We hence obtain a uniformly highly computable sequence of multigraphs (H_e)_e ∈ where each H_e has either one or two ends and where each vertex has even degree. We claim that
{e: H_e admits a two-way Eulerian path} = {e: H_e has one end}.
Notice that this is enough to conclude the proof, as the set on the right-hand side is Π_2^0-complete by <Ref>.
Let us first consider the case where H_e has one end. As every vertex in H_e has even degree, it follows that it verifies , so it admits a two-way infinite Eulerian path. For concreteness, let us describe how to obtain such a path. For this we let s_0=0, and denote by s_k the k-th step in which a new element is enumerated into W_e. As we are in the case where W_e is infinite, we have a well-defined infinite sequence (s_k)_k ∈. Then each subgraph H_e[{v∈ V(H_e): s_k ≤ |v| ≤ s_k+1}∖{-s_k} is even and connected, so by Euler's theorem admits an Eulerian cycle. We can split such a cycle into two disjoint paths from s_k to s_k+1, shown in blue and in red in the picture below. These paths can be joined to construct the desired two-way Eulerian path.
at (0,0) (0) ∙;
at (45:1cm) (1) ∙;
at (-45:1cm) (-1) ∙;
at ((1)+(1,0)) (2) ∙;
at ((-1)+(1,0)) (-2) ∙;
at ((2)+(1,0)) (3) ∙;
at ((-2)+(1,0)) (-3) ∙;
at ((3)+(1,0)) (4) ∙;
at ((-3)+(1,0)) (-4) ∙;
at ((4)+(-45:1cm)) (5) ∙;
[red, thick, ->] (0) to[bend left] (1);
[red, thick, ->] (1) to[bend left] (2);
[dashed] (2) – (3);
[red, thick, ->] (3) to[bend left] (4);
[red, thick, ->] (4) to[bend left] (5);
[blue, thick, ->] (0) to[bend right] (1);
[blue, thick, ->] (1) to[bend right] (2);
[blue, thick, ->] (3) to[bend right] (4);
[blue, thick, ->] (4) to[bend right] (5);
[blue, thick, ->] (0) to[bend right] (-1);
[blue, thick, ->] (-1) to[bend right] (-2);
[dashed] (-2) – (-3);
[blue, thick, ->] (-3) to[bend right] (-4);
[blue, thick, ->] (-4) to[bend right] (5);
[blue, thick, <-] (0) to[bend left] (-1);
[blue, thick, <-] (-1) to[bend left] (-2);
[blue, thick, <-] (-3) to[bend left] (-4);
[blue, thick, <-] (-4) to[bend left] (5);
[above left = -0.3 cm of 0] s_k;
[above left = -0.3 cm of 1] s_k+1;
[below left = -0.3 cm of -1] -(s_k+1);
[above right = -0.3 cm of 4] s_k+1-1;
[below right = -0.3 cm of -4] -s_k+1;
[above right = -0.3 cm of 5] s_k+1;
at ((5)+(3,0)) (0') ∙;
at ((0')+(1,0)) (1') ∙;
[red, thick, ->] (0') .. controls ((0')+(0.25, 0.5)) and ((0')+(0.75,0.5)) .. (1');
[blue, thick, ->] (0') to[bend left] (1');
[blue, thick, <-] (0') to[bend right] (1');
[blue, thick, ->] (0') .. controls ((0')+(0.25,-0.5)) and ((0')+(0.75,-0.5)) .. (1');
at ((1')+(1,0)) (2') ∙;
[red, thick, ->] (1') .. controls ((1')+(0.25, 0.5)) and ((1')+(0.75,0.5)) .. (2');
[blue, thick, ->] (1') to[bend left] (2');
[blue, thick, <-] (1') to[bend right] (2');
[blue, thick, ->] (1') .. controls ((1')+(0.25,-0.5)) and ((1')+(0.75,-0.5)) .. (2');
[dashed] (2') – ++(0:0.75cm);
[above = 0 cm of 0'] 0;
[above = 0 cm of 1'] s_1;
[above = 0 cm of 2'] s_2;
On the other hand, if H_e has two ends, then, by construction, W_e is finite. So, there must be a stage s_k in which some element enters W_e for the last time. Let E⋐ E(H_e) be the set whose elements are the two edges joining s_k to s_k+1. Then H_e[E] is an even graph, while H_e∖ E has two infinite connected components, so H_e does not satisfy .
Next, we study the complexity of deciding the existence of a two-way Eulerian path when restricting ourselves only to graphs with, respectively, one or two ends. Let us begin with the case of graphs with one end.
The problem of deciding whether a highly computable graph with one end admits a a two-way Eulerian path is Π_1^0-complete.
In the case of a highly computable graph with one end, the only condition which is left to check in is that all vertices in the graph have even degree, which is clearly a Π_1^0 statement.
To prove completeness, let us fix a computable enumeration (φ_e)_e ∈ and define the highly computable sequence of multigraphs (G_e)_e ∈ as follows: for every e, we let V(G_e) = and
E(G_e)(u,v) =
2, if u=s, v=s+1 and ((φ_e,s↑) or (φ_e,s-1↓ and φ_e,s↓))
1, if u=s, v=s+1 and φ_e,s-1↑ and φ_e,s↓
0, otherwise.
In other words, we always add two edges between s and s+1, unless φ_e halts in exactly s steps, in which case we only add one edge. Thus, we have the following two cases:
at (0,0) (0) ∙;
at (1,0) (1) ∙;
[red, thick, ->] (0) .. controls (0.25, 0.5) and (0.75,0.5) .. (1);
[blue, thick, ->] (0) .. controls (0.25, -0.5) and (0.75,-0.5) .. (1);
at (2,0) (2) ∙;
[blue, thick, ->] (1) .. controls (1.25, 0.5) and (1.75,0.5) .. (2);
[red, thick, ->] (1) .. controls (1.25, -0.5) and (1.75,-0.5) .. (2);
[dashed] (2) – (2.8,0);
at (1.4, -1) G_e for φ_e ↑;
at (0,0) (0) ∙;
at (1,0) (1) ∙;
(0) .. controls (0.25, 0.5) and (0.75,0.5) .. (1);
(0) .. controls (0.25, -0.5) and (0.75,-0.5) .. (1);
[dashed] (1) – (1.8,0);
at (2,0) (2) ∙;
at (3,0) (3) ∙;
(2) .. controls (2.25, 0.5) and (2.75,0.5) .. (3);
(2) .. controls (2.25, -0.5) and (2.75,-0.5) .. (3);
at (4,0) (4) ∙;
(3) – (4);
at (5,0) (5) ∙;
(4) .. controls (4.25, 0.5) and (4.75,0.5) .. (5);
(4) .. controls (4.25, -0.5) and (4.75,-0.5) .. (5);
[below = 0 cm of 3] s;
[dashed] (5) – (5.8,0);
at (2.9, -1) G_e for φ_e ↓ in s steps;
It is clear that the vertices s and s+1 in G_e have odd degree if and only if φ_e halts in exactly s steps. It follows that G_e admits a two-way Eulerian path if and only if e ∈.
We conclude by showing that, in the case of graphs with two ends, the existence of a two-way Eulerian path exhausts exactly the m-degrees of Δ_2^0 sets.
The problem of deciding whether a highly computable graph with two ends admits a a two-way Eulerian path is Δ_2^0. Moreover, for every Δ_2^0 set X⊂ there is a uniformly highly computable family of graphs (G_e) with two ends, such that
X={e∈| G_e admits a two-way infinite Eulerian path}.
We start by proving the upper bound. Let (G_e)_e ∈ be a uniformly highly computable family of connected graphs with two ends. We claim that {e∈| G_e satisfies }≤_T∅'.
A Turing reduction which decides whether G_e satisfies using is the following. We start by verifying that every vertex in G_e has even degree. This is a Π_1^0 condition and, hence, decidable with oracle ∅'. Now our task is to verify that G_e ∖ E has a single infinite connected component, for every set of edges E that induces an even subgraph of G_e. For this purpose we compute a set S∈G_e, which is possible with oracle ∅ ' by <Ref>. Now let φ be the function from <Ref>. By the Recursion Theorem, from the index e we can compute an index e' of a Turing machine which runs φ(G_e,S,2,E) for every finite set of edges E that induces an even subgraph of G_e, and that halts whenever it finds some E such that φ(G_e,S,2,E)≥ 2. Then φ_e' halts if and only if G_e has a finite set of edges E for which G_e ∖ E has more than one infinite connected component, namely if and only if G_e does not satisfy . This proves that, in order to check whether G_e satisfies , it suffices to ask whether e'∈∅'. This concludes the desired Turing reduction.
Next, we consider any Δ_2^0 set X and describe the construction of a highly computable sequence of graphs (G_e)_e ∈ such that {e: G_e admits a two-way Eulerian path }=X.
Again, our construction will be a variation of the one used in the proof of <Ref>. Let x(e,s) be a computable approximation to the characteristic sequence of X. Without loss of generality, we can assume that x(e,-1)=x(e,0)=0 for every e. Now, for every e, we let V(G_e) =, while the set of edges is defined by the following effective procedure:
* We begin by adding two edges between 0, 1 and 0, -1.
* We keep adding two edges between both s, s+1 and -s, -(s+1) until we see that x(e,s-1)=x(e,s)=0 (that is, our computable approximation believes that e ∉ X).
at (-1,0) (-1) ∙;
at (0,0) (0) ∙;
at (1,0) (1) ∙;
at (-4,0) (-s) ∙;
at (4,0) (s) ∙;
at (-5,0) (-s-1) ∙;
at (5,0) (s+1) ∙;
[above =0.1cm of -1] -1;
[above =0.1cm of 0] 0;
[above =0.1cm of 1]1;
[above =0.1cm of -s] -s;
[above =0.1 cm of s]s;
[above =0.1cm of -s-1] -s-1;
[above =0.1cm of s+1] s+1;
(-1) .. controls (-0.75, 0.5) and (-0.25,0.5) .. (0);
(-1) .. controls (-0.75, -0.5) and (-0.25,-0.5) .. (0);
(0) .. controls (0.25, 0.5) and (0.75,0.5) .. (1);
(0) .. controls (0.25, -0.5) and (0.75,-0.5) .. (1);
(-1) .. controls (-1.25, 0.5) and (-1.75,0.5) .. (-1.9,0.2);
(-1) .. controls (-1.25, -0.5) and (-1.75,-0.5) .. (-1.9,-0.2);
[dashed] (-2.1,0) – (-2.9,0);
(-3.1,0.2) .. controls (-3.25, 0.5) and (-3.75,0.5) .. (-s);
(-3.1,-0.2) .. controls (-3.25, -0.5) and (-3.75,-0.5) .. (-s);
(1) .. controls (1.25, 0.5) and (1.75,0.5) .. (1.9,0.2);
(1) .. controls (1.25, -0.5) and (1.75,-0.5) .. (1.9,-0.2);
[dashed] (2.1,0) – (2.9,0);
(3.1,0.2) .. controls (3.25,0.5) and (3.75,0.5) .. (s);
(3.1,-0.2) .. controls (3.25,-0.5) and (3.75,-0.5) .. (s);
[blue, thick] (-s) .. controls (-4.25, 0.5) and (-4.75,0.5) .. (-s-1);
[blue, thick] (-s) .. controls (-4.25, -0.5) and (-4.75,-0.5) .. (-s-1);
[blue, thick] (s) .. controls (4.25, 0.5) and (4.75,0.5) .. (s+1);
[blue, thick] (s) .. controls (4.25, -0.5) and (4.75,-0.5) .. (s+1);
[dashed] (-s-1) – (-6,0);
[dashed] (s+1) – (6,0);
* If at stage s we see that x(e,s-1)=0 and x(e,s)=1 (namely, our approximation changes its mind and puts e into X), we add two edges between -s and s, then we add one edge between both s,s+1 and s, -(s+1), while putting no edge between -s and -(s+1).
at (-1,0) (-1) ∙;
at (0,0) (0) ∙;
at (1,0) (1) ∙;
at (-2,0) (-s+1) ∙;
at (2,0) (s-1) ∙;
at (-3,0) (-s) ∙;
at (3,0) (s) ∙;
at (-4,0) (-s-1) ∙;
at (4,0) (s+1) ∙;
[above =0.1cm of -1] -1;
[above =0.1cm of 0] 0;
[above =0.1cm of 1]1;
[above =0.1cm of -s] -s;
[above =0.1 cm of s]s;
[above =0.1cm of -s-1] -s-1;
[above =0.1cm of s+1] s+1;
(-1) .. controls (-0.75, 0.5) and (-0.25,0.5) .. (0);
(-1) .. controls (-0.75, -0.5) and (-0.25,-0.5) .. (0);
(0) .. controls (0.25, 0.5) and (0.75,0.5) .. (1);
(0) .. controls (0.25, -0.5) and (0.75,-0.5) .. (1);
[dashed] (-1.2,0) – (-1.8,0);
(-s+1) .. controls (-2.25, 0.5) and (-2.75,0.5) .. (-s);
(-s+1) .. controls (-2.25, -0.5) and (-2.75,-0.5) .. (-s);
[dashed] (1.2,0) – (1.8,0);
(s-1) .. controls (2.25,0.5) and (2.75,0.5) .. (s);
(s-1) .. controls (2.25,-0.5) and (2.75,-0.5) .. (s);
[blue, thick] (s) – (s+1);
[blue, thick] (-s-1) .. controls (-2.7,-2.5) and (2.7,-2.5) .. (s);
[blue, thick] (-s) .. controls (-2.5,-1.5) and (2.5,-1.5) .. (s);
[blue, thick] (-s) .. controls (-2.5,-2) and (2.5,-2) .. (s);
[dashed] (-s-1) – (-5,0);
[dashed] (s+1) – (5,0);
* We keep adding one edgee between both s, s+1 and -s, -(s+1) until we see that x(e,s-1)=x(e,s)=1 (that is, our computable approximation believes that e ∈ X).
at (0,0) (0) ∙;
at (.75,0.5) (0') ∙;
at (-.75,0.5) (0”) ∙;
(0) to[bend left] (0');
(0) to[bend right] (0');
(0) to[bend left] (0”);
(0) to[bend right] (0”);
at (0.75,1.25) (1') ∙;
at (-0.75,1.25) (1”) ∙;
at (0,1.75) (1”') ∙;
(1') to[bend left] (1”');
(1') to[bend right] (1”');
(1”) to[bend left] (1”');
(1”) to[bend right] (1”');
[dashed] (0') to[bend right] (1');
[dashed] (0”) to[bend left] (1”);
at (1,0) (1) ∙;
at (-1,0) (-1) ∙;
(0) – (1);
(0) – (-1);
at (2,0) (s) ∙;
[above =0.1 cm of s] s;
at (-2,0) (-s) ∙;
[above =0.1 cm of -s] -s;
[dashed] (1) – (s);
[dashed] (-1) – (-s);
at (3,0) (s+1) ∙;
[above =0.1 cm of s+1] s+1;
at (-3,0) (-s-1) ∙;
[above =0.1 cm of -s-1] -s-1;
[blue, thick] (s) – (s+1);
[blue, thick] (-s) – (-s-1);
[dashed] (s+1) – (4,0);
[dashed] (-s-1) – (-4,0);
* If at stage s we see that x(e,s-1)=1 and x(e,s)=0 (namely, our approximation changes its mind again and removes e from X), we add one edge between -s and s, then we add two edges between both s,s+1 and s, -(s+1), while putting no edge between -s and -(s+1).
at (0,0) (0) ∙;
at (.75,0.5) (0') ∙;
at (-.75,0.5) (0”) ∙;
(0) to[bend left] (0');
(0) to[bend right] (0');
(0) to[bend left] (0”);
(0) to[bend right] (0”);
at (0.75,1.25) (1') ∙;
at (-0.75,1.25) (1”) ∙;
at (0,1.75) (1”') ∙;
(1') to[bend left] (1”');
(1') to[bend right] (1”');
(1”) to[bend left] (1”');
(1”) to[bend right] (1”');
[dashed] (0') to[bend right] (1');
[dashed] (0”) to[bend left] (1”);
at (1,0) (1) ∙;
at (-1,0) (-1) ∙;
(0) – (1);
(0) – (-1);
at (2,0) (2) ∙;
at (3,0) (s) ∙;
[above =0.1 cm of s] s;
at (-2,0) (-2) ∙;
at (-3,0) (-s) ∙;
[above =0.1 cm of -s] -s;
[dashed] (1) – (2);
[dashed] (-1) – (-2);
(2) – (s);
(-2) – (-s);
at (4,0) (s+1) ∙;
[above =0.1 cm of s+1] s+1;
at (-4,0) (-s-1) ∙;
[above =0.1 cm of -s-1] -s-1;
[dashed] (s+1) – (5,0);
[dashed] (-s-1) – (-5,0);
[blue, thick] (s) .. controls (3.25,0.5) and (3.75,0.5) .. (s+1);
[blue, thick] (s) .. controls (3.25,-0.5) and (3.75,-0.5) .. (s+1);
[blue, thick] (-s) .. controls (-2.5,-1) and (2.5,-1).. (s);
[blue, thick] (-s-1) .. controls (-2.7,-1.5) and (2.7,-1.5) .. (s);
[blue, thick] (-s-1) .. controls (-2.7,-2) and (2.7,-2) .. (s);
* We restart from item (1).
[rotate=90]
at (0,0) (0) ∙;
at (.75,0.5) (0') ∙;
at (-.75,0.5) (0”) ∙;
(0) to[bend right] (0');
(0) to[bend left] (0”);
at (0.75,1.25) (0”') ∙;
at (-0.75,1.25) (0””) ∙;
[dashed] (0') to[bend right] (0”');
[dashed] (0”) to[bend left] (0””);
at (0,1.75) (0””') ∙;
(0”') to[bend right] (0””');
(0””) to[bend left] (0””');
at (.75,2.25) (1') ∙;
at (-.75,2.25) (1”) ∙;
(0””') to[bend left] (1');
(0””') to[bend right] (1');
(0””') to[bend left] (1”);
(0””') to[bend right] (1”);
at (.75,3) (1”') ∙;
at (-.75,3) (1””) ∙;
at (0,3.5) (1””') ∙;
[dashed] (1') to[bend right] (1”');
[dashed] (1”) to[bend left] (1””);
(1”') to[bend right] (1””');
(1”') to[bend left] (1””');
(1””) to[bend right] (1””');
(1””) to[bend left] (1””');
at (-.75,-.5) (-1) ∙;
at (.75,-.5) (1) ∙;
[blue, thick] (0) to[bend right] (1);
[blue, thick] (0) to[bend left] (1);
[blue, thick] (0) to[bend right] (-1);
[blue, thick] (0) to[bend left] (-1);
[dashed] (1) – (1.5,-1);
[dashed] (-1) – (-1.5,-1);
Notice that the graph G_e constructed in such a way always has two ends. Indeed, since X ∈Δ_2^0, there must be a stage s^* with x(e,s)=x(e,s^*) for all s≥ s^*: hence, we do not create new cycles after such stage.
Now, let s_0=0 and let {s_1, …, s_k} be the set of all stages in which x(e, · ), labeled in increasing order. In other words, those stages s with x(e,s -1) x(e,s)). By our assumption, e ∈ X if and only if k is odd. Thus, it remains to show that G_e admits a two-way infinite Eulerian path if and only if k is odd. First, observe that every vertex in G_e has even degree. Moreover, observe that the graph G_e[{v | |v| ≥ s_k}] is an infinite, connected graph: hence, for any finite set F of edges such that each of them connects two vertices x,y with max{|x|,|y| }≤ s_k, the graph G_e ∖ F has only one infinite connected component. Now, assume that k is odd. Then every vertex in G_e[{v | |v| ≥ s_k}] has degree 2: thus, any finite set of edges from G_e[{v | |v| ≥ s_k}] will induce a subgraph which is not even. This proves that if k is odd, then G_e satisfies . On the other hand, if k is even, every vertex in G_e[{v | |v| ≥ s_k] has degree 4: removing both edges connecting s_k and s_k+1, and those connecting s_k and -(s_k+1) we are left with two infinite connected components, namely G_e[{ v ≥ s_k+1 }] and G_e[{ v ≤ -(s_k+1) }]. This shows that for k even, the graph G_e does not satisfy .
[scale=0.8]
at (-6.5,0) ⋯;
at (-6,0) (0) ∙;
[-] (-6,0) – (-5,0);
at (-5,0) (0) ∙;
[-] (-4,0) – (-5,0);
at (-4,0) (0) ∙;
at (-4,1) (0) finite
even
graph;
[black] (-4,1) ellipse (0.5cm and 1cm);
at (-4,-1) G_e for e odd;
at (-3,0) ∙;
[-] (-4,0) – (-3,0);
[-] (-3,0) – (-2,0);
at (-2,0) ∙;
at (-1.5,0) ⋯;
at (6.5,0) ⋯;
at (6,0) (6) ∙;
[black] (5.5,0) ellipse (0.5cm and 0.2cm);
at (5,0) (5) ∙;
[black] (4.5,0) ellipse (0.5cm and 0.2cm);
at (4,0) (4) ∙;
at (4,1) finite
even
graph;
at (4,-1) G_e for e even;
[black] (4,1) ellipse (0.5cm and 1cm);
[black] (3.5,0) ellipse (0.5cm and 0.2cm);
at (3,0) (3) ∙;
[black] (2.5,0) ellipse (0.5cm and 0.2cm);
at (2,0) (2) ∙;
at (1.5,0) ⋯;
§.§ The case of non-locally finite graphs
With minor modifications, all our results about the difficulty of the Eulerian Path problem for highly computable graphs are also valid for the larger class of computable graphs for which the vertex degree function _G V(G)→∪{∞} is computable, and which are not necessarily locally finite. This includes <Ref>, <Ref>, <Ref>, <Ref>, <Ref>, and <Ref>. Indeed, all the lower bounds that we proved are valid in this more general context, simply because we proved the lower bounds for highly computable graphs, and we are now considering a class of graphs that includes highly computable graphs.
Regarding the upper bounds for and , observe that the conditions on the vertex degrees now include vertices with infinite degree, but upper bounds remain intact as “v has infinite degree” is a decidable predicate of v. Moreover, all results from <Ref> are still available, as explained in <Ref>. By this we refer specifically to the fact that E→_G(E) is upper-semicomputable, uniformly on G, and to <Ref>. These are the key facts that allowed us to prove the upper bounds in <Ref>, <Ref>, <Ref>, <Ref>, <Ref>, and <Ref>.
Let G be a computable graph for which the function _G V(G)→ is computable, with no extra assumptions, and an unknown number of ends. We claim that the problem of determining whether G satisfies (resp. ) is Π_2^0-complete. Let us first consider the upper bound for the conditions . Determining whether G is connected and infinite is clearly a Π_2^0 problem. The condition that every vertex has either even or infinite degree is clearly Π_1^0. Moreover, as E→_G(E) is an upper-semicomputable function, uniformly on G (see <Ref>), we have that determining whether G has at most two ends is a Π_2^0 problem. The condition that for every E⋐ E(G) for which G[E] is an even graph, G∖ E has exactly one infinite connected component, can be written as
(∀ E ⋐ E(G))(∃ n∈) ((∃ v ∈ G[E]) _G[E](v) is odd) (_G,n(E)≤ 1),
so it is Π_2^0 again. It follows that the problem of determining whether G satisfies is Π_2^0. These arguments can be easily extended to the case. Finally, the lower bounds follow from the constructions in <Ref> and <Ref>.
This shows that the problem of determining whether an infinite graph G admits a one-way or two-way infinite Eulerian path is Π_2^0. The lower bound
§.§ Decidability in the automatic case
The definition of automatic graph is a special case of that of automatic structure <cit.>. This roughly means that the vertex set and adjacency relation can be understood through words and finite automata on these words. It is natural to wonder whether some problems whose complexity is known for highly computable graphs, become easier for automatic graphs. While there is a number of examples of problems for which this is the case, other problems just keep the same difficulty. As we have mentioned, deciding whether a given general graph is Eulerian belongs to the latter case: it was shown in <cit.> that this is a Π_2-complete problem regardless of whether the graphs are automatic or highly computable.
As we have shown, however, for highly computable graphs this complexity gets reduced if we restrict ourselves to graphs with only one end. In this section we prove that for automatic graphs the complexity of this problem gets reduced even more: determining if an automatic graph with one end admits an Eulerian path is decidable.
We start by giving a brief account for the material on automatic graphs, and refer the reader to <cit.> for more details. Given an undirected graph G, we denote by R(G) the relation of adjacency. An undirected graph G is automatic when the relational structure (V(G),R(G)) admits an automatic presentation: a tuple (Σ,L,R_e,R_a,h) where Σ is a finite alphabet, L⊂Σ^∗ is a regular language, h L→ V(G) is a surjective function, and R_e={(u,v) : h(u)=h(v)}, R_a={(u,v) : R(G)(h(u),h(v))} are regular relations.
Automatic graphs have a decidable first order theory. Even more, the sets that can be defined by first order formulas are regular with respect to an automatic presentation <cit.>. This proves the decidability of the first order theory, as the emptiness problem for regular languages is decidable. Several works show that we can enrich the first order theory with counting quantifiers such as “there are k mod n”, and “there are infinitely many”, and that the sets defined with these formulas are still regular. We will use the following result, which is a direct consequence of <cit.>
There is an algorithm which, given an automatic graph G, an automatic presentation for (V(G),R(G)), and a sentence written with first order quantifiers plus quantifiers of the form ∃^even (there is an even number), ∃^odd (there is an odd number), and ∃^∞ (there is an infinite number), returns the truth value of the sentence.
There is an algorithm which, on input the automatic presentation of an automatic graph which is connected and has one end, decides whether it satisfies .
Recall from <Ref> that a graph which is connected and has one end admits a two-way infinite Eulerian path if and only if every vertex has even degree. This condition can be expressed as follows:
(∀ u )( ∃ ^even v )R(u,v).
This is decidable from an automatic presentation by <Ref>.
There is an algorithm which, on input the automatic presentation of an automatic graph which is connected and has one end, decides whether it satisfies .
Recall from <Ref> that a graph which is connected and has one end admits a one-way infinite Eulerian path if and only if it has exactly one vertex with odd degree. This condition can be expressed as follows:
(∃ ! u )( ∃ ^odd v ) R(u,v).
That is,
(∃ u )( ∃ ^odd v)(∀ w)(∃ ^even z) ( R(u,v) (w=u R(w,z)).
This is decidable from an automatic presentation of the graph by <Ref>.
We note here that automatic graphs are not necessarily locally finite, and that these proofs can be easily adapted to cover this more general case. Indeed, let G be an automatic graph with one end and which is not locally finite. <Ref> asserts the following: G admits a one-way infinite Eulerian path if and only if either G has exactly one vertex whose degree is odd, or G has at least one vertex with infinite degree and no vertex with odd degree. Moreover, G admits a two-way infinite Eulerian path if and only if all vertices have either even or infinite degree. Now observe that <Ref> allows us to deal with quantifiers of the form “there are infinitely many v”, so the same arguments in Propositions <ref> and <ref> can be extended in a straightforward manner.
§ ON THE COMPLEXITY OF G, G AND THEIR RELASHIONSHIP TO G.
In this section we develop a detailed analysis on the hardness of computing the sets G and G from a description of a highly computable graph G. In a first part we establish that while G is Π_1^0-complete in this uniform setting, the set G is Π_2^0-complete. We also study a version of these problems where we ask to compute just one element in G or G from a description of the G. We found that for this simplified version of the problem, the complexities are ∅' for G and ∅” for G.
In the second part we explore how the computability of G, G and of G interact with each other. We find that the information provided by G is equivalent to the information provided by the combination of G and G, in the sense that there is an algorithm that uniformly transforms one into the other, and vice-versa. We also obtain some negative results, mainly establishing that, individually, the information provided by G is essentially independent from the information provided by G.
§.§ The complexity of G
We have already establish in <Ref> that while for every G with finitely many ends there is an algorithm that decides G, doing it so with an algorithm uniform in G is a Π_1^0 problem in general. Here we prove that this upper bound is in fact tight. More precisely, we show that every function f: ⟶ mapping a highly computable graph G with (at least) two ends into some element of G –a witness of separation– satisfies f≥_T. Moreover, as our construction shows, this is the case even if we restrict ourselves to graphs that are trees.
There is a uniformly highly computable sequence (G_e)_e ∈ of graphs, all of them trees and with two ends, such that:
* {e |{(0,1) }∈G_e} is Π_1^0-complete;
* Let f be such that f(e) ∈G_e for every e∈. Then f ≥_T.
We have already established the Π_1^0 upper bound in <Ref>. It remains to show the Π_1^0-completeness.
Let (φ_e)_e ∈ be a computable enumeration of Turing machines without input. We define G_e as follows: we let V(G_e) = and
E(G_e)={(s,s+1)| s∈}∪{(-s-1,s+1)|φ_e,s-1↑ and φ_e,s↓}∪{(-s,-s-1)| (φ_e,s-1↑ and φ_e,s↑) or (φ_e,s-1↓ and φ_e,s↓) }.
In other words, at every stage s we check whether the computation φ_e,s halts.
As long as φ_e,s↑, we keep adding both the edge between s and s+1 and the edge between -s and -s-1.
at (-1,0) (-1) ∙;
at (0,0) (0) ∙;
at (1,0) (1) ∙;
at (-4,0) (-s) ∙;
at (4,0) (s) ∙;
at (-5,0) (-s-1) ∙;
at (5,0) (s+1) ∙;
[above = 0 cm of -1] -1;
[above = 0 cm of 0] 0;
[above = 0 cm of 1] 1;
[above = 0 cm of -s] -s;
[above = 0 cm of s] s;
[above = 0 cm of -s-1] -s-1;
[above = 0 cm of s+1] s+1;
(-1) – (0);
(0) – (1);
(-1) – (-2,0);
[dashed] (-2.1,0) – (-2.9,0);
(-3,0) – (-s);
(1) – (2,0);
[dashed] (2.1,0) – (2.9,0);
(3,0) – (s);
[blue, thick] (-s) – (-s-1);
[blue, thick] (s) – (s+1);
[dashed] (-s-1) – (-6,0);
[dashed] (s+1) – (6,0);
Hence, if φ_e ↑, G_e will look as follows:
at (0,0) (0) ∙;
[above = 0 cm of 0] 0;
(-2,0) – (0);
(2,0) – (0);
[dashed] (-2.1,0) – (-2.6,0);
[dashed] (2.1,0) – (2.6,0);
But if φ_e,s↓, we still put edge between s and s+1, while omitting the edge between -s and -s-1. Instead, we put an edge between s+1 and -s-1.
at (-1,0) (-1) ∙;
at (0,0) (0) ∙;
at (1,0) (1) ∙;
at (-4,0) (-s) ∙;
at (4,0) (s) ∙;
at (-5,0) (-s-1) ∙;
at (5,0) (s+1) ∙;
[above = 0 cm of -1] -1;
[above = 0 cm of 0] 0;
[above = 0 cm of 1] 1;
[above = 0 cm of -s] -s;
[above = 0 cm of s] s;
[above = 0 cm of -s-1] -s-1;
[above = 0 cm of s+1] s+1;
(-1) – (0);
(0) – (1);
(-1) – (-2,0);
[dashed] (-2.1,0) – (-2.9,0);
(-3,0) – (-s);
(1) – (2,0);
[dashed] (2.1,0) – (2.9,0);
(3,0) – (s);
[blue] (s) – (s+1);
[blue, thick] (-s-1) .. controls (-2,-1) and (2,-1) .. (s+1);
[dashed] (-s-1) – (-6,0);
[dashed] (s+1) – (6,0);
Hence, in this case G_e will look like this:
at (0,0) (s) ∙;
[below = 0 cm of s] s;
(-2,0) – (s);
(2,0) – (s);
[dashed] (-2.1,0) – (-2.6,0);
[dashed] (2.1,0) – (2.6,0);
at (0,1) (0) ∙;
[left = 0 cm of 0] 0;
at (0,2) (-s) ∙;
[left = 0 cm of -s] -s;
(s) – (0) – (-s);
The G_e's are clearly uniformly highly computable.
Notice that the singleton {(0,1) } separates G_e if and only if φ_e does not halt, that is e ∈. This proves item 1.
For proving item 2 observe that:
* if (x,y) ∈G_e and y ∉{ x-1, x+1 }, then φ_e halts;
* if φ_e halts, then {(x,x+1)}∈ E(G_e) separates G_e if and only if max{|x|,|x+1|} is larger than the minimum number of steps in which φ_e halts (notice that in this case (-s,-(s+1)) ∉ E(G_e)).
Now suppose we are given a function f: → such that f(e) ∈G_e. First we check whether there is (x,y) ∈ f(e) such that y ∉{ x-1, x+1 }, in which case we are sure that e ∈. Otherwise, let n = max{|v| | v is incident to some edge in f(e) } and run φ_e for n steps: then e ∈ if and only if φ_e,n↓.
We end our analysis of G by considering the case of graphs with infinitely many ends. As shown in the next result, in this case G may become undecidable even in the non uniform regime.
There is a connected highly computable graph G with infinitely many ends for which G≡_T.
Fix a computable enumeration (φ_e)_e ∈ of Turing machines without input.
We define a graph G as follows. Its set of vertices is V(G) = { (e,s): φ_e,s↑}⊂^2, and its set of edges is E(G) = {((e,0),(e+1,0): e ∈}∪{((e,s-1),(e,s)): φ_e,s↑}.
For example, in case φ_0 and φ_2 halts but φ_1 doesn't, G would look something like the picture below.
at (0,0) (0) ∙;
[below = 0 cm of 0] (0,0);
at (1.5,0) (1) ∙;
[below = 0 cm of 1] (1,0);
at (3,0) (2) ∙;
[below = 0 cm of 2] (2,0);
(0) – (1);
(1) – (2);
(2) – (4,0);
[dashed] (4,0) – (4.75,0);
at (0,1.5) (0') ∙;
[left = 0 cm of 0'] (1,0);
at (1.5,1.5) (1') ∙;
[left = 0 cm of 1'] (1,1);
at (3,1.5) (2') ∙;
[left = 0 cm of 2'] (2,1);
at (0,3) (0”) ∙;
[left = 0 cm of 0”] (2,0);
at (1.5,3) (1”) ∙;
[left = 0 cm of 1”] (2,1);
at (1.5,4.5) (1”') ∙;
[left = 0 cm of 1”'] (3,1);
(0) – (0');
(0') – (0”);
(1) – (1');
(1') – (1”);
(1”) – (1”');
(1”') – (1.5, 5.5);
[dashed] (1.5,5.5) – (1.5, 6.25);
(2) – (2');
Observe that G is highly computable. Indeed, V(G) and E(G) are clearly computable. Moreover, for each (e,s) ∈ V(G), we have
deg_G((e,s)) =
1, if ((e=s=0) or (e>0 and s>0)) and φ_e,s+1↓
3, if (e>0 and s=0) and φ_e,s+1↑
2, otherwise,
which, again, is clearly computable.
Observe that the set {((e,s),(e,s+1)) : s ∈} is contained in E(G) if and only if φ_e does not halt. Moreover, in this case, removing the edge from (e,0) to (e,1) separates G into two infinite connected components - namely, the infinite path P = ((e,t),(e,t+1))_t > 1 and G ∖ (P ∪(e,s)), where the latter contains the infinite path ((n,0),(n+1,0))_n > e.
Assume that G is decidable. Then we can decide whether ((e,0),(e,1)) ∈G, which is equivalent to decide whether φ_e ∈ (the complement of the halting set), a contradiction.
§.§ The complexity of G
Here we show that the complexity of computing G is strictly higher than that for G.
The problem of given a highly computable graph G and E⋐ E(G), to determine whether E∈G, is Π_2^0-complete.
Recall that in <Ref> we defined _G,n(E) as the number of infinite connected components of G∖ E that we can distinguish by inspecting a ball of radius n. This value is computable from a description of a highly computable graph G, E⋐ E(G) and n∈. Observe that E⋐ E(G) lies in G if and only if
(∀ F⋐ E(G))(∃ n∈)(_G,n(F)≤_G,n(E)).
This proves the upper bound in the statement. For the lower bound, we use the sequence of highly computable graphs (G_e^2)_e ∈ constructed in the proof of <Ref>. Recall that graphs of this family have either two or three ends. Consider the edge (0,1), with the vertex 1 taken in the additional copy of ∖{0}: for this family it clearly holds that
{e: (0,1) ∈G_e^2} = {e: G_e^2 has 2 ends}.
at (0,0) (0) ∙;
at (1,0) (s0) ∙;
at (2,0) (s*) ∙;
(0) to[bend left] (s0);
(0) to[bend right] (s0);
[dashed] (s0) – (s*);
(s*) – ++(20:0.75cm);
[dashed] (s*)++(20:0.75cm) – ++(20:0.75cm);
(s*) – ++(-20:0.75cm);
[dashed] (s*)++(-20:0.75cm) – ++(-20:0.75cm);
[above = 0 cm of 0] 0;
[above = 0 cm of s*] s^*;
at (-1,0) (1) ∙;
[above= 0 cm of 1] 1;
[thick, blue, dotted] (0) – (1) node[midway] ×;
(1) – ++(180:0.75 cm);
[dashed] (1)++(180:0.75 cm) – ++(180:0.75 cm);
(L) at ((1)+(180:1.5 cm));
(C1) at ((s*)+(20:1.5cm));
(C2) at ((s*)+(-20:1.5cm));
(R) at (0.5*(C1)+0.5*(C2));
[decorate,decoration=brace,amplitude=5pt,mirror,raise=4ex] (L) – (1) node[midway, yshift=-3em] 1 end;
[decorate,decoration=brace,amplitude=5pt,mirror,raise=4ex] (0) – (R) node[midway, yshift=-3em] 2 ends;
at (0,0) (0) ∙;
at (1,0) (s0) ∙;
at (2.1,0) (s1) ;
[above = 0 cm of 0] 0;
(0) to[bend left] (s0);
(0) to[bend right] (s0);
[dashed] (s0) – (s1);
at (-1,0) (1) ∙;
[above= 0 cm of 1] 1;
[thick, blue, dotted] (0) – (1) node[midway] ×;
(1) – ++(180:0.75 cm);
[dashed] (1)++(180:0.75 cm) – ++(180:0.75 cm);
[decorate,decoration=brace,amplitude=5pt,mirror,raise=4ex] (1)++(180:1.5cm) – (1) node[midway, yshift=-3em] 1 end;
[decorate,decoration=brace,amplitude=5pt,mirror,raise=4ex] (0) – (s1) node[midway, yshift=-3em] 1 end;
Hence, the claim follows immediately from the fact that the set on the right hand side is Π_2^0-complete.
We finish this part by showing the announced lower bound for the simplified version of uniformly computing an arbitrary element in G.
Let f be a function which, given a highly computable connected
graph G with finitely many ends, outputs an element f(G)∈G.
Then f≥_T0”.
The proof will rely on the following lemma.
Let G be an infinite, connected, and locally finite graph, and let w_0 be a vertex in G. For each r∈ let V_r be the set of vertices in G whose distance to w_0 equals r or r-1, and let E_r be the set of edges in the induced subgraph G[V_r]. Suppose further that E_r∈G.
Then a subset E⊂ E_r is minimal for inclusion within G if and only if there is an infinite connected component C of G∖ E_r, such that E equals the set of all edges in E_r incident to some vertex from C.
For the forward implication, let E⊂ E_r, suppose that E lies in G, and that any proper subset of E is not in G. As E∈G, the graph G∖ E has at least two infinite
connected components. We claim that there is an infinite connected component C of G∖ E_r, such that the set E contains
all edges that join some vertex in {v∈ V(G)| d(v,w_0)≤ r-1}
to some vertex from C. This follows from the fact that the graph G[{v∈ V(G)| d(v,w_0)≤ r-1}] is connected: if C does not exist, then every infinite connected component in G∖ E is connected to G[{v∈ V(G)| d(v,w_0)≤ r-1}], implying that G∖ E has exactly one infinite connected component. This contradicts the fact that E∈G, so the component C exists. Moreover, if E contains edges that are not joined to vertices in C, then we can discard them while still
having an element in G, contradicting that E is minimal for inclusion. This proves the forward implication, and the backward implication is clear.
Let f be a function as in the statement. Note that if G has
two ends, then G=G. Then it follows from <Ref> that f≥_T0'.
Then, by <Ref>, f can
be used to decide membership in G, with a procedure that
is uniform on a description of G.
We claim that the function f can be used to compute the number
of ends of a highly computable and connected graph with finitely many
ends. This can be done as follows. Given the graph G, we compute
E=f(G)∈G. Then we choose a vertex v_0
and r∈ so that the induced subgraph G[{v∈ V(G)| d(v,v_0)≤ r-1}]
contains all edges from E. We then let E_r=E(G[{v∈ V(G)| r-1≤ d(v,v_0)≤ r}}]). Our choice of r ensures that E_r lies in G.
As we can decide membership in G, we can determine those
subsets of E_r that are minimal for inclusion among those in
G. This information is sufficient to determine the number
of infinite connected components of G∖ E_r by <Ref>. But as E_r∈G, this is the number of ends of G.
At this point we have proved that the function f can be used to compute the number
of ends of a highly computable connected graph G with finitely
many ends. Then it follows from Proposition <ref>
that f≥_T0”.
§.§ The relationship among G, G and G
In informal words, the main result of this section is the relation:
G+G=G.
More specifically:
Let G be a highly computable graph with finitely many ends. Then:
* There is an algorithm which with oracle access to G computes G and G. This algorithm is uniform on G.
* There is an algorithm which given G and with oracle access to G, decides membership in G.
This result will be proved in several parts.
There is an algorithm which, given a description of a highly computable graph G which is connected and has finitely many ends, and oracle access to G, computes
G.
First, observe that ∅∈G if and only if G has only one end. Hence, we can detect if G has only one end. If ∅∉G, then we proceed as follows. We choose a vertex w_0, and define E_r as the set of edges in the induced subgraph G[{v∈ V(G)| d(v,w_0)=r or d(v,w_0)=r-1}]. Having oracle access to G, we can compute r so that E_r lies in G. We fix from now on E= E_r. We write E={e_1…,e_n}, and we compute a set of vertices U={u_1,…,u_n} so that d(u_i,w_0)=r, and e_i is incident to u_i.
Observe that every connected component of G∖ E contains a vertex u_i. We denote by C_i the connected component of G∖ E that contains u_i. Moreover, the number of ends of G equals the number of infinite connected components of G∖ E. In what follows we describe an effective procedure which, with oracle access to G, allows us to decide which components of C_1,…, C_n are infinite, and for which pairs of i and j the component C_i is equal to C_j.
C_i is infinite if and only if there is a set W ⊂ E minimal for inclusion within G, and that contains an edge incident to u_i.
Indeed, suppose that C_i is finite, that W⊂ E lies in G, and that E
contains an edge e that is incident to u_i. Then it is clear that W∖{e} also lies in G, as removing the edge e from W modifies G∖ W only by connecting a finite connected component to a possibly infinite one. It follows that W also lies in G, so W was not minimal for inclusion in G. We conclude that for C_i finite, any subset W⊂ E that is minimal for inclusion and lies in G has no edge incident to u_i.
Assume now that C_i is infinite. Since we already ruled out the possibility that G has 1 end, there must be a component C_k different to C_i, and that is also infinite. We choose W⊂ E as the set of all edges in E that are incident to some vertex in an infinite connected component of G∖ E that is different from C_k. Observe that then W contains all edges incident to C_i, and no edge incident to a finite connected component of G∖ E. Moreover, the connected components from G∖ E that are different from C_k, remain the same in G∖ W. The component C_k of G∖ E, has been modified in G∖ W: it has been connected to the finite graph G[{v∈ V(G)| d(v,w_0)≤ r-1}].
We claim that W is minimal for inclusion in G. Indeed, let e be an arbitrary edge in W, so e is incident to a vertex u_j, with j∈{1,…,n} and C_k C_j. Observe that, in the graph G∖ (W∖{e}), the vertices u_k and u_j are in the same connected component. This follows from the fact that the graph G[{v∈ V(G)| d(v,w)≤ r-1 }] is connected. In other words, the components C_k and C_j from G∖ E, can be connected in the graph G∖ W', and thus W' does not lie in G. It follows that the set W is minimal for inclusion in G. This proves the claim.
Suppose that both C_i and C_j are infinite. Then C_i=C_j if and only if there is W ⊂ E minimal for inclusion within G, and which contains no edge incident to vertices in C_i or in C_j.
In the proof of the previous claim we actually proved the forward implication. That is, we proved that if C_i is infinite, then the set W⊂ E of all edges in E that are incident to some vertex in an infinite connected component of G∖ E that is different from C_i, lies in G and is minimal for inclusion within G. If C_i=C_j, then this set W verifies the claim. We now prove the backward implication, suppose that there is W ⊂ E minimal for inclusion within G, and which contains no edge incident to vertices in C_i or in C_j. If the connected components C_i and C_j of G∖ E are infinite and different, then the graph G∖ W has a component with two ends, contradicting that W lies in G. This proves the claim.
Observe that the conditions in these two claims are decidable with oracle access to G. In other words, we have exhibited an effective procedure that, with oracle access to G, decides which components among C_1,…,C_n are infinite, and which are indeed the same connected component. This allows us to compute the number of infinite connected components of G∖ E, which equals the number of ends of G because E is taken in G.
There is an algorithm which, given a description of a highly computable graph G which is connected and has finitely many ends, the number G, and oracle access to G, computes an element in G.
Let k be the number of ends of G. We fix a vertex w_0∈ V(G), and for each r∈ we let E_r be the set of edges in the induced subgraph {v∈ V(G)| d(v,w_0)=r or d(v,w_0)=r-1}. It is clear that for r big enough, the set E_r lies in G. The next claim follows directly from <Ref>:
E_r lies in G if and only if the following two conditions are satisfied:
* If E,E' are subsets of E_r, both lie in G, and both are minimal for inclusion within G, then E,E' are either disjoint or equal.
* E_r has exactly k subsets which are in G and are minimal for inclusion within G.
As these conditions are decidable with oracle access to G, we have an effective procedure that, given r∈ and having oracle access to G, determines whether E_r lies in G. But then we can compute a set E_r that lies in G by an exhaustive search.
Let k be the number of ends of G. We fix a vertex v∈ V(G), and from now on we write B_r=B_r(v), r∈. We define U_r={u∈ V(G)| d(u,v)=r}.
We say that r∈ satisfies ⋆ if the following conditions hold:
* | {V ∈G∩ U_r: V' ⊂ V V' ∉G}| = k, namely U_r has exactly k subsets which are in G and are minimal for inclusion within G; and
* every pair V,V' of subsets of U_r which lie in G and are minimal for inclusion among subsets of G are either disjoint or the same set.
There is r_0 such that every r≥ r_0 satisfies ⋆.
It is clear from the definition that if r satisfies ⋆, then r+1 also satisfies ⋆. Hence it is enough to prove that if U_r lies in G, then U_r+1 satisfies ⋆. Let U_r ∈G, and let C_1,…,C_k be the infinite connected components of G-U_r.
Without loss of generality, we assume that U_r doesn't satisfy ⋆. Then we claim that U_r+1 must satisfy that for every v∈ U_r, there is at most one infinite connected component from G-U_r adjacent to v. Notice that this property implies ⋆. Since U_r+1 lies in G, there are k infinite connected components C_1',…,C_k', where C_i'⊂ C_i. Assume for a contradiction that some v∈ U_r+1 is adjacent to at least two infinite connected components C_i'≠ C_j', say at vertices u_i∈ C_i' and u_j∈ C_j', and let v' be a vertex in U_r adjacent to v. This shows that components C_i and C_j share a vertex v, contradicting the fact that G-U_r has k infinite connected components.
If r satisfies ⋆, then U_r lies in G.
Let V⊂ U_r be a set in G and minimal for inclusion among sets in G. This ensures the existence of an infinite connected component C of G-U_r which is adjacent to V, and which is not adjacent to any other V'≠ V which lies in G and is minimal for inclusion. Indeed, if every infinite connected component of G-U_r which is adjacent to V is also adjacent to some other V', then V does not lie in G.
Thus every subset V⊂ U_r which is in G and is minimal for inclusion has at least one adjacent infinite connected component in G-U_r, and which is not adjacent to any other such V'. This proves that G-U_r has at least k infinite connected components, and thus U_r lies in G.
This finishes the proof.
We are now ready to prove <Ref>.
The main tool in this proof is <Ref>. Let us start by proving the first item in <Ref>. Given a graph G as in the statement and oracle access to G, we can compute G with the algorithm in <Ref>. Putting this together with the function in <Ref>, we obtain an algorithm to decide membership in G.
We now prove the second item in <Ref>. Given a graph G as in the statement and oracle access to G, we can compute an element in G with the algorithm in <Ref>. Now we have the information required by the function in <Ref>. This is sufficient to decide membership in G: on input E⋐ E(G), we just have to use the function from <Ref> to compute the number of infinite connected components of G∖ E. If this number equals G, then we conclude that E∈G, and otherwise E∉G.
§.§ G and G are independent
Here we show that having access to G is not enough to compute G with a procedure uniform on G, and vice versa. This shows that <Ref> cannot be improved significantly.
We first observe that having access to G is not enough to compute G, even if the graphs are assumed to be trees.
There is a uniformly highly computable sequence of trees (G_e)_e∈ such that G is uniformly computable, but G is not.
The sequence defined in the proof of <Ref> satisfies this.
Similarly, we observe that having access to G is not enough to compute G, even if the graphs are assumed to be trees.
There is a uniformly highly computable sequence of trees (G_e)_e∈ such that G is uniformly computable, but G is not.
Fix a computable enumeration (φ_e)_e ∈ of Turing machines without input. Let V(G_e) = ∪{ -s' | s ∈∖{0}}∪{s' |φ_e,s-1↓} (in other words, V(G_e) consists of 0, two copies of the negative integers, a copy of the positive integers and another copy of all the positive integers larger than the number of steps required to φ_e to halt, if any) and
E(G_e) = { (s,s+1) | s ∈}∪{ (0,1') }∪{ (-s', (-s-1)') | s ∈∖{0}}∪{(s', (s+1)') |φ_e,s-1↓}.
The following picture shows how each G_e roughly looks like, depending on whether φ_e halts.
at (0,0) (0) ∙;
[above = 0 cm of 0] 0;
(0) – (150:0.75cm);
[dashed] (150:0.75cm) – (150:1.5cm);
(0) – (-150:0.75cm);
[dashed] (-150:0.75cm) – (-150:1.5cm);
(0) – (0:0.75cm);
[dashed] (0:0.75cm) – (0:1.5 cm);
[below = 1 cm of 0] G_e for φ_e ↑;
at (0,0) (0) ∙;
[above = 0 cm of 0] 0;
(0) – (150:0.75cm);
[dashed] (150:0.75cm) – (150:1.5cm);
(0) – (-150:0.75cm);
[dashed] (-150:0.75cm) – (-150:1.5cm);
(0) – (0:0.75cm);
[dashed] (0:0.75cm) – (0:1.5 cm);
at (1.25,0) (M) ;
[below = 1cm of M] G_e for φ_e ↓;
at (0:2.4cm) (s) ∙;
[above= 0 cm of s] s;
(0:1.6cm) – (s);
[blue, thick] (s) – ++(30:0.75cm);
[blue,dashed, thick] (s)++(30:0.75cm) – ++(30:0.75cm);
(s) – ++(-30:0.75cm);
[dashed] (s)++(-30:0.75cm) – ++(-30:0.75cm);
It is clear that G_e is uniformly decidable as for each e, G_e is a tree without finite branches and therefore G_e contains all finite sets of vertices. On the other hand, an “additional end" will appear exactly when φ_e halts: hence, the number of ends of these graphs must be uncomputable.
|
http://arxiv.org/abs/2409.02734v1 | 20240904140632 | Reply to Comment on "A slightly oblate dark matter halo revealed by a retrograde precessing Galactic disk warp" | [
"Yang Huang",
"Qikang Feng",
"Tigran Khachaturyants",
"Huawei Zhang",
"Jifeng Liu",
"Juntai Shen",
"Timothy C. Beers",
"Youjun Lu",
"Song Wang",
"Haibo Yuan"
] | astro-ph.GA | [
"astro-ph.GA"
] |
School of Astronomy and Space Science, University of Chinese Academy of Sciences, Beijing, 100049, China
National Astronomical Observatories, Chinese Academy of Science, Beijing, 100012, China
Department of Astronomy Peking University, Beijing, 100871, China
Kavli Institute for Astronomy and Astrophysics, Peking University, Beijing, 100871, China
Department of Astronomy, School of Physics and Astronomy, Shanghai Jiao Tong University, Shanghai 200240, China
Key Laboratory for Particle Astrophysics and Cosmology (MOE)/Shanghai Key Laboratory for Particle Physics and Cosmology, Shanghai 200240, China
Department of Astronomy Peking University,
Beijing, 100871, China
Kavli Institute for Astronomy and Astrophysics, Peking University, Beijing, 100871, China
National Astronomical Observatories, Chinese Academy of Science, Beijing, 100012, China
New Cornerstone Science Laboratory, National Astronomical Observatories, Chinese Academy of Sciences, Beijing 100012, China
School of Astronomy and Space Science, University of Chinese Academy of Sciences, Beijing, 100049, China
Department of Astronomy, School of Physics and Astronomy, Shanghai Jiao Tong University, Shanghai 200240, China
Key Laboratory for Particle Astrophysics and Cosmology (MOE)/Shanghai Key Laboratory for Particle Physics and Cosmology, Shanghai 200240, China
Department of Physics and Astronomy and JINA Center for the Evolution of the Elements (JINA-CEE), University of Notre Dame, Notre Dame, IN 46556, USA
National Astronomical Observatories, Chinese Academy of Science, Beijing, 100012, China
School of Astronomy and Space Science, University of Chinese Academy of Sciences, Beijing, 100049, China
National Astronomical Observatories, Chinese Academy of Science, Beijing, 100012, China
Department of Astronomy, Beijing Normal University, Beijing, 100875, People's Republic of China
§ ABSTRACT
In this reply, we present a comprehensive analysis addressing the concerns raised by <cit.> regarding our recent measurement of the disk warp precession using the `motion-picture' method <cit.>.
We carefully examine the impact of ignoring the twist of the disk warp and the so-called R-τ correlation on the estimation of the precession rate.
The results indicate that the effect is minor and does not exceed the systematic and statistical uncertainties.
Using N-body+SPH simulation data, we confirm that the `motion-picture' technique is effective in measuring retrograde precession of disk warp in stellar populations younger than 170 Myr, similar to classical Cepheids.
Therefore, the overall conclusions of <cit.> remain robust.
§ MAIN
In <cit.>, we measured the precession of the Galactic disc warp using a `motion-picture' method, by tracing the line-of-nodes (LON) angle, ϕ_w, of classical Cepheids as a function of their age, τ.
By analyzing a sample of 2,600 classical Cepheids with accurately determined distances and ages, we found that the Galactic warp is undergoing mild retrograde precession at a rate of -2.1 ± 0.5 ( statistical) ± 0.6 ( systematic) km s^-1 kpc^-1.
This result is further used to constrain the shape of the inner dark matter halo, suggesting it is slightly oblate, that is required to explain the measured mild retrograde precession.
<cit.> raised concerns that: 1) the `motion-picture' method is ineffective, as young stars like classical Cepheids primarily trace the gas warp at present, i.e. ϕ_w should hardly depend on Cepheid age; and 2) the measured dϕ_w/dτ is mainly from an omitted-variable bias, resulting from neglecting the natural twist of the disc warp, dϕ_w/dR, and the relationship between R and τ for classical Cepheids.
As demonstrated in earlier N-body + smooth particle hydrodynamics (N-body+SPH) simulations <cit.>, young mono-age stellar populations that were born in the warp experience orbital tilting and phase-mixing over much longer timescales than in the `motion-picture' method: at least 300 Myr are needed for major changes to manifest in these populations.
A detailed analysis of a warped simulation from <cit.> shows that the retrograde precession of the disc warp can be revealed by the `motion-picture' method, using stellar populations with ages younger than 170 Myr (similar to classical Cepheids).
Although not discussed in <cit.>, we find that neglecting the twist of the disc warp has only a minor effect on the measured dϕ_w/dτ, contributing no more than the reported systematic uncertainty from other sources.
In summary, the main findings of <cit.> remain valid.
§ THE EFFECT OF AN OMITTED-VARIABLE BIAS
<cit.> argued that the positive value of dϕ_w/dτ measured in <cit.> is primarily due to an omitted-variable bias, resulting from the neglect of the natural twist dϕ_w/dR and the correlation between R and τ for classical Cepheids.
However, they overestimated both the value of dϕ_w/dR and the correlation coefficient between R and τ.
For the twist term, the LON ϕ_w of the disc warp does not increase monotonically with R, which is different from the trend reported in <cit.>.
Both our Figure 1 here and Figure 3 of <cit.> clearly show that ϕ_w initially decreases with R from the warp's starting radius to R ∼ 12-13 kpc, then increases with R until R ∼ 15-16 kpc, and finally tends toward being flat.
The slope of dϕ_w/dR for all sample stars with R > 7.5 kpc is only 4.4 ± 0.7 deg kpc^-1.
Specifically, for the three radial bins analyzed in <cit.>, the slopes are 6.1 ± 1.1 deg kpc^-1 for 11.8 < R < 18.8 kpc, 3.9 ± 0.8 deg kpc^-1 for 14.0 < R < 21.0 kpc, and 2.1 ± 1.8 deg kpc^-1 for R > 15.5 kpc (see Table 1). All of these detected slopes are significantly smaller than the value of 10.6 ± 0.8 deg kpc^-1 reported by <cit.>.
<cit.> also claimed that a correlation exists between R and τ for classical Cepheids, which they attribute to the metallicity gradient of the Galactic disc and the Cepheid age–metallicity relation. However, as discussed in <cit.>, the contribution of the metallicity term to age estimation for Cepheids is negligible compared to the period term <cit.>. <cit.> demonstrated that assuming a constant [Fe/H] value of -0.07 for all stars does not affect the calculated ages of classical Cepheids. Consequently, we do not expect a significant R-τ correlation for Cepheids, given the weak age–metallicity relation. As a verification, we calculated the Pearson correlation coefficient ρ_Rτ for the stars in Figure 1, which is only 0.16, consistent with our expectations.
We then apply Equation 6 from <cit.> to assess the impact of the disc-warp twist and the R-τ correlation (although this is scarcely a true correlation) on the estimated dϕ_w/dτ as reported by <cit.>.
The details are summarized in Table 1.
For all sample stars, this effect is just 0.03 ± 0.01 deg Myr^-1, only one-third of the value claimed by <cit.>, and comparable to the systematic or statistical uncertainties in the estimation of dϕ_w/dτ reported by <cit.>.
Similar results are also found for the three radial bins: 11.8 < R < 18.8 kpc, 14.0 < R < 21.0 kpc, and R > 15.5 kpc.
Moreover, if the argument of <cit.> is correct, we should expect a negative value of dϕ_w/dτ for 9 < R < 12 kpc, as ϕ_w decreases with R within this radial range.
However, our measurements show a positive value (despite the relatively large error; see Table 1), which serves as a counterargument.
§ TESTING `MOTION-PICTURE' METHOD WITH SIMULATIONS
Here we use simulation data to demonstrate that: 1) young stellar populations preserve the orbital information imparted by the gas at the time of their formation; and 2) applying the `motion-picture' method to these young stellar populations can reveal a retrograde precession.
We adopt a Milky-Way-like warped N-body+SPH simulation described in (henceforth KH22), which is evolved using the code gasoline <cit.> for 12 Gyr without any mergers. In this warped model, the dark matter halo is triaxal with the angular momentum of the embedded gas corona misaligned with its principal axes; this results in gas accreting via an S-shaped warp onto the disc. Detailed discussion of the initial conditions and evolution of the warped model is presented in KH22. In each snapshot of the simulation, similar to the pre-processing in KH22, the stellar disc is centred and rotated into the (x,y) plane based on the angular momentum of the inner region (R<5 kpc). As a result of the pre-processing, the sense of rotation in the model is in the direction of increasing azimuth.
The warp in the KH22 simulation is long-lived, and increases in size with time; it has been shown in K22 (via spectral analysis) to have a mostly static or mildly retrograde precessing warp.
We then select several hundred time periods between 4 Gyr and 12 Gyr that have clear disc warp precession. These time periods are determined by analysing the LON of the cold gas (T≤50,000 K) at a fixed radial annulus 12 < R <13 kpc.
Here we define LON as the ϕ_w angle of the radially averaged angular momentum vector.
For each time period, we then analyse the LON of young stellar populations separated into equally-sized, non-overlapping age bins (Δτ=20 Myr).
As an example, we present the LON for cold gas, stars at formation, and stars at present as functions of age or relative lookback time for one of these time periods in Figure <ref>.
Clearly, the LON versus age of stars at present basically catches the trend of both cold gas and stars at formation for stellar populations younger than 170 Myr, which contradicts the argument of <cit.> that the `LON should hardly depend on Cepheid age'.
Discrepancies begin to emerge beyond this age, suggesting that the warp signal preserved from the gas at the time of formation starts to be erased.
Following the `motion-picture' approach in <cit.>, we perform linear fits to the LON as a function of median age for two age intervals: [0,110] Myr and [0,150] Myr.
Here, 110 Myr and 150 Myr represent the median and upper age limits, respectively, of the classical Cepheids sample in <cit.>.
For all picked time periods, we find that 90% and 80% of the disc warp measured with retrograde precession using the `motion-picture' method, for the [0,110] Myr and [0,150] Myr age bins respectively, are indeed retrograde, as indicated by the stars at the time of formation.
A more thorough validation and analysis of the `motion-picture' method will be performed in a forthcoming paper.
These simulation results can also be understood physically as follows.
<cit.> demonstrated that young stellar populations born in the warp will gradually tilt and phase-mix into the disc. However, these processes are slow and can take up to 300 Myr to produce significant changes within a given mono-age population.
Therefore, it is possible that there exists a maximum population age at which the warp history can be preserved: within a retrograde precessing warp, older populations are born with a larger LON, but their orbital precession is not sufficient enough to erase the warp signal until a population is older than about 170 Myr.
The above explanations provide a theoretical foundation for <cit.> in measuring disc warp precession using the `motion-picture' technique with classical Cepheids samples (mostly younger than 170 Myr).
Finally, we emphasize that: 1) the `motion-picture' method is effective for young stellar populations, particularly those younger than 170 Myr (such as the classical Cepheids used in ), whose angular momentum has yet to be altered to such a degree that the warp signal were unrecognisable <cit.>; and 2) neglecting the disc-warp twist has a minor effect on the estimated dϕ_w/dτ, if any, and this effect is no greater than the systematic or statistical uncertainties. Therefore, the main conclusion of <cit.> remains valid.
[Chen et al.(2019)]CH19 Chen, X., Wang, S., Deng, L., et al. 2019, Nature Astronomy, 3, 320
[Dehnen et al.(2024)]D24 Dehnen, W., Schönrich, R., Drimmel, R., et al. 2024, arXiv:2407.06341. doi:10.48550/arXiv.2407.06341
[De Somma et al.(2021)]DS21 De Somma, G., Marconi, M., Cassisi, S., et al. 2021, , 508, 1473. doi:10.1093/mnras/stab2611
[Huang et al.(2024)]H24 Huang, Y., Feng, Q., Khachaturyants, T., et al. 2024, arXiv:2407.00319. doi:10.48550/arXiv.2407.00319
[Khachaturyants et al.(2021)]TK21 Khachaturyants, T., Beraldo e Silva, L., & Debattista, V. P. 2021, , 508, 2350
[Khachaturyants et al.(2022)]TK22 Khachaturyants, T., Beraldo e Silva, L., Debattista, V. P., & Daniel, K. J. 2022, , 512, 3500
[Wadsley et al.(2004)]Wadsley+04 Wadsley, J., Stadel, J. & Quinn, T. 2004, , 137-158
|
http://arxiv.org/abs/2409.03260v1 | 20240905055142 | In Search of Trees: Decision-Tree Policy Synthesis for Black-Box Systems via Search | [
"Emir Demirović",
"Christian Schilling",
"Anna Lukina"
] | cs.AI | [
"cs.AI",
"cs.LG"
] |
Gaussian Process Phase Interpolation for estimating the asymptotic phase of a limit cycle oscillator from time series data
[
September 9, 2024
==========================================================================================================================
§ ABSTRACT
Decision trees, owing to their interpretability, are attractive as control policies for (dynamical) systems. Unfortunately, constructing, or synthesising, such policies is a challenging task.
Previous approaches do so by imitating a neural-network policy, approximating a tabular policy obtained via formal synthesis, employing reinforcement learning, or modelling the problem as a mixed-integer linear program. However, these works may require access to a hard-to-obtain accurate policy or a formal model of the environment (within reach of formal synthesis), and may not provide guarantees on the quality or size of the final tree policy.
In contrast, we present an approach to synthesise optimal decision-tree policies given a black-box environment and specification, and a discretisation of the tree predicates, where optimality is defined with respect to the number of steps to achieve the goal. Our approach is a specialised search algorithm which systematically explores the (exponentially large) space of decision trees under the given discretisation. The key component is a novel pruning mechanism that significantly reduces the search space. Our approach represents a conceptually novel way of synthesising small decision-tree policies with optimality guarantees even for black-box environments with black-box specifications.
§ INTRODUCTION
Designing controllers for complex systems with the guarantee of specified behaviour remains an important challenge. Classical control synthesis can provide such guarantees given a (precise) model of the system <cit.>. This requirement may in some cases be infeasible, which gave rise to black-box and approximate approaches, e.g., based on machine learning. As systems grow larger, interpretability is an increasingly desired specification for machine-learned
policies to achieve alignment with human specifications <cit.>. With the success of decision trees as interpretable machine-learning models, policies represented as decision trees have gained considerable traction <cit.>.
There are diverse approaches to synthesising, or learning, decision-tree policies. Stratego <cit.> employs reinforcement learning dedicated to decision trees. Modifying the reinforcement learning process to produce decision trees has also been proposed <cit.>. An alternative is to apply imitation learning to distil a neural-network policy into a decision tree <cit.>. After using formal synthesis to construct a policy in tabular form, decision trees may be induced via specialised algorithms akin to algorithms used for solving standard classification problems <cit.>.
While previous approaches have their strengths, none of the discussed methods
provide guarantees in terms of decision-tree policy performance or size and/or require an existing expert policy or effective reinforcement learning algorithm. Policy synthesis may be posed as a mixed-integer linear programming problem <cit.>, which can provide guarantees. However, this approach
assumes that a model of the environment is given. When the above requirements are not met, decision-tree policy cannot be constructed using existing methods.
In contrast, we consider a unique setting: derive 1) small decision-tree policies when 2) the model and specification of the environment is black-box whilst 3) providing optimality guarantees of performance. Every
work we are aware of will violate at least one of these three points.
We note that our work is applicable to deterministic black-box systems.
Such systems pose a challenging controllability problem, as providing exact guarantees implies searching in an exponentially large space.
Our approach is based on search. Briefly, our algorithm systematically enumerates all possible decision trees that may be constructed using a given set of predicates, and then selects the tree that optimises the specification evaluated by the black-box environment for each tree, e.g., minimises (maximises) the time to reach (maintain) a target state.
As an illustrative example, consider the pendulum environment in Fig. <ref>. The pendulum is attached at one end to a fixed point, and available control actions are to apply force to push the free end left (a_1=-1) or right (a_2=1) <cit.>. Our aim is to construct a small decision-tree policy that swings the pendulum to an upward angle (θ=0) from a given initial state, and does so as quickly as possible. The environment is available as a black box, i.e., the dynamics are hidden, but given an initial state and a policy, we may compute the trajectory and obtain its evaluation with respect to the black-box specification.
We start using the first tree (with predicate [θ≥ c_1] and leaf nodes corresponding to actions)
as a policy for the black-box environment , and obtain a trace that reaches the goal angle within 60 time steps. Next, the predicate is modified to [θ≥ c_2] (where c_2>c_1), and the new tree produces a trace that reaches the goal within 50 time steps, which is considered better. For the next tree with predicate [θ≥ c_3] (where c_3>c_2), the (partially) produced trace is considered inferior:
it surely does not reach the goal faster than the best tree (50 steps).
A key component is our novel trace-based pruning mechanism that discards a large portion of the search space by runtime analysis. This is made possible by exploiting the decision-tree structure and considering the execution of the tree policy: even though the environment is black-box, examining the trace allows us to understand how the decision tree is used, and discard trees that are guaranteed to not lead to a better trace. This allows us to reduce the search space without sacrificing optimality even though our model and specification are given as a black box. In the previous example, depending on the concrete trace, our trace-based pruning might be able to determine that it is possible to discard the third tree from consideration without missing a better tree only by observing the trace produced by the first and second tree. In practice this can lead to order-of-magnitude reductions in runtime.
We implemented our approach and evaluate it on classical control benchmarks. The experiments demonstrate significant reductions obtained with our trace-based pruning, and illustrate that small and optimal decision trees may be constructed within reasonable time. We further analyse the scalability of our algorithm in terms of the number of predicates (granularity) and the size of the tree, both of which have an exponential influence on the runtime. Nevertheless, the experiments show the runtime is within practical use.
To summarise, we consider a unique setting and provide a conceptually novel approach to construct optimal small decision-tree policies with respect to black-box systems. While not all environments are controllable by small trees, when the environment does admit a small tree policy, our approach provides an effective way to compute an optimal tree only requiring black-box access to the system.
We organise the paper as follows. In the next section, we discuss related work and highlight our unique setting. We outline preliminaries in Section <ref>, define the problem in Section <ref>, present our approach in Section <ref>, experimentally evaluate our approach in Section <ref>, provide further discussion in Section <ref>, and conclude in Section <ref>.
§ RELATED WORK
Our work covers a unique setting: constructing decision trees given a deterministic black-box system whilst providing optimal performance guarantees. As such, there are no directly applicable works that we are aware of in this setting. Nevertheless, to illustrate the challenges of our setting, we discuss previous works for synthesising decision-tree policies, albeit not fitting into our setting.
Reinforcement learning. A tree policy can be obtained via reinforcement learning, either by using dedicated tree algorithms <cit.> or by modifying reinforcement learning to output tree policies <cit.>.
Alternative approaches allow linear functions on the leaves <cit.>, consider multiple predicates, branches and actions at a time, or fix the structure using expert knowledge <cit.> and then employ policy gradient updates <cit.>. These approaches perform exceptionally well, when an existing reinforcement-learning approach is available that is effective for the given system <cit.>, and/or the model <cit.> of the environment is known. A tree policy may be derived by imitating an expert policy, e.g., a neural network <cit.>.
In contrast, we require neither model nor expert policy and provide optimality guarantees. In case of sparse rewards, reinforcement learning might struggle, while our framework by design has no such problem.
Learning from tabular data. When the policy is given in tabular form, dedicated tree-learning algorithms for control policies can be employed <cit.>, which extend classical tree-learning algorithms <cit.>. Recent advancements in optimal tree induction could potentially also be employed <cit.>. However, obtaining the tabular policy requires an explicit model, which is not required in our setting.
Optimal policy synthesis. The problem of constructing a tree policy may be posed as a mixed-integer linear program <cit.>, after which off-the-shelf solvers may be used to obtain optimal policies. However, not all environments may be feasible to model with such an approach (e.g., differential equations or trigonometric functions), and in our setting we consider black-box environments, which are not amendable to linear programming.
Verification. There has been recent work to provide guarantees for decision-tree policies <cit.>, possibly for infinitely many traces (we consider finitely many traces). However, the tree policy and the model must be given explicitly, which makes this work orthogonal to ours.
We do note that we consider deterministic systems with discrete actions. Some of the methods above are also applicable to stochastic environments and continuous actions, which we consider as future work.
To summarise, while there has been considerable work on decision-tree policies, synthesising such policies when the environment is black-box whilst also providing optimality guarantees is an open challenge.
§ PRELIMINARIES
A state S = (s_1, s_2, …, s_d)^⊤∈ is a d-dimensional real-valued vector from a bounded state space ⊆^d, where each state dimension s_i belongs to an interval s_i ∈ [ℓ_i, u_i] of a lower and an upper bound. An action a ∈ comes from a finite set ⊆ℤ of integer-valued actions.
An environment is a function ×→ that takes as input a state S and an action a and computes a trajectory until asked to output the next observable state S' = (S, a). In our setting, the environment is treated as a black box, i.e., we are agnostic to its dynamics.
A policy
is a function π→ that chooses an action based on an input state. We write for the set of all policies over action set . In this work, we are concerned with the special case of a decision-tree policy, which is given in the form of a binary tree where each inner node is called a predicate node and each leaf node is called an action node. Each predicate node is associated with a function →, where is the Boolean domain. Each action node is associated with one of the available actions a ∈. Given a state S and a node of the tree, a decision-tree policy π computes the action a = π(S) using the following recursive procedure, starting at the root node. If the current node is an action node, the associated action is returned. Otherwise, the current node is a predicate node. If the state S makes the predicate evaluate to true (false), the procedure continues with the left (right) child node.
Given an environment , a decision-tree policy π, an initial state S_0, and a bound k ∈, the trace with k steps is the sequence of observed states τ = S_0, S_1, S_2, …, S_k, obtained by applying the sequence of actions given by π: S_i = (S_i-1, π(S_i-1)), for i=1,…,k. We write for the set of all traces.
§ OPTIMAL DECISION-TREE POLICIES
To simplify the exposition, we focus our discussion on the case of a single initial state S_0. We generalise the problem to multiple initial states in Section <ref>.
§.§ Discretised Decision-Tree Predicates
The space of all decision trees is infinitely large. By discretising the tree predicates, we obtain a finite (but exponentially large) space. We restrict our attention to (axis-aligned) predicates of the form [s_i ≥ + m ·], where s_i is the i-th state dimension, , ∈ are real-valued constants, and m ∈ is a positive integer. Since our state space is bounded, we obtain a finite number of non-equivalent predicates. We write for the set of all (tree) predicates.
For instance, given a state space with s_i ∈ [0, 3] and = = 1, we consider the predicates [s_i ≥ 1], [s_i ≥ 2], and [s_i ≥ 3]. Note that predicate [s_i ≥ 0] is excluded since it is a tautology by s_i's domain, i.e., always evaluates to true.
The increment value controls the discretisation resolution: a smaller value yields more predicates, which increases the space of considered decision trees but potentially allows to find better trees. The algorithm's runtime is sensitive to the larger search space, and hence a practical balance is needed. Experimentally we show that our approach can handle reasonably small increments.
§.§ Specification
Given environment , decision-tree policy π, and initial state S_0, we consider a specification to determine whether π satisfies the specification. We assume that the specification is given in terms of traces, again in the form of a black-box specification function →.
In order to effectively determine whether π satisfies the specification, we restrict the class of specifications we consider. A bounded-time specification with a bound k ∈
has the property that
every trace τ of length k either satisfies or violates the specification. As a consequence,
we are guaranteed to obtain a Boolean verdict from a trace of length at most k.
We call the (unique) trace of length k the witness trace.
This class of specifications includes common reach-avoid problems where a goal needs to be reached while undesired states need to be avoided.
For instance, for the pendulum environment, the specification is to reach the vertical position within a step bound k.
A trace satisfies the specification if and only if a prefix of length less than k satisfies the specification function. Conversely, any trace not reaching the goal withing k steps violates the specification.
§.§ Optimality
So far, we were only interested in finding any policy that satisfies a given specification. In general, there may exist multiple solutions. We are interested in identifying an optimal policy. For that, we assume a fitness function, which is a partial order ≽× to compare two traces. A trace that satisfies the specification always precedes a trace that violates the specification. The fitness function induces another partial order ≽× to compare two policies, as follows. We say that π_1 is strictly better than π_2, written π_1 ≻π_2, if one of the following conditions holds:
1) The witness τ_1 has strictly better fitness than the witness τ_2.
2) Both witnesses have the same fitness, and π_1 is strictly smaller.
We wrap the black-box environment and the black-box specification function into a black-box system ×→×. This system takes as input a policy π and an initial state S_0 and outputs both the (Boolean) verdict and the trace τ. We say that a policy π satisfies the specification for initial state S_0 if yields a positive verdict. We note that can be implemented from and by simply generating the witness τ
and querying the specification function.
Given a black-box system over a set of actions , an initial state S_0, a limit on the depth and number of predicate nodes, and a fitness function ≽, find a decision-tree policy π∈ within the defined size that satisfies the specification optimally with respect to the fitness function ≽ and the witness trace τ produced by the black-box system .
For instance, for environment in Fig. <ref>, the black-box specification (reaching the vertical position) is satisfied for two out of three decision trees. However, we are looking for those trees that satisfy the specification within the smallest number of time steps (in this example, 50).
§ SYNTHESIS OF OPTIMAL DECISION-TREE POLICIES
For computing an optimal decision-tree policy that solves Problem <ref>, a naive procedure is to enumerate all possible decision trees and evaluate them. By fixing an upper limit of the number of tree nodes, this procedure terminates. However, the number of available trees is exponential, rendering this procedure infeasible. Our main contribution is an efficient instantiation of this procedure.
§.§ Searching In the Space of Decision Trees
Our algorithm to enumerate decision trees is based on backtracking search. We represent the search space using backtracking variables b_i, where each variable is associated with a node in the tree. The possible values that can be assigned to a backtracking variable depend on the type of node which the variable is associated with: predicate nodes may be assigned a predicate from the set of discretised predicates, whereas action nodes may be assigned an action from the set of available actions.
Backtracking variables are considered in a predefined order, i.e., variable b_i goes before variable b_i+1. As is standard in backtracking, all combinations for variable b_i+1 are exhausted before taking the next value for variable b_i.
Assigning all backtracking variables thus results in a particular decision-tree policy, and consequently, enumerating all possible assignments to the backtracking variables corresponds to all possible policies in our available space. When enumerating a policy, it is used in combination with the black-box environment to compute the witness trace and subsequently evaluate the quality of the policy. Finally, the best policy is returned as the result.
For a tree with a fixed shape and n predicate nodes, || number of actions, and || number of discretised predicates, the size of the total search space is 𝒪(||^n· ||^n+1). Our key contributions are techniques for reducing this exponentially large search space in practice.
§.§ Intuition Behind Trace-based Pruning
The idea of pruning is to limit the exploration by avoiding to explicitly enumerate trees that are guaranteed to be suboptimal, i.e., do not satisfy the desired property with a higher fitness. We define sufficient conditions for pruning.
To provide the intuition behind our approach, consider an environment with only one state dimension s_1, and the process of enumerating all trees with exactly one predicate node, three possible predicates [s_1 ≥ 1], [s_1 ≥ 2], [s_1 ≥ 3], and having the left and right child nodes fixed to actions a_1 and a_2, respectively (cf. Figure <ref>). To find the best tree of the given description, we generally need to consider all three trees, with each tree differing only in the root node.
Assume the algorithm starts with predicate [s_1 ≥ 1] and computes the first trace depicted in Figure <ref>. We could consider the two remaining trees with predicates [s_1 ≥ 2] and [s_1 ≥ 3]. However, from the first trace we see that the second tree with predicate [s_1 ≥ 2] would result in the exact same trace. Indeed, there is no state in the trace where the policy would decide differently regardless of whether the predicate is [s_1 ≥ 1] or [s_1 ≥ 2], so the trace would not change. As a result, we do not have to evaluate the tree with predicate [s_1 ≥ 2], and can directly go to the last tree.
§.§ Trace-Based Pruning
The intuition discussed above can be generalised to prune a potentially exponential number of trees that do not result in a different trace, which significantly speeds up the search process. In the following, we discuss incorporating a general version within a backtracking algorithm with more than one predicate node. We will consider predicates in increasing order, i.e., [s_j ≥ v_1] before [s_j ≥ v_2] if v_1 < v_2.
Given a backtracking variable b_i associated with a predicate node, the algorithm needs to select the next predicate to assign to the variable. A naive approach would be to simply select the next bigger one, e.g., from the example in Section <ref>, assign predicate [s_1 ≥ 2] after considering [s_1 ≥ 1]. However, we can leverage information about previous traces to avoid explicitly considering predicates that are guaranteed to not result in a trace that has not been previously observed.
After assigning a predicate [s_j ≥ v] to a backtracking variable b_i, the idea is to track the values of state dimension s_j that have been observed during environment runs such that the predicate evaluated to true. In particular, we are interested in the smallest such value, which we refer to as the distance value d_i ∈. Note that a tree policy is only run after having all backtracking variables assigned.
The key idea is that, when selecting the next predicate for backtracking variable b_i, its threshold value (v in the example above) should exceed the distance value d_i. Otherwise, the trace would be identical.
To reiterate, for each backtracking variable b_i associated with a predicate node with current predicate [s_j ≥ v], we store a value d_i ∈, which tracks the minimum value of state dimension s_j for which the predicate was evaluated to true amongst all traces that were considered since the predicate [s_j ≥ v] has been assigned. Note that selecting a new threshold value for the predicate that is smaller than d_i is guaranteed to result in a trace already observed. Note that it is not necessary to track the values where the predicate evaluates to false, since the predicates are explored in increasing order of the thresholds and as such the future predicates would also evaluate to false on those values.
Initially, the distance value d_i is set to undefined each time a new predicate [s_j ≥ v] is assigned as part of the search. The first time the node observes a state where its predicate is satisfied in a trace, the distance value d_i is set to the corresponding value of s_j. Each subsequent time the predicate is satisfied, d_i is updated to the smallest value for which the predicate still evaluates to true.
After considering predicate [s_j ≥ v] for node i, our algorithm does not consider the next predicate, but instead uses the predicate [s_j ≥ v'] where v' is the smallest available value such that v' > d_i.
If the distance value d_i is undefined, all predicates may be discarded for that backtracking variable for the currently considered state dimension s_j.
For the example in Section <ref>,
the distance value d_1 is initially undefined, and upon completing the first trace, it is set to d_1 = 2.3. When selecting the next predicate, [s_1 ≥ 2] is discarded since its threshold 2 does not exceed distance value d_1 = 2.3; so the next selected predicate is [s_1 ≥ 3].
The above idea is applied to every backtracking variable associated with a predicate.
Our pruning strategy is computationally inexpensive: it amounts to tracking a single value for each backtracking variable, and updating this value as the tree is queried during trace computation. The algorithm retains completeness, as it is guaranteed to not discard optimal trees. Our trace-based pruning is the key component in the practical efficiency.
§.§.§ Additional Techniques:
Trees explored in increasing size. The algorithm partitions the search space in terms of tree shapes, which are ordered by size. For example, after considering trees with exactly one predicate node, the algorithm considers trees which have a root node with one left predicate child, then trees which have a root node with one right predicate child, then complete trees with three predicate nodes, and so on until the size budget is reached.
Early stopping due to the objective. During the search, the best tree found so far is tracked. When evaluating a new candidate tree, its evaluation is preemptively stopped when it is determined that the trace cannot be extended to a trace that is better than the one obtained from the best tree so far.
For example, consider a setting where the policy should reach a goal state as quickly as possible. If the best policy so far reaches the goal state in k trace steps, and the partial trace associated with the current candidate policy has not reached the goal state in k-1 steps, we may safely discard the candidate policy from further consideration, since it cannot result in a better trace. Note that, since we explore trees in increasing size, this results in the algorithm computing the smallest tree with the optimal performance across the considered maximum tree-size budget.
Early stopping has two advantages: 1) it saves computational time, and 2) it results in traces of shorter length, which allows for more aggressive trace-based pruning owing to fewer distance updates being made.
Symmetries. Trees that have identical left and right subtrees are discarded from consideration. In these cases, the root node of such a tree is redundant and its predicate has no influence on the trace. A smaller tree, consisting of the subtree, would result in the same trace, and since the algorithm explores trees in increasing size, it is safe to discard the larger symmetric tree without further consideration. The main use of this technique is to discard trees that contain predicate nodes with identical left and right action nodes, and otherwise plays a minor role.
§.§.§ Summary
Algorithm <ref> provides a high-level view on using backtracking variables. If available, the next unassigned backtracking variable is selected, or the last assigned variable otherwise. The next value is selected either as the next action for variables representing action nodes, and otherwise trace-based pruning is used to determine the threshold. Once a predicate has been exhausted on one state dimension, predicates for the next state dimension are selected.
Once all backtracking variables are assigned, the algorithm constructs a decision-tree policy, and uses the black-box system to produce the trace τ. If τ is better than the globally best trace (initially null), that trace is updated to τ. The distance values of the nodes of the policy are used to update the distance values of the backtracking variables. After all policies have been (implicitly) considered, the algorithm returns the best policy (Line <ref>).
§.§ Extensions
Multiple initial states. The previous discussion was based on constructing a tree policy from a single initial state. However, we may be interested in finding a single tree policy that works well across multiple initial states. The algorithm remains similar, with impact on two components: 1) the objective function, and 2) trace-based pruning.
When evaluating a tree with respect to multiple initial states, we generalise the fitness function. For example, if the goal is to minimise the trace length, then the generalisation aims to minimise the maximum trace length. This influences early stopping: the initial states are evaluated with respect to the tree policy one at a time, and as soon as a trace is encountered that is considered violating, the evaluation stops, i.e., the remaining initial states are not considered further.
The above idea interacts with trace-based pruning. In case the tree evaluation is stopped early, meaning the tree is deemed not better than the best tree found so far, only the last trace is used to update the distance values. The intuition is that, if we wish to find a better tree, it must lead to a trace different from the last trace, and we can ignore the distance-value updates of the other traces.
As a result, due to the interaction with pruning, finding the optimal tree with respect to multiple initial states may lead to lower running times than with a single initial state.
Maximisation. Rather than satisfying the desired property in the least number of steps, we may be interested in maximising the number of steps. For example, the goal may be to balance a pole for as long as possible. The algorithm stays largely the same, with the only analogous changes needed in the evaluation of the tree. For maximisation it is important to specify an upper bound on the trace length; otherwise, the algorithm may potentially run infinitely long.
§ EXPERIMENTAL STUDY
We aim to illustrate the effectiveness of our approach with a proof-of-concept implementation. We show that our trace-based pruning approach is a key factor in making the approach feasible. Furthermore, we consider scalability from two perspectives: the granularity of the predicates, and the number of predicate nodes in the tree. While both are expected to have an exponential impact on the runtime, we observe that the runtime is still within practical use.
We consider three classical control problems: CartPole, MountainCar, and pendulum. The environment behaviour is defined as in Gymnasium[https://github.com/Farama-Foundation/Gymnasium].
MountainCar and pendulum are minimisation problems, whereas CartPole is a maximisation problem. We set the maximum trace size to 10,000, which is mainly only relevant for CartPole since it maximises the trace size, while for the other two environments, most traces are cut well before the limit due to early stopping. The control actions are limited to two choices, e.g., apply maximum force in one or the other direction. Our implementation only queries the environment in a black-box fashion. We generated the initial states randomly within a specified range; see Appendix <ref> for details about the parameters, environment description, and a sample of trees produced by our approach.
To reiterate, as discussed in Section <ref>, we work with a unique setting where we 1) synthesise decision-tree policies, 2) only require black-box access to the environment and the specification, and 3) provide guarantees on performance under the tree definition, e.g., the policy that minimises the time taken, or prove that no such policy exists. Every work we are aware of violates at least one of these points. Consequently, while direct comparisons with other works may be done by relaxing the requirements of our setting, this brings considerable caveats that result in comparisons that we argue are not meaningful with respect to our contribution. For these reasons, we focus on demonstrating the feasibility of our approach and its scalability.
Our code base is written in (pure) Rust 1.77.0.
The experiments were run on consumer-grade hardware (Intel(R) Xeon(R) W-10855M @ 2.8 GHz). Our code will be made publicly available in due course.
§.§ Experiment #1: Trace-Based Pruning
We run experiments with and without our trace-based pruning (Table <ref>), both for one and 100 initial states, for trees of depth two. Initial states are generated randomly, and the results are averaged over ten runs.
Our trace-based pruning technique is clearly effective in pruning the search space. For CartPole and MountainCar, there is a one- to two-orders-of-magnitude difference, whereas for pendulum, it leads to a 4x reduction.
The runtime is roughly proportional to the number of trees explicitly considered, as expected, and is consistent based on the standard deviation across different initial states. Interestingly, we observe only a sub-linear increase in runtime with more initial states, and the total number of trees considered is roughly similar regardless of the number of initial states.
§.§ Experiment #2: Granularity of Predicates
The granularity of predicates impacts the runtime: the finer the discretisation, the larger the search space. Each state dimension in the environment has a predefined range of values that it may take. We divide this interval into 5, 10, 15, and 20 values, and use these values as the predicate thresholds.
Table <ref> summarises the results for trees of depth three across randomly generated initial states (averaged over ten runs). The general trend is that increasing the number of predicates indeed increases the runtime.
However, in the case of CartPole, we observe an opposite effect: finer predicates decrease the runtime. This is because the finer discretisation resulted in the algorithm finding a tree faster that balances the pole for 10,000 steps, which is the maximum number of steps so the search can terminate.
The results indicate that the discretisation needs to be chosen carefully to balance runtime and quality of the final tree.
§.§ Experiment #3: Number of Predicate Nodes
The number of predicate nodes directly influences the size of the search space. We study the runtime by considering trees of depth three and varying the maximum number of nodes from three to six.
The results are summarised in Table <ref>. Note that the search space of trees with at most k predicate nodes includes trees with less than k predicate nodes, meaning the search space is strictly larger as we increase the number of nodes. Consequently, there is a sharp increase in the runtime, which is expected due to the exponential factor.
§ FURTHER DISCUSSION AND LIMITATIONS
Our approach is effective at computing small and optimal decision-tree policies. There is an exponential runtime dependency with respect to the size of the tree and the discretisation of the state space. It may be infeasible with our approach to construct large tree policies or deal with high-dimensional environments. It is also possible that not all environments may be controlled by small decision trees.
However when it is applicable, we believe small trees are valuable for interpretability reasons, and our approach provides the means to easily obtain such trees.
The exponential runtime factor in our approach is inherent to every approach that aims to provide guarantees.
Our approach is exceptionally flexible as it only requires black-box access to the system. This entails that the black box may be arbitrarily complex, as long as it can still be practically computed. Furthermore our algorithm provides performance guarantees, despite working with black boxes.
The optimality is important since it guarantees that we obtain the best performing tree under consideration, which may be relevant for some applications. It also allows us to conclude in cases when no such tree exists, and in general understand the limits of decision trees as control policies.
Given that our approach is a conceptually novel way to synthesise decision-tree policies in a unique setting, it opens many avenues for future work.
Parallelisation is promising as the search space can be naturally partitioned, and further heuristic pruning may lead to a principled trade-off between runtime and guarantees. Extending the approach to stochastic environments is another interesting direction. In our work, continuous actions ought to be discretised in a preferably smaller number of actions. Synthesising optimal trees for continuous actions remains an open challenge for decision-tree policies in general.
§ CONCLUSION
We presented a novel search-based method for computing an optimal decision-tree policy given a set of initial states and a black-box system. To the best of our knowledge, our approach is the first to consider such a setting. The key component is our trace-based pruning technique, which discards large portions of the search space at runtime. We illustrated the practicality of the approach on classical control benchmarks. When the environment is controllable by a small tree, our approach provides a way to obtain a small and optimal tree despite only requiring black-box access to the system.
§ ACKNOWLEDGEMENTS
This research received support from NWO Veni grant Explainable Monitoring (222.119), Independent Research Fund Denmark under reference number 10.46540/3120-00041B, and the Villum Investigator Grant S4OS under reference number 37819. This work was done in part while Anna Lukina was visiting the Simons Institute for the Theory of Computing.
§ APPENDIX / SUPPLEMENTAL MATERIAL
We considered three environments as defined in Gymnasium[https://github.com/Farama-Foundation/Gymnasium]. The dynamics may be found in their git repository, however, our algorithm does not have access to the internal dynamics, and only observes the state-action outputs in a black-box fashion.
In the following, we describe the initial states and specifications. Note that specifications are also treated as black-box for the algorithm.
§.§ CartPole
The system has four dimensions: cart position x, cart velocity ẋ, pole angle θ, and pole angular velocity θ̇. We select the initial state by randomly assigning values in the range [-0.05, 0.05] to each state dimension. These values follow the initial states given in the Gymnasium.
The specification is to maintain that the cart position stays within the range [-2.4, 2.4] and the pole angle is within the range [-α, α], where α = 24 π / 360, by applying force to the left (-1) and right (1).
§.§ MountainCar
The system has two dimensions: car position x and car velocity ẋ. We select the initial state by randomly assigning the car position in the range [-0.6, -0.4] and setting the velocity to zero. These values follow the initial states given in the Gymnasium.
The specification is to reach the top of a hill, which means that the car position is greater or equal to 0.5, by applying force to the left (-1) and right (1).
§.§ Pendulum
The system has two dimensions: the position of the free end of the pole x and pole angular velocity θ. Note that in the Gymnasium, the position of the tip of the pole is given as two dimensions in terms of cos and sin of the position - presumably this is done to make it easier for neural networks to learn; however, in our case, we chose to directly represent the position in Cartesian coordinates.
The initial state is given by randomly assigning values to the position of the tip of the pole and the angular velocity from the ranges [-0.8, -0.5] and [-0.2, 0.2], respectively. The values in Gymnasium are given within [-1.0, 1.0], however, depending on the random values chosen for the experiments with multiple initial states, some configurations resulted in trees, whereas some did not, i.e., no decision tree within the given specification could reach the goal within 10,000 steps. To make the experiments more consistent, we opted for a reasonable reduction of the initial states, and selected the above provided values for the initial states.
The specification is to reach a state where both state dimensions (in radians) are within the range [-0.1, 0.1] by applying force to the left (-1) and right (1).
|
http://arxiv.org/abs/2409.02485v1 | 20240904072312 | Adversarial Attacks on Machine Learning-Aided Visualizations | [
"Takanori Fujiwara",
"Kostiantyn Kucher",
"Junpeng Wang",
"Rafael M. Martins",
"Andreas Kerren",
"Anders Ynnerman"
] | cs.CR | [
"cs.CR",
"cs.AI",
"cs.HC",
"cs.LG",
"stat.ML"
] |
Adversarial Attacks on Machine Learning-Aided Visualizations]Adversarial Attacks on Machine Learning-Aided Visualizations
[1]Takanori [email protected]
1]Kostiantyn Kucher
2]Junpeng Wang
3]Rafael M. Martins
1,3]Andreas Kerren
1]Anders Ynnerman
[1]Linköping University, Sweden
[2]Visa Research, United States
[3]Linnaeus University, Sweden
Research in ML4VIS investigates how to use machine learning (ML) techniques to generate visualizations, and the field is rapidly growing with high societal impact. However, as with any computational pipeline that employs ML processes, ML4VIS approaches are susceptible to a range of ML-specific adversarial attacks. These attacks can manipulate visualization generations, causing analysts to be tricked and their judgments to be impaired. Due to a lack of synthesis from both visualization and ML perspectives, this security aspect is largely overlooked by the current ML4VIS literature. To bridge this gap, we investigate the potential vulnerabilities of ML-aided visualizations from adversarial attacks using a holistic lens of both visualization and ML perspectives.
We first identify the attack surface (i.e., attack entry points) that is unique in ML-aided visualizations.
We then exemplify five different adversarial attacks.
These examples highlight the range of possible attacks when considering the attack surface and multiple different adversary capabilities. Our results show that adversaries can induce various attacks, such as creating arbitrary and deceptive visualizations, by systematically identifying input attributes that are influential in ML inferences. Based on our observations of the attack surface characteristics and the attack examples, we underline the importance of comprehensive studies of security issues and defense mechanisms as a call of urgency for the ML4VIS community.
[
[
3 September 2024
====================
fancy
§ INTRODUCTION
Visualizations play a critical role in depicting relationships or patterns within an underlying dataset such that analysts can effectively explore, interact, and communicate the data.
Recently, researchers are actively investigating various applications of using machine learning (ML) techniques when generating visualizations optimized for certain tasks (e.g., chart recommendation <cit.>, text-chart translation <cit.>, and interaction prediction <cit.>).
This approach is often referred to as ML for visualization, or ML4VIS for short <cit.>.
Despite the wide usage, we argue that the current research focus on ML4VIS is imbalanced. The majority of ML4VIS research mainly focuses on the benefits these techniques provide <cit.>, and consequently largely overlooks the security issues these techniques introduce.
Critically understanding security issues related to ML4VIS requires a more holistic perspective that is both visualization- and ML-oriented.
This dual perspective is necessary as the joint use of ML and visualization techniques may cause additional security problems that are not prevalent in each respective field.
However, relevant research investigations on these matters are currently disconnected.
For example, though some visualization research has explored the vulnerabilities of visualization approaches, the focus of these investigations targets more general visualization practices (e.g., the influence of the choice of the aspect ratio on the recognition of correlation patterns <cit.>). These insights do not consider the specific security issues that ML approaches introduce (e.g., manipulating data transformation when generating ML-aided visualizations).
Similarly, though the ML community is intensively studying the vulnerability of ML models from adversarial attacks <cit.>, these security vulnerabilities have not been studied with respect to visualization.
Adversaries can then take advantage of loopholes that are unattended.
For instance, while direct human intervention is vital to prevent critical harm from ML mispredictions <cit.>, this scenario is generally not applicable to ML4VIS.
In ML4VIS, an adversary's goal could be to harm the end users by corrupting output visualizations that humans use for their judgments.
This lack of an interdisciplinary perspective can introduce security concerns that both communities are overlooking.
This paper aims to convey the importance of systematic studies related to the security vulnerabilities of ML4VIS approaches.
To reveal structural vulnerabilities of ML-aided visualizations, we first outline the attack surface (i.e., a set of entry points for adversaries) by referring to the ML4VIS pipeline introduced by Wang et al. <cit.>.
We then characterize the unique aspects of the attack surface of ML-aided visualizations.
For this characterization, we focus on ML-aided visualizations that incorporate neural networks (NNs), as NNs are rapidly gaining increased attention and wide usage <cit.>.
To demonstrate the unique characteristics of the attack surface with real possible threats, we design concrete examples of adversarial attacks on representative state-of-the-art ML4VIS methods.
The results from these exemplary attacks highlight critical vulnerabilities in the current ML4VIS approach that can directly impact future applications.
Based on the insights from our literature review, characterization of the attack surface, and attack examples, we discuss the identified research gaps on the lack of studies on vulnerabilities specific to ML4VIS and propose a research agenda outlining future research directions from a security perspective.
Our research reflects the urgency of this matter as several government and official institutions (e.g., EU <cit.>, UK <cit.>, and US <cit.>) have already introduced and developed ML security regulations.
We similarly argue that it is important that visualization researchers also take action to minimize the risk of any harm that can stem from using ML in conjunction with visualization.
In summary, our primary contributions are to be:
* characterization of the attack surface of ML-aided visualizations, which produces an indication of potential threats (<ref>);
* five examples of adversarial attacks using two representative ML-aided visualizations that (1) describe the attack strategies and (2) analyze the attacked results (<ref>); and
* a research agenda outlining the future research direction for ML4VIS from a security perspective (<ref>).
§ BACKGROUND AND RELATED WORK
Addressing security issues in ML-aided visualizations requires a dual set of ML and visualization considerations.
We describe relevant works related to adversarial attacks and vulnerabilities from ML and visualization research.
We also discuss a general overview of ML-aided visualizations.
§.§ Adversarial Attacks on Machine Learning Models and Defenses
With ML techniques actively used in real-world settings, Dalvi et al. <cit.> posits a critical research agenda that can address the issue of how “the presence of an adversary actively manipulating the data [can] defeat the data miner”.
As deep NNs are rapidly being used in various domains, such as vision and speech recognition, a significant portion of ML research is devoted to addressing security issues in NNs.
One notable early result is an adversarial example designed for convolutional NN (CNN) models by Szegedy et al. <cit.>.
They demonstrated that an adversarial example can be easily constructed by adding a human-imperceptible perturbation into an input image.
This perturbation can readily cause image misclassification even with state-of-the-art CNN models.
These adversarial examples pose critical issues in real-world settings where, for example, an adversary may craft a stop traffic sign that an autonomous car will misclassify as a yield sign <cit.>. Researchers have since studied a multitude of efficient and intense adversarial examples on CNNs.
Adversarial examples on CNNs can be generally categorized as either white-box attacks or black-box attacks depending on the adversary's knowledge about the model of interest.
When an adversary knows detailed information about a target CNN model (i.e., white-box attacks), they can efficiently construct adversarial examples by referring to the gradients of the model's loss function <cit.>.
In contrast, black-box attacks are when the adversary has limited information available on a target model.
In these cases, the adversary generally has two strategies. First, an adversary can build a substitute model that performs the same classification task as the target model.
The substitute model can then be used to generate adversarial examples like in white-box attacks <cit.>.
Another common black-box attack strategy is to reverse engineer a target model from the collected input-output relationships by sending queries to the model <cit.>.
Research continues to highlight new attack methods that have grown and diversified for other NNs, such as recurrent NNs (RNNs) and graph NNs (GNNs) <cit.>.
Besides crafting adversarial examples, data poisoning <cit.> (i.e., adding malicious training data) is another effective way to corrupt the model as exemplified by the problematic tweets made by Microsoft's chatbot Tay <cit.>.
Data poisoning can also be used to perform backdoor attacks.
Backdoor attacks can make NN models behave maliciously only when models receive inputs stamped with a trigger crafted by the adversaries <cit.>.
Multiple surveys <cit.> highlight how backdoor attacks can be easily prepared with various approaches, such as sharing maliciously pre-trained models with others.
The growing range of these attacks highlights the need for defense strategies.
Defense strategies against adversarial attacks vary but all present shortcomings.
One straightforward way to protect against white-box attacks is gradient masking, which conceals the gradient information from adversaries <cit.>. However, even after gradient masking, an adversary can perform a black-box attack with a substitute model.
Another defense strategy is to generate more robust NN models through adversarial training (i.e., training using artificially created adversarial examples) <cit.>.
NN models produced with adversarial training are still vulnerable to out-of-sample adversarial examples.
A third popular strategy focuses on input validation and preprocessing, such as applying statistical methods to detect abnormal inputs (e.g., PCA-based detection <cit.>) and data compression to exclude the perturbations (e.g., JPEG compression <cit.>).
However, this strategy is highly domain-specific and is not generalizable across domains <cit.>.
We discussed only three possible defense strategies.
We refer readers to the taxonomy by Papernot et al. <cit.> and multiple surveys and reports for more details on attack and defense strategies <cit.>).
Visual analytics approaches have also devoted to investigating the mechanism of adversarial attacks on NNs to establish better defense strategies <cit.>.
However, there is a research void in studying the impact of adversarial attacks on the generation of visualizations.
§.§ Vulnerabilities of Visualizations
Visualizations can become obscure, misleading, and even deceptive as a consequence of poorly prepared data <cit.>, problematic visual designs <cit.>, viewers' cognitive bias <cit.>, or any combinations of these.
For example, a visualization with missing value imputations that is not suited for target analysis tasks can decrease viewers' performance <cit.>.
Subtle skews in visualizations, such as 3D effects on pie charts, can lead to wrong conclusions <cit.>.
A viewer's belief in the existence of correlations between two variables (e.g., the numbers of guns and crimes) can also influence the cognition of the correlation strength <cit.>.
The choice of a visual representation in itself can additionally lead to bias in estimating a user's confidence in the presented visual representations <cit.>.
McNutt et al. <cit.> provided a conceptual model that frames these visualization vulnerabilities along the visual analytics pipeline.
By exploiting these visualization vulnerabilities, adversaries can easily create malicious visualizations.
Correll and Heer <cit.> discussed a man-in-the-middle attack on visualizations: an attack from a malicious visualization designer who aims to distract communication between data and viewers.
Adversaries (i.e., designers) can intentionally break visualization conventions <cit.> to create visualizations with problematic designs.
Our work argues that even when adversaries do not have such a strong capability of manipulating visualization designs, various practical attacks can be imposed on ML-aided visualizations (see <ref> and <ref>).
To the best of our knowledge, no existing work explicitly studied defense strategies against adversarial attacks on visualizations.
Existing works targeted on detecting flaws in visualizations <cit.>, mitigating cognitive biases <cit.>, and testing the robustness of visualizations <cit.>.
For example, Chen et al. <cit.> developed a linting tool to detect erroneous visual specifications.
McNutt et al. <cit.> introduced metamorphic testing for visualization, which measures how strongly a change to the input data can perturb a visualization outcome.
This current state of literature presents security issues that are being overlooked within the visualization community.
§.§ Machine Learning-Aided Visualizations
Existing works <cit.> provide comprehensive surveys on ML4VIS related to visual analytics, information visualization, and scientific visualization.
Based on these surveys, the following facts emphasize the importance of needing future studies regarding adversarial attacks on ML-aided visualizations.
Increase of research interest in ML4VIS.
Research on ML4VIS is rapidly growing.
Until 2015, only a few ML4VIS-related papers were published annually.
Since then, the number of ML4VIS-related publications radically increased <cit.>.
This trend indicates an increasing interest in, and need for, using ML for visualization.
Existence of broad and critical attack targets.
ML4VIS approach is employed for various tasks, including data processing for visualization, feature extraction, visualization generation, user behavior prediction, and interpretation support of visualized results.
Also, ML4VIS approaches are utilized in life-critical applications, such as 3D medical image analysis and reconstruction <cit.> as well as cancer genome analysis <cit.>.
Because visualization is a fundamental tool to communicate and analyze data, by attacking ML-aided visualizations, adversaries could significantly and negatively impact critical applications in areas such as business, politics, and medicine.
Potential exposure to immediate threats.
Many existing ML-aided visualizations are based on NN-based models, using multilayer perceptrons (MLPs), CNNs, RNNs, GNNs, autoencoders, and/or generative adversarial networks.
Adversarial attack methods have been developed for all of these NNs <cit.>.
Therefore, ML-aided visualizations might be inherently exposed to potential threats.
§ ATTACK SURFACE OF MACHINE LEARNING-AIDED VISUALIZATIONS
We analyze the attack surface (i.e., set of potential entry points for attacks) of ML-aided visualizations.
For this analysis, we use Wang et al.'s ML4VIS pipeline <cit.>, which is derived from an extensive survey on ML4VIS approaches, to review the potential processes, inputs, and outputs involved in ML-aided visualizations.
§.§ ML4VIS Pipeline
As shown in <ref>, Wang et al.'s ML4VIS pipeline consists of seven processes that can be individually aided by ML: Data Processing4VIS, Data-VIS Mapping, Insight Communication, Style Imitation, VIS Interaction, User Profiling, and VIS Reading.
Note that we consider that visualizations are ML-aided if they utilize ML for any of these processes.
ML does not have to be involved in all the processes.
For example, a user may utilize ML for data processing but still manually design the visual encodings to produce a visualization.
Below, we briefly describe each process.
Data Processing4VIS prepares data for the subsequent visualization processes.
A typical ML method used for this process is dimensionality reduction (DR).
This process can use raw data, existing visualizations, or both as input.
For example, DR can utilize the previously generated visualization result to perform visually consistent updates <cit.>.
It is worth noting that NN-based DR methods are becoming more actively developed for Data Processing4VIS <cit.>.
Data-VIS Mapping encodes processed data into visualizations by assigning visual marks and channels.
For this process, ML can recommend marks and channels suitable for a given dataset or accelerate the encoding process.
These enhancements are often achieved by training NNs using supervised learning on labeled training data (e.g., data tables and their suitable visual marks) <cit.>.
Insight Communication aims to produce visualizations that effectively convey insights found in the data.
Wang et al. <cit.> distinguish this process from Data-VIS mapping based on whether the insights are used as inputs.
Because insights are frequently represented with texts, NN-based natural language processing can be applied to this process <cit.>.
Style Imitation reflects a given visualization's style in other visualizations.
Visual styles include applied color palettes, chart decorations, etc.
Similar to Data-VIS Mapping, this process can be aided by NNs trained with supervised learning <cit.>.
VIS Interaction takes user actions as input and then accordingly updates visualizations by using any of the aforementioned processes.
For this process, ML can aid in interpreting users' intentions as well as refine the interaction results.
For example, NN-based classification models are used to improve the accuracy and speed of selecting visual elements (e.g., points in 2D scatterplots) <cit.>.
User Profiling is to understand users from their action histories.
ML is applied to predict user behaviors (e.g., next interactions) or user characteristics (e.g., cognitive abilities, analysis goals, and personalities) <cit.>.
With such predictions, visualizations can provide better interaction latency, suggestions for the next actions, and desirable marks and channels for users and their analysis.
VIS Reading is the process of reviewing visualizations and understanding the encoded information.
Through this process, the goal is to understand the visualized data, extract applied styles, and obtain insights.
ML can help users automate VIS Reading, instead of relying on manual human inspections.
For example, NNs can reconstruct input data corresponding to visualized results <cit.>.
As shown in <ref>, these processes are intertwined via the inputs and outputs that are produced by or used for the other processes.
In addition, as with other visualization pipelines <cit.>, there are likely iterative updates for each process based on newly added data and user actions.
We also want to note that the dataflow of this pipeline only reflects inputs and outputs at the inference phase.
For the training phase, pre-generated outputs can be used as training data for supervised learning (e.g., Data-VIS mapping can be trained with a set of pairs of data and its visualization) for each process.
§.§ Characterization of Attack Surfaces
We consider that all employed processes, inputs, and outputs in Wang et al.'s ML4VIS pipeline compose an attack surface given how adversaries may be able to manipulate inputs, corrupt ML/non-ML processes, and tamper with the output results.
Probable attack surfaces across different ML-aided visualizations reveal critical, unique characteristics in ML-aided visualizations:
C1. ML inputs and outputs specific to visualization.
Though existing ML-aided visualizations usually customize and utilize NNs developed from ML research <cit.>, the inputs used for the training and inference phases are often specific to the visualization (e.g., <cit.>).
Target outputs and loss functions to produce such visualization outputs can also be unique for ML-aided visualizations.
For example, ML-aided visualizations often adapt NNs to analyze visualizations based on their visual marks and channels rather than pixel-based images <cit.>.
Consequently, adversaries might design attack methods for ML-aided visualizations that are significantly different from those studied in the ML field, and existing vulnerability assessment and defense methods for NNs <cit.> might not be as effective for such attacks.
C2. Exposure of ML outputs (and inputs) to adversaries.
Visualizations are usually intended to be reviewed by users. The processes within the ML4VIS pipeline, such as Data-VIS Mapping, Insight Communication, and Style Imitation, all produce visualizations as their outputs.
Thus, when these processes employ ML, the visualizations are inference results that are abundant with information, which will inherently be observed by users as well as adversaries.
When the data generated by Data Processing4VIS is visualized without additional processes (e.g., directly visualizing DR results as 2D scatterplots <cit.>), Data Processing4VIS also suffers from the same issue.
Moreover, interactive visualizations often support details-on-demand <cit.>, enabling adversaries to access detailed information of raw or processed input data.
This attack surface characteristic can enable adversaries to gain further knowledge of ML models (i.e., contributing to the reconnaissance stage of cyberattacks <cit.>).
In contrast, C1 provides new opportunities to create attacks specific to visualization (i.e., contributing to the weaponization stage <cit.>).
C3. Long, complex pipeline with interdependent processes.
ML-aided visualizations may involve a large number of interdependent processes.
This interdependency introduces additional opportunities for adversaries to amplify a cascade of attacks throughout the pipeline.
For instance, adversaries may be able to create slight influences on Data Processing4VIS so that the subsequent processes will cause critical issues (see <ref> for a concrete example).
This cascade of attacks can be even amplified when feedback loops are involved <cit.>.
Furthermore, the ML4VIS approach inherits each respective attack surface that exists in ML <cit.> and visualization <cit.> pipelines, resulting in an even larger attack surface.
This attack surface indicates that ML-aided visualizations might be exposed to more potential threats and could be more complex to defend than cases that only use ML or visualization.
C4. Active involvement of users in the ML processes.
Users are actively involved with ML-aided visualizations. They interpret (i.e., VIS Reading) and interactively analyze (i.e., VIS Interaction and User Profiling) the visual content.
This user involvement provides adversaries opportunities to manipulate users to project attacks or target users as their attack objectives.
For example, by exploiting the information memorized in NNs, adversaries might reveal users' private information, such as their cognitive ability <cit.>.
C5. Threats on human judgment.
Human intervention is vital to avoid critical harm from actions instigated by ML outputs <cit.>.
For example, even if an autonomous car misclassifies a stop sign as a yield sign from an attack, a human driver could still hit the brakes <cit.>.
However, with ML4VIS, adversaries may intentionally attack visualizations that require human judgment, making them deceptive and thereby creating situations where human intervention is not as readily feasible (<ref> demonstrates such situations).
Although closely related to C4, we separately list this attack surface characteristic. Effective human intervention would be the final defense to protect users from potential harm.
§ CONCRETE ADVERSARIAL ATTACK EXAMPLES
To showcase possible threats while highlighting the uniqueness of the attack surface, we designed attacks on state-of-the-art ML-aided visualizations <cit.>.
A summary is presented in <ref>.
Across the attacks, we make different assumptions that cover different levels of the adversaries' capabilities (e.g., black-box attack vs. white-box attack).
The source code used for our attacks is available online <cit.>.
Our attacks are on two representative methods that focus on different parts of the ML4VIS pipeline: (1) parametric UMAP (PUMAP) <cit.> for Data Processing4VIS and (2) MultiVision <cit.> for Data-VIS Mapping.
These methods are selected for three reasons.
First, we aim to cover multiple different processes in the ML4VIS pipeline so we can highlight all of the identified attack surface characteristics (C1–5 in <ref>).
Second, we consider the research influence of these two processes.
Data Processing4VIS is the root process of the ML4VIS pipeline.
As a result, attacks on this process can trickle a cascade of attacks on the subsequent processes.
On the other hand, Data-VIS Mapping is one of the most frequently studied ML4VIS processes <cit.>.
Lastly, after our extensive search, we found only a limited portion of the published works provide publicly available, executable source codes and datasets <cit.>.
These source codes and datasets are necessary to precisely replicate their ML models.
Consequently, we cover only a few processes in the ML4VIS pipeline—designing attacks on the other processes remains as future work.
§.§ Attack Target 1: Parametric DR for Data Processing4VIS
We first provide the background of the attack target.
Nonlinear DR methods, such as t-SNE and UMAP, are commonly used for visual analysis of high-dimensional data <cit.>.
Using NNs, parametric extensions of nonlinear DR methods have been developed (e.g., parametric t-SNE <cit.> and PUMAP <cit.>).
Unlike its non-parametric counterpart, the parametric method produces a parametric mapping that projects data instances onto a low-dimensional space.
<ref> compares pipelines employed by UMAP and PUMAP.
Conventional UMAP first constructs a graph representation of the input high-dimensional data.
The subsequent step is an iterative optimization that does not involve NNs.
Using this optimization, UMAP layouts the instances onto a low-dimensional space (often in 2D) so that instances similar in the graph representation are spatially proximate (refer to <cit.> for details).
There is no direct, numerical connection between high-dimensional data and its low-dimensional representation.
Thus, UMAP does not provide parametric mapping.
In contrast, PUMAP feeds high-dimensional data to the MLP's input layer, learns the hidden layers' neuron weights for parametric mapping, and produces low-dimensional coordinates from the output layer.
PUMAP only uses the graph representation for a loss function during the training phase.
At the inference phase, PUMAP can directly project new inputs onto the low-dimensional space by using the trained neuron weights.
This parametric mapping ability is useful for visual analysis, such as when analyzing streaming data.
Since the projection can be performed without rerunning DR methods, parametric DR is computationally efficient and provides visually stable results (e.g., avoiding the arbitrary rotation caused for each update <cit.>).
Parametric DR is actively studied due to this ability and its convenience of NN-based optimizations <cit.>.
Also, similar to conventional DR methods, parametric DR can be widely used for visualizations.
Therefore, investigating the vulnerabilities of parametric DR methods is critical to ensure security for ML-aided visualizations.
§.§.§ Attack Manipulating One Attribute Value
Attack design. For our first attack, using PUMAP as an example, we demonstrate a rudimentary attack that can be performed even with limited adversarial capability.
This basic example provides evidence of how NN-based parametric DRs are susceptible to maliciously crafted inputs.
We assume that PUMAP is trained with the default setting as introduced by Sainburg et al. <cit.> (e.g., using an MLP of three 100-neuron hidden layers with rectified linear activation functions, or ReLUs) and that PUMAP outputs are visualized as scatterplots.
In addition, we assume the following: adversaries only have the capability to observe their inputs into the trained PUMAP and the corresponding scatterplots; their goal is to produce misleading scatterplots.
From the assumptions above, we designed a one-attribute attack: a black-box attack that manipulates one attribute value of an input instance.
The one-attribute attack is designed based on our observations that PUMAP tends to have highly influential attributes on a learned parametric mapping.
The one-attribute attack consists of two steps.
First, for any given input instance, we observe the change in its low-dimensional coordinate
before and after adding a small perturbation for each attribute.
We add a small perturbation to hide suspicious behaviors before performing the main attacks.
Although this step is an exhaustive search, we only need to feed (d+1) instances at minimum, where d is the number of attributes.
For the second step, we create an adversarial input by adding a relatively large perturbation to the most influential attribute.
By doing so, we can locate the instance in an arbitrary coordinate in the low-dimensional space.
Attack results and analyses. We provide a demonstrative example of the one-attribute attack, using the Wine dataset <cit.>, which consists of 178 instances, 13 attributes, and 3 cultivar labels.
Since PUMAP is an unsupervised learning method, we use data labels only for the visual encodings to better convey our analysis results.
We first normalize the dataset and then use it to train PUMAP.
<ref>-a shows the resulting 2D scatterplot, where circles correspond to the 178 instances projected by the trained PUMAP.
Based on the one-attribute attack, flavanoids is identified as the most influential attribute.
For presentation purposes, we select one benign input from Cultivar 3 and then craft an adversarial input by adding a value of 10 into the benign input's flavanoids.
We then project both the benign and adversarial inputs onto <ref>-a.
The adversarial input is positioned near Cultivar 1 instead of Cultivar 3.
<ref>-b shows a histogram of flavanoids for each cultivar, where are approximately placed from right to left in order.
From the histograms, we expect that flavanoids is useful for PUMAP to decide the low-dimensional coordinate of each instance (e.g., the higher value of flavanoids, the more characteristics of Cultivar 1).
However, as seen in <ref>-b, although the adversarial input has an extremely high value of flavanoids, the trained PUMAP does not show it as an outlier.
In <ref>-c, we further investigate the influence of the perturbation strength by incrementing flavanoids from 0 to 15 in step 1.
As the perturbation strength increases, the adversarial input moves from the top right and passes by all three cultivars.
By simply changing one attribute, this result highlights how adversaries can manipulate a user's visual perception of which cluster the input instance belongs (e.g., Cultivar 2 when adding 5) while not exposing any extreme or outlying characteristics of the adversarial input.
One can consider that the one-attribute attack is analogous to the one-pixel attack designed for image datasets <cit.>.
Although we performed the one-attribute attack on the dataset with 13 attributes, this example is mainly intended to provide a concise demonstration. In reality, the attack may occur for datasets with numerous attributes (e.g., 1,000 attributes).
In such situations, similar to finding the one abnormal pixel in an image, the identification of the one-attribute attack can be non-trivial when humans solely rely on manual or visual inspection of each attribute's data distribution.
We suspect that the observed phenomenon above—the projected coordinate can be controlled even by changing only one attribute value—occurs due to the “linearity” of modern NNs.
Modern NNs utilize close-to-linear activation functions (e.g., ReLU <cit.>) and linear multiplications of neuron inputs and weights.
Goodfellow et al. <cit.> also hypothesized this linearity characteristic can explain the success of adversarial examples on CNNs.
As a result, for NN-based parametric DR, some of the attributes could almost be linearly mapped onto one direction of low-dimensional coordinates, as shown in .
The supplementary material <cit.> exhibits close-to-linear mappings even for different datasets and activation functions.
This inherent issue in NN-based parametric DR should be addressed for more secure use.
§.§.§ Attack Using a Substitute Model
Attack design. When adversaries can observe various combinations of inputs and the corresponding outputs for parametric DR, they can construct a substitute model that can craft adversarial inputs flexibly and effectively.
Interactive visualization systems showing DR results have a high likelihood of this situation occurring.
These systems often provide training data information on demand to allow users to examine the DR results <cit.>.
Therefore, we assume that it is possible and reasonable that an adversary's goal is to construct a substitute model to the attack target model and generate deceptive visualizations.
<ref> shows our architecture for a substitute model attack.
The substitute model learns the parameters that produce a nearly identical low-dimensional representation to the one produced by the attack target model.
This learning is achieved by setting a loss function that minimizes the positional differences (e.g., the sum of pairwise Euclidean distances) between the instances in the low-dimensional representations that are produced by the attack target and substitute model.
To construct this substitute model, adversaries do not need to know how the attack target model's parametric mapping is learned (e.g., the use of PUMAP).
In addition, the substitute model's NN architecture and implementation do not have to be the same as the attack target model's.
Adversaries need to only have sufficient neurons/parameters to mimic the attack target's parametric mapping.
Since adversaries can access all information of the substitute model (e.g., gradients), they can efficiently craft adversarial inputs so that the inputs are projected at specific positions in the substitute model's low-dimensional representation.
Then, adversaries can feed the crafted adversarial inputs to the attack target model, where the inputs would be projected closely to the aimed positions.
Attack results and analyses. We demonstrate attacks on the same ML model as <ref> (i.e., PUMAP trained with the Wine dataset).
We construct our substitute model with an MLP of three 50-neuron hidden layers, using PyTorch.
Note: PUMAP employs TensorFlow and an MLP of three 100-neuron hidden layers.
We first showcase how to create adversarial inputs.
As shown in <ref>-a1, we can create an adversarial input that is projected onto a specific position (e.g., (-2.5, -2.5)) with the substitute model.
The adversarial inputs can be optimized by adjusting the input's attribute values based on their gradients.
The gradients are related to the error between the aimed and projected positions.
When we feed the crafted adversarial input to PUMAP, as shown in <ref>-a2, the adversarial input is placed close to the aimed position.
Due to the linearity of NNs, we expect that similar placement could be achieved without manipulating many attributes.
For example, two attributes would be enough given how this example's low-dimensional space is in 2D.
To validate this hypothesis, using one arbitrarily selected benign input (in <ref>, one instance from ), we perform this same creation procedure while only manipulating two attribute values of the benign input.
Examples of the produced adversarial inputs are shown in <ref>-b and c.
We can see that the adversarial inputs are placed closely to the aimed position in both the substitute model and PUMAP results.
This analysis exposes two critical vulnerabilities for PUMAP as well as for other NN-based parametric DR methods.
First, it can be easy for adversaries to construct a substitute model that has an almost identical parametric mapping to the attack target model's.
As such, a substitute model can effectively create adversarial inputs without necessarily involving the attack target model.
Second, as discussed in <ref>, NN-based parametric DR methods are likely to have strong linearity.
By exploiting this linearity characteristic, adversaries can aim to place malicious inputs to desired low-dimensional coordinates by manipulating only a few attributes.
In the supplementary material <cit.>, we exhibit that this same issue occurs even for three other cases: a PUMAP with a smaller NN, a parametric t-SNE, and different datasets (including a dataset for cancer detection as a life-critical example).
From these analysis insights, we can induce that visualizations can be easily obscured, misleading, or deceptive from crafted adversarial inputs.
For example, as shown in <ref>-a, adversaries can overwrite the existing visual cluster corresponding to Cultivar 2.
At the VIS Interaction step, if users aim to identify and understand the clusters from the tainted scatterplot and histograms <cit.>, they would fundamentally misunderstand the characteristics of the clusters.
In <ref>-b, a majority of the manipulated cluster (i.e., Cultivar 2) tends to have a higher value of flavanoids than Cultivar 1.
In contrast, flavanoids values for the original cluster for Cultivar 2 are mostly in-between Cultivar 1 and Cultivar 3 (see <ref>-b).
Also, if the Data-VIS Mapping process adjusts the visualization axes based on the value range, adversaries can craft an adversarial input that is projected to be an outlier, as shown in <ref>-c.
This radical change of axis scales influences the visual space used for the main region to be much smaller, and thereby affecting the VIS Reading process to disturb observing the main region.
Adversaries may utilize this situation to hide subsequent attacks influencing the main region.
§.§ Attack Target 2: Chart Recommendation for Data-VIS Mapping
Chart recommendation is a canonical use of ML for Data-VIS Mapping.
Given input data, chart recommendation systems offer suggestions and generate the appropriate charts.
By using large datasets of collected charts (e.g., <cit.>), researchers applied supervised learning to NNs and developed various chart recommendation systems <cit.>.
We use MultiVision <cit.> as a representative example of chart recommendation systems.
Given a data table, MultiVision is designed to rank and recommend multiple charts.
See <ref> for an overview of MultiVision's pipeline for chart recommendation.
The following procedure is applied to both the training and inference phases:
* given an input data table, generate sets of data columns for 2D chart generation (e.g., selecting two columns/attributes that would be represented as the scatterplot's x- and y-axes);
* extract various features for each column, such as a word embedding corresponding to the column name, data type (e.g., quantitative, nominal, and temporal), and statistics (e.g., the ratio of negative values, standard deviation, and skewness);
* score the sets of columns by feeding the corresponding features to an NN using bidirectional long short-term memory (LSTM);
* select an appropriate chart type (e.g., bar, line, or pie) for each set of columns by using another NN that jointly employs a bidirectional LSTM and an MLP;
* based on the score from the step 3 and the confidence score of the chart type selection from the step 4, rank the recommendation level of each output chart;
* visualize the top-recommended charts with Vega-Lite <cit.>.
While chart recommendation systems are useful to help reduce the burden of creating charts for analysts, adversarial attacks can make systems recommend meaningless or deceptive charts and hide important information from the analysts.
Existing NN-based chart recommendations <cit.> share similar approaches to MultiVision by extracting data table features as ML inputs and using RNN-related models to interpret the relationships among the set of data columns.
§.§.§ Knowledge-Based Attack
Attack design. We perform a simple, but critical attack on Wu et al.'s pre-trained MultiVision <cit.>.
Here, we assume that adversaries know or can guess some basic specifications related to the employed ML model.
This assumption fits into cases when adversaries can recognize that the recommendation system uses a similar approach to MultiVision.
Even if adversaries partially know the MultiVision procedure described above, they can strategically craft adversarial inputs.
For example, to induce misprediction, they can influence data features by manipulating column names or values.
They can even apply very subtle manipulations to make attacks difficult to notice without a close examination.
To guess the important features for the recommendation, we perform a trial-and-error process to find a subtle change that causes invalid recommendations.
Attack results and analyses. We show attack results using the Gapminder dataset <cit.>, which the original authors of MultiVision used for their user study.
We use this dataset to ensure that MultiVision works in the intended manner.
This dataset consists of 693 instances and 6 attributes with no missing values.
In the supplementary materials <cit.>, we provide the attack results for another dataset.
<ref>-a shows the top 2 recommended charts by MultiVision before our attack.
MultiVision suggests reviewing the relationships between life_expect (life expectancy) and other attributes (e.g., fertility) throughout the years.
From our trials, we decided to generate an adversarial input by replacing one randomly selected value of life_expect with a blank space (similar to a missing value).
This manipulation leads to the results shown in <ref>-b.
Though life_expect is still selected as an important attribute, the resultant charts now do not adequately convey the relationships between life_expect and other attributes.
This effect is likely because life_expect is categorized as a nominal attribute due to the blank space we added.
Nonetheless, a nominal version of life_expect is not helpful in understanding the data, and
MultiVision's susceptibility highlights how it should have selected other attributes for the recommendation charts.
§.§.§ Attack Referring to Gradients
Attack design. When the chart recommendation system is a white box, adversaries can craft adversarial inputs more efficiently.
MultiVision employs two different NNs to recommend column sets and chart types respectively.
We refer to the gradients for column features toward decreasing the scores corresponding to the highest-ranked chart.
Then, we manipulate an input data table to change the column features that have large gradient magnitudes.
Attack results and analyses. We craft an adversarial input for the Gapminder dataset.
As shown in <ref>-a1, MultiVision originally recommended
life_expect (x-axis), fertility (y-axis), and year (line width) as the column set; line chart as the chart type.
By accessing the NNs used in MultiVision, we extract the aforementioned gradients to disturb this line chart recommendation.
<ref> shows the gradients of the NNs and we list only three out of 96 features used by MultiVision.
We observe that (column index divided by the number of columns) has significantly large gradient magnitudes for both column set and chart type recommendations.
This value indicates that switching the column order introduces a high probability of changing the overall recommendation.
From this observation, we shuffle the order of columns to attack MultiVision, resulting in the recommendation shown in <ref>.
As expected, the recommended charts are radically different from this attack, and the chart no longer seems to be able to support analyzing the dataset.
Bidirectional LSTM considers both forward and backward orders of inputs and theoretically should mitigate the influence of the order of columns.
However, this attack results clearly highlight that MultiVision is still vulnerable to changes in the column order.
This result draws similar implications to various chart recommendation systems <cit.> as they also rely on RNN-related models.
Therefore, the possibility of this vulnerability may be common across these systems.
§.§.§ Attack Propagating across Multiple Processes
Attack design. This attack focuses on creating adversarial examples for visualizations that utilize NNs for multiple ML4VIS processes.
We consider a system that first performs Data Processing4VIS using PUMAP to derive 2D features from the raw data.
Then, the subsequent Data-VIS Mapping process employs MultiVision to select the appropriate chart to visualize the 2D features.
We assume that when attacking the target system, adversaries can add new instances to the raw data but cannot directly change a data table input used in MultiVision.
We also assume that adversaries are capable of accessing a pre-trained MultiVision model (e.g., the pre-trained model available from an online repository).
With these two capabilities, adversaries aim to induce misprediction for the chart recommendation.
Similar to <ref>, we can refer to the gradients of the pre-trained MultiVision models.
However, when considering the first capability of adversaries, we can only indirectly influence attribute values of the input data table.
For example, we cannot change the column order and column names.
When MultiVision extracts the column features, attribute values (in our case, 2D features from PUMAP) are converted into statistics that are interdependent from each other (e.g., the ratios of negative values and skewness).
Consequently, it is not trivial to associate these statistics' gradients with the data table's attributes.
Instead, we generate various 2D features with multiple different magnitudes of values and select one that changes the ranks of the chart type or column set scores.
Then, to find an adversarial instance that PUMAP transforms to the selected 2D feature, we build and exploit a substitute model of PUMAP by taking the same approach as <ref>.
Attack results and analyses. We demonstrate an attack that aims to change MultiVision's recommendation by influencing PUMAP's inference.
<ref>-a shows MultiVision's top-recommended chart for PUMAP's 2D feature outputs extracted from the Wine dataset.
The recommended chart reasonably visualizes the distribution of features.
Using the attack design described above, we create and add one adversarial input that is processed as negative 2D feature values with large magnitudes.
Feeding this input to the attack target generates the recommendation shown in <ref>-b.
In this case, the chart is less suitable for visualizing the distribution of 2D features when compared with <ref>-a.
This result highlights how MultiVision can drastically change the chart recommendation even with one additional outlier.
§ DISCUSSION: TOWARD SECURE MACHINE LEARNING-AIDED VISUALIZATION
We now discuss the implication of the attacks demonstrated in <ref> as well as highlight potential threats in the real world.
We then discuss the insights we gained by studying these concrete attacks.
Lastly, we provide a set of suggestions to help further investigate the vulnerabilities of ML-aided visualizations to move toward a more secure use.
§.§ Discussion on the Performed Attack Examples
Possible critical scenarios in the real world. Although we performed the attacks using demonstrative datasets and ML-aided visualizations, these attacks can be easily applied to various real-life scenarios.
Adversaries can directly influence NN models used in various domains.
For example, in the case where NN-based parametric DR is used to monitor a product in real time <cit.>, adversaries might notice that the product weight has an influence on the projection.
This attack would result similarly to the one-attribute attack example.
Then, they might physically control the weight to hide any other abnormal product status.
They may also intentionally cause problematic x, y-axes scaling in visualizations to conceal any subsequent attacks from analysts.
Another possible scenario is how these adversarial attacks can have political influence.
For example, parametric DR can be employed to analyze political opinions, such as sentiment analysis in social media posts <cit.>.
Adversaries could find that the projection can be controlled by a post's length and overwrite existing visual clusters by sending many posts of different lengths to hide important opinion differences.
Adversaries can also attack chart recommendations in practical scenarios.
When a company analyzes its web customer data using chart recommendation services, adversaries (disguising as customers) can make their own data with malicious values to cause problematic recommendations.
As another example, since diagnosing NN models themselves requires analyses from various aspects <cit.>, a company may utilize a chart recommendation service to efficiently detect adversarial attacks on their NNs (i.e., utilizing NN-based chart recommendation to analyze the other NNs).
If a chart recommendation service has vulnerabilities, adversaries can exploit them to hide their attacks on the company's NN model, using a similar approach to the attack propagating across multiple processes we performed.
Need of defense strategies for the identified vulnerabilities. We identified clear vulnerabilities of PUMAP and MultiVision.
These vulnerabilities are likely to exist in other NN-based parametric DR methods and chart recommendation systems.
The design and success of the performed attacks are highly related to the unique characteristics of the ML4VIS attack surface, as described in <ref>.
The attacks on PUMAP utilize ML inputs and outputs exposed by the visualization, exploit the linearity of NNs, and demonstrate cases that can potentially cause user misjudgments.
Developing defense strategies against adversaries requires more research effort.
For example, concealing the inputs and outputs may reduce the analysis capability or the interpretability of ML-aided visualizations, requiring developers to find a good balance between preserving security and supporting analyses.
The linearity of NNs can be mitigated by employing other activation functions such as the sigmoid.
However, even these functions have a region that is close to being linear.
As a result, these functions alone cannot significantly resolve the issues related to the linearity of NNs (see <cit.>).
Through these insights, we may need to design new activation functions to device secure parametric DR methods.
One of the root causes of the success of the attacks on MultiVision could be a mismatch between the employed models (i.e., bidirectional LSTMs) and visualization-specific inputs (i.e., data tables).
Though we want to capture the relationship among table columns, we do not want to place importance on the order of columns when applying ML to data tables.
Although the influence of the column order change can be avoided by shuffling the order during the training phase, a new NN architecture suitable for data tables should essentially be developed.
Also, the attack propagating across multiple ML processes exemplifies a significant security vulnerability for long, complex ML4VIS pipelines.
This attack highlights that not only do we need to investigate each of the ML4VIS processes, but also study how to defend an ML-aided visualization as a whole system.
§.§ Suggestions for Future Research
Enhance studies on attacks specific to ML-aided visualizations. To develop defense strategies, we first need to analyze possible attacks that exploit the vulnerabilities of ML-aided visualizations.
We expect that various attacks can be specific to ML-aided visualizations based on our observations on their attack surface (<ref>).
As a primary step toward a better understanding of ML4VIS' vulnerability, this work demonstrates several critical attacks.
As discussed in <ref>, only a limited work makes their source code, training and testing datasets, and pre-trained ML models available for the public <cit.>.
In contrast, in the ML field, they are often publicly available given how they are vital to efficiently and accurately study limitations of ML models <cit.>.
Thus, we encourage more visualization researchers to make efforts to provide full accounts of data and software publicly, and thereby enable further analyses of ML-aided visualizations.
Also, this immaturity of security studies in ML4VIS indicates our study's limitation.
The current characterization of the attack surface is based on an abstract-level analysis and may not reflect a variety of possible real attacks (since they are yet unseen).
Identify and inform potential vulnerabilities. Furthermore, we suggest that researchers routinely identify and publicly inform potential vulnerabilities in their ML-aided visualizations.
Authors have intimate knowledge of their methods (e.g., designs, algorithms, and datasets).
For example, adding discussions on the vulnerabilities in the respective publications would largely benefit risk assessments.
If we believe the developed visualizations provide significant value (e.g., having a large number of users and deriving highly valuable knowledge) <cit.>, these discussions are crucial in light of potential threats.
Although discussing the vulnerabilities involves a risk of distributing information about attack entries, unsolved critical issues should be reported before the research results are applied to practical applications.
In addition to these individual efforts, there is a growing need to develop methods that systematically evaluate vulnerabilities <cit.>, which would reduce the need for time-consuming manual inspections.
Investigate the role of human intervention. Lastly, we pose another two open questions: Should we utilize human intervention for detecting adversarial attacks? If so, how?
For pure ML models, when inputs and the corresponding ML predictions have a human-noticeable mismatch (e.g., when a stop sign is recognized as a yield sign), human intervention is useful to avoid harm caused by this mismatch.
However, for ML-aided visualizations, adversaries can aim to create deceptive visualized results to make human intervention unreliable or impossible.
One potential approach to still use human intervention is to provide multiple visualizations that exhibit different responses to the changes in input data.
For example, for the adversarial inputs crafted in <ref>, if we constantly show both the scatterplot and histogram (e.g., <ref>-a and b), a user would detect the beginning of attacks from the histogram.
Similarly, visual supports designed for the detection of biases in ML model outputs <cit.> may be useful.
However, approaches that rely on additional visualizations can increase the cognitive load of users, which would be especially problematic when visualizations are used for real-time monitoring purposes.
Further research is required to answer many of these fundamental questions and provide clear and comprehensive guidelines for developers and users.
§ CONCLUSION
We systematically reviewed the vulnerabilities of ML-aided visualizations.
We described the uniqueness of the attack surface of ML-aided visualizations and demonstrated security risks with five concrete examples of adversarial attacks.
Our results show that more research efforts are needed to address security aspects and advance defense strategies against these adversarial attacks.
This work also suggests several future research directions, such as investigating diverse adversarial attacks, systematically testing to evaluate the robustness of ML-aided visualizations, and evaluating the human role in defense against adversarial attacks.
In addition to pursuing these directions, we suggest future research to study the interrelationships between security and other closely related topics, such as privacy preservation and trust building <cit.> in ML-aided visualizations.
This work contributes as a stepping stone toward a holistic study for both maximizing benefits and minimizing risks in the use of ML for visualizations.
Supplementary information
We provide the supplementary materials online <cit.>.
The materials include the source code for the adversarial attack examples in <ref>; additional experiments related to the attack examples; full-size figures and tables of the attack results; and a list of publications that provide publicly available source code.
Acknowledgments
The authors wish to thank S. Sandra Bae and Daniel Jönsson for their assistance in improving the clarity of the paper's content.
This work has been supported in part by the Knut and Alice Wallenberg Foundation through grant KAW 2019.0024 and the ELLIIT environment for strategic research in Sweden.
|
http://arxiv.org/abs/2409.03046v1 | 20240904193120 | Oddballness: universal anomaly detection with language models | [
"Filip Graliński",
"Ryszard Staruch",
"Krzysztof Jurkiewicz"
] | cs.CL | [
"cs.CL"
] |
§ INTRODUCTION
Not all events with low probability are weird or oddball
when they happen.
For instance, the probability of a specific deal in the game of bridge is
extremely low (p_b = 1/5.36 × 10^28 for each deal).
So every time you are dealt cards in bridge, something unfathomable
happens? Of course not, actually an event of the very low probability p_b
must happen (with the probability 1!).
Another example, imagine two probability distributions:
* D_1 = {p_1 = 1/100, p_2 = 99/100},
* D_2 = {p_1 = 1/100, p_2 = 1/100, … p_100 = 1/100},
Intuitively, p_1 is much more oddball in D_1 than p_1 in D_2.
So, how to measure oddballness? We already know that a low probability is not
enough. Let us start with basic assumptions or axioms of oddballness.
Then we will define oddballness and show their practical usage for
anomaly detection when applied to probability distributions generated
by language models.
§ AXIOMS OF ODDBALLNESS
Let us assume a discrete probability distribution
D = (Ω, Pr), where Ω could be finite or countably infinite.
From now on, for simplicity, we define D just as a multiset of
probabilities:
D = {p_1, p_2, p_3, …} = {Pr(ω_i) : ω_i ∈Ω}.
We would like to define an oddballness measure[Measure
understood informally, not as defined in measure theory.] for an
outcome (elementary event) of
a given probability p_i within a distribution D:
ξ_D(p_i), ξ_D : D → [0,1]
Let us define some common-sense axioms for oddballness:
(O0) ξ_D(p_i) ∈ [0,1] – let us assume our measure is
from 0 to 1,
(O1) ξ_D(0) = 1 – if an impossible event happens, that's
pretty oddball!
(O2) for any distribution ξ_D(max{p_i}) = 0
the most likely outcome is not oddball at all,
(O3) p_i = p_j →ξ_D(p_i) = ξ_D(p_j) – all
we know is a distribution, hence two outcomes of the same probability
must have the same oddballness (within the same distribution),
(O4) p_i < p_j →ξ_D(p_i) ≥ξ_D(p_j), if
some outcome is less likely than another outcome, it cannot be less
oddball,
(O5) (continuity) for any distribution D = {p_1, p_2,p_3,…},
the function f(x) = ξ_D_x(x), where
D_x = {x, p_2 ×1-x/1-p_1, …, p_i ×1-x/1-p_1, …},
is continuous – if we change the probabilities a little bit, the
oddballness should not change much.
Note that (O2) implies the following two facts:
(F1) p_i > 0.5 →ξ_D(p_i) = 0, what is more likely
than 50% is not oddball at all,
(F2) for any distribution
D={p_1=1/N,…,p_N=1/N}, ξ_D(p_i) = 0 –
like in the bridge example.
§ ODDBALLNESS MEASURE
Let us a define a measure that fulfils (O0)-(O5). First, let us define
an auxiliary function:
x^+ = max(0, x)
(In other words, this is the ReLU activation function.)
Now let us assume a probability distribution
D = {p_1, p_2, p_3, …}. Let us define the following
oddballness measure:
ξ_D(p_i) = ∑_j g((p_j - p_i)^+)/∑_jg(p_j),
where g is any monotonic and continuous function
for which g(0)=0 and g(1)=1.
This measure satisfies the axioms (O0)-(O5).
From now on, we assume the identity function g(x)=x (though, for
instance x^2 or x^3 can be used as well); the oddballness measure
simplifies to:
ξ_D(p_i) = ∑_j (p_j - p_i)^+.
Let us check this measure for our distributions D_1 and D_2 given
as examples:
* ξ_D_1(p_1) = 0.98,
* ξ_D_1(p_2) = 0,
* ξ_D_2(p_i) = 0,
Consider another example: D_3 = {p_1=0.7, p_2=0.25, p_3=0.05},
then: ξ_D_3(p_1) = 0, ξ_D_3(p_2) = (0.7-0.25)^+ + (0.25-0.25)^+ + (0.05-0.25)^+ = 0.45,
ξ_D_3(p_3) = 0.85.
§ ODDBALLNESS AS A COMPLEMENT OF PROBABILITY OF PROBABILITY
Interestingly, oddballness can be interpreted as the complement of
the probability of a probability. By probability of a
probability p_i with respect to distribution D, or π_D(p_i),
we mean the probability that an event of probability p_i (not
necessarily ω_i) happens, with two extra assumptions:
* all probabilities smaller than p_i are also summed up,
* for each event ω_j with probability p_j > p_i, we
assume that it contains a “subevent” of probability p_i, hence
for each such event we sum p_i in.
It can be shown that
π_D(p_i) = 1 - ξ_D(p_i).
Intuitively, it makes sense: An event is oddball if the probability of
any event happening with similar probability is low. See
Figure <ref> for an illustration of the relation between
oddballness and probability of probability.
§ WHAT'S THE PRACTICAL USE?
The oddballness measure can be used to detect anomalies or errors, e.g. in a
text, assuming that we have a good language model. The language model will
give a probability distribution for any word in a text, some words
will be given higher probability (likelihood), some lower. We could mark words
with low probability as suspicious, but sometimes a low-probability
event must occur. For instance, the distribution for the gap in
the sentence:
I was born in …, a small village
should be (for a good language model[For this
example, an encoder-only model trained on the masked language task
should be assumed, for instance RoBERTa <cit.>.]) composed of a large number of names, each with a
rather low probability. Hence, like in the bridge example, we should
be not surprised to see a low-probability event. On the other hand, in the
sentence:
I was born in New …City
any word other than York is pretty unlikely (and
oddball). Therefore, rather than probability, the oddballness
should be used – words with oddballness exceeded some threshold
should be marked as suspicious, they are potential mistakes or
anomalies to be checked by humans. This way, we could devise a grammar
checking/proofreading system that is not trained or fine-tuned in a supervised
manner for the specific task of error detection.
The notion of oddballness might not be that useful in the world before
good language models, when usually only static discrete distributions
were assumed. Language models, even for the same text, can generate
vastly different types of probability distributions for each
position:
* sometimes the model is almost certain and almost all
probability will be assigned to one token,
* sometimes the model will predict a group of possible tokens
plus a long tail of less likely tokens,
* and sometimes the model is uncertain and the entropy is high.
In this paper, we focus on applying oddballness to grammatical error
detection (see Section <ref>). Some related (but not
the same) ideas were, however, proposed in the field of log anomaly
detection, as log sequences can be viewed as a modality similar to
natural language. LogBERT by <cit.> was trained on, in a
semi-supervised way, on log sequences. During anomaly detection some
tokens are masked and the probability distribution is obtained from
LogBERT for each of them. If the probability of the actual token is
not one of the K highest-likelihood tokens (K is a hyperparameter),
the token is considered anomalous (we will refer to this method as
topK later). LogGPT by <cit.> is a
similar idea, but applied to an decoder-only GPT-like architecture,
rather than an encoder-only Transformer, but still the same approach
of considering topK prediction is taken for the anomaly detection
itself, though the model is also fine-tuned specifically for anomaly
detection.
In general, there is a vast body of literature on anomaly or outlier
detection (see, for instance: <cit.>,
<cit.>, <cit.>). Oddballness is
different, as it considers only probabilities from a language model
(or any other statistical model) rather than any intrinsic feature of
events in question.
§ EXPERIMENTS WITH ERROR DETECTION
Table <ref> presents the results on the FCE
dataset <cit.>. In each case, using the
oddballness value as the threshold gives better results than using the
probability value. All thresholds were adjusted to maximize the F0.5
score on the development set. The maximum oddballness value from the
GPT2-XL and RoBERTa Large <cit.> models produced the best F0.5 score on the
test set. The result is slightly better than the BiLSTM model by <cit.>,
which was trained specifically to detect errors in texts, while GPT2-XL and RoBERTa
Large are models which were trained, in a self-supervised manner, on the masked token prediction
task. Although results based on the oddballness value are not competitive with
state-of-the-art solutions, it should be noted that the oddballness
technique does not involve any task-specific fine-tuning, except
for single-hyperparameter tuning. Also, the texts were written by CEFR B level
students, indicating that they may not be fully proficient in the
language. This could cause the language model to flag not fluent words
as incorrect and thus predict correct words as erroneous. This may also
explain why the smaller GPT2-small model outperforms the much larger
Mistral 7b model. This study demonstrates that the oddballness measure
can yield superior results compared to using probability values for
anomaly detection.
We also tested the Mistral 7b model for multilingual GED datasets used in
MultiGED-2023 Shared Task <cit.> using the
same approach as in experiments for the FCE dataset. The results in
Table <ref> show that for all languages the
oddballness method outperforms the probability method. We also tested
adding the following prompt before each sentence: "An example of a
grammatically correct text in any language that may be out of context:
<example>" to make probability distribution more smooth. The results
in Table <ref> show that this trick helps in almost
all experiments, but the improvements for the oddballness method are
greater compared to the probability method. Looking at the thresholds
we can also indicate that thresholds for the oddballness value are
more universal compared to the probability thresholds. We also tested the top-K approach. For multilingual GED task it does not provide better results than probability method in any language. The best
solutions for each dataset in the shared task are better compared to
oddballness value results, but again those solutions are trained to
predict incorrect tokens, whereas the oddballness method approach
focus more on predicting spans in texts that are most likely
errorneous without precisely labeling all incorrect tokens.
§ CONCLUSIONS
We have showed that using a new metric for anomalous events,
oddballness, is better than just considering low-likelihood tokens, at
least for grammatical error detection tasks. The method based on
oddballness yields worse results than state-of-the-art models heavily
fine-tuned for the task (<cit.>), but its great advantage is that it can be
used for any language model, without any fine-tuning. This technique
can be applied potentially to anomaly detection in sequences of any
type of data, assuming that a “language” model was pre-trained.
|
http://arxiv.org/abs/2409.03668v1 | 20240905162231 | A Fused Large Language Model for Predicting Startup Success | [
"Abdurahman Maarouf",
"Stefan Feuerriegel",
"Nicolas Pröllochs"
] | cs.LG | [
"cs.LG",
"cs.CL"
] |
Maarouf, Feuerriegel, and Pröllochs
A Fused Large Language Model for Predicting Startup Success
A Fused Large Language Model for Predicting Startup Success
Abdurahman Maarouf
LMU Munich & Munich Center for Machine Learning, [email protected],
Stefan Feuerriegel
LMU Munich & Munich Center for Machine Learning, [email protected],
Nicolas Pröllochs
Justus Liebig University Giessen, [email protected]
Investors are continuously seeking profitable investment opportunities in startups and, hence, for effective decision-making, need to predict a startup's probability of success. Nowadays, investors can use not only various fundamental information about a startup (e.g., the age of the startup, the number of founders, and the business sector) but also textual description of a startup's innovation and business model, which is widely available through online venture capital (VC) platforms such as Crunchbase. To support the decision-making of investors, we develop a machine learning approach with the aim of locating successful startups on VC platforms. Specifically, we develop, train, and evaluate a tailored, fused large language model to predict startup success. Thereby, we assess to what extent self-descriptions on VC platforms are predictive of startup success. Using 20,172 online profiles from Crunchbase, we find that our fused large language model can predict startup success, with textual self-descriptions being responsible for a significant part of the predictive power. Our work provides a decision support tool for investors to find profitable investment opportunities.
Machine Learning; Text Mining; Large Language Models; Deep Learning; Venture Capital
Wind turbine condition monitoring based on intra- and inter-farm federated learning
[
Accepted XXX. Received YYY; in original form ZZZ
===================================================================================
=10000
§ INTRODUCTION
Startups are ventures undertaken by entrepreneurs to seek, develop, and validate a business model <cit.>. For investors, startups represent investment opportunities with substantial financial risks yet often also with the prospect of large returns. Return on investment can easily exceed the initial investment by several orders of magnitude. As an example, the early-stage investment of Peter Thiel of 0.5 million USD into Facebook increased in value by more than 1 billion USD <cit.>. However, successful investments into startups are rare. Many startups cannot establish an economic business model and eventually fail <cit.>. Given the high failure rate among startups, investors are confronted with the non-trivial decision-making task of identifying startups that will eventually be successful <cit.>.
In order to find successful startups, investors can nowadays access information about startups through online platforms for venture capital (VC). A prominent example is Crunchbase, where startups can present their venture to investors through a detailed online profile. The online profiles can include both (i) fundamental variables, which provide structured information on founders, funding, and the business sector, and (ii) textual self-description. The latter is a free text that can be used to describe the startup in verbal form. Startups can use such online profiles to inform about the venture's prospects and attract the interest of venture capitalists and other potential investors <cit.>.
Prior literature has explored the potential of leveraging VC platform data (e.g., from Crunchbase) to predict startup success due to their comprehensive coverage <cit.>. However, prior studies have primarily assessed the predictive power of fundamental variables <cit.>, while mostly ignoring textual self-descriptions. Notable exceptions are <cit.> and <cit.>, who use textual self-descriptions for prediction. However, these works rely on traditional methods with manual feature engineering. We thus contribute to the existing literature stream with a novel, fused large language model to combine textual self-descriptions with fundamental variables for predicting startup success.
In this paper, we aim to predict startup success from online profiles of VC platforms. Thereby, we not only consider fundamental information (e.g., on founders, funding, and the business sector) that are captured in traditional scorecards but we also leverage the textual self-descriptions in online profiles on VC platforms. Here, we develop a machine learning approach to predict startup success from large-scale VC platforms. Machine learning allows us to assess how well startup success can be detected for new startups and thus support the decision-making of investors regarding whether to select a startup for funding. Specifically, we develop a tailored, fused machine learning approach for predicting startup success that considers both (structured) fundamental variables and (unstructured) textual self-descriptions. For this, we draw upon large language models as a recent innovation in machine learning <cit.>, which we carefully adapt to our research objective. A key benefit of large language models in practice is that they are pre-trained on a large amount of public data, because of which relatively small datasets are sufficient for fine-tuning and, thus, to generate accurate predictions. We then assess the relative contribution of textual self-descriptions to making predictions of startup success.
We evaluate our machine learning approach based on our fused large language models for predicting startup success using 20,172 online profiles from Crunchbase. Crunchbase is one of the largest online VC platforms hosting online profiles from startups. We find that only fundamental variables can alone make predictions with a balanced accuracy of 72.00 %. When additionally incorporating textual self-descriptions, the balanced accuracy increases to 74.33 %. The improvement is statistically significant, implying that textual self-descriptions are effective in predicting startup success. In addition, we estimate the financial performance of our machine learning approach by translating the performance improvement to investment portfolio improvement. The investment portfolio improvement amounts to a 40.61 percentage points increase in return on investment (ROI) when incorporating textual self-descriptions, highlighting the practical implications of our machine learning approach. We then evaluate the prediction performance across various events indicating startup success (, initial public offering, acquisition, and external funding). We further provide an extensive series of sensitivity analyses in which we compare the prediction performance across business sectors, startup age, and additional machine learning baselines, thereby confirming the robustness of our findings.
Our work contributes to business analytics in several ways. First, we provide empirical evidence on the operational value of online VC platforms for better investment decision-making. Thereby, we extend upon extensive research which has studied the benefits of online platforms for users while we focus on investors. Second, we contribute to a growing stream of machine learning in business analytics <cit.>. Here, we demonstrate an impactful application of machine learning in VC decision-making. Third, we show the operational value of large language models for research and practice. However, as we detail later, a naïve application of large language models would miss significant predictive power. Instead, our task requires a non-trivial adaptation through a fused large language model to our decision problem in order to make combined predictions from both fundamental information and texts. Fourth, we provide a flexible tool for investors to automate their screening process in VC decision-making.
The rest of this paper is structured as follows. <Ref> provides a background on venture capital and analytics for decision-making. In <Ref>, we develop our machine learning approach in the form of a tailored large language model. <Ref> presents our dataset with online profiles from Crunchbase, based on which we study the predictive power of textual self-descriptions (<Ref>). We then discuss implications for both business analytics practice and research (<Ref>), while <Ref> concludes.
§ RELATED WORK
§.§ Venture Capital
Startups are new entrepreneurial ventures founded to develop and validate a business model <cit.>. In practice, startups typically take an innovative idea and then build a scalable business model around it, with the intention of turning the startup into a high-growth, profitable company <cit.>. This process is largely dependent on external funding in order to cover costs for technology development, entering markets, or other upfront investments. Hence, events in which startups receive funding are commonly used in the literature to determine success <cit.>. Examples of such events are initial public offerings (, Airbnb, which went public in December 2020), acquisitions (, Slack and DeepMind, which were acquired by Salesforce and Google, respectively), and external funding (, SpaceX, which had several funding rounds after its series A funding in 2002). To capture startup success, prior literature has often studied either individual events such as initial public offerings <cit.> or a combination of events <cit.>.
Startups often represent lucrative investment opportunities with the prospect of large returns. As of 2023, more than 180 startups have turned into unicorns, that is, reached valuations of over USD 1 billion in less than five years <cit.>. Investing in such unicorns in an early stage can create a return multiple times larger than the initial investment. However, investments in startups are known to be of high risk. Startups that eventually fail leave the investor with little or no return. Hence, identifying successful startups at an early stage is difficult <cit.>.
Predicting which startups will turn out to be successful is inherently challenging, as startups represent new ventures for which little to no information on past performance is available. Thus, many investors make such predictions based on their gut feeling <cit.>. However, according to prior literature, there are several determinants that characterize successful startups. These can be loosely divided into characteristics regarding the business model, the founders, and funding. (1) The business model explains—to some degree—the survival of ventures <cit.>. In this regard, the business sector is also associated with startup success <cit.>. (2) Founders decide upon how a business is run and thus founder characteristics are important success factors <cit.>. For instance, startups are more likely to be on a path toward growth when their founders have attended higher education <cit.>.
(3) Funding is often a prerequisite to stimulate growth <cit.>. Hence, startup success is also associated with previous funding rounds <cit.>. In this regard, it is further beneficial for startups to have backing from a known venture capitalist <cit.>. Hence, to avoid relying on gut feeling or subjective bias when processing information about startups, machine learning presents a scalable, data-driven approach to predict startup success.
§.§ Predicting Startup Success
Prior works have developed data-driven approaches for predicting startup success. For instance, predictions can be made based on data from questionnaires, namely via so-called scorecards <cit.>. One study also draws upon data that are extracted from business plan competitions <cit.>. Yet, both questionnaires and business plan competitions involve data from manual reporting, which is often not available in VC markets. These data sources also tend to have limited coverage, and thus their usefulness in the daily decision-making of investors is limited. A different stream of literature predicts acquisitions as a specific event in startup lifecycles using proprietary databases (e.g., COMPUSTAT <cit.>, SDC Platinum <cit.>). However, such databases are typically restricted to specific events (, acquisitions) and, on top of that, have limited coverage as they provide only few variables (e.g., about founders and funding) but not textual descriptions. In contrast to that, textual descriptions about startups and their business model, innovation, or market structure may provide significant predictive power regarding which ventures will eventually be successful.
Recently, the possibility of using online data from VC platforms to predict startup success was explored <cit.>. Predicting startup success from VC platforms has a clear advantage in practice: online platforms for VC typically exhibit comprehensive coverage of startups and thus provide big data <cit.>. This is beneficial, as large-scale datasets are generally a prerequisite for making accurate inferences using machine learning. Studies predicting startup success based on questionnaires have often relied on samples with less than 200 observations (, 200 different startups), because of which the prediction models can not generalize well across startups and thus lack predictive power <cit.>. In contrast, online platforms for VC, such as Crunchbase, provide online profiles of more than 20,000 startups in the U. S alone.
Based on data from VC platforms, a variety of research questions have been studied. <cit.> evaluate the predictive ability of fundamental information at Crunchbase, but textual self-reports as predictors are ignored. In <cit.>, a hybrid machine learning approach is designed in which both fundamental information and judgment scores from crowds are combined, but again textual self-reports as predictors are again ignored. <cit.> use textual self-reports for prediction but rely on traditional, bag-of-words representations and not large language models. <cit.> also make predictions from self-reports but rely on a dictionary-based approach that requires manual feature engineering. Hence, the ability of large language models together with textual self-descriptions from VC platforms has yet to be explored and presents our contribution.
§.§ Machine Learning in Business Analytics
Machine learning can support managerial decision-making by predicting uncertain operational outcomes <cit.>. The adoption of machine learning in business analytics has been greatly fueled by the increasing availability of data and recent methodological advances <cit.>.
Promising examples include credit scoring <cit.>, financial risk assessment <cit.>, business failure prediction <cit.>, throughput prediction <cit.>, customer analytics <cit.>, recommendation systems <cit.>, and public sector operations <cit.>. However, the aforementioned works build upon structured data and not text.
Business analytics has also increasingly embraced machine learning that can make inferences from textual content <cit.>. As such, business analytics can mine user-generated content, e.g., from social media, in an automated and scalable manner <cit.>. For example, <cit.> enrich historical sales data with social media as a measure of customer perception towards products and evaluate how that combined data source is better in predicting future sales. However, existing methods in business analytics oftentimes build upon bag-of-words approaches where an unordered set of words is used as input <cit.> and where, as a result, the relationship, order, and hierarchical structure among words is lost. Hence, existing methods merely operate on word frequencies and not on semantic meaning. A potential remedy is given by large language models that model the ordered sequence of words and thus capture the semantics of running text; however, the operational value of large language models has so far been largely unclear. Moreover, we are not aware of previous work that uses large language models for startup prediction to support investment decisions.
§ EMPIRICAL MODEL
In this section, we first formulate our research question of whether textual self-descriptions from VC platforms predict startup success. To answer this, we then describe our machine learning approach based on a tailored, fused large language model.
§.§ Research Question
In this study, we build a machine learning approach where we leverage information provided by startups on VC platforms in order to predict startup success. Information on online VC platforms such as Crunchbase can loosely be grouped into two categories (which may potentially complement each other). (1) VC platforms provide structured information on a startup's fundamentals. Examples of such fundamental variables are the age of the startup, the number of founders, or information about past funding success. Fundamental variables are typically entered on VC platforms in a structured format and thus with little degree of customization. (2) Startups can additionally provide a textual self-description. The textual self-description can be used to describe the business model, a startup's innovation, or the market structure. Textual self-descriptions have become mandatory on VC platforms such as Crunchbase but the actual content is at the startup's discretion.
In this study, we examine whether large language models can be successfully leveraged by investors to predict startup success from textual self-descriptions on VC. There are several factors that lead us to expect that textual self-descriptions are predictive. In particular, startups can use the textual self-description on VC platforms to present information on a startup's business model, innovation, or market structure. An example is BetterTrainers has a new type of business model that protects all sessions booked through the site with premium insurance coverage where a business model is explained, or FaceTec's patented, industry-leading 3D Face Authentication software anchors digital identity with 3D FaceMaps where a startup details how to make use of certain technologies. Besides the actual information captured in the text, latent factors such as the tone of the text (e.g., a positive sentiment) may also implicitly signal success.
As seen by the previous examples, traditional approaches from machine learning for making predictions from online descriptions (e.g., bag-of-words) will likely struggle with the complexity of the underlying task since traditional approaches only rely upon word frequencies and do not provide a principled approach to infer semantic meanings. To this end, the previous examples motivate the use of large language models in our study as a principled, data-driven approach to capture semantic meanings in text and thus to predict startup success.
§.§ Fused Machine Learning Approach
In the following, we present our fused machine learning approach in order to predict startup success. Let i = 1, …, n denote the startups. Specifically, we develop a tailored, fused large language model as shown in <Ref>. In our machine learning approach, both sets of variables – i.e., fundamental variables (FV) and textual self-descriptions (TSD) – are taken into account but in different ways. (1) The fundamental variables come in a structured format x_i^FV∈ℝ^m_FV and are thus directly passed on to the final machine learning classifier. (2) The textual self-descriptions are first mapped onto document embeddings x_i^TSD∈ℝ^m_TSD and then passed on to the final ML classifier. Let us denote the final ML classifier by ϕ_θ : ℝ^m_FV + m_TSD→{ 0, 1} with some parameters θ. Here, the output y_i ∈{ 0, 1 } indicates whether a startup i = 1, … will be successful (y_i=1) or not (y_i=0). Crucially, a custom architecture for our large language model is necessary in order to fuse both fundamental variables x_i^FV and document embeddings x_i^TSD to make predictions. For comparison, we later evaluate a naïve large language model without the “fused” structure which uses only x_i^TSD for prediction.
In our machine learning approach, we take the textual self-description of the startup and use a large language model <cit.> as an embedding generator to map text onto a document embedding. The document embedding is then concatenated with the fundamental variables and the resulting concatenated vector is then used as input to the classifier. Large language models represent state-of-the-art techniques for modeling natural language in machine learning <cit.>. A prominent example is BERT, which has been found effective in capturing complex dependencies such as semantics in textual content <cit.>. In the following, we detail how we fuse data in our large language model.
§.§.§ Large Language Model (BERT) as embedding generator.
Large language models, often also called transformers, are large-scale deep neural networks that are carefully designed to process running text <cit.>.
The practical benefit of large language models is that they leverage the strength of large-scale deep neural networks and are thus able to capture context, semantics, structure, and meaning <cit.>.
A prominent large language model is BERT <cit.>. BERT was developed by Google AI and stands for bidirectional encoder representations from transformer. BERT has been successful in solving various machine learning tasks for natural language.
In particular, BERT has been shown to be superior to alternative document representations such as bag-of-words. Methodologically,
Language models such as BERT map running text onto a new representation called embedding <cit.>. Formally, each textual input is first transformed into a sequence of tokens [[CLS], w_1, w_2 …] based on the predefined vocabulary of BERT, with [CLS] being used at the beginning of each sequence. Hence, for each textual input, BERT receives a sequence of individual tokens as input where the tokens are represented by vectors [CLS], w_1, w_2, …∈ℝ^w. The vectors are not “one-hot-encoded” as traditionally done in simpler models. Instead, BERT uses an embedding layer to convert the sequence of tokens into dense vector representations e_[CLS], e_1, e_2, …∈ℝ^e that are lower-dimensional (i.e., the dimensionality e is much smaller than the dimensionality of a typical one-hot encoding, which is computationally more desirable). Next, the token embeddings are fed into a transformer encoder. A transformer encoder is a neural network designed for sequential data that processes the entire input sequence [e_[CLS], e_1, e_2, …] simultaneously, rather than sequentially. It relies on two key mechanisms: (a) positional encodings, which add information about the position of each token to retain the order; and (b) an attention mechanism, which allows the model to weigh the importance of different tokens dynamically. Thereby, a transformer encoder employs a complex, non-linear process to determine how tokens influence one another. The output of the transformer encoder consists of transformed vectors (embeddings) [o_[CLS], o_1, o_2, …], which can then be used for various tasks. Specifically, the embedding for the [CLS] token (i.e., o_[CLS]) can be used for classification tasks as it aggregates the meaning of the entire input sequence.
During training, BERT utilizes a technique called masked language modeling, where some of the input tokens are randomly masked (i.e., omitted) for self-supervised learning. The objective of BERT during training is to correctly predict these masked tokens. Thereby, BERT updates its internal weights and learns a deep understanding of language context and relationships between words. Due to self-supervised learning, large-scale textual databases (e.g., Wikipedia) can be used for training but without the need for explicitly annotated labels. A schematic visualization is in <Ref>.
Our implementation is as follows. We use the so-called basic, uncased version of BERT <cit.>, comprised of 12 layers with ∼ 110 million trainable parameters. It generates embeddings o_[CLS], o_1, o_2, …∈ℝ^n of dimension n = 768. BERT is shipped as a pre-trained network where parameters have already been learned from open-source content. Before applying BERT, all text is lowercased and tokenized using the WordPiece algorithm, which maps the text onto subwords or unigrams from the WordPiece vocabulary. Afterward, the text is passed through the pre-trained BERT network. The embedding of the [CLS] token (i.e., o_[CLS]) is then used as the document embedding x^TSD, representing the textual self-description for the downstream classification. Hence, our document embedding x^TSD is of dimension m_TSD=786.
§.§.§ Baseline text representations.
We compare our machine learning approach based on a tailored, fused large language model against three traditional text representations. All of the baselines are again concatenated to the fundamental variables and are then fed into a final machine learning classifier. The final machine learning classifier is again subject to rigorous hyperparameter tuning (see later for details) for fair comparison. Therefore, all performance gains from our approach must be attributed to that large language models are better at handling textual content.
* Manual feature extraction: The first text-based baseline is based on manual feature extraction. Specifically, we manually craft features that capture textual information (e.g., the length, the mean word length, and the number of geographic references). We follow prior literature and extract the same features as in <cit.>. This results in a text representation of dimension 10. We refer readers to <cit.> for a full list of the features.
* Bag-of-words: We compare our machine learning approach against the traditional approach of a bag-of-words baseline. We refer readers to <cit.> for an introduction. We implement bag-of-words as follows. We first tokenize the words of the textual self-description to unigrams, remove stop words, lemmatize, and apply a tf-idf weighting. Furthermore, we remove words with more than 95 % sparsity. The bag-of-words baseline results in a 98-dimensional text representation.
* GloVe: GloVe <cit.> transforms words into vectors (so-called word embeddings) based on their co-occurrence in a text corpus. Thereby, the vectors capture semantic relationships, offering a rich set of features for text analysis. We use the GloVe model pre-trained on Wikipedia (i.e., ) to extract the 50-dimensional word embeddings. We average the individual word embeddings to get the final text representation used for the downstream classification.
§.§.§ Final Machine Learning Classifier.
The final machine learning classifier ϕ_θ(·) with parameters θ is responsible for the “fused” approach and, for this, receives the concatenated vector of (1) fundamental variables and (2) document embeddings. The output is then the predicted probability of startup success.
We thus optimize
θ^∗ = _θ𝔼[ ℒ(ϕ_θ([x^FV, x^TSV]), y) ] ,
where ℒ is a convex loss (e.g., mean squared error) and where [ · , ·] is the concatenation operator.
We experiment with different classifiers that are designed to handle both linear and non-linear relationships in the data. Specifically, we make use of the following classifiers:
* Logistic regression: The logistic regression is a simple linear model used for binary classification. It models the probability of a binary outcome using the logit function to map predictions to probabilities. The logistic model expresses the log-odds of the outcome variable as a linear combination of the independent variables, formalized as log(p/1-p)=θ^T x, where p is the probability of the outcome of interest.
* Elastic net: The elastic net extends the logistic regression in which overfitting is prevented through regularization <cit.>. Specifically, regularization is given by a combination of both an L1- and an L2-norm penalty (analogous to lasso and ridge methods, respectively). This thus shrinks some coefficients closer to zero, and, as a result, the classifier generalizes better to out-of-sample observations. Formally, let ϕ_θ(x) = θ^T x. Then the regularized loss ℒ_reg is formalized as ℒ_reg(x,y) = ℒ(ϕ_θ(x), y) + λ ( 1-α/2θ^2_2 + α θ_1 ) with hyperparameters α and λ. The elastic net is especially beneficial in tasks where predictors are subject to linear dependence <cit.>. For reasons of completeness, we also experimented with lasso and ridge methods <cit.>, but with qualitatively similar results (and thus omitted the results for brevity).
* Random forest: The random forest is an ensemble learning classifier where predictions are made from a multitude of decision trees <cit.>. Each decision tree is fit to a random subset of the data, while the final prediction is then made by taking the majority vote over the individual decision trees. As a result, the classifier is less prone to overfitting, has a better prediction performance than a single decision tree, and is effective in handling non-linear relationships <cit.>.
* Neural net: The neural network is a flexible model for classification by using layers of nodes that transform the input through non-linear activation functions. The output layer uses a sigmoid to produce class probabilities. The loss is regularized by a combination of both the L1- and L2-norm penalty to prevent overfitting. Neural networks excel due to their flexibility in handling non-linear
relationships.
§.§ Performance Metrics
To evaluate the performance of machine learning in predicting startup success, we report different performance metrics: balanced accuracy, precision, recall, F_1-score, area under the curve from the precision-recall curve (AUCPR), and area under the curve from the receiver operating characteristics (AUROC). However, due to its inherent benefit of considering the complete distribution of discrimination thresholds <cit.>, we primarily focus on the AUROC. We remind that we follow common practice in machine learning and evaluate the performance on out-of-sample observations, that is, startups that have not been part of the training set but from the test set so that they are thus unseen to the machine learning classifiers.
Furthermore, we calculate the return on investment (ROI) for the machine learning-selected portfolios. Let 𝑇𝑃 denote the number of true positives (, cases where startup success was predicted correctly) and 𝐹𝑃 the number of false positives (, cases where the model predicted success despite that the startup was actually not successful). We then calculate the net investment gain for correctly predicted successful startups by taking the sum of the final investment values 𝐹𝐼𝑉_𝑇𝑃 (, the startup valuations after a success event) minus the sum of the total cost of investment (IC).
Note that, since data on startup valuations and costs of investment is not always publicly available, we approximate these variables using constants determined based on historical mean values for startups listed on Crunchbase.[In our Crunchbase dataset (see <Ref>), the valuation of a startup after a success event (, initial public offerings, funding, acquisitions) is, on average, $184.47 million. The pre-success valuation (, the last valuation in previous funding rounds) is, on average, $12.19 million. Hence, startups have, on average, a 15.13 times higher valuation if they become successful.]
For companies that were non-successful,
we conservatively assign a final investment value of zero (𝐹𝐼𝑉_𝐹𝑃). The ROI for the portfolio is then calculated by taking the net investment gain divided by the total cost of investment. Formally,
ROI = 𝑇𝑃×𝐹𝐼𝑉_𝑇𝑃 + 𝐹𝑃×𝐹𝐼𝑉_𝐹𝑃 - (𝑇𝑃 + 𝐹𝑃) ×𝐼𝐶/(𝑇𝑃 + 𝐹𝑃) ×𝐼𝐶× 100,
where the total investment costs (𝐼𝐶) comprise (i) the investor's investment into equity of the startup; and
(ii) we consider 10% of the last valuation as additional screening and monitoring costs for the investor.
§.§ Implementation Details
Our implementation follows best practice in machine learning <cit.>. For this, we split the data into a training set and a test set. The former is used for training the model; the latter is used to evaluate the out-of-sample performance. In our work, we randomly assign 80 % of the data samples to the training set and 20 % to the test set. Due to class imbalances, common procedures in machine learning are followed; that is, we apply a stratified split <cit.>, so that both sets have the same ratio of successful vs. non-successful startups. To ensure robustness in our evaluation, we repeat the random split five times and report the mean and standard deviation of the performance metrics on the test set across the five iterations. This allows us to quantify how well machine learning can predict success for ventures that were not seen during training.
Hyperparameter tuning is conducted using 10-fold cross-validation. Specifically, hyperparameters are tuned via randomized grid search (20 iterations), using the tuning grid in <Ref>. The best hyperparameters are selected based on the cross-validated AUROC score. Note that the hyperparameter tuning is done separately for the different input variables, that is, for when training our machine learning approach using fundamental variables (FV), textual self-descriptions (TSD), or a combination of both (FV + TSD).
§ EMPIRICAL SETTING
§.§ Online Profiles on Crunchbase
Our evaluations are based on data from Crunchbase.[<http://www.crunchbase.com>] Crunchbase is a leading online VC platform that connects startups and investors. For this, Crunchbase allows startups to create online profiles where they can present information on their business, founders, and funding. Edits can be made by verified employees to ensure that correct information is entered.
We collected online profiles (, both fundamental variables and textual self-descriptions) from all US-based startups that were listed on Crunchbase. Furthermore, we excluded startups that went public and that have already received series C funding (or a later funding round). The latter is important as our objective is to make predictions for companies that fall under the definition of a startup.
§.§ Definition of Startup Success
In our study, we predict startup success with regard to different events that are conventionally used as indicators of success <cit.>, namely, whether startups had an initial public offering, have been acquired, or secured external funding. If any of these events occurred, we treat the startup as successful. Otherwise, a startup is treated as non-successful. If not stated otherwise, these labels are used to evaluate our machine learning approach. As part of our sensitivity analyses, we later continue to compare how the prediction performance varies across these events – , initial public offerings, funding, and acquisitions.
§.§ Time-Aware Prediction and Evaluation Framework
We implemented a time-aware approach that is common in time-series forecasting <cit.>. Recall that we aim to evaluate whether we can predict if startups will become successful in the future. Consequently, we processed our data as follows. We restricted our analysis to startups that were founded between 2013 and 2015, based on which we predicted their future development until the end of 2020. We obtained raw access to the Crunchbase database with historical data. This allowed us to collect information from online profiles that were available in 2015. In particular, we discarded information that was added or updated later, so that we only considered data as presented on Crunchbase at the end of 2015.
We then predict whether an event indicating startup success has occurred during the years 2016 through 2020, that is, we make forecasts whether startups were successful over a time horizon of five years. The forecast horizon is set analogous to earlier statistics reporting upon a high failure rate among startups in their early stage <cit.>, so that a 5-year-ahead forecast horizon should be sufficient to distinguish successful from non-successful startups. Our choice of events representing startup success is listed in the previous section.
§.§ Variable Descriptions
Our fused machine learning approach makes use of an extensive set of variables from Crunchbase (see <Ref>). The outcome variable (, the variable to predict) is binary, denoting whether a startup was successful (=1; otherwise =0).
The predictors (, the variables that are fed into our machine learning classifiers) consist of the following: (structured) fundamental variables and (unstructured) textual self-descriptions. (1) The fundamental variables (FV) describe different characteristics of startups such as their age or the industries in which they operate (see <Ref>). Note that we use the industries as reported on Crunchbase, which is based on a highly granular scheme (e.g., an Internet-of-Things company may be assigned simultaneously to “artificial intelligence”, “industrial automation”, etc.). Social media activity has been found to be related to startup success <cit.>, and, analogously, we include information about whether startups are on social media (, whether they have a Twitter/X or LinkedIn profile). Furthermore, we collect information about the characteristics of the founders (, the number of university degrees). We follow previous literature <cit.> by controlling for the presence of known investors that have a profile on Crunchbase themselves. We also include information on previous funding rounds but, since we use a historical view on Crunchbase data, we only access information up to our time point when making the predictions so that there is no lookahead bias (i.e., we discard funding rounds that occur during the forecast horizon to ensure a time-aware evaluation framework).[We also considered a less sparse encoding of business sectors (rather than fine-grained industries as in the Crunchbase coding scheme) but we discarded this. The reason is seen in our later analysis, where there is little variability across business sectors and, thus, sector information has only little predictive power. Furthermore, we also considered additional information about founders (e.g., their number of current and past jobs) and prominence (e.g., site visitors, growth in site visitors, number of media articles) but found that these are too sparse to make a meaningful addition to our predictions.] (2) The latter, , textual self-descriptions (TSD), are encoded via the large language model (BERT). This yields document embeddings, which are then used as input to the machine learning classifier.
§.§ Descriptive Statistics
The above filtering yields a final dataset with 20172 startups. Descriptive statistics on startups for our dataset are as follows (see <Ref>). Out of all startups, 7252 (, 35.94 %) startups have been labeled as successful, whereas 12920 (, 64.06 %) have been labeled as being non-successful. Frequent events indicating success are founding rounds (, 32.45 % of all startups), followed by acquisitions (3.10 %) and initial public offerings (0.40 %). For startups in our dataset, the average age is 18 months. Startups tend to be more successful if they provide a link to their social media profiles. In general, startups are more frequently founded by males (, 1.58 male founders per startup) than by females (, 0.25 female founders per startup). Successful startups have, on average, more founders (mean: 1.98) than non-successful ones (mean: 1.69). Furthermore, founders with university degrees often have more successful startups. On average, startups have previously raised funding totaling to USD 3.156 million from 2.07 investors. Unsurprisingly, startups that are eventually labeled as successful have received more funding (mean: USD 4.81 million) and are backed by more investors (mean: 3.28). On average, successful startups provide a shorter textual self-description (mean: 613.32 characters) than non-successful ones (mean: 694.04 characters). <Ref> lists two example textual self-descriptions, one for each class.
Startups listed on Crunchbase operate in a variety of business sectors (see <Ref>). The majority of startups in our data operate in the area of Information Technology and Communication Services. In contrast, startups in the Energy, Utilities, and Materials sectors are less common. Note that startups can be assigned to multiple business sectors. Across the business sectors, we also see variation in the success rate of startups. For instance, startups in some sectors such as Utilities have a high success rate (50.79 %), while the success rate for Communication Services amounts to only 32.71 %.
§ EMPIRICAL FINDINGS
§.§ Comparison of Our Large Language Model Against the Baselines
We now evaluate the performance of our fused large language model in predicting startup success (see <Ref>). We use the neural net as the best-performing final machine learning classifier within our fused large language model for this evaluation. For a detailed comparison across different final machine learning classifiers, we refer to <Ref> of the Supplementary Materials. We further remind that we follow common practice in machine learning and evaluate the performance on out-of-sample observations, that is, startups that have not been part of the training set and are thus unseen to the machine learning classifiers. In addition, we repeat the random splitting of our train and test sets five times and thus report the mean and standard deviation of our evaluation metrics across the five test sets.[We also perform an out-of-time evaluation in <Ref> of the Supplementary Materials, where we evaluate the performance in predicting the success of startups that originate from a period outside the one used for training. Overall, the performance remains robust but, due to the task formalization, has a smaller sample size and thus tends to have a larger variance.]
Overall, we find that our tailored, fused large languages model is considerably more accurate than a majority vote (i.e., a model that always predicts the majority class label) and a random vote (i.e., a model that predicts class labels randomly based on the distribution of the class labels in the training data). Both approaches represent naïve baselines from machine learning, which are outperformed by a large margin. Our tailored, fused large language model using both fundamentals and textual self-descriptions yields an AUROC of 82.78 %, a balanced accuracy of 74.33 %, and a 7.23-fold ROI. Altogether, this demonstrates the efficacy of machine learning based on our fused large language model in predicting startup success from VC platforms.
We further compare our fused large language model against common baseline text representations. Specifically, we draw upon manual feature extraction from textual data <cit.>, GloVe document embeddings <cit.>, and a bag-of-words approach <cit.>. The baseline text representations have a known limitation in that they struggle with capturing long-term dependencies across language, because of which semantics are ignored to a large extent. As expected, we find that, compared to our fused large language model, the baselines are inferior. For example, the best baseline in terms of AUROC (GloVe) has a 6.41-fold ROI, while our custom, fused large language model has a 7.23-fold ROI, which is a plus of 82.19 percentage points. Note that both our fused, large language model and the bag-of-words baseline have access to the same data, that is, fundamental variables and textual self-descriptions. Hence, all performance improvements must solely be attributed to the better model architecture of our fused large language model.
In addition, we compare using fundamental variables only vs. a combination of fundamental variables and the textual self-description. Here, including textual self-description using the baseline text representations increases the AUROC by 0.62 percentage points (manual feature extraction <cit.>), 1.29 percentage points (GloVe <cit.>), and 0.5 percentage points (bag-of-words). Including textual self-descriptions within our fused large language model performs best and increases the AUROC by 2.18 percentage points. As such, we yield consistent evidence that demonstrates the operational value of textual self-descriptions: a significant improvement in prediction performance is achieved by including textual self-descriptions. Altogether, this highlights the importance of textual self-descriptions for successful investing decisions.
§.§ Sensitivity to Final Machine Learning Classifier
We now provide a sensitivity analysis where we vary the final machine learning classifier (i.e., elastic net, random forest, neural network) and within our fused large language model.[We also tested the performance of varying the input variables (i.e., FV, TSD, FV + TSD) within our final machine learning classifiers. We report the results in <Ref>.] Thereby, we confirm that our choice of a neural network for the final machine learning classifier in our fused large language model is superior. The results are reported in <Ref>. By varying the final machine learning classifier in our fused large language model using both fundamentals and textual self-descriptions, we yield an AUROC of 81.76 % (logistic regression), 82.51 % (elastic net), 81.75 % (random forest), and 82.78 % (neural network). We observe a similar pattern with regard to the other performance metrics. For instance, the neural network achieves a 7.23-fold ROI. Hence, the best overall AUROC is obtained by the neural network, followed by the elastic net, logistic regression, and the random forest. Altogether, this demonstrates the efficacy of our fused large language model based on a neural network in predicting startup success from VC platforms.
§.§ Sensitivity to Fine-Tuning Our Large Language Model
We now experiment with fine-tuning our fused large language model. Specifically, we add a classification head to the [CLS] embedding (i.e., o_[CLS]) for classifying startup success on top of BERT. We concatenate the fundamental variables to the [CLS] embedding before feeding them to the classification head. This way, both the classification head is trained and BERT is fine-tuned simultaneously based on the task of predicting startup success. Hence, the difference to no fine-tuning lies in the fact that we now allow for parameters in BERT to be fine-tuned for the task of classification.
We fine-tuned BERT using the transformers framework from Huggingface <cit.>. We use a training batch size of 32 and a learning rate of 4 · 10^-5. We freeze the first 8 layers as they capture language patterns and an understanding of text in general. We update the weights of BERT and the classification head using the AdamW optimizer <cit.>. We fine-tune for a maximum number of 5 epochs. We validate the performance every 50 steps. We performed early stopping when the loss on the hold-out set does not decrease for more than 5 steps.
<Ref> reports the results. Overall, we do not observe any performance improvement when fine-tuning our fused-large language model. The performance of fine-tuning is comparable to that of our fused large language model with a neural net classifier. Specifically, fine-tuning our fused large language model yields a 0.12 percentage point decrease in accuracy, 0.1 percentage point decrease in AUROC, and 10.64 percentage point decrease in ROI, as compared to our not fine-tuned fused large language model. Hence, the pre-trained embeddings of BERT already capture textual information relevant to the task of success prediction. Our findings underline an important aspect of machine learning: Increasing the number of trainable (or fine-tunable) parameters does not necessarily guarantee performance improvements. We discuss the finding later in <Ref>.
§.§ Prediction Performance Across Business Sectors
We now perform a sensitivity analysis in which we compare how the prediction performance from our fused large language model varies across business sectors (see <Ref>). In general, startup activities and outcomes vary significantly across business sectors <cit.>. For example, the sector of Information Technology typically features better data coverage and a higher number of startups, potentially leading to better predictability. Motivated by these differences, we perform a sensitivity analysis to provide insights into the extent to which textual self-descriptions contribute to performance gains across sectors. Here, we focus our evaluations on the implementation based on a neural network, , the best-performing classifier. We compare high-level business sectors for easier interpretability (this is different from the fine-grained but sparse industries that are reported on Crunchbase and that we use as predictors). Overall, we find that the prediction performance is fairly robust. The AUROC varies from 72.04 % (Energy) to 85.39 % (Industrials). This thus confirms that our fused large language model allows for accurate predictions across all business sectors. Furthermore, including textual self-descriptions improves the prediction performance across most business sectors. The only exceptions are the four sectors with the smallest number of data points (Energy, Materials, Real Estate, and Utilities). For these sectors, including textual self-descriptions does not lead to a performance improvement as compared to using only the fundamental variables for prediction. Also, the standard deviation in the prediction performance is higher across these sectors. This implies that a sufficient number of training observations is necessary to make accurate predictions from textual self-descriptions.[We also tested whether the lower prediction performance in these business sectors could stem from more diverse textual self-descriptions as compared to other business sectors. However, the representations of the textual self-descriptions are (a) not more/less discriminatory for successful vs. non-successful startups across business sectors, and (b) not more/less diverse across business sectors, suggesting that more diversity in self-descriptions within specific business sectors is not a factor for lower prediction performance.]
§.§ Prediction Performance Across Investment Events
We compare the prediction performance of our fused large language model across different events that are indicative of startup success, namely initial public offering, acquisition, and external funding. For this, we evaluate our models on subsets of the out-of-sample test sets split by the different events. Hence, the corresponding accuracy quantifies, for example, to what extent startups are correctly classified in the subset of startups that eventually had an initial public offering. We proceed analogously for acquisition and funding events. The results are reported in <Ref>. Overall, the events vary in their frequency, as only a few startups had an initial public offering or had been acquired, whereas a larger proportion received external funding.
We find that the prediction performance is generally higher for initial public offerings and funding events. Here 82.05 % of initial public offerings and 80.17 % of funding events were predicted correctly. In contrast, only 65.54 % of acquisitions were predicted correctly, implying that, for the latter, inferences are more challenging. Again, we confirm that machine learning benefits from incorporating textual self-descriptions. In fact, using textual self-descriptions increases the rate of correct classifications for initial public offerings by 4.02 %, for acquisitions by 3.92 %, and for funding events by 2.75 %. Therefore, our findings suggest that textual self-descriptions from VC platforms are informative for predicting startup success, consistently across all success events.
§.§ Robustness Checks
We perform the following additional robustness checks. We evaluate the prediction performance of our fused large language model across different company characteristics (, the age of a startup) and the length of the textual self-description. We find that the inclusion of textual self-descriptions improves the prediction performance considerably, which is consistent across startup ages and across different text lengths. This contributes to the robustness of our findings.
§.§.§ Prediction Performance Across Startup Age.
The prediction performance with and without textual self-descriptions grouped across startup age is reported in <Ref>. For all age groups, the majority vote and random vote as naïve baselines from machine learning are outperformed by a considerable margin and thus point toward the overall large prediction performance. In addition, all performance metrics increase by a considerable margin when including textual self-descriptions. This adds further robustness to our finding that textual self-descriptions are predictive of startup success. Furthermore, the balanced accuracy is higher for older startups with and without textual self-descriptions, indicating that more established startups potentially yield more predictive information on Crunchbase.
Varying the age of startups is also important for another reason: it allows us to assess the prediction performance across different time periods. Startups with an age between 1–12 months originate from 2015, an age between 13–24 months originate from 2014, etc. This thus contributes to the robustness of our findings.
§.§.§ Prediction Performance Across Length of Textual Self-Description.
The prediction performance with and without textual self-descriptions across different lengths of the textual self-description is reported in <Ref>. For all length groups, a majority vote and a random vote are outperformed by machine learning. In addition, a clear improvement in AUROC is found for all groups when including textual self-descriptions. Overall, this adds robustness to our finding that textual self-descriptions are predictive of startup success. Furthermore, the AUROC is higher for startups with longer textual self-descriptions. Similarly, both metrics increase for the baseline without textual self-descriptions. Still, the length of the textual self-description appears to play a minor role in the prediction performance.
§.§ Post-Hoc Explainability of Our Machine Learning Approach
The previous analyses demonstrate the performance improvement of including textual self-descriptions for the task of startup success prediction. Now, we analyze the contributions of each variable (, fundamentals and textual self-descriptions) for predicting startup success. To this end, we aim to understand how our fused large language model uses the variables to arrive at predictions. We use the SHAP value method <cit.>. Intuitively, the SHAP value method treats the prediction of a model as a cooperative game, i.e., the prediction (i.e., the payoff) must be allocated fairly among the feature values (i.e., the individual players) based on their contribution. Hence, the SHAP value method enables a nuanced understanding of how each feature contributes to the prediction of the model and is frequently used for understanding machine learning in management applications <cit.>.
SHAP values are computed for each observation separately, i.e., every feature within the vector of each observation is assigned a SHAP value. SHAP values can also be interpreted at the model level. Therefore, we quantify both feature attribution and feature importance based on the SHAP values. Feature attribution is directly determined by the SHAP values and feature importance is computed by averaging the absolute SHAP values across observations. We follow previous research <cit.> and aggregate the SHAP values (sum) across the document embedding of the TSD to one feature representation.
<Ref> shows the summary plot of SHAP values computed for the predictions of our fused large language model. In the left plot, the dots across each feature represent the feature attribution for each prediction of a specific observation. The right plot shows the mean of the absolute SHAP values across all samples. Both plots show the 20 features with the highest computed feature importance, ranking them from highest (top) to lowest (bottom) importance. Notably, the aggregated representation of the textual self-description is the most important feature, indicating that it contributes, on average, the most to the prediction of our fused large language model. Here feature attributions range from -0.53 to 0.76 with a mean absolute value of 0.29. Thus, out of all features, the textual self-description adds the most to the predictions of our fused large language model.
Variables characterizing the momentum of a startup also make important contributions to the predictions of our fused large language model. Overall, the age of the startup is the second most important feature. Feature attributions range from -0.28 for a startup age of 35.59 months to 0.20 for a startup age of 4 months. On average, a higher age of startups is estimated to have a negative feature attribution to success prediction. This may indicate that startups with a longer market presence face reduced probabilities of success due to, for example, lower perceived growth potential and questionable viability of their business model if they have not yet achieved success. In addition, recent funding activities (last round time lag) are estimated as positive contributions to predictions of success, with feature attributions ranging from -0.37 to 0.05.
A plausible explanation might be that recent funding signals reduced risk, as other investors have recently found the startup promising enough to invest in (, a form of validation). Similarly, the number of investors in the last investment round contributes positively to the success predictions, reinforcing the idea that previous funding may serve as a form of validation that predicts success also in the future.
Founder characteristics also play an important role in the predictions made by our fused large language model. Among these, the number of founders and their educational backgrounds are highly influential. Here, the total number of founders positively contributes to success predictions, which suggests that startups with a higher number of founders may benefit from diverse skill sets and shared responsibilities. The total number of degrees among the the founders also shows a positive contribution, with attributions ranging from -0.09 to 0.09. This suggests that a higher number of educational degrees within the founding team may predict success, possibly reflecting the founding team's capability to tackle complex challenges and innovate. In addition, the presence of a LinkedIn profile (and an email) for founders also stands out as an important and positive contributor to the predictions of our model. This indicates that visible professional networking and the credibility it brings might be a strong predictor of later startup success.
The contributions of the sectors in which a startup operates are also reflected in the SHAP values. For example, as seen by the SHAP values, software is the most influential sector feature. Startups in the software sector typically exhibit high growth potential, so that the information of whether a startup operates in this sector helps to predict later success. Similarly, sectors such as healthcare and artificial intelligence also show positive contributions, with healthcare ranging from -0.04 to 0.13 and artificial intelligence from -0.04 to 0.22. These sectors are typically characterized by innovative solutions of high impact. In contrast, sectors such as food and beverage and hardware show smaller but still positive contributions, which could be attributed to higher capital requirements and longer time to market.
§ DISCUSSION
§.§ Managerial Implications
Our work demonstrates that VC platforms can be used to predict startup success and thus support investing decisions. We find that predictions from our fused large language model achieve an AUROC of up to 82.78 %, a balanced accuracy of up to 74.33 %, and a 7.33-fold ROI. Thereby, baselines without machine learning (, a majority vote) are outperformed by a considerable margin. Prior literature has already shown that various fundamental variables are predictors of startup success, whereas we show that additional predictive power is offered by textual self-descriptions. Here, we find that incorporating textual self-descriptions through our fused large language model increases the AUROC by 2.18 percentage points, the balanced accuracy by 2.33 percentage points, and the ROI by 52.25 percentage points. The increase in prediction performance is statistically significant. As such, our work is of direct managerial relevance as it provides computerized decision support for venture capitalists with the prospect of making financially rewarding investments.
We also show that traditional machine learning methods for making predictions from text (e.g., bag-of-words with manual feature extraction <cit.>) are inferior to state-of-the-art methods based on large language models. Traditional methods <cit.> rely on manually crafting features from text that might not capture the entire latent textual information. In contrast, our fused large language model utilizes so-called neural representation learning, capturing latent information in texts through an automated, data-driven procedure that learns from data. Notably, we observe that fine-tuning the language model does not increase the performance. The complexity behind the alignment of pre-trained knowledge and target domain characteristics has been discussed in recent NLP literature <cit.>, where evidence is provided that fine-tuning does not always help performance due to it being highly task- and data-specific. The fact that fine-tuning shows similar performance as no fine-tuning underlines an important aspect of machine learning: increasing the number of trainable or fine-tunable parameters does not necessarily guarantee performance improvements. Thus managers should carefully consider the use of large language models when dealing with decision problems that involve text data.
The improvements in prediction performance when incorporating textual self-descriptions are robust across all business sectors and economically significant. To assess the practical implications, we translate the prediction performance into investment portfolio performance (ROI). Our results show significantly increased ROI when incorporating textual self-descriptions through our fused large language model: The best-performing baseline without textual self-descriptions amounts to a 6.71-fold ROI, while our fused large language model achieves a 7.23-fold ROI. The financial gains from our fused large language model can be further explained by the substantial costs of false positives in the context of startup investment decisions. False positive classifications for investment decisions lead to investing in startups predicted to succeed but ultimately failing. Hence, investments in startups that eventually fail lead to a potential loss of the entire investment amount. Our model significantly reduces the probability of false positives compared to the baselines, thereby increasing the overall returns from our machine learning approach for making investment decisions.
The Crunchbase database is widely used for academic research, which in turn yields practical implications. Crunchbase offers an online platform with comprehensive data on startups including fundamental variables (e.g., the age of the startup) and textual self-descriptions. Such data has key differences from the data traditionally collected by VC investors for decision-making <cit.>. Here, two reasons stand out why investors traditionally have only little data about startup trajectories. On the one hand, investors typically collect only a few variables about startups (e.g., via scorecards) <cit.> and often not in a structured format <cit.>. On the other hand, and more importantly, investors typically screen only a few dozen startups and thus only have access to startup data for a very small sample size <cit.>, which precludes data-driven inferences. In sum, both of the aforementioned reasons are salient hurdles for training and deploying machine learning tools. As a remedy, prior literature evaluated the predictive ability of fundamental variables on Crunchbase <cit.>. We add to prior literature by using large language models to incorporate the additional predictive ability of textual self-descriptions on Crunchbase. Hence, Crunchbase offers valuable data for VC investors and other practitioners regarding the evaluation of startups and the enhancement of decision-making tools.
§.§ Methodological Implications
We contribute to business analytics research by demonstrating the operational value of large language models in the context of more effective investment decisions. Thereby, we connect to a growing stream of machine learning in business analytics <cit.>. Different from explanatory analysis (, regression analysis) that merely estimates associations in an in-sample setting, machine learning is concerned with how well inferences can be made in an out-of-sample setting. Here, we demonstrate an impactful application of machine learning in VC decision-making.
Large language models have several favorable advantages over traditional methods for natural language processing. On the one hand, large language models provide a flexible way to capture semantics and structure in textual materials, thereby bolstering the prediction performance over alternative machine learning approaches (e.g., bag-of-words). On the other hand, large language models can learn from vast amounts of unlabeled texts through pre-training. As such, large language models can often be applied out-of-the-box with little need for fine-tuning. This is beneficial as it greatly reduces the manual effort and the cost for data annotation. However, applications of large language models in business analytics are still rare, while we develop a tailored, fused architecture for our decision-making problem. As shown above, large language models may need custom tailoring. In our case, we build a fused large language model that can leverage running text but where the final prediction layer can also process structured data. As such, we expect that our fused large language model is of direct relevance for many business analytics settings where the goal is to expand traditional operational information in structured form with additional text data.
Our study offers implications for the use of large language models in business analytics. We based our predictions on a tailored large language model, a recent innovation from machine learning research. We expect that large language models are beneficial for a wide array of managerial decision-making tasks. This opens new opportunities for research by adapting large language models to, for instance, sales and demand forecasting from social media data, credit scoring, and business failure prediction.
§.§ Limitations and Future Research
As with other works, ours is not free of limitation, which offers promising directions for future research. First, large language models such as BERT may embed biases that are populated in downstream tasks. Large language models are trained on vast corpora of text data, which inevitably contain societal biases <cit.>. Consequently, there is a risk that these embedded biases could influence predictions <cit.>, potentially disadvantaging certain startups. Addressing this challenge requires ongoing efforts to mitigate biases within large language models. Future research could focus on refining these models to ensure equitable decision-making processes. For now, we call for careful use when deploying our model in practice. Second, our work is centered on data from the VC platform Crunchbase. While this choice is informed by prior research <cit.>, it does introduce a limitation to our work. Crunchbase is a leading online VC platform that collects rich startup and investor data; however, it may not capture the full set of startups and investors globally. Future work might expand the data sources to include a broader spectrum of startups, enhancing the relevance and robustness across different sectors and regions. Third, the economic landscape of startups is dynamically evolving. To ensure ongoing predictive performance, continuous data collection and model retraining is needed. Lastly, the dynamic nature of the economic landscape might lead to startups adapting their textual self-descriptions in response to model predictions. This suggests an area for future research on the equilibrium implications of textual self-descriptions and model predictions. Specifically, analyzing equilibria could unveil the response of startups to prediction models in designing self-descriptions. Such analysis would require a different form of analysis using equilibria but not machine learning as in our paper.
§ CONCLUSION
The majority of startups fail. Owing to this, the decision-making of investors is confronted with considerable challenges in identifying which startups will turn out to be successful. To support investors in this task, we developed a tailored, fused large language model that incorporates the textual self-description of startups alongside other fundamental variables to predict startup success. Here, we show that additional predictive power is offered by the textual self-descriptions. Our model helps investors identify investment targets that promise financial returns. For this, our work provides computerized decision support that allows investors to automate their screening process with data-driven technologies. Furthermore, our study highlights the potential of applying large language models in domains where relevant text data is available but has not traditionally been used for predicting outcomes needed for decision-making. For example, similar to ours, future work could attempt using textual self-descriptions of venture capitalists to predict the performance of their investments. In such scenarios, the findings from our study suggest that combining textual information with conventional data sources may have the potential to significantly enhance predictive accuracy and decision-making processes.
Acknowledgments. We thank Matthias Gey for help.
informs2014
Online Supplements
§ OUT-OF-TIME PERFORMANCE EVALUATION
We now repeat the analysis from <Ref> but now perform an out-of-time performance evaluation. Specifically, we split our data, ensuring that startups included in the test set originate from a period that follows the one represented by startups in the training set. Thereby, we can evaluate the ability of our model to predict the success of startups that stem from a period outside the one used for training. We proceed analogously as for the analysis in <Ref>, i.e., we perform a 10-fold cross validation and tune our hyperparameters via a randomized grid search (20 iterations) using the tuning grid from <Ref>. <Ref> lists the results. Overall, our results remain robust, i.e., our fused large language model outperforms all baselines. Our fused large language model using both fundamentals and textual self-descriptions yields an AUROC of 78.91 %, a balanced accuracy of 71.03 %, and a 8.58-fold ROI. However, this type of out-of-time splitting leads to smaller datasets, which increases the variance. Altogether, this demonstrates the efficacy of our fused large language model in predicting the success of startups that stem from a period outside the one represented in the training data.
§ PREDICTION PERFORMANCE OF FINAL MACHINE LEARNING CLASSIFIER FOR THE BASELINES
We now evaluate the performance of the final machine learning classifiers within our baselines in predicting startup success (see <Ref>). Overall we observe some variation as to which final machine learning classifier performs best for each baseline. Specifically, using only the fundamental variables or incorporating the bag-of-words approach the random forest classifier performs best (AUROC: 80.95 or 81.34). For all other baselines, the neural network consistently outperforms the other final machine learning classifiers. We attribute the variation in the best-performing final machine learning classifier to the fact that the input size and the complexity vary for each baseline.
§ PERFORMANCE OF VARYING THE INPUT VARIABLES WITHIN OUR FUSED LARGE LANGUAGE MODEL
<Ref> compares the performance of the final machine learning machine learning classifiers within our fused large language model for varying input variables. Specifically, we assess the relative gain from using textual self-descriptions. For this purpose, we compare the prediction performance with two specific sets of predictors: (a) our machine learning approach trained only on fundamental variables and (b) our machine learning approach trained on both fundamental variables and textual self-descriptions (= our fused large language model). Across all machine learning classifiers, we find consistent evidence that the prediction performance is improved when considering textual self-descriptions. By including textual self-descriptions, the AUROC improves by 2.29 percentage points (logistic regression), 3.01 percentage points (elastic net), 1.8 percentage points (random forest), and 2.18 percentage points (neural network). The improvements in the balanced accuracy amount to 2.51 percentage points (logistic regression), 3.03 percentage points (elastic net), 1.26 percentage points (random forest), and 2.33 percentage points (neural network).
The increases in ROI amount to 46.31 percentage points (logistic regression), 48.24 percentage points (elastic net), 29.62 percentage points (random forest), and 52.25 percentage points (neural network). The increases in ROI highlight the economic value of incorporating textual self-descriptions when predicting startup success.
We also assess whether the improvement in prediction performance due to including textual self-descriptions is statistically significant. For this purpose, we utilize McNemar's test comparing the predictions with and without textual self-descriptions (while including fundamental variables). Here, we find that the performance increase is statistically significant at common significance thresholds for all considered machine learning classifiers, , logistic regression (χ^2-statistic = 21.70; p < 0.01), elastic net (χ^2-statistic = 25.55; p < 0.01), random forest (χ^2-statistic = 5.95; p < 0.05), and neural network (χ^2-statistic = 11.00; p < 0.01). In sum, the improvements from using textual self-descriptions are achieved consistently across all classifiers. The results thus confirm that textual self-descriptions have predictive power and thus are of operational value.
For comparison, we also report the prediction performance of machine learning that is fed solely with textual self-descriptions. Here, the majority vote and random vote as naïve baselines are again outperformed by a considerable margin. Furthermore, we observe that the prediction performance of using only textual self-descriptions is comparable but slightly inferior to the prediction performance obtained by using only fundamental variables. For instance, for the neural network, the AUROC is 80.60 % for a model with only fundamental variables vs. an AUROC of 77.24 % for a model with only textual self-descriptions.
§ CROSS-CORRELATION OF FUNDAMENTAL VARIABLES
The cross-correlation of the fundamental variables is shown in <Ref>. Strong correlations are observed, for example, between variables indicating the number of degrees of the founders (, founders degree count maximum and mean). While collinearity might affect correct estimates in explanatory analysis, it is beneficial for machine learning. The reason is that strong correlations often yield more powerful classifiers <cit.>.
[ph!]
Cross-correlations of fundamental variables.
round-mode=places,round-precision=2
Variable 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22
1 Age 1
2 Has email -0.02 1
3 Has phone 0.03 0.29 1
4 Has Facebook 0.05 0.22 0.11 1
5 Has Twitter 0.07 0.21 0.06 0.54 1
6 Has LinkedIn -0.10 0.19 0.12 0.23 0.28 1
7 Number of investment rounds 0.24 0.01 -0.04 0.05 0.05 0.02 1
8 Raised funding 0.03 0.01 0.04 -0.01 -0.01 0.02 0.14 1
9 Last round raised funding 0.01 0.00 0.03 -0.03 -0.02 0.01 0.07 0.93 1
10 Last round post money evaluation 0.01 0.01 0.01 0.01 0.01 0.01 0.07 0.10 0.02 1
11 Last round time lag 0.32 -0.08 -0.06 -0.02 -0.04 -0.11 0.68 0.06 0.05 0.02 1
12 Investor count 0.16 0.06 -0.02 0.08 0.09 0.09 0.66 0.17 0.06 0.09 0.33 1
13 Last round investor count 0.10 0.03 -0.04 0.04 0.06 0.05 0.48 0.09 0.10 0.02 0.38 0.62 1
14 Known investor count 0.22 0.00 -0.04 0.04 0.04 0.01 0.91 0.15 0.08 0.06 0.70 0.69 0.55 1
15 Last round known investor count 0.22 -0.01 -0.05 0.04 0.03 0.00 0.92 0.14 0.09 0.05 0.74 0.63 0.57 0.98 1
16 Founders count 0.02 0.11 -0.09 0.12 0.19 0.17 0.26 0.05 0.03 0.02 0.11 0.26 0.20 0.25 0.24 1
17 Founders different country count 0.01 0.10 -0.09 0.12 0.18 0.16 0.23 0.03 0.02 0.01 0.09 0.22 0.17 0.21 0.21 0.91 1
18 Founders male count 0.02 0.10 -0.08 0.11 0.18 0.17 0.26 0.06 0.04 0.02 0.11 0.26 0.20 0.25 0.24 0.97 0.88 1
19 Founders female count 0.00 0.09 -0.08 0.12 0.17 0.13 0.18 0.02 0.01 0.01 0.06 0.18 0.15 0.17 0.17 0.79 0.82 0.64 1
20 Founders degree count total 0.01 0.10 -0.07 0.11 0.16 0.18 0.25 0.06 0.03 0.02 0.10 0.27 0.21 0.24 0.24 0.77 0.71 0.74 0.63 1
21 Founders degree count maximum 0.01 0.10 -0.07 0.11 0.17 0.18 0.24 0.05 0.02 0.02 0.09 0.25 0.19 0.23 0.22 0.78 0.79 0.75 0.70 0.94 1
22 Founders degree count mean 0.00 0.10 -0.07 0.11 0.17 0.18 0.23 0.04 0.02 0.02 0.09 0.24 0.19 0.22 0.22 0.78 0.80 0.75 0.71 0.91 0.99 1
20lStated: Pearson correlation coefficient
4rN = 20,172
|
http://arxiv.org/abs/2409.03332v1 | 20240905081142 | Masked Sensory-Temporal Attention for Sensor Generalization in Quadruped Locomotion | [
"Dikai Liu",
"Tianwei Zhang",
"Jianxiong Yin",
"Simon See"
] | cs.RO | [
"cs.RO"
] |
Rethinking Improved Privacy-Utility Trade-off with Pre-existing Knowledge for DP Training
Yu Zheng,
Wenchao Zhang,
Yonggang Zhang^∗,
Wei Song,
Kai Zhou,
Bo Han
∗ Yonggang Zhang is the corresponding author.
Yonggang Zhang is with the Department of Computer Science, University of Technology Sydney, Australia. E-mails: [email protected] Zheng is with the Department of Information Engineering, Chinese University of Hong Kong, Shatin, Hong Kong SAR. E-mail: [email protected] Zhang is with the College of Computer Science and Technology, Xi’an University of Science and Technology, China. E-mail: [email protected]; Wei Song is with the School of Computer Science and Engineering, Northeastern University, China. E-mail: [email protected]; Kai Zhou is with the Department of Computing, Hong Kong Polytechnic University, Kowloon, China. E-mail: [email protected]; Bo Han is with the Department of Computer Science, Hong Kong Baptist University, Hong Kong SAR. E-mails: [email protected]
MOX - Dipartimento di Matematica “F. Brioschi”, Politecnico di
Milano, via Bonardi 9, 20133 Milan, Italy
^1 [email protected]
^2 [email protected]
^3 [email protected]
September 9, 2024
==================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
[1]NVIDIA AI Technology Centre (NVAITC); e-mail: {dikail,jianxiongy,ssee}@nvidia.com
[2]College of Computing and Data Science, Nanyang Technological University, Singapore; e-mail: [email protected], [email protected]
[3]also with Nanyang Technological University and Coventry University
§ ABSTRACT
With the rising focus on quadrupeds, a generalized policy capable of handling different robot models and sensory inputs will be highly beneficial. Although several methods have been proposed to address different morphologies, it remains a challenge for learning-based policies to manage various combinations of proprioceptive information. This paper presents (), a novel transformer-based model with masking for quadruped locomotion. It employs direct sensor-level attention to enhance sensory-temporal understanding and handle different combinations of sensor data, serving as a foundation for incorporating unseen information. This model can effectively understand its states even with a large portion of missing information, and is flexible enough to be deployed on a physical system despite the long input sequence.
§ INTRODUCTION
Quadrupedal robots have showcased their capability to navigate in various challenging terrains, thanks to the rapid advancements of deep reinforcement learning (RL) technology <cit.>. With the increasing availability of affordable quadruped robots on the market,
there is a growing interest in developing a general-purpose locomotion policy that can fit all types of quadrupedal devices. Typically learned locomotion policies are specific to the robot, observation space, and task they were trained on, making it challenging to transfer or generalize to other robots or scenarios.
Recent works, such as GenLoco <cit.> and ManyQuadrupeds <cit.>, have successfully developed a single policy with the ability to adapt to diverse morphologies and degrees of freedom (DoF). However, these models depend on a fixed observation space input to generate latent space representations. They become ineffective when facing the following situations: (1) deployed to quadrupeds equipped with a different sensor set, (2) sensor data turn to unreliable due to wear and tear, (3) new input is needed for new task. Since sensory feedback is interrelated and each sensor plays a critical role at different stages of locomotion <cit.>, a policy with a deep understanding of proprioceptive information to handle flexible input is desired, to enhance the generalization, flexibility, and extensibility.
One promising solution is self-attention-based transformers <cit.>, which have demonstrated exceptional capabilities in understanding complex sequential information of arbitrary lengths.
They have been widely used in robotics to enhance various tasks
with multimodal processes <cit.>. However, due to the complex model structures and vast parameters, robots driven by transformers often run at very low frequency <cit.>, or depend on external high-power computing platforms <cit.>.
To address these issues, two common strategies are proposed to achieve smooth and stable control for transfermer-based locomotion tasks. The first strategy involves outputting high-level commands as an interface, such as gait pattern <cit.> and velocity commands <cit.>. An external joint controller, which requires additional design and training, is then used to convert these high-level commands into joint commands. However, the application of this method is restricted to the policy combined with perception and human-robot interaction. The second strategy, targeting the vanilla locomotion tasks, applies linear projections to observation-action data for tokenization <cit.>. Although this method is straightforward and efficient for producing joint commands in an end-to-end manner, it limits the transformer's direct access to sensory information and still relies on fixed input, thereby constraining its in-context understanding capability and multimodal nature of the data.
To address the above limitations, we propose , a novel transformer-based model for end-to-end quadruped locomotion control. It achieves sensor input generalization with its multimodal nature, while still being directly deployable on physical systems. Specifically in , all sensory data are discretized and tokenized to form a long proprioceptive information sequence. Inspired by the work <cit.> on learning spatiotemporal information in video understanding, a random mask is applied to remove a portion of the observation during training. This significantly enhances the model's sensory-temporal understanding, to better handle different combinations of sensor data and serve as a foundation for incorporating unseen data. Additionally, it aids in identifying the most essential sensory information, thereby reducing the computational power required for physical deployment. Extensive experiments conducted in simulation demonstrate that our method can efficiently handle incomplete sensory information, even with half of the data missing. It is also robust against unseen data, making it a solid foundation for further extensions. With direct sensory-temporal attention, the model is flexible enough to mix-and-match desired information for finetuneing, meeting the requirement for different end-to-end quadruped locomotion control in the physical world.
§ RELATED WORK
§.§ Reinforcement Learning in Legged Locomotion
Reinforcement learning (RL) has gained significant attention in developing robotic controllers for tasks such as legged locomotion <cit.>, eliminating the need for extensive prior knowledge. With the advance of robotic simulation, RL-based locomotion is often trained in virtual environments <cit.> with diverse terrains <cit.> and randomized environmental factors <cit.> to improve the robustness of the agent. This technique is commonly known as domain randomization (DR). Training in simulators also provides rich information, some of which is not easily accessible in the real world (i.e., privileged information). To better interpret such information and bridge the sim-to-real gap when deploying policies to the physical world, system identification is commonly used to transfer knowledge to a deployable student policy. <cit.> employed action imitation to infer teacher behaviors using historical proprioceptive data. <cit.> further developed a two-stage adaption framework for faster and more robust online transfer, which has become the foundation for many subsequent works <cit.>. Another approach combines transfer loss with RL loss for joint optimization <cit.> to allow the student to explore freely to maximize the reward return.
§.§ Transformer in Robotics
New models and methods are proposed to leverage the power of transformers in RL and robotics. For instance, Decision Transformer <cit.> converts states, actions, and rewards into embeddings using an encoder. Trajectory Transformer <cit.> uses the complete discretized trajectory for language model-like autoregressive prediction. Building on these frameworks, Gato <cit.> was developed to serve as a generalist agent for hundreds of tasks, including real-world robotic manipulation. Vision Language Models (VLM) <cit.> and Vision Language Action Models (VLA) <cit.> use transformers as interfaces for scene and language understanding to provide high-level commands for robotic control and human-robot interaction.
For legged locomotion, <cit.> developed a transformer model for vision-based locomotion. It outputs high-level velocity commands and relies on a dedicated low-level controller for motor control. To bring transformers to direct motor control, <cit.> proposed TERT, which utilizes historical observation-action pairs to generate target motor commands directly. <cit.> applied a similar method to the bipedal locomotion task and later reformulated it as a next token prediction problem <cit.>.
§ PRELIMINARY
We adopt the two-stage teacher-student transfer approach from TERT <cit.> as the basis of our framework. It utilizes a well-trained teacher policy through RL with privileged information.
§.§ Simulation Environment
We implement the simulation environment based on Isaac Gym and its open source library IsaacGymEnvs <cit.> to enable massive parallel training.
Terrain and Curriculum. We adopt the terrain curriculum from <cit.> to achieve locomotion in dynamic terrains. Five terrain types (smooth slope, rough slope, stairs up, stairs down, discrete obstacle) are used to spawn a total of 20 terrains with proportions of [0.1, 0.1, 0.35, 0.25, 0.2] respectively. Each terrain has 10 levels with the increasing difficulty for larger slope angle, higher stairs step and larger obstacle size. During training, the agent progresses to the next level when the tracked linear reward reaches 80% of the maximum achievable value and regresses if it fails to reach 25%. If any agent finishes the highest level, it will be sent to a random level to continue exploring.
Domain Randomization.
To enhance the robustness of the policy, DR is used in the simulation. The commanded longitudinal and lateral velocity is sampled in [-1.0, 1.0] m/s and horizontal angular velocity in [-1.0, 1.0] rad/s. The command is resampled every 5 seconds without any curriculum. Due to the significant computational power and time required for transformer calculations, system delay is introduced to simulate the delay in action updates.
Please refer to the Appendix for the complete DR parameters.
Observations and Actions. The privilege observation e_t for teacher training contains ground truth data gathered fro simulation, including base linear and angular velocity, orientation, surrounding height map and randomized parameters as described above. For proprioceptive information, we use three commonly seen low-level sensors from quadrupeds, joint encoders, IMU and foot contact sensors. These sensors can provide five sensory data, including joint position q ∈ℝ^12, joint velocity q̇∈ℝ^12, angular velocity ω∈ℝ^3, gravity vector g ∈ℝ^3 and binary foot contact c ∈ℝ^4. Furthermore, the randomly sampled user command target cmd = [v_x, v_y, ω_z] and actions from previous step a_t-1∈ℝ^12 is added, resulting in a observation of o_t ∈ℝ^49 for each step. To gather the temporal information, a list of historical proprioceptive information [o_t, o_t-1, ⋯, o_t-T] from past T=15 steps is stored and passed to the student network. Thus, full observation is in the ℝ^49 × 15 space. Both teacher and student outputs the desired joint position a_t, which is further processed by a PD controller for the output torque τ = K_p (q̂-q) + K_d (q̂̇̂- q̇), with base stiffness and damping set to 30 and 0.7 respectively and the target joint velocity q̂̇̂ set to 0.
Reward Function for RL. The reward functions are designed to encourage the agent to follow the commanded velocity. Following <cit.>, we primarily penalize linear and angular movement along other axes, large joint acceleration and excessive power consumption.
Please refer to the Appendix for the complete reward structure.
§.§ Teacher Policy and Training
We implement a teacher policy following <cit.>. The teacher first encodes the privilege information e_t, with a factor encoder μ into a latent space l_t, which is then combined with the latest observation-action pair o_t for the teacher policy π̂ to output the desired joint position â_t:
l̂_̂t̂ = μ(e_t), â_t = π̂ (l̂_̂t̂, o_t)
The μ and π networks are implemented as MLP with hidden layers of [512, 256, 128] and [256, 128], respectively. The teacher policy is trained with PPO <cit.> directly to maximize the reward return and is shared across all student transfers at later stages for a fair comparison.
§ METHODOLOGY
We present , a novel transformer-based framework to generate a generalized understanding of low-level proprioceptive information for quadruped locomotion in complex terrains. With this foundational understanding, this framework is capable of handling different combinations of sensory inputs, enabling better generalization and flexibility. It can potentially be extended to incorporate high-dimensional sensors for more complex tasks. Figure <ref> shows the overview of .
Sensory Data Tokenize. To avoid using linear projection like previous transformer-based locomotion controllers <cit.>, we need to map continuous data to tokens directly. Following previous works <cit.>, we first normalize the sensory data based on the collected mean and variance, and then pass them through an encoder. This is similar to Gato <cit.>, which discretizes the value into 256 bins. The data are further mapped into a learnable embedding space with 128 dimensions.
Positional Embedding and Sensor Type Embedding. To distinguish proprioceptive information from different sensors with multiple dimensions and historical relations, two additional embeddings are added. The first one is a fixed 2D sin-cos position embedding <cit.> applied on the sensor dimension and time axis. This allows the framework to handle sensors with varying lengths of dimensions and historical time windows and be easily extendable. To accommodate the multimodal nature of the sensory data, another learnable embedding is add to indicate each sensor type. This enables easy mix and match of information from different sensors without concerns about the order or placeholders.
Random Masking. Inspired by the use of masking in image and video understanding <cit.>, we randomly mask out portions of the sensory information. This brings two benefits. First, since only part of the sensory data are visible to the network, the model is required to infer and reconstruct the missing information from them, thereby enhancing its understanding of the relationships between different sensory inputs. Second, random masking significantly reduces the training time and computational resources required, as the complexity of self attention is necessarily quadratic in the input length, making it more feasible to run in massive parallel.
Transformer Model. We implement a transformer model to process the generated tokens. The model consists of multi-head self-attention blocks with a MLP ratio of 2.0. An additional learnable state embedding is added to consolidate the processed information <cit.>, which is subsequently projected into the action space with a MLP layer π:
l_t = ([o_t, o_t-1, ⋯, o_t-T]), a_t = π (l_t)
Teacher-Student Transfer. Following TERT <cit.>, we train with a two-stage transfer strategy. In the first offline pretraining stage, trajectory is gathered by unrolling the well-trained teacher policy while the student will predict the next actions. This is to ensure that the student can produce reasonable actions during the second online correction stage to overcome the gap of distribution shift by training on its own trajectory. In each stage, 50 million trajectories are collected to train the transformer for 80k updates.
We minimize the loss for the next step prediction:
ℒ = a_t - â_t ^2
§ EXPERIMENTS AND RESULTS
We design and conduct various simulation experiments to evaluate the effectiveness of the proposed masking mechanism, overall performance of , and its generalization ability for different sensor data. We mainly focus on three metrics: linear velocity tracking return per step, angular velocity tracking return per step, and the total final reward return, which tells us how the agent can conduct the task (following users' commands) and what the overall performance is. All reported results are averaged over 5000 trails with five terrain types and different levels. They are normalized on the basis of respect teacher data for easy comparison.
§.§ Impact of Mask Ratio
First, we would like to investigate the maximum portion of missing data can handle to reconstruct robot states. During the transfer stages, we set the masking ratio to 0%, 25%, 50% and 75% independently and test the 16 model with these masking ratios. Figure <ref> shows the resultant heatmap matrix. When trained without masking, despite the model having very good performance when all information is present, it suffers from missing data and cannot efficiently reconstruct the status. Also, we can see that the performance is more dependent on the masking ratio in the second stage than that in the first stage. This is because in the second transfer stage, the student is actually interacting with the environment to reduce the gap caused by missing information and observation shift. In contrast, the mission of the first stage is to generate a usable policy that outputs reasonable actions so the agent does not fail dramatically and has the chance in the second stage to generate high-quality trajectories for optimization, which is achievable even with a masking ratio of 75%. This also demonstrates the importance of using two-stage transfer.
Comparing the performance of these models, we choose the one trained with the masking ratio of 75% in the first stage and 50% in the second stage, which can well balance the resource usage and performance. This selected model can restore almost full performance even under a masking ratio of 50% during testing. Such ratio can significantly reduce the training time and resources required.
§.§ Comparison with Baselines
We compare with two baselines. The first is RMA <cit.>, which is implemented with TCN <cit.> to capture temporal information. The second is TERT <cit.>, a transformer-based framework with linear projection for tokenization of observations and actions in two favors: concatenated single token and separate tokens for states and action, resulting in T and 2T tokens respectively <cit.>. Both RMA and TERT have access and need the full observation information. To further evaluate the importance and capability of the transformer structure in our method, we replace it with a GRU <cit.> model. For both and GRU, we apply a testing mask of up to 50%, as identified in Section <ref>. All baselines are trained with the same two-stage transfer method sharing a common well-trained teacher network.
Table <ref> shows the comparison results. When fully optimized with the teacher-student transfer, the performance of all trained policies with complete observations is very close, often within just 2% difference. When faced with incomplete information, transformer-based can have a better understanding of the data and reconstruct the robot state more accurately than the GRU-based network, even with only half of the information.
§.§ Generalization, Robustness and Flexibility
While achieving state-of-the-art performance, additionally offers the benefit of generalization and flexibility to customize the model after training or even on the fly to fit the deployment requirement. We can balance the performance and computation power required by randomly masking out the observation directly, just like we do during testing. As quadrupedal robots are often not equipped with high-power computers and need to be off the grid to work remotely, such balance is crucial. The in-context understanding capability of the transformer can give us more insight.
Important Sensory Feedback. As our random masking is applied to each dimension independently, even with a mask of 50%, the proprioceptive information is often not completely wiped out and the transformer can still trace it in the given time window. We further investigate the impact of removing one sensor completely from the feedback. The results are shown in Figure <ref>. It is clear that certain feedback like q̇, c, ω and even a_t-1 are quite redundant and a well trained transformer-based can easily compensate the missing information.
Missing of Sensor Dimension. For some proprioceptive information, it is provided by multiple sensors, such as the individual joint encoders and force senors on each foot, meaning that they can also be damaged independently due to wear and tear from daily operations and it is not easy to have a redundant sensor. Among these sensors, joint encoders are the source of both q and q̇ for the observation. From previous analysis, missing of joint information can be crucial. We investigate the scenario where only a few encoders are dead or the data are compromised and need to be excluded. We conduct the test to mask certain numbers of joint encoders. For each masked joint, the related q and q̇ are removed completely from the observation. The results are shown in Figure <ref>. The loss of the joint encoder can have a great impact on the performance as the related information is very essential for quadruped locomotion. However, our transformer model can still handle multiple missing encoders before large performance degradation.
Time Window. Another configuration we can easily adjust on the fly for balance is the time window for historical information. The default window, T=15, is equivalently to past 0.3s. We check whether such a long sequence of information is necessary by applying different time windows without masking. The results are shown in Figure <ref>. It is clear that can efficiently extract and reconstruct the robot state for actions with only 7 steps of past information. Interestingly, given a longer timeframe like T=20 and the position the transformer has never seen during training, is still robust and not affected by these unknown information.
Unseen Information. We further assess the model's capability to handle previously unseen information by appending the sequence with random dummy tokens. With the increasing of unseen information, it become more challenging for the policy in following commands. However, even with 256 dummy tokens, equivalent to a camera frame with ViT <cit.>, the agent can still produce reasonable commands for exploration. This demonstrates that the model can be used as a solid foundation for further extension with high-dimensional information.
Minimized Observation and Fine-tuning. From the previous analysis in this section, we have identify the important sensors and the minimal history length required. We further explore the feasibility of creating a minimized observation policy based on these information. A observation with only cmd, q, g and a_t-1 with a window of T=7 is equivalent to a mask of 71% on the complete observation space. When directly deployed with such mask, the policy cannot perform well due to all the missing information. As there is no more observation, we freeze the transformer for a fast finetuning on the projection layers and test both the vanilla PPO <cit.> and supervised learning with online correction <cit.>. The performance of the policies is shown in Figure <ref>. While both algorithms can help improve the performance of the policy with only minimized observations, supervised learning gives a more significant boost. Training with the teacher has been identified as one major approach to achieve quadruped locomotion on challenging terrains <cit.>. Although our foundation with the transformer can provide a solid start point of student policy, additional work is still needed for pure RL-based fine-tuning to reduce the dependence on privilege information.
Model Size. We adjust our model by stacking [1, 2, 3, 4] attention blocks, resulting in a model with 0.2M, 0.33M, 0.46M and 0.59M parameters respectively. Figure <ref> shows that the model performance increases with the model size reaching optimal results with three stacked layers.
§.§ Physical Deployment
We successfully deployed the trained policy, exported with JIT, directly on a Unitree A1 robot equipped with a Jetson AGX Orin Developer Kit. The Jetson acts as both the main processor and a payload. No further model optimization is required for a zero-shot transfer. With the onboard processing power, the policy can process the full observation space at 75Hz and 100Hz while masking out half of the data. With our minimized observation, it can achieve 150Hz, meeting the requirements for real-time deployment and allowing room for further extension with high-dimensional sensors. Figure <ref> shows some snapshots from the deployment test. Please refer to the supplementary video for more information.
§ CONCLUSION
This paper introduces , a transformer-based model for quadruped locomotion. It leverages the masking technique and the direct sensor-level attention to enhance the understanding and generation of sensory information input. We evaluate the robustness of our policy with different combinations of proprioceptive information and demonstrate its capability to compensate for missing data and handle unseen information. Finally, we show that our model is efficient to be deployed on a physical robot without any additional optimization.
Limitation and Future Work. Attention in the full sensory-temporal observation space is computationally intensive and time-consuming. Although using masking can significantly reduce the resources needed, it still takes hours for knowledge transfer, which is considerably longer than existing methods like TCN and temporal-level attention. Additionally, fine-tuning the model with pure reinforcement learning remains challenging, necessitating a more efficient knowledge transfer solution to leverage privileged information effectively. While the policy demonstrated its capability to handle missing data, an addition module is needed to detect and mask out the defected sensors.
§ QUADRUPED SENSOR SET
When selecting sensors for proprioceptive information gathering, we referred to some of the most popular models that are commonly used. Table <ref> summarizes the default sensor sets that are typically included out of the box. For low-level tasks such as locomotion, joint encoders and IMUs are the most commonly used sensors, with foot contact sensors also being utilized in some works <cit.>.
§ DOMAIN RANDOMIZATION
We provide details of the domain randomization techniques used in simulation in Table <ref>. To simulate user commands, we frequently resample linear commands in longitudinal and lateral directions, as well as the angular heading, which are then converted to angular velocity commands. To enhance the robustness and generalization of our policy, key robot control factors such as stiffness, damping, and motor strength are randomized. External interferences, such as payload variations and push forces, are also randomized. To accommodate the processing time of the transformer, each action can be delayed by up to 15 ms. The same randomization techniques are applied to both the teacher training session and the knowledge transfer session.
§ REWARD TERMS
Following the approaches in <cit.>, the rewards are designed to encourage the agent to follow the commanded velocity. To achieve a smooth, safe, and efficient policy, we penalize unwanted motion, sudden changes in actions, feet slippage, body collisions, and excessive power consumption. During testing, we noticed that adding a reward component for balanced power distribution across joints helps to achieve more balanced motion.
§ TRAINING HYPER-PARAMETERS
In Table <ref>, present the detailed training hyper-parameters for PPO training of the teacher policy and the knowledge transfer for transformer-based student policies. The two stages in the knowledge transfer process share the same hyper-parameters.
§ EXTENSION WITH HEIGHT INFORMATION
To better illustrate our framework's capabilities, we tokenize height map information using a vanilla MLP encoder and add it directly to the observation for fine-tuning over 200 epochs. The normalized test results are shown in Table <ref>. The inclusion of height information significantly improves the agent's ability to navigate through terrains, particularly on staircases, which pose the greatest challenge for a small quadruped.
§ LEARNABLE MASK
We conduct additional tests to compare the performance of using a learnable representation with the same two-stage transfer. The results, shown in Table <ref>, indicate that when deployed with masks, the learnable representation cannot match the performance of the removal approach and has a significantly negative impact when all information is present.
|
http://arxiv.org/abs/2409.02629v1 | 20240904114700 | AdvSecureNet: A Python Toolkit for Adversarial Machine Learning | [
"Melih Catal",
"Manuel Günther"
] | cs.CV | [
"cs.CV",
"cs.AI",
"cs.CR",
"cs.LG"
] |
My editor
AdvSecureNet: A Python Toolkit for Adversarial Machine Learning
Melih Catal [email protected]
Software Evolution and Architecture Lab
University of Zurich, Switzerland
Manuel Günther [email protected]
Artificial Intelligence and Machine Learning Group
University of Zurich, Switzerland
September 9, 2024
=================================================================================================================================================================================================================================================================================
§ ABSTRACT
Machine learning models are vulnerable to adversarial attacks. Several tools have been developed to research these vulnerabilities, but they often lack comprehensive features and flexibility. We introduce AdvSecureNet, a PyTorch based toolkit for adversarial machine learning that is the first to natively support multi-GPU setups for attacks, defenses, and evaluation. It is the first toolkit that supports both CLI and API interfaces and external YAML configuration files to enhance versatility and reproducibility. The toolkit includes multiple attacks, defenses and evaluation metrics. Rigiorous software engineering practices are followed to ensure high code quality and maintainability. The project is available as an open-source project on GitHub at <https://github.com/melihcatal/advsecurenet> and installable via PyPI.
Adversarial Machine Learning, Trustworthy AI, Research Toolkit, PyTorch
§ INTRODUCTION
Machine learning models are widely used in fields such as self-driving cars <cit.>, facial recognition <cit.>, and medical imaging <cit.>, as well as in natural language processing tasks like chatbots <cit.> and translation services <cit.>. However, these models are vulnerable to adversarial attacks – subtle input modifications that can deceive the models <cit.>, which can compromise their integrity, confidentiality, or availability <cit.>.
Several libraries, such as ART <cit.>, AdverTorch <cit.>, and CleverHans <cit.>, have been developed to research these vulnerabilities by providing tools for implementing attacks, defenses, and evaluation metrics. However, these libraries often lack key features necessary for comprehensive research and experimentation, such as native multi-GPU support, integrated CLI and API interfaces, and support for external configuration files.
To address these limitations, we introduce AdvSecureNet (Adversarial Secure Networks), a comprehensive and flexible Python toolkit that supports multiple adversarial attacks, defenses, and evaluation metrics, optimized for multi-GPU setups. It includes a command-line interface (CLI) and an application programming interface (API), providing users with versatile options for experimentation and research. This paper outlines the features, design, and contributions of AdvSecureNet to the adversarial machine learning community.
[1]SecML supports attacks from CleverHans <cit.> and FoolBox <cit.>.
[2]This feature is only available for adversarial training.
§ ADVSECURENET FEATURES
Adversarial Attacks and Defenses: AdvSecureNet supports a diverse range of evasion attacks on computer vision tasks, including gradient-based, decision-based, single-step, iterative, white-box, black-box, targeted, and untargeted attacks <cit.>. AdvSecureNet also includes defense mechanisms such as adversarial training <cit.>, which incorporates adversarial examples into the training process to enhance model resilience, and ensemble adversarial training <cit.>, which leverages multiple models or attacks to develop a more resilient defense strategy.
Evaluation Metrics: AdvSecureNet supports metrics like accuracy, robustness, transferability, and similarity. Accuracy measures performance on benign data, robustness assesses resistance to attacks, transferability evaluates how well adversarial examples deceive different models, and similarity quantifies perceptual differences using PSNR <cit.> and SSIM <cit.>.
Multi-GPU Support: AdvSecureNet is optimized for multi-GPU setup, enhancing the efficiency of training, evaluation, and adversarial attack generation, especially for large models and datasets. This parallel GPU utilization aims to reduce computational time, making the toolkit ideal for large-scale experiments.
Interfaces and Configuration: AdvSecureNet offers both CLI and API interfaces. The CLI allows for quick execution of attacks, defenses, and evaluations, while the API provides advanced integration and extension within user applications. The toolkit also supports YAML configuration files for easy parameter tuning and experimentation, enabling users to share experiments, reproduce results, and manage setups effectively.
Built-in Models, Datasets and Target Generation: AdvSecureNet supports all PyTorch vision library models and well-known datasets like CIFAR-10, CIFAR-100, MNIST, FashionMNIST, SVHN, and ImageNet, allowing users to start without additional setup. Additionally, it can automatically generate adversarial targets for targeted attacks to simplify the attack configuration process. Users can still provide target labels manually and use custom datasets and models if desired.
§ DESIGN AND IMPLEMENTATION
AdvSecureNet is a modular, extensible, and user-friendly toolkit built on PyTorch for efficient computation and GPU acceleration. It includes core modules for attacks, defenses, evaluation metrics, and utilities, each with well-defined interfaces. The toolkit follows best practices in software engineering, featuring comprehensive testing, documentation, and CI/CD pipelines. It adheres to PEP 8 guidelines and uses Black for code formatting, along with tools like Pylint <cit.> and MyPy <cit.> for static code analysis and type checking. SonarQube <cit.> and Radon <cit.> provide insights into code quality and complexity. The project is hosted on GitHub under MIT license. Documentation is available on GitHub Pages, which includes detailed guidance on installation, usage, and comprehensive API references. AdvSecureNet is also available as a pip package on PyPI for easy installation and use across various environments.
§ RELATED WORK AND COMPARISON WITH EXISTING TOOLKITS
The burgeoning field of machine learning security has led to the development of several libraries designed to aid researchers. Notable among these are ART <cit.>, AdverTorch <cit.>, SecML <cit.>, FoolBox <cit.>, Ares <cit.>, and CleverHans <cit.>. ART, developed by IBM, is recognized for its extensive range of attacks, defenses, and support for multiple frameworks. AdverTorch, created by Borealis AI, focuses on PyTorch and offers a wide array of attacks, though it lacks support for adversarial training. CleverHans, one of the earliest libraries in the field, was initially designed for testing adversarial attacks, and as a result, has limited defensive capabilities. SecML and Ares, while smaller in scale, provide unique features; Ares, for instance, supports distributed training and external configuration files. FoolBox is distinguished by its diverse attack portfolio and support for multiple frameworks, but it does not offer defensive methods. Unfortunately, many of these libraries are no longer maintained. Table <ref> provides a detailed comparison of the features offered by these libraries.
AdvSecureNet stands out among existing adversarial machine learning toolkits in both its features and performance. Regarding features, AdvSecureNet is one of the few toolkits that are actively maintained, which is crucial for ongoing support. While IBM ART offers the most extensive attacks and defenses, AdvSecureNet provides a balanced selection, including adversarial and ensemble adversarial training for defense and a diverse range of attacks for evasion. AdvSecureNet distinguishes itself by being the first toolkit to natively support multi-GPU setups for adversarial attacks, defenses, and evaluation, whereas ARES only supports distributed adversarial training. This makes AdvSecureNet ideal for large-scale experiments. It is also the first toolkit that fully supports both CLI and API usages and external YAML configuration files, aiding researchers in sharing and reproducing experiments.
AdvSecureNet shows its strength in performance, achieving faster execution times on multi-GPU setups compared to other toolkits. As shown in Table <ref>, AdvSecureNet’s multi-GPU PGD attack time (1.78 minutes) outperforms ARES’s best single GPU time (3.05 minutes). In adversarial training on CIFAR-10, AdvSecureNet reduces training time from 5.07 minutes on a single GPU to 2.77 minutes with 7 GPUs, a speedup of 1.83x. AdvSecureNet's performance is even more impressive on ImageNet, reducing training time from 240 minutes on a single GPU to 30 minutes with 7 GPUs, which is an 8x speedup. In comparison, ARES reduces training time from 627 minutes on a single GPU to 217 minutes with 7 GPUs, a less efficient speedup of 2.89x. IBM ART, which does not natively support multi-GPU setups, remains at 323 minutes on a single GPU. The results show that AdvSecureNet provides superior performance and scalability, making it an ideal choice for large-scale adversarial machine learning experiments.
§ FUTURE WORK AND CONCLUSION
The AdvSecureNet toolkit is an ongoing project, and we plan to continue improving and expanding its capabilities. Currently, the toolkit focuses on evasion attacks and defenses in computer vision tasks, but we aim to extend its functionality to other domains, such as natural language processing. Additionally, we plan to incorporate other aspects of the trustworthiness of machine learning models, including fairness and interpretability.
In conclusion, AdvSecureNet is a comprehensive toolkit for adversarial machine learning research, offering a wide range of attacks, defenses, datasets, and evaluation metrics in addition to multi-GPU support, CLI and API interfaces, as well as external configuration files. By providing a flexible and efficient platform for experimentation, AdvSecureNet aims to advance the field of adversarial machine learning.
|
http://arxiv.org/abs/2409.03018v1 | 20240904181930 | Random sampling of permutations through quantum circuits | [
"Bibhas Adhikari"
] | quant-ph | [
"quant-ph",
"cs.DM",
"math.CO"
] |
theoremTheorem[section]
proposition[theorem]Proposition
definition[theorem]Definition
corollary[theorem]Corollary
example[theorem]Example
exam[theorem]Example
remark[theorem]Remark
lemma[theorem]Lemma
kernel
span
|
http://arxiv.org/abs/2409.03733v1 | 20240905174449 | Planning In Natural Language Improves LLM Search For Code Generation | [
"Evan Wang",
"Federico Cassano",
"Catherine Wu",
"Yunfeng Bai",
"Will Song",
"Vaskar Nath",
"Ziwen Han",
"Sean Hendryx",
"Summer Yue",
"Hugh Zhang"
] | cs.LG | [
"cs.LG",
"cs.AI",
"cs.CL"
] |
: Assessing Contextual Integrity Norms in Language Models
Yan Shvartzshnaider
York University
Vasisht Duddu
University of Waterloo
John Lacalamita
York University
=======================================================================================================================
§ ABSTRACT
While scaling training compute has led to remarkable improvements in large language models (LLMs), scaling inference compute has not yet yielded analogous gains.
We hypothesize that a core missing component is a lack of diverse LLM outputs, leading to inefficient search due to models repeatedly sampling highly similar, yet incorrect generations.
We empirically demonstrate that this lack of diversity can be mitigated by searching over candidate plans for solving a problem in natural language.
Based on this insight, we propose PlanSearch,
a novel search algorithm which shows strong results across HumanEval+, MBPP+, and LiveCodeBench (a contamination-free benchmark for competitive coding).
PlanSearch generates a diverse set of observations about the problem and then uses these observations to construct plans for solving the problem. By searching over plans in natural language rather than directly over code solutions, PlanSearch explores a significantly more diverse range of potential solutions compared to baseline search methods.
Using PlanSearch on top of Claude 3.5 Sonnet achieves a state-of-the-art pass@200 of 77.0% on LiveCodeBench, outperforming both the best score achieved without search (pass@1 = 41.4%) and using standard repeated sampling (pass@200 = 60.6%).
Finally, we show that, across all models, search algorithms, and benchmarks analyzed, we can accurately predict performance gains due to search as a direct function of the diversity over generated ideas.
“If you fail to plan, you plan to fail.”
— Mastermind, Taylor Swift
§ INTRODUCTION
The bitter lesson <cit.> famously posits that two forms of scaling trump everything else: learning and search.
While recent advances in large language models have removed all doubt on the effectiveness of learning, search has not yet proven its value for large language models, despite its success with classical machine learning techniques <cit.>.
Here, we refer to search as any method of spending compute at inference time to improve overall performance <cit.>. In this work, we focus our efforts on improving LLM search for code generation, one of the most important current applications of LLMs.
We hypothesize the major bottleneck preventing widespread use of search at inference time for code is a lack of high-level diversity in model outputs. This lack of diversity is likely in part due to specific post-training objectives commonly used to train LLMs as chatbots, in which models are oftentimes optimized to produce a single correct answer <cit.>.
We empirically demonstrate that this is the case for many open-source language models which have undergone significant post-training. Specifically, we show that in many cases, despite instruction tuned models outperforming base models by large margins on a single sample regime (pass@1), this trend disappears—sometimes even reversing—when generating many samples. We refer to Figure <ref> as an example of this phenomenon.
Furthermore, the lack of diversity is particularly harmful for search algorithms. In the most egregious of cases with little to no diversity, such as greedy decoding, repeated sampling from the model returns highly similar programs, resulting in minimal gain from additional inference-time compute.
This diversity problem is also not reflected in many public leaderboards (e.g. LMSYS Chatbot Arena <cit.>, LiveCodeBench <cit.>, OpenLLMLeaderboard <cit.>), which often report only the pass rate from a single sample of the model, ignoring an entire dimension along which to compare models.
While the performance of one sample is the primary metric of relevance for applications such as chatbots, as users typically are sensitive to latency, this single scalar is insufficient to fully capture the quality of a model when it is allowed to use more inference time compute.
In this paper, we explore several directions for improving the diversity of LLMs at inference time. We hypothesize that the right axis of diversity to search over is the natural language conceptual/idea space, and we validate our hypothesis across several experiments. First, we show that models can produce the correct final program when fed the correct solution sketches, where these sketches have been “backtranslated” from passing solution code into sketches in idea space (Section <ref>).
Second, we show that when models are asked to generate their own ideas before implementing them on LiveCodeBench (IdeaSearch), their accuracy conditioned on a particular sketch trends towards either 0% or 100%, suggesting that most of the variance in passing a particular problem is captured by whether the sketch is correct rather than any other factor.
These two experiments suggest a natural method to improving LLM search for code generation: by searching for the correct idea to implement.
Guided by this principle of maximizing exploration of ideas, we propose . In contrast to many existing search methods that search over individual tokens <cit.>, lines of code <cit.>, or even entire programs <cit.>, searches over possible plans for solving the problem at hand, where a plan is defined as a collection of high level observations and sketches helpful to solve a particular problem (Figure <ref>). To generate novel plans, generates a number of observations about the problem, before combining these observations into a candidate plan for solving the problem. This is done for every possible subset of the generated observations to maximally encourage exploration in idea space, before the codes are eventually all translated into a final code solution (Section <ref>). We find that searching over plans outperforms both standard repeated sampling and directly searching over ideas (IdeaSearch, introduced in Section <ref>) in terms of effectively using compute at inference time.
Applying on top of Claude 3.5 Sonnet achieves a state-of-the-art pass@200 of 77.0% on LiveCodeBench, outperforming both the best score achieved without search (pass@1 = 41.4%) and the standard best-of-n sampling score (pass@200 = 60.6%). Furthermore, consistent with recent findings on the effectiveness of search on top of small models <cit.>, allowing based on a small model (GPT-4o-mini) outperforms larger models not augmented with search after merely 4 attempts. Evaluations of across two other coding benchmarks, HumanEval+ and MBPP+ <cit.>, suggest similar improvements.
Finally, we measure the diversity of output code over the idea space of all search methods via an LLM-as-a-judge procedure (Section <ref>) and show that the resulting diversity score is highly correlated with the performance gains generated by that search method. This provides further support for our hypothesis that the effective exploration of plans in idea space is key to LLM search for code generation (Figure <ref>).
§ RELATED WORK
We reiterate that search as defined in the context of our paper refers to any method which expends inference time compute to improve performance. We further specify planning as any form of high level observation or abstract thought that assists a model in generating a final solution.
Our work builds off a long history of work in scaling search and planning.
§.§ Search in Classical AI
Classical search algorithms like breadth-first search, depth-first search, and A* search have been widely used for pathfinding, planning, and optimization <cit.>. More advanced search techniques like Monte Carlo tree search (MCTS) have achieved remarkable success in domains like game playing, enabling superhuman performance in Go <cit.>, poker <cit.> and Diplomacy <cit.>.
More recently,
<cit.> find scaling laws for the performance of AI systems in board games, where ELO improves logarithmically with the amount of compute spent at inference.
§.§ Search with Language Models
Applying search on top of LLMs has been a topic of much interest, especially with an eye towards code generation <cit.>. Historically, methods such as beam search significantly improved performance for translation systems <cit.>. Closer to the present day, several recent works have explored repeated sampling <cit.> as a search method for improving performance. Repeated sampling is a method which directly generates candidate code solutions from the model many times at moderate to high temperatures in hopes that one of the resulting generations will be correct. However, although these works address the roughly linear increase in pass@k with respect to log k, they only focus on the most basic version of repeated sampling, without searching in idea space.
When combined with a verifier, reward model, or other filtering algorithm to select the best generation (in cases where pass@k is not a viable metric due to lack of test cases), it is also known under the name of best-of-n sampling <cit.>. Many works show somewhat good results under intelligent selection of such a filtering algorithm <cit.>.
Recently, several approaches have demonstrated the power of repeated sampling. For example, repeated sampling from a small model can sometimes outperform taking a single sample from a large model on an equalized compute bases <cit.>.
Unlike algorithms such as repeated sampling, which search over the output space, the key insight of PlanSearch is that it is far more effective to instead search plans over the latent idea space. By explicitly searching over different natural language plans before generating the code, we significantly increase the diversity of the final code outputs and thus, the resulting pass@k scores for sufficiently large k.
Regarding searching over plans in natural language, several approaches have also proposed generalizing chain-of-thought <cit.> reasoning into a search-like process, such as Tree of Thoughts <cit.> and Reasoning via Planning <cit.>.
However, prior methods have largely demonstrated effectiveness on somewhat contrived problems designed to highlight the power of search, such as the game of 24, or classic planning benchmarks such as Blocksworld <cit.>, where both benchmarks are easier to solve by explicitly considering many options, and where the `steps' over which to search over are fairly obvious.
By contrast, most real-world planning is used to assist in domains that are complex enough to benefit from, but not require, the additional exploration of plans. We demonstrate that , which plans in natural language, outperforms baseline search methods in one such domain: code generation.
Moreover, our analysis reveals the underlying reason that such search is effective: it increases the diversity of the generated ideas, allowing more efficient search relative to other methods which repeatedly submit highly similar, incorrect solutions.
diversity in LLM paper, cite all of natural llm history
bruh filtering section, code approaches section? maybe like reflexion, allphacode, codet, parsel
§ MOTIVATION
Coding is a powerful area in which search should excel. While search in other domains requires both generating many solutions and selecting the correct solution amongst all the resulting generations, coding often only requires the former, as any valid piece of code can be tested via code execution against given test cases. This allows code search algorithms to sidestep many of the issues that plague search algorithms for more open-ended domains (e.g. generating poetry) due to difficulty in selecting correct solutions out of all the generated solutions.
§.§ Defining the Search Space
Perhaps the most important question for eliciting strong search capacities is determining which space to search over, as finding the proper layer of abstraction is critical to progress in the field. Prior approaches have varied, with many people searching over individual tokens <cit.>, lines of code <cit.>, or even entire programs <cit.>.
We hypothesize that the key factor is obtaining the correct solution sketch, which we define as a description of the correct program in natural language space. Intuitively, conducting the reasoning process in natural language space allows us to effectively harness the training process of LLMs, which have observed many human reasoning traces in both pre- and post-training. Prior work <cit.> citehas observed strong positive effects from being allowed to conduct such reasoning in natural language, making it a natural place to search over. We describe two experiments providing evidence for this hypothesis by testing on the LiveCodeBench benchmark using GPT-4o-mini as our model.
§.§ Backtranslation
To investigate the hypothesis whether the idea space, instantiated as solution sketches, is the right area of exploration, a natural question is whether LLMs can correctly implement a correct code solution given a correct sketch. Inspired by approaches to backtranslation in machine learning <cit.>, we experiment with “backtranslating” passing code solutions back into idea space.
First, we generate code solutions using GPT-4o to generate 1000 attempts to solve the problem and filter out problems without any passing solutions.
As we also do not have a dataset of correct solution sketches associated with each solution, we generate a candidate correct idea via backtranslation.
We do this by feeding an LLM both the problem and code solution and asking the LLM to convert said solution into a natural language description of the solution. Additionally, we vary the detail of the backtranslated idea via instructions to the LLM in the prompt (e.g. `in w words'). A full description of the prompts can be found in Appendix <ref>, alongside several example backtranslated solutions of various lengths.
We observe that prompting a model with a backtranslated idea significantly improves accuracy, increasing with the length of the translated idea (Figure <ref>), which suggests that having a correct sketch is sufficient to produce the correct final solution with relatively high accuracy, even only after 10 tokens of backtranslated solution. This suggests that the correct direction of search is to explore through idea space to maximize the chance of arriving at a correct idea.
§.§ Conditioning on Idea Quality
In a follow-up experiment, we prompt an LLM to generate its own sketches to solve LiveCodeBench problems instead of providing it with golden ones via backtranslation. First, we generate 5 ideas per problem using IdeaSearch, defined in Section <ref>. For each idea, we then sample 25 candidate solutions and measure their pass rate.
For this experiment, we filter out any problem that GPT-4o-mini solves with either a 100% or a 0% solve rate, since such problems are either too easy or too hard for the model and would not be informative for this experiment. We end with 75 problems and 375 sketches.
To test our hypothesis that generating a correct sketch is a critical factor for solving problems, we compare the distribution of solve rates for generating correct code solutions conditioned on a given sketch to the distribution over solve rates given a sketch drawn at random, i.e., just the distribution over solve rates.
Formally, for any problem P_i, we sample some sketch I from some conditional distribution with probability mass P(I| P_i). The probability of solving P_i is then P(solve| P_i, I). We compare the solve-rate distribution, P(solve| P_i, I) over all problems and all sketches versus the solve-rate distribution of ∑_I P(solve| P_i, I) · P(I | P_i) = P(solve| P_i) over all problems.
While verifying whether a sketch is correct or incorrect is difficult without access to external labels, a key insight is that if generating the correct idea is a critical factor in solving the problem, then conditioning on a particular sketch should polarize the distribution of solve rates towards {0, 1}. If the model is given a correct sketch, it should consistently generate correct solutions, while if given a bad sketch, it should consistently generate incorrect solutions.
Our results confirm this to be the case. Figure <ref> shows the distribution of solve rates across problems, both unconditionally (in red) and conditioned on each sketch (in blue). We notice that when grouping by sketches, the solve rates indeed become polarized towards {0, 1}.
This result has important implications for improving code generation, suggesting that a large portion of variance in performance may come from whether the model is able to generate a correct idea or not. Therefore, a natural path for improvement is to focus on the sketch generation step and search for correct sketches and observations in idea space before generating solution code.
§ METHODS
We provide a description of the various methods of search we explore in our work. If additional background on competitive programming and related notation is desired, we provide more (optional) information in Appendix <ref>.
§.§ Baselines
§.§.§ Repeated Sampling
We consider the basic prompting approach as a baseline, in which we use few-shot prompting by providing the LLM with a number of problem-solution pairs before asking it to solve the desired question <cit.>. A full example of the prompt is given in Appendix <ref>. In code generation, the most common variant of search utilized is repeated sampling, where models are repeatedly sampled from until they generate an output that passes the test or the maximum number of samples is reached. Refer to the Related Work for more information (Section <ref>).
§.§.§ IdeaSearch
A natural extension of the Repeated Sampling approach discussed in Section <ref> is to avoid prompting the LLM for the solution code immediately.
This can be viewed as an application of the commonly used “chain-of-thought” prompting to programming problems <cit.>, although we find that IdeaSearch shows non-negligible performance boosts over standard “chain-of-thought” prompting (see Appendix <ref>).
In IdeaSearch, the LLM is given the problem P and is asked to output a natural language solution S of the problem. Then, a separate instance of the LLM is given P and S, and tasked to follow the proposed solution S to solve the problem P. The purpose of IdeaSearch is to isolate the effectiveness of having the correct “idea/sketch” for solving the problem. Empirically, we find that explicitly forcing the search algorithm to articulate an idea for solving the problem increases diversity.
See Appendix <ref> for detailed prompts.
§.§ PlanSearch
While both Repeated Sampling and IdeaSearch are successful and lead to improvement in the results on benchmark results, we observe that in many of the cases,
prompting multiple times (pass@k) (even at high temperatures) will only lead to small, narrow changes in the output code that change minor aspects but fail to improve upon pitfalls in idea. fix
§.§.§ Prompting for Observations
Starting from the problem statement P, we prompt an LLM for “observations”/hints to the problem.
We give examples of observations generated in Appendix <ref>.
We denote these observations as O^1_i, where, i ∈{1, …, n_1} due to the fact that they are first-order observations. Typically, n_1 is on the order of 3 to 6. The exact number depends on the LLM output.
To use these observations to inspire future idea generation, we create all subsets with size at most 2 of S^1={O^1_1, …, O^1_n_1}. Each of these subsets is a combination of observations, and for clarity we denote each subset as C^1_i, i ∈{1, …, l_1}, where l_1 = 1 + n_1 + n_12.
§.§.§ Deriving New Observations
The set of all observations can be thus defined as a directed tree with depth 1, where the root node is P, and an edge exists for each C^1_i pointing from P to C^1_i. We then repeat this procedure from Section <ref> on each leaf node C^1_i to generate a set of second order observations, S^2_i={O^2_i,1, …, O^2_i,n_i, 2}. To obtain second order observations, we prompt the model with both the original problem P and all observations contained in C^1_i, framed as primitive observations that are necessary in order to solve P. The LLM is then prompted to use/merge the observations found in C^1_i in order to derive new ones.
The same procedure as Section <ref> is used to create all subsets C^2_i, j, for all i ∈{1, …, l_1}. This process may be arbitrarily repeated, but we truncate the tree at depth 2 for computational constraints.
Note that there is no assumption any of the observations generated are correct. In fact, it is critical to note that many of them may be incorrect. , and we give examples in Appendix <ref>.
The observations merely serve to elicit the model to search over a more diverse set of ideas.
§.§.§ Observations to Code
After the observations have been made, they must be implemented as ideas before being translated into code. For each leaf node, we prompt the model with all observations, along with the original problem P, in order to generate a natural language solution to the problem P.
To add more diversity, for each generated idea, we generate an additional idea by supposing the idea is wrong, and asking an LLM to give criticisms/feedback, thus increasing our proposed ideas by a factor of 2.
These natural language solutions are then translated into pseudocode, which are subsequently translated into actual Python code. We take a more granular approach to reduce the translation error (which may cause the model to revert to its original mode, disregarding the reasoned-through observations).
We provide all prompts for all sections in Appendix <ref>.
§ EXPERIMENTAL RESULTS
§.§ Datasets
We evaluate our search methods on three benchmarks: MBPP+, HumanEval+ <cit.>, and LiveCodeBench <cit.>. MBPP <cit.> and HumanEval <cit.> are some of the most widely used code benchmarks in the field. However, since both benchmarks provide only a few test cases, <cit.> updates both benchmarks with additional test cases that increase the benchmarks' robustness to reward hacking. LiveCodeBench is a benchmark for coding that consists of competitive programming problems which typically require advanced reasoning capabilities. Given the reality that coding data is often highly upsampled during pre-training <cit.>, LiveCodeBench differentiates itself from other benchmarks by taking care to segregate problems by date to avoid data contamination concerns. For this paper, we use only the subset of problems between May 2024 and September 2024 to avoid possibilities of contamination.
We choose May 2024 as the cutoff date to ensure that our results with our best performing model (Claude 3.5 Sonnet) are not due to contamination, because Claude 3.5 Sonnet has a knowledge cutoff of April 2024.
To ensure fair comparison, we use the same cutoff for all models evaluated, even though the precise cutoff dates for other models may vary slightly from May 2024.
§.§ Experiment Details
For all search algorithms, we require that all output code be in the correct format specified, and we mark a solution as incorrect if it does not follow the intended formatting. The extracted code is then run through all tests of the program and marked as correct if and only if it passes all tests.
All models are run with temperature 0.9 and top-p of 0.95. Temperature was determined through a coarse hyperparameter sweep on Repeated Sampling and IdeaSearch from T∈{0.0, 0.1, 0.2, …, 1.2}, which we describe in Appendix <ref>.
Both Repeated Sampling and IdeaSearch generate exactly n codes, whereas PlanSearch generates a variable number of codes, usually ranging on the order of 300 to 400. To compute pass@k, we use the unbiased estimator in Equation <ref> <cit.>[Note that the estimator in Equation <ref> theoretically requires that the number of successes follows a binomial distribution. Repeated Sampling and IdeaSearch obey this, but PlanSearch generations may not be independent. See Appendix <ref> for more discussion.].
If k > n, we assume the remaining generations did not pass.
To compute pass@k for filtering, we limit the pool of codes to those that are filtered, meaning that both n and c may shrink in size. This can be thought of as a conditional probability, where the condition is that the code passes public tests.
§.§ Results
Our summarized results for Repeated Sampling, IdeaSearch, and PlanSearch can be found in Table <ref>, Figure <ref>, and Figure <ref>.
Additionally, we plot our full pass@k curves for all methods, models, and datasets in Appendix <ref>. For sake of easy comparison, we also plot all relative gains compared to Repeated Sampling@1 averaged over all models in Appendix <ref>. For a compute-normalized comparison between Repeated Sampling and PlanSearch, see Figure <ref>.
§.§ Public Test Filtering
Public test filtering is a method which only chooses samples out of the original pool n which pass the public tests.
This is particularly useful in settings such as code deployment where executing the full suite of tests may be computationally costly or otherwise undesirable (e.g. in a coding contest where every incorrect submission is penalized).
Thus, instead of submitting all n codes, after public test filtering, only codes c_i would be submitted such that c_i(x_j) = y_j for all j ∈{1, …, u}, where c_i(x) refers to the output from running the code on some input x. The primary effect of public test filtering is to shift the pass@k curve leftward, since public test filtering will discard low quality candidate solutions that either fail to compile or fail elementary test cases for the problem.
All problems in MBPP+, HumanEval+, and LiveCodeBench come with a few public tests which are usually used to sanity check any submissions. We can further improve performance by filtering on these public tests before a final submission, as described.
Applying public test filtering reduces the number of samples to achieve the same accuracy by tenfold: to achieve a 77.1% accuracy on LiveCodeBench after just 20 submissions (pass@20) compared to a pass@200 of 77.0% without using public filtering (see Figure <ref>). We provide full results for the other datasets in Appendix <ref>.
§ ANALYSIS
Our results suggest that both and IdeaSearch outperform basic sampling by a wide margin (Figures <ref>, <ref>, <ref>), with achieving the best score across all methods and models considered.
We show the detailed pass@k results for each dataset in Figures <ref>, <ref> and <ref>.
We also compare with Chain-of-Thought <cit.> in Appendix <ref>. Interestingly, we find that IdeaSearch performs somewhat better, which we speculate comes from differences in splitting solution sketch into two model responses, instead of doing both chain-of-thought and code solution in one model response.
Investigating the differences in specific models, we notice that trends exhibited by the pass@k curves are not uniform across all models; in fact, each curve seems unique.
We hypothesize that these differences are in part due to changes in idea diversity, as investigated in Figures <ref>, <ref>, <ref>. From the figures, we can see that our approximate diversity score accounts for much of the variance we see in the relative improvement that arrives from scaling-up inference-time compute. This correlation holds across all methods and models on the same dataset, thus suggesting that diversity score can be used as a proxy to predict for relative pass@k improvement. For further discussion on the specifics of the diversity score, see Section <ref>.
One interesting point of observation is that often hurts pass@1 for several models, including most notably Sonnet 3.5 on LiveCodeBench, our best performing combination.
Intuitively, this is because increasing the diversity across ideas likely dilutes the probability that any particular idea is generated, while simultaneously increasing the chance of having at least one correct idea within said pool. Therefore, pass@1 may be slightly lower than usual, yet pass@k will likely surpass “pools” of ideas lacking diversity for this reason. See Figure <ref> for a graphical intuition.
Finally, in Table <ref> and Figure <ref>, we present our main results normalized across attempts/completion, where each search method is allowed k attempts to solve each problem. An alternative method of normalizing across methods is to equalize the amount of compute spent on each method. Since and IdeaSearch first plan out an idea before implementing the final solution, they both spend more compute at inference time per solution generated. In Appendix <ref>, we report the equivalent plots normalized across compute. Our findings are highly similar and suggest that outperforms all other methods if sufficient compute is expended at inference time.
§.§ Measuring Diversity
We find that diversity as measured in idea space is highly predictive of search performance, as measured by the relative improvement between a model/method's pass@1 and its pass@200 (Figure <ref>).
While the most common measure of diversity is entropy <cit.>, entropy is insufficient for a number of reasons for the precise setting of LLMs <cit.>. As a simple example, consider two different language models, one of which generates minor variations of the same program while another generates a variety of programs with different underlying ideas. Even if both models have the same entropy, the latter model will be significantly better when augmented with search capabilities.
In our setting, we measure diversity by grounding it in idea space using a simple pair-matching strategy across all generated programs. Formally, suppose we have a pool of n code generations, {c_1, …, c_n}.
We assume that each piece of code implements some sketch, which can be thought to exist in some latent `idea' space. We consider two sketches similar if they are within ϵ of each other in this latent space, for some choice of ϵ. As such, in this space, c_i having a similar idea to c_j and similarly for c_j and c_k does not imply c_i and c_k share a similar idea.
To compute the diversity of such a given generation pool, we ask an LLM to judge the similarity of two ideas in the following manner.
First, we
construct each of the n 2 pairs. For each pair (c_i, c_j), we judge (using an LLM) whether both c_i and c_j implement the same idea. We define this as the function S(c_i, c_j) ∈{0, 1}, which evaluates to 1 if c_i and c_j implement the same idea and 0 otherwise.
Our overall diversity score for a particular problem is then defined as:
D = 1 - ∑_i < jS(c_i, c_j)/ n 2
Models that output programs that all implement the same idea will have a score of D=0, while models that output completely unique programs will have a score of D=1. Overall, a score of D implies that if two codes are chosen at random, the probability that they are the same idea (as measured by the LLM) is D. In Appendix <ref>, we describe this measure in additional mathematical depth.
For a particular method, our reported diversity score is simply the diversity score over all problems in the considered dataset.
For computational feasibility, for large n, we instead sample a subset of 40 codes and test all pairs from that subset instead. In order to test code samples, we first backtranslate using an LLM to express the code in natural language before comparing each pair using both the code and the backtranslated idea.
We detail the prompts used in Appendix <ref> and use OpenAI's GPT-4o-mini as the supporting LLM.
§ LIMITATIONS AND FUTURE WORK
While PlanSearch substantially improves diversity over idea space at inference-time, fundamentally, improvements in diversity should come at the post-training stage. This likely requires re-imagining the post-training pipeline for LLMs around search, instead of the current paradigm optimized for a single correct response. This may require both collecting high quality post-training data that is also sufficiently diverse, and new learning objectives that do not aim solely to maximize the expected reward of a given response.
We are optimistic around future work to design significantly improved post-training objectives that maximize both quality and diversity and which specifically optimized to use inference-time compute to maximum effectiveness.
In terms of methodological improvements to , currently searches all leaf nodes in the search tree uniformly. Because of this, it becomes quickly intractable to go further than a couple levels deep, and in our experiments, we are only able to go two levels down the tree. Several approaches based on Monte-Carlo Tree Search (MCTS), such as Tree of Thought <cit.> or Reasoning as Planning <cit.>, have suggested that some form of dynamic pruning and expansion of nodes can be very helpful. We are optimistic that can be further improved by such methods.
Furthermore, is a fairly elementary method taking advantage of the paradigm that searching over a conceptual or idea space is an effective method to improve diversity, and thus, downstream task performance. It is completely feasible to search at an even higher level of abstraction than observations, which may be used to inject even more diversity into the final generated outputs.
and IdeaSearch tradeoff a slight deterioration of pass@1 performance for a large improvement in pass@k performance. However, in many such cases outside of code generation, it is infeasible to run an LLM-based model for more than a few attempts at most. For example, in Figure <ref>, PlanSearch does not significantly outperform Repeated Sampling until k ≥ 4.
Fortunately, many filtering algorithms exist, which implicitly bring pass@k (for high k) to pass@1 (or lower k), i.e. shifting the original pass@k curve leftward. A simple example of this is public test filtering. As seen in Figure <ref>, pass@1 of filtered PlanSearch significantly improves upon pass@1 of base Repeated Sampling, which gets even better as k increases. Moreover, most to almost all base models with public test filtering outperform their instruct model variants at pass@1, no matter the dataset (see Appendix <ref>), where clearly base models are known to be worse, yet trading off for somewhat higher diversity. Thus, we argue that there exists a potential for a new paradigm—developing search algorithms which tradeoff pass@1 performance for much stronger pass@k performance, then filtering the promising generated solutions to extract the pass@k back into the pass@1.
Additionally, we focus on code generation in this paper and do not consider the applicability of to a broader set of domains. One point of importance is that the pass@k metric heavily used throughout code generation may not be as applicable to other domains and a larger focus on selecting the correct solution out of all possible generated candidates may be required, instead of merely generating the correct solution. However, with good filtering methods, which we demonstrate can be simple in nature, pass@k, for medium k, can be effectively brought down to pass@1, emphasizing a similar paradigm of increasing diversity, then strengthening existing filtering methods.
Finally, a natural extension of this work is training the underlying model itself on successful plans and code solutions obtained from PlanSearch. This has the potential to distill the pass@k into the pass@1—without inference-time methods like filtering—by reducing the likelihood of the model going down branches of the search tree which do not lead to correct solutions.
We believe that such training is likely to significantly improve the model and look forward to future work in this direction.
§ ACKNOWLEDGEMENTS
We would like to thank Jason Wei, Miles Turpin, Sail Wang, Horace He, Kenneth Li, Celia Chen, Rahul Chalamala, Alan Wu, and Kevin Chang for their helpful comments, suggestions and discussion over the course of this project.
iclr2024_conference
Appendix
§ FULL PASS@K CURVES FOR ALL MODELS AND ALL BENCHMARKS
See Figures <ref>, <ref>, <ref>. We plot all models and methods on HumanEval+, MBPP+ <cit.>, and LiveCodeBench <cit.>, respectively.
§ FULL PASS@K CURVES WITH PUBLIC FILTERING
See Figures <ref>, <ref>, <ref>. We plot all models and methods with public test filtering on HumanEval+, MBPP+ <cit.>, and LiveCodeBench <cit.>, respectively.
§ AVERAGE RELATIVE IMPROVEMENTS
See Figures <ref>, <ref>, <ref>. To create these graphs, the relative improvements of each point on all pass@k curves are computed and compared to the respective pass@1 of Repeated Sampling. Then these values are averaged over all models, so that there is one curve per method per dataset. The datasets are HumanEval+, MBPP+ <cit.>, and LiveCodeBench <cit.>, respectively. For the public test filtered versions, see Figures <ref>, <ref>, <ref>.
§ COMPUTE NORMALIZED PASS@K GRAPHS
See Figure <ref>. For each run of a method in Appendix <ref>, we compute the number of generated tokens needed per completion, per problem, independently on each dataset. Then, we average across all datasets to obtain 244 generated tokens per completion per problem for Repeated Sampling, and 1,428 generated tokens per completion per problem for PlanSearch.
§ COMPARISON WITH CHAIN-OF-THOUGHT
See Figures <ref>, <ref>, <ref>, which are run on LiveCodeBench <cit.>, MBPP+, and HumanEval+ <cit.>, respectively. These are the same plots as Appendix <ref>, with CoT <cit.>. See Figures <ref>, <ref>, <ref> for the public test filtered versions.
§ ABLATION ON TEMPERATURE FOR REPEATED SAMPLING AND IDEASEARCH
See Figure <ref>. We sweep over temperature increments of 0.1 from 0.0 to 1.2, inclusive, with top-p of 0.95, on Repeated Sampling and IdeaSearch.
§ DIVERSITY SCORE VS SEARCH IMPROVEMENT PLOTS FOR MBPP+ AND HUMANEVAL+
See Figures <ref>, <ref>, <ref>. Each figure is made through running the diversity measure as described in Section <ref> on the generated codes of each run, then compared with the relative gain from pass@k compared to pass@1.
§ BASE MODELS VS. INSTRUCT MODELS FOR LARGE SAMPLES
We find that base models, despite performing poorly relative to their instruct counterparts for evaluated with pass@1, will frequently match or even exceed performance on pass@k for sufficiently high k. This is likely due to higher amounts of diversity in base models, which have not undergone post-training designed to elicit a single strong response from the model.
We see this effect across all models for HumanEval+ and MBPP+, but only the DeepSeek-Coder-V2 family for LiveCodeBench.
See Figures <ref>, <ref>, <ref> for Llama-3.1-8b pass@k comparisons.
See Figures <ref>, <ref>, <ref> for Llama-3.1-70b pass@k comparisons.
See Figures <ref>, <ref>, <ref> for DeepSeek-Coder-V2-Lite pass@k comparisons.
We also ran Llama-3.1-8b and DeepSeek-Coder-V2-Lite pass@k comparisons for k up to 10,000; see Figures <ref>, <ref>.
§ BASE MODELS VS. INSTRUCT MODELS WITH PUBLIC TEST FILTERING
We repeat the graphs from Appendix <ref>, but with public test filtering. We find that base models with public test filtering almost always exceed the pass@1 of their instruct model variants.
See Figures <ref>, <ref>, <ref> for Llama-3.1-8b pass@k comparisons with public test filtering.
See Figures <ref>, <ref>, <ref> for Llama-3.1-70b pass@k comparisons with public test filtering.
See Figures <ref>, <ref>, <ref> for DeepSeek-Coder-V2-Lite pass@k comparisons with public test filtering.
§ PROMPTS
§.§ Backtranslation
§.§.§ Backtranslate System Prompt
§.§.§ Implement Backtranslation Idea
§.§ Repeated Sampling
§.§ Simple Idea
§.§ PlanSearch
§.§.§ Prompt for Observation Part 1
§.§.§ Prompt for Observation Part 2
§.§.§ Combining Observations
§.§ Measuring Diversity
§ COMPETITIVE PROGRAMMING
Competitive programming is a popular subset of programming tasks that involve solving complex algorithmic reasoning. Typically, problems consist of a problem statement (written in natural language) P, with associated tests: (x_i, y_i), i ∈{1, …, m}, for which any solution must pass all of them.
The number of tests m depends on the problem, but typically ranges on the order of 25 to 100. A small subset of the tests are typically given to the solver (we call these public tests) to use as validation that their program passes simple cases. The rest of the tests are hidden. Solutions to the problems must generally pass all the tests to be considered correct. Formally, we let f(x) denote the output of said code ran on input x. The solution code is considered correct (passing) if and only if f(x_i) = y_i for all i ∈{1, …, m}.
Each dataset consists of many (on the order of low-hundreds) independent problems, and models are evaluated on each of these problems independently.
§ A MODEL OF REPEATED SAMPLING: PASS@K
Consider a simplified model of repeated sampling for code generation.
Suppose we have a dataset D = {P_1, …, P_l} with l problems.
For some problem P_i, define the probability p_i as the probability that our code generation model solves the problem P_i in one submission.
The pass@k <cit.> metric (for problem P_i) is defined as the probability that our code generation model solves the problem P_i at least once out of k submissions.
Thus, if we know the true p_i of our model, we may compute our pass@k simply:
pass@k_i = 1 - (1 - p_i)^k
pass@k = ∑_i pass@k_i / l
However, it turns out that for k>1, the naïve estimator as seen in Equation <ref> is biased, if we sample n_i ≥ k from our code model to solve P_i, c_i ≤ n_i are correct, and compute p_i = c_i / n_i <cit.>.
Instead, pass@k_i is typically computed using the unbiased estimator:
pass@k_i = 1 - n-c k/ n k
Note that reporting pass@k on a dataset where l=1 is rather pointless, since pass@k can be derived using only pass@1_1 and n_1. Every curve, over a suitable range of k values, will look like the S-curve seen in Figure <ref> (as k is plotted on a log scale).
However, with datasets where l > 1, models are able to differentiate themselves through larger k, since the overall pass@k is an average of these l curves. For example, for l=3, it is less optimal to have solved probabilities of Set 1 = {0.001, 0.7, 0.9} versus Set 2 = {0.05, 0.1, 0.25}, in the regime of roughly k=20 to k=2,000 (in which both converge to 1), even though Set 1 has a pass@1 of 53% and Set 2 has a pass@1 of 13%. See Figure <ref>.
Although not shown in the graph, Set 2 converges close to 1 at roughly k=400, several orders of magnitude below Set 1. In addition, note that the slight notch seen in Set 1's curve at large k is due to the presence of low, but non-zero solve-rates, which can be seen in empirical pass@k curves later on. (These can be thought as the beginning of the `ramping-up' regime of the typical S-curves in Figure <ref>.)
§ MATHEMATICS OF THE DIVERSITY MEASURE
While our choice of a diversity metric is intuitive, one should note that there are a number of intriguing details that result from our definition. In particular, it is not necessarily the case that a model that outputs k unique ideas out of n samples to achieve a diversity score of k/n.
Consider an example of n=9 codes, separated into 3 cliques of 3, where each clique implements the same idea (and separate cliques implement separate ideas). In this setup, 1/3 of ideas are unique, but in our metric, there are 3 matching idea pairs (and 9 total matching idea pairs) out of 92 = 36, for a diversity score of 1 - 9/36 = 3/4.
§ BIASED ESTIMATOR FOR PASS@K DUE TO NON-INDEPENDENCE OF
From a pure theoretical standpoint, the expression is biased (if using the same interpretation), but it still leads to a similar interpretation—computing the probability that a subset of size k drawn from the set of samples we already generated contains at least one success. (These given samples were generated by one run of PlanSearch.) As such, in theory, the estimator may be slightly biased in the PlanSearch case when computing its true pass@k. In practice, we do not believe this to be a large concern, especially as our primary results feature a relatively large k=200.
do all four below later
|
http://arxiv.org/abs/2409.03586v1 | 20240905144223 | Optimal position-building strategies in Competition | [
"Neil A. Chriss"
] | q-fin.TR | [
"q-fin.TR"
] |
Simplified EPFL GaN HEMT Model
This project is funded by the Swiss National Science Foundation - project 200021 213116.
Farzan Jazaeri, Majid Shalchian, Ashkhen Yesayan, Amin Rassekh, Anurag Mangla,
Bertrand Parvais, and Jean-Michel Sallese
Farzan Jazaeri, Ashkhen Yesayan, and Jean-Michel Sallese are with the Electron Device Modeling and Technology Laboratory (EDLAB) of the École Polytechnique Fédérale de Lausanne (EPFL), Switzerland (e-mail:[email protected]).Majid Shalchian is with the department of Electrical Engineering, Amirkabir University of Technology. Amin Rassekh is with InCize, Louvain-la-Neuve, Belgium. Anurag Mangla is an alumnus of EPFL.
Bertrand Parvais is affiliated with IMEC in Leuven and holds a position as a Guest Professor at Vrije Universiteit Brussels, Belgium.
September 9, 2024
==============================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
§ ABSTRACT
This paper develops a mathematical framework for building a position in a stock over a fixed period of time while in competition with one or more other traders doing the same thing. We develop a game-theoretic framework that takes place in the space of trading strategies where action sets are trading strategies and traders try to devise best-response strategies to their adversaries. In this setup trading is guided by a desire to minimize the total cost of trading arising from a mixture of temporary and permanent market impact caused by the aggregate level of trading including the trader and the competition. We describe a notion of equilibrium strategies, show that they exist and provide closed-form solutions.
§ INTRODUCTION
Participants in financial markets, in particular stock traders, often find themselves wanting to own a target quantity of stock on or before a future date. A typical scenario is when there is a catalyst that is expected to cause a change in the price and a trader who would like to own a target amount of the stock prior to that date. In such a case the trader is often aware that there are other traders looking to purchase the stock over the same period of time for the same reason. The problem for each trader is that the collective pressure all traders put on the price of the stock and cost of trading can have a dramatic impact on the success of the strategy. If a given trader delays trading then other traders may "drive up" the price of the stock increasing the cost of acquisition.
In simple terms, this paper develops a framework for reasoning about this situation through an analysis of the best-response a trader may take against the one or more other players' trading strategies; this best-response is arrived at through the analysis of the market impact all trading activity has on the stock. To address this challenge we develop a version of game theory where each trader's available set of actions are trading strategies themselves.
This problem shares some features in common with optimal liquidation strategies, as in the Almgren-Chriss model <cit.> but has significant difference because total trading costs are driven by aggregate trading.
§.§ Computational Methods and Plot Generation
The differential equations presented in this paper were solved using Wolfram Mathematica <cit.>, while the plots were generated using a combination of Wolfram Mathematica and Python. These tools provided the necessary computational accuracy and flexibility to visualize the results effectively.
§ OVERVIEW
This paper is organized into several sections, each addressing key aspects of optimizing position-building strategies in competitive trading environments.
Section <ref> defines the key concepts used throughout the paper, including trading strategies, trajectories, trajectory morphology (the shape of trajectories) including a breakdown of the most common shapes.
Section <ref> introduces the idea of position-building in competition using the concepts from the prior section. It provides details concerning the market impact models used in the rest of the paper, including both permanent and temporary and gives a conceptual overview of how market impact influences position-building in competition. Finally, this section discusses the total cost of trading in competition and the importance of the balance between temporary and permanent impact. This leads to the important notion of κ-regimes.
In Section <ref> we provide several motivating examples and calculations to provide a sense for the issues involve with position-building in competition. While this section is not strictly necessary in the sequel, the author believes it provides some insight into the motivations behind this paper.
Section <ref> examines passive position building strategies, those that trade as if there is no competition. We use these to introduce our game-theoretic approach to understanding how traders should optimally select their strategies in response to competitors. This section gives an overview of the Almgren-Chriss algorithm, <cit.>, and uses it to motivate the types of passive strategies one might run across. The section reviews the Euler-Lagrange equation which is used extensively throughout.
Section <ref> introduces non-equilibrium best-response strategies, the first use of game-theoretic notions in this paper. These are strategies that optimally minimize the total cost of trading when building a position in competition with an adversary trading a known strategy. This section derives the non-equilibrium best-response strategies to the three most important types of passive strategies, risk-averse, risk-neutral and eager.
Section <ref> defines equilibrium strategies when two traders are trading in competition and derives a simple system of differential equations that determines equilibrium strategies. In simple terms, equilibrium strategies are those strategies that are the best-response strategies to one another, meaning that is a and b are in equilibrium then a is
the best response to b and b is the best response to a. This section derives closed-form solutions for the two-trader equilibrium and explores the results with several examples and also explores how κ, the balance between permanent and temporary market impact, affects the shape of equilibrium strategies.
Section <ref> derives another equilibrium, one in which rather than two adversaries of potentially different target trading sizes, there are many traders, each trading the same target size, all in competition with one another. We derive a symmetric equilibrium, one in which all traders trade the same strategy and are in pairwise-equilibrium, for this case and give closed-form solutions. Once again we explore how κ impacts the shape of the resultant strategies.
In Section <ref> we describe the inverse problem which answers the question if I trade a certain strategy, what is this strategy the best response to? We give several examples and explore how this can be a useful diagnostic tool.
In Section <ref> we explore the role of uncertainty in the selection of optimal position-building strategies. We discuss how to select a position-building strategy when it is not known what your adversary's understanding of the market is. This section does not provide a complete theory, but does give an important motivating example through the complete analysis of an archetypal situation. The upshot is that strategy selection in these circumstances must be broadened from simply computing an equilibrium strategy, to computing a collection of equilibrium strategies, together representing the space of possible strategies that may arise when seen from your adversary's point of view. In this section we compute the example where there are two traders, one, called A, who is trading a single unit of stock and the other, called B, who is trading many units. The uncertainty arises because B does not know whether A believes there is one or many adversaries and A will choose its strategy according to this belief. We show in this example that this uncertainty places B's decision into a probabilistic framework and then the best-response strategy may be selected, for example, using mean-variance analysis.
In Section <ref> suggests an approach to expand the work of Section <ref> and place it in a rigorous probabilistic framework by introducing the idea of probabilistic strategy selection. We provide a partially-worked example.
In Section <ref>, we explore the impact of parameter mis-estimation, particularly focusing on the market impact parameter κ in two-trader equilibrium strategies. Mis-estimating κ can significantly affect the total cost of trading, as illustrated in Figure <ref>. We provide a numerical exposition of how variations in κ influence costs across a range of values. Additionally, we present sensitivity analysis of the total cost of trading with respect to both κ and λ, highlighting how errors in these parameters can lead to suboptimal strategies. The detailed results are captured in Tables <ref> and <ref>. Finally, we briefly discuss possible extensions involving holding risk, leaving more detailed analysis for future work.
Finally, in Section <ref> we introduce risk-aversion into the study of position-building with competition. Specifically we propose equilibrium equations analogous to those in Section <ref> but add a risk-aversion term to the loss functions that penalizes the total volatility a trader's position holds during the course of building the position. We proceed to study various numerical examples to gain intuition regarding the relative important of the parameters involved.
§ PRELIMINARIES
This section sets forth the precise definitions and terminology used throughout this paper.
§.§ Trading strategies
A trading strategy is a mathematical description of the path that trading takes over a fixed interval of time, expressed in terms of the quantity of stock owned at each point in time. Put another way, it is the graph of a function showing the relationship between quantity in units of currency and time. In this paper, time is represented by t and trading always starts at time t=0 and ends at time t=1. See Figure <ref>.
A trading strategy (or, simply, strategy) is twice-differentiable function of time, say x(t), that describes the units of stock held by a trader at each time t between 0 and 1, inclusive.
Trading strategies have various features that we will refer to throughout this paper and we enumerate them here:
* Type: the type of the trading strategy, references the purpose of the strategy and is defined formally in Definition <ref> below;
* Start time and end time: in the context of this paper every trading strategy has a definite start and end time. When not stated otherwise the start and end times will be t=0 and t=1, respectively;
* Trajectory: the trajectory of a trading strategy is the path it takes from start time to end time, often thought of as the graph of the strategy function; and
* Shape: the geometry of the trading trajectory, see Section <ref> for more details.
We now formally define strategy type.
There are several important types of trading strategies and here is non-exhaustive list for a strategy x(t):
* Liquidation: Strategies for which x(0)>0 and x(1)=0, in other words a strategy that starts with a positive quantity of stock and ends with none;
* Position-building: Strategies for which x(0)=0 and x(1)>0;
* Unit: This is a technical for this paper in which a trader seeks to acquire a single unit of stock, usually in reference to an adversary who wishes to acquire a larger quantity; and
* λ-Scaled: Position-building strategies whose rate of trading is scaled by a constant λ≥ 1. By convention these are unit strategies that are scaled by a constantly factor λ>0.
Note that scaled trading strategies represent strategies that have the "shape" of unit position strategies but which are scaled at each time t by a fixed constant λ>1. This means that for a λ-scaled strategy b(t), its trajectory is λ· b(t), while its shape is given by b(t). We discuss strategy shape in the next section.
§.§ Trajectory morphology
It is sometimes useful to categorize trading trajectories morphologicallly, that is, according to their shapes.
There are three basic shapes all of which have constant sign of second derivative:
* Risk-neutral: Strategies for which ẍ(t)= 0 for all t. Risk-neutral position-building strategies are always of the form λ· t for some λ>0;
* Risk-averse: Strategies for which ẍ(t) > 0 for all t; and
* Eager: Strategies for which ẍ(t) < 0 for all t.
Figure <ref> shows a visual description of the three types of position building strategy shapes. In subsequent sections we will discuss how these shapes relate to competitive position building.
In addition to the above depicted shapes there are two other shape types that arise in practice.
* Bucket: bucket strategies acquire more than their target quantity immediately after the start time and then sell down to their target quantity as close to the completion time as possible; and
* Barbell: a barbell strategy buys a portion of its target quantity at the very start of trading and the remaining amount very close to the end of trading.
See Figure <ref> for a visualization of these types.
§.§ Conventions and notation
For the remainder of the paper we will adhere to the following conventions and notation.
* Traders: traders will be denoted by capital letters, e.g., A and B. When not otherwise stated A will denote a unit trader and B a λ-scaled;
* Strategies: given a trader A, its associated strategy will refer to a function of time representing the strategy's trajectory. The associated strategies to a trader will be identified by the associated lower-case letter. For example the strategy trader by trader A will be a(t);
* Twice-differentiable: all strategies, say a(t), will be twice-differentiable;
* Unit position-building: unless otherwise specifically stated, all strategy functions, say a(t), will be unit position building strategies, that is, a(0)=0, a(1)=1; and
* λ-scaling: when a trader, say B, is trading a strategy that intends to acquire an arbitrary quantity of stock, λ, then we scale the strategy by λ, and thus the trajectory of B's λ-scaled strategy is λ b(t).
§ TRADING IN COMPETITION
In this section we lay the groundwork for studying trading in competition, the situation in which two or more traders are trading in the same stock over the same stretch of time. When this occurs each trader must contend with the impact of the others' trading. We call this situation trading in competition. If they are both building a position in the stock than the situation is generally referred to as competitive position building.
In general this paper considers how to optimally adapt to the situation where two or more traders are trading in competition. In order to study this we need a firm understanding of what makes the situation more complex than trading solo. The starting point is market impact.
§.§ Market impact
Market impact refers to the impact on the price of a stock that is a direct consequence of trading activity. In the context of execution of trading strategies one can think of there being a vast pool of traders whose net impact of the price of a stock is essentially zero and a small pool, one or more, of traders directly trading so that on net the price of the stock moves in the direction of trading by this small pool. If the traders are buying, the price moves up, if they are selling, it moves down. We collect these ideas into a single definition here.
At any given moment we divide the trading activity in a stock into two types:
* Noise trading: trading that is happening persistently as background activity by many traders and which is expected to have no net impact on price; and
* Concerted trading: trading that is being conducted by a few traders in the same direction (i.e., buying or selling), usually over a relatively brief period of time and whose net impact is expected to move the price in the direction of trading (e.g., up if the direction of trading is buying).
With this out of the way, we move on to discuss market impact. As market impact has an out-sized effect on both market function and investment performance, it is a subject of enormous practical and academic interest. In the context of this paper, we are concerned with both how immediate demand for a stock changes the price at which the stock may be purchased, and also how persistent demand for the stock over time causes an on-going change in the price. These forms of price change are respectively referred to as temporary (also referred to as transient) and permanent market impact.
The study of market impact in the academic literature is vast and heterogeneous. There are strictly formal models, both linear and non-linear, models of limit and market order dynamics and models concerning market impact estimation. See <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.> and <cit.>[There are many models of market impact and methods for estimation of it, and the list provided is surely a small subset. See for for example the excellent overviews in <cit.>, <cit.> and <cit.>.]. In this paper we will use linear models for both temporary and permanent market impact following <cit.> and define them next.
The cost of trading relative to the prevailing market price of a stock as trading commences. Temporary impact affects the price of a stock only at the moment of trading and has not bearing on the future price. It is measured as the difference between the prevailing price at the time of an order and the average execution price of the order. The fact that there is temporary impact at all is presumed to arise from a premium being charged for immediacy of order execution.
In some sense the polar opposite of temporary market impact is permanent market impact, defined as follows:
Is the change in the price of a stock due to persistent trading over time. The impact in this case is measured relative to the price of the stock when trading commences, and persists so long as concerted trading is taking place.
§.§ Market impact while trading in competition
While there has been a great deal of academic work concerning the optimal execution of trading strategies in the presence of market impact, there has been scant discussion of trading in competition. As such we take a moment here to discuss the specific mathematical formulation of temporary and permanent market impact with and without competition that will be used in the remainder of this paper.
To begin, we note that in the context of optimal trading strategies, whether with or without competition, the objective relates to minimizing the cost of trading relative to the price that would have been in the absence of trading. In this context one has to imagine a counterfactual price path unperturbed by concerted trading (see Definition <ref>). Then one has to view the implementation cost of the strategy as the difference between trading at actual market prices and trading at the unperturbed prices.
With this said we set forth the linear models for temporary and permanent market impact with and without competition. Let A and B be traders, trading a(t) and b(t) in competition. Then we have the following mathematical formulations of temporary and permanent market impact defined conceptually in Definitions <ref> and <ref> respectively.
8pt
Temporary impact for A without competition: the impact on the price paid when A is trading without competition is proportional to ȧ(t) at time t and the total cost is ȧ^2(t);
Permanent impact for A without competition: the impact on the price paid at at time t when A traded without competition starting at time t=0 is proportional to a(t) and the total price paid is a(t)·ȧ(t);
Temporary impact for A competing with B: the impact on the price paid when A is trading in competition with B is proportional to ȧ(t) + ḃ(t) at time t and the total cost is (ȧ(t) + ḃ(t)) ·ȧ(t);
Permanent impact for A competing with B: the impact on the price paid at at time t when A is trading in competition with B is proportional to a(t) + b(t) at time t and the total cost is (a(t) + b(t)) ·ȧ(t);
8pt
The costs enumerated above are all stated as only up to a proportionality constant for two reasons. First, when searching for optimal trading strategies, knowing a quantities up to a constant multiple is all that is necessary, as with any optimization problem. Second, perhaps more importantly, what matters is how big permanent impact is relative to temporary impact.
§.§ Total Cost of trading
We now set forth equations that determine the total cost of trading when two traders, A and B, are trading in competition. These values will be computed only up to a constant. To do this we set the temporary impact constant of proportionality above to one and the permanent impact constant of proportionality to a value κ>0. We call κ the market impact coefficient and we will use it throughout this paper. We now write:
Instantaneous total cost of trading for A ∝ (ȧ + ḃ) ȧ + κ (a + b) ȧ
Instantaneous total cost of trading for B ∝ (ȧ + ḃ) ḃ + κ (a + b) ḃ
Using <ref> it is easy to see that the average cost of trading from time t=0 to t=1 must be given by the integral of those expressions and that the aim of a trader acquiring a position is to minimize the total cost of trading. In particular the job of trader A wishes to minimize the total cost of trading in competition given as follows. Let 𝒮 be the set of position-building strategies that are twice-differentiable:
𝒮 = { a:[0, 1]→𝐑 | a(0) = 0, a(1)=1, ä exists}
and let b(t) be the strategy trading competitively with A. We wish to place into a mathematical A's goal in trading which is to minimize the cumulative total cost of trading between the start and end of building the position.
a ∈𝒮minimize ∫_0^1 (ȧ(t) + ḃ(t) ) ȧ(t) + κ(a(t) + b(t)) ȧ(t) dt
namely this trader would like to minimize the total cost of acquiring the position. Analogously, trader B solves the problem:
b ∈𝒮minimize ∫_0^1 (ȧ(t) + ḃ(t) ) ḃ(t) + κ(a(t) + b(t)) ḃ(t) dt
In addition to solving eq:trading-program-aeq:trading-program-b individually, we will also be interested in solving them jointly when the quantities they trade are different from one another.
§.§ The balance of temporary and permanent impact and κ-regimes
Much of the work in this paper is devoted to analyzing the properties of the cost functions described in Section <ref> and its influence on the shape of trading strategies in competition. To fully analyze this we start by refining our understanding of κ in the total cost of trading formulas <ref>.
The constant κ in <ref> defines the relative quantity of transaction cost that arises from permanent versus temporary market impact. This quantity turns out to be the single most important quantity in analyzing trading in competition. We note this here because in optimal execution problems, see e.g. Section <ref> and in particular <ref>, the presence of permanent impact has no effect on optimal execution strategies and is therefore ignored.
By contrast, the presence of permanent market impact in competitive position-building problems is crucially important and the best way to view it is by means of κ regimes which we explain now.
Kappa regimes refers to how the value of κ, the determination of the relative contribution of temporary and permanent market impact, influences the cumulative cost functions defined in eq:trading-program-aeq:trading-program-b. We identify two broadly different κ-regimes according to whether total loss is dominated by temporary or permanent market impact:
* Temporary impact dominated, κ<1: in this regime, particularly when 0<κ≪ 1, trading costs are dominated by temporary market impact costs and traders do not concern themselves as much with a persistent rise in the price of the stock and concern themselves more with minimizing temporary cost. As a consequence they tend to trade strategies that are closer to risk-neutral in shape, as in Section <ref>; and
* Permanent impact dominated, κ>1: in this regime, particularly when κ≫ 1, trading costs are dominated by the permanent market impact costs and traders are concerned with "trading ahead" of their competition in order to buy before the price rises. As a consequence the shape of their strategies tend to be eager, as described Section <ref>.
§ PRELIMINARY EXAMPLES AND MOTIVATION FOR OPTIMAL STRATEGIES IN COMPETITION
In order to building intuition we digress into two examples that will help to motivate what follows.
§.§ Example calculation
In order to clarify the the exact nature of the trading costs, an example will be useful. Suppose B is trading a trading a simple λ-scaled risk-neutral strategy b(t):
b(t) = λ· t
and A is trades a unit parabolic strategy in competition with b(t):
a(t) = t(t-c)/1 - c
In this case the rate of trading of a(t) at time t is given by:
ȧ(t) = 1/1 - c( 2t - c )
and the instantaneous cost of trading for a at time t is cost(t):
Cost(t) = ( ȧ(t) + ḃ(t) ) ·ȧ(t) + κ·( a(t) + b(t) ) ·ȧ(t)
where ḃ(t)=λ. Then the cumulative cost function from time 0 to t is given by:
Cumulative cost(t) = ∫_0^t ( ȧ(s) + ḃ(s) ) ·ȧ(s) + κ·( a(s) + b(s) ) ·ȧ(s) ds
Figure <ref> demonstrates the cumulative costs of a variety of strategies.
§.§ Motivating understanding the mix of temporary and permanent impact
In order to get a feel for the problem domain of trading in competition we start with the following problem. Suppose A is trading in competition with B and A knows that B is going to trade the strategy b(t)=t, a simple risk-neutral strategy. Further, A only has two strategy options available, a rapid-buy and risk-neutral strategy respectively, as shown in Figure <ref>.
The first question we ask is, how should trader A go about choosing between t
To get at an answer we start by observing that the primary concern for a trader regarding permanent impact is that as B purchases the stock the price will remain higher throughout the buy-program and subsequent prices. In the absence of permanent impact, the trader's only concern is minimizing temporary impact and the best way to do this in the case where B is trading the risk-neutral strategy is to spread trades out as much as possible which clearly means trading the risk-neutral strategy as well (Figure <ref> right plot)..
At the other extreme when there is no temporary market impact, the only concern is to avoid paying higher prices for the stock as B pushes the price up during the course of buying. In this case since there is no penalty for a large quantity of purchases over a short period of time, the correct approach is to purchase all of the target quantity (one unit) immediately, and hence implement a rapid buy strategy (Figure <ref> left plot).
We see that in the two extreme cases of no temporary or no permanent impact it is fairly straightforward how to proceed when B is trading a risk-neutral strategy. But what if both temporary and permanent impact are present, as in Section <ref>? In this case we would need to implement more complex strategies. One possibility might be to estimate the κ representing the mix of permanent and temporary impact as in Sections <ref> and <ref> and then form a linear combination of a rapid-buy and a risk-neutral program. This is depicted in Figure <ref>. Not only is this not an optimal approach, it also does not indicate what to do if B is trading a λ-scaled strategy and not targeting one unit of stock. It also misses the mark for what to do if B is not trading a risk-neutral strategy.
To provide complete answers to these questions we need a better framework that builds strategies that are optimal responses to other strategies taking into account how and how much they are trading. We will develop these soon enough but next we do a small example calculation to get a feel for how costs are calculated in practice.
§ PASSIVE POSITION-BUILDING STRATEGIES
In this section we study passive position-building strategies, those strategies for which a trader builds a position without taking into account the actions of other traders (see section <ref>. We use the Almgren-Chriss algorithm to derive optimal passive position-building strategies for risk-averse traders in the absence of competition. This is not strictly necessary in the sequel but provides context for how the analysis of non-competitive trading differs from competitive trading.
§.§ Almgren-Chriss optimal execution review
In this section we review the Almgren-Chriss algorithm <cit.> and <cit.>[The algorithm was originally developed while Chriss was working in the Institutional Equity Division at Morgan Stanley, while working on the program trading desk from 1996-1997 and published in expository form in <cit.>.]. Almgren-Chriss was developed to tackle the problem of buying or selling a large quantity of a stock or portfolio that is too large to be executed in one trade. The problem in that paper was to balance temporary market impact costs that increase with the rate of trading and the risk inherent in holding the stock. The paper also considers permanent impact costs but in this framework it is an artifact of the model that permanent impact costs have no bearing on the optimal trading strategy. We will review this next.
The setup In Almgren-Chriss is that at time t=0 a trader owns a certain quantity of and seeks to liquidate it entirely by time t=T. The liquidation trajectory is described by a twice-differentiable function x(t) and the trader wishes to minimize a mix of temporary impact costs and risk as described by the following loss function[The function could easily be called a cost function and be denoted C but in keeping with the Euler-Lagrange equation we stick with L.]:
L = ẋ^2 + λσ^2 x^2
The strategy x(t) that has minimal cost among all strategies is called the optimal strategy. To find the minimal strategy is a straightforward application of the Euler-Lagrange equation[The Euler-Lagrange equation is ubiquitous, but see <cit.> for a good overview.]. Then the Euler-Lagrange equation is used to find the function x(t) that minimizes a functional defined by means of the loss function L(t, x(t), ẋ(t)), where ẋ(t) denotes the derivative of x(t) with respect to t, the Euler-Lagrange equation is given by:
∂ L/∂ x - d/dt( ∂ L/∂ẋ) = 0
Below is a brief explanation of the notation:
* ∂ L/∂ x is the partial derivative L with respect to x(t);
* ∂ L/∂ẋ is the partial derivative of L with respect to ẋ(t); and
* d/dt( ∂ L/∂ẋ) is the total derivative with respect to time of ∂ L/∂ẋ.
Recall a position-building strategy x(t) is one for which x(0)=0 and x(1)=1. We start by deriving an optimal position-building strategy with no competition. In this setup we find an extremum strategy for the loss function L in <ref> and obtain the following differential equation:
ẍ - λσ^2 x = 0
with boundary condition x(0)=0, x(1)=1. We solve for x(t) for the given boundary conditions x(0)=0 and x(1)=1[We wil generally place the boundary conditions for these sorts of differential equations directly in-line with the equation itself.] to obtain
x(t) = sinh(σ√(λ) t)/sinh(σ√(λ))
To simplify notation we absorb √(λ) into σ, thus making σ represent a kind-of risk-aversion-scaled volatility and call the resultant strategy the risk-averse position-building strategy:
x(t) = sinh(σ t)/sinh(σ )
Figure <ref> shows an example risk-averse position-building strategy.
§.§ Permanent impact has no effect on optimal execution
Suppose we augment the loss function <ref> with permanent impact so that
L = ẋ^2 + κ· x ·ẋ + λσ^2 x^2
then applying the Euler-Lagrange equation <ref> we obtain
∂ L/∂ x = κẋ + 2λσ^2 x
∂/∂ t∂ L/∂ẋ =
2 ẍ + κ·ẋ
and so from the Euler-Lagrange theorem <ref> we obtain ẍ - λσ^2 x = 0. This implies that the differential equation describing x is given by
ẍ - λσ^2 x
precisely matching <ref>, thus confirming that the presence of permanent impact has no influence on the shape of an optimal execution strategy.
§ NON-EQUILIBRIUM BEST-RESPONSE STRATEGIES
The previous section provides the building blocks of a game theory framework whose actions are drawn from the space of trading strategies and whose payoffs are described in terms of total cost of trading. In this setup there are two traders A and B in competition to buy the same stock starting at time t=0 and completing at time t=1. Both traders trade position-building strategies with trader A trading a unit strategy and trader B a λ-scaled strategy with λ>1. This setup corresponds to the notion of differential games and in particular open-loop games, as in <cit.>.
To fashion this into a game, A and B simultaneously choose their strategies immediately prior to t=0 and agree to adhere perfectly to their respective strategies during the interval of time from t=0 to t=1. To begin our investigation we start by understanding how B would trading knowing what strategy B intends to trade. In the next few sections we derive non-equilibrium best-response strategies to particular strategies. The setup is as follows:
* Traders A and B are trading the same stock over the same period of time with B trading a λ-scaled strategy;
* Trader B is trading passively (see Section <ref>); and
* Trader A knows precisely what B's strategy is.
We derive A's optimal strategy in light of B's which in keeping with game-theoretic terminology we call the best-response strategy to B. To emphasize that the best-response strategy is trading versus a passive trader, we call the results non-equilibrium strategies and call trader B an adversary.
§.§ Best-response strategy to a risk-averse adversary
Suppose trader B follows a λ-scaled risk-averse position-building strategy as in <ref>. If trader A is completely aware of B's strategy and wants to acquire one unit of the same stock over the same period of time, then what is the optimal strategy for trading A to follow? To compute this we start by defining a loss function for a(t) as follows:
L=(ȧ + λḃ) ·ȧ + κ( a + λ b) ·ȧ
To explain this, start by noting the first component of the loss function is the product of A's rate of trading and the total rate of trading of A and B, where B's trading is scaled by λ. The first component therefore represents the temporary market impact trader A will experience trading in competition with B. Unlike in Almgren-Chriss the temporary impact includes the impact of both traders, reflecting the competitive nature of the trading.
The second component of <ref> represents permanent market impact, where κ is the fraction of total trading that is retained in the price of the stock. As with temporary impact, the permanent impact is determined by the trading of both A and B. Applying the Euler-Lagrange equation yields
2ä + λb̈ + κȧ + κλḃ -κȧ = 0
ä = - λ/2 ( b̈ + κḃ)
We call <ref> the best-response equation for trader A responding to a single adversary B with known λ-scaled strategy b(t). Now substituting <ref> for b we obtain
a(t) = - λ/2 ( b + κ∫ b) + wt + z
= - λ/2( sinh(σ t)/sinh(σ ) + κ∫sinh(σ t)/sinh(σ )) + wt + z
= - λ/2(
sinh(σ t)/sinh(σ ) + κ/σcosh(σ t)/sinh(σ )) + wt + z
where the linear portion wt + z can be used to satisfy the boundary conditions a(0)=0 and b(0)=0. To simplify this further write ξ = κ/σ and
q(t) = sinh(σ t)/sinh(σ ) + ξcosh(σ t)/sinh(σ )
and we have
a(t) = -λ/2 q(t) + wt + z
Solving for the boundary conditions we have
a(0) = -λ/2 q(0) + z
a(1) = -λ/2 q(1) + w + z
which finally implies z=λ/2 q(0) and
w = 1+λ/2(q(1) - q(0) )
Putting this together we arrive at A's best-response to B's risk averse position-building strategy:
a(t) = -λ/2 q(t) + (1+λ/2 (q(1) - q(0) )) t + λ/2q(0)
Figure <ref> shows an example of the best-response to a risk-averse position-building strategy. In each plot the red dotted strategy is b(t), the risk-averse trader's position-building strategy. In this situation the risk averse trader B is simply trading without regard to the actions of trader A, while trader A is trading optimally with respect to B. Notice that the blue lines are trading ahead of the risk-averse trader. The first column shows the situation where the risk-averse trader and the best-response trade the same amount. The second column shows where λ=5, that is, the risk-averse trader is acquiring five times as much stock as the best-response. In this case the best-response is to over-buy and then sell to the risk-averse trader as they continue to buy.
In general we note that the best-response to a risk-averse strategy is eager in the sense of section <ref>.
§.§ Best-response to a risk-neutral adversary
Suppose a competitor B trades λ-scaled (λ≥ 1) risk-neutral strategy b(t)=t. The corresponding loss function is:
L = (ȧ + λḃ) ȧ + κ· (a + λ b) ȧ
and therefore from the Euler-Lagrange equation <ref> the differential equation describing the unit best-response to a λ-scaled strategy is
ä = -λ/2( b̈ + κḃ) = 0
from which we solve for a(t) with boundary conditions a(0)=0 and a(1)=1.
a(t) = (1 + - λκ/4) t^2 + λκ/4 t
In this setup A is reacting to a competitor B who is trading a larger quantity.
In general we note that the best-response to a risk-averse strategy is eager in the sense of section <ref>.
§.§ Best-response to an eager adversary
Finally we examine the case of the best-response strategy to an eager competitor. For this trader we use the λ-scaled strategy as follows:
b(t) = e^-λ t - 1/e^-λ - 1, eager λ-scaled
as above we solve for the best-response strategy a(t) using the differential equation <ref> with boundary conditions a(0)=0, a(1)=1 to obtain:
a(t) = e^-λ t(e^λ (λ -κ )-te^λ t (λ-κ+2)+e^λ(t+1) (κ
-λ +t (λ -κ + 2)))/2
(e^λ-1)
Figure <ref> shows an eager trading strategy for various values of λ.
Figure <ref> shows the best-response to an eager trader for various values λ (scaling) and κ (permanent impact coefficient). We note that in general the best-response to an eager strategy is risk-averse.
Examining the plots we see something curious. The best-response to the eager trader counter-intuitively begins by selling short and then later re-buys the stock. Since the eager trader is rapidly pushing the price of the stock up, can this possibly be correct? The answer comes down to precisely precisely how the model deals with market impact.
In Table <ref> we show the cost of trading both the best-response and risk-neutral unit strategy trading in competition with a passive, eager trader for the values of κ and λ in Figure <ref>. Beginning with the total column we see that the best-response strategy's total cost is indeed lower than the risk-neutral strategy's. For example, for λ=1 and κ=0.1 the values are very close but the best-response strategy's total cost was 2.09 versus the risk-neutral's of 2.11, while for lambda=10 and κ=0.10 the best-response strategy's total cost is significantly lower at -86.1 versus the risk-neutral strategy's total cost of 12.
To understand what is going on we look at the temporary and permanent impact columns. These columns show the calculated contribution of temporary and permanent impact to the total cost of trading. Examining these it is straightforward to see that the best-response strategy benefits from lower (and in the case of large λ, significantly lower) temporary impact costs. Why is this the case? To arrive at the answer we have to carefully examine the handling of temporary impact in the market.
§.§ Temporary impact revisited
Recall from Section <ref> that temporary market impact relaxes instantaneously the moment a trade occurs and has no bearing on the future price of a stock. In non-competitive trading, the meaning of this is clear both when the trader is buying and when the trader is selling. In the case of a passive trader, the temporary impact cost imposed on a trader trading at an instantaneous rate of (say) ẋ(t) at time t is ẋ^2(t) > 0. In other words the cost is always positive.
Compare this, however, to the situation in which a trader A is trading a(t) in competition with another trader, say B, trading λ b(t). In this situation, the market impact cost to A will depend on the net trading in the stock. It will be a profit (a negative cost) whenever sgn(ȧ(t) + λḃ(t)) sgn(ȧ(t)). That is, whenever the trading direction of A and B in aggregate is the opposite sign of A's then A's temporary impact will be a profit for A. We illustrate this in a small table:
§ TWO-TRADER EQUILIBRIUM STRATEGIES
This section defines what it means for two strategies trading in competition to be in equilibrium. The definition of equilibrium here is analogous to that of Nash equilibrium in standard treatments of game theory. Using our standard setup we start with a unit trader A and a λ-scaled competitor B, with strategies a(t) and b(t) respectively. Write L_a and L_b for the loss functions associated with the total cost of trading a and b, as in <ref> and <ref>.
Let A and B traders with strategies a(t) and b(t). Let â be the best-response to b; and b̂ be the best-response to a. Then a and b are in equilibrium if â=a and b̂=b.
In simple terms, A and B are in equilibrium if the best-response to b is a and the best-response to a is b, and moreover the best-response to the best-response to b is a. As it happens this definition is equivalent to the standard definition Nash equilibrium, which we summarize as follows.
The traders A and B are in equilibrium as in Definition <ref> if and only if for any unit strategy and λ-scaled strategy ã≠ a and/or b̃≠b we have
∫_0^1 L_ã dt > ∫_0^1 L_a dt, and/or
∫_0^1 L_b̃ dt > ∫_0^1 L_b dt
To find equilibrium strategies we start by deriving equations for joint best-response strategies.
§.§ Joint best-response strategies in general
Using the same concepts as in sec:best-response-risk-aversesec:best-response-eager we derive equations defining best-response strategies for A and B position-building in competition where B trades a λ-scaled strategy. First we derive A's best-response strategy for B noting the loss function is as follows:
L_a = (ȧ + λḃ) ·ȧ + κ (a + λ b) ·ȧ
L_b = (ȧ + λḃ) ·λḃ + κ (a + λ b) ·λḃ
Using the Euler-Lagrange equation <ref> for L_a and L_b respectively yield the equilibrium equations with boundary conditions as given:
ä = -λ/2 (b̈ + κḃ)
b̈ = -1/2λ (ä + κȧ)
with the boundary conditions a(0)=0, b(0)=0, a(1)=1, b(1)=1. We will use these in the following sections to analyize and solve for the equilibrium strategies for a and b.
§.§ Extremal kappa regimes
Before studying equilibrium in general we examine two extreme cases in order to build intuition. Fix λ> and. as usual, consider two strategies trading in competition: a unit strategy A and a λ-scaled strategy B. In equilibrium, the strategies satisfy the equations eq:eq-equation-ddotaeq:eq-equation-ddotb above.
The two extremal cases we investigate are when the contribution to market impact is entirely due to permanent market impact or entirely due to temporary impact. When market impact cost is due entirely to temporary impact, the equilibrium equations reduce to:
ä = -λ/2b̈
b̈ = -1/2λ
again with the boundary conditions a(0)=0, b(0)=0, a(1)=1, b(1)=1. Applying the boundary conditions for position-building strategies a(0), b(0)=0 and a(1), b(1)=1 the solutions are a(t) = t and b(t) = t. In other words, in the absence of permanent impact, the equilibrium competitive position building reduces to the most trivial of cases. This aligns with intuition: without permanent impact, each trader is exclusively concerned with minimizing total trading costs at each point in time. The best way to achieve this is by spreading trades evenly over the entire duration of trading.
At the other extreme is the cost of trading arises entirely from permanent impact. In this scenario, the equilibrium equations are derived from the following reduced equilibrium equations:
L_a = κ (a + λ b) ·ȧ
L_b = κ (a + λ b) ·λḃ
and the Euler-Lagrange equation yields for L_a and L_b respectively:
κȧ + κλḃ = 0
κλȧ + κλ^2 ḃ = 0
dividing through by κ in the first equation and κλ in the second, these reduce to a single equation
ȧ + λḃ = 0
and with the position-building boundary conditions there is no solution. One way to understand what is going on is the look at how the unit trader strategies look at what happens to the solutions a, b for the equilibrium equations eq:eq-equation-ddotaeq:eq-equation-ddotb for increasingly large values of κ, to understanding the limiting behavior of the equations as κ→∞. Here are plots for this scenario:
One can see that the limiting strategies in Figure <ref> are respectively bucket and barbell shaped (see Section <ref>). This makes intuitive sense: as market impact becomes dominant, i.e., as κ becomes large, the unit strategy a(t) desires to "get ahead" of the pending market impact caused by b(t) so quickly buys up more than its target quantity, hoping to sell at the very end at elevated prices caused by b(t) which is trading five times as much as a(t). A seemingly strong counter to this is for B to buy a portion of its target quantity immediately, imposing elevated prices on A and then delay to the very end completing the position-building.
§.§ The two-trader equilibrium strategies
Suppose a and b in competition with b λ-scaled. In this section we derive closed-form expressions for a and b that jointly satisfy eq:eq-equation-ddotaeq:eq-equation-ddotb. Solving these yields:
a(t) = -(1 - e^-κ t/3) (-e^κ /3(e^κ /3+e^2 κ /3+1) (λ +1)+(λ -1) e^κ t/3+(λ -1) e^2 κ t/3+(λ -1) e^κ t)/2 (e^κ-1)
b(t) = (1 - e^-κ t/3) (e^κ /3(e^κ /3+e^2 κ/3+1) (λ +1)+(λ -1) e^κ t/3+(λ -1) e^2 κ t/3+(λ -1) e^κ t)/2 (e^κ-1) λ
§.§ Two-trader equilibrium examples
To conclude our discussion of two-trader equilibrium trading we give some examples of what these strategies look like in general. Figure <ref> shows an example strategy a(t) and λ-scaled strategy b(t) in equilibrium. The plots show the equilibrium strategies for trader a unit trader A and a λ-scaled trader B trading in competition. Figure <ref> shows the situation where κ is relatively small in comparison to λ.
Note: The plots for b(t) show the shape of the strategy, without λ-scaling. Each column of plots shows a single value for permanent impact and each row shows a single values for λ. The basic message of the plots is that if A and B are trading the same size (λ=1) then they both simply trade a straight line. As the permanent market impact parameter increases and/or the size that B trades grows, the more convex A's strategy becomes while B remains a straight line.
Next in Figure <ref> we plot just the λ-scaled strategy from Figure <ref> in order to demonstrate that in sufficiently high κ, permanent impact dominated regimes, the trading strategy concentrates an increasing proportion of its trading at the start and end of the trading.
A close look at Figures <ref> and <ref> suggests the following heuristics for the shapes of the unit and λ-scaled strategies trading in two-trader competitive equilibrium for different κ-regimes:
* Negative-κ, dominated by temporary market impact: both the unit trader and the λ-scaled traders trade essentially risk-neutral strategies very close to y=t;
* Negative-κ, low-to-moderate relative temporary impact: the λ-scaled trader continues to trade an essentially risk-neutral strategies very close to y=t; however, the unit trader begins to trade an eager strategy in order to buy ahead of the pending price increases due to the permanent impact to be caused by the λ-scaled trader;
* Positive-κ, low-to-moderate relative permanent impact: the unit trader an increasingly eager strategy while the λ-scaled trader remains more-or-less risk-neutral; and
* Positive-κ, dominated by permanent impact: now the unit trader begins to trade a bucket strategy and the λ-scaled trader begins to trade a barbell strategy in response, these shapes as described in Section <ref>.
§ EQUILIBRIUM WHEN THERE ARE MANY UNIT TRADERS
In the prior section we derived a two-player competitive equilibrium (say, between players A, a unit trader, and B and λ-scaled) and found that the λ-scaled trader always trades something very close to a risk-neutral strategy while the unit trader trades increasingly eagerly as the size of λ grows. We argue that this makes sense because trader A knowing that trader B will push the market up significantly can take advantage of that by purchasing share early, waiting for the price to rise and selling the shares back at a profit. But is this the only possibility?
Consider the following alternative. Trader A is again a unit trader who wants to acquire one unit of stock between now and time one. They have market surveillance which says that there are one more competitors looking to purchase in aggregate λ units of the stock. The trader considers two alternative possibilities for what is happening:
* One large adversary: in this alternative there is a single competitor trading λ units exactly as in Sections <ref> and <ref>; and
* Many unit trader adversaries: in this alternative, assuming λ is an integer, there are λ adversaries each trading a unit strategy. For example, market surveillance says that there will be a total of eleven units traded, one by trader A and the rest traded by ten unit traders.
Of course there are infinitely many alternatives but closely studying these two will give insights into the central issue at hand. Suppose trader A has no way of identifying which of the two alternatives will transpire. The trader might reason in this situation:
What does it matter whether it is alternative one or alternative two. In either scenario I am trading one unit of the stock and my adversaries are trading ten, so aggregate level of competitive buying I am facing is the same in either scenario. Shouldn't the equilibrium strategy be the same?
This logic, while appealing, is, in fact, incorrect as we shall see next.
§.§ The multi-trader symmetric equilibrium strategies
To see why the above logic is incorrect we begin by deriving the equilibrium strategy for n+1 traders trading in competition. This is a simple matter of repeating the logic of Section <ref>.
If there are n+1 unit traders A_1, …, A_n+1 trading in competition we say that a_i(t) is trader i's strategy and has loss function:
L_i(t) = ȧ_i(t) ∑_j ȧ_j(t) +a_i(t) ·κ·∑_j a_j(t)
In the above equation the temporary impact from traders i=1,…,n+1 is ∑_i=1^n ȧ_i(t) while the permanent impact is κ ∑_i=1^n a_i(t). Now using the Euler-Lagrange equation <ref> we obtain that for each trader i we obtain a second order differential equation for
ä_i = -1/2(∑_j iä_j + κ ∑_j iȧ), a_i(0)=0, a_i(1)=1 and a_i=a_j for all i
We will call <ref> the multi-trader symmetric equilibrium equation in light of the fact we demand all strategies are equal. Assuming that all n+1 traders will trade identical strategies in equilibrium and solving <ref> simultaneously for all traders yields setting λ=n:
a(t) = e^λ·κ/λ + 2(1 - e^-λ·κ/λ + 2· t)/e^λ·κ/λ + 2 - 1
We call a(t) in <ref> the multi-trader symmetric equilibrium strategy and we plot it in Figure <ref>. Each plot shows the multi-trader equilibrium for three values of the number of the total number of traders, labeled λ and show values for a total of 10, 50 and 250 traders. As we move down the rows κ varies from a low- to high-κ regime (see Section <ref>). For the low κ-regime we see the equilibrium strategies are almost risk-neutral, but as κ grows the strategies become increasingly eager.
Next we examine what happens when the number of traders grows extremely large, roughly speaking, modeling the situation in which trader A competes with many traders each trading a single unit. This is equivalent to letting λ→∞ in which case λκ/(λ+2) →κ and <ref> becomes, in the limit:
a_i(t) = e^κ - e^κ(1-t)/e^κ - 1, i = 1, …, n
With <ref> in hand, we plot it with various values of κ in Figure <ref>.
§ THE INVERSE PROBLEM
In this section we develop a method diagnosing the appropriateness of a given strategy intended for trading in competition. As we say in Section <ref> if a trader (say, A) knows with certainty that there is a single adversary (say, B) trading a known strategy λ· b(t) then there is an optimal best-response strategy given by <ref>. One obvious problem with this is, what if the adversary B is trading a different strategy b^'?
§.§ What is my strategy the best-response to?
The first method for examining this issue is to solve the inverse problem. This amounts to trader A asking if I decide to trader strategy a(t), what strategy is this a best-response for?. To solve this is a simple matter of applying the Euler-Lagrange equation <ref> taking A's strategy as a given to solve for a λ-scaled strategy b^*(t) in the following loss function, taking κ as given:
L(t; a, b^*) = (ȧ + λḃ^*) ȧ + κ (a + λ b^*) ȧ
To be clear <ref> is the loss function (that is, the total cost of trading) for a(t) when trading in competition with λ b^*(t). We apply the Euler-Lagrange equation and solve for b^*:
∂ L/∂ a = 2ä + λb̈ + κȧ + κλḃ
∂/∂ t∂ L/∂ȧ = λȧ
and this yields by the Euler-Lagrange equation:
2ä + λb̈^* + κλḃ^* = 0
Now, unlike in Section <ref> we solve for b̈^* which will be the strategy that minimizes the cost for a(t). With this we get:
b̈^* = -1/λ(2 ä + κλḃ^*)
with boundary conditions b(0)=0, b(1)=1. As a sanity check we can take the solutions to the two-trader equilibrium equations eq:eq-equation-ddotaeq:eq-equation-ddotb and the solutions eqs:best-responses-aeqs:best-responses-b and see that if we use the solution for a(t) in the system of equations and then solve for b^*(t) in the inverse problem <ref> we obtain the same b^(t)=b(t) where b(t) is taken from <ref>. In some sense this is no surprise as this is precisely the meaning of the equations, however <ref> truly is answers the question if a(t) is A's strategy what is the b^*(t) for which a(t) is the best response? This has some useful properties which we explain now.
First suppose that A is trading a(t) and does not know what B is trading but knows the total quantity that will be traded is λ. Then A may solve for b^* in <ref> and examine it and ask is it sensible that B would trade this way?
Example: Suppose trader A knows that trader B is going to trader 5 units of stock and κ=2. If trader A wants to trader a risk-neutral strategy then <ref> states that this is the best-response strategy if B is trading
We show a plot of <ref> and a(t)=t in Figure <ref>.
We also solve the inverse problem when a λ-scaled trader is trading a strategy b(t) (as usual, its trajectory will be λ· b(t)). Analogous with above the solution for the strategy a^* which is the best response to the λ-scaled b(t) is given by
ä^* = -(2 λb̈ + κȧ^*)
with boundary conditions a^*(0)=0, a^*(1)=1. Specifically if we solve <ref> using the λ-scaled solution to the two-trader equilibrium <ref>:
b(t) =-e^-κ t/3(e^κ t/3-1) (-e^κ /3(e^κ /3+e^2 κ /3+1) (λ +1)+(λ -1) e^κ t/3+(λ -1) e^2 κ t/3+(λ -1) e^κ t)/2 (e^κ-1)
(note that we solve <ref> without using the factor of λ) we indeed see that it is the best response to
a^*(t) = -e^-κ t/3(e^κ t/3-1) (-e^κ /3(e^κ /3+e^2 κ /3+1) (λ +1)+(λ -1) e^κ t/3+(λ -1) e^2 κ t/3+(λ -1) e^κ t)/2 (e^κ-1)
which is equivalent to <ref>.
The inverse problem's solution yields another insight. It is an immediate consequence of <ref> that if trader A is trading a(t) and this is the best-response to trader B's strategy λ b^* then if B is trading any other strategy λ x(t) then
L(t; a, λ x) > L(t; a, λ b^*)
§ STRATEGY SELECTION IN THE PRESENCE OF UNCERTAINTY
In this section we look at a numerical analysis of how the framework presented in this paper may be extended for use in the presence of uncertainty. We do not present a complete framework but rather an analysis of a reasonably important special case, in order to illustrate the main features of the problem. Later we present several "hints" as to a more complete framework, but we leave that for a future paper.
§.§ Overview
We will examine in detail the situation where there are two traders, a unit trader A and a λ-scaled trader B, trading in competition. The difference from the prior sections is that trader B wants to trade an equilibrium strategy but does not know how trader A perceives the market situation. In particular we assume:
Trader A wants to build a position in the same stock as B and is deciding what strategy to trader; A has learned that in addition to its one unit an additional λ units will be traded. As such A has good idea of how much will be traded but does not know how it will be traded. In particular we assume A has narrowed down the possibilities to two: A believes that there is either a single adversary trading the two-trader λ-scaled equilibrium strategy of Section <ref> or there are λ adversaries each trading one unit using the multi-trader symmetric equilibrium strategies of Section <ref>.
To be clear, the ground truth is there is only one adversary, however A does not know this. This section is devoted to analyzing how B will think about strategy selection in light of A's incomplete knowledge. We do this next.
§.§ Analysis of A's strategy selection
In this there is a single trader, B, who is trading some strategy λ· b(t). Suppose that B believes that it is best to trade an equilibrium strategy, how does B determine its strategy? The issues is that it depends on what A believes is happening. Among all the possibilities, let's suppose that A correctly deduces that in addition to their trading one unit, there will be an additional λ units of the stock traded. Further suppose that A boils the situation down to two possibilities, either there is one adversary (the actual situation) or there are λ adversaries, each trading one unit. Depending on which of the two possibilities A believes they will trade a different equilibrium strategy and therefore B's best-response will vary as well. To illustrate these two possibilities, we analyze what B's best response is in each case.
[B believes A has correctly guessed there is only one adversary]
If, in fact, A believes there is only one adversary it will trade the strategy <ref> which we will identify as a_1a and has the form:
a_1a(t) = -(1 - e^-κ t/3) (-e^κ /3(e^κ /3+e^2 κ /3+1) (λ +1)+(λ -1) e^κ t/3+(λ -1) e^2 κ t/3+(λ -1) e^κ t)/2 (e^κ-1)
and the equilibrium strategy that B should trade given the assumption is that A believes there is a single adversary, and the strategy is given by <ref> which we will identify as b_1a:
b_1a(t) =(1 - e^-κ t/3) (e^κ /3(e^κ /3+e^2 κ/3+1) (λ +1)+(λ -1) e^κ t/3+(λ -1) e^2 κ t/3+(λ -1) e^κ t)/2 (e^κ-1) λ
To be clear, b_1a(t) is the strategy that B should trade based on their belief that A guesses that there is one strategy.
[B believes that A thinks there are a total λ adversaries each trading one unit]
If, in face, A does incorrectly believe that there are λ adversaries, then A will trade <ref> which we identify as a_1b:
a_1b(t) = e^λ·κ/λ + 2(1 - e^-λ·κ/λ + 2· t)/e^λ·κ/λ + 2-1
If B believes A is trading a_1b then the best-response, recalling that B will trade λ units of the stock, is given by the solutions to the best-response equation <ref>, we identify as b_1b(t) as follows:
b_1b(t) = e^κλ/λ +2((λ ^2-1)
t+1)-e^-κλ (t-1)/λ +2+λ ^2 (-t)+t/λ ^2
(e^κλ/λ +2-1)
Note that strategies b_1a and b_1b are very similar for small κ but for large κ they are considerably different. Figure <ref> shows plots comparing the two strategies for λ=5.
We can see from Figure <ref> that in large κ-regimes B's strategies vary substantially depending on whether we are in Case 1 or Case 2 above, in which case B trades b_1a and b_1b respectively. We can now ask the questions, what are the consequences of B's decision versus the ground truth? We analyze this next.
§.§ Cost analysis
Note that in terms of who actually trades what, each of A and B have two possible strategies to choose from {a_1a, a_1b} and {b_1a, b_1b} respectively. This means there are four total possible pairs of strategies that may be traded. Next we plot these four pairs together for a variety of different λ and κ values.
In Figure <ref> we plot the four possible combinations of strategies for κ=0.1 and λ=5. We note that in all four cases the strategies are risk-neutral and there will be no cost issues associated with mis-identification.
In Figure <ref> we plot the four possible combinations of strategies for κ=25 and λ=5 along the calculated total cost to B of trading the strategy. In this case we see that there is variation in B's strategy and the associated trading costs. We discuss this now. To analyze Figure <ref> we discuss each plot separately.
We analyze these in more detail in the next section.
§.§ Error analysis
In this section we analyze the impact of "guessing wrong" in each of the four possible combinations of strategies from Section <ref>.
* a_1a and b_1a: this case means that B is trading assuming A has correctly guessed there is only one adversary, and A is trading the equilibrium strategy. Thus the two strategies are equilibrium best-response strategies for one another and B is therefore trading optimally. The total cost of trading for B in this case is 600.0;
* a_1a and b_1b: In this case B is trading assuming that A has guessed incorrectly and is trading the equilibrium strategy for λ=5 unit traders. Put differently, B is trading the best-response for a_1b, shown in the bottom-left plot while A is actually trading trading a_1a. The conclusion is that B is trading sub-optimally. The total cost of trading for B in this case is 658.1;
* a_1b and b_1a: B is again trading assuming A has correctly guessed there is only one adversary, but now A trading a_1b the equilibrium strategy assuming there are 25 unit traders and therefore B is also trading sub-optimally. The total cost of trading for B in this case is 518.2; and
* a_1b and b_1b: Now B is trading under the assumption that A is trading the best-response for 25 adversaries and is again trading optimally. The total cost of trading for B in this case is 460.2.
To summarize the above note that in Figure <ref> each column represents on of the two possible choices for trader B, strategies b_1a and b_1b respectively. The rows represent the strategy that A has chosen to trade, strategies a_1a and a_1b and the diagonals of the plot grid are when B is trading the best response and the off-diagonals are when B is not trading the best-response. Note that since B does not know what A is going to guess concerning whether there is one adversaries or λ adversaries, then we can regard it as purely a matter of chance whether B ends up trading optimally or not[The basis for this statement is the B has exhausted all possible market surveillance and is left guessing whether A is going to trade according to their being one or many adversaries].
Because of this we can think costs in each column of Figure <ref> as a random variable with equal probabilities. We summarize these results in Table <ref> and note that the expected value of B's total cost of trading is roughly the same for b_1a and b_1b, however the standard deviation is much lower.
Examining Figure <ref> we are reminded that B is facing one of two strategies whether it chooses b_1a or b_1b, and we see that in a certain sense because the strategies have roughly the expected cost, if B wishes to minimize variance, then it is best to choose strategy b_1a.
§ STRATEGY SELECTION
Section <ref> presents a crude means of analyzing selection criteria in the case where B wants to select an equilibrium strategy but is uncertain of how A will proceed. We are able to nevertheless select a strategy by viewing the set of possible strategies that A would select as determining the set of strategies B would select. In this way we were able to compute a rudimentary expected value for each possible equilibrium strategy that B can select. This suggests a more rigorous notion, which we explain here.
§.§ Probabilistic strategy selection (speculative)
In this section we analyze the case where A is a trader wishing to buy one unit of stock S between time t=0 and t=1, and has collected data on what strategies may be trading in competition. We start by defining some terms.
§.§ Notation and terms
The set S is defined as the set of all λ-scaled strategies b(t) that B could possibly be trading, and write b_λ for the corresponding λ-scaled strategy which arises from the two-trader equilibrium strategy <ref> scaled by λ. For sure a b_λ write a_λ(b) for A's best-response strategy to b_λ and ȧ_λ(b) for its time derivative. With this said we can express the cost of trading a_λ in competition with b_λ as
C(a_λ, b_λ; κ) = ∫_0^1 (ȧ_λ + λḃ_λ)ȧ_λ + κ (a_λ + λ b_λ)ȧ_λ dt
the cost of trading the best-response strategy to the λ-scaled strategy b (for λ b ∈ S) assuming market impact parameter κ. Now assume we can equip the set S with a probability measure m then we can express the expected cost of trading with respect to the measure m as:
E[C | S] = ∫_b_λ∈ S C(a_λ, b_λ; κ) dm
= ∫_b_λ∈ S∫_0^1 (ȧ_λ + λḃ_λ)ȧ_λ + κ (a_λ + λ b_λ)ȧ_λ dt dm
from which we may define its corresponding variance:
Var[C| S] = E[C^2 | S] - E[C | S]^2
and proceed with a mean-variance analysis similar to as in Section <ref>.
§.§ Example computation with a log-normal distribution
Using the setup in Section <ref> we define S as the set of λ-scaled two-trader equilibrium strategies trading in competition with A. Assume that the market impact parameter κ is known[We can repeat these computations making κ probabilistic just as easily.]. In particular suppose that we equip 𝐑^+ with the log-normal probability density for some mean μ>0 and standard deviation σ>0. Then the density is given by
f(x; μ, σ) = 1/x σ√(2 π)exp(-(ln x - μ)^2/2 σ^2), x > 0, σ > 0
and write dm for the associated measure on 𝐑^+. Then write b_λ(t) for the two-trader λ-scaled equilibrium strategy of Section <ref>. Then the expected total cost of trading <ref> becomes:
E[C|b_λ, λ∈𝐑^+] = ∫_λ∈𝐑^+ C(a_λ, b_λ; κ) dm
= ∫_λ∈𝐑^+∫_0^1 (ȧ_λ + λḃ_λ)ȧ_λ + κ (a_λ + λ b_λ)ȧ_λ dt dm
We leave the computation of the expected cost <ref> and its associated variance for a future paper.
§ PARAMETER MIS-ESTIMATION
In this section we briefly touch upon what happens when the parameters related to optimal position-building. Consider what happens when we mis-estimate the value of κ by examining Figure <ref>. As with Section <ref> we do not present a complete picture but rather potential directions for future development.
§.§ Costs arising from errors in κ
We start by giving a numerical exposition of how changes in κ impact the total cost of trading in two-trader equilibrium strategies, specifically for the unit strategy a(t).
Another view of how changes in κ impact strategies is given in Figure in which we explicitly plot the two-trader equilibrium strategies a(t), b(t) (see eqs:best-responses-aeqs:best-responses-a) for various levels of κ. Each plot shows a(t) for values of κ shifted by 25% along with b(t) in the right panel and the difference between the two a(t) strategies in the left panel.
Examining Figure <ref> the question that arises is what are the cost differences associated with the strategies. To understand this we compute the impact costs of mis-estimating κ in Table <ref>. The table shows pairs of strategies, each the unit trader strategy a(t) in <ref> for related values of κ.
As a check on the impact of the level of λ on the sensitivity to κ we re-compute Table <ref> for λ=25 and present the results in Table <ref>.
Next we discuss ways to calculate mis-estimation risk more analytically.
§.§ Analytic evaluation of mis-estimation cost
Consider two traders A and B trading strategies a(t) and λ b(t). For this discussion it is not relevant how these two strategies are determined, but for example they can arise as two-trader equilibrium strategies. Clearly the specific strategies were arrived at with assumptions about λ and the market impact parameter κ. The total cost of trading formulas for A and B trading in competition are given as in Section <ref> and we re-write these here showing the explicit dependence on λ and κ:
C(a; b, λ, κ) = ∫_0^1 (
ȧ(t; λ, κ) + λḃ(t; λ, κ) ) ȧ(t; λ, κ) +
(
a(t; λ, κ) + λ b(t; λ, κ) ) ȧ(t; λ, κ) dt
C(b; a, λ, κ) = ∫_0^1 (
ȧ(t; λ, κ) + λḃ(t; λ, κ) ) λḃ(t; λ, κ) +
(
a(t; λ, κ) + λ b(t; λ, κ) ) λḃ(t; λ, κ) dt
The function C(a; b, λ, κ is the total cost of strategy a's trading while in competition with λ b, with the dependence on κ and λ made explicit. Similarly, C(b; a, λ, κ) is the total cost of b's trading while in competition with a. With this we can now for the partial derivatives:
∂ C(a; b, λ, κ)/∂λ = Sensitivity of a's total cost of trading to λ
∂ C(a; b, λ, κ)/∂κ = Sensitivity of a's total cost of trading to κ
∂ C(b; a, λ, κ)/∂κ = Sensitivity of b's total cost of tradiyng to κ
∂ C(b; a, λ, κ)/∂κ = Sensitivity of b's total cost of trading to λ
We call the above functions the cost sensitivity functions. The above partial derivatives may be computed either analytically or numerically and represent the sensitivity of the total cost of trading to changes in λ or κ and suggest they may be used to guide strategy selection using the dictum when in doubt, choose the less sensitive strategy.
§ TWO-TRADER EQUILIBRIUM WITH RISK AVERSION
Holding risk in this concept refers to a general aversion to holding the stock during the course of building the position. This may arise, for example, because while the predominant possibility is that the stock will rise during the position-building process and then "jump" when the catalyst occurs, there is a small chance that some bad news will occur and cause the price to fall. In this section augment two-trader equilibrium strategies (Section <ref>) with an aversion to holding risk, as in Section <ref> and which is a key feature in <cit.> and <cit.>.
§.§ Augmented equilibrium equations
To add risk-aversion to two-trader equilibrium we augment the loss functions <ref>eq-loss-fn-b with a term proportional to σ^2 times the holding itself, where σ is the volatility of the stock over the holding period:
L_a = (ȧ + λḃ) ȧ + κ (a + λ b) ȧ + ξ_a σ^2 a^2
L_b = (ȧ + λḃ) λḃ + κ (a + λ b) λḃ + ξ_b σ^2 b^2
Note that these equations are the same as <ref>eq-loss-fn-b but they penalize the total volatility a trader's position holds during the course of building the position.
Using the Euler-Lagrange equation <ref> we then obtain the system of differential equations
2ä + λb̈ + κλḃ - 2 ξ_a σ^2 a = 0
ä + 2 λb̈ + κȧ - 2 ξ_b/λσ^2 b = 0
These in turn yield the two-trader equilibrium equations with risk analogous to eq:eq-equation-ddotaeq:eq-equation-ddotb:
ä = -λ/2 (b̈ + κḃ) + ξ_a σ^2 a
b̈ = -1/2λ (ä + κȧ) +ξ_b/λ^2σ^2 b
with, as usual, the boundary conditions a(0)=0, b(0)=0, a(1)=1, b(1)=1. Note that in these equations ξ_a and ξ_b are trader-specific trade-off parameters that add risk-aversion to cost-of-trading, analogous to <ref>.
§.§ Exploration of two-trader equilibrium with risk-aversion
We briefly provide some numerical examples of the trading strategies satisfying eq:ddota-two-trader-with-riskeq:ddotb-two-trader-with-risk. We refer to the solutions as the two-trader equilibrium strategies with risk-aversion for the remainder[We do not present the equations here but see Appendix <ref> for that Mathematica code that produced Figure <ref>.]. In Figure <ref> we plot the solutions to these equations for various levels of risk aversion and constant levels of λ and κ (see the caption for details). The plots show the expected tension that arises between risk-aversion and the desire to minimize total cost of trading.
We draw the following conclusions examining Figure <ref> for the case where B (the red line) has λ=5 and A (the blue line) is a unit trader and σ=0.5, κ=5 throughout:
* When A and B have the same level of risk-aversion (the first row in Figure <ref>, A nevertheless trades an eager strategy;
* When A has greater risk aversion than B (the second row), A trades an eager strategy until the absolute level of risk aversion grows very high (that last plot in the second row); and
* When B has greater risk aversion than A (the second row), B trades a roughly risk-neutral strategy and B trades a very eager strategy.
With these conclusions in mind we repeat the plots of Figure <ref> in Figure <ref> but with σ=2.0, four times that of the Figure <ref>:
We see in Figure <ref> that increasing the volatility of the stock by a factor of four has several notable changes as compared to the prior Figure <ref>:
* When trader A's level of risk-aversion (as given by ξ_a) is moderately high (as opposed to very high), the strategy becomes risk-averse as is seen from the center plot and the last column of the first two rows in the two figures; and
* Trader A does not engage in significant overbuying unless B is significantly more risk-averse than A;
Next in Figure <ref> we plot the same scenarios as Figure <ref> but this time with a much lower level of permanent market impact relative to temporary, setting κ=0.1 throughout:
Figure <ref> shows a similar pattern to <ref> but much more muted. The conclusion in this scenario is that the importance of permanent impact is significantly reduced and so trader A can afford to trade a risk-averse strategy without paying excessive prices in the latter portion of the position-building.
As a final demonstration we plot the same as Figure <ref> but this time with κ=15 and λ=5. In these plots we see that the pressure of "buying ahead" of trader B (who is trading five times as much as A) generally places A in the eager trading regime. The sole exception is when A's risk aversion is extremely high relative to B's, as in the second row and far-right column of Figure <ref>.
§ SUMMARY
In this paper, we developed a framework for understanding and optimizing position-building strategies in competitive trading scenarios. Our analysis is grounded in a game-theoretic setting, where each trader's actions are represented by their trading strategies, and the primary objective is to minimize a cost function that accounts for both temporary and permanent market impact. We introduced and analyzed the concept of equilibrium strategies, where each trader's strategy is the best possible response to the other trader's strategy.
The framework was explored through a detailed study of various strategy types, including risk-neutral, risk-averse, and eager strategies. We also introduced more complex trading patterns such as bucket and barbell strategies, which traders may adopt under specific market conditions.
Through the use of differential equations and the Euler-Lagrange equation, we derived optimal strategies for different competitive scenarios. We further extended the analysis to multi-trader environments, showing how strategies evolve as the number of competing traders increases.
A key aspect of the paper is the exploration of the impact of the market impact coefficient, κ, on the optimal strategies. We identified different κ regimes—temporary impact-dominated and permanent impact-dominated—and demonstrated how these regimes influence the shape of optimal trading strategies.
We addressed the inverse problem, allowing traders to deduce the most likely adversary strategy given their own trading strategy. This approach provides a tool for traders to adapt their strategies in real-time as they gather more information about their competitors.
We also discussed various approaches to strategy selection and then introduced risk-aversion into the equilibrium equations.
Overall, this paper contributes to the understanding of competitive trading by offering a robust theoretical foundation for strategy optimization in the presence of market impact, and provides practical insights into how traders can respond to competitive pressures in financial markets.
§ APPENDIX: MATHEMATICA CODE FOR TWO-TRADER EQUILIBRIUM WITH RISK
We used Wolfram Mathematica <cit.> to solve eq:ddota-two-trader-with-riskeq:ddotb-two-trader-with-risk but did not provide the explicit solutions. This is because they are fairly complex and their specific forms do not readily provide insight into the nature of the solutions. For completeness we provide the code used to generate Figure <ref>. The other plots in Section <ref> were produced similarly.
10pt
[language=Mathematica, mathescape=true]
(* Define constants *)
λ = 5;
κ = 5;
σ = 0.5; (* Increased value for effect *)
(* Define the system of differential equations *)
solveSystem[ξ_a_, ξ_b_] := Module[eqns, bcs, sol, aSol, bSol,
eqns =
a^''[t] == -(λ/2) (b^''[t] + κ b^'[t]) + ξ_a σ^2 a[t],
b^''[t] == -(1/(2 λ)) (a^''[t] + κ a^'[t]) + (ξ_b/λ^2) σ^2 b[t]
;
bcs = a[0] == 0, a[1] == 1, b[0] == 0, b[1] == 1;
sol = DSolve[eqns, bcs, a[t], b[t], t];
a[t] /. sol[[1]], b[t] /. sol[[1]]
];
(* Define the grid of ξ_a and ξ_b values, multiplied by 3 *)
xiValues =
1.5, 1.5, 10, 10, 50, 50,
10, 1.5, 50, 1.5, 200, 1.5,
1.5, 10, 1.5, 50, 1.5, 200
;
(* Generate the 3x3 grid of plots *)
gridPlots = Grid[Table[
Module[aSol, bSol, ξ_a = xiValues[[i, j, 1]], ξ_b = xiValues[[i, j, 2]],
aSol, bSol = solveSystem[ξ_a, ξ_b];
Plot[aSol, bSol, t, 0, 1,
PlotLegends -> "a(t)", "b(t)",
PlotStyle -> Blue, Red,
Frame -> True,
FrameLabel -> "a(t), b(t)", "κ=" <> ToString[κ] <>
", λ=" <> ToString[λ], "t", "ξ_a=" <> ToString[ξ_a] <>
", ξ_b=" <> ToString[ξ_b] <> ", σ=" <> ToString[σ],
PlotRange -> 0, 1,0, 2.5]
], i, 1, 3, j, 1, 3], Frame -> All
];
(* Display the grid of plots *)
gridPlots
§ ACKNOWLEDGEMENTS
I extend my sincere thanks to the Machine Learning Research Group at Morgan Stanley, to whom I gave two seminars on this work in August of 2024 during which they made many insightful suggests which greatly improved this work. I also extend my gratitude to Mike Shelley of the Courant Institute and Flatiron Institute for his invaluable guidance and Jim Gatheral of Baruch college for many illuminating discussions on transaction costs.
alpha
|
http://arxiv.org/abs/2409.03229v1 | 20240905035610 | Bonding Hierarchy and Coordination Interaction Leading to High Thermoelectricity in Wide Bandgap TlAgI2 | [
"Xiaoying Wang",
"Mengyang Li",
"Minxuan Feng",
"Xuejie Li",
"Yuzhou Hao",
"Wen Shi",
"Jiangang He",
"Xiangdong Ding",
"Zhibin Gao"
] | cond-mat.mtrl-sci | [
"cond-mat.mtrl-sci",
"physics.app-ph"
] |
State Key Laboratory for Mechanical Behavior of Materials, School of Materials Science and Engineering,
Xi'an Jiaotong University, Xi'an 710049, China
School of Physics, Xidian University, Xi'an710071, China
State Key Laboratory for Mechanical Behavior of Materials, School of Materials Science and Engineering,
Xi'an Jiaotong University, Xi'an 710049, China
State Key Laboratory for Mechanical Behavior of Materials, School of Materials Science and Engineering,
Xi'an Jiaotong University, Xi'an 710049, China
State Key Laboratory for Mechanical Behavior of Materials, School of Materials Science and Engineering,
Xi'an Jiaotong University, Xi'an 710049, China
School of Chemistry, Sun Yat-sen University, Guangzhou, Guangdong, 510006, China
Institute of Green Chemistry and Molecular Engineering, Sun Yat-sen University, Guangzhou, Guangdong, 510006, China
School of Mathematics and Physics, University of Science and Technology Beijing, Beijing 100083, China
State Key Laboratory for Mechanical Behavior of Materials, School of Materials Science and Engineering,
Xi'an Jiaotong University, Xi'an 710049, China
[E-mail: ][email protected]
State Key Laboratory for Mechanical Behavior of Materials, School of Materials Science and Engineering,
Xi'an Jiaotong University, Xi'an 710049, China
§ ABSTRACT
High thermoelectric properties are associated with the phonon-glass electron-crystal paradigm. Conventional wisdom suggests that the optimal bandgap of semiconductor to achieve the largest power factor should be between 6 and 10κ_BT. To address challenges related to the bipolar effect and temperature limitations, we present findings on Zintl-type TlAgI_2, which demonstrates an exceptionally low lattice thermal conductivity of 0.3 W m^-1 K^-1 at 300 K. The achieved figure of merit (ZT) for TlAgI_2,
featuring a 1.55 eV bandgap, reaches a value of 2.20 for p-type semiconductor.
This remarkable ZT is attributed to the existence of extended antibonding states [Ag-I] in the valence band. Furthermore, the bonding hierarchy, influencing phonon anharmonicity, and coordination bonds, facilitating electron transfer between the ligand and the central metal ion, significantly contribute to electronic transport.
This finding serves as a promising avenue for the development of high ZT materials with wide bandgaps at elevated temperatures.
Bonding Hierarchy and Coordination Interaction Leading to High Thermoelectricity in Wide Bandgap TlAgI_2
Zhibin Gao
September 9, 2024
========================================================================================================
§ I. INTRODUCTION
Thermoelectric technology, offering clean and sustainable means, can directly and reversibly convert heat into electrical energy. Typically, the thermoelectric conversion efficiency is gauged by the dimensionless figure of merit, ZT = S^2 σ T/κ, where S, σ, κ, and T represent the Seebeck coefficient, electrical conductivity, thermal conductivity, and working temperature, respectively. However, these parameters are tightly interconnected, and improving ZT necessitates optimizing the adversely interdependent S, σ, and κ as a collective. Therefore, there are several degrees of freedom to enhance ZT, such as spin, orbital, charge, and lattice <cit.>.
In a given working temperature range, the optimal ZT is constrained by the intrinsic electronic bandgap. Many celebrated narrow bandgap thermoelectric materials, such as PbTe (E_g=0.28 eV) <cit.> and (Bi,Sb)_2Te_3 (E_g=0.13 eV) <cit.>, have been identified. However, the thermoelectric properties are significantly affected when there is a substantial number of both electrons and holes contributing to charge transport, known as bipolar charge transport. This phenomenon occurs when electrons are excited across the bandgap, producing minority charge carriers (e.g., holes in an n-type material) in addition to majority charge carriers (e.g., electrons in an n-type material). Bipolar effects are observed in small bandgap materials at high temperatures (k_B∼E_g). Consequently, the Seebeck coefficient is dramatically affected because the minority charge carriers add a Seebeck voltage of the opposite sign to the majority carriers, greatly reducing the thermopower | S | <cit.>. Moreover, narrow-gap semiconductors cannot be effectively utilized at higher temperatures.
By employing a good rule of thumb, E_g = 2 e S_max T <cit.>, where S_max is the maximum Seebeck coefficient, and e is the unit charge, wide bandgap semiconductors could mitigate the bipolar effect and temperature range limitations <cit.>. In other words, wide bandgap semiconductors have the potential to overcome the restrictions of narrow bandgap materials and serve as promising thermoelectric candidates. For example, the three-element Heusler Li_2NaSb, with a 1.72 eV bandgap, achieves a ZT value of 1.20 <cit.>. A copper-tin compound, Cu_2Se, with a 1.20 eV bandgap, exhibits a ZT value of 1.40 <cit.>. The stannide tin compounds Cu_2ZnSnSe_4, featuring a 1.44 eV bandgap, demonstrate a ZT value of 0.75 <cit.>. However, all these systems have ZT values below 2.0, primarily due to poor electrical properties and high lattice thermal conductivity (κ_L).
In this study, we leverage chemical bonding hierarchy and coordination interaction to enhance the transport properties of the wide bandgap material TlAgI_2. The concept of chemical bond hierarchy involves ionic bonding, covalent bonding, and coordination interaction <cit.>, which explains the coexistence of weak and rigid bonds within materials. In materials undergoing thermally induced large amplitude vibrations, such as La_2Zr_2O_7 <cit.> and Bi_4O_4SeCl_2 <cit.>, the intrinsic coexistence of rigid crystalline sublattices and fluctuating noncrystalline sublattices is observed. This atomic-level heterogeneity results in vibrational modes that generate a mismatch in the phonon density of states, thereby enhancing phonon anharmonicity and reducing κ_L <cit.>.
We discovered that Zintl-type TlAgI_2 exhibits an ultralow κ_L of 0.30 W m^-1 K^-1 at 300 K, achieved by considering quartic anharmonicity renormalization and the off-diagonal term of the heat flux operators. Additionally, the weakening of bonds and strong phonon-phonon interactions are attributed to the antibonding states just below the Fermi level in the electronic band structure, arising from interactions between silver 4d and iodine 5p orbitals. Moreover, the unexpectedly strong hole transport performance, characterized by a large hole density of states, is influenced by the coordination interactions forming a stable coordination complex, Ag-I <cit.>. The wide bandgap, coupled with high energy band asymmetry, counteracts bipolar effects, resulting in a notably high Seebeck coefficient up to 704 μV K^-1 at a hole concentration of 10^18 at 1200 K. Ultimately, the achieved ZT values for TlAgI_2 with a 1.55 eV bandgap reach 2.20 and 1.82 for p-type and n-type concentrations, respectively. This finding suggests the potential of using bonding hierarchy and coordination interactions in designing high-temperature thermoelectric materials with wide bandgaps.
§ II. COMPUTATIONAL METHODS
Generally, κ_L is the summation of the Peierls contribution from diagonal term κ_p and the Wigner distribution
from off-diagonal term κ_c <cit.>, κ_Total = κ_c + κ_p. Wherein the κ_c originates from Wigner distribution associated with the wave-like tunnelling <cit.> and loss of coherence between different vibrational eigenstates. The κ_c can be expressed as,
κ_c = ħ^2/k_B T^2 V N_0∑_q∑_s ≠ s'ω^s_q + ω^s'_q/2v_q^s, s' v_q^s', s
×ω^s_q n^s_q ( n^s_q + 1 ) + ω^s'_q n^s'_q ( n^s'_q +1 ) / 4 ( ω^s'_q - ω^s_q )^2 + ( Γ^s_q + Γ^s'_q )^2
× ( Γ^s_q + Γ^s'_q ),
where ħ, k_B, T, V, N_0 are the reduced Planck constant,
Boltzmann constant, absolute temperature, primitive cell volume, and
the total number of sampled phonon wave vectors in the first Brillouin
zone, respectively.
For the Peierls-Boltzmann transport equation, the diagonal contribution κ_p can be calculated as,
κ_p=ħ^2/k_B T^2 V N_0∑_λ n_λ (n_λ + 1) ω^2_λv_λ⊗v_λτ_λ,
where κ_p represents the κ_p^3,4ph considering anharmonic phonon renormalization (APRN) at finite temperatures, three-phonon (3ph) and four-phonon (4ph) scatterings.
It is derived from Peierls contribution related to the particle-like propagation of phonon wave packets.
n_λ, ω_λ, v_λ, and τ_λ
are the equilibrium component
of the phonon population, frequency, group velocity, and lifetime for the
λ mode (wave vector q and branch index s), respectively.
Except for τ_λ, all the above parameters can be obtained from harmonic
approximation (HA). We adapted 3ph and 4ph scattering from the
self-consistent phonon (SCPH) <cit.> theory to obtained
τ_λ including anharmonic effects beyond perturbation
theory that considers the quantum effect of phonons <cit.>.
Among various existing approaches such as self-consistent ab initio lattice dynamics (SCAILD) <cit.>
and stochastic self-consistent harmonic approximation (SSCHA) <cit.>, self-consistent phonon (SCPH) approximation is one effective method that can rigorously account for the first-order correction of phonon
frequencies from the quartic anharmonicity. The SCPH approach can better describe the
soft phonon modes and strong anharmonicity. In brief, the SCPH can be
written as <cit.>
Ω_λ^2 = ω_λ^2+2Ω_λ∑_λ_1 I_λλ_1,
where ω_λ is the original phonon frequency from
the harmonic approximation and Ω_λ is the
temperature-dependent renormalized phonon frequency. The scalar
I_λλ_1 can be obtained by,
I_λλ_1=ħ/8 N_0V^(4) (λ,-λ,λ_1,-λ_1)/Ω_λΩ_λ_1[1+2n_λ(Ω_λ_1)],
in which V^(4) is the fourth-order IFCs in the reciprocal representation and
phonon population n_λ satisfies Bose-Einstein distribution as a function
of temperature. Both Eq. (<ref>) and Eq. (<ref>) have parameters
I_λλ_1 and Ω_λ in common, and thus SCPH equation can
be solved iteratively. Note that I_λλ_1 can be interpreted as the
interactions between a pair of phonon modes, λ, and λ_1 including
the temperature effects <cit.>.
DFT calculations were performed using the Vienna ab initio simulation
package (VASP) <cit.> with the projector-augmented
wave (PAW) method <cit.>. We used
the PBEsol functional to obtain lattice constants. Cutoff energy
of 400 eV was used with 11 × 11× 11 Monkhorst-Pack
k-grids. The self-consistent iteration for the energy convergence criterion
was 10^-8 eV, and all geometries were optimized by the conjugate-gradient
method until none of the residual Hellmann-Feynman forces exceeded 10^-6 eV/Å.
The optimized conventional cell lattice constant of tetragonal
I4/mcm phase (No. 140), a=b=8.188 Å, c=7.562 Å. A 2 × 2 × 2 supercell and 5 × 5 × 5 k-points were employed in all finite displacement
calculations.
We generated force-displacement data by performing ab initio molecular dynamics (AIMD) simulation with a 2 × 2 × 2 supercell at 300 K for 2000 steps with a time step of 2 fs using a Nosé-Hoover thermostat and 10^-6 eV energy threshold. We sampled 40 atomic configurations that were equally spaced in time by removing the first 400 steps from the trajectories and then randomly displaced all of the atoms within the supercell by 0.02 Å (second-order) and 0.1 Å (higher-order) in random directions in each configuration to decrease cross-correlations in the sensing matrix formed by products of atomic displacements. Finally, the 40 uncorrelated sets were computed using accurate DFT calculations with a 10^-8 eV energy convergence threshold.
We used 12 × 12 × 12 ngrids for κ_p^3ph, and 7 × 7 × 7 for κ_p^3,4ph and κ_c.
The electronic band structure and crystal orbital Hamilton population (COHP) were calculated using a 9 × 9 × 9 k-meshes.
The elastic constants, dielectric constants, deformation potential, and wave functions
were gained with 12 × 12 × 12 k-meshes.
The carrier transport properties were
obtained in uniform 41 × 41 × 45 k-point grids in electronic transport <cit.>.
We systematically studied the effect of quartic anharmonicity on the lattice dynamics, electronic transport, and thermal transport properties of TlAgI_2 by leveraging recent advances
including (i) compressive sensing lattice dynamics (CSLD) <cit.> to establish the high-order inter-atomic force constants (IFCs), that utilized the compressive sensing technique <cit.> to select the physically relevant IFCs from the force-displacement data under the constraints enforced by the space group symmetry. (ii) rigorous calculations of temperature-dependent phonons used SCPH theory and higher-order multiphonon scattering rates <cit.>, (iii) evaluation of κ_L employed a unified theory that accounts simultaneously for diagonal term from the standard Peierls contribution and off-diagonal terms from the coherent Wigner distribution <cit.>.
(iv) The Seebeck coefficient, conductivity, and power factor were calculated by considering the electron-phonon coupling such as the acoustic deformation potential, ionized impurity, and polar optical phonon scattering, as implemented in the amset package <cit.>.
§ III. RESULTS AND DISCUSSION
TlAgI_2 adopts a tetragonal structure (space group I4/mcm [140]), where Tl, Ag, and I occupy the 4a, 4b, and 8h sites with a total of 8 atoms in the primitive cell. In this structure, Tl and I form an octahedral cage <cit.>, illustrated in Fig. <ref>(a), with the Ag element embedded within the cage. All outcomes take into account the Self-Consistent Phonon (SCPH) effect, except for the Harmonic Approximation (HA). The lattice thermal conductivity (κ_L) is averaged over the three crystalline directions. κ_p^3,4ph represents lattice thermal conductivity considering quartic anharmonic phonon renormalization (APRN) and four-phonon (4ph) interactions.
The influence of SCPH is crucial, as evidenced by the contrast between HA and κ_p^3ph, highlighting a pronounced temperature effect on phonons. Compared with the κ_p^3ph value of 0.30 W m^-1 K^-1, κ_p^3,4ph decreases to 0.23 W m^-1 K^-1 at 300 K due to additional 4ph scattering. However, when considering the contribution of the off-diagonal term (κ_c), the total lattice thermal conductivity increases to 0.30 W m^-1 K^-1, constituting κ_p^3,4ph + κ_c. Interestingly, the contribution of coherent phonons κ_c grows significantly with increasing temperature. At room temperature, the lattice thermal conductivity of κ_p^3ph aligns with that of κ_p^3,4ph + κ_c.
Subsequently, we delve into the frequency-resolved analysis (filled region) and cumulative trends (solid lines) of κ_L at 300 K to further scrutinize the microscopic mechanisms of phonon vibrations leading to low κ_L, as illustrated in Fig. <ref>(a). The κ_L spectrum and the cumulative κ_L with respect to frequency reveal that the primary contributors to κ_L are the phonon branches within the 2-4 meV range, affirming the validity of our 4ph scattering calculation. At the same time, we find that acoustic phonons below 4 meV are the main contribution to the lattice thermal conductivity, no matter the κ_p^3ph (red line) and κ_p^3,4ph (blue line).
The temperature-dependent phonon dispersion unquestionably reveals the stiffening of both acoustic and optical branches with increasing temperature probably originating with weakly coupling between high-frequency optical phonons and overdamped acoustic phonons (I_λλ_1 is positive as mentioned in Eq. (<ref>)), as depicted in Fig. <ref>(b). The vibrational spectra of Tl atoms predominantly occupy the 2-4 meV regime, exerting significant influence on thermal transport as evidenced by the atom-projected phonon density of states (PDOS). It is also affirmed that the vibrations of Tl atoms serve as the primary scattering channel. Moreover, the strongly interlinked phonon branches within the low-frequency range of 4-5 meV are anticipated to establish a substantial phonon-phonon scattering channel for both acoustic and low-frequency optical modes <cit.>.
Finally, and perhaps most crucially, the outermost layer of Tl comprises three electrons, including 6s^2 and 6p^1. Theoretically, the valence states can be monovalent, divalent, and trivalent. Conversely, Ag possesses only one electron in the outermost layer, 5p^1. This electron likely transfers to I, forming a stable ionic bond. Consequently, Tl can contribute only one valence electron to I, leaving a 6s^2 lone pair electron. The Bader charge, detailed in Table SII in the Supplemental Materials,
further supports this. As a result, the compound exhibits the following valence state: Tl^1+Ag^1+(I^1-)_2.
The monovalent Tl in the TlAgI_2 system <cit.> with a 6s^2 lone pair, exhibits overlapping wave functions of the lone electron pair with nearby valence electrons. This overlap causes a nonlinear repulsive electrostatic force as atoms approach each other, leading to off-centering of the atoms. The interaction of the lone electron pair originating from the Tl 6s orbital with adjacent atoms induces bond anharmonicity and significantly reduces κ_L <cit.>.
All available three-phonon (3ph) and four-phonon (4ph) scattering rates are depicted in Fig. <ref>(a). At the same temperature, the relationship of scattering rates (SR) can be expressed as SR_4ph ≥ SR_3ph, indicating that 4ph scattering is not negligible in this system. Furthermore, scattering rates increase with temperature. Therefore, by including 4ph scattering, the lattice thermal conductivity is generally smaller than when considering only 3ph scattering, resulting in lower κ_L with increasing temperature.
Fig. <ref>(b) displays the absorption and emission processes of the 3ph and 4ph as a function of frequency at 300 K. For the 3ph scattering, we consider the phonon
splitting (λ→λ_1 + λ_2) and combination (λ
+ λ_1 →λ_2). In the case of 4ph, we count for phonon splitting (λ→λ_1 + λ_2 + λ_3), combination (λ + λ_1 + λ_2 →λ_3), as well as
redistribution (λ + λ_1 →λ_2 + λ_3) processes.
In the low-frequency region dominated by acoustic modes, 3ph combination is stronger than the splitting situation, while the redistribution process of 4ph is dominant. However, in the high-frequency region dominated by optical modes, the splitting process of 3ph becomes more important.
For 4ph scattering, the splitting course dominates the scattering process and has the same order as the redistribution process. A similar phenomenon was observed in the case of the halide perovskite material CsPbBr_3 <cit.>.
The phonon phase space at different temperatures for 3ph and 4ph interactions reveals a strong temperature dependence, particularly for 4ph, as illustrated in Fig. <ref>(c). The phase space of both 3ph and 4ph scattering increases as the temperature rising from 300 K to 700 K and 1200 K. It is important to note that the units of phase space for 3ph and 4ph are different, preventing a direct comparison between them <cit.>.
We observed that crystalline materials with an antibonding state and bonding hierarchy exhibit a coexistence of population phonons and coherent phonons. Coherent phonons are likely to manifest in the presence of complex crystals <cit.>, which require a large unit cell with numerous closely spaced phonon branches, strong anharmonicity (where phonon linewidths exceed interbranch spacings) <cit.>, and phonons below the Ioffe-Regel Limit (mean free path is around the interatomic spacing) contributing to heat transport due to their wavelike ability to interfere and tunnel <cit.>, as depicted in Fig. <ref>(d). Given that most mean free paths are smaller than the minimum atomic distance, known as the Ioffe-Regel Limit, it suggests that κ_c is likely important in TlAgI_2 <cit.>.
To deepen our understanding of the physical mechanism behind the ultralow κ_L and the significance of anharmonicity, we calculate the Grüneisen parameter, as illustrated in Fig. <ref>(e). The extent of anharmonicity is typically measured by the Grüneisen parameters (γ). In the top panel of Fig. <ref>(e), relatively large values of γ are observed in intertwined portions of the acoustic and optical branches regime at 300 K, confirming stronger scattering within the frequency range of 2 to 4 meV.
This is closely linked to the heavy atomic mass of the Tl element and the bonding hierarchy.
Moreover, TlAgI_2 exhibits substantial anharmonicity with a total Grüneisen value of 1.69, indicating more anharmonicity than the PbTe value of 1.45 <cit.>. Fig. <ref>(f) demonstrates the temperature effect on v^2, where v represents the group velocity. We observe a low speed of sound at 300 K for TlAgI_2, stemming from the heavy atomic mass of the Tl element and the vibration resistance, as indicated by the renormalized phonon dispersions at finite temperature.
In Fig. <ref>(a), the electronic band structure and projective density of states for different elemental orbitals of TlAgI_2 are presented. TlAgI_2 is identified as an indirect bandgap semiconductor, where the conduction band minimum (CBM) is located at the M point, and the valence band maximum (VBM) is situated at the Γ to X high symmetry line. The computed bandgap using the PBEsol functional is 1.55 eV, consistent with the findings from the Material Project database mp-27801.
TlAgI_2 is characterized as a wide bandgap semiconductor, a common feature in high temperature thermoelectric semiconductor materials, such as half-heusler alloys FeNb_0.88Hf_0.12Sb <cit.>.
Specifically, the CBM is predominantly contributed by the Tl atom's 6p orbital, while the VBM is almost entirely attributed to the Ag 4d and I 5p orbitals.
The conduction band near the Fermi energy level exhibits pronounced valleys, while the valence band appears notably flat. In semiconductors with highly asymmetric bands near the Fermi energy, there is a substantial difference in the density of states between electrons and holes. When the concentration of one type of carrier significantly exceeds that of the other, the detrimental impact of the bipolar effect on the Seebeck coefficient can be effectively mitigated <cit.>. Simultaneously, a higher valence density of states enhances the material's responsiveness to the effects of thermoelectric conversion.
For an in-depth understanding of atomic-level dynamics, we computed potential energy curves by displacing atoms along the z-direction relative to their static equilibrium positions at the Γ point. As depicted in Fig. <ref>(b), Ag and I atoms are confined in steep potential wells, while the Tl atom resides in a flat potential well. This suggests that the Tl atom can readily oscillate within the surrounding hollow cage with a large amplitude. The electrostatic repulsion between the localized electrons of Tl and the neighboring atoms likely induces significant vibrations of Tl. Consequently, chemical bonds various strengths coexist in the system, the pronounced mismatch in their vibrational modes impedes phonon propagation.
This increased anharmonicity effectively reduces the κ_L <cit.>. The flat potential well of Tl is consistent with its large Atomic Displacement Parameter (ADP) . The loose bonding between Tl and other elements, coupled with electrostatic repulsion between the localized electrons (lone pair 6s^2) of Tl and the surroundings, plausibly drives substantial displacement of Tl, leading to a large ADP along the c-axis, as illustrated in the Supplemental Materials FIG. S2.
The schematic of coordination compounds Ag^1+I^1-, illustrated in Fig. <ref>(c), reveals the unique electron configurations of the components. The I^1- anion adheres to the eight-electron rule, possessing four lone pair electrons. Conversely, the Ag^1+ cation displays unpaired 5s and 5p orbitals. This distinctive electron arrangement leads to the formation of the coordination bond I^1--Ag^1+, aligning with the Projected Density of States (PDOS) where p orbitals of Ag and I significantly contribute to the valence band. The coordination interactions are strengthened by similar energy levels and matching symmetry of the orbitals involved in bond formation. This results in the establishment of a stable coordination complex, AgI, where the I^1- anion acts as a ligand, and the Ag^1+ cation serves as the central metal ion. Such coordination bonds play a crucial role in electrical transport processes, facilitating the transfer of electrons between the ligand and the central metal ion <cit.>.
Furthermore, the exceedingly weak bonding between Tl and I, particularly evident around the Tl element and the delocalization of Tl, as depicted in Fig. <ref>(d), suggests a spherical charge density due to the electron lone pair of Tl (refer to Fig. <ref>(b)). This implies that the interactions between Tl and the surrounding Ag and I atoms are primarily electrostatic. With a combination of ionic bonding, covalent bonding, and coordination interaction, the hierarchical bonding and mismatched bond energies result in strong anharmonic interactions, especially below the Fermi level. Due to the strong electron separability at the top of the valence band, the robust coordination interactions of Ag and I contribute to a high density of states effective mass <cit.>.
Fig. <ref>(e) illustrates the projected crystal orbital Hamilton population (pCOHP) analysis. The Ag-I interaction exhibits strong bonding states, with antibonding states below the Fermi level in the electronic structure resulting from the interactions between silver 4d and iodine 5p orbitals. This weakens the bond and induces strong phonon anharmonicity. In contrast, Tl-Ag and Tl-I interactions display weak bonding states. Evidently, Tl atoms are loosely bound to other atoms, attributed to the lone pair and rattling vibration mode. The value of -ICOHP represents the integrals from -infinity to the Fermi level of COHP, signifying the ability to form bonds between different atoms, consistent with the 2D electron location function shown in Fig. <ref>(d).
The hole transport performance for p-type TlAgI_2 is depicted in Fig. <ref>. Observations reveal that the Seebeck coefficient S decreases with increasing carrier concentration n of holes at the same temperature, while S increases with increasing T at the same n. The electrical conductivity σ of p-type TlAgI_2 shows a positive correlation with the carrier concentration n and a negative correlation with the temperature T. Notably, p-type TlAgI_2 exhibits a larger S due to high energy band asymmetry and a wide bandgap that counteracts the bipolar effect, especially at high temperatures, as illustrated in Fig. <ref>(a). The conductivity σ is influenced by the carrier concentration n and inversely proportional to the temperature T. The former is attributed to the increasing concentration n, contributing to the conductivity, while the latter is due to the boosting of the electron-phonon interaction scattering rate with increasing temperature.
In general, the electrical conductivity σ and electronic thermal conductivity κ_e exhibit a similar trend with the increase in hole concentration, as depicted in Fig. <ref>(b) and Fig. <ref>(c), consistent with the Wiedemann-Franz law (κ_e = L σ T) <cit.>. Due to the coexistence of relatively large Seebeck coefficient S and σ, a high thermoelectric power factor (PF) is achieved, for instance, 8.94 μW cm^-1 K^-2 at 300 K in the z-direction at a hole concentration (n_h) of 10^21 cm^-3. Notably, there is a positive correlation between the bandgap and the temperature at the highest ZT value <cit.>. Considering the bandgap of TlAgI_2 is 1.55 eV, which is relatively large, we tentatively selected the highest temperature to be 1200 K, commonly used for high-temperature thermoelectric materials. It is observed that the highest power factor occurs at a hole doping concentration of 10^21. The electrical transport performance for n-type TlAgI_2 is presented in the Supplemental Material and shows lower performance.
As depicted in Fig. <ref>, the ZT values for p-type doping are relatively high, reaching 2.20, whereas for n-type doping, it is only 1.82. The highest ZT for n-type doping is observed at a carrier concentration of 2×10^19 at 1200 K, and the most reasonable concentration for p-type doping is 2×10^20 at 1200 K. An interesting observation is that the ZT without considering κ_c can be enhanced from 2.20 to 3.00, even though the value of κ_c is only 0.08 W m^-1 K^-1 (as shown in Supplemental Material TABLE. SI.) at 1200 K. Therefore, it is crucial to consider κ_c when calculating the ZT to avoid overestimating the thermoelectric performance, especially for materials with low thermal conductivity <cit.>.
Based on our investigation of existing wide bandgap thermoelectric materials, including pure semiconductors and doping-modified multifunctional semiconductor materials illustrated in Fig. <ref>, it is noteworthy that most of the thermoelectric materials with a bandgap exceeding 1.0 eV exhibit ZT values below 1.0. A notable exception is Cu_2Se, where doping with sulfur element elevates the ZT from 1.4 to 2.0. Consequently, our findings establish that the ZT value of 2.20 for TlAgI_2 takes a leading position in the bandgap range of 1.0 eV to 3.5 eV. This suggests that TlAgI_2 holds significant promise as a potential thermoelectric material at high temperatures. Furthermore, there is potential to enhance the ZT value through doping. Therefore, it is advisable to explore the application of wide bandgap semiconductors in high-temperature thermoelectric materials.
§ IV. CONCLUSIONS
In summary, we employed first-principles calculations, the self-consistent phonon (SCPH) theory, and Boltzmann transport equations to investigate the thermal and electrical transport properties of TlAgI_2. The results revealed distinctive effects of quartic anharmonicity and coherent phonons on lattice thermal conductivity. Key findings include:
(i) The study highlighted the significant contributions of four-phonon processes, anharmonicity phonon renormalization, and coherent phonons in achieving ultralow thermal conductivity. These factors are crucial in theoretical predictions for high thermoelectric performance materials.
(ii) The observed low thermal conductivity in TlAgI_2 is attributed to antibonding states of Ag 4d and I 5p orbitals below the Fermi level, along with the bonding hierarchy of ionic, covalent, and coordination interactions. Strong Ag-I coordination interactions lead to a large valence band state density. Favorable electrical transport properties are linked to high energy band asymmetry and a wide bandgap, countering bipolar effects, especially at high temperatures.
(iii) TlAgI_2 emerges as a potential candidate for thermoelectric applications due to its ultralow thermal conductivity and favorable electrical transport properties.
This research contributes to advancing our understanding of the thermal and electrical properties of TlAgI_2, offering guidance for the exploration of materials with wide bandgaps for high-temperature thermoelectric applications.
§ ACKNOWLEDGMENTS
This work is sponsored by the Key Research and Development Program of the Ministry of Science and Technology (No.2023YFB4604100).
We acknowledge the support from the National Natural Science Foundation of China
(No.12104356 and No.52250191, No.22103099),
the Opening Project of Shanghai Key Laboratory of Special Artificial Microstructure Materials
and Technology (No.Ammt2022B-1), and the Fundamental Research Funds for the Central
Universities.
W. Shi acknowledges the support from the Guangzhou Science and Technology Plan Project (202201011155).
We also acknowledge the support by HPC Platform, Xi’an Jiaotong University.
|
http://arxiv.org/abs/2409.02398v1 | 20240904024734 | Sharing Analysis in the Pawns Compiler | [
"Lee Naish"
] | cs.PL | [
"cs.PL",
"D.3.4; D.3.2"
] |
Pawns sharing
L. Naish
Lee Naish
Computing and Information Systems,
University of Melbourne, Melbourne 3010, Australia
[email protected],
Sharing analysis in the Pawns compiler
Lee Naish
======================================
§ ABSTRACT
Pawns is a programming language under development that supports algebraic
data types, polymorphism, higher order functions and “pure” declarative
programming. It also supports impure imperative features including
destructive update of shared data structures via pointers, allowing
significantly increased efficiency for some operations. A novelty of
Pawns is that all impure “effects” must be made obvious in the source
code and they can be safely encapsulated in pure
functions in a way that is checked by the compiler. Execution of a pure
function can perform destructive updates on data structures that are local
to or eventually returned from the function without risking modification
of the data structures passed to the function. This paper describes the
sharing analysis which allows impurity to be encapsulated. Aspects of the
analysis are similar to other published work, but in addition it handles
explicit pointers and destructive update, higher order functions including
closures and pre- and post-conditions concerning sharing for functions.
Keywords: functional programming language, destructive update, mutability,
effects, algebraic data type, sharing analysis, aliasing analysis
§ INTRODUCTION
This paper describes the sharing analysis done by the compiler for Pawns
<cit.>, a programming language that is currently under development.
It is a slightly updated version of <cit.>, with a section
that briefly describes a new abstract domain used for the analysis,
implemented since the original publication.
Pawns supports both declarative and imperative styles of programming.
It supports algebraic data types, polymorphism, higher order programming
and “pure” declarative functions, allowing very high level reasoning
about code. It also allows imperative code, where programmers can
consider the representation of data types, obtain pointers to the
arguments of data constructors and destructively update them. Such code
requires the programmer to reason at a much lower level and consider
aliasing of pointers and sharing of data structures. Low level “impure”
code can be encapsulated within a pure interface and the compiler checks
the purity. This requires analysis of pointer aliasing and data structure
sharing, to distinguish data structures that are only visible to the
low level code (and are therefore safe to update) from data structures
that are passed in from the high level code (for which update would
violate purity). The main aim of Pawns is to get the benefits of purity
for most code but still have the ability to write some key components
using an imperative style, which can significantly improve efficiency
(for example, a more than twenty-fold increase in the speed of inserting
an element into a binary search tree).
There are other functional programming languages, such as ML <cit.>,
Haskell <cit.> and Disciple <cit.>, that allow
destructive update of shared data structures but do not allow this
impurity to be encapsulated. In these languages the ability to update
the data structure is connected to its type[Disciple uses
“region” information to augment types, with similar
consequences.]. For a data structure to be
built using destructive update its type must allow destructive update
and any code that uses the data structure can potentially update it
as well. This prevents simple declarative analysis of the code and can
lead to a proliferation of different versions of a data structure, with
different parts being mutable. For example, there are four different
versions of lists, since both the list elements and the “spine” may
(or may not) be mutable, and sixteen different versions of lists of
pairs. There is often an efficiency penalty as
well, with destructive update requiring an extra level of indirection
in the data structure (an explicit “reference” in the type with most
versions of ML and Haskell). Pawns avoids this inefficiency and separates
mutability from type information, allowing a data structure to be mutable
in some contexts and considered “pure” in others. The main cost from
the programmer perspective is the need to include extra annotations and
information in the source code. This can also be considered a benefit,
as it provides useful documentation and error checking. The main
implementation cost is additional analysis done by the compiler, which
is the focus of this paper.
The rest of this paper assumes some familiarity with
Haskell and is structured as follows.
Section <ref> gives a brief overview of the relevant
features of Pawns. An early pass of the compiler translates Pawns
programs into a simpler “core” language; this is
described in Section <ref>.
Section <ref> describes the abstract domain originally used for
the sharing analysis algorithm,
Section <ref> describes the new abstract domain that is
now used,
Section <ref> defines the algorithm itself and
Section <ref> gives an extended example.
Section <ref> briefly discusses precision and efficiency
issues.
Section <ref> discusses related work and
Section <ref> concludes.
§ AN OVERVIEW OF PAWNS
A more detailed introduction to Pawns is given in <cit.>.
Pawns has many similarities with other functional languages. It supports
algebraic data types with parametric polymorphism, higher order
programming and curried function definitions. It uses strict evaluation.
In addition, it supports destructive update via “references” (pointers)
and has a variety of extra annotations to make impure effects more clear
from the source code and allow them to be encapsulated in pure code.
Pawns also supports a form of global variables (called state variables)
which support encapsulated effects, but we do not discuss them further
here as they are handled in essentially the same way as other variables
in sharing analysis. Pure code can be thought of in a declarative way,
were values can be viewed abstractly, without considering how they
are represented. Code that uses destructive update must be viewed at a
lower level, considering the representation of values, including sharing.
We discuss this lower level view first, then briefly present how impurity
can be encapsulated to support the high level view. We use Haskell-like
syntax for familiarity.
§.§ The low level view
Values in Pawns are represented as follows. Constants (data constructors
with no arguments) are represented using a value in a single word.
A data constructor with N>0 arguments is represented using a word
that contains a tagged pointer to a block of N words in main memory
containing the arguments. For simple data types such as lists the tag
may be empty. In more complex cases some bits of the pointer may be used
and/or a tag may be stored in a word in main memory along with the arguments.
Note that constants and tagged pointers are not always stored in main
memory and Pawns variables may correspond to registers that contain
the value. Only the arguments of data constructors are guaranteed to
be in main memory. An array of size N is represented in the same
way as a data constructor with N arguments, with the size given by
the tag. Functions are represented as either a constant (for functions
that are known statically) or a closure which is a data constructor
with a known function and a number of other arguments.
Pawns has a type constructor, representing a
reference/pointer to a value of type (which must be stored
in memory). Conceptually we can think of a corresponding
data constructor with a single argument, but this is never explicit
in Pawns code. Instead, there is an explicit dereference operation:
denotes the value points to. There are two
ways references can be created: let bindings and pattern bindings. A let
binding allocates a word in main memory, initializes it
to and makes a reference to it (Pawns omits
Haskell's and keywords; the scope is the
following sequence of statements/expressions). In a pattern
binding, if is the argument of a data constructor pattern,
is bound to a reference to the corresponding argument of the
data constructor if pattern matching succeeds (there is also a primitive
that returns a reference to the i^th element of an array).
Note it is not possible
to obtain a reference to a Pawns variable: variables do not denote memory
locations. However, a variable of type denotes
a reference to a memory location containing a value of type and
the memory location can be destructively updated by .
Consider the following code. Two data types are defined. The code
creates a reference to ( is stored in a newly
allocated memory word) and a reference to that reference (a pointer
to the word containing is put in another allocated word).
It also creates a list containing constants and
(requiring the allocation of two cons cells in memory; the
is copied). It deconstructs the list to obtain pointers to the head and
tail of the list (the two words in the first cons cell) then destructively
updates the head of the list to be .
The memory layout after the assignment can be pictured as follows,
where boxes represent main memory words and and
followed by an arrow represent pointers (no tag is used in either case):
[scale=1.0]
at (0,1.5) (c1) ;
at (3.5,1.5) (c2)
[4em][4em];
at (7.5,1.5) (ct)
[4em][4em];
[->] (c1) – (c2);
at (2.8,1.2) (h) ;
at (4.5,1.5) (t1) ;
at (4.0,1.2) (t) ;
at (8.2,0.7) (nb)
[4em];
at (4.4,0.7) (np) ;
at (3.4,0) (npp) ;
at (6.0,0) (npb)
[4em];
at (6.2,0.0) (r) ;
[->] (npp) – (npb);
[->] (r) – (nb);
[->] (np) – (nb);
at (0,0.7) (hp) ;
at (1,0.7) (hp1) ;
at (0,0) (tp) ;
at (1,0) (tp1) ;
[->] (hp1) – (h);
[->] (t1) – (ct);
[->] (tp1) – (t);
The destructive update above changes the values of both and
(the representations are shared). One of the novel features
of Pawns is that the source code must be annotated with “!” to make it
obvious when each “live” variable is updated. If both
and are used later, the assignment statement above must
be written as follows, with prefixed with “!” and an
additional annotation attached to the whole statement indicating
may be updated:
We say that the statement directly updates
and indirectly updates , due to sharing of
representations. Similarly, if was passed to a
function that may update it, additional annotations are required.
For example, makes the direct
update of and indirect update of clear.
Sharing analysis is used to ensure that source code contains all the
necessary annotations. One aim of Pawns is that any effects of code
should be made clear by the code. Pawns is an acronym for Pointer
Assignment With No Surprises.
Pawns functions have extra annotations in type signatures to
document which arguments may be updated. For additional documentation,
and help in sharing analysis, there are annotations to declare what
sharing may exist between arguments when the function is called
(a precondition) and what extra sharing may be added by executing the
function (called a postcondition, though it is the union of the pre- and
post-condition that must be satisfied after a function is executed).
For example, we may have:
[4]
The “!” annotation on parameter declares the first
argument of is mutable. The default is that arguments
are not mutable.
As well as checking for annotations on assignments and function calls,
sharing analysis is used to check that all parameters which may be
updated are declared mutable in type signatures, and pre- and post-conditions
are always satisfied. For example, assuming the previous code which
binds , the call annotates all
modified variables but violates the precondition of
because there is sharing between and at the
time of the call. Violating this precondition allows cyclic structures
to be created, which is important for understanding the code.
If the precondition was dropped, the second argument of
would also need to be declared mutable in the type signature and the
assignment to would require to be annotated.
In general, there is an inter-dependence between “!” annotations in the
code and pre- and post-conditions. More possible sharing at a call means
more “!” annotations are needed, more sharing in (recursive) calls and
more sharing when the function returns.
Curried functions and higher order code are supported by attaching
sharing and destructive update information to each arrow in a type,
though often the information is inferred rather than being given
explicitly in the source code. For example, implicit in the declaration
for above is that called with a single
argument of type creates a closure of type
containing that argument (and thus sharing the object of type
). The explicit sharing information describes applications
of this closure to another argument. There is a single argument in
this application, referred to with the formal parameter .
The other formal parameter, , refers to the argument of the
closure. In general, a type with N arrows in the “spine” has K+N
formal parameters in the description of sharing, with the first K
parameters being closure arguments.
The following code defines binary search trees of integers
and defines a function that takes a pointer to a tree and
inserts an integer into the tree. It uses destructive update, as would
normally be done in an imperative language. The declarative alternative
must reconstruct all nodes in the path from the root down to the new node.
Experiments using our prototype implementation of Pawns indicate that for
long paths this destructive update version is as fast as hand-written
C code whereas the “pure” version is more than twenty times slower,
primarily due to the overhead of memory allocation.
[4]
§.§ The high level view
Whenever destructive update is used in Pawns, programmers must be aware
of potential sharing of data representations and take a low level view.
In other cases it is desirable to have a high level view of values,
ignoring how they are represented and any sharing that may be present.
For example, in the two trees and depicted below,
it is much simpler if we do not have to care or know about the sharing
between the trees and within tree . The high level view
is they are both just .
[scale=1.0]
at (-0.5,1.8) (t1) ;
at (2.5,1.8) (t2) ;
at (1.0,1.0) (t1a)
[4em][4em][4em];
[->] (-0.2,1.6) – (-0.6,1.3);
[->] (2.9,1.6) – (4.5, 1.3);
at (6.5,1.0) (t2a)
[4em][4em][4em];
at (3.0,0.0) (l1a)
[4em][4em][4em];
[->] (-0.4,0.8) – (1.0,0.4);
[->] (2.4,0.8) – (1.2, 0.4);
at (8.5,0.0) (l2a)
[4em][4em][4em];
[->] (5.1,0.8) – (1.6,0.4);
[->] (7.9,0.8) – (7.0, 0.4);
Pawns has a mechanism to indicate that the high level view is
taken. Pre- and post-conditions can specify sharing with a special
pseudo-variable named [There is conceptually
a different variable for each distinct type.].
The sharing analysis of the Pawns compiler allows a distinction between
“abstract” variables, which share with and for which
the programmer takes a high level view, and “concrete” variables for
which the programmer must understand the representation and explicitly
declare all sharing in pre- and post-conditions. The analysis checks
that no live abstract variables can be destructively updated. Thus if a
function has a parameter which is updated, it must be declared mutable and
must not be declared to share with in the precondition
(non-mutable parameters may or may not share with ).
Checking of preconditions ensures that abstract variables are not
passed to functions which expect concrete data structures. For example,
an abstract tree cannot be passed to because the
precondition allows no sharing with . It is important
that the tree structure is known when is used
because the result depends on it. For example, inserting into the right
subtree of only affects this subtree whereas inserting into
the right subtree of (which has the same high level value)
also changes the left subtree of both and .
Note that concrete variables can be passed to functions which allow
abstract arguments. Pawns type
signatures that have no annotations concerning destructive update or
sharing implicitly indicate no arguments are destructively updated and
the arguments and result share with . Thus a subset of
Pawns code can look like and be considered as pure functional code.
The following code defines a function that takes a list of integers and
returns a binary search tree containing the same integers. Though it uses
destructive update internally, this impurity is encapsulated and it can
therefore be viewed as a pure function. The list that is passed in as an
argument is never updated and the tree returned is abstract so it is
never subsequently updated (a concrete tree could be returned if an
explicit postcondition without was given).
An initially empty tree is created locally.
It is destructively updated by inserting each integer of the list into it
(using , which calls ), then the
tree is returned. Within the execution of it is important
to understand the low level details of how the tree is represented,
but this information is not needed outside the call.
§ CORE PAWNS
An early pass of the Pawns compiler converts all function definitions
into a core language by flattening nested expressions, introducing extra
variables et cetera. A variable representing the return value of the
function is introduced and expressions are converted to bindings for
variables. A representation of the core language version of code is
annotated with type, liveness and other information prior to sharing
analysis. We just describe the core language here. The right side of
each function definition is a statement (described using the definition
of type below), which may contain variables, including
function names (), data constructors ()
and pairs containing a pattern () and statement for case
statements. All variables are distinct except for those in recursive
instances of and variables are renamed to avoid any
ambiguity due to scope.
Patterns in the core language only bind references to arguments — the
arguments themselves must be obtained by explicit dereference operations.
Pawns supports “default” patterns but for simplicity of presentation
here we assume all patterns are covered in core Pawns and we include an
error primitive. Similarly, we just give the general case for application
of a variable to N>0 arguments; our implementation distinguishes some
special cases. Memory is allocated for ,
(for non-constants) and (for unsaturated applications which
result in a closure).
The runtime behaviour of is identical to
but it is treated differently in type analysis.
Sharing and type analysis
cannot be entirely separated. Destructive update in the presence of
polymorphic types can potentially violate type safety or “preservation”
— see <cit.>, for example. For a variable whose type is
polymorphic (contains a type variable), we must avoid assigning a value
with a less general type. For example, in the type
of is “list of ”, where is a type variable.
Without destructive update it should be possible to use
wherever a list of any type is expected. However, if is
then assigned a list containing integers (which has a less general type),
passing it to a function that
expects a list of functions violates type safety (“calling” an arbitrary
integer is not safe). Pawns allows expressions to have their inferred
types further instantiated using “::”, and the type checking pass of
the compiler also inserts some type instantiation. The type checking
pass ensures that direct update does not involve type instantiation but
to improve flexibility, indirect update is checked during the sharing
analysis.
§ THE ABSTRACT DOMAIN
The representation of the value of a variable includes some set of main
memory words (arguments of data constructors). Two variables share if
the intersection of their sets of main memory words is not empty. The
abstract domain for sharing analysis must maintain a conservative
approximation to all sharing, so we can tell if two variables possibly
share (or definitely do not share). The abstract domain we use is a
set of pairs (representing possibly intersecting sets of main memory
locations) of variable components. The different components of
a variable partition the set of main memory words for the variable.
The components of a variable depend on its type. For non-recursive
types other than arrays, each possible data constructor argument is
represented separately. For example,
the type can have an argument
of an outer data constructor, an inner
and and . A component can be represented
using a list of pairs containing a data constructor and an
argument number, giving the path from the outermost data constructor
to the given argument. For example, the components of the type above
can be written as: , ,
and .
If variable has value , the expression
represents the single main memory word containing the
occurrence of .
For types we proceed as if there was a
data constructor, so represents the word
points to. For function types, values may be closures. A closure that
has had K arguments supplied is represented as a data constructor
𝙲𝚕_K with these K arguments; these behave in the same way as
other data constructor arguments with respect to sharing,
except Pawns provides no way to obtain a pointer to a closure argument.
Closures also
contain a code pointer and an integer which are not relevant to sharing so
they are ignored in the analysis.
We also ignore the subscript on the data constructor
for sharing analysis because type and sharing analysis only give a
lower bound on the number of closure arguments. Our analysis orders
closure arguments so that the most recently supplied argument is first
(the reverse of the more natural ordering). Consider the code below,
where is a function that is defined with four or more
arguments. The sharing analysis proceeds as if the memory layout was as
depicted in the diagram. The pre- and post-conditions of
are part of the type information associated with ,
and .
[c]0.4
[c]0.5
[scale=1.0]
at (0,3.3) (ip1) ;
at (5.5,3.3) (ip2)
[4em];
[->] (ip1) – (ip2);
at (0,2.6) (c1) ;
at (2.9,2.6) (c1a)
[4em][4em];
[->] (c1) – (c1a);
at (2.8,2.6) (c1b) ;
at (0,1.9) (c2) ;
at (0.7,1.9) (c2a) ;
at (1.6,2.3) (c2b) ;
[->] (c2a) – (c2b);
[->] (4.1,2.6) – (ip2) ;
at (0,1.2) (c3) ;
at (3.5,1.2) (f2)
[4em][4em][4em];
at (2.8,1.2) (f2a) ;
[->] (c3) – (f2);
[->] (5.2,1.3) – (ip2) ;
at (2.8,1.4) (f2c) ;
For arrays, is used to represent all words in the array.
The expression, represents the arguments of all
elements in an array of values.
For recursive types, paths are “folded” <cit.>
so there are a finite number
of components. Here we present the original published version of the
abstract domain, which is also used in later examples of the sharing
analysis algorithm. The compiler now uses a more expressive abstract
domain for recursive types, described in Section <ref>.
If a type T has sub-component(s) of type T we use
the empty path to denote the sub-component(s). In general, we construct
a path from the top level and if we come across a sub-component of type T
that is in the list of ancestor types (the top level type followed by the
types of elements of the path constructed so far) we just use the path
to the ancestor to represent the sub-component. Consider the following
mutually recursive types that can be used to represent trees which
consist of a node containing an integer and a list of sub-trees:
For type we have the components
(this folded path represents both and
, since they are of type ),
and
.
The expression represents the set of memory
words that
are the first argument of in variable of type
.
For type we have the components
(for , of type ),
and
(which is also the folded version of
, of type ).
In our sharing analysis algorithm we use a function (fold
component) which takes a v.c pair,
and returns v.c' where c' is the correctly folded component for the
type of variable v. For example, =
, assuming has type .
As well as containing pairs of components for distinct variables which may
alias, the abstract domain contains “self-alias” pairs for
each possible component of a variable which may exist. Consider the
following two bindings and the corresponding diagram (as with
, no tag is used for ):
[c]0.4
[c]0.5
[scale=1.0]
at (0,1.7) (c1) ;
at (3.5,1.7) (c2)
[4em][4em];
[->] (c1) – (c2);
at (0,0.7) (d1) ;
at (3.5,0.7) (d2)
[4em][4em];
at (2.8,1.5) (c2a) ;
at (2.8,0.7) (d2a) ;
[->] (d1) – (d2);
[->] (d2a) – (c2a);
With our domain, the most precise description of sharing after
these two bindings is as follows. We represent an alias pair as a set
of two variable components. The first five are self-alias pairs and
the other two describe the sharing between and .
[4]
Note there is no self-alias pair for since there is no strict
sub-part of that is an . Similarly, there
is no alias between and any part of .
Although the value is used as the first argument of
in , this is not a main memory word that is
used to represent the value of (indeed, the value of
has no cells). The tagged pointer value stored in variable
(which may be in a register) is copied into the cons cell.
Such descriptions of sharing are an abstraction of computation states.
The set above abstracts all computation states in which
is a tree with a single node, is a list of trees, elements
of may be or have as a subtree, and
there are no other live variables with non-atomic values.
§ THE NEW ABSTRACT DOMAIN
The Pawns compiler no longer uses empty paths to represent components
of variables with recursive types. One of the motivations was to get
more precise sharing analysis for code such as the following, which
repeatedly assigns different RTrees to a single pointer variable
(similar imprecision occured for other recursively defined types).
With the original domain, the memory word is represented
by , which also represents words that are the first
argument of a cells within the tree. The compiler
concludes the assignments may update and and
introduce sharing between , and .
The code can be re-written by renaming for each assignment
to get more precised analysis, but it can be inconvenient for programmers
to repeatedly rename variables and ideally it should not be necessary.
The new abstract domain used by the compiler contains all non-empty paths
that avoid repetitions corresponding to recursion in the type. Thus
will typically have four components: ,
(instead of ), and both
and . Only paths with repeated data constructors
are folded. For example, is folded
to become . Note that
represents memory words that arguments of whereas
represents memory words that arguments of
, and these must be distinct, even though they have the
same type. This distinction was not made in the old domain. Similarly,
there will typically be five components of ,
plus the four tree components prefixed by , so there is a
distinction made between the memory word used for and those
used by the tree. With the new abstract domain, sharing analysis for
let bound ref variables loses no information unless there is recursion
through the type. Note that the examples below use the old
abstract domain. Also, the sharing algorithm given below treats empty
paths as special in some places so the current compiler uses a some small
modifications to this algorithm.
§ THE SHARING ANALYSIS ALGORITHM
We now describe the sharing analysis algorithm. Overall, the compiler
attempts to find a proof that for a computation with a depth D of
(possibly recursive) function calls, the following condition C holds,
assuming C holds for all computations of depth less than D. This
allows a proof by induction that C holds for all computations that
terminate normally.
C: For all functions f, if the precondition of
f is satisfied (abstracts the computation state)
whenever f is called, then
* for all function calls and assignment statements in f, any live
variable that may be updated at that point in an execution of f is
annotated with “!”,
* there is no update of live “abstract” variables when executing
f,
* all parameters of f which may be updated when
executing f are declared mutable in the type signature of f,
* the union of the pre- and post-conditions of f abstracts
the state when f returns plus the values
of mutable
parameters in all states during the execution of f,
* for all function calls and assignment statements in f, any live
variable that may be directly updated at that point is updated with a value
of the same type or a more general type, and
* for all function calls and assignment statements in f, any live
variable that may be indirectly updated at that point only shares with
variables of the same type or a more general type.
The algorithm is applied to each function definition
in core Pawns to compute an approximation to the sharing before and
after each statement (we call it the alias set). This can be used to
check points 1, 2, 4 and 6 above. The algorithm checks that
preconditions are satisfied for each function call, allowing
the induction hypothesis to be used. Point 3
is established using point 1 and a simple syntactic check that any
parameter of f that is annotated “!” in the definition is declared
mutable in the type signature (parameters are considered live throughout
the definition). Point 5 relies on 3 and the type checking pass.
The core of the algorithm is to compute the alias set
after a statement, given the alias set before the statement. This is
applied recursively for compound statements in a form of abstract
execution. Note that for point 4, if a statement changes the set of
memory cells used to represent a mutable parameter, the algorithm computes
the sharing for the union of the two sets of cells.
We do not prove correctness of the algorithm but hope our presentation
is sufficiently detailed to have uncovered any bugs. A proof would
have a separate case for each kind of statement in the core language,
showing that if the initial alias set abstracts the execution state
before the statement the resulting alias set abstracts the execution
state after the statement. This would require a more formal description
of execution states and their relationship with the core language and
the abstract domain. The abstract domain relies on type information so
the sharing analysis relies on type preservation in the execution. Type
preservation also relies on sharing analysis. Thus a completely formal
approach must tackle both problems together. Although our approach is not
formal, we do state the key condition C, which has points relating to
both sharing and types, and we include in the core language.
The alias set used at the start of a definition is the precondition
of the function. This implicitly includes self-alias pairs
for all variable components of the arguments of the function and
the pseudo-variables 𝚊𝚋𝚜𝚝𝚛𝚊𝚌𝚝_T for each type T used.
Similarly, the postcondition implicitly includes self-alias pairs for
all components of the result (and the 𝚊𝚋𝚜𝚝𝚛𝚊𝚌𝚝_T variable if
the result is abstract)[Self-aliasing for arguments and results is
usually desired. For the rare cases it is not, we may provide a mechanism
to override this default in the future.]. As abstract execution proceeds,
extra variables from the function body are added to the alias set and
variables that are no longer live can be removed to improve efficiency.
For each program point, the computed alias set abstracts the computation
state at that point in all concrete executions of the function that
satisfy the precondition. For mutable parameters of the function, the
sharing computed also includes the sharing from previous program points.
The reason for this special treatment is explained when we discuss the
analysis of function application. The alias set computed for the end
of the definition, with sharing for local variables removed, must be a
subset of the union of the pre- and post-condition of the function.
Before sharing analysis, a type checking/inference pass is completed which
assigns a type to each variable and function application. This
determines the components for each variable. Polymorphism is also
eliminated as follows. Suppose we have a function ,
which returns the list containing the first elements of
:
[4]
For each call to , the pre- and post-conditions
are determined based on the type of the application. An application to
lists of Booleans will have two components for each variable whereas an
application to lists of lists of Booleans will have four. When analysing
the definition of we instantiate type variables such as
above to . This type has a single component
which can be shared to represent possible sharing of arbitrary components
of an arbitrary type.
Type checking prevents sharing between non-identical types, such as
and . Finally, we assume there is no type which is
an infinite chain of refs, for example,
(for which type folding results in an empty component rather than a
component; this is not a practical limitation).
Suppose a_0 is the alias set just before statement s. The following
algorithm computes 𝚊𝚕𝚒𝚊𝚜(s, a_0), the alias set just after
statement s. The algorithm structure follows the recursive definition
of statements and we describe it using pseudo-Haskell, interspersed with
discussion. The empty list is written , non-empty lists
are written [𝚊,𝚋,𝚌] or and ++
denotes list concatenation.
At some points we use high level declarative set
comprehensions to describe what is computed and naive implementation may
not lead to the best performance.
mmm̄mm̄mm̄mm̄mm̄mm̄mm̄mm̄mm̄mm̄mm̄mm̄mm̄m
{{𝚟1.c_1,𝚟1.c_2}|{𝚟2.c_1,𝚟2.c_2}∈𝚊0}
{{𝚟1.c_1,v.c_2}|{𝚟2.c_1,v.c_2}∈𝚊0}
𝚊0∪𝚜𝚎𝚕𝚏1∪𝚜𝚑𝚊𝚛𝚎1
{{𝚟1.[𝚁𝚎𝚏.1],𝚟1.[𝚁𝚎𝚏.1]}} ∪
{{𝚏𝚌 (𝚟1.(𝚁𝚎𝚏.1:c_1)),
𝚏𝚌 (𝚟1.(𝚁𝚎𝚏.1:c_2))}|{𝚟2.c_1,𝚟2.c_2}∈𝚊0}
{{𝚏𝚌 (𝚟1.(𝚁𝚎𝚏.1:c_1)),v.c_2}|{𝚟2.c_1,v.c_2}∈𝚊0}
𝚊0∪𝚜𝚎𝚕𝚏1∪𝚜𝚑𝚊𝚛𝚎1
Sequencing is handled by function composition. To bind a fresh variable
to a variable the self-aliasing of
(including aliasing between different components of )
is duplicated for and the aliasing for each component
of (which includes self-aliasing) is duplicated for .
Binding to
is done in a similar way, but the components of
must have prepended to them and the result folded, and
the component of self-aliases.
Folding is
only needed for the rare case of types with recursion through .
[4]
mmm̄mm̄mm̄mm̄mm̄mm̄mm̄mm̄mm̄mm̄mm̄mm̄mm̄m
{ v_a.c_a |{𝚟1.[𝚁𝚎𝚏.1],v_a.c_a}∈𝚊0}
{{𝚏𝚌 (v_a.(c_a++c_1)),
𝚏𝚌 (v_b.(c_b++c_2))}|
v_a.c_a ∈∧ v_b.c_b ∈∧{𝚟2.c_1,𝚟2.c_2}∈𝚊0}
{{𝚏𝚌 (v_a.(c_a++c_1)),v.c_2}|
v_a.c_a ∈∧{𝚟2.c_1,v.c_2}∈𝚊0}
𝚊0∪𝚜𝚎𝚕𝚏1𝚊𝚕∪𝚜𝚑𝚊𝚛𝚎1𝚊𝚕
{{𝚟1.(𝚁𝚎𝚏.1:d:c_1),v.c_2}|{𝚟1.(𝚁𝚎𝚏.1:d:c_1),v.c_2}∈𝚊0}
(𝚊0∖) ∪𝚜𝚎𝚕𝚏1𝚊𝚕∪𝚜𝚑𝚊𝚛𝚎1𝚊𝚕
Assignment to an existing variable differs from binding a fresh variable
in three ways. First, self-sharing for is not
added since it already exists. Second, may
alias several variable components (the live subset of these variables
must be annotated with “!” on the
assignment statement; checking such annotations is a primary purpose
of the analysis). All these variables end up sharing with
and what shares with (via ) plus themselves
and each other (via ). The components must be concatenated
and folded appropriately. Third, if is not a mutable
parameter the existing sharing with a path strictly longer than
(that is, paths of the form 𝚁𝚎𝚏.1:d:c_1)
can safely be removed, improving precision.
The component represents the single memory word
that is overwritten and whatever the old contents shared with is no
longer needed to describe the sharing for . For mutable
parameters the old value may share with variables from the calling
context and we retain this information, as explained later.
Consider the example below, where and are as
before and local variables and are references to
the element of . The value assigned, , is
.
[c]0.5
[scale=1.0]
at (2.2,4.2) (l1) Initial state;
at (0,3.7) (c1) ;
at (3.5,3.7) (c2)
[3em][3em];
at (3.0,3.5) (c2a) ;
[->] (c1) – (c2);
at (0,2.7) (d1) ;
at (3.5,2.7) (d2)
[3em][3em];
at (3.0,2.7) (d2a) ;
[->] (d1) – (d2);
at (0,2.2) (e1) ;
at (0.7,2.2) (e1a) ;
at (2.4,2.5) (e1b) ;
[->] (e1a) – (e1b);
at (0,1.7) (g1) ;
at (0.7,1.7) (g1a) ;
at (2.7,2.4) (g1b) ;
[->] (g1a) – (g1b);
at (0,1.2) (f1) ;
at (3.5,1.2) (f2)
[3em][3em];
at (2.8,1.2) (f2a) ;
[->] (f1) – (f2);
at (3.0,1.4) (f2c) ;
at (2.5,0.4) (h2)
[3em][3em];
at (3.5,-0.2) (h3)
[3em][3em];
[->] (4.0,1.1) – (2.2,0.7);
[->] (2.0,0.3) – (2.4,-0.1);
[->] (d2a) – (c2a);
[c]0.5
[scale=1.0]
at (2.2,4.2) (l1) After ;
at (0,3.7) (c1) ;
at (3.5,3.7) (c2)
[3em][3em];
at (3.0,3.5) (c2a) ;
[->] (c1) – (c2);
at (0,2.7) (d1) ;
at (3.5,2.7) (d2)
[3em][3em];
at (3.0,2.7) (d2a) ;
[->] (d1) – (d2);
at (0,2.2) (e1) ;
at (0.7,2.2) (e1a) ;
at (2.4,2.5) (e1b) ;
[->] (e1a) – (e1b);
at (0,1.7) (g1) ;
at (0.7,1.7) (g1a) ;
at (2.7,2.4) (g1b) ;
[->] (g1a) – (g1b);
at (0,1.2) (f1) ;
at (3.5,1.2) (f2)
[3em][3em];
at (2.8,1.2) (f2a) ;
[->] (f1) – (f2);
at (3.0,1.4) (f2c) ;
at (2.5,0.4) (h2)
[3em][3em];
at (3.5,-0.2) (h3)
[3em][3em];
[->] (4.0,1.1) – (2.2,0.7);
[->] (2.0,0.3) – (2.4,-0.1);
[->] (d2a) – (f2c);
There is aliasing of , and
so all these variables have the sharing of
and self-sharing added. Generally we must also add sharing between all
pairs of these variables. For example,
must be added because the component of did
not previously exist. The old sharing of with is
discarded. Note that we cannot discard the old sharing of and
with for two reasons. First, no definite
aliasing information is maintained, so we cannot be sure
or are modified at all. Second, the assignment updates
only one memory word whereas there may be other words also represented
by . In some cases the old sharing of
is discarded and immediately added again. Consider the following example,
which creates a cyclic list.
[c]0.5
[scale=1.0]
at (2.2,2.9) (l1) Initial state;
at (0,2.5) (v1) ;
at (0,1.5) (v2) ;
at (3.0,1.5) (c2)
[4em][4em];
[->] (v2) – (c2);
at (3.5,1.8) (tt) ;
[->] (v1) – (tt);
at (3.8,1.5) (t) ;
at (2.3,1.3) (h) ;
at (0,0.5) (v3) ;
at (3.0,0.5) (c3)
[4em][4em];
[->] (v3) – (c3);
at (2.4,0.8) (h3) ;
at (3.8,1.4) (t2) ;
[->] (t2) – (h3);
at (2.0,0.6) (bot) ;
[c]0.5
[scale=1.0]
at (2.2,2.9) (l1) After ;
at (0,2.5) (v1) ;
at (0,1.5) (v2) ;
at (3.0,1.5) (c2)
[4em][4em];
[->] (v2) – (c2);
at (3.5,1.8) (tt) ;
[->] (v1) – (tt);
at (3.8,1.5) (t) ;
at (2.3,1.3) (h) ;
[->] (t) – (3.8,0.9) – (2.3,0.9) – (h);
at (0,0.5) (v3) ;
at (3.0,0.5) (c3)
[4em][4em];
[->] (v3) – (c3);
at (2.4,0.8) (h3) ;
at (3.8,1.4) (t2) ;
at (2.0,0.6) (bot) ;
The sharing between and is discarded but
added again (via ) because also shares with
. Correctness of the algorithm when cyclic terms are created
depends on the abstract domain we use. A more expressive domain could
distinguish between different cons cells in a list. For example, if
types are “folded” at the third level of recursion rather than the
first, the domain can distinguish three classes of cons cells, where the
distance from the first cons cell, modulo three, is zero, one or two.
For a cyclic list with a single cons cell, that cons cell must be in all
three classes and our algorithm would need modification to achieve this.
However, in our domain types are folded at the first level of recursion so
we have a unique folded path for each memory cell in cyclic data structure
(cyclic terms can only be created with recursive types). There is no
distinction between the first and second cons cell in a list, for example.
mmm̄mm̄mm̄mm̄mm̄mm̄mm̄mm̄mm̄mm̄mm̄mm̄mm̄mv_1,… v_N
{{𝚏𝚌 (𝚟.[𝚍𝚌.i]),𝚏𝚌 (𝚟.[𝚍𝚌.i])}| 1 ≤ i ≤ N } ∪
{{𝚏𝚌 (𝚟.(𝚍𝚌.i:c_1)),
𝚏𝚌 (𝚟.(𝚍𝚌.j:c_2)) }|{v_i.c_1,v_j.c_2}∈𝚊0}
{{𝚏𝚌 (𝚟.(𝚍𝚌.i:c_1)),
w.c_2 }|{v_i.c_1,w.c_2}∈𝚊0}
𝚊0∪𝚜𝚎𝚕𝚏1∪𝚜𝚑𝚊𝚛𝚎1
The case can be seen as equivalent to and binding a variable to a data constructor with N variable
arguments is a generalisation. If there are multiple v_i that
share, the corresponding components of must also share; these
pairs are included in .
[4]
mmm̄mm̄mm̄mm̄mm̄mm̄mm̄mm̄mm̄mm̄mm̄mm̄mm̄m
{{𝚟1.c_1,𝚟1.c_2}|{𝚏𝚌(𝚟2.(𝚁𝚎𝚏.1:c_1)),𝚏𝚌(𝚟2.(𝚁𝚎𝚏.1:c_2))}∈𝚊0}
{{𝚟1.c_1,v.c_2}|{𝚏𝚌(𝚟2.(𝚁𝚎𝚏.1:c_1)),v.c_2}∈𝚊0}
{{,v.c}|{,v.c}∈ (𝚜𝚎𝚕𝚏1∪𝚜𝚑𝚊𝚛𝚎1) }
𝚊0∪𝚜𝚎𝚕𝚏1∪𝚜𝚑𝚊𝚛𝚎1
(𝚊0∪𝚜𝚎𝚕𝚏1∪𝚜𝚑𝚊𝚛𝚎1)
∖𝚎𝚖𝚙𝚝𝚢1
The case is similar to the inverse of in
that we are removing rather than prepending it (the
definition implicitly uses the inverse of ). However,
if the empty component results we must check that such a component exists
for the type of .
mmm̄mm̄mm̄mm̄mm̄mm̄mm̄mm̄mm̄mm̄mm̄mm̄mm̄mv_1,… v_N
“w_1, … w_K+N) = r”
.[.K],….[.1],v_1,… v_N
{{𝚟.[𝙲𝚕.i],𝚟.[𝙲𝚕.i]}| 1≤ i≤ N
} ∪
{{𝚟.((𝙲𝚕.(N+1-i)):c_1),𝚟.((𝙲𝚕.(N+1-j)):c_2)} |
{v_i.c_1,v_j.c_2}∈𝚊0} ∪
{{𝚟.((𝙲𝚕.(i+N)):c_1),𝚟.((𝙲𝚕.(j+N)):c_2)}|
{𝚏.((𝙲𝚕.i):c_1),𝚏.((𝙲𝚕.j):c_2)}∈𝚊0}
{{𝚟.((𝙲𝚕.(N+1-i)):c_1),x.c_2} |{v_i.c_1,x.c_2)}∈𝚊0} ∪
{{𝚟.((𝙲𝚕.(i+N)):c_1),x.c_2}|{𝚏.((𝙲𝚕.i):c_1),x.c_2}∈𝚊0}
{{x_1.c_1,x_3.c_3}|{x_1.c_1,x_2.c_2}∈𝚙𝚘𝚜𝚝∧{x_2.c_2,x_3.c_3}∈𝚊0}
{{x_1.c_1,x_2.c_2}|{x_1.c_1,v_i.c_3}∈𝚊0∧{x_2.c_2,v_j.c_4}∈𝚊0 ∧
{v_i.c_3,v_j.c_4}∈𝚙𝚘𝚜𝚝∧
v_i ∈𝚖𝚞𝚝∧ v_j ∈𝚖𝚞𝚝}
𝚊0∪𝚜𝚎𝚕𝚏𝚌∪𝚜𝚑𝚊𝚛𝚎𝚌∪𝚙𝚘𝚜𝚝𝚝∪𝚙𝚘𝚜𝚝𝚖
For many occurrences the function is known statically and we can
determine if the function is actually called or a closure is created
instead. However, in general we must assume either could happen and
add sharing for both. If a closure is created, the first N closure
arguments share with the N arguments of the function call and any
closure arguments of share with additional closure arguments of
the result (this requires renumbering of these arguments).
Analysis of
function calls relies on the sharing and mutability information attached
to all arrow types. Because Pawns uses the syntax of statements to
express pre- and post-conditions, our implementation uses the sharing
analysis algorithm to derive an explicit alias set representation
(currently this is done recursively, with the level of recursion limited
by the fact than pre- and post-conditions must not contain function
calls). Here we ignore the details of how the alias set representation
is obtained. The compiler also uses the sharing information immediately
before an application to check that the precondition is satisfied,
all required “!” annotations are present and abstract variables are
not modified.
Given that the precondition is satisfied, the execution of a function
results in sharing of parameters that is a subset of the union of
the declared pre- and post-conditions (we assume the induction hypothesis
holds for the sub-computation, which has a smaller depth of recursion).
However, any sharing between non-mutable arguments that exists
immediately after the call must exist before the call.
The analysis algorithm does not add sharing between non-mutable
arguments in the precondition as doing so would unnecessarily restrict
how “high level” and “low level” code can be mixed. It is important
we can pass a variable to a function that allows an abstract argument
without the analysis concluding the variable subsequently shares with
, and therefore cannot be updated. Thus
is just the declared
postcondition plus the subset of the precondition which involves mutable
parameters of the function, renamed appropriately.
The last N formal parameters,
w_K+1… w_K+N are renamed as the arguments of the call,
v_1 … v_N and the formal result r is renamed .
The formal parameters w_1… w_K represent closure arguments
K … 1 of . Thus a variable component such as
w_1 is renamed K.
It is also necessary to include one step of transitivity in the sharing
information: if variable components x_1.c_1 and x_2.c_2 alias in
and x_2.c_2 and x_3.c_3 (may) alias before the function
call, we add an alias of x_1.c_1 and x_3.c_3 (in ).
Function parameters are proxies for
the argument variables as well as any variable components they may alias
and when functions are analysed these aliases are not known.
This is why the transitivity step is needed, and why mutable parameters
also require special treatment. If before the call, x_1.c_1 and x_2.c_2
may alias with mutable parameter components v_i.c_3 and v_j.c_4,
respectively, and the two mutable parameter components alias in
then x_1.c_1 and x_2.c_2 may alias after the call;
this is added in . Consider the example below, where
we have a pair (of references to references to integers)
and variables and share with the two elements
of , respectively. When is passed to function
as a mutable parameter, sharing between and is
introduced. The sharing of the mutable parameter in the postcondition,
, results
in sharing between and being added in the analysis.
[c]0.5
[scale=0.9]
at (2.2,3.6) (l1) Initial state;
at (0,3.0) (x) ;
at (2.8,1.0) (ba)
[2em];
at (3.8,1.0) (bb)
[2em];
at (0,2.4) (v1) ;
at (3.3,2.4) (v1a) [2em][2em];
[->] (v1) – (v1a);
at (0,1.8) (y) ;
at (1.6,1.8) (ya) [2em];
at (4.8,1.8) (yb) [2em];
[->] (y) – (ya);
[->] (1.6,1.6) – (ba);
[->] (4.8,1.6) – (bb);
[->] (2.9,2.2) – (ya);
[->] (3.8,2.2) – (yb);
[->] (x) – (4.2,3.0) – (yb);
at (0,0.1) (bot) ;
[c]0.5
[scale=0.9]
at (2.2,3.6) (l1) After ;
at (0,3.0) (x) ;
at (2.8,1.0) (ba)
[2em];
at (3.8,1.0) (bb)
[2em];
at (0,2.4) (v1) ;
at (3.3,2.4) (v1a) [2em][2em];
[->] (v1) – (v1a);
at (0,1.8) (y) ;
at (1.6,1.8) (ya) [2em];
at (4.8,1.8) (yb) [2em];
[->] (y) – (ya);
[->] (1.9,1.8) – (3.4,1.3);
[->] (4.8,1.6) – (bb);
[->] (2.9,2.2) – (ya);
[->] (3.8,2.2) – (yb);
[->] (x) – (4.2,3.0) – (yb);
at (0,0.1) (bot) ;
The need to be conservative with the sharing of mutable parameters
in the analysis of function definitions (the special treatment in
) is illustrated by the example below. Consider the
initial state, with variables and which share
with and , respectively. After is
called and share, even though the parameters
and do not share at any point in the execution
of . If mutable parameters were not treated specially in
the case, would be accepted as the
postcondition of and the analysis of the call to
would then be incorrect. The sharing is introduced between memory cells
that were once shared with and others that were once shared
with . Thus in our algorithm, the sharing
of mutable parameters reflects all memory cells that are reachable from
the parameters during the execution of the function. Where the mutable
parameters are assigned in , the sharing of the parameters'
previous values ( and ) is retained.
Thus when the final assignment is processed, sharing between the
parameters is added and this must be included in the postcondition.
Although this assignment does not modify or , the
“!” annotations are necessary and alert the reader to potential
modification of variables that shared with the parameters when the
function was called.
[c]0.5
[scale=0.9]
at (2.2,4.3) (l1) Initial state;
at (0,3.0) (v1) ;
at (1.8,3.0) (v1a)
[2em];
at (3.2,3.0) (v1b)
[2em];
at (4.6,3.0) (v1c)
[2em];
[->] (v1) – (v1a);
[->] (3.5,3.0) – (v1c);
[->] (2.1,3.0) – (v1b);
at (0,1.2) (v2) ;
at (1.8,1.2) (v2a)
[2em];
at (3.2,1.2) (v2b)
[2em];
at (4.6,1.2) (v2c)
[2em];
[->] (v2) – (v2a);
[->] (3.5,1.2) – (v2c);
[->] (2.1,1.2) – (v2b);
at (0,2.4) (x) ;
[->] (x) – (1.8,2.4) – (v1b);
at (0,1.8) (y) ;
[->] (y) – (1.8,1.8) – (v2b);
at (0,0.1) (bot) ;
[c]0.5
[scale=0.9]
at (2.2,4.3) (l1) After ;
at (0,3.0) (v1) ;
at (1.8,3.0) (v1a)
[2em];
at (3.2,3.0) (v1b)
[2em];
at (4.6,3.0) (v1c)
[2em];
[->] (v1) – (v1a);
[->] (3.5,3.0) – (v2c);
at (3.2,3.8) (n1b)
[2em];
at (4.6,3.8) (n1c)
[2em];
[->] (2.1,3.0) – (n1b);
[->] (3.5,3.8) – (n1c);
at (0,1.2) (v2) ;
at (1.8,1.2) (v2a)
[2em];
at (3.2,1.2) (v2b)
[2em];
at (4.6,1.2) (v2c)
[2em];
[->] (v2) – (v2a);
[->] (3.5,1.2) – (v2c);
at (3.2,0.4) (n2b)
[2em];
at (4.6,0.4) (n2c)
[2em];
[->] (2.1,1.2) – (n2b);
[->] (3.5,0.4) – (n2c);
at (0,2.4) (x) ;
[->] (x) – (1.8,2.4) – (v1b);
at (0,1.8) (y) ;
[->] (y) – (1.8,1.8) – (v2b);
at (0,0.1) (bot) ;
[4]
mmm̄mm̄mm̄mm̄mm̄mm̄mm̄mm̄mm̄mm̄mm̄mm̄mm̄m ∅
(p_1,s_1),…(p_N,s_N
{{𝚟.c_1,v_2.c_2}|{𝚟.c_1,v_2.c_2}∈𝚊0}
⋃_1≤ i≤
N𝚊𝚕𝚒𝚊𝚜𝙲𝚊𝚜𝚎 𝚊0 𝚘𝚕𝚍 𝚟 p_i s_i
mmm̄mm̄mm̄mm̄mm̄mm̄mm̄mm̄mm̄mm̄mm̄mm̄mm̄mv_1,… v_N
𝚊𝚟𝚍𝚌
{{𝚏𝚌(𝚟.(𝚍𝚌.i:c_1)),
w.c_2}|{𝚏𝚌(𝚟.(𝚍𝚌.i:c_1)),w.c_2}∈𝚊𝚟}
𝚛𝚜𝚎𝚕𝚏
{{v_i.[𝚁𝚎𝚏.1],v_i.[𝚁𝚎𝚏.1]}| 1≤ i≤ N}
𝚟𝚒𝚜𝚑𝚊𝚛𝚎
{{𝚏𝚌 (v_i.(𝚁𝚎𝚏.1:c_1)),
𝚏𝚌 (v_j.(𝚁𝚎𝚏.1:c_2))}|
{𝚏𝚌(𝚟.(𝚍𝚌.i:c_1)),𝚏𝚌(𝚟.(𝚍𝚌.j:c_2))}∈𝚊𝚟}
𝚜𝚑𝚊𝚛𝚎
{{𝚏𝚌 (v_i.(𝚁𝚎𝚏.1:c_1)),
w.c_2}|{𝚏𝚌(𝚟.(𝚍𝚌.i:c_1)),w.c_2))}∈𝚊𝚟}
(
𝚛𝚜𝚎𝚕𝚏∪𝚟𝚒𝚜𝚑𝚊𝚛𝚎∪𝚜𝚑𝚊𝚛𝚎∪
(∖)
∪
)
For a case expression we return the union of the alias sets obtained for
each of the different branches. For each branch we only keep sharing
information for the variable we are switching on that is compatible
with the data constructor in that branch (we remove all the old sharing,
, and add the compatible sharing, ).
We implicitly use the inverse of . To deal
with individual data constructors we consider pairs of components of
arguments i and j which may alias in order to compute possible
sharing between v_i and v_j, including self-aliases when i=j.
The corresponding component of v_i (prepended with and
folded) may alias the component of v_j. For example, if
of type is matched with and
self-aliases, we need to find the components which fold
to ( and ) in order
to compute the sharing for and . Thus we compute
that ,
may alias . This can occur if the
data structure is cyclic, such as the example below where is
a list containing a single tree with 2 in the node and as the
children (hence it represents a single infinite branch). Note that
represents both the memory cell
containing the pointer and the cell containing .
[scale=1.0]
at (3.5,3.7) (c2)
[4em][4em];
at (2.8,3.5) (c2a) ;
at (0,2.7) (d1) ;
at (3.5,2.7) (d2)
[4em][4em];
at (2.8,2.7) (d2a) ;
[->] (d1) – (d2);
at (0,1.9) (e1) ;
at (0.7,1.9) (e1a) ;
at (2.5,2.4) (e1b) ;
[->] (e1a) – (e1b);
at (0,1.2) (f1) ;
at (2.8,1.2) (f2a) ;
[->] (0.8,1.2) – (3.9,2.4);
[->] (4.1,3.6) – (3.0,3.0);
at (2.8,1.4) (f2c) ;
[->] (d2a) – (c2a);
mmm̄mm̄mm̄mm̄mm̄mm̄mm̄mm̄mm̄mm̄mm̄mm̄mm̄m
Type instantiation is dealt with in the same way as variable equality,
with the additional check that if any sharing is introduced, the variable
with the more general type is not implicitly updated later while still
live (it is sufficient to check there is no “” annotation
attached to a later statement).
§ EXAMPLE
We now show how this sharing analysis algorithm is applied to the binary
search tree code given earlier. We give a core Pawns version of each
function and the alias set before and after
each statement, plus an additional set at the end which is the union of
the pre- and post-conditions of the function. To save space, we write
the alias set as a set of sets where each inner set represents all sets
containing exactly two of its members. Thus {{a,b,c}} represents a
set of six alias pairs: aliasing between all pairs of elements, including
self-aliases. The return value is given by variable
and variables and are the versions of
for type and , respectively.
We start with the precondition:
a_0 = {{},
{}}.
Binding to a constant introduces no sharing so a_1 = a_0.
a_2 = a_1 ∪ {}.
The function call has precondition a_0 ∪{{},
{}}, which is a superset of a_2.
Since is a mutable argument the
precondition sharing for is added:
a_3 = a_2 ∪ {{ }}.
The final sharing includes the return variable, :
a_4 = a_3 ∪ {{},
{}}. After removing sharing
for the dead (local) variable we obtain
a subset of the union of the pre- and post-conditions,
which is
a_0 ∪{{},
{, }}.
We start with the precondition,
a_0 = {{}, {},
{},
{}}.
The branch of the case introduces sharing for and
:
a_1 = a_0 ∪ {{, ,
, },
{ , }}.
The list elements are atomic so a_2 = a_1.
The next binding makes the sharing of and the
same:
a_3 = a_2 ∪ {{ ,
,
},
{ , , ,
}}.
This can be simplified by removing the dead variables and
.
The precondition of the calls are satisfied and
a_6 = a_5 = a_4 = a_3.
For the branch we remove the incompatible sharing for
from a_0:
a_7 = {{}, {},
{},
{}}
and a_8 = a_7. Finally, a_9 = a_6
∪ a_8. This contains all the sharing for mutable parameter and,
ignoring local variables, is a
subset of the union of the pre- and post-conditions, a_0.
Here a_0 = {{},
{}} and
a_1 = a_0 ∪{{, },
{, }}.
For the branch we remove the sharing so a_4 =
a_3 = a_2 = a_0 and a_5 = a_4 ∪{{},
{}}.
After the destructive update, a_6 = a_5 ∪{{,
},
{,
}} ( is dead and can be
removed) and a_7 = a_6.
For the branch we have a_8 = a_1 ∪{{, , ,
},
{, ,
, , }}.
The same set is retained for a_9 … a_17 (assuming the dead
variable is retained), the preconditions
of the function calls are satisfied and the required annotations are
present. Finally, a_18 = a_17∪
a_7, which contains all the sharing for ,
and after eliminating local variables we get the postcondition,
which is the same as the precondition.
§ DISCUSSION
Imprecision in the analysis of mutable parameters could potentially be
reduced by allowing the user to declare that only certain parts of a
data structure are mutable, as suggested in <cit.>.
It is inevitable we lose some precision with recursion in types, but
it seems that some loss of precision could be avoided relatively easily.
The use of the empty path to represent sub-components of recursive
types results in imprecision when references are created. For example,
the analysis of concludes that the empty
component of may alias with itself and the
component of (in reality, has no sharing). Instead
of the empty path, a dummy path of length one could be used.
Flagging data structures which are known to be acyclic could also
improve precision for . A more
aggressive approach would be to unfold the recursion an extra level, at
least for some types. This could allow us to express (non-)sharing of
separate subtrees and whether data structures are cyclic, at the cost
of more variable components, more complex pre- and post-conditions and
more complex analysis for and .
Increasing the number of variable components also decreases efficiency.
The algorithmic complexity is affected by the representation of alias
sets. Currently we use a naive implementation, using just ordered
pairs of variable components as the set elements and a set library
which uses an ordered binary tree. The size of the set can be O(N^2),
where N is the maximum number of live variable components of the same
type at any program point (each such variable component can alias with
all the others). In typical code the number of live variables at any
point is not particularly large. If the size of alias sets does become
problematic, a more refined set representation could be used, such as the
set of sets of pairs representation we used in Section <ref>,
where sets of components that all alias with each other are optimised.
There are also simpler opportunities for efficiency gains, such as
avoiding sharing analysis for entirely pure code.
We have not stress tested our implementation or run substantial
benchmarks as it is intended to be a
prototype, but performance has been encouraging.
Translating the tree insertion code plus a test harness to C, which
includes the sharing analysis, takes less time than compiling
the resulting C code using GCC. Total compilation time is less than half
that of GHC for equivalent Haskell code and less than one tenth that of
MLton for equivalent ML code. The Pawns executable is around 3–4 times
as fast as the others.
§ RELATED WORK
Related programming languages are discussed in <cit.>; here we
restrict attention to work related to the sharing analysis algorithm.
The most closely related work is that done in the compiler for
Mars <cit.>, which extends similar work done for Mercury
<cit.> and earlier for Prolog <cit.>. All use a
similar abstract domain based on the type folding method first proposed
in <cit.>. Our abstract domain is somewhat more precise due
to inclusion of self-aliasing, and we have no sharing for constants.
In Mars it is assumed that constants other than numbers can share.
Thus for code such as our analysis concludes
there is no sharing between and whereas the Mars
analysis concludes there may be sharing.
One important distinction is that in Pawns sharing (and mutability) is
declared in type signatures of functions so the Pawns compiler just has to
check the declarations are consistent, rather than infer all sharing from
the code. However, it does have the added complication of destructive
update. As well as having to deal with the assignment primitive, it
complicates handling of function calls and case statements (the latter
due to the potential for cyclic structures).
Mars, Mercury and Prolog are essentially declarative languages.
Although Mars has assignment statements the semantics is that values are
copied rather than destructively updated — the variable being assigned
is modified but other variables remain unchanged. Sharing analysis
is used in these languages to make the implementation more efficient.
For example, the Mars compiler can often emit code to destructively update
rather than copy a data structure because sharing analysis reveals no
other live variables share it. In Mercury and Prolog the analysis can
reveal when heap-allocated data is no longer used, so the code can reuse
or reclaim it directly instead of invoking a garbage collector.
These sharing inference systems use an explicit graph representation
of the sharing behaviour of each segment of code. For example, code
s_1 may cause aliasing between (a component of) variables and
(which is represented as an edge between nodes
and ) and between and and code s_2 may
cause aliasing between and and between and
. To compute the sharing for the sequence s_1s_2
they use the “alternating closure” of the sharing for s_1 and s_2,
which constructs paths with edges alternating from s_1 and s_2, for
example (from s_1), (from s_2),
(from s_1) and (from s_2).
The sharing behaviour of functions in Pawns is represented explicitly,
by a pre- and post-condition and set of mutable arguments but there is
no explicit representation for sharing of statements. The (curried)
function represents the sharing behaviour of
and the sharing behaviour of a sequence of statements is represented by
the composition of functions. This representation has the advantage
that the function can easily use information about the current sharing,
including self-aliases, and remove some if appropriate.
For example, in the branch of
the case in the code below the sharing for is removed and
we can conclude the returned value does not share with the argument.
There is also substantial work on sharing analysis for logic programming
languages using other abstract domains, notably the set-sharing domain
of <cit.> (a set of sets of variables), generally with various
enhancements — see <cit.> for a good summary and evaluation.
Applications include avoiding the “occurs check” in unification
<cit.> and exploiting parallelism of independent sub-computations
<cit.>. These approaches are aimed at identifying sharing of logic
variables rather than sharing of data structures. For example, although
the two Prolog goals and share ,
they are considered independent if is instantiated to a data
structure that is ground (contains no logic variables). Ground data
structures in Prolog are read-only and cause no problem for parallelism
or the occurs check, whether they are shared or not.
The set-sharing domain is often augmented with extra information related
to freeness (free means uninstantiated), linearity (linear means
there are no repeated occurrences of any variable)
and/or groundness <cit.>. In Pawns there are no logic variables
but data structures are mutable, hence their sharing is important.
However, the set-sharing domain (with enhancements) has been adapted
to analysis of sharing of data structures in object oriented languages
such as Java <cit.>. One important distinction is that Pawns
directly
supports algebraic data types which allow a “sum of products”: there
can be a choice of several data constructors (a sum), where each one
consists of several values as arguments (a product). In Java and most
other imperative and object oriented languages additional coding is
generally required to support such data types. Products are
supported by objects containing several values but the only choice
(sum) supported directly
is whether the object is null or not. Java objects and pointers
in most imperative languages are similar to a algebraic
data type, with corresponding to null. A
cannot be null. The abstract domain of <cit.> uses set-sharing
plus additional information about what objects are definitely not null.
For Pawns code that uses s this information is given
by the data type — the more expressive types allow us to trivially
infer some information that is obscured in other languages. For code
that uses , our domain can express the fact that a
variable is definitely by not having a self-alias of
the component. The rich structural information in our
domain fits particularly well with algebraic data types.
There are also other approaches to and
uses of alias analysis for imperative languages, such as <cit.>
and <cit.>, but these are not aimed at precisely capturing
information about dynamically allocated data. A more detailed discussion
of such approaches is given in <cit.>.
§ CONCLUSION
Purely declarative languages have the advantage of avoiding side effects,
such as destructive update of function arguments. This makes it easier
to combine program components, but some algorithms are hard to code
efficiently without flexible use of destructive update. A function can
behave in a purely declarative way if destructive update is allowed,
but restricted to data structures that are created inside the function.
The Pawns language uses this idea to support flexible destructive update
encapsulated in a declarative interface. It is designed to make all
side effects “obvious” from the source code. Because there can be
sharing between the representations of different arguments of a function,
local variables and the value returned, sharing analysis is an essential
component of the compiler. It is also used to ensure
“preservation” of types in computations.
Sharing analysis has been used in other
languages to improve efficiency and to give some feedback to programmers
but we use it to support important features of the programming language.
The algorithm operates on (heap allocated) algebraic data types,
including arrays and closures. In common with other sharing analysis
used in declarative languages it supports binding of variables,
construction and deconstruction (combined with selection or “case”)
and function/procedure calls. In addition, it supports explicit pointers,
destructive update via pointers, creation and application of closures and
pre- and post-conditions concerning sharing attached to type signatures
of functions. It also uses an abstract domain with additional features
to improve precision.
Early indications are that the performance is acceptable: compared with
other compilers for declarative languages, the prototype
Pawns compiler supports encapsulated destructive update, is fast and
produces fast executables.
§ ACKNOWLEDGEMENTS
Feedback from reviewers, particularly Gianluca Amato, was very
helpful in ironing out some important bugs in the algorithm and
improving the presentation of this paper.
|
http://arxiv.org/abs/2409.03480v1 | 20240905124114 | Numerical study of Darcy's law of yield stress fluids on a deep tree-like network | [
"Stéphane Munier",
"Alberto Rosso"
] | cond-mat.dis-nn | [
"cond-mat.dis-nn"
] |
arrows.meta,arrows
/tikzfeynman/warn luatex=false
PP_satP_01]Stéphane Munier2]Alberto Rosso[1]CPHT, CNRS, École polytechnique, Institut Polytechnique de Paris, 91120 Palaiseau, France[2]LPTMS, CNRS, Université Paris-Saclay, 91405 Orsay, FranceNumerical study of Darcy's law of yield stress fluids
on a deep tree-like network
[
September 9, 2024
==================================================================================
§ ABSTRACT
Understanding the flow dynamics of yield stress fluids in porous media presents a substantial challenge. Both experiments and extensive numerical simulations frequently show a non-linear relationship between the flow rate and the pressure gradient, deviating from the traditional Darcy law. In this article, we consider a tree-like porous structure and utilize an exact mapping with the directed polymer (DP) with disordered bond energies on the Cayley tree. Specifically, we adapt an algorithm recently introduced by Brunet et al. [Europhys. Lett. 131, 40002 (2020)] to simulate exactly the tip region of branching random walks with the help of a spinal decomposition, to accurately compute the flow on extensive trees with several thousand generations. Our results confirm the asymptotic predictions proposed by Schimmenti et al. [Phys. Rev. E 108, L023102 (2023)], tested therein only for moderate trees of about 20 generations.
§ INTRODUCTION
The Darcy law describes the flow rate Q of a Newtonian fluid through a cylinder of radius R and length L, filled with a porous medium <cit.>:
Q = κ R^2 P/η L .
In this equation, P is the pressure drop between the two ends of the cylinder, and η is the viscosity of the fluid. The parameter κ, known as permeability, is dimensionally an area that characterizes the medium's ability to transmit fluid <cit.>. In the absence of a porous medium, the flow rate is described by the famous Poiseuille's law:
Q_Pois.(R) = π R^4 P/8 η L .
The permeability of a filled cylinder corresponds to an area much smaller than the section, π R^2. A simple model, proposed by H. Darcy, is based on the assumption that flow in a porous medium occurs through numerous thin, non-intersecting channels, each with a radius R_ ch≪ R and of number n^ ch per unit area. According to Poiseuille's law, the flow through the filled cylinder can be written as Q = π R^2 n^ ch× Q_Pois.(R_ ch). This leads to an expression for permeability as κ = (π/8) n^ ch R_ ch^4, which can be rewritten as the product of two factors: (i) the fraction of unit area occupied by the fluid, n^ ch R_ ch^2, a positive number smaller than one, and (ii) the area of the section of the microchannel π R_ ch^2. Although actual porous media feature a complex network of intersecting channels with varying shapes, the Darcy law holds true as long as the number of open channels n^ ch remains independent of pressure.
This Darcy framework is significantly altered in the presence of yield stress fluids—fluids that behave like a solid below a given yield stress, σ_Y, and flow only when the stress exceeds this threshold. The consequence of yield stress is that the number of channels through which the fluid flows increases as the pressure drop applied at the ends of the throat increases. Three regimes are observed in experiments and numerical simulations:
(i) no flow is observed below a critical pressure drop ;
(ii) a non linear flow is observed for P><cit.>, the non linearity being related to the growing number of open channels above threshold;
(iii) only above a saturation pressure ≫, does the flow revert to linear growth, the linear growth with the standard permeability κ <cit.>.
Pore-network models are the simplest models that capture these three regimes. They feature a graph structure in which the nodes represent large open pores with well-defined pressure, and the edges are throats connecting the pores. Establishing how the Poiseuille law is modified in the presence of a yield stress remains an open challenge. In this study, we examined the simplest modification to the Poiseuille law:
Q= π R^4/8η L×(P-P'-τ)_+ ,
where (x)_+≡max(0,x) and the threshold pressure drop is τ=L σ_Y /R<cit.>.
We shall assume that all throats have the same geometry. Here τ represents the minimum pressure difference required to establish flow through the throat. We randomly draw these thresholds as independent and identically distributed variables.
Following Refs. <cit.>, the hydraulic network is organized as a single inlet throat feeding a perfect binary tree with a total height of T-1 (see Fig. <ref>). The set of the 2^T-1 variables τ that specify each sample can be interpreted as “frozen disorder” in statistical physics terms.
The tree-like network of pipes maps exactly onto the configuration space of directed polymers (DP) <cit.> in the infinite-dimensional limit. The elementary throats correspond to bonds between monomers, and the τ variables represent their energies in that context. Therefore, as noted in Refs. <cit.>, many tools developed for the DP problem can be adapted to the Darcy problem. Understanding the former has benefited from the observation that configuration spaces are generated by branching random walks in the universality class of branching Brownian motion <cit.>. The main goal of this paper is to demonstrate how an algorithm developed in the latter context <cit.> can be applied to this fluid-mechanics problem to numerically investigate it on very large tree-like networks. Specifically, this algorithm will allow us to test a prediction proposed in Ref. <cit.>, which led to asymptotic analytical expressions for a set of relevant observables.
The paper is organized as follows. Section <ref> provides an alternative and pedagogical derivation of the algorithm used in Ref. <cit.>. This algorithm computes the flow curve for moderate trees T∼ 20 because it requires the prior generation of the full network (2^T-1 branches). In Section <ref>, we introduce a new algorithm to compute the flow curve, inspired by the one described in Ref. <cit.>. Our new algorithm can compute the flow curve for trees with T∼ 10^3 because it draws only the sequence of the first open channels, without prior generation of the full network. Our results are presented and discussed in Section <ref>, and our conclusions are drawn in Section <ref>. Two appendices provide technical details.
§ BASIC ALGORITHM TO COMPUTE THE FLOW
According to the discussion in the Introduction, we model the pressure dependence of the flow in each elementary throat using Eq. (<ref>). By an appropriate choice of units, we can set the overall constant to unity, to get
Q=(P-P'-τ)_+ .
Furthermore, the Kirchhoff's law <cit.> is assumed at each node of the tree that connects two outgoing throats to an incoming one. Specifically, the sum of the outgoing flows Q^(0) and Q^(1) is equal to the incoming flow Q:
Q=Q^(0)+Q^(1).
Equations (<ref>) and (<ref>) are sufficient to determine the properties of the flow for any disorder realization.
The goal of this section is to design the simplest algorithm to compute the flow curve Q_T(P) as a function of the pressure P applied at the inlet, for a given realization of the disorder in a network tree of height T.
§.§ Warm-up: the simple trees with T=1 and T=2
We begin by presenting detailed analytical calculations of the flow in elementary cases with T=1 and T=2, formulated in a way that facilitates understanding the more general case.
In the case T=1, there is only one throat, characterized by the threshold τ_01; see Fig. <ref>. The flow as a function of the pressure is then just given by Eq. (<ref>), in which P' is the pressure at the free outlet of the network, namely P'=0:
Q_1(P)=
0 for P≤τ_01,
P-τ_01 for P>τ_01.
[baseline=0.4cm]
[->] (0,0)–(0,1) node [anchor=south]Q_1(P);
[->] (0,0)–(2,0) node [anchor=west]P;
[thick] (0,0)–(1,0)–(2,1);
(1,-0.1)node[anchor=north]P_0=τ_01–(1,0.1);
Throughout, we denote P_0 as the disorder-dependent minimum pressure that needs to be applied at the inlet of a network to establish the flow. In the present case, obviously, P_0=τ_01.
In the case T=2, the network has one node connecting three throats; see Fig. <ref>.
Let us call P' the pressure at the node, and call τ_01, τ_12^(0) and τ_12^(1) the thresholds that characterize the incoming and the two outgoing throats respectively. For the fluid to flow, the two following conditions must be satisfied:
P-P'>τ_01 and P'>min(τ_12^(0),τ_12^(1)).
This implies P>P_0, where
P_0=τ_01+min(τ_12^(0),τ_12^(1)).
In words, Eq. (<ref>) says that P_0 minimizes the sum of the thresholds τ along the directed paths connecting the root to the leaves of the tree. This definition of P_0 actually generalizes to trees of any height.
Let us set the pressure P at the inlet just above the threshold P_0. Assuming for the time being that τ_12^(0)≠τ_12^(1) and hence that P_0 is not degenerate, the flow reads
Q_2(P)=P-P'-τ_01=P'-min(τ_12^(0),τ_12^(1)).
The first equality is just Eq. (<ref>) applied to the incoming throat. The second one follows from the conservation of the flow at the node Eq. (<ref>), in the case in which only one outgoing channel is open, and from Eq. (<ref>) applied to the open outgoing throat. We may solve for P', arriving at the following determination:
P'=1/2[P-τ_01+min(τ_12^(0),τ_12^(1))].
It is then straightforward to deduce Q_2(P):
Q_2(P)=1/2[P-τ_01-min(τ_12^(0),τ_12^(1))]_+=1/2(P-P_0)_+.
This equation holds for pressures below the threshold P_1 for the opening of the second channel.
The flow above P_1 is determined using again Eq. (<ref>) and the Kirchhoff law (<ref>) at the node, now assuming that both channels are open:
Q_2(P)=P-P'-τ_01=2P'-τ_12^(0)-τ_12^(1).
Solving for P',
P'=1/3(P+τ_12^(0)+τ_12^(1)-τ_01).
We evaluate the threshold pressure P_1 based on the requirement that, for the flow to establish in both branches of the tree, the pressure P' at the node must exceed the threshold pressures of both outgoing throats:
P'≥max(τ_12^(0),τ_12^(1)).
The pressure P_1 coincides with the value of the pressure P at the inlet such that this inequality for P' is saturated. Equation (<ref>) then yields
P_1=P_0+2[max(τ_12^(0),τ_12^(1))
-min(τ_12^(0),τ_12^(1))].
Now, inserting the expression for P' in Eq. (<ref>) into one of the equations (<ref>) for Q_2(P), we get the expression of the flow:
Q_2(P)=2/3[P-τ_01-1/2(τ_12^(0)+τ_12^(1))].
All in all, we have found that the flow depends on the applied pressure as
Q_2(P)=
0 for P≤ P_0
1/2[P-τ_01-min(τ_12^(0),τ_12^(1))] for P_0<P≤ P_1
2/3[P-τ_01-1/2(τ_12^(0)+τ_12^(1))]
for P>P_1,
[baseline=0.6cm]
[->] (0,0)–(0,1.5) node [anchor=south]Q_2(P);
[->] (0,0)–(2.5,0) node [anchor=west]P;
[thick] (0,0)–(0.5,0)–(1.5,0.5)–(2.5,0.5+1*2/3);
[dashed] (1.5,0.5)–(2.5,1);
(0.5,-0.1)node[anchor=north]P_0–(0.5,0.1);
(1.5,-0.1)node[anchor=north]P_1–(1.5,0.1);
[dotted](1.5,0.1)–(1.5,0.5);
where the thresholds P_0 and P_1 are given in Eqs. (<ref>) and (<ref>) respectively. Let us comment that the degenerate case τ_12^(0)=τ_12^(1) does not require a special treatment: P_0=P_1=τ_01+τ_12^(0), and the formula we have just established applies, the second distinguished case being simply no longer necessary as the pressure interval in which it applies becomes trivial.
We see that Q_2(P) is a piece-wise linear and continuous function. It is not difficult to figure out that this property generalizes for trees of arbitrary heights.
§.§ Trees of arbitrary heights
Given a network of size T with all the 2^T-1 elementary thresholds τ, we take the following generic Ansatz for the flow at the inlet of the complete tree, or of sub-trees of it, as that of an effective Darcy law:
Q(P)=κ_eff(P)[P-P^*_eff(P)].
We shall call the proportionality factor κ_eff(P) as the effective permeability of the considered (sub-)network, and P^*_eff(P) as the effective offset. These two flow parameters are non-decreasing piece-wise constant functions of P. Their discontinuities occur at the pressures at which new channels open. Our goal is to compute recursively κ_eff(P) and P_eff^*(P) for a pressure P such that there are, say, n^ch open channels.
§.§.§ Flow parameters at a given node
Let us start by assuming that the sub-tree of all open channels is known, and consider a generic node in the latter. We label the throats it connects as in Fig. <ref>, and write the Kirchhoff law (<ref>) at this node.
We then replace the flows in the right-hand side of the latter by Eq. (<ref>), with P set to the pressure P' at the node and with appropriate superscripts for the parameters. As for Q in the left-hand side of Eq. (<ref>), we shall eventually replace it by Eq. (<ref>).
We first address the case in which only one of the outgoing channels is open, say the one labeled (0):
Q=P-P'-τ=κ_eff^(0)(P'-P^*(0)_eff).
Q in the left-hand side will eventually depend on P only. The effective permeabilities and offsets in the right-hand side depend on P' as piece-wise constant functions. Hence in the restricted range of P (and in the corresponding one of P') of interest, the second and third members form a linear equation for P'. Solving the latter, inserting the solution back into one of the alternative expressions for Q, we get for the flow
Q=κ_eff^(0)/1+κ_eff^(0)(P-τ-P_eff^*(0)).
Recalling that Q(P) itself obeys Darcy's law (<ref>), it is straightforward to identify the parameters that characterize the flow at the inlet:
κ_eff^(0∧1)=κ_eff^(0)/1+κ_eff^(0)
P_eff^*(0∧1)=τ+P_eff^*(0) .
We introduced self-explanatory superscripts. It is of course enough to exchange the superscripts (0) and (1) to get the case in which sub-channel (1) is open and sub-channel (0) is closed
Next, let us assume that P is sufficiently large, such that the two outgoing channels are open. Instead of Eq. (<ref>), we now have the following equation:
Q=P-P'-τ=κ_eff^(0)(P'-P^*(0)_eff)+κ_eff^(1)(P'-P^*(1)_eff).
Solving for P' and inserting its solution back into the expression for Q, we arrive at
Q=κ_eff^(0)+κ_eff^(1)/1+κ_eff^(0)+κ_eff^(1)(P-τ-κ_eff^(0)P_eff^*(0)+κ_eff^(1)P_eff^*(1)/κ_eff^(0)+κ_eff^(1)).
Comparing to Eq. (<ref>), we easily obtain the following relations between the effective flow parameters:
κ_eff^(0∧1)=κ_eff^(0)+κ_eff^(1)/1+κ_eff^(0)+κ_eff^(1)
P_eff^*(0∧1)=τ+κ_eff^(0)P_eff^*(0)+κ_eff^(1)P_eff^*(1)/κ_eff^(0)+κ_eff^(1).
We take the convention that the effective permeability is null for closed channels. We easily check that in the case in which only channel (0) is open, Eq. (<ref>) boils down to Eq. (<ref>). Furthermore, the open throats located at the maximal depth, i.e. at the leaves of the sub-tree of open channels, have effective permeability κ_eff=1, and effective threshold P_eff^* equal to the elementary threshold of the considered throat. Then, iterating Eqs. (<ref>) and (<ref>) upward from the leaves of the open sub-tree to its root enables one to determine the parameters κ_eff and P_eff^* of all sub-channels, and, eventually, of the complete tree.
§.§.§ Threshold pressures
We recall that the expressions of the flow parameters just established are valid for pressures in the limited range in which the sub-tree we have singled out is that of all open channels. Let us now relate the threshold pressures of sub-trees rooted in the different throats joining at a given node.
We call P^(0)_thr and P^(1)_thr the minimum pressures to apply at the inlet of the outgoing throats, for the sub-trees they root to be independently open. We compute the corresponding minimum pressure P^(0∧1)_thr required at the inlet of the incoming throat to have a flow in both sub-trees. As above, we write the incoming flow for that pressure in two different ways:
Q=P^(0∧1)_thr-max(P^(0)_thr,P^(1)_thr)-τ
=(κ^(0)_eff+κ_eff^(1))max(P^(0)_thr,P^(1)_thr)-(κ^(0)_effP_eff^*(0)+κ^(1)_effP_eff^*(1)) .
From the equality of the second and third members in this equation, we deduce the value of the threshold pressure:
P^(0∧1)_thr=τ+(1+κ_eff^(0)+κ_eff^(1))max(P^(0)_thr,P^(1)_thr)-(κ_eff^(0)P_eff^*(0)+κ_eff^(1)P_eff^*(1)).
The case in which only one outgoing channel, say (0), is open is obtained by substituting max(P^(0)_thr,P^(1)_thr) with P^(0)_thr, and setting κ_eff^(1) to zero in this equation. The corresponding threshold pressure, that we shall denote by P^(0∧1)_thr, reads
P^(0∧1)_thr=τ+P^(0)_thr+
κ_eff^(0)(P^(0)_thr-P_eff^*(0)).
Obviously, the threshold pressure P^(0∧1)_thr, corresponding to the case in which (0) is closed and (1) is open, is obtained by interchanging the superscripts (0) and (1) in this formula.
It will prove useful to have in mind the particular case of one-branch sub-trees, for which the threshold pressures P_thr coincide with the effective pressures P^*_eff. According to Eq. (<ref>), the latter just amount to the sums of the elementary thresholds τ along the branch of the considered sub-trees.
We iterate Eqs. (<ref>) and (<ref>) from the leaves to the root to determine the minimum pressure that needs to be exerted at the inlet of the network to open all channels of that sub-tree.
§.§.§ One- and two-branch trees
Here, we shall work out the expression for the flow parameters and of the threshold pressures in the cases n^ch=1 and 2, but now for a tree of arbitrary height T.
Considering a generic directed path P that connects the root of the tree to some leaf, and its segment P_[t_1,t_2] between the depths t_1 and t_2, we shall introduce the convenient notation
ε^( P)_t_1 t_2≡∑_I∈ P_[t_1,t_2]τ_I
for the path sum of the elementary thresholds along this segment. The indices I over which the sum runs label the elementary segments the path is constructed of. We will also be led to use a notation such as
min_ P_[t,T]ε_0 T^( P) ,
by which we mean the minimum of the quantity ε_0 T^( P), varying the segment of the directed path P between the depths t and T (i.e. the nodes between t+1 and T) in all possible ways on the given tree, while keeping fixed the segment and all its nodes between the depths 0 and t.
Let us address the single-branch case n^ch=1, consisting of the directed path P_0. We may easily compute the flow parameters for sub-channels rooted at any node along the branch P_0. Iterating Eqs. (<ref>), we get
κ_eff t=1/T-t, P^*_eff t=ε^( P_0)_t T
for the sub-channel P_0 [t,T] rooted at the node located at depth t on P_0, where t is such that 0≤ t<T. The corresponding threshold pressures P_thr t identify to P^*_eff t, as observed in the previous section.
If we want to require P_0 to be the first path (or one of the first paths, in case of degeneracies) that opens when the pressure at the network inlet is increased from zero, it must obviously be a path that minimizes P_eff t=0^*=ε^( P_0)_0 T. We denote the minimum effective threshold by P_0. If there is only one path that realizes the minimum, which is always the case if the distribution p(τ) of the τ's is continuous, then the flow as a function of the pressure P at the inlet follows from Darcy's law (<ref>), with the parameters read off the previous equation after having set t=0:
Q_T(P)=1/T(P-P_0)_+.
This relation holds only below the pressure above which the next channel opens.
We now turn to the case n^ch=2. We shall assume that the second branch is attached at a node at depth t_1, and we denote by P_1t̂_1 the directed path from the root to the leaves of the tree that includes this branch. This path has obviously an overlap of size t_1 with P_0, which amounts to all throats between the inlet of the network and the node at depth t_1.
The flow parameters of the sub-trees rooted in nodes of depth t≥ t_1 are just those of the single-branch case for each of the channels. If t<t_1 instead, we apply Eq. (<ref>), and then Eq. (<ref>), possibly multiply. We find
κ_eff t=2/T+t_1-2t, P_eff t^*=ε^( P_0)_t t_1+1/2(ε^( P_0)_t_1 T+ε^( P_1t̂_1)_t_1 T).
As for the threshold pressure to open the full two-channel sub-network rooted at the node at depth t<t_1, we check that the recursion built from Eqs. (<ref>),(<ref>) is solved by
P_thr t=ε^( P_0)_t T+T-t/T-t_1(ε^( P_1t̂_1)_t_1 T-ε^( P_0)_t_1 T).
The path that opens first when the pressure at the network inlet is increased from P_0 must obviously be a path that minimizes P_thr t=0. Hence the expression of the second threshold pressure P_1 reads
P_1≡ P_0+ min_t_1', P'_1t̂_1' [t_1'+1,T]T/T-t_1'(ε^( P'_1t̂_1')_t_1' T-ε^( P_0)_t_1' T).
From now on, what we shall call P_1 will be a path that realizes this minimum of the threshold pressure, and t_1 will stand for its overlap with the path P_0. Note that in formula (<ref>), we may replace each sub-path sum ε^( P_0)_t_1 T,ε^( P_1)_t_1 T by the full path sum ε^( P_0)_0 T,ε^( P_1)_0 T: the difference would remain unchanged, since P_0 and P_1 overlap on [0,t_1].
Applying to the network a pressure P chosen above P_1 but below the threshold pressure P_2 for the opening of the next channel (assuming P_2>P_1), the P-dependence of the flow reads
Q_T(P)=2/T+t_1[P-ε_0 t_1^( P_0)-1/2(ε^( P_0)_t_1 T+ε^( P_1)_t_1 T)].
We check that this formula boils down to Eq. (<ref>) for T=2 and when the overlap t_1 is set to its only possible value in this case, namely 1.
Alternatively, Q_T(P) may be expressed in terms of the threshold pressures and of the overlap t_1:
Q_T(P)=2/T+t_1(P-T+t_1/2TP_0-T-t_1/2TP_1).
This formula gives also the correct flow in the case so far disregarded in which the first two channels P_0, P_1 are degenerate and open simultaneously, namely if P_1=P_0.
We may use the recursions we have established in this section and generalize the calculation we have just performed for the case of one and two branches to an arbitrary number of them. Based on the same method, it is now easy to design an algorithm that automates the search for open channels in a given tree-like network, and thus the calculation of the flow. This is what we will expose now.
§.§ Algorithm to determine the open sub-tree and the corresponding flow
We start with a fully-generated tree-like network of height T. Given this frozen disorder, we want to determine the sub-trees of open channels at any given pressure at the inlet, the thresholds P_0,P_1,⋯ at which the channels open successively, as well as the flow function Q(P). In this section, we shall follow closely in spirit the method exposed in Ref. <cit.>.
To begin, we search for a path P_0 that minimizes ε^( P_0)_0 T. Let us first assume, for simplicity, that the values of ε^( P)_0 T on all possible paths P in the tree are all distinct. Then, within this assumption, there is a single channel with lowest threshold pressure P_0. Once this first channel is determined, the flow parameters entering the Darcy law (<ref>) for this single-channel network are calculated iterating Eq. (<ref>) from the leaves to the root.
To find the next channels to open and the next threshold pressure P_1, we loop over all nodes of the path P_0, namely over the depth t_1, with 0<t_1<T. For each t_1, we search for the path P_1 that minimizes ε_t_1+1 T^( P_1). Then, using Eqs. (<ref>) and (<ref>), we compute the minimum pressure P_thr 0 that has to be applied at the inlet of the full network to open the second channel at the outlet of this very node, assuming that all other closed channels of the tree remain closed. We eventually open the channel that minimizes this threshold pressure. We arrive at a two-branch subtree, such as the one singled out in Fig. <ref>. The flow parameters are subsequently computed by iterating Eqs. (<ref>) and (<ref>) going up the tree, from the leaves to the root.
We repeat the steps just described to the two-branch sub-tree we have determined in order to search for the next channel to open. We test the opening pressure of all channels attaching to that sub-tree. These threshold pressures are determined iterating Eqs. (<ref>) and (<ref>). We open the channel endowed with the lowest threshold and compute its flow parameters.
We iterate this overall procedure for the next branches, until some predefined stopping condition is satisfied, depending on the observable we aim at measuring. We see that in this way, we are eventually able to compute the flow Q_T(P) for any value of P.
In the case in which there are n channels that have the same threshold pressure, the algorithm can be kept unchanged: we just open these degenerate channels successively, in random order to each other.
Reduction to the directed polymer problem.
This algorithm can be trivially modified to solve the problem of the determination of the successive configurations of the directed polymer problem, from the ground state to higher excited states. Instead of computing the threshold pressures P_0, P_1, P_2,⋯ and opening the channels in increasing order of these thresholds, we order the directed paths P_0, P_1, P_2⋯ on the tree in increasing sequence ε_0≡ε^( P_0)_0 T, ε_1≡ε^( P_1)_0 T, ε_2≡ε^( P_2)_0 T⋯ of the path sums of the τ's. The latter quantities correspond to the energies of the directed polymer, and the associated paths P_0, P_1, P_2,⋯ represent the corresponding polymer configurations.
§ OPTIMIZED ALGORITHM FOR SUCCESSIVE CHANNEL OPENING
The algorithm just described requires the prior generation of the network. The complexity of the generation of a tree of height T grows like the number of bonds, namely exponentially in T. This definitely restricts the values of T that may be reached, in any practical implementation, to a few dozen units. With such an algorithm, whatever the computing power available, it will never be possible to increase T by more than a factor of order 1.
However, there is a much more efficient way to proceed. Instead of generating all elementary thresholds τ a priori, we may grow the trees branch-by-branch. The new algorithm we shall introduce is inspired from that described in Ref. <cit.>, which was designed to generate exactly the tip region of a branching random walk at large times. The initial motivation came from particle physics: the problem there was to be able to efficiently generate scattering configurations relevant to electron-nucleus collisions, in some suitable asymptotic high-energy regime. (The interested reader is referred to <cit.> for a recent review on the connections between scattering in particle physics and general branching processes). But the algorithm may also be used to generate the lowest-lying energy levels of a directed polymer in a random medium in the mean-field approximation, which turns out to be a problem in the same class. Notably, the underlying method can be interpreted as a spinal decomposition of branching random walks, a technique previously recognized in mathematical literature (see e.g. Refs. <cit.>) but not previously developed into an algorithm.
Let us start by describing the algorithm to solve the directed polymer problem, before explaining how to adapt it to address the Darcy problem.
§.§ Energy-ordered tree generation
Here, we explain how to sequentially construct the configurations P of increasing energy ε^( P)_0 T, where ε^( P)_t_1 t_2 was defined in Eq. (<ref>), starting from the ground state.
The main idea is that the distribution of the ground state energy, min_ Pε_0 T^( P)≡ε_0, is determined by a non-linear evolution equation that is straightforward to solve numerically (at least when all energies are discrete). This allows us to determine the ground state energy without generating the entire random tree. Once ε_0 is known, a polymer configuration P_0 with that minimum energy can be exactly constructed using a branching random walk defined by two elementary processes with non-trivial but definite probabilities. Furthermore, we consider the first excited state as the configuration that minimizes the ground state energy among all polymer configurations branching off P_0. Its energy can also be determined without needing to generate all configurations a priori. This process can be iterated until all relevant configurations have been generated.
We will successively discuss the probability distribution of the ground state energy, the construction of the ground state configuration(s), and then the excited states.
Ground state energy.
We denote by _T(ε_0>ε) the probability that the energy of the ground state for a tree of height T is greater than ε. We call p(τ) the distribution of the random energies τ on the bonds. Then
_T(ε_0>ε)=∑_τ_01p(τ_01)_T-1(ε^bin_0>ε-τ_01) ,
where _s(ε_0^bin>ε') stands for the probability that the energy of the ground state of a perfect binary tree of height s is larger than ε'. The discrete sum becomes an integral if p is a probability density. But for practical reasons, we will assume that τ takes only integer values. Other cases may be recovered through an appropriate scaling, and possible limiting procedures.
The distribution of ε_0^bin obeys a recursion relation in the height of the binary tree:
_s+1(ε_0^bin>ε')=(∑_τ p(τ) _s(ε_0^bin>ε'-τ))^2 .
The ground state energy of a polymer made of a single monomer is of course null, and thus the initial condition simply reads
_s=0(ε_0^bin>ε')= 1_{ε'<0} .
The derivation of recursions such as Eq. (<ref>) is standard (see e.g. <cit.> for the discussion of a very similar model).
It proves convenient to introduce the probability that the ground state energy ε_0^bin is not larger than some ε'. It is just the complementary of the previous probability:
u_s(ε')≡_s(ε_0^bin≤ε')=1-_s(ε_0^bin>ε').
From Eq. (<ref>), we see that it obeys the non-linear finite-difference equation
u_s+1(ε')=2∑_τ p(τ)u_s(ε'-τ)-(∑_τ p(τ)u_s(ε'-τ))^2, with u_s=0(ε')= 1_{ε'≥ 0}.
Such equations can be shown to admit “pulled-front solutions”, and in this sense, belong to the universality class of the Fisher-Kolmogorov-Petrovsky-Piscounov (FKPP) equation <cit.> (see Ref. <cit.> for an extensive review, as well as Appendix <ref>).
From Eqs. (<ref>) and (<ref>), the distribution of ε_0 is found to read
_T(ε_0=ε)=∑_τ_01p(τ_01)
[u_T-1(ε-τ_01)-u_T-1(ε-τ_01-1)].
Building one configuration of minimal energy.
Once the energy τ_01 of the first bond and the minimum energy ε_0 are known, the paths P_0≡ P_0 [0,T] in the tree corresponding to the minimal-energy configurations can be constructed iteratively, using a biased branching random walk that we shall now fully specify.
We shall assume that the sub-path P_0 [0,t] of a configuration of minimum energy (which is not necessarily unique) is known. Given this, we can determine the law governing the next bond energy τ_t t+1. The sub-path P_0 [t,T], which we seek to construct, corresponds to a minimal-energy configuration of a polymer with T-t+1 monomers. This sub-path exists within the configuration space represented by the binary tree rooted in the node of P_0 [0,t] at depth t. The energy of this sub-path is fixed to ε^( P_0)_t T= ε_0-ε^( P_0)_0 t.
We now proceed with the construction. The sub-polymer P_0 [0,t] has two possible continuation bonds, with energies denoted by τ^(0)_t t+1 and τ^(1)_t t+1, respectively (see Fig. <ref>). To determine their joint distribution, we need the probabilities R_t^ε_0(ε) and B_t^ε_0(ε) which describe the likelihood that a polymer configuration with a segment energy ε over [0,t] will have a minimum total energy either equal to ε_0 or greater than ε_0, respectively. These probabilities correspond to the likelihood that the ground state energy of sub-polymers rooted at (ε,t), and made of T-t+1 monomers, is equal to or greater than ε_0-ε. It is then enough to recall that u_s(ε') is the probability that the ground state energy of a binary tree of height s is not larger than ε'. The sought probabilities follow immediately:
R_t^ε_0(ε)=u_T-t(ε_0-ε)-u_T-t(ε_0-ε-1)
and
B_t^ε_0(ε)=1-u_T-t(ε_0-ε) .
Let us now introduce the paths P_0^(0) and P_0^(1), defined such that they both overlap with P_0 on the segment [0,t] and then follow the two possible outgoing bonds from the node of P_0 located at depth t:
P_0 [0,t]^(0)= P_0 [0,t]^(1)= P_0 [0,t],
and ε^( P_0^(0))_t t+1=τ_t t+1^(0),
ε^( P_0^(1))_t t+1=τ_t t+1^(1).
The labels are chosen so that P_0^(0) will eventually coincide with P_0. We must also impose the following conditions on the paths P_0^(0) and P_0^(1):
min_ P_0 [t+1,T]^(0)ε^( P_0^(0))_0 T=ε_0,
and min_ P_0 [t+1,T]^(1)ε^( P_0^(1))_0 T≥ε_0.
We are now ready to express the probability distribution for the pair (τ^(0)_t t+1,τ^(1)_t t+1).
There are two cases to distinguish, depending on whether the inequality in Eq. (<ref>) holds as an equality, or as a strict inequality:
* Equality: the possible configurations P_0^(0) and P_0^(1) have the same minimum energy ε_0. Then, the joint probability of the latter event and that the bond energies have the definite values τ, τ', given that the minimal energy of the overall polymer configuration P_0 is ε_0, reads
(τ^(0)_t t+1=τ,
τ^(1)_t t+1=τ';min_ P_0 [t+1,T]^(0)ε^( P_0^(0))_0 T=min_ P_0 [t+1,T]^(1)ε^( P_0^(1))_0 T
=ε_0|min_ P_0[t,T]ε^( P_0)_0 T=ε_0)
=p(τ)p(τ')R_t+1^ε_0(ε^( P_0)_0 t+τ)
R_t+1^ε_0(ε^( P_0)_0 t+τ')/R_t^ε_0(ε^( P_0)_0 t).
* Strict inequality: one of the possible configurations, which we chose to label as P_0^(1), has ground state energy larger than ε_0. In this case,
(τ^(0)_t t+1=τ,
τ^(1)_t t+1=τ';min_ P_0 [t+1,T]^(0)ε^( P_0^(0))_0 T=ε_0,
min_ P_0 [t+1,T]^(1)ε^( P_0^(1))_0 T
>ε_0|min_ P_0[t,T]ε^( P_0)_0 T=ε_0)
=2p(τ)p(τ')R_t+1^ε_0(ε^( P_0)_0 t+τ)
B_t+1^ε_0(ε^( P_0)_0 t+τ')/R_t^ε_0(ε^( P_0)_0 t) .
We check that the sum of these probabilities marginalized with respect to τ and τ' is unitary, as it should: it is enough to replace R and B appearing in the above expressions by their definitions (<ref>) in terms of u's and perform the sums, using the evolution equation (<ref>).
We first determine if we are in case <ref> or <ref> using the probabilities (<ref>),(<ref>) summed over τ and τ'. In a second step, we draw τ and τ' from either the probability in Eq. (<ref>) or that in Eq. (<ref>), with appropriate normalization following from the conditioning to be in cases <ref> or <ref> respectively.
In both cases, the outgoing bond labeled (0) extends the path P_0. If one ends up in the non-degenerate case <ref>, then once τ_t t+1^(1) is determined, one draws the minimal energy that can have the other polymer configuration: the next configuration to be constructed will be one among those realizing this minimum. The distribution of min_ P_0^(1)ε^( P_0^(1))_t+1 T reads
(.min_ P_0 [t+1,T]^(1)ε^( P_0^(1))_t+1 T=ε|min_ P_0 [t+1,T]^(1)ε^( P_0^(1))_0 T>ε_0)
=u_T-t-1(ε)-u_T-t-1(ε-1)/B^ε_0_t+1(ε^( P_0)_0 t+τ_t t+1^(1)) 1_{ε>ε_0-ε^( P_0)_0 t-τ_t t+1^(1)} .
We shall call P_1t̂ a polymer configuration that realizes this minimum. Note that at this point, we don't need to have fully constructed this configuration: all we need for the next step is the value of its energy, and we have just shown that it can be drawn independently of knowledge of the detailed configuration.
One iterates this construction of the segments of increasing depth until the first path P_0 is complete. Whether there are degeneracies or not, we shall always build the configuration labeled (0) first, leaving the construction of the configuration (1) for a next step that we shall now describe. We store τ^(1)_t t+1, and if there is no degeneracy, namely in case <ref>, we also store the values of the minimum energy of the polymer configurations branching off this node of P_0.
Next configurations.
Once a ground state configuration has been constructed, we search for the next lowest energy level. There may be several configurations having the same ground state energy. In this case, we just pick randomly one of the nodes off which two sub-polymer configurations with degenerate energies branch, say at depth t, and we construct the second polymer configuration. The latter results from a branching random walk defined by the probabilities (<ref>) and (<ref>). We eventually get a path P_0', with overlap t with P_0. In the same way as we did for the path P_0, we draw and store the bond energies of the polymer configurations which branch off the newly-constructed nodes of P_0', as well as the minimum energies of these configurations.
If there is no degeneracy of the ground-state energy ε_0, or after having opened all degenerate paths, we determine the energy of the first excited state. It must be the lowest energy of all configurations P_1t̂', where t' runs over all nodes of the path P_0 (or of the set of paths P_0, P'_0,⋯ of energy ε_0). In the non-degenerate case, in which the sub-tree of configurations of energy ε_0 consists of a single branch,
ε_1=min_t'(ε_0 t'^( P_0)+ε_t' T^( P_1t̂')).
The polymer configuration P_1 that branches off P_0 at the node labeled, say, t, and minimizes this energy is then constructed, again from a branching random walk. The latter is defined by the probabilities (<ref>) and (<ref>), up to the replacements P_0→ P_1 (as well as for the super-scripted paths with (0) and (1)) and ε_0→ε_1. A sketch of two configurations corresponding to the two lowest values of the energy in the non-degenerate case is displayed in Fig. <ref>.
In the degenerate case, the minimum in Eq. (<ref>) is taken over all nodes of the sub-tree of the configurations of energy ε_0, and in the term ε_0 t'^( P_0), the path P_0 is replaced by a path of that tree that includes the considered node.
The whole procedure is thus iterated until some stopping condition is reached, e.g. on the maximum energy or on the number of configurations.
§.§ Threshold pressure-ordered tree generation
The algorithm to open the successive channels as the pressure at the inlet is increased can almost be taken over from the one just described for the directed polymer problem. The only variation will turn out to be in the ordering of the channels to open.
Let us start by recalling the dictionary between the directed polymer problem of finding the lowest-lying energy configurations, and the Darcy problem of opening channels when the pressure at the inlet of the network is dialed up. The bonds between monomers correspond to throats which join at the trivalent nodes of the tree-like network. The latter are identified to pores in the Darcy problem. The rank t of the monomers in the polymer is the depth of the throat joints, its size T is the height of the network. The bond energies τ are the elementary threshold pressure difference of the considered throats.
However, in the DP problem, the successive configurations that we generate are ordered in energy ε_0,ε_1,⋯. In the Darcy problem instead, we open the channels in increasing order of the applied pressure thresholds P_0, P_1,⋯. In general, the latter do not correspond to the successive energies ε_0 T^( P_0), ε_0 T^( P_1),⋯.
To construct the sub-tree of open channels, we start with the first channel, which matches exactly a configuration that possesses the ground state energy in the DP case. We then search for the channels that open successively as the pressure is increased. The procedure to find the next channel, after P_0, to open is exactly the same as the one implemented in the naive algorithm in Sec. <ref>. Given the open sub-tree, instead of looking for the next minimum energy, we compute the threshold pressure at the inlet for each of the closed channels attaching to it, assuming that all other channels remain closed. Instead of the energy ε_1 given by Eq. (<ref>), the relevant variable P_1 is this minimum threshold pressure (<ref>). We construct and open one of the channels that possess the threshold pressure P_1.
We then iterate, using the more general equations (<ref>),(<ref>) in order to compute P_2,P_3⋯, until some stopping condition is satisfied.
Performance of the new algorithm.
The major improvement brought by our new method is obviously that we do not need to generate the full tree of throats (i.e. the random medium) a priori. We are able to generate the branches that correspond exactly to the channels that open, in successive order when the exerted pressure is increased. This is all the more useful as we only need a number of open channels that is much smaller than the total number 2^T-1 of possible channels.
Consequently, the complexity of our new algorithm is linear in the height T of the tree-like network, instead of exponential as for the more straightforward algorithm described in Sec. <ref>. At each threshold, the calculation time is dominated by the time needed to search for the next channels to open. Hence for a total number of channels n^ch, the complexity is bounded by O(T×(n^ch)^2). This allows us to increase the height accessible to numerical investigations by several orders of magnitude.
Note however that a solution u_t(ε') of Eq. (<ref>) needs to be pre-computed and stored for all t in [0,T], and for all relevant values of ε', the number of which becomes proportional to √(T) at large T. But in practice, this does not impose any major restriction on the calculations that are useful for the insight we wish to gain from a numerical study.
All numerical data presented and discussed below were generated running our new algorithm on an off-the-shelf laptop computer, for not more than a few hours for each parameter setting. Note that we also implemented the naive algorithm, and performed accurate comparisons of the calculations of observables using both algorithms for low values of T, in order to validate our implementation.
§ ANALYSIS OF THE MODEL
In this section, we apply our optimized algorithm to generate new relevant numerical data. Before presenting our results, we will briefly revisit a few conjectures proposed in Ref. <cit.> (with detailed derivations in Ref. <cit.>). These conjectures lead to asymptotic formulas that our data will allow us to test, thereby providing an opportunity to confirm or refute the underlying theoretical framework.
§.§ Analytical formulas in two asymptotic limits
According to Ref. <cit.>, there are essentially two limits in which the flow can be worked out: (i) when the pressure P exerted at the inlet is large enough so that all channels are open, (ii) for fixed P, but very large tree height T. Let us consider these limits in turn.
High inlet pressure.
We first assume that P is set above the threshold for the opening of the last of the 2^T-1 channels. Then the tree of height T is complete, as well as all its sub-trees. Looking at equation (<ref>) for κ_eff, it is easy to convince oneself that the latter depends only on the depth of the node one considers. It depends neither on the τ's, nor on the particular node at a given depth.
Let us call κ_t the value of κ_eff at the top of a (sub-)tree of generic height t, and P_t^* the value of the effective threshold pressure at the inlet of the same (sub-)tree. [Note that at variance with κ_t, P_t^* is a random variable also in this case in which all channels are open, meaning that it depends on the particular realization of the (sub-)tree]. We then apply Eq. (<ref>) to the node of lowest depth, namely at the top of the tree: κ_T=2κ_T-1/1+2κ_T-1 and
P_T^*=τ_01+1/2(P_T-1^*(0)+P_T-1^*(1)) .
The equation for κ_T is a recursion. With the initial condition κ_1=1, it can be easily solved:
κ_T=2^T-1/2^T-1 .
The equation for P_T^* can also be turned into a recursion by taking the average over realizations of the disorder. The obtained equation for P^*_T can be trivial solved, given the initial condition P^*_0=P^*_0=0:
P^*_T=T×τ .
Hence these simple calculations yield the following useful asymptotical expressions:
κ_eff→1/2 and P^*_eff/T→τ when P→∞.
Note that P^*_eff is a self-averaging quantity, so we expect that at large pressure P and large tree height T, all the flow curves converge to
Q_T(P)=1/2(P-τ T).
Great tree height.
When the pressure at the inlet is finite, a finite number of channels are open. Of course, the channels that are most likely to open when the pressure at the inlet increases are expected to sit at low ε_0 T. But it is also advantageous that they have a small overlap with each other, namely that the paths get quickly separated. This is easily seen from the minimization of P_thr 0 in Eq. (<ref>), realized in Eq. (<ref>) by the paths P_0 and P_1:
P_1=P_0+T/T-t_1(ε_0 T^( P_1)-ε_0 T^( P_0)).
To get the threshold pressure P_1 as close as possible to P_0, it is apparent that the path sum ε_0 T^( P_1) of the newly-opened channel needs to be close to that of the first channel ε_0 T^( P_0), and at the same time, the overlap t_1 of the two channels needs to be as small as possible. Consequently, we may think that the ε's of the open channels are essentially independent identically-distributed random variables, like the energy levels in the Random Energy Model (REM) <cit.> (see Appendix <ref>).
From this observation, one predicts a definite form for the distribution of the number of open channels in a given pressure range above P_0, see Eq. (<ref>), that can be checked with the help of our numerical code. The mean flow for a pressure P such that P-P_0=x, where x is kept fixed, can also be evaluated in this simplified picture <cit.>:
Q_T(P_0+x)=e^β_c x-1/β_c T .
Let us give an alternative proof of this formula to that in Ref. <cit.>. For pressures different from the opening pressures of the various channels, the flow Q is the affine function of the pressure as described in Eq. (<ref>). Hence its derivative is simply the piece-wise constant function κ_eff:
dQ_T(P_0+x)/dx=κ_eff(P_0+x).
On the other hand, it is easy to see, iterating Eq. (<ref>), that for a deep tree made of n^ch branches which have negligible overlaps:
κ_eff=n^ch/T.
Let us now replace the right-hand side of Eq. (<ref>) by Eq. (<ref>), and take the expectation value:
dQ_T(P_0+x)/dx=n^ch(x)/T.
Consistently with our picture, we identify n^ch with n_REM provided in Eq. (<ref>). Integrating Eq. (<ref>) over x, taking account of the boundary condition Q_T(P_0)=0, one easily arrives at Eq. (<ref>). The latter is a prediction that we will be able to check numerically.
§.§ Calculation of selected observables in a numerical implementation of the improved algorithm
The numerical model is fully defined once the distribution p(τ) of the elementary thresholds τ is specified. We expect the asymptotic behavior of the observables we consider to be universal across a broad class of models. The qualitative behavior should not significantly depend on the specific choice of p(τ), aside from a small number of easily calculable constants, as long as the distribution is sufficiently regular and decreases “rapidly enough” for large values of |τ|.
For simplicity, we take the uniform distribution on a set of the contiguous non-negative integers {1,2,⋯,N}. The parameter N was set to 20 for the flow calculations presented in Sec. <ref>, as these calculations are relatively insensitive to N. We increased N to 200 for the observables discussed in Sec. <ref> to minimize discretization artefacts, which these observables are more sensitive to. There would be no problem in principle in taking a continuous distribution p(τ), but the implementation would be significantly more complicated, technically speaking. The tricky part would be solving Eq. (<ref>) and storing the result for all values of the depth between 0 and T.
The few model-dependent parameters needed as inputs to the analytical formulas are computed in Appendix <ref>.
§.§.§ Flow
As argued in Sec. <ref>, the flow for a given realization of the network is a piece-wise affine function of the pressure P applied at the inlet, described by the effective Darcy law (<ref>). In order to visualize Q_T(P), we generated 10 realizations of it for different values of T; see Fig. <ref>. We see that the realizations all exhibit the same global shape, essentially up to a T-dependent random shift. The main T-dependence must come from the random threshold pressure P_0, the expectation value of which roughly grows proportionally to T. We can also see clearly that at large P, namely for large number of open channels, the slope κ_eff of Q_T(P) for each of the realizations becomes close to the expected asymptotics given in Eq. (<ref>). It turns out that P_eff^* instead stays lower than the expected value T×τ, where τ=10.5 in our choice for the distribution p(τ). We shall discuss this fact further below. In Fig. <ref> is displayed the average of the flow Q_T(P_0+x) over many realizations, as a function of x. The curves that correspond to different values of T are very similar at least for large enough x, and almost superimpose, now that the realizations have been shifted by the threshold pressure -P_0. The inset shows that for large tree heights, the scaled mean flow T×Q_T(P_0+x) may converge to some scaling curve. We also see that the cluster of curves all tend to the function x↦ x at the origin x→ 0, consistently with the expression (<ref>) of Q_T just above the threshold for the opening of the network.
Let us focus on this mean scaled flow for moderate values of x. We perform the calculation with the values of T used in Ref. <cit.> (i.e. T≤ 21), as well as for much larger T: for this observable, we were able to generate data for values of T that are about three orders of magnitude larger than in the latter reference. The result of our calculation is shown in Fig. <ref>. With the new data, we see a clear convergence to the expected asymptotic expression (<ref>), which was not possible to see with lower values of T: indeed, this convergence turns out to be very slow with T.
Effective parameters.
In order to investigate the properties of the flow in more details, we compute the permeability κ_eff and the threshold pressure P_eff^* entering the effective Darcy law (<ref>) as a function of the number of open channels. We then average these quantities over the realizations.
Our numerical data for κ_eff is displayed in Fig. <ref>. We see that the asymptotics (<ref>) are clearly approached when T assumes moderate values. For larger values of T, the expected convergence is slower. For the largest values of T we set (T=1000), the limit is hardly approached even when as many as n^ch=1000 channels are open.
In order to quantify the convergence, we compute the expectation value of the minimum number of channels n^ch_SAT needed to reach an effective permeability κ_eff≥ 0.4; see Fig. <ref>. We find that the T-dependence of this quantity is linear to a good approximation when T becomes large. This clearly confirms the observation in Ref. <cit.>. The latter was inferred from data generated with the help of the equivalent of the basic algorithm described in Sec. <ref>, although we now see that these data were very far from any asymptotic regime.
It is also useful to plot κ_eff as a function of the scaled variable n^ch/T; see Fig. <ref>. It seems clear that at large T, one tends to a limiting curve. The fact that the cluster of curves exhibits some significant dispersion is related to the slow convergence with T. The shape of the limiting curve for low and high numbers of open channels is consistent with the asymptotic form we may expect: we approach Eq. (<ref>) in the regime n^ch≪ T, and Eq. (<ref>) when n^ch≫ T. The transition between these two regimes is confirmed to happen when the number of open channels is on the order of T <cit.>.
The mean threshold pressure P_eff^* is displayed in Fig. <ref>, as a function of the number of open channels n^ch and for different values of T. When n^ch is large, we expect that P_eff^*/T tends to τ. While the numerical results seem in agreement with this expectation for the lowest values of T, we see that when T gets large while keeping n^ch fixed, this quantity seems to converge instead to -v(β_c). Actually, this is the value that would be expected when only one channel is open, namely the ground state configuration in the DP problem, and in the limit T→∞. Indeed, according to Eq. (<ref>) and Eq. (<ref>),
P_eff^*(n^ch=1)/T=1/Tε^( P_0)_0 T=-v(β_c)+3/2β_cln T/T+ O(1/T) .
§.§.§ Statistics of open channels
So far, we have focused on the flow, the most natural observable for this problem. Now we focus on n^ch(x), namely the number of open channels at the pressure P= P_0+x. Note that here we fix the value of x for all disorder realization while the threshold P_0 and thus the pressure P are realization dependent. In particular we are interested in its average n^ch(x) and its distribution.
In Fig. <ref> is displayed the histogram of n^ch(x) for x=20. We see that the larger T, the closer one gets from what would be expected for the Random Energy Model (REM), namely
P_n^ch(x)=e^-β_c x(1-e^-β_c x)^n^ch-1 ,
consistently with the simplified picture (see Appendix <ref>). We have set N=200, instead of N=20 used for the flow calculation: in this way, the degeneracies of the energy levels due to finite N are strongly suppressed and we are able to recover the asymptotic large T behavior given in Eqs. (<ref>),(<ref>).
Figure <ref> shows the mean, n^ch(x), divided by what the expected leading behavior, namely e^β_c x. We see that the larger T, the curves get constant and closer to 1. Again, the convergence is quite slow. We deemed it useful to compare the mean number of open channels to the mean number of configurations of the DP in the energy interval x above the ground state. We see that the latter also converges to the expected asymptotics ∝ x e^β_c x.
Note that the small oscillations seen in the curves are an effect of finite N and thus of the fact that the energy levels are discrete. Their amplitude increases for a smaller N. However, we have verified that also for N=20 used for the flow the trend of the curves is the same.
§ CONCLUSION
We have designed and implemented a novel algorithm that exactly generates the open channels of a tree-like network of throats at a specified pressure. This method is effectively a spinal decomposition of the tree, with the primary spine corresponding to the first open channel. Leveraging this algorithm, we were able to analyze networks of unprecedented depth, achieving a two to three orders of magnitude increase in tree height T compared to previous studies, all while using a standard laptop. Given the algorithm's linear complexity with respect to T, even greater heights are within reach.
The results obtained from our algorithm confirm with remarkable precision the conjectures proposed in Ref. <cit.>. Specifically, we have successfully verified the analytical expression for the mean flow [Eq.(<ref>)], demonstrating the feasibility of deriving an explicit expression for the non-linear Darcy law of a yield stress fluid within this network. Moreover, our findings strongly support the physical picture argued in Ref. <cit.>:
* The first channels that open are those with low opening thresholds and minimal overlap. As a consequence, the number of open channels, n^ch(x), identifies to the number of low-energy levels of the Random Energy Model (REM).
* The effective permeability, κ_eff, initially increases linearly with the number of open channels. Upon the activation of approximately t channels, κ_eff saturates to the Newtonian permeability of the medium. It is noteworthy that t channels represent a very small fraction compared to the total possible channels in the network, 2^t-1.
* The effective offset, P_eff^*, gradually evolves from the ground state energy of the associated directed polymer to the mean energy of a non-optimized polymer.
These data and our algorithm open up the prospect of studying sub-asymptotic corrections, and to elaborate a more refined picture of the corresponding physical effects, beyond the simplified REM model for the tree-like network. These results provide a robust validation of the theoretical framework and offer significant insights into the behavior of yield stress fluids in complex networks.
A longer-term and more ambitious project would be to understand if the behavior of the effective permeability and effective offset is peculiar to tree-like networks or if it generalizes to networks of finite dimension <cit.>.
§ ACKNOWLEDGMENTS
S.M. thanks Pascal Maillard, Bastien Mallein, Michel Pain for their interest and for related discussions. A.R. thanks Chen Liu, A. De Luca, V. Schimmenti and L. Talon for the useful discussions. This project was initiated in the framework of the “GDR Branchement”. It has received financial support from the CNRS through the MITI interdisciplinary programs, and from the Agence Nationale de la Recherche (ANR), grant ANR-23-CE30-0031-04 (DISCREEP).
§ TRAVELING WAVE SOLUTIONS TO THE NON-LINEAR EQUATION AND ENERGY LEVELS OF THE DIRECTED POLYMER NEAR THE GROUND STATE
From Eq. (<ref>) one derives the probability distribution of the ground state of the directed polymer. Let us discuss the solutions to this equation.
It is well-known that for a wide class of initial conditions, including those relevant to this paper, the large-time solution of the non-linear equation (<ref>) forms a universal traveling wave (see e.g. Ref. <cit.>, and Ref. <cit.> for the mathematical proof in the case of the FKPP equation). The latter refers to a uniformly-moving front that monotonously connects u_s=0 (for ε'→-∞) to u_s=1 (for ε'→+∞). The standard way of determining the properties of the traveling wave is to solve the equation obtained from the linearization of Eq. (<ref>) in the region where u_s is small, where only the first term in the right-hand side is relevant. A suitable Ansatz is
u_s(ε')∝ e^β[ε'+v(β)s],
where β is a positive parameter. Requiring that this form be a solution of the linearized equation (<ref>) determines v(β):
v(β)=1/βln(2∑_τ p(τ)e^-βτ).
(See e.g. <cit.> for a discussion of a very similar model). For the initial condition in Eq. (<ref>), the shape of the asymptotic traveling wave in the region where u_s(ε')≪ 1 is known to be of the form (<ref>) with β=β_c, where β_c minimizes v(β). The velocity of the wave is then -v(β_c).
Corrections for large but finite network heights s are also known. The front position reads <cit.>
m_s=-v(β_c)s+3/2β_cln s+const+o(1) .
This quantity also represents the expectation value (or, up to a constant, the median value) of the ground-state energy ε_0^bin of a directed polymer on a binary tree of height s. For ε'<m_s and in the limit m_s-ε'≫ 1, the traveling wave has the following shape:
u_s(ε')≃const×(m_s-ε'+const') e^β_c(ε'-m_s)× e^-(ε'-m_s)^2/[2β_c v”(β_c)s] .
In the region m_s-ε'≪√(β_c v”(β_c)s), the dominant ε'-dependence is indeed the exponential form (<ref>), with β=β_c, corrected by a linear factor. For m_s-ε'≳√(β_c v”(β_c)s), the Gaussian factor drives u_s(P') rapidly to very small values.
A derivation of these results can be found e.g. in Ref. <cit.>. It is not difficult to recover them heuristically: it is sufficient to recognize that, at this level of accuracy and for the particular quantities considered, the non-linear evolution equation can be replaced by its linearization, supplemented with a moving absorptive boundary. We refer the reader to the established literature on this topic, e.g. Ref. <cit.> and references therein.
Mean density of energy levels.
Analyzing further the relationship between the FKPP equation and branching Brownian motion, or between the corresponding finite-difference equation and its associated branching random walk respectively, one may prove that the full information about the distribution of the energy levels near the ground state can be deduced from the solution to Eq. (<ref>), if one takes appropriate initial conditions <cit.>. Of particular interest for the present work is the expectation value n_BRW(x) of the number of energy levels at a distance x above the ground-state energy ε_0^bin <cit.>:
n_BRW(x)≃const× x e^β_c x .
This formula holds for (formally) infinite-s, and x≫ 1/β_c.
Choice of p(τ) in our numerical calculations.
In our particular implementation, we chose p(τ) uniform on the first N non-zero integers, namely
p(τ)=1/N for τ∈{1,2⋯ N},
0 else.
Concretely, for the two values of N we have used for our numerical simulations, the relevant parameters are given in Tab. <ref>.
§ ENERGY LEVELS IN A RANDOM ENERGY MODEL
Let us assume that our configuration space consists of 2^T-1 independent branches, and consider a branch labeled P. It is characterized by the random energy ε_0 T^( P). The generating function G(γ) of the probability (ε_0 T^( P)=ε) that this energy equals some given ε reads
G(γ)≡∑_ε e^-γε (ε_0 T^( P)=ε)
=∑_ε e^-γε[∑_τ'_0 1,⋯,τ'_T-1 T∈ P(∏_j=0^T-1 p(τ'_j j+1))δ_ε,∑_j=0^T-1τ'_j j+1],
and is obviously the same for all branches. An easy calculation yields
G(γ)=(∑_τ'p(τ')e^-γτ')^T.
The sought probability is then obtained from an appropriate integration:
(ε_0 T^( P)=ε)=∫_ Cdγ/2iπ e^γεG(γ)=∫_ Cdγ/2iπ e^γε(∑_τ'p(τ')e^-γτ')^T ,
where the integration domain can be, for example, the segment C≡[γ_0-iπ,γ_0+iπ] in the complex plane, for any real number γ_0. In the limit of very large T, the integral may be evaluated by a steepest-descent method. We introduce
χ(γ)≡ln(2∑_τ'p(τ')e^-γτ')
in terms of which the probabiity (<ref>) can be conveniently re-expressed:
(ε_0 T^( P)=ε)=∫_ Cdγ/2iπ e^γε+[χ(γ)-ln 2]T.
We denote by γ_c the solution to the saddle-point equation
ε+χ'(γ_c)T=0 .
We then set γ_0=γ_c, and observe that the integration domain C may be extended to an infinite line parallel to the imaginary axis in the complex plane: the contribution to the integral of the added pieces of contour would be small for large T. The steepest-descent method evaluation of (ε_0 T^( P)=ε) yields
(ε_0 T^( P)=ε)T≫ 1∼ e^γ_cε+[χ(γ_c)-ln 2]T .
From Eq. (<ref>), we see that (ε_0 T^( P)=ε) decreases exponentially as ε decreases. Thus the typical ground state energy ε_0 is found by requiring that the probability that there is no state of that energy among the 2^T-1 independent branches is of order unity (for example 1/2; the precise value is actually irrelevant). This probability reads
[1-(ε_0 T^( P)=ε_0)]^2^T-1 .
Asking for this quantity to be of order unity for T large is equivalent to requiring that 2^T (ε_0 T^( P)=ε_0)= O(1). Taking the logarithm and using the saddle-point evaluation of (ε_0 T^( P)=ε_0) in Eq. (<ref>), neglecting terms that are small compared to T, we find
γ_cε_0+χ(γ_c)T=0 .
Combining Eq. (<ref>) and Eq. (<ref>), up to the replacement ε→ε_0 in the latter, we see that necessarily
γ_cχ'(γ_c)=χ(γ_c).
Comparing χ defined in Eq. (<ref>) and v introduced in Appendix <ref> [Eq. (<ref>)], we observe that χ(γ)≡γ v(γ). Hence, Eq. (<ref>) is equivalent to v'(γ_c)=0, which implies that γ_c=β_c. The mean density of levels of energy ε near the typical ground state energy ε_0≃ -v(β_c)T then reads
2^T-1(ε_0 T^( P)=ε)≃ e^β_c(ε-ε_0).
If T is large enough, we can consider that the distribution of the energy levels is Poissonian <cit.>, the rate parameter of the Poisson law being the exponential in the right-hand side of Eq. (<ref>).
One may compute the probability P_n(x) of having n energy levels in the interval of size x above the ground state energy, namely in [ε_0,ε_0+x]. When β_c is small, which is verified for large N, then x can be considered a continuous variable, since the rate parameter of the Poisson process varies slowly with x. The latter becomes the intensity of a Poisson point-like process on the line. In this continuous limit, we get a simple expression for P_n(x):
P_n(x)=e^-β_c x(1-e^-β_c x)^n-1 .
The mean number of levels which follows from this probability reads
n_REM(x)=e^β_c x.
Comparing this formula to the equivalent one for branching random walks (<ref>), we see that they differ essentially by a linear factor x.
10Darcy:1856
H. Darcy, Les fontaines publiques de la ville de Dijon: Exposition et
application des principes à suivre et des formules à employer dans les
questions de distribution d'eau.
Victor Dalmont, Libraire des Corps imperiaux des ponts et
chaussées et des mines, 1856.
Bear:1988
J. Bear, Dynamics of Fluids in Porous Media.
Dover Civil and Mechanical Engineering Series, Dover, 1988.
Sahimi:2011
M. Sahimi, Flow and transport in porous media and fractured rock: from
classical methods to modern approaches.
John Wiley & Sons, 2011.
Blunt:2017
M. J. Blunt, Multiphase Flow in Permeable Media: A Pore-Scale
Perspective.
Cambridge University Press, 2017.
Feder:2022
J. Feder, E. G. Flekkøy, and A. Hansen, Physics of Flow in Porous Media.
Cambridge University Press, 2022.
Roux:1987
S. Roux and H. J. Herrmann, “Disorder-induced nonlinear conductivity,”
Europhysics Letters, vol. 4, p. 1227, dec 1987.
Liu:2019
C. Liu, A. De Luca, A. Rosso, and L. Talon, “Darcy's law for yield stress
fluids,” Phys. Rev. Lett., vol. 122, p. 245502, Jun 2019.
Barnes:2000
H. Barnes, A Handbook of Elementary Rheology.
Raymond F. Boyer Library Collection, University of Wales, Institute
of Non-Newtonian Fluid Mechanics, 2000.
Poiseuille:1840
J.-L.-M. P. Poiseuille, “Recherches expérimentales sur le mouvement des
liquides dans les tubes de très petits diamètre,” Comptes rendus des
séances de l’Académie des Sciences, vol. 11, pp. 1041–1048, 1840.
Schimmenti:2023
V. M. Schimmenti, F. Lanza, A. Hansen, S. Franz, A. Rosso, L. Talon, and
A. De Luca, “Darcy's law of yield stress fluids on a treelike network,” Phys. Rev. E, vol. 108, p. L023102, Aug 2023.
Schimmenti:2023_supplemental
V. M. Schimmenti, F. Lanza, A. Hansen, S. Franz, A. Rosso, L. Talon, and
A. De Luca, “The darcy law of yield stress fluids on a tree-like network.” Supplemental material to Ref. <cit.>, available at
http://link.aps.org/supplemental/10.1103/PhysRevE.108.L023102.
Mezard:1986
M. Mézard, G. Parisi, and M. Virasoro, Spin Glass Theory and Beyond.
WORLD SCIENTIFIC, 1986.
Derrida:1988
B. Derrida and H. Spohn, “Polymers on disordered trees, spin glasses, and
traveling waves,” Journal of Statistical Physics, vol. 51,
pp. 817–840, 1988.
Brunet:2020yiq
E. Brunet, A. D. Le, A. H. Mueller, and S. Munier, “How to generate the tip
of branching random walks evolved to large times,” EPL, vol. 131,
no. 4, p. 40002, 2020.
Kirchhoff:1868
G. Kirchhoff, “Ueber den Einfluss der Wärmeleitung in einem Gase auf die
Schallbewegung,” Annalen der Physik, vol. 210, no. 6, pp. 177–193,
1868.
Angelopoulou:2023qdm
A.-K. Angelopoulou, A. D. Le, and S. Munier, “Scattering from an external
field in quantum chromodynamics at high energies: from foundations to
interdisciplinary connections.” arXiv:2311.14796.
Hardy:2009
R. Hardy and S. C. Harris, “A SpineApproach to BranchingDiffusions
with Applications to L^p-Convergence of Martingales,” in
Séminaire de Probabilités XLII (C. Donati-Martin, M. Émery,
A. Rouault, and C. Stricker, eds.), pp. 281–330, Berlin, Heidelberg:
Springer Berlin Heidelberg, 2009.
Harris:2017
S. C. Harris and M. I. Roberts, “The many-to-few lemma and multiple
spines,” Annales de l'Institut Henri Poincaré, Probabilités et
Statistiques, vol. 53, no. 1, pp. 226 – 242, 2017.
Derrida:2016
B. Derrida and P. Mottishaw, “On the genealogy of branching random walks and
of directed polymers,” Europhysics Letters, vol. 115, p. 40005, sep
2016.
Fisher:1937
R. A. Fisher, “The wave of advance of advantageous genes,” Annals of
Eugenics, vol. 7, pp. 355–369, Jun 1937.
KPP:1937
A. Kolmogorov, I. Petrovsky, and N. Piscounov, “Étude de l'équation de la
diffusion avec croissance de la quantité de matière et son application
à un problème biologique,” Bull. Univ. État Moscou, A, vol. 1,
no. 6, pp. 1–25, 1937.
VanSaarloos:2003
W. Van Saarloos, “Front propagation into unstable states,” Physics
reports, vol. 386, no. 2-6, pp. 29–222, 2003.
Derrida:1980
B. Derrida, “Random-energy model: An exactly solvable model of disordered
systems,” Physical Review B, vol. 24, no. 5, pp. 2613–2626, 1981.
Bramson:1983
M. D. Bramson, “Convergence of solutions of the Kolmogorov equation to
travelling waves,” Memoirs of the American Mathematical Society,
vol. 44, Jul 1983.
Brunet:1997
E. Brunet and B. Derrida, “Shift in the velocity of a front due to a cutoff,” Phys. Rev. E, vol. 56, pp. 2597–2604, Sep 1997.
Brunet:2011
E. Brunet and B. Derrida, “A branching random walk seen from the tip,”
Journal of Statistical Physics, vol. 143, p. 420–446, Apr. 2011.
Aidekon:2013
E. Aïdekon, J. Berestycki, E. Brunet, and Z. Shi, “Branching brownian
motion seen from its tip,” Probability Theory and Related Fields,
vol. 157, no. 1-2, pp. 405–451, 2013.
Mueller:2019ror
A. H. Mueller and S. Munier, “Particle-number distribution in large
fluctuations at the tip of branching random walks,” Phys. Rev. E,
vol. 102, no. 2, p. 022104, 2020.
Derrida:2015
B. Derrida and P. Mottishaw, “Finite size corrections in the random energy
model and the replica approach,” Journal of Statistical Mechanics:
Theory and Experiment, vol. 2015, p. P01021, jan 2015.
|
http://arxiv.org/abs/2409.03274v2 | 20240905063137 | Recent Advances in Attack and Defense Approaches of Large Language Models | [
"Jing Cui",
"Yishi Xu",
"Zhewei Huang",
"Shuchang Zhou",
"Jianbin Jiao",
"Junge Zhang"
] | cs.CR | [
"cs.CR",
"cs.AI"
] |
Tensor network square root Kalman filter
for online Gaussian process regression
[
Received 16 July 2024; accepted 04 September 2024
=================================================================================
§ ABSTRACT Large Language Models (LLMs) have revolutionized artificial intelligence and machine learning through their advanced text processing and generating capabilities. However, their widespread deployment has raised significant safety and reliability concerns. Established vulnerabilities in deep neural networks, coupled with emerging threat models, may compromise security evaluations and create a false sense of security. Given the extensive research in the field of LLM security, we believe that summarizing the current state of affairs will help the research community better understand the present landscape and inform future developments. This paper reviews current research on LLM vulnerabilities and threats, and evaluates the effectiveness of contemporary defense mechanisms. We analyze recent studies on attack vectors and model weaknesses, providing insights into attack mechanisms and the evolving threat landscape. We also examine current defense strategies, highlighting their strengths and limitations. By contrasting advancements in attack and defense methodologies, we identify research gaps and propose future directions to enhance LLM security. Our goal is to advance the understanding of LLM safety challenges and guide the development of more robust security measures.
§ INTRODUCTION
Large Language Models (LLMs) represent a significant breakthrough in the fields of artificial intelligence (AI), particularly due to their ability to generate high-quality text. They have become deeply embedded in our daily lives, transforming how we interact with technology. Despite their impressive capabilities, it is not surprising that LLMs are not immune to safety and reliability concerns. Issues such as bias and harmful content, hallucinations, privacy risks, social engineering, and the generation of misleading or erroneous output continue to pose significant challenges <cit.>. Addressing these challenges has become a major focus of recent research, which explores novel attack methods, develops threat models, and creates defense strategies to mitigate these vulnerabilities.
Attackers now utilize LLMs to scale their methods, moving beyond hand-crafted samples to exploit vulnerabilities in the latent space for greater effectiveness <cit.>. Additionally, the transferability of attacks between open-source and closed-source models has become a significant concern, with attackers targeting the expanded attack surfaces created by enhanced model functionalities and integration.
Significant challenges remain in defending against these attacks. Safety-relevant features of LLMs highlight the persistent issues of model bias and toxicity, which continue to give rise to new jailbreaks and adversarial methods. Tackling these specific attacks often feels like a game of whack-a-mole, where each fix only temporarily mitigates the problem without offering a universal boost in safety and robustness. Moreover, overly aggressive defensive measures can lead to performance degradation, making it essential to strike a balance.
Given the rapid evolution of attack methods and the increasing sophistication of defense strategies, it is crucial to understand the current state of both. This paper aims to explore three primary research questions relevant to those already familiar with the field’s latest advancements:
* Where is the field currently?
As a survey, the primary goal is to highlight the key breakthroughs and significant findings that have defined the current state of the field. This includes summarizing major advancements, methodologies, and applications that have emerged.
* What are the open problems surrounding current attack and defense methods?
This question focuses on identifying unresolved issues and gaps in our understanding of both attack strategies and defense mechanisms. It aims to discuss the limitations, challenges, and areas where current methods fall short, thereby outlining the key open problems in the field.
* Relation with other surveys:
As this survey focuses on the most recent developments and cutting-edge research, mainly from 2023 and beyond, it may omit foundational aspects. For readers seeking a broader understanding or foundational knowledge in this area, we recommend consulting <cit.>, which provides a comprehensive overview of the basic concepts and previous work in the field.
While the research questions we address might be considered relatively moderate, our goal is to provide a thorough and current overview of the field. By doing so, we aim to offer a detailed summary of the present state, highlight key advancements, and identify ongoing challenges. We hope that our efforts will pave the way for future research and help the community navigate the evolving landscape of LLM safety and robustness.
The structure of this paper is as follows: First, we will explore recent vulnerabilities inherent in LLMs. This includes discussing both neural network-based vulnerabilities that LLMs inherit and the unique factors that make LLMs particularly susceptible to attacks. Understanding these vulnerabilities is crucial for setting the context for subsequent discussions on attack methods. Next, we will explore recent attack methods targeting LLMs. This section will cover the specific vulnerabilities these attacks exploit and how these methods represent improvements over past strategies. By linking attacks to the identified vulnerabilities, we will provide a comprehensive view of how threats have evolved and adapted. Finally, we will review recent defense strategies designed to counteract the discussed attacks. We will highlight the limitations of current defenses and propose future research directions aimed at enhancing the security and robustness of LLMs. This section will suggest ways to strengthen existing defenses and explore new approaches to addressing emerging threats.
§ VULNERABILITIES ANALYSIS
Understanding the vulnerabilities of LLMs is crucial for developing effective attack and defense mechanisms. This section outlines known vulnerabilities, incorporating recent studies to provide a foundational understanding for subsequent discussions.
§.§ Deep Neural Network Inherent Vulnerabilities
Deep neural networks (DNNs) are particularly susceptible to adversarial attacks due to several factors: non-robust and abstract features, complex decision boundaries, and data overfitting. Firstly, DNNs struggle with non-robust features, where small, imperceptible perturbations to the input can cause significant changes in the model’s output. Furthermore, these features can be highly abstract and diverse, lacking interpretability, which makes it difficult to detect and handle biased or harmful content generation. Additionally, DNNs create complex, non-linear decision boundaries in their feature space. These intricate boundaries can be exploited by adversaries who craft inputs that lie near the decision boundaries, causing misclassifications or undesired outputs. Moreover, DNNs can overfit to the training data, learning not only the underlying patterns but also the noise present in the training examples. This overfitting can make DNNs sensitive to adversarial attacks. Furthermore, overfitting leads to inadequate generalizations on unseen data, which may explain why some out-of-distribution adversarial examples could easily affect model behavior.
§.§ Alignment Mechanism Brittleness
§.§.§ Algorithmic Limitations
Alignment algorithms, such as reward-free Proximal Policy Optimization (PPO) <cit.> and Direct Preference Optimization (DPO) <cit.>, exhibit significant limitations in adapting to model changes <cit.>. As detailed by <cit.>, relying on deactivating specific activations rather than altering the model’s inner knowledge and capabilities can lead to fragile safety constraints in DPO. Furthermore, <cit.> demonstrates that even when safety-critical regions are frozen, fine-tuning attacks can circumvent safety mechanisms and exploit alternative pathways to breach model safety. This underscores the adverse effects of safety mechanism sparsity, which is attributed to algorithmic shortcomings.
§.§.§ Increased LLM Vulnerabilities from Fine-Tuning and Quantization
The absence of robust safety measures in fine-tuned and quantized models is a growing concern. Recent studies have shown that fine-tuning an initially aligned LLM—one that has established safety alignment through Reinforcement Learning with Human Feedback (RLHF)—can inadvertently weaken its safety mechanisms <cit.>. These studies emphasize that excessive focus on utility-oriented datasets during fine-tuning may divert the model's attention away from maintaining safety alignment, even if the datasets themselves are benign.
Research by <cit.> further explores how downstream tasks such as fine-tuning and quantization <cit.> affect the vulnerability of LLMs. The study indicates that both processes notably reduce the resilience of LLMs against jailbreak attacks. Specifically, fine-tuning can lead to increased susceptibility due to phenomena like catastrophic forgetting <cit.>, where the fine-tuning process alters the model's initial safety alignment and disrupts its prioritization of safety protocols. This phenomenon occurs because fine-tuning often adjusts the model's parameters in ways that can interfere with its previously established safety measures, making the model more vulnerable to adversarial inputs.
Additionally, quantization, which is used to reduce the model size and improve computational efficiency, may further exacerbate these vulnerabilities <cit.>. The reduction in model precision during quantization can affect the model's ability to handle subtle distinctions, potentially making it easier for adversaries to exploit weaknesses that were not apparent in the full-precision model.
Overall, the findings highlight the need for enhanced safety mechanisms that can withstand the effects of fine-tuning and quantization, ensuring that LLMs maintain their robustness and reliability even after these processes.
§.§.§ Susceptible to Attacks
Reinforcement Learning (RL) algorithms used in alignment, such as PPO, are vulnerable to backdoor attacks. Research has demonstrated that RL algorithms can be exploited with minimal effort to induce targeted or untargeted behaviors <cit.>. These attacks can manipulate the model’s behavior, either subtly or overtly, undermining its reliability and safety. The effectiveness of reward models employed in PPO can also be compromised if attackers exploit weaknesses in the reward design or if the reward signals are not well-calibrated. If the reward model is vulnerable or misaligned, it may fail to guide the model towards desired behaviors effectively, leading to performance issues and potential security risks <cit.>.
§.§ Gap Between Model Capacity and Alignment
Many attacks reveal the generalization gap in current adversarial training approaches. For instance, the attack method in <cit.> shows that refusal training in GPT-series models, including GPT-4, is vulnerable to simply reformulating a harmful request in the past tense. Unlike pre-training, which can leverage large amounts of diverse natural language data from the internet, alignment training requires carefully curated data that reflects human values and safety considerations. As a result, the alignment process may not keep up with the rapid advancements in model capacity, leading to vulnerabilities that can be exploited by jailbreaks <cit.>. The development of LLMs has led to a larger attack surface for prompt injection attacks, demonstrating the weaknesses of current safety training. This could be attributed to their ability to follow instructions, as larger models show better instruction-following capabilities <cit.>. Compared with larger models like GPT-3.5 and GPT-4, Vicuna is less responsive to instructions <cit.>. As pointed out by <cit.>, the capability gaps cause differences in prompt injection attack effectiveness. The brittleness explanation has been studied with safety-critical neurons forming a remarkably sparse structure in the model, as mentioned in <cit.>.
§.§ Intrinsic Conflict in the Objectives of LLMs
One potential explanation for the vulnerabilities observed in LLMs lies in the intrinsic conflict between their generation objectives and their instruction-following objectives. The generation objective of LLMs focuses on producing coherent, contextually relevant, and high-quality text that adheres to grammatical rules and reflects learned patterns from training data. Conversely, the instruction-following objective involves aligning the model’s outputs with ethical standards and societal norms through alignment training. This process integrates safety constraints to ensure outputs avoid harmful content and adhere to specified guidelines. However, balancing these objectives proves intricate, as stringent safety constraints can limit the model's ability to generate diverse and contextually appropriate responses. Such restrictions may lead to outputs that are overly cautious or fail to engage effectively with given contexts <cit.>. Moreover, LLMs are susceptible to manipulation, exemplified in scenarios like role-playing, where the model may produce outputs aligned with deceptive scenarios despite diverging from its safety training <cit.>. Addressing these challenges necessitates refining alignment strategies to better integrate safety constraints with the model’s generation capabilities, aiming to achieve outputs that are both ethically aligned and contextually relevant in practical applications of LLMs.
§.§ Supply Chain Vulnerabilities
Supply chain vulnerabilities in LLMs involve risks associated with third-party plugins or components that enhance the model’s functionality. Plugins from external developers or repositories may not undergo rigorous security testing or adhere to best practices, potentially introducing vulnerabilities such as code exploits, backdoors, or compatibility issues that could compromise the LLM’s integrity <cit.>.
§ ATTACKS
Before the advent of LLMs, the machine learning community was already grappling with a variety of safety challenges. Several attack methods, originally designed for traditional machine learning models (especially deep neural networks), have been adapted or found to be applicable to LLMs as well, such as adversarial samples. Additionally, some attacks are specific to the unique lifecycle stages of LLMs, such as alignment and instruction following. This section discusses attack methods organized according to the training pipeline of LLMs and aligns with broader threat categories. For instance, fine-tuning attacks primarily impact model integrity by manipulating model parameters during the training phase. Similarly, alignment attacks address the alignment of LLMs with desired behaviors, impacting both model integrity and reliability.
In examining specific attack methods, our goal is to highlight the contributions of each new method to the community, demonstrating how they address existing challenges and drive progress in the field. The key attack metrics to consider are attack success rate, attack effectiveness, and attack transferability.
§.§ Post-Training Attacks and Their Relevance to LLMs
Training LLMs from scratch is resource-intensive and costly, so attackers often focus on vulnerabilities during the post-training phase. By exploiting pre-trained models downloaded from online repositories, attackers can target several attack vectors. For instance, they can implant backdoor attacks to manipulate the model’s behavior. Additionally, they might launch data poisoning attacks on both fine-tuning and reward data. Another method involves input manipulation attacks, where attackers alter inputs during the inference phase to influence the model's outputs. We will now categorize these attacks based on their timing relative to the LLM training process: fine-tuning attacks and alignment attacks.
§.§.§ Fine-Tuning Phase Attacks
Fine-tuning attacks occur during the model fine-tuning phase, both on open-source models via weight editing and supervised fine-tuning <cit.>, and on closed-source models via data poisoning or malicious fine-tuning on APIs <cit.>. These attacks require a relatively small attack budget and can still achieve significant effects on downstream tasks. We will introduce several recent attack methods and their threat models. Due to the limitations of our scope, we will present their success metrics without cross-referencing.
<cit.> and <cit.> both introduce fine-tuning attack methods for injecting backdoors into LLMs. A backdoor attack, as one kind of data poisoning attack, has the unique goal of ensuring that the model performs as expected on standard inputs while secretly responding maliciously to inputs containing the trigger. <cit.> introduce Virtual Prompt Injection (VPI), which achieves behavior control of LLMs in specific scenarios by injecting a small number of poisoned samples into the instruction-tuning data. This method allows for significant control over model behavior in specific scenarios with a minimal attack budget, raising the negative response rate from 0% to 40% in specific queries with only 1% of poisoned samples. The low cost of this attack makes it more challenging for defenders to effectively filter out the abnormal data without thorough individual inspection.
<cit.> introduce a method called BadEdit, which injects backdoors into LLMs by directly editing the model parameters. It reframes the backdoor injection problem as a knowledge editing problem and incorporates new approaches to enable the model to learn the hidden trigger-target patterns with limited data instances and computing resources. Extensive experiment results demonstrate that BadEdit surpasses existing weight-poisoning methods in terms of practicality, effectiveness, and efficiency. BadEdit ensures that the model's performance is not significantly affected and is robust to defense methods such as fine-tuning and instruction-tuning.
Another emerging fine-tuning attack combines benign encoded datasets with fine-tuning. The covert malicious fine-tuning attack proposed by <cit.> trains GPT-4 to handle encoded harmful requests and responses while evading detection. It uses a dataset where each data point seems harmless, yet fine-tuning with this dataset leads GPT-4 to respond with encoded harmful content 99
In terms of effectiveness, all three methods achieve high attack success rates with low costs. While BadEdit demonstrates broader applicability across attack scenarios, its white-box nature requires access to internal model parameters, which may not always be feasible. Regarding defenses, all three methods challenge current mechanisms. BadEdit's direct manipulation of model parameters makes it harder to detect and defend against, whereas VPI, although potentially easier to detect, still poses significant defense challenges due to its subtle fine-tuning alterations. Covert malicious fine-tuning easily evades concurrent defense mechanisms. All methods exploit fine-tuning vulnerabilities, showing how adaptive attackers can severely undermine model safety in an evasive manner.
§.§.§ Alignment Attacks
Alignment attacks can be broadly categorized into two types. The first category is algorithmic attacks, which exploit vulnerabilities inherent in the alignment algorithms themselves. These attacks aim to undermine the integrity of the alignment process by directly targeting the algorithms' weaknesses. The second category is data poisoning attacks, which focus on corrupting the training data used in the alignment process. A significant portion of research in this area concentrates on reward hacking <cit.>, where adversaries manipulate the reward mechanisms to achieve undesired outcomes. By tampering with the data that shapes the model's behavior, these attacks can lead to misalignment and compromise the system's intended functionality.
Widely adopted alignment methods include reward-guided Proximal Policy Optimization (PPO) <cit.> and reward-free Direct Preference Optimization (DPO), with DPO considered a more efficient alternative to PPO. However, <cit.> conduct an empirical study revealing that both methods are vulnerable to backdoor and non-backdoor attacks, with DPO being more susceptible across a range of LLMs compared to PPO. Unlike PPO-based methods—which require at least 4% of the data to be poisoned to trigger harmful behavior—DPO can be compromised with as little as 0.5% of poisoned data. Furthermore, <cit.> perform a case study to investigate the underlying mechanisms of the DPO algorithm. They discovered that while DPO does not eliminate the generation of toxic outputs, it instead avoids regions that produce toxicity by learning an "offset" distributed across model layers. Based on these findings, they propose a method to reactivate the toxicity of aligned models. Both studies highlight the vulnerabilities in alignment mechanisms and the inadequacy of current defense strategies against these weaknesses.
Another branch of research identifies the reward model as a new attack surface, where data poisoning is particularly effective and stealthy against current defenses. <cit.> demonstrate that backdoor attacks can evade detection, causing the reward model to assign high scores to incorrect sentiment classes when a trigger appears, severely impacting the LLM’s performance on sentiment tasks trained with this poisoned reward model. This threat model is further examined in <cit.>, which shows that reward data poisoning can be highly effective, requiring less than 5% of the original dataset to cause significant damage. Additionally, <cit.> introduce a novel backdoor attack on LLMs aligned using Reinforcement Learning from Human Feedback (RLHF). This attack poisons both the reward model training stage and the DPO training to embed a "jailbreak backdoor" into the model. This backdoor includes a trigger word that functions like a universal sudo command, enabling harmful responses without the need for specific adversarial prompts. These studies indicate that current defense methods have not fully addressed this emerging attack surface, where poisonous data detection is ineffective against reward data poisoning.
§.§ Adversarial Attacks and Their Relevance to LLMs
Adversarial perturbations (attacks) leverage the vulnerabilities or weaknesses of machine learning models to cause them to behave in unintended or malicious ways at inference time. These attacks involve adding small, often imperceptible, changes to input data to fool a model into making incorrect predictions <cit.>. While this test-time attack method was originally discovered in image classification tasks <cit.>, it has been adapted to LLMs to manipulate outputs. In the context of LLMs, adversarial attacks often refer to jailbreaks and prompt injection attacks. These types of attacks are particularly relevant because they exploit the model's robustness to generate safe and appropriate text based on user-provided prompts.
§.§.§ Jailbreaks
Jailbreaks are designed to bypass the safety and alignment measures that have been put in place to prevent LLMs from generating harmful or inappropriate content. Initially, early instances targeting models like ChatGPT revealed significant challenges, where manually crafted adversarial examples led to outputs containing expressions of racism and illegal advice <cit.>.These early attacks highlighted crucial vulnerabilities, prompting efforts to enhance model safety against such naive methods <cit.>.
However, the landscape of jailbreak techniques has evolved significantly. Recent studies have shown that despite improvements in safety measures, jailbreaks are far from being eliminated <cit.>. For example, simple modifications such as tense changes have been found sufficient to bypass the safeguards of advanced models like GPT-4o <cit.>. This indicates that even minor alterations in input can undermine sophisticated defenses.
The sophistication of attack methods has further advanced with the use of automated techniques. Researchers have developed optimization strategies to generate universal adversarial examples that effectively bypass safety constraints across various models, including LLaMA2-7b, Vicuna-7b, and both closed-source GPT-3 and LLaMA <cit.>. Techniques leveraging tree-of-thought reasoning <cit.> automate prompt generation, enhancing the potency of attacks <cit.>. Additionally, adversarial prompt templates have been used to maximize target log probabilities, as demonstrated in recent studies <cit.>.
In response to these evolving threats, researchers have employed red-teaming strategies, scaling jailbreak attempts using models like ChatGPT <cit.>. They have also proposed training adversarial prompt generators with examples of unsafe outputs to better anticipate and counteract potential threats <cit.>. Furthermore, multi-step strategies exploiting vulnerabilities, such as privacy leakage in ChatGPT <cit.>, underscore the increasing sophistication of attacks as LLM capabilities grow <cit.>.
These developments underscore a critical disparity: while LLM capabilities advance rapidly, the techniques for aligning these models with ethical and safety standards are struggling to keep pace. This growing gap has increased the susceptibility of LLMs to adversarial manipulation. As LLMs become more powerful and accumulate more knowledge, their expanded capabilities create a larger attack surface for those seeking to exploit them for harmful or biased outputs.
§.§.§ Prompt Injection Attacks
Prompt Injection Attacks represent a significant form of adversarial attacks where the prompt or input provided to LLMs is manipulated to induce unintended or harmful responses. This can involve injecting malicious code, spreading misinformation, or compelling the model to perform unintended attacks. Such attacks typically involve crafting deceptive inputs designed to trick the LLM into producing specific outputs. For instance, a prompt like "Tell me a joke about [sensitive topic]" could be used to elicit offensive or inappropriate content from the model.
In contrast to jailbreaks, which tricking the model into bypassing its safety and usage restrictions, prompt injection attacks focus on manipulating the input itself, leading to unintended or harmful outputs. This distinction highlights how prompt injection targets the interaction layer between users and the model rather than exploiting intrinsic model weaknesses.
Initially, users discovered that LLMs are overly sensitive to instructions embedded in user inputs <cit.>. For example, a request to translate a sentence could sometimes lead to unintended responses, like instructions instead of a translation. This sensitivity inspired attackers to develop malicious instructions within user inputs <cit.>. Early prompt injection attacks were categorized as direct,' exploiting LLMs' susceptibility to manipulative prompts <cit.>. Conversely, indirect' prompt injection attacks involve malicious content passed through external sources, such as tool calls <cit.>, which further broaden the attack surface.
As LLMs evolved, so did the complexity of prompt injection attacks. Researchers noted that different LLMs employ varying mechanisms like tokenizers and alignment strategies, impacting the effectiveness of direct prompt injection attacks. Studies show that many LLMs are resilient to direct attacks <cit.>, prompting a shift towards indirect attack methods. Recent advancements include optimization and automation strategies that enhance the efficacy of indirect attacks. For instance, <cit.> introduces a method using gradient information to generate highly effective, universal injection data. This approach, based on the assumption that LLMs access external data <cit.>, demonstrates its broad applicability across different scenarios.
Additionally, research has highlighted the importance of distinguishing malicious prompts from legitimate instructions. Techniques proposed by <cit.> involve crafting payloads with separators to induce context separation, ensuring that malicious prompts lead to the desired outputs. This evolving landscape underscores the need for continuous improvements in prompt handling and model safety to counter increasingly sophisticated prompt injection attacks.
§.§ Data Privacy Attacks against LLMs
LLMs have revolutionized natural language processing tasks but are susceptible to various inference attacks and extraction attacks during deployment. These attacks exploit vulnerabilities in model outputs and operational processes, compromising user privacy and confidentiality. Inference attacks focus on inferring private or sensitive information about the data used to train a model, while extraction attacks involve querying a model to directly extract or reconstruct sensitive information that the model has learned during its training.
§.§.§ Inference Attacks
Membership inference attacks analyze the model behavior to infer whether a data record was used for training. The process relies on the fact that model gives training data a higher score than non-training data. Hence, the important part is to accurately define this score function.
Early methods feed target data to a learned reference model to regularize scores <cit.>. However, training such reference model is computational expensive and reliant on knowledge of the training data distribution.
To eliminates the need for prior knowledge of training data distribution and computational intensive training, <cit.> propose the Neighborhood Attack method to generate synthetic neighbors for a given sample and compare their loss difference under the target model to determine whether the given sample was presented in the training data or not. This method is highly effective than attacks that have perfect knowledge of the training data distribution. Another approach is proposed by <cit.>, providing a efficient way to perform membership inference attacks using stochastic noise in the embedding space. Notably, this approach eliminates the need for prior knowledge of the training data distribution and the computationally intensive training of additional shadow models. Hence, its more efficient and general.
Instead of training reference models, membership inference attacks could also use the model's outputs (such as prediction probabilities or loss values) to directly infer the membership of a sample. Besides saving computational power with reference training, reference-free methods could also formulate their threat model without overfitting assumption, as discussed in <cit.>, which reduce false-positive rate in practical scenario and show high attack effectiveness. This method is based on detecting memorization in LLMs and uses a self-prompt reference model to eliminate the need for access to a reference dataset.
Attribute inference attacks allow adversaries to leverage indirect information revealed through a model's predictions, responses, or patterns of behavior to deduce confidential attributes, potentially compromising the privacy of individuals represented in the training data.
Even though LLMs are constrained by privacy differential, and private data filtering, they are still suffer from new threat models of attribute inference attacks <cit.>. The scenario discussed in <cit.>, where adversaries use LLMs to analyze online comments to infer user attributes such as location, age, gender, and other sensitive information, illustrates a form of attribute inference attack. In this case, the adversaries exploit the model's ability to process and generate text to uncover confidential attributes that are not explicitly provided but can be deduced from the content and patterns observed in user-generated data. The authors also propose using LLMs to positively induce user answers to extract enough attributes to re-identify individuals. Both methods exemplify more sophisticated approaches to executing attribute inference attacks.
§.§.§ Extraction Attacks
Recent by <cit.> highlights instances of data memorization and extraction in LLMs. <cit.> demonstrates that GPT-2 can memorize specific training data, which could be later extracted by malicious actors. This phenomenon of memorization has been corroborated in subsequent studies<cit.>, indicating the persistence and relevance of this security concern. Moreover, the paper by <cit.> investigates the phenomenon of memorization in LLMs in-depth. It proposes predictive strategies for anticipating which sequences LLMs are likely to memorize during their training process.
Additionally, the emergence of the Special Characters Attack(SCA) introduces a novel method of extracting training data from LLMs. SCA leverages LLMs' tendency to memorize the co-occurrence between special characters and raw texts during training, exploiting this memorization to trigger data leakage. The effectiveness of SCA has been empirically validated against state-of-the-art LLMs, demonstrating its capability ot extract diverse data tyeps, including code repositories, web pages, and personally identifiable information <cit.>.
In summary, the potential risks associated with data privacy attacks against LLMs do pose significant challenges that could impede their widespread and safe deployment in daily applications. By understanding the current landscape of data privacy attacks against LLMs, stakeholders can better appreciate the importance of robust security measures and ethical considerations in deploying these powerful AI technologies.
§.§ Energy-Latency Attacks and Potential Threats against LLMs
Energy-latency attacks target the computational resources and operational efficiency of LLMs, aiming to exploit vulnerabilities in model performance by increasing computational load and inducing latency in responses. These attacks pose significant challenges to the practical deployment and performance of LLMs.
Attackers exploit LLMs by crafting inputs that prompt the model to generate excessively long responses. This strategy maximizes computational load and extends response times, effectively exhausting computational resources. Such attacks can lead to increased delays and inefficiencies, disrupting the smooth operation of LLM systems. Another tactic involves using inputs designed to trigger resource-intensive computations or deep network activations. By causing the model to perform complex and demanding computations, attackers aim to degrade performance, disrupt service availability, or compromise the quality of the model’s outputs.
The energy-latency attacks are originated in a broader area of neural networks and their inference processes. The NMTSloth methodology, as discussed in <cit.>, introduces a gradient-guided approach to detect efficiency degradation in Neural Machine Translation (NMT) systems. By delaying the appearance of the end-of-sequence (EOS) token through subtle perturbations, NMTSloth demonstrates how altering the output probability distribution can escalate computational demands. Another empirical study <cit.> leverages the adaptive nature of neural networks by introducing subtle perturbations during inference, significantly increasing the model's inference time.
LLMs, akin to other neural networks, are vulnerable to energy-latency attacks. Methods explored in the aforementioned paper can be similarly adapted for LLMs to heighten computational requirements and prolong response times. According to our research, we have identified that LLMs (LLaMa-series) tend to engage in excessive analysis in some trigger scenario'. For example, when the input data contains numerical values, using the instruction "let's think step-by-step" prompts the model to prioritize mathematical computations, even if the instruction itself is not directly related to solving math problems. This tendency can lead the model to engage in unnecessary and detailed mathematical analysis, which may not align with the user's intended query or task.
Although energy-latency attacks have not been extensively explored in the context of LLMs, addressing their potential threats is crucial for ensuring the efficient and reliable operation of these models. While building efficient LLMs has been the focus of extensive research <cit.>, understanding and mitigating the risks posed by energy-latency attacks remains an important area for future investigation. Enhancing our ability to counteract these attacks will contribute to the development of more resilient and effective LLM systems.
§ DEFENSE
This section explores advanced defense strategies aimed at enhancing the robustness and safety of LLMs. We categorize them into three main subsections: robustness enhancements, post-safety alignment, and model merge <cit.> techniques. Robustness enhancements encompass proactive measures to strengthen LLMs against adversarial inputs, data biases, and other potential weaknesses. These include Red Teaming, Adversarial Training, Safety Fine Tuning, Post-Safety Alignment, and Model Merging. Post-safety alignment strategies focus on ensuring the outputs of LLMs align with the ethical standards and societal expectations, addressing issues such as fairness, transparency, and accountability. Model merge techniques involve integrating multiple models or methodologies to harness their combined strengths, enhancing overall performance, reliability, and adaptability of LLMs. By comprehensively reviewing these defense strategies, this section aims to provide insights into cutting-edge approaches for defending LLMs, ensuring their secure and effective deployment in real-world applications. Understanding and advancing these defenses are crucial steps toward building trustworthiness and resilience in LLMs, thereby facilitating their responsible integration into various domains of modern society.
§.§ Red Team Defense
Another effective way to discover potential risks in LLMs deployment is to use the Red Team Defense. The Red Team Defense methodology is centered around simulating real-world attack scenarios to uncover potential vulnerabilities and weaknesses in LLMs. The outcomes are used to improve security policies, procedures, and technical defenses. The process includes the following steps:
* attack scenario simulation: researchers begin by simulating real-world attack scenarios, which may include generating abusive language, leaking private information, etc.
* test case generation: various methods are employed to generate test cases, such as using another LLM to generate test cases or employing a classifier to detect whether test cases could lead to harmful outputs from the LLM.
* attack detection: detect whether the model is susceptible to attacks, such as adversarial attacks, application security, and human factors.
* model improvement: the fourth step in Red Team Defense is to use the findings from the previous steps to improve the LLM's security. This can involve updating security policies, procedures, and technical defenses to address any identified vulnerabilities. The goal is to make the LLM more resilient to attacks and reduce the risk of successful exploitation.
Pain Points: Red teaming can be resource-intensive and requires skilled personnel to effectively mimic sophisticated attack strategies <cit.>. The development of red teaming methodologies is still in its early stages, resulting in a lack of comprehensive statistical data on its effectiveness and outcomes.
Recent advancements in leveraging language models for red teaming have introduced automated approaches to simulate test cases and then employ a classifier to detect whether these test cases could lead to harmful outputs from the LLM <cit.>. The shift towards automation brings several benefits: firstly, it increases test case diversity and difficulty; secondly, it enables scalable testing across a wide range of scenarios and environments.
The work proposed by <cit.> aimed to build a more efficient AI-assistant interface to collect scale red team data for further analysis. They also studied the scalability of different sizes and types of LLMs under read team attacks, pointing out that rejection sampling and RLHF could build a stronger defense against various attacks.
Besides security and defense purpose, red teaming could help boost LLM performance in different areas as well. In the paper <cit.>, the authors use red teaming techniques to simulate different types of mathematical problems and puzzles, and then evaluate the performance of LLMs in solving them. While the paper's focus on mathematical tasks is narrow, its methodology and findings are valuable for understanding the broader impact of red teaming techniques on LLM performance.
Future Work: Looking ahead, future research directions could explore leveraging LLMs to autonomously generate diverse test cases, detect failure modes comprehensively, and assist in the development of integrated attack scenarios that span multiple domains rather than being task-specific. As encouraged by <cit.>, there is potential in developing white-box red teaming methods where the target LLM itself participates in the red teaming process, providing deeper insights into its own vulnerabilities and strengths.
§.§ Adversarial Training
Adversarial training is a fundamental technique aimed at enhancing the robustness of LLMs against adversarial attacks and inherent noise during application. Typically, traditional adversarial training involves perturbing input data to create adversarial examples that exploit vulnerabilities in the model. Mathematically, the objective of adversarial training can be formulated as follows:
min_θ𝔼_(x,y) ∼𝒟[ max_δ∈𝒮ℒ(f_θ(x + δ), y) ],
where:
* θ represents the parameters of the model f.
* (x, y) are samples drawn from the distribution 𝒟, where x is the input and y is the corresponding label.
* δ denotes the perturbation applied to x, constrained within the set 𝒮.
* ℒ denotes the loss function used for training.
* f_θ(x + δ) is the model's prediction when the input x is perturbed by δ.
Pain Point: Given the vast input space, comprehensively identifying failure modes in LLMs is challenging and resource-intensive. Besides, defenders often focus on robustness-related failure modes, such as those involving Lp-norm attacks. However, attackers may employ more subtle methods, including Trojans and jailbreaks, which are harder to detect. And adversarial training often causes a trade-off between robustness and performance on clean data <cit.>. Furthermore, LLMs' responses to adversarial training are less effective than expected. They might memorize adversarial samples used in training rather than developing generalized defenses. And they struggle to modify pre-train gained knowledge <cit.>
Recent studies have introduced new approaches to address these challenges. <cit.> find that adversarial pre-training using ALUM (Adversarial Learning with Unlabeled Model) leads to improvements in both generalization and robustness across a range of natural language processing tasks. The use of virtual adversarial training objectives <cit.> allows ALUM to smooth embedding space of model and balance standard error with robust error effectively. This capability suggests a promising direction for adversarial training in LLMs, integrating robust defense strategies without compromising model performance. To further address the computational inefficiencies of traditional adversarial training methods, <cit.> introduces two novel algorithms that perform adversarial attacks within the continuous embedding space of LLMs. These algorithms—CAT (Continuous Adversarial Training) and CAPO (Continuous Adversarial Perturbation Optimization)—significantly enhance model robustness against discrete adversarial attacks while maintaining utility. By operating in the continuous embedding space, these algorithms reduce computational overhead and provide scalable solutions for adversarial training. In order to improve the generalization of adversarial training, <cit.> employ latent adversarial training instead of generating adversarial examples aimed at specific failure modes. Their primary objective is to enhance the model's robustness against potential shifts in data distribution that may occur between the model's development and deployment phases in real-world applications. These shifts could include the introduction of Trojans, jailbreaks, and other unforeseen challenges.
Future Work: Building on the insights from <cit.>, future research could explore the implementation of latent adversarial training as an alternative to traditional adversarial training during the fine-tuning stage. Additionally, incorporating adversarial training into the pre-training phase of model development could further enhance robustness against attacks. These approaches aim to improve the model’s resilience by addressing vulnerabilities early in the training process and better equipping the model to handle adversarial inputs.
§.§ Safety Fine-Tuning
Fine-tuning has gained great popularity among end-users as a core technique for customizing pre-trained LLMs to various downstream tasks.
However, recent research has revealed potential safety risks associated with exercising this privilege.
<cit.> find that the safety alignment of LLMs can be compromised by fine-tuning with only a few adversarially designed training examples. In addition, they further suggest that even without malicious intent, simply fine-tuning with benign and commonly used datasets may also inadvertently degrade the safety alignment of LLMs.
To address the vulnerabilities introduced by fine-tuning, a natural solution one would expect is to incorporate safety-related examples during the fine-tuning stage.
<cit.> and <cit.> prove the effectiveness of this approach. And they suggest that including a small number of safety examples in the fine-tuning process significantly enhances the safety of LLMs while not degrading their usefulness. However, adding excessive safety samples may lead to LLMs rejecting safe prompts that superficially resemble unsafe ones. <cit.> employ a similar practice to improve the robustness of LLMs against malicious queries when processing long text. Another noteworthy work by <cit.> uncovers the crucial role of the prompt templates in preserving safety alignment. The proposed "Pure Tuning, Safe Testing" strategy aims to maintain the model's safety constraints while enhancing its performance by employing carefully designed prompts. This dual approach helps ensure that the fine-tuning process does not compromise the model’s robustness and safety features.
Furthermore, to address the threat of extraction attacks, <cit.> propose a knowledge sanitization method that fine-tunes LLMs to generate innocuous responses such as "I don't know" when encountering sensitive data. This approach not only safeguards against the extraction of sensitive information but also maintains the overall performance of LLMs in various tasks. Additionally, <cit.> offer a defense mechanism that effectively prevents LLMs from being maliciously fine-tuned for harmful purposes. This mechanism works by removing information about harmful representations such that it is difficult to recover them during fine-tuning. More importantly, it is able to generalize across different subsets of harm that have not been seen during the defense process, substantially enhancing the robustness of LLMs to harmful fine-tuning attacks.
Pain Point: There is a well-known trade-off in enhancing LLM’s instruction-following capabilities while ensuring they remain safe and reliable.
On the other hand, fine-tuning itself can be used both to enhance the safety of LLMs by adding safety-related examples, and to attack LLMs by introducing adversarial examples. The mechanism proposed by <cit.>, while seemingly promising, is limited to defending against supervised fine-tuning attacks in LLMs.
Also, it requires paired safe and unsafe examples, which makes data collection more expensive and complex.
Future Work: Future work on safety fine-tuning perhaps should strive to achieve a win-win situation for both the safety and utility of LLMs.
At the same time, future research should invest in stronger attack settings to emulate worst-case attacks during the fine-tuning process and investigate different types of harm, based on which more effective and comprehensive defense mechanisms need to be developed.
§.§ Post-Safety Alignment
Beyond the regular safety alignment process, post-safety alignment has emerged as a secondary safeguard for LLMs against various potential vulnerabilities. This ongoing process is concerned with seeking solutions from both outside and inside the backbone of LLMs to keep them from generating undesirable behaviors <cit.>. Accordingly, relevant defense techniques have been developed to mitigate a majority of the safety risks posed by data privacy attacks, prompt injection attacks, and jailbreak attacks.
A significant branch of post-safety alignment techniques is machine unlearning <cit.>. This paradigm involves selectively forgetting or erasing undesirable knowledge in LLMs, e.g., copyrighted and user privacy content, expecting to mitigate risks associated with model outputs that are potentially influenced by biased or sensitive information. Under this principle, <cit.> have pioneered a work that uses only negative examples to unlearn LLMs, which is done by applying gradient ascent on the loss function of those undesirable samples. Subsequent studies <cit.> have improved this approach by emphasizing the need to retain general knowledge while unlearning harmful knowledge. Instead of tuning all model parameters for unlearning, <cit.> introduce an efficient framework to eliminate the effect of unwanted data by designing separate lightweight unlearning layers. These layers learn to forget different sets of data under the guidance of a selective teacher-student objective. Moreover, <cit.> present a two-stage unlearning framework based on the concept of first isolating and then removing harmful knowledge in model parameters. This framework has been shown to effectively balance the trade-off between removing undesirable information and preserving utility.
Another line of research seeks to prevent LLMs from generating harmful content or resisting jailbreak attacks via steering the decoding process of LLMs. Prompt engineering <cit.>, as a way to indirectly control the decoding of LLMs, has been preferred due to its ease of operation. To illustrate, <cit.> present a simple yet effective technique to defend against various jailbreak attacks by encapsulating the user's query in a system prompt that reminds ChatGPT to respond responsibly. Further, <cit.> explore a strategy called "In-Context Defense" that teaches the LLM to resist jailbreaking by imitating a few examples of refusing harmful queries. On the other hand, directly manipulating the probability of generated tokens provides a more precise manner to bootstrap the LLM's decoding process. For instance, <cit.> propose a straightforward method based on the principle of contrastive decoding, which aims to boost the probability of desired safe outputs by suppressing undesired outputs. Besides, <cit.> develop a safety-aware decoding strategy to protect LLMs from jailbreak attacks. This strategy identifies safety disclaimers and amplifies their token probabilities while attenuating the probabilities of token sequences aligned with jailbreak attacks' objectives.
Pain Point: Although unlearning-based methods succeed in enhancing the safety of the backbone model, they also significantly exacerbate the issue of over-safety. Additionally, utilizing general data for distillation not only incurs extra training costs but also has limited effectiveness in maintaining utility. On the other hand, decoding-based approaches do not address the core problem of harmful output from LLMs, since the harmful knowledge remains within the model. Additionally, they also introduce extra costs during inference. As a result, safety enhancement made by existing approaches typically comes at the cost of significantly increasing over-safety and compromising utility, with the drawbacks outweighing the benefits.
Future Work: As discussed above, future research might have to consider how to effectively achieve the three objectives simultaneously, i.e., safety enhancement, over-safety mitigation, and utility preservation. A recent work by <cit.> has made a beneficial exploration toward this direction, but it is limited to small-scale LLMs (no more than 13B) and focuses on general safety issues. Therefore, we urge that more efforts should be devoted to developing post-safety alignment techniques that can alleviate over-safety and maintain utility, as well as the comprehensive evaluation of these techniques in a wider range of domains.
§.§ Model Merge
The methodology of model merging originates from the observation that different fine-tuned models initialized from the same pre-trained backbone model may share a part of the optimization trajectory and diverge on only a fraction of the model parameters oriented to different learning tasks. The parameters of the finely tuned models adapted to different tasks can be hence merged via arithmetic averaging to reach better generalization over out-of-domain input and deliver multi-task learning at the same time. In fact, the idea of merging model parameters to mitigate conflicts between different tasks has been employed and proved to be effective in Federated Learning and continual learning methods.
Following the spirit, <cit.> adopt the model merging methods to achieve a balance between unlearning the unsafe response, yet avoiding over-defensiveness as much as possible. They first assume to have a collection of input questions and well-tagged harmful response texts. They use gradient ascent using the harmful data collection over the backbone model to first compute the model parameter update of the backbone model, dedicated to preventing harmful answers. Furthermore, <cit.> choose to perform gradient descent over the unaligned model using the harmful input-answer pairs to compute the model parameter update to mitigate over-defense. The two patches of model updates are then integrated using the model merging technique to derive a safety-aligned model balancing the safety alignment and utility.
This line of model merging techniques <cit.>, though operationally simple, still lack deep theoretical investigation regarding two perspectives. First of all, it is unclear how the adversarially updated model parameters with the unlearning objective and tagged harmful responses are associated with the embeddings with safe response. It is possible that adversarial training with a limited set of harmful response texts is prone to overfitting and makes the model still vulnerable to further new jailbreaking prompts. Second, it is difficult to control the over-defensiveness of the merged model parameters. Merging the model parameters does not provide an explicit explanation of how the overdefensive response may be prevented.
§ DISCUSSION
The field of large language models (LLMs) is indeed dynamic and rapidly evolving, with continuous advancements in both attacks and defenses. As attackers uncover new vulnerabilities and develop sophisticated exploits, defenders are required to respond with equally innovative countermeasures. This ongoing interaction between attackers and defenders is crucial for developing safe and reliable LLM systems.
Measuring attack effectiveness on LLMs is often more straightforward than evaluating the robustness of defense. Nevertheless, the effectiveness of defenses must be scrutinized carefully to avoid a false sense of security. Guaranteeing the defense strength is necessary to ensure that the defenses are not only theoretically sound but also practically effective against real-world threats in LLM systems.
Theoretical analysis of LLMs has indeed struggled to keep pace with their rapid growth and deployment. The sheer complexity and scale of these models pose significant challenges for theoretical frameworks that aim to understand and predict their behavior. While theoretical analysis provides valuable insights, it often falls short in capturing the nuanced performance of LLMs in practical applications.
Recent research has made strides by focusing on scaled case studies and examining specific features and inner mechanisms of LLMs <cit.>. These studies enhance our understanding of model behavior and contribute to improving model interpretability. For instance, the work inspired by <cit.> emphasizes the need to explore fundamental questions about LLMs, such as the possibility of completely eliminating non-robust features.
In summary, advancing our understanding of LLMs requires a concerted effort to address both theoretical and practical challenges. Researchers are encouraged to investigate fundamental questions and refine both attack and defense strategies to keep pace with the rapid advancements in LLM technology.
tmlr
|
http://arxiv.org/abs/2409.02188v1 | 20240903180032 | Stochastic Dark Matter: Covariant Brownian Motion from Planckian Discreteness | [
"Emma Albertini",
"Arad Nasiri",
"Emanuele Panella"
] | gr-qc | [
"gr-qc",
"astro-ph.CO",
"hep-th"
] |
=1
red
positioning
intersections
fadings
arrows.meta
arrows
decorations.pathmorphing
decorations.pathreplacing,decorations.markings
backgrounds,automata
Γ
Ṙ
R̈
ϕ̇
ϕ̈
P̃
p̃
#1[#1]
|
http://arxiv.org/abs/2409.03312v1 | 20240905073838 | Quantum Algorithm For Testing Convexity of Function | [
"Nhat A. Nghiem",
"Tzu-Chieh Wei"
] | quant-ph | [
"quant-ph"
] |
[email protected]
Department of Physics and Astronomy, State University of New York at Stony Brook, Stony Brook, NY 11794-3800, USA
C. N. Yang Institute for Theoretical Physics, State University of New York at Stony Brook, Stony Brook, NY 11794-3840, USA
Department of Physics and Astronomy, State University of New York at Stony Brook, Stony Brook, NY 11794-3800, USA
C. N. Yang Institute for Theoretical Physics, State University of New York at Stony Brook, Stony Brook, NY 11794-3840, USA
§ ABSTRACT
Functions are a fundamental object in mathematics, with countless applications to different fields, and are usually classified based on certain properties, given their domains and images. An important property of a real-valued function is its convexity, which plays a very crucial role in many areas, such as thermodynamics and geometry. Motivated by recent advances in quantum computation as well as the quest for quantum advantage, we give a quantum algorithm for testing convexity of polynomial functions, which appears frequently in multiple contexts, such as optimization, machine learning, physics, etc. We show that quantum computers can reveal the convexity property superpolynomially faster than classical computers with respect to number of variables. As a corollary, we provide a significant improvement and extension on quantum Newton's method constructed in earlier work of Rebentrost et al [New J. Phys. 21 073023 (2019)]. We further discuss our algorithm in a broader context, such as potential application in the study of geometric structure of manifold, testing training landscape of variational quantum algorithm and also gradient descent/Newton's method for optimization.
Quantum Algorithm For Testing Convexity of Function
Tzu-Chieh Wei
Received 16 July 2024; accepted 04 September 2024
=====================================================
§ INTRODUCTION
Quantum computers hold great promise to solve difficult computational problems that lie beyond the reach of classical computers. The underlying power of quantum computers is due to two intrinsic properties of quantum mechanics: superposition and entanglement. Tremendous efforts have been made to exploit the potential of quantum computers in various contexts. Some early pioneering works <cit.> showed that quantum computers could probe properties of a blackbox function with a single query usage. The breakthrough work of Shor <cit.> showed that quantum computers can factorize a given integer number superpolynomially faster than their classical counterpart, which has been recently improved in <cit.>. Grover <cit.> later showed that a quadratic speedup is achieved for unstructured database search. Further quantum speedup has been showcased in a wide array of problems, such as simulating quantum systems <cit.>, solving linear systems <cit.>, supervised and unsupervised learning <cit.>, principle component analysis <cit.>, topological data analysis <cit.>, learning from experiments <cit.>, etc. As a whole, these developments have ignited an exciting view towards the application of quantum computers, as well as triggering efforts in experimental realization of fault-tolerance devices as to bring quantum computation steps closer to reality.
Given these successes, the question of the repertoire of tasks quantum computers can excel in is still worthy of pursuing. An important model that has been central to investigating quantum advantage is the so-called blackbox model. In such a model, we have access to the blackbox with an unknown structure, that computes some Boolean functions, e.g., accepting some input strings and then outputting a Boolean variable (0 or 1). As the structure is unknown to us, the goal is to extract properties of such functions with minimum resource, e.g., the number of access to the corresponding blackbox. In fact, some early works <cit.> showed that quantum computers can reveal hidden properties of given functions using a minimum number of queries much smaller compared to classical counterparts. More relevant to our work, Jordan <cit.> considered the numerical gradient estimation problem given the blackbox that computed some function explicitly and showed that a single query is sufficient to reveal the gradient at a given point, up to some desired accuracy. Inspired by such a line of pursuit, we consider the potential advantage of quantum computers in the topic of functional analysis, where our object of interest is a (multivariate) function, which is a very basic object in mathematics. The intriguing point is that a function can possess very rich analytical properties, and thus the problem is very appealing to explore from a computational perspective. A particular property that we focus on in our work is the convexity, which captures the shape of the function in some domain (see Fig. <ref> and <ref> below).
A convex function has a peculiar feature that a local minimum is also a global minimum (see Fig. <ref> for simple illustration). Additionally, if a function is convex in some domain, then a minimum is easily obtained, e.g., by the gradient descent method, which makes it very useful in optimization areas.
In this work, we aim to tackle the challenge of testing the convexity of some polynomial function. We begin with a simple case, which is a homogeneous polynomial of even degree (to be defined later) and subsequently, building upon such homogeneous polynomial, we generalize the construction to arbitrary polynomial type. The kind of homogeneous polynomial of even degree was also considered in <cit.>, where the authors proposed the quantum gradient descent and quantum Newton's method for finding local minima. In fact, our work is quite inspired by them, as the structure of a given function makes it simpler to compute two important quantities: the gradient and Hessian. Building upon <cit.>, we show that the ability to obtain the analytical form of Hessian translates into the ability to test convexity by examining the sign of Hessian's spectrum, and that quantum computers can achieve that goal with superpolynomially less cost than their classical counterpart in relative to the number of variables included in the given polynomial function. Along the way, as a corollary, we show how to construct the Hessian more efficiently than the original method in <cit.>, thus providing a significant improvement on quantum Newton's method that also appeared in <cit.>. First, our improved quantum Newton's method work on arbitrary polynomial, instead of homogeneous one of even degree. Second, our method reduces the complexity dependence on (inverse of) error tolerance from polynomial to polylogarithmic. The complexity dependence on the degree of given function is also reduced by a power of 4. Additionally, at each step of the Newton's method, the number of copies of (block encoding of) temporal solution required in our work is polynomially less than (by a power of 5) that of <cit.>. We further mention three potential subjects that might be useful for our framework: differential geometry, variational quantum algorithm and gradient descent as well as Newton's method for finding minima of objective function.
The structure of the paper is as follows. First, in Section <ref>, we provide an overview of our objectives, with the corresponding assumptions and criteria for convexity in Section <ref>. Our main algorithm is outlined in details in Section <ref>. Remarks and further discussions are given in Section <ref>, where we show that our method can be generalized to an arbitrary polynomial and showcase some potential applications of our method in the context of differential geometry and variational quantum algorithm, as well as improving the quantum Newton's method originally introduced in <cit.>. Appendix <ref> contains the necessary recipes that underlie our work. Appendix <ref> provides a proof of Lemma <ref>.
§ OVERVIEW
§.§ Overview of the Problem and Assumptions
We consider a multivariate function f: ℝ^n ⟶ℝ which is a homogeneous polynomial of an even degree 2p (p ∈ℤ). Let x = (x_1, x_2, ..., x_n); as shown in <cit.>, such a polynomial admits a tensor algebraic decomposition:
f(x) = 1/2x^T ⊗⋯⊗x^T A x⊗⋯⊗x,
where A is an s-sparse matrix of dimension n^p × n^p.
Throughout this work, we assume that the oracle's access to A is given in a similar fashion to that in <cit.>. The knowledge of the polynomial is obtained by the oracle's access to matrix A, which can be formally decomposed as:
A = ∑_α = 1^K A_1^α⊗⋯⊗ A_p^α,
where each A_i^α is a matrix of size n× n (i= 1,2,...,p) and K is some natural number. While the above general decomposition is not required explicitly in our work (i.e., we do not assume the oracle has direct access to the submatrices A_i's), the expression is helpful as the important quantities, such as the gradient and Hessian, admit analytical forms. More specifically, the gradient can be written as:
∇ f(x) = D(x) ,
where D is specified as
D() = ∑_α=1^K∑_j=1^p ( ∏_i=1, i≠ j^p ^T A_i^α) A_j^α,
and the Hessian matrix of function f can be written as:
() = 2 ∑_α =1^K ∑_j,k = 1, j≠ k^p ∏_i=1, i≠ j,k^p (^T A_i^α) A_k^α^T A^α_j + D().
The special formulation above provides us an alternative way to compute the Hessian and gradient at a given point as follows. Denote ^T as ρ_x, then we can write the gradient as
D() = _1,2,...,p-1( M_D ( (^T)^⊗ p-1⊗_n ) ),
where
M_D = ∑^p_m=1 M_m,
and each M_m (m=1,..,p) can be obtained from A via permutation of entries as explained below. Recall that
A = ∑_α = 1^K A_1^α⊗⋯⊗ A_m^α⊗⋯⊗ A_p^α,
and M_m is defined similarly to A except having the p-th matrix swapped with the m-th matrix, i.e.,
M_m = ∑_α = 1^K A_1^α⊗⋯⊗ A_p^α⊗⋯⊗ A_m^α.
In a similar fashion, the Hessian can be expressed alternatively as:
() = _1,2,...,p-1( (M_H + M_D) ( (^T) ^⊗ p-1⊗_n ) ),
where
M_H = 2 ∑_j ≠ k^p Θ_jk,
and each Θ_jk (for j,k = 1,...,p) can be obtained from A via permutation of matrices, e.g., the j,k-th matrices are swapped to p-1 and p-th ones, respectively,
Θ_jk = ∑_α = 1^K A_1^α⊗⋯⊗ A_p-1^α⊗⋯⊗ A_p^α⊗⋯⊗ A_j^α⊗ A_k^α.
Therefore, the oracle access to A allows us to obtain entries of all Θ_jk respectively.
We remark that in the above, we discuss homogeneous polynomials of even degree as a primary target due to its simplification of gradient and Hessian formulation. As mentioned in <cit.>, inhomogeneity can be inserted. For example, given a homogeneous polynomial f_ homo, one can multiply it with c^T where c is some n-dimensional vector to obtain an inhomogeneous polynomial. In the following, we describe our framework with a homogeneous polynomial of even degree and then provide a generalization in Section <ref>.
Let ||·|| denotes the operator norm. As specified in <cit.>, we have that the norm || D || ≤ p ||A|| and || || ≤ p^2 ||A||. Without loss of generalization, we can set the norm of A, ||A|| = 1, so that p^2 ||A|| ≤ p^2, and thus we can guarantee that ||D|| < p and |||| ≤ p^2. Throughout this work, we assume this condition holds for convenience, as rescaling by some factor does not change the nature of the problem, e.g., the convexity of function and particularly the (asymptotical) running time of our method.
§.§ Dissecting Convexity of Function
We are interested in the convexity of f in some domain 𝒟⊂ℝ^n. By trivially redefining the function (e.g., with a coordinate shift), we can choose 𝒟 to be a hypersphere with radius 1 for simplicity. The following criterion is a well-known result in mathematics and can be found in many standard literature.
Criterion: Over some domain 𝒟, if the Hessian matrix of a given function f is positive-semidefinite at every point, then the function f is convex.
The application of the above criterion is straightforward. We compute the Hessian matrix of f at chosen points in the region and check its spectrum. The positive-semidefiniteness of a matrix is equivalent to all eigenvalues being non-negative. Therefore, the sign of the spectrum is necessary and sufficient for revealing convexity. As we mentioned, the Hessian matrix can be written down explicitly, which is convenient for spectral analysis. As the dimension of grows with the number of variables (there are n variables), finding the full spectrum of is computationally demanding. Furthermore, the only information that matters is the sign of the smallest eigenvalue, as we only need to see if it is positive or not. Therefore, we propose to find the minimum eigenvalue of . If such an eigenvalue is non-negative, then all other eigenvalues are non-negative, which implies that is positive-semidefinite and hence, the corresponding f is convex within the domain 𝒟. In the following section, we construct a quantum algorithm to first find and then verify the sign of the smallest eigenvalue, thereby dissecting the convexity of the function f, given the details of f and related assumptions from the previous section Sec. <ref>.
§ QUANTUM ALGORITHM
We begin with a remark that all details regarding relevant definitions as well as useful tools are provided in Appendix <ref>. Here proceed to describe our main result.
§.§ Constructing Hessian for a Single Point
In Sec. <ref>, we first introduce the general idea behind our work: finding the minimum eigenvalue of . Our first task is to produce the block encoding of that can be used for further analysis. In order to achieve this goal, we recall the following important formulation:
() = _1,2,...,p-1( (M_H+M_D) ( (^T)^⊗ p-1⊗_n ) ),
where M_H = 2 ∑_j ≠ k^p Θ_jk
and M_D = ∑_m=1^p M_m. The oracle access to entries of each M_m and H_jk allows us to use Lemma <ref> to produce the ϵ-approximated block encoding of M_H/(sp^2) and M_D/(sp) (where s is the sparsity of A) in time:
𝒪( p(plog(n) + log^2.5(1/ϵ) ) ).
Thus, it is quite simple to obtain the block encoding of M_D/(sp^2) from M_D/(sp) by multiplying with a factor 1/p (see Lemma <ref>). Lemma <ref> then allows us to obtain the ϵ-approximated block encoding of (M_H+M_D)/(2sp^2).
The next recipe that we need includes the following simple relations.
_1 (A ⊗ B) (^T ⊗) = (^T A ) B,
(^T ⊗) (A ⊗ B) (^T ⊗) = ^T ⊗ (^T A ) B.
Using these properties plus the tensor structure of M_D and M_H, we can show that:
(^T) ^⊗ p-1⊗_n (M_H + M_D) (^T) ^⊗ p-1⊗_n = (^T) ^⊗ p-1⊗().
The reason comes from the fact that
(^T)^⊗ p-1 = ^⊗ p-1 (^T)^⊗ p-1
and that
(^T)^⊗ p-1⊗_n (M_H+M_D) ^⊗ p-1⊗_n = ().
For now, we assume that we have a block encoding of (^T), as subsequently, we will show how to produce such a block encoding and generalize it to deal with multiple points chosen from the domain 𝒟, as required by the convexity criterion. Lemma <ref> allows us to use p-1 block encodings of (^T) and a trivial block encoding of _n to obtain the block encoding of (^T)^⊗ p-1⊗_n. Then Lemma <ref> (combined with the simple relations derived above) yields the block encoding of (^T)^⊗ p-1⊗()/(2sp^2).
§.§ Constructing “multi-points” Hessian
§.§.§ For homogeneous polynomial of even degree
We remark that the above construction yields the block encoding of the operator (^T)^⊗(p-1)⊗ ()/(2sp^2), which contains the tensor product of ^T and Hessian matrix at a given point . As it will become clearer later, in order to take advantage of quantum parallelism for analyzing the Hessian spectrum at multiple points, we need to adjust the above procedure to construct what we call a “multi-point” Hessian.
For further clarity and to avoid confusion with the above construction, we set the following notation for subsequent discussions. Let 𝒩 be the number of points of consideration; _i (∈ℝ^n) is the i-th point; the corresponding Hessian of f evaluated at _i is then (_i). The first goal is to produce the block encoding of the following operator ⊕_i^𝒩 (_i_i^T)^⊗p-1⊗(_i)/(2sp^2), which has the matrix representation as:
1/2sp^2 [ (_1_1^T)^⊗p-1⊗(_1) ⋯ ⋯ ⋯; ⋯ (_2_2^T)^⊗p-1⊗(_2) ⋯ ⋯; ⋯ ⋯ ⋯ ⋯; ⋯ ⋯ ⋯ (_𝒩_𝒩^T)^⊗p-1⊗(_𝒩 ); ].
Now, we outline the procedure that produces the desired block encoding. First, we have the following lemma:
Given block encoding of (M_H+M_D)/(2sp^2) (as constructed in Sec. <ref>), the block encoding of operator ⊕_i^𝒩 (M_H + M_D)/(2sp^2) can be prepared with extra 𝒪(1) cost.
Proof: To show this, we need to add an extra register of dimension 𝒩. The resulting tensor product of _𝒩 and unitary block encoding of (M_H+M_D)/(2sp^2) produces the block encoding of _𝒩⊗ (M_H+M_D)/(2sp^2), which is exactly ⊕_i^𝒩 (M_H + M_D)/(2sp^2) by a simple algebraic property.
Next, we introduce the following crucial recipe:
The block encoding of the operator ⊕_i=1^𝒩 (_i_i^T)^⊗ p-1⊗_n can be prepared in time 𝒪(p(log(n)).
Proof: To prove the above lemma, we first consider a log(n𝒩) qubits system which resides a Hilbert space ℋ of dimension n𝒩. Let U be some unitary operator. Once acting on the basis state | 0⟩_ we obtain the state |ϕ⟩, i.e.,
U | 0⟩_ = |ϕ⟩.
Let |ϕ⟩ = (x_1,x_2,...,x_n𝒩)/C where
C = √(∑_k=1^n x_k^2 )
is the normalization factor. In this paper, we work in the real regime, i.e., x_i ∈ℝ for all i = 1,2,..., n. If we break such a vector into parts, and denote each part as _j (j=1,2,..., ), e.g., _j = (x_(j-1)n +1,x_(j-1)n+2, ..., x_(j-1)n +n). Note that given such notation, the normalization factor C is essentially equal to C = ∑_i=1^ |_i|^2 where |.|^2 refers to the usual l_2 norm of a vector. It is then straightforward to decompose |ϕ⟩ as: |ϕ⟩ = 1/C∑_j=1^𝒩|j⟩⊗_j. Before proceeding, we remark on the unitary U that generates the desired state |ϕ⟩. We are interested in points {_i}_i=1^. If we choose these points classically, which means that we know their coordinates plus the norms respectively, then we can use the well-known amplitude encoding method <cit.> to load these entries into a quantum state, resulting in a quantum circuit U of depth 𝒪(log(n)). On the other hand, if we choose U to be a random unitary circuit, then a constraint is imposed on the coordinates of |ϕ⟩, which implies C=1. In this case, the set of points {_i}_i=1^ must have their norms summing up to 1, which may reduce the number of points considered in a given domain in one go.
Now we append another ancillary system having dimension initialized in | 0⟩_, to obtain the state | 0⟩_⊗|ϕ⟩ = 1/C∑_j=1^𝒩| 0⟩_⊗|j⟩⊗_j. Using CNOT gates to copy the second register to the first one, i.e.,
1/C∑_j=1^𝒩| 0⟩_⊗|j⟩⊗_j ⟶1/C∑_j=1^𝒩|j⟩_⊗|j⟩⊗_j.
If we trace out the first register (with subscript ), we obtain the state (1/C^2)∑_j=1^|j⟩⟨j| ⊗ _j_j^T (we recall that it should be _j_j^†, but since we work in the real regime, ^T and ^† are identical). Lemma <ref> allows us to prepare the (exact) block encoding of (1/C^2) ∑_j=1^|j⟩⟨j| ⊗ _j_j^T (in complexity 𝒪(log (n) ), whose matrix representation is as follows:
1/C^2∑_j=1^|j⟩⟨j| ⊗ _j_j^T = 1/C^2[ _1_1^T · · ·; · _2 _2^T · ·; · · · · ; · · · __^T ].
If C is greater than 1, the factor C^2 from the above representation can be removed using the amplification technique (basically uniform singular value amplification from <cit.>), with a further complexity 𝒪(C^2). In reality, if we prefer to choose points {_i}_i=1^ uniformly within the hypersphere, then we can expect that C^2 = ∑_i=1^ |_i|^2 ≤, which means that the complexity of the above step can be 𝒪() (for C ≥ 1).
For C smaller than 1, one cannot use the amplification method <cit.>. Therefore, the factor C cannot be removed by amplification. (Of course, this can be avoided using different or more points.) For now, we continue the construction with C being greater or equal to 1 (which means it is removed from the above equation). Subsequently, we will return to the case C being smaller than 1 and show that the structure of a homogeneous polynomial allows us to factor out some power of C, resulting in a different expression for the Hessian evaluated at given points. Hence, the final complexity is different (from the case C> 1) by a factor of power of C.
Define |_i⟩≡_i/|_i|, e.g., the normalized vector. We have that _i_i^T = |_i|^2 |_i⟩⟨_i|. Our goal is to transform the above operator into its square root, e.g., for each i, we aim to transform |_i|^2 |_i⟩⟨_i|⟶ |_i||_i⟩⟨_i|. In order to achieve such a goal, we recall two results from <cit.> and <cit.>:
Given δ, ϵ∈ (0, 1/2], c∈ (0,1] and let f(x) = 0.5 x^c. There exists an even/odd polynomial of degree 𝒪( 1/δlog( 1/ϵ ) ) such that
||P-f||_ [δ,1] ≤ϵ, ||P||_[-1,1]≤ 1.
[<cit.> Theorem 56]
Suppose that U is an
(α, a, ϵ)-encoding of a Hermitian matrix A. (See Definition 43 of <cit.> for the definition).
If P ∈ℝ[x] is a degree-d polynomial satisfying that
* for all x ∈ [-1,1]: |P(x)| ≤1/2,
then, there is a quantum circuit Ũ, which is an (1,a+2,4d √(ϵ/α))-encoding of P(A/α) and
consists of d applications of U and U^† gates, a single application of controlled-U, and 𝒪((a+1)d)
other one- and two-qubit gates.
Choosing c= 1/2, by using polynomial from Lemma <ref> as an approximation to 0.5√(x) plus Lemma <ref>, we can obtain the following (ϵ-approximated) transformation (remind that for all i, _i_i^T ≡ |_i|^2 |_i⟩⟨_i|):
[ _1_1^T · · ·; · _2 _2^T · ·; · · · · ; · · · __^T ]⟶[ 0.5 |_1| |_1⟩⟨_1| · · ·; · 0.5 |_2| |_2⟩⟨_2| · ·; · · · · ; · · · 0.5 |_| |_⟩⟨_| ]
Note that we can not remove the factor 0.5 by amplification, as the norm needs to be less than or equal to 1/2. Moreover, Lemma <ref> admits an approximation of a given positive power function on the interval [δ,1]. To apply such a result to our context, we need to make sure that our interval is appropriate, which means that δ needs to be less than or equal to min_i { |_i|^2 }_i=1^. In order to find such a minimum, we can use Lemma <ref> to the left-hand side of the above equation in the case that the points are unknown. However, if we classically pick these points, we know their norms, including the minimum.
In Sec. <ref>, we have mentioned a very simple way to prepare the block encoding of the identity matrix of any dimension. Suppose we have a block encoding of _n(p-1), then Lemma <ref> allows us to prepare the block encoding of ∑_i=1^|i⟩⟨i|⊗ |_i||_i⟩⟨_i|⊗_n^⊗ p-1. We note that with 2log(n) SWAP gates, we can swap the order of |_i||_i⟩⟨_i| and any _n among p-1 such ones. Therefore, it takes p-1 further steps to achieve all of them, e.g., block encoding of operators of the form
∑_i=1^|i⟩⟨i|⊗_n ⊗ |_i| |_i⟩⟨_i|⊗⋯⊗_n, ∑_i=1^|i⟩⟨i|⊗_n ⊗_n ⊗ |_i||_i⟩⟨_i|⊗⋯⊗_n, ..., ∑_i=1^|i⟩⟨i|⊗_n ⊗_n ⊗⋯⊗ |_i||_i⟩⟨_i|⊗_n.
Lemma <ref> yields the block encoding of their products, and it is easy to note that their product is ∑_i=1^|i⟩⟨i|⊗ ( |_i||_i⟩⟨_i|)^⊗ p-1⊗_n, which is exactly ⊕_i=1^𝒩 (|_i||_i⟩⟨_i|)^⊗ p-1⊗_n by a simple tensor algebraic property. The complexity of this step is 𝒪(plog(n) 1/|_min|), where |_min| = min_i { |_i|^2 }_i=1^. To summarize what we have so far, we state the following:
Assuming that C = √(∑_i=1^ |_i|^2)≥ 1. An ϵ-approximated block encoding of ⊕_i=1^𝒩 (|_i||_i⟩⟨_i|)^⊗ p-1⊗_n can be prepared in time complexity 𝒪( p 1/|_min|log(n) log(1/ϵ) ).
We are now ready to construct the “multi-point” Hessian, which is straightforward by combining the methods outlined in <ref>, Lemmas <ref> and <ref> and the following simple property of matrix multiplication:
[ A_1 0; 0 A_2 ]·[ B_1 0; 0 B_2 ]
= [ A_1B_1 0; 0 A_2B_2 ],
which holds for any higher dimension N, i.e.,
⊕_i=1^N A_i ⊕_i=1^N B_i = ⊕_i=1^N A_i B_i.
More specifically, from Lemma <ref>, we have the block encoding of ⊕_i=1^ (M_H+M_D)/(2sp^2). Using Lemma <ref> to obtain the block encoding of
⊕_i=1^𝒩( 0.5|_i||_i⟩⟨_i|) ^⊗ p-1⊗_n · ⊕_i=1^(M_H+M_D)/2sp^2· ⊕_i=1^𝒩( 0.5 |_i| |_i⟩⟨_i|)^⊗ p-1⊗_n,
which is exactly
⊕_i=1^( 0.5|_i||_i⟩⟨_i|)^⊗ p-1⊗_n ·(M_H+M_D)/ 2sp^2·( 0.5|_i||_i⟩⟨_i|)^⊗ p-1⊗_n = 1/4^p-1⊕_i=1^ (|_i⟩⟨_i|)^⊗ p-1⊗(_i)/2sp^2.
Due to the fact that, for all i, |_i||_i⟩⟨_i| = |_i⟩_i^T (basically we absorb the norm to the ⟨_i| to obtain _i^T), then we use property (<ref>) to obtain the block encoding of ⊕_i=1^ (|_i⟩⟨_i|)^⊗ p-1⊗(_i)/(2sp^2). We remark a subtlety that, for any i, 0.5|_i||_i⟩ is essentially proportional to |_i||_i⟩, which means that one can `absorb' that factor 0.5 into the calculation of Hessian. This is a particular property of homogeneous polynomial (see Sec. <ref>), whereby a rescale of the given input ⟶λ (for λ∈ℝ) would result in a rescaling of Hessian, i.e., (λ) = λ^2(p-1)().
At the beginning of this section, we mentioned two cases: the normalization factor C ≥ 1 or C < 1. For C <1, we cannot remove the factor C in Eqn (<ref>); the aforementioned property of homogeneous polynomial then allows us to handle the case C ≤ 1 in a simple manner, as we treat the point _i/C as a scaled point _i ⟶_i/C, which means that eventually there will be a factor C^2p-2 absorbed, e.g, in the above formulation, we would have the following operator:
1/(4C^2)^p-1⊕_i=1^ (|_i⟩⟨_i|)^⊗ p-1⊗(_i)/2sp^2.
One might wonder that for the case C ≥ 1 if we did not use amplification to remove the factor C, then we would end up having the same form as above. It does not seem to be an issue for homogeneous polynomials, as discussed above, due to the difference in the Hessians being just a scaling factor. However, there are two reasons. First, our method is aimed at dealing with polynomials of arbitrary type, as we will generalize subsequently. This means that the homogeneous property will not hold; the homogeneous polynomial is just a base on which we can build and achieve the generalization conveniently. Second, as mentioned previously, for C < 1, we can not use the amplification method to remove the factor C. Besides that, the subsequent strategy that we will use to dissect convexity (in Section <ref>) is tracking the value of some operators that have the form as the above. If there is an extra factor C (being greater than 1), we will need to choose the error tolerance to be smaller (by a factor of C^2p-2) to reveal the correct eigenvalue of the desired Hessian, which would result in a substantial running time.
With the above operator (for both cases C ≥ 1 and C < 1), we are able to generalize it to polynomials of arbitrary type. The following section shows how to achieve our goal based on what we have obtained so far.
§.§.§ Generalization to Polynomial of Arbitrary Kind
Previously, we used the particular form of f, which is a homogeneous polynomial of even degree. Now we generalize our method to deal with arbitrary polynomials, or more specifically, monomials as also mentioned in <cit.>, which include homogeneous polynomials of odd degree and inhomogeneous polynomials. According to <cit.>, an inhomogeneous function can be given below by inserting an extra factor into a homogeneous one:
f() = ∑_q=1^P-1 (c_q^T ) ∏_k=1^q-1 (^T B_k q ).
We recognize that the term ∏_k=1^q-1 (^T B_kq) is in fact an alternative expression for a homogeneous polynomial of even degree 2(q-1), and c_q^T is a -dependent coefficient that adds inhomogeneities. Therefore, all terms in the above summation share a similar form, and the function f(x) is simply adding them. Since the derivative of a sum of functions is the sum of the derivative of the constituting functions, we consider the following part separately g() = (c^T ) ∏ (^T B ) where we already ignore the subscript and treat with full generalization, e.g., with arbitrary order of the polynomial. The partial derivative:
∂ g/∂ x_m = c^T ∂∏ (^T B ) /∂ x_m + ∏ (^T B ) ∂ c^T /∂ x_m.
Now we take further partial derivative:
∂^2 g/∂ x_n ∂ x_m = c^T ∂^2 ∏ (^T B ) /∂ x_n ∂ x_m + ∂∏ (^T B ) /∂ x_m ∂ c^T /∂ x_n + ∂∏ (^T B ) /∂ x_n ∂ c^T /∂ x_m.
Denote ∏ (^T B ) ≡ h(). Since the term ∏ (^T B ) is a homogeneous polynomial of even degree, we know its gradient (composed of partial derivatives, see Eqn. (<ref>)) and Hessian matrix (composed of partial derivatives of second order, see Eqn. (<ref>)), therefore, the above equation implies that the Hessian of g is generally expressed as:
(g()) = (c^T ) (h() ) + ∇ h() c^T + c ∇ h() ^T.
Since h() is a homogeneous polynomial of even degree, its gradient and Hessian can be computed using known technique <cit.> (see further Sec. <ref>, more specifically Eqn (<ref>) and Eqn (<ref>)). The only difference we need to consider is the contribution from c, which accounts for the inhomogeneities part. How to deal with the extra term c^T depends greatly on what kind of access we have to c. We now outline a solution in the case where c is generated by some unitary U_C, e.g., U_C | 0⟩ = |c⟩≡ c (assuming further that |c|=1). From the unitary U_C, using Lemma <ref> we have the block encoding of cc^T.
For now, we assume to work in the regime where the normalization factor C ≥ 1. The first goal is to produce the block encoding of some operator that includes ∇ h() c^T as a tensor component (as in Sec. <ref>). Since h() is the regular homogeneous part, the gradient operator can be computed according to Sec. <ref>, i.e., there exists a procedure similar to what was outlined in Sec. <ref> and Sec. <ref> that produces the block encoding of the following operator
⊕_i=1^1/4^p-1(|_i⟩⟨_i|)^⊗ p-1⊗1/sp^2∇ h(_i) _i^T.
More specifically, from equation (<ref>), if we ignore the M_H operator, we then obtain a similar property for D():
(^T) ^⊗ p-1⊗_n · M_D · (^T) ^⊗ p-1⊗_n = (^T) ^⊗ p-1⊗ D().
Consequently, we can use the result of Section <ref> and <ref> to obtain the block encoding of the operator ⊕_i=1^( 0.5 |_i||_i⟩⟨_i|)^p-1⊗_n. Then we use the block encoding of operator M_D/sp^2 plus lemma <ref> to obtain the block encoding of their products, which is basically
1/4^p-1⊕_i=1^ (|_i⟩⟨_i|)^⊗ p-1⊗1/sp^2 D(_i).
From Sec. <ref>, we have the operator ⊕_i=1^_i_i^T (the construction above Lemma 3). If we use Lemma <ref> to construct the block encoding of ⊕_i=1^_i _i^T ⊗_n^⊗ p-1. Then, with 2log(n) SWAP gates, we can swap the first and last register, e.g., obtaining ⊕_i=1^_n^⊗ p-1⊗_i_i^T. Then we use Lemma <ref> to obtain the block encoding of:
1/4^p-1⊕_i=1^ (|_i⟩⟨_i|)^⊗ p-1⊗1/sp^2 D(_i) ·⊕_i=1^_n^⊗ p-1⊗_i_i^T = 1/4^p-1⊕_i=1^ (|_i⟩⟨_i|)^⊗ p-1⊗1/sp^2 D(_i)_i_i^T
= ⊕_i=1^1/4^p-1(|_i⟩⟨_i|)^⊗ p-1⊗1/sp^2∇ h(_i) _i^T.
Note that we used the property of gradient operator of homogeneous even degree function D(_i)_i = ∇ h(_i).
From the block encoding of cc^T, it is trivial to produce the block encoding of ⊕_i=1^_n^⊗ p-1⊗ cc^T. Denote c^T _i = _i^T c ≡β_i. We then use Lemma <ref> to produce the block encoding of the multiplied operator:
⊕_i=1^ ( |_i⟩⟨_i|)^⊗ p-1⊗∇ h(_i) _i^T/4^p-1 sp^2·⊕_i=1^_n^⊗ p-1⊗ cc^T = ⊕_i=1^ (|_i⟩⟨_i|)^⊗ p-1⊗β_i/4^p-1 sp^2∇ h(_i) c^T.
In order to remove the factor β_i from the above formulation, we use the following procedure.
In Appendix <ref>, we prove the following thing:
Given the block encoding of cc^T, then it is possible to construct the block encoding of the diagonal matrix ℬ with entries ℬ_ij = β_iβ_j δ_ij where β_i = _i^T c.
Recall that for the homogeneous part h(), procedure in Sec. <ref> obtains the following operator:
1/4^p-1⊕_i=1^ (|_i⟩⟨_i|)^⊗ p-1⊗1/2sp^2(_i).
Thus, we use Lemma <ref> and Lemma <ref> to obtain the block encoding of:
1/4^p-1⊕_i=1^|_i⟩⟨_i|^⊗ p-1 ⊗β_i^2 (_i) /2sp^2.
Recall that we have the block encoding of the operator
⊕_i=1^ (|_i⟩⟨_i|)^⊗ p-1⊗β_i/4^p-1 sp^2∇ h(_i) c^T.
Using Lemma <ref> to insert an extra factor 1/2 into the above operator, i.e., we obtain:
⊕_i=1^ (|_i⟩⟨_i|)^⊗ p-1⊗β_i/4^p-1 2 sp^2∇ h(_i) c^T.
We remark that, the transpose of the block encoding of the above operator is the block encoding of:
⊕_i=1^ (|_i⟩⟨_i|)^⊗ p-1⊗β_i/4^p-1 2 sp^2 c ∇ h(_i)^T
We then use Lemma <ref> to obtain the block encoding of a sum of two operators above,
ℙ = 1/3( 1/4^p-1⊕_i=1^|_i⟩⟨_i|^⊗ p-1 ⊗β_i^2 (_i) /2sp^2 + ⊕_i=1^ (|_i⟩⟨_i|)^⊗ p-1⊗β_i/4^p-1 2 sp^2∇ h(_i) c^T +
⊕_i=1^ (|_i⟩⟨_i|)^⊗ p-1⊗β_i/4^p-1 2 sp^2 c ∇ h(_i)^T)
= 1/31/4^p-1sp^2( ⊕_i=1^β_i |_i⟩⟨_i|^⊗ p-1 ⊗_inho(_i) ),
where _inho(_i) refers generally to the Hessian evaluated at _i of the given inhomogeneous function (see derivation in Eqn. <ref>). Our goal is to remove the factor β_i (for all i) in the above operator. Recall that from Lemma <ref> we have block encoding of ℬ that contains β_i^2 (for i=1,2,...,) on the diagonal. Note that we also have block encoding of ℬ⊗_n ≡⊕_i=1^β_i^2 _n (which is trivial to obtain using block encoding of identity matrix _n plus Lemma <ref>). We then can use the following polynomial approximation of the negative power function from <cit.>:
Let δ, ϵ∈ (0,1/2], c>0 and let f(x) = δ^c/2 x^-c, then there exists a (could be even or odd) polynomial P such that || P - f(x) ||_δ,1≤ϵ, ||P||_-1,1≤ 1/2. The degree of the polynomial P is 𝒪(max [1,c]/δlog(1/ϵ) ).
To apply the above lemma to the operator ⊕_i=1^β_i^2 _n, we need to know the lower bound, i.e., the minimum of {β_i^2}_i=1^, denoted as β_min which can be estimated using Lemma <ref>. Then, we use the above Lemma (with c = 1/2) with Lemma <ref> to ϵ approximately transform the operator
⊕_i=1^β_i^2 _n ⟶⊕_i=1^√(β_min)/2β_i_n.
The complexity of this step is 𝒪( 1/β_minlog(1/ϵ ) ). Then, we can use Lemma <ref> to obtain the approximation of the block encoding of
⊕_i=1^√(β_min)/2β_i_n ·ℙ = √(β_min)/2· 3 · 4^p-1sp^2 ( ⊕_i=1^|_i⟩⟨_i|^⊗ p-1 ⊗_inho(_i) ).
Finally, in order to remove the factor √(β_min)/6, we can use the amplification method <cit.> with further complexity 𝒪(6/√(β_min) ) = 𝒪(1) as it doesn't scale as any input parameter. In the end, we obtain the following operator:
1/4^p-1 sp^2( ⊕_i=1^|_i⟩⟨_i|^⊗ p-1 ⊗_inho(_i) ),
which has a similar form to Eqn. (<ref>). The above construction was discussed for the case C ≥ 1. For C < 1, all the steps are essentially the same (as in Section <ref>), except that there will be an appearance of factor C at the end, i.e., we would obtain the following operator:
1/(4C^2)^p-1 sp^2( ⊕_i=1^|_i⟩⟨_i|^⊗ p-1 ⊗_inho(_i) ).
Before moving further, we make a simple remark that for either case, C ≥ 1 and C < 1, the operators of interest (e.g., the above one) only differ by a factor of C^2p-2. Now, we are ready to tackle our main objective, which is to dissect the convexity of a given function of arbitrary type.
§.§ Testing Positive-semidefiniteness
Remind that we are interested in the spectrum, or more specifically, the minimum eigenvalue of Hessian at some given point . From Sec. <ref> we have that |||| ≤ p^2 at any point , which means the eigenvalues of /p^2 lie within (-1,1). Given such a range of eigenvalues of , the shifted matrix (_n - /p^2)/2 would have a spectrum lying in the range (0,1). The reason why we change the attention to (_n - /p^2)/2 is because we can apply the improved quantum power method proposed in <cit.> to find its maximum eigenvalue.
Given the block encoding of some positive-semidefinite matrix A whose eigenvalues are ∈ (0,1), then its largest eigenvalue can be estimated up to additive accuracy δ in time
𝒪( T_A/δ( log(1/δ) + log(n)/2) ),
where T_A is the complexity of producing the block encoding of A.
To see how it applies to our main problem, let λ_min denotes the minimum eigenvalue of /p^2. If λ_min < 0 (being negative) then (1 - λ_min)/2 > 1/2, and that (1 - λ_min)/2 is the maximum eigenvalue of (_n - /p^2)/2. Therefore, we can track sign of λ_min from estimating minimum eigenvalue of (_n - /p^2)/2.
From the previous section we have obtained the block encoding of 1/4^p-1⊕_i=1^|_i⟩⟨_i|^⊗ p-1 ⊗(_i)/2sp^2. The last recipe that we need to complete our algorithm is the following. First, suppose {A_i}_i=1^M is a set of Hermitian operators, then the eigenspace of ⊕_i=1^M A_i is simply the union of the eigenspace of all {A_i}_i=1^M <cit.>. Second, the eigenspace of ⊗_i=1^M A_i is the tensor product of the eigenspace of each {A_i}, which means that the eigenvalues of ⊗_i=1^M A_i is the multiplication of eigenvalues of contributing matrices. From the second property, we can claim that for any i, the eigenvalues of (1/4^p-1) |_i⟩⟨_i|^⊗ p-1 ⊗(_i)/(2sp^2) is the eigenvalues of (1/4^p-1) (_i)/(2sp^2). The reason is that, each operator |_i⟩⟨_i| is a projector and hence, its only eigenvalue is 1. Therefore, from the first property, we have that
maximum eigenvalue of ⊕_i=1^1/21/2s 4^p-1( _n - |_i⟩⟨_i|^⊗ p-1 ⊗(_i)/p^2) = max__i{_n - (_i)/p^2/4s 4^p-1}_i=1^.
One may wonder why the right-hand side is important. We remind that if the function is convex in the given domain 𝒟, then its Hessian matrix does not have non-negative eigenvalues at all points in 𝒟. Therefore, if max__i{(_i) } is non-negative, then the Hessian is semi-definite at all points of consideration. As we pointed out from the beginning of this section, the condition of min__i{(_i) } being non-negative is equivalent to min__i{ (_n - (_i)) / 2 } being no less than 1/2.
Fortunately, given the block encoding of (1/4^p-1)⊕_i=1^|_i⟩⟨_i|^⊗ p-1 ⊗(_i)/2sp^2, it is simple to use lemma <ref> (plus a trivial block encoding of identity matrix of dimension n) to obtain the block encoding of their summation, which is exactly:
⊕_i=1^1/4 sp^2 4^p-1( _n + |_i⟩⟨_i|^⊗ p-1 ⊗(_i) ).
Then, we can find its minimum eigenvalue using recent work of <cit.>, as we mentioned at the beginning of this subsection, e.g., Lemma <ref>.
We finally remark that we are estimating the minimum eigenvalue of the operator
⊕_i=1^1/4s 4^p-1( _n - |_i⟩⟨_i|^⊗ p-1 ⊗(_i)/p^2),
which contains the extra factor 4s 4^p-1. What we actually want is the maximum eigenvalue of the operator
⊕_i=1^1/2( _n - |_i⟩⟨_i|^⊗ p-1 ⊗(_i) /p^2),
which corresponds exactly with
max__i{_n - (_i)/2p^2}_i=1^.
Therefore, we need to use Lemma <ref> with an adjusted multiplicative error, e.g., by choosing
δ⟶δ/2s 4^p-1.
Summary of the quantum algorithm procedure. For convenience, we provide key points of our framework (which means we leave out technical details) plus corresponding complexity at each step.
∙ We begin with a set of points {_i}_i=1^ (each _i ∈ℝ^n and |_i| ≤ 1) of interest. Define C^2 = ∑_i=1^ |_i|^2 as normalization factor.
∙ Load the above points into quantum state |ϕ⟩ = 1/C∑_i=1^|i⟩⊗_i, then construct the block encoding of the following operator
1/C^2[ _1_1^T · · ·; · _2 _2^T · ·; · · · · ; · · · __^T ].
The complexity of the above step is 𝒪( log(n ).
∙ Break into two cases C ≥ 1 and C < 1. First, consider C ≥ 1, then use amplification <cit.> to remove the factor C^2. The complexity for amplification step is 𝒪(C^2 log(n)). Then we
construct the following operator
[ 0.5 |_1| |_1⟩⟨_1| · · ·; · 0.5 |_2| |_2⟩⟨_2| · ·; · · · · ; · · · 0.5 |_| |_⟩⟨_| ].
The complexity of this step is 𝒪( C^2/_minlog(n) ) where _min≡min{ |_i|^2 }_i=1^.
∙ Using the above operator, we construct the following operator
1/2^p-1[ ( |_1| |_1 ⟩⟨_1 | )^⊗ p-1⊗_n ; ( |_2| |_2 ⟩⟨_2 | )^⊗ p-1⊗_n ; ⋯ ; ( |_| |_⟩⟨_ | )^⊗ p-1⊗_n ].
The complexity upon this step is 𝒪( p C^2/_minlog(n) ).
∙ Use oracle access to A (defined in section <ref>) to construct the block encoding of (M_H + M_D)/2sp^2. The complexity of this step is 𝒪( p^2 log(n) ).
∙ Employing mathematical property of homogeneous polynomial (see <ref>), we construct the block encoding of
1/4^p-1⊕_i=1^ (|_i⟩⟨_i|)^⊗ p-1⊗(_i)/2sp^2
that contains our Hessian (for homogeneous polynomial).
∙ Generalize the above construction to a polynomial of arbitrary type. The complexity of this step (including everything from the beginning) is
𝒪( C^2/_minlog(n) + p^2 log(n) ).
∙ Construct the block encoding of the operator
⊕_i=1^1/4 s 4^p-1( _n - |_i⟩⟨_i|^⊗ p-1 ⊗(_i/p^2) ).
∙ Find minimum eigenvalue of the above operator by using Lemma <ref> (with accuracy being scaled ϵ/4sp^2 4^p-1, and infer the convexity from such eigenvalue. More specifically, if the minimum eigenvalue is less than 1/2, then the function is not convex. Otherwise, it is convex. The complexity of this step is:
𝒪( 4^p-1 sp^2 ( log(n) + log( 4^p-1 sp^2 ) ) ( p C^2/_min (log(n) + p^2 log(n) ) ).
∙ Finally consider the case where the normalization factor C < 1. Repeat the same procedure, which results in almost the same complexity, i.e.,
𝒪( (4C^2)^p-1 sp^2 ( log(n) + log( 4^p-1 sp^2 ) ) ( p C^2/_min (log(n) + p^2 log(n) ) ).
We remark that in the above summary, we did not take into account the error term ϵ that appears in multiple steps, such as amplification, encoding matrices M_H,M_D, and taking the square root of the operator. To dissect the convexity, we can ignore the error-dependence factors because we only care about the sign of the minimum eigenvalue instead of a real estimation. Therefore, one can set the error to be some constant. Furthermore, the complexity dependence on the error is polylogarithmic, which is efficient. The generalization to polynomials of arbitrary type is carried out in Section <ref>. A subtle detail about the above running time is that the scaling depends on |_min|, which is the minimum norm of the length of chosen points, as well as C, which is the normalization factor. Let's say we choose points to be distributed uniformly in the domain of interest (hypersphere), then we can assume that |_min| ∼𝒪(1) (being some constant) and hence 1 ≤ C^2 ≤. So the factor C^2/|_min| ∈𝒪(), which is linear in . For C being smaller than 1, which means that we choose points having very small norm (at least, for most of them). Then we can assume that |_min| is smaller than 1/, which means that 1/|_min| is greater than , but no greater than 𝒪(). All in all, for both cases where C ≥ 1 or C < 1, the factor C^2/|_min| is 𝒪(), which is linear in .
Now we are ready to state our main result formally:
Let an objective function f (of arbitrary type), with certain assumptions defined as in Sections <ref> and <ref>. Over the domain 𝒟⊆ (-1,1)^n, let 𝒩 be the chosen sample points, denoted as {_i }_i=1^ for convexity testing. Let C^2 = ∑_i=1^ |_i|^2 (≤)) and _min = min_i { |_i|^2 }_i=1^. If C ≥ 1, then the quantum algorithm outlined above can reveal the positive-semidefiniteness of corresponding Hessian in time
𝒪( 4^p-1 sp^2 ( log(n) + log(4^p-1 2sp^2)) ( p (log(n)) + p^2 log(n) ),
where α is some bounded constant. In particular, if C < 1, then the running time is:
𝒪( (4C^2)^p-1 sp^2 (log(n) + log(4^p-1 2sp^2)) ( p (log(n)) + p^2 log(n) ).
Potential advantage
To see the advantage, we take a look at how classical computers can solve the above problem. The Hessian matrix is computed as
() = _1,2,...,p-1( (M_H + M_D) ( ρ_x^⊗ p-1⊗_n ) ),
which involves matrix operation of size n^p. Each operator M_H and M_D is computed from matrix A which takes further time 𝒪( pn^p + p^2 n^p ). The Hessian is then evaluated at points to find the sign of minimum eigenvalue, which results in the final complexity 𝒪( ( pn^p + p^2 n^p)).
The above running time clearly suggests that the quantum algorithm can test convexity superpolynomially more efficiently than its classical counterpart with respect to the number of variables n, meanwhile a bit slower in relative to the number of sample points . Therefore, the quantum method is very efficient when dealing with high-dimensional cases as well as high-degree polynomials.
An interesting question from the above classical procedure is: what if we follow the same routine using a quantum computer for all points? It means that one can run the quantum framework to check the spectrum of Hessian at each point , and repeat the procedure for points. In such a case, the quantum procedure would be very similar to our prior construction, except that we do not care about other - 1 points, more specifically, if we take a look at Eqn. <ref> and pay attention only to the top-left corner, e.g., the operator _1 _1^T (the factor C is treated in a similar manner as the above construction), which means that we treat this point _1 as a point of interest. Then, the same procedure can be carried out to first build the squared operator:
[ _1_1^T · · ·; · · · ·; · · · · ; · · · · ]⟶[ 0.5 |_1| |_1⟩⟨_1| · · ·; · · · ·; · · · · ; · · · · ].
Then, from the right-handed side operator, one proceeds to build the block encoding of ( 0.5 |_1||_1⟩⟨_1|)^⊗ p-1⊗_n (using the routine below Equation (<ref>)). Then, one uses the same properties of Hessian (see Eqn. (<ref>)) to obtain the following:
( 0.5 |_1||_1⟩⟨_1|)^⊗ p-1⊗_n · (M_H+M_D)/2sp^2·( 0.5 |_1||_1⟩⟨_1|)^⊗ p-1⊗_n = 1/4^p-1|_1⟩⟨_1|^p-1⊗(_1)/2sp^2.
One can see that the above formula is essentially the top-left block corner of Eqn. (<ref>), which is quite obvious. Then, one can use the same Lemma <ref> to reveal the minimum value, which indicates the positivity of Hessian at given point _1. One repeats the process for different points, which results in asymptotically the same complexity. The difference is that, as we need to store different eigenvalues (of Hessian at points), the memory usage is required to be as much as . Meanwhile, the quantum process that we have outlined during this work requires 𝒪(log()) qubits to handle the same task, which is more effective in terms of memory usage but sharing the same running time.
§ REMARKS AND DISCUSSION
In this section, we discuss our algorithm in a larger context, showing the potential application of our result in multiple directions.
§.§.§ From Hessian to Curvature and Geometric Structure of Manifold
As we mentioned, the Hessian of a function at a given point encodes the local geometric structures of such a function, e.g., convexity. While the sign of eigenvalues of Hessian can reveal the convexity, the magnitude of eigenvalues can actually reveal how curved it is (see Fig. <ref> for a simple example in 3D). Figure <ref> features two simple surfaces in 3D, which are a plane and a sphere. For the plane, it is easy to compute the Hessian of z(x,y) = -(a/c)x - (b/c)y + d/c in this case, which is simply a 2 × 2 matrix of 0 entries, which implies that the surface is not curved at all, i.e., exactly match the plane shape. On the other hand, for a sphere, z =√(r^2 - x^2 -y^2) so the Hessian matrix is not as straightforward as a plane, but as we can see, the function z is convex on the lower half of plane x-y, but not on the upper half.
In fact, studying the underlying structure of a manifold using functions defined on the manifold is a fundamental aspect of differential geometry. Thus, from this angle, our work suggests a potential aid of a quantum computer in the study of the geometric structure of a manifold, given that a manifold is locally Euclidean and arbitrary smooth function of some variables can be approximated by certain polynomials in some domains.
§.§.§ Variational Quantum Algorithm
A popular topic of recent progress in quantum computation is the variational quantum algorithm (VQA) <cit.>. One of the reasons that make variational quantum algorithms so appealing is that typically, they require low-depth circuits, which is very suitable in the near-term era. VQA has shown its success in many areas such as combinatorial optimization <cit.>, supervised learning <cit.>, etc. A common strategy in the context of VQA eventually boils down to the minimization problem:
min_θ f(θ) = ⟨0| U(θ)^† O U(θ) |0⟩,
where U(θ) refers to some variational circuit, e.g., a circuit composed of rotational gates with adjustable angles. A common method to optimize the above quantity is gradient descent, where the above observables are usually defined as a cost function, and its gradient is computed classically. Then, the parameters are updated iteratively by tuning corresponding rotational gates. Optimizing a quantum circuit is apparently not easy, as a phenomenon called the barren plateau can occur <cit.>, which prevents the efficient training of quantum circuits. In particular, training general variational quantum circuits is even NP-hard <cit.>.
Theoretically, the cost landscape of the above function is also a factor that affects the minimization. If the domain is convex, then any initial randomization falling into that domain can lead to a minimum, which implies that the optimization is efficient. While the exact expression for f(θ) might not necessarily be a polynomial, we remark that over some domains that are sufficiently small, arbitrary functions can be approximated by some polynomials, e.g., Taylor series. Therefore, a visible strategy that we see from our work is that we can consider multiple domains and check the convexity. Given that the task of dissecting convexity can be done superpolynomially fast in the quantum realm, it turns out to be quite useful in this case as we can scan the optimization landscape efficiently with respect to the number of parameters. The challenge then only arises as to how to represent f(θ) with proper polynomials, which allows computable parameters in a form that we assume in our work. At first glance, this challenge seems impossible for a random circuit U, but might not be so for a structured U that has been employed in several contexts, such as the so-called QAOA ansatz <cit.>. We leave this question as a motivation for future exploration of the application of our convexity tester.
§.§.§ Improving Quantum Newton's Method
As a striking corollary of our method is the generalization to arbitrary polynomial as well as major improvement on the resource requirement, upon the quantum Newton's method developed in <cit.>. In <cit.>, we already made progress on quantum gradient descent, and in fact, the method outlined in this work shares certain similarities with what in <cit.>, but the scope is different.
To begin with, we recall some aspects of quantum Newton's method, which is basically a modified gradient descent method. We first follow the same notation and assumption from <cit.>. In such a problem, we are given an objective function f: ℝ^n ⟶ℝ, which is a homogeneous polynomial of even degree as we defined in Section <ref>. The goal is to find the point ∈ℝ^n at which f() is minimum. A standard method for this kind of optimization problem is gradient descent, which is an iterative method in that one begins with a random solution _0 and performs the update iteratively as follows:
_t+1 = _t - η∇ f(_t).
As mentioned in Section <ref>, such particular form of f admits an analytical form for the gradient, as ∇ f() = D(), which was used in <cit.> (and improved in <cit.>) to construct the quantum process carrying out the above iteration procedure. As also mentioned in the same work <cit.>, Newton's method modifies directly upon the gradient descent method by taking account of the curvature of f, i.e., the update rule is as follows:
_t+1 = _t - η ^-1(_t) ∇ f(_t).
Roughly speaking, at a given time step t-th, the method in <cit.> takes multiple copies of _t, use oracle access to simulate exp(-iM_D t) and exp(-i M_H t), to construct the gradient and Hessian (via relation in Equation <ref> and <ref>). Then they perform the subtraction of vectors by using extra ancilla and Hadamard gates, to obtain _t+1. If one wish to obtain a normalized version of temporal solution _t+1, |⟩_t+1, then by measuring the ancilla and post-select on ancilla being |0⟩, one achieves the goal, as in <cit.>. Here, we lift such requirement and consider the problem in a more general manner, that is obtaining a temporal solution written in a form similar to “density operator”, _t _t^T for any given time step t.
From Section <ref>, <ref> (particularly equation (<ref>)) and most importantly section <ref>, we have the block encoding of (^T)^⊗ p-1⊗()/(2sp^2) where refers to the Hessian of polynomial of arbitrary kind. What is lacked is the gradient of polynomial of arbitrary kind, which was not constructed before and neither in the previous work <cit.>, so here we first fill this gap. Recall that from section <ref>, we have that for a general polynomial:
f() = ∑_q=1^P-1 (c_q^T ) ∏_k=1^q-1 (^T B_k q ).
Since the derivative of a sum is equal to sum of derivative of each term within the summation, then for simplicity, as similar to what we did in section <ref>, we consider each constituent of the above summation, which has the form g() = (c^T ) ∏ (^T B ) ≡ (c^T ) h() (where we have defined ∏ (^T B ) ≡ h(). Then by chain rule, it is simple to see that:
∂ g/∂ x_m = (c^T) ∂ h()/∂ x_m + h() ∂ c^T /∂ x_m
= (c^T) ∂ h()/∂ x_m + h() ∂∑_i=1^n c_j x_j /∂ x_m
= (c^T) ∂ h()/∂ x_m + h() c_m
where x_m is the m-th variable of and c_m is the m-th entry of c (note that 1 ≤ m ≤ n and n is th dimension of ). As the gradient of a function is composed of partial derivative of such function with respect to all variables, we have that:
∇ g() = (c^T ) ∇ h() + h() c
Since h() is a homogeneous even degree polynomial, its gradient admits an explicit analytical expression. Recall that from section <ref> and property in equation <ref>, we have that:
(^T) ^⊗ p-1⊗_n ( M_D) (^T) ^⊗ p-1⊗_n = (^T) ^⊗ p-1⊗ D(h())
where D(h()) refers directly to the fact that it is the gradient operator of h() evaluated at the point .
Recall that via oracle access to A, we have ϵ-approximated block encoding of M_D/(sp). Then we can use lemma <ref> to construct the block encoding of
(^T) ^⊗ p-1⊗_n M_D/sp (^T) ^⊗ p-1⊗_n = (^T) ^⊗ p-1⊗D(h())/sp
We also have block encoding of cc^T (by assumption, see section <ref>). Lemma <ref> allows us to construct the block encoding of (^T) (cc^T) = (^T c) c^T. Then using lemma <ref> using block encoding of , we can obtain block encoding of ⊗ (^T c) c^T. Then lemma <ref> allows us to construct the block encoding of
(^T) ^⊗ p-1⊗D(h()) (^T c) c^T /sp = (^T) ^⊗ p-1⊗ (^T c) ∇ h() c^T/sp
Using lemma <ref> again with block encoding of ⊗ (c^T) c^T (which is the transpose of ( c^T) c^T), we obtain the block encoding of
(^T) ^⊗ p-1⊗ (^T c) ∇ h() c^T (c^T) c^T /sp = (^T) ^⊗ p-1⊗ ( c^T)^2 ∇ h() ^T /sp
where we have use c^T c = 1 and ^T c = c^T due to the real regime that we work on. Now we handle the term h() c. Since h() is homogeneous even degree, it has the familiar form:
h() = 1/2⟨|^⊗ p A |⟩^⊗ p
As we have oracle access to A, we can construct the block encoding of A/2s. From block encoding of ^T, it is trivial to obtain the block encoding of (^T)^⊗ p using lemma <ref>. We have that:
(^T)^⊗ pA/2s (^T)^⊗ p = (^T)^⊗ p-1⊗h() /s^T
Now we use the block encoding of ⊗ cc^T plus lemma <ref> to construct the block encoding of
( ⊗ cc^T) ( (^T)^⊗ p-1⊗h() /s^T ) = (^T)^⊗ p-1⊗ (c^T) h() c^T /s
We use lemma <ref> to add a scaling of 1/p to the above term, e.g, obtaining the block encoding of (^T)^⊗ p-1⊗ (c^T) h() c^T /ps. Then we use lemma <ref> to construct the block encoding of:
1/2( (^T) ^⊗ p-1⊗ ( c^T)^2 ∇ h() ^T /sp +(^T)^⊗ p-1⊗ (c^T) h() c^T /ps)
= 1/2( (^T)^⊗ p-1⊗^T c/sp ( (^T c) ∇ h()^T + h()c ^T ) )
= 1/2( (^T)^⊗ p-1⊗^T c/sp ∇ g()^T )
Before moving further, we remind that we have the (ϵ-approximated) block encoding of the following operator:
(^T )^⊗ p-1⊗()/2sp^2 and 1/2( (^T)^⊗ p-1⊗^T c/sp ∇ g()^T )
Now we have enough recipe to deal with quantum Newton's method. Let us recall a lemma from <cit.> (note that in their context, xx^T is exactly ^T in our case):
Given the block encoding of (x x^T)^⊗ p-1⊗D(x)/ps (with complexity T), then it is possible to obtain the block encoding of D(x)/p in time 𝒪( γ^2(p-1) s T_D ), where γ is some bounded constant.
The above form coincides with what we have right above. Therefore, using the same procedure, we can obtain the block encoding of ()/p^2 in similar time
𝒪(γ^2p-2 s ( p^2log(n) + plog^2.5(1/ϵ))).
and we also obtain the block encoding of (^T c) ∇ g()^T/p in time
𝒪( γ^2(p-1) s ( p^2log(n) + plog^2.5(1/ϵ))). )
The extra factor x^T c can be estimated up to small accuracy by using routine in appendix <ref>, setting = 1 and using amplitude estimation. The concrete procedure will be elaborated in appendix <ref>. Then it is removed from the above operator using amplification method, with further 𝒪( 1/(x^T c)) = 𝒪(1) complexity. Therefore, we obtain the ϵ-approximated block encoding of ∇ g()^T/p in complexity
𝒪( γ^2(p-1) s ( p^2log(n) + plog^2.5(1/ϵ)))
The minimum eigenvalue (in magnitude) of ()/p^2 can be found exactly via the convexity testing framework (see section <ref> and <ref>), which is required to perform inversion of () in order to execute quantum Newton's method. The inversion of ()/p^2, or more precisely, the transformation from ()/p^2 ⟶^-1()/Γ, where Γ is roughly κ, where κ is roughly a reciprocal of minimum eigenvalue of () (according to <cit.>), is then carried out using popular methods <cit.>, with running time
𝒪(γ^2p-2 2s ( p^2log(n) + plog^2.5(1/ϵ) ) κ polylog(κ/ϵ) ),
Therefore, it is sufficient to complete the improved quantum Newton's method. More specifically, at t-th time step, we are presented with _t _t^T. In particular, _t _t^T is related to _t+1_t+1^T via the following relation, that was used in <cit.>:
_t+1 (_t+1)^T = (_t - η^-1(_t) ∇ g(_t))( _t - η^-1(_t) ∇ g(_t) )^T
= (_t - η^-1(_t) ∇ g(_t)) ( _t^T - η( ^-1() ∇ g(_t))^T )
= (_t _t^T) - η_t ( ^-1(_t) ∇ g(_t) )^T - η^-1(_t) ∇ g(_t) _t^T +
η^2 (^-1(_t) ∇ g(_t) ) · (^-1(_t) ∇ g(_t) )^T
The block encoding of _t _t^T is apparently presented, more concretely, in quantum Newtons' method, it is from the previous t-1-th step. Previously, we have constructed the (ϵ-approximated ) block encoding of H^-1/Γ and of ∇ g(_t) _t^T. The block encoding of ∇1/p g(_t) _t^T naturally yields the block encoding of (∇1/p g(_t) _t^T)^T = 1/p_t ∇ g(_t)^T. Then we can use lemma <ref> to construct the block encoding of
1/p_t ∇ g(_t)^T (^-1(_t)/Γ)^T
We then can use lemma <ref> to insert the factor η, that we transform the block encoding of the above operator into
η/p_t ∇ g(_t)^T (^-1(_t)/Γ)^T
The unitary transpose of the block encoding of above operator is exactly the block encoding of
η/p^-1(_t)/Γ ∇ g(_t) _t^T
We can use lemma <ref> to construct the block encoding of their product, e.g.,
( η/p^-1(_t)/Γ ∇ g(_t) _t^T ) ·( _t ∇ g(_t)^T (^-1(_t)/Γ)^T η/p)
= |_t|^2 η^2/p^21/Γ^2 (^-1(_t) ∇ g(_t) ) · (^-1(_t) ∇ g(_t) )^T
In order to remove the factor |_t|^2, we just need to note that since we have the block encoding of _t _t^T, we can use lemma <ref> to efficiently find its maximum eigenvalue, which is exactly |_t|^2. Then we can use the amplification method to remove such factor, resulting in further complexity 𝒪( 1/|_t|^2 ).
Due to the term p^2 Γ^2 (which is known as roughly one over minimum eigenvalue of (_t) , we need to use lemma <ref>) to change the block encoding of _t _t^T into 1/p^2Γ^2_t _t^T, and of (η/p) _t ∇ g(_t)^T (^-1(_t)/p Γ)^T into η_t ∇ g(_t)^T (^-1(_t)/p Γ^2)^T (same thing for its transpose). Then we employ lemma <ref> to construct the block encoding of
1/4p^2 Γ^2((_t _t^T) - η_t ( ^-1(_t) ∇ g(_t) )^T - η^-1(_t) ∇ g(_t) _t^T + η^2 (^-1(_t) ∇ g(_t) ) · (^-1(_t) ∇ g(_t) )^T )
which is exactly the block encoding of _t+1_t+1^T/(4pΓ^2) where we remind that Γ^2 is ∼ one over minimum eigenvalue of (_t), or ||(_t)^-1||, which is known because we estimated it at first. This factor can be removed by amplification, resulting in further 𝒪(Γ^2) complexity. Therefore, the total running time of a single step of our quantum Newton's method is
𝒪( 𝒪(γ^2p-2 2s p^2 ( p^2log(n) + plog^2.5(1/ϵ) ) κ polylog(κ/ϵ) ) )
The improvement compared to <cit.> turns out to be substantial as the number of copies requirement for ^T is significantly reduced. At a certain step of Newton's method, the work in <cit.> requires 𝒪(p^5/ϵ^3) copies of the temporal solution to perform an update of the solution, with a total time complexity 𝒪(p^8 log(n) κ /ϵ^4 ). Meanwhile, the running time of our work depends polylogarithmically in terms of error tolerance, and that the number of “copies” of (block encoding of) ^T is p, which is a major advantage compared to <cit.>. In particular, the dependence on p reduces by a power-of-4. The most important thing is that our framework generalizes to polynomials of all kinds, not limited to homogeneous, even-degree ones. We finally emphasize that according to the analysis given in <cit.>, the factor γ from the above running time can be chosen to be arbitrarily small by choosing the parameter η properly, e.g., choosing η≤ 1/(2p ||||^-1) guarantees that γ≤ 4 (see Section 4, Result 4 of <cit.> for detailed derivation).
An important aspect of our improved Newton's method is that the inversion of Hessian () is more accurate. We recall that the method <cit.> requires a reasonable lower bound on the eigenvalues of (). However, in this case, the Hessian depends on , and it changes per each iteration. Therefore, in order to apply the method in <cit.>, we need to estimate the smallest eigenvalue in magnitude, which can be done efficiently, e.g., logarithmically with respect to the size of the matrix, using the method in <cit.>. We note that this extra step has a running time smaller than the inversion itself, which implies that, asymptotically, it does not increase the overall complexity. In <cit.>, the authors assumed the inversion is executed on some well-conditioned subspace of , with the cutoff threshold chosen to be some constant Λ_H^-1. Where and how to obtain a reasonable value for Λ_H^-1 is unclear to us; as we mentioned, the spectrum of Hessian is not a constant over different iterations. Therefore, we believe that the extra step of estimating such a threshold is an important improvement upon <cit.>.
As another application of our convexity testing framework, in particular, one can see that our framework can be applied to enhance the performance of (improved) quantum Newton's method as well as gradient descent method. The reason is straightforward to see, as the gradient descent and quantum Newton's method are aimed to optimize a function by shifting toward the minima from some initialization. Therefore, by scanning the landscape, one can see if the given function is convex within such region. If the function is convex, then a minima is guaranteed, and hence, the algorithm is considered successful.
§ CONCLUSION
In this work, we have investigated the potential of quantum computers in the context of functional analysis, specifically testing the convexity of a given objective function. The problem of testing convexity is converted to the problem of determining the non-negativity of a matrix, e.g., all eigenvalues are non-negative. By employing a useful algebraic property of homogeneous polynomials of even degrees, we wrote down an explicit form of the so-called Hessian and built upon it to generalize the Hessian to arbitrary polynomials. We then combine it with a powerful quantum singular value transformation framework, plus a quantum power method to construct a quantum algorithm that allows us to test the positive-semidefiniteness of such Hessian matrix at multiple points from a given domain. The procedure has running time polylogarithmically respective to the dimension n and linear with the number of sample points , which is superpolynomially faster than the best-known classical approach relative to the number of variables, meanwhile keeping the same complexity on the number of sample points. We also point out two examples where we envision a potential application of convexity testing, including studying the geometric structure of manifold, testing training landscape of variational quantum algorithms as well as optimization landscape for gradient descent/Newton's method for optimization. In particular, as a striking corollary of our result, we provide a major improvement upon the work of <cit.> in multiple aspects: running time (with respect to error tolerance), generality (arbitrary polynomial type) and the subtle detail regarding the inversion of the Hessian. As a whole, our work has added one more interesting example to the field of quantum computation, revealing that the area of functional analysis, and many more areas, is a rich avenue deserving of further research from computational aspects. An interesting open direction that we believe is worth looking at is that the model we work on in this case is somewhat explicit, e.g., the function is a polynomial of computable coefficients given via an oracle. In some traditional problems, the input function is typically given as a blackbox, such as Grover's search problem <cit.>, and we wish to reveal hidden properties. Suppose, instead, we are given a blackbox function that computes some analytical function; then, how do we extract the hidden properties, such as the convexity of the function in some domain? We note that there are two prior relevant works such as <cit.>, where the author considered the gradient estimation, and <cit.>, where the authors considered a sampling problem from a blackbox oracle that computes the value of given function within some domain. While there are seemingly overlaps, we do not see a direct solution to our case, thereby we leave the challenge for future investigation.
§ ACKNOWLEDGEMENT
We thank Hiroki Sukeno and Shuyu Zhang for carefully reading and detailed feedback on the manuscript. This work was supported by the U.S. Department of Energy, Office of Science, Advanced
Scientific Computing Research under Award Number DE-SC-0012704.
We also acknowledge the support of a Seed Grant from
Stony Brook University’s Office of the Vice President for Research and the Center for Distributed Quantum Processing.
unsrt
§ PRELIMINARIES
Here, we summarize the main recipes of our work. We keep their statements brief but precise for simplicity, with their proofs/ constructions referred to in their original works.
<cit.>
Let A be some Hermitian matrix of size N × N whose matrix norm |A| < 1. Let a unitary U have the following form:
U = [ A ·; · ·; ].
Then U is said to be an exact block encoding of matrix A. Equivalently, we can write:
U = |0⟩⟨0|⊗ A + ⋯,
where | 0⟩ refers to the ancilla system required for the block encoding purpose. In the case where the U has the form
U = |0⟩⟨0|⊗A + ⋯,
where || A - A || ≤ϵ (with ||.|| being the matrix norm), then U is said to be an ϵ-approximated block encoding of A.
The above definition has multiple simple corollaries. First, an arbitrary unitary U block encodes itself. Suppose A is block encoded by some matrix U, then A can be block encoded in a larger matrix by simply adding ancillas (which have dimension m). Note that _m ⊗ U contains A in the top-left corner, which is a block encoding of A again by definition. Further, it is almost trivial to block encode the identity matrix of any dimension. For instance, we consider σ_z ⊗_m (for any m), which contains _m in the top-left corner.
Let ρ = _A |Φ⟩⟨Φ|, where ρ∈ℍ_B, |Φ⟩∈ℍ_A ⊗ℍ_B. Given unitary U that generates |Φ⟩ from | 0⟩_A ⊗| 0⟩_B, then there exists an efficient procedure that constructs an exact unitary block encoding of ρ.
The proof of the above lemma is given in <cit.> (see their Lemma 45).
Given the unitary block encoding of two matrices A_1 and A_2, an efficient procedure exists that constructs a unitary block encoding of A_1 A_2.
The proof of the above lemma is also given in <cit.>.
Given the unitary block encoding {U_i}_i=1^m of multiple operators {M_i}_i=1^m (assumed to be exact encoding), then, there is a procedure that produces the unitary block encoding operator of ⊗_i=1^m M_i, which requires a single use of each {U_i}_i=1^m and 𝒪(1) SWAP gates.
The above lemma is a result in <cit.>.
Given the oracle access to s-sparse matrix A of dimension n× n, then an ϵ-approximated unitary block encoding of A/s can be prepared with gate/time complexity 𝒪(log n + log^2.5(1/ϵ)).
This is also presented in <cit.>. One can also find similar construction in Ref. <cit.>.
Given unitary block encoding of multiple operators {M_i}_i=1^m. Then, there is a procedure that produces a unitary block encoding operator of ∑_i=1^m ± M_i/m in complexity 𝒪(m).
Given a block encoding of some matrix A (as in <ref>), then the block encoding of A/p, where p > 1, can be prepared with an extra 𝒪(1) cost.
§ PROOF OF LEMMA <REF>
Remind that we are trying to prove the following
Given the block encoding of ⊕_i=1^ cc^T = _ cc^T and ⊕_i=1^_i_i^T, then it is able to constructs the block encoding of the diagonal matrix ℬ with entries ℬ_ij = β_i^2 δ_ij where β_i = _i^T c
From the block encoding of ⊕_i=1^ cc^T = _ cc^T as in the above lemma, plus the block encoding of ⊕_i=1^_i _i^T, we can use lemma <ref> to construct the block encoding of their products, e.g, we would obtain the block encoding of ⊕_i=1^β_i c _i^T, denoted as U_β. Given the definition of block encoding (<ref>), we have that:
U_β| 0⟩_u |Φ⟩ = | 0⟩_u (⊕_i=1^β_i c _i^T) |Φ⟩ + |Garbage⟩
where |Garbage⟩ satisfies: | 0⟩⟨ 0|⊗·|Garbage⟩ = 0. Given that ⊕_i=1^β_i c _i^T = ∑_i=1^|i⟩⟨i|⊗β_i c _i^T and if we choose |Φ⟩ = |j⟩⊗|c⟩, we have that
⊕_i=1^β_i c _i^T ·|Φ⟩ = δ_ijβ_i β_j c
Recall further that we also have a unitary U_C such that U_C| 0⟩ = |c⟩≡ c. Therefore, the state |Φ⟩ could be created from | 0⟩⊗|j⟩ by simply applying U_C to the first register, e.g, to obtain |c⟩⊗|j⟩ and use SWAP gate to swap them, e.g, we obtain |j⟩⊗|c⟩. Denote such process as U_s.
To match the same dimension as U_β, we simply add extra register (with corresponding dimension). We have that:
⟨ 0|_u ⟨ 0|⟨i| (_u ⊗ U_C^†) U_β (_u ⊗ U_s) | 0⟩_u | 0⟩|j⟩ = δ_ijβ_i β_j
which basically is a block encoding of a diagonal matrix that contains β_i^2. Using the result from <cit.> to remove the power factor 2, we are left with block encoding of δ_ijβ_i β_j.
|
http://arxiv.org/abs/2409.03163v1 | 20240905014743 | CyberDep: Towards the Analysis of Cyber-Physical Power System Interdependencies Using Bayesian Networks and Temporal Data | [
"Leen Al Homoud",
"Katherine Davis",
"Shamina Hossain-McKenzie",
"Nicholas Jacobs"
] | eess.SY | [
"eess.SY",
"cs.SY"
] |
[]979-8-3503-7240-3/24/$31.00 2024 IEEE []
CyberDep: Towards the Analysis of Cyber-Physical Power System Interdependencies Using Bayesian Networks and Temporal Data
This work was supported by the Sandia Laboratory Directed Research and Development Project #229324 and the US Department of Energy under award DE-CR0000018.
Leen Al Homoud14, Student Member, IEEE,
Katherine Davis1, Senior Member, IEEE,
Shamina Hossain-McKenzie3, Member, IEEE,
Nicholas Jacobs3, Member, IEEE,
1Department of Electrical and Computer Engineering, Texas A&M University, College Station, TX, USA
3Sandia National Laboratories, Albuquerque, NM, USA
[email protected]
Accepted XXX. Received YYY; in original form ZZZ
===========================================================================================================================================================================================================================================================================================================================================================================
§ ABSTRACT
Modern-day power systems have become increasingly cyber-physical due to the ongoing developments to the grid that include the rise of distributed energy generation and the increase of the deployment of many cyber devices for monitoring and control, such as the Supervisory Control and Data Acquisition (SCADA) system. Such capabilities have made the power system more vulnerable to cyber-attacks that can harm the physical components of the system. As such, it is of utmost importance to study both the physical and cyber components together, focusing on characterizing and quantifying the interdependency between these components. This paper focuses on developing an algorithm, named CyberDep, for Bayesian network generation through conditional probability calculations of cyber traffic flows between system nodes. Additionally, CyberDep is implemented on the temporal data of the cyber-physical emulation of the WSCC 9-bus power system. The results of this work provide a visual representation of the probabilistic relationships within the cyber and physical components of the system, aiding in cyber-physical interdependency quantification.
Cyber-Physical Interdependencies, Cyber-Physical Power Systems, Graph Theory, Bayesian Networks, Dependency Graphs, Temporal Data
§ INTRODUCTION
Over the past decade, power systems have become increasingly recognized as cyber-physical systems. The importance of understanding the interdependency between cyber and physical components is highlighted by the many new developments in the grid, such as, but not limited to, renewable energy integration, distributed energy generation monitoring and control, and new cyber-security technology. Specifically, a power grid's Supervisory Control and Data Acquisition (SCADA) system allows for the monitoring and control of physical components in the system and communicates with devices through a variety of protocols, such as the Distributed Network Protocol 3 (DNP3) <cit.>. This protocol is one of the most widely used in electric power utilities. Such developments have made the power grid more vulnerable to cyber-attacks that target the physical components of the system at the generation, transmission, and distribution levels. Two infamous threats are the Industroyer and Industroyer2 malware that affected Ukraine in 2016 and 2022, respectively <cit.>. Both of these threats targeted electrical substations in the country, with the Industroyer malware sending SCADA commands to the field devices resulting in an hour-long power outage across Ukraine.
As such, it is now of utmost importance to analyze and study the power system as both a cyber and physical system, while taking into account how the cyber and physical components of the system are interdependent. There is a lot of emerging literature focused on understanding cyber-physical power system interdependencies <cit.>. In <cit.>, the authors study the interdependency relationship between the physical power grid and its corresponding communication network when dealing with and mitigating cascading failures. Through numerical simulations of a cyber-attack on a cyber-physical power system model, the authors found that power systems divide into clusters when facing cascading failures. These results showed that there is a correlation between system robustness and cluster size, proving that these cyber-physical clusters are still interdependent of each other, but operating separately. In <cit.>, the authors claim that interdependence is an intrinsic feature of cyber-physical systems. The authors back up this claim by characterizing cyber-physical interdependencies using correlation metrics aimed at predicting the propagation of failure following a cyber-attack on the network. Huang et al. <cit.> also study interdependencies concerning cascading failures following a mathematical estimation approach using concepts of graph theory. Other applications that utilize cyber-physical interdependency analysis in power grids include improving power system reliability modeling <cit.> and developing cyber-physical resiliency metrics <cit.>.
In addition to the literature detailed above, Bayesian Networks have also been used in cyber-physical power systems for many applications including, but not limited to, attack graph generation <cit.>, cyber threat mitigation <cit.>, scalable anomaly detection <cit.>, and risk analysis and assessment <cit.>. Sahu et al. <cit.> focused on developing a Bayesian attack graph and updating it through the use of constraint-based structural learning methods that focus on scalability and accuracy. In <cit.>, the authors develop a quantitative framework using Bayesian networks to define all possible vulnerabilities and optimize this framework to achieve mitigation of cyber-physical attacks, while Krishnamurthy et al. <cit.> focused on creating Bayesian networks of power systems to study the different cyber-physical relations between the nodes to achieve anomaly detection focused on power system scalability.
It is also important to note that this work is part of a larger effort aimed at characterizing and quantifying cyber-physical power system interdependencies <cit.>. Much of the ongoing work is focused on the development and use of a variety of graph clustering methods that aid in characterizing cyber and physical disturbances and cyber-physical interdependencies. Therefore, the work in this paper focuses on continuing these efforts through the development of a Bayesian network generation algorithm that inputs temporal data generated from the earlier work in <cit.> and outputs graph visualizations of the probabilistic relationship between different nodes in a cyber-physical power system model.
With that being said, the contributions of this paper are as follows:
* Development of a Bayesian network generation algorithm through the use of temporal data and conditional probability calculations of cyber traffic flows between system nodes.
* Application of this algorithm on the temporal data of the cyber-physical emulation of the WSCC 9-bus system <cit.> under physical, cyber, and cyber-physical disturbances.
* Visualization of the probabilistic relationships between the different system nodes, aiding in cyber-physical interdependency quantification.
§ METHODOLOGY
In this section, we first describe the temporal dataset generated by the earlier work in <cit.> and list the physical, cyber, and cyber-physical threat vectors that make up the different disturbance scenarios. Then, we focus on conceptualizing Bayesian networks and detailing the development of the generation of such networks, specifically known as Dependency Graphs.
§.§ Disturbances and Dataset Description
The HARMONIE <cit.> project focused on developing a cyber-physical response engine that generates real-time cyber-physical power system mitigations through a machine learning classification framework and automated remedial action schemes (RAS). The techniques developed were tested in a cyber-physical emulation environment built using a real-time digital simulator (RTDS) and SCEPTRE™<cit.>. SCEPTRE™<cit.> is a modeling and emulation platform developed by Sandia for emulating Industrial Control Systems (ICS). It allows for the modeling and emulation of different virtual and hardware devices, such as, but not limited to, switches, servers, and relays. It also supports power system simulations and ICS communication protocols such as DNP3.
The data used in this paper was generated as part of the different experiments that were run in the emulation environment on the WSCC 9-bus power system <cit.>, where the cyber-physical mapping of the 9-bus system is shown in Figure <ref> and the network diagram is shown in Figure <ref>. In Figure <ref>, it can be observed that the WSCC 9-bus system is divided into three substations, with Substation A containing Bus 2, Substation B containing Bus 1 (slack bus), and Substation C containing Bus 3. The fourth substation in the cyber-physical emulation is the control center, which contains the SCADA system that sends commands to the field devices. In the environment, the WSCC system is emulated as a 4-substation network, with a router connecting each substation to the rest of the network, as shown in Figure <ref>.
The three disturbances that were tested in this environment are <cit.>:
* Cyber: The cyber disturbance consisted of a Denial-of-Service (DOS) intrusion. The mitigation implemented for this threat included using firewall rules to block adversary communication.
* Physical: The physical disturbance included the loss of a generator and a branch that led to line overloading. The mitigation implemented is load shedding at two different buses using an automated remedial action schemes (AutoRAS) algorithm <cit.>.
* Cyber-Physical: This disturbance is a combination of the above disturbances with both mitigation strategies implemented.
The dataset contains four cyber-physical disruption scenarios based on the three disturbances listed above. These scenarios are run three times each and are as follows:
* Baseline: Normal system operations.
* DOS: This scenario includes only the cyber disturbance with no physical disruptions.
* No Mitigation: This scenario includes a physical disturbance as well as a cyber one that affects load shedding.
* Mitigation: This is the same as the No Mitigation scenario with an addition of the firewall rules put in place to block the cyber-attack.
§.§ Dependency Graph Generation
Dependency graphs (DGs) are a type of Bayesian network that helps represent the different cyber and physical system characteristics during normal operating conditions and under threats. Dependency graph (DG) conceptualization is provided in <cit.>, where the authors focused on developing a cyber-physical resiliency metric using graph theory concepts. For this paper, we will focus on DGs to help quantify cyber-physical interdependencies. DGs are generated through the conditional probability calculations of the frequency of communication between the different nodes using the following formula <cit.>:
P(x|P(x)) = 1 - ∏_p^i_x ∈ P(x)(1 - 1_(p_x^i)× P(p_x^i → x))
where,
P(x|P(x)) :Probability of x given P(x)
1_(p_x^i) :Indicator function, which is 1 if the condition
in parentheses holds and 0 otherwise
P(p_x^i → x) :Probability of information flow from p_x^i to x
A DG captures the relationships between the different files and processes in a network, which depend on whether there is data flow between two nodes. As such, a DG implies that if there is traffic moving from object o_i to o_j, then object o_j is dependent on object o_i. This dependency is represented by an edge on the graph, o_i → o_j. In this example, the dependency relationship is characterized by three components, which are the source, o_i, the sink, o_j, and the security contexts, cyber traffic information between nodes o_i and o_j. The nodes of a DG are modeled as binary random variables, and the edges are labeled with the frequency of communication between two different nodes, which is the calculated probability dependent on the number of system calls between each of the nodes.
System calls, syscalls in short, are the communication requests and responses made between each node. For the DNP3 communication protocol, the syscalls under consideration are Request Link Status, Read, Respond, and Direct Operate commands, explained in more detail in the DNP3 protocol primer <cit.>. A sample of a dependency graph can be seen in Figure <ref> <cit.>. The conditional probability that File F4 would be affected if a cyber-attack were to affect either Process P1 or P9 is given by <cit.>:
P(F4|P1, P9) = 1 - (1 - 0.3) × (1 - 0.8) = 0.86
Similarly, the probability that File F2 would be affected if Process P9 is affected is 0.2, and the probability that File F7 would be affected if Process P1 is affected is 0.7.
Algorithm <ref> shows the steps for generating the DGs. The datasets collected from the emulation are in JSON file format. The first step is to load the files, and then filter the traffic out for DNP3 data. DNP3 traffic is selected because it collects information on the physical components of the network as well as cyber information, thus providing a better insight into cyber-physical interdependencies. Once the input data is processed, the IP addresses are then mapped to the device names using the network topology information. The frequency of communication is then counted, and the conditional probability is then calculated. Four graphs for each of the runs are generated, as a result, for each of the four scenarios.
§ RESULTS AND DISCUSSIONS
In this section, we will discuss the results for each of the three experimental runs and their respective four cyber-physical disruption scenarios. Figures <ref>, <ref>, and <ref> display the results for runs 1, 2, and 3, respectively.
Across all three experimental runs, the baseline graphs exhibited similar patterns of behavior in which the probabilities of all the edges were equal. The probabilities amounted to 0.1 for each edge in run 1 and run 2 and 0.17 for each edge in run 3. A crucial observation is that the DOS Only, No Mitigation, and With Mitigation scenarios behaved similarly across runs 1 and 2; however, the DOS Only scenario in run 3 behaved differently. For runs 1 and 2, the DOS Only scenario shows that the highest probabilities are for the edges connecting each of loads 5 and 6 to the SCADA node. This result makes sense as this scenario consists of a DOS threat through DNP3 increasing the amount of packets traveling between the objects affected. It is also important to note that while this intrusion was cyber in nature, we were able to see the relation and the effect of a purely cyber-attack on two physical components in the network, loads 5 and 6. For the DOS Only scenario for run 3, the opposite is observed. The edges connecting loads 5 and 6 to the SCADA node have the lowest probabilities. This could be due to the fact that the DOS threat in this scenario was not implemented for the same duration of time as the other two runs. As such, further analysis of the network topology and experimental setup would be required to interpret this result.
Moving onto the No Mitigation and With Mitigation scenarios for all three runs, similar patterns and probabilities were observed for the edges in the graphs. Specifically, an important observation can be seen where the edges with the highest probabilities are the ones connecting generator 1 (the slack bus) and load 5 to the SCADA node. These results are also justified as this scenario consists of both a cyber-attack and a physical disturbance to the system affecting load shedding. What can be understood from this graph is that the DOS cyber-attack is implemented on load 5 in the network, and the physical disruption affected the load shedding setup in the power system that the SCADA node needed to send more commands to change the generation values at the slack bus (generator 1) to make up for the loss or increase in power generation. These are all valid points to discuss as, once again, the cyber-physical interdependencies can be understood from the dependency graphs (DGs).
Last but not least, observing the With Mitigation results for all runs shows us that the edges with the highest probabilities are the ones connecting loads 5 and 6 to the SCADA node with generator 1 having the second highest probability on the edge connecting it to the SCADA node. These results also make sense as there are firewall rules set up now that prevent the cyber-attack from occurring, hence the increased communication between the SCADA node and the rest of the objects to prevent the attack from occurring.
§ CONCLUSIONS AND FUTURE WORK
In conclusion, CyberDep was developed to generate dependency graphs using the temporal data of the WSCC 9-bus system and perform conditional probability calculations of cyber traffic flows between system nodes. Additionally, we can observe from the results above that the work on CyberDep aided in providing insight into cyber-physical interdependencies through quantifying and visualizing the probabilistic relationships between the different system nodes.
Future work includes the consideration of bi-directional traffic flows, integration of more datasets to include additional cyber and physical devices, such as routers and switches, and expansion to larger power systems. CyberDep can be utilized to infer and build access paths for cyber-physical threat models and generate cyber-physical kill chains.
§ ACKNOWLEDGMENTS
The authors would like to thank Christopher Goes at Sandia National Laboratories for his efforts in generating the datasets used in this work and the members of Sandia Laboratory Directed Research and Development Project #229324 for their collaborative discussions. This work was supported by the Sandia Laboratory Directed Research and Development Project #229324 and the US Department of Energy under award DE-CR0000018.
IEEEtran
|
http://arxiv.org/abs/2409.03523v1 | 20240905133410 | Euclid preparation. Simulations and nonlinearities beyond $Λ$CDM. 2. Results from non-standard simulations | [
"Euclid Collaboration",
"G. Rácz",
"M. -A. Breton",
"B. Fiorini",
"A. M. C. Le Brun",
"H. -A. Winther",
"Z. Sakr",
"L. Pizzuti",
"A. Ragagnin",
"T. Gayoux",
"E. Altamura",
"E. Carella",
"K. Pardede",
"G. Verza",
"K. Koyama",
"M. Baldi",
"A. Pourtsidou",
"F. Vernizzi",
"A. G. Adame",
"J. Adamek",
"S. Avila",
"C. Carbone",
"G. Despali",
"C. Giocoli",
"C. Hernández-Aguayo",
"F. Hassani",
"M. Kunz",
"B. Li",
"Y. Rasera",
"G. Yepes",
"V. Gonzalez-Perez",
"P. -S. Corasaniti",
"J. García-Bellido",
"N. Hamaus",
"A. Kiessling",
"M. Marinucci",
"C. Moretti",
"D. F. Mota",
"L. Piga",
"A. Pisani",
"I. Szapudi",
"P. Tallada-Crespí",
"N. Aghanim",
"S. Andreon",
"C. Baccigalupi",
"S. Bardelli",
"D. Bonino",
"E. Branchini",
"M. Brescia",
"J. Brinchmann",
"S. Camera",
"V. Capobianco",
"V. F. Cardone",
"J. Carretero",
"S. Casas",
"M. Castellano",
"G. Castignani",
"S. Cavuoti",
"A. Cimatti",
"C. Colodro-Conde",
"G. Congedo",
"C. J. Conselice",
"L. Conversi",
"Y. Copin",
"F. Courbin",
"H. M. Courtois",
"A. Da Silva",
"H. Degaudenzi",
"G. De Lucia",
"M. Douspis",
"F. Dubath",
"C. A. J. Duncan",
"X. Dupac",
"S. Dusini",
"A. Ealet",
"M. Farina",
"S. Farrens",
"S. Ferriol",
"P. Fosalba",
"M. Frailis",
"E. Franceschi",
"M. Fumana",
"S. Galeotta",
"B. Gillis",
"P. Gómez-Alvarez",
"A. Grazian",
"F. Grupp",
"S. V. H. Haugan",
"W. Holmes",
"F. Hormuth",
"A. Hornstrup",
"S. Ilić",
"K. Jahnke",
"M. Jhabvala",
"B. Joachimi",
"E. Keihänen",
"S. Kermiche",
"M. Kilbinger",
"T. Kitching",
"B. Kubik",
"H. Kurki-Suonio",
"P. B. Lilje",
"V. Lindholm",
"I. Lloro",
"G. Mainetti",
"E. Maiorano",
"O. Mansutti",
"O. Marggraf",
"K. Markovic",
"M. Martinelli",
"N. Martinet",
"F. Marulli",
"R. Massey",
"E. Medinaceli",
"S. Mei",
"Y. Mellier",
"M. Meneghetti",
"G. Meylan",
"M. Moresco",
"L. Moscardini",
"S. -M. Niemi",
"C. Padilla",
"S. Paltani",
"F. Pasian",
"K. Pedersen",
"W. J. Percival",
"V. Pettorino",
"S. Pires",
"G. Polenta",
"M. Poncet",
"L. A. Popa",
"F. Raison",
"R. Rebolo",
"A. Renzi",
"J. Rhodes",
"G. Riccio",
"E. Romelli",
"M. Roncarelli",
"R. Saglia",
"J. -C. Salvignol",
"A. G. Sánchez",
"D. Sapone",
"B. Sartoris",
"M. Schirmer",
"T. Schrabback",
"A. Secroun",
"G. Seidel",
"S. Serrano",
"C. Sirignano",
"G. Sirri",
"L. Stanco",
"J. Steinwagner",
"A. N. Taylor",
"I. Tereno",
"R. Toledo-Moreo",
"F. Torradeflot",
"I. Tutusaus",
"L. Valenziano",
"T. Vassallo",
"G. Verdoes Kleijn",
"Y. Wang",
"J. Weller",
"E. Zucca",
"A. Biviano",
"A. Boucaud",
"E. Bozzo",
"C. Burigana",
"M. Calabrese",
"D. Di Ferdinando",
"J. A. Escartin Vigo",
"G. Fabbian",
"F. Finelli",
"J. Gracia-Carpio",
"S. Matthew",
"N. Mauri",
"A. Pezzotta",
"M. Pöntinen",
"C. Porciani",
"V. Scottez",
"M. Tenti",
"M. Viel",
"M. Wiesmann",
"Y. Akrami",
"V. Allevato",
"S. Anselmi",
"M. Archidiacono",
"F. Atrio-Barandela",
"A. Balaguera-Antolinez",
"M. Ballardini",
"D. Bertacca",
"L. Blot",
"S. Borgani",
"S. Bruton",
"R. Cabanac",
"A. Calabro",
"B. Camacho Quevedo",
"A. Cappi",
"F. Caro",
"C. S. Carvalho",
"T. Castro",
"K. C. Chambers",
"S. Contarini",
"A. R. Cooray",
"B. De Caro",
"S. de la Torre",
"G. Desprez",
"A. Díaz-Sánchez",
"J. J. Diaz",
"S. Di Domizio",
"H. Dole",
"S. Escoffier",
"A. G. Ferrari",
"P. G. Ferreira",
"I. Ferrero",
"A. Fontana",
"F. Fornari",
"L. Gabarra",
"K. Ganga",
"T. Gasparetto",
"E. Gaztanaga",
"F. Giacomini",
"F. Gianotti",
"G. Gozaliasl",
"C. M. Gutierrez",
"A. Hall",
"H. Hildebrandt",
"J. Hjorth",
"A. Jimenez Muñoz",
"J. J. E. Kajava",
"V. Kansal",
"D. Karagiannis",
"C. C. Kirkpatrick",
"F. Lacasa",
"J. Le Graet",
"L. Legrand",
"J. Lesgourgues",
"T. I. Liaudat",
"A. Loureiro",
"J. Macias-Perez",
"G. Maggio",
"M. Magliocchetti",
"F. Mannucci",
"R. Maoli",
"C. J. A. P. Martins",
"L. Maurin",
"R. B. Metcalf",
"M. Miluzio",
"P. Monaco",
"A. Montoro",
"A. Mora",
"G. Morgante",
"S. Nadathur",
"Nicholas A. Walton",
"L. Patrizii",
"V. Popa",
"D. Potter",
"P. Reimberg",
"I. Risso",
"P. -F. Rocci",
"M. Sahlén",
"A. Schneider",
"M. Sereno",
"A. Silvestri",
"A. Spurio Mancini",
"J. Stadel",
"K. Tanidis",
"C. Tao",
"N. Tessore",
"G. Testera",
"R. Teyssier",
"S. Toft",
"S. Tosi",
"A. Troja",
"M. Tucci",
"C. Valieri",
"J. Valiviita",
"D. Vergani",
"P. Vielzeuf"
] | astro-ph.CO | [
"astro-ph.CO"
] |
Simulations and nonlinearities beyond ΛCDM. 2. Results from non-standard simulations
Simulations and nonlinearities beyond ΛCDM – Results from non-standard simulations
Jet Propulsion Laboratory, California Institute of Technology, 4800 Oak Grove Drive, Pasadena, CA, 91109, USA
Department of Physics, P.O. Box 64, 00014 University of Helsinki, Finland
Institute of Space Sciences (ICE, CSIC), Campus UAB, Carrer de Can Magrans, s/n, 08193 Barcelona, Spain
Institut de Ciencies de l'Espai (IEEC-CSIC), Campus UAB, Carrer de Can Magrans, s/n Cerdanyola del Vallés, 08193 Barcelona, Spain
Laboratoire Univers et Théorie, Observatoire de Paris, Université PSL, Université Paris Cité, CNRS, 92190 Meudon, France
Institute of Cosmology and Gravitation, University of Portsmouth, Portsmouth PO1 3FX, UK
School of Physics and Astronomy, Queen Mary University of London, Mile End Road, London E1 4NS, UK
Institut d'Astrophysique de Paris, UMR 7095, CNRS, and Sorbonne Université, 98 bis boulevard Arago, 75014 Paris, France
Institute of Theoretical Astrophysics, University of Oslo, P.O. Box 1029 Blindern, 0315 Oslo, Norway
Institut für Theoretische Physik, University of Heidelberg, Philosophenweg 16, 69120 Heidelberg, Germany
Institut de Recherche en Astrophysique et Planétologie (IRAP), Université de Toulouse, CNRS, UPS, CNES, 14 Av. Edouard Belin, 31400 Toulouse, France
Université St Joseph; Faculty of Sciences, Beirut, Lebanon
Dipartimento di Fisica “G. Occhialini", Università degli Studi di Milano Bicocca, Piazza della Scienza 3, 20126 Milano, Italy
INAF-Osservatorio di Astrofisica e Scienza dello Spazio di Bologna, Via Piero Gobetti 93/3, 40129 Bologna, Italy
IFPU, Institute for Fundamental Physics of the Universe, via Beirut 2, 34151 Trieste, Italy
Dipartimento di Fisica e Astronomia "Augusto Righi" - Alma Mater Studiorum Università di Bologna, via Piero Gobetti 93/2, 40129 Bologna, Italy
ICSC - Centro Nazionale di Ricerca in High Performance Computing, Big Data e Quantum Computing, Via Magnanelli 2, Bologna, Italy
Jodrell Bank Centre for Astrophysics, Department of Physics and Astronomy, University of Manchester, Oxford Road, Manchester M13 9PL, UK
INAF-IASF Milano, Via Alfonso Corti 12, 20133 Milano, Italy
Dipartimento di Fisica "Aldo Pontremoli", Università degli Studi di Milano, Via Celoria 16, 20133 Milano, Italy
INFN Gruppo Collegato di Parma, Viale delle Scienze 7/A 43124 Parma, Italy
SISSA, International School for Advanced Studies, Via Bonomea 265, 34136 Trieste TS, Italy
International Centre for Theoretical Physics (ICTP), Strada Costiera 11, 34151 Trieste, Italy
Center for Cosmology and Particle Physics, Department of Physics, New York University, New York, NY 10003, USA
Center for Computational Astrophysics, Flatiron Institute, 162 5th Avenue, 10010, New York, NY, USA
Dipartimento di Fisica e Astronomia, Università di Bologna, Via Gobetti 93/2, 40129 Bologna, Italy
INFN-Sezione di Bologna, Viale Berti Pichat 6/2, 40127 Bologna, Italy
Institute for Astronomy, University of Edinburgh, Royal Observatory, Blackford Hill, Edinburgh EH9 3HJ, UK
Higgs Centre for Theoretical Physics, School of Physics and Astronomy, The University of Edinburgh, Edinburgh EH9 3FD, UK
Institut de Physique Théorique, CEA, CNRS, Université Paris-Saclay 91191 Gif-sur-Yvette Cedex, France
Departamento de Física Teórica, Facultad de Ciencias, Universidad Autónoma de Madrid, 28049 Cantoblanco, Madrid, Spain
Instituto de Física Teórica UAM-CSIC, Campus de Cantoblanco, 28049 Madrid, Spain
Centro de Investigación Avanzada en Física Fundamental (CIAFF), Facultad de Ciencias, Universidad Autónoma de Madrid, 28049 Madrid, Spain
Department of Astrophysics, University of Zurich, Winterthurerstrasse 190, 8057 Zurich, Switzerland
Institut de Física d'Altes Energies (IFAE), The Barcelona Institute of Science and Technology, Campus UAB, 08193 Bellaterra (Barcelona), Spain
Istituto Nazionale di Fisica Nucleare, Sezione di Bologna, Via Irnerio 46, 40126 Bologna, Italy
Max-Planck-Institut für Astrophysik, Karl-Schwarzschild-Str. 1, 85748 Garching, Germany
Université de Genève, Département de Physique Théorique and Centre for Astroparticle Physics, 24 quai Ernest-Ansermet, CH-1211 Genève 4, Switzerland
Department of Physics, Institute for Computational Cosmology, Durham University, South Road, DH1 3LE, UK
Institut universitaire de France (IUF), 1 rue Descartes, 75231 PARIS CEDEX 05, France
Universitäts-Sternwarte München, Fakultät für Physik, Ludwig-Maximilians-Universität München, Scheinerstrasse 1, 81679 München, Germany
Excellence Cluster ORIGINS, Boltzmannstrasse 2, 85748 Garching, Germany
Dipartimento di Fisica e Astronomia "G. Galilei", Università di Padova, Via Marzolo 8, 35131 Padova, Italy
INFN-Padova, Via Marzolo 8, 35131 Padova, Italy
INAF-Osservatorio Astronomico di Trieste, Via G. B. Tiepolo 11, 34143 Trieste, Italy
INFN, Sezione di Trieste, Via Valerio 2, 34127 Trieste TS, Italy
Dipartimento di Scienze Matematiche, Fisiche e Informatiche, Università di Parma, Viale delle Scienze 7/A 43124 Parma, Italy
Aix-Marseille Université, CNRS/IN2P3, CPPM, Marseille, France
Department of Astrophysical Sciences, Peyton Hall, Princeton University, Princeton, NJ 08544, USA
The Cooper Union for the Advancement of Science and Art, 41 Cooper Square, New York, NY 10003, USA
Institute for Astronomy, University of Hawaii, 2680 Woodlawn Drive, Honolulu, HI 96822, USA
Centro de Investigaciones Energéticas, Medioambientales y Tecnológicas (CIEMAT), Avenida Complutense 40, 28040 Madrid, Spain
Port d'Informació Científica, Campus UAB, C. Albareda s/n, 08193 Bellaterra (Barcelona), Spain
Université Paris-Saclay, CNRS, Institut d'astrophysique spatiale, 91405, Orsay, France
INAF-Osservatorio Astronomico di Brera, Via Brera 28, 20122 Milano, Italy
INAF-Osservatorio Astrofisico di Torino, Via Osservatorio 20, 10025 Pino Torinese (TO), Italy
Dipartimento di Fisica, Università di Genova, Via Dodecaneso 33, 16146, Genova, Italy
INFN-Sezione di Genova, Via Dodecaneso 33, 16146, Genova, Italy
Department of Physics "E. Pancini", University Federico II, Via Cinthia 6, 80126, Napoli, Italy
INAF-Osservatorio Astronomico di Capodimonte, Via Moiariello 16, 80131 Napoli, Italy
INFN section of Naples, Via Cinthia 6, 80126, Napoli, Italy
Instituto de Astrofísica e Ciências do Espaço, Universidade do Porto, CAUP, Rua das Estrelas, PT4150-762 Porto, Portugal
Faculdade de Ciências da Universidade do Porto, Rua do Campo de Alegre, 4150-007 Porto, Portugal
Dipartimento di Fisica, Università degli Studi di Torino, Via P. Giuria 1, 10125 Torino, Italy
INFN-Sezione di Torino, Via P. Giuria 1, 10125 Torino, Italy
INAF-Osservatorio Astronomico di Roma, Via Frascati 33, 00078 Monteporzio Catone, Italy
INFN-Sezione di Roma, Piazzale Aldo Moro, 2 - c/o Dipartimento di Fisica, Edificio G. Marconi, 00185 Roma, Italy
Institute for Theoretical Particle Physics and Cosmology (TTK), RWTH Aachen University, 52056 Aachen, Germany
Dipartimento di Fisica e Astronomia "Augusto Righi" - Alma Mater Studiorum Università di Bologna, Viale Berti Pichat 6/2, 40127 Bologna, Italy
Instituto de Astrofísica de Canarias, Calle Vía Láctea s/n, 38204, San Cristóbal de La Laguna, Tenerife, Spain
European Space Agency/ESRIN, Largo Galileo Galilei 1, 00044 Frascati, Roma, Italy
ESAC/ESA, Camino Bajo del Castillo, s/n., Urb. Villafranca del Castillo, 28692 Villanueva de la Cañada, Madrid, Spain
Université Claude Bernard Lyon 1, CNRS/IN2P3, IP2I Lyon, UMR 5822, Villeurbanne, F-69100, France
Institute of Physics, Laboratory of Astrophysics, Ecole Polytechnique Fédérale de Lausanne (EPFL), Observatoire de Sauverny, 1290 Versoix, Switzerland
UCB Lyon 1, CNRS/IN2P3, IUF, IP2I Lyon, 4 rue Enrico Fermi, 69622 Villeurbanne, France
Departamento de Física, Faculdade de Ciências, Universidade de Lisboa, Edifício C8, Campo Grande, PT1749-016 Lisboa, Portugal
Instituto de Astrofísica e Ciências do Espaço, Faculdade de Ciências, Universidade de Lisboa, Campo Grande, 1749-016 Lisboa, Portugal
Department of Astronomy, University of Geneva, ch. d'Ecogia 16, 1290 Versoix, Switzerland
INAF-Istituto di Astrofisica e Planetologia Spaziali, via del Fosso del Cavaliere, 100, 00100 Roma, Italy
Université Paris-Saclay, Université Paris Cité, CEA, CNRS, AIM, 91191, Gif-sur-Yvette, France
Institut d'Estudis Espacials de Catalunya (IEEC), Edifici RDIT, Campus UPC, 08860 Castelldefels, Barcelona, Spain
FRACTAL S.L.N.E., calle Tulipán 2, Portal 13 1A, 28231, Las Rozas de Madrid, Spain
INAF-Osservatorio Astronomico di Padova, Via dell'Osservatorio 5, 35122 Padova, Italy
Max Planck Institute for Extraterrestrial Physics, Giessenbachstr. 1, 85748 Garching, Germany
Felix Hormuth Engineering, Goethestr. 17, 69181 Leimen, Germany
Technical University of Denmark, Elektrovej 327, 2800 Kgs. Lyngby, Denmark
Cosmic Dawn Center (DAWN), Denmark
Université Paris-Saclay, CNRS/IN2P3, IJCLab, 91405 Orsay, France
Max-Planck-Institut für Astronomie, Königstuhl 17, 69117 Heidelberg, Germany
NASA Goddard Space Flight Center, Greenbelt, MD 20771, USA
Department of Physics and Astronomy, University College London, Gower Street, London WC1E 6BT, UK
Department of Physics and Helsinki Institute of Physics, Gustaf Hällströmin katu 2, 00014 University of Helsinki, Finland
Mullard Space Science Laboratory, University College London, Holmbury St Mary, Dorking, Surrey RH5 6NT, UK
Helsinki Institute of Physics, Gustaf Hällströmin katu 2, University of Helsinki, Helsinki, Finland
NOVA optical infrared instrumentation group at ASTRON, Oude Hoogeveensedijk 4, 7991PD, Dwingeloo, The Netherlands
Centre de Calcul de l'IN2P3/CNRS, 21 avenue Pierre de Coubertin 69627 Villeurbanne Cedex, France
Universität Bonn, Argelander-Institut für Astronomie, Auf dem Hügel 71, 53121 Bonn, Germany
Aix-Marseille Université, CNRS, CNES, LAM, Marseille, France
Université Paris Cité, CNRS, Astroparticule et Cosmologie, 75013 Paris, France
Institut d'Astrophysique de Paris, 98bis Boulevard Arago, 75014, Paris, France
European Space Agency/ESTEC, Keplerlaan 1, 2201 AZ Noordwijk, The Netherlands
Department of Physics and Astronomy, University of Aarhus, Ny Munkegade 120, DK-8000 Aarhus C, Denmark
Waterloo Centre for Astrophysics, University of Waterloo, Waterloo, Ontario N2L 3G1, Canada
Department of Physics and Astronomy, University of Waterloo, Waterloo, Ontario N2L 3G1, Canada
Perimeter Institute for Theoretical Physics, Waterloo, Ontario N2L 2Y5, Canada
Space Science Data Center, Italian Space Agency, via del Politecnico snc, 00133 Roma, Italy
Centre National d'Etudes Spatiales – Centre spatial de Toulouse, 18 avenue Edouard Belin, 31401 Toulouse Cedex 9, France
Institute of Space Science, Str. Atomistilor, nr. 409 Măgurele, Ilfov, 077125, Romania
Departamento de Astrofísica, Universidad de La Laguna, 38206, La Laguna, Tenerife, Spain
Departamento de Física, FCFM, Universidad de Chile, Blanco Encalada 2008, Santiago, Chile
Universität Innsbruck, Institut für Astro- und Teilchenphysik, Technikerstr. 25/8, 6020 Innsbruck, Austria
Satlantis, University Science Park, Sede Bld 48940, Leioa-Bilbao, Spain
Instituto de Astrofísica e Ciências do Espaço, Faculdade de Ciências, Universidade de Lisboa, Tapada da Ajuda, 1349-018 Lisboa, Portugal
Universidad Politécnica de Cartagena, Departamento de Electrónica y Tecnología de Computadoras, Plaza del Hospital 1, 30202 Cartagena, Spain
INFN-Bologna, Via Irnerio 46, 40126 Bologna, Italy
Kapteyn Astronomical Institute, University of Groningen, PO Box 800, 9700 AV Groningen, The Netherlands
Infrared Processing and Analysis Center, California Institute of Technology, Pasadena, CA 91125, USA
INAF, Istituto di Radioastronomia, Via Piero Gobetti 101, 40129 Bologna, Italy
Astronomical Observatory of the Autonomous Region of the Aosta Valley (OAVdA), Loc. Lignan 39, I-11020, Nus (Aosta Valley), Italy
Institute of Astronomy, University of Cambridge, Madingley Road, Cambridge CB3 0HA, UK
School of Physics and Astronomy, Cardiff University, The Parade, Cardiff, CF24 3AA, UK
Junia, EPA department, 41 Bd Vauban, 59800 Lille, France
CERCA/ISO, Department of Physics, Case Western Reserve University, 10900 Euclid Avenue, Cleveland, OH 44106, USA
INFN-Sezione di Milano, Via Celoria 16, 20133 Milano, Italy
Departamento de Física Fundamental. Universidad de Salamanca. Plaza de la Merced s/n. 37008 Salamanca, Spain
Dipartimento di Fisica e Scienze della Terra, Università degli Studi di Ferrara, Via Giuseppe Saragat 1, 44122 Ferrara, Italy
Istituto Nazionale di Fisica Nucleare, Sezione di Ferrara, Via Giuseppe Saragat 1, 44122 Ferrara, Italy
Kavli Institute for the Physics and Mathematics of the Universe (WPI), University of Tokyo, Kashiwa, Chiba 277-8583, Japan
Dipartimento di Fisica - Sezione di Astronomia, Università di Trieste, Via Tiepolo 11, 34131 Trieste, Italy
Minnesota Institute for Astrophysics, University of Minnesota, 116 Church St SE, Minneapolis, MN 55455, USA
Université Côte d'Azur, Observatoire de la Côte d'Azur, CNRS, Laboratoire Lagrange, Bd de l'Observatoire, CS 34229, 06304 Nice cedex 4, France
Department of Physics & Astronomy, University of California Irvine, Irvine CA 92697, USA
Department of Astronomy & Physics and Institute for Computational Astrophysics, Saint Mary's University, 923 Robie Street, Halifax, Nova Scotia, B3H 3C3, Canada
Departamento Física Aplicada, Universidad Politécnica de Cartagena, Campus Muralla del Mar, 30202 Cartagena, Murcia, Spain
Instituto de Astrofísica de Canarias (IAC); Departamento de Astrofísica, Universidad de La Laguna (ULL), 38200, La Laguna, Tenerife, Spain
Department of Physics, Oxford University, Keble Road, Oxford OX1 3RH, UK
Department of Computer Science, Aalto University, PO Box 15400, Espoo, FI-00 076, Finland
Instituto de Astrofísica de Canarias, c/ Via Lactea s/n, La Laguna E-38200, Spain. Departamento de Astrofísica de la Universidad de La Laguna, Avda. Francisco Sanchez, La Laguna, E-38200, Spain
Ruhr University Bochum, Faculty of Physics and Astronomy, Astronomical Institute (AIRUB), German Centre for Cosmological Lensing (GCCL), 44780 Bochum, Germany
DARK, Niels Bohr Institute, University of Copenhagen, Jagtvej 155, 2200 Copenhagen, Denmark
Univ. Grenoble Alpes, CNRS, Grenoble INP, LPSC-IN2P3, 53, Avenue des Martyrs, 38000, Grenoble, France
Department of Physics and Astronomy, Vesilinnantie 5, 20014 University of Turku, Finland
Serco for European Space Agency (ESA), Camino bajo del Castillo, s/n, Urbanizacion Villafranca del Castillo, Villanueva de la Cañada, 28692 Madrid, Spain
ARC Centre of Excellence for Dark Matter Particle Physics, Melbourne, Australia
Centre for Astrophysics & Supercomputing, Swinburne University of Technology, Hawthorn, Victoria 3122, Australia
Department of Physics and Astronomy, University of the Western Cape, Bellville, Cape Town, 7535, South Africa
Université Libre de Bruxelles (ULB), Service de Physique Théorique CP225, Boulevard du Triophe, 1050 Bruxelles, Belgium
ICTP South American Institute for Fundamental Research, Instituto de Física Teórica, Universidade Estadual Paulista, São Paulo, Brazil
IRFU, CEA, Université Paris-Saclay 91191 Gif-sur-Yvette Cedex, France
Oskar Klein Centre for Cosmoparticle Physics, Department of Physics, Stockholm University, Stockholm, SE-106 91, Sweden
Astrophysics Group, Blackett Laboratory, Imperial College London, London SW7 2AZ, UK
INAF-Osservatorio Astrofisico di Arcetri, Largo E. Fermi 5, 50125, Firenze, Italy
Dipartimento di Fisica, Sapienza Università di Roma, Piazzale Aldo Moro 2, 00185 Roma, Italy
Centro de Astrofísica da Universidade do Porto, Rua das Estrelas, 4150-762 Porto, Portugal
HE Space for European Space Agency (ESA), Camino bajo del Castillo, s/n, Urbanizacion Villafranca del Castillo, Villanueva de la Cañada, 28692 Madrid, Spain
Aurora Technology for European Space Agency (ESA), Camino bajo del Castillo, s/n, Urbanizacion Villafranca del Castillo, Villanueva de la Cañada, 28692 Madrid, Spain
Dipartimento di Fisica, Università degli studi di Genova, and INFN-Sezione di Genova, via Dodecaneso 33, 16146, Genova, Italy
Theoretical astrophysics, Department of Physics and Astronomy, Uppsala University, Box 515, 751 20 Uppsala, Sweden
Institute Lorentz, Leiden University, Niels Bohrweg 2, 2333 CA Leiden, The Netherlands
Department of Physics, Royal Holloway, University of London, TW20 0EX, UK
Cosmic Dawn Center (DAWN)
Niels Bohr Institute, University of Copenhagen, Jagtvej 128, 2200 Copenhagen, Denmark
Euclid Collaboration
The mission will measure cosmological parameters with unprecedented precision. To distinguish between cosmological models, it is essential to generate realistic mock observables from cosmological simulations that were run in both the standard Λ-cold-dark-matter () paradigm and in many non-standard models beyond .
We present the scientific results from a suite of cosmological N-body simulations using non-standard models including dynamical dark energy, k-essence, interacting dark energy, modified gravity, massive neutrinos, and primordial non-Gaussianities. We investigate how these models affect the large-scale-structure formation and evolution in addition to providing synthetic observables that can be used to test and constrain these models with data.
We developed a custom pipeline based on the halo finder and the large-scale structure toolkit to analyse the particle output of non-standard simulations and generate mock observables such as halo and void catalogues, mass density fields, and power spectra in a consistent way. We compare these observables with those from the standard model and quantify the deviations.
We find that non-standard cosmological models can leave significant imprints on the synthetic observables that we have generated. Our results demonstrate that non-standard cosmological N-body simulations provide valuable insights into the physics of dark energy and dark matter, which is essential to maximising the scientific return of .
preparation
Euclid Collaboration: G. Rá[email protected]<ref>,<ref>
M.-A. Breton<ref>,<ref>,<ref>
B. Fiorini0000-0002-0092-4321<ref>,<ref>
A. M. C. Le Brun0000-0002-0936-4594<ref>,<ref>
H.-A. Winther0000-0002-6325-2710<ref>
Z. Sakr0000-0002-4823-3757<ref>,<ref>,<ref>
L. Pizzuti0000-0001-5654-7580<ref>
A. Ragagnin0000-0002-8106-2742<ref>,<ref>,<ref>,<ref>
T. Gayoux0009-0008-9527-1490<ref>
E. Altamura0000-0001-6973-1897<ref>
E. Carella<ref>,<ref>
K. Pardede0000-0002-7728-8220<ref>,<ref>,<ref>,<ref>
G. Verza0000-0002-1886-8348<ref>,<ref>
K. Koyama0000-0001-6727-6915<ref>
M. Baldi0000-0003-4145-1943<ref>,<ref>,<ref>
A. Pourtsidou0000-0001-9110-5550<ref>,<ref>
F. Vernizzi0000-0003-3426-2802<ref>
A. G. Adame0009-0005-0594-9391<ref>,<ref>,<ref>
J. Adamek0000-0002-0723-6740<ref>
S. Avila0000-0001-5043-3662<ref>
C. Carbone0000-0003-0125-3563<ref>
G. Despali0000-0001-6150-4112<ref>,<ref>,<ref>
C. Giocoli0000-0002-9590-7961<ref>,<ref>
C. Hernández-Aguayo0000-0001-9921-8832<ref>
F. Hassani0000-0003-2640-4460<ref>
M. Kunz0000-0002-3052-7394<ref>
B. Li0000-0002-1098-9188<ref>
Y. Rasera0000-0003-3424-6941<ref>,<ref>
G. Yepes0000-0001-5031-7936<ref>,<ref>
V. Gonzalez-Perez0000-0001-9938-2755<ref>
P.-S. Corasaniti0000-0002-6386-7846<ref>
J. García-Bellido0000-0002-9370-8360<ref>
N. Hamaus0000-0002-0876-2101<ref>,<ref>
A. Kiessling0000-0002-2590-1273<ref>
M. Marinucci0000-0003-1159-3756<ref>,<ref>
C. Moretti0000-0003-3314-8936<ref>,<ref>,<ref>,<ref>,<ref>
D. F. Mota0000-0003-3141-142X<ref>
L. Piga0000-0003-2221-7406<ref>,<ref>,<ref>
A. Pisani0000-0002-6146-4437<ref>,<ref>,<ref>,<ref>
I. Szapudi0000-0003-2274-0301<ref>
P. Tallada-Crespí0000-0002-1336-8328<ref>,<ref>
N. Aghanim0000-0002-6688-8992<ref>
S. Andreon0000-0002-2041-8784<ref>
C. Baccigalupi0000-0002-8211-1630<ref>,<ref>,<ref>,<ref>
S. Bardelli0000-0002-8900-0298<ref>
D. Bonino0000-0002-3336-9977<ref>
E. Branchini0000-0002-0808-6908<ref>,<ref>,<ref>
M. Brescia0000-0001-9506-5680<ref>,<ref>,<ref>
J. Brinchmann0000-0003-4359-8797<ref>,<ref>
S. Camera0000-0003-3399-3574<ref>,<ref>,<ref>
V. Capobianco0000-0002-3309-7692<ref>
V. F. Cardone<ref>,<ref>
J. Carretero0000-0002-3130-0204<ref>,<ref>
S. Casas0000-0002-4751-5138<ref>
M. Castellano0000-0001-9875-8263<ref>
G. Castignani0000-0001-6831-0687<ref>
S. Cavuoti0000-0002-3787-4196<ref>,<ref>
A. Cimatti<ref>
C. Colodro-Conde<ref>
G. Congedo0000-0003-2508-0046<ref>
C. J. Conselice0000-0003-1949-7638<ref>
L. Conversi0000-0002-6710-8476<ref>,<ref>
Y. Copin0000-0002-5317-7518<ref>
F. Courbin0000-0003-0758-6510<ref>
H. M. Courtois0000-0003-0509-1776<ref>
A. Da Silva0000-0002-6385-1609<ref>,<ref>
H. Degaudenzi0000-0002-5887-6799<ref>
G. De Lucia0000-0002-6220-9104<ref>
M. Douspis0000-0003-4203-3954<ref>
F. Dubath0000-0002-6533-2810<ref>
C. A. J. Duncan<ref>
X. Dupac<ref>
S. Dusini0000-0002-1128-0664<ref>
A. Ealet0000-0003-3070-014X<ref>
M. Farina0000-0002-3089-7846<ref>
S. Farrens0000-0002-9594-9387<ref>
S. Ferriol<ref>
P. Fosalba0000-0002-1510-5214<ref>,<ref>
M. Frailis0000-0002-7400-2135<ref>
E. Franceschi0000-0002-0585-6591<ref>
M. Fumana0000-0001-6787-5950<ref>
S. Galeotta0000-0002-3748-5115<ref>
B. Gillis0000-0002-4478-1270<ref>
P. Gómez-Alvarez0000-0002-8594-5358<ref>,<ref>
A. Grazian0000-0002-5688-0663<ref>
F. Grupp<ref>,<ref>
S. V. H. Haugan0000-0001-9648-7260<ref>
W. Holmes<ref>
F. Hormuth<ref>
A. Hornstrup0000-0002-3363-0936<ref>,<ref>
S. Ilić0000-0003-4285-9086<ref>,<ref>
K. Jahnke0000-0003-3804-2137<ref>
M. Jhabvala<ref>
B. Joachimi0000-0001-7494-1303<ref>
E. Keihänen0000-0003-1804-7715<ref>
S. Kermiche0000-0002-0302-5735<ref>
M. Kilbinger0000-0001-9513-7138<ref>
T. Kitching0000-0002-4061-4598<ref>
B. Kubik0009-0006-5823-4880<ref>
H. Kurki-Suonio0000-0002-4618-3063<ref>,<ref>
P. B. Lilje0000-0003-4324-7794<ref>
V. Lindholm0000-0003-2317-5471<ref>,<ref>
I. Lloro<ref>
G. Mainetti0000-0003-2384-2377<ref>
E. Maiorano0000-0003-2593-4355<ref>
O. Mansutti0000-0001-5758-4658<ref>
O. Marggraf0000-0001-7242-3852<ref>
K. Markovic0000-0001-6764-073X<ref>
M. Martinelli0000-0002-6943-7732<ref>,<ref>
N. Martinet0000-0003-2786-7790<ref>
F. Marulli0000-0002-8850-0303<ref>,<ref>,<ref>
R. Massey0000-0002-6085-3780<ref>
E. Medinaceli0000-0002-4040-7783<ref>
S. Mei0000-0002-2849-559X<ref>
Y. Mellier<ref>,<ref>
M. Meneghetti0000-0003-1225-7084<ref>,<ref>
G. Meylan<ref>
M. Moresco0000-0002-7616-7136<ref>,<ref>
L. Moscardini0000-0002-3473-6716<ref>,<ref>,<ref>
S.-M. Niemi<ref>
C. Padilla0000-0001-7951-0166<ref>
S. Paltani0000-0002-8108-9179<ref>
F. Pasian0000-0002-4869-3227<ref>
K. Pedersen<ref>
W. J. Percival0000-0002-0644-5727<ref>,<ref>,<ref>
V. Pettorino<ref>
S. Pires0000-0002-0249-2104<ref>
G. Polenta0000-0003-4067-9196<ref>
M. Poncet<ref>
L. A. Popa<ref>
F. Raison0000-0002-7819-6918<ref>
R. Rebolo<ref>,<ref>
A. Renzi0000-0001-9856-1970<ref>,<ref>
J. Rhodes0000-0002-4485-8549<ref>
G. Riccio<ref>
E. Romelli0000-0003-3069-9222<ref>
M. Roncarelli0000-0001-9587-7822<ref>
R. Saglia0000-0003-0378-7032<ref>,<ref>
J.-C. Salvignol<ref>
A. G. Sánchez0000-0003-1198-831X<ref>
D. Sapone0000-0001-7089-4503<ref>
B. Sartoris0000-0003-1337-5269<ref>,<ref>
M. Schirmer0000-0003-2568-9994<ref>
T. Schrabback0000-0002-6987-7834<ref>
A. Secroun0000-0003-0505-3710<ref>
G. Seidel0000-0003-2907-353X<ref>
S. Serrano0000-0002-0211-2861<ref>,<ref>,<ref>
C. Sirignano0000-0002-0995-7146<ref>,<ref>
G. Sirri0000-0003-2626-2853<ref>
L. Stanco0000-0002-9706-5104<ref>
J. Steinwagner<ref>
A. N. Taylor<ref>
I. Tereno<ref>,<ref>
R. Toledo-Moreo0000-0002-2997-4859<ref>
F. Torradeflot0000-0003-1160-1517<ref>,<ref>
I. Tutusaus0000-0002-3199-0399<ref>
L. Valenziano0000-0002-1170-0104<ref>,<ref>
T. Vassallo0000-0001-6512-6358<ref>,<ref>
G. Verdoes Kleijn0000-0001-5803-2580<ref>
Y. Wang0000-0002-4749-2984<ref>
J. Weller0000-0002-8282-2010<ref>,<ref>
E. Zucca0000-0002-5845-8132<ref>
A. Biviano0000-0002-0857-0732<ref>,<ref>
A. Boucaud0000-0001-7387-2633<ref>
E. Bozzo0000-0002-8201-1525<ref>
C. Burigana0000-0002-3005-5796<ref>,<ref>
M. Calabrese0000-0002-2637-2422<ref>,<ref>
D. Di Ferdinando<ref>
J. A. Escartin Vigo<ref>
G. Fabbian0000-0002-3255-4695<ref>,<ref>,<ref>
F. Finelli0000-0002-6694-3269<ref>,<ref>
J. Gracia-Carpio<ref>
S. Matthew0000-0001-8448-1697<ref>
N. Mauri0000-0001-8196-1548<ref>,<ref>
A. Pezzotta0000-0003-0726-2268<ref>
M. Pöntinen0000-0001-5442-2530<ref>
C. Porciani0000-0002-7797-2508<ref>
V. Scottez<ref>,<ref>
M. Tenti0000-0002-4254-5901<ref>
M. Viel0000-0002-2642-5707<ref>,<ref>,<ref>,<ref>,<ref>
M. Wiesmann0009-0000-8199-5860<ref>
Y. Akrami0000-0002-2407-7956<ref>,<ref>
V. Allevato0000-0001-7232-5152<ref>
S. Anselmi0000-0002-3579-9583<ref>,<ref>,<ref>
M. Archidiacono0000-0003-4952-9012<ref>,<ref>
F. Atrio-Barandela0000-0002-2130-2513<ref>
A. Balaguera-Antolinez0000-0001-5028-3035<ref>,<ref>
M. Ballardini0000-0003-4481-3559<ref>,<ref>,<ref>
D. Bertacca0000-0002-2490-7139<ref>,<ref>,<ref>
L. Blot0000-0002-9622-7167<ref>,<ref>
S. Borgani0000-0001-6151-6439<ref>,<ref>,<ref>,<ref>
S. Bruton0000-0002-6503-5218<ref>
R. Cabanac0000-0001-6679-2600<ref>
A. Calabro0000-0003-2536-1614<ref>
B. Camacho Quevedo0000-0002-8789-4232<ref>,<ref>
A. Cappi<ref>,<ref>
F. Caro<ref>
C. S. Carvalho<ref>
T. Castro0000-0002-6292-3228<ref>,<ref>,<ref>,<ref>
K. C. Chambers0000-0001-6965-7789<ref>
S. Contarini0000-0002-9843-723X<ref>
A. R. Cooray0000-0002-3892-0190<ref>
B. De Caro<ref>
S. de la Torre<ref>
G. Desprez<ref>
A. Díaz-Sánchez0000-0003-0748-4768<ref>
J. J. Diaz<ref>
S. Di Domizio0000-0003-2863-5895<ref>,<ref>
H. Dole0000-0002-9767-3839<ref>
S. Escoffier0000-0002-2847-7498<ref>
A. G. Ferrari0009-0005-5266-4110<ref>,<ref>
P. G. Ferreira0000-0002-3021-2851<ref>
I. Ferrero0000-0002-1295-1132<ref>
A. Fontana0000-0003-3820-2823<ref>
F. Fornari0000-0003-2979-6738<ref>
L. Gabarra0000-0002-8486-8856<ref>
K. Ganga0000-0001-8159-8208<ref>
T. Gasparetto0000-0002-7913-4866<ref>
E. Gaztanaga0000-0001-9632-0815<ref>,<ref>,<ref>
F. Giacomini0000-0002-3129-2814<ref>
F. Gianotti0000-0003-4666-119X<ref>
G. Gozaliasl0000-0002-0236-919X<ref>,<ref>
C. M. Gutierrez0000-0001-7854-783X<ref>
A. Hall0000-0002-3139-8651<ref>
H. Hildebrandt0000-0002-9814-3338<ref>
J. Hjorth0000-0002-4571-2306<ref>
A. Jimenez Muñoz0009-0004-5252-185X<ref>
J. J. E. Kajava0000-0002-3010-8333<ref>,<ref>
V. Kansal0000-0002-4008-6078<ref>,<ref>
D. Karagiannis0000-0002-4927-0816<ref>,<ref>
C. C. Kirkpatrick<ref>
F. Lacasa0000-0002-7268-3440<ref>,<ref>
J. Le Graet0000-0001-6523-7971<ref>
L. Legrand0000-0003-0610-5252<ref>
J. Lesgourgues0000-0001-7627-353X<ref>
T. I. Liaudat0000-0002-9104-314X<ref>
A. Loureiro0000-0002-4371-0876<ref>,<ref>
J. Macias-Perez0000-0002-5385-2763<ref>
G. Maggio0000-0003-4020-4836<ref>
M. Magliocchetti0000-0001-9158-4838<ref>
F. Mannucci0000-0002-4803-2381<ref>
R. Maoli0000-0002-6065-3025<ref>,<ref>
C. J. A. P. Martins0000-0002-4886-9261<ref>,<ref>
L. Maurin0000-0002-8406-0857<ref>
R. B. Metcalf0000-0003-3167-2574<ref>,<ref>
M. Miluzio<ref>,<ref>
P. Monaco0000-0003-2083-7564<ref>,<ref>,<ref>,<ref>
A. Montoro0000-0003-4730-8590<ref>,<ref>
A. Mora0000-0002-1922-8529<ref>
G. Morgante<ref>
S. Nadathur0000-0001-9070-3102<ref>
Nicholas A. Walton0000-0003-3983-8778<ref>
L. Patrizii<ref>
V. Popa0000-0002-9118-8330<ref>
D. Potter0000-0002-0757-5195<ref>
P. Reimberg0000-0003-3410-0280<ref>
I. Risso0000-0003-2525-7761<ref>
P.-F. Rocci<ref>
M. Sahlén0000-0003-0973-4804<ref>
A. Schneider0000-0001-7055-8104<ref>
M. Sereno0000-0003-0302-0325<ref>,<ref>
A. Silvestri0000-0001-6904-5061<ref>
A. Spurio Mancini0000-0001-5698-0990<ref>,<ref>
J. Stadel0000-0001-7565-8622<ref>
K. Tanidis<ref>
C. Tao0000-0001-7961-8177<ref>
N. Tessore0000-0002-9696-7931<ref>
G. Testera<ref>
R. Teyssier0000-0001-7689-0933<ref>
S. Toft0000-0003-3631-7176<ref>,<ref>,<ref>
S. Tosi0000-0002-7275-9193<ref>,<ref>
A. Troja0000-0003-0239-4595<ref>,<ref>
M. Tucci<ref>
C. Valieri<ref>
J. Valiviita0000-0001-6225-3693<ref>,<ref>
D. Vergani0000-0003-0898-2216<ref>
P. Vielzeuf0000-0003-2035-9339<ref>
September 9, 2024
===========================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
§ INTRODUCTION
2024. All rights reserved.
The concordance Λ-cold-dark-matter () model is the simplest cosmological scenario that accounts for the cosmological observations thus far available. It is based on the assumption that in addition to baryonic matter and radiation, the Universe is filled with two invisible components: an exotic form of matter, dubbed dark energy and described by a Cosmological Constant (Λ) in Einstein's equations of General Relativity, and a cold-dark-matter (CDM) component that is non-relativistic and only interacts through gravity. In this scenario, dark matter is primarily responsible for fostering the formation of the visible structures we observe today, while dark energy drives the accelerated expansion of the Universe at late times. This model has been remarkably successful in explaining a variety of cosmological observations, such as the Hubble diagram from luminosity distance measurements of Type Ia Supernovae <cit.>, temperature and polarisation anisotropy angular power spectra of the cosmic microwave background <cit.>, the galaxy power spectrum of the large scale structure <cit.>, and the presence of baryonic acoustic oscillations (BAO) in the LSS <cit.>. Despite the great success of the model, the physical origin of dark energy and dark matter remains unknown. Unveiling the nature of these dark components is the primary motivation for many investigations in modern cosmology.
In the last decade, multiple tensions among different types of cosmological observations have emerged. As an example, while CMB measurements indicate a value of the Hubble constant of H_0=67.7 ± 0.4 <cit.>, local measurements, often based on the observations of supernovae in nearby galaxies, suggest a higher value of H_0=73.0 ± 1.0 <cit.>. This 5σ discrepancy is called the Hubble tension. A similar tension has been identified in the S_8=σ_8 √(/0.3) parameter, which combines the amplitude of linear-matter density fluctuations on the 8 scale, σ_8, and the cosmic matter density, . Measurements derived from the CMB <cit.> appear to yield a value of S_8 2.9σ higher than that obtained from observations of the LSS <cit.>, such as measurements of the clustering of galaxies and weak gravitational lensing <cit.>. Such tensions may result from systematic errors yet to be identified in the data. Alternatively, they may be a manifestation of the limits of the model, since modifications to the standard cosmological model can provide a solution to these tensions <cit.>.
Ongoing and upcoming Stage-IV surveys, such as <cit.>, Dark Energy Spectroscopic Instrument <cit.>, Vera C. Rubin Observatory Legacy Survey of Space and Time <cit.>, Spectro-Photometer for the History of the Universe, Epoch of Reionization, and the Ices Explorer <cit.>, and the Nancy Grace Roman Space Telescope <cit.>, will collect unprecedented amounts of data on the LSS, which will enable detailed assessments of the Hubble and S_8 tensions in addition to shedding new light on the nature of the invisible components in the Universe.
is a space mission led by the European Space Agency (ESA) with contributions from the National Aeronautics and Space Administration (NASA), aiming to study the nature and evolution of the dark universe. The survey uses a 1.2-m-diameter telescope and two instruments, a visible-wavelength camera, and a near-infrared camera/spectrometer, to observe billions of galaxies over more than a third of the sky in optical and near-infrared wavelengths. measures the shapes <cit.> and redshifts <cit.> of galaxies, in order to determine the weak gravitational lensing and clustering of galaxies, covering a period of cosmic history over which dark energy accelerated the expansion of the Universe. These measurements will provide detailed insights into the properties of dark energy, dark matter, and gravity by probing the expansion history of the Universe and the growth rate of structures over time <cit.>. was launched on 1 July 2023 and is designed to operate for six years. The survey will provide unprecedented constraints on cosmological parameters and tests of fundamental physics, as well as a rich catalogue of legacy data that can be used for a wide range of astrophysical research. The mission data will be publicly released within two years of acquisition. is one of the most ambitious and exciting space missions in the field of cosmology and will enable a thorough validation of a broad range of cosmological models.
observations will provide precise measurements of the clustering of matter over a wide range of scales, where effects due to the late-time nonlinear gravitational collapse of matter need to be taken into account. A key tool in the preparation of the cosmological analyses and the interpretation of the data is the use of cosmological N-body simulations, which can follow the nonlinear evolution of matter clustering. This is a numerical technique that calculates the evolution of the matter density field under the effect of gravity across cosmic time and predicts the LSS of the Universe for a given cosmological model <cit.>. In this method, the matter density field is sampled with discrete N-body particles, whose equations of motion are solved in the Newtonian limit in an expanding Friedmann–Lemaître–Robertson–Walker (FLRW) universe. These simulations enable the study of the formation and growth of cosmic structures from linear to nonlinear scales, predict the distribution of matter in galaxy clusters, filaments, and voids, for a range of cosmological models and parameters <cit.>, as well as initial conditions <cit.>. The cosmological models beyond the standard-paradigm are expected to have left imprints that should be detectable in the observables, such as the redshift-space power spectra of galaxies or the void-size functions.
This article is part of a series that collectively explores simulations and nonlinearities beyond the ΛCDM model:
* Numerical methods and validation (Adamek et al. in prep.).
* Results from non-standard simulations (this work).
* Cosmological constraints on non-standard cosmologies from simulated Euclid probes (D'Amico et al. in prep.).
* Constraints on f(R) models from the photometric primary probes (Koyama et al. in prep.).
For further details, see our companion papers. In this work, we consistently analyse large numbers of N-body simulations over a wide range of non-standard cosmological scenarios, to generate catalogues of synthetic observables for . This analysis is achieved using a pipeline that was specifically written for that task. We calculate reconstructed density fields, halo and void catalogues, halo mass functions, dark matter, and halo power spectra in real and redshift space, as well as halo bias functions. The paper is organised as follows: in Sect. <ref>, we introduce the analysed non-standard models; then, in Sect. <ref>, we present an overview of the analysed cosmological N-body simulations. In Sect. <ref>, we describe the analysis pipeline and the calculated quantities. We demonstrate the imprints of the non-standard models in the computed observables in Sect. <ref> and finally, we summarise our results in Sect. <ref>.
§ COSMOLOGICAL MODELS BEYOND THE STANDARD LCDM PARADIGM
To address the tensions and anomalies in the model, various non-standard cosmological models have been proposed that extend or modify the standard model in different ways. Some examples of non-standard cosmological models are dark energy models, such as quintessence and phantom energy, modified-gravity theories, such as f(R) gravity, and massive-neutrino models, such as sterile neutrinos and self-interacting neutrinos. These models introduce new degrees of freedom or new mechanisms that can affect the dynamics and observables of the universe at different scales and epochs. In this section, we will discuss the main features, motivations, and challenges of these non-standard cosmological models.
§.§ Dark energy models
§.§.§ wCDM
A simple generalisation of the cosmological constant assumes that dark energy is a fluid with a constant equation-of-state w ≡ p_ / (ρ_ c^2), where p_ and ρ_ are, respectively, the pressure and density of the fluid, and c is the speed of light. To trigger an accelerated phase of cosmic expansion, the dark energy equation-of-state parameter must be w<-1/3. The ΛCDM model corresponds to the w = -1 specific case, while w<-1 corresponds to so-called phantom dark energy models <cit.>, though such values may also result from an unaccounted interaction between dark energy and dark matter <cit.>.
§.§.§ Dynamical dark energy
The dark energy equation-of-state could be a function of redshift. Chevallier, Polarski <cit.> and Linder <cit.> proposed a simple parameterization of
w_(z) = w_0 + w_a z/1+z = w_0 + w_a (1-a) ,
where the w_0 parameter represents the value of the equation-of-state at the present time, and w_a defines the rate of change with redshift. This model is also called the CPL parametrisation of dark energy, after the initials of the authors who proposed it.
This dark energy parametrisation is a fitting function of a general w_(z) around z=0, assuming that w(z) is smooth and slowly changing with the scale factor. As a consequence, this model can closely follow the expansion history of a wide range of other models with w_(z) at late times. Despite its simple form, it shows a wide range of interesting properties <cit.>. The cosmological constant corresponds to w_0 = -1 and w_a = 0 in the CPL parametrisation.
§.§.§ K-essence
The k-essence model is characterised by an action for the scalar field of the following form
S=∫ ^4 x √(-g) p(ϕ,X) ,
where X=(1/2)g^μν∇_μϕ∇_νϕ. The energy density of the scalar field is given by
_ϕ=2X p/ X-p ,
and the pressure is p_ϕ=p(ϕ,X). This pressure gives an effective fluid equation-of-state parameter as
w_ϕ=p_ϕ/_ϕ=-p/p-2X p,_X ,
where the subscript ,_X indicates a derivative with respect to X, and a dimensionless speed-of-sound parameter for the k-essence fluctuations as
c_ s^2 =p,_X/p,_X+2Xp,_XX .
The k-essence field satisfies the continuity equation
_ϕ=-3H (_ϕ+p_ϕ) ,
which results in the scalar equation of motion
G^μν∇_μ∇_νϕ+2X∂^2p/∂ X ∂ϕ-∂ p/∂ϕ=0 ,
where
G^μν=∂ p/∂ Xg^μν+∂^2 p/∂ X^2∇^μϕ∇^νϕ .
k-essence was first proposed by <cit.>, who showed that there exist tracking attractor solutions to the equation of motion during the radiation and matter-dominated eras of the universe, and that with a suitably chosen p, the scalar can have an appropriate equation of state that allows it to act as dark energy for the background accelerated expansion. In addition, whenever the kinetic terms for the scalar field are not linear in X, the speed of sound of the fluctuations differs from unity, allowing the clustering of the dark energy field at sub-horizon scales, which should be modelled at the perturbations level.
§.§.§ Interacting dark energy
In the interacting dark energy (IDE) models <cit.>, dark energy and cold dark matter are allowed to interact through an exchange of energy-momentum in order to keep the total stress-energy tensor T_μν conserved:
∇ _μT^(c)μ_ν = C_ν(ϕ ) = - ∇ _μT^(ϕ )μ_ν ,
where C_ν(ϕ ) is a conformal coupling function expressed in the form:
C_ν(ϕ ) = κ β(ϕ) _ c∇ _νϕ ,
where κ≡8π G_ N/c^2, G_ N is Newton's gravitational constant, _ c is the cold dark matter energy density in the IDE model [Note that we choose u_c here for our IDE model to better distinguish it from ρ_CDM used right afterwards to describe the background evolution.], and β (ϕ ) is a coupling function.
The dark energy scalar field, ϕ, has an intrinsic energy density and pressure given by
_ϕ = 1/2 g^μν ∂ _μϕ ∂ _νϕ + V(ϕ ) ,
p_ϕ = 1/2 g^μν ∂ _μϕ ∂ _νϕ - V(ϕ ) ,
where V(ϕ ) is a self-interaction potential. The conservation equations then translate in the following set of background-dynamic equations under the assumption of a constant coupling function β (ϕ )= β:
ϕ̈ + 3Hϕ̇ + V/ϕ = κ β _ c ,
_ c + 3H _ c = -κ β _ c ,
In the standard approach, a theoretically-motivated analytical form for the self-interaction potential function V(ϕ ) is chosen. However, the simulations that are considered in the present work implement the alternative approach proposed by <cit.> which consists of imposing a standard ΛCDM background expansion history by setting
H^2=H^2_Λ CDM ,
where H_Λ CDM is the standard Hubble function defined by
H^2_Λ CDM = 8 π G_ N/3(ρ _ r + ρ _ b + ρ _ CDM + ρ _Λ) ,
where ρ_ r, ρ_ b, ρ_ CDM, and ρ_Λ are the mass densities of the radiation, baryon, CDM, and Λ components of the background model. This will determine an effective potential, V(ϕ ), according to the resulting evolution of the scalar field, ϕ. Taking the time derivative of Eq. (<ref>) and using the continuity Eqs. (<ref> & <ref>), one gets the scalar-field energy density and pressure as
_ϕ = ρ _ CDM c^2 + ρ _Λ c^2 - _ c ,
p_ϕ = p_Λ=- _Λ ,
which can be combined with Eqs. (<ref> & <ref>) to obtain the dynamics of the scalar field:
ϕ̇^2 = ρ _ CDM c^2 - _ϕ .
The scalar-field potential, V(ϕ ), can then be reconstructed using Eqs. (<ref> & <ref>) as:
V(ϕ ) = 1/2ϕ̇^2 + ρ _Λ c^2 ,
and taking the time derivative of Eq. (<ref>), one can derive the scalar-field equation of motion
2ϕ̈ + 3Hϕ̇ -κβ _c = 0 ,
which can be numerically solved for the dynamical evolution of the system. With this choice, the β coupling remains the only free parameter of this model.
Observational constraints on the model was computed in <cit.> which found that the model can alleviate the σ_8 tension, but that CMB prefers the ΛCDM limit. In particular, they find that the CMB constrains |β| ≲ 0.02, RSD constraints |β| ≲ 0.10, while weak lensing data from the Kilo-Degree Survey actually prefers a non-zero value |β| ∼ 0.1.
§.§ Modified gravity models
§.§.§ nDGP gravity
The Dvali–Gabadadze–Porrati (DGP) model <cit.> assumes that our universe is described by a 5-dimensional bulk, while the visible matter component is confined to the 4-dimensional brane described by the Minkowski metric, γ.
This model's action is
S = c^4/16π G_5∫_ M d^5 x √(-γ) R_5
+ ∫_∂ M d^4 x √(-g) (c^4/16π G_ N R + L_ m) ,
where G_5 and G_ N are the 5- and 4-dimensional Newton's constants, respectively, and L_ m is the matter Lagrangian. At small scales, 4-dimensional gravity is recovered due to an intrinsic Einstein–Hilbert term sourced by brane curvature causing a gravitational force that scales as r^-2, while, at large scales, the gravity behaves as a 5-dimensional force. The transition between the 5-dimensional modifications and the 4-dimensional gravity is given by the cross-over scale r_ c = G_5/(2 G_ N), from which we construct the dimensionless parameter Ω_ rc≡ c^2/(4r_ c^2H_0^2). The modified Friedmann equation on the brane <cit.> becomes
H^2 = ± c H/r_ c + 8 π G_ N/3ρ̅ .
The model we investigate in this paper is the normal branch with the - sign <cit.> characterised by a background achieved by introducing an additional dark energy contribution with an appropriate equation-of-state <cit.>
ρ_(a) = ρ_ cr,0 ( + 2 √(Ω_ rc)√( + a^-3) ) ,
where ρ_ cr,0 is the critical density.
The observational constraints on the model require the cross-over scale r_ c to be larger than the size of the horizon H_0^-1 today. For example, Solar System constraints require r_cH_0 ≳ 1.6 <cit.>, and galaxy clustering in the BOSS survey constraints r_cH_0 ≳ 4.5 <cit.>.
§.§.§ f(R) gravity
The f(R) theory of gravity <cit.> is characterised by the following action:
S = c^4/16π G_ N∫ d^4 x √(-g) [ R+f(R) ] ,
where g_μν is the metric tensor and f(R) is a functional form of the Ricci scalar, R.
Here we consider the Hu–Sawicki model <cit.> with n=1, where in the limit of f_R= f/ R≪1 we have
f(R) = - 6 H_0^2/c^2 + |f_R0| R̅_0^2/R ,
where f_R0 is the free parameter of the model,
R̅_0 is the Ricci scalar evaluated at background at present time, H_0 is the Hubble constant, and is the energy-density parameter of the cosmological constant.
|f_R0| characterises the magnitude of the deviation from , with smaller values corresponding to weaker departures from General Relativity until we recover in the limit of f_R0→0, but for the small |f_R0| values still allowed by observations, the background expansion history approximates that of and
R̅_0 = 3 H_0^2/c^2(1+ 4 /) ,
with matter energy density parameter =1-. However, though the background expansion could mimic that of a cosmological-constant model, it still differs at the level of cosmological perturbations where the growth of structure is driven by a modification of gravity following the above adopted model of f(R).
The observational constraints on the model parameter |f_R0| vary from |f_R0| ≲ 10^-6 in the Solar System, |f_R0| ≲ 10^-8 from galaxy scales <cit.> to |f_R0| ≲ 10^-6–10^-4 from various cosmological probes <cit.>. The parameter values of the simulations presented in this paper are similar to the current cosmological constraints.
§.§ Massive and number of relativistic neutrinos
Neutrinos are mainly characterised by two properties, their mass, M_ν, and the number of neutrino species, . More in general, parametrises the contribution of relativistic species to the background density of radiation, ρ_ r, as
ρ_ r = [1 + 7/8(4/11)^4/3]ρ_γ ,
where ρ_γ is the photon background density. In the standard model, N_ eff is expected to be ∼ 3.045 <cit.> for three families of active neutrinos that thermalised in the early Universe and decoupled well before electron-positron annihilation. The calculation of N_ eff involves the complete treatment of neutrino decoupling, which incorporates non-instantaneous decoupling. A deviation from the fiducial value serves to account for the presence of non-standard neutrino features, or additional relativistic relics contributing to the energy budget <cit.>. Here we focus on standard neutrino families only.
In addition, oscillation experiments <cit.> showed that at least two neutrinos are massive by measuring two squared-mass differences.
It can be shown that the minimum value of the neutrino mass sum is either 0.06 eV in the normal or 0.10 eV in the inverted hierarchy. This value can be well constrained through cosmological observations since neutrinos are known to impact the expansion history and suppress the clustering of cold dark matter, which can be observed in the large-scale distribution of galaxies <cit.>.
Neutrinos with mass ≲ 0.6 eV become
non-relativistic after the epoch of recombination probed by the CMB,
and this mechanism allows massive neutrinos to alter the
matter-radiation equality for a fixed h^2
<cit.>. Massive neutrinos act as non-relativistic
particles on scales k>k_ nr=0.018(m_ν/1
eV)^1/2^1/2 , where k_ nr is the
wavenumber corresponding to the Hubble horizon size at the epoch
z_ nr when the given neutrino species becomes non-relativistic following 1+z_nr≃ 1900 ( m_ν/1 eV),
is the matter density parameter, and h=H_0/100 km
s^-1 Mpc^-1. The large velocity dispersion of non-relativistic
neutrinos suppresses the formation of neutrino perturbations in a way
that depends on m_ν and redshift z, leaving an imprint on the
matter power spectrum at scales k>k_ fs(z), with
k_ fs=0.82 H(z)/H_0(1+z)^2( m_ν/1 eV) ,
where neutrinos cannot cluster and do not contribute to the gravitational potential wells produced by cold dark matter and baryons <cit.>. This modifies the shape of the matter power spectrum and the correlation function on these scales.
§.§ Primordial non-Gaussianities
The simplest inflation models predict that primordial curvature perturbations follow a distribution that is close to Gaussian <cit.>. However, there are many alternative inflation models that predict certain amounts of primordial non-Gaussianity (PNG). One of the simplest cases is that of the so-called local primordial non-Gaussianities <cit.>. For this case, the primordial potential ϕ is given by
ϕ(x) = ϕ_G(x) + f_ NL^ local (ϕ^2(x) - ⟨ϕ^2(x) ⟩),
where ϕ_G(x) is the Gaussian potential, while ϕ is the non-Gaussian potential. f_ NL^ local measures the level of deviations from Gaussianity.
The perturbations in the primordial potential produce perturbations in the density field and they are related through Poisson's equation. Therefore, in Fourier space, the density field is given by
δ(k,z) = α(k,z) ϕ(k,z),
where
α(k,z) = 2 D(z)/3 c^2/H_0^2g(0)/g(z_ rad) k^2 T(k),
T(k) is the transfer function normalised at T(k→0)=1, and D(z) is the growth factor normalised at D(z=0) =1. The factor g(0)/g(z_ rad), where g(z)= (1+z)D(z), takes into account the difference between our normalisation of D(z) and the early-time normalisation where D(z) ∝ 1/(1+z) during matter-domination. This factor is g(z_ rad)/g(0)∼ 1.3, with a small dependency on the cosmology.
This type of non-Gaussianity characteristically affects the clustering of biased tracers, inducing a scale-dependent bias <cit.>. To linear order, the power spectrum of galaxies can be given as
P_ t,t(k,z) = [b_1 + b_ϕ f_ NL^ local/α(k,z)]^2 P_ m,m(k,z),
where P_ t,t(k,z) is the power spectrum of the tracer, P_ m,m(k,z) is the power spectrum of the matter, b_1 is the linear bias, and b_ϕ is the response of the tracer to the presence of the local-PNG. Now, P_ t,t(k,z) has a dependency with k which scales as k^-2 at leading order due to the α(k,z) term.
The b_ϕ is usually parametrised as
b_ϕ = 2 δ_c (b_1 -p).
Although it is possible to make a theoretical prediction for p (by assuming a universal mass function, p=1, ), several studies using numerical simulations have shown that the prediction may be different depending on the type of galaxy or tracer under consideration <cit.>.
§ SIMULATIONS
This section summarises the simulations used for this project and gives a very brief description of each setup. The analysed simulations followed the evolution of the matter field with discrete N-body method in the models described in Sect. <ref>. Baryonic and hydrodynamical effects are neglected in this paper. For a comprehensive description of each of the simulation suites, we refer the reader to the main references given in Table <ref> along with the volumes, resolutions, initial redshifts, and the used order of the Lagrangian perturbation theory (LPT) during the initial-condition generation.
§.§ The simulations
The simulation series is a set of 4 cosmological N-body simulations in and cosmologies.
This suite used the complementary-simulation method <cit.>, which is a novel technique in which cosmological N-body simulations are run in phase-shifted matching pairs. One simulation starts from a regular random Gaussian initial condition, while the second simulation has modified initial amplitudes of the Fourier modes to ensure that the average power spectrum of the pair is equal to the cosmic mean power spectrum from linear theory at the initial time. The average statistical properties of a pair of such simulations have greatly suppressed variance. In this paper, we have analysed two complementary pairs using and cosmologies.
The simulation pair used the best-fit Planck2018 <cit.> cosmological parameters:
=1-=0.3111, =0.04897, H_0=67.66, n_ s=0.9665, and σ_8=0.8102. The pair had the following parameters: w_0=-1.04, =0.3096, =0.04899, =0.6904, H_0=67.66, n_ s=0.9331, and σ_8=0.8438.
The cosmological simulations of this series were run using the cosmological N-body code <cit.>.
All simulations in the series contained 2160^3 dark matter particles in a (1.5)^3 volume, with ε=13.8 softening length.
The initial conditions (ICs) were generated by a modified version of the code <cit.> by using the Zeldovich approximation
and initial linear power spectra from the Boltzmann code <cit.>.
The simulations started from redshift z_ init=127, with a total of 48 output times. In this project, 31 particle snapshots were analysed in the 0.5≤ z ≤ 2.0 redshift range for each simulation.
§.§ The simulation suite
The Dark Energy and Massive Neutrino Universe () simulations <cit.> have been produced with the aim of investigating the LSS in the presence of massive neutrinos and dynamical dark energy, and they were conceived for the nonlinear analysis and modelling of different probes, including dark matter, halo, and galaxy clustering <cit.>, weak lensing, CMB lensing, Sunyaev-Zeldovich, and Integrated Sachs-Wolfe (ISW) effects <cit.>, cosmic void statistics <cit.>, and cross-correlations among these probes <cit.>.
The simulations were run using the tree particle mesh-smoothed particle hydrodynamics (TreePM-SPH) code <cit.>, specifically modified as in <cit.> to account for the presence of massive neutrinos. This modified version of follows the evolution of CDM and neutrino particles, treating them as two distinct collisionless components.
The reference cosmological parameters were chosen to be close to the baseline Planck 2013 cosmology <cit.>:
=0.05, =0.32, H_0=67.0, n_ s=0.96, and A_ s =2.127 × 10^-9.
Given these values, the reference (i.e., the massless neutrino case) CDM-particle mass resolution is m^ p_ CDM = 8.27× 10^10, which is decreased according to the mass of neutrino particles, in order to keep the same among all the simulations. In fact, massive neutrinos are assumed to come as a particle component in a three-mass-degenerate scenario, therefore, to keep fixed, an increase in the massive neutrino density fraction yields a decrease in the CDM density fraction.
The simulations balance mass resolution and volume to include perturbations at both large and small scales. The simulations are characterised by a softening length of ε=20, a comoving volume of 8 filled with 2048^3 dark matter particles and, when present, 2048^3 neutrino particles. The simulations are initialised at z_ init=99 with Zeldovich initial conditions. The initial power spectrum is rescaled to the initial redshift via the rescaling method developed in <cit.>. Initial conditions are then generated with a modified version of the software, assuming Rayleigh random amplitudes and uniform random phases.
§.§ The simulations
The simulations <cit.> are a set of two dark-matter only simulations in and cosmologies. The simulations were performed with the Adaptive-Mesh Refinement (AMR) N-body code <cit.>.
These simulations have a box size of 2625 for 4096^3 particles, which results in a smoothing scale of 5 at the maximum refinement level.
Both simulations share the parameters H_0= 72.0, n_ s = 0.963, = 0.04356 and Ω_ r = 8.076× 10^-5. The flat simulation has a WMAP7 cosmology <cit.>: = 0.25733, and σ_8 = 0.80101, while the flat simulation is consistent at the 1 σ-level with a WMAP7 cosmology with = 0.27508, σ_8 =0.85205, and w = -1.2.
In both cases, Gaussian initial conditions are generated using a modified version of the code <cit.> with the displacement field computed using second-order Lagrangian perturbation theory (2LPT) to minimise the effect of transients <cit.>. The initial redshift has been set to z_ init∼ 46 such as to ensure that the maximum displacement is of the order of one coarse cell. Such a late start guarantees smaller discreteness errors <cit.>. For the present work, we focus on the snapshots at z = 0, 1, and 2.
§.§ The simulation suite
The Extended LEnsing PHysics using ANalaytic ray Tracing () cosmological simulation suite was run using the simulation code <cit.>, which is based on the dark matter and hydrodynamic AMR simulation code and includes various types of modified gravity models <cit.>. It is particularly designed to solve for a nonlinear scalar field using AMR. New simulations were run for the purpose of testing the effective field theory of large-scale structure (EFTofLSS) pipeline for spectroscopic galaxy clustering <cit.>. For this purpose, 11 simulations were carried out using the reference cosmology without massive neutrinos for ΛCDM and the nDGP model (Table 2 of ).
The cosmological parameters of the simulations are: = 0.319, = 0.049, =0.681, H_0=67.0, A_ s = 2.1 × 10^-9, and n_ s=0.96.
The nDGP simulations used the same parameters as the simulations with the cross-over scale r_ c = 1.2 c / H_0.
All of the simulations in this simulation suite had a box size of 1024 and 1024^3 particles. The initial conditions were generated at z_ init = 49 with 2LPT using the code[https://github.com/HAWinther/FMLgithub https://github.com/HAWinther/FML] with fixed initial amplitudes. The phases of 10 realisations were extracted with different random seeds, while one realisation shares the same random seed as one of the other simulations, but with opposite phases to have a single paired-and-fixed simulation pair with suppressed cosmic variance <cit.>. Output redshifts were selected from the Euclid Collaboration forecast paper for galaxy clustering <cit.>.
§.§ The simulations
This simulation series contains overall seven simulations in and nDGP cosmologies that were run with , a modified gravity extension of the COmoving Lagrangian Acceleration (COLA) algorithm as implemented in the code. The COLA method uses a combination of analytic 2LPT displacement and particle mesh (PM) simulations to perform fast approximate simulations <cit.>. These techniques are extended to modified-gravity models using approximate screening methods to preserve the speed advantage of COLA simulations <cit.>.
The downside of PM simulations is that the internal structure of dark matter haloes is not well resolved due to limited resolution. This has an important implication for dark matter halo statistics. To mitigate this problem, the COLA simulations were run with an increased mass resolution <cit.>.
All simulations in this suite have a box size of 1024, with 2048^3 particles.
The base cosmological parameters of the simulations are the Planck 2015 parameters <cit.>: =1-=0.3089, =0.0486, H_0=67.74, n_ s=0.9667, and σ_8=0.8159. This simulation series focuses on nDGP gravity and tested 4 cases: r_ c ={0.5, 1, 2, 5} c / H_0.
The series contains paired-and-fixed simulations <cit.> to suppress cosmic variance in ΛCDM and in the nDGP model for r_ c = 1 c / H_0, while for the others they were only run for a single fixed amplitude realisation.
The initial conditions were generated at z_ init=127 using 2LPT.
Full particle snapshots were stored at 4 redshift values, z=1.0, 1.2, 1.4 and 1.65, motivated by the expected Hα-emitters redshifts in the spectroscopic survey <cit.>.
§.§ The and simulations
The (Dark Universe Simulations to Test GRAvity In the presence of Neutrinos) project is an initiative aimed at investigating the degeneracy between f(R) gravity and massive neutrinos at the level of nonlinear cosmological observables, which was first pointed out in <cit.>. More specifically, the project includes two suites of cosmological dark-matter-only simulations named the -pathfinder <cit.> and the -fullscale simulations that have been run by joining the <cit.> solver for f(R) gravity and the massive neutrinos implementation <cit.> available within the code. The former has been described and validated in <cit.> and Adamek et al. (in prep.), while the latter has been compared with other methods in <cit.>.
The simulations have been developed to sample the joint (f_R0, m_ν) parameter space to identify the most degenerate combinations of parameters with respect to some basic LSS statistics. These include the nonlinear matter power spectrum, the halo mass function, weak-lensing-convergence power spectrum, various higher-order statistics, cosmic voids, velocity fields <cit.>.
This series includes in total 13 simulations in f(R)+m_ν cosmology, plus an additional suite of 12 standard ΛCDM simulations for varying one single standard cosmological parameter at a time that have been specifically run for the Higher-Order Weak Lensing Statistics (HOWLS) project <cit.>. These simulations have a box size of 750 per side, used a softening length of ε=20, and include (2× )768^3 particles (for the CDM and neutrinos components). The cosmological parameters (for the reference ΛCDM cosmology with massless neutrinos) have been set to = 1 - = 0.31345, σ_8 = 0.842, H_0 = 67.31, n_ s = 0.9658, and the total matter density has been kept constant when varying the neutrino mass. Full snapshots have been stored at 34 output times between z=99 (corresponding to the starting redshift of the simulation) and z=0.
The -fullscale simulations include only three runs (a reference ΛCDM cosmology and two f(R) gravity models with f_R0=-10^-5 and different values of the total neutrino mass, namely m_ν={0.1, 0.16} eV) simulated in a 8 volume containing (2× )2048^3 particles. In order to allow for a direct comparison with the simulations described above, and to produce an extension to the latter for f(R) gravity with massive neutrino cosmologies, the -fullscale simulations share the same initial conditions with for each of the values of the neutrino mass. Therefore, the two sets of simulations have the same statistical realisations of the universe and identical cosmological parameters. Full snapshots have been stored for 73 output times between z=99 (i.e., the initial conditions) and z=0.
§.§ The simulations
The Constrained Interacting Dark EneRgy scenario <cit.> is a particular type of coupled Quintessence models characterised by a background cosmic expansion which is fixed by construction to be identical to a standard ΛCDM cosmology. As discussed in Sect. <ref>, this implies refraining from choosing a priori any specific functional form for the scalar self-interaction potential and letting the dynamic evolution of the field sample the potential shape required to match the imposed expansion history.
The main feature of the models is that they show a suppressed growth of structures compared to a standard ΛCDM model with the same expansion history, thereby possibly easing the σ _8 tension without further exacerbating the tension on H_0. For these reasons, the model has received some attention even though – at least in its original form – it may already be quite tightly constrained by CMB observations <cit.>.
The simulations have been run with the code (, see also Adamek et al. in prep.) that implements all the relevant features of interacting dark energy models, and includes three values of the coupling β = 0.03, 0.05, 0.08 besides a reference ΛCDM cosmology corresponding to the case β = 0. All simulations clearly share the same expansion history, consistent with the following cosmological parameters: = 1- = 0.311, = 0.049, H_0 = 67.7, n_ s = 0.9665, A_ s = 1.992 × 10^-9, corresponding to a value of σ _8 = 0.788 at z=0 in the reference ΛCDM model. The simulations follow the evolution of 2× 1024^3 particles for the (coupled) dark matter and (uncoupled) baryon components in a cosmological volume of 1 with a softening length of ε=25. The baryonic species are treated as a separate family of collisionless particles, i.e., no hydrodynamic forces nor radiative processes are considered in the simulations, and its inclusion is required in order to consistently represent the effects of the non-universal coupling characterising these models. Therefore, baryonic particles will interact with other massive particles according to standard Newtonian forces, while the interaction between pairs of CDM particles will be governed by an effective gravitational constant G_ eff=G_ N[ 1+ (4/3)β ^2] <cit.>.
Full snapshots have been stored for 25 output times between z=99 and z=0.
§.§ The and simulations
The Dark Scattering (DS) scenario <cit.> is another particular class of coupled Quintessence models where a non-universal interaction between dark matter particles and a classical scalar field playing the role of dark energy is characterised by a pure momentum exchange between the two species, with no transfer of rest-frame energy <cit.>. In this respect, this interaction resembles a process of elastic scattering of massive particles (i.e. the dark matter) moving in a homogeneous fluid with an equation-of-state parameter w (i.e. the dark energy field), which can be simulated by introducing a velocity-dependent force acting on dark matter particles which will depend on the evolution of the dark energy equation-of-state parameter w, and on the cross-section, σ, characterising the interaction strength <cit.>.
The <cit.> and simulations have been run with the code and cover various combinations of the shape of w(z), including the CPL parametrisation as given by Eq. (<ref>) and hyperbolic tangent shapes, and of the cross-section, σ, giving rise to a diverse phenomenology at both linear and nonlinear scales. In particular, DS models have been shown to suppress the linear growth of perturbations for equation-of-state parameters w>-1 <cit.> thereby possibly addressing the σ _8 tension, but such suppression is typically paired with a substantial enhancement of structure growth at deeply nonlinear scales.
The simulations are subject to the approximation of considering the entirety of matter in the universe is in the form of dark matter, thereby slightly overestimating the effect of the interaction as well as not capturing the segregation effects between dark matter and baryons due to the non-universality of the coupling. These have been run for a cosmology with = 1- = 0.308, H_0 = 67.8, n_ s = 0.966, A_ s = 2.215× 10^-9, in a simulation box with a volume of 1 filled with 1024^3 dark matter particles and using a softening length of ε=12.
The simulations, instead, share the same cosmology and the same statistical realisation as the simulations described above (i.e., the two sets of simulations share exactly the same reference ΛCDM run) and include collisionless baryons as a separate family of uncoupled particles, thereby consistently capturing the non-universality of the DS interaction. As for the simulations, a collection of 25 full snapshots for redshifts between z=99 and z=0 has been stored.
§.§ The simulations
The Clustering Dark Energy simulations are run using the code, a relativistic N-body code <cit.> based on <cit.>. In , the field equations for k-essence type theories (Eq. <ref>) are solved using the effective field theory (EFT) framework. We have two free parameters in the EFT framework of these theories: the equation-of-state parameter w(τ) appearing at the background level and kineticity α_ K(τ) at the perturbation level. In the fluid picture of these theories, the relevant parameters are the speed of sound c_ s(τ) and the equation-of-state parameter w(τ), which in general are time-dependent. The term “clustering dark energy” refers to the fact that these theories include a sound-horizon scale, beyond which scalar-field perturbations can grow.
In the analysed simulations, constant w_0 and c_ s^2 are used, with cosmological parameters based on the reference cosmology <cit.>. The suite contains one ΛCDM simulation and four clustering dark energy simulations: (w_0, c_ s^2)=(-0.9, 1 c^2), (-0.9, 10^-4 c^2), (-0.9, 10^-7 c^2), and (-0.8, 10^-7 c^2). In these simulations, the box size was set to 2 with N=1200^3 particles. Moreover, two sets of simulations with different resolutions were considered to study the convergence of the results. In this high-resolution simulation set, the box size was set to 2 with N = N_ grid = 2400^3.
In this series, the particle snapshots were saved in format at five different redshifts z ∈{2, 1.5, 1, 0.5, 0}.
§.§ The and simulation suites
The simulation suite <cit.> is a set of 198 dark matter only simulations for f(R) gravity and run with the cosmological simulation code <cit.> using its MG module <cit.>. The simulations explore the cosmological and f(R) parameter space spanned by (= 1-), h, σ_8, and through 50 combinations (nodes) of these parameters sampled in a Latin-hypercube. All other cosmological parameters are fixed to a Planck cosmology <cit.>. For each node, consists of a pair of large box simulations with 512^3 particles in a 1.5 side-length box and a pair of high-resolution runs with 1024^3 particles in a 500 box. For each pair, the initial conditions are chosen such that the large-scale variance in the 3D matter power spectrum approximately cancels when averaged over the two simulations <cit.>. All simulations in this suite started from z_ init = 127, with initial conditions generated using the <cit.> code.
The simulation suite <cit.> uses the same base setup and ICs as , but for the nDGP gravity model implemented into the code <cit.>. Accordingly, instead of varying the parameter, the nDGP parameter, is varied to explore the cosmological parameter space, with all the other parameters identical to those in for corresponding nodes.
The fact that the and simulations have the same cosmological parameters, node by node, allows a third suite of simulations to be done as a control set to quantify the effect of modified gravity and how it correlates to the effect of varying cosmological parameters. This additional suite of simulations, -, uses the same setup but runs for the counterparts of the corresponding f(R) and nDGP models.
As the , -, and simulations have different cosmologies, the mass resolution differs amongst them. In Table <ref>, we thus quoted an order of magnitude for the mass resolution.
§.§ The simulation
The <cit.> is a twin of one of the existing UNITsims <cit.>, but with local primordial non-Gaussianities given by f_ NL=100.
The simulation assumes the following ΛCDM parameters: = 1 - = 0.3089, H_0 = 67.74, n_ s = 0.9667, σ_8 = 0.8147. It consists of N=4096^3 particles in L=1 evolved with , which is a version of optimised for massive parallelisation, using a tree-PM algorithm with a softening length of ε=6.
The initial conditions are run with the 2LPT implementation in the <cit.> code at z=99. Both the and are run with fixed initial conditions <cit.>, which set the amplitude of the ICs to their expected value. Whereas there are 4 simulations in 2 sets of pairs (within each pair, each simulation has the inverted phases one with respect to another, following ), we only have one simulation for the . The PNG-UNITsim is run with the phases of the ICs matched to one of the UNITsims, which is labelled in the databases as “Ampl1”. The usage of fixed ICs with local PNG was validated in <cit.>, where it was also shown how to increment the precision of the statistics measured from matched simulations. Overall 129 snapshots were stored during the simulation, and 32 in the 0.5<z<2.0 range.
§ ANALYSIS
We have developed a cosmological analysis pipeline to generate mock observables from non-standard cosmological simulations in a consistent and rapid way. The pipeline is a [<https://slurm.schedmd.com/>] script that runs in parallel on multiple nodes on the machines where the simulations are stored. The pipeline consists of several modules that can be activated or deactivated independently. The modules are controlled by a configuration file that specifies the input and output parameters, as well as the options for each module. The input of the pipeline is the particle snapshots of the non-standard cosmological simulations. The supported input formats are binary and Hierarchical Data Format version 5 <cit.>, HDF5 <cit.>, HDF5 format from <cit.>, and HDF5 <cit.>. The main steps of the analysis are summarised in Fig. <ref>. In this section, we describe the quantities generated by this pipeline.
§.§ Dark Matter density field
The pipeline uses <cit.> to read and analyse the dark matter particle data of the input-simulation snapshots. This package is an open-source, massively parallel toolkit that provides a set of LSS algorithms useful in the analysis of cosmological data sets from N-body simulations and observational surveys. During the dark matter density-field analysis, generates a reconstructed density field from the input-particle distribution with the triangular-shaped-cloud (TSC) density-assignment function. We chose to use a
N_ grid = 2^{[log_2(√(N_ part))]-1}
linear grid size for every analysed snapshot, where N_ part is the number of stored particles in the snapshot. With this choice, there will always be at least eight particles on average in each cubic density cell. The reconstructed density fields were saved in bigfile format <cit.> for future analysis.
§.§.§ Real-space power spectrum
The real-space matter power spectrum is defined via
<δ̃(k⃗)δ̃^*(k⃗’) > = (2π)^3 P(k) δ_ D^(3)(k⃗ - k⃗’) ,
where δ̃(k⃗) is the Fourier-transform of the matter overdensity field
δ(r⃗) = ρ(r⃗)/ρ - 1 ,
and k⃗ is the wavevector. We estimate the power spectrum using . The density field is created by binning the particles into a grid using a TSC-density-assignment function, with the linear-grid size defined in Eq. (<ref>). The density field is Fourier transformed and the power spectrum is computed by binning |δ̃(k⃗)|^2, deconvolving the window function and subtracting shot noise. We also use the interlacing technique for reducing aliasing <cit.>. The bin size of the power spectrum was set to
Δ k = k_ f = 2π/L_ box,
where k_ f is the fundamental wavenumber, and L_ box is the linear size of the simulation. The pipeline saves the power spectrum of every calculated bin below the
k_ Ny = π N_ grid/L_ box
Nyquist wavenumber with the number of modes into a simple ASCII format file.
§.§.§ Redshift-space power spectrum
The real-space matter power spectrum is not directly measurable in galaxy surveys because we cannot probe the real-space positions of galaxies. What we can directly measure is the redshift-space power spectrum P^s(k,μ) where μ = n̂⃗̂_ LOS·k̂⃗̂ and n̂⃗̂_ LOS is a unit vector in the line-of-sight (LOS) direction. This can be expanded in multipoles P^s(k,μ) = ∑_ℓ=0^∞ P_ℓ^s(k)ℒ_ℓ(μ) where ℒ_ℓ(μ) are the Legendre polynomials. The multipoles are then computed from the redshift-space power spectrum as
P_ℓ^s(k) = 2ℓ +1/2∫_-1^1 P^s(k,μ) ℒ_ℓ(μ) dμ.
We compute the redshift-space power spectrum in 25 μ bins and the redshift-space multipoles (the monopole P_0, the quadrupole P_2, and the hexadecapole P_4) using from the input dark matter density field. For this, we use the distant-observer approximation
s⃗_i = r⃗_i + ( n̂⃗̂_ LOS·v⃗_i/a H) ·n̂⃗̂_ LOS,
to add the redshift-space distortions using the three coordinate axes as the LOS directions (observables are computed as the mean over these three individual axes). Here, r⃗_i and v⃗_i are the real-space particle coordinates and peculiar velocities inside the periodic simulation box, s⃗_i is the corresponding redshift-space position we compute, a=1/(z+1) is the scale factor, and H is the Hubble parameter at the redshift of the snapshot. We deconvolve the window function for the density assignment, applying interlacing, and the resulting density power spectrum is finally shot-noise subtracted. The saved wavenumber bins are the same as in Sect. <ref>.
§.§.§ Linear dark matter power spectrum
During the analysis of scale-independent models, multiple modules use the linear dark matter power spectrum as an input. To make the analysis more transparent, we also generated and saved the linear power spectrum for all analysed redshifts. For this, the pipeline needs the P_ lin(k,z_ start) input linear power spectrum of the simulation that was used during the initial condition generation. This can be defined at any z_ start redshift. Then, this linear power spectrum is renormalised with the cosmological parameter σ_8 at z=0. The normalised P_ lin(k,z=0) power spectrum is rescaled to z_ snap redshifts of all analysed snapshots as
P_ lin(k,z_ snap) = [D(z_ snap)/D(z=0)]^2 P_ lin(k,z=0) ,
where D(z) is the linear growth function. The pipeline uses this back-scaling since the linear growth in these Newtonian simulations follows this scale-independent evolution. For reference simulations, we used the
D(a) = 5 H_0^2/2H(a)∫_0^a a'/ȧ'^3
linear growth to scale the linear spectrum <cit.>. This growth function only describes the linear growth in the framework. In the case of and CPL models, we solve the
G” + [7/2 + 3/2w(a)/1+X(a)]G'/a + 3/21-w(a)/1+X(a)G/a^2 = 0
ordinary differential equation <cit.> with the python package <cit.>, where G(a) = D(a)/a and
X(a) = /1- e^-3∫_a^1 a'w(a')/a'.
For every other model, we use tabulated linear growth functions. The linear power spectra are calculated in the same wavenumber bins as the nonlinear real-space matter power spectra.
§.§ Halo Catalogues
<cit.> is a friends-of-friends (FoF) halo-finder algorithm that uses information from the full 6D phase space (positions and the velocities) of the particles. The code initially creates FoF groups in real space, with a large linking length (b ≃ 0.28). It then does a new FoF search using the phase-space metric
d = √(| x_1 - x_2|^2/σ_x^2 + | v_1 - v_2|^2/σ_v^2) ,
where σ_x and σ_v are the particle-position and velocity dispersions for the given FoF group. Finally, it links particles into subgroups and this is done iteratively on each subgroup creating a hierarchical set of structures. By default, the algorithm calculates halo and subhalo masses using dark matter particles from the spherical regions around the friends-of-friends group with gravitationally unbound particles removed. The halo masses calculated this way are called bound-only (BO) masses. If the unbound particles are not removed during the mass calculation, the calculated masses are strict spherical-overdensity (SO) masses.
We made a custom version of the publicly-available code <cit.> to analyse our non-standard simulations. We added new input formats for simulations, options to read tabulated expansion histories, and already internally computed quantities in the outputs such as halo minor axis vectors and radii at different mass definitions. None of these modifications impact the halo-finding algorithm.
In our pipeline, we use the following mass-definitions: M_ 200c (SO & BO), M_ 500c (SO), M_ 1000c (SO), M_ 2500c (SO), M_ 200b (SO).
The M_ vir masses are not calculated by the pipeline, since this mass definition is dependent on the cosmological parameters and on the laws of gravity. Many non-standard cosmological models are changing the dynamics of the dark matter component, and this choice simplifies the future expansion of the database without the need of implementing new cosmologies in . After the catalogue (in ASCII format) is produced, we run a post-processing script to find parent haloes for subhaloes and store the information as an additional index column. Extra information is saved in the header such as scale factor, box length, and particle mass. Additional particle data for each halo are also saved by in a custom BGC2 binary data format. During the execution of our pipeline, these BGC2 files are temporarily stored to provide additional input for other analysis modules.
§.§.§ Halo mass function and power spectra
By default, we compute the halo mass function (HMF) in the range 11 < [M/()] < 14 with 32 logarithmic bins using the main mass definition (200c) and excluding substructures. The pipeline allows the user to use different mass definitions (see Sect. <ref>), include substructures, and vary the HMF range and binning.
In practice, many simulations produce very large ASCII files which are not practical to read using standard libraries such as [<https://numpy.org/>] or .[<https://pandas.pydata.org/>] To speed up the analysis pipeline, we therefore use ,[<https://pola.rs/>] a fast multi-threaded dataframe library.
The halo real-space and redshift-space power spectra are computed using the same tools as in Sects. <ref> and <ref>. The user can specify the halo mass range, SO or BO for the main mass definition, and whether or not to include substructures.
§.§.§ Halo bias
For each catalogue, we infer the linear halo bias with the estimator
b = ⟨√(P_ h(k)/P_ m(k)) ⟩_k<k_ max,
with P_ m(k) and P_ h(k) the matter and halo real-space power spectra, respectively estimated in Sects. <ref> and <ref>. This estimator calculates the bias by taking the square root of the ratio of the halo power spectrum P_ h(k) to the matter power spectrum P_ m(k), and then averaging this ratio over all k bins where k < k_ max with uniform weighting. We only use this computed quantity to be as model-independent as possible and to remove cosmic variance (since matter and halo both share the same sample and cosmic variance). We compute the mean power spectra ratio up to a conservative value of k_ max = 0.1 to mitigate the effects of nonlinear clustering. This method works reliably for scale-independent bias with sub-percent accuracy, but cannot be used for models that have scale-dependent bias at k<k_ max wavenumber.
§.§.§ Redshift-space Gaussian covariance
To produce Gaussian covariances of the power-spectrum multipoles in redshift space, we need the linear bias and power-spectrum multipoles including the shot-noise contributions as inputs. The former is estimated numerically in Sect. <ref>, while the latter can be internally computed from an input linear power spectrum; in this case, the covariance is estimated as in <cit.>, or with the EFT model using the emulator <cit.> and the covariance formulae from <cit.>.
When analysing snapshots, it may be interesting to compute the power-spectrum multipoles averaged over the three box directions to significantly suppress variance. However, this procedure also has to be carefully accounted for in the covariance <cit.> since the LOS-averaged covariance is not equal to the single-LOS covariance divided by three (as one might naively expect). Thanks to the LOS-averaged covariance implemented in , it can also be part of the outputs of our pipeline.
§.§.§ 2D and 3D Halo Profiles
The generation of binned 3-dimensional and 2-dimensional projected profiles is performed using a custom analysis module which reads the halo catalogues as well as the BGC2 particle data, and stores the resulting profiles of each halo in a separated HDF5 file.
All the profiles are obtained considering 50 log-spaced bins in a fixed radial range [0.001,5], in units of r_ 500c. This analysis module provides cumulative mass, density, number density, and cumulative number density profiles, as well as velocity and velocity dispersion profiles for the cartesian- and spherical-coordinates components of the velocities.
The 2D profiles correspond to projecting the LOS along each of the cartesian coordinates in a cylinder of length 5r_ 500c. In an upcoming version of the pipeline, the profiles will also be available for the projections along the axes of the inertia ellipsoid a, b, and c.
§.§ Cosmic Voids
The Void IDentification and Examination toolkit, https://bitbucket.org/cosmicvoids/vide_public/src/master/ <cit.>, is a parameter-free topological void finder, conceived for galaxy-redshift surveys and N-body simulations. The pipeline is an open-source code, based on the http://skysrv.pha.jhu.edu/ neyrinck/voboz/ <cit.> software and can be launched on any tracer distribution. The algorithm follows the following main steps: i) estimation of the density field of a tracer distribution using the Voronoi tessellation <cit.>; ii) detection of all the relative minima; iii) merging of nearby Voronoi cells into zones via the watershed transform <cit.>, cells correspond to local catchment “basins”, which are identified as voids. can also merge adjacent voids to construct a nested hierarchy of voids if a merging threshold is provided. In this case, when two adjacent voids have at least one Voronoi cell on the ridge separating them lower than the threshold, they are merged into a parent void. In this work, in order to leave the algorithm parameter-free, and for consistency with other Euclid void analyses <cit.>, we do not explore this possibility.
provides some fundamental properties of voids. The void size is measured by the effective radius, defined as the radius of a sphere with the same volume as the void, R_ eff = [ (3/4 π) ∑_i V_i ]^1/3, where V_i is the volume of the i^ th Voronoi cell belonging to the void. The void centre is defined as the volume-weighted barycentre, X_v = ∑_i x⃗_i V_i/V_ tot, with V_ tot=∑_i V_i. Note that this corresponds to the geometric centre of the void. In addition, also provides the position of the tracer sitting in the lowest-density Voronoi cell, i.e. the minimum. The void's depth is estimated via the central density, defined as the mean density in a sphere centred in the barycenter X_v with radius R_ eff/4. also computes void shapes via the inertia tensor as well as the corresponding eigenvalues and eigenvectors. The ellipticity is then computed as ϵ = 1- (J_1/J_3)^1/4, where J_1 and J_3 are the smallest and largest eigenvalues.
We detect voids in the distribution of haloes. After the void catalogues are produced, we post-process them to measure the void-size function, which is the number density of voids as a function of their size, R_ eff. The void-size function is a sensitive probe for cosmology, strongly complementary to the galaxy 2pt-statistics <cit.>. Additionally, albeit not computed for this paper, the void catalogues allow to compute the void-galaxy cross-correlation function, another powerful statistic to constrain cosmology <cit.>.
§ INTERPRETATION
The mock observables that we compute contain several interesting signatures of non-standard models. Due to a large number of analysed models, in this paper, we focus on showing results from the nDGP, f(R), and interacting-dark-energy models, which we obtained thanks to the analysis of the , , and simulations suites respectively.
To mitigate the noise due to sample variance in the figures below, we average the signals over the available realisations of the simulations. In the case of the suite, we focus on a single cosmology[In particular we use node 10 of the Latin-Hypercube sampling described in Table 1 of <cit.>.] corresponding to |f̅_R0| = 10^-5.34219. In the case of the simulations, we focus on the IDE model with β=0.03 coupling.
The nDGP, f(R), and IDE simulations were run with the same initial conditions as their counterpart so the effects of non-standard cosmologies can be studied directly by comparing the generated observables.
The main difference between the modified-gravity simulation and is the inclusion of a fifth force. For nDGP, this fifth force acts on all scales and increases in strength towards redshift zero (and goes to zero as we go to higher and higher redshifts). This is also true for f(R), with the exception that the fifth force has only a finite range, so it does not affect the clustering on the largest scales.
In the IDE simulations, the dark energy component interacts with the dark matter, resulting in a transfer of energy between the two. This interaction affects the growth of cosmic structures by modifying the gravitational potential. This cosmological scenario is expected to suppress structure formation at late time compared to the standard model.
These differences lead to a number of different observable signatures, a few of which we will highlight below.
Abundance of dark matter haloes – In Fig. <ref>, we compare the cumulative halo mass function of the nDGP and models. The inclusion of the fifth force means structures will form more rapidly than in and this is indeed what we see. This is most pronounced at the high-mass end where the abundance is up to 50% larger.
In Fig. <ref>, we compare the cumulative halo mass function of the f(R) and model. We see roughly the same qualitative features as for nDGP in that the halo abundance generally increases with halo mass and with time. However, as opposed to nDGP, we see an over-abundance of “small” haloes at earlier times in the f(R) simulations. This is a consequence of the fact that the fifth force only acts on “small” scales and the fact that the screening mechanism is more effective at suppressing the fifth force in and around the most massive haloes.
The comparison of the halo mass function between the standard and the β = 0.03 IDE model can be seen in Fig. <ref>. In the IDE model, the interaction between the dark energy and dark matter caused a significant reduction in the HMF. This is a straightforward consequence of the suppressed growth rate of the matter fluctuations.
Clustering of dark matter – In Figs. <ref>, <ref>, and <ref>, we show the calculated real- and redshift-space power-spectrum multipoles for the haloes and dark matter for nDGP, f(R), and IDE respectively. For the modified gravity models, the effect of the fifth force is again clearly in the dark matter power spectrum and shows two different effects: for nDGP, we have a scale-independent growth rate causing the power spectrum to be boosted on all scales displayed, while for f(R) we have a scale-dependent growth-rate where f(R) agrees with on the largest scales, but is boosted below a critical scale which is related to the range of the fifth force. For both models, the difference with respect to increases in strength as we get closer to the present time.
In the case of the interacting-dark-energy model, the energy transfer between the dark energy and dark matter caused a scale-independent suppression in the dark matter power spectrum. At redshift 2, this is ≃3%, and this difference increases to ≃5% for z=0.55 compared to the model.
Halo bias – When it comes to halo clustering in real space, we see the opposite effect as for the dark matter power spectrum in Figs. <ref> and <ref>, with nDGP and f(R) being less clustered than . This comes from a smaller halo bias in these modified-gravity models (see e.g. for a theoretical explanation for nDGP).
In the interacting-dark-energy scenario, the halo bias in real space is 6% higher than for . As a consequence, the real-space clustering of the dark matter haloes is more prominent in the IDE simulation.
Redshift space distortions – For the redshift-space halo power spectra, the boost in the ratio with respect to is seen to be larger than in real space which comes from the larger velocities in the modified-gravity simulations, leading to enhanced redshift-space distortions. The monopole redshift-space power spectra of the haloes in the mass bin 10^12.7<M_ halo<10^13.2 in the IDE simulations are showing a 5-10% excess power compared to the counterpart, similarly to the real-space clustering. On the other hand, the quadrupole only shows a significant power increase at smaller, nonlinear scales.
§ SUMMARY
In this paper, we described a new pipeline based on the halo finder and the LSS toolkit to post-process cosmological simulations with modified gravity, non-standard expansion history, modified dark matter or dark energy components, or altered initial conditions. We used this pipeline to analyse 474 cosmological N-body simulations in various and non-standard cosmological scenarios in a consistent way. With this pipeline, we generated halo catalogues, halo mass functions, reconstructed density fields, real- and redshift-space power spectra, Gaussian covariances, halo biases, and void catalogues. This generated data will serve as a theoretical prediction and reference for as well as other Stage-IV cosmology projects. Using the calculated quantities, we identified distinctive signatures of non-standard behaviour in nDGP and f(R) modified-gravity models, and in the interacting-dark-energy scenario.
The synthetic halo catalogues are crucial in the production of additional observables, which can be used for a direct comparison with cosmological observations of .
In the near future, we will extend the generated database with halo density profiles <cit.>, synthetic galaxy catalogues <cit.>, weak lensing <cit.> and ISW <cit.> maps, and lightcones <cit.>.
We have generated overall more than 100 TB of post-processed data from the available non-standard simulations. During the analysis, the pipeline used 66 CPU hours and 60GB of memory per billion particles per snapshot on average. The data are available on request on the CosmoHub (<https://cosmohub.pic.es/home>, see and ) platform designed for interactive exploration and distribution of massive cosmological datasets.
GR’s research was supported by an appointment to the NASA Postdoctoral Program administered by Oak Ridge Associated Universities under contract with NASA. GR and AK were supported by JPL, which is run under contract by the California Institute of Technology for NASA (80NM0018D0004). GR acknowledges the support of the Research Council of Finland grant 354905.
The authors acknowledge the Texas Advanced Computing Center (TACC) at The University of Texas at Austin for providing HPC and visualization resources that have contributed to the research results reported within this paper. URL: <http://www.tacc.utexas.edu>.
This project was provided with computer and storage resources by GENCI at TGCC thanks to the grant 2023-A0150402287 on Joliot Curie's SKL partition.
This work has made use of CosmoHub. CosmoHub has been developed by the Port d'Informació Científica (PIC), maintained through a collaboration of the Institut de Física d'Altes Energies (IFAE) and the Centro de Investigaciones Energéticas, Medioambientales y Tecnológicas (CIEMAT) and the Institute of Space Sciences (CSIC & IEEC).
CosmoHub was partially funded by the "Plan Estatal de Investigación Científica y Técnica y de Innovación" program of the Spanish government, has been supported by the call for grants for Scientific and Technical Equipment 2021 of the State Program for Knowledge Generation and Scientific and Technological Strengthening of the R+D+i System, financed by MCIN/AEI/ 10.13039/501100011033 and the EU NextGeneration/PRTR (Hadoop Cluster for the comprehensive management of massive scientific data, reference EQC2021-007479-P) and by MICIIN with funding from European Union NextGenerationEU(PRTR-C17.I1) and by Generalitat de Catalunya. ZS acknowledges funding from DFG project 456622116 and support from the IRAP and IN2P3 Lyon computing centers.
During part of this work, AMCLB was supported by a fellowship of PSL University-Paris Observatory.
CG thanks the support from INAF theory Grant 2022: Illuminating Dark Matter using Weak Lensing by Cluster Satellites, PI: Carlo Giocoli.
VGP is supported by the Atracción de Talento Contract no. 2019-T1/TIC-12702 granted by the Comunidad de Madrid in Spain. VGP, and by the Ministerio de Ciencia e Innovación (MICINN) under research grant PID2021-122603NB-C21.
We extend our sincere gratitude to Christian Arnold and Claudio Llinares for their valuable contributions to this research. Their work significantly influenced the development of this project.
|
http://arxiv.org/abs/2409.02854v1 | 20240904162507 | Sharp Fourier decay estimates for measures supported on the well-approximable numbers | [
"Robert Fraser",
"Thanh Nguyen"
] | math.CA | [
"math.CA",
"42A99"
] |
Vacuum Radiation Pressure Fluctuations on Electrons
L. H. Ford
September 9, 2024
====================================================
§ ABSTRACT
We construct a measure on the well-approximable numbers whose Fourier transform decays at a nearly optimal rate. This gives a logarithmic improvement on a previous construction of Kaufman.
§ INTRODUCTION AND BACKGROUND
§.§ Harmonic analysis on fractal sets
An interesting class of problems in harmonic analysis involves determining information about the Fourier transform of a compactly supported measure μ given information about the support μ of the measure μ. A standard result in this area is Frostman's lemma, which states that if E is a set of Hausdorff dimension s, then for any t < s, there exists a Borel probability measure μ_t supported on E satisfying the condition that
∫_ξ∈ℝ^n |μ̂_t(ξ)|^2 (1 + |ξ|)^-t < ∞.
Frostman's lemma states that, up to an ϵ-loss in the exponent, the set E supports a measure whose Fourier transform decays like |ξ|^-s/2 in an L^2-average sense.
This version of Frostman's lemma motivates the definition of Fourier dimension. The Fourier dimension of a set E ⊂ℝ^n is the supremum of those values 0 ≤ s ≤ n such that E supports a Borel probability measure μ_s satisfying the pointwise condition
|μ̂_s(ξ)| ≲ (1 + |ξ|)^-s/2.
Observe that the condition (<ref>) for some value of s implies equation (<ref>) for any t < s. However, there is no reason to expect a converse statement to hold; in fact, if E is the usual middle-thirds Cantor set, there is no Borel probability measure μ on E such that |μ̂(ξ)| → 0 as |ξ| →∞. A measure μ such that |μ̂(ξ)| → 0 as ξ→∞ is called a Rajchman measure. On the opposite extreme, there are a number of examples of sets E of Hausdorff dimension s supporting Borel probability measures satisfying (<ref>) for all t < s. Such sets are called Salem sets.
If s = n - 1, a simple stationary phase calculation shows that the usual surface measure on the sphere satisfies the condition
|μ̂(ξ)| ≤ (1 + |ξ|)^-(n-1)/2.
This well-known computation can be found in the textbooks of Wolff <cit.> and Mattila <cit.>.
If n = 1 and 0 < s < 1, the first examples of Salem sets were given by Salem <cit.> via a random Cantor set construction. A later random construction was given by Kahane <cit.>, who shows that if Γ : [0,1] →ℝ^n is a Brownian motion and E ⊂ [0,1] is a set of Hausdorff dimension s, then Γ(E) will almost surely have Fourier dimension equal to 2s. Kahane <cit.> also constructed Salem sets using random Fourier series whose coefficients are given by Gaussian random variables.
The first explicit, deterministic example of a Salem set of fractional dimension in ℝ was given by Kaufman <cit.>. For an exponent τ, the well-approximable numbers E(q^-τ) are defined by
E(q^-τ) = {x : | x - p/q| ≤ q^-τ for infinitely many pairs of integers (p,q)}.
A classical result of Jarník <cit.> and Besicovitch <cit.> states that the Hausdorff dimension of E(q^-τ) is equal to 2/τ. Kaufman shows that E(q^-τ) supports a Borel probability measure μ satisfying
|μ̂(ξ)| ≲ (1 + |ξ|)^-1/τ o(log |ξ|).
Bluhm <cit.> provides an exposition of Kaufman's argument to prove a slightly weaker result in which the o(log |ξ|) term is replaced by O(log |ξ|).
More generally, given a function ψ : ℕ→ [0, ∞), it is of interest to consider the set of ψ-approximable numbers
E(ψ) = {x : | x - p/q| ≤ψ(q) for infinitely many pairs of integers (p,q)}.
Hambrook <cit.> obtains lower bounds on the Fourier dimension of such sets in terms of the function ψ.
§.§ Some problems in geometric measure theory
In this paper, we will consider the question of locating sets E satisfying more precise estimates than (<ref>) under the constraint that E has finite Hausdorff measure. As a motivating example, consider the (n-1)-dimensional sphere in ℝ^n. This set has positive and finite (n-1)-dimensional Hausdorff measure and supports a measure μ with Fourier transform satisfying (<ref>). Mitsis <cit.> posed the following problem.
[Mitsis's problem]
For which values of 0 < s < n does there exist a measure μ such that μ simultaneously satisfies the ball condition
μ(B(x,r)) ∼ r^s for all x ∈μ and all r > 0
and the Fourier decay condition
|μ̂(ξ)| ≤ |ξ|^-s/2?
We will consider a related problem. Let 0 ≤ s ≤ n. Recall that a subset E of ℝ^n is said to be an s-set if the Hausdorff measure ℋ^s(E) satisfies 0 < ℋ^s(E) < ∞.
[Fourier transform on s-sets]
For which values of 0 < s < n does there exist an s-set E supporting a measure μ such that μ satisfies the Fourier decay condition
|μ̂(ξ)| ≤ |ξ|^-s/2?
Of course, such a set E must be a Salem set of Hausdorff dimension s.
This problem can be extended to a question about generalized Hausdorff dimension. Recall that a positive, increasing function α is said to be a dimension function if α(u) → 0 as u → 0. We will say that E is an α-set if 0 < ℋ_α(ℰ) < ∞, where ℋ_α is the generalized Hausdorff measure associated to α. The following question generalizes the previous one:
[Fourier transform on α-sets]
For which dimension functions α does there exist an α-set E supporting a measure μ such that μ satisfies the Fourier decay condition
|μ̂(ξ)| ≲√(α(1/ξ)) for |ξ| ≥ 1?
We conjecture that the only such dimension functions α are integer powers α(u) = u^-s for integers 0 ≤ s ≤ n.
On the other hand, we also wish to pose the problem of determining the optimal Fourier decay estimates for measures supported on the set of well-approximable numbers E(ψ).
[Fourier decay of measures supported on E(ψ)]
Fix a function ψ. For which functions Θ does there exist a measure μ supported on E(ψ) such that
|μ̂(ξ)| ≲Θ(ξ)?
Although we are unable to answer Problems <ref>, <ref>, and <ref> in this work, we are able to obtain “near"-answers to all three of these questions if the dimension function α or the approximation function ψ decay at a polynomial rate.
§.§ Notation
In this paper, constants are always allowed to depend on the parameters τ, σ, and ρ. Any dependence on these parameters will always be suppressed for simplicity of notation.
If A and B are any two quantities, we write A = O(B) or A ≲ B to imply that A ≤ C B for some constant C that does not depend on A or B (but may depend on τ, σ, or ρ). We write B ≳ A to mean the same thing as A ≲ B. If A ≲ B and B ≲ A, we write A ∼ B. If the implicit constant in any of these inequalities is allowed to depend on some other parameter such as ϵ, we write A ≲_ϵ B, A ≳_ϵ B, or A ∼_ϵ B.
If A(x) and B(x) are functions of a variable x, we write A(x) ⪅ B(x) if A(x) ≲_ϵx^ϵ B(x) for every ϵ > 0. So, for example, we write
x^3 exp (√(log x)) log x loglog x ⪅ x^3.
If A(x) ⪅ B(x) and B(x) ⪅ A(x), we write A(x) ≈ B(x).
§ RESULTS
First, we describe a result in the direction of Problem <ref>.
Let ψ(q) be an arbitrary nonnegative, decreasing function satisfying the conditions
2 < lim_q →∞ -log(ψ(q))/log q = τ < ∞.
Suppose also that there exists σ > 1 such that ψ satisfies the polynomial-type decay condition
ψ(q_1)/ψ(q_2)≥(q_2/q_1)^σ for q_2 > q_1 sufficiently large.
Suppose further that 1 ≤χ(q) ≤log q is a nonnegative function that satisfies
∑_q=1
q prime^∞1/qχ(q) = ∞.
and also satisfies the subpolynomial-type growth condition for any ϵ > 0:
χ(q_2)/χ(q_1) < (q_2/q_1)^ϵ for q_1, q_2 sufficiently large depending on ϵ.
Then for any increasing function ω with lim_ξ→∞ω(ξ) = ∞, there exists a Borel probability measure μ supported on a compact subset of the ψ-well-approximable numbers satisfying the estimate
|μ̂_χ, ω(ξ)| ≲ω(|ξ|)/ψ^-1(1/|ξ) χ (ψ^-1(1 /|ξ|)) for all ξ∈ℝ.
In order to simplify our notation, we define
θ(ξ) := 1/ψ^-1(1/ξ) χ(ψ^-1(1/ξ)).
If ψ(q) = q^-τ, Theorem <ref> gives estimates that improve on those of Kaufman <cit.>. In this case, the estimate (<ref>) becomes
|μ̂_χ, ω(ξ)| ≲ |ξ|^-1/τω(|ξ|)/χ(|ξ|^1/τ).
Observe that, for example, the choice χ(q) = loglog q satisfies (<ref>). On the other hand, ω can be taken to be any function that increases to ∞, so it is possible to choose ω(ξ) = logloglogξ, for example. Hence there exists a measure μ supported on the well-approximable numbers satisfying
|μ̂(ξ)| ≲ |ξ|^-1/τlogloglog |ξ|/loglog |ξ|≪ |ξ|^-1/τ.
Our next result is in the direction of Problem <ref>.
Let α be a dimension function with
0 < lim_x → 0logα (x)/log x = ν < ∞
and for some ρ < 1 such that
α(x_1)/α(x_2)≥(x_1/x_2)^ρ
for sufficiently small x_1 < x_2.
Let ω be an increasing function such that lim_ξ→∞ω(ξ) = ∞. Then there exists a compact set F_α of zero α-Hausdorff measure such that there exists a measure μ_α, ω supported on F_α satisfying
|μ̂(ξ)|≲√(α (1/|ξ|))ω (|ξ|)
for all s ≠ 0. Such a set is given by an appropriately chosen subset of the well-approximable numbers E(ψ) where
ψ(q) = α^-1(q^-2).
Although this does not provide an answer to Problem <ref>, it comes within an arbitrarily slowly growing function of answering this problem. In other words, any improvement on the estimate of Theorem <ref> will give an answer to Problem <ref>.
Observe that the condition (<ref>) on α implies the condition (<ref>) on ψ for τ = 2/ν. A simple calculation also shows that the condition (<ref>) implies the condition (<ref>) with σ = 2/ρ. This is the only way in which the assumptions (<ref>) and (<ref>) will be used.
Finally, we show that, for any decreasing approximation function ψ, the set E(ψ) supports a Rajchman measure. This improves a result of Bluhm <cit.> constructing a Rajchman measure supported on the set of Liouville numbers.
For an arbitrary nonnegative, decreasing function ψ there exists a Rajchman measure, μ, supported on a compact subset of the ψ-well-approximable numbers.
In a recent work, Polasek and Rela <cit.> improve Bluhm's result in a different way by showing an explicit Fourier decay bound on the set of Liouville numbers. They show that if f : ℝ^+ →ℝ^+ is any function such that
lim sup_ξ→∞ξ^-α/f(ξ) = 0 for all α > 0,
then there exists a measure μ_f supported on the set of Liouville numbers such that |μ̂_f(ξ)| ≲ f(|ξ|) for all ξ; on the other hand, if g : ℝ^+ →ℝ^+ is any function such that
lim inf_ξ→∞ξ^-α/f(ξ) > 0 for some α > 0,
then there does not exist a measure μ_g supported on the set of Liouville numbers such that |μ̂_g(ξ)| ≲ g(|ξ|) for all ξ∈ℝ.
§ CONVOLUTION STABILITY LEMMAS
The proofs of the main results of this paper rely on the construction of a sequence of functions which will approximate the measures that satisfy the statements of the theorem. The functions of the sequence are themselves a product of functions. In the frequency space, these products become convolutions and a major component of the proof is show that the Fourier decay estimates of these functions remain stable as the number of convolutions tends to infinity. The following two lemmas will be referred to when making an argument for stability by induction. This first lemma will be applied to Theorem <ref> and Theorem <ref>.
(Convolution Stability Lemma)
Let ω : ℕ→ℝ^+ be a function that increases to infinity such that ω(t) ≤log t for t ≥ 2. Suppose that N_2 is sufficiently large depending on ω and N_1. Moreover, let G,H : ℤ→ℂ be functions satisfying the following bounds for some N_3 > 1/ψ (β(N_2))^2:
rCll
|G(s)| ≤ 1 for all s ∈ℤ
G(0) = 1
G(s) = 0 0 < |s| ≤N_2
|G(s)| ≲ θ(|s|) everywhere
|G(s)| ≲ exp(-1/2|s/2N_3|^σ+1/4 σ) when |s|≥2N_3
|H| ≤ 2
|H(s)| ≲ θ(|s|) ω(|s|) everywhere
|H(s)| ≲ exp(-1/2|s/8N_1|^σ+1/4 σ) when |s|≥8N_1.
Then
rCll
|H*G(s) - H(s)| ≲ N_2^-99 when 0≤|s| < N_2/4
|H*G(s)| ≲ θ(|s|) ω(|s|) everywhere
|H*G(s)| ≲ exp(-1/2|s/8N_3|^σ+1/4 σ) when |s| ≥8N_3.
A different version of this lemma will be applied to prove Theorem <ref>.
Let δ < 1/N_1^2. Suppose that N_2 is sufficiently large depending on ω and N_1. Moreover, let G,H : ℤ→ℂ be functions satisfying the following bounds for some N_3 > 1/ψ (β(N_2)):
rCll
|G(s)| ≤ 1 for all s ∈ℤ
G(0) = 1
G(s) = 0 0 < |s| ≤N_2
|G(s)| ≲ δ s ≠0
|G(s)| ≲ exp(-1/2|s/2N_3|^3/4) when |s|≥2N_3
|H| ≤ 2
|H(s)| ≲ exp(-1/2|s/8N_1|^3/4) when |s|≥8N_1.
Then
rCll
|H*G(s) - H(s)| ≲ N_2^-99 when 0 ≤|s| < N_2/4
|H*G(s)| ≲ δ^1/2 when |s| ≥ N_2/4
|H*G(s)| ≲ exp(-1/2|s/8N_3|^3/4) when |s| ≥8N_3.
Before proving these lemmas, we need a preliminary estimate on θ. We will show that the function θ(ξ) decays like ξ^-1/τ up to an ϵ-loss in the exponent.
Let ψ, χ be as in Theorem <ref>, and let θ(ξ) be as in (<ref>). Then θ(|ξ|) ≈ |ξ|^-1/τ for large |ξ|.
Since ψ(q) ≈ q^-τ by assumption, we have that ψ^-1(t) ≈ t^-1/τ. A similar argument shows that χ(t) ≈ 1. Hence χ(ψ^-1(1/|ξ|)) ≈ 1. Thus
1/ψ^-1(1/|ξ|) χ(ψ^-1(1/|ξ|))≈ |ξ|^-1/τ.
§.§ Proof of Lemma <ref>
First, we prove (<ref>). Assume that 0 ≤ |s| ≤ N_2/4. Rewrite the expression as
rCl
|H * G (s) - H (s)| = |∑_t∈ℤ H(s-t)G (t) - H (s)|
= |H(s)G(0) -H(s) +∑_t ≠0 H(s-t)G (t)|
≤ ∑_t ≠0 |H(s-t)G (t)|.
Observe that we need only consider summands such that |t| ≥ N_2 because G(t) = 0 for |t| < N_2. The previous expression becomes
∑_|t| ≥ N_2 |H (s-t) G (t)|.
Apply the bound (<ref>) to |G(t)|. Notice that |s-t| ≥ |t|/2 ≫ N_1 when |s| < N_2/4. We may apply (<ref>) with |t|/2 in place of s to get an upper bound given that the bounding function is decreasing. Hence
rCl
∑_|t| ≥N_2 |H (s-t) G (t)| ≲ ∑_|t| ≥N_2exp(-1/2|t/8N_1|^σ+1/4 σ)
≤ N_2^-99.
The last inequality holds provided that N_2 is sufficiently large depending on N_1. The next task is to prove the estimate (<ref>). For 0 < |s| < N_2/4, the estimate follows from (<ref>) (<ref>). Indeed, the difference
|H*G(s) - H(s)| ≲ N_2^-99≲ |s|^-99.
By the estimate (<ref>), we have that
|s|^-99≲θ(|s|).
Now assume that |s| ≥ N_2/4. We have the inequality
|H * G (s)| ≤I + II,
where
I = ∑_|t| < 2N_1 |H (t)G (s-t)|,
and
II = ∑_|t| ≥ 2N_1 |H (t)G (s-t)|.
Beginning with the sum I, we apply (<ref>) and observe that |s-t| ≥ |s| -|t| ≥ |s|/2 when |s| ≥ N_2/4. Then we may apply (<ref>) with |s|/2 in place of |s| to get
rCl
I ≲ θ(|s|) ∑_|t| < 2N_1 1
≲ θ(|s|) ω(|s|),
provided that N_2 is sufficiently large depending on ω and N_1 so that the final inequality holds. To bound the sum II, write
II = A + B,
where
A = ∑_|t| ≥ 2N_1
|s-t| ≤ |s|/2 |H (t)G (s-t)|,
and
B = ∑_|t| ≥ 2N_1
|s-t| > |s|/2 |H (t)G (s-t)|.
To estimate the sum A, we apply (<ref>) and (<ref>). Observe that |s-t| ≤ |s|/2 implies that |t| ≥ |s|/2. Thus, t ≫ N_1 when |s| ≥ N_2/4. Therefore
A ≲∑_|t| ≥ |s|/2exp(-1/2|t/8N_1|^σ+1/4 σ).
By the integral test, we get the following upper bound for A:
A ≲∫_|s|/2^∞exp(-1/2|t/8N_1|^σ+1/4 σ) dt.
Observe that the integrand is decaying nearly exponentially. From (<ref>), we may conclude
A ≲θ(|s|).
For the sum B, we apply (<ref>) to G. Additionally, we may apply (<ref>). Since |s|/2 ≫ N_1, we have after making the substitution u = s - t that
rCl
B ≲ θ(|s|) ∑_|u| ≥|s|/2 exp(-1/2|u/8N_1|^σ+1/4 σ)
≲ θ(|s|)
where the last inequality is implied by
∑_|u| ≥ |s|/2exp(-1/2|u/8N_1|^σ+1/4 σ) ≤ 1.
Combining the bounds for I and II completes the proof for (<ref>). We turn now to proving (<ref>). Assume |s| ≥ 8N_3. We decompose the convolution as
|H * G (s)| ≤I + II,
where
I = ∑_|t| < |s|/2 |H (s-t)G (t)|
and
II = ∑_|t| ≥ |s|/2 |H (s-t)G (t)|.
Starting with I, we apply (<ref>). Then apply (<ref>) with s-t in place of s, and use the fact that |s-t|≥ |s|/2. Then
I≲∑_|t| < |s|/2exp(-1/2|s/16N_1|^σ+1/4 σ).
There are at most |s|/2 summands in the above sum. Therefore
I≲ |s| exp(-1/2|s/16N_1|^σ+1/4 σ).
We may bound the prior estimate by a single exponential function by choosing a smaller negative power and eliminating the linear term. Hence,
I≲exp(-1/2|s/8N_3|^σ+1/4 σ).
For the sum II, apply the bounds (<ref>) and (<ref>) to get
rCl
II ≲ ∑_|t| > |s|/2 exp(-1/2|t/2N_3|^σ+1/4 σ).
To bound the above sum, we use the integral test. Thus
II≲∫_t > |s|/2exp(-1/2| t/2 N_3|^σ + 1/4 σ) dt.
To estimate this integral, we begin with a substitution. Let
u = 1/2|t/2N_3|^σ+1/4 σ.
Then
du = σ +1/16 σ N_3|t/2N_3|^-3σ+1/4 σ dt.
The integral may be rewritten as
16 σ N_3/σ +1∫_t = |s|/2^∞exp(-u) (2u)^3σ -1/σ +1 du.
Integrating by parts yields
(16 σ N_3/σ +1) ( -exp(-u)(2u)^3σ -1/σ +1 |_t = |s|/2^∞ + 6σ - 2/σ + 1∫_t = |s|/2^∞exp(-u) (2u)^3σ -1/σ +1-1 du).
It is easy to see that expression above is dominated by the first term and the integral is an error term. Indeed, repeated integration by parts will yield relatively small terms (we are only interesting in finding an estimate up to a multiplicative constant) which contribute a negligible amount to the estimate. We consider only the first term in the estimate and evaluating it at the endpoints to get
II≲ N_3 exp(-1/2|s/4N_3|^σ+1/4 σ)|s/4N_3|^3σ-1/4 σ.
Observe that exponential term is dominant for large values of s. We may bound the above expression by a single exponential term by choosing a smaller negative power. Hence,
rCl
II ≲ N_3 exp(-1/2|s/4N_3|^σ+1/4 σ)|s/4N_3|^3σ-1/4 σ
≤ exp(-1/2|s/8N_3|^σ+1/4 σ).
Combining the estimates on I and II completes the proof of the lemma.
§.§ Proof of Lemma <ref>
The proof of Lemma <ref> shares many similarities with the proof of Lemma <ref>.
Beginning with (<ref>), assume |s| ≥ N_2/4 and write
|H * G (s) - H (s)| ≤∑_|t| ≥ N_2 |H (s-t) G (t)|.
Apply the estimate (<ref>) with |t|/2 in place of s and the estimate (<ref>). Then
rCl
|H * G (s) - H (s)| ≲ δ∑_|t| ≥N_2exp(-1/2|t/8N_1|^3/4)
≤ δ
since the sum may be bounded above by 1.
In order to prove the estimate (<ref>), we assume |s| ≥ N_2/4. Write
|H * G (s)| ≤I + II,
where
I = ∑_|t| < N_1 |H (t)G (s-t)|
and
II = ∑_|t| ≥ N_1 |H (t)G (s-t)|.
For the sum I, apply the estimates (<ref>) and (<ref>). Then
rCl
I ≲ δ∑_|t| < N_1 1
≲ δN_1
< δ^1/2.
where the final inequality follows from the fact that δ < 1/N_1^2.
For the sum II, consider the term where s=t separately from other summands. Write
II = ∑_|t| ≥ N_1
s≠ t |H (t)G (s-t)| + |H(s)G(0)|.
Apply the estimates (<ref>), (<ref>) and (<ref>). Then
rCl
II ≲ δ∑_|t| ≥N_1
s≠t exp(-1/2|t/8N_1|^3/4) + exp(-1/2|s/8N_1|^3/4)
≲ δ^1/2.
The last inequality is implied by the bounds
∑_|t| ≥ N_1
s≠ texp(-1/2|t/8N_1|^3/4) ≲ 1
and
exp(-1/2|s/8N_1|^3/4) ≲δ^1/2.
For the final estimate (<ref>), assume |s|≥ 8N_3 and write
|H * G (s)| ≤I + II,
where
I = ∑_|t| < |s|/2 |H (s-t)G (t)|
and
II = ∑_|t| ≥ |s|/2 |H (s-t)G (t)|.
For the sum I, use the fact that |s - t| ≥ |s|/2 and apply (<ref>) and (<ref>) with |s|/2. Then
rCl
I ≲ ∑_|t| < |s|/2 exp(-1/2|s/16N_1|^3/4)
≲ exp(-1/2|s/8N_3|^3/4).
For the sum II, apply (<ref>) and (<ref>) with |t| in place of s. Then
rCl
II ≲ ∑_|t| > |s|/2 exp(-1/2|t/2N_3|^3/4)
≤ exp(-1/2|s/8N_3|^3/4).
This completes the proof of Lemma <ref>.
§ DOUBLING FUNCTIONS
If f : ℝ^+ →ℝ^+ is a decreasing or eventually decreasing function, we say that f is doubling if f(ξ/2) ≲ f(ξ) for all sufficiently large ξ.
We will need a few basic facts about doubling functions.
The function θ(ξ) is doubling.
The fact that θ(ξ) ≈ξ^-1/τ implies that θ(ξ) is eventually decreasing. To see that θ(ξ) is doubling, note that for sufficiently large q_1 and q_2 with q_1 < q_2, we have the assumption (<ref>), which is reproduced below for convenience.
ψ(q_1)/ψ(q_2)≥(q_2/q_1)^σ.
Since ψ^-1 is decreasing, we have that ψ^-1(1/ξ) > ψ^-1(2/ξ). If ξ is sufficiently large that (<ref>) applies with q_1 = ψ^-1(2/ξ) and q_2 = ψ^-1(1/ξ), then we have
ψ(q_1)/ψ(q_2) = 2/ξ/1/ξ≥(ψ^-1(1/ξ)/ψ^-1(2/ξ))^σ.
Hence
Cl
θ(ξ/2)/θ(ξ)
= ψ^-1(1/ξ) χ(ψ^-1(1/ξ))/ψ^-1(2/ξ) χ(ψ^-1(2/ξ))
≤ (ψ^-1(1/ξ)/ψ^-1(2/ξ) )^1 + ϵ
≤ 2^(1 + ϵ)/σ.
Hence θ(ξ) is doubling.
Next, we show under very general conditions that a function with limit 0 must admit a decreasing, doubling majorant.
Suppose that M: ℤ→ℂ is any function such that |M(s)| → 0 as |s| →∞. Then there is a decreasing function N: ℝ^+ →ℝ^+ such that N(ξ) → 0 as ξ→∞ satisfying the doubling property such that |M(s)| ≤ N(|s|) for all s ∈ℤ.
First, we replace M by a decreasing function M_1 : ℝ^+ →ℝ^+ as follows. For s ∈ℕ, define
M_1(s) = sup_|t| ≥ s |M(t)|.
Then M_1 is decreasing on [0,∞), |M_1(s)| ≤ M(|s|) for all s ∈ℤ, and lim_s →∞ M_1(s) = 0.
We construct N by taking the average of M. For ξ∈ℝ^+, define
N(ξ) = 1/⌊ξ⌋ + 1∑_t ∈ℕ
t ≤ξ M_1(t).
As N is an average of a decreasing function, it follows that N is decreasing; moreover, since M_1(t) → 0 as t →∞, it follows that N(ξ) → 0 as ξ→∞. Furthermore, it is easy to see that M_1(s) ≤ N(s) for s ∈ℕ:
rCl
N(s) = 1/s + 1 ∑_t=0^s M_1(t)
≥ 1/s + 1 ∑_t = 0^s M_1(s)
= 1/s + 1 (s + 1) M_1(s)
= M_1(s),
So |M(s)| ≤ M_1(|s|) ≤ N(|s|)) for all s ∈ℤ.
It only remains to verify that N(s) has the doubling property (<ref>). We have for s ≠ 0 that
rCl
N(s/2) = 1/⌊s/2 ⌋+ 1 ∑_t ≤s/2
t ∈ℕ M_1(t)
≤ 1/⌊s/2 ⌋+ 1 ∑_t ≤⌊s/2 ⌋
t ∈ℕ M_1(t) + 1/⌊s/2 ⌋+ 1 ∑_⌊s/2 ⌋+ 1 ≤t ≤2 ⌊s/2 ⌋+ 1
t ∈ℕ M_1(t)
≤ 1/⌊s/2 ⌋+ 1 ∑_t ≤⌊s/2 ⌋
t ∈ℕ M_1(t) + 1/⌊s/2 ⌋+ 1 ∑_⌊s/2 ⌋+ 1 ≤t ≤s + 1
t ∈ℕ M_1(t)
≤ 2/2 ⌊s/2 ⌋+ 2 ∑_t ≤s + 1
t ∈ℕ M_1(t).
≤ 2/s ∑_t ≤s + 1
t ∈ℕ M_1(t)
≤ 2/s ∑_t ≤s
t ∈ℕ M_1(t) + 2/s M_1(s + 1)
≤ 2/s ∑_t ≤s
t ∈ℕ M_1(t) + 2/s ∑_t ≤s
t ∈ℕ M_1(t)
≤ 4/s ∑_t ≤s
t ∈ℕ M_1(t)
≤ 8/s + 1 ∑_t ≤s
t ∈ℕ M_1(t)
= 8 N(s),
as desired.
§ SINGLE-FACTOR ESTIMATES
§.§ Single-factor estimates for Theorem <ref> and Theorem <ref>
In this section, we construct a function g_k with its support contained in intervals centered at rational numbers with denominator close to some number M_k. Let ψ(q) be a function satisfying (<ref>) and (<ref>). Suppose χ(q) is a function satisfying (<ref>). In the case of Theorem <ref>, we take χ(q) ≡ 1.
Let M_k be a large positive integer. We choose an integer β(M_k) and a positive real number C_k so that
1 ≤∑_M_k ≤ q ≤β(M_k)
q prime1/q χ(q) = C_k ≤ 2.
The support of g_k will be contained in a family of intervals centered at rational numbers whose denominator is a prime number between M_k and β(M_k).
We choose a nonnegative function ϕ∈ C_c^∞ with support in the interval [-1/2,0] satisfying the conditions
ϕ̂(0) = 1
and
ϕ̂(s) ≲exp( -|s|^σ + 1/2 σ).
The existence of such a function is guaranteed by a result of Ingham <cit.>.
Let
ϕ_p,q(x) = 1/q^2χ(q) ψ (q)ϕ(1/ψ(q)(x-p/q))
Now define
g_k (x) = C_k^-1∑_M_k ≤ q < β(M_k)
q prime∑_p = 1^qϕ_p,q(x).
Observe that the function g_k is supported on the interval [0,1].
Suppose g_k is defined as above. Then we have the following estimates for s ∈ℤ.
rCll
ĝ_k(0) = 1
ĝ_k(s) = 0 if 0 < |s| < M_k
|ĝ_k(s)| ≲ θ(|s|) if s ≠ 0
|ĝ_k(s)| ≲ exp(-1/2 (ψ(β(M_k))^2|s|)^σ+ 1/4 σ ) if |s| ≥ψ(β(M_k))^-2.
A simple calculation gives us that
ĝ_k (s) = C_k^-1∑_M_k ≤ q < β(M_k)
q prime1/q^2 χ(q)∑_p = 1^q(p/qs) ϕ̂(ψ(q) s)
where (u) = ^-2 π i u. The sum in p has the value
∑_p=1^q (p/q s ) =
q if q | s
0 if q ∤ s.
Therefore, if s = 0, then the above sum will be equal to 1, and if 0 < |s| < M_k, then the above sum will vanish. This proves (<ref>) and (<ref>). Thus,
ĝ_k (s) = C_k^-1∑_M_k ≤ q < β(M_k)
q prime
q | sϕ̂(ψ(q) s)/qχ(q).
For |s| ≥ M_k, we split the above sum into three pieces according to the size of q. We write
ĝ_k (s) = C_k^-1 (I + II + III),
where
rCl
I = ∑_q ≥ψ^-1(1/|s|)
q prime
q |s ϕ̂(ψ(q) s)/qχ(q),
II = ∑_ψ^-1 (1/ √(|s|)) ≤q ≤ψ^-1(1/|s|)
q prime
q |s ϕ̂(ψ(q) s)/qχ(q),
III = ∑_q < ψ^-1(1/√(|s|))
q prime
q |s ϕ̂(ψ(q) s)/qχ(q).
*Estimate for I. For the sum I, we observe that the number of summands is ≲ 1. This observation is a consequence of assumption (<ref>) since it is implied that for a large enough q depending on ϵ we have,
q^-τ-ϵ≤ψ(q) ≤ q^-τ + ϵ
which gives us
t^-1/τ + ϵ≤ψ^-1(t) ≤ t^- 1/τ - ϵ
since ψ is decreasing. Taking logarithms, we conclude
1/τ + ϵlog |s| ≤logψ^-1(1/|s|) ≤1/τ - ϵlog |s|.
Hence, the number of summands in the sum I is at most log |s|/logψ^-1(1/|s|)≲ 1.
Apply the bound ϕ̂(ψ(q) s) ≤ 1 to each summand to get
∑_q ≥ψ^-1(1/|s|)
q prime
q | sϕ̂(ψ(q) s)/qχ(q)≲θ(|s|).
*Estimate for II. For the sum II, we observe that there are ≲ 1 summands by a similar argument as for the sum I. We apply the bound (<ref>) to show that the summand is bounded above by
exp(- |ψ(q) s|^σ + 1/2 σ)/q χ(q)
If q = ψ^-1(1/|s|), then
exp(- |ψ(q) s|^σ + 1/2 σ)/q χ(q)≲θ(|s|).
It is enough to show for each q < ψ^-1(1/|s|) that
exp (- | ψ(q + 1) s|^σ + 1/2 σ)/(q + 1) χ(q + 1) - exp(- |ψ(q) s|^σ + 1/2 σ)/q χ(q) > 0.
If the inequality (<ref>) holds for all q < 1/|s|, then the summand is increasing in this domain, and is therefore maximized when q = 1/|s|, establishing the bound (<ref>) for such q.
In order to establish (<ref>), it is enough to verify that the numerator of the difference is positive. This numerator is
exp (- |ψ(q + 1) s|^σ + 1/2 σ) q χ(q) - exp( - |ψ(q) s|^σ + 1/2 σ) (q + 1) χ(q + 1).
Since the logarithm is an increasing function, it is enough to show that
-|ψ(q + 1) s|^σ + 1/2 σ + log q + logχ(q) > - |ψ(q) s|^σ + 1/2 σ + log (q + 1) + logχ(q + 1).
This inequality is equivalent to
log (q + 1) - log q + logχ(q + 1) - logχ(q) < |s|^σ + 1/2 σ (ψ(q)^σ + 1/2 σ - ψ(q + 1)^σ + 1/2 σ).
The Taylor series for the logarithm guarantees that log (q + 1) - log q = 1/q + O (1/q^2); the subpolynomial growth condition (<ref>) guarantees that logχ(q + 1) - logχ(q) = o (1/q). In total, the left side of inequality (<ref>) is 1/q + o (1/q). On the other hand, since we are in the regime where q < ψ^-1(1/|s|), the right side of (<ref>) is bounded below by
|s|^σ + 1/2 σ (ψ(q)^σ + 1/2 σ - ψ(q + 1)^σ + 1/2 σ) ≥ 1 - (ψ(q + 1)/ψ(q))^σ + 1/2 σ.
By (<ref>), we have
(ψ(q + 1)/ψ(q))^σ + 1/2 σ≤(q/q + 1)^σ + 1/2.
Hence,
rCl
1 - (ψ(q + 1)/ψ(q))^σ+ 1/2 σ ≥ 1 - (q/q + 1 )^σ+ 1/2
= (σ+ 1/2) 1/q + o (1/q^2 ).
Since σ + 1/2 > 1, we see that the inequality (<ref>) holds for ψ(1 / √(|s|)) ≤ q ≤ψ(1 / |s|) provided that M_k (and hence |s|) is sufficiently large.
Hence we have the estimate
II≲θ(|s|).
*Estimate for III. For the final sum, we apply the estimate (<ref>) to ϕ̂ to get
rCl
∑_q <ψ^-1(1/√(|s|))
q prime
q |s ϕ̂(ψ(q) s)/qχ(q) ≲ ∑_q < ψ^-1(1/ √(s))
q prime
q |s exp(-| ψ(q) s |^σ+ 1/2 σ)/q
≤ ∑_q <ψ^-1(1/√(|s|))
q prime
q |s exp(-|s |^σ+ 1/4 σ)/q
≤ exp(-|s |^σ+ 1/4 σ) log(ψ^-1(1/√(|s|)))
≲ θ(|s|).
For the estimate (<ref>), we observe that |s| is sufficiently large for the estimate (<ref>) to apply to ϕ̂ for every q ∈ [M_k, β(M_k)]. As such
rCl
|ĝ_k(s)| ≲ ∑_M_k ≤q ≤β(M_k)
q prime
q |s exp(-(ψ(q)|s|)^σ+1/2σ)/χ(q)q
≤ ∑_M_k ≤q ≤β(M_k)
q prime
q |s exp(-(ψ(β(M_k))|s|)^σ+1/2σ)/χ(M_k)M_k.
The inequality |s| ≥ψ(β(M_k))^-2 gives us ψ(β(M_k)) ≤1/√(|s|). Therefore,
|ĝ_k(s)| ≲∑_M_k ≤ q ≤β(M_k)
q prime
q | sexp(-|s|^σ +1/4σ)/χ(M_k)M_k.
Observe that the number of summands is less than β(M_k). Moreover, we may disregard the denominator, for large M_k, to derive an upper bound. Hence,
|ĝ_k(s)| ≲β(M_k) exp(-|s|^σ +1/4σ).
Now, we need to eliminate the β(M_k) term from the estimate but this will be at the cost some decay from the exponent. Rewrite the above inequality as
rCl
|ĝ_k(s)| ≲ β(M_k) exp(-1/2|s|^σ+1/4σ)exp(-1/2|s|^σ+1/4σ)
≤ β(M_k) exp(-1/2ψ(β(M_k))^-σ+1/2σ)exp(-1/2|s|^σ+1/4σ)
when we apply |s| ≥ψ(β(M_k))^-2. From the equation (<ref>), when M_k is large enough we have
1/2τ≤ -logψ(β(M_k))/logβ(M_k)≤ 2 τ
which may be rewritten as
-1/2τlogβ(M_k) ≥logψ(β(M_k)) ≥ -2 τlogβ(M_k).
Exponentiating gives
β(M_k)^-1/2τ≥ψ(β(M_k)) ≥β(M_k)^-2 τ.
Applying the upper bound from the equation (<ref>) to (<ref>), we get
|ĝ_k(s)| ≲β(M_k) exp(-1/2β(M_k)^τ(σ +1)/4σ)exp(-1/2|s|^σ +1/4σ).
For large M_k, we observe that the exponential term dependent on M_k is decaying much faster than β(M_k). Hence,
rCl
|ĝ_k(s)| ≲ exp(-1/2|s|^σ+1/4σ)
≲ exp(-1/2 (ψ(β(M_k))^2 |s|)^σ+ 1/4 σ ).
§.§ Single-factor estimate for Theorem <ref>
In the case of Theorem <ref>, it is more convenient to choose the function g_k to be supported in a neighborhood of rational numbers with different denominators at very different scales. Thus, only one denominator will meaningfully contribute to the value of |ĝ_k(s)|.
As in subsection <ref>, we begin by defining a smooth function ϕ with its support in the interval [-1/2, 0] satisfying the conditions
ϕ̂(0) = 1
and
ϕ̂(s) ≲exp(-|s|^3/4).
Let n_k be an increasing sequence of integers to be specified later. For a given k, we choose q_k,1, …, q_k,n_k of prime numbers as follows. First, we choose q_k,1 to be a large prime number. We choose the remaining q_k,j so that q_k,2≫1/Ψ(q_k,1), q_k,3≫1/Ψ(q_k,2),…, q_k,n_k≫1/Ψ(q_k,n_k-1). Furthermore, we also assume that for each j, we have
max(1/q_k,j, ψ(q_k,j) ) < 1/2ψ(q_k,j-1).
Define
g_k(x) = 1/n_k∑_j = 1^n_k1/q_k,jψ(q_k,j)∑_p = 1^q_k,jϕ(1/ψ(q_k,j)(x-p/q_k,j)).
Then
ĝ_k(s) = 1/n_k∑_j = 1^n_k1/q_k,j∑_p = 1^q_k,jϕ̂(ψ(q_k,j) s) (p/q_k,js).
Remove any terms for which q_k,j does not divide s to get
ĝ_k(s) = 1/n_k∑_1 ≤ j ≤ n_k
q_k,j| sϕ̂(ψ(q_k,j) s) .
Suppose that g_k is defined as above. Then we have the following estimates for s ∈ℤ.
rCll
ĝ_k(0) = 1
ĝ_k(s) = 0 if 0 < |s| < q_k,1
|ĝ_k(s)| ≲ 1/n_k s ≠0
|ĝ_k(s)| ≲ exp(-1/2 |ψ(q_k,n_k) s|^3/4 ) if |s| ≥ψ(q_k,n_k)^-1.
First, it is clear from (<ref>) and (<ref>) that ĝ_k(0) = 1, establishing (<ref>). Moreover, the sum (<ref>) is seen to be empty if 0 < |s| < q_k,1, establishing (<ref>).
To prove (<ref>), we split the sum (<ref>) depending on the size of q_k,j relative to s. Suppose j_0(s) is such that ψ(q_k,j_0)|s| > 1, but such that ψ(q_k,j_0 + 1)|s| ≤ 1, taking j_0(s) = 0 if ψ(q_k,1 )|s| < 1 or j_0 = n_k if ψ(q_k, n_k)|s| > 1.
|ĝ_k(s)| ≤1/n_k∑_j_0(s) + 1 ≤ j ≤ n_k
q_k,j| s|ϕ̂(ψ(q_k,j) s)| + 1/n_k∑_1 ≤ j ≤ j_0(s)
q_k,j| s|ϕ̂(ψ(q_k,j) s)|
For the second sum, we may apply (<ref>), the Schwartz tail for ϕ̂. Hence, using the assumption (<ref>),
rCl
1/n_k ∑_1 ≤j ≤j_0(s)
q_k,j |s |ϕ̂(ψ(q_k,j) s) | ≲ 1/n_k ∑_1 ≤j ≤j_0(s) exp(-| ψ(q_k,j)s|^3/4)
≲ 1/n_k ∑_1 ≤j ≤j_0(s) exp( -2^3(j_0(s) - j)/4 )
≲ 1/n_k.
For the first sum, recall that j_0 is chosen so that ψ(q_k,j_0 + 1)s < 1. Since q_k, j≥1/ψ(q_k,j_0 + 1) for any j ≥ j_0 + 2, it follows that 1/q_k,j s < 1 for such j. This means that it is impossible for q_k,j to divide s for j > j_0 + 1. Hence, the only term that can contribute to the sum is the j = j_0 + 1 term. To control the contribution of this term, we simply apply the bound
|ϕ̂(ψ(q_k,j) s)| ≤ 1
to bound the second sum by a constant times 1/n_k. Thus, for any integer s ≠ 0, we have the bound
|ĝ_k(s)| ≲1/n_k.
It remains to show the bound (<ref>). For s ≥ψ(q_k,n_k)^-1, we can in fact apply the Schwartz bound (<ref>) for ϕ to every summand in (<ref>). Hence
rCl
|ĝ_k(s)| ≤ 1/n_k ∑_1 ≤j ≤n_k
q_k, j |s |ϕ̂(ψ(q_k,j) s)|
≲ 1/n_k ∑_1 ≤j ≤n_k exp( - | ψ(q_k,j) s|^3/4 )
≲ 1/n_k ∑_1 ≤j ≤n_k exp(- |2^n_k - j ψ(q_k, n_k) s|^3/4 )
≲ exp(- |ψ(q_k, n_k) s|^3/4 ).
§ STABILITY AND CONVERGENCE OF Μ̂_Χ,Ω
In order to prove Theorems <ref>, <ref>, and <ref>, we will piece together the functions g_k provided in Section <ref> across multiple scales. Lemmas <ref> and <ref> are used to show that the Fourier transforms ĝ_k of the functions g_k do not exhibit much interference. The construction proceeds slightly differently in the case of Theorem <ref>, as this theorem does not prescribe a specific decay rate for μ̂.
§.§ Construction of μ for Theorem <ref> and Theorem <ref>
Let ψ and χ be functions satisfying the assumptions (<ref>), (<ref>), (<ref>), and (<ref>). Recall that in the case of Theorem <ref> that we take χ≡ 1, and we showed in Remark <ref> that ψ satisfies assumptions (<ref>) and (<ref>).
We begin by constructing a sequence of functions (μ_χ, ω, k)_k ∈ℕ where μ_χ, ω, k(x) is the product
μ_χ, ω, k(x) = ∏_i=1^k g_i(x).
For each g_i we choose an associated M_i such that the estimates in Lemma <ref> apply. We further assume that the M_i's are spaced sufficiently far apart to satisfy the conditions of Lemma <ref>. In particular, this implies that for each i ≥ 1 we have
M_i+1≥ψ(β(M_i))^-2.
Taking the Fourier transform of this sequence, we get the sequence (μ̂_χ, ω, k)_k ∈ℕ where
μ̂_χ, ω, k (s) = ĝ_1 * ⋯ * ĝ_k (s).
With this sequence of functions defined, the next objective is to show that the sequence is uniformly convergent and that the functions μ̂_χ, ω, i satisfy a similar decay estimate (up to a constant) for all i. We begin with the latter:
For the sequence of functions (μ̂_χ, ω, k)_k ∈ℕ defined above, we have the following statements for any integers k, l with k > l:
rCll
|μ̂_χ, ω, k(0)| ≤ 2
|μ̂_χ, ω, l(s) -μ̂_χ, ω, k(s)| ≲ ∑_j = l +1^k M_j^-99 when 0 ≤|s| < M_l/4
|μ̂_χ, ω, k(s)| ≲ θ(|s|) ω(|s|) for all s ≠ 0
|μ̂_χ, ω, k(s)| ≲ exp(-1/2 (ψ(β(M_k))^2|s|)^σ+ 1/4 σ ) if |s| ≥ψ(β(M_k))^-2.
Note that since μ is a positive measure, (<ref>) implies that |μ̂_χ, ω, k(s)| ≤ 2 for all s.
We prove Lemma <ref> by induction and repeated application of Lemma <ref>. We begin with the basis by letting k = 2. Then μ̂_2 = ĝ_1 * ĝ_2. Apply Lemma <ref> with H = ĝ_1, G= ĝ_2, N_1 = ψ(β(M_1))^-2, N_2 = M_2 and N_3 = ψ(β(M_2))^-2. Then the estimates (<ref>), (<ref>) and (<ref>) immediately follow from (<ref>), (<ref>) and (<ref>), respectively. The statement (<ref>) can be shown by the following calculation:
rCl
|μ̂_χ, ω, 1(0)| ≤ |ĝ_1(0) - ĝ_1* ĝ_2 (0)| + |ĝ_1(0)|
≤ 𝒪(M_2^-99) + 1
≤ 2
where the last inequality holds for the choice of a sufficiently large M_2. Now assume that Lemma <ref> holds for k up to n-1. Then for the case k = n, we make the choice H = μ̂_n-1 = ĝ_1*⋯*ĝ_n-1, G= ĝ_n, N_1 = ψ(β(M_n-1))^-2, N_2 = M_n and N_3 = ψ(β(M_n))^-2. From the induction hypothesis and Lemma <ref>, the estimates (<ref>) and (<ref>) immediately follow. For estimate (<ref>), assume l < k and assume |s| ≤ M_l/4. Then the triangle inequality gives:
|μ̂_χ, ω, l(s) -μ̂_χ, ω, k(s)| ≤ |μ̂_χ, ω, l(s) - μ̂_χ, ω, l+1(s)| + ⋯ + |μ̂_χ, ω, k-1(s) - μ̂_χ, ω, k(s)|.
By the induction hypothesis, |μ̂_i(s) - μ̂_i+1(s)| ≲ M_i+1^-99 for l ≤ i ≤ k-2 and Lemma <ref> gives
|μ̂_χ, ω, k-1(s) - μ̂_χ, ω, k(s)| ≲ M_k^-99.
Consequently,
|μ̂_χ, ω, l(s) -μ̂_χ, ω, k(s)| ≲∑_j = l+1^k M_j^-99.
Finally, from the calculation
rCl
|μ̂_χ, ω, k(0)| ≤ |μ̂_χ, ω, 1 (0) - μ̂_χ, ω, k (0)| + |μ̂_χ, ω, 1 (0)|
≤ 1 +𝒪(∑_j = 2^k M_j^-99)
≤ 2,
the estimate (<ref>) is proved.
Turning now to proving the uniform convergence of the sequence (μ̂_χ, ω, k)_k ∈ℕ, we have the following lemma.
The sequence (μ̂_χ, ω, k)_k ∈ℕ converges uniformly for all s ∈ℤ to some function M(s). This function M(s) has the property that
|M(s)| ≲θ(|s|) ω(|s|); s ∈ℤ
Let ϵ > 0. There exists an m_0, depending on ϵ and ω, sufficiently large such that
θ(|s|) ω(|s|) < ϵ/2C
when |s| ≥ M_m_0/4. Here C is taken to be the implicit constant for estimate (<ref>). Then for m≥ n≥ m_0
rCl
|μ̂_χ, ω, m(s) - μ̂_χ, ω, n(s)| ≤ |μ̂_χ, ω, m (s)| + |μ̂_χ, ω, n (s)|
< ϵ.
When 0 ≤ |s| ≤ M_m_0/4, applying the estimate (<ref>) gives
|μ̂_χ, ω, m(s) - μ̂_χ, ω, n(s)| ≤∑_j = n^∞ M_j^-99
and the sum may be made to be less than ϵ.
Hence the sequence μ̂_χ, ω, n has a uniform limit M(s). An upper bound on |M(s)| will follow from Lemma <ref>. Suppose |s| is such that M_k/4≤ |s| ≤M_k+1/4.
Then the estimate (<ref>) gives that
|μ̂_χ, ω, k(s) | ≲θ(|s|) ω(|s|),
and (<ref>) and the triangle inequality give
rCl
|M(s)| ≤ |μ̂_χ, ω, k(s)| + lim sup_l ≥k |μ̂_χ, ω, k(s) - μ̂_χ, ω, l(s)|
≲ θ(|s|) ω(|s|) + ∑_j=k+1^∞ M_j^-99
≲ θ(|s|) ω(|s|) + |s|^-99
≲ θ(|s|) ω(|s|),
as desired.
In order to show that the sequence μ_χ, ω, n converges to a weak limit μ using the convergence of the μ̂_χ, ω, n(s), it is normal to appeal to a theorem such as the Lévy continuity theorem. However, this is slightly inconvenient as we only have estimates for μ̂_χ, ω, n(s) at integer values s. We will provide a proof of the weak convergence below. First, we will need the following technical lemma relating the Fourier series of a measure supported on the interval [0,1] to its Fourier transform. A stronger version of this lemma can be found as Lemma 1 of Chapter 17 in the book of Kahane <cit.>.
Suppose that μ is a measure supported on the interval [0,1] satisfying an estimate of the form
|μ̂(s)| ≲ N(|s|) for all s ∈ℤ
where N: ℝ^+ →ℝ^+ is a non-increasing function satisfying the doubling property
N(ξ/2) ≲ N(ξ) for all ξ∈ℝ^+.
Then |μ̂(ξ)| ≲ N(|ξ|) for all ξ∈ℝ.
We have already seen that θ(ξ) ω(ξ) is a doubling function for ξ > 0. Thus we can apply Lemma <ref>.
The sequence of measures μ_χ, ω, k has a weak limit μ_χ, ω. This weak limit μ_χ, ω satisfies the estimate
μ̂_χ, ω(ξ) ≲θ (|ξ|) ω(|ξ|)
for all real numbers ξ.
Observe that each measure μ_χ, ω, n has total variation norm bounded by 2. We claim that the measures μ_χ, ω, k have a weak limit. First, by the Banach-Alaoglu theorem, there exists a subsequence μ_χ, ω, n_k that has a weak limit μ_χ, ω. Since each measure μ_χ, ω, n_k is supported in [0,1], the weak limit μ_χ, ω is supported in [0,1].
In particular, since μ is supported in [0,1], each Fourier coefficient μ̂(s) of μ is obtained by integrating against a continuous, compactly supported function. Hence, for each s ∈ℤ, lim_k →∞μ̂_χ, ω, n_k(s) = M(s), where M(s) is the limit in Lemma <ref>.
By the corollary to Theorem 25.10 of Billingsley <cit.>, it is enough to check that each weakly convergent subsequence of {μ_χ, ω, n} converges weakly to μ_χ, ω. Suppose {ν_χ, ω, n} is a subsequence of the μ_χ, ω, n with some weak limit ν. Then ν is supported on [0,1], so by the same argument as in the previous paragraph, Lemma <ref> implies that ν̂(s) = M(s) for every s ∈ℤ. Since a measure supported on [0,1] is uniquely determined by its Fourier-Stieltjes series, it follows that ν = μ_χ, ω as desired.
Finally, we verify that μ̂_χ, ω(ξ) satisfies the estimate (<ref>). This estimate holds for integer values of s by the estimate (<ref>). Hence, Lemma <ref> shows that μ̂_χ, ω(ξ) satisfies the same estimate for ξ∈ℝ.
Hence the measures μ_χ, ω, k have a weak limit supported on [0,1]. We now verify that this weak limit is indeed supported on the set E(ψ).
Let μ be as in Lemma <ref>. Then μ is supported on E(ψ).
It is easy to see that
ϕ_p,q⊂[p/q - 1/2ψ(q), p/q + 1/2ψ(q)]
and therefore
g_k ⊂⋃_M_k≤ q ≤β(M_k)
q prime⋃_p = 0^q-1[p/q - 1/2ψ(q), p/q + 1/2ψ(q)].
Since each μ_χ, ω, k is the product of g_i's its support is an intersection of these supports.
μ_χ, ω, k⊂⋂_i = 1^k⋃_M_i≤ q ≤β(M_i)
q prime⋃_p = 0^q-1[p/q - 1/2ψ(q), p/q + 1/2ψ(q)].
Because the measure μ_χ, ω is defined as the weak limit of the measures μ_χ, ω, k, we have the containment
μ_χ, ω⊂⋂_i = 1^∞⋃_M_i≤ q ≤β(M_i)
q prime⋃_p = 0^q-1[p/q - 1/2ψ(q), p/q + 1/2ψ(q)].
Observe that if x ∈μ_χ, ω and k ∈ℕ, then x must also lie in one of the intervals
[p/q - 1/2ψ(q), p/q + 1/2ψ(q)]
for some M_k ≤ q ≤β(M_k).
Therefore, there exists an infinite number of rational numbers p/q which satisfy
| x-p/q| ≤ψ (q)
and we may conclude that μ⊂ E(ψ).
The measure μ_χ, ω satisfies all of the properties required to prove Theorem <ref>. Hence, the proof of Theorem <ref> is complete.
To show Theorem <ref>, it is also necessary to verify that the support of μ is contained in a set of generalized α-Hausdorff measure equal to zero. This will be shown in Section <ref>.
§.§ Construction of μ for Theorem <ref>
We now construct the measure μ described in Theorem <ref>. The biggest difference between this construction and the one in the previous subsection is that we do not state explicit quantitative estimates describing the decay of the Fourier transform of the measures.
Choose a positive integer n_1 and let M_1 be a large integer. We will choose the sequences {n_j : j ≥ 2} and {M_j : j ≥ 2} to be rapidly increasing sequences of integers satisfying a certain set of conditions below. For each j, we choose prime numbers q_j, 1, …, q_j,n_j with M_j ≤ q_j,1≪⋯≪ q_j, n_j. When we choose the M_j, we will impose the condition that M_j+1≫ q_j, n_j as well. Given q_j,1, …, q_j,n_j we define the function g_j as in Subsection <ref>.
We define the function μ_k to be the pointwise product
μ_k(x) = ∏_j=1^k g_j(x)
so μ̂_1(s) = ĝ_1(s) and so that for any k ≥ 2
μ̂_k(s) = ĝ_k(s) * μ̂_k-1(s).
We are now ready to state the main estimate on μ̂_k.
Suppose that the functions g_k are chosen as above. Then provided that the sequences n_j and M_j are chosen appropriately, the measures μ̂_k satisfy the following estimates for all integers k ≥ l. All implicit constants below are assumed to be independent of k and l.
rCll
|μ̂_k(0)| ≤ 2
|μ̂_k(s) - μ̂_l(s)| ≤ ∑_j=l+1^k M_j^-99 if 0 ≤ |s| ≤ M_l/4
|μ̂_k(s)| ≲ n_k^-1/2 if |s| ≥ M_k/4
|μ̂_k(s)| ≲ exp(-1/2 |1/8 ψ(q_k, n_k) s |^3/4 ) if |s| ≥ 8ψ(q_k,n_k)^-1.
Let n_1 and M_1 be positive integers, and choose prime numbers q_1, 1, …, q_1, n_1 such that 1 ≤ q_1,1 < q_1,2 < ⋯ < q_1, n_1 satisfy the conditions of Lemma <ref>. Then ĝ_1 satisfies the estimates of Lemma <ref> and in particular satisfies the estimates of Lemma <ref>.
Given g_1, …, g_k such that μ_k satisfies the four conditions above, we will describe how to choose the integers n_k+1 and M_k+1 and how to choose the function g_k+1 so that μ_k+1 will satisfy the four conditions above. Let N_1 = ψ(β(M_k))^-1. Lemma <ref> requires that the quantity δ is chosen so that δ < 1/N_1^2; hence, we select n_k+1 = 100 N_1^2. Choose M_k+1 = N_2 ≫ n_k to be a prime number that is sufficiently large to satisfy the conditions of Lemma <ref>. Take N_2 = q_k+1, 1 < ⋯ < q_k+1, n_k+1 sufficiently well-spaced to satisfy the conditions of Lemma <ref>. Then, choose N_3 = 1/ψ(q_k+1, n_k+1)≫ q_k+1,1. With these choices, we define g_k+1 as in Subsection <ref>. Hence Lemma <ref> implies that ĝ_k+1 satisfies the estimates required to serve as the function G in Lemma <ref>.
Hence, we can apply Lemma <ref> with H = μ̂_k, G = ĝ_k+1, N_1 = ψ(q_k, n_k)^-1, N_2 = q_k+1,1, and N_3 = ψ(q_k+1, n_k+1)^-1, and δ = 1/n_k+1.
This implies the estimates
rCll
| μ̂_k+1(s) - μ̂_k(s)| ≤ M_k+1^-99 if 0 ≤ |s| ≤M_k+1/4
|μ̂_k+1(s)| ≲ n_k+1^-1/2 if |s| ≥M_k+1/4
|μ̂_k+1(s)| ≲ exp(-1/2 |1/8 ψ(q_k+1, n_k+1) s |^3/4 ) if |s| ≥ 8ψ(q_k+1,n_k+1)^-1.
Hence μ̂_k+1 satisfies the estimates (<ref>) and (<ref>). In order to check (<ref>), assume l < k+1 and |s| ≤M_l/4. If l = k, then the inequality follows from (<ref>). If l < k, then applying the inductive assumption (<ref>) to estimate the difference μ̂_k - μ̂_l and applying (<ref>) to estimate the difference μ̂_k+1- μ̂_k gives
rCl
|μ̂_k+1(s) - μ̂_l(s)| ≤ |μ̂_k+1(s) - μ̂_k(s)| + |μ̂_k(s) - μ̂_l(s)|
≤ M_k + 1^-99 + ∑_j=l+1^k M_k^-99
= ∑_j=l+1^k+1 M_k^-99.
This establishes (<ref>) for μ̂_k+1. Applying (<ref>) with l = 1 and s = 0, we see that
rCl
|μ̂_k+1(0)| ≤ |μ̂_k+1(0) - μ̂_1(0)| + |μ̂_1(0)|
≤ ∑_j=2^k+1 M_j^-99 + 1
≤ 2
assuming the M_j grow sufficiently rapidly.
The sequence μ̂_k converges uniformly for all s ∈ℤ to a function M(s). This function M(s) has the property that |M(s)| → 0 as |s| →∞.
The proof is similar to that of Lemma <ref>. Let ϵ > 0. Because n_k →∞, there is an index k_0 such that n_k_0^-1/2 + ∑_j=k_0 + 1^∞ M_j^-99 < ϵ / 2C, where C is the implicit constant from Lemma <ref>.
Suppose |s| > M_k_0/4, and choose l ≥ k_0 such that M_l/4≤ |s| < M_l + 1/4. We have that |μ̂_l(s)| ≲ n_l^-1/2≤ n_k_0^-1/2 < ϵ/2. Hence for k ≥ l, we have
rCl
|μ̂_k(s)| ≤ |μ̂_l(s)| + |μ̂_k(s) - μ̂_l(s)|
≤ ϵ/2 + ∑_j=l+1^k M_j^-99
≤ ϵ/2 + ∑_j=k_0 + 1^∞ M_j^-99
≤ ϵ.
Hence |μ̂_k(s)| ≤ϵ for all |s| ≥M_k_0/4 and all k ≥ k_0.
If |s| ≤M_k_0/4 and k_0 ≤ l ≤ k, then we have
|μ̂_k(s) - μ̂_l(s)| ≤∑_j=l+1^k M_j^-99≤∑_j=k_0 + 1^∞ M_j^-99 < ϵ/2.
This proves that the sequence μ̂_k(s) is uniformly Cauchy and hence uniformly convergent. Let M(s) denote the uniform limit of this sequence.
Finally, we verify that M(s) → 0 as |s| →∞. Suppose |s| is such that M_k/4≤ |s| ≤M_k+1/4. Then we have from Lemma <ref> that |μ̂_k(s)| ≲ n_k^-1/2, and
rCl
M(s) ≲ |μ̂_k(s)| + lim sup_l →∞ |μ̂_l(s) - μ̂_k(s)|
≲ n_k^-1/2 + ∑_j=k+1^∞ M_j^-99
≲ n_k^-1/2
since M_k+1≫ n_k. Since the sequence n_k →∞, this shows that M(s) → 0 as |s| →∞, as desired.
The rest of this proof is similar to the proof of Theorem <ref>. In order to apply Lemma <ref>, we use the fact from Lemma <ref> that M is majorized by N(|s|), where N is a doubling function. This will allow us to apply Lemma <ref>.
We are now ready to show that the measures μ_k converge to a weak limit.
The sequence of measures μ_k has a weak limit μ. This weak limit μ satisfies the estimate
|μ̂(ξ)| → 0 as |ξ| →∞ in ℝ.
Hence μ is a Rajchman measure.
This proof is almost exactly the same as the proof of Lemma <ref>, but when we apply Lemma <ref>, we use N(|s|) as the bound on M(s), where N(s) is the function constructed in Lemma <ref>.
Let μ be the weak limit in Lemma <ref>. Then the support of μ is contained in E(ψ).
This lemma can be shown in a similar manner to Lemma <ref>; we see that if x ∈(μ) then for each k, there exists a denominator q_k,j_k and a numerator p_k, j_k such that |x - p_k, j_k/q_k, j_k| ≤ψ(q_k, j_k); hence, x is ψ-well-approximable.
Therefore, the measure μ satisfies all of the properties promised in the statement of Theorem <ref>. Thus the proof of Theorem <ref> is complete.
§ A BOUND ON THE GENERALIZED HAUSDORFF MEASURE
To complete the proof of Theorem <ref>, we must show that F_α, which is taken to be the support of μ_k,ω, has zero α-Hausdorff measure.
Let F_α be a closed subset of
{x : | x - r/q| < ψ(q) for some integers 0 ≤ r ≤ q-1, M_k ≤ q ≤β(M_k), q prime,
k ∈ℕ}.
Let ϵ > 0. Then there exists a cover 𝒰 of F_α by open intervals U such that
∑_U ∈𝒰α(U) < ϵ.
The set F_α satisfies the following containment:
F_α⊂⋂_k=1^∞⋃_M_k ≤ q ≤β(M_k)
q prime⋃_p=0^q-1[p/q-ψ(q), p/q+ψ(q)].
For any k the following collection of closed intervals is a cover for F_α:
{[p/q-ψ(q), p/q+ψ(q)]: M_k ≤ q ≤β(M_k), q prime, 0 ≤ p ≤ q-1}.
Denote this collection as ℐ_k. The following collection 𝒰 is also a cover of F_α:
𝒰 = {J⋂[p/q-ψ(q), p/q+ψ(q)]: M_k ≤ q ≤β(M_k), q prime, 0 ≤ p ≤ q-1; J ∈ℐ_k-1}.
Fix a prime number q with M_k ≤ q ≤β(M_k) and let J ∈ℐ_k-1. Observe that the intersection of J with the interval [p/q - ψ(q), p/q + ψ(q) ] is either empty or is a closed interval of length at most 2 ψ(q). We claim that the number of such intervals that intersect J satisfies
#{p: [p/q-ψ(q), p/q+ψ(q)]∩ J ≠∅}∼ |J|q,
where |J| denotes the length of the interval J.
The interval J belongs to ℐ_k-1. Therefore, |J| ≥ψ(β(M_k-1))^-2 by the assumption (<ref>). Hence |J| ≫1/M_k, and therefore, |J| ≫1/q.
The interval [p/q - ψ(q), p/q + ψ(q)] intersects J if and only if p/q lies in a ψ(q)-neighborhood of J. Since ψ(q) ≈ q^-τ by (<ref>) and τ > 2, we have that ψ(q) ≪ 1/q if k is sufficiently large. Hence, [p/q - ψ(q), p/q + ψ(q)] intersects J if and only if p/q lies in an interval J' of length |J'| = |J| + 2 ψ(q) ∼ |J|.
Write J' = [a,b]. Then the smallest multiple of p/q contained in J' is ⌈ q a ⌉/q, and the largest multiple of p/q contained in J' is ⌊ q b⌋/q. So the total number of multiples of p/q contained in J' is
Cl
⌊qb ⌋- ⌈q a ⌉+ 1
= qb - qa + O(1)
= q|J'| + O(1)
∼ q |J| + O(1).
Since |J| ≫ 1/q, we have q |J| ≫ 1. Therefore,
#{p : [p/q - ψ(q), p/q + ψ(q) ] ∩ J ≠∅}∼ |J|q,
as claimed.
Then
rCl
∑_U ∈𝒰 α((U)) ≤ ∑_J∈ℐ_k-1 ∑_M_k ≤q ≤β(M_k)
q prime ∑_0≤p ≤q-1 α((J⋂[p/q-ψ(q), p/q+ψ(q)]))
∼ ∑_J∈ℐ_k-1 |J| ∑_M_k ≤q ≤β(M_k)
q prime q α(ψ(q)).
From assumption (<ref>), α(ψ(q)) = q^-2. Therefore,
∑_J ∈ℐ_k-1 |J| ∑_M_k ≤ q ≤β(M_k)
q prime q α(ψ(q)) = ∑_J∈ℐ_k-1 |J| ∑_M_k ≤ q ≤β(M_k)
q prime q^-1.
By choosing β(M_k) = M_k^γ where γ > 1 is some positive number we get
∑_M_k ≤ q ≤β(M_k)
q prime q^-1∼loglogM_k^γ - loglog M_k = logγ.
Consequently,
rCl
∑_J∈ℐ_k-1 |J| ∑_M_k ≤q ≤β(M_k)
q prime q^-1 ∼ ∑_J ∈ℐ_k |J|
≲ ∑_M_k-1 ≤q ≤β(M_k-1)
q prime ∑_0 ≤p ≤q-1 ψ(q)
= ∑_M_k-1 ≤q ≤β(M_k-1)
q prime q ψ(q).
Recall that ψ(q) ≈ q^-τ, so
rCl
∑_M_k-1 ≤q ≤β(M_k-1)
q prime q ψ(q) ≈ ∑_M_k-1 ≤q ≤β(M_k-1)
q prime q^-τ+1
≲ M_k-1^-τ+2.
The exponent -τ + 2 < 0. Hence, if k is chosen to be sufficiently large, we have
∑_U ∈𝒰α(U) < ϵ,
as desired.
myplain
|
http://arxiv.org/abs/2409.03632v1 | 20240905154704 | Beyond Model Interpretability: Socio-Structural Explanations in Machine Learning | [
"Andrew Smart",
"Atoosa Kasirzadeh"
] | cs.LG | [
"cs.LG"
] |
Google Research
San Francisco
USA
[email protected]
Google Research
San Francisco
USA
[email protected]
§ ABSTRACT
What is it to interpret the outputs of an opaque machine learning model? One approach is to develop interpretable machine learning techniques. These techniques aim to show how machine learning models function by providing either model-centric local or global explanations, which can be based on mechanistic interpretations (revealing the inner working mechanisms of models) or non-mechanistic approximations (showing input feature-output data relationships). In this paper, we draw on social philosophy to argue that interpreting machine learning outputs in certain normatively-salient domains could require appealing to a third type of explanation that we call “socio-structural” explanation. The relevance of this explanation type is motivated by the fact that machine learning models are not isolated entities but are embedded within and shaped by social structures. Socio-structural explanations aim to illustrate how social structures contribute to and partially explain the outputs of machine learning models. We demonstrate the importance of socio-structural explanations by examining a racially biased healthcare allocation algorithm. Our proposal highlights the need for transparency beyond model interpretability: understanding the outputs of machine learning systems could require a broader analysis that extends beyond the understanding of the machine learning model itself.
Beyond Model Interpretability: Socio-Structural Explanations in Machine Learning
Atoosa Kasirzadeh
September 9, 2024
================================================================================
§ INTRODUCTION
In order to formulate a learning theory of machine learning, it may be necessary to move from seeing an inert model as the machine learner to seeing the human developer—along with, and not separate from, his or her model and surrounding social relations—as the machine learner.
- Reigeluth & Castelle <cit.>
The past decade has seen massive research on interpretable machine learning (ML).[For the purposes of this paper, we use “interpretable ML,” “explainable ML,” “interpretable AI,” and “explainable AI” interchangeably.] Here is a rough restatement of the goal of interpretable ML research program: many ML models are opaque in that even the expert humans cannot robustly understand, in non-mathematical terms, the reasons for why particular outputs are generated by these models <cit.>. To overcome this opacity, various model-centric techniques have been developed to interpret their outputs. These techniques are diverse. They range from producing counterfactual explanations or heatmaps that offer insights into how changing inputs affect outputs <cit.>, to interpreting the inner workings of the model by probing patterns of neuron activations or attention mechanisms <cit.>.[Researchers are actively developing unified frameworks that integrate multiple interpretability methods, with the aim of providing a comprehensive conceptual toolkit for understanding the outputs of complex ML models <cit.>.]
Despite these advancements, ML interpretability remains a contentious and ambiguous topic in the scientific community, lacking a universally accepted scope and definition <cit.>. This ambiguity complicates the evaluation and regulation of opaque ML systems, raising questions about what constitutes sufficient interpretation and how it should be assessed. A pragmatic and pluralistic approach to interpretability has gained traction, viewing explanations as context-dependent responses to why-questions <cit.>. On this pluralistic approach, the adequacy of an explanation depends on the specific inquiry.
For simple classification tasks, techniques like saliency maps or feature importance may suffice. For instance, if a model is differentiating between images of cats and dogs, saliency maps could highlight the pixels most influential in the decision-making process. However, for complex and socially-embedded topics — such as biased healthcare algorithms — these model-centric explanations can fall short. Consider an algorithm that predicts hospital readmission risk but systematically underestimates it for certain racial groups. A model-centric explanation might highlight “total healthcare costs incurred in the past year” as an important feature. However, this alone might not fully reveal why the algorithm underestimates risk for a specific racial group. The algorithmic choice could come from the fact that this racial group, due to systemic inequities, have historically been unable to afford adequate healthcare and thus incurred lower costs. As a result, the low value for the “total healthcare costs incurred in the past year” feature does not necessarily indicate better health. Instead, it may suggest unmet healthcare needs, leading to higher readmission rates that the algorithm does not effectively account for. In such cases, interpretations that consider both model-specific details like feature importance and relevant social and structural factors like healthcare affordability disparities among racial groups are crucial for understanding ML predictions or decisions.
In this paper, we draw on social philosophy <cit.> to advocate for a more comprehensive approach to ML interpretability research, expanding beyond model-centric explanations. We propose incorporating relevant socio-structural explanations to achieve a deeper understanding of ML outputs in domains with substantial societal impact.
In the rest of the paper, we introduce the concept of socio-structural explanations and discuss their relevance to understanding ML outputs. We then examine how these explanations can enhance the interpretation of automated decision-making by ML systems in healthcare <cit.>. Our paper expands the discourse on transparency in machine learning, arguing that it extends beyond model interpretability. We propose that in high-stake decision domains, a socio-structural analysis could be necessary to understand system outputs, uncover societal biases, ensure accountability, and guide policy decisions.
§ INTERPRETABLE ML AND ITS DISCONTENTS
ML interpretability aims to generate human-understandable explanations for model predictions. This process requires the specification of two key components: the explanandum (the phenomenon requiring explanation) and the explanans (the elements providing the explanation). The model's prediction (or decision) typically serves as the explanandum, while visualizations or linguistic descriptions generated via interpretability techniques act as the explanans. To better understand the landscape of interpretability methods, we provide a broad classification of prominent approaches.[The interpretable ML literature has grown extensively, making a comprehensive survey beyond the scope of this paper. For recent overviews, see <cit.>.]
Model-centric interpretability approaches can be classified according to various criteria, with one fundamental distinction being between intrinsic and post-hoc interpretability <cit.>. Intrinsic interpretability achieves transparency by restricting the complexity of the ML model itself, using approaches such as short decision trees or rule-based systems. In contrast, post-hoc interpretability involves applying methods after model training. These methods include SHAP (SHapley Additive exPlanations) values <cit.>, LIME (Local Interpretable Model-agnostic Explanations) <cit.>, saliency maps for neural networks <cit.>, and mechanistic interpretability tools <cit.>.[Post-hoc methods can also be applied to intrinsically interpretable models, such as computing permutation feature importance for decision trees, which can provide additional insights into their decision-making process.] Another popular classification criterion categorizes ML interpretability techniques into two main types: local and global. This categorization offers a complementary perspective by focusing on the scope and depth of the explanations they provide.
Local explanations focus on explaining individual (or a specific group of) predictions or decisions made by a model. Local explanations often use techniques like feature attribution <cit.> or counterfactual instances <cit.>. For example, for an image classification model that predicts "dog," a pixel attribution method might highlight the pixels around the dog's ears and tail as being most influential in the prediction "dog." The explanation could be "The model classified this image as a dog primarily because of the distinctive shapes in these highlighted areas (pointing to highlighted pixels in a visualization). The pointed ears here and the curved tail shape here were the most influential features in making this prediction. Other parts of the image, such as the background or the dog's body, had less impact on the classification." For a loan approval ML model, a counterfactual explanation could be "If your income was 5,000 US dollars higher, your loan would have been approved."
Global explanations shed light on the average behavior of the model and provide an overall understanding of how a model works across possible inputs. These methods are often expressed as expected values based on the distribution of the data. Global explanations aim to answer questions like "What features are generally most important for this model's predictions?" or "How does the model behave across different types of inputs?" Techniques for global explanations include partial dependence plots <cit.> and accumulated local effects <cit.>. For example, a partial dependence plot, a type of feature effect plot, can show the expected prediction when all other features are marginalized out. In a house price prediction model, a partial dependence plot might show how the predicted price changes as the house size increases, averaged across all other features like location, number of bedrooms, or age of the house. Since global interpretability methods describe average behavior, they are particularly useful when the modeler wants to understand the general mechanisms in the data, debug a model, or gain insights into its overall performance across various scenarios.
Mechanistic interpretations expand upon both local and global explanations. These interpretability tools seek to understand the internals of a model. In the case of neural networks, mechanistic interpretability tools reverse engineer the algorithms implemented by neural networks into concepts, often by examining the weights and activations of neural networks. This approach includes methods such as circuit analysis or dictionary learning for identifying specific subnetworks of neurons within larger models to understand the implementation of particular behavior <cit.>.[Neurons in neural networks can be monosemantic (representing a single concept) or polysemantic (representing multiple unrelated concepts). Monosemantic neurons activate for a single semantic concept, suggesting a one-to-one relationship between neurons and features <cit.>. However, neurons are often polysemantic, activating for multiple unrelated concepts, complicating network interpretation. For instance, researchers have empirically shown that for a certain language model, a single neuron can correspond to a mixture of academic citations, English dialogue, HTTP requests, and Korean text <cit.>. Polysemanticity makes it difficult to reason about the behavior of the network in terms of the activity of individual neurons.] Mechanistic interpretability is an emerging and highly active area of research, with rapid developments in its neural analysis techniques.
Each of the above-mentioned approaches offers different perspectives on model behavior, ranging from specific instance explanations to overarching principles of operation and fundamental computational mechanisms. The choice of method depends on the specific interpretability goals and the nature of the model being analyzed. There are several acknowledged limitations to existing interpretability approaches.
First, interpretability techniques can be brittle, sensitive to the target of interpretation <cit.>, to minor perturbations in model parameters <cit.> or input data <cit.>. This fragility raises concerns about the reliability and robustness of generated explanations using interpretability methods, especially in real-world scenarios where models are subject to noisy data and evolving conditions. Recent work on mechanistic interpretability has begun to discover features of large language models that are more robust <cit.>. However, there is still significant progress to be made in developing consistently reliable interpretability methods <cit.>.
Second, for a given model and input, there may be multiple valid explanations, each highlighting different aspects of the prediction-making or decision-making process <cit.>. This multiplicity embodies both a feature and a bug: as a feature, it reflects the need for a multi-faceted understanding of the complexity of ML predictions; as a bug, it introduces potential confusion and conflicting interpretations, challenging efforts to identify the most relevant or meaningful explanation. Consider, for instance, an ML model used in hiring decisions that recommends not to hire a particular candidate. A feature importance analysis might indicate that the candidate's educational background was the primary factor in the decision. However, a counterfactual explanation might suggest that changing the candidate's gender would alter the outcome. Simultaneously, a SHAP analysis could show that a combination of factors including work experience, interview performance, and age contributed to the decision. Each of these explanations provides insight into the model's reasoning, but emphasizes different aspects, some of which may be more socially sensitive or legally problematic than others. This diversity of explanations challenges practitioners in determining which aspects are most crucial for the model's behavior. Moreover, different stakeholders - such as job applicants, hiring managers, and legal compliance officers - might prefer or trust certain types of explanations over others, further complicating the practical application of these interpretability methods <cit.>. Despite the growing number of interpretability approaches, there is a lack of standardized benchmarks and evaluation frameworks to assess their legal and ethical relevance and compare their performance <cit.>.
Third, we still have no provable guarantee that a post-hoc explanation accurately reflects the true reasoning behind a model's prediction <cit.>. Explanations may be overly simplistic, highlight irrelevant features, or even be misleading, potentially leading to incorrect conclusions about the model's behavior. The potential lack of faithfulness is particularly problematic in high-stakes domains where decisions have significant consequences. For example, a counterfactual explanation for a loan denial might suggest that increasing income would lead to approval. However, the true cause might be a complex interaction of credit history and debt-to-income ratio, not captured by the explanation <cit.>. Given these limitations, researchers are exploring novel methods to enhance our understanding of ML models and their predictions (or decisions).
A promising approach to enhance model transparency involves expanding the scope of interpretations beyond the internal mechanics of the model itself. This expanded perspective recognizes that ML models do not operate in isolation, but within complex social and institutional contexts that can significantly influence their behavior and impact. Here, we propose a new perspective to interpreting ML outputs that incorporates relevant social and structural factors into transparency demands. In particular, we think that in certain situations, the soundness and stability of ML explanations can be improved by appealing to what we call socio-structural explanations that are external to an ML model. Our thesis is that in some socially salient applications of ML models, perhaps the most important constraints on model behavior are external to the model itself. Extending the idea that the machine learner is not only the inert model, but includes the human developers, uses and surrounding social relations and practices <cit.>, we propose to explain the behavior of a model, in such instances, given its place in a social structure. We call such explanations socio-structural explanations. In order to understand socio-structural explanations, we first need to know what are social structures?
§ SOCIAL STRUCTURES AND SOCIO-STRUCTURAL EXPLANATIONS
Social structures are the underlying realities that shape our social lives, influencing our choices, opportunities, and experiences <cit.>. They are the invisible scaffolding of society, both constraining and enabling our individual and collective actions. They give rise to social hierarchies through institutions, policies, economic systems, and cultural or normative belief systems such as race or socioeconomic status <cit.>. Social structures manifest in various forms, from the subtle influence of societal norms to the explicit impact of legal frameworks, creating a multilayered reality that shapes our experiences and opportunities.
Social and political philosopher, Iris Marion Young <cit.>, defines social structures as the interplay of institutional rules, interactive routines, resource mobilization, and physical infrastructure. These enduring elements shape the context within which individuals act, offering both opportunities and limitations.[Social structures, according to Young <cit.>, are defined as follows:
As I understand the concept, the confluence of institutional rules and interactive routines, mobilization of resources, as well as physical structures such as buildings and roads. These constitute the historical givens in relation to which individuals act, and which are relatively stable over time. Social structures serve as background conditions for individual actions by presenting actors with options; they provide “channels” that both enable action and constrain it.
] These structures, while socially constructed, possess a reality for exerting tangible influences on individuals and institutions <cit.>. They are powerful forces that can constrain and enable actions, cause the specific distribution of resources, and define social roles and expectations. Social structures can explain persistent patterns and circumstances in society, such as racial inequality or gender disparities. To get more concrete, let us analyze the notion of social structures in the context of a socio-structural explanation, borrowing a simple example from Garfinkel <cit.>:
Suppose that, in a class I am teaching, I announce that the course will be “graded on a curve,” that is, that I have decided beforehand what the overall distribution of grades is going to be. Let us say, for the sake of the example, that I decide that there will be one A, 24 Bs, and 25 Cs. The finals come in, and let us say Mary gets the A. She wrote an original and thoughtful final.
<cit.> argues that the explanation "She wrote an original and thoughtful final" is inadequate to answer the explanation-seeking question "Why did Mary get an A?" In a curved grading system, achieving the sole A grade requires more than just quality work. For Mary to earn the only A in the class, her final would need to be the best. If the instructor had not implemented a grading curve, multiple students could have earned As by producing thoughtful and original finals. Garfinkel elaborates on this point, stating "So it is more accurate to answer the question by pointing to the relative fact that Mary wrote the best paper in the class" <cit.>. Mary's A grade was not solely a result of her individual performance, but also a consequence of her relative standing among peers, combined with the specific grading structure that emphasized this comparative aspect. The grading structure, in this case, serves as a crucial contextual element shaping the explanation. More precisely, the structural aspect of this explanation is "the predetermined grading curve that limited the number of As to one," while the social aspect is "Mary's performance relative to her peers (she wrote the best paper in the class)."
Here is a different example for further clarification of the notion of socio-structural explanation. Consider the following explanatory question: “Why do women continue to be economically disadvantaged relative to men (as opposed to reaching economic parity with men?)” <cit.>. <cit.> argues that we can have (at least) three explanations for this question: biological, individualistic, and structural.
Biologistic explanation: Women are inherently less capable than men in biological qualities deemed necessary (such as intelligence or competitiveness) for success in high-paying jobs.
Individualistic explanation: Women, to a greater extent than men, prioritize child-rearing over pursuing high-paying careers, thus voluntarily sacrificing economic success for the perceived rewards and satisfactions of motherhood.
Structural explanation: Women are embedded within a self-reinforcing economic structure that systematically disadvantages them through institutional practices, social norms, and power dynamics.
Each of these explanations refers to different causes, operating at distinct levels of analysis. The biologistic and individualistic explanations focus on factors intrinsic to individuals or groups, without considering the broader socio-structural context. In contrast, the structural explanation situates individual actions and outcomes within a larger system of interconnected social forces. If the social structure is in place, then we can view individuals as occupying specific "nodes" within a complex social network or structure. The socio-structural explanation posits that gender wage disparities arise from the complex interplay of societal, economic, and institutional factors that collectively shape opportunities and constraints. Given the socio-structural limitations in place, we can explain why women, at the population level, experience economic disadvantages compared to men based on their position within the social structure. In this context, "social structure" refers to the complex network of institutions, relationships, and cultural norms that organize society. It includes economic systems that historically undervalue work traditionally performed by women, political institutions that may underrepresent women's interests, and educational structures that can reinforce gender stereotypes. Additionally, it includes cultural norms that influence career choices and work-life balance expectations, organizational hierarchies that often favor male leadership, and legal frameworks that may inadequately address gender discrimination.
Women's place within this multifaceted social structure often results in reduced access to resources, limited decision-making power, and fewer opportunities for advancement, collectively contributing to persistent economic disparities at the population level. The socio-structural approach to explanation, when rigorously applied, offers valuable insights. It demonstrates how individual choices and actions can be profoundly shaped by the surrounding social structures. By highlighting the influence of broader structural forces on seemingly personal decisions, it reveals patterns often operating beyond an individual's immediate awareness or control.
Let us draw a close analogy between the above instances of socio-structural explanations and a toy example of interpreting an ML model's output. Consider an ML-powered hiring model that consistently recommends male candidates over female candidates for senior executive positions in a tech company. An initial explanation of the recommendations generated by a SHAP interpretability method might say: "The model recommends male X over female Y because X's features contribute more positively to the model's output. Specifically, X's 10 years of tech leadership experience contributes +0.4 to the score, while Y's 7 years contributes only +0.2." Let us assume similar explanations (relating years of tech leadership experience to the recommendation score) are generated for a population of females. These explanations might fail to capture the full picture.
Upon deeper investigation, an auditor team uncovers a more complex and nuanced reality. First, the auditors find that the ML model was trained on the company's historical hiring data from 2000-2020, which included 85% male executives. This data reflects the company's past hiring practices, which favored men for leadership roles. The socio-structural aspect here is the historical underrepresentation of women in executive positions, rooted in long-standing societal norms and institutional practices. A socio-structural explanation could look like: "The model's bias reflects decades of systemic exclusion of women from leadership roles in the tech industry, perpetuating a cycle where the lack of female representation in executive positions reinforces the perception that these roles are best suited for men." Second, the ML model places high importance on continuous work experience, with any gap longer than 6 months reducing a candidate's score by 0.1 per year. 40% of female candidates had career gaps averaging 2.5 years, compared to 10% of male candidates averaging 1 year, often coinciding with childbearing ages. This reflects the socio-structural reality of women bearing a disproportionate responsibility for child-rearing and family care, leading to more frequent and longer career interruptions. A socio-structural explanation could look like: "The model's penalty for career gaps disproportionately impacts women due to societal expectations and norms that place the primary burden of childcare and family responsibilities on women, resulting in more frequent and longer career interruptions that are then interpreted by the model as reduced qualifications." Third, the model does not consider geographic location in its evaluation. However, geographic disparities affect job availability and commute times, disproportionately impacting women with childbearing responsibilities. This reflects socio-structural expectations around family care that often limit women's job options to those closer to home or with flexible hours. A socio-structural explanation could look like: "The model's failure to account for geographic factors overlooks the societal expectations that often constrain women's job choices based on proximity to home and flexibility for family care. This oversight particularly disadvantages women who may be highly qualified but limited in their job options due to these socially imposed constraints."
Producing rigorous socio-structural explanations can be challenging as it requires significant sociological understanding and interdisciplinary expertise. However, once obtained, these explanations enable novel forms of interventions. Here are some examples of possible interventions enabled by obtaining socio-structural explanations. The first is to modify the model to cap the maximum contribution of "continuous experience" at +0.2. The second is to introduce a new feature "diverse experience" that values varied career paths, including those with gaps. The third is to augment the training data with 500 profiles of successful executives who have had career gaps of 1-3 years, ensuring at least 50% are women. The fourth is to implement a company-wide policy requiring human review for any candidate the ML system ranks lower primarily due to career gaps (>0.2 score difference). This toy example is supposed to highlight that integrating socio-structural explanations into the ML transparency toolkit enables us to transcend superficial model-centric solutions (when relevant) and address the fundamental causes underlying ML outputs.
Of course, the specific interventions depend on what we want to change and the particular context of the problem at hand. Socio-structural explanations are not always useful or applicable in every situation. The effectiveness of these explanations and subsequent interventions can vary based on the complexity of the social systems involved, the quality of available data, and the specific goals of the analysis. In some cases, other approaches might be more appropriate or effective.
In the following section, we examine a case study of algorithmic deployment in healthcare decision-making, highlighting the critical relevance of socio-structural explanations in this context. This analysis demonstrates how a deeper understanding of social structures can inform more effective strategies for developing and implementing algorithmic systems in high-stakes decision domains. While our original focus was on socio-structural explanations for ML systems, we recognize that the importance of these explanations generalizes to a broader range of automated decision systems.
§ SOCIO-STRUCTURAL EXPLANATIONS OF RACIAL BIAS IN HEALTH-CARE ALGORITHMS
A widely discussed example in the growing body of literature on algorithmic bias is the study by <cit.>. This research revealed that a commonly used US hospital predictive algorithm for allocating scarce healthcare resources systematically discriminated against Black patients. Specifically, the algorithm assigned lower risk scores to Black patients who were equally in need as their White counterparts. The root cause was the algorithm's use of healthcare costs as a proxy for "healthcare need" <cit.>. This approach led to a significant underestimation of health risks for Black patients who, on average, incurred lower healthcare costs than White patients with similar chronic conditions due to systemic disparities in care access and quality <cit.>.
Empirical investigations demonstrated that the care provided to Black patients cost an average of USD 1,800 less per year than the care given to a white person with the same number of chronic health problems. At a given risk score, Black patients are considerably sicker than White patients, as evidenced by signs of uncontrolled illnesses <cit.>. The algorithm predicted that this disparity in spending corresponded to a similar disparity in actual health-care needs and therefore risk score. Consequently, Black people had to be much sicker in order to be referred for treatment or other resources. The algorithm's prediction of health needs is, in fact, a prediction on health costs <cit.>.
When algorithmic decision systems fail in consequential domains like health-care, the repercussions can be severe, potentially leading to patient deaths. It is crucial to understand the reasons and causes for such failures. Therefore, explaining the "why" behind these failures through the analysis of failed outputs is critical. One prevalent type of failure is algorithmic bias that perpetuates existing socio-structural inequalities, such as structural racism <cit.>. Structural racism refers to the complex ways in which historical and contemporary racial inequities are reproduced through interconnected societal systems like healthcare, education, housing, and the criminal justice system <cit.>. Even when race is not explicitly considered, its influence can be deeply embedded in the data, shaping associations and outcomes <cit.>. In the context of this instance, the following explanatory question demands a response: Why did this algorithm systematically discriminate against Black people?
To answer this question, we must consider both the interpretation in reference to the model and the broader socio-structural context in which it operates.[For a discussion of the levels of interpretation see <cit.> and <cit.>.] <cit.> show that this particular algorithm discriminated against Black patients due to its use of healthcare costs as a proxy for "healthcare need." This choice reflects a fundamental misunderstanding of the relationship between costs and needs in a healthcare system marked by systemic racial disparities. <cit.> demonstrated that conditioning on healthcare costs is the mechanism by which the bias arises in this case, and we must change the data we feed the algorithm and use new labels that better reflect social reality, which in turn requires deep understanding of the domain, the ability to identify and extract relevant data elements, and the capacity to iterate and experiment <cit.>. The socio-structural interpretation of the algorithm's behavior is as follows.
The algorithm discriminates against Black patients because it is designed and deployed in a healthcare system characterized by longstanding racial inequities. By encoding healthcare costs as a proxy for healthcare needs, the algorithm inadvertently encodes and perpetuates systemic disparities in care access and quality. Black patients, on average, incur lower healthcare costs not because they are healthier, but due to historical patterns of exclusion, lack of access to care, and underinvestment in healthcare resources for Black communities. The algorithm interprets these lower costs as lower needs, thereby underestimating the health risks for Black patients and perpetuating a cycle of inadequate care allocation. This reflects how the algorithm manages to reproduce structural racism through its uncritical use of data that embodies these systemic inequalities.
It remains challenging for practitioners to identify the harmful repercussions of their own systems prior to deployment, and, once deployed, emergent issues can become difficult or impossible to trace back to their source <cit.>. Unfortunately, many failures of algorithmic decision systems in the healthcare industry disproportionately impact people or communities who have been put already in a structurally vulnerable social positions <cit.>. This can be due to many factors. However, a consistent theme in the study of these failures, that is often only revealed after the fact, is that there is a lack of socio-structural understanding among the designers and users of these systems <cit.>. The study presented in this section exemplifies this challenge. Employing model-centric explanations would likely highlight the importance of the cost feature to algorithmic output, but would not expose the underlying racial bias originating from historical and systemic inequalities in healthcare access and delivery. In this context, socio-structural explanations consider the relevant societal context in which the model operates, in relation to relevant historical biases, societal norms, and institutional practices.
§.§ How structural racism works in healthcare
Structural racism and its impact on health outcomes for minorities in the American healthcare context is increasingly being acknowledged in the medical literature <cit.>. Structural racism is a critical body of knowledge needed for creating healthcare machine learning systems that actually benefit all patients, rather than exacerbating existing health disparities <cit.>.
re-write: The use of ML in healthcare must reckon with histories of enslavement, redlining, hyperencarceration, and racism. Modern American medicine and healthcare have historical roots in scientific racism and eugenics movements <cit.>. Scientific racism reified the concept of race as an innate biologic, and later genetic, attribute using culturally influenced scientific theory and inquiry <cit.>. However, we know now that race and ethnicity are social constructs, without scientific or biological meaning <cit.>. Thus we cannot appeal to biological explanations for the observed racial disparities healthcare outcomes. However, in the case of racial disparities in health outcomes it is clear that, despite being entirely socially constructed categories, race and ethnicity do have causal and explanatory power - in other words these categories act in the world to cause measurable material differences in people's lives. This is because of the racialized structural hierarchy in the US <cit.>.
Salient examples of health outcome inequities are that age-adjusted breast cancer mortality is about 40% higher among Black women than among non-Hispanic White women <cit.>, with COVID-19 Black people have been hospitalized at 2.3 times the rate and died at 1.7 times the rate of non-Hispanic White people, and Hispanic or Latinx people have been hospitalized at 2.2 times the rate and died at 1.8 times the rate of non-Hispanic White people <cit.>, between 2018 and 2020 Black individuals lost an estimated 3.3 years and Hispanic/Latinx individuals lost an estimated 3.9 years in life expectancy, while White individuals lost an estimated less than 1 year <cit.>.
What explains these patterns of racial health and healthcare disparity? Again, we cannot appeal to biological explanations because race and ethnicity are not biological categories, we therefore must appeal to social structural explanations. In particular, we must appeal to structural racism. This is the ways in which racial inequities in outcomes are perpetuated by social, economic, and political systems, including mutually reinforcing systems of health care, education, housing, employment, the media, and criminal justice. It results in systematic differences in access to health care <cit.>. Black and Indigenous individuals and other people of color face significant barriers to obtaining quality health care services in the US <cit.>. Black, American Indian and Alaska Native individuals, groups also with the shortest life expectancy, also spend significantly less on healthcare <cit.>.
What does this mean for ML explainability? With this background knowledge on structural racism we can see where the conceptual and theoretical errors occurred in the development of the healthcare algorithms analyzed by Obermeyer <cit.>. As outlined above, the creators of these kinds of algorithms use future healthcare costs as a proxy for the complex target notion "healthcare need" and "which patients will benefit the most". As Obermeyer points out, because of the structural and institutional incentives surrounding the creation of these algorithms, it is not unreasonable to use health care cost as predictor. Jacobs and Wallach highlight that computational systems often try to make predictions about unobservable theoretical constructs that cannot be measured directly and must instead be inferred from observable properties thought to correlate or be influenced by them - i.e., operationalized in a measurement model <cit.>. As an interviewee who works as a product manager stated in a recent study on ML development, “I look for features from data scientists, [who have ideas of] things that are correlated with what I’m trying to predict.” <cit.>
However, acting on the outputs of a deployed model is de facto a causal interpretation of the model. If this causal interpretation is wrong, or misrepresents human reality or society, the intervention based on the model's ouput will be wrong. Wrong interventions, as in the case of racially biased healthcare allocation, cause harm.
In the health care algorithm case, "future health care need" is what the developers were trying to predict and they needed to find things that are correlated with that unobservable construct. Obermeyer et al also show that bias is often attributable to label choice—the difference between some unobserved optimal prediction and the prediction of an algorithm trained on an observed label <cit.>.
We argue that an adequate explanation for why these health care algorithms are biased is not to be found somewhere inside the model. Even though many researchers argue that "explainability can support developers and clinicians to detect and correct such biases—a major potential source for injustice—ideally at the early stage of AI development and validation, e.g. by identification of important features indicating a bias in the model." Emphasis ours <cit.>. Obermeyer et al acknowledge that understanding the necessary structural aspects of prediction in health care, criminal justice, employment and other socially consequential domains is critical <cit.>.
Interpretability techniques are useful in diagnosing these issues, as evidenced by the Obermeyer et als analysis, however they were required to reconstruct the algorithm from scratch using existing data.
An individualist approach to interpreting the outputs of these healthcare algorithms would consider the model as an individual object, and attempt to peer inside the model to discover what kinds of undesirable associations the model might have learned from the data.
§ IMPLICATIONS FOR ML TRANSPARENCY RESEARCH AND CONCLUSION
ML research and practice are fundamentally shaped by the approaches adopted by practitioners. These approaches influence the entire process: from the questions asked and data collected, to the choice of objective functions and the selection of proxy or target variables for optimization. Throughout this paper, we have argued that model-centric explanations, while valuable, can be inadequate for comprehensively understanding whether a model truly benefits or potentially harms people. This inadequacy is particularly pronounced in high-stakes domains where ML models are often developed and deployed into complex social and structural contexts without sufficient domain-specific theoretical understanding. We have argued that to meaningfully interpret the social predictions (or decisions) of models in high-stake domains, a deep socio-structural understanding is required.
One challenge lies in that many ML practitioners and researchers may not feel adequately equipped to analyze and respond to social structures. Alternatively, they may be hindered from leveraging social structural knowledge due to constraints in time, training, incentives, or resources <cit.>. This gap between technical expertise and socio-structural understanding presents a significant hurdle in developing truly beneficial ML systems.
Algorithmic transparency and accountability research in ML is often motivated by the need to foster trust in these systems. Much of this research rightly argues for the critical importance of model-centric interpretations <cit.>. However, the demands for transparency of ML models must extend beyond model-centric details to encompass socio-structural factors in socially-salient prediction or decision domains.
Producing new, more representative labels and objectives for ML models requires a deep understanding of the domain, the ability to identify and extract relevant data elements, and the capacity to iterate and experiment <cit.>.
The importance of socio-structures has been increasingly recognized in recent literature on algorithmic justice <cit.>. These works argue for a more holistic approach to ML development and deployment, one that considers not just model-centric measures but also societal impacts.
In light of these considerations, we call for further research into the integration of socio-structural understanding into different stages of the ML lifecycle, from problem formulation and data collection to model development, deployment, and ongoing monitoring. We think that sometimes socio-structural interpretations can reveal causally-relevant reasons for why an algorithm behave in a certain way for a certain population. By doing so, we can work towards ML systems that are not only technically proficient but also socially aware and beneficially aligned.
§ ACKNOWLEDGMENTS
We would like to thank Donald Martin, Been Kim, Darlene Neal, Mayo Clinic Accelerate Program and The Impact Lab team at Google Research for helpful discussions and feedback on earlier drafts of this paper.
ACM-Reference-Format
|
http://arxiv.org/abs/2409.03019v1 | 20240904182411 | Optical sensitivities of current gravitational wave observatories at higher kHz, MHz and GHz frequencies | [
"Roman Schnabel",
"Mikhail Korobko"
] | astro-ph.IM | [
"astro-ph.IM",
"gr-qc",
"hep-ex"
] |
Institut für Quantenphysik & Zentrum für Optische Quantentechnologien, Universität Hamburg,
Luruper Chaussee 149, 22761 Hamburg, Germany
§ ABSTRACT
GEO 600, Kagra, LIGO, and Virgo were built to observe gravitational waves at frequencies in the audio band, where the highest event rates combined with the largest signal to noise ratios had been predicted. Currently, hypothetical sources of cosmological origin that could have produced signals at higher frequencies are under discussion. What is not widely known is that current interferometric GW observatories have a frequency comb of high optical sensitivity that encompasses these high frequencies. Here we calculate the high-frequency noise spectral densities of operating GW observatories under the justified assumption that photon shot noise is the dominant noise source. We explain the underlying physics of why high sensitivity is achieved for all integer multiples of the free spectral ranges of the observatory's resonators when an interferometer arm is not orientated perpendicular to the propagation direction of the GW. Proposals for new concepts of high-frequency GW detectors must be compared with the high-frequency sensitivities presented here.
Optical sensitivities of current gravitational wave observatories
at higher kHz, MHz and GHz frequencies
M. Korobko
September 9, 2024
=========================================================================================================
§ INTRODUCTION
At the time when Rainer Weiss analysed the concept of earthbound laser interferometric gravitational wave (GW) detection in terms of signal strength and noise more than 50 years ago <cit.>, astrophysical sources of signals in the audio band were known <cit.>. The probability of being able to measure GW signals on Earth in this band increased further over the following ten years <cit.>.
At the turn of the millennium, a total of six Michelson-type laser-interferometric GW detectors – GEO 600 <cit.>, LIGO (3) <cit.>, Tama <cit.> and Virgo <cit.> – were under construction, targeting the audio-band.
On September 14th, 2015, Advanced LIGO observed the first GW, which had frequency components up to about 300 Hz, emitted by the merger of two black holes at a distance of about 1.3 billion light years <cit.>. By 2020, up to 90 signals from compact binary mergers were detected by LIGO and Virgo <cit.>.
Also below the audio band, a large number of sources are expected to emit signals of measurable amplitude. Avoiding the strong terrestrial noise in this frequency range, LISA is a space-based GW observatory that targets the range from 0.1 mHz to 0.1 Hz <cit.>. It is due to be launched in the 2030s. Pulsar timing arrays (PTAs) are used to measure GW in the nHz range <cit.>. In 2023, several PTA collaborations found evidence for an incoherent background of gravitational waves produced by the collisions and mergers of supermassive black holes, see for instance <cit.>.
GWs at frequencies above the audio band would be rather exotic. There are no known astrophysical sources from star formation or evolution that could emit such high frequencies with measurable amplitudes. However, cosmological sources from the early universe cannot be ruled out. The early universe is not well understood, and inflation <cit.>, first-order phase transitions <cit.>, topological defects <cit.>, and other effects could have generated gravitational waves that still have frequencies in the MHz or even GHz range today despite redshift. A review can be found in Ref. <cit.>.
New detectors have been proposed to measure gravitational waves above 10 kHz. A recent and highly cited review is given by Ref. <cit.>. However, it was largely overlooked that today's gravitational wave detectors can measure high-frequency gravitational waves not only via the non-linear memory effect <cit.>,
but also have a considerable optical sensitivity above 10 kHz as well as in the MHz and GHz range.
As early as 2002, it was found that first-generation GW detectors have relatively high sensitivity at frequencies corresponding to integer multiples of the free spectral range (FSR) of the arm resonators <cit.>. In the case of the LIGO arm resonators, the FSR is 37.5 kHz. A readout channel for this frequency was developed by down-converting the signal to match the existing LIGO data acquisition system <cit.>.
The LIGO antenna pattern was analyzed on gravitational-wave frequencies corresponding to multiples of the cavity free spectral range <cit.>.
Here, we present the optical strain sensitivities up to 10^11 Hz of the Advanced LIGO observatory, representing also Virgo and KAGRA <cit.>, as well as GEO 600. We additionally consider a 100-m and a 1-m laser interferometer with high-finesse arm resonators, but no other resonant enhancements such as power or signal recycling.
At about 1 MHz, current LIGO achieves a one-sided strain-normalised shot noise spectral density of the order of 10^-22/√( Hz). Generally, such high sensitivities at high frequencies are achieved if observatory arm resonators are tilted in the direction of GW propagation and if the GW frequency matches the differential frequency of two longitudinal modes of the optical resonator. The latter condition corresponds to the situation when the arm resonator roundtrip length equals an integer multiple of the GW wavelength. An example is illustrated in Fig. <ref>.
§ METRIC OF A WEAK GRAVITATIONAL WAVE
In all metrics that solve Einstein's field equations, the 4-dimensional events along a propagating laser beam have zero distances.
The metric of a weak gravitational wave that is (+)polarised and propagates along the z-direction is therefore described in the transverse-traceless (TT) gauge by
ds^2 = -c^2 dt^2 + (1+h_+) dx^2 + (1-h_+) dy^2 + dz^2=0 ,
where |h_+| ≪ 1 is the amplitude of the polarised GW as described above and c is the speed of light.
Eq. (<ref>) makes it possible to easily determine the change in the propagation time of a laser beam along the y-direction (or x-direction) <cit.>.
The above equation simplifies to
c^2 dt^2 = (1 - h_+(t,y))dy^2 .
The time span τ (y) that the light needs to propagate from y_0 to y is for |h_+| ≪ 1 then given by
τ(y) = t_0 + 1/c∫_y_0^y√(1-h_+ (τ_0(y'))) dy'
≈ t_0 + y-y_0/c - 1/2c∫_y_0^y h_+(τ_0(y')) dy' ,
where τ_0(y') = t+(y'-y_0)/c is the unperturbed time span for light starting at time t.
For a monochromatic GW with amplitude h_+(t) = h_+ cos(2π f t) and a laser beam that is retro-reflected by a mirror at distance L,
the change in the round trip time at y_0=0 is given by
Δτ_2L = -h_+/2c∫_0^Lcos[2π f(t+y'/c)] dy'
.
+ h_+/2c∫_L^0 cos[2π f(t+y'-L/c)] dy' . .
The solution of the integral provides the known amplitude for the change in the round-trip time in an arm resonator aligned along y in case of a (+)polarized GW
Δτ_2L = -Lh_+/2c(π f/f_ FSR) ,
where f_ FSR = c/(2L) is the free spectral range of the arm resonator. For an arm resonator in the x direction, Eq. (<ref>) has the opposite sign.
For low GW frequencies, i.e. f ≪ f_ FSR (audio-band frequencies for km-scale arm resonators) one gets the well-known relation
Δτ_2L≈ -Lh_+/2c ⇒ Δ L ≈ -Lh_+/2 ,
where Δ L is the amplitude of the effective arm length change.
Relevant for this work is Eq. (<ref>). It states that the time delay due to GW is zero, if the GW frequency is f = n · f_ FSR, with n a natural number. Often overlooked, however, is the limited range of validity of Eq. (<ref>), and it is therefore wrongly concluded that laser interferometric GW observatories are generally not sensitive to GWs at these “FSR-frequencies”.
In fact, the current GWOs are only insensitive to these high-frequency gravitational waves if they come from the zenith (or nadir). For all other alignments, interferometric GW observatories have significant response precisely at all GW frequencies that correspond to an integer multiple of f_ FSR.
The rather high response at these particular frequencies comes from the fact that optical resonators resonantly enhance all signal frequencies f = n · f_ FSR since these frequencies correspond to the frequency separation of neighbouring longitudinal resonator modes.
As an example, we calculate the time delay for a resonator round trip when the resonator is inclined at θ = 45^∘ against the propagation direction of the GW, as sketched in Fig. <ref>. The coordinate transformation between the (+)polarisation and the (x, y)-oriented arms leads to a halving of the GW amplitude contribution. Additionally, the trajectory of light changes over the duration of the gravitational wave, i.e. we replace in Eq. (<ref>) h_+(t) by h_+(t+y/√(x c^2))/2 for the light propagating along the y axis.
If we carry out the calculation analogous to the one above, we arrive at a relatively simple integral to solve.
Here, we limit ourselves to specific GW frequencies.
For f≪f_ FSR we obtain
Δτ_2L≈ -Lh_+/4c ⇒ Δ L ≈ -Lh_+/4 ,
i.e. half of the signal for the optimal alignment.
For f = f_ FSR, the integral reduces to
Δτ_2L= -1/4c∫_0^L h_+cosπ y'/Lcosπ y'/√(2)L dy' ≈ -0.18 L/c h_+ ,
which is only slightly worse than the response at low frequencies according to Eq. (<ref>).
Similarly, the detector has significant response at all cavity FSRs, see next section.
Current GW observatories have optical responses to gravitational waves according to Eqs. (<ref>)-(<ref>).
Since our calculation uses the TT-gauge, the light's red-shift (when propagating through expanding spacetime) and blue-shift (when propagating through shrinking spacetime) are already included <cit.>.
We note that we have set the phase of the GW to zero. In the more general case, the above equations would get slightly more complex, as we show below.
§ FREQUENCY COMBS OF HIGH SENSITIVITY OF CURRENT GW OBSERVATORIES
An optical resonator shows a longitudinal resonance if its round trip length 2L equals an integer multiple of the wavelength of the light coupled to it (2L = n ·λ). The frequency spacing of two neighboring resonances is called the free spectral range (f_ FSR = c/2L). Phase modulations of carrier light that meets one resonance condition are optically enhanced at all frequencies f that correspond to n · f_ FSR, where n is again a natural number.
In the case of the 4 km LIGO detectors, the comb spacing is 37.5 kHz. The 3 km Virgo and KAGRA detectors have a comb spacing of 50 kHz. The 1.2 km long, folded-arm signal recycling cavity of GEO 600 results in integer multiples of 125 kHz. In all cases, the linewidths of the resonances are of the order of a kHz. The proposed 10 km Einstein Telescope and the 40 km Cosmic Explorer have comb spacings of 15 kHz and 3.75 kHz <cit.>.
Spectral sensitivities of GW observatories are best described by one-sided (positive frequencies only) amplitude noise spectral densities (ASD) normalized to the signal strength at the respective frequency. The relation between the ASD normalized to strain h and normalized to the round trip phase difference of the laser beams φ reads
√(S_h(f)) = c/2Lω√(S_φ(f)) ,
where ω is the angular frequency of the laser light. If the noise in S_φ (f) is dominated by photon shot noise <cit.>, which is a well-justified approximation for signal frequencies above a few kilohertz, the noise spectrum alone is “white”, i.e. independent of the frequency. The phase signal on the laser light, however, depends on the frequency and additionally on the GW's polarisation with respect to the orientation of the observatory, on its alignment with respect to the GW's direction of propagation, and on length and linewidth of the arm resonators and further enhancement resonators coupled to it.
We show how the phase signal for an observatory with two equally long arms under 90^∘ can be calculated for arbitrary alignments in the next section.
Fig. <ref> presents quantum noise amplitude spectral densities of Advanced LIGO for two different sky locations of the GW sources for mixed GW polarisation. The zenith sky location provides the lowest noise for audio-band frequencies (green). The second sky location is optimized for lowest noise at LIGO's FSR frequency of 37.5 kHz (blue). LIGO's quantum noise limited amplitude sensitivity at this frequency is just a factor of about six lower than that at the optimal frequency between 100 and 200 Hz.
Fig. <ref> presents polarisation-averaged and sky-averaged amplitude shot-noise spectral densities at the “FSR frequencies” of two GW detectors in operation and two conceivable detectors with less complexity and shorter arms.
In all traces, the noise to signal ratio increases proportional to the GW frequency. This is a general property of laser interferometers with resonator round trip time larger than the gravitational wave period, because the effective propagation time over which the effect of the GW is accumulated is inversely proportional to the GW frequency. This property is described by the sinc-function in Eq. (<ref>).
Comparing the interferometer sensitivities shows that for frequencies above the largest FSR (here 150 MHz), arm length is not an issue.
§ HIGH-F ANTENNAE PATTERN
Here, we outline how to derive the direction-dependent amplitude of GW induced phase signals for an interferometric GW observatory with perpendicular arms following the work <cit.>.
The arms are oriented along x and y, of which the x-arm provides the reference.
We define the GW-induced change of the optical round-trip phase as
φ_a = h_aℱ_a(f)G(f) ,
where h_a is the polarisation of the GW coming from direction a defined with respect to the x-arm, ℱ_a(f) is the response to this GW, and G(f) is the optical transfer function (e.g. of an arm cavity). Direction a then reads
a_x = sinθsinϕ ,
a_y = sinθcosϕ ,
where θ∈ [0,π] is co-latitude, and ϕ∈ [0, 2π] is longitude of the source.
The polarisation along the two arms is related to (+,×)polarisation through the coordinate transformation
h_xx(θ, ϕ) = h_+ (cos^2 θcos^2ϕ - sin^2ϕ)
+2 h_×cosθsinϕcosϕ,
h_yy(θ, ϕ) = h_+ (cos^2θsin^2ϕ - cos^2ϕ)
-2 h_×cosθsinϕcosϕ .
Following the detailed derivation of the high-frequency response given in <cit.>, we define the response function
ℱ_a(f) = e^-iπ f/f_ FSR/2(1 - a^2)π f/f_ FSR
×( sin(π f/f_ FSR) - a sin(π a f/f_ FSR) .
.
- i a ( cos(π f/f_ FSR) - cos(π f a/f_ FSR) ) ) .
For gravitational waves from zenith (a = 0) the equation is reduced to Eq. (<ref>), which is zero for f=f_ FSR.
For other orientations, however, the above equation is not zero for f=f_ FSR.
The phase difference between two arms can be expressed in terms of the response function
Δφ = φ_x - φ_y = h_xxℱ_x(f)G_x(f) - h_yyℱ_y(f)G_y(f) .
Typically, the optical response of the arms is identical, G_x(f)=G_y(f)=G(f). Importantly, in detectors with symmetric arms, the response to GWs only depends on the arm lengths, and not on other parameters of optical responses, which is reflected in the fact that G(f) enters as a common factor.
We can re-write the phase difference in terms of the response of the detector to the two polarisations yielding
Δφ = (h_+ℱ_+(f) - h_×ℱ_×(f))G(f),
where ℱ_+ can be computed using Eq. (<ref>) and formally setting h_+=1, h_×=0 (and vice versa for ℱ_×). We don't explicitly write the resulting bulky expressions in the interest of space. It is common to use the averaged polarisation response defined by
ℱ̅ = √(|ℱ_+|^2 + |ℱ_×|^2 ).
To quantify the sensitivity of a GW observatory, it is also common to use the sky-averaged response. We have followed both in Fig. <ref>.
Alternatively, the signal amplitude can be displayed for a fixed signal frequency as a function of the localisation in the sky, which are the so-called antennae pattern.
Fig. <ref> show the characteristic antennae pattern of GW observatories with perpendicular equally log arms for the audio band and a selection of their FSR frequencies. The upper plot presents the response to audio band frequencies. It is maximal for GWs coming from zenith and nadir.
The next plots present the antennae pattern for frequencies n · f_ FSR, with n = 1, 5, 100.
Here, the highest response at FSR frequencies is achieved for other sky locations.
The higher the frequency, the finer the antenna pattern.
The maximum response is inversely proportional to the order n of the FSR.
§ CONCLUSIONS
Resonator-enhanced laser interferometers with movable test-mass mirrors have significant optical sensitivities to gravitational waves at a comb of higher kHz, MHz and GHz frequencies. State of the art laser interferometer techniques allow for strain normalized sensitivities below 10^-22/ √( Hz) around one MHz and 10^-19/ √( Hz) around one GHz (Fig. <ref>) for 100 m and 1 m arm lengths, respectively.
Modifications to the optical systems of existing laser interferometers are not required for use in the high-frequency range. Main modifications would be faster detector electronics and higher sampling rates, which is easy to realise, as well as adapted absolute calibration procedures.
If the one-MHz range was read out by the currently operating Advanced LIGO, a sensitivity of the order 10^-22/ √( Hz) would be achieved, which is well inside the range of existing and proposed dedicated instruments, compare table 1 in <cit.>.
However, even these sensitivities are about seven orders of magnitude too low around one MHz and about 15 orders of magnitude too low around one GHz compared to predicted amplitudes of stochastic sources such as cosmic strings or early-universe first-order phase transitions, see Fig. 1 in <cit.>.
Dedicated interferometric detectors as in Fig. <ref> would require 10 orders of magnitude increased optical power and ten years of data taking to detect these signals at a MHz. At a GHz, even 24 orders of magnitude increased optical power would be required.
Interestingly, the existence of gravitational waves at MHz and GHz frequencies can even be detected at audio-band frequencies. This is due to the fact that during the passage of transient gravitational wave energy, the local region of space-time slowly accumulates a lasting space-time distortion. This is the nonlinear memory effect of GW bursts <cit.>.
Ref. <cit.> analyses the audio-band memory effect of high-frequency gravitational wave bursts and claims that Advanced LIGO would have already measured the memory effect from arbitrarily high gravitational wave frequencies of the order of 6 · 10^-21 / √( Hz) with a signal-to-noise ratio (S/N) of five if they existed.
Fig. 3 in <cit.> claims that a sine-Gaussian burst signal around 100 MHz that produces a S/N of 5 in a high-frequency detector with a broadband sensitivity of 10^-22/ √( Hz) as proposed in <cit.> would produce an audio-band S/N of greater 1000 in Advanced LIGO.
On the similar grounds, Ref. <cit.> points out that if the observation in <cit.> at the frequency of 5.5 MHz came from a GW, the LIGO/Virgo detectors would have registered this signal with an S/N of greater than 10^6 in the audio band.
The current concept for measuring gravitational waves in the audio band therefore provides a considerable sensitivity to MHz and GHz gravitational waves, both directly and indirectly.
On the other hand, the proposed GW amplitudes in these frequency ranges are extremely weak and the potential sources of them are not widely accepted to exist.
Overall, this highlights the necessity of re-evaluating whether it is essential to invest in the development of novel detectors for the high-frequency range.
Data availability —
The data that support the plots within this paper and other findings of this study are available from the corresponding author upon reasonable request.
Code availability —
The code that supports the findings of this study is available from the corresponding authors upon reasonable request.
10
url<#>1urlprefixURL
Weiss1972
authorWeiss, R.
titleElectronically Coupled Broadband Gravitational
Antenna.
journalQuarterly Progress Report, Research
Laboratory of Electronics (MIT) volume105,
pages54 (year1972).
Forward1967
authorForward, R. L. & authorBerman, D.
titleGravitational-Radiation Detection Range for Binary
Stellar Systems.
journalPhysical Review Letters
volume18, pages1071–1074
(year1967).
<https://link.aps.org/doi/10.1103/PhysRevLett.18.1071>.
Ostriker1969
authorOstriker, J. P. & authorGunn, J. E.
titleOn the nature of pulsars. I. theory.
journalThe Astrophysical Journal
volume157, pages1395 (year1969).
Rees1973
authorRees, M. J.
titleAstrophysical aspects of gravitational waves.
journalAnnals of the New York Academy of Sciences
volume224, pages118–124
(year1973).
Epstein1975
authorEpstein, R. & authorWagoner, R. V.
titlePost-Newtonian generation of gravitational waves.
journalThe Astrophysical Journal
volume197, pages717–723
(year1975).
Thorne1980
authorThorne, K. S.
titleGravitational-wave research: Current status and
future prospects.
journalReviews of Modern Physics
volume52, pages285–297
(year1980).
<https://link.aps.org/doi/10.1103/RevModPhys.52.285>.
Thorne1987
authorThorne, K. S.
titleGravitational Radiation.
In editorIsrael, S. W. H. & editorW. (eds.)
booktitle300 Years of Gravitation,
pages330–458. (publisherCambridge University Press,
year1987).
Schutz1989
authorSchutz, B. F.
titleGravitational Radiation.
journalAnnals of the New York Academy of Sciences
volume571, pages27 (year1989).
Willke2002
authorWillke, B. et al.
titleThe GEO 600 gravitational wave detector.
journalClassical and Quantum Gravity
volume19, pages1377 (year2002).
Abramovici1992
authorAbramovici, A. et al.
titleLIGO: The Laser Interferometer Gravitational Wave
Observatory.
journalScience volume256,
pages325 (year1992).
Uchiyama1998
authorUchiyama, T. et al.
titleCryogenic cooling of a sapphire mirror-suspension
for interferometric gravitational wave detectors.
journalPhysics Letters A
volume242, pages211–214
(year1998).
<https://linkinghub.elsevier.com/retrieve/pii/S0375960198002059>.
Bradaschia1990
authorBradaschia, C. et al.
titleThe VIRGO Project: A wide band antenna for
gravitational wave detection.
journalNuclear Instruments and Methods in Physics
Research Section A: Accelerators, Spectrometers, Detectors and Associated
Equipment volume289, pages518–525
(year1990).
<https://linkinghub.elsevier.com/retrieve/pii/016890029091525G>.
GW150914
authorAbbott et al., B. P.
titleObservation of Gravitational Waves from a Binary
Black Hole Merger.
journalPhysical Review Letters
volume116, pages061102
(year2016).
<https://link.aps.org/doi/10.1103/PhysRevLett.116.061102>.
1602.03837.
GWo3b2023
authorAbbott et al., R.
titleGWTC-3: Compact Binary Coalescences Observed by LIGO
and Virgo during the Second Part of the Third Observing Run.
journalPhysical Review X
volume13, pages041039
(year2023).
<https://doi.org/10.1103/PhysRevX.13.041039
https://link.aps.org/doi/10.1103/PhysRevX.13.041039>.
2111.03606.
Danzmann1995
authorDanzmann, K.
titleLISA.
journalAnnals of the New York Academy of Sciences
volume759, pages481–484
(year1995).
<https://nyaspubs.onlinelibrary.wiley.com/doi/10.1111/j.1749-6632.1995.tb17590.x>.
Moore2015
authorMoore, C. J., authorTaylor, S. R. &
authorGair, J. R.
titleEstimating the sensitivity of pulsar timing
arrays.
journalClassical and Quantum Gravity
volume32, pages055004
(year2015).
<https://iopscience.iop.org/article/10.1088/0264-9381/32/5/055004>.
1406.5199.
Agazie2023
authorAgazie et al., G.
titleThe NANOGrav 15 yr Data Set: Evidence for a
Gravitational-wave Background.
journalThe Astrophysical Journal Letters
volume951, pagesL8 (year2023).
<https://iopscience.iop.org/article/10.3847/2041-8213/acdac6>.
2306.16213.
Grishchuk1975
authorGrishchuk, L.
titleAmplification of gravitational waves in an isotropic
universe.
journalSoviet Journal of Experimental and
Theoretical Physics volume40, pages409
(year1975).
Caprini2016
authorCaprini, C. et al.
titleScience with the space-based interferometer eLISA.
II: gravitational waves from cosmological phase transitions.
journalJournal of Cosmology and Astroparticle
Physics volume2016, pages001–001
(year2016).
<http://stacks.iop.org/1475-7516/2016/i=04/a=001?key=crossref.0814b6077a33a20a6c679242cd040e24>.
1512.06239.
Blanco-Pillado2017
authorBlanco-Pillado, J. J. & authorOlum, K. D.
titleStochastic gravitational wave background from
smoothed cosmic string loops.
journalPhysical Review D
volume96, pages104046
(year2017).
<https://link.aps.org/doi/10.1103/PhysRevD.96.104046>.
1709.02693.
Caprini2018
authorCaprini, C. & authorFigueroa, D. G.
titleCosmological backgrounds of gravitational waves.
journalClassical and Quantum Gravity
volume35, pagesaac608
(year2018).
<https://doi.org/10.1088/1361-6382/aac608>.
1801.04268.
Aggarwal2021
authorAggarwal, N. et al.
titleChallenges and opportunities of gravitational-wave
searches at MHz to GHz frequencies.
journalLiving Reviews in Relativity
volume24, pages4 (year2021).
<https://link.springer.com/10.1007/s41114-021-00032-5>.
2011.12414.
Christodoulou1991
authorChristodoulou, D.
titleNonlinear nature of gravitation and
gravitational-wave experiments.
journalPhysical Review Letters
volume67, pages1486–1489
(year1991).
<https://link.aps.org/doi/10.1103/PhysRevLett.67.1486>.
Favata2010
authorFavata, M.
titleThe gravitational-wave memory effect.
journalClassical and Quantum Gravity
volume27, pages084036
(year2010).
<https://iopscience.iop.org/article/10.1088/0264-9381/27/8/084036>.
Rakhmanov2002
authorRakhmanov, M., authorSavage, R.,
authorReitze, D. & authorTanner, D.
titleDynamic resonance of light in Fabry–Perot
cavities.
journalPhysics Letters A
volume305, pages239–244
(year2002).
<https://linkinghub.elsevier.com/retrieve/pii/S037596010201469X>.
Rakhmanov2006TLIGO
authorRakhmanov, M.
titleResponse of LIGO to Gravitational Waves at High
Frequencies and in the Vicinity of the FSR ( 37 . 5 kHz ).
journalLIGO Technical Note T060237-00
pages1 – 16 (year2006).
Markowitz2003TLIGO
authorMarkowitz, J., authorSavage, R. &
authorSchwinberg, P.
titleDevelopment of a Readout Scheme for High Frequency
Gravitational Waves.
journalLIGO Technical Note T030186-00-W
pages1–15 (year2003).
Elliott2005
authorElliott, H.
titleAnalysis of the Frequency Dependence of LIGO
Directional Sensitivity ( Antenna Pattern ) and Implications for Detector
Calibration.
journalLIGO Technical Note T050136-00-W
pages1–27 (year2005).
Akutsu2019
authorAkutsu et al., T.
titleKAGRA: 2.5 generation interferometric gravitational
wave detector.
journalNature Astronomy
volume3, pages35–40 (year2019).
<https://www.nature.com/articles/s41550-018-0658-y>.
1811.08079.
rakhmanov2005response
authorRakhmanov, M.
titleResponse of test masses to gravitational waves in the
local lorentz gauge.
journalPhysical Review D
volume71, pages084003
(year2005).
Rakhmanov2008
authorRakhmanov, M., authorRomano, J. &
authorWhelan, J. T.
titleHigh-frequency corrections to the detector response
and their effect on searches for gravitational waves.
journalClassical and Quantum Gravity
volume25, pages184017
(year2008).
Rakhmanov2009
authorRakhmanov, M.
titleOn the round-trip time for a photon propagating in
the field of a plane gravitational wave.
journalClassical and Quantum Gravity
volume26, pages155010
(year2009).
Essick2017
authorEssick, R., authorVitale, S. &
authorEvans, M.
titleFrequency-dependent responses in third generation
gravitational-wave detectors.
journalPhysical Review D
volume96, pages084004
(year2017).
Schnabel2010
authorSchnabel, R., authorMavalvala, N.,
authorMcClelland, D. E. & authorLam, P. K.
titleQuantum metrology for gravitational wave
astronomy.
journalNature communications
volume1, pages121 (year2010).
<http://www.ncbi.nlm.nih.gov/pubmed/21081919>.
Schnabel2017
authorSchnabel, R.
titleSqueezed states of light and their applications in
laser interferometers.
journalPhysics Reports
volume684, pages1–51
(year2017).
<http://linkinghub.elsevier.com/retrieve/pii/S0370157317300595>.
LSC2011
authorAbadie et al., J.
titleA gravitational wave observatory operating beyond
the quantum shot-noise limit.
journalNature Physics
volume7, pages962–965
(year2011).
<http://arxiv.org/abs/1109.2295
http://www.nature.com/doifinder/10.1038/nphys2083>.
Lough2021
authorLough, J. et al.
titleFirst Demonstration of 6 dB Quantum Noise Reduction
in a Kilometer Scale Gravitational Wave Observatory.
journalPhysical Review Letters
volume126, pages041102
(year2021).
<https://doi.org/10.1103/PhysRevLett.126.041102
https://link.aps.org/doi/10.1103/PhysRevLett.126.041102>.
2005.10292.
Vahlbruch2010
authorVahlbruch, H. et al.
titleThe GEO 600 squeezed light source.
journalClassical and Quantum Gravity
volume27, pages084027
(year2010).
<http://stacks.iop.org/0264-9381/27/i=8/a=084027?key=crossref.b3463c93b9c8ddc9d2bc372a9edfff0b>.
Aasi2015a
authorAasi et al., J.
titleAdvanced LIGO.
journalClassical and Quantum Gravity
volume32, pages074001
(year2015).
<https://iopscience.iop.org/article/10.1088/0264-9381/32/7/074001
http://arxiv.org/abs/1411.4547
http://stacks.iop.org/0264-9381/32/i=7/a=074001?key=crossref.20895763c84bce3f8929251031b2475c>.
1411.4547.
Servant2024
authorServant, G. & authorSimakachorn, P.
titleUltrahigh frequency primordial gravitational waves
beyond the kHz: The case of cosmic strings.
journalPhysical Review D
volume109, pages103538
(year2024).
<https://link.aps.org/doi/10.1103/PhysRevD.109.103538>.
McNeill2017
authorMcNeill, L. O., authorThrane, E. &
authorLasky, P. D.
titleDetecting Gravitational Wave Memory without Parent
Signals.
journalPhysical Review Letters
volume118, pages181103
(year2017).
<http://link.aps.org/doi/10.1103/PhysRevLett.118.181103>.
Goryachev2014
authorGoryachev, M. & authorTobar, M. E.
titleGravitational wave detection with high frequency
phonon trapping acoustic cavities.
journalPhysical Review D - Particles, Fields,
Gravitation and Cosmology volume90,
pages1–9 (year2014).
1410.2334.
Lasky2021
authorLasky, P. D. & authorThrane, E.
titleDid Goryachev et al. detect megahertz gravitational
waves?
journalPhysical Review D
volume104, pages103017
(year2021).
<https://doi.org/10.1103/PhysRevD.104.103017
https://link.aps.org/doi/10.1103/PhysRevD.104.103017>.
2110.13319.
Goryachev2021
authorGoryachev, M. et al.
titleRare Events Detected with a Bulk Acoustic Wave High
Frequency Gravitational Wave Antenna.
journalPhysical Review Letters
volume127, pages071102
(year2021).
<https://doi.org/10.1103/PhysRevLett.127.071102
https://link.aps.org/doi/10.1103/PhysRevLett.127.071102>.
2102.05859.
Acknowledgments —
We thank Rick Savage and Paul Lasky for fruitful discussion.
Author Contributions —
R.S. initiated the research, wrote the main manuscript text and prepared Fig. 1. M.K. developed the theoretical description, contributed to the manuscript text and provided the data for the remaining figures. All authors reviewed the manuscript.
Competing interests —
The authors declare no competing interests.
Additional information —
Correspondence and requests for materials should be addressed to R.S.
|
http://arxiv.org/abs/2409.03420v1 | 20240905110900 | mPLUG-DocOwl2: High-resolution Compressing for OCR-free Multi-page Document Understanding | [
"Anwen Hu",
"Haiyang Xu",
"Liang Zhang",
"Jiabo Ye",
"Ming Yan",
"Ji Zhang",
"Qin Jin",
"Fei Huang",
"Jingren Zhou"
] | cs.CV | [
"cs.CV"
] |
Automatic occlusion removal from 3D maps for maritime situational awareness
Felix Sattler1, Borja Carrillo Perez1, Maurice Stephan1, Sarah Barnes1
1German Aerospace Center (DLR), Institute for the Protection of Maritime Infrastructures,
Bremerhaven, Germany
Received Month dd, yyyy; accepted Month dd, yyyy
===============================================================================================================================================================================================
[1]Corresponding author
Anwen Hu1 Haiyang Xu1[1] Liang Zhang2 Jiabo Ye1 Ming Yan1[1]
Ji Zhang1 Qin Jin2 Fei Huang1 Jingren Zhou1
1Alibaba Group 2Renmin University of China
{huanwen.haw, shuofeng.xhy, ym119608}@alibaba-inc.com
<https://github.com/X-PLUG/mPLUG-DocOwl>
§ ABSTRACT
Multimodel Large Language Models(MLLMs) have achieved promising OCR-free Document Understanding performance by increasing the supported resolution of document images. However, this comes at the cost of generating thousands of visual tokens for a single document image, leading to excessive GPU memory and slower inference times, particularly in multi-page document comprehension. In this work, to address these challenges,
we propose a module to compress each high-resolution document image into 324 tokens, guided by low-resolution global visual features. With this compression module, to strengthen multi-page document comprehension ability and balance both token efficiency and question-answering performance, we develop the under a three-stage training framework: Single-image Pretraining, Multi-image Continue-pretraining, and Multi-task Finetuning. sets a new state-of-the-art across multi-page document understanding benchmarks and reduces first token latency by more than 50%, demonstrating advanced capabilities in multi-page questioning answering, explanation with evidence pages, and cross-page structure understanding. Additionally, compared to single-image MLLMs trained on similar data, our achieves comparable single-page understanding performance with less than 20% of the visual tokens. Our codes, models, and data are publicly available at <https://github.com/X-PLUG/mPLUG-DocOwl/tree/main/DocOwl2>.
§ INTRODUCTION
Understanding a multi-page document or news video is common in human daily life. To tackle such scenarios, Multimodal Large Language Models (MLLMs) <cit.> should be equipped with the ability to understand multiple images with rich visually-situated text information.
Different from natural images mainly comprising of objects, comprehending document images asks for a more fine-grained perception to recognize all texts. To tackle high-resolution document images, some works <cit.> propose to add an additional high-resolution encoder while more works <cit.> choose to crop a high-resolution image to low-resolution sub-images and let the Large Language Model to understand their relationship. By increasing the cropping number, the latter achieves better performance of OCR-free document understanding but also results in too many visual tokens for only 1 document image, e.g., InternVL 2 <cit.> costs a average of 3k visual tokens on single-page document understanding benchmark DocVQA <cit.>.
As shown in <ref>(a), such long visual tokens not only result in long inference time but also occupy too much GPU memory, making it difficult to understand a complete document or video and greatly limiting their application scenarios. Inspired by Natual Language Processing work <cit.> which summarizes a textual paragraph/document into fewer tokens and maintains most semantics, we argue that visual tokens of document images can also be further compressed while maintaining both layout and most textual information.
Existing compressing architecture in MLLMs are hard to balance information retention and token efficiency during document image encoding. As shown in <ref>(a), independently compressing each crop of a document image <cit.> could reduce visual tokens of each sub-image but still results in a long sequence of visual tokens after concatenating all sub-images. Leveraging learnable queries <cit.> or selected tokens <cit.> as compressing guidance could produce an identical length of tokens for any resolution but overlook the overall layout information, as shown in <ref>(b). Layout-aware guidance is important for compressing visual features of document images because texts within a layout region are semantic-coherent and easier to summarize. For example, in a two-column paper, texts belonging to the `Related Work' section are difficult to summarize with texts on the same line but belonging to the `Method' section.
In this work, as shown in <ref>(c), we propose a layout-aware compressing architecture based on cross-attention to compress document images into fewer tokens and achieve better performance than existing compressing methods. Considering that a global low-resolution image can well capture the overall layout information, we utilize visual features of a global low-resolution image as the compressing guidance (query). Each visual feature in the global feature map just captures the layout information of partial regions. Therefore, each query attending to all high-resolution features will not only make information compression more difficult but also increase computation complexity. To summarize text information within a layout region, for each query from the global feature map, a group of high-resolution features with identical relative positions in the raw image is collected as compressing objects, sometimes spanning multiple sub-images. Besides, since the vision-to-text (V2T) module of MLLMs could convert visual features into textual feature space, we argue that compressing visual features after the vision-to-text module could better maintain textual semantics in document images. Therefore, based on the architecture of DocOwl 1.5 <cit.>, we propose mPLUG- by placing the afther its V2T module: H-Reducer. To take full advantage of the compressing method, our model is trained with a three-stage framework: Single-image Pretraining, Multi-image Continue-Pretraining, and Multi-task Finetuning to support both single-image and multi-image/frame understanding. Our experiments on single-page and multi-page document benchmarks demonstrate the good balance of OCR-free document understanding performance and token efficiency of . We perform sufficient ablation studies to validate the superiority of our and the benefits of the three-stage training framework for both single-page and multi-page understanding performance.
Our contributions in this work are three-fold:
* We propose a novel compressing architecture, namely , to greatly reduce visual tokens of high-resolution document images. Compared with existing compressing methods, our method achieves better OCR-free single-image document Understanding performance with fewer visual tokens.
* achieves state-of-the-art performance on Multi-page Document understanding benchmarks with 50% First Token Latency.
* Compared with state-of-the-art MLLMs with similar model size and training data, achieves comparable performance with 20% visual tokens on 10 single-image document benchmarks.
§ RELATED WORK
§.§ OCR-free Visual Document Understanding
Visual Document Understanding aims to comprehend images with rich text information, including scans of document pages <cit.>, infographics <cit.>, charts <cit.>, tables images <cit.>, webpage screenshots <cit.> and natural images with scene texts <cit.>. Recently, many Multimodal Large Language Models have been proposed to perform visual document understanding in an OCR-free manner. mPLUG-DocOwl <cit.> and UReader <cit.> first propose to unify different tasks across 5 types of document images in the seq-to-seq format. To encode rich text information in high-resolution images, UReader <cit.> proposes a Shape-adaptive Cropping Module to cut the raw image into multiple low-resolution sub-images and utilizes an identical low-resolution encoder to encode both sub-images and a global image. Monkey <cit.> proposes to employ a sliding window to partition high-resolution images and a resampler to reduce redundant information of each sub-image. mPLUG-DocOwl1.5 <cit.> increases the basic resolution of the low-resolution encoder and replaces the Visual Abstractor <cit.> with 1 simple convolution layer to better maintain the structure information. DocPedia <cit.> directly processes high-resolution images in the frequency domain. CogAgent <cit.> proposes to utilize a high-resolution encoder to encode high-resolution visual features and a low-resolution encoder to encode low-resolution global features. Series work of InternLM-XComposer <cit.> and InternVL <cit.> further optimize the cropping method or increase the cropping number and greatly improves the OCR-free Document Understanding performance. These works achieve promising performance but suffer from too many visual tokens for a high-resolution image (always 1k tokens for a common A4-sized document page), which hinders the development of OCR-free multi-page document understanding.
§.§ Visual Feature Compressing
Reducing visual tokens of a single image enables a Multimodal Large Language Model with limited maximum sequence length to leverage more images as contexts to perform complex multimodal tasks, such as video understanding, embodied interaction, or multi-page document understanding. There have been some architectures proposed for compressing visual features of general images with fewer learnable queries, such as the Resampler <cit.>, Abstractor <cit.> and Q-former <cit.>. Randomly initialized Learnable queries can ensemble object information in general images but is hard to summarize rich text information in high-resolution document images. As a compromise solution, TokenPacker <cit.> proposes to compress each sub-image with its downsampled visual features as the query to perform cross-attention. TokenPacker just reduces each sub-image's visual tokens, thus still creates more than 1k visual tokens when processing high-resolution document images. TextMonkey <cit.> first filters valuable visual tokens and then uses them as guidance to aggregate all visual tokens. Due to that valuable visual tokens are selected by measuring the token similarity, visual information of partial regions may not be covered and thus not well compressed during following cross-attention. In this work, our leverages visual features from the row-resolution global images as the query, the ensembled feature map of sub-images as key and value. This not only produces a fixed number of visual tokens for images of any resolution but also covers all areas during compression. Compared to Mini-Gemini <cit.> which compresses general visual features, there are major two differences with our . Firstly, we make full use of global visual features and sub-image features produced by an identical low-resolution vision encoder and don't need to add an extra high-resolution encoder. Secondly, for better summarizing textual information in document images, our cross-attention is applied based on visual features that have been aligned with textual features of LLM. We argue that directly compressing outputs of the vision encoder will lose more visually situated textual information while comprising features aligned with LLM is like summarizing texts <cit.> and can better maintain textual semantics in document images. Fair comparisons are performed in our experiments to support our hypothesis.
§ MPLUG-
As shown in <ref>, leverages a Shape-adaptive Cropping Module and a low-resolution vision encoder to encode high-resolution document images. Then, it utilizes a vision-to-text module to ensemble horizontal visual features and align the dimension of vision features with Large Language Models. Furthermore, a high-resolution compressor is designed to greatly reduce the number of visual features while maintain most visual information. Finally, compressed visual tokens of multiple images/pages are concatenated with text instructions and input to a Large Language Model for multimodal understanding.
§.§ High Resolution Vision Encoding
Following UReader <cit.> and DocOwl 1.5 <cit.>, utilizes a parameter-free Shape-adaptive Cropping Module to preprocess high-resolution images. Concretely, it cuts each high-resolution image I into R × C size-fixed sub-images I^s = {I^s_xy}, 1 ≤ x ≤ R, 1 ≤ y ≤ C, where cropping rows R and columns C are flexibly decided based on the raw resolution of I. Besides, to maintain the overall layout information, the raw image is also directly resized to a global image I^g. Both the global image and sub-images are sized H × W.
After the cropping module, a low-resolution transformer-based vision encoder ViT <cit.> is utilized to independently extract vision features of each sub-image and the global image as follows:
V^g = ViT(I^g)
V^s_xy = ViT(I^s_xy), 1 ≤ x ≤ R, 1 ≤ y ≤ C,
where both V^g and V^s_xy are visual features with the shape of h × w × d, d is the feature dimension and w, h are the width and height of the feature map.
Following DocOwl 1.5, after the ViT, for each sub-image or global image, we apply a vision-to-text module to ensemble horizontal 4 features by a convolution layer and align the feature dimension with the Large Language Model with a fully connected layer. The calculation of is represented as follows:
V̂ = FC( Conv(V)), V ∈{V^g, V^s_xy}, 1 ≤ x ≤ R, 1 ≤ y ≤ C,
where the shape of the visual feature map V̂ is h ×w/4×d̂, d̂ is the dimension of hidden states of the large language model.
§.§ High Resolution Full-Compressing
Although the has reduced the visual tokens of each sub-image or global image to 1/4 the length of original visual features, the token length of high-resolution images is still too long to perform multi-page/image joint understanding for Large Language Models. For example, the token
length of 1 high-resolution image in DocOwl 1.5 <cit.> is (R × C+1) × h ×w/4, which will be 2,560 when the raw resolution is 1,344 × 1,344.
In Natural Language Processing, a sentence/paragraph/document of text tokens can be compressed into fewer summary vectors while maintaining most semantics <cit.>. Besides, since visual features have been aligned with the textual feature space of large language models, the visual tokens of document images after the vision-to-text module can also be treated as textual tokens encoding different parts of textual information in the image. Thus, taking into account these two points, in this work, we argue that visually situated textual information of document images can also be further compressed into fewer tokens, especially after the vision-to-text alignment.
Ideally, the compression of visual texts should be based on their layout. Texts from the same layout region (e.g., a title/paragraph region) are more appropriate to be fused into an identical token. After the vision-to-text module , the global visual feature V̂^g mainly encodes the overall text layout information while visual features of sub-images {V̂^s_xy} capture detailed textual information. Besides, due to both the global image and cropped sub-images come from an identical image, there is a clear mapping between the visual tokens of V̂^g and {V̂^s_xy}. As shown in <ref>, each visual token in V̂^g can be aligned with R × C visual tokens in {V̂^s_xy}. Therefore, in this work, with global visual features as query, and the visual features from sub-images as key and value, we propose to utilize cross-attention to ensemble textual semantics and greatly reduce the number of visual tokens of a high-resolution image to the one of a low-resolution global image.
Concretely, we first re-organize feature maps of cropping images ({V̂^s_xy}, 1 ≤ x ≤ R, 1 ≤ y ≤ C) to a complete feature map V̂^s according to their positions in the raw high-resolution image. Then, for each visual token in the feature map V̂^g of the global image, we collect its corresponding R × C visual tokens from V̂^s as the key and value, the cross-attention layer in this compressor is calculated as follows:
v̂^g_ij∈V̂^g, 1 ≤ i ≤ h, 1 ≤ j ≤ w/4
v̂^s_ij = [v̂^s_i^'j^'] ⊂V̂^s, (i-1)R+1 ≤ i^'≤ iR, (j-1)C+1 ≤ j^'≤ jC
v̅_ij = softmax(W^qv̂^g_ijW^kv̂^s_ij/√(d_k))W^vv̂^s_ij + v̂^g_ij
where v̂^g_ij is a visual token from the feature map of the global image, v̂^s_ij are visual tokens from the re-organized feature map of cropping images. v̂^g_ij and v̂^s_ij correspond to the same area in the raw image. W^q,W^k,W^v are learnable projection matrics.
After high-resolution compressing, the compressed feature map of each image is organized into a sequence V̅=[v̅_1,v̅_2,...,v̅_h ×w/4] for subsequent understanding of the large language model.
§.§ Multi-image Modeling with LLM
Through the high-resolution compressing, the number of visual tokens for each high-resolution image is reduced from (R × C+1) × h ×w/4 to h ×w/4. Such efficient vision encoding allows joint understanding of multiple document images with Large Language Models. To help the LLM better distinguish visual features from different images and understand the ordinal number of images, we add a textual ordinal token before the visual features of each image, where x is the ordinal number. Overall, the decoding of the decoder for multiple images is as follows:
Y = LLM([P_0;V̅_0; P_1;V̅_1, ...,P_n; V̅_n;T])
where [;] means the concatenation operation, n is the number of images, P_x, 1 ≤ x ≤ n is the textual embedding of the ordinal token , V̅_x is the visual features for each image, T is the textual instruction and Y is the predicted answer.
§.§ Model Training
is trained with three stages: Single-image Pre-training, Multi-image Continue Pretraining, and Multi-task Finetuning.
At the first stage, to ensure the compressed visual tokens can encode most visual information, especially visually situated texts, we first perform Unifed Structure Learning as DocOwl 1.5 with the dataset DocStruct4M <cit.>, which covers the learning of struct-aware document parsing, table parsing, chart parsing and natural image parsing of a single image.
After Single-image Pretraining, to empower our model with the ability to correlate multiple images, we further perform Multi-image Continue Pretraing with a struct-aware multi-page document parsing dataset . With partial documents from two datasets of PixParse[<https://huggingface.co/datasets/pixparse/idl-wds>][<https://huggingface.co/datasets/pixparse/pdfa-eng-wds>], we design two symmetrical tasks of multi-image understanding: Multi-page Text Parsing and Multi-page Text Lookup. Given successive page images in a document, the Multi-page Text Parsing instructs the model to parse texts of specified one or two pages, such as . As for the Multi-page Text Lookup task, with texts from 1-2 pages as input, the model is required to predict the concrete ordinal number of images containing these texts, for example, .
Besides , during this stage, we also randomly chose 0.5M samples from DocStruct4M to avoid the catastrophic forgetting of structure parsing across different types of images.
Finally, we ensemble single-image and multi-image instruction tuning datasets to perform multi-task tuning. We leverage DocDownstream-1.0 <cit.> and DocReason25K <cit.> as single-image datasets. DocDownstream-1.0 is an ensembled dataset comprising of DocVQA <cit.>, InfoVQA <cit.>, DeepForm <cit.>, KLC <cit.>, WTQ <cit.>, TabFact <cit.>, ChartQA <cit.>, TextVQA <cit.>, TextCaps <cit.> and VisualMRC <cit.>. DocReason25K is a question-answering dataset with detailed explanations.
As for multi-image understanding, we ensemble 2 document datasets, MP-DocVQA <cit.> and DUDE <cit.>, and 1 news video dataset NewsVideoQA <cit.> as concise question-answering datasets. MP-DocVQA contains 46k question-answering pairs on 60k page images scanned from 6k industry documents with rich tables, diagrams, pictures, and both handwritten and printed texts. DUDE covers more domains of documents, including medical, legal, technical, financial, etc. It contains 41k question-answering pairs on 5k documents. NewsVideoQA collects news videos with rich visually-situated texts from diverse English news channels around the world, such as BBC, CNN, etc. It contains 8k question-answering pairs framed on 3k videos. Besides, to trigger the ability of detailed explanations with evidence pages, we built MP-DocReason51K based on DocReason25K. Concretely, for each single-image sample from DocReason25K, we construct two multi-image samples with noisy images randomly chosen from the same or different categories. After randomly inserting the evidence image into noisy images, we add an extra evidence description (e.g., ) into the raw detailed explanation to get the target of multi-image samples. Most question-answering samples just focus on 1-2 pages of a document, to further strengthen the ability of a comprehensive understanding of a document, we leverage a small part of annotations from DocGenome <cit.> to construct text sequences in the JSON format, which represents the hierarchical structure of a scientific paper and partial detailed texts.
The detailed statistics of training datasets of are shown in <ref>.
§ EXPERIMENTS
§.§ Implementation Details
The maximum number of crops is set to 12. The resolution of each sub-image or the global image is 504x504. The comprises of 2 layers of cross attention. Initialized from mPLUG-Owl2 <cit.>, the vision encoder (ViT/L-14 <cit.>), H-Reducer and are trained during the Sinlge-image Pretraining. Besides, the main parameters of the Large Language Model <cit.> are frozen while a Modality Adaptive Module (MAM) <cit.> used to distinguish visual and textual features in the LLM is tuned. The first stage is trained 12k steps with a batch size of 1,024 and the learning rate set as 1e-4. During the Multi-image Continue-pretraining, the vision encoder is further frozen and the H-Reducer, and MAM is tuned. The second stage is trained 2.4k steps with a batch size of 1,024 and the learning rate set as 2e-5. At the final Multi-task Finetuning stage, all parameters except the vision encoder are optimized. The batch size, training step, and learning rate at this stage are set as 256, 9k, and 2e-5, respectively.
§.§ Main Results
We compare with state-of-the-art Multimodal Large Language Models on 10 single-image document understanding benchmarks, 2 Multi-page document Understanding benchmarks, and 1 text-rich video understanding benchmark. Both question-answering performance and the First Token Latency (seconds) are considered to show the effectiveness of our model.
§.§.§ Single-image Document Understanding
For Single-image Document Understanding, we divide baselines into three groups: (a) models without Large Language Models as decoders <cit.>, (b) Multimodal LLMs <cit.> with an average number of visual tokens over 1k for a single document image and (c) Multimodal LLMs <cit.> with an average number of visual tokens less than 1k. As shown in <ref>, although specifically fine-tuned on each downstream dataset, Donut <cit.> or PixsStruct <cit.> are not as good as Multimodal LLMs, showing the potential of MLLMs for generalized OCR-free document understanding. Compared with MLLMs with 1k visual tokens, our achieves better or comparable performance on 10 benchmarks. Especially, with fewer visual tokens, our model outperforms both TextMonkey <cit.> and TokenPacker <cit.> which also aim to compress visual tokens, showing that our `Full Guidance & Full Candidate' architecture is better at summarizing and maintaining textual information in high-resolution document images. Besides, compared with state-of-the-art MLLMs with 1k visual tokens, achieves 80% performance on 7/10 benchmarks while with 20% visual tokens. <ref> visualizes the comparison with SOTA in terms of question-answering performance and the number of visual tokens.
Furthermore, we compare the First Token Latency (seconds) on the 3 most frequently compared datasets, representing documents, charts, and natural images. As shown in <ref>, the far greater number of visual tokens enable InternVL 2 <cit.> and IXC 2.5 <cit.> to achieve better performance but also result in higher inference time. Considering the model architecture and training data, it's most fair to compare with DocOwl 1.5. After adding the , with similar training data of OCR learning, achieves 98% performance of DocOwl 1.5 while reducing 50% First Token Latency with just 20% visual tokens. This validates the effectiveness of our compressor for compressing visually-situated text information on the most common documents, charts, and natural images.
§.§.§ Multi-page/Video Document Understanding
For Multi-page Document Understanding and Text-rich Video Understanding benchmarks, we choose recently proposed Multimodal LLMs <cit.> with multi-page OCR-free document understanding abilities and can be fed into more than 10 images under a single A100-80G as baselines. As shown in <ref>, with fewer visual tokens for a single image/frame, our model achieve better question-answering performance and much less First Token Latency, validating the good balance of between the OCR-free document understanding performance and token efficiency.
§.§ Ablation Study
We perform sufficient ablation studies to show the effectiveness of the architecture of and the three-stage training strategy of .
§.§.§ Compressor Architecture
To validate the effectiveness of our , we compare different compressing architectures with an identical training pipeline of Single-image Pretraing and Single-image Document Understanding Finetuning, keeping both training data and training setting consistent.
As shown in <ref>, compared with CAbstractor <cit.>, Resampler <cit.> achieves worse document understanding performance (r2 vs r1). This shows that due to no prior knowledge, such as spatial relationship, is leveraged as compressing guidance, utilizing queries learned from scratch to compress rich visually-situated text information is more challenging than simple adaptive mean pooling. Our outperforms CAbstractor (r3 vs r2), validating that leveraging global visual features as layout-aware guidance can better distinguish the information density of each fine-grained visual feature and therefore maintain more visually-situated text information.
Instead of placing the compressor after the vision-to-text module H-Reducer, we also try inserting it between the vision encoder and the vision-to-text module. Such a setting results in performance decreases across three datasets (r4 vs r3), validating our hypothesis that compressing features after the vision-to-text module is like summarizing textual features and can maintain more textual semantics while compressing visual features after the visual encoder loses more visually situated text information. Besides, without aligning each query token in the global feature map with R × C fine-grained visual tokens from the re-organized feature map to perform attention within a group as <ref>, we try utilizing each query token to attend all visual tokens of sub-images. Such complete attention not only brings higher computational complexity but also causes performance decreases (r5 vs r3), showing that the positional correspondence between the global visual map and the re-organized fine-grained visual map is a reliable prior knowledge for compressing visual features efficiently. Furthermore, directly performing mean pooling on each group of R × C fine-grained visual features underperforms utilizing global visual features as the query to perform cross-attention (r6 vs r3). This also proves the importance of reliable guidance during compressing.
Compared with 2 layers of cross-attention, decreasing cross-attention layers bring a slight performance increase on DocVQA <cit.> but more performance decrease on WikiTablesQA (WTQ) <cit.> (r7 vs r3). Further increasing to 4 layers doesn't significantly improve performance (r8 vs r3). This shows that compressing high-resolution visual features doesn't require a deep neural network. Finally, increasing the maximum number of crops and the base resolution of the global image or each sub-image are two main strategies to increase the supported input resolution. Our experiments show that increasing the cropping number (r9 vs r3) or basic resolution (r10 vs r9) benefits the document understanding performance. Increasing basic resolution brings more improvement because of more visual tokens after compressing.
§.§.§ Training Strategy
is trained with three stages: Single-image Pretraining, Multi-image Continue-pretraining, and Multi-task Finetuning. <ref> shows the influence of each stage for OCR-free single-page and multi-page document understanding. With the Single-image Pretraining and Single-image finetuning (r1), the model achieves promising performance on single-page benchmark DocVQA and documents from MP-DocVQA with only 1 page. Although only trained with 1 image as the input, the model can also achieve around 50% accuracy when fed into 2-10 page images. However, the model struggles to understand documents with more than 10 pages, which greatly exceeds the number of input images during training and brings great difficulty in correlating images and finding answers. Performing Multi-image Fintuing could greatly improve the model's ability to understand multiple images (r2 vs r1). Furthermore, adding the Multi-image Continue-pretraining could also improve the question-answering performance on downstream datasets, especially for documents with more than 10 pages (r3 vs r2). This demonstrates that parsing texts of the specified page or judging which pages contain specified texts among multi-page documents is a basic ability for multi-page document understanding. Finally, by ensembling both single-image and multi-image instruction tuning sets to perform the Multi-task Finetuning (r4), achieves the best performance on both single-page and multi-page document benchmarks, showing the cross-improvement between single-image and multi-image comprehension.
§.§ Qualitative Results
As shown in <ref>, after the Multi-image Continue Pretraining stage, is able to locate the corresponding image of the given texts accurately. Besides, although representing each high-resolution image with just 324 tokens, is still capable of parsing detailed texts of specified two images, validating the promising OCR-free multi-page document understanding performance of . It also demonstrates our proposal that 324 tokens are enough to encode detailed text information in common A4-sized document pages and the effectiveness of our .
After the Multi-task Finetuning, given multiple images and a question, can give a simple answer first and then provide a detailed explanation with the evidence, as shown in <ref>. can comprehend not only page images rendered from PDF files (<ref>(c)) but also scan images of a document (<ref>(a-b)). When a question is unanswerable, can also tell and give corresponding reasons (<ref>(c)).
Besides multi-page documents, is also capable of understanding text-rich videos. As shown in <ref>, among similar frames within a video, can distinguish fine-grained textual differences, locate relevant frames, and give accurate answers.
§ CONCLUSION
In this work, we propose mPLUG-, a Multimodal Large Language Model with the ability of efficient OCR-free Multi-page Document Understanding. The novel architecture in compresses each high-resolution document image into 324 tokens through cross-attention with the global visual feature as guidance, and re-organized features of cropped images as keys and values. On single-image document understanding benchmarks, with fewer visual tokens, outperforms existing compressing methods and achieves comparable performance with SOTA MLLMs with similar training data. Besides, achieves OCR-free state-of-the-art performance on two multi-page document understanding benchmarks and 1 text-rich video understanding benchmark. Our experiments validate that thousands of visual tokens for 1 common A4-sized document page may be so redundant that too many computational resources are wasted. We hope could bring more researchers' attention to the balance of efficient representation of high-resolution images and OCR-free Document Understanding performance.
iclr2024_conference
|
http://arxiv.org/abs/2409.03644v1 | 20240905160211 | RealisHuman: A Two-Stage Approach for Refining Malformed Human Parts in Generated Images | [
"Benzhi Wang",
"Jingkai Zhou",
"Jingqi Bai",
"Yang Yang",
"Weihua Chen",
"Fan Wang",
"Zhen Lei"
] | cs.CV | [
"cs.CV"
] |
Gravitational waves from decaying sources in strong phase transitions
[
September 9, 2024
=====================================================================
[footnote]† Corresponding Authors
§ ABSTRACT
In recent years, diffusion models have revolutionized visual generation, outperforming traditional frameworks like Generative Adversarial Networks (GANs).
However, generating images of humans with realistic semantic parts, such as hands and faces, remains a significant challenge due to their intricate structural complexity. To address this issue, we propose a novel post-processing solution named RealisHuman. The RealisHuman framework operates in two stages. First, it generates realistic human parts, such as hands or faces, using the original malformed parts as references, ensuring consistent details with the original image. Second, it seamlessly integrates the rectified human parts back into their corresponding positions by repainting the surrounding areas to ensure smooth and realistic blending. The RealisHuman framework significantly enhances the realism of human generation, as demonstrated by notable improvements in both qualitative and quantitative metrics. Code is available at <https://github.com/Wangbenzhi/RealisHuman>.
§ INTRODUCTION
Diffusion models have emerged as a powerful approach in the field of visual generation, significantly surpassing traditional frameworks such as Generative Adversarial Networks (GANs) <cit.>. These models function as parameterized Markov chains, showcasing an exceptional capability to convert random noise into complex images through a sequential refinement process. Starting with noise, diffusion models progressively enhance the visual quality, ultimately producing high-fidelity representations. With ongoing technological advancements, diffusion models have shown substantial promise in image generation and various related tasks<cit.>.
Despite their remarkable performance in generating a diverse range of objects, diffusion-based models encounter significant challenges when reconstructing realistic human features, particularly faces and hands. The intricate structural complexity of these parts, coupled with the limited information preserved after VAE encoder downsampling <cit.>, often leads to incorrect hand structures or distorted faces.
As depicted in Fig.<ref>, these inaccuracies highlight the difficulties these models face in human image generation.
To address this issue, HandRefiner<cit.> proposed a lightweight post-processing solution that employs a conditional inpainting approach to correct malformed hands while preserving other image regions. Utilizing a hand mesh reconstruction model, HandRefiner ensures accurate finger counts and hand shapes, fitting the desired hand pose. By leveraging ControlNet modules, HandRefiner reintegrates correct hand information into the generated images, enhancing overall image quality. However, this method has two notable limitations. As illustrated in Fig.<ref>, HandRefiner often fails to maintain consistency in skin tone and texture due to missing reference information. It also struggles with reconstructing detailed hands when the regions are small. Additionally, it can introduce distortions in other areas, like the face, compromising the overall image integrity.
In this paper, we propose a novel post-processing solution named RealisHuman to address the challenge of refining malformed human parts. To ensure high-quality refinements in small regions, our method locates and crops the malformed areas, allowing us to concentrate on detailed local refinements. Compared to HandRefiner, our method is capable of refining various human parts, not just hands, while preserving intricate details such as skin tone and texture. This capability ensures that the refined parts are both realistic and consistent with the surrounding image. Additionally, our approach demonstrates strong generalization capabilities, effectively handling different styles of images, including cartoons, sketches, and so on. As shown in Fig.<ref>, our RealisHuman framework operates in two stages. In the first stage, our goal is to generate rectified human parts that preserve the consistent details of the original malformed parts. By using the malformed parts as references, we extract detailed information through the Part Detail Encoder and DINOv2, ensuring the preservation of fine-grained details and enhancing the overall realism of the generated parts. Additionally, we incorporate 3D pose estimation results extracted from the malformed parts to guide the generation of human part images, ensuring that the poses are both accurate and realistic. After obtaining the rectified human parts, the subsequent challenge is to seamlessly integrate them into the original local image. We address this as an inpainting problem. Initially, the rectified human parts are placed back into their original positions, and the surrounding areas are masked. We then train a model capable of seamlessly blending the human parts with the surrounding areas, ensuring a smooth transition and realistic integration. Finally, the refined human parts are pasted into the original image, completing the process of malformed human parts refinement. This approach not only corrects structural inaccuracies but also maintains visual coherence with the original image, providing a robust solution for human parts refinement in image generation tasks. The RealisHuman framework significantly enhances the realism of human generation, as validated by comprehensive experiments demonstrating improvements in both qualitative and quantitative measures.
Our contributions are summarized as follows:
* We propose a novel post-processing framework named RealisHuman to address the task of refining human parts in generated images. Our method maintains consistent details with the original image, effectively handles small part refinements, and demonstrates strong generalization across different image styles.
* We propose a novel two-stage local refinement paradigm, which can be extended to the refinement of other structurally fixed objects, such as distorted logos.
* The RealisHuman framework significantly enhances the realism of human generation, as evidenced by extensive experiments demonstrating enhancements in both qualitative and quantitative metrics.
§ RELATED WORK
Diffusion Model for Image Generation.
Recently, diffusion models have attracted a lot of attention because of their powerful generating ability and have become a hot research direction in the field of computer vision. These models have exhibited superior performance, surpassing conventional techniques due to their intrinsic capability to generate high-quality and diverse outputs. However, the high dimensionality of images introduces significant computational complexity. To address this, the Latent Diffusion Model (LDM) <cit.> was proposed. LDM performs denoising within a lower-dimensional latent space using a pre-trained autoencoder. This approach effectively balances computational efficiency with generative performance, representing a pivotal advancement in the scalability of diffusion-based image generation.
Despite these advancements, controlling the generative process of diffusion models remains a challenge, particularly when precise semantic adherence is required. Diffusion models have achieved great success in producing realistic images that adhere to the semantic content provided by encoding text inputs into latent vectors via pre-trained language models like CLIP <cit.>. However, relying solely on text descriptions for controlling the model is insufficient, especially when it comes to describing postures and actions <cit.>.
To enhance controllability and precision in generated imagery, researchers have explored the incorporation of additional control signals. ControlNet <cit.> employs a trainable duplicate of the Stable Diffusion (SD) encoder architecture to extract features from conditional inputs. Similarly, T2I-Adapter <cit.> utilizes lightweight, composable adapter blocks for feature extraction. These additional conditional layers have proven instrumental in improving the model's controllability under various conditions, such as pose, mask, and edge, thereby significantly influencing the direction of its output.
Realistic Human Image Generation.
Diffusion models have been extensively utilized for pose-conditioned human image synthesis tasks. Animate Anyone <cit.> proposes a novel network architecture, ReferenceNet, specifically designed as a symmetrical UNet structure to capture the spatial details of reference images. MagicAnimate <cit.> adopts a similar approach but utilizes a ControlNet specifically tailored for DensePose <cit.> inputs instead of the more commonly used OpenPose <cit.> keypoints, thereby offering more precise pose guidance. Champ <cit.> incorporates four distinct control signals simultaneously as conditions for guiding the image generation process, namely depth, normal, semantic, and skeleton, which are extracted from SMPL <cit.> models.
Despite the remarkable advancements in generating high-quality synthetic images of humans, a persistent challenge remains in the synthesis of hands. This is primarily due to the intricate nature of hand anatomy and the difficulty in accurately depicting hands using skeletal frameworks. Some approaches have begun to specifically focus on generating higher-quality hands. Diffusion-HPC <cit.> introduces a technique that employs depth maps of human bodies rendered from reconstructed human body meshes, utilizing conditional diffusion models to correct morphological abnormalities in generated human bodies. Similarly, HandRefiner <cit.> proposes a post-processing approach that utilizes a reconstructed hand mesh to provide essential information about hand shape and location. While these methods can be effective in addressing distortions in hand morphology, they often fall short in preserving fine details such as skin tone consistency and texture.
To enhance the realism of human generation, we propose a two-stage post-processing method named RealisHuman. In the first stage, our method rectifies malformed parts by utilizing detailed information and 3D pose estimation results from the original malformed parts. In the second stage, we seamlessly integrate the rectified human parts back into the original image to complete the refinement process.
§ METHOD
Our goal is to refine the malformed parts while preserving the consistent details of the original parts. The overall framework pipeline is depicted in Fig.<ref>. To ensure the realism of the rectified human parts, the pipeline is divided into two distinct stages. In the first stage, the rectified human parts are generated under the guidance of the parts meshes and the malformed part images. In the second stage, the rectified human parts obtained from the first stage are integrated back into the local image, followed by repainting the surrounding region to achieve the final results.
§.§ Preliminary
Latent Diffusion Models. Our approach builds upon the foundation of Stable Diffusion (SD)<cit.>, which originates from the Latent Diffusion Model (LDM). LDMs are designed to operate within the latent space managed by an autoencoder, specifically 𝒟(ℰ(·)). A prime example of these models is Stable Diffusion (SD), which combines a Variational AutoEncoder (VAE)<cit.> and a time-conditioned U-Net<cit.> to estimate noise. For handling text inputs, SD uses a CLIP ViT-L/14<cit.> text encoder to transform textual queries into embeddings, denoted as c_text.
In the training phase, the model processes an image I and a corresponding text condition c_text. The image is encoded into a latent representation z_0 = ℰ(I), which then undergoes a predefined sequence of T diffusion steps governed by a Gaussian process, resulting in a noisy latent representation z_T ∼𝒩(0, 1). The objective of SD is to iteratively refine z_T back to z_0, using the following loss function:
L = 𝔼_E(I), c_text, ϵ∼𝒩(0,1), t[ ϵ - ϵ_θ(z_t, t, c_text) _2^2 ],
where t = 1, ..., T denotes the timestep embedding. ϵ_θ denotes the trainable components within the denoising U-Net, which processes the noisy latents z_t and the text condition c_text. The architecture of the U-Net includes convolutional layers (Residual Blocks) and both self- attention and cross-attention mechanisms (Transformer Blocks).
The training process involves encoding the image into a latent form z_0 and subjecting it to a sequence of diffusion steps, producing z_T. The denoising U-Net is trained to predict and remove the noise added during these steps. Once trained, the model can generate z_0 from z_T using a deterministic sampling method (such as DDIM<cit.>), and the final image is reconstructed through the decoder 𝒟.
During inference, the initial latent z_T is sampled from a Gaussian distribution with the initial timestep T and gradually refined through iterative denoising steps to yield z_0. At each step, the U-Net predicts the noise present in the latent features corresponding to that specific timestep. The decoder 𝒟 then reconstructs the final image from z_0.
§.§ Realistic Human Parts Generation
In the first stage, our objective is to generate realistic parts that maintain consistent detail and pose with the original images. This is achieved by using the guidance of meshes and reference information from the malformed parts. Leveraging these, we ensure the rectified human parts match the intended appearance and pose.
Data preparation. Suppose we have a series of original human images and corresponding generated images that contain malformed human parts, produced by algorithms such as <cit.>. We begin by locating and cropping the target part regions using the human skeleton estimation method <cit.>. After isolating the target part regions, we employ the state-of-the-art (SOTA) mesh reconstruction method <cit.> to estimate the meshes for each part. Additionally, we render the meshes to produce depth maps and binary mask maps m. To reduce the influence of the background and focus on realistic human parts generation, we apply the mask m to filter out the background and obtain the foreground regions of the human parts as reference images I_ref.
Part Detail Encoder. Previous image-conditioned generation tasks <cit.> have typically utilized the CLIP image encoder <cit.> to encode reference images. Specifically, these methods compress reference images from a spatial size of 224 × 224 × 3 into a one-dimensional vector of dimension 1024, and then employ cross-attention mechanisms to integrate the latent representation with this vector. However, these approaches face challenges in preserving appearance details, as encoding reference images into semantic-level features results in a loss of spatial representations. Previous works <cit.> have demonstrated that the self-attention mechanism can significantly enhance the preservation of detail in reference images. Inspired by these findings, we introduce the Part Detail Encoder to improve the realism of rectified human parts by integrating detailed information from the reference images I_ref. The Part Detail Encoder shares the same architecture as the original Stable Diffusion (SD), comprising self-attention and cross-attention layers, and is initialized with the original SD UNet. To achieve this, we use the reference images as input to the Part Detail Encoder and obtain intermediate outputs. To better integrate detailed information, we modify the input to the self-attention mechanism of the UNet. Specifically, we concatenate the intermediate outputs of the Part Detail Encoder with those of the original SD, and use this concatenated output as the input to the self-attention mechanism of the original SD. This approach ensures that fine-grained details are preserved, enhancing the overall realism of the generated human parts. The modified self-attention mechanism can be formulated as:
f_s=softmax(Q_o · (K_o ⊕ K_h)^T/√(d)) · (V_o ⊕ V_h),
where d is the feature dimension. Q_o, K_o, and V_o denote the query, key, and value from the self-attention layers of the original SD, respectively. Meanwhile, K_h and V_h denote the key and value from the self-attention layers of the Part Detail Encoder.
Meanwhile, we employ DINOv2<cit.> to get the image embedding c_r of the reference image, which is then passed into the model through a cross-attention mechanism. This approach supplements the semantic-level features of the reference image. The depth map is processed through several convolution layers to obtain the pose condition c_p, which is then added to the noise latent before being input into the denoising UNet,as described in <cit.>.
Training. With the design of above, the loss term of this stage is computed as:
ℒ_1 = 𝔼_z_0, c_p, c_r, I_ref, ϵ∼𝒩(0,1),t [||ϵ - ϵ_θ(z_t,c_p, c_r, I_ref,t)||_2^2],
where ϵ_θ denotes the trainable parameters of the denoising UNet and t is the timestep embedding.
§.§ Seamless Human Parts Integration.
Another issue is that directly pasting back the rectified human parts r_part introduces copy-and-paste artifacts in the edited region, making the generated image appear unnatural. To address this issue, we repaint the area between the background and the rectified human parts, seamlessly integrating them into the target region for a more natural appearance.
Data Preparation. Given an image containing human parts like the face or hands, we first locate and crop the target regions and obtain the binary masks using the same approach mentioned in the first stage. For each part, we dilate its binary mask m using the kernel k_d to obtain the dilated mask m_d = dilate(m, k_d). Additionally, we erode the binary mask m with a small kernel k_e to obtain the eroded mask m_e = erode(m, k_e). Using the eroded mask, we extract the eroded human part and paste it back into the corresponding region. The erosion process is crucial because the rectified human parts generated in the first stage often exhibit inharmonious edges, which significantly affect the repainting results. By eroding the human part regions, we aim to equip the model with the ability to complete human part edges during the repainting process. This approach helps mitigate issues caused by inharmonious edges, resulting in a more natural and seamless integration of the rectified human parts into the target regions. Suppose the local human part image is denoted as I. The corresponding masked image and binary mask can be formulated with Eq. <ref> and Eq. <ref>.
I_f = I ⊙ (1-m_d) + I ⊙ m_e,
m_f = m_d - m_e.
Our goal is to predict the area where the binary mask m_f equals one while keeping the other areas unchanged, resulting in the final output I^'. To achieve this, we first encode the masked image I_f to obtain the masked latent l_m = ℰ(I_f). Next, we downsample the binary mask m_f to match the size of the masked latent l_m. Similar to SD-inpainting, we add five additional input channels for the UNet: four for the encoded masked image l_m and one for the mask m_f. Additionally, we initialize the model with SD-inpainting weights. With this design, the loss term for this stage is computed as follows:
ℒ_2 = 𝔼_z_0, l_m, m_f, ϵ∼𝒩(0,1), t[ ϵ - ϵ_θ(z_t, l_m, m_f, t) _2^2 ],
where t is the timestep embedding.
During inference, we paste the rectified human part r_part back into the corresponding region and predict the unknown area to ensure harmonious integration of the rectified human part. The formulation of I_f during the inference process is given by Eq. <ref>:
I_f = I ⊙ (1-m_d) + r_part⊙ m_e.
§ EXPERIMENTS
In this section, we begin by detailing the implementation aspects of our approach, followed by a description of the datasets and evaluation protocols used. Additionally, We present comparative experiments to benchmark our method against previous work, and conduct ablation studies to assess the efficacy of each component in our framework.
Our RealisHuman is trained in two stages: realistic human parts generation and seamlessly integrated the human parts. All experiments are conducted on 8 NVIDIA A800 GPUs. In the first stage, both the main UNet and the Part Detail Encoder are initialized from Real Vision v5.1, and all components are optimizable except for DINOv2<cit.> and VAE encoder/decoder<cit.>. Training is conducted for 50,000 steps with a batch size of 5. In the second stage, only the Inpainting U-Net is optimizable, which is initialized from SD-inpainting<cit.>. We train the Inpainting U-Net for 20,000 steps with a batch size of 16. For both two stages, the learning rate is set to 5e-5. The image is resize to a resolution of 512×512. The zero-SNR<cit.> and classifier-free guidance(CFG) <cit.> are enabled. The unconditional drop rate is set to 1e-2. We employ HaMeR<cit.> and 3DDFAv3<cit.> to estimate the meshes for each human part.
During inference, we adopt a DDIM sampler for 20 denoising steps. We set the hyper-parameter g_d to 5 and g_e to 0.05 times the perimeter of the mask. The images demonstrated in our paper are generated by SDXL<cit.> and SDXL-LEOSAM[https://civitai.com/models/43977/leosams-helloworld-xl].
§.§ Datasets and Evaluation Protocol.
We have collected a dataset comprising approximately 58,000 high-quality local hand images and 38,000 high-quality local face images for training our model. To demonstrate the effectiveness of our approach for refining malformed parts, we evaluate its performance on
UBC Fashion<cit.> dataset. The human subjects in UBC Fashion exhibit clearly visible hands and faces.
UBC Fashion consists of 500 training and 100 testing videos, each containing roughly 350 frames.
We follow the official train/test split for both UBC Fashion.
Specifically, we use Fréchet Inception Distance (FID)<cit.> and the keypoint detection confidence scores of a hand detector or face detector<cit.> to evaluate the plausibility of the generated human parts.
§.§ Results and Comparisons.
We generate human images with pose guidance on the UBC Fashion dataset
using the most advanced human synthesis methods<cit.>. After generating the human images, we located and cropped the regions containing human parts and applied our RealisHuman framework to refine the malformed parts. To mitigate the influence of the relatively small size of human parts in the original images and to better evaluate the metrics, we focused the evaluation specifically on the regions containing human parts.
In Tab.<ref>, we report the FID and Det. Conf. scores before and after using our RealisHuman for both face and hand regions. The results demonstrate the effectiveness of our method. Specifically, we observe significant improvements in both metrics after applying our refinement process. The reduction in FID scores indicates that the refined images are perceptually closer to real images, showcasing enhanced realism. Similarly, the increase in Det. Conf. scores reflects improved detection confidence by the detectors, highlighting the structural accuracy and plausibility of the refined face and hand regions.
To evaluate the effectiveness of our method in refining hand images, we compare our method with the popular malformed hands refining method HandRefiner in Fig.<ref>. Additionally, we conduct a detailed analysis to illustrate the advantages of our approach. As shown in Fig.<ref>, each comparison figure consists of three horizontally aligned images: from left to right, they display the original image, our method's repair result, and the HandRefiner method's repair result. This figure presents a comprehensive comparison between our method and the HandRefiner method across several critical aspects:
(a) Preservation of Hand Details: Our method excels at maintaining and matching the original details, such as the skin tone of the hands. It demonstrates superior consistency in preserving intricate details, accurately restoring textures and fine features of the hands. As a result, the repaired hands have a more natural and realistic appearance.
(b) Effectiveness in Small Hand Repair: Compare to HandRefiner, our method is particularly effective in repairing smaller hands, meticulously restoring their details and shapes.
(c) Preservation of Other Regions: Unlike HandRefiner, which can cause distortions in other areas such as the face while repairing hands, our method preserves the overall integrity and appearance of the image, ensuring that other regions remain unaffected. Effectively showcases these advantages, highlighting the superior performance of our method in hand repair tasks compared to the HandRefiner method. This comparison underscores the efficacy and reliability of our approach in producing high-quality hand restorations.
Additionally, we demonstrate the capability of RealisHuman in facial refinement. As shown in Fig.<ref>, our method effectively addresses issues such as distorted facial features and unfocused eyes in the original images, highlighting the efficacy of our approach. The results illustrate that RealisHuman can significantly enhance the realism and accuracy of facial features, further validating the robustness, versatility, and strong generalization capability of our method across various styles of human image restoration. For additional examples and details, please refer to the supplementary materials.
§.§ Abalation Study.
Effect of the second stage.
As discussed above, we address the issue of copy-and-paste artifacts by repainting the transition area between the background and the rectified human parts, ensuring seamless integration into the target region for a more natural appearance. Fig.<ref> compares the results of directly pasting the rectified human part r_part with our method. It can be observed that our approach effectively integrates the rectified human parts into the surrounding area without introducing copy-and-paste artifacts.
Effect of the eroded mask m_e. As discussed above, the eroded mask m_e is used to mitigate the effects of inharmonious edges in the second stage. Without the eroded mask, these inharmonious edges can hinder the seamless integration of the rectified human parts with their surroundings, leading to the generation of discordant elements such as hair, watches, and other artifacts. We illustrate the impact of the eroded mask in Fig.<ref>, comparing images processed with and without it. The first row shows the results without the eroded mask, where noticeable artifacts are present. The second row demonstrates the results when using the eroded mask, which effectively reduces edge artifacts and achieves a smoother integration.
§ LIMITATIONS AND DISCUSSION
While our method has demonstrated notable improvements in refining and reconstructing human hands, it still faces several challenges, as illustrated in Fig.<ref>
. Firstly, the method may struggle to accurately reconstruct interaction between hands and objects. Secondly, it may fail to maintain consistency in the presence of objects. Thirdly, when the original hand is severely distorted, the method may be unable to estimate the correct hand pose, leading to unsuccessful hand reconstruction. Addressing these issues will be the focus of our future work, potentially incorporating more sophisticated modeling techniques or leveraging additional contextual information to improve performance in these areas.
§ CONCLUSION
In this paper, we introduced RealisHuman, a novel post-processing solution for refining malformed human parts in generated images. Our method operates in two stages: first, generating realistic human parts using the original malformed human parts as the reference to maintain consistent details; second, seamlessly integrating the rectified human parts by repainting the surrounding areas. This framework effectively addresses the challenges of human parts generation and can be extended to other local refinement tasks, such as logo refinement. Comprehensive experiments demonstrate significant improvements in both qualitative and quantitative measures, validating the effectiveness and robustness of our approach.
|
http://arxiv.org/abs/2409.02367v1 | 20240904013826 | SDSPT2s: SDSPT2 with Selection | [
"Yibo Lei",
"Yang Guo",
"Bingbing Suo",
"Wenjian Liu"
] | physics.chem-ph | [
"physics.chem-ph"
] |
§ ABSTRACT
As an approximation to SDSCI [static-dynamic-static (SDS) configuration interaction (CI), a minimal MRCI; Theor. Chem. Acc. 133, 1481 (2014)], SDSPT2 [Mol. Phys. 115, 2696 (2017)]
is a CI-like multireference (MR) second-order perturbation theory (PT2)
that treats single and multiple roots on an equal footing. This feature permits the use of configuration selection
over a large complete active space (CAS) P to end up with a much reduced reference space P̃,
which is connected only with a portion (Q̃_1) of the full first-order interacting space Q connected to P.
The effective interacting Q̃ space can further be truncated by an integral-based cutoff threshold.
With marginal loss of accuracy,
the selection-truncation procedure, along with an efficient evaluation and storage of internal contraction coefficients,
renders SDSPT2s (SDSPT2 with selection) applicable to systems that cannot be handled by the parent CAS-based SDSPT2,
as demonstrated by several challenging showcases.
Diffusion-limited settling of highly porous particles in density-stratified fluids
Daniel M. Harris
September 3, 2024
==================================================================================
§ INTRODUCTION
A system of strongly correlated electrons is characterized by a large number of energetically adjacent and singly occupied frontier orbitals.
The strong static/nondynamic correlation among such orbitals renders the many-electron wave function
a heavy mixture of a huge number of Slater determinants or
equivalently configuration state functions (CSF). As such, such systems (especially of low spins) go beyond the capability of
single reference methods. Instead, the use of a multireference (MR) method is mandatory. The available MR methods can be classified into
three families<cit.>: static-then-dynamic (SD), dynamic-then-static (DS), and static-dynamic-static (SDS).
Briefly, the SD type of methods start with a diagonalization of the bare molecular Hamiltonian projected onto
a reference/active space P={Φ_R; R=[1,N_R]}, so as to
obtain a reference state Ψ^(0)_k=∑_R=1^N_RΦ_RC̅_R k^(0).
This step captures static correlation, especially when Ψ^(0)_k is only qualitatively or semi-quantitatively correct.
The remaining dynamic correction to Ψ^(0)_k can be accounted for in a number of ways, even just to first order Ψ^(1)_k.
Classic examples of this family of methods include complete active space second-order perturbation theory<cit.>,
multiconfigurtion quasi-degenerate perturbation theory<cit.>, and n-electron valence
second-order perturbation theory (NEVPT2)<cit.>.
A common feature of such methods lies in that the coefficients C̅_R k^(0) of the reference state Ψ^(0)_k are used to construct Ψ^(1)_k
for dynamic correlation but are not relaxed in the presence of dynamic correlation. Colloquially,
dynamic correlation sees static correlation but static correlation
does not see dynamic correlation. This is true even for their multi-state (MS) variants<cit.>
because of their insufficient revision of the coefficients C̅_R k^(0) of any Ψ^(0)_k.
In contrast, the DS family of methods incorporate dynamic correction to each reference function Φ_R and then
construct and diagonalize an effective Hamiltonian in the space P, thereby producing a wave function that can be (very) different from Ψ^(0)_k
obtained by diagonalizing the bare Hamiltonian in the same space. Classic examples
are the (shifted) B_k type of methods<cit.>.
Since the zero-order coefficients C̅_R k^(0) are not involved in the dynamic correlation step at all,
it can be said that dynamic correlation does not see static correlation in such family of methods.
It should be clear that both the SD and DS families of methods cannot achieve balanced treatments of the static and dynamic components of
the overall correlation, especially when the two components are strongly entangled and even interchangeable.
This situation warrants a SDS type of treatment, where the predetermined coefficients C̅_R k^(0) are used
in dynamic correlation but are then sufficiently or even fully relaxed in the presence of dynamic correlation.
That is, the static and dynamic components of the overall correlation do see each other.
Several variants of multireference second-order perturbation theory (MRPT2)<cit.>,
internally contracted multireference configuration interaction (ic-MRCI)
<cit.>,
and internally contracted multireference coupled-cluster methods
<cit.>
belong to this family. However, such methods usually require the construction and diagonalization of a large Hamiltonian matrix
even just for one state.
At variance with this, one of the present authors proposed<cit.> a restricted SDS framework for
constructing the many-electron wave functions, where no matter how many electrons and how many orbitals are to be correlated,
only a 3N_P-by-3N_P Hamiltonian matrix is constructed and diagonalized for N_P states.
The framework leads to a series of methods, including SDSCI<cit.>, SDSPT2<cit.>, iterative configuration interaction (iCI)<cit.>,
iCI with configuration and perturbation (iCIPT2)<cit.>, and extended variants<cit.> of SDSCI and SDSPT2. Albeit a minimal MRCI, SDSCI
is very close in accuracy to ic-MRCI<cit.>, with a computational cost being only that
of one iteration of ic-MRCI. As an approximation to SDSCI, SDSPT2 is a CI-like MRPT2
and treats single and multiple roots in the same way.
In particular, SDSPT2 gives rise to MS-NEVPT2 for free. While SDSPT2 is usually very similar to MS-NEVPT2 in accuracy
<cit.>, it does outperform MS-NEVPT2
for situations with multiple nearly degenerate states<cit.> .
As an iterative version of SDSCI, iCI is an exact solver of full CI, whereas iCIPT2 is one of the most efficient
near-exact methods<cit.>. Taking iCI as the CASCI solver, we obtain iCISCF<cit.>, which can handle
active spaces as large as CAS(60,60) (i.e, 60 electrons in 60 orbitals).
One major problem associated with the above MR methods lies in that a large reference space P
leads to an exceedingly large first-order interacting space (FOIS) Q that is intractable.
The only way to go is to reduce the reference space from P to P̃,
so as to reduce the FOIS from Q to Q̃.
The Q̃ space can further be treated approximately by separating it into an important subset
that is treated rigorously and an unimportant subset that can be treated approximately.
Several approaches have been proposed along this line
<cit.>, which differ from
each other in the choice of reduced reference space P̃, zeroth-order Hamiltonian H_0, perturbers spanning Q̃,
and effective Hamiltonian to be diagonalized for the final solutions.
In the present work, we apply such approximations to the
complete active space self-consistent field (CASSCF)-based SDSPT2<cit.>, so as to render it
applicable to systems that require very large active spaces.
The paper is organized as follows. The essential features of SDSPT2<cit.> are first recapitulated in Sec. <ref>. The
hole-particle symmetry-based graphic unitary group approach (HPS-GUGA)<cit.> is then employed in Sec. <ref> to construct
the spin-adapted, internally contracted configurations (ICC), starting with the orbital configurations (oCFG) contained
in the iCISCF/CASSCF<cit.> wave functions Ψ^(0)_k (=∑_R^N_RΦ_RC̅_R k^(0)).
Such ICCs span the full FOIS Q. However, the ICCs belonging to the Q-Q̃ portion of Q and hence having zero Hamiltonian matrix elements
with the reduced reference wave functions
Ψ̃^(0)_k=∑_R=1^Ñ_RΦ_RC̃̅̃_R k^(0) (resulting from the truncation of Ψ^(0)_k)
can be screened out automatically. An integral-based cutoff is further introduced to truncate the most expensive
subspaces of Q̃
The efficacy of SDSPT2s (SDSPT2 with selection) is illustrated in Sec. <ref> with
several challenging systems, including a simplified model of heme, chromium dimer, Cu_2O^2+_2 core, and transition metal complex [Co(TC-3,3)(NO)]. The paper is closed with a summary in Sec. <ref>.
§ SDSPT2
Unless otherwise stated, the notations documented in Table <ref> are to be used for the orbitals and states, under the Einstein summation
convention over repeated indices.
The SDS framework<cit.> starts with the following wave functions for N_P lowest states
|Ψ_I⟩
=∑_k^N_P |Ψ_k^(0)⟩C̃_kI + ∑_k^N_P|Ψ_k^(1)⟩C̃_(k+N_P)I + ∑_k^N_P|Ψ_k^(2)⟩C̃_(k+2N_P)I,
where Ψ_k^(0) (=∑^N_R_R=1Φ_RC̅_Rk^(0)), Ψ_k^(1), and Ψ_k^(2) represent the zeroth-order, first-order, and secondary functions, respectively. With the introduction of the primary (P_m) and secondary (P_s) parts of the P space,
P_m = ∑_k=1^N_P|Ψ^(0)_k⟩⟨Ψ_k^(0)|,
P_s = P-P_m
= ∑^N_R_R=1|Φ_R⟩⟨Φ_R|-P_m=∑_l=N_P+1^N_R|Ψ^(0)_l⟩⟨Ψ_l^(0)|,
the first-order and secondary functions can be defined as
|Ψ^(1)_k⟩ = Q1/E_k^(0)-H_0QH|Ψ^(0)_k⟩=∑_q∈ Q|Φ̅_q⟩C̅^(1)_qk,
Q = 1-P=∑_q∈ Q|Φ̅_q⟩⟨Φ̅_q|,
|Ψ^(2)_k⟩ = P_sH|Ψ^(1)_k⟩
≈ P_s^' H|Ψ^(1)_k⟩, P_s^'=∑_l=N_P+1^N_P+M_P|Ψ^(0)_l⟩⟨Ψ_l^(0)|,
= ∑_R=1^N_R|Φ_R⟩C̅^(2)_Rk,
{Φ̅_q} in Eq. (<ref>) are orthonormalized ICCs obtained by single and double excitations from Ψ_k^(0).
Eq. (<ref>) represents the projection of the Lanczos vector H|Ψ^(1)_k⟩ onto the secondary space,
which is further approximated to Eq. (<ref>) to simply the evaluation of the matrix elements.
It should be clear from Eq. (<ref>) that the secondary functions are linear combinations
of the reference CSFs, with the coefficients related to the first-order functions.
As such, they can facilitate the relaxation of the reference coefficients in the presence of dynamic correlation,
particularly for avoided crossings<cit.> or multiple quasi-degenerate states<cit.>.
The fact that |Ψ_k^(0)⟩, |Ψ_k^(1)⟩, and |Ψ_k^(2)⟩ have decreasing weights
in the wave function |Ψ_I⟩ justify the characterization of them as primary, external, and secondary states.
Since both |Ψ_k^(1)⟩ and |Ψ_k^(2)⟩ are specific to |Ψ_k^(0)⟩, the generalized eigenvalue problem
𝐇̃𝐂̃=𝐒̃𝐂̃𝐄̃
for determining the expansion coefficients of |Ψ_I⟩ is only of dimension 3N_P. The Hamiltonian and metric matrices have the following structures
H̃ = [ P_mHP_m P_mHQ P_mHP_s; QHP_m QHQ QHP_s; P_sHP_m P_sHQ P_sHP_s ],
= [ E_k^(0)δ_kl ⟨Ψ_k^(0)|H|Ψ_l^(1)⟩ 0; ⟨Ψ_l^(1)|H|Ψ_k^(0)⟩ ⟨Ψ_k^(1)|H|Ψ_l^(1)⟩ ⟨Ψ_k^(1)|H|Ψ_l^(2)⟩; 0 ⟨Ψ_l^(2)|H|Ψ_k^(1)⟩ ⟨Ψ_k^(2)|H|Ψ_l^(2)⟩ ], k,l=1,⋯, N_P,
S̃ = [ δ_kl 0 0; 0 ⟨Ψ_k^(1)|Ψ_l^(1)⟩ 0; 0 0 ⟨Ψ_k^(2)|Ψ_l^(2)⟩ ].
The above is nothing but a minimal MRCI (dubbed as SDSCI<cit.>). Taking SDSCI as the seed,
various methods can be derived<cit.>, among which the simplest variant
is SDSPT2, which amounts to replacing the QHQ block in Eq. (<ref>) with QH_0Q. Like NEVPT2<cit.>,
the Dyall CAS/A Hamiltonian<cit.> is adopted here for H_0:
To satisfy the zeroth-order Schödinger equation,
H_0|Ψ_J^(0)⟩=E_J^(0)|Ψ_J^(0)⟩,⟨Ψ_I^(0)|Ψ_J^(0)⟩=δ_IJ, ,
H_0=H_I^D + H_A^D,
H_I^D=∑_iϵ_iÊ_ii + ∑_aε_aÊ_aa +C_I^D,
H_A^D=∑_tu f^c_tuÊ_tu+1/2∑_tuvw(tu|vw)ê_tu,vw,
C_I^D= E^c-2∑_iε_i, E^c=∑_i(h_ii+f^c_ii),
f^c_pq=h_pq+∑_i[2(pq|ii)-(pi|iq)].
where the quasi-canonical orbital energies, ϵ_i and ϵ_a,
are obtained by diagonalizing the generalized Fock matrix
F_pq=f^c_pq+∑_tu[(pq|tu)-1/2 (pu|tq)]D_tu,
D_tu=∑_k w_k⟨Ψ_k^(0)|Ê_tu|Ψ_k^(0)⟩
for the doubly occupied and virtual subspaces separately.
The particular choice of the constant C^D_I (<ref>) is to
make ⟨Φ_μ|H^D|Φ_ν⟩ equal to ⟨Φ_μ|H|Φ_ν⟩ for all CSFs
in P, such that
⟨Ψ_k^(0)|H^D|Ψ_l^(0)⟩=⟨Ψ_k^(0)|H|Ψ_l^(0)⟩=E_k^(0)δ_kl,
where
E_k^(0)=E^c+∑_tuf^c_tu⟨Ψ^(0)_k|E^t_u|Ψ^(0)_k⟩+1/2∑_tuvw(tu|vw)⟨Ψ^(0)_k|E^tv_uw|Ψ^(0)_k⟩.
Since only the P_mHP_m and P_mHQ blocks of Eq. (<ref>) are required by NEVPT2, the free production of NEVPT2 by SDSPT2 is obvious.
However, it should be noted that SDSPT2 is not size consistent. Nevertheless, the size consistency errors
can readily be cured by the Pople correction<cit.>, as demonstrated before<cit.>.
Notations for the orbitals and states used in this work.
=20pt
Space Indices
Orbital space
Arbitrary orbital p,q,r,s
Hole ( or closed) orbital i,j,k,l
Active orbital u,v,t,w
External ( or virtual) orbital a,b,c,d
Configuration space
Step vector in GUGA |(d)_μ⟩=|d_1,d_2,⋯,d_n⟩
Configuration state function (CSF) Φ_μ,Φ_ν
Electronic state Ψ_I,Ψ_J
Reference CSF Φ_R
Reference orbital configuration Φ_R̅^oCFG
Reference wavefunction Ψ^(0)_k,Ψ^(0)_l
A sub-DRT in the active space formed by all vertices and arcs in DRT from vertex X̅ to Y
X̅Y
§ IMPLEMENTATION OF SDSPT2S
The SDSPT2 method can be divided into two categories based on the choice of reference space. As illustrated in Fig. <ref>, SDSPT2 with a CAS reference has been implemented using HPS-GUGA directly in our previous work<cit.>, where the molecular orbitals are optimized through CASSCF. For systems with a large active space as the reference, SDSPT2 involves a vast number of Φ_R and Φ_q for the P and Q spaces, respectively, raising several questions:
(1) How should sufficient reference configurations be selected?
(2) How can an affordable Q space be constructed?
(3) How can redundant Φ_q be eliminated?
(4) How can negligible Φ_q and Φ̅_q be pruned?
(5) How can the corresponding H̃ and S̃ be calculated?
For the second category, iCISCF calculations with the C_min parameter are performed to generate sufficiently accurate Ψ_k^(0) and molecular orbitals as preliminary steps for SDSPT2. However, in practice, the number of Φ_R for Ψ_k^(0) is too large to be used as reference CSFs for SDSPT2 with large active spaces. Therefore, the reference CSFs that make up the P space must be chosen based on the coefficients of the iCISCF expansion, subject to the condition
min(|C̅_Rk^(0)|) > P_min, k ∈ N_P,
where P_min is the threshold for selecting the dominant Φ_R among all N_P Ψ_k^(0). It should be noted that this selection process might result in a scenario where all the chosen reference CSFs have no electron occupation on certain active orbitals, rendering these active orbitals irrelevant within the selected active space. In such cases, C_min should be adjusted to a smaller value to ensure the selection of reference CSFs that encompass all active orbitals.
In previous works<cit.>, the construction of the Q space for large active spaces is typically accomplished by exciting the reference orbital configurations (Φ_R̅^oCFG), which are derived from the CSFs of the P space. Given that one Φ_R̅^oCFG corresponds to several Φ_R with the same occupation pattern, the Q space constructed from the single and double excitations of selected Φ_R̅^oCFG includes many redundant Φ_q that do not interact with the chosen Φ_R. To remove the Q_r space containing these redundant Φ_q, the effective First Order Interacting Space (eFOIS) method was employed in our implementation. This approach is beneficial because it allows us to discard Φ_q when both ⟨Φ_q|Ê_pq|Ψ_k^(0)⟩=0 and ⟨Φ_q|ê_pq,rs|Ψ_k^(0)⟩=0 for all N_P target states, where Ê_pq and ê_pq,rs represent one- and two-electron excitation operators, respectively.
§.§.§ CI space construction with selection
As is well known, dealing with the Q space for large active spaces as references is complex. Therefore, efficiently constructing and compactly representing the Q space is one of the main objectives of the selection-based SDSPT2 in this study. One method to address these challenges is the graphical unitary group approach (GUGA)<cit.>, which offers a compact structure of CSFs through distinct row tableaux (DRT) as proposed by Shavitt<cit.>.
The DRT is comprised of nodes (or vertices) and arcs (or sloped line segments). It represents all the CSFs of the full-CI method by specifying only the electron number (N) and orbital number (n), along with a defined total spin value S for the target state. Each node (a_r,b_r) signifies a unique row, while an arc labeled d_r shows the step from (a_r,b_r) downward to the adjacent node (a_r-1,b_r-1). The value of d_r is calculated as 3a_r+b_r, where x_r=x_r-x_r-1 for x=a,b. In this context, a_r=N_r/2-S_r, b_r=2S_r, with N_r representing the electron count in the first r occupied orbitals, and S_r being the intermediate spin value derived from the coupling of the first r orbitals. The possible values for d_r are 0, 1, 2, or 3. A d_r value of 0, depicted by a vertical line (arc), indicates that the r-th orbital is unoccupied. Values of d_r equal to 1, 2, or 3, represented by increasing slopes (arcs), correspond to positive spin coupling (spin up), negative spin coupling (spin down), and doubly occupied r respectively. The step from (a_r,b_r) to (a_r-1,b_r-1) through different d_r follows specific rules:
(1) d_r=0, (a_r-1,b_r-1)=(a_r,b_r);
(2) d_r=1, (a_r-1,b_r-1)=(a_r,b_r-1);
(3) d_r=2, (a_r-1,b_r-1)=(a_r-1,b_r+1);
(4) d_r=3, (a_r-1,b_r-1)=(a_r-1,b_r);
A DRT is constructed by connecting all possible nodes from the top (head) vertex (a_n,b_n) to the bottom (tail) vertex (0,0), ensuring that these connections adhere to the specified rules. The inverse sequence of d_r, ranging from the first to the n-th orbital, forms a step vector |(d)_μ⟩, which is used to sequentially document each CSF (|Φ_μ⟩).
The vast number of CSFs impedes Full-CI's capability to compute systems with numerous electrons and orbitals, despite the use of DRT to streamline the representation of the Full-CI space comprising these CSFs. The conventional technique to compress the CI space involves selecting reference CSFs and subsequently generating only their singly and doubly excited CSFs, thereby eliminating all potential higher-level excitations.
This truncation approach facilitates the construction of CSFs for the multi-reference configuration interaction with single and double excitations (MRCISD).<cit.>
In the MRCISD method, orbital space is typically partitioned into three categories: hole (closed or inactive) space, active space, and external (virtual) space. Within this framework, the reference CSFs are characterized by double occupancy in the hole space, variable occupancy in the active space, and no occupancy in the external space.
Accordingly, a DRT can be segmented into hole, active and external components as illustrated in Fig. <ref>. In this context, n_e, n_a, and n_h represent the counts of the external, active, and hole orbitals, respectively. Notably, the HPS-GUGA employs an inverse orbital order, as highlighted previously<cit.>.
In MRCISD, excitations from the hole space relative to the reference CSFs are limited to moving only one or two electrons, resulting in one or two holes in the hole space. Consequently, the external space is capable of accommodating only one or two additional electrons.
Fig. <ref> illustrates that the hole space graph is symmetrically matched with the external space graph, demonstrating hole-particle symmetry (HPS). This DRT contains two boundaries: one between the hole and active spaces, and the other between the active and external spaces. For S=0, the boundary vertices between the hole and active spaces are defined by S̅ for the CI subspace with two singlet holes, T̅ with two triplet holes, D̅ with one hole, and V̅ with no hole. In the case of S=1/2, D̅ is replaced by D̅_1/2 for spin up and D̅_-1/2 for spin down electrons on the hole space. For S ≥ 1, an additional vertex T̅_-1 appears for S_z=-1 on the hole space, while S̅ and T̅ represent S_z=0 and S_z=1, respectively. This results in six vertices: T̅_-1, D̅_-1/2, S̅, T̅, D̅ (or D̅_1/2), V̅ on the boundary between the hole and active spaces. The corresponding (a_r,b_r) values for these vertices are calculated as follows:
T̅_-1: (a-n_h+2,b-2),
D̅_-1/2: (a-n_h+1,b-1),
S̅: (a-n_h+1,b),
T̅: (a-n_h,b+2),
D̅: (a-n_h,b+1),
V̅: (a-n_h,b),
where a = N/2 - S and b = 2S, with N being the total number of electrons.
Conversely, the boundary between the active and external spaces comprises four vertices: S, T, D, and V, associated with (1,0), (0,2), (0,1), and (0,0), respectively. These correspond to the aforementioned S̅, T̅, D̅, and V̅ in the HPS-GUGA representation, as depicted in Fig. <ref>.
The whole CI space of uncontracted MRCISD (uc-MRCISD), as represented by the DRT in Fig. <ref>, can be segmented into several subspaces, each represented by sub-DRTs within the HPS-GUGA framework. Each sub-DRT corresponds to a collection of arcs on the active space, bounded above by X̅ and below by Y. Consequently, a sequence of |(d)_μ⟩ originating from the head vertex, passing through the X̅Y pair, and terminating at the tail vertex constitutes a CI subspace denoted as X̅Y.
The number of steps from X̅ to the head vertex is termed the number of up steps (NUS(X̅)), while the steps from Y to the tail vertex represent the number of down steps (NDS(Y)). The quantity D(X̅Y) represents the sum of all possible steps between X̅ and Y. Thus, the entire CI space can be categorized by a sequence of X̅Y pairs, with the total dimension expressed as:
Dim(uc-MRCISD)=∑_X̅,YNUS(X̅) × D(X̅Y) × NDS(Y).
In this formulation, all CSFs are naturally orthonormal. Each X̅Y pair signifies a specific excitation type, as illustrated in Table <ref>. This table also highlights the correspondence between conventional CI subspace notations<cit.> and those used in HPS-GUGA under the column related to S_l^(k).
CI subspace correspondence between S_l^(k) and sub-DRTa.
=20pt
t]@ccccccccccccc@
Excitation operator (Ê^X̅Y_M) n_hb n_ec S_l^(k) sub-DRT(X̅Y)d
Ê_uv,ê_uv,tw 0 0 S^(0) V̅V
Ê_ui, ê_ui,vw 1 0 S_i^(1) D̅V
Ê_au, ê_au,vw 0 1 S_a^(-1) V̅D
Ê_ai, ê_ai,uv, ê_ui,av 1 1 S_a,i^(0) D̅D
ê_ui,vj 2 0 S_ij^(2) P̅V
ê_au,bv 0 2 S_ab^(-2) V̅P
ê_ai,uj 2 1 S_a,ij^(1) P̅D
ê_ai,bu 1 2 S_ab,i^(-1) D̅P
ê_ai,bj 2 2 S_ab,ij^(0) P̅P
a S_l^(k) denotes CI subspace on ref. <cit.>, where k is the difference between the numbers of holes and particles.
b Number of holes on the hole space.
c Number of electrons on the virtual space.
d P = S or T, S: two-electron singlet; T: two-electron triplet; D: one-electron doublet; V: void.
The example of water depicted in Fig. <ref> demonstrates that a mere 32 nodes and their connecting arcs suffice to characterize the 259 CSFs encompassing the total CI space, showcasing the compactness of the representation. Henceforth, the efficient identification of these nodes becomes paramount for the construction of the CI space (P and Q) in this study.
Given the structured and straightforward DRT configurations of the hole and external subspaces, the initial eight nodes pertaining to the hole space are predetermined. Consequently, the terminal eight nodes associated with the external space can also be determined effortlessly without the need for a search.
The search for nodes is specifically required only within each sub-DRT (X̅Y), focusing on the region between X̅ and Y, guided by the reference in the active space.
When the CAS comprises 6 orbitals, either Φ_R or Φ_R̅^oCFG, searching for nodes between each pair of X̅ and Y becomes straightforward. This ease arises because all possible excitations are confined to the active orbitals relative to each Φ_R within the CAS. The variations among CI subspaces (or X̅Y pairs) primarily stem from differences in hole and particle numbers, as elucidated in Table <ref>. Each sub-DRT, extending from X̅ to Y, can be conceived as a miniature DRT equivalent to a Full-CI calculation, per the discussion above. Notably, all sub-DRTs possess identical nodes on the active space; for example, nodes 9∼24 are shared in Fig. <ref>. This indicates that all Φ_R are utilized collectively in the construction of the CI space. It is crucial to highlight that nodes situated on the (n_a-1)-th row are discarded if they do not connect downwards to any Y on the n_a-th row. This removal criterion stems from the fact that the step vectors passing through these nodes represent excitations exceeding double excitations from all Φ_R in the CAS.
In the context of a large active space with specific reference configurations, not all nodes and arcs within each sub-DRT, bounded by X̅ and Y, are permissible. This restriction arises because certain Φ_R within the CAS are omitted as reference CSFs. Consequently, nodes and arcs associated with excited CSFs originating from these unselected Φ_R̅^oCFG (or Φ_R) must be eliminated. This process requires an individual examination of single and double excitations from each Φ_R̅^oCFG. For instance, Fig. <ref>(a) shows that for the sub-DRT of D̅D, 8 nodes generate 5 excited CSFs when one Φ_R̅^oCFG is used. When two Φ_R̅^oCFG are chosen, there are 9 nodes producing 8 excited CSFs, as depicted in Fig. <ref>(b). Moreover, selecting three Φ_R̅^oCFG results in 10 nodes and 9 excited CSFs, illustrated in Fig. <ref>(c). Notably, the graph connecting X̅ and Y in Fig. <ref>(c) mirrors that of the entire sub-DRT between X̅ and Y in Fig. <ref>. This similarity indicates that selecting three Φ_R̅^oCFG from the CAS can replicate the same CI subspace generated by employing all Φ_R̅^oCFG of the CAS. Concerning the selected Φ_R, certain excited CSFs become redundant and can be effectively pruned using the eFOIS method mentioned earlier.
The process of constructing the CI space using selected Φ_R̅^oCFG is evidently more complex than when the entire CAS serves as the reference. This complexity necessitates the application of two specific restrictions to eliminate nodes and arcs within the sub-DRTs pertaining to the selected configurations. Table <ref> introduces two auxiliary indices related to these restrictions. The first index, T^ex_X̅Y, corresponds to T_R̅^r,μ, which is designed to track the cumulative excitation number (a_r,b_r)_μ in relation to Φ_R̅^oCFG. Here, r signifies the r-th active orbital, and μ represents the index of node (a_r,b_r)_μ, as depicted in Fig. <ref>. Various methods can define T_R̅^r,μ, including a comparison of the occupation numbers of CSFs that pass through (a_r,b_r)_μ with those of Φ_R̅^oCFG in the r-th orbital. The definition of T_R̅^r,μ can be mathematically expressed as:
T_R̅^r,μ=∑_v=1^r(N^occ_v,μ-N^occ_v,R̅), when N^occ_v,μ≥ N^occ_v,R̅,
where N^occ_v,μ ranges from 0 to 2, N^occ_v,R̅ also ranges from 0 to 2, v spans from 1 to n_act, and R̅ ranges from 1 to n_ref^oCFG. Here, n_ref^oCFG represents the total count of Φ_R̅^oCFG, N^occ_v,R̅ denotes the occupation number of Φ_R̅^oCFG on the v-th row, and N^occ_v,μ signifies the occupation number of the v-th orbital following electron excitation from this orbital, considering possible excitation types d_v of 0, 1, 2, or 3. Consequently, T_R̅^r,μ serves to count the number of excited electrons from X̅ to (a_r,b_r)_μ with the Φ_R̅^oCFG. Should T_R̅^r,μ for all Φ_R̅^oCFG surpass T^ex_X̅Y for a given sub-DRT, such as V̅D, and the condition outlined below is satisfied:
min(T_R̅^r,μ)>T^ex_X̅Y,
then the node (a_r,b_r)_μ and its downward linking arcs to Y are removed.
The second auxiliary index is the occupation index N^ex_X̅Y, which pertains to the cumulative occupation number of Φ_R̅^oCFG on the r-th row. This relationship is defined by:
N^sum_r,R̅=N^occ_r,R̅+N^sum_r-1,R̅, N^sum_0,R̅=0,
where N^sum_r,R̅ represents the summation of occupation numbers up to the r-th row for Φ_R̅^oCFG. Consequently, the remaining active electron number (N^re_r,R̅) up to the r-th orbital of Φ_R̅^oCFG is given by:
N^re_r,R̅=N_a-N^sum_r,R̅.
In this study, V̅V, composed of Φ_R, serves as the P space in selection-based SDSPT2. This choice stems from the observation that the CI space derived from iCISCF(2) can effectively serve as the P space for subsequent SDSPT2 calculations when the static correlation computed by iCISCF(2) closely matches that obtained through CASSCF<cit.>. Each Q subspace, generated by single and double excitations from Φ_R̅^oCFG, exhibits a characteristic where N̅^r_μ=(2a_r+b_r)_μ does not exceed the value of V̅V without excitation from Φ_R̅^oCFG by more than two. This discrepancy is captured by N^ex_X̅Y as outlined in Table <ref>. Thus, the adjusted remainder of active electrons in Φ_R̅^oCFG is expressed as:
N_R̅^r=N^re_r,R̅+N^ex_X̅Y.
Should the candidate node (a_r,b_r)_μ meet the condition:
N̅^r_μ=(2a_r+b_r)_μ>max(N_R̅^r),
it must be eliminated, as no electrons remain available for excitation from any Φ_R̅^oCFG. With both restrictions, as defined by Eqs.(<ref>) and (<ref>), the viable nodes and their linking relationships from X̅ to Y can be efficiently determined.
CI subspace constructions by auxiliary excitation and occupation indices relative to the selected orbital configurations,
which are labeled by the number of parenthesis of (T^ex_X̅Y,N^ex_X̅Y), respectively.a
=40pt
t]@ccccccccccccc@
(T^ex_X̅Y,N^ex_X̅Y) V D P
V̅ (0,0) (1,2) (0,2)
D̅ (2,2) (1,2) (0,2)
P̅ (2,2) (1,2) (0,2)
a P = S or T, S: two-electron singlet; T: two-electron triplet; D: one-electron doublet; V: void. P̅ = S̅ or T̅, S̅: two-hole singlet; T̅: two-hole triplet; D̅: one-hole doublet; V̅: void.
For the illustrated example shown in Fig. <ref>(a), when a single Φ_1^oCFG is selected, the node (2,1) for r=1 is eliminated because N̅^1_μ=5, which exceeds N_R̅^r=N_1^1=4 (calculated as 2+2), thus satisfying the condition in Eq.(<ref>). Similarly, the node (1,1) for r=2 is removed since N̅^2_μ=3 surpasses N_R̅^r=N_1^2=2. In the case of Fig. <ref>(b), where two Φ_R̅^oCFG are selected, the node (1,1) for r=2 remains because N̅^2_μ = N_R̅^r = N_2^2 = 3. Furthermore, with three selected Φ_R̅^oCFG as in Fig. <ref>(c), the node (2,1) for r=1 is retained because N̅^1_μ = N_R̅^r = N_3^1 = 5. It is worth noting that the condition specified in Eq.(<ref>) is not met in this example due to the limited active space.
It is important to note that no duplicate nodes exist across all sub-DRTs, as each newly introduced node (a_r,b_r)_μ must be verified as unique by comparison with the existing nodes in the r-th orbital. This verification process is a time-consuming aspect of the selection-based construction of sub-DRTs. For every node, a quadruple (r,a_r,b_r,λ_r)_μ and T_R̅^r,μ corresponding to n_ref^oCFG must be stored. Here, λ_r denotes the irreducible representation associated with orbital symmetry. Each (r,a_r,b_r,λ_r)_μ can be stored using a single 64-bit integer for molecular systems where n_a ≤ 255 and N_a/2-S, since four integers ranging from 0 to 255 can be encoded within a 64-bit integer using bitwise operations. Furthermore, each T_R̅^r,μ can be represented using 2 bits, as its values are constrained to 0, 1, 2, or 3. The value of 3 is significant because in MRCISD, all high excitations lead to T_R̅^r,μ≥ 3, allowing one 64-bit integer to store 32 different T_R̅^r,μ values. Indeed, two nodes that share the same (r,a_r,b_r,λ_r)_μ but have different T_R̅^r,μ values are considered distinct, not to mention those with differing (r,a_r,b_r,λ_r)_μ.
The conditions and bit representation discussed above facilitate a straightforward and efficient algorithm to address the aforementioned question (2). We propose the following steps for this algorithm to search and verify new nodes in each sub-DRT:
(1) Establish the upper boundary vertex X̅.
(2) Search for (a_r,b_r)_μ connecting from the directly preceding active orbital using one of d_r=0, 1, 2, 3, adhering to the aforementioned rules of DRT, and check if the resulting node will lead to higher excitation using the conditions in Eqs.(<ref>) and (<ref>).
(3) It is only necessary to check for duplication on the same orbital, as nodes with different orbital indices are inherently distinct. Therefore, when there is more than one node on the r-th orbital, compare the (r,a_r,b_r,λ_r)_μ of the new node with the existing (r,a_r,b_r,λ_r)_ν. If (r,a_r,b_r,λ_r)_μ≠ (r,a_r,b_r,λ_r)_ν, the new node remains.
(4) If (r,a_r,b_r,λ_r)_μ = (r,a_r,b_r,λ_r)_ν but ∑_R̅T_R̅^r,μ≠∑_R̅T_R̅^r,ν, it represents a new node.
(5) If ∑_R̅T_R̅^r,μ = ∑_R̅T_R̅^r,ν, further examination of each pair of integers with 32 (T_R̅^r,μ, T_R̅^r,ν) is required. If all integer pairs are identical, the node is a duplicate; however, any differing integer pair results in the retention of the candidate node.
(6) When the new node is located on the last active orbital n_a, it should be removed if its (a_n_a,b_n_a)_μ differs from that of the lower boundary vertex Y.
In this approach, each node between X̅ and Y is distinct, facilitating the creation of orthonormally excited CSFs from Φ_R̅^oCFG within the Q subspace of X̅Y. To store a node with the necessary information (r,a_r,b_r,λ_r)_μ, only one 64-bit integer is required. Furthermore, the indices of the next four connected nodes corresponding to d_r = 0, 1, 2, 3 are saved using four additional 64-bit integers. This efficient storage mechanism ensures that each node can be uniquely identified and linked to its adjacent nodes, facilitating the navigation and processing within the defined Q subspace.
Given the significantly lower number of nodes compared to the generated CSFs, the memory required to save these CSFs in this study is minimal when contrasted with that used for bit representation. Moreover, the generation of new nodes and the checking for duplicates are performed concurrently, thereby eliminating the substantial computational load associated with duplicate checking of generated CSFs when using other techniques like iCI<cit.> and ICE<cit.>.
§.§.§ Evaluating matrix elements of SDSPT2
For the SDSPT2, it is necessary to perform an internal contraction of the excited CSFs in order to compact the Q space, which consequently reduces the number of combination coefficients of Ψ_k^(1) on Eq.(<ref>).
The internally contracted functions, denoted as, Φ^X̅Y_MI, are characterized as excited functions relative to the reference state |Ψ^(0)_I⟩, and are defined as follows:<cit.>
Φ^X̅Y_MI =Ê^X̅Y_M|Ψ^(0)_I⟩=∑_μ∈X̅Y|Φ_μ⟩⟨Φ_μ|Ê^X̅Y_M|Ψ^(0)_I⟩
= ∑_μ∈X̅Y|Φ_μ⟩∑_R∈V̅V⟨Φ_μ|Ê^X̅Y_M|Φ_R⟩ C^I_R
=∑_μ∈X̅Y|Φ_μ⟩C̃^I_μ M.
The spin-free excitation operator, Ê^X̅Y_M, represents either Ê_pq or ê_pq,rs, where M is a collective index that specifies the active orbital indices participating in these excitation operators. The target state index is denoted by I, and the specific definitions of Ê^X̅Y_M are provided in Table <ref> for distinct sub-DRTs (or CI subspaces). Furthermore, the contraction function can be described as a linear combination of the complete set of Φ_μ functions within X̅Y, as evidenced by the identity ∑_μ∈X̅Y|Φ_μ⟩⟨Φ_μ|=1.
In the HPS-GUGA representation, as described by Wang et al.<cit.>, |Φ_μ⟩ is defined as a step vector |(d)_μ⟩, which can be further detailed as |((d)_h(d)_a(d)_e)_μ⟩. Here, (d)_h, (d)_a, and (d)_e represent the step values for the hole, active, and external spaces, respectively.
The coupling coefficient ⟨(d)_μ|Ê^X̅Y_M|(d)_ν⟩ can be rewritten as:
⟨(d)_μ|Ê^X̅Y_M|(d)_ν⟩ =⟨ ((d)_e(d)_a(d)_h)_μ|Ê^X̅Y_M| ((d)_h(d)_a(d)_e)_ν⟩
=∑_J=0,1ω_J HLS(X̅X̅^')_μν· ALS(X̅Y,X̅^'Y^')_μν· ELS(YY^')_μν,
Here, |(d)_μ⟩ and |(d)_ν⟩ belong to sub-DRTs X̅Y and X̅^'Y^', respectively.
The one-electron coupling coefficient is simplified compared to the two-electron case. For the one-electron coupling coefficient, the summation and index J are not needed, and the values are fixed at J=0 and ω_J=1. This means that for one-electron coupling, there is no spin coupling to consider, and the factor ω_J is always 1, indicating no exchange or direct type coupling complexity.
For the two-electron coupling coefficient, the index J accounts for the spin coupling between two generators, E_pq and E_rs, each having a spin of 1/2. The factor ω_J is determined by the intersection of the two generator lines: ω_J=1 for exchange type coupling and ω_J=-1 for direct type coupling. If E_pq and E_rs have no orbital overlap, then ω_J=0.
The coupling coefficients are reexpressed as a product of three segmental factors: HLS, ALS, and ELS, which are defined as follows:
HLS(X̅X̅^')_μν=∏^n_h_r=1W(Q_r(X̅X̅^');(d_r)_μ(d_r)_ν,▵b_r,(b_r)_ν,J)
ALS(X̅Y,X̅^'Y^')_μν=∏^n_h+n_a_r=n_h+1W(Q_r(X̅Y,X̅^'Y^');(d_r)_μ(d_r)_ν,
▵b_r,(b_r)_ν,J)
ELS(YY^')_μν=∏^n_r=n_h+n_a+1W(Q_r(YY^');(d_r)_μ(d_r)_ν,▵b_r,(b_r)_ν,J)
Here, W(Q_r;(d_r)_μ(d_r)_ν,▵ b_r,b_r,J) represents the segment factors that depend on the segment type Q_r, with ▵b_r=(b_r)_ν-(b_r)_μ. The detailed descriptions of these types and their corresponding segment factors are provided by Paldus, Boyle, and Payne <cit.>.
HLS(X̅X̅^')_μν, termed as the hole loop shape, represents a graphical illustration commencing from the loop head to X̅. This may manifest as either a complete or partial loop, contingent upon whether the loop is closed in the hole space. Similarly, ELS(YY^')_μν signifies the external (partial) loop shape corresponding to the graph extending from Y to the loop tail.
These two types of shapes are associated with specific formulae, enabling their pre-calculation as described by Wang et al. <cit.>. In contrast, the active partial and complete loops, denoted as ALS(X̅Y,X̅^'Y^')_μν, necessitate determination through loop searching originating from X̅ to Y. It is noteworthy that confining the search to the active space results in fewer partial loops compared to implementations not utilizing the HPS-GUGA approach. This reduction not only conserves computational time but also mitigates computing bottlenecks. Moreover, multiple ALS(X̅Y,X̅^'Y^')_μν possessing identical active orbital indexes can be simultaneously searched. Subsequently, these can be multiplied in parallel with block-stored molecular integrals sharing the same orbital indexes, thereby expediting the evaluation of the Hamiltonian matrix.
The contraction coefficient, C̃^I_μ M on Eq.(<ref>), can be derived from the interaction between Φ_μ and the reference wavefunction. This coefficient can alternatively be expressed as
C̃^I_μ M =∑_R∈V̅V⟨Φ_μ|Ê^X̅Y_M|Φ_R⟩ C^I_R
=∑_R∈V̅V⟨ (d)_μ|Ê^X̅Y_M| (d)_R⟩ C^I_R
=∑_R∈V̅V⟨ ((d)_e(d)_a(d)_h)_μ|Ê^X̅Y_M| ((d)_h(d)_a(d)_e)_R⟩ C^I_R
=ELS(YV) · A^I_μ M· HLS(X̅V̅)
where
A^I_μ M=∑_R∈V̅VALS(X̅Y,V̅V)_μ R C^I_R.
HLS(X̅V̅) and ELS(YV) are precomputed and do not influence the C̃^I_μ M values. These coefficients pertain exclusively to A^I_μ M, which relates to the step vectors connecting X̅ to Y within the active space, involving Φ_μ and Φ_R. The linear dependence of the internally contracted functions Φ^X̅Y_MI is also determined by A^I_μ M.
For example, contraction functions Φ^D̅D_MI and Φ^D̅D_NI, which involve different orbitals u and v but share the same indices i and a, may be nonorthogonal. To eliminate this linear dependence, one can compute the overlap matrix S^I_MN = ∑_μν⟨ A^I_μ M | A^I_ν N⟩. Subsequently, Löwdin's symmetric orthogonalization<cit.> can be applied to this matrix to ensure orthogonality among the functions.
The generated Φ^X̅Y_MI serve as basis functions for the CI Hamiltonian matrix, e.g. Dyall Hamiltonian in Eq.(<ref>), allowing the coupling coefficients to be adapted as follows:
⟨Φ^X̅Y_MI|Ê^X̅Y_M|Φ^X̅^'Y^'_NJ⟩ =∑_J=0,1ω_J ∑_μ∈X̅Y,ν∈X̅^'Y^'C̃^I_μ M⟨Φ_μ|Ê^X̅Y_M|Φ_ν⟩C̃^J_ν N
=∑_J=0,1ω_J ELS(YY^') · HLS(X̅X̅^') ·∑_μ∈X̅Y,
ν∈X̅^'Y^'A^I_μ M· ALS(X̅Y,X̅^'Y^')_μν· A^J_ν N
=∑_J=0,1ω_J ELS(YY^') · HLS(X̅X̅^') · [𝐀^I^†·ALS·𝐀^J]_MN,
=∑_J=0,1ω_J ELS(YY^') · HLS(X̅X̅^') ·ALS^'_MN.
In this equation, ALS^' represents the assembled matrix of internally contracted partial loops, while 𝐀^I and 𝐀^J are matrices of contraction coefficients. The expression in Eq.(<ref>) differs from that in Eq.(<ref>) primarily by the linear transformation of ALS from uncontracted to internally contracted forms, mediated by A^I_μ M.
In the context of large active spaces with numerous active orbitals, internal contraction may become infeasible. Specifically, the two CI subspaces D̅V and V̅D contain a substantial number of Φ^X̅Y_MI (or Φ̅_q as discussed in subsection <ref>). This abundance leads to a computational bottleneck, necessitating the storage of A^I_μ M, where the largest data block encompasses approximately n_a^3 × D(V̅D) or n_a^3 × D(D̅V) 64-bit real numbers.
To mitigate memory consumption, we employ the FOIS method to prune Φ^D̅V_MI and Φ^V̅D_MI. For example, Φ_μ^D̅V does not contribute to Φ^D̅V_MI and can be pruned if it satisfies the condition:
max|⟨Φ_μ^D̅V|(iu|vw)ê_iu,vw|Φ_R⟩ C^I_R|^R∈V̅V_i=1,n_h<Q_min,
where Q_min is the pruning threshold. This criterion is analogous to that used for pruning generated CSFs in methods such as SBCI<cit.> and iCI<cit.>.
Through this pruning procedure, many of Φ^D̅V_MI are filtered due to their minor contraction coefficients. Similarly, V̅D can be compressed using the specified Q_min. The number of Φ_μ^D̅V or Φ_μ^V̅D can be reduced by two orders of magnitude or more, further reducing an order of magnitude of Φ^D̅V_MI or Φ^V̅D_MI. This approach effectively addresses the issue mentioned in question (4).
Furthermore, the sparse nature of 𝐀^J significantly reduces both computational cost and memory requirements, resolving the issue presented in question (5).
§ RESULTS AND DISCUSSION
The SDSPT2 algorithm described in the previous section has been implemented in a developing version of the Xi’an-CI module <cit.> within the BDF program package <cit.>. Scalar relativistic effects were considered in all calculations using the spin-free exact two-component (sf-X2C) relativistic Hamiltonian <cit.> and the core orbitals were also frozen.
The multi-reference wave functions of all molecules were computed using iCAS <cit.> and iCISCF <cit.> with the default C_min threshold of 1.0 × 10^-4. In SDSPT2 calculations, the iCISCF reference wave function is further truncated based on thresholds P_min and Q_min to reduce computational costs for computing correlation energies. The final results are computed by adding up the SDSPT2 correlation energy and the reference energy by iCISCF(2) <cit.>. In each SDSPT2 calculation, multi-state NEVPT2 (abbreviated as NEVPT2) <cit.> result is obtained simultaneously as a side product. All calculations were performed on a computer equipped with 2 Intel(R) Xeon(R) Platinum 8375C CPUs (64 cores in total) and 1 TB of memory.
§.§ Truncation parameter P_min and Q_min
In principle, the present SDSPT2 algorithm is flexible and could employ any MCSCF wave functions as reference. However, the computational costs of SDSPT2 for references with more than 10^6 CSFs are hardly manageable, whereas the iCISCF method can handle references with about 10^9 CSFs. Thus, the reference CSFs generated by iCISCF should be further truncated before SDSPT2 calculations using P_min. Additionally, in the SDSPT2 calculation, the CSFs in D̅V and V̅D subspaces are reduced by Q_min, as detailed in Section <ref>.
To determine the optimal values of the two parameters, a simplified model of heme (denoted Fe^IIL_2) <cit.> is used as
the model system, which is considered as a strongly correlated system<cit.>. The Cartesian coordinates of Fe^IIL_2 were taken from previous work by Radoń<cit.> and are provided in the supporting information. The iCISCF calculation with respect to three low-lying states, ^5A_g, ^3B_3g, and ^1A_g, is performed with an active space of CAS(14,17), which includes Fe 3d4s4d, two 4p orbitals within the molecular plane, as well as a frontier σ ligand orbital of nitrogens. The 1s of C, N, and Fe's 1s2s2p orbitals are frozen in SDSPT2 calculations. The def2-TZVP and the corresponding auxiliary basis sets are employed for the resolution of identity (RI) approximations in both iCISCF as well as SDSPT2 calculations.
The C_min parameter influences the accuracy of the iCISCF wave function. In previous work, we found that setting C_min to 1.0×10^-4 produced sufficiently accurate results and orbitals.<cit.> However, its impact on the accuracy of subsequent MRPT2 calculations has not been investigated. Therefore, before examining the effects of P_min and Q_min, we first test the influence of C_min on SDSPT2, using the settings P_min=C_min and Q_min=0.0. The SDSPT2 energies computed using references with different C_min values are provided in the SI. The results indicate that the energy changes for the selected three electronic states are only 0.1 kcal/mol when C_min≤ 10^-4. Hence, for the rest of the work, a C_min setting of 10^-4 is used in all calculations unless otherwise specified.
For calculations involving large active spaces, the number of CSFs generated by iCISCF can be too large to perform SDSPT2 calculations. Therefore, the size of the reference used in SDSPT2 is further reduced based on the magnitude of the CI coefficients of the iCISCF wave function. This refinement process utilizes the parameter P_min. Comparisons of both absolute and relative energies for various low-lying excited states of the Fe^IIL_2 complex, calculated employing different P_min thresholds, are provided in Table <ref>. By adopting P_min=10^-3, the resultant absolute energy deviations compared to the calculations with P_min=10^-4 are less than 10^-3 Hartree for all states. The deviations in excitation energies are less than 0.3 kcal/mol. However, the numbers of reference CSFs in SDSPT2 calculations are reduced from 7204 to 2727 for the ^5A_g state.
It is well known that in internally contracted MRPT2 or MRCI calculations with large active space reference, the D̅V and V̅D subspaces pose computational bottlenecks. As discussed in Section <ref>, the truncation parameter Q_min could be introduced to reduce the number of functions in FOIS. The results for Fe^IIL_2 using P_min=10^-3 and different Q_min values are given in Table S4. These results indicate that by setting Q_min=10^-5, there is almost no loss in accuracy. The number of CSFs in Φ^V̅D_q is reduced by an order of magnitude. Thus, to balance computational costs and accuracy, the settings of P_min=10^-3 and Q_min=10^-5 are used throughout the rest of the work.
SDSPT2 energies (+1727.0 E_h) of Fe^IIL_2 with iCISCF(14,17) providing optimized orbitals and reference configurations and
Q_min=0.0.
=5pt
t]@ccccccccccccc@
P_min number of Φ_R of ^5A_g ^5A_g ^3B_3g ^1A_g E(^5A_g → ^3B_3g) (kcal/mol)
E(^5A_g→ ^1A_g) (kcal/mol)
1.0×10^-3 2727 -0.506706 -0.500335 -0.445045 4.0 38.7
5.0×10^-4 5051 -0.505963 -0.499160 -0.444192 4.3 38.8
1.0×10^-4 7204 -0.505751 -0.499040 -0.444233 4.2 38.6
SDSPT2 energies (+1727.0 E_h) of Fe^IIL_2 with iCISCF(14,17) providing optimized orbitals and reference configurations and
P_min=10^-3.
=5pt
t]@ccccccccccccc@
Q_min number of Φ^V̅D_q of ^5A_g ^5A_g ^3B_3g ^1A_g E(^5A_g → ^3B_3g) (kcal/mol) E(^5A_g→ ^1A_g) (kcal/mol)
1.0×10^-3 58800 -0.502386 -0.492765 -0.438914 6.0 39.8
1.0×10^-4 71330 -0.505569 -0.498354 -0.443763 4.5 38.8
1.0×10^-5 214000 -0.506566 -0.500099 -0.444933 4.1 38.7
0.0 1746717 -0.506706 -0.500335 -0.445045 4.0 38.7
The energy gaps of Δ E(^5A_g→^3B_3g) and Δ E(^5A_g→^1A_g) of Fe^IIL_2 computed by SDSPT2, using these default thresholds, are 4.1 and 38.7 kcal/mol, respectively. These results are in good agreement with the 3.8 and 40.7 kcal/mol obtained from CCSD calculations by Radoń<cit.>.
§.§ A test on computing resources
To analyze the computational costs associated with selection-based SDSPT2 relative to CAS-based SDSPT2, we conducted SDSPT2 calculations for the ground state of anthracene, both with and without the selection of CAS(14,14). Table <ref> presents the CPU time (in minute) for the three most time-consuming components of SDSPT2, corresponding to different values of P_min. The data reveals that constructing the Q space using CAS as a reference is highly efficient, yet the time cost escalates as P_min decreases for SDSPT2 with selection. Moreover, the time cost ratios for constructing QH_0Q and P_mHQ as defined in Eq.(<ref>) align closely with the ratios of the numbers of Φ_q and Φ_R, respectively.
It becomes evident that the primary computational task in selection-based SDSPT2 is the construction of QH_0Q, which requires approximately an order of magnitude more resources than P_mHQ. Conversely, in the non-selective SDSPT2 approach, P_mHQ emerges as the most resource-intensive component, especially when the number of Φ_R surges from P_min=1.0×10^-3 to P_min=0.0. The peak memory consumption occurs during the storage of C̃^I_μ M, as depicted in Eq.(<ref>). Notably, employing a reference selection with P_min=1.0×10^-3 reduces the memory usage to roughly one-tenth compared to using CAS as a reference. In conclusion, the selection-based SDSPT2 markedly diminishes the demand for computational resources, making it an advisable choice for multi-reference calculations involving large active spaces.
Number of Φ_R and Φ_q, CPU time (in minute) and memory Usage (in GBtype) for the SDSPT2 calculations of the ground state of anthracene with and without selection of CAS(14,14).
=5pt
t]@ccccccccccccc@
P_min number of Φ_R number of Φ_q time for Q space time for QH_0Q time for P_mHQ maximal used memory
1.0×10^-3 9638 3428202 4 46 4 1.1
1.0×10^-4 43826 14848192 50 273 28 2.3
0.0 2760615 67113618 0 967 1429 11.2
§.§ Size Consistency
As a CI-like MRPT2 method, SDSPT2 inherently lacks size consistency. However, the size-consistency errors of SDSPT2 when using CAS as a reference can be effectively corrected through the Pople correction<cit.>, as previously validated by our research<cit.>. In this study, we also examined the errors associated with selection-based SDSPT2, by analyzing the energies of low-lying singlet and triplet states of anthracene...Rg complexes (where Rg represents rare gas atoms such as He, Ne, Ar, Kr) relative to isolated anthracene.
For these complexes, the rare gas atom was positioned 100 Å away from the center of anthracene, with the geometry of anthracene obtained from <cit.>. We selected CAS(14,14) that includes all π and π^* orbitals of anthracene and utilized SA3-iCISCF(14,14)/cc-pVDZ to optimize the orbitals and generate Φ_R.
Table <ref> demonstrates that the size-consistency errors in SDSPT2 applying the Pople correction remain within an acceptable range for both the ground and excited states of anthracene...Rg complexes, with the maximum error recorded being only 0.10 eV. It is also noteworthy that NEVPT2, when tested with selected reference CSFs from a large active space, exhibited fundamental size consistency, as evidenced in Table <ref>.
To assess the accuracy of selection-based SDSPT2 (iCISCF/SDSPT2), we additionally performed calculations using CAS-based SDSPT2 (CASSCF/SDSPT2) with C_min=P_min=Q_min=0.0, which is expected to yield more precise results. Table <ref> reveals that the size-consistency errors for iCISCF/SDSPT2 and CASSCF/SDSPT2, as well as for the analogous iCISCF/NEVPT2 and CASSCF/NEVPT2 computations, are nearly identical, suggesting consistent trends across different methods and parameter settings.
Size-consistency errors (in meV) of SA3-iCISCF(14,14), NEVPT2 and SDSPT2 and basis set of cc-pVDZ for the low-lying singlets and triplets energies of anthracene⋯Rg (Rg = He, Ne, Ar, Kr; 100 Å away from the center of anthracene) with respect to anthracene.
=10pt
t]@ccccccccccccc@
Species iCISCF iCISCF/NEVPT2 CASSCF/NEVPT2 iCISCF/SDSPT2a CASSCF/SDSPT2a
Singlet
anthracene⋯He 1^1A_1 0.0 0.1 0.0 1.9 1.8
anthracene⋯He 2^1A_1 0.0 2.5 2.6 4.0 4.0
anthracene⋯He 3^1A_1 0.0 2.4 1.9 4.4 3.8
anthracene⋯Ne 1^1A_1 0.0 0.3 0.0 -2.0 -2.3
anthracene⋯Ne 2^1A_1 0.0 1.9 2.6 -1.7 -1.5
anthracene⋯Ne 3^1A_1 0.0 2.7 1.9 1.8 1.0
anthracene⋯Ar 1^1A_1 0.0 0.2 0.0 30.2 29.7
anthracene⋯Ar 2^1A_1 0.0 2.4 2.7 31.4 30.7
anthracene⋯Ar 3^1A_1 0.0 2.3 2.7 33.8 33.5
anthracene⋯Kr 1^1A_1 0.0 0.3 0.0 95.6 94.6
anthracene⋯Kr 2^1A_1 0.0 2.9 2.5 97.2 94.9
anthracene⋯Kr 3^1A_1 0.0 2.7 1.7 100.3 97.2
Triplet
anthracene⋯He 1^3A_1 0.2 -5.3 1.3 -3.1 3.0
anthracene⋯He 2^3A_1 0.1 0.0 2.1 1.7 3.8
anthracene⋯He 3^3A_1 0.0 6.2 2.0 7.7 3.6
anthracene⋯Ne 1^3A_1 0.0 1.3 1.3 -1.1 -1.4
anthracene⋯Ne 2^3A_1 0.0 6.2 2.1 3.8 -0.3
anthracene⋯Ne 3^3A_1 0.0 8.0 2.1 5.5 -0.4
anthracene⋯Ar 1^3A_1 0.2 -6.4 1.2 24.3 30.6
anthracene⋯Ar 2^3A_1 0.1 -0.1 2.1 30.3 31.7
anthracene⋯Ar 3^3A_1 0.0 8.8 2.1 38.0 30.8
anthracene⋯Kr 1^3A_1 0.3 -12.6 1.3 84.4 95.5
anthracene⋯Kr 2^3A_1 0.1 -5.6 2.1 90.7 96.4
anthracene⋯Kr 3^3A_1 0.0 7.6 2.2 101.5 94.5
a SDSPT2 with Pople correction.
§.§ The Potential Energy Curve of Cr_2
The chromium dimer, Cr_2, has been a challenging system in quantum chemistry for decades.<cit.> Early on, CAS(12,12) was one of the most commonly used references for MRPT2 or MRCI. However, various theoretical results from MRPT2 and MRCISD suggested that the minimal CAS(12,12), consisting of 3d and 4s orbitals of Cr atom, is insufficient for a quantitative description of the Cr_2 potential energy curve (PEC).
With the development of MR methods, larger active spaces such as CAS(12,22)<cit.> (3d4s4d), CAS(12,28)<cit.> (3d4s4d4p), and even CAS(12,42)<cit.> (3d4s4d4p4f) have been utilized as references to perform subsequent dynamic correlation energy calculations at the MRPT2 and MRCISD levels. Most recently, Chan et al. reported a PEC that is in quantitative agreement with experimental data.<cit.>
In the present work, the CAS(12,22) active space suggested by Chan and coworkers is adopted in our NEVPT2 and SDSPT2 calculations using cc-pwCV5Z-DK basis sets. The NEVPT2 and SDSPT2 results are depicted in Fig. <ref>, alongside the CASPT2 results with a CAS(12,28) reference<cit.> and NEVPT2 results with a CAS(12,22) reference<cit.>. Additionally, the theoretical best estimation (TBE) and experimental curves reported by Chan et al. are presented for comparison<cit.>. Our results show that both NEVPT2 and SDSPT2 methods slightly underestimate the binding energy compared to experimental data, yet offer better accuracy than the DMRG-NEVPT2 results using the same active space<cit.>. Notably, in the bond length region from 2.0 to 2.4 Å, our SDSPT2 and NEVPT2 PECs are more consistent with the latest experimental and TBE results<cit.>, whereas a plateau is observed in the DMRG-NEVPT2 PEC<cit.>.
The absolute energies, and the reference information, are detailed in Table S5. The number of reference CSFs used in the SDSPT2 calculations exhibits fluctuation. However, the total weights of reference determined by P_min=10^-3, account for no less than 98.9% of those by iCISCF(12,22) across all points on the PEC. The size-consistent errors at both NEVPT2 and SDSPT2 levels are less than 0.5 kcal/mol for Cr_2.
The PECs obtained from NEVPT2 and SDSPT2 calculations are sufficiently smooth to enable the fitting of spectroscopic constants. The spectroscopic constants of Cr_2 fitted at the iCISCF(12,22)-NEVPT2 and -SDSPT2 levels are presented in Table <ref>. The binding energies predicted by NEVPT2 and SDSPT2 are 1.486 and 1.449 eV, respectively, which are close to the earlier experimental value of 1.47(5) eV obtained from negative ion photoelectron spectroscopy measurements by Casey and Leopold<cit.>. The bond length and vibrational frequency by our methods are also in good agreement with experimental values<cit.>. The spectroscopic constants produced by relevant MR methods with large active spaces are also listed in Table <ref>. The results from DMRG-SC-NEVPT2 and DMRG-CASPT2 yield similar equilibrium bond lengths to those from SDSPT2, whereas the value from DMRG-ec-MRCISD+Q is slightly longer.
Spectroscopic constants of Cr_2 by various MR methods.
=20pt
t]@ccccccccccccc@
R_e(Å) D_e(eV) ω _e(cm^-1)
DMRG(12,28)-CASPT2/cc-pwCV5Zd 1.681 1.610 470
DMRG(12,22)-SC-NEVPT2/CBSe 1.655 1.435 469
DMRG(12,42)-ec-MRCISD+Q/ANO-RCC-VQZPf 1.71 1.62 479
iCISCF(12,22)-NEVPT2/cc-pwCV5Z-DKi 1.658 1.486 469
iCISCF(12,22)-SDSPT2/cc-pwCV5Z-DKi 1.659 1.449 462
TBEh 1.685 1.58±0.02 495
Experiment 1.679a 1.47(5)b, 1.56±0.06c 481b
^aExp.(1983)<cit.>, ^bExp.(1993)<cit.>,
^cExp.(1998)<cit.>, ^dTheo.(2011)<cit.>, ^eTheo.(2016)<cit.>, ^fTheo.(2018)<cit.>,
^gTheo.(2020)<cit.>, ^hTheo.(2022)<cit.>, ^iTheoretical computations in this work.
§.§ [Cu_2O_2]^2+ core
The [Cu_2O_2]^2+ complex, a binuclear copper activating O_2 model system, is another popular model for validating the performance of strongly correlated methods. The isomerization energy between the bis(μ-oxo) (F = 0.0) and μ-η^2:η^2 peroxo (F = 1.0) configurations has been extensively studied by various approaches<cit.>. Without sufficiently large active spaces, such as CAS(16,14)<cit.>, CASPT2 might predict an incorrect geometric minimum. Gagliardi and colleagues<cit.> emphasized the importance of including the double-shell effect of copper for producing quantitative correct isomerization energies, which necessitates active spaces comprising 28-32 orbitals.
To compare with results from Chan's DMRG-SC-CTSD<cit.> and Sharma's SC-NEVPT2(s)<cit.>, the same active space CAS(28,32), comprising Cu 3d4d and O 2p3p orbitals, was employed with ANO-RCC-VQZP basis sets. The relative energies of different [Cu_2O_2]^2+ structures, as predicted by iCISCF(28,32) followed by NEVPT2 and SDSPT2 calculations, are presented in Table <ref>. The iCISCF(28,32) results show a shallow well at F = 0.8, indicating that calculating static correlation is insufficient. The NEVPT2 and SDSPT2 calculations yield nearly identical relative energies with respect to the μ-η^2:η^2 peroxo isomer results. The relative energies from NEVPT2 and SDSPT2 closely align with those obtained using SC-NEVPT2(s), the deviations of relative energies do not exceed 4.0 kcal/mol across all examined points. Notably, our SDSPT2 method yields larger relative energies from F = 0.4 to F=0.8 structures compared to those from DMRG-SC-CTSD and SC-NEVPT2(s).
Most recently, Ma and coworkers investigated the isomerization energy between bis(μ-oxo) and μ-η^2:η^2 peroxo isomers as well<cit.>. Utilizing renormalized-residue-based fic-MRCISD+Q with a CAS(24,24) active space, they predict the isomerization energy to be 37.44 kcal/mol. Nonetheless, with sufficiently large active spaces, all these MR methods predict similar isomerization energies between F=0.0 and F=1.0, ranging from 37.4 to 41.3 kcal/mol.
[Cu_2O_2]^2+ energies (in kcal/mol) from various MR methods, including iCISCF, NEVPT2 and SDSPT2 with CAS(28,32) active space. Energies are relative to the μ-η^2:η^2 peroxo isomer (F = 1.0).
=20pt
t]@ccccccccccccc@
Method F=0.0 F=0.2 F=0.4 F=0.6 F=0.8 F=1.0
iCISCF 18.5 11.1 5.6 1.1 -1.3 0.0
NEVPT2 40.9 35.4 30.5 23.9 13.1 0.0
SDSPT2 40.6 35.1 30.2 23.6 12.7 0.0
SC-NEVPT2(s)a 41.3(8) 33.5(9) 26.3(9) 19.9(8) 9.3(9) 0.0
DMRG-SC-CTSDb 37.4 29.0 22.0 14.4 6.1 0.0
a Ref..
b Ref..
§.§ Cobalt Tropocoronand Complex
To further validate the performance of the newly developed SDSPT2, low-lying states of the cobalt tropocoronand complex, [Co(TC-3,3)(NO)], were studied (as depicted in Fig. <ref>). Both experimental and theoretical evidence has confirmed that the ground state of [Co(TC-3,3)(NO)] is a diamagnetic singlet<cit.>. However, the triplet state assignments remain ambiguous. Ghosh and coworkers, utilizing DFT, characterized T_1 as a low-spin S = 1/2 cobalt(II) ferromagnetically coupled to a NO radical, and T_2 as a high-spin S = 2 cobalt(III) antiferromagnetically coupled to an S=1 NO^- anion. In contrast, recent studies by Freitag et al. employing DMRG-SC-NEVPT2 with a CAS(22,22) reference<cit.> elucidated that T_1 features S = 3/2 cobalt(II) antiferromagnetically coupled to a NO radical, whereas T_2 exhibits S = 1/2 cobalt(II) ferromagnetically coupled to an S=1/2 NO radical. Upon reviewing and repeating the DFT calculations of Ghosh et al., we found that, for the T_2 state at the B3LYP level, the Mulliken spin population on cobalt is 2.636, and the spin population of the CoN_4 center is 2.96. More importantly, as Ghosh et al. reported, the NO bears a spin population of -1.092<cit.>. As a matter of fact, our broken symmetry calculation for the T_1 state started from a quintet high-spin state instead of a septet state. Thus, the T_2 state calculated by B3LYP may be better described as a S = 3/2 Co(TC-3,3) complex antiferromagnetically coupled to a NO radical, which is the T_1 state produced by DMRG-SC-NEVPT2. The results reported by Ghosh et al.<cit.> imply that the broken symmetry B3LYP method might incorrectly predict the ordering of the two triplet states, whereas the excitation energies calculated using pure functionals PW91 and OLYP align with the results of DMRG-SC-NEVPT2.
The electronic structure of [Co(TC-3,3)(NO)] was revisited using the SDSPT2 method in this work, employing the adiabatic molecular geometries<cit.> and the CAS(22,22) active space utilized by DMRG-SC-NEVPT2 computations<cit.>. The def2-TZVP basis set was utilized
alongside the corresponding auxiliary basis for the RI approximation. The initial guess for the iCISCF active space was automatically generated by iCAS<cit.>, including five Co 3d orbitals, a σ ligand MO (denoted as σ_3d_xy), eight π, π^∗ MOs of TC-3,3, four π, π^∗ MOs of NO, and four Co 4d MOs. The converged active MOs for the S_0, T_1, and T_2 states are displayed in Fig. S1-S3, respetively. The occupation numbers for the Co 3d MOs and two π^∗ MOs of NO in the active space are presented in Fig. <ref>. For comparison, the occupation numbers reported by Freitag et al. at the DMRG-SCF level are also provided<cit.>. The occupation numbers predicted by DMRG-SCF and iCISCF for both the S_0 and T_2 states are in good agreement. However, there are slight differences for the occupation numbers of 3d_xy and π^∗_NO,x in the T_1 state, leading to different excitation energies of T_1 reference.
In comparison to the iCISCF outcomes, the T_1 excitation energy is notably overestimated by DMRG-SCF, as depicted in Table <ref>. This discrepancy might stem from issues encountered during the iterative diagonalization process inherent to DMRG. To validate this hypothesis, we reassessed the iCISCF(22,22) with a focus on two roots, optimizing solely the second root. The adiabatic excitation energy for the second root, when active-active rotations were either included or not, was determined to be 38.0 kcal/mol and 38.8 kcal/mol, respectively. These values align closely with the 38.6 kcal/mol obtained from DMRG-SCF computations. Such consistency suggests that the initial root, possessing an excitation energy of 20.7 kcal/mol, was inadvertently omitted in the preceding DMRG-SCF study.
The excitation energies for the T_1 and T_2 states, as determined using iCISCF, NEVPT2, and SDSPT2 methods, are summarized in Table <ref>. Notably, the energy gap S_0 → T_2 computed via SDSPT2 (iCISCF) concurs with the findings from DMRG-SC-NEVPT2 (DMRG-SCF). However, due to disparities in the reference wave functions for T_1, the excitation energy of T_1 derived from SDSPT2 is 19.9 kcal/mol lower than that from DMRG-SC-NEVPT2, which reported degenerate excitation energies of 35.0 and 36.1 kcal/mol for T_1 and T_2, respectively. To reexamine the excitation energies from DMRG-SC-NEVPT2, additional NEVPT2 and SDSPT2 calculations were conducted based on the aforementioned second root of T_1. Both methods yielded an excitation energy of 39.6 kcal/mol, closely matching the 40.1 (40.3) kcal/mol value for T_2, thereby elucidating the degeneracy observed in prior work. For comparative analysis, DFT results by Ghosh et al. are also included<cit.>. It is acknowledged that the S-T gap in metal complexes cannot be accurately predicted by the B3LYP (PW91) functional due to a inherent bias towards triplet (singlet) states<cit.>. Conversely, S-T gaps estimated using the OLYP functional tend to be more reliable<cit.>. The excitation energies for T_1 and T_2 obtained with the OLYP functional align qualitatively with those predicted by SDSPT2 and NEVPT2.
The excitation energies of two triplet states (in kcal/mol) of [Co(TC-3,3)(NO)] calculated by various methods.
=10pt
t]@ccccccccccccc@
State iCISCF SDSPT2 NEVPT2 DMRG-SCFa DMRG-SC-NEVPT2a PW91b OLYPb B3LYP-D3b
T_1 20.7 19.9 19.7 38.6 35.0 24.4 18.0 15.0
T_2 29.7 40.1 40.3 29.6 36.1 25.1 23.8 10.4
a From ref.. The excitation energies were obtained from DMRG-SC-NEVPT2 (DMRG-SCF) with m=512.
b re-assigned states at the DFT level are from ref..
§ CONCLUSIONS AND OUTLOOK
In this study, we have proposed an efficient selection based SDSPT2 method,
thereby extending its applicability to molecular systems characterized by large active space references.
This innovative SDSPT2 approach utilizes molecular orbitals and reference configurations obtained from iCISCF calculations, offering the capability to concurrently and accurately compute both static and dynamic correlations.
A pivotal challenge addressed in this work concerned the construction of the Q using HPS-GUGA.
We accomplished this efficiently by integrating advanced computer storage technologies with distinct processing techniques for different excitation subspaces.
Furthermore, we tackled the issue of dimensionality reduction in the two expansive Q subspaces of D̅V (or S_i^(1))
and V̅D (or S_a^(-1)), as well as minimizing the number of required internal contraction coefficients.
These hurdles were surmounted by amalgamating the FOIS method with sparse matrix storage technique.
Utilizing these strategies, our novel SDSPT2 method has exhibited impressive computational efficiency without compromising on calculational precision, as evidenced by tests on both model and real systems. This method has been effectively applied to molecular systems characterized by large active spaces and a high number of atoms. Notably, the complex Cr_2 system was successfully computed using SDSPT2 in conjunction with iCISCF(12,22) at the cc-pwCV5Z-DK level, yielding accurate potential energy curves and spectroscopic constants. Furthermore, SDSPT2, when paired with the substantial active space CAS(28,32), has been adeptly employed to determine the energy differences across various geometries of the Cu_2O^2+_2 core. Additionally, SDSPT2 with CAS(22,22) has accurately determined the correct energy ordering of three low-lying electronic states for the [Co(TC-3,3)(NO)] transition metal complex, which lacks spatial symmetry. Looking ahead, we aim to integrate the HPS-GUGA method based on configuration selection within other MRCI approaches, such as SDSCI and ic-MRCISD, to further enhance the computational accuracy for complex molecular systems.
§ ACKNOWLEDGEMENT
This work was supported by
the National Natural Science Foundation of China (Grant Nos. 21833001, 21973054, 22273071 and 22273052) and
Mount Tai Climbing Program of Shandong Province.
§ DATA AVAILABILITY STATEMENT
|
http://arxiv.org/abs/2409.03403v1 | 20240905103915 | RoVi-Aug: Robot and Viewpoint Augmentation for Cross-Embodiment Robot Learning | [
"Lawrence Yunliang Chen",
"Chenfeng Xu",
"Karthik Dharmarajan",
"Zubair Irshad",
"Richard Cheng",
"Kurt Keutzer",
"Masayoshi Tomizuka",
"Quan Vuong",
"Ken Goldberg"
] | cs.RO | [
"cs.RO"
] |
^*Equal Contribution
^1UC Berkeley
^2Toyota Research Institute
^3Physical Intelligence
Tight minimum degree condition to guarantee C_2k+1-free graphs to be r-partite
Zilong Yan [School of Mathematics, Hunan University,
Changsha 410082, P. R. China. E-mail: [email protected].] Yuejian Peng [Corresponding author. School of Mathematics, Hunan University, Changsha 410082, P. R. China. E-mail: [email protected]. Supported in part by National Natural Science Foundation of China (No. 11931002)] Xiaoli Yuan [School of Mathematics, Hunan University,
Changsha 410082, P. R. China. E-mail: [email protected].]
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
§ ABSTRACT
Scaling up robot learning requires large and diverse datasets, and how to efficiently reuse collected data and transfer policies to new embodiments remains an open question.
Emerging research such as the Open-X Embodiment (OXE) project has shown promise in leveraging skills by combining datasets including different robots. However, imbalances in the distribution of robot types and camera angles in many datasets make policies prone to overfit.
To mitigate this issue, we propose , which leverages state-of-the-art image-to-image generative models to augment robot data by synthesizing demonstrations with different robots and camera views. Through extensive physical experiments, we show that, by training on robot- and viewpoint-augmented data, can zero-shot deploy on an unseen robot with significantly different camera angles. Compared to test-time adaptation algorithms such as Mirage, requires no extra processing at test time, does not assume known camera angles, and allows policy fine-tuning. Moreover, by co-training on both the original and augmented robot datasets, can learn multi-robot and multi-task policies, enabling more efficient transfer between robots and skills and improving success rates by up to 30%.
§ INTRODUCTION
Emerging research in robot learning suggests that scaling up data can help learned policies be more generalizable and robust <cit.>. However, compared to state-of-the-art foundation models <cit.> in computer vision (CV) <cit.> and natural language processing (NLP), the size of robotic data is still several orders of magnitude smaller than those used to train large language and multi-modal models <cit.>. Collecting real robot data is time-consuming <cit.> and labor intensive <cit.>, and ensuring data diversity for generalizable policies requires careful balance <cit.>.
Can we more effectively leverage currently available real robot data?
In an unprecedented community effort, the Open-X Embodiment (OXE) project <cit.> combines 60 robot datasets and finds that co-training can exhibit positive transfer and improves the capabilities of multiple robots by leveraging experience from each other. However, the OXE dataset is highly unbalanced, dominated by a few robot types such as Franka and xArm. Additionally, most datasets have a limited diversity of camera poses. Policies trained on such data tend to overfit to those robot types and viewpoints and need fine-tuning when deploying on other robots or at even slightly different camera angles. To mitigate this issue, a test-time adaptation algorithm, Mirage <cit.>, uses “cross-painting” to transform an unseen target robot into the source robot seen during training, to create an illusion as if the source robot is performing the task at test time. While Mirage can achieve zero-shot transfer on unseen target robots, it has a few limitations: (1) It requires precise robot models and camera matrices; (2) It does not allow policy finetuning; (3) It is limited to small camera pose changes due to depth reprojection error.
In this work, we seek to bridge these limitations. Rather than naively co-training on combined data from multiple robots, we aim to more explicitly encourage the model to learn the cross-product of the robots and skills contained in each dataset. We aim to improve the robustness and generalizability of the policy to different robot visuals and camera poses during training instead of relying on an accurate test-time cross-painting pipeline. We propose , a robot and viewpoint augmentation pipeline that synthetically generates images with different robot types and camera poses using diffusion models. Through extensive real-world experiments, we show that, by training on robot- and viewpoint-augmented data, can zero-shot control different robots with significantly different camera poses compared to the poses seen during training. In contrast to Mirage, does not assume known camera matrices and allows policy fine-tuning to increase performance on challenging tasks. Furthermore, by co-training on original and augmented robot datasets, can learn multi-robot and multi-task policies and improve finetuning sample efficiency.
This paper makes 3 contributions:
* , a novel approach to robot data augmentation that uses diffusion models to generate trajectories with novel robots and viewpoints;
* Physical experiments with Franka and UR5 suggesting that robot augmentation enables zero-shot deployment on target robots and viewpoint augmentation improves the robustness of policies to camera pose changes. When combined, they yield policies that work for target robots at camera poses significantly different from those in the initial demonstration data;
* Experiments suggesting that can learn multi-robot multi-task policies and improve the finetuning sample efficiency of a generalist policy on novel robot-task combinations.
§ RELATED WORK
§.§ Cross-Embodiment Robot Learning
Recognizing the high cost of collecting real robot data, many prior works have studied using other data sources, such as simulation <cit.>, other robot data <cit.>, and human or animal videos <cit.>, to increase sample efficiency and accelerate learning <cit.>. In a transfer learning setting, one can first pretrain a visual encoder <cit.>, dynamics model <cit.>, or policy <cit.> and then perform online finetuning using reinforcement learning. In a cross-domain imitation paradigm, methods often involve learning correspondences between the source and target domains <cit.>, and then constructing auxiliary rewards <cit.> or applying adversarial training <cit.>. <cit.> use meta-learning to enable a new robot to quickly learn from few-shot trajectories at test time.
Cross-embodiment learning could also be used to learn more robust and generalizable policies through joint training in a multi-robot multi-task fashion. For example, by training on a family of robots with varying kinematics and dynamics in simulation, robot-conditioned policies <cit.> are robust to novel morphologies within the range of training distribution, and modular policies <cit.> can be more transferrable to different robots and tasks. More recently, many works have also explored training on large and diverse real robot data <cit.> to learn visual representations <cit.> and predictive world models <cit.> and showed that policies trained are more generalizable to new objects, scenes, tasks, and embodiments <cit.>.
In this work, we build on these insights and propose to more explicitly encourage positive transfer between robots and skills by performing data augmentation.
Our method is inspired by Mirage <cit.>, a recent test-time adaptation algorithm that uses “cross-painting” to achieve cross-embodiment policy transfer by replacing the target robot in the image with a source robot seen during training. While Mirage avoids modifying the source robot policy and enables zero-shot transfer, it has several limitations, such as requiring a fast renderer, precise robot models, and accurate camera calibration. We address these issues by using training time data augmentation with diffusion models trained on randomized robot poses and camera angles, eliminating the need for camera matrix knowledge. Our approach additionally allows zero-shot deployment as well as finetuning or cotraining on additional data to improve the performance and learn multi-robot multi-skill policies that are robust to significant camera angle changes.
§.§ Generative Models and Data Augmentation in Robotics
With the significant progress in generative models including large language and multi-modal models <cit.> and diffusion models <cit.> trained on Internet-scale data, there is a growing interest in leveraging these models for robotics. For example, prior work has explored using language models for planning <cit.>, control <cit.>, reward specification <cit.>, and data relabeling <cit.>. Image and video generation models have been used for generative simulation <cit.>, data augmentation <cit.> and visual goal planning <cit.>. Our method falls into the data augmentation category. However, unlike prior work that generates distractor objects, backgrounds, and new tasks <cit.>, we use diffusion models to generate alternative robots and camera viewpoints. As such, enables trained policies to generalize to different robots with different camera setups.
§.§ Viewpoint Adaptation and Viewpoint Robust Policy
Visuomotor control policies that take in images as inputs tend to overfit to the camera angle in the training data, and even small changes between training and testing could severely hurt performance <cit.>. While using 3D representations <cit.> alleviates the problem, it requires a calibrated depth camera or multiple views <cit.>, and is more computationally expensive. For mobile robots, <cit.> extract a 3D point cloud from the training data and performs re-rendering, and Ex-DoF <cit.> applies virtual rotation of the robot's 360^∘ camera to augment training data. To improve viewpoint robustness of image-based policies, <cit.> use a recurrent neural network to understand how actions affect arm movement through history. <cit.> use many simulated viewpoints to learn a visual representation, whose downstream policy exhibits viewpoint robustness. Instead of pretraining in simulation with diverse rendering, we synthesize novel views of real scenes. SPARTN <cit.> and DMD <cit.> use neural radiance fields (NeRFs) and diffusion models, respectively, to generate perturbed viewpoints for wrist cameras, whereas our viewpoint augmentation applies to fixed third-person views.
§ PROBLEM STATEMENT
We assume a demonstration dataset 𝒟^𝒮 = {τ_1^𝒮, τ_2^𝒮, ..., τ_n^𝒮} consisting of n successful trajectories of a source robot 𝒮 performing some task. Each trajectory τ_i^𝒮 = ({o_1..H_i^𝒮}, {p_1..H_i^𝒮}, {a_1..H_i^𝒮}), where {o_1^𝒮, ..., o_H_i^𝒮} is a sequence of RGB camera observations, {p_1^𝒮, ..., p_H_i^𝒮} is the sequence of corresponding gripper poses, and {a_1^𝒮, ..., a_H_i^𝒮} is the sequence of corresponding robot actions. This dataset can be used to train models with behavior cloning for robot 𝒮.
Our goal is to augment 𝒟^𝒮 into 𝒟^Aug such that we can learn a policy that can be successfully deployed on a different robot 𝒯, known as the target robot, with a potentially different camera viewpoint. In this work, we focus on robot arms mounted on a stationary base and assume the grippers are similar in shape and function.
Similar to prior work <cit.>, we use Cartesian control and assume knowledge of the two robots' end effector coordinate frames with respect to their bases (e.g., moving forward corresponds to an increase in the x-axis) such that we can use a rigid transformation T_𝒯^𝒮 to preprocess the data and align the robots' end effector poses p^𝒮 = T_𝒯^𝒮 p^𝒯 and actions a^𝒮 = T_𝒯^𝒮 a^𝒯 into the same vector space. Thus, for notational convenience, we omit the superscript differentiating gripper poses and actions between the robots. However, the image observations o^𝒮 and p^𝒯 cannot be easily aligned since the robots may look very different. We do not assume knowledge of the camera matrices in either setup.
After augmentation, we learn a policy π(a_t | o_t^𝒯, p_t) on 𝒟^Aug using imitation learning. At test time, it takes as inputs the observations from the target robot and outputs actions that can be deployed on the target robot. Additionally, by co-training on the original data 𝒟^𝒮 as well as 𝒟^Aug, we can also obtain a multi-robot policy.
§
In this section, we describe , an automated pipeline for augmenting and scaling up robot data. Our key insight is that the robot's actions should be invariant to its visual appearances and camera viewpoints.
Our robot augmentation pipeline leverages state-of-the-art diffusion models <cit.> to synthesize alternative robots and novel viewpoints. Fig. <ref> illustrates pipeline.
§.§ Robot Augmentation (Ro-Aug)
Given a sequence of robot image observations D_i^𝒮 = {o_1^𝒮, ..., o_H_i^𝒮}, we seek to transform the robot 𝒮 in the images into a different robot 𝒯 at the same gripper pose, a process known as cross-painting. While Mirage <cit.> proposes to perform cross-painting using a renderer to compute source robot masks and target robot visuals, it requires precise camera calibration which is unavailable for most open-source datasets. To relax this assumption, we approach cross-painting as an image-to-image translation problem. begins by predicting semantic mask on the robot 𝒮, which are then extracted and transformed into robot 𝒯 using a robot-to-robot (R2R) diffusion model. Meanwhile, the masked regions in the original images are inpainted using a video inpainting network to ensure visual continuity and integrity. Finally, the generated robot 𝒯 is pasted back into the background image (see Fig. <ref>).
Robot Segmentation.
In order to replace robot 𝒮 with robot 𝒯 in the image, we first need to detect the robot using semantic segmentation <cit.>. We find that off-the-shelf segmentation models <cit.> often fail to accurately segment out the robot, potentially due to the fact that robot images are under-represented in their training data.
As such, we finetune a pretrained Segment Anything Model (SAM) <cit.> using Low-Rank Adaptation (LoRA) <cit.>. We use simulation to synthetically generate a large dataset of different robot images with corresponding masks, where we randomly sample a wide range of camera and robot poses. We apply brightness augmentation and resizing to simulate different lighting and fields of view. To create diverse backgrounds, we paste the generated robot parts into various background images <cit.>.
By training the LoRA layer on this synthetic dataset, we obtain a mask model capable of handling different robot and camera poses.
Robot-to-Robot (R2R) Generation
Next, we aim to transform the segmented robot 𝒮 into robot 𝒯. We use an image-to-image diffusion model.
Similar to semantic segmentation, training a diffusion model capable of handling various camera and robot poses requires a large dataset of paired images. As collecting paired real robot data is challenging due to the need for precise adjustments of camera and robot poses, we again use simulation to generate pairs of robots at the same randomly sampled robot poses and camera poses, with brightness and resizing augmentations. Inspired by <cit.>, we use a ControlNet <cit.> to finetune a pretrained Stable Diffusion <cit.>. Even though we train the model on simulation images, we find that it still performs well on real segmented robot images.
Robot Inpainting
Inspired by <cit.>, after segmenting out robot 𝒮 from the image, we inpaint the missing region using a video inpainting model E^2FGVI <cit.>. The final step involves pasting the generated robot 𝒯 back to the image. As the R2R diffusion model is trained on simulated robot images, there is a visual gap from the real robot, particularly with the illumination. To prevent the trained policy on the augmented data from overfitting to the synthetic robot visuals, we perform random brightness augmentation to the generated robot before pasting it. We find in our experiments that this randomization significantly helps the performance of the trained policy (Section <ref>).
At the end of the Robot Augmentation pipeline, we obtain a sequence of cross-painted observations with synthesized target robot: D_i^𝒮→𝒯 = {o_1^𝒮→𝒯, ..., o_H_i^𝒮→𝒯}.
§.§ Viewpoint Augmentation (Vi-Aug)
To increase robustness of the trained policy to camera pose changes, we propose to augment the viewpoints of the images. This is orthogonal to robot augmentation and can be applied to both D_i^𝒮 and D_i^𝒮→𝒯.
We use ZeroNVS <cit.>, a state-of-the-art 3D-aware diffusion model that can zero-shot synthesize 360^∘ view of a scene from a single image. Compared to prior methods <cit.> that are limited to segmented object with no background, ZeroNVS works with multi-object scenes with complex backgrounds.
For each image o_t ∈ D_i, we uniformly sample perturbations (R̃_t, T̃_t) ∈ SE(3) from a box range, where each component in T̃_t is bounded by an interval. We parametrize R̃_t with Euler angles and each of those three angles is uniformly sampled within an interval described in Section <ref>. This process produces a resulting image as if the camera were perturbed by the sampled transformation:
o^R̃, T̃_t = f ( o_t; R̃, T̃), where we use f to denote the camera transformation. We denote the resulting augmented data as D_i^Vi-Aug = {o_1^R_1, T_1, ..., o_H_i^R_H_i, T_H_i}.
We experiment with two strategies for sampling the perturbations: independently sampling random (R̃_t, T̃_t) for each image, or applying a consistent random transformation (R̃, T̃) across the entire trajectory in D_i.
§.§ Policy Training
After applying robot and viewpoint augmentation, we can train a policy π based on the Diffusion Policy architecture <cit.> on the augmented dataset 𝒟^𝒮→𝒯^Vi-Aug and zero-shot deploy the policy on the target robot 𝒯. For challenging tasks or when there is a large difference in the dynamics between the robots, we can also collect a small demonstration dataset 𝒟^𝒯 on the target robot directly and few-shot finetune π on 𝒟^𝒯 to further improve policy performance. Alternatively, we can co-train π on 𝒟^𝒮^Vi-Aug⋃𝒟^𝒮→𝒯^Vi-Aug to obtain a multi-robot policy.
Additionally, if we have multiple datasets with different tasks, can mix-and-match the datasets and train a multi-robot multi-task policy. For example, given data 𝒟_1^𝒮 and 𝒟_2^𝒯 with robot 𝒮 performing task 1 and robot 𝒯 performing task 2, we can train on the cross-product 𝒟_1^𝒮⋃𝒟_2^𝒯→𝒮⋃𝒟_1^𝒮→𝒯⋃𝒟_2^𝒯 and their viewpoint-augmented versions to obtain a policy that can perform both tasks on both robots. In this way, we efficiently reuse the datasets and explicitly encourage transfer between robots and skills.
§ EXPERIMENTS
§.§ Implementation Details
To train our robot segmentation and Robot-to-Robot generation models, we use the Robosuite simulator <cit.> to generate a large dataset of paired robot images with corresponding masks with randomly sampled robot poses and camera poses (see supplementary material for details). We use 4 robots: Franka, UR5, Sawyer, and Jaco, with 800k images each. We finetune a LoRA layer while keeping SAM frozen with a learning rate of 1e-4 for just one epoch to avoid the overfitting. We train a ControlNet for each robot pair based on Stable Diffusion v1.5 <cit.> with a learning rate of 1e-4 for 20k steps. During robot inpainting, we randomly sample perturbations of the value channel in the HSV space between -30 and 30.
For view augmentation sampling, T̃_x, T̃_z ∈ (-0.25 m, 0.25 m), T̃_y ∈ (-0.1 m, 0.1 m). The y (vertical) direction has a lower translation range, as we have noticed that when moving excessively along the vertical direction, ZeroNVS outputs larger, more distracting artifacts. For rotation, we sample each Euler angle between ± 0.1 radians.
§.§ Experiment Setup
We design experiments to answer the following research questions:
(1) Can robot augmentation (Ro-Aug) effectively bridge the visual gap between the robots?
(2) Can viewpoint augmentation (Vi-Aug) improve policy robustness to camera pose changes?
(3) Can policies trained with be successfully deployed zero-shot on a different robot with camera changes?
(4) Does enable multi-robot multi-task training and better facilitate transfer between robots and skills?
To answer the first three questions, we study policy transfer between a Franka and a UR5 robot on 5 tasks (Fig. <ref>): (1) Open a drawer, (2) Pick up a toy tiger from the table and put it into a bowl (Place Tiger), (3) Stack cups, (4) Sweep cloth from right to left, and (5) Transport a toy tiger between two bowls. See the Appendix for more details. For the first three tasks, we collect demonstrations on the Franka, and for the latter two, we collect demonstrations on the UR5. All demonstrations are collected via teleoperation at 15 Hz <cit.>, with 150 trajectories each. A typical trajectory consists of 75-120 timesteps (5-8 s). We use a ZED 2 camera positioned from the side for each robot. We augment the demonstration data with robot augmentation (Ro-Aug) using the other robot, viewpoint augmentation (Vi-Aug), as well as both (), train a diffusion policy, and evaluate on the other robot. All experiments are evaluated with 10 trials each.
To answer the last question, we combine demonstration data from Franka and UR5 for different tasks, perform robot augmentations, and train a multi-robot multi-task diffusion policy. We also select the Berkeley UR5 dataset <cit.> from the OXE data <cit.>, apply to generate synthetic Franka images and finetune a generalist policy, Octo <cit.>, on the augmented datasets. We additionally collect 50 demonstrations on the target robot (Franka) and further finetune Octo-Base in a language goal-conditioned format. We compare whether training Octo on the augmented data improves the finetuning sample efficiency on the downstream tasks.
§.§ Results
Table <ref> shows the effect of robot augmentation when the camera poses are the same. The policy is deployed zero-shot. We compare Ro-Aug with 2 baselines, no augmentation and Mirage, and an ablation that does not apply random brightness augmentation during the Ro-Aug pipeline. Without robot augmentation, the policy trained on the source robot only barely achieves success on the target robot. On the other hand, Ro-Aug achieves comparable zero-shot performance as Mirage.
Additionally, we see that brightness randomization helps performance, suggesting that it effectively prevents the policy from overfitting to the lighting in simulation that the R2R model is trained on.
Table <ref> shows the policies trained on Ro-Aug data can be finetuned with 5-10 demonstrations on the target robot to further improve performance. Compared to few-shot policies trained without Ro-Aug, we see that Ro-Aug improves finetuning sample efficiency and exceeds the performance of all policies in Table <ref>. In contrast, Mirage does not allow finetuning and cannot improve performance on challenging tasks such as cup stacking.
Table <ref> evaluates the effect of viewpoint augmentation. We choose the Tiger Place task on the Franka robot and study how different strategies of camera perturbation sampling affect policy robustness. We sample translations T̃_x and T̃_z between ± 0.1 m, ± 0.25 m, and ± 0.4 m, and compare consistent perturbation across trajectories or independently on each image. From Table <ref>, we see that larger variation during augmentation improves policy robustness under severe camera pose changes. However, the performance decreases under the original camera angle, potentially due to lower density of each camera pose as the sampling range increases. Additionally, inconsistent augmentation seems to slightly outperform consistent augmentation., suggesting potential benefit from more augmentation. We note that the diffusion policy takes in only 2 steps of history, so viewpoint inconsistency may not matter much. Future work can study whether inconsistent augmentation would harm policies that use a longer history. Based on the results, we choose to apply inconsistent augmentation with 25 cm perturbation range for other experiments.
[b]0.48
max width=
Franka UR5
Place Tiger 80% 70%
Transport Tiger 60% 80%
tableRobot-Skill Cross Product. We train a multi-robot multi-task diffusion policy trained on pooling the Franka Tiger Place data and UR5 Tiger Transport data as well as their versions together.
[b]0.48
max width=
2*[width=8em,trim=l]PoliciesTasks 2cOXE UR5 → Franka
(lr)2-3
Sweep Cloth Transport Tiger
Octo-Base 30% 20%
Octo-Base + 60% 40%
tableOcto finetuning from the OXE datasets with 50 in-domain demonstrations for each task. improves finetuning sample efficiency.
Table <ref> evaluates on different robots with different viewpoints. We can see that viewpoint augmentation is crucial and Mirage struggles with larger camera pose changes. In contrast, can still achieve success when the target robot viewpoint is significantly different from source robot.
To evaluate robot-skill cross-product, we combine the Tiger Place demonstration data from the Franka and Tiger Transport demonstration data from the UR5, as well as their robot-augmented UR5 and Franka versions, and train a multi-robot multi-task diffusion policy. From Table <ref>, we can see that the policy can successfully execute the two tasks on both robots. Additionally, we evaluate whether improves finetuning sample efficiency. From Table <ref>, we can see that after training Octo on the augmented OXE data, the policy has seen the synthetic target robots performing the tasks, accelerating downstream finetuning of similar tasks.
§ LIMITATIONS AND FUTURE WORK
We present , a pipeline for robot and viewpoint augmentation that bridges different robot datasets and better facilitates transfers between robots and skills. There are several limitations, which open up possibilities for future work: (1) Our robot augmentation pipeline relies on a sequence of different models so artifacts can cascade. For example, inaccuracies in the robot segmentation stage (e.g., mistakenly segmenting the object out) could lead to bad robot-to-robot generations in the second stage. See the Appendix for more details on artifacts. Additionally, instead of training an R2R diffusion model for each robot pair, future work could explore a unified model that handles multiple pairs.
(2) For viewpoint augmentation,
future work could improve the quality of novel view synthesis by finetuning the model on robotics data or using video-based models <cit.>. (3) While we mitigate viewpoint changes in this work, there are also often background changes in practice during cross-embodiment transfer. Future work could combine with prior orthogonal approaches such as object, background, and task augmentation <cit.> to further obtain more generalizable policies.
(4) We only demonstrate transfer between stationary robot arms
and do not consider very different grippers such as multi-fingered hands. We leave these extensions to future work.
This research was performed at the AUTOLab at UC Berkeley in affiliation with the Berkeley AI Research (BAIR) Lab, and the CITRIS “People and Robots” (CPAR) Initiative, and in collaboration with Google DeepMind. The authors are supported in part by donations from Google, Toyota Research Institute, and equipment grants from NVIDIA. L.Y. Chen is supported by the National Science Foundation (NSF) Graduate Research Fellowship Program under Grant No. 2146752. We thank reviewers for valuable feedback.
§ APPENDIX
In this section, we provide additional implementation details of and our physical experiments.
§.§ Algorithm Pseudocode
In this section, we provide the pseudocode for Ro-Aug and Vi-Aug.
§.§ Robot Augmentation
§.§.§ Training Data Generation
To train our robot segmentation and Robot-to-Robot generation models, we use the Robosuite simulator <cit.> to generate a large dataset of paired robot images with corresponding masks with randomly sampled robot poses and camera poses. The sampling procedure is as follows:
The robot pose is specified by the end-effector pose. The translation component is sampled uniformly with (x, y, z) ∈ [-0.25, 0.25] × [-0.25, 0.25] × [0.6, 1.3] (unit in meters). For the rotation component, we parameterize it as [inward, rightward, z_axis]. To bias the unit vector z_axis towards pointing downward, we parameterize it using spherical coordinate θ, ϕ where θ (zenith angle) is sampled from a normal distribution 𝒩(π, π / 3.5) and ϕ (azimuthal angle) is uniformly sampled between 0 and 2π.
After sampling the robot pose, we randomly sample the camera pose with the following procedure:
The position is sampled from a half hemisphere with radius r ∈ 𝒩(0.85, 0.2) and zenith angle θ∈𝒩(π/4, π / 2.2), and azimuthal angle ϕ∈Unif[-π·3.7/4, π· 3.7/4]. The viewing direction is towards the center of the hemisphere, which we offset as the gripper position. We also sample camera field of view between 40 and 70. Finally, we randomly perturb the camera pose with noises.
We randomly sample robot poses, and for each robot pose, we randomly sample 5 different camera poses. In addition to pure random sampling, we also add some camera poses and robot poses similar to those in the RT-X datasets and add perturbations. We obtain paired images between different robots and their segmentation mask from Robosuite, and we add random brightness augmentation with range [-40, 40] to the source robot images to increase the robustness of the segmentation model and R2R model to real-world lighting. In this way, we obtain about 800k images for each of the 4 robot types: Franka, UR5, Sawyer, and Jaco. See Fig. <ref> for some example images.
To create the dataset for training the segmentation model, we paste the generated robot image onto backgrounds from ImageNet <cit.>. See Fig. <ref> for some example images.
§.§.§ Model Training Details
Regarding the robot segmentation model, we fine-tune SAM with LoRA with 4 A6000 GPU for 1.5 hours. In particular, we leverage mixed-precision (8-bit and 16-bit) and the torch.compile feature to accelerate training. The model is trained with a mini-batch size of 64, a learning rate of 1e-5, and a LoRA rank of 4.
Regarding the Robot-to-Robot generation model, we finetune Stable Diffusion with ControlNet on 1 A100 GPU for 36 hours on 800K paired images. We use a learning rate of 1e-4 and a batch size of 512. During inference, we leverage the Stream Batch proposed by <cit.> to batchify the generation phase, making the generation phase achieve around 3.2 FPS.
We use ZeroNVS and the video inpainting model E2FGVI off-the-shelf without finetuning.
§.§.§ Computation Time for Data Augmentation
The advantage of over Mirage is that the primary of the compution is performed offline, not during execution time. Moreover, each model in RoVi-Aug’s pipeline can be parallelized to process batchified video frames efficiently. We measured the throughputs of each module: Robot segmentation model achieves 4.1FPS, Robot-to-Robot achieves 3.2FPS, and the video inpainting model achieves 4.6FPS. On a single A100 GPU, it takes about 4-5 hours to perform Ro-Aug on a dataset of 200 trajectories. Similarly, the throughput for ZeroNVS inference is 1.3FPS, translating to 4.2 hours of viewpoint augmentation time on a dataset of 200 trajectories.
§.§.§ Example Augmented Images
In Fig. <ref>, we show some example results of applied to the training images of the 5 tasks. The left column is the original images; the middle column is the cross-painted images using the robot augmentation pipeline; the right column shows the view augmented images applied on top of the robot augmented images. The black regions in the generated robot are due to incomplete segmentation mask (missing some regions in the generated robot) when pasting the generated robot to the original image. We can see that in general, generates diverse view angles of the target robot performing the task of interest.
§.§.§ Generation Artifacts
We observe a few different types of artifacts: 1) illumination difference, 2) inaccurate object segmentation, 3) temporal inconsistency, and 4) inaccurate robot-to-robot generation.
For 1), since there are almost always differences in the lighting conditions between the simulated images that are used to train the R2R diffusion model and that of the test robots which are unknown a priori, we perform random brightness augmentation to the generated robot scenes in the augmentation pipeline. As shown in Table <ref>, we find this mitigation strategy is generally effective.
For 2), the robot segmentation model may sometimes under-segment or over-segment, particularly when the source robot is occluded or interacting with objects. As the R2R diffusion model is not trained on source robot images with objects in the gripper or with a partially segmented robot, the generated target robot can have large artifacts including distortion or hallucination due to out-of-distribution inputs.
For 3), due to the stochastic nature of diffusion models and possible multiple inverse kinematics solutions for putting the end effector of the target robot at the position of the source robot with different joint angles, the generated images may not be consistent across time. We did not observe this as a big problem potentially due to two reasons: (1) The Diffusion Policy does not use a long history so temporally inconsistent artifacts may not have a large effect; (2) The stochasticity of the generated images has an effect of randomization, which may help the policy be more robust to visual artifacts.
Future work could also use a video diffusion model <cit.> to perform robot generation based on the entire robot trajectory to improve robot pose consistency.
For 4), even though our robot-to-robot diffusion model is trained on a large number of paired robot data, the generated images may still contain visible artifacts. For example, due to the ambiguity of inferring the field of view parameter from an image, the generated robot arm may be too thin or too thick. The generated gripper may also have artifacts or its position or orientation may not completely align with the source robot.
Due to these artifacts, we observe in Table <ref> that Mirage achieves better performance than Ro-Aug on tasks that require more precision, such as cup stacking. This is because Mirage has the benefit of using a URDF with precise camera calibration to put the gripper at the exact location desired. On the other hand, artifacts in the R2R Generation model mean that the gripper of the target robot may not have the exact same pose as the original robot. However, as we show in Table <ref>, the ability of RoVi-Aug to perform finetuning can bring the performance higher than Mirage.
§.§ Physical Experiment Details
We provide more details on the physical experiment setups described in Section <ref>.
For the Franka-UR5 transfer experiments, we study 5 tasks: (1) Open a drawer, (2) Pick up a toy tiger from the table and put it into a bowl (Place Tiger), (3) Stack cups, (4) Sweep cloth from right to left, and (5) Transport a toy tiger between two bowls. For each task, the initial position of the robot gripper is randomized. For (1), the position and orientation of the drawer on the table is randomized, and the goal for the robot gripper is to go into the handle, pull it out, and leave the drawer. For (2), the positions of the tiger and the drawer are randomized. For (3), the positions of both cups are randomized. For (4), the initial position of the cloth is randomized in the right region of the table, and the robot needs to push it to the left region of the table, a distance of about 0.5 m. For (5), there are 2 bowls (red and grey) whose positions are randomized, and the toy tiger is always in the red bowl initially. The robot needs to grasp it and drop it into the grey bowl. Among them, stacking cup requires high precision and is most difficult, and sweeping cloth is the easiest.
For the OXE dataset experiments, the 2 tasks from the Berkeley UR5 datasets (Transport Tiger, Sweep Cloth) are the same as (4) and (5) above. For the 2 tasks from the Jaco Play datasets, the “Pick Cup” task requires the robot to pick up a cup that is randomly initialized on the table, and the “Bowl in Oven” task requires the robot to pick up a bowl and put it into a toaster oven.
§.§.§ Policy Learning Details
We use the codebase from DROID <cit.> as our Diffusion Policy implementation, which is an open-source version integrated with Robomimic <cit.>. Similar to them, We use downsample camera observations at a resolution of 128 × 128 and the robot proprioception as input, and produce absolute robot end-effector translation, rotation, and gripper actions. And as with DROID and the original Diffusion Policy implementation, we train the diffusion policy to generate 16-step action sequences, and during rollouts, step 8 actions open loop before re-running policy inference. Compared to DROID, we use a ResNet-18 visual encoder instead of a pre-trained ResNet-50 for faster training, and we do not condition the policy with language input since we train a separate policy for each task (or 2 tasks for Table <ref>).
For few-shot finetuning experiments, we did not freeze any part of the diffusion policy and simply continued training on the target robot dataset (5/10 demonstrations) for only 100 epochs (about 20 minutes) to prevent the policy from overfitting to the target data too much.
§.§.§ Failure Modes
We describe the common failure modes of and baselines here.
For the 3 pick and place tasks (“Place Tiger,” “Stack Cup,” and “Transport Tiger”), failure cases are usually missed grasp or inaccurate placing. For “Open Drawer,” failure cases are typically gripper missing the drawer handle. For “Sweep Cloth," failure cases include inaccurate reaching and gripper being too high or leaving the table too early during the trajectory. For baselines, failure modes also include the robot getting confused and simply hovering over the objects without performing the task.
§.§ Model and Computation Details
For Ro-Aug, our segmentation model is a 636M-parameter SAM model with 35.6M-parameter LoRA layers; the video inpainting model E2FGVI is a 41.8M parameter model that we use off-the-shelf; the Robot-to-Robot (R2R) Generation model is a 1B-parameter Stable Diffusion model with around 350M-parameter ControlNet. For Vi-Aug, ZeroNVS is a 1B model that we use off-the-shelf. For policy learning, we use Diffusion Policy with a ResNet18 encoder and 1D-UNet with 80M parameters in total.
§.§ Example of Cross-Painting with a Mobile Robot
Generalization from arms mounted on a stationary base to mobile robots is much more challenging.
In this section, we try an experiment using images of the Franka arm and apply robot augmentation to replace the Franka with a Boston Dynamics Spot to illustrate some examples with cross-painting to a mobile robot. While we do not have the hardware to perform physical experiments, the cross-painted images look somewhat realistic (see Figure <ref>), so it may be possible that the cross-painted Franka dataset could jumpstart the training for Spot. There are additional challenges associated with mobile manipulation, such as coordination between base and arm movements and less accurate arm control, which we will leave as future work.
|
http://arxiv.org/abs/2409.03016v1 | 20240904181135 | Resolving Twin Jets and Twin Disks with JWST and ALMA: The Young WL 20 Multiple System | [
"Mary Barsony",
"Michael E. Ressler",
"Valentin J. M. Le Gouellec",
"Łukasz Tychoniec",
"Martijn L. van Gelder"
] | astro-ph.SR | [
"astro-ph.SR",
"astro-ph.GA"
] |
Mary Barsony
[email protected]
0009-0003-2041-7911]Mary Barsony
13115 Dupont Road,
Sebastopol, CA 95472, USA
0000-0001-5644-8830]Michael E. Ressler
Jet Propulsion Laboratory, California Institute of Technology,
4800 Oak Grove Drive,
Pasadena, CA 91109, USA
0000-0002-5714-799X]Valentin J.M. Le Gouellec
NASA Postdoctoral Fellow
NASA Ames Research Center,
Space Science and Astrobiology Division,
M.S. 245-6,
Moffett Field, CA 94035, USA
0000-0002-9470-2358]Łukasz Tychoniec
Leiden Observatory,
Leiden University,
P.O. Box 9513,
2300RA Leiden, The Netherlands
0000-0002-6312-8525]Martijn L. van Gelder
Leiden Observatory,
Leiden University,
P.O. Box 9513,
2300RA Leiden, The Netherlands
§ ABSTRACT
We report the discovery of jets emanating from pre-main-sequence objects exclusively at mid-infrared wavelengths,
enabled by the superb sensitivity of JWST's Mid-InfraRed Medium-Resolution Spectrometer (MIRI MRS) instrument. These jets are observed only in
lines of [NiII], [FeII], [ArII], and [NeII]. The H_2 emission, imaged in eight distinct transitions, has a completely
different morphology, exhibiting a wide-angled, biconical shape, symmetrically distributed about the
jet axes. Synergistic high-resolution Atacama Large Millimeter/submillimeter Array (ALMA) observations
resolve a pair of side-by-side edge-on accretion disks lying at the origin of the twin mid-infrared jets.
Assuming coevality of the components of the young multiple system under investigation,
the system age is at least (2 - 2.5) × 10^6 yr, despite the discrepantly
younger age inferred from the spectral energy distribution of the combined edge-on disk sources.
The later system evolutionary stage is corroborated by ALMA observations of CO(2-1), ^13CO(2-1), and
C^18O(2-1), which show no traces of molecular outflows or remnant cavity walls.
Consequently, the observed H_2 structures must have their origins in wide-angled disk winds,
in the absence of any ambient, swept-up gas.
In the context of recent studies of protostars, we propose an outflow evolutionary scenario
in which the molecular gas component dominates in the youngest sources,
whereas the fast, ionized jets dominate in the oldest sources, as is the case for the twin jets discovered in the WL 20 system.
§ INTRODUCTION
Most stars form in multiple systems <cit.>, defying simple theoretical collapse models <cit.>.
It is now recognized that in addition to gravity and thermal pressure, turbulence, magnetic fields, and
interactions between the members of multiple systems all play a role in their formation <cit.>.
Observational studies of multiple formation are, therefore, crucial for further progress.
How do the individual members of a multiple system coevolve?
The WL 20 triple system in Ophiuchus (Oph) is of great interest in this context, since it is a member of a rare class of multiple systems known as InfraRed Companion (IRC) systems,
in which one member has the appearance of being significantly younger than its companions, despite the fact that they have all formed from a single core e.g.,
T Tau, Glass I, Haro 6-10, Z CMa, XZ Tau, DoAr24E
<cit.>.
Such systems are particularly interesting for pre-main-sequence evolutionary studies, since they pose
a puzzle as to why one of two or more coeval components appears significantly redder, and, in this case,
more luminous than its companions.
WL 20 was discovered in the course of a 2 μm bolometer survey, carried out with a 12^'' beam,
of the 10.5^'× 10.5^' region exhibiting the the strongest C^18O emission in the Rho Ophiuchi (ρ Oph) cloud
<cit.>. When near-infrared arrays became available, WL 20 was resolved first into a binary
and, eventually, into a triple system, at 2.2 μm <cit.>.
WL 20E (K=10.13) and WL 20W (K=10.40) have an angular separation of 3.17^'' at P.A. 270^∘ E of N,
whereas the IRC, WL 20S, lies 2.26^'' from its nearest neighbor, WL 20W, at a P.A. of 173^∘ <cit.>.
The first near-infrared spectra of the two brighter components, obtained with R ≡λ/δλ≤ 1000,
determined spectral types consistent with K-M for WL 20E through an extinction of
A_V=15.4 and K7-M0 through A_V=18.1 for WL 20W <cit.>. Subsequent higher-resolution (R ∼ 1200) near-infrared
spectra established spectral types of K6 for WL 20E (GY 240B) and M0 for WL 20W (GY 240A), both seen through A_V = 16.3 <cit.>.
It took NIRSPEC on the Keck II 10.4-meter telescope to finally obtain a spectrum of the fainter (K=12.6)
IRC, WL 20S. The R = 2200 spectra of each component refined the spectral types of WL 20E to K7 IV/V and WL 20W to M0 IV/V,
using veiling independent line ratios, whereas the spectrum of WL 20S is so heavily veiled that no absorption lines could be detected in its spectrum.
Nevertheless, the spectral shape and δ K = 2.2 mag brightness difference between WL 20S
and WL 20W constrains WL 20S to have an infrared excess r_K < 0.9, but with an additional A_V=25, relative to its neighbors <cit.>.
Mid-infrared observations of WL 20 were first acquired with a 6^'' (or 8^'')
aperture at 10 μm, which did not resolve the multiple system <cit.>.
In the morphological classification scheme of pre-main-sequence spectral energy
distributions (SEDs) devised by Lada and coworkers (e.g., <cit.>),
WL 20 was classified as a Class I object <cit.>.
Diffraction-limited mid-infrared imaging on the Keck II telescope allowed
spatially resolved determination of each component's SED, confirming the previous Class II
spectroscopic classifications of WL 20E and WL 20W <cit.>, but demonstrating the
Class I SED of WL 20S <cit.>.
The first millimeter continuum detection of WL 20 was with the IRAM 30-meter telescope at 1.3 mm
with an 11^'' beam <cit.>. Interferometric observations
with the six-element Owens Valley Radio Observatory (OVRO) array were required to identify WL 20S as the
source of the millimeter dust continuum emission associated with the system <cit.>.
The single-telescope and interferometric flux measurements were consistent, implying a compact source
structure origin, with no emission from any envelope component.
This conclusion is further corroborated
by comparison of HCO+ J = 4 → 3 and 850 μm dust maps of WL 20S: Since a hallmark of the earlier
evolutionary stage is the presence of a centrally condensed envelope, such sources should exhibit
HCO+ J = 4 → 3 emission coincident with the dust continuum peak, since
this transition has a high critical density (> 10^6 cm^-3), unique to
the dense gas located in the inner regions of protostellar envelopes. Such a spatial coincidence was not
found for WL 20S, leading to the conclusion that it lacks an infall envelope component, and
sports a Class I SED due to its edge-on orientation <cit.>.
Intriguingly, the 12.81 [NeII] line was detected in WL 20 in a 4.7^'' slit at R=600 by Spitzer's IRS,
but was undetected in a much narrower, 0.4^'' slit by the Very Large Telescope's (VLT's) VISIR at R=30,000,
implying a spatially extended origin for the emission <cit.>.
Thermal radio jets are typically associated with Class 0/I protostars <cit.>. A 6 cm source detected with the Very Large Array (VLA) in an
11^''× 5^'' beam was first associated with WL 20 in
the survey of two dense cores, A and E/F, in the ρ Oph star-forming cloud <cit.>.
Higher-angular-resolution JVLA 3.0 cm maps (5.1^''× 2.4^'' beam at P.A.= -5^∘)
associated the radio jet with WL 20S <cit.>.
To further delve into the mysteries posed by this system, we have acquired JWST MIRI MRS
integral field unit imaging spectroscopy encompassing all three components,
covering the 5 - 28 μm wavelength range, supplemented by high-resolution ALMA data.
The paper is structured as follows: Observations and Data Reduction for MIRI MRS and for the ALMA data are presented in 2.1
and 2.2, respectively. MIRI MRS results are provided in 3.1 as follows: 3.1.1 covers continuum images,
3.1.2 shows the on-source spectra, 3.1.3 highlights jet images and spectra in both low- and high-excitation lines,
and 3.1.4 features the molecular hydrogen line images.
ALMA results are presented in 3.2: 3.2.1 features Band 4 (1.9 mm) continuum images and 3.2.2 both Band 6 (1.3 mm)
continuum and CO(2-1), ^13CO(2-1), and C^18O(2-1) line maps.
Discussion is contained in 4 and Conclusions in 5.
§ OBSERVATIONS AND DATA REDUCTION
§.§ MIRI MRS
WL 20 was observed during UTC 12-13 April 2023
as part of Guaranteed Time Observations (PID 01236; P.I. Ressler) with JWST's
MIRI <cit.>
in its MRS mode <cit.>.
Observations were acquired with a two-point dither pattern.
The pointing center for all observations was α_2000 = 16h 27m 15.77s, δ_2000 -24^∘ 38^' 44.3^''.
To maximize on-source integration time,
no dedicated background observations were undertaken. Instead,
background observations acquired earlier under PID 01236 are used to subtract the telescope
background and detector artifacts during the data processing.
All three gratings (A, B, and C) were employed through all four MIRI Channels,
using the FASTR1 read mode, thereby covering the entire 4.9 - 28.6 μm spectral range at resolutions ranging from 3500 ≥ R ≥ 1500.
Integration times were 15 minutes 55 seconds through the short grating, and 15 minutes 52 seconds in each of the medium and long gratings.
In addition, the F1500W filter was chosen for parallel off-source imaging for the duration of the MIRI MRS observations.
There are three necessary data-processing steps for producing usable science data from
the MIRI MRS raw data.
These were performed using the JWST pipeline version 1.11.4 <cit.> using reference context
jwst_1118.pmap of the JWST Calibration Reference Data System (CRDS; <cit.>).
Level 1 processing was performed with the default settings in
the Detector1Pipeline.
Level 2 processing was performed using the Spec2Pipeline. During this step, the dedicated background of Program 01236,
obtained at a different time and region of sky, was subtracted on the detector level in order to subtract the telescope background and detector artifacts.
Additionally, fringe corrections were performed using the fringe flat for extended sources (Mueller, M. et al., in prep.)
and detector level residual fringe corrections were applied (Kavanagh, P. et al., in prep.).
Finally, the Spec3Pipeline was run with both the outlier rejection
and master background subtraction steps switched off. This processing resulted in 12 calibrated data cubes,
one for each combination of channel and grating settings.
§.§ ALMA
Band 6 (1.3 mm) ALMA observations of the WL 20 system were obtained as part of program 2019.1.01792.S (PI: D. Mardones). Two datasets are used, one high-angular-resolution dust continuum observation reaching ∼ 0.11^'' with 36 sec of integration time, and a lower-angular-resolution observation reaching ∼ 1.1^'' with 194 sec of integration time. The longer-integration time, lower-angular-resolution data target various molecular lines with high spectral resolution (i.e., 0.08 km s^-1). In this work we will focus on the CO(2→1), ^13CO(2→ 1), C^18O(2→ 1) transitions. We used the task of CASA version 6.5.2 to produce the dust continuum image and molecular line channel maps, with Briggs weighting and the auto-multithresh option for the masking operations. Robust parameters of 0.0 and -0.5 were used for the dust continuum and molecular line maps, respectively. We reached noise levels of 0.27 mJy beam^-1 and 25 mJy beam^-1 per 0.125 km s^-1, for the continuum and spectral line observations, respectively,
calculated using the root-mean-square (rms) flux of an emission-free region in the image plane in each instance.
Band 4 (1.9 mm) 155 GHz ALMA observations were obtained on 2023 June 12 within program 2022.1.01734.S (PI: Ł. Tychoniec) with 30 sec integration time.
The observations were self-calibrated with the package[https://github.com/jjtobin/auto_selfcal].
The resulting measurement set was imaged with the procedure within CASA version 6.5.2 <cit.>. A robust parameter of 0.5 was used, and automasking with standard parameters was applied.
The resulting image has a resolution of 0.096^'' × 0.16^'' and sensitivity of 0.075 mJy beam^-1 measured from the rms signal in an emission-free part of the continuum image.
§ RESULTS
§.§ MIRI MRS
§.§.§ Line-free Continuum Images: Newly Discovered Source in WL 20S
Figure <ref>
shows the appearance of the WL 20 system at four different line-free continuum
wavelengths through the SHORT (A) grating in each of MIRI's four MRS Channels (see Table 1 of <cit.>).
Surprisingly, a faint, new source, WL 20SE, was discovered next to WL 20S, evident at the shorter MIRI MRS wavelengths (left panel of
Figure <ref>), with a separation of
∼ 0.58^'' ± 0.03^'' at PA 76.1^∘ ± 0.5^∘ (measured E from N),
relative to the brighter component, WL 20S. In the 8.1 μm continuum image and at longer wavelengths, however,
this new source can no longer be separated from its brighter
companion, their fluxes being blended together due to the increasing point-spread-function (PSF) with wavelength.
Consequently, when these two objects are spatially resolved, the IRC source, previously designated as WL 20S,
will now be referred to as WL 20SW, and its newly discovered neighbor will be called WL 20SE.
Table <ref> lists the detected sources and source coordinates, as determined
from the WCS header coordinates produced by the JWST pipeline for MIRI MRS and refer to
the source centroid coordinates at 5.3 μm. A second set of coordinates, derived
from the ALMA continuum observations as described in <ref> ALMA Band 4 Data, is also listed.
In order to determine whether or not any of the continuum sources are extended, we
used the PSF standard observations of 10 Lac (PID 3779; PI: D. Gasman) with which to compare
radial profiles of WL 20E, WL 20W, and WL 20SW.
WL 20SE has too low signal-to-noise and is too confused
with WL 20SW to state one way or the other as to whether or not it is extended.
The azimuthally averaged cross-cuts at 6.2 μm and 16.6 μm of
10 Lac, WL 20E, WL 20W, and WL 20SW, each scaled to their individual peak pixel value,
are shown in Figure <ref>. Examination of this figure
demonstrates that whilst WL 20E and WL 20W are consistent with being point sources,
WL 20SW is definitely extended at these wavelengths.
Continuum fluxes for each component of the WL 20 quadruple system are listed in Table <ref>.
The MIRI MRS fluxes are measured at the median wavelengths of the SHORT grating in each of the four MIRI MRS channels.
The apertures through which the fluxes are measured have diameters that vary with wavelength as
2.0 × FWHM_PSF for all sources except for WL 20E, for which the aperture diameter was varied as 3.0 × FWHM_PSF,
where FWHM_PSF is given by 0.033 (λ / μ m) + 0.106^'' <cit.>.
The larger apertures were necessary for WL 20E in order to minimize fringing at the shortest wavelengths, since WL 20E was very close to the edge of the detector for Channel 1.
ALMA Band 6 (1.3 mm) and Band 4 (1.9 mm) continuum fluxes, and the derived dust disk masses, are also tabulated in Table <ref>; how these were arrived at
is detailed in <ref>.
l c c c c c c |c c c c c c
7
0pt
The WL 20 System: Coordinates
Source 6cSource Coordinates (MIRI MRS) 6cSource Coordinates (ALMA Bands 4 & 6)
Name 3c α (2000) 3c δ (2000) 3c α (2000) 3c δ (2000)
h min sec ^∘ ^' ^'' h min sec ^∘ ^' ^''
WL 20E 16 27 15.9075 -24 38 43.8180 16 27 15.889 -24 38 43.977
WL 20W 16 27 15.666 -24 38 43.930 16 27 15.652 -24 38 43.982
WL 20SW 16 27 15.686 -24 38 46.219 16 27 15.674 -24 38 46.260
WL 20SE 16 27 15.728 -24 38 46.141 16 27 15.713 -24 38 46.133
l c c c c c c c
5
0pt
The WL 20 System: Continuum Fluxes and Disk Dust Masses
Source Flux at Flux at Flux at Flux at Flux at Flux at Dust mass
Name 5.3 μm 8.1 μm 12.5 μm 19.3 μm 1.3 mm 1.9 mm
(mJy) (mJy) (mJy) (mJy) (mJy) (mJy) M_⊕
WL 20E 81± 0.5 27 ± 1 85 ± 0.5 95 ± 2 2.1 ± 0.2 1.3 ± 0.1 3.3 ± 0.4
WL 20W 34 ± 2 42 ± 1 50 ± 1 133 ± 1 2.8 ± 0.3 1.4 ± 0.2 3.6 ± 0.5
WL 20SW 14 ± 2 89 ±1 47.5 ± 5 2300 ± 5 36.1 ± 0.6 16.2 ± 0.3 42 ± 2
WL 20SE 3.7± 0.5 - - - 20.1 ± 0.6 9.0 ± 0.3 24 ± 4
§.§.§ On-source Spectra
On-source MIRI MRS spectra of WL 20E, WL 20W, and WL 20S,
are presented in the top, middle, and bottom panels of Figure <ref>, respectively.
Spectra were extracted through apertures scaling as 2.0 × FWHM_PSF for WL 20W and WL 20S,
and as 3.0 × FWHM_PSF for WL 20E. The larger aperture was used for WL 20E in order to minimize fringing at the shortest wavelengths,
since WL 20E was close to the edge of the detector for Channel 1.
Although the aperture through which the WL 20S spectrum was extracted is centered on the coordinates of WL 20SW,
the spectra of WL 20SW and WL 20SE are blended longwards of 6.0 μm.
Spectra of the faint new source, WL 20SE and its bright companion, WL 20SW, are presented in
Figure <ref>, in the wavelength region where the two sources can be spatially resolved.
Note the bright emission lines of H_2 and [FeII].
The 5.3 μm continuum flux level of WL 20SE is a factor
of 3.8 times weaker than that of its neighbor, WL 20SW (see Table <ref>).
In the resolved spectra of Figure <ref>, WL 20SW displays a steeply rising continuum in contrast
to the relatively flat continuum of its newly discovered neighbor, WL 20SE.
Figure <ref> shows the
spectra of WL 20E and WL 20W over this same wavelength interval.
The spectral types and temperatures shown on their spectra are from <cit.>.
Note the much higher continuum fluxes of WL 20E at 81 mJy and WL 20W
at 34 mJy in this wavelength region relative to their southern neighbors,
WL 20SW at 14 mJy and WL 20SE at 3.7 mJy, emphasizing the large extinction difference between them.
Also notable is the complete absence of the [FeII] and H_2 emission
lines in the spectra of WL 20E and WL 20W,
in striking contrast to their appearance in the spectra of the WL 20SE/WL 20SW pair.
In comparison with recent MIRI/MRS spectra of Class 0 and Class I protostars,
what is remarkable in these spectra are the features which are lacking:
namely, the missing broad, deep absorption features associated with various ices, including
H_2O, CO, CH_3OH, CH_4, NH_3, to name just a few <cit.>.
This is yet another indicator of the relatively advanced, Class II
stage, of the WL 20 multiple system.
Although there are a wealth of spectral features in each component of the WL 20 system,
the focus of this work is on the [NeII] emission from each, and on the
emission lines found in the WL 20S pair.
Identifying and analyzing all of the spectral lines and solid state features detected in these spectra is left for future investigation.
§.§.§ Jet Images and Spectra in Low- and High-excitation Ionic Lines
A completely unexpected discovery from the MIRI MRS imaging data was that of twin, parallel
jets, powered by the sources WL 20SW and WL 20SE in both low- and high-excitation ionic lines.
Table <ref> lists the ionic transitions, excitation potentials, wavelengths, spectral resolutions at these wavelengths, and velocity
extents of the jets. Continuum-subtracted, emission-line images of these twin jets, each powered by one component of the WL 20S binary, are
presented in Figure <ref> for the [FeII] lines, in Figure <ref> for the [NiII] lines,
and Figure <ref> in the [ArII] and [NeII] lines.
Surprisingly, WL 20SE, the faint, newly discovered source, drives jets that have stronger emission in the
highly excited [ArII] and [NeII] lines than the jets in these lines emanating from its brighter neighbor, WL 20SW.
The situation is reversed in the lower-excitation [FeII] and [NiII] lines, in which the WL 20SE jets
are fainter than the corresponding jets seen in these lines driven by WL 20SW.
Since these results are difficult to see from the jet images of Figures <ref>, <ref>, and <ref>,
in order to best distinguish the separate jets driven by WL 20SE and WL 20SW,
we present zoomed-in views of the twin jets in Figure <ref>.
The top panel shows the unprocessed, continuum-subtracted line images
of [FeII] (5.340 μm), [NiII] (6.636 μm), and [ArII] (6.985 μm).
These transitions were chosen because they are at the shortest wavelengths in the low- and high-excitation jets,
where the angular resolutions are best. The bottom panel of Figure <ref> shows the
corresponding deconvolved images, in which the faint southern jet, driven by WL 20SE,
is clearly seen and is indicated by the white arrows.
To further highlight the differences in excitation of the jets, additional jet spectra were obtained. The jet spectra
were extracted from carefully
sized and placed apertures to minimize cross-contamination between the jets. The locations and sizes of these apertures are depicted by white circles
on the continuum-subtracted 6.985 μm [Ar II] line image in the top-right panel of Figure <ref>.
The central coordinates of the circular, 1.00^'' diameter apertures used to extract the jet spectra
are listed in Table <ref>.
For the WL 20SW jet, the chosen aperture is to the northwest of WL 20SW, to best isolate its jet emission
from the WL 20SE jet's emission. For the WL 20SE jet, the chosen aperture is located to the southeast of WL 20SE,
as far as
possible from the southern jet driven by WL 20SW, to avoid contamination by WL 20SW's southern jet, but still capturing as
much as possible of the emission from WL 20SE's jet.
These apertures were
chosen so that they are separated by the FWHM_PSF even at emission lines detected at MIRI's longest wavelengths.
Nevertheless, there will be the inevitable contamination of the continuum levels of the extracted jet spectra, which increases
with wavelength.
The extracted spectra from these two apertures are presented in Figure <ref>.
At the shortest wavelengths, where the spatial resolution allows the best separation of the jet apertures from the continuum emission
of WL 20SE and WL 20SW, the spectra exhibit a suppressed continuum dominated by emission lines from shocked gas.
As we progress to longer wavelengths, the inevitable contamination of the continuum levels of the jet spectra increases
in Figure <ref>.
Examination of the jet spectra of WL 20SE and WL 20SW in Figure <ref> shows the presence of both low- and high-excitation
emission lines in the jets powered by each source – a result that is difficult to see from the jet images
of Figures <ref> - <ref> alone. The jet spectra also show the much stronger [ArII] and [NeII] line strengths in the jet driven by
WL 20SE compared with those of the WL 20SW jet.
lcccccc
7
0pt
Low- and High-excitation Jet Emission Lines from WL 20SW and WL 20SE
Line Wavelength Transition Excitation Ionization Spectral Velocity
(μm) Potential Potential Resolution Extent
(eV) (eV) km s^-1a km s^-1
[FeII] 5.340169 a4F9/2-a6D9/2 7.90 16.19 ∼ 83 -76.8→ +102.8
[FeII] 6.721283 a4F9/2-a6D7/2 7.90 16.19 ∼ 83 -3.7→ + 67.7
[FeII] 17.935950 a4F7/2-a4F9/2 7.90 16.19 ∼ 150 -183.0→ + 117.9
[FeII] 24.519250 a4F5/2-a4F7/2 7.90 16.19 ∼ 180 -149.8→ -3.0
[FeII] 25.988290 a6D7/2-a6D9/2 7.90 16.19 ∼ 200 -130.2→ +77.4
[NiII] 6.636000 2D3/2-2D5/2 7.64 18.17 ∼ 84 -90.4→ +126.5
[NiII] 10.682200 4F7/2-4F9/2 7.64 18.27 ∼ 92 -60.3→ +49.1
[ArII] 6.985274 2P1/2-2P3/2 15.76 27.63 ∼ 84 -71.8→ +65.5
[NeII] 12.813550 2P1/2-2P3/2 21.56 40.96 ∼ 100 -53.8→ +121.7
aUsing R_MRS(max) value from Table 3, Column 6 for the central wavelength of each subband from Labiano et al. 2021, except for Channel 4,
where we have used the resolving power listed in the last column of Table 1 in <cit.>.
Line data are from https://www.mpe.mpg.de/ir/ISO/linelists
l c c c c c c
7
0pt
Center Coordinates for Extracted Jet Spectra
Jet 6cJet Aperture Coordinates
Aperture 3c α (2000) 3c δ (2000)
Name h min sec ^∘ ^' ^''
WL 20SW Jet 16 27 15.6686 -24 38 45.0694
WL 20SE Jet 16 27 15.8045 -24 38 47.0638
§.§.§ Molecular Hydrogen Line Images
Figure <ref> displays the distribution of molecular hydrogen line emission in the eight transitions listed in Table <ref>, continuum-subtracted and
integrated over all channels in which each line is detected.
Note the stark contrast between the appearance of the gas traced via the molecular hydrogen transitions and the jet-like structures evident in the ionized lines
of Figures <ref> through <ref>.
rcccccccc
9
0pt
Molecular Hydrogen Lines Detected in the WL 20 System
Wavelength
Transition
J
J^'
g_J
E_u
A
Spectral Resolutiona
Velocity Extent
(μm) (K) s^-1 (km s^-1) (km s^-1)
17.0348 0-0 S(1) 3 1 21 1015 4.761 × 10^-10 ∼ 121 -63.2 → +24.8
12.2785 0-0 S(2) 4 2 9 1682 2.755 × 10^-9 ∼ 100 -57.6 → +3.4
9.66491 0-0 S(3) 5 3 33 2504 9.836 × 10^-9 ∼ 91 -5.3 → +75.4
8.02505 0-0 S(4) 6 4 13 3475 2.643 × 10^-8 ∼ 85 -82.2 → +15.0
6.90952 0-0 S(5) 7 5 45 4586 5.879 × 10^-8 ∼ 84 -83.3 → +55.5
6.10856 0-0 S(6) 8 6 17 5830 1.142 × 10^-7 ∼ 83 -86.9 → +30.9
5.51116 0-0 S(7) 9 7 57 7197 2.001 × 10^-7 ∼ 83 -64.2 → +66.4
5.05311 0-0 S(8) 10 8 21 8677 3.236 × 10^-7 ∼ 83 -42.7 → +4.8
aUsing R_MRS(max) value from Table 3, Column 6 for the central wavelength of each subband from Labiano et al. 2021
In order to examine the physical conditions of the molecular hydrogen gas, we have produced an excitation
diagram obtained at the apex of the brightest H_2 S(5) emission north of WL 20E,
at J(2000) 16h 27m 15.85s, -24^∘ 38^' 41.89^'', simultaneously fitting for
A_V and T_ex. The results are displayed in Figure <ref>. The derived, best-fit
parameters at this position yield T_ex = 1161K ± 70K, N_H2 = 7.98 ± 1.77 × 10^21 cm^-2,
and A_V = 12 ± 1.
§.§ ALMA
§.§.§ ALMA Band 4 Data
Figure <ref> shows Band 4 (1.9 mm) images of the WL 20 system obtained with ALMA.
The WL 20S source is resolved into a binary, with the eastern component spatially coinciding with the newly discovered companion
evident at the shortest MIRI MRS wavelengths seen in Figure <ref>.
The millimeter source positions and continuum fluxes from
ALMA Band 6 (1.3 mm) and Band 4 (1.9 mm) are obtained by using the procedure in CASA <cit.>.
For WL 20E and WL 20W, each of which remains unresolved by ALMA, a single Gaussian fit was
used for both flux and positional determinations. By contrast, WL 20SE and WL 20SW are both resolved by ALMA (see Figure <ref>),
and their elongated morphologies are best fit by a combination of two Gaussian components.
The tabulated positions for WL 20SE and WL 20SW are simply the mean value of the two Gaussian fits, whereas the reported fluxes are the sum of the
two Gaussian components fitted to each source. The resulting positions and fluxes are reported in Tables <ref> and <ref>, respectively.
The continuum fluxes can be used to infer disk dust masses, assuming optically thin dust emission, using the equation from <cit.>:
M=D^2F_ν/κ_ν B_ν(T_ dust)
with D - the distance to the source, B_ν - the Planck function for a temperature T_ dust, κ_ν - the dust opacity. The temperature of the dust is assumed to be
30 K, typical for young protostellar envelopes <cit.>. A value of κ_1.9 mm= 0.6 cm^2 g^-1 was used, scaled from κ_1.3 mm= 0.9 cm^2 g^-1 provided in <cit.> using β = 1. <cit.>. The resulting dust masses are reported
in the last column of Table <ref> in units of Earth masses.
§.§.§ ALMA Band 6 Data
Band 6 (1.3 mm) continuum ALMA data were acquired at similar resolution to the Band 4 (1.9 mm) data, and are
represented by gray contours appearing in Figure <ref>.
Positions and fluxes of the sources detected in the Band 6 continuum are presented in Tables <ref> and <ref>, respectively.
Figure <ref> shows the appearance of the WL 20 system in ^12CO J=2→1 (left panel),
^13CO J=2→1 (middle panel), and C^18O J=2→1 (right panel), all shown in color,
with their respective flux scales indicated by the color bars in Jy/beam km s^-1 units.
The FWHM beam size used for the CO isotopologue observations was roughly a factor of 10 larger in each dimension
than the beam size used for the continuum observations, as indicated in the Figure <ref> caption.
The ^12CO J=2→1 emission appears to peak near the elongated continuum structures associated with WL 20SW and its newly discovered companion, WL 20SE.
The gray ellipse drawn in each panel of Figure <ref> indicates the location and extent of this CO J=2→1 peak,
for direct comparison with the gas morphologies evident in the ^13CO J=2→1
and C^18O J=2→1 maps, as well as with the Moment 1 maps presented in Figure <ref>.
Although the ALMA CO observations do not have enough angular resolution to resolve
the twin disks of WL 20SE and WL 20SW individually, we can, nevertheless, determine velocity-integrated 3σ line fluxes from the ^13CO(2-1) and C^18O(2-1)
moment-zero maps for later use in determining the total gas mass associated with both disks together.
When measured within two beam areas (0.46^'' × 0.92^'' at P.A. = 345^∘),
centered on the two edge-on disks detected in the continuum,
the ^13CO(2-1) and C^18O(2-1) line fluxes
are 0.6869 Jy km s^-1 and 0.09979 Jy km s^-1, respectively.
In contrast to the CO J=2→1 peak associated with the twin disks of WL 20SW + WL 20SE,
there is a remarkable lack of gas emission toward either of the sources, WL 20E or WL 20W. Quantitative upper limits
on this emission are derived by measuring the 3 σ rms in a line-free region of the ^13CO(2-1) moment-zero
map, which yields a value of ≤ 1.11 × 10^-3 Jy km s^-1, and a corresponding 3 σ upper limit for the C^18O(2-1) emission ≤ 5.1 × 10^-4 Jy km s^-1.
§ DISCUSSION
§.§ MIRI MRS Continuum Sources
lccccccc
8
0pt
Continuum Flux Comparison of the WL 20 Components
3c —— Keck II ——– 3c —— MIRI MRS ——
[-10pt]
E W SW + SE E W SW + SE
[-10pt]
Combined Combined
[-10pt]
Wavelength 6c—————————— Flux —————————————
[-10pt]
(microns) (mJy) (mJy) (mJy) (mJy) (mJy) (mJy)
[-10pt]
7.9 121 38.4 123.0 88 35 78.8
10.3 72.6 49.6 345.0 76.9 71.3 178
10.8 79.0 51.5 281.0 85 73.4 249
12.5 86.8 44.3 610.0 85 49.7 475
17.9 78.0 93.9 2720.0 87 117.5 1955.7 Channel 3 LONGa
17.9 … … … 86 113 1822.5 Channel 4 SHORTa
20.8 109.0 117.0 3700.0 99.3 154.3 2856.4
24.5 <155.0 <155.0 6600.0 116.0 241.6 4217.5
aFor 17.9 μm, we list two MIRI MRS measurements since this wavelength is covered at the edge of Channel 3 LONG and in Channel 4 SHORT.
The biggest surprise from the MIRI MRS continuum images presented in Figure <ref> is the discovery of a new member of the WL 20 system, WL 20SE,
adjacent to the previously known IRC source, which we are now calling WL 20SW.
The continuum mid-infrared appearance of the WL 20 system has previously been presented at 7.9, 10.3, 12.5, 17.9, 20.8, and 24.5 μm from Keck II imaging observations
with sufficient resolution at 7.9 and 10.3 μm to have resolved WL 20SE from WL 20SW <cit.>.
The MIRI observations show WL 20SE to emit about 26.5% of the flux of WL 20SW at 5.3 μm, a wavelength at which both sources are well resolved by MIRI (see Figure
<ref>).
If the 5.3 μm flux ratio between WL 20SE and WL 20SW were constant out to 7.9 μm, and the source fluxes had not varied between the time of the MIRI and the Keck II observations,
the contribution of WL 20SE to the previously reported 7.9 μm flux would have been 33 mJy, a level near the detection limit, judging from the lowest level contour of 40 mJy
in the 7.9 μm plot of the WL 20 system (Figure 1 d) of <cit.>).
In fact, <cit.> did
report that WL 20S showed extended structure, beyond the point-source appearances of WL 20E and WL 20W (see their Figure 7). Furthermore, it was noted that the extended structure
did not vary with wavelength, in contrast to other Class I objects, which generally exhibit increasing size with wavelength.
The combined fluxes from WL 20SW and WL 20SE had exhibited mid-infrared variability in the past on timescales of a few years <cit.>; thus, it is
of interest to compare the newly acquired MIRI MRS flux measurements with previously published ones. These measurements are presented in
Table <ref>, from which it is clear that the combined fluxes from WL 20SW and WL 20SE are consistently lower in 2023
than they were in 1998.
§.§ Gas Emission from Jets and Disk Winds
The most spectacular discovery of the MIRI MRS observations of the WL 20 multiple system
is that of the parallel, twin, bipolar jets powered by WL 20SE and WL 20SW, in multiple transitions of
[FeII] and [NiII], as well as in the higher-excitation [ArII] and [NeII] ionic lines
(see Figures <ref> - <ref> and Table <ref>).
This is the first young system, lacking any associated, cold molecular outflow,
as traced by, for example, CO 1-0 or CO 2-1 emission,
to exhibit ionized jets, discovered via centimeter radio observations <cit.>, but now resolved and imaged in the mid-infrared.
Recent JWST studies of Class 0 and Class I outflows point to a unified
picture of nested structures, the three outermost of which have wide opening angles, with the outermost layer (if detected) seen in scattered
near-infrared light, the next inner layer mapped in millimeter CO lines, within which the mid-infrared hydrogen emission is found.
Collimated jets seen via their ionic emission lines are found along the symmetry axis of the wide-angled, outer layers
<cit.>. In the case of WL 20,
the scattered light <cit.>
and CO outflow components are missing, since, at this evolutionary stage, there is no infall envelope left.
None of the detected molecular hydrogen
line maps of Figure <ref> show evidence of jets emanating directly from WL 20SE and WL 20SW.
Instead, the molecular hydrogen maps display a distinct biconical morphology with the apex coinciding with the
locations of WL 20SE and WL 20SW. The ionized, parallel jets propagate along the symmetry axis of the molecular hydrogen cone.
At first glance, one might be tempted to interpret the H_2 structure as outlining biconical cavity walls. However,
examination of the gas distribution in Figure <ref> shows a lack of surrounding gas within which to form a cavity.
Rather, there is a clear spatial anticorrelation of the cold, CO molecular gas and the warm H_2 gas, strongly
supporting the interpretation that the H_2 structure originates from wide-angled disk winds.
These observations can be understood in the context of the coevolution of the protostellar envelope
and the jet sources it surrounds. For the youngest, Class 0 protostars, the majority of the mass is
still in the envelope, and, as jets emerge from the central accreting sources, there is plenty of molecular material to entrain,
resulting in molecular jets and outflows. As the sources evolve through the Class I stage, most of the mass is now
concentrated in the protostars, with some remnant molecular infall envelope material, allowing for the appearance of molecular outflows,
wide-angled disk winds, and the mid-infrared ionized jets. Finally, by the Class II stage, the molecular material from the original
protostellar envelope is gone (see Figures <ref> and <ref>), so, if enough disk accretion activity is still ongoing, the jets
will appear as purely ionic, since there is now no ambient molecular material to entrain. Furthermore, this circumstance
strengthens the case for the interpretation of the mid-infrared molecular hydrogen structures of Figure <ref> as resulting
from disk winds, which must be the origin of the observed H_2, since the envelope material is long gone.
§.§ Disks in the WL 20 System
§.§.§ The Dust Disks Detected by ALMA
Both the 1.3 mm and the 1.9 mm ALMA continuum observations, acquired at similarly high resolutions (0.14^'' × 0.11^''
and 0.096^'' × 0.16^'', respectively) resolve the newly discovered WL 20SE source and the previously known WL 20SW into twin, edge-on disk structures
(see Figures <ref> and <ref>, respectively).
The total 1.3 mm flux from the entire WL 20 system is 61.1 mJy (see Table <ref>),
to be compared with the previously published 95 mJy acquired with the 12^'' beam of the IRAM 30-meter, with a stated ∼ 20% flux calibration uncertainty
<cit.>. Thus, within the stated uncertainties,
the high-resolution ALMA observations are consistent with having detected all of the continuum emission from the system.
The edge-on disk structures resolved by ALMA explain both the Class I SED (spectral energy
distribution) of WL 20SW (the dominant source at near- and mid-infrared wavelengths) and the additional A_V =25 inferred from its infrared spectrum, in addition to the A_V = 16 toward its northern neighbors.
These extinction values were previously inferred from near-infrared spectroscopy.
Photospheric spectral features evident in the near-infrared (2.06-2.49 μm) spectra of WL 20E and WL 20W were used to determine their
spectral types and veilings <cit.>. One can then apply various amounts of reddening, corresponding to known values of A_V, to match the observed spectral slopes
and fluxes to determine A_V = 16 for these sources. Since the corresponding near-infrared spectrum of WL 20S (WL 20SW+WL 20SE) did not show any obvious
absorption or emission features, its spectral slope and brightness were used to place constraints on its A_V value. Under the assumption that its intrinsic spectrum
and K flux were the same as that of WL 20W, its spectrum could be matched with A_V = 41 <cit.>.
The newly discovered edge-on disks of WL 20SE and WL 20SW by ALMA are well resolved, with diameters ∼ 100 au at 125 pc (see the right panel of Figure <ref>).
The disks surrounding WL 20E and WL 20W, on the other hand, remain unresolved with ALMA, implying disk diameters less than 13 AU for each, for
an assumed distance of d= 125 pc, derived from VLBA parallax measurements <cit.>. This finding is perfectly in line
with the results of the ODISEA (Ophiuchus Disk Survey Employing ALMA), which found the 1.3mm continuum disk sizes in Oph heavily weighted toward compact disks
with radii < 15 AU for 85% of detected objects <cit.>.
Disk dust masses of just 3.3 M_⊕ and 3.6 M_⊕ are derived for WL 20E and WL 20W, respectively.
These values are in line with average disk dust masses derived for Class II objects in Corona Australis <cit.>,
IC348 <cit.>, and Ophiuchus <cit.>.
By contrast, the disk masses around both WL 20SW and WL 20SE, 42 M_⊕ and 24 M_⊕, respectively (see Table <ref>), are in line with
the higher average disk dust masses derived for Taurus <cit.> and Lupus <cit.>.
§.§.§ X-ray excited 12.8μm [NeII] Disk Emission
The 12.81 μm [NeII] line was detected toward each of the WL 20 components, with the observed line profiles shown in Figure <ref>.
Spectra were extracted through the same apertures as for the continuum flux measurements reported in Table <ref>.
Continuum-subtracted [NeII] line fluxes, A_12.8, the extinction at 12.81 microns, and extinction-corrected NeII line luminosities are listed in Table <ref>.
For reference, spectral types and inferred T_eff are also listed <cit.>.
The extinction at 12.81 μm was derived from the published values of A_V
of 16 for both WL 20E and WL 20W, and A_V = 41 toward WL 20S <cit.>. We use A_J = 0.282A_V <cit.>, followed by the conversion
A_12.81 = 0.16A_J for R_V = 5.5 for ρ Oph <cit.>. Taking into account the extinction toward each source, and using a distance
of 125 pc, we arrive at the intrinsic [NeII] line luminosities in the last column of Table <ref>.
The WL 20 system was previously observed at 12.81 μm from the ground with VLT/VISIR through a 0.4^'' slit at a resolution of R = 30,000 corresponding
to ∼ 10 km s^-1 <cit.>.
The observed coordinates were closest to WL 20E and an upper limit of < 0.2 × 10^-14 erg cm^-2 s^-1 was established, quite close to our clear detection of the line toward
WL 20E at 0.24 × 10^-14 erg cm^-2 s^-1.
The observed values for the [NeII] line fluxes agree nicely with predictions from X-ray excitation of neon in the warm upper atmospheres of disks around T Tauri stars, as first proposed
by <cit.>. These authors calculated a 12.81 μm [NeII] flux of ∼ 10^-14 erg cm^-2 s^-1 for a fiducial T Tauri disk model from <cit.>,
assuming a central star mass of 0.5 M_⊙, stellar radius of 2 R_⊙, T = 4000K, L_x = 10^30 erg s^-1, and
accretion rate of 10^-8 M_⊙ yr^-1, for a face-on disk orientation at an assumed distance of 140 pc.
We note that an X-ray flux of L_x = 1.16 ± 0.09 × 10^30 erg s^-1
from the DROXO (Deep Rho-Ophiuchi XMM-Newton Observation) survey is reported by <cit.> for the WL 20 system in its entirety, without
distinguishing among its individual components.
The coordinates listed for the origin of the X-ray emission in WL 20, (J2000) α = 16:27:15.9, δ = -24:38:43.7 with a 1.1^'' positional error,
originate from Table A.1. of <cit.>. These coordinates, as previously noted, are closest to those of WL 20E. Given that the FWHM of Newton/XMM is
∼6^'', the deduced coordinates of the peak X-ray emission are biased toward the
strongest emitter within the PSF, so we do not know the X-ray flux associated with each individual component of the WL 20 system.
lccccc
9
0pt
[NeII] Line Fluxes and Additional Properties of the WL 20 System Components
6
Source Spectral T_eff A_12.8μ m Ne II Line Ne II Line
Type Flux Luminosity
(K) (10^-14 erg cm^-2 s^-1) (10^28 erg s^-1)
WL 20E K7 IV/V 4040 0.72 0.24 0.87
WL 20W M0 IV/V 3800 0.72 0.21 0.76
WL 20S … … 1.85 0.36 1.31
§.§.§ ALMA Constraints on Disk Gas Masses
We can estimate disk gas masses using the models of <cit.>,
with the measured ^13CO(2-1) and C^18O(2-1) line fluxes and flux upper limits
reported in <ref> and a distance of 125 pc. For the combined disks of WL 20SE and WL 20SW, these values lead to a
combined gas mass of about 100 M⊕ (3 × 10^-4 M_⊙) for an assumed CO/C^18O ratio of 550, or just ∼1 M⊕ for the case of
a CO/C^18O ratio of 1650, meant to allow for selective photodissociation in the models.
The combined gas mass of 100 M⊕ leads to a gas/dust ratio of just 1.5 for the combined disks of WL 20SE and WL 20SW,
and a factor of 100 lower for the case of the higher CO/C^18O ratio.
In stark contrast to the copious CO(2-1) gas emission toward the WL 20SW/WL 20SE system,
there is a striking lack of emission toward
either of the sources WL 20E or WL 20W (see left panel of Figure <ref>).
Multiplying the 3σ upper limits for the ^13CO(2-1) and C^18O(2-1) line fluxes reported
in <ref> by 4 π × (d =125 pc)^2 for comparison with the grid of models presented in Figure 6 of <cit.>,
we find that our observed upper limits fall well below the lowest model gas mass of just 0.1 M_Jup (32 M_⊕).
This gas mass upper limit is to be compared with
the derived dust masses of 3.3 M_⊕ and 3.6 M_⊕ for WL 20E and WL 20W, respectively.
One firm conclusion, as a result of this comparison, is
that the gas to dust ratio in these disks is at least 10 times lower than the canonical ISM (Interstellar Medium) value of 100,
since otherwise, we would have
had clear detections of both disks in each of the CO(2-1) isotopologues.
§.§ Gas Emission from the Surroundings
§.§.§ ALMA
The twin disks of WL 20SW and WL 20SE lie near the peak of the flattened, elliptically shaped CO(2-1) structure evident in the left panel of Figure <ref>,
and outlined by the gray ellipses in both Figure <ref> and Figure <ref>, which shows the Moment 1 maps in all three isotopologues, CO(2-1), ^13CO(2-1), and C^18O(2-1).
A Moment 1 map is the integrated velocity-weighted intensity map divided by the integrated intensity map, and emphasizes the predominant
velocity distribution of the gas. A large-scale velocity gradient across this gaseous envelope
is most apparent in the CO(2-1) Moment 1 map in the left panel of Figure <ref>.
Within the outlined elliptical area in this panel, the minimum and maximum velocities
are -1.41 km s^-1 and +3.62 km s^-1, respectively, relative to the cloud's +4 km s^-1 V_LSR.
It is tempting to interpret the combined morphology and velocity distribution of the flattened ^12CO(2-1) structure peaking
on the twin disks of WL 20SE and WL 20SW as a pseudo-disk encompassing both of the smaller-scale twin disks.
A pseudo-disk is simply a flattened protostellar envelope first proposed as the natural consequence of including a uniform magnetic
field threading the density distribution of the singular isothermal sphere model of cloud core collapse <cit.>.
If we were to interpret the elliptical CO(2-1) structure as a pseudo-disk, then for
a central mass of 0.5 - 1.0 M_⊙, the expected range of
freefall velocities, (2GM_*/R_pseudo-disk), at a 310 AU radius (being the extent of the semi-major axis of the ellipse)
would be 1.1 km s^-1 to 2.4 km s^-1.
The expected velocity gradients
for infalling gas motions are, however, not observed in
the individual CO(2-1) velocity channel maps (not shown here).
Another possible interpretation of the observed velocity structure of the of the CO(2-1) gas within its emission peak is bulk rotation:
Assuming a central mass of 0.5 M_⊙ to 1 M_⊙, the corresponding range of
Keplerian velocities expected at a radius of 310 AU
is 1.45 km s^-1 to 3.0 km s^-1, centered on the cloud's V_LSR of 4 km s^-1. Whereas the values of the observed velocity
extrema of -1.41 km s^-1 and +3.62 km s^-1 are close to those expected for a single, spatially resolved, Keplerian disk,
their locations are not – they are not centered on WL 20SE and WL 20SW.
Close examination of the individual velocity channels reveals red- and blue-shifted gas components on each side of the twin disks,
consistent with the presence of two unresolved Keplerian disks blended together. Recall
that the large beam size of the CO(2-1) observations could not resolve the twin disks that
were discovered via the much higher resolution continuum observations.
In the isotopologue maps of the central and right panels of Figure <ref>,
we see progressively deeper into the envelope's gas structure, given the decreasing optical depths of the
^13CO(2-1) and C^18O(2-1) emission lines. Optical depth effects may account for the differing velocity structures encountered in the
corresponding Moment 1 maps.
The ^13CO(2-1) Moment 1 map shows mostly red-shifted gas towards both disks, reaching
magnitudes of up to + 3 km s^-1 relative to the cloud's V_LSR.
In the C^18O(2-1) Moment 1 map, the gas surrounding WL 20SW and WL 20SE is again red-shifted, but at lower velocities,
ranging from about 0.5 - 1.5 km s^-1 relative to the V_LSR.
An additional explanation for the gas structure in which the twin edge-on disks are embedded is that it represents
the leftover envelope gas which has mostly been obliterated by the twin bipolar jets emanating from WL 20SE and WL 20SW.
We can estimate the gas mass of this structure from the
integrated intensity of the ^13CO(2-1) emission
within the ellipse of the central panel of Figure <ref>, under the assumption of optically thin emission.
Adjusting the formula for the gas
mass estimate derived for ^13CO(1-0) from
<cit.> (following <cit.>) to ^13CO(2-1):
M(H_2) = 1.37 × 10^-5 exp(5.34/T_ex) (T_ex + 0.88) ×
exp(10.58/ T_ex) (D/100 pc)^2 (10^-6/X (^13CO)) ∫ S_ν dv M_⊙
where D is the distance to the source in units of 100 pc, X(^13CO) is the fractional abundance of ^13CO with respect to H_2, and T_ex is the excitation temperature of the gas.
Assuming T_ex = 30K at 125 pc distance, and using the measured S_ν dv of 4.01 Jy km s^-1 in the ellipse in the central panel of Figure <ref>
yields a remnant envelope gas mass of just 4.5 × 10^-3 M_⊙. By comparison, a core mass of 0.024 M_⊙ (or just 25 M_Jupiter) for WL 20 was
derived from
submillimeter continuum mapping of the ρ Oph cloud at 850 μm with the 13^'' beam
of the James Clerk Maxwell Telescope, adjusted for a distance of 125 pc <cit.>.
Placing this in an evolutionary context, it is useful to point out here that in a recent ALMA survey of 7 Class 0, 7 Class I,
and 7 Flat Spectrum sources in Orion B, all sources exhibited beautiful red- and blue-shifted, bipolar CO (2-1) and
^13CO (2-1)outflow structures <cit.>,
in contrast with the case of WL 20S which completely lacks any trace of
CO outflow activity (see left and middle panels of Figures <ref> and <ref>).
Furthermore, in this same study, in which the
ambient cloud cores were mapped in C^18 O(2-1),
it was demonstrated that outflows remove a significant amount of gas from their parent cores.
In the case of the WL 20 multiple system, it is amply evident that the mass already
residing in the pre-main-sequence stars, exceeding 1 M_⊙
in just the WL 20E and WL 20W components alone, is far in excess of the ambient core gas mass, which is, at most, 0.024 M_⊙.
§.§.§ Extended [NeII]
In Figure <ref>,
we present the observed [NeII] line profiles
extracted through the jet apertures presented in Figure <ref>.
We also extracted a spectrum completely off-source to the northwest, through
a 1.00^'' circular aperture centered at α_2000 =16:27:15.703,
δ_2000 = -24:38:31.108,
displayed in the rightmost panel of Figure <ref>.
[NeII] emission was still detected, albeit with a peak flux ∼ 20-30 times weaker than that observed through the jet apertures.
This result led us to produce the continuum-subtracted, [NeII] integrated line map in the left panel of Figure <ref>, to examine the larger-scale spatial distribution
of the [NeII] emission. Integrating the [NeII] line flux over the entire 5.5^'' × 6.2^''
FOV, shown in the right panel of Figure <ref>,
yields a value of 5.23 × 10^-15 erg cm^-2 s^-1,
not corrected for extinction. This value is to be compared with the
6.28 ± 0.25 × 10^-14 erg cm^-2 s^-1 from the
Spitzer/IRS c2d data <cit.>.
The Spitzer spectra were obtained through a 4.7^'' × 11.1^'' aperture, with a resolution of R = 600.
As previously emphasized by <cit.>, the discrepancy between their VLT/Vizier [NeII] line flux upper limit and the
value detected by Spitzer could be reconciled by the inferred presence of spatially extended
[NeII] emission originating from outflows. The new MIRI MRS imaging data confirm the shock-powered origin
of much of the observed [NeII] emission, in addition to the X-ray-excited [NeII] disk emission from each component of the WL 20 system, as
can be seen in the left panel of Figure <ref>. The [NeII] clearly fills the bipolar lobes traced by the H_2 maps of Figure <ref>.
§ PUTTING IT ALL TOGETHER: CONCLUSIONS AND SUMMARY
In Figure <ref>, we present
the combined
discoveries of JWST MIRI MRS and ALMA in the WL 20 IRC system:
images of the twin disks, ionized twin jets, and the biconical molecular H_2 gas structure.
The four dust continuum disks detected by ALMA are represented by contours, and their locations are marked by the crosses
in each panel.
The highest-angular-resolution images showing the jets are shown in color in the left and right panels,
in the high-excitation [ArII] line at 6.985 μm,
and the low-excitation [FeII] line at 5.34 μm, respectively.
The jet driving sources, discovered with the MIRI MRS instrument in its highest angular resolution channels, coincide with the locations of the two small, edge-on disks, resolved by ALMA.
The molecular gas, in stark contrast with the ionized gas, is distributed in a double-cone-shaped structure, whose apex is centered
on the two edge-on disks of WL 20SE and WL 20SW.
The double-cone-shaped morphology is shown in the middle panel of Figure <ref>,
which highlights the
continuum-subtracted, H_2 0-0 S(3) 9.67 μm emission.
The parallel jets propagate along the cone's central axis.
Portions of this bipolar conical gas structure clearly continue beyond the observed MIRI MRS field-of-view.
Based on near-infrared spectroscopy and ground-based, spatially resolved photometry,
the northern components, WL 20E and WL 20W, can be placed on model isochrones in
the Hertzsprung-Russell diagram. Using this technique,
the most plausible system age is determined to be ∼ 2 - 2.5 × 10^6 yr
<cit.>.
Given the proximity of the sources to each other, and taking into consideration the typical dimensions of pre-stellar cores,
it is plausible to assume that the system evolved coevally, leading to the conclusion that both WL 20SE and WL 20SW,
their disks, and their jets are also this old.
A later evolutionary stage for WL 20SE and WL 20SW is in keeping with the
lack of any molecular outflows associated with these objects in the ALMA observations
of CO(2-1) and ^13CO(2-1).
In addition, the structure of the relatively
optically thin C^18O(2-1) emission in the right panel of Figure <ref>
shows a pronounced anticorrelation
with the H_2 morphology of Figure <ref>, suggesting
that the combined outflows have dissipated
the original molecular gas core from which the system formed.
Such destruction of the molecular gas environment is also evident in the distribution
of [NeII] emission, which is not confined to the jets, but is distributed, albeit at a low level,
throughout the MIRI MRS FOV (see Figures <ref> and <ref>).
Finally, the core associated with the entire WL 20 system contains
only 0.024 M_⊙ of gas, accentuating the fact that the central objects
have already attained their birth masses.
The lack of a massive molecular core surrounding the system at this Class II evolutionary stage,
simplifies the interpretation of the origin of the biconical H_2 gas – it must originate in wide-angled disk winds,
since there is no infalling or ambient cold gas to entrain.
With regard to the presence of collimated, ionized jets, five have been found so far in Class 0 sources with MIRI MRS or NIRSpec
<cit.>, whereas one Class 0 source was found to be purely molecular
<cit.>.
However, in a comprehensive K-band spectroscopic survey of
26 Class 0 sources carried out with the MOSFIRE instrument at Keck I,
whereas 90% of sources showed H_2 emission, characteristic of shocks in outflows,
only 20% showed [FeII] emission lines, presumably
associated with narrow jets <cit.>.
This percentage of [FeII] detections in Class 0 sources may be a
lower limit, given the high line-of-sight extinction toward these most embedded protostars
and the fact that in many Class 0 sources in which the 5.34 μm [FeII] lines were detected by JWST,
the corresponding H or K band [FeII] lines were not.
For five Class I systems that have been
investigated by JWST NIRSpec or MIRI MRS, all show the presence of ionized
jets: DGTauB <cit.>, the TMC 1 binary <cit.>.
and the HH46 IRS binary <cit.>. In a K-band spectroscopic survey carried
out with the Keck II telescope,
of 52 Class I and Flat Spectrum sources, 23 show H_2 emission and none were reported to show [FeII] <cit.>.
It has previously been suggested
that as the outflow evolves, the mass-loss rate decreases, velocities increase,
and the jet becomes progressively more ionized <cit.>.
In light of the information we have so far, it can be stated that the H_2 component
of the outflow activity, regardless of its origin, decreases in frequency with
evolutionary stage.
There are interesting differences amongst
the WL 20 system components worth pointing out as well:
The WL 20SW/SE sources are both actively driving ionized jets, whereas neither WL 20E nor WL 20W are
currently jet or outflow sources. The edge-on disks of WL 20SE and WL 20SW are well-resolved, both with
extents of ∼ 100 AU, whereas the disks around WL 20E and WL 20W remain unresolved, suggesting
disk projected diameter upper limits of just 13 AU (see Figure <ref>).
Although small, the gas masses associated with the disks of WL 20SE and WL 20SW are measurable, whereas
any gas emission associated with the unresolved dust disks of WL 20E and WL 20W remains undetected,
signaling a highly depleted gas to dust ratio compared with that of the ISM.
To summarize, combined JWST MIRI MRS and ALMA observations of the young, multiple, infrared companion system, WL 20, in the Ophiuchus
star-forming region resulted in the discovery of the following:
* A previously unknown companion to WL 20SW: WL 20SE
* Twin, edge-on disks of ∼ 100 AU diameter, with disk dust masses of 24 ± 4 M_⊕
and 42 ± 2 M_⊕ associated with WL 20SE and WL 20SW, respectively, and a combined gas mass of just
1 - 100 M_⊕
* Unresolved disks with diameters < 13 AU and dust masses of 3.3 ± 0.4 M_⊕ and 3.6 ± 0.5 M_⊕
for WL 20E and WL 20W, respectively, directly detected for the first time, with gas/dust ratios ≤ 10
* Parallel, ionized jets, emanating from both WL 20SE and WL 20SW, seen in five transitions of [FeII],
two transitions of [NiII], and in [ArII] and [NeII]
* The presence of extended, low-level [NeII] emission throughout
* A biconical H_2 structure surrounding the ionized jets, observed in eight different mid-infrared H_2 lines, originating in wide-angled disk winds
What is remarkable about these jets and the H_2 disk winds is the lack of any associated molecular line emission from cold gas in
the millimeter wavelength region.
§ ACKNOWLEDGMENTS
M.B. would like to thank Ewine van Dishoeck for her leadership, enthusiasm, and encouragement during the preparation of this work;
the entire MIRI JOYS+ (JWST Observations of Young protoStars+) team. Diego Mardones for obtaining the Band 6 ALMA data, and Sue Terebey for
fruitful discussions. We thank the anonymous referee for their attentive reading and numerous suggestions for improvement of the originally submitted manuscript.
The work of M.E.R. was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract
with the National Aeronautics and Space Administration.
V.J.M.LG.'s research was supported by an appointment to the NASA Postdoctoral Program at the NASA Ames
Research Center, administered by the Oak Ridge Associated Universities under contract with NASA.
M.L.v.G. acknowledges support from ERC Advanced grant 101019751 MOLDISK,
TOP-1 grant 614.001.751 from the Dutch Research Council (NWO) and The Netherlands
Research School for Astronomy (NOVA). Astrochemistry in Leiden is supported by the Netherlands Research School for Astronomy (NOVA).
This work is based on observations made with the NASA/ESA/CSA James Webb Space Telescope.
The JWST data presented in this article were obtained from the Mikulski Archive for Space Telescopes (MAST) at the Space Telescope Science Institute,
which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract
NAS-5-03127 for JWST. The specific observations analyzed can be accessed via
[DOI:10..17909/d60d-mh65]https://doi.org/10..17909/d60d-mh65 with the DataSet Title: “MIRI/MRS WL 20”,
and [DOI:10.17909/pqce-5432]https://doi.org/10.17909/pqce-5432 with the DataSet Title: “MIRI/MRS Background used for WL 20 Data.”
The JWST MIRI data are from Program ID 01236, PI: Mike Ressler.
The following national and international funding agencies funded and supported the MIRI development:
NASA; ESA; Belgian Science Policy Office (BELSPO);
Centre Nationale d'Études Spatiales (CNES);
Danish National Space Center;
Deutsches Zentrum für Luft- und Raumfahrt (DLR);
Enterprise Ireland;
Ministerio de Economiá y Competividad;
The Netherlands Research School for Astronomy (NOVA);
The Netherlands Organization for Scientific Research (NWO);
Science and Technology Facilities Council;
Swiss Space Office;
Swedish National Space Agency; and UK Space Agency.
This paper makes use of the following ALMA data: ADS/JAO.ALMA#2019.1.01792.S and
ADS/JAO.ALMA#2022.1.01734.S.
ALMA is a partnership of ESO (representing its member states), NSF (USA,) and
NINS (Japan) together with NRC (Canada), MOST and ASIAA (Taiwan), and KASI (Republic of South Korea), in cooperation with the
Republic of Chile. The joint ALMA Observatory is operated by ESO, AUI/NRAO, and NAOJ. This research has made
use of NASA's Astrophysics Data System Bibliographic Services, as well as the SIMBAD database,
operated at CDS, Strasbourg, France.
Software: Numpy <cit.>; Astropy, a community-developed
core Python package for Astronomy <cit.>, Matplotlib <cit.>, and SuperMongo by Robert Lupton and Patricia Monger
(https://www.astro.princeton.edu/ rhl/sm/ ).
JWST(MIRI MRS), ALMA
aasjournal
|
http://arxiv.org/abs/2409.03208v1 | 20240905030319 | Boundary dissipative spin chains with partial solvability inherited from system Hamiltonians | [
"Chihiro Matsui",
"Naoto Tsuji"
] | cond-mat.stat-mech | [
"cond-mat.stat-mech"
] |
Optimal Regularity for Fully Nonlinear Nonlocal Equations with Unbounded Source Terms
Disson S. dos Prazeres and Makson S. Santos
Received 16 July 2024; accepted 04 September 2024
=====================================================================================
Abstract
Partial solvability plays an important role in the context of statistical mechanics, since it has turned out to be closely related to the emergence of quantum many-body scar states, i.e., exceptional energy eigenstates which do not obey
the strong version of the eigenstate themalization hypothesis.
We show that partial solvability of a quantum many-body system can be maintained even when the system is coupled to boundary dissipators under certain conditions.
We propose two mechanisms that support partially solvable structures in boundary dissipative systems: The first one is based on the restricted spectrum generating algebra, while the second one is based on the Hilbert space fragmentation.
From these structures,
we derive exact eigenmodes of the Gorini-Kossakowski-Sudarshan-Lindblad equation
for a family of quantum spin chain models with boundary dissipators,
where we find various intriguing phenomena arising from the partial solvability of the open quantum systems, including persistent oscillations (quantum synchronization) and the existence of the matrix product operator symmetry.
We discuss how the presence of solvable eigenmodes affects long-time behaviors of observables in boundary dissipative spin chains based on numerical simulations using the quantum trajectory method.
0
We consider two kinds of partially solvable Hamiltonians coupled to boundary dissipators, for each of which we introduce a mechanism inducing partial solvability under the time evolution given by the Gorini-Kossakowski-Sudarshan-Lindblad equation.
The first type consists of the Hamiltonians which admit the restricted spectrum generating algebra. We show that several solvable energy eigenstates of such Hamiltonians do not feel the effect of boundary dissipators and therefore become the solvable eigenmodes of the Liouvillians. The corresponding Liouvillians exhibit long-lived oscillations in the expectation values of observables, since they inherit the restricted spectrum generating algebras of the Hamiltonians.
The second type is provided by the Hamiltonian with the Hilbert space fragmentation with embedded integrability. We show that the boundary dissipators coupled to such a Hamiltonian can be regarded as partial integrability non-violating perturbations in the thermofield double formalism, when they act as quasiparticle baths which dope and absorb quasiparticles. In the solvable subspace, this Liouvillian can be mapped to two decoupled integrable spin chains whose eigenvalues and eigenvectors are exactly calculated via the Bethe ansatz method.
§ INTRODUCTION
Integrability of isolated quantum systems has been studied for a long time. Since the achievement by H. Bethe <cit.> who has derived the exact eigenfunctions for the Heisenberg spin chain, the method, now called the (coordinate) Bethe ansatz, has become a powerful tool for systematically constructing eigenfunctions of interacting many-body systems.
What lies behind the Bethe ansatz is the decomposability of a many-body scattering into “consistent" two-body scatterings, that is, physics does not depend on a way to decompose a many-body scattering into two-body ones. This property, which is often referred to as the definition of “quantum integrability", is guaranteed by the existence of the R-matrix that solves the Yang-Baxter equation (YBE) <cit.>.
Once the mathematical background of integrable systems has been understood, various methods have been invented for calculating energy spectra, form factors, correlation functions, etc. The methods include, e.g., the algebraic Bethe ansatz <cit.>, the vertex-operator approach <cit.>, the separation of variables <cit.>, and the off-diagonal Bethe ansatz <cit.>.
On the other hand, there exists another class of solvable systems, in which only a part of the energy spectrum is analytically accessible. We shall call such a system “partially solvable".
Partial solvability is mostly understood as an extra symmetry of the Hamiltonian that emerges only in a subspace W of the entire Hilbert space ℋ. In other words, the Hamiltonian restricted in W satisfies extra commutation relations induced by the extra symmetry.
If the restricted Hamiltonian is integrable,
such symmetry-induced commutation relations are derived from
infinitely many commuting transfer matrices originating from the YBE, leading to the existence of infinitely many conserved quantities. However, in most of the known cases of partially solvable systems, the extra symmetry is specified by a much
simpler algebraic relation such as “the restricted spectrum generating algebra (rSGA)" <cit.>,
which holds in the subspace W.
Examples of the partially solvable systems that exhibit the rSGA include the perturbed spin-1 XY model <cit.> and the Affleck-Kennedy-Lieb-Tasaki (AKLT) model <cit.>.
One can construct energy eigenstates in the solvable subspace
by applying the spectrum generating operator Q (satisfying the rSGA with the Hamiltonian) to a simply constructed energy eigenstate.
Another known mechanism which may induce partial solvability is the Hilbert space fragmentation (HSF) <cit.>.
The HSF is defined as the fragmentation of the total Hilbert space into exponentially many invariant subspaces ℋ_r (“Krylov subspaces") induced by the Hamiltonian,
ℋ = ⊕_r ℋ_r,
provided that such a decomposition is not caused by an obvious local symmetry of the Hamiltonian.
The fragmentation structure is now mathematically understood in terms of “the commutant algebra”<cit.> for the Hamiltonian,
which also tells the number of Krylov subspaces induced by the Hamiltonian.
Although the HSF is not necessary related to the notion of integrability, some models admit both the HSF and integrability (Fig. <ref>(a)) <cit.>. A representative example is the XXC model <cit.>, which has been introduced as a new type of integrable systems without referring to the HSF structure.
As will be shown in <cit.>,
the Hamiltonian of the XXC model has the fragmentation of the Hilbert space into exponentially many invariant subspaces, according to “frozen" partial spin configurations (which we call “irreducible strings (IS)" in the main text.)
Surprisingly, the integrable Hamiltonian with the HSF structure can be deformed in such a way that keeps its integrability in a selected subspace among exponentially many of those (Fig. <ref>(b), (c)) <cit.>. The key idea is to choose a perturbation that vanishes
in a selected subspace and hence does not violate
integrability in this subspace (Table <ref>), although integrablity in the entire Hilbert space is in general lost by such a perturbation.
Based on a similar idea, we will introduce a variety of perturbations that keeps integrability in a selected subspace in the main text. A remarkable fact is that partial solvability in a selected subspace holds even under site-dependent integrability-breaking perturbations, since these perturbations are irrelevant in a selected subspace (a similar idea for Hamiltonians with rSGA can be found in <cit.>).
0
0
Partially solvable systems are now intensively studied especially in the context of thermalization.
For instance, the solvable subspace W is an invariant subspace of the Hamiltonian, and therefore, any state in W never reaches the other subspaces during time evolution. This is a typical example of “weak ergodicity breaking" in the Hilbert space, which may be considered as a necessary condition for the emergence of “quantum many-body scar (QMBS) states" <cit.>, i.e., non-thermal states in a non-integrable
system. Indeed, many of the QMBS states have been found to be exactly solvable energy eigenstates of non-integrable systems. Several examples can be found in <cit.>.
Another remarkable feature in partially solvable systems is a persistent oscillation of local observables <cit.>. This phenomenon is understood as a consequence of a large overlap between an initial state and solvable energy eigenstates forming equally-spaced energy spectra imposed by the rSGA.
Existence of long-lived oscillations implies that the system neither thermalizes nor relaxes
to any steady state.
Motivated by these atypical behaviors of partially solvable systems, we focus on a question of whether an open quantum system can also admit partial solvability, and if so, what are characteristic phenomena in partially solvable open quantum systems. Let us consider open quantum systems that evolve according to the Gorini-Kossakowski-Sudarshan-Lindblad (GKSL) equation <cit.>.
The GKSL equation constitutes the most general form of the completely-positive-trace-preserving map under the assumptions that time evolution is Markovian (sometimes one also assumes an initial state to be a product state of the system and environment).
Recent progress unveils several solvability mechanisms for the GKSL equation, which are classified into two types (Table <ref>): The first one consists of completely solvable Liouvillians whose spectra are fully accessible by analytic methods <cit.>, while the second class consists of partially solvable Liouvillians whose spectra are only partially accessible <cit.>.
Most of the known partially solvable Liouvillians are solvable only for the steady state, leaving the other eigenmodes including the slowest decaying mode unsolvable (see, e.g., Ref. <cit.> and references therein). Only a few examples are known as partially solvable Liouvillians with a no-less-than-two-dimensional solvable subspace induced by rSGA <cit.>, but in those examples the dissipators are coupled to all the sites of the system.
In this paper, we are especially interested in extending the notion of partial solvability for closed quantum systems to open quantum systems. Our target is an open quantum system with partial solvability inherited from a partially solvable system Hamiltonian, whose solvable energy eigenstates are robust against boundary dissipators.
One way to realize such an open quantum system is to employ a partially solvable system with rSGA, some of whose solvable eigenstates vanish by all the boundary dissipators.
We find that those solvable energy eigenstates may exist for the AKLT-type Hamiltonians when they couple to boundary dissipators injecting spin-2 quasiparticles.
Another way is to employ a system Hamiltonian with the HSF structure and absorb the boundary dissipators as integrability-preserving
perturbations in a selected subspace.
We find that there exist site-dependent perturbations that keep integrability in the selected subspace, and hence certain boundary dissipators can be interpreted as integrability-preserving
perturbations in the selected subspace of the integrable doubled spin chain (Fig. <ref>(c)).
As we will obtain in the main text, such a system may be realized for the perturbed XXC model whose edges are attached to quasiparticle baths.
In these partially solvable open quantum systems, the solvable subspace of the system Hamiltonian is
inherited to a solvable subspace of the Liouvillian. Therefore, the partially solvable Liouvillians constructed in the above ways have not only the solvable steady states but also the solvable eigenmodes. As a result, we observe characteristic behaviors such as persistent oscillations, which can never be obtained for non-integrable Liouvillians.
The rest of this paper is organized as follows. In Sec. <ref>, we give a review of partial solvability for closed quantum systems, and present our original results on site-dependent perturbations and an integrable subspace encoded by the period-three IS (see Sec. <ref>).
We explain two mechanisms of the partial solvability, the rSGA and HSF, together with some observations about the mechanism to incorporate the HSF and the integrability-preserving
perturbations in a selected subspace. Section <ref> is the main part of this paper, which is devoted to partial solvability of open quantum systems. We show how the partial solvability of the system Hamiltonian can be inherited to open quantum systems, provided that the system shows either the rSGA or HSF.
In the latter case, we take “the thermofield double (TFD) formalism" <cit.>, which maps a density matrix defined in ℋ⊗ℋ^† to a vector in the doubled Hilbert space ℋ⊗ℋ^*.
The action of the Liouvillian can be expressed as the Hamiltonian for the two decoupled integrable spin chains in the solvable subspace, if the quantum jump terms are irrelevant due to the HSF.
We also demonstrate numerical results for some of the models to see how solvable eigenmodes in open quantum systems affect long-time behaviors of observables.
The concluding remarks are given in Sec. <ref>, in which some open questions and possible future works are listed.
0
In this paper, we are especially interested in the partially solvable open quantum systems whose partial solvability comes from bulk partial solvability. Our focuses is the partially solvable spin-1 model equipped with the rSGA. Then we set the quasiparticle baths on its ends, in such a way that the system feels quasiparticles are coming in from both ends.
Under this situation, one may naively expect that the system reaches the fully-polarized steady state and this is the only steady state. But what we have found is that any diagonal ensemble consisting of solvable energy eigenstates can be the steady state.
Emergence of this unexpected infinitely degenerate steady states is induced by the existence of the exactly solvable subspace whose elements vanish under the dissipators.
Such density matrices which do not feel the dissipators has first been introduced for the optimal system by the name the dark states <cit.>, since those do not emit photons. Nowadays, the dark state |D ⟩ is defined as a state which satisfies
H |D ⟩ = E |D ⟩,
L_α |D ⟩ = 0, ∀α,
which are the sufficient conditions for the state |D ⟩ to vanish by the dissipators. We instead propose another mechanism to produce the state which does not feel the dissipators, by showing that the constructed steady states vanish by the dissipators, although they survive by each quantum jump operator.
It is natural to expect that, by the fact that there exists the solvable subspace which does not feel the dissipators, the elements in this solvable subspace inherits many interesting bulk properties observed as a result of the rSGA structure in the system Hamiltonian. One representative feature is the persistent oscillations, corresponding to the persistent oscillations obtained for the closed quantum systems equipped with the rSGA-based QMBS <cit.>. Actually, similar phenomena have been first reported for another partially solvable quantum system <cit.>, although its partial solvability does not come from the bulk partial solvability but from the cooperation between the Hamiltonian and dissipation operator.
Consequently, we have theoretically confirm that the persistent oscillations emerge for our open system only when the initial state is prepared as a superposition of solvable energy eigenstates of H, contrary to the known example <cit.> in which the similar oscillation can be observed by stating from any superposition of energy eigenstates with real coefficients. Besides, we numerically verify the long-lived oscillations for the ℤ_2-invariant initial state, which indeed is the inherited feature from the bulk <cit.>.
Besides the non-integrable models, several integrable models can also admit the HSF structure <cit.>. Curiously, some models succeeded to keep their integrability even in several Krylov subspaces even by adding integrability-breaking perturbations <cit.>. Suppose we look for the perturbations that keeps integrability in a given subspace ℋ_0. Then these perturbations can be any as long as they vanish under the projector onto the integrable subspace ℋ_0. That means, these integrability-breaking perturbations can be site-dependent, unlike the conventionally considered examples most of which are translationally invariant, even by admitting several choices for impurities.
Therefore, our plan for this paper is first to show the construction of integrable models with the HSF, and then provide the examples of partially integrability-breaking perturbations for each model.
Another interesting aspect of the partially solvable models with site-dependent perturbations can be observed in the relation to open quantum systems. Especially, recent developments in the open quantum systems whose time evolution obeys the Gorini2013Kossakowski2013Sudarshan2013Lindblad (GKSL) equation achieved classification of their solvability <cit.>.
In spite of great success in clarifying their solvability mechanism, translation invariance is often imposed when discussing solvability of Liouvillians, i.e. most of the considered examples couple to dissipators at every site. Such a setup is not very useful from the viewpoint of experimental accessibility.
For this reason, we aim to realize the partially solvable open quantum system coupled to dissipators only at its boundaries, which we expect is easily examined experimentally.
In order to examine time evolution of open quantum systems which obeys the GKSL equation, “the thermofield double (TFD) vector" formalism <cit.> is often employed, in which a density matrix defined in the ℋ⊗ℋ^† is mapped to a vector in the doubled Hilbert space ℋ⊗ℋ^* by an isomorphism
φ: ℋ⊗ℋ^†→ℋ⊗ℋ^*,
ρ = ∑_m,n p_m,n |m ⟩⟨ n|
↦ |ρ⟩⟩ = ∑_m,n p_m,n |m ⟩⊗ |n ⟩^*.
Subsequently, the Liouvillian is mapped to the non-Hermitian Hamiltonian in the doubled Hilbert space.
In the TFD formalism, a Liouviillian with boundary dissipators is mapped to a periodic non-Hermitian Hamiltonian with two impurities corresponding to boundary dissipators. Thus, one can regard this boundary-driven open system as the closed periodic chain with local impurities.
This fact evokes us to apply the idea of site-independent partially integrability-breaking perturbations arises in the context of the HSF to constructing partially solvable Liouvillians.
We show that, indeed, several choices of boundary dissipators can match the form of partially integrability-breaking perturbations.
In this paper, we propose two mechanisms for robustness of the invariant subspace consisting of QMBS of the bulk Hamiltonian against certain boundary dissipators. The first one is “the boundary cancellation mechanism" on QMBS in the matrix product forms. The second one is the HSF with the invariant subspace in which the dissipators are irrelevant.
Quantum thermalization is one of the most mysterious phenomena in statistical mechanics. People have devoted efforts to understand quantum thermalization based on quantum mechanics over decades and recently big progresses have been achieved by the reinterpretation of eigenstate thermalization hypothesis (ETH) <cit.>.
While classical thermalization is understood based on two hypotherical mechanisms the principle of equal probability, which says every microscopic states in the energy shell are realized with the equal probability, and typicality, which requires for almost all microscopic states in the energy shell to be macroscopically undistinguishable from the thermal state, quantum thermalization is now believed to be understood by the following two mechanisms. The first one is ergodicity in the Hilbert space, which corresponds to the principle of equal probability for classical statistical mechanics. This guarantees for a quantum system to relax to a certain steady state. As fluctuations of a local observable by time around its time average are evaluated by
lim_T →∞1/T∫_0^T |⟨ O ⟩(t) - ⟨ O ⟩_ d |^2 dt ≤|O||^2/D_ eff,
D_ eff = ( ∑_j |c_j|^4 )^-1
the right-hand side, the upper bound of fluctuations, indeed vanishes in the thermodynamic limit if the initial state is prepared to be a superposition of many enough energy eigenstates <cit.>, which guarantees ergodicity in the Hilbert space.
The second one is the eigenstate thermalization hypothesis (ETH) <cit.>, which is a quantum analog of classical typicality. ETH states that all or almost all, depending on how much strong the statement is, energy eigenstates cannot be distinguished from the thermal state, as long as their eigenenergies are in a macroscopically small interval. This is rather strong statement, and indeed, recent studies show that there exist exceptional non-thermal energy eigenstates in various systems <cit.>.
These exceptional non-thermal energy eigenstates usually appear in systems which do not thermalize, such as integrable sytems or many-body localized systems <cit.>. However, recently it has been discovered that such non-thermal energy eigenstates show up in systems which do thermalize <cit.>. These states are called quantum many-body scars (QMBS), which are intensively studied these five years.
The first paper which reported the emergence of QMBS is <cit.>, in which persistent revivals have been observed in the domain-wall density on the Rydberg atom quantum chain. Nowadays, emergence of such long-lived revivals is theoretically understood from the algebraic viewpoint as a result of the hidden restricted spectrum generating algebra (rSGA) <cit.>. The restricted spectrum generating algebra is a kind of dynamical symmetry which requires for the presence of a local operator Q which does not commute with the Hamiltonian but satisfies
[H, Q] - ω Q |_W = 0, ω∈ℝ
in a subspace of the entire Hilbert space W ⊂ℋ.
Therefore, once we find an exactly solvable energy eigenstate ψ_0 ⟩, we can subsequently construct the other energy eigenstates from ψ_0 ⟩ by applying Q. In other words, the solvable energy eigenstates constructed in this way form the invariant subspace W. These energy eigenstates exhibit the spectrum equally spaced by ω, which perfectly explains why the oscillations we just mentioned emerge.
Since the solvable subspace constructed in this way is negligibly small in the thermodynamic limit, the model equipped with the rSGA weakly violates ergodicity in the Hilbert space.
For this reason, exactly solvable energy eigenstates in a partially solvable model are the candidates of QMBS.
Another characteristic feature of QMBS is relatively small entanglement entropies compared to those of the thermal state, which grows by the volume law. While this has been numerically tested for most cases, some partially solvable models are known to have the exact zero-energy eigenstate in a matrix product form and quasiparticle excitation states on the top of it. As the entanglement entropy of a matrix product state is bounded from above by its bond dimension, these states have constant entanglement entropies as long as the bond dimensions are fixed.
Indeed, a variety of theoretical models has been proposed as the examples of partially solvable models due to the rSGA <cit.>, all of which have the matrix product zero-energy eigenstate.
Now we turn to open quantum systems. Stimulated by the recent developments in understanding QMBS, it is natural to ask what are the counterparts to QBMS in open systems.
The time evolution of the open quantum system is governed by the Liouvillian ℒ:
d/dtρ(t) = ℒ(ρ(t)).
Actually, solvability of open quantum systems has been intensively studied these days as well, especially when the system obeys the Gorini2013Kossakowski2013Sudarshan2013Lindblad (GKSL) equation.
Solvability of the GKSL equation is classified into two types. Liouvillians of the first type are completely solvable by ***. In the second type, on the other hand, systems are partially solvable in which only steady states are exactly solvable.
In this paper, we look for more general examples of partially solvable open systems in which, besides the steady states, several eigenmodes are exactly solvable.
A few examples of partially solvable Liouvillians can be found in the literatures <cit.>. The first one discusses ***
The Hilbert space fragmentation (HSF) is known as one of the mechanisms which induces the non-thermalizing property of non-integrable closed quantum systems.
The HSF has first been introduced in <cit.> and the mechanism for their emergence has been formulated in the context of the models with dipole-momentum conservation <cit.>.
One can say that the Hamiltonian exhibits the HSF when the Hilbert space are divided into the exponentially many invariant subspaces (“the Krylov subspaces") of the Hamiltonian
ℋ = ⊕_r ℋ_r
and this decomposition is not caused by the obvious local symmetry of the Hamiltonian.
Later in <cit.>, the mathematical structure behind the HSF has been explained by introducing “the commutant algebra", which also tells the number of Krylov subspaces induced by the Hamiltonian.
Besides the non-integrable models, several integrable models can also admit the HSF structure <cit.>. Curiously, some models succeeded to keep their integrability even in several Krylov subspaces even by adding integrability-breaking perturbations <cit.>. Suppose we look for the perturbations that keeps integrability in a given subspace ℋ_0. Then these perturbations can be any as long as they vanish under the projector onto the integrable subspace ℋ_0. That means, these integrability-breaking perturbations can be site-dependent, unlike the conventionally considered examples most of which are translationally invariant, even by admitting several choices for impurities.
Therefore, our plan for this paper is first to show the construction of integrable models with the HSF, and then provide the examples of partially integrability-breaking perturbations for each model.
Another interesting aspect of the partially solvable models with site-dependent perturbations can be observed in the relation to open quantum systems. Especially, recent developments in the open quantum systems whose time evolution obeys the Gorini2013Kossakowski2013Sudarshan2013Lindblad (GKSL) equation achieved classification of their solvability <cit.>.
In spite of great success in clarifying their solvability mechanism, translation invariance is often imposed when discussing solvability of Liouvillians, i.e. most of the considered examples couple to dissipators at every site. Such a setup is not very useful from the viewpoint of experimental accessibility.
For this reason, we aim to realize the partially solvable open quantum system coupled to dissipators only at its boundaries, which we expect is easily examined experimentally.
In order to examine time evolution of open quantum systems which obeys the GKSL equation, “the thermofield double (TFD) vector" formalism <cit.> is often employed, in which a density matrix defined in the ℋ⊗ℋ^† is mapped to a vector in the doubled Hilbert space ℋ⊗ℋ^* by an isomorphism
φ: ℋ⊗ℋ^†→ℋ⊗ℋ^*,
ρ = ∑_m,n p_m,n |m ⟩⟨ n|
↦ |ρ⟩⟩ = ∑_m,n p_m,n |m ⟩⊗ |n ⟩^*.
Subsequently, the Liouvillian is mapped to the non-Hermitian Hamiltonian in the doubled Hilbert space.
In the TFD formalism, a Liouviillian with boundary dissipators is mapped to a periodic non-Hermitian Hamiltonian with two impurities corresponding to boundary dissipators. Thus, one can regard this boundary-driven open system as the closed periodic chain with local impurities.
This fact evokes us to apply the idea of site-independent partially integrability-breaking perturbations arises in the context of the HSF to constructing partially solvable Liouvillians.
We show that, indeed, several choices of boundary dissipators can match the form of partially integrability-breaking perturbations.
In this paper, we propose two mechanisms for robustness of the invariant subspace consisting of QMBS of the bulk Hamiltonian against certain boundary dissipators. The first one is “the boundary cancellation mechanism" on QMBS in the matrix product forms. The second one is the HSF with the invariant subspace in which the dissipators are irrelevant.
§ PARTIALLY SOLVABLE CLOSED SPIN CHAINS
In this section, we mostly give an overview of partial solvability for closed quantum systems.
In Sec. <ref>, we propose the existence of site-dependent perturbations to a certain integrable Hamiltonian that preserve integrability in a selected subspace, which is our original result.
We mainly consider
s=1 spin chains with nearest-neighbor interactions as an example. The Hamiltonian can be written as
H = ∑_j=1^N h_j,j+1,
h_j,j+1 = 1⊗⋯⊗j,j+1h⊗⋯⊗1,
h = ∑_s,t,s',t'=0^2 h^s,s'_t,t' |tt' ⟩⟨ ss'|,
where h is a local Hamiltonian acting on two neighboring s=1 spins whose states are labeled by |tt'⟩ (t,t'=0,1,2), and N is the number of lattice sites.
Among several mechanisms that produce partial solvability of closed quantum systems, we focus on the restricted spectrum generating algebra (rSGA) and the Hilbert space fragmentation (HSF).
§.§ Restricted spectrum generating algebra
The notion of the spectrum generating algebra (SGA), or also called the dynamical symmetry, has been introduced in various contexts <cit.>. In this paper, we shall say that the model has the SGA if there exists a spectrum generating operator Q^† that satisfies an algebraic relation,
[H, Q^†] - ℰ Q^† = 0,
for a real constant ℰ.
If a model has the SGA and some of its energy eigenstates are known, one can construct towers of eigenstates by applying the operator Q^† to those known states repeatedly.
The observed energy spectrum is then equally spaced with the interval ℰ due to the algebraic relation (<ref>).
One of the simplest examples that show the SGA is a free-fermion model. The Hamiltonian (denoted by H_ FF) is diagonalized in the momentum space,
H_ FF = ∑_k Λ_k η_k^†η_k,
with real eigenvalues Λ_k and a fermion creation operator η_k^†,
and thus satisfies the SGA,
[H_ FF, η_k^†] = Λ_k η_k^†,
due to the anti-commutation relations for η_k^† and η_k.
Then all the energy eigenstates are created by applying the fermion creation operators η_k^† to the obvious vacuum state |0 ⟩, i.e., the zero-energy eigenstate,
H_ FF η_k_1^†⋯η_k_n^† |0 ⟩ = (Λ_k_1 + ⋯ + Λ_k_n) η_k_1^†⋯η_k_n^† |0 ⟩.
In this example, the fermion creation operator plays a role of the spectrum generating operator for each mode k.
Sometimes there appears the SGA only in a subspace W of the entire Hilbert space ℋ,
[H, Q^†] - ℰ Q^†|_W = 0, W ⊂ℋ.
We call this type of the SGA structure that emerges only in the subspace W “the restricted spectrum generating algebra (rSGA)".
As in the case of the SGA that holds in the entire Hilbert space, one can construct a tower of energy eigenstates in the subspace W by repeatedly applying the operator Q^† to an obvious energy eigenstate |Ψ_0 ⟩ (if it may exist).
One of the simplest examples equipped with the rSGA is the perturbed spin-1 XY model <cit.>, whose local Hamiltonian is given by
h_j,j+1^XY = J/2 (S_j^+ S_j+1^- + S_j^- S_j+1^+) + m/2 (S_j^z + S_j+1^z),
where S_j^±=S_j^x± iS_j^y and S_j^z are spin-1 operators, and J and m are real coefficients.
This model is considered to be non-integrable, and therefore, the energy eigenstates are in general not exactly solvable. However, one can easily find that the fully-polarized state |22… 2 ⟩ (and |00… 0 ⟩) is obviously an energy eigenstate with the eigenenergy -mN (mN). Accordingly, one can derive some of the excited states exactly
in the form of (Q^†)^n |22… 2 ⟩ (that allows a quasiparticle picture),
where Q^† creates quasiparticles of spin-2 magnons carrying momentum k=π
Q^† = ∑_x=1^N (-1)^x (S_x^+)^2.
The spin-2 magnon creation operator Q^† then satisfies the rSGA,
[H_XY, Q^†] + 2m Q^†|_W = 0,
in the subspace W
spanned by the quasiparticle excitation states
W = span{ (Q^†)^n |22… 2 ⟩}_n ∈{0,1,…,N}.
The rSGA produces the equally-spaced eigenvalue spectrum in the solvable subspace W, which can
explain the persistent oscillation observed in the Loschmidt echo <cit.>. This implies that the perturbed spin-1 XY model never relaxes to any steady state, if the initial state has large enough overlap with the solvable subspace W.
A more involved example exhibiting the rSGA is a family of the spin-1 Affleck-Kennedy-Lieb-Tasaki (AKLT)-type model <cit.>. The local Hamiltonian is given by
h_j,j+1^ AKLT = 1/2 h^00_00 (S_j^x S_j+1^x + S_j^y S_j+1^y + S_j^z S_j+1^z)
- ( 1/2 h^00_00 + a_1^2/a_0 a_2 h^11_11) (S_j^x S_j+1^x + S_j^y S_j+1^y + S_j^z S_j+1^z)^2
- ( 1/2 h^00_00 - a_1^2/a_0 a_2(a_1^2/a_0 a_2 - 1 ) h^11_11) (S_j^x S_j+1^x + S_j^y S_j+1^y)^2
- ( h^00_00 + ( a_1^2/a_0 a_2 - 1 ) h^11_11) (S_j^z S_j+1^z)^2
+ ( 1/2 h^00_00 + ( a_1^4/a_0^2 a_2^2 - 1 ) h^11_11) ((S_j^z)^2 + (S_j+1^z)^2)
+ ( 1 - 2 a_1^4/a_0^2 a_2^2) h^11_11,
in which h^00_00 and h^11_11 are free real parameters, while a_0, a_1, and a_2 are free complex parameters under the condition that a_1^2/(a_0 a_2) ∈ℝ. Note that the original AKLT model is realized by choosing h^11_11 / h^00_00 = 2/3 and a_0 = -√(2) a_1 = -a_2 = √(2/3).
Although this Hamiltonian is non-integrable, it has been known for a long time that the ground state and some of the excited states are exactly solvable <cit.>, to which a new list of solvable excited states has been added recently in the context of the QMBS states <cit.>.
The zero-energy state
of the AKLT-type model is written in the form of the matrix product state,
|Ψ_0 ⟩ = ∑_m_1,… m_N ∈{0,1,2} tr_a (A_m_1⋯ A_m_N) |m_1 … m_N ⟩,
with the frustration-free condition
h_j,j+1^ AKLTA⃗_j A⃗_j+1 = 0,
A⃗_j = [ A_0; A_1; A_2 ]_j,
which holds for j = 1,2,…,N.
The matrix-valued elements A_0, A_1, and A_2 are given by the Pauli matrices
A_0 = a_0 σ^+,
A_1 = a_1 σ^Z,
A_2 = a_2 σ^-,
σ^± = σ^x ± i σ^y,
in which
the coefficients a_0, a_1, and a_2 are the same as in the Hamiltonian (<ref>). Therefore, the bond dimension of this matrix product state is two.
Besides the ground state, it has been found that several excited states are solvable, most of which admit the quasiparticle description <cit.>. Here we focus on the excited states expressed by the spin-2 magnons carrying momentum k = π. They are created by the operator given in (<ref>),
which also satisfies the rSGA for the AKLT-type Hamiltonian,
[H_ AKLT, Q^†] - 2h^00_00 Q^†|_W = 0,
in the subspace W ⊂ℋ spanned by the quasiparticle excited states
W = span{ (Q^†)^n |Ψ_0 ⟩}_n ∈{0,1,…,⌊ N/2 ⌋}.
Thus, the quasiparticle creation operator plays a role of the spectrum generating operator that creates a tower of solvable energy eigenstates on top of the matrix product
zero-energy
state (<ref>).
Recent studies on partial solvability have provided more formal understanding about the emergence of the rSGA in terms of quasisymmetry <cit.> for the former example and its deformation for the latter example <cit.>.
§.§ Hilbert space fragmentation
Hilbert space fragmentation (HSF) is characterized by exponentially many block-diagonal structures of the Hamiltonian H, which are caused by non-obvious symmetries of H. Due to the block-diagonal structure of the Hilbert space, each subsapce is never accessed from the other subspaces by time evolution.
Among several mechanisms for HSF <cit.>, we focus on the fragmentation observed for the Hilbert space of spin chains due to the presence of “frozen" spin configurations under the action of a Hamiltonian. Those “frozen" spin configurations are called “irreducible strings" (IS) <cit.>, which are associated with a non-obvious symmetry of the Hamiltonian.
In order to explain the HSF induced by IS, let us
consider spin chains with arbitrary spin-s
(instead of spin-1).
Suppose that we have a spin-s chain with nearest-neighbor interactions, whose local Hamiltonian is given by
h^ HSF = ∑_s,s' ∈ A h^s,s'_s,s' |ss' ⟩⟨ ss'|
+ ∑_t,t' ∈ B h^t,t'_t,t' |tt' ⟩⟨ tt'|
+ ∑_s ∈ A, t ∈ B(h^s,t_t,s |ts ⟩⟨ st| + h^t,s_s,t |st ⟩⟨ ts| + h^s,t_s,t |st ⟩⟨ st| + h^t,s_t,s |ts ⟩⟨ ts|).
Here A and B represent subsets of the labels for local states A,B ⊂{0,1,… 2s}, which satisfy A ∪ B = {0,1,… 2s} and A ∩ B = ∅. We also call the labels for local states “species".
Since the Hamiltonian given in the form of Eq. (<ref>) never exchanges the species in each subset A or B, the configuration (i.e., IS) in each subset A and B is “frozen" under the action of the Hamiltonian.
The existence of such a frozen partial configuration causes fragmentation of the Hilbert space, i.e., the the Hilbert space is fragmented according to the partial configurations in the subsets A and B.
In each of the fragmented subspaces, one can observe that a spin-1/2 model (with two local states) is embedded in the following way.
The entire Hilbert space of the spin-s chain is ℂ^(2s+1)N, which consists of the tensor product of N local linear spaces ℂ^2s+1 spanned by the (2s+1) basis vectors |0 ⟩, |1 ⟩,…,|2s+1 ⟩ corresponding to the spin degrees of freedom. In this basis, a state in ℂ^(2s+1)N is labeled by the spin configuration.
Alternatively, one can use another basis
with local states labeled by A and B, and configurations realized within A and B
(Fig. <ref>).
Suppose that the subset A consists of N_A species and B consists of N_B (= (2s+1) - N_A) species. The total degree of freedom is then calculated as
∑_n=0^N [ N; n ]· N_A^n N_B^N-n = (N_A +N_B)^N,
which matches the dimension of the Hilbert space in the first description (dim ℂ^(2s+1)N), indicating that the two different bases describe the same Hilbert space.
For the Hamiltonian with HSF (<ref>),
the configurations within A and B (i.e., IS) are frozen, while the labels A and B are unconstrained. The (A, B) degrees of freedom form the N-fold tensor product of the two-dimensional local linear space ℂ^2.
The projection onto such a 2^N-dimensional subspace
is defined by restricting to the configurations in A and B specified by IS (denoted by a projector P_ IS) and then by identifying all the local states in each of the subspaces A and B (Fig. <ref>).
In this way, the Hamiltonian (<ref>) is reduced to a spin-1/2 chain with the nearest-neighbor interactions.
§.§.§ Completely integrable case
Although the HSF is not necessarily associated with the notion of solvability, it is often possible to embed solvability in one or a small number of subspaces.
From now on, let us for simplicity come back to spin-1 systems.
One representative example is the spin-1 XXC model <cit.> with the local Hamiltonian,
h^XXC = coshη(∑_s,s' ∈{0,2} |ss' ⟩⟨ ss'| + |11 ⟩⟨ 11| )
+ ∑_s ∈{0,2}( |s1 ⟩⟨ 1s| + |1s ⟩⟨ s1| ).
The Hamiltonian exhibits the HSF, without changing the configuration of the labels 0 and 2, and thus the IS for the XXC model is the partial configuration formed by 0 and 2. This local Hamiltonian indeed belongs to the class of (<ref>), by choosing the subsets A = {0,2} and B = {1}.
Although the projector onto the subspaces specified by the IS generally has a site-dependent complicated expression, sometimes it can be expressed in a simple form.
One example is the projector onto the direct sum of the subspaces encoded by the fully-polarized ISs, … 0000 … and … 2222 …, of length n (n=0,…,N). The projectors onto the corresponding subspace are expressed by simple tensor products of the physical spaces,
P^(0)_ pol = ⊗_j=1^N (|0 ⟩⟨ 0| + |1 ⟩⟨ 1|)_j,
P^(2)_ pol = ⊗_j=1^N (|2 ⟩⟨ 2| + |1 ⟩⟨ 1|)_j,
respectively.
Another example is the projector onto the subspace encoded by the alternating IS, … 0202 … <cit.>,
P_ alt
= ∑_0pt(α_1,β_1),…,(α_N,β_N)∈{0,1,2}^2 tr_a (M_α_1,β_1⋯ M_α_N,β_N)
|α_1 …α_N ⟩⟨β_1 …β_N|
- ⊗_j=1^N (|1 ⟩⟨ 1|)_j,
M_0,0 = σ_a^+, M_1,1 = 1_a, M_2,2 = σ_a^-, M_α,β = 0 (α≠β),
in which σ_a^± and 1_a represent the Pauli matrices and the two-by-two unit matrix acting non-trivially on the auxiliary space, respectively. Here the trace tr_a is taken only over the auxiliary space.
This projector cannot be written in a simple tensor product form, but instead can be written in an (almost) matrix product form by introducing the two-dimensional auxiliary space.
Note that the projectors P_ pol^(0), P_ pol^(2), and P_ alt are not orthogonal to each other, but have a common one-dimensional subspace, span{⊗_j=1^N |1 ⟩_j}.
Besides the HSF structure, the XXC model also exhibits integrability, which is guaranteed by the existence of the R-matrix solving the Yang-Baxter equation (YBE),
R_1,2(λ_1,λ_2) R_1,3(λ_1,λ_3) R_2,3(λ_2,λ_3) = R_2,3(λ_2,λ_3) R_1,3(λ_1,λ_3) R_1,2(λ_1,λ_2)
(λ_1,λ_2,λ_3 ∈ℂ),
defined in the three-fold tensor product of the linear spaces V_1 ⊗ V_2 ⊗ V_3. We denote the R-matrix that acts non-trivially on the ith and jth sites by R_i,j, e.g.,
R_1,2(λ_1,λ_2) = R(λ_1,λ_2) ⊗1_3.
The explicit form of the R-matrix for the XXC model can be found in <cit.>.
Integrability of the XXC model is inherited by the projected Hamiltonian onto the subspace specified by a certain IS. Regardless of the choice of IS, any projected Hamiltonian onto the subsapce specified by a certain IS results in the same spin-1/2 integrable XXZ Hamiltonian by identifying |0⟩ and |2⟩,
h_j,j+1^XXCP_ ISℋ∖{ |0⟩,|2⟩}^N⟼coshη( |↑↑⟩⟨↑↑ | + |↓↓⟩⟨↓↓| )
+ ( |↑↓⟩⟨↓↑| + |↓↑⟩⟨↑↓| ),
where we assign up spins to the sites belonging to the subset A and down spins to the sites belonging to the subset B.
This also indicates that the spin-1 XXC Hamiltonian can be diagonalized sector by sector labeled by an IS via the spin-1/2 XXZ Hamiltonian, instead of being diagonalized directly via the XXC R-matrix.
0
Since in the subspace with a fixed IS the XXC Hamiltonian only distinguishes whether each local state belongs to A or B without distinguishing configurations realized in each of the subset A and B, we can consider the projection which maps the XXC Hamiltonian to the reduced Hamiltonian in a given subspace by identifying all configurations in each subsets A or B
h_XXCP_ ISℋ∖ (A^⊗ N B^⊗ N)⟼coshη( |AA ⟩⟨ AA| + |BB ⟩⟨ BB| )
+ ( |AB ⟩⟨ BA| + |BA ⟩⟨ AB| ).
This is nothing but the XXZ Hamiltonian with anisotropy coshη.
Thus, we can diagonalize the spin-s XXC Hamiltonian sector by sector labeled by IS as the spin-1/2 XXZ Hamiltonian, instead by diagonalizing it via the XXC R-matrix.
For simplicity, let us hereafter focus on the spin-1 case again. Then the entire Hilbert space is spanned by tensor product of local spaces spanned by the three fundamental vectors |0 ⟩, |1 ⟩, and |2 ⟩. If we set the subset A and B as A = { 0,2 } and B = { 1 }, the XXC Hamiltonian does not change the configuration of the local states |0 ⟩ and |2 ⟩
h_XXC = coshη( |00 ⟩⟨ 00| + |22 ⟩⟨ 22| + |02 ⟩⟨ 02| + |20 ⟩⟨ 20| + |11 ⟩⟨ 11| )
+ ( |01 ⟩⟨ 10| + |21 ⟩⟨ 12| + |10 ⟩⟨ 01| + |12 ⟩⟨ 21| ).
§.§.§ Partially integrable case
Now we consider perturbations that break entire integrability but keep integrability in a subspace specified by a given IS.
The idea is to find integrability-breaking perturbations in such a way that are irrelevant (or vanish) in a given subspace.
By focusing on systems with nearest-neighbor interactions, we provide two examples of perturbations that violate integrability in the entire Hilbert space but keep integrability in a given subspace. The first one is a perturbation which keeps integrability in the subspace specified by the polarized IS.
In this subspace, any perturbation in a form of
h^ pol(α_ d1,α_ d2,α_ o1,α_ o2,ζ) = α_ d1 |02 ⟩⟨ 02| + α_ d2 |20 ⟩⟨ 20| + α_ o1 |02 ⟩⟨ 20| + α_ o2 |20 ⟩⟨ 02|
+ 2coshζ_1 ∑_s ∈{0,2} |ss ⟩⟨ ss|
+ 2coshζ_2 ∑_s ∈{0,2}( |s1 ⟩⟨ s1| + |1s ⟩⟨ 1s| )
does not violate integrability. Note that these perturbations also preserve the spin-flip invariance. It is easy to check that the first four interactions are irrelevant in the subspace of the polarized IS, since they vanish unless the configuration would include adjacent 0 and 2, which never appears in the polarized subspace.
On the other hand, the interactions in the second line of Eq. (<ref>) act as a uniform external magnetic field in the projected space P_ polℋ∖{|0⟩,|2⟩}^N.
Thus, the entire Hamiltonian acts in the projected space as the spin-1/2 XXZ Hamiltonian with the shifted anisotropy coshη→coshη + coshζ_1 - coshζ_2 and the uniform external magnetic field,
h_j,j+1^XXC + h_j,j+1^ polP_ polℋ∖{|0⟩,|2⟩}^N⟼ σ_j^+ σ_j+1^- + σ_j^- σ_j+1^+ + 1/2(coshη + coshζ_1 - coshζ_2) (σ_j^z σ_j+1^z)
+ 1/2coshζ_1 (σ_j^z + σ_j+1^z)
+ 1/2 (coshη + coshζ_1 + coshζ_2),
which is integrable.
The second example is a perturbation which keeps integrability in the subspace of alternating IS.
In this subspace, any of the following perturbations does not violate integrability,
h^ alt(β_ d1,β_ d2,β_ o1,β_ o2,ζ) = β_ d1 |00 ⟩⟨ 00| + β_ d2 |22 ⟩⟨ 22|
+ β_ o1 |00 ⟩⟨ 22| + β_ o2 |22 ⟩⟨ 00|
+ 2 coshζ_1 (|02 ⟩⟨ 02| + |20 ⟩⟨ 20|)
+ 2coshζ_2 ∑_s ∈{0,2}( |s1 ⟩⟨ s1| + |1s ⟩⟨ 1s| ).
One can see that the first four terms are irrelevant, since they always vanish unless the state includes adjacent 0s or 2s, which never show up in the alternating subspace. On the other hand, the last two terms act as a uniform external magnetic field
in the projected space P_ altℋ∖{|0⟩,|2⟩}^N.
Thus, the entire Hamiltonian acts in this projected space as the spin-1/2 XXZ model with the shifted anisotropy coshη→coshη + coshζ_1 - coshζ_2 and the uniform external magnetic field,
h_j,j+1^ XXC + h_j,j+1^ altP_ altℋ∖{|0⟩,|2⟩}^N⟼ σ_j^+ σ_j+1^- + σ_j^- σ_j+1^+ + 1/2(coshη + coshζ_1 - coshζ_2) (σ_j^z σ_j+1^z)
+ 1/2coshζ (σ_j^z + σ_j+1^z)
+ 1/2 (coshη + coshζ_1 + coshζ_2),
which is again integrable.
0
By concentrating on the nearest neighbor interactions, the perturbation
h'_ alt = h^00_00 |00 ⟩⟨ 00| + h^22_22 |22 ⟩⟨ 22|
+ h^02_02 |02 ⟩⟨ 02| + h^20_20 |20 ⟩⟨ 20|
+ h^22_00 |00 ⟩⟨ 22| + h^00_22 |22 ⟩⟨ 00|
keeps integrability in the subspace of the alternating irreducible strings … 0 2 0 2 0 2 ….
In this subspace, the first two and the last two terms always act as zero, and therefore, the system does not feel the integrability-breaking perturbation in the subspace of the alternating irreducible string. That is, the Hamiltonian after perturbed with these four terms is effectively the XXC model in this subspace.
On the other hand, the Hamiltonian does not act as the XXC model, after adding the third and fourth terms, even in this subspace. However, we can observe that integrability is still there by “identifying" the states |0 ⟩ and |2 ⟩. Indeed, the Hamiltonian with all terms in (<ref>) is regarded as the spin-1/2 XXZ model after identifying the two states |0 ⟩ and |2 ⟩. We checked this survived integrability as emergence of the embedded spin-1/2 XXZ spectrum in the full spectrum of the perturbed XXC model (see Appendix <ref>).
Note that the third and fourth terms in (<ref>) violate not only integrability but also the spin-flip invariance, and consequently, the HSF structure of the entire Hilbert space. With these perturbations, total magnetization is no longer a conserved quantity. It is also worth notifying that both of the models (<ref>) and (<ref>) coincide with the t-J_z model <cit.>, a canonical model that exhibits the HSF.
Another remarkable fact is that partial solvability we discussed above is robust against site-dependent perturbations. For example, one can keep the system integrable in the subspace specified by the alternating IS even when one adds different perturbations on different sites, as long as the perturbations are written in the form of Eq. (<ref>). We will come back to this point later in the discussion of partial solvability for open quantum systems.
Although we have focused on nearest-neighbor interactions so far,
there exist longer-range interactions that keep integrability in the subspace specified by an IS.
For instance, the following three-body interactions
do not violate integrability in the subspace specified by the period-three triplet IS (… 0 0 2 0 0 2 …):
h'_ tri = h^000_000 |000 ⟩⟨ 000| + h^*22_*221⊗ |22 ⟩⟨ 22| + h^22*_22* |22 ⟩⟨ 22| ⊗1 + h^202_202 |202 ⟩⟨ 202|.
In this way, integrability in the subspace specified by the IS with period-p seems to hold under a certain choice of p-body interactions. Also, the projector onto the subspace encoded by the period-three triplet IS can be written in an (almost) matrix product form with a bond dimension three,
P_ tri
= ∑_0pt(α_1,β_1),…,(α_N,β_N)∈{0,1,2}^2 tr_a (M_α_1,β_1⋯ M_α_N,β_N)
|α_1 …α_N ⟩⟨β_1 …β_N|
- 2 ⊗_j=1^N (|1 ⟩⟨ 1|)_j,
M_0,0 = S_a^+, M_1,1 = 1_a, M_2,2 = (S_a^-)^2, M_α,β = 0 (α≠β),
where S_a^± and 1_a represent the spin-1 operators and the three-by-three unit matrix acting non-trivially on the auxiliary space, respectively.
§.§.§ Matrix product operator symmetry
In the previous subsection, we have observed that the integrable spin-1/2 XXZ model can be embedded in one of the fragmented Hilbert space of the XXC model specified by a certain IS. Therefore, it is obviously possible to apply the Bethe ansatz method to construct partially conserved quantities, which are conserved only in the integrable subspace but not in the entire Hilbert space. On the other hand, it has been proposed that a partially solvable model is characterized by “the matrix product operator (MPO) symmetry" <cit.>, which implies the existence of conserved quantities in the matrix product forms with fixed bond dimensions. Surprisingly, these are the conserved quantities not only in the integrable subspace but also in the entire Hilbert space. Since they have the matrix product forms with finite bond dimensions, they exhibit small entanglement entropies.
Concrete examples have been displayed for the XXC model <cit.> under the assumption that the conserved quantity is expressed by the MPO form,
T = tr_a (L_a,N… L_a,1),
L_a,n = 1⊗ |1 ⟩⟨ 1| + ∑_s,t=0,2 L^(s,t)⊗ |s ⟩⟨ t|,
in which 1 and L^(s,t) are two-by-two matrices acting on the auxiliary space.
The key relation in proving that the MPO (<ref>) commutes with the Hamiltonian is the local divergence relation,
[h_j,j+1, L_a,j+1 L_a,j] = M_a,j+1 L_a,j - L_a,j+1 M_a,j,
where M_a,n is another two-by-two matrix.
0
The mechanism that the MPO (<ref>) commutes with the Hamiltonian comes from the local divergence relation:
[h_x,x+1, L_a,x+1 L_a,x] = M_a,x+1 L_a,x - L_a,x+1 M_a,x,
where M is another operator which acts on the tensor product of the two-dimensional auxiliary space and the three-dimensional local physical space. Especially for the Hamiltonian (<ref>) and the MPOs (<ref>) and (<ref>), the local divergence relation (<ref>) is satisfied with M = 0.
For the XXC model with the nearest-neighbor perturbations (<ref>), only M = 0 solves the divergence relation, which is included in the class of “the commutant algebra" discussed in <cit.>.
Two kinds of L can be found as the solution to (<ref>): The first one is the diagonal MPO,
L^(0,0) = [ x y; y z ],
L^(2,2) = [ u 0; 0 v ],
where x,y,z and u,v are free parameters. The second one is the non-diagonal MPO,
L^(0,2) = (L^(2,0))^† = γσ^-,
L^(0,0) = [ α 0; 0 δ ],
L^(2,2) = [ β 0; 0 ε ],
where α, β, γ, δ, ε are free parameters.
Note that the MPO symmetry with M ≠ 0 has been found for the XXC model with longer-range interactions <cit.>, which is not included in the class of the commutant algebra <cit.>.
§ PARTIALLY SOLVABLE OPEN SPIN CHAINS
Our main focus in this paper is partially solvable quantum systems coupled to boundary dissipators, in which the steady state and some eigenmodes are again solvable. In this section, we show two mechanisms to construct those models:
The first one is an rSGA-induced partially solvable system which remains to be partially solvable even in the presence of boundary dissipators under a certain condition. The second one is the HSF-induced partially solvable system, whose HSF structure is partially inherited to the boundary dissipative system.
These are thus examples of partially solvable boundary dissipative systems induced by partial solvability of the Hamiltonian, whose solvable states are robust against the boundary dissipators.
In addition to robustness of partial solvability against boundary dissipators, there are several important questions including what are characteristic features of partially solvable eigemodes, how relaxation processes in the solvable subspace differ from those in a generic case, and how the partially solvable eigenmodes are experimentally realizable.
In order to discuss these points, we consider a partially solvable spin-1 chain coupled to boundary dissipators, whose density matrix ρ evolves according to the Liouvillian ℒ in the GKSL equation,
d/dtρ(t) = ℒ(ρ)
= -i [H, ρ] + ∑_αγ_α𝒟_α(ρ),
𝒟_α(ρ) = A_αρ A_α^† - 1/2{A_α^† A_α, ρ}.
Here H is the system's Hamiltonian and A_α is a quantum jump operator acting on the physical space. We assume that the jump operator A_α non-trivially acts only on the first and Nth sites, representing the effect of a boundary dissipator with dissipation rates γ_α.
In the following discussion, we focus on two different Hamiltonians: the AKLT-type Hamiltonian (<ref>)
and the XXC Hamiltonian (<ref>).
0
In the solvable subspace of the Liouvillian, i.e. equivalently the effective Hamiltonian (<ref>), the dissipation terms always act as zero, and therefore, the dissipators are irrelevant in the solvable subspace. That is, any state in this subspace is a so-called “dark state" <cit.>, which does not feel the dissipators.
In the following subsections, we propose three mechanisms for emergence of exactly solvable eigenmodes induced by bulk partial solvability.
§.§ rSGA induced solvable eigenmodes
The first example in which partial solvability is robust against boundary dissipators is given by an rSGA-induced partially solvable system. Let us consider a system in which a zero-energy state |Ψ_0 ⟩ is analytically known. The rSGA solvability is characterized by the existence of a spectrum generating operator Q^† that satisfies the rSGA (<ref>) with the Hamiltonian in a subspace W spanned by states constructed by applying Q^† to the zero-energy state,
W = { |Ψ_n ⟩}_n,
|Ψ_n ⟩ = (Q^†)^n |Ψ_0 ⟩.
Thus, the states |Ψ_n ⟩ are exactly solvable energy eigenstates of the Hamiltonian with the eigenenergies ℰn.
It often occurs that the solvable excited states admit the single-mode quasiparticle description,
Q^† = ∑_x=1^N e^ikx q^†_x,
in which q_x^† is a local operator non-trivially acting on the xth site.
Based on these, it is natural to ask whether the solvable states can survive even when quasiparticles are injected from both of the edges of the system.
Such a situation is realized by taking the boundary quantum jump operators as
A_ L = q_1^†,
A_ R = q_N^†.
The density matrix ρ_nn for the solvable energy eigenstate |Ψ_n ⟩, as it commutes with the Hamiltonian by definition (i.e., [ρ_nn, H]=0), becomes the steady state of the GKSL equation (<ref>) if
𝒟_α(ρ_nn) = 0, ∀α,
ρ_nn = |Ψ_n ⟩⟨Ψ_n|.
The pure state ρ_nn that satisfies this condition together with the commutativity with the Hamiltonian is known as “the dark state", which has been introduced in the context of atomic physics and optics <cit.>.
The AKLT-type model (<ref>) is one of the examples whose solvable energy eigenstates satisfy the dark-state condition (<ref>) in the presence of the boundary quasiparticle dissipators.
The zero-energy eigenstate is given by the matrix product state, as has been explained in Sec. <ref>, but with the boundary deformation,
|Ψ^(v_ L, v_ R)_0 ⟩
= ∑_m_1,…,m_N ∈{0,1,2}_a⟨ v_ L| A_m_1⋯ A_m_N |v_ R⟩_a · |m_1… m_N ⟩,
since the open boundary condition is imposed on the system. The boundary vectors |v_ L,R⟩∈ V_a = span{ |↑⟩, |↓⟩} in the auxiliary space must be properly chosen in order for (<ref>) to be the zero-energy eigenstate. Especially when the model is frustration-free, as we consider in this paper, there are four degenerate zero-energy eigenstates, as no constraint is imposed on the boundary vectors.
A tower of the solvable energy eigenstates are then independently constructed on top of each of the four degenerate zero-energy states by applying the spin-2 magnon creation operator Q^† (<ref>). That is, the solvable subspace W under the open boundary condition is composed of four separate subspaces specified by the boundary vectors,
W = W^(↑,↑)⊕ W^(↑,↓)⊕ W^(↓,↑)⊕ W^(↓,↓),
W^(v_ L,v_ R) = span{ |Ψ_n^(v_ L,v_ R)⟩}_n,
|Ψ_n^(v_ L,v_ R)⟩ = (Q^†)^n |Ψ_0^(v_ L,v_ R)⟩.
We set the boundary quasiparticle dissipators in such a way that the quasiparticles are coming into the system from both of the ends,
A_ L = (S_1^+)^2,
A_ R = (S_N^+)^2.
With these dissipators, one of the solvable subspaces W^(↑,↓) satisfies the dark state conditions (<ref>),
A_α |ψ^(↑,↓)⟩ = 0, α∈{ L, R},
|ψ^(↑,↓)⟩∈ W^(↑,↓).
(See Appendix <ref> for the proof.) That is, by denoting the Liouvillian with the AKLT-type Hamiltonian and the dissipators (<ref>) by ℒ_ AKLT, any diagonal density matrix in the subspace W^(↑,↓)
becomes a steady state of the GKSL equation,
ℒ_ AKLT(ρ_ diag^(↑,↓)) = 0,
ρ_ diag^(↑,↓) = ∑_n p_n |Ψ_n^(↑,↓)⟩⟨Ψ_n^(↑,↓)|, ∑_n p_n = 1, p_n ≥ 0, ∀ n.
Note that, in the case of the perturbed spin-1 XY model, which is another model with the hidden rSGA, there never exist such a solvable energy eigenstate that satisfies the dark-state condition (<ref>).
The fact that any state in the subspace W^(↑,↓) is the dark state (<ref>) indicates that any density matrix in W^(↑,↓)⊗ (W^(↑,↓))^†, even if it contains off-diagonal elements, becomes an eigenmode of the GKSL equation. Suppose that we have an off-diagonal element ρ_m,n^(↑,↓) = |Ψ_m^(↑,↓)⟩⟨Ψ_n^(↑,↓)| in a given density matrix. This is a dark state from the previous statement, and moreover, its commutator with the Hamiltonian is proportional to itself,
[H, ρ_m,n^(↑,↓)] = 2(m-n) h^00_00ρ_m,n^(↑,↓),
which indicates that ρ_m,n^(↑,↓) is an eigenmode of the GKSL equation. The relation (<ref>) together with the dark-state condition (<ref>) leads to the restricted spectrum generating algebra for the Liouvillian in the subspace W^(↑,↓)⊗ (W^(↑,↓))^†⊂ℋ⊗ℋ^†,
ℒ_ AKLT(ρ^(↑,↓)_m,n)
= -2i (m-n) h^00_00ρ^(↑,↓)_m,n,
giving an equally-spaced spectrum along the imaginary axis with the interval 2h^00_00 embedded in the full spectrum of ℒ_ AKLT.
The hidden rSGA structure (<ref>) of the Liouvillian evokes us the persistent oscillations observed for rSGA-induced partially solvable isolated quantum systems <cit.>. Indeed, if we choose an initial state in the subspace W^(↑,↓),
|ψ(0) ⟩ = ∑_n a_n |Ψ_n^(↑,↓)⟩∈ W^(↑,↓),
it is easy to show that persistent oscillations are observed for an observable O in a long-time scale,
⟨ O(t) ⟩∼∑_n ≤ m 2cos(2(m-n)h^00_00 t) a_m a_n Re O_nm (t→∞).
This indicates that the system prepared in the solvable subspace never relaxes to any steady state.
Even if the initial state is generic, the long-lived oscillations survive as long as the initial state has a large enough overlap with the subspace W^(↑,↓).
0
mechanism we propose in this paper is the partially solvable Liouvillians associated with the rSGA. The mechanism of partial solvability for this kind of models is explained by the robust rSGA under the boundary dissipators. That is, the bulk Hamiltonian has partial solvability induced by the rSGA, and the rSGA structure is not violated by the boundary dissipators.
This situation is realized if the solvable subspace W is a subspace of the kernel of all the dissipation operators 𝒟_α. That is, the following relations must hold for all |ψ⟩∈ W
H |ψ⟩ = E |ψ⟩, 𝒟_α(|ψ⟩⟨ψ|) = 0, ∀α.
Similar steady states have been known by the name “dark states", first introduced in <cit.> and discussed later in many examples <cit.>, which are defined by slightly stronger conditions
H |ψ⟩ = E |ψ⟩,
A_α |ψ⟩ = 0, ∀α.
For this reason, we call the state which satisfies (<ref>) “the generalized dark state".
Several examples of the dark states have been found for the systems in which every site is coupled to a single dephasing operator <cit.>. These models exhibits the persistently oscillating eigenmodes, which are the characteristic behavior of the models equipped with the rSGA.
On the other hand, the system which is coupled to dissipators only at the edges can admit the generalized dark states. Such generalized dark states are obtained for the partially solvable spin chain with frustration-free interactions in the spin-2 magnon baths at the edges. The examples include the AKLT model and its generalizations (<ref>).
These models exhibit spin-2 magnon excitation (<ref>) on the top of the zero-energy matrix product state (<ref>), which form the exactly solvable invariant subspace W of the Hamiltonian (<ref>).
Suppose that the partially solvable Hamiltonian with the rSGA is coupled to boundary dissipators given by (S_1^+)^2 and (S_N^+)^2.
The first relation for the generalized dark state condition (<ref>) is satisfied if we choose |ψ⟩ = (Q^†)^n |Ψ_0 ⟩ as it is the energy eigenstates, while the second relation requires
(S_1^+)^2 (Q^†)^n |Ψ_0 ⟩⟨Ψ_0| Q^n (S_1^-)^2 - 1/2{ (S_1^-)^2 (S_1^+)^2, (Q^†)^n |Ψ_0 ⟩⟨Ψ_0| Q^n } = 0,
(S_N^+)^2 (Q^†)^n |Ψ_0 ⟩⟨Ψ_0| Q^n (S_N^-)^2 - 1/2{ (S_N^-)^2 (S_N^+)^2, (Q^†)^n |Ψ_0 ⟩⟨Ψ_0| Q^n } = 0.
These are satisfied by one of the choices for the boundary vectors ** which produces four degenerate zero-energy eigenstates (<ref>).
It is important for obtaining the generalized dark states that the vacuum of quasiparticles are the non-trivial states. In fact, the solvable subspace of the perturbed spin-1 XY model consists of the vacuum, which is the fully-polarized state (<ref>), and the spin-2 magnon excitation on the top of the vacuum (<ref>), and this subspace is violated by the quasiparticle baths at the edges.
The non-trivial structure of the zero-energy state for the AKLT model is understood by the deformed symmetry structure <cit.>, which connects the non-trivial zero-energy state with the fully-polarized state. Thus, it is necessary for the solvable subspace of the bulk Hamiltonian is associated with the deformed symmetry structure that to be robust against the quasiparticle baths at the edges.
As the dissipators are irrelevant in the solvable subspace of the rSGA-induced partially solvable models, persistent oscillating modes exist for the Liouvillian, as a consequence of the equally-spaced spectrum of the bulk Hamiltonian. These oscillating modes are characterized by
[H, ρ_m,n] = -ω (m-n) ρ_m,n,
𝒟_α(ρ_m,n) = 0, ∀α,
leading to
ρ(t) = ∑_n c_n^2 |Ψ_n ⟩⟨Ψ_n|
+ ∑_n ≠ m c_n c_m { e^iω (m-n) t |Ψ_n ⟩⟨Ψ_m| + e^-iω (m-n) t |Ψ_m ⟩⟨Ψ_n| },
and thus to the persistent oscillation of local observables, if the initial state has a decent overlap with the quasiparticle excitation states, indicating that the system never reaches the relaxation state.
Similar phenomena have been observed for the system with the dephasing operators coupled to every site <cit.>. Here, we observed that the non-relaxing states exist even when the system is coupled to the boundary dissipators, each of which dopes quasiparticles.
§.§ Numerical simulation
To see how the presence of the rSGA-induced solvable eigenmodes affects observables in partially solvable open spin chains,
here we perform numerical simulations for the GKSL equation (<ref>).
We take the generalized AKLT model H_ AKLT=∑_j=1^N-1 h_j,j+1^ AKLT (<ref>) as the bulk Hamiltonian
with a finite number of lattice sites N. We specifically focus on the case of the original AKLT model, i.e., by choosing
h_11^11/h_00^00=2/3, a_0=-√(2)a_1=-a_2=√(2/3).
The dissipators are given by spin-2 creation operators (<ref>)
acting on each end of the chain. The explicit form of the GKSL equation that we solve in this section is:
d/dtρ =
-i[H_ AKLT,ρ]+∑_α= L,Rγ_α( A_αρ A_α^† - 1/2{A_α^† A_α, ρ}),
H_ AKLT =
∑_j=1^N-1[
S_j · S_j+1+1/3 ( S_j · S_j+1)^2
],
A_ L =
(S_1^+)^2,
A_ R
=
(S_N^+)^2,
which is numerically simulated by the quantum trajectory method <cit.> together with the exact diagonalization.
The initial state is chosen to be a product state of the spin-1 chain, whose spin configuration is nearly a Néel state |N, S^z(0)⟩,
depending on the values of N and the initial total S^z(0):
|0 2 0 2 ⋯ 0 2 0 2 ⟩ :
|0 2 0 2 ⋯ 0 2 0 1 ⟩ :
|0 2 0 2 ⋯ 0 2 0 2 1 ⟩ :
|0 2 0 2 ⋯ 0 2 0 2 0 ⟩ :
We identify the state labels 0, 1, 2 with the spin configuration ↑, 0, ↓, respectively. Among the initial states |N, S^z(0)⟩ considered here, those with S^z(0)=1 have an overlap with the states in the subspace W^(↑,↓), since they have ↑ spins at both ends of the chain after removing S_j^z=0 spins. Hence, we expect that the rSGA-induced solvable eigenmodes appear in those cases with S^z(0)=1.
In Fig. <ref>, we show the time evolution of the local magnetization ⟨ S_j^z⟩ for several initial conditions
((a) N=8, S^z=0, (b) N=8, S^z=1, (c) N=9, S^z=0, (d) N=9, S^z=1)
with γ_ L=γ_ R=1. In the cases of S^z(0)=0 [Fig. <ref>(a), (c)], the local magnetization gradually reaches
the steady state value without oscillations, while in the cases of S^z(0)=1 [Fig. <ref>(b), (d)]
the local magnetization clearly shows long-lived coherent oscillations. In all the cases above, the magnetization does not approach the maximum value, meaning that the steady state does not correspond to the trivial all up states (i.e., |00⋯ 0⟩=|↑↑⋯↑⟩). We can also see that the magnetization at the boundary j=1, N takes a relatively larger steady-state value as compared to the bulk part, which may be due to the effect of the spin injection at the boundary.
In order to understand the role of the solvable eigenmodes in the GKSL equation, we plot the ratio of the number of trajectories for each S^z measured by ⟨ P_S^z⟩ in Fig. <ref>, where P_S^z is the projection operator onto the corresponding subspace with fixed S^z. The parameters are the same as in Fig. <ref>. When S^z(0)=0 [Fig. <ref>(a), (c)], the number of trajectories having S^z=0 quickly decays to zero, while those with S^z≠ 0 grow subsequently (those with lower S^z grow faster). Since S^z can change by 2 due to the spin-2 injection at the boundaries, S^z only takes values of even integers (that should not exceed the system size N). In the long-time limit,
the trajectories with S^z=6 and 8 survive, and the others seem to vanish.
This is in sharp contrast to the cases for S^z(0)=1 [Fig. <ref>(b), (d)],
where the number of trajectories with arbitrary S^z can survive in the long-time limit. This is in consistent with the fact that there is a tower of dark states with S^z=1,3,5,… (as shown in Eq. (<ref>)), which can be accessed from the initial state with S^z(0)=1 having an overlap with the states in the subspace W^(↑,↓). Hence the steady state remains to be far from the trivial all up states (|00⋯ 0⟩=|↑↑⋯↑⟩) even in the presence of the spin-2 injection. The steady states for S^z(0)=1 are also different from the all up state, since there is an additional dark state with S^z=S_ max^z-2 (S_ max^z is the maximum S^z in a finite chain with length N). In the thermodynamic limit, this dark state will become indistinguishable from the all up state.
In this way, there is a clear difference between the dynamics starting from S^z(0)=0 and S^z(0)=1 rooted in the presence of the solvable eigenmodes in the GKSL equation.
The long-lived coherent oscillations observed in Fig. <ref>(b), (d) are not due to the solvable dark states, since each trajectory has a single value of S^z at each time, which allows for realization of a single dark state at each time in each trajectory. One cannot have quantum mechanical superposition of different dark states,
which might cause coherent oscillations due to interference among dark states. To find the origin of the oscillations, we plot the imaginary part of the eigenvalues of the non-Hermitian Hamiltonian H^ eff_ AKLT=H_ AKLT-i/2∑_α= L,Rγ_α A_α^† A_α
with S^z=0 in Fig. <ref>(a) and S^z=1 in Fig. <ref>(b). In the former case (S^z=0), there is no eigenstate having an eigenvalue with zero imaginary part.
All the eigenstates have nonzero imaginary parts, which are forming continuous spectra.
In the latter case (S^z=1), on the other hand, there is one eigenstate having an eigenvalues with zero imaginary part, which is not shown in Fig. <ref>(b) where we take a log scale in the vertical axis. This corresponds to one of the solvable eigenmodes in the GKSL equation. On top of that, we find another eigenstate having an eigenvalue with a nonzero but very small imaginary part, which is separated from those of the other eigenstates. This eigenstate has the second smallest real part of the eigenvalues. Although it is not exactly a dark state, it may create oscillations with lifetime longer than the maximum time in Fig. <ref>. In fact, we confirm that this eigenstate has a strong overlap with the initial state close to the Néel state for S^z(0)=1. If one waits for sufficiently long time, we expect that the coherent oscillations will vanish eventually.
We also performed numerical simulations for other choices of the model parameters away from the solvable regions. The results show similar behaviors (e.g., the number of trajectories with S^z<S_ max^z-2 decay to zero) as in the case of S^z(0)=0 for the AKLT model with the boundary dissipators. Again we confirm the role of the rSGA-induced solvable eigenmodes in the GKSL equation, which support nontrivial steady states in the presence of dissipations.
§.§ HSF-induced solvable eigenmodes
The second example of robust partial solvability against boundary dissipators can be found for systems with the HSF.
The HSF of the Liouvillian has been discussed in the context of the commutant algebra <cit.>.
In this subsection, on the other hand, we discuss partial solvability induced by the HSF structure of the Hamiltonian for boundary dissipative systems, which have not been considered before.
In order to discuss the HSF for Liouvillians, it is useful to work on the TFD vector expression <cit.>, which is realized by the isomorphism,
φ: ρ = ∑_m,n p_m,n |m ⟩⟨ n|
↦ |ρ⟩⟩ = ∑_m,n p_m,n |m ⟩⊗ |n ⟩^*.
In the TFD expression, the Liouvillian is expressed as a non-Hermitian Hamiltonian acting on the doubled Hilbert space ℋ⊗ℋ^*,
d/dt |ρ(t) ⟩⟩ = -i H |ρ(t) ⟩⟩,
H = H ⊗1 - 1⊗^tH + i ∑_αγ_α( (A_α⊗ A_α^*) - 1/2 (A_α^† A_α⊗1 + 1⊗^tA_α A_α^*) ).
For boundary dissipative systems, the quantum jump operators A_α non-trivially act only on the first and/or the Nth site. In this subsection, we consider the Hamiltonian with the HSF that is solvable at least in one of the fragmented subspaces.
0
One may notice that this non-Hermitian Hamiltonian consists of the Hermitian part given by the first two terms and the non-Hermitian part given by the rests. Thus, a state |ψ⟩⟩∈ V ⊗^tV can be the zero-energy eigenvector of H, by proving the steady state of ℒ, if |ψ⟩⟩ separately vanishes by the first and second parts
(H ⊗1 - 1⊗^tH) |ψ⟩⟩ = 0,
( (A_α⊗ A_α^*) - 1/2 (A_α^† A_α⊗1 + 1⊗^tA_α A_α^*) ) |ψ⟩⟩ = 0, ∀α.
These conditions are sufficiently satisfied if the state |ψ⟩⟩ is a so-called “dark state" i.e. an energy eigenstate and killed by all quantum jump operators
H |ψ⟩ = E |ψ⟩,
𝒟_α(|ψ⟩⟨ψ|) = 0, ∀α.
Throughout this subsection, we consider the (perturbed) XXC Hamiltonian (<ref>) that exhibits both the HSF and (partial) integrability.
Suppose that two kinds of dissipators are coupled to each end of the XXC spin chain,
A_ L,+ = (S_1^+)^2, A_ L,- = (S_1^-)^2,
A_ R,+ = (S_N^+)^2, A_ R,- = (S_N^-)^2,
with the coupling strengths controlled by the dissipation rates γ_ L,+, γ_ L,-, γ_ R,+, and γ_ R,-.
With these dissipators, the Liouviilian for the boundary dissipative XXC spin chain ℒ_XXC is effectively written as the spin chain having twice the length 2N than the original chain (Fig. <ref>),
H_XXC = ∑_j=1^N-1 h^(XXC)_j,j+1 + h^( b,R)_N,N+1
- ∑_j=N+1^2N-1 h^(XXC)_j,j+1 + h^( b,L)_1,2N.
Here we have used the transpose invariance and the inversion symmetry of the XXC Hamiltonian (<ref>),
^t(∑_j=1^N-1 h_j,j+1^XXC) = ∑_j=1^N-1^th_j,j+1^XXC = ∑_j=1^N-1 h_j,j+1^XXC,
U_ I(∑_j=1^N-1 h_j,j+1^XXC) U_ I
= ∑_j=1^N-1 U_ I h_j,j+1^XXC U_ I
= ∑_j=1^N-1 h_N-j+1,N-j^XXC = ∑_j=1^N-1 h_j,j+1^XXC,
in which the operator U_ I reflects the spin chain with respect to its center,
U_ I: ℋ→ℋ, |s_1,s_2,…,s_N ⟩↦ |s_N,…,s_2,s_1 ⟩.
The effects of the boundary dissipators can be written in terms of interactions between the first and 2Nth sites and between the Nth and (N+1)th sites,
h^( b,α) = iγ_α,+( |00 ⟩⟨ 22| - 1/2 (|2 ⟩⟨ 2| ⊗1 + 1⊗ |2 ⟩⟨ 2|) )
+ iγ_α,-( |22 ⟩⟨ 00| - 1/2 (|0 ⟩⟨ 0| ⊗1 + 1⊗ |0 ⟩⟨ 0|) ), α∈{ R,L},
each of which represents incoming and outgoing quasiparticles.
When all the dissipation rates are set to zero, the effective Hamiltonian (<ref>) simply consists of the two decoupled XXC Hamiltonians. Therefore, the HSF structure is obtained in the doubled Hilbert space according to the IS consisting of configurations of 0s and 2s. Moreover, this doubled XXC Hamiltonian is obviously integrable, since each of the XXC spin chains is integrable.
On the other hand, both the HSF structure and entire integrability are broken by the presence of boundary dissipation terms. However, one may notice that the subspace specified by the alternating IS survives as an invariant subspace of the entire effective Hamiltonian (<ref>) even in the presence of the boundary dissipation terms, since the nearest-neighbor interactions in the boundary dissipation terms (<ref>), i.e. the IS violating terms, are irrelevant in this subspace.
In the original Liouvillian expression, this means that any state with the alternating IS never goes out from the other subspaces, nor any state in the other subspaces never come into the subspace with the alternating IS.
§.§.§ Integrable subspaces
The diagonal terms in (<ref>) can be regarded as boundary magnetic fields on each of the two XXC Hamiltonians, and therefore the subspace specified by the alternating IS is the integrable subspace of the effective Hamiltonian (<ref>) if these diagonal terms provide the integrable boundaries of the XXC model.
The integrable boundary conditions for the XXC model have been discussed in <cit.>.
When boundary magnetic fields are imposed only in the z-direction, the Hamiltonian is integrable if the boundary terms are given by
H_ bXXC = ∑_j=1^N-1h^XXC_j,j+1
+ sinhηξ_- · (S_1^z)^2
- sinhηξ_+ · (S_N^z)^2
+ 1/2sinhη (ξ_+ - ξ_-).
Here ξ_- and ξ_- are arbitrary complex parameters.
0
The R-matrix for the XXC model is given in (<ref>). The newly introduced matrix K representing the reflection at the boundaries contains the parameter ξ, which determines strength of the diagonal boundary magnetic fields. The R-matrix and K-matrix for the XXC model can be found in <cit.>.
With these observations, the effective Hamiltonian (<ref>) becomes integrable in the subspace specified by the alternating IS when the quasiparticle incoming and outgoing rates are the same at each end,
γ_α,+ = γ_α,-, α∈{ L, R}.
Although we set H as the integrable XXC Hamiltonian so far, it is also possible to replace it with the perturbed XXC Hamiltonian by adding site-dependent perturbations on the bulk without violating partial solvability of the Liouvillian in the subspace with the alternating IS. Such a perturbation (<ref>) modifies the bulk interactions in the effective Hamiltonian (<ref>) as
h^XXC_j,j+1→ h^XXC_j,j+1 + h_j,j+1^ alt(β^(j)_ d1,β^(j)_ d2,β^(j)_ o1,β^(j)_ o2,ζ^(j)), j=1,…,N-1,
h^XXC_j,j+1→ h^XXC_j,j+1 + h_j,j+1^ alt(β^(j)_ d1,β^(j)_ d2,β^(j)_ o2,β^(j)_ o1,ζ^(j)), j=N+1,…,2N-1,
both of which are apparently in the form of (<ref>), implying that the effective Hamiltonian stays partially solvable in the subspace with the alternating IS after introducing these bulk perturbations.
0
We found four kinds of subspaces in each of which the effective Hamiltonian (<ref>) with the dissipators (<ref>) becomes different integrable Hamiltonians. The projectors onto these subspaces are essentially the matrix product projector onto the subspace of the alternating irreducible strings (<ref>), but have different boundary conditions for different subspaces.
Let us introduce two different kinds of boundary projectors which non-trivially act on the first or last site
B_0, L = (|0 ⟩⟨ 0| + |1 ⟩⟨ 1|) (⊗1)^N-1,
B_2, L = (|2 ⟩⟨ 2| + |1 ⟩⟨ 1|) (⊗1)^N-1,
B_0, R = (1⊗ )^N-1 (|0 ⟩⟨ 0| + |1 ⟩⟨ 1|),
B_2, R = (1⊗ )^N-1 (|2 ⟩⟨ 2| + |1 ⟩⟨ 1|).
We write the bulk part of the matrix product projector (<ref>) as
𝒫_ alt = ⊗_ phys( σ_ aux^+ ⊗ |0 ⟩⟨ 0| + 1_ aux⊗ |1 ⟩⟨ 1| + σ_ aux^- ⊗ |2 ⟩⟨ 2| ),
which is related to the projector P_ alt in (<ref>) by P_ alt = tr_ aux 𝒫_ alt.
Then, the effective Hamiltonian (<ref>) under any of following projectors
P_02,02 = tr_ aux (B_0, L𝒫_ alt B_2, R) ⊗ tr_ aux (B_0, L𝒫_ alt B_2, R),
P_20,20 = tr_ aux (B_2, L𝒫_ alt B_0, R) ⊗ tr_ aux (B_2, L𝒫_ alt B_0, R),
P_00,22 = tr_ aux (B_0, L𝒫_ alt B_0, R) ⊗ tr_ aux (B_2, L𝒫_ alt B_2, R),
P_22,00 = tr_ aux (B_2, L𝒫_ alt B_2, R) ⊗ tr_ aux (B_0, L𝒫_ alt B_0, R)
is the two decoupled integrable XXZ model with imaginary boundary magnetic fields. Each projector realizes the different boundary magnetic fields.
0
written in the matrix product expressions
𝒫_α_1,α_N= P_α_1,α_N P_α_N,α_1,
P_α_1,α_N = ∑_α_1,…,α_N ∈{0,1,2} tr[ A_α_1 A_α_2… A_α_N-1 A_α_N] |α_1 α_2 …α_N-1α_N ⟩⟨α_1 α_2 …α_N-1α_N|
- ∑_α_1, α_N ∈{0,2}∑_α_2,…,α_N-1∈{0,1,2} tr[ A_α_1 A_α_2… A_α_N-1 A_α_N] |α_1 α_2 …α_N-1α_N ⟩⟨α_1 α_2 …α_N-1α_N|,
where we used the notations 0 = 2 and 2 = 0. With these projectors, the effective Hamiltonian is mapped to the spin-1/2 XXZ chain under different boundary conditions depending on the choices of α_ L and α_ R.
by choosing as γ_ L^+ = γ_ L^(-) and γ_ R^+ = γ_ R^(-), the effective Hamiltonian becomes the periodic XXC chain with two impurities which deform the interaction at the N and (N+1)th sites, and the first and 2Nth sites as
h_1, 2N = γ_- { |22 ⟩⟨ 00|
- 1/2 (|0 ⟩⟨ 0| ⊗1) - 1/2 (1⊗ |0 ⟩⟨ 0|) },
h_N,N+1 = γ_+ { |00 ⟩⟨ 22|
- 1/2 (|2 ⟩⟨ 2| ⊗1) - 1/2 (1⊗ |2 ⟩⟨ 2|) },
whose first term acts as zero in the subspace of alternating irreducible strings, while the second and third terms are regarded as imaginary boundary magnetic fields of two decoupled spin chain of length N in the same subspace.
For instance, the projected Hamiltonian by P_02,02 becomes
H_P_02,02𝒫_02,02ℋ∖{0,2}^⊗ N⟼ H^(+)_XXZ(γ_-, L,γ_+, R) ⊗1 - 1⊗ H^(-)_XXZ(γ_-, R,γ_+, L),
H^(+)_XXZ(γ_-, L,γ_+, R) = - i/4γ_-, Lσ_1^z + ∑_x=1^N-1( σ_x^+ σ_x+1^- + σ_x^- σ_x+1^+ + 1/2coshη σ_x^z σ_x+1^z ) - i/4γ_+, Rσ_N^z
+ N-1/2coshη - i/4 (γ_-, L + γ_+, R)
H^(-)_XXZ(γ_-, R,γ_+, L) = - i/4γ_-, Rσ_N+1^z - ∑_x=N+1^2N-1( σ_x^+ σ_x+1^- + σ_x^- σ_x+1^+ + 1/2coshη σ_x^z σ_x+1^z ) - i/4γ_+, Lσ_2N^z
+ N-1/2coshη - i/4 (γ_+, L + γ_-, R)
in the reduced space ⊗^N span{ |A ⟩, |B ⟩}. We provide the projected Hamiltonian by each of (<ref>) in Table <ref>, which is numerically verified for N = 3 and 4.
0
through the transfer matrix
T(λ) = **.
Note that the transfer matrix constructed in this way has commuting property
[T(λ), T(μ)] = 0,
which produces many conserved quantities.
The commuting matrices are constructed as the product of the R-matrices
T(λ) =
produces the commuting transfer matrices
T(λ) = tr_0 (R_0N(λ) … R_02(λ) R_01(λ)),
[T(λ), T(μ)] = 0,
and subsequently, many conserved quantities as well as the expansion of T(λ) with respect to the spectral parameter λ.
The energy eigenstates are expressed
For instance, the subspace encoded by the fully-polarized irreducible strings … 0000 …,
§.§.§ Solvable eigenmodes
The XXC Hamiltonian with the integrable boundaries can be diagonalized via the Bethe ansatz method due to the existence of the R- and K-matrices which solve the Yang-Baxter equation and the reflection relation, respectively. For the perturbed XXC model that becomes integrable only in the subspace specified by a certain IS,
the spectrum and eigenvectors in this subspace can be found by mapping the Hamiltonian restricted in the solvable subspace to the spin-1/2 XXZ model.
By employing the same strategy explained in Sec. <ref>, the effective Hamiltonian H_XXC (<ref>) in the solvable subspace, namely the subspace specified by the alternating IS, is first mapped to two decoupled XXC spin chains (Fig. <ref>). Then these XXC spin chains can be mapped to the spin-1/2 XXZ chains with the diagonal boundary magnetic fields by identifying the states |0 ⟩ and |2 ⟩,
H_XXCP_ altℋ∖{ |0⟩,|2⟩}^N⟼ H^(+)_XXZ(γ_ L,γ_ R) ⊗1 - 1⊗ H^(-)_XXZ(γ_ R,γ_ L),
H^(+)_XXZ(γ_ L,γ_ R) = - i/4γ_ Lσ_1^z + ∑_j=1^N-1( σ_j^+ σ_j+1^- + σ_j^- σ_j+1^+ + 1/2coshη σ_j^z σ_j+1^z ) - i/4γ_ Rσ_N^z
+ N-1/2coshη - i/4 (γ_ L + γ_ R),
H^(-)_XXZ(γ_ R,γ_ L) = - i/4γ_ Rσ_N+1^z - ∑_j=N+1^2N-1( σ_j^+ σ_j+1^- + σ_j^- σ_j+1^+ + 1/2coshη σ_j^z σ_j+1^z ) - i/4γ_ Lσ_2N^z
+ N-1/2coshη - i/4 (γ_ L + γ_ R).
Note that the boundary magnetic fields here are “imaginary magnetic fields" with pure imaginary coefficients. By writing sets of energy eigenvalues for these two spin-1/2 XXZ chains as {E^(+)_n_+}_n_+ and {E^(-)_n_-}_n_-, the set of the summed energy eigenvalues {E^(+)_n_+ + E^(-)_n_-}_n_+,n_- is embedded in the full spectrum of the effective Hamiltonian H_XXC.
Since the map φ is an isomorphism, the spectrum of the Liouvillian ℒ_XXC matches that of the effective Hamiltonian H_XXC. The energy eigenvalues of the double spin chain, {E^(+)_n_+ + E^(-)_n_-}_n_+,n_-, then agree with the eigenvalues of the Liouvillian ℒ_XXC restricted in the solvable subspace.
The corresponding eigenvectors |E^(+)_n_+⟩⊗ |E^(-)_n_-⟩ thus provide the eigenvectors of the effective non-Hermitian Hamiltonian H_XXC by a map
|σ_1 σ_2 …σ_N ⟩∈ℂ^2N↦ |τ_1 τ_2 …τ_N ⟩∈ℂ^3N,
τ_j = 1 + θ_j^ω_j,
in which θ_j and ω_j are defined by
θ_j = -|1 - σ_j|, ω_j = 1 - (-1)^∑_k=1^j σ_k,
respectively.
Here θ_j is a parameter that determines whether the jth site is in the local state |1 ⟩ or not, while ω_j counts the number of 0s and 2s between the first and jth site.
The eigenvectors of the effective non-Hermitian Hamiltonian H_XXC are mapped to the eigenmodes of the Liouvillian ℒ_XXC via the inverse map of the isomorphism (<ref>).
Therefore, solving the eigenvalue problem for the spin-1/2 XXZ chain under the imaginary boundary magnetic fields tells the eigenmodes for the spin-1 XXC model coupled to boundary dissipators.
The Bethe-ansatz method is well established for the spin-1/2 XXZ chain even in the presence of boundary magnetic fields <cit.>. The Hamiltonian and conserved quantities are constructed from a series expansion of the transfer matrix, which consists of the R-matrix
R_ij(λ) =
sinh(λ+ η/2) coshη/2·1_ij + cosh(u+η/2) sinhη/2·σ_i^z σ_j^z
+ sinhη·( σ_i^+ σ_j^- + σ_i^- σ_j^+ ),
and the K-matrix
K(λ,ξ) = sinhξcoshλ·1 + coshξsinhλ·σ^z,
which solve the Yang-Baxter equation (<ref>) and the reflection relation,
R_12(λ_1-λ_2) K_1(λ_1) R_12(λ_1+λ_2) K_2(λ_2)
= K_2(λ_2) R_12(λ_1 + λ_2) K_1(λ_1) R_12(λ_1-λ_2).
The complex parameter ξ determines the strength of the diagonal boundary magnetic fields, which apprears in the spin-1/2 XXZ Hamiltonian as
H_XXZ
= ∑_j=1^N-1( σ_j^+ σ_j+1^- + σ_j^- σ_j+1^+ + 1/2coshη·σ_j^z σ_j+1^z )
+1/2sinhηξ_- ·σ_1^z - 1/2sinhηξ_+ ·σ_N^z.
Thus, in order to realize the restricted effective Hamiltonian (<ref>),
we need to choose ξ_± to be pure imaginary for η∈ℝ (i.e., in the gapped regime) or
to be real for pure imaginary η (i.e., in the gapless regime).
The transfer matrix T(λ) for the open spin chain is “a double-row transfer matrix", which consists of two products of the R-matrices,
T(λ) = tr_0 ( K_0(-λ - η,ξ_+) M_0(λ) K_0(λ,ξ_-) M_0(λ) ),
M_0(λ) = R_0N(λ) … R_01(λ),
M_0(λ) = R_10(λ) … R_N0(λ) .
These transfer matrices are mutually commuting,
[T(λ), T(μ)] = 0,
for any λ, μ∈ℂ.
Thus, a series expansion of the transfer matrix,
T(λ) = exp( ∑_r λ^r/r! Q_r ),
provides a large number of conserved quantities Q_r. By cumbersome but straightforward calculations, one can confirm that the XXZ Hamiltonian with the boundary magnetic fields (<ref>) is obtained
from Q_1 up to a constant <cit.>,
H_XXZ = (2 sinhξ_+ sinhξ_- coshη (sinhη)^2N-1)^-1·d/dλ T(λ) |_λ = 0
- (sinhη)^2 + N (coshη)^2/coshη.
The eigenvectors of the Hamiltonian are derived by diagonalizing the transfer matrix, which is achieved by the Bethe-ansatz method <cit.>. The eigenenergies are written in terms of the eigenvalues of the transfer matrix τ(λ),
E({λ_j}) = (2 sinhξ_+ sinhξ_- coshη (sinhη)^2N-1)^-1·τ'(0) - (sinhη)^2 + N (coshη)^2/coshη,
τ(λ) = (sinh(λ + η))^2Nsinh(2λ + 2η)/sinh(2λ + η)sinh(λ + ξ_+) sinh(λ + ξ_-) ∏_i=1^n sinh(λ - λ_i - η)/sinh(λ + λ_i + η)
+ (sinhλ)^2Nsinh(2λ)/sinh(2λ +
η)sinh(λ + η - ξ_+) sinh(λ + η - ξ_-) ∏_i=1^n sinh(λ + λ_i + 2η)/sinh(λ - λ_i),
where λ_i solves a set of the Bethe equations,
( sinh(λ_j + η)/sinh(λ_j))^2N = sinh(λ_j - ξ_+ + η) sinh(λ_j - ξ_- + η)/sinh(λ_j + ξ_+) sinh(λ_j + ξ_-)
·∏_0ptk=1k ≠ j^n
sinh(λ_j - λ_k + η)/sinh(λ_j - λ_k - η)sinh(λ_j + λ_k + 2η)/sinh(λ_j + λ_k),
for j = 1,2,…,n.
Since the analytic solutions to the Bethe equations (<ref>) are inaccessible, we instead give numerical results for the energy spectra of the XXZ spin chains (Fig. <ref>). As compared from the spectrum of the effective Hamiltonian (<ref>), the sums of the eigenenergies for H^(+) and H^(-) are indeed embedded in its full spectrum.
Unlike the rSGA-induced partial solvability discussed in the previous subsection, neither equally-spaced spectrum nor pure-imaginary eigenvalue is observed in the solvable subspace of the XXC model.
This indicates that no oscillating mode exists for the XXC model coupled to the boundary dissipators. The only non-decaying mode is the steady state, corresponding to the zero eigenvalue of the effective Hamiltonian.
Degenerate steady states are observed for the entire Liouvillian, but the integrable subspace has the unique steady state,
ρ_ ss^XXC = |11 … 1 ⟩⟨ 11 … 1|,
which is a product state. This is also a completely separated state from the other states by the HSF, and therefore initial states never reach the integrable steady state unless it is the steady state itself.
0
Our main focus in this subsection is how partial solvability of the system affects the relaxation phenomena of this non-equilibrium system.
The first difference shows up in behaviors of the eigenmodes of the Liouvilian. As the Liouvillian for the XXC spin chain coupled to the boundary dissipators (<ref>) is not fully solvable, a generic eigenmode exponentially decays by time (see Figure <ref>).
On the other hand, the eigenmodes in the solvable subspace persistently survive, some of which are the steady states, while the others are the persistently oscillating modes, which are known to be in the context of another partially solvable Liouvillian exhibiting “the quantum synchronization" <cit.>.
In the solvable subspace of the Liouvillian, i.e. equivalently the effective Hamiltonian (<ref>), the dissipation terms always act as zero, and therefore, the dissipators are irrelevant in the solvable subspace. That is, any state in this subspace is a so-called “dark state" <cit.>, which does not feel the dissipators.
Now let us explicitly construct the steady states and oscillating modes consisting of the solvable eigenmodes.
As the model has an exact map to the XXZ model with diagonal boundaries in each subspace specified by a certain irreducible strings, the solvable eigenmodes are written as the Bethe states <cit.>. We write them as |E_j ⟩⊗ |E_k ⟩, corresponding to each eigenenergy iE({λ_j}) - iE({λ_k}) given in (<ref>). Note that eigenenergies for the effective Hamiltonian (<ref>) are purely imaginary in the solvable subspace.
It is straightforward that any state in the form of |E_j ⟩⊗ |E_j ⟩ in the doubled Hilbert space, i.e. a pure energy eigenvector |E_j ⟩⟨ E_j| in the original Hilbert space, is a steady state
H |E_j ⟩⊗ |E_j ⟩ = (iE({λ_j}) - iE({λ_j})) |E_j ⟩⊗ |E_j ⟩ = 0
⇔ℒ(|E_j ⟩⟨ E_j|) = 0,
since E({λ_j}) is purely imaginary.
The same observation can be applied to the diagonal density matrix consisting of the Bethe vectors, which is also the steady state. Suppose p_j > 0 for all j such that ∑_j p_j = 1. Then one obtains
H∑_j p_j |E_j ⟩⊗ |E_j ⟩ = ∑_j p_j (iE({λ_j}) - iE({λ_j})) |E_j ⟩⊗ |E_j ⟩ = 0
⇔ℒ(∑_j p_j |E_j ⟩⟨ E_j|) = 0.
The oscillating modes are, on the other hand, given by a off-diagonal density matrix consisting of the Bethe vectors. For instance, a density matrix consisting of two energy eigenvectors
H( p_jk |E_j ⟩⊗ |E_k ⟩ + p^*_jk |E_k ⟩⊗ |E_j ⟩)
= p_jk( iE({λ_j}) - iE({λ_k}) ) |E_j ⟩⊗ |E_k ⟩
+ p^*_jk( iE({λ_k}) - iE({λ_j}) ) |E_k ⟩⊗ |E_j ⟩
exhibits persistent oscillation for p_jk∈ℝ.
0
As the Liouvillian (<ref>) is a non-trace preserving map, a density matrix is expected to vanish after a long time t →∞. For a generic pure initial state prepared as the superposition of energy eigenstates
**,
the Liouvillian acts as
***
unless the state vanishes by the measurement operators.
Indeed, our numerical test tells that the system prepared in an arbitrary initial state decays by time (see Figure <ref>).
We found, however, there exist a family of density matrix which does not vanish by applying the Liouvillian (<ref>) repeatedly.
One of such examples is the pure energy eigenstates of the XXC model in the alternating irreducible subspace
ρ_ ss = |E_n ⟩⟨ E_n|.
This density matrix indeed is the root of the Liouvillian (<ref>), since it commutes with the Hamiltonian and vanishes under the measurement terms.
Subsequently, any diagonal density matrix consisting of the energy eigenstates in the same subspace
ρ_ss = ∑_n p_n |E_n ⟩⟨ E_n|,
∑_n p_n = 1, ∀ n, p_n ≥ 0,
also becomes the root of the Liouvillian. Thus, the diagonal density matrix of the form (<ref>) is the steady state, which never decays by time, although the Liouvillian is not a trace-preserving map.
On the other hand, if one prepares the initial state as a superposition of the energy eigenstate in the subspace of the alternating irreducible string
|ψ⟩ = ∑_n c_n |E_n ⟩,
the pure state |ψ⟩⟨ψ| exhibits oscillations which persistently survives. It is easy to check that this pure state vanishes under the measurement operators, while the commutator with the Hamiltonian survives
[H, |ψ⟩⟨ψ|] = ∑_m,n c_m c_n^* (E_m - E_n) |E_m ⟩⟨ E_n|.
Thus, the persistent oscillations are observed for the system prepared in the superposition of a few number of non-degenerate energy eigenstates.
§.§.§ Solvable steady states induced by MPO symmetry
In the previous subsection, we have seen that partial solvability emerges for the open XXC model coupled to boundary dissipators, where the eigenmodes including the steady state are exactly calculated.
If one focuses only on steady states, some of them are analytically derived even if they are out of the solvable subspace.
The derivation of those “out-of-integrable" steady states can be achieved based on the MPO symmetry we reviewed in Sec. <ref>.
By following the method originally introduced in <cit.>, we write the steady state in the form of a product of an amplitude operator Ω,
ρ_ ss = ΩΩ^†.
Then, if the amplitude operator Ω is given by the matrix product form,
Ω = _a⟨ v_ L| L_a,N… L_a,2 L_a,1 |v_ R⟩_a,
in which the operator L_a,n satisfies the local divergence (<ref>) with the local perturbed XXC Hamiltonian, the amplitude matrix (<ref>) produces the steady-state density matrix, as we see in the following.
Let us consider the time evolution of the density matrix consisting of the amplitudes matrix (<ref>). Time evolution of any density matrix is described by the Lindblad equation (<ref>). Due to the local divergence relation (<ref>), the commutator term produces only two terms coming from the non-commutativity at the boundaries,
[H_XXC, Ω] = _a⟨ v_ L| M_a,N… L_a,2 L_a,1 |v_ R⟩_a
- _a⟨ v_ L| L_a,N… L_a,2 M_a,1 |v_ R⟩_a
= 0,
since the solution exists only for M = 0.
On the other hand, the boundary dissipators non-trivially act only at the edges, which may cancel the non-commuting terms above,
_a⟨ v_ L| ⊗_b⟨ v_ L| ( -i M_a,N L_b,N^†_ p + γ_ R,+𝒟_ R,+(L_a,N L_b,N^†_ p) + γ_ R,-𝒟_ R,-(L_a,N L_b,N^†_ p) ) = 0 , ,
( i M_a,1 L_b,1^†_ p + γ_ L,+𝒟_ L,+(L_a,1 L_b,1^†_ p) + γ_ L,-𝒟_ L,-(L_a,1 L_b,1^†_ p) ) |v_ R⟩_a ⊗ |v_ R⟩_b = 0.
Here we leave the boundary dissipation rates free, which shall be constrained later for solvability of the steady state.
The solutions to the divergence relation (<ref>) and the boundary cancellation condition (<ref>) then provide the steady state density matrix in the form (<ref>) and (<ref>).
We found that both the diagonal and off-diagonal MPO symmetries on the bulk result in the same solution,
y=0,
|x|^2 = γ_ R,+/γ_ R,- |u|^2,
|x|^2 = γ_ R,+/γ_ R,- |v|^2,
γ_ R,+/γ_ R,- = γ_ L,+/γ_ L,- = ω.
The integrable steady state observed in the previous subsection is realized as a special case with u = v = 0 and ω = 1.
§ CONCLUDING REMARKS
In this paper, we have introduced a new class of partially solvable open quantum systems. The models are new in the sense that their partial solvability is induced by partial solvability of the Hamiltonians. The systems are coupled to dissipators only at the boundaries, unlike other existing partially solvable models <cit.>, in which the dissipators are attached at every site.
We showed that partial solvability of the system can be robust against the boundary dissipators under two different kinds of conditions.
Liouvillians in the first type admit the existence of dark states, i.e., the states which do not feel the effect of dissipators, and hence partial solvability of the Hamiltonian survives in the subspace spanned by these dark states. A pure dark state becomes an exactly solvable steady state of the Liouvillians, while an off-diagonal element density matrix consisting of the dark states provides an eigenmode of the Liouvillian.
As an example, we especially focused on the AKLT-type model coupled to quasiparticle baths at both edges. As a known fact, the AKLT-type model exhibits four degenerate ground states <cit.> due to the boundary spin fractionalization, and a tower of solvable quasiparticle excited states can be constructed on top of each ground state <cit.>. Among these four towers of the degenerate solvable energy eigenstates, we found that a tower of states under a certain choice of the boundary spins is a set of dark states of the Liouvillian.
One of the remarkable features of this model is persistent oscillations obtained in local observables, when the initial state is prepared to have a large enough overlap with the solvable state of the Liouvillian. This quantum synchronization is brought by the equally-spaced spectrum of the Liouvillian in the solvable subspace consiting of the dark states, which is inherited by the equally-spaced spectrum of the Hamiltonian due to the rSGA structure. A similar phenomenon has been reported in <cit.> for the Liouvillian which also exhibits the rSGA but whose dissipators are coupled to all the sites of the system.
The second mechanism which makes Liouvillians partially solvable is the HSF. We are especially interested in the case where the Hamiltonian exhibits the HSF which divides the Hilbert space into exponentially-many subspaces, some of which may survive even if the boundary dissipators are introduced. The key property in extending the notion of partial solvability induced by the HSF is the robustness of some subspaces under site-dependent perturbations.
We have considered the XXC spin chain as an example, which exhibits the HSF due to the presence of IS. We showed that the effect of the boundary quasiparticle baths, which inject and absorb quasiparticles, can be regarded as the partial solvability preserving perturbations through the thermofield double formalism.
As a result, the effective Hamiltonian in the solvable subspace of the doubled Hilbert space becomes two decouples integrable XXZ spin chains with imaginary boundary magnetic fields corresponding to the boundary dissipators. Thus, any eigenmode of the Liouviilian in this solvable subspace is accessible via the Bethe-ansatz method, which is numerically verified (Fig. <ref>).
Besides the eigenmodes in the solvable subspace, several more steady states can be exactly derived by using the MPO symmetry of the XXC Hamiltonian, which is characterized by the local divergence relation (<ref>). As the local divergence relation (<ref>) holds for the XXC model only when it is reduced to the frustration-free condition (M=0 in Eq. (<ref>)), the steady states associated with the MPO symmetry are the dark states, which vanish under the action of the dissipators. We found that the steady state in the solvable subspace is included in the class of solvable steady states associated with the MPO symmetry.
A new class of partially solvable open quantum systems introduced in this paper paves a way to study non-integrable open quantum systems. At the same time, it also proposes several interesting questions. We list possible future works in the following, by focusing on the HSF-induced partially solvable open quantum systems.
The first question is how large one can make an overlap with the solvable subspace of the Liouvillian, when a “physical" initial state, such as the dimer state and Néel state, is prepared.
This would be addressed by following the method introduced in <cit.>, which allows the overlap between the initial state and the energy eigenstates to be expressed by the Tsuchiya determinant, after the thermofield double formalism is applied.
The second question we are interested in is how the relaxation time differs between the solvable subspace and unsolvable subspaces. Definitely, the Liouvillian restricted in the solvable subspace has a gapless spectrum, as it matches the energy spectrum of the XXZ model, and this may lead to a non-trivial (non-exponential) relaxation behavior.
The third question is whether the Kardar-Parisi-Zhang (KPZ) universality class is observed for open quantum systems. As the XXC chain coupled with boundary dissipators is mapped to the two XXZ spin chains in the solvable subspace, it is likely that the KPZ universality class observed for the XXZ spin chain <cit.> emerges also for this open quantum system, if it is robust against the boundary conditions.
§ ACKNOWLEDGEMENTS
C. M. thanks H. Katsura, C. Paletta, and B. Pozsgay for helpful discussions, on which the idea of the HSF part of this work is mainly based. C. M. acknowledges financial support from JSPS KAKENHI Grant Number JP23K03244.
N. T. acknowledges support from JST FOREST (Grant No. JP-MJFR2131)
and JSPS KAKENHI (Grant No. JP24H00191).
0
The steady state condition ℒ(ρ_ ss) = 0 then requires that the operator L satisfies the local divergence relation (<ref>) and the boundary cancellation conditions
_a⟨ v_ L| ( -i M_a,N L_a,N^†_ phys + 𝒟_ R,+(L_a,N L_a,N^†_ phys) ) = 0 , ,
( i M_a,1 L_a,1^†_ phys + 𝒟_ R,-(L_a,1 L_a,1^†_ phys) ) |v_ R⟩_a = 0.
The local divergence relation is solved by the diagonal or non-diagonal L-operator given in (<ref>), which also solves the boundary cancellation conditions if the parameters are set as in Table <ref>.
0
§ PROOF OF EQ. (<REF>)
In this Appendix, we show that any state in the subspace W^(↑,↓) satisfies the dark-state condition (<ref>) in the presence of the boundary dissipators.
As the dissipators (<ref>) non-trivially act only on the first and/or the Nth site, the dark-state condition in the present case is written as
_a⟨ v_ L| ⊗_b⟨ v_ L| 𝒟_ L(A⃗⊗A⃗^†_p) = 0,
𝒟_ R(A⃗⊗A⃗^†_p) |v_ R⟩_a ⊗ |v_ R⟩_b = 0,
in which A⃗⊗A⃗^†_p is the three-by-three matrix with the matrix-valued elements,
A⃗⊗A⃗^†_p =
[ A_0 ⊗ A_0^* A_0 ⊗ A_1^* A_0 ⊗ A_2^*; A_1 ⊗ A_0^* A_1 ⊗ A_1^* A_1 ⊗ A_2^*; A_2 ⊗ A_0^* A_2 ⊗ A_1^* A_2 ⊗ A_2^* ].
The definitions of the matrices A_0, A_1, and A_2 are given in (<ref>).
Then we have
𝒟_ L(A⃗⊗A⃗^†_p) =
[ A_2 ⊗ A_2^* 0 -1/2 A_0 ⊗ A_2^*; 0 0 -1/2 A_1 ⊗ A_2^*; -1/2 A_2 ⊗ A_0^* -1/2 A_2 ⊗ A_1^* A_2 ⊗ A_2^* ],
which indicates that the dissipation terms always include the element A_2 proportional to σ^-. Thus, the dark-state condition (<ref>) is satisfied by choosing the boundary vectors as
_a⟨ v_ L| = _a⟨↑|, _b⟨ v_ L| = _b⟨↑|,
|v_ R⟩_a = |↓⟩_a,
|v_ R⟩_b = |↓⟩_b.
abbrv
|
http://arxiv.org/abs/2409.02295v1 | 20240903211744 | Cosmological limits on the neutrino mass sum for beyond-$Λ$CDM models | [
"Helen Shao",
"Jahmour J. Givans",
"Jo Dunkley",
"Mathew Madhavacheril",
"Frank Qu",
"Gerrit Farren",
"Blake Sherwin"
] | astro-ph.CO | [
"astro-ph.CO",
"hep-ph"
] |
[email protected]
Department of Astrophysical Sciences, Peyton Hall, Princeton University, Princeton, NJ 08544, USA
Center for Computational Astrophysics, Flatiron Institute, 162 5th Ave, New York, NY 10010, USA
Department of Astrophysical Sciences, Peyton Hall, Princeton University, Princeton, NJ 08544, USA
Department of Physics, Jadwin Hall, Princeton University, Princeton, NJ 08544, USA
Department of Astrophysical Sciences, Peyton Hall, Princeton University, Princeton, NJ 08544, USA
Department of Physics and Astronomy, University of Pennsylvania, 209 South 33rd Street, Philadelphia, PA, USA 19104
DAMTP, Centre for Mathematical Sciences, University of Cambridge, Wilberforce Road, Cambridge CB3 OWA, UKKavli Institute for Cosmology Cambridge, Madingley Road, Cambridge CB3 0HA, UK
DAMTP, Centre for Mathematical Sciences, University of Cambridge, Wilberforce Road, Cambridge CB3 OWA, UKKavli Institute for Cosmology Cambridge, Madingley Road, Cambridge CB3 0HA, UK
DAMTP, Centre for Mathematical Sciences, University of Cambridge, Wilberforce Road, Cambridge CB3 OWA, UKKavli Institute for Cosmology Cambridge, Madingley Road, Cambridge CB3 0HA, UK
§ ABSTRACT
The sum of cosmic neutrino masses can be measured cosmologically, as the sub-eV particles behave as `hot' dark matter whose main effect is to suppress the clustering of matter compared to a universe with the same amount of purely cold dark matter. Current astronomical data provide an upper limit on between 0.07 - 0.12 eV at 95% confidence, depending on the choice of data. This bound assumes that the cosmological model is ΛCDM, where dark energy is a cosmological constant, the spatial geometry is flat, and the primordial fluctuations follow a pure power-law. Here, we update studies on how the mass limit degrades if we relax these assumptions. To existing data from the Planck satellite we add new gravitational lensing data from the Atacama Cosmology Telescope, the new Type Ia Supernova sample from the Pantheon+ survey, and baryonic acoustic oscillation (BAO) measurements from the Sloan Digital Sky Survey and the Dark Energy Spectrosopic Instrument. We find the neutrino mass limit is stable to most model extensions, with such extensions degrading the limit by less than 10%. We find a broadest bound of < 0.19 eV at 95% confidence for a model with dynamical dark energy, although this scenario is not statistically preferred over the simpler model.
Cosmological limits on the neutrino mass sum for beyond-ΛCDM models
Blake Sherwin
September 9, 2024
===================================================================
§ INTRODUCTION
Neutrinos are electrically uncharged spin-1/2 fermions which exist in one of three active flavor states: electron, muon and tau neutrinos (ν_e, ν_μ, ν_τ). The discovery of neutrino flavor oscillations <cit.> showed that neutrinos have mass, with each neutrino flavor occupying a superposition of three mass eigenstates, m_i (i = 1,2,3). Oscillation experiments measure the mass-squared splittings between these states, Δ m_ij^2 = m_i^2 - m_j^2, with the mass states either in an inverted hierarchy (IH), where m_3≪ m_1<m_2, or the normal hierarchy (NH), where m_1<m_2≪ m_3. In the normal hierarchy the sum of the masses, ∑ m_i, has a lower limit of 0.06 eV; in the inverted hierarchy it is 0.1 eV <cit.>.
Direct-detection β-decay experiments give limits of m ≤ 0.8 (90% confidence level) for the electron neutrino by the KATRIN Collaboration <cit.>.
Future experiments are expected to reach a sensitivity of 0.2 eV (90% c.l.) <cit.>. A tighter indirect limit has already been placed on the sum of neutrino masses from cosmological data <cit.>, with the primary effect on cosmology being the suppression of the clustering of matter compared to a universe with the same amount of purely cold dark matter. Expressed as the sum of neutrino masses, the current upper bound at 95% confidence is in the range < 0.07-0.12 eV depending on the choice of data, with the tightest limit obtained when including new baryon acoustic oscillation (BAO) data from the Dark Energy Spectroscopic Instrument (DESI) <cit.>.
This bound assumes that the correct cosmological model is , where dark energy is a cosmological constant, the spatial geometry is flat, cold dark matter interacts only via gravity, and the power spectrum of primordial perturbations follows a pure power-law. Previous studies have examined how the mass limit degrades if we loosen these assumptions, since some model extensions can mimic the effect of non-zero neutrino mass on the clustering of matter <cit.>. In this paper we update bounds using the latest data from the Atacama Cosmology Telescope (ACT) <cit.>, the Pantheon+ supernova survey <cit.>, and a composite of BAO data from DESI and the Sloan Digital Sky Survey, allowing for variations in, for example, the spatial curvature and the equation of state of dark energy.
§ METHODOLOGY
We follow similar analysis methods as in <cit.>, estimating the posterior distribution for sets of cosmological parameters that include a varying neutrino mass sum, using a mixture of cosmological datasets. This includes the angular power spectra of the CMB intensity and polarization anisotropy as measured by the Planck satellite <cit.>. We note that the PR3 Planck likelihood showed a 2.9σ inconsistency between predictions of the CMB lensing parameter A_L and its value as measured via its smearing effect on the temperature and polarization power spectra <cit.>. This led to artificially tightened constraints on the neutrino mass. Hence, for our baseline data, as in <cit.>, we choose to use the Planck likelihood constructed from the PR4 NPIPE processed maps <cit.>. We also include BAO measurements which characterize the typical separation of galaxies and provide a measure of the total amount of dark matter. Our `BAO-1' dataset includes data measured by the 6dF <cit.> and Sloan Digital Sky Survey DR7 <cit.>, the latter of which is combined with the BOSS DR12 and DR16 data <cit.>. We also use a composite `BAO-2' dataset combining results from both SDSS and DESI, described in the appendix.[We use the following <cit.> likelihoods for primary CMB data: , and for BAO: .]
To this baseline data combination, we add the angular power spectrum of the reconstructed gravitational lensing of the CMB measured by Planck <cit.> and ACT <cit.>; this provides a measure of the growth of cosmic structure at later times, peaking at typically half the age of the universe. We label these datasets as: `CMB (Planck)' and `CMB (ACT+Planck)'. For the latter, we emphasize that we are using a variation of the ACT likelihood that includes Planck lensing data. Finally, we include the distances to Type Ia supernova as measured by the Pantheon+ survey <cit.>, offering a direct way to measure the expansion rate of the universe. Here, we do not include the SH0Es calibration <cit.> and refer to this dataset as `SNe'.[We convert the Pantheon+ likelihood used with CosmoSIS to a version compatible with Cobaya and check we get the same results as using <cit.>.] Consistent constraints on have also been placed using the full-shape power spectrum of galaxies and redshift space distortions (e.g., <cit.>), though we do not use these in our analyses.
Our baseline model is , characterized by two initial conditions (the amplitude and spectral index of power-law scalar fluctuations, A_s and n_s, respectively), three ingredients (baryon density, cold dark matter density and dark energy fraction), and an optical depth to reionization, τ, to which we add the sum of neutrino masses. The extension model parameters we consider are the curvature, Ω_k, the equation of state of dark energy that is either constant with time or varies with time, w(z), the effective number of neutrino species, N_ eff, and a running of the primordial scalar spectral index, n_run[In there are 3.046 effective neutrino species and a power law spectral index with no running.]. As in <cit.>, we generate theoretical predictions for these models with the CAMB numerical code, using the Parameterized Post-Friedmann (PPF) prescription for dark energy perturbations <cit.>, and the Chevallier-Polarski-Linder (CPL) parameterization of the form <cit.>,
w(z) = w_0 + w_a(z/1+z)
when we consider variations of dark energy equation of state as a function of redshift.[To compute the small-scale nonlinear matter power spectrum we use the halofit model <cit.> implemented with HMcode <cit.> and the MEAD fitting method <cit.>.] We estimate the posterior distribution of parameters with Metropolis Hastings methods using the sampling code, quoting marginalised one-dimensional medians or 95% upper limits. To remain independent of neutrino oscillation experiments, we do not impose the lower mass limit for either NH or IH as a prior. However, we do require to be positive. Previous works have also employed hierarchy-agnostic priors <cit.> but did not find conclusive results on a preferred hierarchy.
As described in <cit.>, in this baseline model the neutrino mass is not strongly correlated with any of the other six parameters, but it has weak correlations with the primordial amplitude and the optical depth to reionization as they both affect the amplitude of the clustering. Fig. <ref> shows how a model with =0.19 eV cannot be distinguished from a zero neutrino mass model using only primary CMB anisotropy data—neutrinos would have become non-relativistic only after the CMB formed. However, such a model would lower the CMB lensing signal by about 6%, which is sufficient to be disfavored at the 3σ significance level.
§ RESULTS
We initially obtain similar results to <cit.>, finding < 0.12 eV for the model, before including SNe data.[<cit.> uses different BAO likelihoods, resulting in slightly different constraints.] Including the SNe data shrinks the 1σ uncertainty by a small amount, but the preferred value is slightly non-zero, so the upper limit increases to 0.13 eV, as shown in Fig. <ref>. If we incorporate DESI measurements, using the CMB (ACT+Planck) + BAO-2 + SNe dataset, the constraint marginally tightens to <0.12 eV, but we still find a non-zero peak in the posterior distribution as shown in Fig. <ref>. This is a higher upper bound compared to the limit reported in <cit.>, since our BAO-2 dataset conservatively does not include the new DESI data in the first two redshift bins, using SDSS data instead. With this choice of data the inverted hierarchy is still viable in the minimal extension. This is consistent with the findings of <cit.>, who provide analogous constraints using the Planck likelihoods <cit.> and a slightly different combination of DESI+SDSS data. A summary of the 95% upper limits for different datasets is given in Table <ref>.
In Fig. <ref> we additionally vary the spatial curvature, where Ω_k<0 (>0) corresponds to a closed (open) universe. As has been shown in e.g., <cit.>,
a slightly larger neutrino mass is allowed in an open universe with Ω_k>0. Due to the geometric degeneracy <cit.>, an open universe needs more dark energy to conserve the peak positions in the primary CMB spectrum.
These models would have a smaller CMB lensing signal than the best-fitting model <cit.>, which can be partly compensated by decreasing the neutrino mass. However, BAO data provides orthogonal constraints on and curvature, switching the direction of this degeneracy <cit.>. The new ACT lensing data better measures the clustering, resulting in <0.13 eV compared to 0.17 eV with only Planck lensing.
Including the SNe data does not further tighten constraints for this model.
If the dark energy is not a constant vacuum energy, its equation of state, w=p/ρ, is anti-correlated with neutrino mass. As discussed in <cit.>, if w<-1, the dark energy density Ω_ DE(z) is decreased, then the total matter density must increase to fit the primary CMB data. This enhances the growth of cosmic structures, whose effects on CMB lensing can be counteracted by increasing the neutrino mass. This correlation increases the neutrino mass bound to <0.18 eV with CMB(ACT+Planck)+BAO-1 data. In this case, the SNe data measure the time dependence of the expansion rate of the universe, providing an additional constraint on w. This partly breaks the degeneracy, as shown in Fig. <ref>, bringing the limit back to <0.13 eV. Including BAO-2 data further decreases this bound to <0.11 eV but does not improve the constraint on w. Moreover, allowing both w and Ω_k to vary together does not further decrease the neutrino mass uncertainty. Neither w-1 nor Ω_k0 are preferred by the data.
The effects of non-zero neutrino mass can only be partially mimicked by a constant equation of state of dark energy. If this component is dynamic, corresponding to a new scalar field for example, the equation of state would have a time dependence. For the time-varying model we consider, the dark energy density scales as <cit.>
Ω_ DE(z) = Ω_ DE(0) (1+z)^3(1+w_0+w_a)exp(-3w_az/1+z).
Fig. <ref> shows how the correlation of the neutrino mass with the equation of state today, w_0, is reduced, and instead becomes anti-correlated with w_a. This happens because w_a governs the late time dynamics of dark energy, as seen in Eq. <ref> for small z. Consequently, it has a larger effect on H(z) and on than does w_0 <cit.>. As discussed earlier, decreasing the dark energy equation of state through an increase in w_a can result in a good fit to the data (namely, the measured comoving distance to decoupling) if is increased. Meanwhile, the w_0 + w_a term in the exponent of Eq. <ref> induces anti-correlations between the two parameters, as seen in the third plot of Fig. <ref>. Thus, w_0 and are now slightly correlated due to their mutual anti-correlations with w_a. Including the SNe data, the bound is <0.23 eV; without the ACT and SNe data this limit is 25% weaker at 0.29 eV. Using the BAO-2 data also shrinks the parameter space for larger , as shown the green contours of Fig. <ref>, resulting in < 0.19 eV. Finally, we find that allowing curvature to be additionally varied in this model does not further increase the uncertainty, as seen in Fig. <ref>.
§ CONCLUSIONS
With an upper limit from cosmological data on the neutrino mass sum of <0.12 eV, rising to <0.19 eV in a time-varying dark energy model, we are approaching the minimum predicted level for either neutrino mass hierarchy (0.06 or 0.1 eV). Our analysis shows that the upper limit does not depend strongly on assumptions about the cosmological model, and that the model is still preferred when applying Bayesian model selection.
A lower bound on has not yet been seen with cosmological data: we have yet to find evidence for non-zero neutrino mass. Improved data from future DESI releases and the Simons Observatory, as well as measurements of the optical depth from CLASS, TauRUS or LiteBIRD, hold promise for a non-zero mass detection <cit.>. In such a scenario, it will be important to distinguish a non-zero mass from a time-varying dark energy. We can expect dark energy properties will be better constrained by future supernova, weak lensing and galaxy clustering data from, e.g., the Vera C. Rubin Observatory and the Euclid satellite. Combinations of multiple cosmological datasets will be crucial for solidifying our understanding of neutrino properties beyond the Standard Model.
§ ACKNOWLEDGMENTS
JG acknowledges support from Princeton's Presidential Postdoctoral Research Fellowship. JD acknowledges NSF grant AST2108126. MM acknowledges support from NSF grants AST-2307727 and AST-2153201 and NASA grant 21-ATP21-0145. We use the .
§ APPENDIX
Recent BAO measurements by DESI placed tight constraints on the upper bound of neutrino mass sum in various cosmologies <cit.>. Notably, they obtain < 0.072 eV in the + model. Given that this bound appears to exclude the IH at 95% confidence, we investigate the impact of this constraint when using select BAO measurements from SDSS in place of DESI. In particular, we note the ∼ 3σ difference found between the DESI and SDSS LRG results in the redshift interval 0.6 < z < 0.8 <cit.>. Thus, we choose to use a composite `BAO-2' dataset containing measurements from both DESI and SDSS, conservatively swapping in the SDSS LRG measurements for these ranges in our data mixture. A similar data combination is tested in <cit.>, where BAO results from each redshift range are selected from the survey that covers the larger effective volume at that redshift. Our `BAO-2' consists of the following BAO measurements:
* We use the SDSS measurements at z = 0.15, 0.38, and 0.51 in place of the BGS and LRG1 results from DESI. The corresponding likelihoods are . This coincides with the selection described in <cit.>, as the SDSS survey contains larger effective volume at these redshifts.
* We use the SDSS measurements for 0.6 < z < 0.8 in place of the LRG2 results from DESI, corresponding to the likelihood. This differs from the selection described in <cit.>, which used the DESI measurements to maximize effective survey volume.
* We use the DESI LRG+ELG combination over 0.8 < z < 1.1, and the higher redshift ELGs and QSOs. The corresponding likelihoods are .
* For the Lyα measurements, we use the combined DESI+SDSS data provided by the likelihood.
Using this in our CMB (ACT+Planck) + BAO-2 dataset for the minimal cosmological extension: +, we find an upper bound of < 0.104 eV (95% c.l). The marginalized posterior distribution is shown in green in Fig. <ref>. If instead we swap out the SDSS LRG measurement at z=0.6 with the corresponding data point from DESI, we find < 0.081 eV (95% c.l). These results can be compared to the upper bound of < 0.075 eV (95% c.l) obtained using CMB (ACT+Planck)+ BAO(DESI), where BAO measurements are from DESI only (blue curve in Fig. <ref>). Hence, we find an increased (∼ 44%) neutrino mass bound when excluding both DESI LRG measurements. While these upper bounds still place pressure on the IH, they allow for NH as a viable mass paradigm in this model. The constraint is further relaxed when including SNe data, as shown in Table <ref>, in agreement with <cit.>.
|
http://arxiv.org/abs/2409.03154v1 | 20240905010317 | Koopman analysis of combinatorial optimization problems with replica exchange Monte Carlo method | [
"Tatsuya Naoi",
"Tatsuya Kishimoto",
"Jun Ohkubo"
] | physics.app-ph | [
"physics.app-ph"
] |
Graduate School of Science and Engineering, Saitama University, Sakura, Saitama 338–8570 Japan
§ ABSTRACT
Combinatorial optimization problems play crucial roles in real-world applications, and many studies from a physics perspective have contributed to specialized hardware for high-speed computation. However, some combinatorial optimization problems are easy to solve, and others are not. Hence, the qualification of the difficulty in problem-solving will be beneficial. In this paper, we employ the Koopman analysis for multiple time-series data from the replica exchange Monte Carlo method. After proposing a quantity that aggregates the information of the multiple time-series data, we performed numerical experiments. The results indicate a negative correlation between the proposed quantity and the ability of the solution search.
Koopman analysis of combinatorial optimization problems with replica exchange Monte Carlo method
Tatsuya Naoi, Tatsuya Kishimoto, and Jun Ohkubo
September 9, 2024
================================================================================================
§ INTRODUCTION
Combinatorial optimization problems play a crucial role in various fields in real-world applications, and their solutions contribute to efficient resource utilization, cost reduction, and service quality improvement. Examples include supply chain management, finance, manufacturing, transportation, energy, healthcare, and telecommunications. In combinatorial optimization problems, we find combinations of variables that minimize an objective function while satisfying various constraints. Since the minimization procedure is the same as seeking ground energy states in statistical physics <cit.>, there are many works for combinatorial optimization problems from a physics viewpoint. For example, D-wave Advantege by D-Wave systems <cit.> employs the principle of quantum annealing <cit.>. Another type of hardware is based on complementary metal-oxide semiconductors (CMOS), including devices developed by Hitachi <cit.>, Toshiba <cit.>, and Fujitsu <cit.>. In the CMOS-type hardware, an architecture based on the replica exchange Monte Carlo method <cit.> is sometimes employed, which exploits the merit of parallel computation.
While recent annealing hardware enables us to solve combinatorial optimization problems at high speed, the time required to find the optimal solution varies from problem to problem. For example, a class of 0-1 quadratic knapsack problem <cit.> is NP-hard, but some are easy to solve, and others are not. Conventional simulated or quantum annealing requires a slow reduction in temperature or quantum effects when the combinatorial optimization problem is hard to solve. Although there is a well-known schedule that yields an optimal solution <cit.>, the schedule is too late in practice. In addition, it is worth knowing in advance whether the problem is easy to solve or not.
Here, we expect that the time-series data used to find the solution will reflect the degree of difficulty in problem-solving; the dynamics for easy problems will have simple structures, while difficult problems yield complicated dynamics. To investigate the dynamics, one can employ the Koopman theory <cit.>, which has attracted attention in various fields. The Koopman theory enables us to use linear algebra for nonlinear systems; it is also possible to analyze stochastic nonlinear systems within the framework of linear algebra. In the Koopman theory, we deal with a Koopman operator for the time evolution in the observable function space, not the state space. While the observable function space is infinite-dimensional, the Koopman operator is linear even if the underlying dynamical system is nonlinear. Several methods have been developed in recent years to obtain Koopman operators from an observed data set. One of the methods is dynamic mode decomposition (DMD), which analyzes dynamical systems from observed data <cit.>. There are extended methods, such as extended dynamic mode decomposition (EDMD) <cit.>. The EDMD can efficiently approximate the Koopman operator as a finite-dimensional matrix. Applications of EDMD are currently being studied, including time series prediction, system identification <cit.>, and control <cit.>.
In this paper, we aim to clarify the relationship between the Koopman analysis and the degree of difficulty in combinatorial optimization problems. The EDMD yields an approximated matrix, the so-called Koopman matrix, for the Koopman operator. Since the replica exchange Monte Carlo method generates time-series data for each temperature, we get several datasets for the Koopman analysis. After applying the EDMD to time-series data generated from the replica exchange Monte Carlo method, we analyze the eigenvalues of the derived Koopman matrix for each temperature; the information of the eigenvalues will contain that of the degree of difficulty for each problem. We will also discuss a conventional data analysis based on singular value decomposition (SVD).
This paper is organized as follows. Section <ref> reviews the background of the annealing method and the Koopman theory. Section <ref> yields the main proposal, including data acquisition and analysis methods. In Sect. <ref>, we demonstrate the proposed method with numerical experiments on randomly generated problems and the 0-1 quadratic knapsack problems; the comparison with the proposed method and the conventional SVD are also discussed. We give a summary and mention of future work in Sect. <ref>.
§ BACKGROUND
§.§ QUBO formulation and Ising model
A quadratic form of binary variables called quadratic unconstrained binary optimization (QUBO) formulation is widely used as an input for specialized hardware. The QUBO formulation is deeply related to the Ising model, and both have binary variables. The QUBO formulation has the following cost function for the state vector a:
E(a) = ∑_i∈𝒟∑_j∈𝒟1/2 Q_ija_ia_j,
where 𝒟 is the set of indices of the variables, a_i∈{0,1} is the i-th binary variable in a, and Q_ij∈ℝ is the strength of the interaction between the binary variables a_i and a_j. The number of spins is N, i.e., |𝒟| = N. By contrast, the Ising model has binary spins with σ_i∈{-1,1}, and the energy (cost) function of the Ising model is denoted as follows:
E(σ) = -∑_i∈𝒟∑_j∈𝒟1/2 J_ijσ_iσ_j-∑_i∈𝒟h_iσ_i,
where σ is the spin vector, J_ij∈ℝ is the two-body interaction between the spins σ_i and σ_j, and h_i is the external magnetic field on σ_i.
The energy function of the Ising model is equivalent to the cost function of QUBO formulation via the following transformation:
a_i = 1+σ_i/2.
Conversion between {J_ij},{h_i} and {Q_ij} is also possible.
It is possible to convert various combinatorial optimization problems into the QUBO formulation <cit.>. Hence, minimizing the functions in Eqs. (<ref>) or (<ref>) corresponds to seeking solutions to combinatorial optimization problems.
§.§ Replica exchange Monte Carlo method
The replica exchange Monte Carlo method, also known as the parallel tempering method, improves sampling efficiency in Monte Carlo simulations and Markov chain Monte Calro methods <cit.>. In the replica exchange method, the temperature of each replica is determined and fixed in advance, and each replica develops with the conventional Metropolis rule. There is an exchange procedure for replicas. Due to the replica exchanges, when a low-temperature replica gets stuck in a local minimum, the replica can escape from a local minimum by exchanging a higher-temperature replica.
Let R be the number of replicas and a^(r) be the state vector for the r-th replica. Then, the Gibbs distribution is defined as follows:
P(a^(r))=1/Z^(r)exp(-E(a^(r))/T^(r)),
where T^(r), E(a^(r)), and Z^(r) correspond to the temperature parameter, the cost function, and the normalization constant for the r-th replica, respectively. While it is difficult to evaluate the normalization constant Z^(r) in general, there is no need to evaluate it; the state transition obeys the following conventional Metropolis rule:
P^(r)_change=min[1,exp(-Δ E^(r)/T_r)],
where Δ E^(r) is the energy difference from the previous state to the next one in the r-th replica. The Metropolis rule yields a sampling from the Gibbs distribution without evaluating the normalization constant Z^(r).
As denoted above, there is the exchange procedure of the state vectors between different replicas. Although low-temperature settings are necessary when seeking stable states, there are many local minima that have difficulty escaping at low temperatures. Hence, we seek various configurations using the replicas with high temperatures. Here, we define the temperature parameters in ascending order: T_1 < T_2 < ⋯ < T_R. Then, the probability P^(m,l)_exchange for the exchange between the m-th and l-th replicas is defined as
P^(m,l)_exchange=min[1,exp{(E^(m)-E^(l))(1/T_m-1/T_l)}],
where E^(m) and E^(l) are the energies of the m-th and l-th replicas, respectively. In this paper, we consider only exchanges at two adjacent replicas, and the following equation holds in Eq. (<ref>):
l = m + 1
for m=1, ⋯, R-1.
Note that the data analysis method proposed in Sect. <ref> assumes the usage of the replica-exchange Monte Carlo method. As denoted in Sect. <ref>, the replica-exchange Monte Carlo method is employed on a certain type of annealing hardware and is also suitable for parallel computing. Hence, the assumption of using the replica-exchange Monte Carlo method is appropriate.
§.§ Koopman theory
We here briefly review the Koopman theory; for more details, see, for example, the review paper in Ref. <cit.>.
The Koopman operator is a linear operator defined mainly in nonlinear dynamical systems. In the Koopman theory, we consider the time evolution of an observable function instead of the state variables. Consider the following time-evolution:
a_t+1 = F(a_t),
where a_t is a state vector at time t, and F is a nonlinear time evolution operator. Here, we consider a deterministic dynamics for simplicity; a stochastic case will be commented later. We also introduce an observable function ϕ(a). For example, ϕ(a) = a_i corresponds to the observation of the i-th spin variable state. Then, the Koopman operator 𝒦 acts on the function ϕ as follows:
𝒦ϕ = ϕ∘ F.
Equations (<ref>) and (<ref>) lead to
ϕ(a_t+1)=(𝒦ϕ)(a_t).
It is easy to confirm the linearity of the Koopman operator; for any constants c_1, c_2 ∈ℝ and observable functions ϕ_1, ϕ_2, we have
{𝒦(c_1 ϕ_1 + c_2 ϕ_2) } (a_t) = (c_1 ϕ_1+c_2 ϕ_2)(a_t+1)
= c_1 ϕ_1(a_t+1) + c_2 ϕ_2(a_t+1)
= c_1(𝒦ϕ_1)(a_t) + c_2(𝒦ϕ_2)(a_t)
= (c_1𝒦ϕ_1 + c_2𝒦ϕ_2)(a_t).
Note that the Koopman operator 𝒦 is infinite-dimensional because 𝒦 acts on elements in a function space. Therefore, it is necessary to approximate the Koopman operator 𝒦 as a finite-dimensional Koopman matrix K in practice. To describe the Koopman matrix K, we introduce a so-called dictionary. The dictionary is defined as
ψ(a) = [ψ_1(a),ψ_2(a),⋯,ψ_N_dic(a)]^⊤,
where ψ_i(a) is the i-th function and N_dic is the size of the dictionary. Note that in Ref. <cit.>, the dictionary is defined as a row vector, but in this paper we consider it as a column vector. Then, we introduce the following linear combination to express an observable function:
ϕ(a_t) = ∑_k=1^N_dicc_kψ_k(a_t) = c^⊤ψ(a_t),
where c is a coefficient vector. Combining Eqs. (<ref>) and (<ref>), we have
(𝒦ϕ)(a_t) = c^⊤(𝒦ψ)(a_t).
Note that c is time-independent. Hence, instead of the the action of the Koopman operator on the observable function ϕ, it is enough to consider the action on the dictionary as follows:
ψ(a_t+1) = 𝒦ψ(a_t) ≃ K ψ(a_t),
which leads to the Koopman matrix K.
The EDMD is a method to derive the Koopman matrix K from a dataset <cit.>. Here, we consider a single time-series dataset {a_1,a_2,…,a_M}, although it is sufficient to have snapshot pairs rather than the single time-series data. Then, the least-squares problem with the cost function,
J = ∑_t=1^M ψ(a_t+1) - Kψ(a_t) ^2,
leads to the Koopman matrix K immediately. The solution is denoted as
K = QG^+,
where
G = 1/m∑_t=1^Mψ(a_t)^⊤ψ(a_t),
Q = 1/m∑_t=1^Mψ(a_t+1)^⊤ψ(a_t),
and G^+ is the pseudo-inverse of the matrix G.
In this paper, the Metropolis rule in Eq. (<ref>) yields the dynamics, and hence we should consider a stochastic dynamics. As discussed in Ref. <cit.>, the Koopman matrix K gives expectations in the stochastic case; Eq. (<ref>) is replaced simply with
𝔼[ψ(a_t+1)] ≃ K ψ(a_t).
Even in the stochastic case, the Koopman matrix K contains the information of the system dynamics.
§ DIFFICULTY IN PROBLEM-SOLVING AND KOOPMAN ANALYSIS
Here, we will discuss a relationship between the Koopman analysis and the degree of difficulty in problem-solving.
§.§ Flow for analyzing data
One would expect that the dynamics for easy problems will have simple structures, while difficult problems yield complicated dynamics. The Koopman matrix contains the degree of complexity of the dynamics; it is possible to describe the dynamics of simple structures with a small number of modes, and the dynamics of complex structures will require many modes. Therefore, the distribution of eigenvalues should reflect the information of the dynamics.
However, a single time series of data alone cannot determine whether the eigenvalue distribution is simple or not. For example, the state of any problem will hardly change at considerably low temperatures, and only a few modes will remain. Therefore, we focus on the variation of the eigenvalue distributions at different temperatures. If the difficulty in problem-solving is high, the number of modes needed to describe the dynamics will increase as soon as the temperature becomes high.
From the above considerations, we employ several time-series datasets generated from the replica exchange Monte Carlo method. Figure <ref> shows the flow for analyzing data. The replica exchange Monte Carlo method generates several time-series data with different temperature parameters, and we simply analyze each of those datasets. Although the flow is simple, there are some notices for the data analysis. We explain each of those notices below.
§.§ Reduction of dictionary size
The dictionary size in the EDMD becomes enormously large when the number of variables increases. For simplicity, consider monomial dictionary functions with the form a_1^p_1 a_2^p_2⋯ a_N^p_N where p_i ∈ℕ_0. When L-1 is the maximum degree of each variable, i.e., p_i < L for all i, the dictionary size is L^N. Of course, we can restrict L to be less than two because the spin only takes 0 or 1; for example, a_1^2 = a_1. However, we still have the dictionary size with 𝒪(2^N). Due to the curse of dimensionality, we cannot evaluate the Koopman matrix.
In the data analysis, we restrict the total degree of monomials to be less than two and remove a_i^2 for i = 1, …,N from the dictionary. Hence, we employ the following dictionary:
ψ(a) =
[
1, a_1, a_2, …, a_N, a_1 a_2, a_1 a_3, …, a_N-1 a_N]^⊤.
Preliminary numerical verification confirms that a higher-order dictionary, which also includes monomials of degree 3 or higher, does not significantly affect the results. We consider the reason for this is that the QUBO formulation has only up to two-body interactions. Hence, we employ the dictionary in Eq. (<ref>) in the data analysis.
§.§ Preparation of snapshot pairs
As denoted in Fig. <ref>, there are replica exchanges between different temperature settings. In the EDMD, the analysis is based on snapshot pairs. Hence, one might suspect that the snapshot pairs at replica exchange timings could cause some problems in the data analysis; the snapshot pairs at the timings do not reflect the true dynamics of the system.
For the preparation of the snapshot pairs, we performed preliminary numerical experiments in which we removed the snapshot pairs at the replica exchange timings. The numerical results showed no significant differences due to data preprocessing. The reason is the small number of replica exchanges; these snapshot pairs do not affect the data analysis. Hence, in the flow in Fig. <ref>, we do not remove the snapshot pairs at the replica exchange timings for simplicity.
§.§ Summation of eigenvalues
After the flow in Fig. <ref>, we have several eigenvalue distributions. As denoted above, we investigate the variation of the eigenvalue distribution at different temperatures.
Figure <ref>(a) shows examples of the eigenvalue distributions; the experimental setting is the same with Sect. <ref>; we will explain it later. In Fig. <ref>(a), the distribution varies depending on the problem. There are several ways to capture the characteristics of the eigenvalue distributions. For example, the number of eigenvalues above a certain threshold would represent the number of dominant modes. After trying some ways, we finally decided to employ the summation of the eigenvalues. Figure <ref>(b) plots the sum of eigenvalues versus each temperature; we see clear differences depending on the problem.
§.§ Characterization of temperature dependency
Finally, we characterize the shape of the temperature dependency in Fig. <ref>(b). After some trials, we found that the area of the rapidly changing region of the temperature dependency in Fig. <ref>(b) captures the characteristics of the difficulty of problem-solving. For example, the left side of the dotted line in Fig. <ref>(b) corresponds to the region showing sharp temperature dependency for each case.
Let w_r be the sum of Koopman eigenvalues at the r-th temperature. We introduce the following quantity S to characterize the difficulty in problem-solving:
S = ∑^R'_r=1(1/2(w_r+1+w_r)(T_r+1-T_r)),
where T_r is the temperature of the r-th replica, and R' is the index of the end temperature of the area. Although one can determine the index R' by the figure appearance, we here employ the following determination method:
* Set r = 3.
* Calculate Δ_r = (w_r+1-w_r)/(w_r-w_r-1).
* If Δ_r < 0.5, then R'=r. If not, r → r+1 and go back to Step 2.
This procedure does not use the first few indices because the data does not show a monotonic increase in the first three or so temperature ranges. Additionally, this rule is not universally applicable; increasing the number of replicas and sampling more finely across temperature ranges could not necessarily exhibit a monotonic increase. However, in the numerical experiments below, we determine R' in the manner described above to automate the procedure.
§ NUMERICAL EXPERIMENT
We perform several numerical experiments to check whether the proposed method successfully captures the characteristics of the difficulty of problem-solving. In the following numerical experiments, we deal with two examples; one is randomly generated, and the other is the 0-1 quadratic knapsack problem <cit.>, which was also used in the previous work for the specialized hardware <cit.>. As for the 0-1 knapsack cases, we generated some problems randomly to vary the difficulty level of problem-solving.
The purpose of the following numerical experiments is to examine the relationship between the Koopman analysis and difficulty in problem-solving. Hence, we compare the number of finding times of the optimal solution with the quantity proposed in Sect. <ref>. Note that we consider only problems with small spin numbers because we need optimal solutions for the comparison.
§.§ Randomly generated problems
Although we tried several problem settings, the following method for generating problems of varying difficulty is employed here. Here, the number of spins is 15, and we randomly generate J_ij in Eq. (<ref>) from 𝒩(3,0.5) or 𝒩(-3,0.5) with equal probability. The coefficients h_i in Eq. (<ref>) is also generated from 𝒩(2,0.5) or 𝒩(-2,0.5) with equal probability. The number of replica is R=15, and the temperatures {T_r} for r=1, 2, ⋯, R are defined as follows:
T_r = r + 1.0.
We obtain 10,000 time-series data for each temperature by using the replica exchange Monte Carlo method. Before the analysis by the EDMD, the binary variables {-1,1} are converted to {0,1} by Eq. (<ref>). Here, we generate nine different problems, and for each problem, the quantity S by the data analysis flow described in Sect. <ref> is evaluated.
Figure <ref>(a) shows the temperature-dependency of the the sum of Koopman eigenvalues. For the evaluation of the quantity S in Eq. (<ref>), R' is determined by the procedure described in Sect. <ref> as follows:
* R'=5 for Cases 1, 2, 3, 4 and 7,
* R'=6 for Cases 5 and 6,
* R'=7 for Case 9,
* R'=8 for Case 8.
Next, we measure the number of finding times of the optimal solution for each case. For the 15 spins problem, the number of states is 2^15 = 32,768, and hence, it is possible to search exhaustively to find the state that minimizes the cost function. Note that the temperature parameters used to solve the problem in the replica exchange Monte Carlo method differ from those used to compute the features S; we employ the following conventional settings:
T_r = 0.001 + (r/R)^2, (R=5 and r=1, 2, ⋯, R).
We apply the updates for a randomly chosen spin N times and the replica exchange procedure at the timing; the replica exchange occurs between a randomly selected replica number and the number one above it. We call this procedure an iteration step. After repeating the iteration steps 100 times, we judge whether we find the optimal solution. We repeat the search 100 times and count the number of finding times of the optimal solution. Figure <ref>(b) shows the relationship between the number of finding times of the optimal solution in 100 trials and the feature S; it shows the results of ten runs of the above experiment for each problem case.
In Fig. <ref>(b), we see a clear negative correlation between the number of finding times the optimal solution and the feature S. This numerical results indicate that our conjecture is valid: the temperature-dependency in the sum of Koopman eigenvalues captures the difficulty in problem-solving.
§.§ Quadratic knapsack Problem
The 0-1 quadratic knapsack problem extends the standard knapsack problem. The standard knapsack problem aims to select items within a given capacity constraint to maximize the total value. However, in the 0-1 quadratic knapsack problem, the value of the items is represented in a quadratic form. As denoted above, the 0-1 quadratic knapsack problem was discussed in the specialized hardware previously <cit.>. The cost function is denoted as follows:
E(x) = -1/2∑_i∈ D∑_j∈ DVal_ijx_ix_j - ∑_i∈ DVal_ix_i
+λmax(0,∑_i∈ DWT_ix_i-WT_max),
where Val_i and WT_i mean the value and the weight of the i-th item, respectively; Val_ij is the interaction value between the i-th item and j-th item, and WT_max is the capacity of the knapsack. Note that x_i∈{0,1}.
Here, we consider problems with 15 items. The problem parameters, Val_ij, Val_i, WT_i, and WT_max in Eq (<ref>) are randomly generated from uniform distribution as follows:
* Val_ij∼ U(1.0,30.0) (i≠ j)
* Val_i∼ U(1.0,30.0),
* WT_i∼ U(1.0,30.0),
* WT_max∼ U^integer(15,16,⋯,300),
where U^integer has the integer domain. For i=j, Val_ij=0. The penalty constant λ in Eq. (<ref>) is 100. Note that the cost function is not the QUBO formulation. There are a few ways to deal with the penalty terms for the inequality constraints. For example, we can convert the penalty terms into the QUBO formulation via duality relations <cit.>. In Ref. <cit.>, a mechanism, such as asymmetric two-body interactions and slack variables, was employed. However, we here directly evaluate the cost function value from Eq. (<ref>) for simplicity because this paper aims to check the ability of the Koopman theory. Hence, the spin number is the same as that of items; there are 15 spins.
For the evaluation of S, we use the following temperature settings:
T_r = 10 × r + 10,
and R = 10. Other settings for the time-series data analysis are the same as in Sect. <ref>. In the solution search procedure, we use the same temperature parameters with Eq. (<ref>). The number of iterations for each search is 200, and the other settings for the search procedure are the same as in Sect. <ref>.
Figure <ref>(a) shows the temperature-dependency of the the sum of Koopman eigenvalues. The values of R' are as follows:
* R'=4 for Cases 1, 2, 3, 4, 7, and 9,
* R'=5 for Case 5,
* R'=6 for Cases 6 and 8.
Figure <ref>(b) show the relationship between the number of finding times of the optimal solution in 100 trials and the feature S; as in Sect. <ref>, there is a clear negative correlation even in the 0-1 knapsack problem.
§.§ Data analysis with SVD
The SVD is well-known for capturing the characteristics of data matrices. Therefore, we here analyze the time-series data with the SVD. In the EDMD method, we investigated the temperature dependency of the sum of eigenvalues. By contrast, we here evaluate the temperature dependency of the sum of singular values. Instead of the quantity S in Eq. <ref>, we evaluate the following quantity:
S^svd = ∑^R'_r=1(1/2(w^svd_r+1 + w^svd_r)(T_r+1-T_r)),
where w^svd_r is the sum of singular values at the r-th temperature.
Figure <ref>(a) shows the temperature-dependency of the the sum of singular values. The values of R' are 15 for all cases. Figure <ref>(b) show the relationship between the number of finding times of the optimal solution in 100 trials and the feature S^svd. Figures <ref>(a) and (b) shows the corresponding results for the 0-1 quadratic knapsack problems; here, R' in Eq. (<ref>) are as follows:
* R'=5 for Case 3, 4, 6, 7, 8, and 9,
* R'=6 for Case 2 and 5,
* R'=8 for Case 1.
The observation from Figs. <ref>(a) and (b) are the same as the random problem cases.
From the comparison of the results of the EDMD and SVD analyses, we obtain the following observations:
* The EDMD and SVD analyses show negative correlations between the proposed features and difficulty in problem-solving.
* The sums of Koopman eigenvalues take small values at low temperatures in all cases, but those of singular values take various values at low temperatures.
* While Figs. <ref>(a) and <ref>(a) show slow temperature-dependency, the EDMD results show a steep temperature dependency, as shown in Figs. <ref>(a) and <ref>(a). Hence, a narrower temperature range is sufficient for the EDMD analysis than the SVD, contributing to lower computational costs.
We consider that the steeper temperature dependency in the Koopman analysis stems from the fact that EDMD represents the dynamics itself; the SVD focuses on the entire time series data, while the EDMD deals with snapshot pairs. As the temperature increases, the dynamics should change, and the EDMD could immediately reflect the details of the dynamics because of the analysis of the snapshot pairs. Therefore, the change in dynamics with a small temperature change would be apparent in the EDMD analysis.
These numerical experiments confirm that the data analysis flow based on the Koopman analysis works well in discussing the difficulty in problem-solving.
§ CONCLUSION
In this paper, we clarified the relationship between the Koopman analysis and the difficulty in problem-solving for combinatorial optimization problems. Since only a single time-series data at a temperature is not enough to discuss the difficulty, we generated multiple time-series data at various temperatures by using the replica exchange Monte Carlo method; as denoted above, a type of specialized hardware has employed the replica exchange Monte Carlo method for the parallel computation. In the data analysis flow, we proposed the quantity that aggregates the information of the multiple time-series data. The numerical experiments confirmed the validity of the proposals; there is a negative correlation between the proposed quantity and the ability of the solution search.
This paper is the first work to use Koopman analysis to discuss the difficulty in problem-solving for combinatorial optimization problems. There are some remaining tasks left for the future. For example, one should check the algorithm dependency of the analytical results; although the Metropolis rule employed in this paper is popular in optimization, the time-series data depends on the update algorithm. It would also be interesting to evaluate the algorithm with a mechanism to get out of the local solution, for example, as proposed in Ref. <cit.>. One of the other remaining tasks is the reduction of the computational costs in the EDMD; for example, the online EDMD <cit.> could be effective for sequential data acquisition in the replica exchange Monte Carlo method. There are many recent studies for specialized hardware for combinatorial optimization problems, and customizing the algorithm for each problem would be beneficial. We believe that this paper will lead to future studies to quantify the difficulty in problem-solving.
This work was supported by JST FOREST Program (Grant Number JPMJFR216K, Japan).
10
Kirkpatrick1983
S. Kirkpatrick, C. D. Gelatt Jr., and M. P. Vecchi, Science 220, 671 (1983).
Bunyk2014
P. I. Bunyk, E. M. Hoskinson, M. W. Johnson, E. Tolkacheva, F. Altomare, A. J. Berkley, R. Harris, J. P. Hilton, T. Lanting, A. J. Przybysz, and J. Whittaker,
IEEE Trans. Appl. Supercond. 24, 1700110 (2014).
d-wave
C. McGeoch and P. Farré, D-Wave Tech. Report Series 14-1049A-A (2020).
d-wave-company
D-Wave Systems (accessed July 31, 2024). [Online]
<https://www.dwavesys.com>.
Kadowaki1998
T. Kadowaki and H. Nishimori, Phys. Rev. E 58, 5355 (1998).
Farhi2001
E. Farhi, J. Goldstone, S. Gutmann, J. Lapan, A. Lundgren, and D. Preda, Science 292, 472 (2001).
Takemoto2019
T. Takemoto, M. Hayashi, C. Yoshimura, and M. Yamaoka, IEEE International Solid-State Circuits Conference (ISSCC), 2019, p. 52.
Takemoto2021
T. Takemoto, K. Yamamoto, C. Yoshimura, M. Hayashi, M. Tada, H. Saito, M. Mashimo, and M. Yamaoka, IEEE Int. Solid-State Circ. Conf. 64, 64 (2021).
Goto2019a
H. Goto, K. Tatsumura, and A. Dixon, Sci. Adv. 5, eaav2372 (2019).
Goto2019b
H. Goto, K. Endo, M. Suzuki, Y. Sakai, T. Kanao, Y. Hamakawa, R. Hidaka, M. Yamasaki, and K. Tatsumura, Sci. Adv. 7, eabe7953 (2021).
Tatsumura2019
K. Tatsumura, A. R. Dixon, and H. Goto, 29th International Conference on Field Programmable Logic and Applications (FPL), 2019, p. 59.
Aramon2019
M. Aramon, G. Rosenberg, E. Valiante, T. Miyazawa, H. Tamura, and H. G. Katzgraber, Front. Phys. 7, 48 (2019).
Matsubara2020
S. Matsubara, M. Takatsu, T. Miyazawa, T. Shibasaki, Y. Watanabe, K. Takemoto, and H. Tamura, 25th Asia and South Pacific Design Automation Conference (ASP-DAC), 2020, p. 667.
Swendsen1986
R. H. Swendsen and J. S. Wang, Phys. Rev. Lett. 57, 2607 (1986).
Hukushima1996
K. Hukushima and K. Nemoto, J. Phys. Soc. Jpn. 65, 1604 (1996).
Gallo1980
G. Gallo, P. L. Hammer, and B. Simeone, Combinatorial Optimization 12, 132 (1980).
Geman1984
S. Geman and D. Geman, IEEE Trans. Pattern Anal. Mach. Intell. PAMI-6, 721 (1984).
Suzuki2005
S. Suzuki and M. Okada, J. Phys. Soc. Jpn. 74, 1649 (2005).
Koopman1931
B. O. Koopman, Proc. Nat. Acad. Sci. 17, 315 (1931).
Rowley2009
C. Rowley, I. Mezić, P. Bagheri, S. Schlatter, and D. Henningson, J. Fluid Mech. 641, 115 (2009).
Williams2015
M. O. Williams, I. G. Kevrekidis, and C. W. Rowley, J. Nonlinear Sci. 25, 1307 (2015).
Mauroy2020
A. Mauroy and J. M. Gonçalves, IEEE Tran. Auto. Control 65, 2550 (2020).
Korda2018
M. Korda and I. Mezić, Automatica 93, 149 (2018).
Lucas2014
A. Lucas, Front. Phys. 2, 2 (2014).
Brunton2022
S. L. Brunton, M. Budišić, E. Kaiser, and J. N. Kutz, SIAM Rev. 64, 229 (2022).
Yin2023
F. Yin, H. Tamura, Y. Furue, M. Konoshima, K. Kanda, and Y. Watanabe, J. Phys. Soc. Jpn. 92, 034002 (2023).
Sato2019
G. Sato, M. Konoshima, T. Ohwa, H. Tamura, and J. Ohkubo, Phys. Rev. E 99, 042106 (2019).
Sato2024
Y. Sato, M. Konoshima, H. Tamura, and J. Ohkubo, J. Phys. Soc. Jpn. 93, 044802 (2024).
Zhang2019
H. Zhang, C. W. Rowley, E. A. Deem, and L. N. Cattafesta, SIAM J. Appl. Dyn. Syst. 18, 1586 (2019).
|