entry_id: http://arxiv.org/abs/2409.03207v1
published: 20240905025712
title: Ruelle's inequality and Pesin's formula for Anosov geodesic flows in non-compact manifolds
authors: Alexander Cantoral, Sergio Romaña
primary_category: math.DS
categories: math.DS
text:
Ruelle's inequality and Pesin's formula for Anosov geodesic flows in non-compact manifolds
Keywords: Anosov geodesic flow, Jacobi field, Lyapunov exponents, Ruelle's inequality, Pesin's formula.
Mathematics Subject Classification (2010): 37D40, 53C20.
Instituto de Matemática, Universidade Federal do Rio de Janeiro, CEP 21941-909, Rio de Janeiro, Brazil
[email protected]
Instituto de Matemática, Universidade Federal do Rio de Janeiro, CEP 21941-909, Rio de Janeiro, Brazil
[email protected]
§ ABSTRACT In this paper we prove Ruelle's inequality for the geodesic flow of non-compact manifolds with Anosov geodesic flow, under some assumptions on the curvature. In the same way, we obtain Pesin's formula for Anosov geodesic flows on non-compact manifolds with finite volume.
Received 16 July 2024; accepted 04 September 2024
§ INTRODUCTION
Ruelle in <cit.> proved an important result in ergodic theory relating entropy and Lyapunov exponents. More precisely, if f:M→ M is a C^1-diffeomorphism on a compact manifold and μ is an f-invariant probability measure on M, then
h_μ(f)≤∫∑_𝒳_i(x)>0𝒳_i(x)·dim(H_i(x)) dμ(x),
where h_μ(f) is the entropy, {𝒳_i(x)} is the set of Lyapunov exponents at x∈ M and dim(H_i(x)) is the multiplicity of 𝒳_i(x).
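For orientation, the following standard compact example (not taken from this paper, included only as an illustration) shows both the inequality and the case of equality in the simplest possible setting. Let f_A:𝕋^2→𝕋^2 be the hyperbolic toral automorphism induced by the matrix A with rows (2,1) and (1,1), and let μ be the Lebesgue (Haar) measure. The Lyapunov exponents are 𝒳_1=log((3+√5)/2)>0 and 𝒳_2=-𝒳_1, each with multiplicity 1, and
h_μ(f_A)=log((3+√5)/2)=∑_𝒳_i>0𝒳_i·dim(H_i),
so Ruelle's inequality holds with equality here, as predicted by Pesin's formula for smooth invariant measures.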
In situations involving non-compact manifolds, Ruelle's inequality may fail. For example, Riquelme in <cit.> constructed diffeomorphisms of non-compact manifolds admitting an invariant measure with positive entropy for which the sum of the positive Lyapunov exponents is equal to zero. However, in recent years, certain results have been obtained that, in particular situations, make it possible to verify Ruelle's inequality in non-compact settings. Liao and Qiu in <cit.> showed Ruelle's inequality for general Riemannian manifolds under an integrability condition. Riquelme in <cit.> showed Ruelle's inequality for the geodesic flow of manifolds with pinched negative sectional curvature under some conditions on the derivatives of the sectional curvature.
The main goal of this work is to prove Ruelle's inequality for the geodesic flow on the unit tangent bundle of a non-compact manifold with Anosov geodesic flow and some assumptions on the curvature. More precisely,
Let M be a complete Riemannian manifold with Anosov geodesic flow. Assume that the curvature tensor and the derivative of the curvature tensor are both uniformly bounded. Then, for every ϕ^t-invariant probability measure μ on SM, we have
h_μ(ϕ)≤∫_SM∑_𝒳_i(θ)>0𝒳_i(θ)·dim(H_i(θ)) dμ(θ).
We can see that this result generalizes the one proved by Riquelme in <cit.>, since geodesic flows of manifolds with pinched negative sectional curvature are Anosov.
The question arises as to under what conditions equality can be achieved in (<ref>). For example, when the manifold is compact, the diffeomorphism is C^1+α and the measure is absolutely continuous with respect to the Lebesgue measure, Pesin showed in <cit.> that (<ref>) is actually an equality, called Pesin's formula. Our second result deals with the equality case of Theorem 1.1. In this case, we suppose that the manifold has finite volume.
Let M be a complete Riemannian manifold with finite volume and Anosov geodesic flow, where the flow is C^1-Hölder. Assume that the curvature tensor and the derivative of the curvature tensor are both uniformly bounded. Then, for every ϕ^t-invariant probability measure μ on SM which is absolutely continuous relative to the Lebesgue measure, we have
h_μ(ϕ)= ∫_SM∑_𝒳_i(θ)>0𝒳_i(θ)·dim(H_i(θ)) dμ(θ).
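A classical compact instance of this equality (again only an illustration, not part of the results of this paper): for the geodesic flow of a closed orientable surface of constant curvature -1 and μ the normalized Liouville measure,
𝒳^+=1 with dim(H^+)=1, and h_μ(ϕ)=1=∑_𝒳_i(θ)>0𝒳_i(θ)·dim(H_i(θ)).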
§.§ Structure of the Paper:
In Section 2, we introduce the notation and geometric tools used in the paper. In Section 3, we prove the existence of Oseledec's decomposition for the flow at time t=1. In Section 4, we explore certain results that allow us to deal with the challenge of non-compactness of the manifold. Using the strategy exhibited in <cit.> to prove Ruelle's inequality for diffeomorphisms in the compact case, we prove Theorem 1.1 in Section 5. Finally, in Section 6 we prove Theorem 1.2 using techniques applied by Mañé in <cit.>.
§ PRELIMINARIES AND NOTATION
Throughout this paper, M=(M,g) will denote a complete Riemannian manifold without boundary of dimension n≥ 2, TM is the tangent bundle, SM its unit tangent bundle and π:TM→ M will denote the canonical projection, that is, π(x,v)=x for (x,v)∈ TM.
§.§ Geodesic flow
Given θ=(x,v)∈ TM, we denote by γ_θ the unique geodesic with initial conditions γ_θ(0)=x and γ'_θ(0)=v. The geodesic flow is a family of C^∞-diffeomorphisms ϕ^t:TM→ TM, where t∈ℝ, given by
ϕ^t(θ)=(γ_θ(t), γ'_θ(t)).
Since geodesics travel with constant speed, we have that ϕ^t leaves SM invariant. The geodesic flow generates a vector field G on TM given by
G(θ)=. ddt|_t=0ϕ^t(θ)=. ddt|_t=0( γ_θ(t),γ'_θ(t)) .
For each θ=(x,v)∈ TM, let V be the vertical subbundle of TM whose fiber at θ is given by V_θ= ker dπ_θ. Let K: TTM→ TM be the connection map induced by the Riemannian metric (see <cit.>) and denote by H the horizontal subbundle of TM whose fiber at θ is given by H_θ= ker K_θ. The maps dπ_θ|_H_θ:H_θ→ T_xM and K_θ|_V_θ: V_θ→ T_xM are linear isomorphisms. This implies that T_θ TM=H_θ⊕ V_θ and the map j_θ:T_θ TM→ T_xM× T_xM given by
j_θ(ξ)=(dπ_θ(ξ), K_θ(ξ))
is a linear isomorphism. Furthermore, we can identify every element ξ∈ T_θ TM with the pair j_θ(ξ). Using the decomposition T_θ TM=H_θ⊕ V_θ, we endow the tangent bundle TM with a special Riemannian metric that makes H_θ and V_θ orthogonal. This metric is called the Sasaki metric and it's given by
⟨ξ, η⟩_θ=⟨ dπ_θ(ξ), dπ_θ(η)⟩_x + ⟨ K_θ(ξ),K_θ(η)⟩_x .
From now on, we work with the Sasaki metric restricted to the unit tangent bundle SM. To begin with, it is natural to ask whether SM is a complete Riemannian manifold with this metric.
Let M be a complete Riemannian manifold. Then SM is a complete metric space with the Sasaki metric.
Let θ, ω∈ SM and γ:[0,1]→ SM be a curve joining θ and ω. By the identification (<ref>) we can write
l(γ) =∫_0^1 ‖γ'(t)‖ dt
=∫_0^1 ( ‖dπ_γ(t)(γ'(t))‖^2 + ‖K_γ(t)(γ'(t))‖^2 )^1/2 dt
≥∫_0^1 ‖dπ_γ(t)(γ'(t))‖ dt
=∫_0^1 ‖(π∘γ)'(t)‖ dt
=l(π∘γ).
This implies that
d(θ,ω)≥ d(π(θ),π(ω))
for any two points θ,ω∈ SM. Let {(p_n,v_n)}_n∈ℕ be a Cauchy sequence in SM. By (<ref>) we have that {p_n}_n∈ℕ is a Cauchy sequence in M. Since M is complete, there is p∈ M such that lim_n→ +∞p_n=p. If we consider the compact set X={(q,v)∈ SM: d(q,p)≤ 1}, then there exists n_0 such that (p_n,v_n)∈ X for every n≥ n_0, and therefore the Cauchy sequence converges in SM.
The sectional curvature of SM with the Sasaki metric can be calculated from the curvature tensor and the derivative of the curvature tensor of M as explained in <cit.>: Let Π be a plane in T_(x,v)SM and choose an orthonormal basis {(v_1,w_1), (v_2,w_2)} for Π satisfying ‖v_i‖^2+‖w_i‖^2=1, for i=1,2, and ⟨ v_1,v_2 ⟩ =⟨ w_1,w_2⟩=0. Then the Sasaki sectional curvature of Π is given by
K_Sas(Π)= ⟨ R_x(v_1,v_2)v_1,v_2 ⟩ +3⟨ R_x(v_1,v_2)w_1,w_2 ⟩
+ ‖w_1‖^2 ‖w_2‖^2
-(3/4)‖R_x(v_1,v_2)v‖^2+(1/4)‖R_x(v,w_2)v_1‖^2+(1/4)‖R_x(v,w_1)v_2‖^2
+(1/2)⟨ R_x(v,w_1)w_2,R_x(v,w_2)v_1 ⟩ - ⟨ R_x(v,w_1)v_1,R_x(v,w_2)v_2 ⟩
+⟨ (∇_v_1R)_x(v,w_2)v_2,v_1 ⟩ + ⟨ (∇_v_2R)_x(v,w_1)v_1,v_2 ⟩ .
This equality shows that if the curvature tensor of M and its derivatives are bounded, then the sectional curvature of SM with the Sasaki metric is also bounded. This property is crucial as it allows us to compare volumes between subsets of TSM and subsets of SM using the exponential map of SM (see Lemma 5.3).
The types of geodesic flows discussed in this paper are the Anosov geodesic flows, whose definition follows below.
We say that the geodesic flow ϕ^t:SM→ SM is of Anosov type if T(SM) has a continuous splitting T(SM)=E^s⊕⟨ G⟩⊕ E^u such that
dϕ^t_θ (E^s(u)(θ)) = E^s(u)(ϕ^t(θ)),
‖dϕ^t_θ|_E^s‖ ≤ C λ^t,
‖dϕ^-t_θ|_E^u‖ ≤ C λ^t,
for all t≥ 0 with C>0 and λ∈ (0,1), where G is the geodesic vector field. It's known that if the geodesic flow is Anosov, then the subspaces E^s(θ) and E^u(θ) are Lagrangian for every θ∈ SM (see <cit.> for more details).
§.§ Jacobi fields
To study the differential of the geodesic flow with geometric arguments, let us recall the definition of a Jacobi field. A vector field J along a geodesic γ of M is a Jacobi field if it satisfies the Jacobi equation
J”(t)+R(γ'(t),J(t))γ'(t)=0,
where R denotes the curvature tensor of M and "'" denotes the covariant derivative along γ. A Jacobi field is determined by the initial values J(t_0) and J'(t_0), for any given t_0∈ℝ. If we denote by S the orthogonal complement of the subspace spanned by G, for every θ∈ SM, the map ξ→ J_ξ defines an isomorphism between S(θ) and the space of perpendicular Jacobi fields along γ_θ, where J_ξ(0)=dπ_θ(ξ) and J'_ξ(0)=K_θ(ξ).
The differential of the geodesic flow is determined by the behavior of the Jacobi fields and, therefore, by the curvature. More precisely, for θ∈ SM and ξ∈ T_θ SM we have (in the horizontal and vertical coordinates)
dϕ^t_θ(ξ)=(J_ξ(t), J'_ξ(t)), t∈ℝ.
In the context of an Anosov geodesic flow, if ξ∈ E^s(θ) (respectively, ξ∈ E^u(θ)), the Jacobi field associated J_ξ(t) is called a stable (respectively, unstable) Jacobi field along γ_θ(t).
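As a model computation (constant curvature -1; it is not needed in the sequel but it makes the Anosov estimates concrete), the Jacobi equation reduces to J''-J=0, so perpendicular stable and unstable Jacobi fields are explicit:
J_ξ(t)=e^-tE(t) for ξ∈ E^s(θ), J_η(t)=e^tE(t) for η∈ E^u(θ),
‖dϕ^t_θ(ξ)‖=√(‖J_ξ(t)‖^2+‖J'_ξ(t)‖^2)=e^-t‖ξ‖, t≥ 0,
where E(t) is a parallel unit field perpendicular to γ'_θ(t); hence the definition above is satisfied with C=1 and λ=e^-1.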
The following proposition allows us to uniformly limit the derivative of the exponential map from certain conditions on the curvature of the manifold.
Let N be a complete Riemannian manifold and suppose that the curvature tensor is uniformly bounded. Then there exists t_0>0 such that for all x∈ N and for all v,w∈ T_xN with ‖v‖=‖w‖=1 we have
‖d(exp_x)_tvw‖≤ 5/2, ∀ |t| ≤ t_0.
If w∈⟨ v⟩, then w=v or w=-v. In both cases, by the Gauss Lemma (see <cit.>) we have that
‖d(exp_x)_tvw‖^2 =⟨ d(exp_x)_tvw, d(exp_x)_tvw⟩
=⟨ d(exp_x)_tvv, d(exp_x)_tvv⟩
=(1/t^2)⟨ d(exp_x)_tv(tv), d(exp_x)_tv(tv)⟩
=(1/t^2)⟨ tv,tv⟩
=1.
Now assume that w∈⟨ v⟩^⊥. Consider the Jacobi field
J(t)=d(exp_x)_tv(tw), t∈ [-1,1],
with initial conditions J(0)=0 and J'(0)=w. By Lemma 8.3 of <cit.> there exists t_0>0, independent of the point x, such that
‖d(exp_x)_tvw‖ = ‖J(t)‖/|t| ≤ 3/2, ∀ t∈(-t_0,t_0)∖{ 0 }.
As T_xN=⟨ v⟩ ⊕ ⟨ v⟩^⊥, the last inequality completes the proof.
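In the model case of constant curvature -c^2 the quantities in Proposition 2.2 are explicit (a hedged illustration for ‖v‖=‖w‖=1 and w⊥ v, not a statement from the paper):
J(t)=(sinh(ct)/c)E(t), so ‖d(exp_x)_tvw‖=‖J(t)‖/|t|=sinh(c|t|)/(c|t|)≤ 3/2 whenever c|t|≤ 1.6;
together with the radial direction, where the Gauss Lemma gives norm 1, any t_0≤ 1.6/c then yields the bound 5/2.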
§.§ No conjugate points
Let γ be a geodesic joining p,q∈ M, p≠ q. We say that p,q are conjugate along γ if there exists a non-zero Jacobi field along γ vanishing at p and q. A manifold M has no conjugate points if no pair of points is conjugate. This is equivalent to the exponential map being non-singular at every point of M. There are examples of manifolds without conjugate points obtained from the hyperbolic behavior of the geodesic flow. In <cit.>, Klingenberg proved that a compact Riemannian manifold with Anosov geodesic flow has no conjugate points. Years later, Mañé (see <cit.>) generalized this result to the case of manifolds of finite volume. In the case of infinite volume, Melo and Romaña in <cit.> extended the result of Mañé under the assumption of sectional curvature bounded below and above. These results show the relationship between the geometry and dynamics of an Anosov geodesic flow.
Let M be a complete Riemannian manifold without conjugate points and sectional curvature bounded below by -c^2, for some c>0. When the geodesic flow ϕ^t:SM→ SM is of Anosov type, Bolton in <cit.> showed that there is a positive constant δ such that, for every θ∈ SM, the angle between E^s(θ) and E^u(θ) is greater than δ. Moreover, Eberlein in <cit.> showed that
1. ‖K_θ(ξ)‖≤ c ‖dπ_θ(ξ)‖ for every ξ∈ E^s(θ) or E^u(θ), where K:TTM → TM is the connection map.
2. If ξ∈ E^s(θ) or E^u(θ), then J_ξ(t)≠ 0 for every t∈ℝ.
§.§ Lyapunov exponents
Let (M,g) be a Riemannian manifold and f:M→ M a C^1-diffeomorphism. The point x is said to be (Lyapunov-Perron) regular if there exist numbers {𝒳_i(x)}_i=1^l(x), called Lyapunov exponents, and a decomposition of the tangent space at x into T_xM=⊕_i=1^l(x)H_i(x) such that for every vector v∈ H_i(x)∖{ 0}, we have
lim_n→±∞ (1/n) log ‖df^n_xv‖ =𝒳_i(x)
and
lim_n→±∞ (1/n) log |det( df^n_x)|=∑_i=1^l(x)𝒳_i(x)·dim(H_i(x)).
Let Λ be the set of regular points. By Oseledec's Theorem (see <cit.>), if μ is an f-invariant probability measure on M such that log^+ ‖df^± 1‖ is μ-integrable, then the set Λ has full μ-measure. Moreover, the functions x→𝒳_i(x) and x→dim(H_i(x)) are μ-measurable and f-invariant. In particular, if μ is ergodic, they are μ-almost everywhere constant.
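For example (a standard computation, stated only for comparison), for the time-one map of the geodesic flow of a closed n-manifold of constant curvature -1 every point is regular and the Lyapunov spectrum is
{𝒳_i}={+1, 0, -1}, with dim(H_+1)=dim(H_-1)=n-1 and dim(H_0)=1,
the zero exponent corresponding to the flow direction ⟨ G⟩.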
§ EXISTENCE OF LYAPUNOV EXPONENTS
In this section, we will prove that when the geodesic flow is Anosov and the sectional curvature is bounded below, the norm ‖dϕ^± 1_θ‖ is bounded by a positive constant independent of θ. This boundedness is crucial as it ensures, for a given probability measure, the existence of Lyapunov exponents by Oseledec's Theorem. More precisely,
Let M be a complete Riemannian manifold without conjugate points, sectional curvature bounded below by -c^2, for some c>0, and μ a ϕ^t-invariant probability measure on SM. If the geodesic flow is of Anosov type, then log ‖dϕ^± 1‖∈ L^1(μ).
Before giving a proof of Theorem 3.1, it is essential to establish the following two lemmas.
Let M be a complete Riemannian manifold without conjugate points, sectional curvature bounded below by -c^2, for some c>0, and geodesic flow of Anosov type. Then there exists a constant P>0 such that for every θ∈ SM and every ξ∈ E^s(θ), η∈ E^u(θ) with ‖ξ‖=‖η‖=1, we have
‖J_η(1)‖ ≤ P and ‖J_ξ(-1)‖ ≤ P.
Fix θ∈ SM and let η∈ E^u(θ) with ‖η‖=1. Consider a stable Jacobi field J_s along γ_θ such that J_η(0)=J_s(0) and put ω=(J_s(0),J'_s(0)). By item 1 of Section 2.3 we have
‖J'_s(0)‖≤ c ‖J_s(0)‖ = c ‖J_η(0)‖ ≤ c
and
‖ω‖^2= ‖J_s(0)‖^2 +‖J_s'(0)‖^2≤ 1+c^2.
Define the Jacobi field J(t)= J_η(t)-J_s(t). We can see that J is a perpendicular Jacobi field along γ_θ satisfying J(0)=0. By Rauch's comparison Theorem (see <cit.>) we have that
‖J(1)‖≤ (sinh c/c)‖J'(0)‖.
Since the geodesic flow is Anosov and ω∈ E^s(θ),
‖J_s(1)‖≤‖dϕ^1_θ (ω)‖ ≤ Cλ‖ω‖≤ Cλ√(1+c^2).
From (<ref>) and (<ref>) we have that
‖J_η(1)‖ ≤ ‖J(1)‖ + ‖J_s(1)‖
≤ (sinh c/c) ‖J'(0)‖ + Cλ√(1+c^2)
≤ (sinh c/c)( ‖J'_η(0)‖ + ‖J'_s(0)‖ ) +Cλ√(1+c^2)
≤ ((1+c)/c) sinh c +Cλ√(1+c^2):=P_1.
Using the same technique for the stable case, there exists P_2>0 such that
‖J_ξ(-1)‖≤ P_2
for every ξ∈ E^s(θ) with ‖ξ‖=1. Considering P=max{ P_1,P_2}, the conclusion of the lemma follows.
We know that, with the hypothesis of Theorem 3.1, there exists a constant δ>0 such that the angle between the stable and unstable subspaces is uniformly bounded below by δ. As a direct consequence of this result, we have the following lemma.
Let M be a complete Riemannian manifold without conjugate points, sectional curvature bounded below by -c^2, for some c>0, and geodesic flow of Anosov type. Define the function f:SM→ℝ as
f(θ)=sup{|⟨ξ,η⟩|: ξ∈ E^s(θ), η∈ E^u(θ), ‖ξ‖=‖η‖=1 } .
Then there exists Q>0 such that
sup_θ∈ SM f(θ)≤ Q<1.
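Sketch of where Q comes from (using only the uniform angle bound of Bolton quoted in Section 2.3): if ξ∈ E^s(θ) and η∈ E^u(θ) are unit vectors, then -η∈ E^u(θ) as well, so the angle ∠(ξ,η) lies in [δ,π-δ] and
|⟨ξ,η⟩|=|cos ∠(ξ,η)|≤ cos δ<1, hence Q=cos δ works.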
Proof of Theorem 3.1. Fix θ∈ SM and consider ξ∈ T_θ SM with ‖ξ‖=1. Since the geodesic flow is of Anosov type, we can write
ξ=sξ_1+rξ_2+ξ_3,
where ξ_1∈ E^s(θ), ξ_2∈ E^u(θ) and ξ_3∈⟨ G(θ)⟩ with ‖ξ_1‖=‖ξ_2‖=1. Then
1=‖sξ_1+rξ_2‖^2 + ‖ξ_3‖^2.
This implies that ‖ξ_3‖≤ 1 and ‖sξ_1+rξ_2‖≤ 1. We have
‖sξ_1+rξ_2‖^2=s^2+r^2+2sr⟨ξ_1,ξ_2 ⟩≤ 1.
It follows from Lemma 3.3 that the regions
E_β={ (s,r):s^2+r^2+2srβ≤ 1}
with -Q≤β≤ Q are bounded ellipses. If we consider L=diam(E_Q)/2+1>0, the ball B centered at 0 with radius L contains all of these ellipses (see Figure 1). In particular, the parameters s and r are bounded, that is, |s|, |r|≤ L.
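This bound can be made quantitative (a small computation with the constant Q of Lemma 3.3):
s^2+r^2+2srβ ≥ s^2+r^2-2|s||r|Q ≥ (1-Q)(s^2+r^2),
so every (s,r)∈ E_β with |β|≤ Q satisfies s^2+r^2≤ 1/(1-Q); in fact diam(E_Q)/2=(1-Q)^-1/2, so one admissible choice is L=(1-Q)^-1/2+1.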
By Lemma 3.2 we have that
‖dϕ^1_θ(ξ_2)‖ =√( ‖J_ξ_2(1)‖^2 + ‖J'_ξ_2(1)‖^2 )
≤√(1+c^2) ‖J_ξ_2(1)‖
≤√(1+c^2)P.
Then
‖dϕ^1_θ(ξ)‖ ≤ |s| ‖dϕ^1_θ(ξ_1)‖ + |r| ‖dϕ^1_θ(ξ_2)‖ + ‖dϕ^1_θ(ξ_3)‖
≤ |s| Cλ + |r|√(1+c^2)P+1
≤ LCλ + L√(1+c^2)P+1
for every ξ∈ T_θ SM with ‖ξ‖=1. This implies that ‖dϕ^1_θ‖ is bounded and therefore the function log ‖dϕ^1‖ is μ-integrable, since the constants L and P are independent of the point θ. Using the second inequality of Lemma 3.2 we obtain that log ‖dϕ^-1‖ is μ-integrable.□
§ CONSEQUENCES OF A GEODESIC FLOW BEING OF ANOSOV TYPE
In this section, we explore some results, based on the hyperbolicity of a geodesic flow, that will allow us to address the challenge of the non-compactness of the manifold in the proof of Ruelle's inequality.
From now on, let us assume that M is a complete Riemannian manifold without conjugate points, sectional curvature bounded below by -c^2, for some c>0, and the geodesic flow ϕ^t:SM→ SM is of Anosov type. For every ω∈ T_θ SM we can write
ω=ω^s+ω^u+ω^c,
where ω^s∈ E^s(θ), ω^u∈ E^u(θ) and ω^c∈⟨ G(θ)⟩.
For m∈ℕ large enough, there is τ_1>1 such that for every θ∈ SM
‖dϕ^m_θ‖≤τ_1 ‖dϕ^m_θ(η)‖
for some η∈ E^u(θ) with ‖η‖=1.
Fix θ∈ SM and let ω=ω^s+ω^u+ω^c∈ T_θ SM with ‖ω‖=1. This implies that ‖ω^s+ω^u‖ ≤ 1 and ‖ω^c‖ ≤ 1. Moreover, we know that ‖ω^s‖≤ L and ‖ω^u‖≤ L (see Section 3). Consider m∈ℕ large enough such that Cλ^m<1/2.
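The requirement on m is explicit (an elementary remark):
Cλ^m<1/2 ⟺ m>log(2C)/log(1/λ),
so any fixed m larger than this threshold works.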
Case 1: ω^u=0.
Since the geodesic flow is Anosov we have that
‖dϕ^m_θ(ω)‖ ≤ ‖dϕ^m_θ(ω^s)‖ +‖dϕ^m_θ(ω^c)‖
≤ Cλ^m + 1
<C^-1λ^-m
≤ ‖dϕ^m_θ(η)‖
for every η∈ E^u(θ) with ‖η‖=1.
Case 2: ω^u≠ 0.
Since the geodesic flow is Anosov we have that
‖dϕ^m_θ(ω^s)‖≤ Cλ^m L<C^-1λ^-mL≤ L ‖dϕ^m_θ(ω^u)‖/‖ω^u‖ .
Then
‖dϕ^m_θ(ω)‖ ≤ ‖dϕ^m_θ(ω^s)‖ + ‖dϕ^m_θ(ω^u)‖+ ‖dϕ^m_θ(ω^c)‖
≤ L ‖dϕ^m_θ(ω^u)‖/‖ω^u‖ + L ‖dϕ^m_θ(ω^u)‖/‖ω^u‖ +1
< (2L+1) ‖dϕ^m_θ(ω^u)‖/‖ω^u‖ .
If we consider τ_1=2L+1, in both cases we have that
‖dϕ^m_θ(ω)‖≤τ_1 ‖d ϕ^m_θ|_E^u(θ)‖
for every ω∈ T_θ SM with ‖ω‖=1. Since the norm is always attained in a finite-dimensional space, we conclude the proof of the lemma.
For m∈ℕ large enough, there is τ_2∈ (0,1) such that for every θ∈ SM
‖dϕ^m_θ‖^*≥τ_2 ‖dϕ^m_θ(ξ)‖
for some ξ∈ E^s(θ) with ‖ξ‖=1, where ‖dϕ^m_θ‖^*=inf_‖v‖=1‖dϕ^m_θ(v)‖.
Let ε>0 and consider m∈ℕ large enough such that ε≥ (L+1)Cλ^m and √(1-ε^2) >ε Cλ^m, where L comes from Section 3. Fix θ∈ SM and define the following set
Γ_θ,ε,m:={ω∈ T_θ SM: ‖ω‖=1, ω=ω^s+ω^u+ω^c and ‖dϕ^m_θ(ω^u+ω^c)‖ ≥ε} .
Case 1: ω∈Γ_θ,ε,m with ω^s≠ 0.
Since the geodesic flow is Anosov,
‖dϕ^m_θ(ω)‖ ≥ ‖dϕ^m_θ(ω^u+ω^c)‖ - ‖dϕ^m_θ(ω^s)‖
≥ ‖dϕ^m_θ(ω^u+ω^c)‖ - Cλ^m ‖ω^s‖.
As ω∈Γ_θ,ε,m we have that
‖dϕ^m_θ(ω^u+ω^c)‖ ≥ε≥ (L+1)Cλ^m≥ (‖ω^s‖ +1)Cλ^m.
Then from (<ref>) and (<ref>)
‖dϕ^m_θ(ω)‖ ≥ ‖dϕ^m_θ(ω^u+ω^c)‖ -Cλ^m ‖ω^s‖≥ Cλ^m ≥ ‖dϕ^m_θ(ω^s)‖/‖ω^s‖ .
Case 2: ω∈Γ_θ,ε,m with ω^s=0.
Since the geodesic flow is Anosov,
‖dϕ^m_θ(ω)‖ = ‖dϕ^m_θ(ω^u+ω^c)‖ ≥ε>Cλ^m≥‖dϕ^m_θ(ξ)‖
for every ξ∈ E^s(θ) with ‖ξ‖=1.
Case 3: ω∉Γ_θ,ε,m and ‖ω‖=1.
We have that
ε^2 >‖dϕ^m_θ(ω^u+ω^c)‖^2= ‖dϕ^m_θ(ω^u)‖^2+‖dϕ^m_θ(ω^c)‖^2=‖dϕ^m_θ(ω^u)‖^2+‖ω^c‖^2.
Then ‖ω^c‖ <ε and ‖dϕ^m_θ(ω^u)‖ <ε. Since the geodesic flow is Anosov,
C^-1λ^-m‖ω^u‖≤ ‖dϕ^m_θ(ω^u)‖<ε.
This implies that ‖ω^u‖ <ε Cλ^m. On the other hand, as ‖ω‖=1, then
‖ω^u‖ + ‖ω^s‖≥‖ω^u+ω^s‖=√(1-‖ω^c‖^2)>√(1-ε^2 ) .
Furthermore
L≥‖ω^s‖ > √(1-ε^2) - ε Cλ^m>0.
In particular, ω^s≠ 0. Denote by
E^cu(θ):=E^u(θ)⊕⟨ G(θ)⟩
and define the following linear map
P_θ: T_θ SM→ E^s(θ)
as the parallel projection onto E^s(θ) along E^cu(θ). Since the angle between the stable and unstable subspaces is uniformly away from 0 for every θ∈ SM, then there is δ≥ 1 such that
‖P_θ(ω)‖ ≤δ‖ω‖
for every θ∈ SM and ω∈ T_θ SM (see Theorem 3.1 in <cit.>). Then
‖dϕ^m_θ(ω^s)‖ =‖P_ϕ^m(θ)(dϕ^m_θ(ω))‖ ≤δ‖dϕ^m_θ(ω)‖ .
By (<ref>), if we choose ε>0 such that ‖ω^s‖≥ 1/2, we have that
‖dϕ^m_θ(ω)‖≥ (1/(2δ)) ‖dϕ^m_θ(ω^s)‖/‖ω^s‖.
If we consider τ_2= 1/(2δ), in all cases we have that for every ω∈ T_θ SM with ‖ω‖=1 there is ξ∈ E^s(θ), with ‖ξ‖=1, such that
‖dϕ^m_θ(ω)‖≥τ_2 ‖dϕ^m_θ(ξ)‖.
Since the infimum is always attained in a finite-dimensional space, the last inequality concludes the proof of the lemma.
Clearly ‖dϕ^m_θ‖^* ≤ ‖dϕ^m_θ‖ for every θ∈ SM. From Lemmas 4.1 and 4.2, we can obtain a positive constant, independent of θ, for which the reverse inequality holds up to that factor.
For m∈ℕ large enough, there is κ>1, depending on m, such that
‖dϕ^m_θ‖≤κ ‖dϕ^m_θ‖^*
for every θ∈ SM.
From Lemma 4.1 we have that
‖dϕ^m_θ‖≤τ_1 ‖dϕ^m_θ(η)‖
for some η∈ E^u(θ) with ‖η‖=1. Denote by J_η the Jacobi field associated to η. Since the geodesic flow is of Anosov type and dϕ^t_θ(η)=(J_η(t),J'_η(t)), we have from item 1 of Section 2.3 that
‖dϕ^m_θ‖≤τ_1 ‖dϕ^m_θ(η)‖=τ_1 √(‖J_η(m)‖^2 +‖J'_η(m)‖^2 )≤τ_1√(1+c^2) ‖J_η(m)‖ .
In the same way, by Lemma 4.2 we have that
‖dϕ^m_θ‖^*≥τ_2 ‖dϕ^m_θ(ξ)‖=τ_2√(1+ ‖J'_ξ(m)‖^2/‖J_ξ(m)‖^2 ) ‖J_ξ(m)‖ ,
for some ξ∈ E^s(θ) with ‖ξ‖=1, where J_ξ is the Jacobi field associated to ξ. Moreover,
√(1+c^2)≤√(1+c^2)√(1+ ‖J'_ξ(m)‖^2/‖J_ξ(m)‖^2 ).
Define the function
r: [0,+∞) →ℝ,
r(t)= (λ^-t ‖J_ξ(t)‖)/(λ^t ‖J_η(t)‖).
This function is well-defined because the stable and unstable Jacobi fields are never zero since the manifold has no conjugate points (see Section 2). We have that
r'(t)=r(t)(-2logλ + ⟨ J'_ξ(t),J_ξ(t) ⟩/⟨ J_ξ(t),J_ξ(t) ⟩ - ⟨ J'_η(t),J_η(t) ⟩/⟨ J_η(t),J_η(t) ⟩).
Also
A(t)=⟨ J'_ξ(t),J_ξ(t) ⟩/⟨ J_ξ(t),J_ξ(t) ⟩∈ [-c,c] and B(t)=⟨ J'_η(t),J_η(t) ⟩/⟨ J_η(t),J_η(t) ⟩∈ [-c,c].
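The inclusion A(t),B(t)∈ [-c,c] follows from the Cauchy-Schwarz inequality together with item 1 of Section 2.3, which applies for every t because E^s and E^u are dϕ^t-invariant; for the stable field, for instance,
|A(t)|=|⟨ J'_ξ(t),J_ξ(t)⟩|/‖J_ξ(t)‖^2 ≤ ‖J'_ξ(t)‖/‖J_ξ(t)‖ = ‖K_ϕ^t(θ)(dϕ^t_θ(ξ))‖/‖dπ_ϕ^t(θ)(dϕ^t_θ(ξ))‖ ≤ c,
and the same estimate gives |B(t)|≤ c for the unstable field J_η.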
Since the curvature is bounded below by -c^2, then λ≥ e^-c (see <cit.>). Therefore
-2logλ-2c≤ -2logλ +A(t) - B(t)≤ -2logλ + 2c.
This implies that
-2logλ - 2c≤ r'(t)/r(t) ≤ -2logλ + 2c
and
r(0)· e^(-2logλ-2c)t≤ r(t)≤ r(0)· e^(-2logλ+2c)t.
Therefore
r(0)^-1· e^(2logλ - 2c)t≤ 1/r(t) ≤ r(0)^-1· e^(2logλ+2c)t.
For t=m we have that
‖J_η(m)‖ ≤ r(0)^-1· e^(2logλ+2c)m·λ^-2m ‖J_ξ(m)‖ =r(0)^-1· e^2cm ‖J_ξ(m)‖ .
From (<ref>), (<ref>), (<ref>) and (<ref>)
‖dϕ^m_θ‖ ≤τ_1 √(1+c^2)‖J_η(m)‖
≤τ_1√(1+c^2)· r(0)^-1· e^2cm ‖J_ξ(m)‖
≤τ_1√(1+c^2)√(1+ ‖J'_ξ(m)‖^2/‖J_ξ(m)‖^2 )· r(0)^-1· e^2cm ‖J_ξ(m)‖ .
From item 1 of Section 2.3 we have that
1=‖ξ‖^2=‖dπ_θ(ξ)‖^2 + ‖K_θ(ξ)‖^2≤ (1+c^2) ‖dπ_θ(ξ)‖^2.
Since 1=‖η‖^2=‖dπ_θ(η)‖^2 + ‖K_θ(η)‖^2, the last inequality implies that
r(0)^-1=‖dπ_θ(η)‖/‖dπ_θ(ξ)‖ ≤ 1/‖dπ_θ(ξ)‖ ≤√(1+c^2).
Therefore, substituting in (<ref>) and using (<ref>),
‖dϕ^m_θ‖≤κ ‖dϕ^m_θ‖^*,
where κ=τ_1·τ_2^-1· (1+c^2)· e^2cm>1.
On the other hand, since the geodesic flow is of Anosov type, we have that the norm ‖dϕ^m_θ‖ is bounded between two positive constants.
For m∈ℕ large enough, there are constants K_1, K_2>0, K_1 depending on m, such that
K_2< ‖dϕ^m_θ‖ <K_1
for every θ∈ SM.
Fix θ∈ SM. Since the geodesic flow is of Anosov type, for η∈ E^u(θ) with ‖η‖=1 we have that
‖dϕ^m_θ‖≥ ‖dϕ^m_θ(η)‖≥ C^-1λ^-m>C^-1,
then K_2=1/C. On the other hand, from (<ref>) we have that
‖dϕ^1_θ‖ ≤ LCλ + L√(1+c^2)( ((1+c)/c) sinh c+Cλ√(1+c^2)) +1
≤ LCλ + L√(1+c^2)((1+c)/c) sinh c + LCλ(1+c^2)+1
≤ 2LCλ + LCλ c^2 +L√(1+c^2)((1+c)/c) sinh c + 1:=h(c).
Then, we can consider K_1=h(c)^m.
A direct consequence of Proposition 4.4 is the following result.
Given ε>0, there is β∈ (0,1), depending on m, such that
β ‖dϕ^m_θ̃‖< ‖dϕ^m_θ‖, ∀θ̃∈ SM with d(θ,θ̃)<ε,
for every θ∈ SM.
By Proposition 4.4 we have that
K_2/K_1< ‖dϕ^m_θ‖/‖dϕ^m_θ̃‖ <K_1/K_2.
Considering β=K_2/K_1=C^-1h(c)^-m, the conclusion of the corollary follows.
§ RUELLE'S INEQUALITY
In this section, we will prove Theorem 1.1. For this, we will adapt the idea of the proof of Ruelle's inequality for diffeomorphisms in the compact case exhibited in <cit.>.
Let M be a complete Riemannian manifold satisfying all the hypotheses of Theorem 1.1 and μ an ϕ^t-invariant probability measure on SM. By simplicity, we consider μ an ergodic ϕ^t-invariant probability measure on SM. In this case, we denote by {𝒳_i} the Lyapunov exponents and { k_i} their respective multiplicities. The proof in the non-ergodic case is a consequence of the ergodic decomposition of such a measure. We can also assume that ϕ=ϕ^1 is an ergodic transformation with respect to μ. If it is not the case, we can choose an ergodic-time τ for μ and prove the theorem for the map ϕ^τ. The proof of the theorem for the map ϕ^τ implies the proof for the map ϕ because the entropy of ϕ^τ and the Lyapunov exponents are τ-multiples of the respective values of ϕ.
Fix ε>0 and m∈ℕ large enough. There exists a compact set K⊂ SM such that μ(K)>1-ε.
Based on the results in Section 4, we present the following theorem, which constitutes a similar version to the inclusion (10.3) described in <cit.>.
Consider the constants κ>1 and 0<β<1 given by Proposition 4.3 and Corollary 4.5 respectively.
Let M be a complete Riemannian manifold without conjugate points and sectional curvature bounded below by -c^2, for some c>0. If the geodesic flow is of Anosov type, then for every θ∈ K there exists ϱ:=ϱ(K)∈ (0,1) such that
ϕ^m(exp_θ(B(0,βκ^-1ϱ)))⊆ exp_ϕ^m(θ)(dϕ^m_θ(B(0,ϱ))).
We will proceed by contradiction. Suppose that for every n∈ℕ, there are θ_n∈ K and v_n∈ T_θ_nSM with ‖v_n‖=βκ^-1/n such that
ϕ^m(exp_θ_n(v_n))=exp_ϕ^m(θ_n)(dϕ^m_θ_n(w_n)),
where ‖w_n‖ =1/n. Since K is compact and w_n→ 0, then ‖dϕ^m_θ_n(w_n)‖ is less than the injectivity radius of the
exponential map restricted to the compact set K, for n large enough by Proposition 4.4. Therefore
‖dϕ^m_θ_n(w_n)‖ =d(ϕ^m(θ_n),exp_ϕ^m(θ_n)(dϕ^m_θ_n(w_n)))
=d(ϕ^m(θ_n),ϕ^m(exp_θ_n(v_n)))
≤∫_0^1 ‖(ϕ^m∘ c_n)'(t)‖ dt ,
where c_n(t)=exp_θ_n(tv_n). Then
‖dϕ^m_θ_n(w_n)‖ ≤sup_t∈ [0,1]‖dϕ^m_c_n(t)‖∫_0^1 ‖c'_n(t)‖ dt
=sup_t∈ [0,1]‖dϕ^m_c_n(t)‖· ‖v_n‖ .
For n large enough, by Corollary 4.5 we have that
κβ^-1 ‖dϕ^m_θ_n(w_n)‖/‖w_n‖ = (‖w_n‖/‖v_n‖)· ‖dϕ^m_θ_n(w_n)‖/‖w_n‖
≤sup_t∈ [0,1]‖dϕ^m_c_n(t)‖
<β^-1‖dϕ^m_θ_n‖.
Therefore
κβ^-1‖dϕ^m_θ_n‖^*<β^-1‖dϕ^m_θ_n‖,
which contradicts Proposition 4.3.
Now, denote by ϱ_m=βκ^-1ϱ<1, where the constants β, κ and ϱ come from Theorem 5.1. Using the techniques of separate sets applied in <cit.> we define a finite partition 𝒫=𝒫_K∪{ SM∖ K} of SM in the following way:
. 𝒫_K is a partition of K such that for every X∈𝒫_K, there exist balls B(x,r') and B(x,r) such that the constants satisfy 0<r'<r<2r'≤ϱ_m2 and
B(x,r')⊂ X⊂ B(x,r).
. There exists a constant ζ>0 such that the cardinality of 𝒫_K, denoted by | 𝒫_K|, satisfies
| 𝒫_K|≤ζ·(ϱ_m)^-dim(SM).
. h_μ(ϕ^m,𝒫)≥ h_μ(ϕ^m)-ε.
By definition of entropy,
h_μ(ϕ^m,𝒫) =lim_k→ +∞ H_μ( . 𝒫|ϕ^m𝒫∨…∨ϕ^km𝒫)
≤ H_μ( . 𝒫| ϕ^m𝒫)
≤∑_D∈ϕ^m𝒫μ(D)·logcard{ X∈𝒫: X∩ D≠∅}.
Denote by φ=sup_θ∈ SMdϕ_θ>1. First, we estimate the number of elements X∈𝒫 that intersect a given element D∈ϕ^m𝒫.
There exists a constant L_1>0 such that if D∈ϕ^m𝒫 then
card{ X∈𝒫: X∩ D≠∅}≤ L_1·max{φ^m·dim(SM), (ϱ_m)^-dim(SM)} .
Consider D∈ϕ^m𝒫, then D=ϕ^m(X') for some X'∈𝒫.
Case I: X'∈𝒫_K.
By the mean value inequality
diam(D) =diam(ϕ^m(X'))
≤sup_θ∈ SMdϕ_θ^m·diam(X')
≤φ^m· 4r',
since X'⊂ B(x,2r'). If X∈𝒫_K satisfies X∩ D≠∅, then X is contained in a 4r'-neighborhood of D, denoted by W. Since φ^m>1 we have that
diam(W) ≤φ^m· 4r' + 8r'
=4r'·( φ^m + 2)
<12r'·φ^m.
Hence
∑_{ X∈𝒫_K:X∩ D≠∅}vol(X)≤vol(W)≤ A_1 · (r')^dim(SM)·φ^m·dim(SM),
where A_1>0. Since X∈𝒫_K contains a ball of radius r', the volume of X is bounded below by
A_2· (r')^dim(SM)≤vol(X),
where A_2>0. From (<ref>) and (<ref>) we have that
card{ X∈𝒫:X∩ D≠∅} ≤ (A_1/A_2)·φ^m·dim(SM) +1
≤(A_1/A_2+1) ·φ^m·dim(SM) .
Case II: X'=SM∖ K.
In this case, we have that
card{ X∈𝒫:X∩ D≠∅} ≤| 𝒫_K| +1
≤ (ζ+1)(ϱ_m)^-dim(SM).
Considering L_1=max{A_1/A_2+1, ζ+1} we obtain the desired result.
Now we will get a finer exponential bound for the number of those sets D∈ϕ^m𝒫_K that contain regular points. For this, let Λ_m be the set of regular points θ∈ SM which satisfy the following condition: for k≥ m and ξ∈ T_θ SM
e^k( 𝒳(θ,ξ)-ε) ‖ξ‖≤ ‖dϕ^k_θ (ξ)‖ ≤ e^k( 𝒳(θ,ξ)+ε) ‖ξ‖,
where 𝒳(θ,ξ)=lim_n→±∞ (1/n) log ‖dϕ^n_θ(ξ)‖.
If D∈ϕ^m𝒫_K has non-empty intersection with Λ_m, then there is a constant L_2>0 such that
card{ X∈𝒫: X∩ D≠∅}≤ L_2· e^mε∏_i:𝒳_i>0e^m(𝒳_i+ε)k_i.
Let X'∈𝒫_K such that ϕ^m(X')=D and suppose that X'∩Λ_m≠∅. Pick a point θ∈ X'∩Λ_m and consider the ball B=B(0,ϱ)⊂ T_θ SM. We claim that
X'⊆ exp_θ(B(0,ϱ_m)),
where exp_θ denotes the exponential map defined on the tangent plane T_θ SM. In fact, let z∈ X'. Since SM is complete with the Sasaki metric (see Lemma 2.1) we can choose w∈ T_θ SM such that γ(t)=exp_θ(tw), where γ is a geodesic with γ(0)=θ and γ(1)=exp_θ(w)=z. As diam 𝒫_K < ϱ_m then
d(θ,z)=l(γ)< ϱ_m.
Similar to the proof of Proposition 2.2, we obtain that
ϱ_m> ∫_0^1γ'(s) ds
= w .
Then w∈ B(0,ϱ_m) and hence
z=exp_θ(w)∈ exp_θ(B(0,ϱ_m)).
Since z∈ X' was arbitrary, the claim is proven. Therefore, from Theorem 5.1 we have that
D=ϕ^m(X')⊆ B_0:=exp_ϕ^m(θ)(B̃_0),
where B̃_0=dϕ^m_θ(B) is an ellipsoid. Since the curvature tensor and the derivative of the curvature tensor of M are both uniformly bounded, we have that the Sasaki sectional curvature of SM is uniformly bounded (see (<ref>)). This implies that the curvature tensor of SM is uniformly bounded. Applying Proposition 2.2 to SM, there exists t_0>0 such that
d (exp_ϕ^m(θ))_tv≤52
for every | t|≤ t_0 and v∈ T_ϕ^m(θ)SM with v =1. Then, for m large enough, we have that
diam (D) ≤ h(c)^m·diam (X')
≤ h(c)^m·ϱ_m
=(1/C)·(τ_2/τ_1)·(1/(1+c^2))· e^-2cm·ϱ
<t_0/2,
where h(c) is the expression that bounds the derivative of ϕ (see Proposition 4.4). Therefore, we can choose B_0 that satisfies D⊂ B_0 and diam(B_0)<t_0. We know that diam𝒫_K< ϱ_m<ϱ, then if X∈𝒫_K intersects D, it lies in the set
B_1={Ψ∈ SM: d(Ψ,B_0)<ϱ}.
Since X⊂ B(x,r) and 2r<ϱ_m<ϱ, then B(x,ϱ/2)⊂ B_1 and
card{ X∈𝒫_K: X∩ D≠∅}≤ b·vol(B_1)·ϱ^-(SM),
for some b>0, where vol(B_1) denotes the volume of B_1 induced by the Sasaki metric. Consider a subset B̃^*_0⊂B̃_0 such that exp_ϕ^m(θ) is a diffeomorphism between B̃^*_0 and B_0. Since
| d (exp_ϕ^m(θ))_v| ≤d (exp_ϕ^m(θ))_v ^(SM)
for every v∈B̃^*_0, from (<ref>) we have that
vol(B_0)≤( 52) ^(SM)·vol(B̃_0).
This implies that the volume of B_1 is bounded, up to a bounded factor, by the product of the lengths of the axes of the ellipsoid B̃_0. Those corresponding to non-positive Lyapunov exponents are at most sub-exponentially large. The remaining ones are of size at most e^m(𝒳_i+ε), up to a bounded factor, for all sufficiently large m. Thus
vol(B_1) ≤ A· e^mε·(diam(B))^dim(SM)∏_i:𝒳_i>0e^m(𝒳_i+ε)k_i
≤ A· e^mε·(2ϱ)^dim(SM)∏_i:𝒳_i>0e^m(𝒳_i+ε)k_i
=Ã· e^mε·ϱ^dim(SM)∏_i:𝒳_i>0e^m(𝒳_i+ε)k_i,
where Ã=A· 2^dim(SM), for some A>0. Then substituting in (<ref>) we have that
card{ X∈𝒫: X∩ D≠∅} ≤ b·vol(B_1)·ϱ^- (SM)+1
≤ b·Ã· e^mε∏_i:𝒳_i>0e^m(𝒳_i+ε)k_i+1
≤ (b·Ã+1)· e^mε∏_i:𝒳_i>0e^m(𝒳_i+ε)k_i.
Considering L_2=b·Ã+1 we obtain the desired result.
Proof of Theorem 1.1. We have that μ(SM∖ K)<ε. From (<ref>), Lemmas 5.2 and 5.3 we obtain
mh_μ(ϕ)-ε =h_μ(ϕ^m)-ε
≤ h_μ(ϕ^m,𝒫)
≤∑_D∈ϕ^m𝒫μ(D)·logcard{ X∈𝒫: X∩ D≠∅}
≤∑_D∈ϕ^m𝒫_K, D∩Λ_m=∅μ(D)·logcard{ X∈𝒫: X∩ D≠∅}
+ ∑_D∈ϕ^m𝒫_K, D∩Λ_m≠∅μ(D)·logcard{ X∈𝒫: X∩ D≠∅}
+μ(ϕ^m(SM∖ K))·logcard{ X∈𝒫: X∩ϕ^m(SM∖ K)≠∅}
≤∑_D∈ϕ^m𝒫_K, D∩Λ_m=∅μ(D)( log (L_1) + dim(SM)·max{ mlog(φ),-log(ϱ_m) })
+ ∑_D∈ϕ^m𝒫_K, D∩Λ_m≠∅μ(D)( log(L_2) +mε +m∑_i:𝒳_i>0(𝒳_i+ε)k_i)
+ μ(SM∖ K)·( log (L_1) + dim(SM)·max{ mlog(φ),-log(ϱ_m) })
≤( log (L_1) + dim(SM)·max{ mlog(φ) ,-log(ϱ_m) })·μ(SM∖Λ_m)
+ log(L_2) +mε +m∑_i:𝒳_i>0(𝒳_i+ε)k_i
+ ε·( log (L_1) + dim(SM)·max{ mlog(φ) ,-log(ϱ_m) }).
By Oseledec's Theorem we have that μ(SM∖Λ_m)→ 0 as m→∞. Moreover,
lim_m→ +∞ (1/m) log(ϱ_m) = -log(h(c))-2c,
where h(c) is the expression that bounds the derivative of ϕ (see Proposition 4.4). Then, dividing by m in (<ref>) and taking m→ +∞ we obtain
h_μ(ϕ)≤ε + ∑_i:𝒳_i>0(𝒳_i+ε)k_i +ε·dim(SM)·max{log(φ) ,log(h(c))+2c }.
Letting ε→ 0 we have
h_μ(ϕ)≤∑_i:𝒳_i>0𝒳_ik_i,
which is the desired upper bound. □
§ PESIN'S FORMULA
In this section, we aim to prove Theorem 1.2. To achieve this goal, we will use the techniques applied by Mañé in <cit.> which don't use the theory of stable manifolds. Adopting this strategy greatly simplifies our proof since we only need to corroborate that all the technical hypotheses used by Mañé continue to be satisfied under the condition of the geodesic flow being Anosov. To simplify notation, we write
𝒳^+(θ)=∑_𝒳_i(θ)>0𝒳_i(θ)·dim(H_i(θ)).
We start introducing some notations. Set g:SM→ SM a map and ρ:SM→ (0,1) a function. For θ∈ SM and n≥ 0, define
S_n(g,ρ,θ)={ω∈ SM: d(g^j(θ),g^j(ω))≤ρ(g^j(θ)), 0≤ j≤ n} .
If μ is a measure on SM and g and ρ are measurable, define
h_μ(g,ρ,θ)=lim sup_n→∞ -(1/n) logμ(S_n(g,ρ,θ)).
Let E be a normed space and E=E_1⊕ E_2 a splitting.
We say that a subset W⊂ E is a (E_1,E_2)-graph if there exists an open set U⊂ E_2 and a C^1-map ψ:U→ E_1 such that W={ (ψ (x),x): x∈ U}. The number
sup{ ‖ψ(x)-ψ(y)‖/‖x-y‖ : x,y∈ U, x≠ y }
is called the dispersion of W.
Let M be a complete Riemannian manifold and μ an ϕ^t-invariant probability measure on SM satisfying the assumptions of Theorem 1.2. Denote by ν the Lebesgue measure on SM. Since the geodesic flow is of Anosov type, consider
E^cs(θ)=⟨ G(θ)⟩⊕ E^s(θ)
for every θ∈ SM. From Theorem 3.1 there is a set Λ⊂ SM such that μ(SM∖Λ)=0 and the Lyapunov exponents of ϕ exist for every θ∈Λ. Fix any ε>0. By Egorov's and Oseledec's Theorems, there is a compact set K⊂Λ with μ(K)≥ 1-ε such that the splitting T_θ SM=E^cs(θ)⊕ E^u(θ) is continuous when θ varies in K and, for some N>0, there are constants α>β>1 such that, if g=ϕ^N, the inequalities
‖d g^n_θ(η)‖ ≥α^n ‖η‖
‖d g^n_θ|_E^cs(θ)‖ ≤β^n
log|det( d g^n_θ|_E^u(θ))| ≥ Nn(𝒳^+(θ)-ε)
hold for all θ∈ K, n≥ 0 and η∈ E^u(θ).
In the same way as in <cit.>, in the remainder of this section, we will treat SM as if it were an Euclidean space. The arguments we use can be formalized without any difficulty by the direct use of local coordinates. Since the geodesic flow is C^1-Hölder, we have the following result proved by Mañe in <cit.>.
For every σ>0 there is ξ>0 such that, if θ∈ K and g^m(θ)∈ K for some m>0, then if a set W⊂ SM is contained in the ball B_ξ^m(θ) and is a (E^cs(θ),E^u(θ))-graph with dispersion ≤σ, then g^m(W) is a (E^cs(g^m(θ)),E^u(g^m(θ)))-graph with dispersion ≤σ.
Fix the constant σ>0 of the statement of Lemma 6.1 small enough such that exists a∈ (0,1), a≤ t_0/2, where t_0 comes from Proposition 2.2 applied to SM, with the following property: if θ∈ K, ω∈ SM and d(θ,ω)<a, then for every subspace E⊂ T_ω SM which is a (E^cs(θ), E^u(θ))-graph with dispersion ≤σ we have
| log|det( d g_ω|_E)| - log|det( d g_θ|_E^u(θ))| | ≤ε.
We proved in Theorem 3.1 that the norm of the derivative of ϕ is bounded; we then denote
P=sup{log|det( dϕ_θ|_E)| : θ∈ SM, E⊂ T_θ SM a subspace}.
The following proposition is an adaptation of Mañe's result in <cit.> applied to the case of Anosov geodesic flow for non-compact manifolds. To ensure a comprehensive understanding of our arguments, we chose to include the full proof provided by Mañé.
For every small ε>0, there exist a function ρ:SM→ (0,1) with logρ∈ L^1(SM,μ), an integer N>0 and a compact set K'⊂ SM with μ(SM∖ K')≤ 2√(ε) such that
h_ν(ϕ^N,ρ,θ)≥ N( 𝒳^+(θ)-ε-ε/N-4P√(ε))
for every θ∈ K'.
For θ∈ K, define L(θ) as the minimum integer ≥ 1 such that g^L(θ)(θ)∈ K. This function is well defined for μ-almost every θ∈ K and it is integrable. Extend L to SM, putting L(θ)=0 when θ∉ K and at points of K that do not return to this set. Define ρ:SM→ (0,1) as
ρ(θ)=min{ a, ξ^L(θ)},
where a∈ (0,t_0/2) comes from property (<ref>) and ξ>0 comes from Lemma 6.1. Since L is integrable then clearly logρ is also integrable.
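The integrability of logρ can also be seen quantitatively (a one-line estimate, assuming as in Mañé's construction that ξ∈(0,1), and using Kac's lemma for the return time L):
∫_SM |logρ| dμ ≤ |log a| + |logξ|∫_K L dμ ≤ |log a| + |logξ| < ∞,
since Kac's lemma gives ∫_K L dμ = μ(⋃_n≥ 0 g^-nK) ≤ 1.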
On the other hand, by Birkhoff's ergodic theorem, the function
Ψ(θ)=lim_n→ +∞ (1/n) card{ 0≤ j< n: g^j(θ)∈Λ∖ K}
is defined for μ-almost every θ∈Λ. Then
ε≥μ(Λ∖ K) =∫_ΛΨ dμ
≥∫_{θ∈Λ: Ψ(θ)>√(ε)}Ψ dμ
>√(ε)·μ( {θ∈Λ: Ψ(θ)>√(ε)}).
Therefore,
μ( {θ∈Λ: Ψ(θ)≤√(ε)})≥ 1-√(ε).
By Egorov's Theorem, there exists a compact set K'⊂ K with μ(K')≥ 1-2√(ε) and N_0>0 such that, if n≥ N_0,
card{ 0≤ j< n: g^j(θ)∈Λ∖ K}≤ 2n√(ε)
for all θ∈ K'. Since the subspaces E^cs(θ) and E^u(θ) are not necessary orthogonal, there exists B>0 such that
ν(S_n(g,ρ,θ))≤ B∫_E^cs(θ)ν(( ω+E^u(θ)) ∩ S_n(g,ρ,θ) ) dν(ω)
for every θ∈ K' and n≥ 0, where ν also denotes the Lebesgue measure in the subspaces E^cs(θ) and ω+E^u(θ). For ω∈ E^cs(θ), denote by
Ω_n(ω)=(ω+E^u(θ)) ∩ S_n(g,ρ,θ).
Take D>0 such that D> vol(W) for every (E^cs(θ),E^u(θ))-graph W with dispersion ≤σ contained in B_ρ(θ)(θ), where θ∈ K' and ρ is the function defined in (<ref>). This constant exists because the domain of the graphs is contained in a ball of radius <1 and the derivatives of the functions defining the graphs are uniformly bounded in norm by σ.
If g^n(θ)∈ K' and ω∈ E^cs(θ), from Lemma 5 of <cit.> we have that g^n(Ω_n(ω)) is a (E^cs(g^n(θ)),E^u(g^n(θ)))-graph with dispersion ≤σ and
D>vol(g^n(Ω_n(ω)))=∫_Ω_n(ω)|det( dg^n_z|_T_z Ω_n(ω))| dν(z).
Fix any θ∈ K' and let S_n={ 0≤ j<n: g^j(θ)∈ K'}. If n≥ N_0, it follows from (<ref>), (<ref>) and (<ref>) that for ω∈ E^cs(θ) we have
log|det( dg^n_z|_T_zΩ_n(ω))| =∑_j=0^n-1 log|det( dg_g^j(z)|_T_g^j(z)g^j(Ω_n(ω)))|
≥∑_j∈ S_n log|det( dg_g^j(z)|_T_g^j(z)g^j(Ω_n(ω)))| -NP(n-card S_n)
≥∑_j∈ S_n log|det( dg_g^j(θ)|_E^u(g^j(θ)))| -ε n-NP(n-card S_n)
≥∑_j=0^n-1 log|det( dg_g^j(θ)|_E^u(g^j(θ)))| -ε n-2NP(n-card S_n)
=log|det( dg^n_θ|_E^u(θ))| -ε n-2NP(n-card S_n)
≥ nN(𝒳^+(θ)-ε)-ε n-2NP(n-card S_n)
≥ nN(𝒳^+(θ)-ε)-ε n-4NPn√(ε).
From (<ref>) we obtain that
D>ν(Ω_n(ω))·exp( nN(𝒳^+(θ)-ε)-ε n-4NPn√(ε))
for every θ∈ K' and ω∈ E^cs(θ). It follows from (<ref>) that
ν(S_n(g,ρ,θ))≤ B· D·exp( -nN(𝒳^+(θ)-ε)+ε n+4NPn√(ε)) .
Therefore, for every θ∈ K',
h_ν(g,ρ,θ)=lim sup_n→∞ -(1/n) logν(S_n(g,ρ,θ))≥ N( 𝒳^+(θ)-ε-ε/N-4P√(ε)).
This completes the proof of the proposition.
We will show that the function ρ of Proposition 6.2 allows us to find a lower bound for the entropy of ϕ^N. To prove this, Mañé constructed a partition of the manifold with certain properties using strongly the compactness condition (see Lemma 2 of <cit.>). Since the manifold SM is not necessarily compact in our case, we will use another technique to construct a partition that satisfies the same properties. Consider the constant a∈ (0,1), a<t_0/2, used in property (<ref>).
Let M be a complete Riemannian manifold and suppose that the curvature tensor and the derivative of the curvature tensor are both uniformly bounded. For every θ∈ SM we have that
diam exp_θ U≤ (5/2)·diam U,
where U⊂ B(0,a)⊂ T_θ SM.
Fix θ∈ SM and consider U⊂ B(0,a)⊂ T_θ SM. We need to prove that
d(exp_θ u, exp_θ v)≤ (5/2) ‖u-v‖
for every u,v∈ U. Consider the segment q(t)=tu + (1-t)v and the curve γ(t)=exp_θ q(t) that joins exp_θ u with exp_θ v. Then
l(γ) =∫_0^1γ'(t)
=∫_0^1 d(exp_θ)_q(t) (u-v) dt.
For each t∈ [0,1], there are w(t)∈ T_θ SM with w(t)=1 and s(t)∈ℝ with |s(t)|≤ t_0 such that
q(t)=s(t)w(t).
Since a≤ t_0/2, from Proposition 2.2 we have that
d(exp_θ)_q(t) (u-v) = d(exp_θ)_s(t)w(t) (u-v)
≤52u-v.
Therefore in (<ref>)
d(exp_θ u, exp_θ v)≤ l(γ)≤52u-v
completing the proof.
Consider the function ρ:SM→ (0,1) defined in (<ref>).
There exists a countable partition 𝒫 of SM with finite entropy such that, if 𝒫(θ) denotes the atom of 𝒫 containing θ, then
diam 𝒫(θ)≤ρ(θ)
for μ-almost every θ∈ SM.
For each n≥ 0, define
U_n={θ∈ SM: e^-(n+1)<ρ(θ)≤ e^-n} .
Since logρ∈ L^1(SM,μ), we have that
∑_n=0^∞ n μ(U_n)≤ -∑_n=0^∞ ∫_U_nlogρ(θ)dμ(θ)= -∫_SMlogρ(θ)dμ(θ)<∞.
Then, by Lemma 1 of <cit.> we obtain
-∑_n=0^∞μ(U_n)logμ(U_n)<∞.
For θ∈ SM∖ K' we have that ρ(θ)=a. Then there exists n_0≥ 0 such that
e^-(n_0+1)<a≤ e^-n_0
and U_n∩ (SM∖ K')=∅ for every n≠ n_0. This implies that U_n⊂ K' for every n≠ n_0. Define
U_n_0^*=U_n_0∩ K'.
Since K' is compact, there exist A>0 and r_0>0 such that for all 0<r≤ r_0, there exists a partition 𝒬_r of K' whose atoms have diameter less than or equal to r and such that the number of atoms in 𝒬_r, denoted by | 𝒬_r|, satisfies
| 𝒬_r|≤ A( 1r)^ (SM).
Define 𝒬 as the partition of K' given by
. Sets X∩ U_n, for n≥ 0, n≠ n_0, where X∈𝒬_r_n and r_n=e^-(n+1) such that μ(X∩ U_n)>0.
. Sets X∩ U_n_0^*, where X∈𝒬_r_n_0 and r_n_0=e^-(n_0+1) such that μ(X∩ U_n_0^*)>0.
On the other hand, consider 0<ε'<a/10 such that, we can choose a measurable set (like a “ring" covering SM∖ K')
V_1⊆{θ∈ SM∖ K': d(θ,K')≤ε'}:=E_1
that satisfies
μ(V_1)≤√(ε).
Define K'_1=K'∪ V_1 and choose a measurable set (like a “ring" covering SM∖ K'_1)
V_2⊆{θ∈ SM∖ K'_1: d(θ,K'_1)≤ε'}:=E_2
that satisfies
μ(V_2)≤√(ε)2.
Proceeding inductively, we define bounded measurable sets
V_n⊆{θ∈ SM∖ K'_n-1: d(θ,K'_n-1)≤ε'}:=E_n,
where K'_n-1=K'∪ V_1…∪ V_n-1, with measure
μ(V_n)≤√(ε)2^n-1.
Since
∑_n=1^∞ nμ(V_n)≤∑_n=1^∞n2^n-1·√(ε)<∞,
by Lemma 1 of <cit.> we have that
-∑_n=1^∞μ(V_n)logμ(V_n)<∞.
Let k be the number of balls of radius a/10 which cover E_1 and denote by B(θ_1,a/10),…,B(θ_k,a/10) this covering. We claim that
E_2⊆⋃_i=1^k B(θ_i,a/5).
In fact, suppose that exists θ∈ E_2 such that d(θ,θ_i)≥ a/5, for every i=1,…,k. By construction, there is ω∈ E_1 such that
d(θ,ω)≤ε'<a10.
Since we cover E_1 by balls, ω∈ B(θ_i_0,a/10) for some i_0∈{ 1,…,k}. Therefore,
d(θ,ω) ≥ d(θ,θ_i_0)-d(θ_i_0,ω)
> a5 -a10
=a10,
which is a contradiction with (<ref>). This proves the claim. Since SM is complete (see Lemma 2.1), for each i∈{1,…,k}, there is an open ball B^i(0,a/5)⊂ T_θ_iSM such that
exp_θ_i(B^i(0,a/5))=B(θ_i,a/5).
By <cit.> there exists N_1:=N_1(a)>0, which depends on the dimension of SM and a, such that the minimal number of balls of radius a/10 which can cover B^i(0,a/5) is bounded by N_1. Suppose that
B^i_1,…, B^i_N_1
are balls of radius a/10 that cover B^i(0,a/5). From Lemma 6.3, if we project these balls to the manifold SM by the exponential map we have that exp_θ_iB^i_j are sets of diameter
diam exp_θ_i B^i_j≤52diam B^i_j=52·2a10=a2.
Then we can cover E_2 by k N_1 sets of diameter ≤ a/2. Since every set of diameter ≤ a/2 is contained in a ball of radius a/2, we can cover E_2 by kN_1 balls B(ω_1,a/2),…, B(ω_kN_1,a/2). Analogously, since ε'<a/10 we have that
E_3⊂⋃_i=1^kN_1 B(ω_i,6a/10).
For each i∈{1,…,kN_1}, there is an open ball B^i(0,6a/10)⊂ T_ω_iSM such that
exp_ω_i(B^i(0,6a/10))=B(ω_i,6a/10).
By <cit.> there exists N_2:=N_2(a)>0, which depends on the dimension of SM and a, such that the minimal number of balls of radius a/10 which can cover B^i(0,6a/10) is bounded by N_2 and repeating the previous process we have that we can cover E_3 by kN_1N_2 balls of radius a/2. Continuing inductively, we obtain that E_n can be covered by kN_1N_2^n-2 balls of radius a/2. Therefore, for every n≥ 1, define a partition 𝒫̂_n of V_n whose atoms have diameter ≤ a and the number of atoms satisfies
| 𝒫̂_1 | ≤ k, |𝒫̂_n |≤ kN_1N_2^n-2, ∀ n≥ 2.
Finally, define the partition of SM as
𝒫=𝒬∪⋃_n≥ 1𝒫̂_n.
Recall the well-known inequality
-∑_i=1^m x_ilog x_i≤(∑_i=1^m x_i)( log m -log∑_i=1^m x_i),
which holds for any real numbers 0<x_i≤ 1, i=1,…,m (a short proof via Jensen's inequality is sketched after the following estimate). We claim that H(𝒫)<+∞. In fact, from (<ref>) and (<ref>) we obtain that
H(𝒫) =∑_n≥ 0, n≠ n_0( -∑_P∈𝒬,P⊂ U_nμ(P)logμ(P))+( -∑_P∈𝒬,P⊂ U_n_0^*μ(P)logμ(P))
+ ∑_n≥ 1( -∑_P∈𝒫̂_nμ(P)logμ(P))
≤∑_n≥ 0, n≠ n_0μ(U_n)[ log| 𝒬_r_n| - logμ(U_n) ] +μ(U_n_0^*)[ log | 𝒬_r_n_0 | - logμ(U_n_0^*) ]
+∑_n≥ 1,μ(V_n)[ log | 𝒫̂_n | - logμ(V_n) ]
≤∑_n≥ 0, n≠ n_0μ(U_n)[log A + (SM)(n+1) - logμ(U_n)]
+μ(U_n_0^*)[log A+ (SM)(n_0+1)- logμ(U_n_0^*)]+μ(V_1)[log k -logμ(V_1)]
+ ∑_n≥ 2μ(V_n)[log k + log N_1 + (n-2)log N_2- logμ(V_n)]
<∞.
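For completeness, the elementary inequality recalled before this estimate follows from Jensen's inequality applied to the concave function log (a standard argument): writing S=∑_i=1^m x_i,
-∑_i=1^m x_i log x_i = S∑_i=1^m (x_i/S) log(1/x_i) ≤ S log(∑_i=1^m (x_i/S)·(1/x_i)) = S(log m - log S).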
Moreover, if θ∈ U_n, for n≥ 0, n≠ n_0, then 𝒫(θ) is contained in an atom of 𝒬_r_n and
diam 𝒫(θ)≤ r_n= e^-(n+1)<ρ(θ).
If θ∈ U_n_0^*, then 𝒫(θ) is contained in an atom of 𝒬_r_n_0 and
diam 𝒫(θ)≤ r_n_0= e^-(n_0+1)<ρ(θ).
In another case, if θ∈ V_n, for n≥ 1, then 𝒫(θ) is contained in an atom of 𝒫̂_n and diam𝒫(θ)≤ a=ρ(θ).
Given that M has finite volume, it follows that SM also has finite volume. Lemma 6.4, together with the Radon-Nikodym Theorem and Shannon-McMillan-Breiman Theorem, allow us to obtain the following result proved in <cit.>.
If μ≪ν, where ν denotes the Lebesgue measure on SM, then
h_μ(ϕ^N)≥∫_SM h_ν(ϕ^N,ρ,θ)dμ(θ).
Proof of Theorem 1.2. We just need to prove that
h_μ(ϕ) ≥∫_SM𝒳^+(θ)dμ(θ).
Consider Υ=sup_θ∈ SM max{‖dϕ^1_θ‖, ‖dϕ^-1_θ‖}. Then
∫_SM∖K'𝒳^+(θ)dμ(θ) ≤μ(SM∖ K')·dim(SM)·logΥ
≤ 2√(ε)·dim(SM)·logΥ.
From Propositions 6.2 and 6.5 we have that
h_μ(ϕ^N) ≥∫_SMh_ν(ϕ^N,ρ,θ)dμ(θ)
≥∫_K' h_ν(ϕ^N,ρ,θ)dμ(θ)
≥ N∫_K'𝒳^+(θ)dμ(θ)-Nε-ε-4NP√(ε)
≥ N∫_SM𝒳^+(θ)dμ(θ)-2√(ε)N·dim(SM)·logΥ-Nε-ε-4NP√(ε).
Hence,
h_μ(ϕ)≥∫_SM𝒳^+(θ)dμ(θ)-2√(ε)·dim(SM)·logΥ- ε-ε/N-4P√(ε).
Letting ε→ 0 we obtain the desired lower bound.□
§.§ Acknowledgments
Alexander Cantoral thanks FAPERJ for partially supporting the research (Grant E-26/202.303/2022). Sergio Romaña thanks “Bolsa Jovem Cientista do Nosso Estado No. E-26/201.432/2022", NNSFC 12071202, and NNSFC 12161141002 from China. The second author thanks the Department of Mathematics of the SUSTech- China for its hospitality during the execution of this work.
entry_id: http://arxiv.org/abs/2409.03009v1
published: 20240904180432
title: Measurement of $CP$ violation in ${B^0}\rightarrow{D^{+}D^{-}}$ and ${B^{0}_{s}}\rightarrow{D^{+}_{s}D^{-}_{s}}$ decays
authors: LHCb collaboration, R. Aaij, et al. (author list truncated in the source record)
"D. Yang",
"K. Yang",
"S. Yang",
"X. Yang",
"Y. Yang",
"Z. Yang",
"Z. Yang",
"V. Yeroshenko",
"H. Yeung",
"H. Yin",
"C. Y. Yu",
"J. Yu",
"X. Yuan",
"Y Yuan",
"E. Zaffaroni",
"M. Zavertyaev",
"M. Zdybal",
"F. Zenesini",
"C. Zeng",
"M. Zeng",
"C. Zhang",
"D. Zhang",
"J. Zhang",
"L. Zhang",
"S. Zhang",
"S. Zhang",
"Y. Zhang",
"Y. Z. Zhang",
"Y. Zhao",
"A. Zharkova",
"A. Zhelezov",
"S. Z. Zheng",
"X. Z. Zheng",
"Y. Zheng",
"T. Zhou",
"X. Zhou",
"Y. Zhou",
"V. Zhovkovska",
"L. Z. Zhu",
"X. Zhu",
"X. Zhu",
"V. Zhukov",
"J. Zhuo",
"Q. Zou",
"D. Zuliani",
"G. Zunica"
] | hep-ex | [
"hep-ex"
] |
EUROPEAN ORGANIZATION FOR NUCLEAR RESEARCH (CERN)
CERN-EP-2024-217
LHCb-PAPER-2024-027
September 4, 2024
[Authors are listed at the end of this paper.]
§ ABSTRACT
A time-dependent, flavour-tagged measurement of violation is performed with
and decays, using data collected by the detector in proton-proton collisions at a centre-of-mass energy of 13 TeV, corresponding to an integrated luminosity of 6 fb^-1.
In decays the -violation parameters are measured to be
S_ = ,
C_ = -.
In decays the -violating parameter formulation in terms of and |λ| results in
= ,
|λ_| = -.
These results represent the most precise single measurement of the -violation parameters in their respective channels. For
the first time in a single measurement, symmetry is observed to be violated in decays with a significance exceeding six standard deviations.
Submitted to JHEP
§ INTRODUCTION
Measurements of violation in mesons play a crucial role in the search for physics beyond the Standard Model (SM).
With the increase in experimental precision, control over hadronic matrix elements becomes more important, which
is a major challenge in most decay modes.
In decays of beauty mesons to two charmed mesons, this can be achieved by employing U-spin flavour symmetry
and constraining the hadronic contributions by relating different -violation and branching fraction measurements <cit.>.
The system gives access to a variety of interesting observables
that probe elements of the Cabibbo–Kobayashi–Maskawa (CKM) quark-mixing matrix <cit.>.
In and decays, the -violating weak phases β and β_s can be measured, respectively.
The phases arise in the interference between the –(–) mixing and the tree-level decay amplitudes to the () final state, leading to time-dependent asymmetries.
The decays can also proceed through several other diagrams, as shown in <ref>. The asymmetries may arise from both SM contributions and new physics effects, if present.
In and decays, the same final state is accessible from both and states.
The partial decay rate as a function of the decay time t is given by
dΓ(t,d)/dt ∝ e^{-t/τ} [ cosh(ΔΓ t/2) + D_f sinh(ΔΓ t/2) + d C_f cos(Δm t) - d S_f sin(Δm t) ],
where ΔΓ = Γ_L - Γ_H and Δm = m_H - m_L are the decay-width difference and mass difference of the heavy and light mass eigenstates of the neutral B meson,
τ is its mean lifetime and the tag d represents the flavour at production, taking the value
+1 for a B meson and -1 for a B̄ meson.
The CP-violation parameters are defined as
D_f = -2|λ_f|cos(ϕ_λ)/(1+|λ_f|^2), C_f = (1-|λ_f|^2)/(1+|λ_f|^2), S_f = -2|λ_f|sin(ϕ_λ)/(1+|λ_f|^2),
with λ_f = (q/p)(A̅_f/A_f) and ϕ_λ = -arg(λ_f),
where A_f and A̅_f are the decay amplitudes of the B and B̄ mesons to the common final state f and the ratio q/p describes the mixing of the neutral B mesons.
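As a purely illustrative aid, the following standalone Python sketch evaluates S_f, C_f and D_f for an assumed value of λ_f using the conventions above; the numerical input is a placeholder and not a result of this analysis.

```python
import cmath
import math

def cp_observables(lam: complex):
    """Return (S_f, C_f, D_f) for lambda_f = (q/p)(Abar_f/A_f), with phi = -arg(lambda_f)."""
    r = abs(lam)
    phi = -cmath.phase(lam)
    norm = 1.0 + r * r
    c = (1.0 - r * r) / norm
    s = -2.0 * r * math.sin(phi) / norm
    d = -2.0 * r * math.cos(phi) / norm
    return s, c, d

# Placeholder: a tree-level-like lambda_f with |lambda_f| = 1 and arg(lambda_f) = -2*beta,
# so that S_f reproduces the -sin(2*beta) behaviour discussed below and C_f vanishes.
beta = 0.39  # radians, roughly the known value of the CKM angle beta
S, C, D = cp_observables(cmath.exp(-2j * beta))
print(f"S = {S:+.3f}, C = {C:+.3f}, D = {D:+.3f}")
```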
The parameter D_f cannot be measured in B^0→D^+D^- decays because, at the current experimental precision, ΔΓ_d is compatible with zero. Thus, the decay rates for B^0→D^+D^- can be simplified to
dΓ(t,d)/dt ∝ e^{-t/τ} ( 1 + d C_{D^+D^-} cos(Δm_d t) - d S_{D^+D^-} sin(Δm_d t) ).
If only tree-level contributions in B^0→D^+D^- decays are considered, direct CP violation vanishes, resulting in C_{D^+D^-} = 0 and
S_{D^+D^-} = -sin(ϕ_d) = -sin(2β).
This assumption is valid within the current experimental precision for decays, where β can be measured
with high precision as recently reported by <cit.>.
However, in B^0→D^+D^- measurements the loop-mediated penguin contributions shown in <ref> cannot be neglected and
an additional phase shift Δ is measured via sin(2β + Δ) = -S_{D^+D^-}/√(1 - C^2_{D^+D^-}).
This measurement enables higher-order corrections to the measurement of in decays to be constrained,
under the assumption of U-spin flavour symmetry.
Due to the similarities of the two decay channels, a parallel measurement of the -violation parameters in
and decays is performed.
Both decays have been previously studied by <cit.>, while measurements of
the parameters in decays have been performed by <cit.> and <cit.>.
The result lies outside the physically allowed region and shows a small tension with the other measurements.
This analysis uses proton-proton (pp) collision data collected by the experiment during the years 2015 to 2018, corresponding to an integrated luminosity of 6 fb^-1.
The candidates are reconstructed through the decays
and .[If not stated otherwise, charge-conjugated decays are implied.]
These decays have the highest branching fractions into charged kaons and pions.
Candidates where both mesons decay via are not considered due to the
smaller branching fraction of this mode.
Similarly, one of the mesons from the candidates is always reconstructed through the decay
and the other is reconstructed through the decays , or .
Both signal channels and a dedicated control channel are selected by similar criteria with only minor differences as described in <ref>.
A mass fit is performed separately for each final state to statistically subtract the remaining background as described in <ref>.
The knowledge of the initial flavour of the candidates is crucial for measurements of time-dependent asymmetries in neutral -meson decays.
In <ref> the algorithms used to determine the initial flavour of the mesons are described.
The decay-time fit to measure the -violation parameters is described in <ref> and the systematic uncertainties are discussed in <ref>.
In <ref> the results are presented from both this analysis and in combination with previous measurements.
§ DETECTOR AND SIMULATION
The detector <cit.> is a single-arm forward
spectrometer covering the range 2<η <5,
designed for the study of particles containing or quarks. The detector includes a high-precision tracking system
consisting of a silicon-strip vertex detector surrounding the pp
interaction region <cit.>, a large-area silicon-strip detector located
upstream of a dipole magnet with a bending power of about
4 T m, and three stations of silicon-strip detectors and straw
drift tubes <cit.> placed downstream of the magnet.
The tracking system provides a measurement of the momentum, p, of charged particles with
a relative uncertainty that varies from 0.5% at low momentum to 1.0% at 200 GeV/c.
The minimum distance of a track to a primary pp collision vertex (PV), the impact parameter (IP),
is measured with a resolution of (15+29/p_T) μm,
where p_T is the component of the momentum transverse to the beam, in GeV/c.
Different types of charged hadrons are distinguished using information
from two ring-imaging Cherenkov detectors <cit.>.
Photons, electrons and hadrons are identified by a calorimeter system consisting of
scintillating-pad and preshower detectors, an electromagnetic
and a hadronic calorimeter. Muons are identified by a
system composed of alternating layers of iron and multiwire
proportional chambers <cit.>.
Simulation is required to model the effects of the detector acceptance and the
imposed selection requirements.
Samples of signal decays are used to determine the parameterisation of the signal mass
distributions and decay-time resolution model.
In the simulation, pp collisions are generated using
<cit.>
with a specific configuration <cit.>.
Decays of unstable particles
are described by <cit.>, in which final-state
radiation is generated using <cit.>.
The interaction of the generated particles with the detector, and its response,
are implemented using the toolkit <cit.> as described in
Ref. <cit.>.
The underlying pp interaction is reused multiple times, with an independently generated signal decay for each <cit.>.
To account for differences between the distributions of particle identification (PID) variables in simulation and data,
the PIDCalib package <cit.> is used to reweight the distributions in the simulation.
§ SELECTION
The online event selection is performed by a trigger <cit.>,
which consists of a hardware stage based on information from the calorimeter and muon
systems, followed by a software stage which applies a full event
reconstruction.
At the hardware trigger stage, events are required to have a muon with high p_T or a
hadron, photon or electron with high transverse energy in the calorimeters.
The software trigger requires a two-, three- or four-track
secondary vertex with a significant displacement from any primary
pp interaction vertex. At least one charged particle
must have a transverse momentum p_T > 1.6 GeV/c and be
inconsistent with originating from a PV.
A multivariate algorithm <cit.> is used for
the identification of secondary vertices consistent with the decay
of a hadron.
In the offline selection, and candidates are reconstructed through their decays into the selected final-state particles, which are required to
satisfy loose selection criteria on their momentum, transverse momentum and PID variables, and be inconsistent with originating from any PV.
The and candidates should form vertices with a good fit quality and the scalar sum of transverse momenta of their three final-state particles should be greater than 1800.
All possible combinations of tracks forming a common vertex should have a distance of closest approach smaller than 0.5.
The candidates are reconstructed from two or candidates with opposite charges that form a good-quality vertex.
The momentum vector of the candidates should point from the PV to the secondary vertex.
The scalar sum of the transverse momenta of all six final-state particles is required to be greater than 5000.
The invariant masses of the and candidates are required to be within a window of ± 45 around their known values <cit.>.
This requirement, of about ±4 times the mass resolution, retains almost all candidates while separating the D^+ from the D_s^+ mass region.
To suppress single-charm decays of the form ^+^-^-, both candidates are required to
have a significant flight distance from the decay vertex.
In the reconstruction of the candidates, background contributions can arise from the misidentification of the final-state particles.
Misidentification from a pion, kaon or proton is considered.
The three-body invariant masses are recomputed to identify background decays from , and states.
The masses for potential two-body background contributions arising from intermediate and decays are similarly computed.
These background sources are suppressed by PID requirements within the mass windows of the known particle masses.
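The mass-hypothesis swaps described above amount to recomputing invariant masses with different assumed particle masses. A self-contained toy sketch (the track momenta and rounded masses below are placeholders, in MeV):

```python
import math

M_PI, M_K = 139.57, 493.68  # approximate charged pion and kaon masses, MeV/c^2

def four_vector(p3, mass):
    """(E, px, py, pz) of a track with 3-momentum p3 (MeV/c) under an assumed mass."""
    px, py, pz = p3
    return (math.sqrt(px * px + py * py + pz * pz + mass * mass), px, py, pz)

def invariant_mass(vectors):
    e, px, py, pz = (sum(v[i] for v in vectors) for i in range(4))
    return math.sqrt(max(e * e - px * px - py * py - pz * pz, 0.0))

# Two toy tracks from a candidate decay
t1, t2 = (1200.0, -300.0, 45000.0), (-800.0, 500.0, 38000.0)

m_nominal = invariant_mass([four_vector(t1, M_K), four_vector(t2, M_PI)])
m_swapped = invariant_mass([four_vector(t1, M_PI), four_vector(t2, M_K)])  # pi <-> K swap
print(f"m(K pi) = {m_nominal:.1f} MeV/c^2,  m(pi K) = {m_swapped:.1f} MeV/c^2")
```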
A particularly challenging background arises from the misidentification between and decays.
The ↔ misidentification shifts the mass region of the reconstructed candidates to that of the or vice versa.
In this case, a simple PID requirement does not provide the necessary rejection of
the particularly large background contribution from decays.
To distinguish between the two decays a boosted decision tree (BDT) algorithm is trained utilising the module from the package <cit.>.
Simulated and decays from the , and samples are used to train the BDT classifier.
A k-folding procedure with k=5 is used to avoid overtraining <cit.>.
Various two- and three-body invariant masses, recomputed with different final-state particle hypotheses, are used in the training.
Additionally, the flight distance of the candidates, and the PID variables of those particles that are potentially misidentified, are used.
The requirements on the BDT-classifier output are chosen to suppress the candidates in the channel and
candidates in the channel to negligible levels.
This is verified by applying the requirements to the simulated samples, which results in the rejection of more than 99% of the respective candidates.
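A minimal sketch of such a classifier with k-folding (k = 5), using scikit-learn gradient-boosted trees on toy inputs; the features and data below are placeholders rather than the variables used in the analysis:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)

# Toy dataset: rows are candidates, columns are discriminating variables
# (stand-ins for recomputed invariant masses, flight distance and PID inputs).
n = 5000
signal = rng.normal(loc=+0.5, scale=1.0, size=(n, 4))
background = rng.normal(loc=-0.5, scale=1.0, size=(n, 4))
X = np.vstack([signal, background])
y = np.concatenate([np.ones(n), np.zeros(n)])

# k-folding: each candidate is classified by a BDT that never saw it in training,
# which avoids overtraining biases when the response is used in the selection.
scores = np.empty(len(y))
for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=1).split(X):
    bdt = GradientBoostingClassifier(n_estimators=200, max_depth=3)
    bdt.fit(X[train_idx], y[train_idx])
    scores[test_idx] = bdt.predict_proba(X[test_idx])[:, 1]

# A requirement on the per-candidate score then defines the selection.
cut = 0.5
print(f"signal-like fraction above cut: {np.mean(scores[y == 1] > cut):.3f}")
```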
A second BDT classifier is trained to suppress combinatorial background.
As a signal proxy, all available simulated , and samples are used while
the background proxy is taken from the upper-mass sideband of the data, which is defined as m_>5600, beyond the -candidate mass fit region.
The variables used in the training are all transverse momenta of intermediate and final-state particles;
the flight distance and the difference in invariant mass from the known value <cit.> of the candidates;
the angle between the flight direction and each of the decay products;
the χ^2_IP of the B and D candidates,
which is the difference in the χ^2 of the PV fit with and without the particle being considered in the calculation.
Similar to the strategy used in Ref. <cit.>, the requirement on the BDT-classifier output is chosen to minimise the uncertainties on the -violation parameters.
The invariant mass used in the mass fits is computed from a kinematic fit to the decay chain
with constraints on all charm-meson masses to improve the invariant-mass resolution of the candidates <cit.>.
For calculation of the decay time, a constraint on the PV is used in the kinematic fit. To avoid correlations between the decay time and the invariant mass, no constraints on the charm-masses are used.
Contributions from partially reconstructed backgrounds are reduced to negligible levels by restricting the invariant mass of the candidates to lie within the range 5240–5540.
The decay-time range is chosen to be 0.3–10.3, where the lower boundary is set to reduce background originating from the PV.
For candidates the same decay-time range is chosen, but the invariant-mass range is 5300–5600.
After the selection, multiple candidates are found in about 1% of the events.
Usually, these candidates differ in just one track or PID assignment.
Since it is very unlikely to find two genuine candidates in one event, only one of the candidates is chosen arbitrarily.
§ MASS FIT
An extended unbinned maximum-likelihood fit to the invariant mass of the candidates is performed to
extract per-event weights via the technique <cit.>.
These weights are used in the decay-time fit to statistically subtract the background.
Pseudoexperiment studies indicate that any residual correlation between the decay time and the mass introduces no meaningful bias into the -violation measurement.
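For illustration, the sketch below derives per-event signal weights from a toy two-component mass model, following the general sPlot prescription; the yields, shapes and mass range are placeholders and not those of the analysis:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy mass spectrum: Gaussian signal on a flat background in [5240, 5540] (arbitrary units).
n_sig, n_bkg = 4000, 6000
mass = np.concatenate([
    rng.normal(5370.0, 8.0, n_sig),
    rng.uniform(5240.0, 5540.0, n_bkg),
])

def pdf_sig(m):
    return np.exp(-0.5 * ((m - 5370.0) / 8.0) ** 2) / (8.0 * np.sqrt(2 * np.pi))

def pdf_bkg(m):
    return np.full_like(m, 1.0 / 300.0)

# In a real application the yields come from the extended maximum-likelihood mass fit;
# here the true generated values are used for simplicity.
yields = np.array([n_sig, n_bkg], dtype=float)
f = np.vstack([pdf_sig(mass), pdf_bkg(mass)])   # per-event PDF values, shape (2, n_events)
denom = yields @ f                              # sum_k N_k f_k(m_e)

# sPlot: V = W^{-1} with W_ij = sum_e f_i f_j / denom^2, then
# w_sig(e) = sum_j V_{0j} f_j(e) / denom(e).
W = (f / denom) @ (f / denom).T
V = np.linalg.inv(W)
w_sig = (V[0] @ f) / denom

print(f"sum of signal sWeights: {w_sig.sum():.1f} (should be close to {n_sig})")
```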
The mass model in the channel consists of a signal component and
two background components to model decays and the combinatorial background.
A double-sided Hypatia probability density function (PDF) <cit.> is used to model the signal component.
The shape parameters are determined by a fit to simulated decays and fixed
in the fit to data, while the peak position and width of the distribution are allowed to vary.
The same model is used for the component with a shift of the peak position by the known
mass difference between the and mesons <cit.>.
An exponential PDF is used to model the combinatorial background.
The mass model in the channel consists only of a signal component and
a combinatorial background component, which are parameterised as in the fit.
Mass fits are performed separately for each final state.
Figures <ref> and <ref> show the results of the fits to all and final states, respectively.
The fits yield an overall number of 5 695 ± 100 and 13 313 ± 135 signal decays.
§ FLAVOUR TAGGING
For time-dependent violation measurements of neutral mesons, the flavour of the meson at production is required.
At the method used to determine the initial flavour is called flavour tagging.
These algorithms exploit the fact that in collisions, and quarks are almost exclusively produced in pairs.
When the quark forms a meson (and similarly the quark forms a meson), additional particles are produced in the fragmentation process.
From the charges and types of these particles, the flavour of the signal meson at production can be inferred.
The tagging algorithm that uses charged pions or protons from the fragmentation process of the quark that leads to the signal
is called the same-side (SS) tagger <cit.>.
In the case of signal mesons, charged kaons are used by the SS tagger <cit.>.
The opposite-side (OS) tagger uses information from electrons and muons from semileptonic decays, kaons from
the decay chain, secondary charm hadrons and the charges of tracks from the secondary vertex of the
other -hadron decay <cit.>.
Each algorithm i provides individual tag decisions, d_i, and a predicted mistag, η_i, which is an estimate of the probability
that the tag decision is wrong.
The tag decision takes the values -1 for a B̄ meson, +1 for a B meson and 0 if no tag decision can be made.
The predicted mistag ranges from 0 to 0.5 and takes the value of 0.5 for untagged events.
Each predicted mistag distribution is given by the output of a BDT that is trained on flavour-specific decays <cit.> and
has to be calibrated to represent the mistag probability, _i(η_i), in the signal decay.
Flavour-specific control channels with kinematics similar to the signal are used to obtain a calibration curve.
This is found to be well-described by a linear function.
Following calibration, the individual taggers are combined separately for OS and SS cases, and the resulting mistag distributions are recalibrated.
These calibrations are used in the decay-time fit to determine the -violation parameters to which the uncertainties on the calibration parameters are propagated through means of a Gaussian constraint.
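A generic sketch of how two calibrated taggers can be combined into a single decision and mistag estimate, using the per-event probability product commonly employed for this purpose; the sign convention and numbers are illustrative only:

```python
def combine_taggers(decisions, omegas):
    """Combine per-tagger decisions d_i (+1, -1 or 0) and calibrated mistags omega_i.

    Returns (d, eta): the combined tag decision and combined mistag probability.
    Untagged inputs (d_i = 0) carry no information and are skipped.
    Convention here: d_i = +1 means the tagger favours a B, -1 a Bbar.
    """
    p_b, p_bbar = 1.0, 1.0
    for d, w in zip(decisions, omegas):
        if d == 0:
            continue
        p_b *= 0.5 * (1.0 + d) - d * w       # probability the meson was a B
        p_bbar *= 0.5 * (1.0 - d) + d * w    # probability it was a Bbar
    if p_b == p_bbar:
        return 0, 0.5
    norm = p_b + p_bbar
    if p_b > p_bbar:
        return +1, 1.0 - p_b / norm
    return -1, 1.0 - p_bbar / norm

# Example: OS tagger says B with omega = 0.35, SS tagger says Bbar with omega = 0.42
print(combine_taggers([+1, -1], [0.35, 0.42]))
```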
To calibrate the SS and OS taggers of the channel, as well as the OS tagger of the
channel, decays are used.
These have very similar kinematics to the signal decays and the selection is very similar, as described in <ref>.
The SS kaon tagger used for decays is calibrated with the
channel.
A reweighting process is applied to ensure the calibration sample matches the distributions of the signal channel in the transverse momentum of the meson, the pseudorapidity, the number of tracks and the number of PVs.
Additionally, the compatibility of the calibration between and decays is verified by comparing the calibration parameters
determined using simulation.
The performance of the tagging algorithms is measured by the tagging power ϵ_tagD^2, where ϵ_tag
is the fraction of tagged candidates and D=1-2ω is the dilution factor introduced by the mistag probability, ω.
The tagging power is a statistical dilution factor due to imperfect tagging, equivalent to an efficiency with respect to a sample with perfect tagging.
Overall tagging powers of (6.28±0.11)% in and (5.60±0.07)% in decays are achieved.
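The tagging power can be estimated directly from the calibrated mistag probabilities of the tagged candidates, as in this toy sketch (all numbers are placeholders):

```python
import numpy as np

def tagging_power(omegas_tagged, n_total):
    """eps_tag * D^2, with dilution D = 1 - 2*omega, summed over tagged candidates."""
    d2 = (1.0 - 2.0 * np.asarray(omegas_tagged)) ** 2
    return d2.sum() / n_total

# 700 tagged candidates out of 1000, with placeholder mistags between 0.30 and 0.46
omegas = np.random.default_rng(3).uniform(0.30, 0.46, 700)
print(f"tagging power = {100 * tagging_power(omegas, 1000):.2f}%")
```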
§ DECAY-TIME FIT
An unbinned maximum-likelihood fit to the signal-weighted decay-time distribution is performed to determine the -violation parameters.
In order to avoid experimenter bias, the values of the -violation parameters were
not examined until the full procedure had been finalised.
The measured decay-time distribution of the candidates given the tag decisions d⃗ = (d_OS, d_SS) and predicted mistags η⃗ = (η_OS, η_SS) is described by the PDF
𝒫(t,d⃗ | η⃗) = ϵ(t) ·( ℬ(t',d⃗ | η⃗) ⊗ℛ(t-t') ) ,
where ℬ(t',d⃗ | η⃗) describes the distribution of the true decay time t', which is convolved with the decay-time resolution function
ℛ(t-t'), and the acceptance function ϵ(t) describes the total efficiency as a function of the reconstructed decay time.
The PDF describing the decay-time distribution can be deduced from <ref> and takes the general form
ℬ(t',d⃗ | η⃗) ∝ e^{-t'/τ} ( C^eff_cosh(d⃗ | η⃗) cosh(ΔΓ t'/2)
+ C^eff_sinh(d⃗ | η⃗) sinh(ΔΓ t'/2)
- C^eff_cos(d⃗ | η⃗) cos(Δm t')
+ C^eff_sin(d⃗ | η⃗) sin(Δm t') ).
The effective coefficients are given by
C^eff_cosh = Σ(d⃗ | η⃗) + A_prod Δ(d⃗ | η⃗),  C^eff_cos = C_f (Δ(d⃗ | η⃗) + A_prod Σ(d⃗ | η⃗)),
C^eff_sinh = D_f (Σ(d⃗ | η⃗) + A_prod Δ(d⃗ | η⃗)),  C^eff_sin = S_f (Δ(d⃗ | η⃗) + A_prod Σ(d⃗ | η⃗)),
where the production asymmetry A_prod = (N_B - N_B̄)/(N_B + N_B̄) represents the difference
in the production rates of B and B̄ mesons. The functions
Σ(d⃗,η⃗) = P(d⃗,η⃗ | B) + P(d⃗,η⃗ | B̄) and
Δ(d⃗,η⃗) = P(d⃗,η⃗ | B) - P(d⃗,η⃗ | B̄)
are dependent on the tagging calibration parameters, where P(d⃗,η⃗ | B) and P(d⃗,η⃗ | B̄) are the probabilities of observing the tagging decisions d⃗ and the predicted mistags η⃗, given the true flavour B or B̄, respectively.
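As an illustration of how these terms enter the fit, the toy sketch below evaluates the decay-time distribution for one set of effective coefficients and convolves it numerically with a Gaussian resolution; every parameter value is a placeholder, not a fit result:

```python
import numpy as np

def raw_pdf(t, tau, dm, dgamma, c_cosh, c_sinh, c_cos, c_sin):
    """Decay-time distribution before resolution and acceptance (t in ps)."""
    return np.exp(-t / tau) * (
        c_cosh * np.cosh(0.5 * dgamma * t)
        + c_sinh * np.sinh(0.5 * dgamma * t)
        - c_cos * np.cos(dm * t)
        + c_sin * np.sin(dm * t)
    )

# Placeholder parameters, roughly Bs-like: tau ~ 1.52 ps, dm ~ 17.77 ps^-1
tau, dm, dgamma = 1.52, 17.77, 0.08
t = np.linspace(0.0, 10.0, 4000)
pdf = raw_pdf(t, tau, dm, dgamma, c_cosh=1.0, c_sinh=-0.3, c_cos=0.1, c_sin=0.7)

# Numerical convolution with a Gaussian resolution of width sigma_t (placeholder)
sigma_t = 0.045
dt = t[1] - t[0]
kernel_t = np.arange(-5 * sigma_t, 5 * sigma_t + dt, dt)
kernel = np.exp(-0.5 * (kernel_t / sigma_t) ** 2)
kernel /= kernel.sum()
smeared = np.convolve(pdf, kernel, mode="same")
smeared /= smeared.sum() * dt  # normalise over the grid
print(f"PDF integral after normalisation: {smeared.sum() * dt:.3f}")
```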
§.§ B^0→D^+D^- decays
The decay-time fit of B^0→D^+D^- decays is insensitive to C^eff_sinh under the assumption that ΔΓ_d is zero.
Moreover, due to the long oscillation period of the mesons, the decay-time resolution of around 52 has a very small impact on the -violation parameters.
The decay-time resolution model consists of three Gaussian functions that have a common mean and different widths. The parameters of the model are determined from simulation and fixed in the fit to data.
The selection and reconstruction efficiency depends on the decay time due to displacement requirements made on the final-state particles and
a decrease in the reconstruction efficiency for tracks with large impact parameter with respect to the beamline <cit.>.
The decay-time dependent efficiency is modeled by cubic-spline functions <cit.> with five knots at (0.3, 0.5, 2.7, 6.3, 10.3), whose positions were determined using simulation.
The spline coefficients are free to vary in the fit.
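A cubic-spline acceptance with the knot positions quoted above and freely varied coefficients can be sketched as follows; scipy's CubicSpline stands in for the basis-spline parameterisation of the fit, and the coefficient values are placeholders:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Knot positions from the text (decay-time units assumed to be ps) and
# illustrative acceptance coefficients; in the fit the coefficients are free parameters.
knots = np.array([0.3, 0.5, 2.7, 6.3, 10.3])
coeffs = np.array([0.4, 0.8, 1.0, 1.05, 1.0])

acceptance = CubicSpline(knots, coeffs)

t = np.linspace(0.3, 10.3, 11)
print(np.round(acceptance(t), 3))  # efficiency versus decay time, up to normalisation
```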
Gaussian constraints are used to account for the uncertainties on the tagging calibration parameters,
the lifetime, the oscillation frequency, , and the production asymmetry.
The world-average values are used for the external parameters <cit.>, while the production asymmetry is taken from
a similar time-dependent analysis of decays <cit.>.
The tagging efficiencies are free to vary in the decay-time fit.
Figure <ref> (left) shows the results of the decay-time fit for this channel.
§.§ B_s^0→D_s^+D_s^- decays
In the decay-time fit of B_s^0→D_s^+D_s^- decays, the hyperbolic terms of <ref> can be measured provided that ΔΓ_s is not zero. Moreover,
the definitions from <ref> are used to directly determine the parameters and |λ|.
The acceptance function, the tagging parameters and external parameters are treated in the same way as for the decays.
In addition to the lifetime and the oscillation frequency, , the decay-width difference is constrained in the fit to the world-average value <cit.>.
The value of the production asymmetry is taken from the control channel as described in Ref. <cit.>.
Due to the high oscillation frequency of the meson, the decay-time resolution plays an important role. A per-event decay-time resolution is determined based on the per-event decay-time uncertainty estimated from the vertex fit, which is calibrated using a sample of candidates, with (), and additional requirements imposed to suppress candidates produced in decays to negligible levels.
The measured decay time of the remaining candidates, which originate from the PV, is consistent with zero, and their distribution is used to assess resolution and bias effects.
A linear fit to the measured and predicted decay-time resolution is performed.
A scale factor is then applied to translate the resulting calibration to the signal mode. It is determined by comparing the decay-time resolution of and decays in simulation.
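The linear calibration of measured versus predicted decay-time resolution can be illustrated with a straight-line fit, as in this toy sketch; the widths below are invented, whereas in the analysis they come from the prompt-candidate sample described above:

```python
import numpy as np

rng = np.random.default_rng(7)
predicted = np.linspace(0.020, 0.080, 10)                      # per-event sigma_t estimates, ps
measured = 0.010 + 1.2 * predicted + rng.normal(0, 0.002, 10)  # toy "true" widths, ps

# Linear calibration: sigma_eff = p0 + p1 * sigma_predicted
p1, p0 = np.polyfit(predicted, measured, 1)
print(f"sigma_eff = {p0:.4f} + {p1:.3f} * sigma_predicted")
```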
Figure <ref> (right) shows the results of the decay-time fit for this channel.
The decay-time-dependent asymmetry and the projection of the PDF are shown in <ref> for (left) and (right) decays. The asymmetry in each decay-time bin is given by A = -(∑_j w_j d_j D_j)/(∑_j w_j D_j^2), with the tagging decision d_j, the tagging dilution D_j and the signal weight w_j obtained by the method <cit.>, for each candidate j.
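The binned asymmetry defined above can be computed directly from per-candidate quantities; a toy sketch with randomly generated placeholder tags, dilutions and signal weights:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 20000
t = rng.exponential(1.5, n)                 # toy decay times in ps
d = rng.choice([-1, +1], n)                 # tag decisions
D = 1.0 - 2.0 * rng.uniform(0.30, 0.46, n)  # dilutions from calibrated mistags
w = rng.uniform(0.2, 1.0, n)                # toy signal sWeights

bins = np.linspace(0.3, 10.3, 11)
idx = np.digitize(t, bins) - 1
asym = np.full(len(bins) - 1, np.nan)
for b in range(len(bins) - 1):
    sel = idx == b
    num = np.sum(w[sel] * d[sel] * D[sel])
    den = np.sum(w[sel] * D[sel] ** 2)
    if den > 0:
        asym[b] = -num / den
print(np.round(asym, 3))
```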
§ SYSTEMATIC UNCERTAINTIES AND CROSS-CHECKS
A variety of cross-checks are performed and potential sources of systematic uncertainties are considered.
The decay-time fit is performed on a simulated sample using the same strategy for
the tagging calibration as for the fit to data.
A second fit is performed where instead of the reconstructed tagging, the truth information of the initial flavour
of the mesons is used.
Both results of the -violation parameters agree with the generated values.
The decay-time fit is performed on several subsets of the data to test the consistency of the results.
The data subdivision is done according to the final state, magnet polarity, years of data taking and tagging information (OS only or SS only). Consistent results are found in all cases.
A bootstrapping procedure <cit.> is used to cross-check the statistical uncertainty from the decay-time fit to data.
A data set is created by randomly drawing candidates from the original sample until a certain number of candidates is reached that itself is drawn from a Poisson distribution with the expected number of candidates matching the original data sample.
This entails that the same candidate can be drawn multiple times.
The mass and decay-time fits are performed on this data set to first statistically subtract the background and then determine the -violation parameters.
The residual of the fit result with respect to the baseline fit is stored and the whole procedure is repeated until the distribution of the residuals is not significantly affected by statistical fluctuations.
The statistical uncertainties from the fits to data are shown to be accurate as they are consistent with the standard deviations of the residuals, and the correlation coefficients lie within expectations.
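The resampling step of this procedure can be sketched generically as follows (Poisson-fluctuated sampling with replacement; the subsequent mass and decay-time fits are not reproduced here):

```python
import numpy as np

def bootstrap_samples(data, n_replicas, seed=0):
    """Yield bootstrap replicas: a Poisson-fluctuated number of candidates drawn
    with replacement from the rows of `data`."""
    rng = np.random.default_rng(seed)
    n = len(data)
    for _ in range(n_replicas):
        size = rng.poisson(n)
        yield data[rng.integers(0, n, size)]

# Toy "dataset" of per-candidate (mass, decay time) pairs
data = np.column_stack([
    np.random.default_rng(5).normal(5280.0, 20.0, 1000),
    np.random.default_rng(6).exponential(1.5, 1000),
])
for replica in bootstrap_samples(data, n_replicas=3):
    print(len(replica), "candidates in this replica")
```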
A decay-time fit with a different set of knots for the acceptance function is performed.
The difference in the results with respect to the baseline fit is assigned as a systematic uncertainty.
To test the fit strategy, pseudoexperiments are performed.
In each pseudoexperiment, the mass and decay time are generated using the results of the baseline fit to data.
The background contributions are generated with a specific time dependence, assuming symmetry for the background.
Similar to the bootstrapping procedure, the baseline fitting procedure is performed on the pseudoexperiments and the residuals are collected.
For decays, the mean values of the results are found to be consistent with the input values within the statistical uncertainties, while the fits to the pseudoexperiments show a small bias of -0.002 in and 0.008 in |λ_|.
This is of the order of a few percent of the statistical uncertainty and is subtracted from the biases found in the following studies.
The following systematic uncertainties are determined using the same procedure, with the only difference being that an alternative model is used to generate pseudoexperiments in each case.
A bias in the distribution of the residuals is assigned as a systematic uncertainty.
The sum of two Crystal Ball functions <cit.>, with parameters obtained from a fit to simulation, is used in the pseudoexperiments to test the choice of the signal mass model.
Since ΔΓ_d is fixed to zero in the decay-time fit of B^0→D^+D^- decays, a systematic uncertainty is assigned for this assumption.
The value of ΔΓ_d is varied in the pseudoexperiments from the assumed value of zero by ±1σ, where σ is the uncertainty of the world average value of ΔΓ_d <cit.>.
The value of D_ is calculated from the normalisation condition D_ = ±√(1 - S_^2 - C_^2) and the largest deviation is assigned as the systematic uncertainty.
In the channel the decay-time-resolution model is determined on simulation.
Due to differences between simulation and data the resolution could be underestimated.
The effect of underestimating the resolution is tested by increasing the width of the resolution function by 10% in the pseudoexperiments, which corresponds to the level measured in the system. It is found to be small and no further studies are considered.
In the channel, candidates originating from the PV are used to determine a per-event resolution calibration.
Only () decays are used and assumed to represent the resolution of the whole sample.
A second calibration is obtained using a sample of decays without specific requirements on the intermediate decays and used in the pseudoexperiments to assign a systematic uncertainty.
A decay-time bias caused by the misalignment of the vertex detector was observed in other analyses of data taken during the same period <cit.> and confirmed in the present analysis.
Due to the low oscillation frequency of mesons, this has a negligible effect on the measurement of the -violation parameters, as shown in Ref. <cit.> and so is not evaluated here.
However, in decays, this bias could have a significant impact on the measurement.
To evaluate the effect, the mean of the resolution function in the generation of the pseudoexperiments is set to the largest observed bias.
The individual systematic uncertainties on the -violation parameters are reported in <ref> and summed in quadrature.
§ RESULTS AND INTERPRETATION
A flavour-tagged time-dependent analysis of and decays is performed
using proton-proton collision data collected by the experiment during the years 2015 to 2018, corresponding to an integrated luminosity of 6 fb^-1.
Approximately 5 700 signal candidates are observed.
A fit to their decay-time distribution, including evaluation of systematic uncertainties, gives the final results
S_ = ,
C_ = - ,
with a statistical correlation between the two parameters of ρ(S_, C_) = 0.472.
The results and correlations of the external parameters from the decay-time fit are presented in <ref>.
Wilks' theorem <cit.> is used to determine the significance of the result, excluding systematic uncertainties.
The hypothesis of symmetry, corresponding to S_ = C_ = 0, can be rejected by more than six standard deviations.
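Under Wilks' theorem, the quoted significance corresponds to converting the likelihood-ratio test statistic for the two-parameter null hypothesis S = C = 0 into a p-value with a χ² distribution for two degrees of freedom and then into Gaussian standard deviations; the test-statistic value used below is illustrative only, not the value obtained in the fit:

```python
from scipy.stats import chi2, norm

# Illustrative value of the test statistic 2*[ln L(best fit) - ln L(S = C = 0)]
q0 = 45.0
p_value = chi2.sf(q0, df=2)        # two parameters fixed under the null hypothesis
significance = norm.isf(p_value)   # one-sided conversion to standard deviations
print(f"p = {p_value:.2e}  ->  {significance:.1f} sigma")
```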
The values are consistent with previous results from and <cit.>, which
correspond to a small contribution from higher-order SM corrections.
Thus, this measurement will move the world average further away from the measurement,
which lies outside the physical region <cit.>.
The result is combined with the previous measurement in this channel <cit.>.
Due to the small effect of the external parameters on the result, the two measurements are
assumed to be uncorrelated and the combined values are
S_ = ,
C_ = -,
with a statistical correlation between the two parameters of ρ(S_, C_) = 0.474.
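Such a combination of two statistically independent measurements of the pair (S, C), each with its own covariance matrix, is an inverse-covariance (least-squares) average; the central values, uncertainties and correlations below are placeholders, not the measured ones:

```python
import numpy as np

def combine(values, covariances):
    """Least-squares combination of independent measurements of the same vector."""
    weights = [np.linalg.inv(c) for c in covariances]
    total_cov = np.linalg.inv(sum(weights))
    mean = total_cov @ sum(w @ v for w, v in zip(weights, values))
    return mean, total_cov

def cov(sig_s, sig_c, rho):
    return np.array([[sig_s**2, rho * sig_s * sig_c],
                     [rho * sig_s * sig_c, sig_c**2]])

# Placeholder inputs: (S, C) central values with uncertainties and correlation
m1, c1 = np.array([-0.55, 0.25]), cov(0.10, 0.08, 0.47)
m2, c2 = np.array([-0.60, 0.15]), cov(0.17, 0.15, 0.40)
mean, covariance = combine([m1, m2], [c1, c2])
corr = covariance[0, 1] / np.sqrt(covariance[0, 0] * covariance[1, 1])
print(mean.round(3), corr.round(3))
```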
Approximately 13 000 signal candidates are observed and the final results of the
decay-time fit and the systematic uncertainties are
= ,
|λ_| = - ,
with a statistical correlation between the two parameters of ρ(, |λ_|) = -0.007.
Further information on the results of the decay-time fit is shown in <ref>.
This result is consistent with, and more precise than, the previous measurement <cit.>.
The combination with the previous measurement, following the same strategy as for the decays, yields the values
= ,
|λ_| = - ,
with a statistical correlation between the two parameters of ρ(, |λ_|) = 0.005.
The values are consistent with symmetry in the channel.
These results can be used in combination with other measurements to perform a global analysis
and extract SM parameters as has previously been performed in Ref. <cit.>.
They represent the most precise single measurements of the -violation parameters in their respective channels and the combined results supersede the previous measurements.
For the first time, symmetry can be excluded by more than six standard deviations in a single measurement of decays.
§ ACKNOWLEDGEMENTS
We express our gratitude to our colleagues in the CERN
accelerator departments for the excellent performance of the LHC. We
thank the technical and administrative staff at the LHCb
institutes.
We acknowledge support from CERN and from the national agencies:
CAPES, CNPq, FAPERJ and FINEP (Brazil);
MOST and NSFC (China);
CNRS/IN2P3 (France);
BMBF, DFG and MPG (Germany);
INFN (Italy);
NWO (Netherlands);
MNiSW and NCN (Poland);
MCID/IFA (Romania);
MICIU and AEI (Spain);
SNSF and SER (Switzerland);
NASU (Ukraine);
STFC (United Kingdom);
DOE NP and NSF (USA).
We acknowledge the computing resources that are provided by CERN, IN2P3
(France), KIT and DESY (Germany), INFN (Italy), SURF (Netherlands),
PIC (Spain), GridPP (United Kingdom),
CSCS (Switzerland), IFIN-HH (Romania), CBPF (Brazil),
and Polish WLCG (Poland).
We are indebted to the communities behind the multiple open-source
software packages on which we depend.
Individual groups or members have received support from
ARC and ARDC (Australia);
Key Research Program of Frontier Sciences of CAS, CAS PIFI, CAS CCEPP,
Fundamental Research Funds for the Central Universities,
and Sci. & Tech. Program of Guangzhou (China);
Minciencias (Colombia);
EPLANET, Marie Skłodowska-Curie Actions, ERC and NextGenerationEU (European Union);
A*MIDEX, ANR, IPhU and Labex P2IO, and Région Auvergne-Rhône-Alpes (France);
AvH Foundation (Germany);
ICSC (Italy);
Severo Ochoa and María de Maeztu Units of Excellence, GVA, XuntaGal, GENCAT, InTalent-Inditex and Prog. Atracción Talento CM (Spain);
SRC (Sweden);
the Leverhulme Trust, the Royal Society
and UKRI (United Kingdom).
§ APPENDICES
§ RESULTS AND CORRELATIONS OF EXTERNAL PARAMETERS
LHCb collaboration
R. Aaij^370000-0003-0533-1952,
A.S.W. Abdelmotteleb^560000-0001-7905-0542,
C. Abellan Beteta^50,
F. Abudinén^560000-0002-6737-3528,
T. Ackernley^600000-0002-5951-3498,
A. A. Adefisoye^680000-0003-2448-1550,
B. Adeva^460000-0001-9756-3712,
M. Adinolfi^540000-0002-1326-1264,
P. Adlarson^810000-0001-6280-3851,
C. Agapopoulou^140000-0002-2368-0147,
C.A. Aidala^820000-0001-9540-4988,
Z. Ajaltouni^11,
S. Akar^650000-0003-0288-9694,
K. Akiba^370000-0002-6736-471X,
P. Albicocco^270000-0001-6430-1038,
J. Albrecht^190000-0001-8636-1621,
F. Alessio^480000-0001-5317-1098,
M. Alexander^590000-0002-8148-2392,
Z. Aliouche^620000-0003-0897-4160,
P. Alvarez Cartelle^550000-0003-1652-2834,
R. Amalric^160000-0003-4595-2729,
S. Amato^30000-0002-3277-0662,
J.L. Amey^540000-0002-2597-3808,
Y. Amhis^14,480000-0003-4282-1512,
L. An^60000-0002-3274-5627,
L. Anderlini^260000-0001-6808-2418,
M. Andersson^500000-0003-3594-9163,
A. Andreianov^430000-0002-6273-0506,
P. Andreola^500000-0002-3923-431X,
M. Andreotti^250000-0003-2918-1311,
D. Andreou^680000-0001-6288-0558,
A. Anelli^30,n0000-0002-6191-934X,
D. Ao^70000-0003-1647-4238,
F. Archilli^36,t0000-0002-1779-6813,
M. Argenton^250009-0006-3169-0077,
S. Arguedas Cuendis^9,480000-0003-4234-7005,
A. Artamonov^430000-0002-2785-2233,
M. Artuso^680000-0002-5991-7273,
E. Aslanides^130000-0003-3286-683X,
R. Ataíde Da Silva^490009-0005-1667-2666,
M. Atzeni^640000-0002-3208-3336,
B. Audurier^120000-0001-9090-4254,
D. Bacher^630000-0002-1249-367X,
I. Bachiller Perea^100000-0002-3721-4876,
S. Bachmann^210000-0002-1186-3894,
M. Bachmayer^490000-0001-5996-2747,
J.J. Back^560000-0001-7791-4490,
P. Baladron Rodriguez^460000-0003-4240-2094,
V. Balagura^150000-0002-1611-7188,
W. Baldini^250000-0001-7658-8777,
L. Balzani^190009-0006-5241-1452,
H. Bao^70009-0002-7027-021X,
J. Baptista de Souza Leite^600000-0002-4442-5372,
C. Barbero Pretel^46,120009-0001-1805-6219,
M. Barbetti^260000-0002-6704-6914,
I. R. Barbosa^690000-0002-3226-8672,
R.J. Barlow^620000-0002-8295-8612,
M. Barnyakov^240009-0000-0102-0482,
S. Barsuk^140000-0002-0898-6551,
W. Barter^580000-0002-9264-4799,
M. Bartolini^550000-0002-8479-5802,
J. Bartz^680000-0002-2646-4124,
J.M. Basels^170000-0001-5860-8770,
S. Bashir^390000-0001-9861-8922,
G. Bassi^34,q0000-0002-2145-3805,
B. Batsukh^50000-0003-1020-2549,
P. B. Battista^14,
A. Bay^490000-0002-4862-9399,
A. Beck^560000-0003-4872-1213,
M. Becker^190000-0002-7972-8760,
F. Bedeschi^340000-0002-8315-2119,
I.B. Bediaga^20000-0001-7806-5283,
N. A. Behling^190000-0003-4750-7872,
S. Belin^460000-0001-7154-1304,
V. Bellee^500000-0001-5314-0953,
K. Belous^430000-0003-0014-2589,
I. Belov^280000-0003-1699-9202,
I. Belyaev^350000-0002-7458-7030,
G. Benane^130000-0002-8176-8315,
G. Bencivenni^270000-0002-5107-0610,
E. Ben-Haim^160000-0002-9510-8414,
A. Berezhnoy^430000-0002-4431-7582,
R. Bernet^500000-0002-4856-8063,
S. Bernet Andres^440000-0002-4515-7541,
A. Bertolin^320000-0003-1393-4315,
C. Betancourt^500000-0001-9886-7427,
F. Betti^580000-0002-2395-235X,
J. Bex^550000-0002-2856-8074,
Ia. Bezshyiko^500000-0002-4315-6414,
J. Bhom^400000-0002-9709-903X,
M.S. Bieker^190000-0001-7113-7862,
N.V. Biesuz^250000-0003-3004-0946,
P. Billoir^160000-0001-5433-9876,
A. Biolchini^370000-0001-6064-9993,
M. Birch^610000-0001-9157-4461,
F.C.R. Bishop^100000-0002-0023-3897,
A. Bitadze^620000-0001-7979-1092,
A. Bizzeti^0000-0001-5729-5530,
T. Blake^560000-0002-0259-5891,
F. Blanc^490000-0001-5775-3132,
J.E. Blank^190000-0002-6546-5605,
S. Blusk^680000-0001-9170-684X,
V. Bocharnikov^430000-0003-1048-7732,
J.A. Boelhauve^190000-0002-3543-9959,
O. Boente Garcia^150000-0003-0261-8085,
T. Boettcher^650000-0002-2439-9955,
A. Bohare^580000-0003-1077-8046,
A. Boldyrev^430000-0002-7872-6819,
C.S. Bolognani^780000-0003-3752-6789,
R. Bolzonella^25,k0000-0002-0055-0577,
N. Bondar^430000-0003-2714-9879,
A. Bordelius^480009-0002-3529-8524,
F. Borgato^32,o0000-0002-3149-6710,
S. Borghi^620000-0001-5135-1511,
M. Borsato^30,n0000-0001-5760-2924,
J.T. Borsuk^400000-0002-9065-9030,
S.A. Bouchiba^490000-0002-0044-6470,
M. Bovill^630009-0006-2494-8287,
T.J.V. Bowcock^600000-0002-3505-6915,
A. Boyer^480000-0002-9909-0186,
C. Bozzi^250000-0001-6782-3982,
A. Brea Rodriguez^490000-0001-5650-445X,
N. Breer^190000-0003-0307-3662,
J. Brodzicka^400000-0002-8556-0597,
A. Brossa Gonzalo^46,56,45,†0000-0002-4442-1048,
J. Brown^600000-0001-9846-9672,
D. Brundu^310000-0003-4457-5896,
E. Buchanan^58,
A. Buonaura^500000-0003-4907-6463,
L. Buonincontri^32,o0000-0002-1480-454X,
A.T. Burke^620000-0003-0243-0517,
C. Burr^480000-0002-5155-1094,
J.S. Butter^550000-0002-1816-536X,
J. Buytaert^480000-0002-7958-6790,
W. Byczynski^480009-0008-0187-3395,
S. Cadeddu^310000-0002-7763-500X,
H. Cai^73,
A. C. Caillet^16,
R. Calabrese^25,k0000-0002-1354-5400,
S. Calderon Ramirez^90000-0001-9993-4388,
L. Calefice^450000-0001-6401-1583,
S. Cali^270000-0001-9056-0711,
M. Calvi^30,n0000-0002-8797-1357,
M. Calvo Gomez^440000-0001-5588-1448,
P. Camargo Magalhaes^2,x0000-0003-3641-8110,
J. I. Cambon Bouzas^460000-0002-2952-3118,
P. Campana^270000-0001-8233-1951,
D.H. Campora Perez^780000-0001-8998-9975,
A.F. Campoverde Quezada^70000-0003-1968-1216,
S. Capelli^300000-0002-8444-4498,
L. Capriotti^250000-0003-4899-0587,
R. Caravaca-Mora^90000-0001-8010-0447,
A. Carbone^24,i0000-0002-7045-2243,
L. Carcedo Salgado^460000-0003-3101-3528,
R. Cardinale^28,l0000-0002-7835-7638,
A. Cardini^310000-0002-6649-0298,
P. Carniti^30,n0000-0002-7820-2732,
L. Carus^21,
A. Casais Vidal^640000-0003-0469-2588,
R. Caspary^210000-0002-1449-1619,
G. Casse^600000-0002-8516-237X,
J. Castro Godinez^90000-0003-4808-4904,
M. Cattaneo^480000-0001-7707-169X,
G. Cavallero^25,480000-0002-8342-7047,
V. Cavallini^25,k0000-0001-7601-129X,
S. Celani^210000-0003-4715-7622,
D. Cervenkov^630000-0002-1865-741X,
S. Cesare^29,m0000-0003-0886-7111,
A.J. Chadwick^600000-0003-3537-9404,
I. Chahrour^820000-0002-1472-0987,
M. Charles^160000-0003-4795-498X,
Ph. Charpentier^480000-0001-9295-8635,
E. Chatzianagnostou^370009-0009-3781-1820,
M. Chefdeville^100000-0002-6553-6493,
C. Chen^130000-0002-3400-5489,
S. Chen^50000-0002-8647-1828,
Z. Chen^70000-0002-0215-7269,
A. Chernov^400000-0003-0232-6808,
S. Chernyshenko^520000-0002-2546-6080,
X. Chiotopoulos^780009-0006-5762-6559,
V. Chobanova^800000-0002-1353-6002,
S. Cholak^490000-0001-8091-4766,
M. Chrzaszcz^400000-0001-7901-8710,
A. Chubykin^430000-0003-1061-9643,
V. Chulikov^430000-0002-7767-9117,
P. Ciambrone^270000-0003-0253-9846,
X. Cid Vidal^460000-0002-0468-541X,
G. Ciezarek^480000-0003-1002-8368,
P. Cifra^480000-0003-3068-7029,
P.E.L. Clarke^580000-0003-3746-0732,
M. Clemencic^480000-0003-1710-6824,
H.V. Cliff^550000-0003-0531-0916,
J. Closier^480000-0002-0228-9130,
C. Cocha Toapaxi^210000-0001-5812-8611,
V. Coco^480000-0002-5310-6808,
J. Cogan^130000-0001-7194-7566,
E. Cogneras^110000-0002-8933-9427,
L. Cojocariu^420000-0002-1281-5923,
P. Collins^480000-0003-1437-4022,
T. Colombo^480000-0002-9617-9687,
M. C. Colonna^190009-0000-1704-4139,
A. Comerma-Montells^450000-0002-8980-6048,
L. Congedo^230000-0003-4536-4644,
A. Contu^310000-0002-3545-2969,
N. Cooke^590000-0002-4179-3700,
I. Corredoira ^460000-0002-6089-0899,
A. Correia^160000-0002-6483-8596,
G. Corti^480000-0003-2857-4471,
J.J. Cottee Meldrum^54,
B. Couturier^480000-0001-6749-1033,
D.C. Craik^500000-0002-3684-1560,
M. Cruz Torres^2,f0000-0003-2607-131X,
E. Curras Rivera^490000-0002-6555-0340,
R. Currie^580000-0002-0166-9529,
C.L. Da Silva^670000-0003-4106-8258,
S. Dadabaev^430000-0002-0093-3244,
L. Dai^700000-0002-4070-4729,
X. Dai^60000-0003-3395-7151,
E. Dall'Occo^190000-0001-9313-4021,
J. Dalseno^460000-0003-3288-4683,
C. D'Ambrosio^480000-0003-4344-9994,
J. Daniel^110000-0002-9022-4264,
A. Danilina^430000-0003-3121-2164,
P. d'Argent^230000-0003-2380-8355,
A. Davidson^560009-0002-0647-2028,
J.E. Davies^620000-0002-5382-8683,
A. Davis^620000-0001-9458-5115,
O. De Aguiar Francisco^620000-0003-2735-678X,
C. De Angelis^31,j0009-0005-5033-5866,
F. De Benedetti^480000-0002-7960-3116,
J. de Boer^370000-0002-6084-4294,
K. De Bruyn^770000-0002-0615-4399,
S. De Capua^620000-0002-6285-9596,
M. De Cian^21,480000-0002-1268-9621,
U. De Freitas Carneiro Da Graca^2,a0000-0003-0451-4028,
E. De Lucia^270000-0003-0793-0844,
J.M. De Miranda^20009-0003-2505-7337,
L. De Paula^30000-0002-4984-7734,
M. De Serio^23,g0000-0003-4915-7933,
P. De Simone^270000-0001-9392-2079,
F. De Vellis^190000-0001-7596-5091,
J.A. de Vries^780000-0003-4712-9816,
F. Debernardis^230009-0001-5383-4899,
D. Decamp^100000-0001-9643-6762,
V. Dedu^130000-0001-5672-8672,
S. Dekkers^10000-0001-9598-875X,
L. Del Buono^160000-0003-4774-2194,
B. Delaney^640009-0007-6371-8035,
H.-P. Dembinski^190000-0003-3337-3850,
J. Deng^80000-0002-4395-3616,
V. Denysenko^500000-0002-0455-5404,
O. Deschamps^110000-0002-7047-6042,
F. Dettori^31,j0000-0003-0256-8663,
B. Dey^760000-0002-4563-5806,
P. Di Nezza^270000-0003-4894-6762,
I. Diachkov^430000-0001-5222-5293,
S. Didenko^430000-0001-5671-5863,
S. Ding^680000-0002-5946-581X,
L. Dittmann^210009-0000-0510-0252,
V. Dobishuk^520000-0001-9004-3255,
A. D. Docheva^590000-0002-7680-4043,
C. Dong^4,b0000-0003-3259-6323,
A.M. Donohoe^220000-0002-4438-3950,
F. Dordei^310000-0002-2571-5067,
A.C. dos Reis^20000-0001-7517-8418,
A. D. Dowling^680009-0007-1406-3343,
W. Duan^710000-0003-1765-9939,
P. Duda^790000-0003-4043-7963,
M.W. Dudek^400000-0003-3939-3262,
L. Dufour^480000-0002-3924-2774,
V. Duk^330000-0001-6440-0087,
P. Durante^480000-0002-1204-2270,
M. M. Duras^790000-0002-4153-5293,
J.M. Durham^670000-0002-5831-3398,
O. D. Durmus^760000-0002-8161-7832,
A. Dziurda^400000-0003-4338-7156,
A. Dzyuba^430000-0003-3612-3195,
S. Easo^570000-0002-4027-7333,
E. Eckstein^18,
U. Egede^10000-0001-5493-0762,
A. Egorychev^430000-0001-5555-8982,
V. Egorychev^430000-0002-2539-673X,
S. Eisenhardt^580000-0002-4860-6779,
E. Ejopu^620000-0003-3711-7547,
L. Eklund^810000-0002-2014-3864,
M. Elashri^650000-0001-9398-953X,
J. Ellbracht^190000-0003-1231-6347,
S. Ely^610000-0003-1618-3617,
A. Ene^420000-0001-5513-0927,
E. Epple^650000-0002-6312-3740,
J. Eschle^680000-0002-7312-3699,
S. Esen^210000-0003-2437-8078,
T. Evans^620000-0003-3016-1879,
F. Fabiano^31,j0000-0001-6915-9923,
L.N. Falcao^20000-0003-3441-583X,
Y. Fan^70000-0002-3153-430X,
B. Fang^730000-0003-0030-3813,
L. Fantini^33,p,480000-0002-2351-3998,
M. Faria^490000-0002-4675-4209,
K. Farmer^580000-0003-2364-2877,
D. Fazzini^30,n0000-0002-5938-4286,
L. Felkowski^790000-0002-0196-910X,
M. Feng^5,70000-0002-6308-5078,
M. Feo^19,480000-0001-5266-2442,
A. Fernandez Casani^470000-0003-1394-509X,
M. Fernandez Gomez^460000-0003-1984-4759,
A.D. Fernez^660000-0001-9900-6514,
F. Ferrari^240000-0002-3721-4585,
F. Ferreira Rodrigues^30000-0002-4274-5583,
M. Ferrillo^500000-0003-1052-2198,
M. Ferro-Luzzi^480009-0008-1868-2165,
S. Filippov^430000-0003-3900-3914,
R.A. Fini^230000-0002-3821-3998,
M. Fiorini^25,k0000-0001-6559-2084,
M. Firlej^390000-0002-1084-0084,
K.L. Fischer^630009-0000-8700-9910,
D.S. Fitzgerald^820000-0001-6862-6876,
C. Fitzpatrick^620000-0003-3674-0812,
T. Fiutowski^390000-0003-2342-8854,
F. Fleuret^150000-0002-2430-782X,
M. Fontana^240000-0003-4727-831X,
L. F. Foreman^620000-0002-2741-9966,
R. Forty^480000-0003-2103-7577,
D. Foulds-Holt^550000-0001-9921-687X,
V. Franco Lima^30000-0002-3761-209X,
M. Franco Sevilla^660000-0002-5250-2948,
M. Frank^480000-0002-4625-559X,
E. Franzoso^25,k0000-0003-2130-1593,
G. Frau^620000-0003-3160-482X,
C. Frei^480000-0001-5501-5611,
D.A. Friday^620000-0001-9400-3322,
J. Fu^70000-0003-3177-2700,
Q. Fuehring^19,550000-0003-3179-2525,
Y. Fujii^10000-0002-0813-3065,
T. Fulghesu^160000-0001-9391-8619,
E. Gabriel^370000-0001-8300-5939,
G. Galati^230000-0001-7348-3312,
M.D. Galati^370000-0002-8716-4440,
A. Gallas Torreira^460000-0002-2745-7954,
D. Galli^24,i0000-0003-2375-6030,
S. Gambetta^580000-0003-2420-0501,
M. Gandelman^30000-0001-8192-8377,
P. Gandini^290000-0001-7267-6008,
B. Ganie^620009-0008-7115-3940,
H. Gao^70000-0002-6025-6193,
R. Gao^630009-0004-1782-7642,
T.Q. Gao^550000-0001-7933-0835,
Y. Gao^80000-0002-6069-8995,
Y. Gao^60000-0003-1484-0943,
Y. Gao^8,
M. Garau^31,j0000-0002-0505-9584,
L.M. Garcia Martin^490000-0003-0714-8991,
P. Garcia Moreno^450000-0002-3612-1651,
J. García Pardiñas^480000-0003-2316-8829,
K. G. Garg^80000-0002-8512-8219,
L. Garrido^450000-0001-8883-6539,
C. Gaspar^480000-0002-8009-1509,
R.E. Geertsema^370000-0001-6829-7777,
L.L. Gerken^190000-0002-6769-3679,
E. Gersabeck^620000-0002-2860-6528,
M. Gersabeck^620000-0002-0075-8669,
T. Gershon^560000-0002-3183-5065,
S. G. Ghizzo^28,l,
Z. Ghorbanimoghaddam^54,
L. Giambastiani^32,o0000-0002-5170-0635,
F. I. Giasemis^16,e0000-0003-0622-1069,
V. Gibson^550000-0002-6661-1192,
H.K. Giemza^410000-0003-2597-8796,
A.L. Gilman^630000-0001-5934-7541,
M. Giovannetti^270000-0003-2135-9568,
A. Gioventù^450000-0001-5399-326X,
L. Girardey^620000-0002-8254-7274,
P. Gironella Gironell^450000-0001-5603-4750,
C. Giugliano^25,k0000-0002-6159-4557,
M.A. Giza^400000-0002-0805-1561,
E.L. Gkougkousis^610000-0002-2132-2071,
F.C. Glaser^14,210000-0001-8416-5416,
V.V. Gligorov^16,480000-0002-8189-8267,
C. Göbel^690000-0003-0523-495X,
E. Golobardes^440000-0001-8080-0769,
D. Golubkov^430000-0001-6216-1596,
A. Golutvin^61,43,480000-0003-2500-8247,
S. Gomez Fernandez^450000-0002-3064-9834,
F. Goncalves Abrantes^630000-0002-7318-482X,
M. Goncerz^400000-0002-9224-914X,
G. Gong^4,b0000-0002-7822-3947,
J. A. Gooding^190000-0003-3353-9750,
I.V. Gorelov^430000-0001-5570-0133,
C. Gotti^300000-0003-2501-9608,
J.P. Grabowski^180000-0001-8461-8382,
L.A. Granado Cardoso^480000-0003-2868-2173,
E. Graugés^450000-0001-6571-4096,
E. Graverini^49,r0000-0003-4647-6429,
L. Grazette^560000-0001-7907-4261,
G. Graziani^0000-0001-8212-846X,
A. T. Grecu^420000-0002-7770-1839,
L.M. Greeven^370000-0001-5813-7972,
N.A. Grieser^650000-0003-0386-4923,
L. Grillo^590000-0001-5360-0091,
S. Gromov^430000-0002-8967-3644,
C. Gu^150000-0001-5635-6063,
M. Guarise^250000-0001-8829-9681,
L. Guerry^110009-0004-8932-4024,
M. Guittiere^140000-0002-2916-7184,
V. Guliaeva^430000-0003-3676-5040,
P. A. Günther^210000-0002-4057-4274,
A.-K. Guseinov^490000-0002-5115-0581,
E. Gushchin^430000-0001-8857-1665,
Y. Guz^6,43,480000-0001-7552-400X,
T. Gys^480000-0002-6825-6497,
K. Habermann^180009-0002-6342-5965,
T. Hadavizadeh^10000-0001-5730-8434,
C. Hadjivasiliou^660000-0002-2234-0001,
G. Haefeli^490000-0002-9257-839X,
C. Haen^480000-0002-4947-2928,
J. Haimberger^480000-0002-3363-7783,
M. Hajheidari^48,
G. Hallett^560009-0005-1427-6520,
M.M. Halvorsen^480000-0003-0959-3853,
P.M. Hamilton^660000-0002-2231-1374,
J. Hammerich^600000-0002-5556-1775,
Q. Han^80000-0002-7958-2917,
X. Han^210000-0001-7641-7505,
S. Hansmann-Menzemer^210000-0002-3804-8734,
L. Hao^70000-0001-8162-4277,
N. Harnew^630000-0001-9616-6651,
M. Hartmann^140009-0005-8756-0960,
S. Hashmi^390000-0003-2714-2706,
J. He^7,c0000-0002-1465-0077,
F. Hemmer^480000-0001-8177-0856,
C. Henderson^650000-0002-6986-9404,
R.D.L. Henderson^1,560000-0001-6445-4907,
A.M. Hennequin^480009-0008-7974-3785,
K. Hennessy^600000-0002-1529-8087,
L. Henry^490000-0003-3605-832X,
J. Herd^610000-0001-7828-3694,
P. Herrero Gascon^210000-0001-6265-8412,
J. Heuel^170000-0001-9384-6926,
A. Hicheur^30000-0002-3712-7318,
G. Hijano Mendizabal^50,
D. Hill^490000-0003-2613-7315,
S.E. Hollitt^190000-0002-4962-3546,
J. Horswill^620000-0002-9199-8616,
R. Hou^80000-0002-3139-3332,
Y. Hou^110000-0001-6454-278X,
N. Howarth^60,
J. Hu^21,
J. Hu^710000-0002-8227-4544,
W. Hu^60000-0002-2855-0544,
X. Hu^4,b0000-0002-5924-2683,
W. Huang^70000-0002-1407-1729,
W. Hulsbergen^370000-0003-3018-5707,
R.J. Hunter^560000-0001-7894-8799,
M. Hushchyn^430000-0002-8894-6292,
D. Hutchcroft^600000-0002-4174-6509,
M. Idzik^390000-0001-6349-0033,
D. Ilin^430000-0001-8771-3115,
P. Ilten^650000-0001-5534-1732,
A. Inglessi^430000-0002-2522-6722,
A. Iniukhin^430000-0002-1940-6276,
A. Ishteev^430000-0003-1409-1428,
K. Ivshin^430000-0001-8403-0706,
R. Jacobsson^480000-0003-4971-7160,
H. Jage^170000-0002-8096-3792,
S.J. Jaimes Elles^47,740000-0003-0182-8638,
S. Jakobsen^480000-0002-6564-040X,
E. Jans^370000-0002-5438-9176,
B.K. Jashal^470000-0002-0025-4663,
A. Jawahery^66,480000-0003-3719-119X,
V. Jevtic^190000-0001-6427-4746,
E. Jiang^660000-0003-1728-8525,
X. Jiang^5,70000-0001-8120-3296,
Y. Jiang^70000-0002-8964-5109,
Y. J. Jiang^60000-0002-0656-8647,
M. John^630000-0002-8579-844X,
A. John Rubesh Rajan^220000-0002-9850-4965,
D. Johnson^530000-0003-3272-6001,
C.R. Jones^550000-0003-1699-8816,
T.P. Jones^560000-0001-5706-7255,
S. Joshi^410000-0002-5821-1674,
B. Jost^480009-0005-4053-1222,
J. Juan Castella^550009-0009-5577-1308,
N. Jurik^480000-0002-6066-7232,
I. Juszczak^400000-0002-1285-3911,
D. Kaminaris^490000-0002-8912-4653,
S. Kandybei^510000-0003-3598-0427,
M. Kane^58 0009-0006-5064-966X,
Y. Kang^4,b0000-0002-6528-8178,
C. Kar^110000-0002-6407-6974,
M. Karacson^480009-0006-1867-9674,
D. Karpenkov^430000-0001-8686-2303,
A. Kauniskangas^490000-0002-4285-8027,
J.W. Kautz^650000-0001-8482-5576,
M.K. Kazanecki^40,
F. Keizer^480000-0002-1290-6737,
M. Kenzie^550000-0001-7910-4109,
T. Ketel^370000-0002-9652-1964,
B. Khanji^680000-0003-3838-281X,
A. Kharisova^430000-0002-5291-9583,
S. Kholodenko^34,480000-0002-0260-6570,
G. Khreich^140000-0002-6520-8203,
T. Kirn^170000-0002-0253-8619,
V.S. Kirsebom^30,n0009-0005-4421-9025,
O. Kitouni^640000-0001-9695-8165,
S. Klaver^380000-0001-7909-1272,
N. Kleijne^34,q0000-0003-0828-0943,
K. Klimaszewski^410000-0003-0741-5922,
M.R. Kmiec^410000-0002-1821-1848,
S. Koliiev^520009-0002-3680-1224,
L. Kolk^190000-0003-2589-5130,
A. Konoplyannikov^430009-0005-2645-8364,
P. Kopciewicz^39,480000-0001-9092-3527,
P. Koppenburg^370000-0001-8614-7203,
M. Korolev^430000-0002-7473-2031,
I. Kostiuk^370000-0002-8767-7289,
O. Kot^52,
S. Kotriakhova^0000-0002-1495-0053,
A. Kozachuk^430000-0001-6805-0395,
P. Kravchenko^430000-0002-4036-2060,
L. Kravchuk^430000-0001-8631-4200,
M. Kreps^560000-0002-6133-486X,
P. Krokovny^430000-0002-1236-4667,
W. Krupa^680000-0002-7947-465X,
W. Krzemien^410000-0002-9546-358X,
O.K. Kshyvanskyi^52,
S. Kubis^790000-0001-8774-8270,
M. Kucharczyk^400000-0003-4688-0050,
V. Kudryavtsev^430009-0000-2192-995X,
E. Kulikova^430009-0002-8059-5325,
A. Kupsc^810000-0003-4937-2270,
B. K. Kutsenko^130000-0002-8366-1167,
D. Lacarrere^480009-0005-6974-140X,
P. Laguarta Gonzalez^450009-0005-3844-0778,
A. Lai^310000-0003-1633-0496,
A. Lampis^310000-0002-5443-4870,
D. Lancierini^550000-0003-1587-4555,
C. Landesa Gomez^460000-0001-5241-8642,
J.J. Lane^10000-0002-5816-9488,
R. Lane^540000-0002-2360-2392,
G. Lanfranchi^270000-0002-9467-8001,
C. Langenbruch^210000-0002-3454-7261,
J. Langer^190000-0002-0322-5550,
O. Lantwin^430000-0003-2384-5973,
T. Latham^560000-0002-7195-8537,
F. Lazzari^34,r0000-0002-3151-3453,
C. Lazzeroni^530000-0003-4074-4787,
R. Le Gac^130000-0002-7551-6971,
H. Lee^600009-0003-3006-2149,
R. Lefèvre^110000-0002-6917-6210,
A. Leflat^430000-0001-9619-6666,
S. Legotin^430000-0003-3192-6175,
M. Lehuraux^560000-0001-7600-7039,
E. Lemos Cid^480000-0003-3001-6268,
O. Leroy^130000-0002-2589-240X,
T. Lesiak^400000-0002-3966-2998,
E. Lesser^48,
B. Leverington^210000-0001-6640-7274,
A. Li^4,b0000-0001-5012-6013,
C. Li^130000-0002-3554-5479,
H. Li^710000-0002-2366-9554,
K. Li^80000-0002-2243-8412,
L. Li^620000-0003-4625-6880,
M. Li^8,
P. Li^70000-0003-2740-9765,
P.-R. Li^720000-0002-1603-3646,
Q. Li^5,70009-0004-1932-8580,
S. Li^80000-0001-5455-3768,
T. Li^5,d0000-0002-5241-2555,
T. Li^710000-0002-5723-0961,
Y. Li^8,
Y. Li^50000-0003-2043-4669,
Z. Lian^4,b0000-0003-4602-6946,
X. Liang^680000-0002-5277-9103,
S. Libralon^470009-0002-5841-9624,
C. Lin^70000-0001-7587-3365,
T. Lin^570000-0001-6052-8243,
R. Lindner^480000-0002-5541-6500,
V. Lisovskyi^490000-0003-4451-214X,
R. Litvinov^31,480000-0002-4234-435X,
F. L. Liu^10009-0002-2387-8150,
G. Liu^710000-0001-5961-6588,
K. Liu^720000-0003-4529-3356,
S. Liu^5,70000-0002-6919-227X,
W. Liu^8,
Y. Liu^580000-0003-3257-9240,
Y. Liu^72,
Y. L. Liu^610000-0001-9617-6067,
A. Lobo Salvia^450000-0002-2375-9509,
A. Loi^310000-0003-4176-1503,
J. Lomba Castro^460000-0003-1874-8407,
T. Long^550000-0001-7292-848X,
J.H. Lopes^30000-0003-1168-9547,
A. Lopez Huertas^450000-0002-6323-5582,
S. López Soliño^460000-0001-9892-5113,
Q. Lu^150000-0002-6598-1941,
C. Lucarelli^260000-0002-8196-1828,
D. Lucchesi^32,o0000-0003-4937-7637,
M. Lucio Martinez^780000-0001-6823-2607,
V. Lukashenko^37,520000-0002-0630-5185,
Y. Luo^60009-0001-8755-2937,
A. Lupato^32,h0000-0003-0312-3914,
E. Luppi^25,k0000-0002-1072-5633,
K. Lynch^220000-0002-7053-4951,
X.-R. Lyu^70000-0001-5689-9578,
G. M. Ma^4,b0000-0001-8838-5205,
R. Ma^70000-0002-0152-2412,
S. Maccolini^190000-0002-9571-7535,
F. Machefert^140000-0002-4644-5916,
F. Maciuc^420000-0001-6651-9436,
B. Mack^680000-0001-8323-6454,
I. Mackay^630000-0003-0171-7890,
L. M. Mackey^680000-0002-8285-3589,
L.R. Madhan Mohan^550000-0002-9390-8821,
M. J. Madurai^530000-0002-6503-0759,
A. Maevskiy^430000-0003-1652-8005,
D. Magdalinski^370000-0001-6267-7314,
D. Maisuzenko^430000-0001-5704-3499,
M.W. Majewski^39,
J.J. Malczewski^400000-0003-2744-3656,
S. Malde^630000-0002-8179-0707,
L. Malentacca^48,
A. Malinin^430000-0002-3731-9977,
T. Maltsev^430000-0002-2120-5633,
G. Manca^31,j0000-0003-1960-4413,
G. Mancinelli^130000-0003-1144-3678,
C. Mancuso^29,14,m0000-0002-2490-435X,
R. Manera Escalero^450000-0003-4981-6847,
D. Manuzzi^240000-0002-9915-6587,
D. Marangotto^29,m0000-0001-9099-4878,
J.F. Marchand^100000-0002-4111-0797,
R. Marchevski^490000-0003-3410-0918,
U. Marconi^240000-0002-5055-7224,
E. Mariani^16,
S. Mariani^480000-0002-7298-3101,
C. Marin Benito^450000-0003-0529-6982,
J. Marks^210000-0002-2867-722X,
A.M. Marshall^540000-0002-9863-4954,
L. Martel^630000-0001-8562-0038,
G. Martelli^33,p0000-0002-6150-3168,
G. Martellotti^350000-0002-8663-9037,
L. Martinazzoli^480000-0002-8996-795X,
M. Martinelli^30,n0000-0003-4792-9178,
D. Martinez Santos^460000-0002-6438-4483,
F. Martinez Vidal^470000-0001-6841-6035,
A. Massafferri^20000-0002-3264-3401,
R. Matev^480000-0001-8713-6119,
A. Mathad^480000-0002-9428-4715,
V. Matiunin^430000-0003-4665-5451,
C. Matteuzzi^680000-0002-4047-4521,
K.R. Mattioli^150000-0003-2222-7727,
A. Mauri^610000-0003-1664-8963,
E. Maurice^150000-0002-7366-4364,
J. Mauricio^450000-0002-9331-1363,
P. Mayencourt^490000-0002-8210-1256,
J. Mazorra de Cos^470000-0003-0525-2736,
M. Mazurek^410000-0002-3687-9630,
M. McCann^610000-0002-3038-7301,
L. Mcconnell^220009-0004-7045-2181,
T.H. McGrath^620000-0001-8993-3234,
N.T. McHugh^590000-0002-5477-3995,
A. McNab^620000-0001-5023-2086,
R. McNulty^220000-0001-7144-0175,
B. Meadows^650000-0002-1947-8034,
G. Meier^190000-0002-4266-1726,
D. Melnychuk^410000-0003-1667-7115,
F. M. Meng^4,b0009-0004-1533-6014,
M. Merk^37,780000-0003-0818-4695,
A. Merli^490000-0002-0374-5310,
L. Meyer Garcia^660000-0002-2622-8551,
D. Miao^5,70000-0003-4232-5615,
H. Miao^70000-0002-1936-5400,
M. Mikhasenko^750000-0002-6969-2063,
D.A. Milanes^740000-0001-7450-1121,
A. Minotti^30,n0000-0002-0091-5177,
E. Minucci^680000-0002-3972-6824,
T. Miralles^110000-0002-4018-1454,
B. Mitreska^190000-0002-1697-4999,
D.S. Mitzel^190000-0003-3650-2689,
A. Modak^570000-0003-1198-1441,
R.A. Mohammed^630000-0002-3718-4144,
R.D. Moise^170000-0002-5662-8804,
S. Mokhnenko^430000-0002-1849-1472,
E. F. Molina Cardenas^820009-0002-0674-5305,
T. Mombächer^480000-0002-5612-979X,
M. Monk^56,10000-0003-0484-0157,
S. Monteil^110000-0001-5015-3353,
A. Morcillo Gomez^460000-0001-9165-7080,
G. Morello^270000-0002-6180-3697,
M.J. Morello^34,q0000-0003-4190-1078,
M.P. Morgenthaler^210000-0002-7699-5724,
J. Moron^390000-0002-1857-1675,
A.B. Morris^480000-0002-0832-9199,
A.G. Morris^130000-0001-6644-9888,
R. Mountain^680000-0003-1908-4219,
H. Mu^4,b0000-0001-9720-7507,
Z. M. Mu^60000-0001-9291-2231,
E. Muhammad^560000-0001-7413-5862,
F. Muheim^580000-0002-1131-8909,
M. Mulder^770000-0001-6867-8166,
K. Müller^500000-0002-5105-1305,
F. Muñoz-Rojas^90000-0002-4978-602X,
R. Murta^610000-0002-6915-8370,
P. Naik^600000-0001-6977-2971,
T. Nakada^490009-0000-6210-6861,
R. Nandakumar^570000-0002-6813-6794,
T. Nanut^480000-0002-5728-9867,
I. Nasteva^30000-0001-7115-7214,
M. Needham^580000-0002-8297-6714,
N. Neri^29,m0000-0002-6106-3756,
S. Neubert^180000-0002-0706-1944,
N. Neufeld^480000-0003-2298-0102,
P. Neustroev^43,
J. Nicolini^19,140000-0001-9034-3637,
D. Nicotra^780000-0001-7513-3033,
E.M. Niel^490000-0002-6587-4695,
N. Nikitin^430000-0003-0215-1091,
P. Nogarolli^30009-0001-4635-1055,
P. Nogga^18,
C. Normand^540000-0001-5055-7710,
J. Novoa Fernandez^460000-0002-1819-1381,
G. Nowak^650000-0003-4864-7164,
C. Nunez^820000-0002-2521-9346,
H. N. Nur^590000-0002-7822-523X,
A. Oblakowska-Mucha^390000-0003-1328-0534,
V. Obraztsov^430000-0002-0994-3641,
T. Oeser^170000-0001-7792-4082,
S. Okamura^25,k0000-0003-1229-3093,
A. Okhotnikov^43,
O. Okhrimenko^520000-0002-0657-6962,
R. Oldeman^31,j0000-0001-6902-0710,
F. Oliva^580000-0001-7025-3407,
M. Olocco^190000-0002-6968-1217,
C.J.G. Onderwater^780000-0002-2310-4166,
R.H. O'Neil^580000-0002-9797-8464,
D. Osthues^19,
J.M. Otalora Goicochea^30000-0002-9584-8500,
P. Owen^500000-0002-4161-9147,
A. Oyanguren^470000-0002-8240-7300,
O. Ozcelik^580000-0003-3227-9248,
F. Paciolla^34,u0000-0002-6001-600X,
A. Padee^410000-0002-5017-7168,
K.O. Padeken^180000-0001-7251-9125,
B. Pagare^560000-0003-3184-1622,
P.R. Pais^210009-0005-9758-742X,
T. Pajero^480000-0001-9630-2000,
A. Palano^230000-0002-6095-9593,
M. Palutan^270000-0001-7052-1360,
G. Panshin^430000-0001-9163-2051,
L. Paolucci^560000-0003-0465-2893,
A. Papanestis^57,480000-0002-5405-2901,
M. Pappagallo^23,g0000-0001-7601-5602,
L.L. Pappalardo^25,k0000-0002-0876-3163,
C. Pappenheimer^650000-0003-0738-3668,
C. Parkes^620000-0003-4174-1334,
B. Passalacqua^250000-0003-3643-7469,
G. Passaleva^260000-0002-8077-8378,
D. Passaro^34,q0000-0002-8601-2197,
A. Pastore^230000-0002-5024-3495,
M. Patel^610000-0003-3871-5602,
J. Patoc^630009-0000-1201-4918,
C. Patrignani^24,i0000-0002-5882-1747,
A. Paul^680009-0006-7202-0811,
C.J. Pawley^780000-0001-9112-3724,
A. Pellegrino^370000-0002-7884-345X,
J. Peng^5,70009-0005-4236-4667,
M. Pepe Altarelli^270000-0002-1642-4030,
S. Perazzini^240000-0002-1862-7122,
D. Pereima^430000-0002-7008-8082,
H. Pereira Da Costa^670000-0002-3863-352X,
A. Pereiro Castro^460000-0001-9721-3325,
P. Perret^110000-0002-5732-4343,
A. Perro^480000-0002-1996-0496,
K. Petridis^540000-0001-7871-5119,
A. Petrolini^28,l0000-0003-0222-7594,
J. P. Pfaller^650009-0009-8578-3078,
H. Pham^680000-0003-2995-1953,
L. Pica^34,q0000-0001-9837-6556,
M. Piccini^330000-0001-8659-4409,
L. Piccolo^310000-0003-1896-2892,
B. Pietrzyk^100000-0003-1836-7233,
G. Pietrzyk^140000-0001-9622-820X,
D. Pinci^350000-0002-7224-9708,
F. Pisani^480000-0002-7763-252X,
M. Pizzichemi^30,n,480000-0001-5189-230X,
V. Placinta^420000-0003-4465-2441,
M. Plo Casasus^460000-0002-2289-918X,
T. Poeschl^480000-0003-3754-7221,
F. Polci^16,480000-0001-8058-0436,
M. Poli Lener^270000-0001-7867-1232,
A. Poluektov^130000-0003-2222-9925,
N. Polukhina^430000-0001-5942-1772,
I. Polyakov^430000-0002-6855-7783,
E. Polycarpo^30000-0002-4298-5309,
S. Ponce^480000-0002-1476-7056,
D. Popov^70000-0002-8293-2922,
S. Poslavskii^430000-0003-3236-1452,
K. Prasanth^580000-0001-9923-0938,
C. Prouve^460000-0003-2000-6306,
D. Provenzano^31,j0009-0005-9992-9761,
V. Pugatch^520000-0002-5204-9821,
G. Punzi^34,r0000-0002-8346-9052,
S. Qasim^500000-0003-4264-9724,
Q. Q. Qian^60000-0001-6453-4691,
W. Qian^70000-0003-3932-7556,
N. Qin^4,b0000-0001-8453-658X,
S. Qu^4,b0000-0002-7518-0961,
R. Quagliani^480000-0002-3632-2453,
R.I. Rabadan Trejo^560000-0002-9787-3910,
J.H. Rademacker^540000-0003-2599-7209,
M. Rama^340000-0003-3002-4719,
M. Ramírez García^820000-0001-7956-763X,
V. Ramos De Oliveira^690000-0003-3049-7866,
M. Ramos Pernas^560000-0003-1600-9432,
M.S. Rangel^30000-0002-8690-5198,
F. Ratnikov^430000-0003-0762-5583,
G. Raven^380000-0002-2897-5323,
M. Rebollo De Miguel^470000-0002-4522-4863,
F. Redi^29,h0000-0001-9728-8984,
J. Reich^540000-0002-2657-4040,
F. Reiss^620000-0002-8395-7654,
Z. Ren^70000-0001-9974-9350,
P.K. Resmi^630000-0001-9025-2225,
R. Ribatti^490000-0003-1778-1213,
G. R. Ricart^15,120000-0002-9292-2066,
D. Riccardi^34,q0009-0009-8397-572X,
S. Ricciardi^570000-0002-4254-3658,
K. Richardson^640000-0002-6847-2835,
M. Richardson-Slipper^580000-0002-2752-001X,
K. Rinnert^600000-0001-9802-1122,
P. Robbe^140000-0002-0656-9033,
G. Robertson^590000-0002-7026-1383,
E. Rodrigues^600000-0003-2846-7625,
E. Rodriguez Fernandez^460000-0002-3040-065X,
J.A. Rodriguez Lopez^740000-0003-1895-9319,
E. Rodriguez Rodriguez^460000-0002-7973-8061,
J. Roensch^19,
A. Rogachev^430000-0002-7548-6530,
A. Rogovskiy^570000-0002-1034-1058,
D.L. Rolf^480000-0001-7908-7214,
P. Roloff^480000-0001-7378-4350,
V. Romanovskiy^650000-0003-0939-4272,
M. Romero Lamas^460000-0002-1217-8418,
A. Romero Vidal^460000-0002-8830-1486,
G. Romolini^250000-0002-0118-4214,
F. Ronchetti^490000-0003-3438-9774,
T. Rong^60000-0002-5479-9212,
M. Rotondo^270000-0001-5704-6163,
S. R. Roy^210000-0002-3999-6795,
M.S. Rudolph^680000-0002-0050-575X,
M. Ruiz Diaz^210000-0001-6367-6815,
R.A. Ruiz Fernandez^460000-0002-5727-4454,
J. Ruiz Vidal^81,y0000-0001-8362-7164,
A. Ryzhikov^430000-0002-3543-0313,
J. Ryzka^390000-0003-4235-2445,
J. J. Saavedra-Arias^90000-0002-2510-8929,
J.J. Saborido Silva^460000-0002-6270-130X,
R. Sadek^150000-0003-0438-8359,
N. Sagidova^430000-0002-2640-3794,
D. Sahoo^760000-0002-5600-9413,
N. Sahoo^530000-0001-9539-8370,
B. Saitta^31,j0000-0003-3491-0232,
M. Salomoni^30,n,480009-0007-9229-653X,
I. Sanderswood^470000-0001-7731-6757,
R. Santacesaria^350000-0003-3826-0329,
C. Santamarina Rios^460000-0002-9810-1816,
M. Santimaria^27,480000-0002-8776-6759,
L. Santoro ^20000-0002-2146-2648,
E. Santovetti^360000-0002-5605-1662,
A. Saputi^25,480000-0001-6067-7863,
D. Saranin^430000-0002-9617-9986,
A. Sarnatskiy^770009-0007-2159-3633,
G. Sarpis^580000-0003-1711-2044,
M. Sarpis^620000-0002-6402-1674,
C. Satriano^35,s0000-0002-4976-0460,
A. Satta^360000-0003-2462-913X,
M. Saur^60000-0001-8752-4293,
D. Savrina^430000-0001-8372-6031,
H. Sazak^170000-0003-2689-1123,
F. Sborzacchi^48,270009-0004-7916-2682,
L.G. Scantlebury Smead^630000-0001-8702-7991,
A. Scarabotto^190000-0003-2290-9672,
S. Schael^170000-0003-4013-3468,
S. Scherl^600000-0003-0528-2724,
M. Schiller^590000-0001-8750-863X,
H. Schindler^480000-0002-1468-0479,
M. Schmelling^200000-0003-3305-0576,
B. Schmidt^480000-0002-8400-1566,
S. Schmitt^170000-0002-6394-1081,
H. Schmitz^18,
O. Schneider^490000-0002-6014-7552,
A. Schopper^480000-0002-8581-3312,
N. Schulte^190000-0003-0166-2105,
S. Schulte^490009-0001-8533-0783,
M.H. Schune^140000-0002-3648-0830,
R. Schwemmer^480009-0005-5265-9792,
G. Schwering^170000-0003-1731-7939,
B. Sciascia^270000-0003-0670-006X,
A. Sciuccati^480000-0002-8568-1487,
S. Sellam^460000-0003-0383-1451,
A. Semennikov^430000-0003-1130-2197,
T. Senger^500009-0006-2212-6431,
M. Senghi Soares^380000-0001-9676-6059,
A. Sergi^28,l,480000-0001-9495-6115,
N. Serra^500000-0002-5033-0580,
L. Sestini^320000-0002-1127-5144,
A. Seuthe^190000-0002-0736-3061,
Y. Shang^60000-0001-7987-7558,
D.M. Shangase^820000-0002-0287-6124,
M. Shapkin^430000-0002-4098-9592,
R. S. Sharma^680000-0003-1331-1791,
I. Shchemerov^430000-0001-9193-8106,
L. Shchutska^490000-0003-0700-5448,
T. Shears^600000-0002-2653-1366,
L. Shekhtman^430000-0003-1512-9715,
Z. Shen^60000-0003-1391-5384,
S. Sheng^5,70000-0002-1050-5649,
V. Shevchenko^430000-0003-3171-9125,
B. Shi^70000-0002-5781-8933,
Q. Shi^70000-0001-7915-8211,
Y. Shimizu^140000-0002-4936-1152,
E. Shmanin^240000-0002-8868-1730,
R. Shorkin^430000-0001-8881-3943,
J.D. Shupperd^680009-0006-8218-2566,
R. Silva Coutinho^680000-0002-1545-959X,
G. Simi^32,o0000-0001-6741-6199,
S. Simone^23,g0000-0003-3631-8398,
N. Skidmore^560000-0003-3410-0731,
T. Skwarnicki^680000-0002-9897-9506,
M.W. Slater^530000-0002-2687-1950,
J.C. Smallwood^630000-0003-2460-3327,
E. Smith^640000-0002-9740-0574,
K. Smith^670000-0002-1305-3377,
M. Smith^610000-0002-3872-1917,
A. Snoch^370000-0001-6431-6360,
L. Soares Lavra^580000-0002-2652-123X,
M.D. Sokoloff^650000-0001-6181-4583,
F.J.P. Soler^590000-0002-4893-3729,
A. Solomin^43,540000-0003-0644-3227,
A. Solovev^430000-0002-5355-5996,
I. Solovyev^430000-0003-4254-6012,
R. Song^10000-0002-8854-8905,
Y. Song^490000-0003-0256-4320,
Y. Song^4,b0000-0003-1959-5676,
Y. S. Song^60000-0003-3471-1751,
F.L. Souza De Almeida^680000-0001-7181-6785,
B. Souza De Paula^30009-0003-3794-3408,
E. Spadaro Norella^28,l0000-0002-1111-5597,
E. Spedicato^240000-0002-4950-6665,
J.G. Speer^190000-0002-6117-7307,
E. Spiridenkov^43,
P. Spradlin^590000-0002-5280-9464,
V. Sriskaran^480000-0002-9867-0453,
F. Stagni^480000-0002-7576-4019,
M. Stahl^480000-0001-8476-8188,
S. Stahl^480000-0002-8243-400X,
S. Stanislaus^630000-0003-1776-0498,
E.N. Stein^480000-0001-5214-8865,
O. Steinkamp^500000-0001-7055-6467,
O. Stenyakin^43,
H. Stevens^190000-0002-9474-9332,
D. Strekalina^430000-0003-3830-4889,
Y. Su^70000-0002-2739-7453,
F. Suljik^630000-0001-6767-7698,
J. Sun^310000-0002-6020-2304,
L. Sun^730000-0002-0034-2567,
Y. Sun^660000-0003-4933-5058,
D. Sundfeld^20000-0002-5147-3698,
W. Sutcliffe^50,
P.N. Swallow^530000-0003-2751-8515,
K. Swientek^390000-0001-6086-4116,
F. Swystun^550009-0006-0672-7771,
A. Szabelski^410000-0002-6604-2938,
T. Szumlak^390000-0002-2562-7163,
Y. Tan^4,b0000-0003-3860-6545,
M.D. Tat^630000-0002-6866-7085,
A. Terentev^430000-0003-2574-8560,
F. Terzuoli^34,u,480000-0002-9717-225X,
F. Teubert^480000-0003-3277-5268,
E. Thomas^480000-0003-0984-7593,
D.J.D. Thompson^530000-0003-1196-5943,
H. Tilquin^610000-0003-4735-2014,
V. Tisserand^110000-0003-4916-0446,
S. T'Jampens^100000-0003-4249-6641,
M. Tobin^5,480000-0002-2047-7020,
L. Tomassetti^25,k0000-0003-4184-1335,
G. Tonani^29,m,480000-0001-7477-1148,
X. Tong^60000-0002-5278-1203,
D. Torres Machado^20000-0001-7030-6468,
L. Toscano^190009-0007-5613-6520,
D.Y. Tou^4,b0000-0002-4732-2408,
C. Trippl^440000-0003-3664-1240,
G. Tuci^210000-0002-0364-5758,
N. Tuning^370000-0003-2611-7840,
L.H. Uecker^210000-0003-3255-9514,
A. Ukleja^390000-0003-0480-4850,
D.J. Unverzagt^210000-0002-1484-2546,
E. Ursov^430000-0002-6519-4526,
A. Usachov^380000-0002-5829-6284,
A. Ustyuzhanin^430000-0001-7865-2357,
U. Uwer^210000-0002-8514-3777,
V. Vagnoni^240000-0003-2206-311X,
V. Valcarce Cadenas^460009-0006-3241-8964,
G. Valenti^240000-0002-6119-7535,
N. Valls Canudas^480000-0001-8748-8448,
H. Van Hecke^670000-0001-7961-7190,
E. van Herwijnen^610000-0001-8807-8811,
C.B. Van Hulse^46,w0000-0002-5397-6782,
R. Van Laak^490000-0002-7738-6066,
M. van Veghel^370000-0001-6178-6623,
G. Vasquez^500000-0002-3285-7004,
R. Vazquez Gomez^450000-0001-5319-1128,
P. Vazquez Regueiro^460000-0002-0767-9736,
C. Vázquez Sierra^460000-0002-5865-0677,
S. Vecchi^250000-0002-4311-3166,
J.J. Velthuis^540000-0002-4649-3221,
M. Veltri^26,v0000-0001-7917-9661,
A. Venkateswaran^490000-0001-6950-1477,
M. Verdoglia^310009-0006-3864-8365,
M. Vesterinen^560000-0001-7717-2765,
D. Vico Benet^630009-0009-3494-2825,
P. V. Vidrier Villalba^45,
M. Vieites Diaz^480000-0002-0944-4340,
X. Vilasis-Cardona^440000-0002-1915-9543,
E. Vilella Figueras^600000-0002-7865-2856,
A. Villa^240000-0002-9392-6157,
P. Vincent^160000-0002-9283-4541,
F.C. Volle^530000-0003-1828-3881,
D. vom Bruch^130000-0001-9905-8031,
N. Voropaev^430000-0002-2100-0726,
K. Vos^780000-0002-4258-4062,
G. Vouters^100009-0008-3292-2209,
C. Vrahas^580000-0001-6104-1496,
J. Wagner^190000-0002-9783-5957,
J. Walsh^340000-0002-7235-6976,
E.J. Walton^1,560000-0001-6759-2504,
G. Wan^60000-0003-0133-1664,
C. Wang^210000-0002-5909-1379,
G. Wang^80000-0001-6041-115X,
J. Wang^60000-0001-7542-3073,
J. Wang^50000-0002-6391-2205,
J. Wang^4,b0000-0002-3281-8136,
J. Wang^730000-0001-6711-4465,
M. Wang^290000-0003-4062-710X,
N. W. Wang^70000-0002-6915-6607,
R. Wang^540000-0002-2629-4735,
X. Wang^8,
X. Wang^710000-0002-2399-7646,
X. W. Wang^610000-0001-9565-8312,
Y. Wang^60009-0003-2254-7162,
Z. Wang^140000-0002-5041-7651,
Z. Wang^4,b0000-0003-0597-4878,
Z. Wang^290000-0003-4410-6889,
J.A. Ward^56,10000-0003-4160-9333,
M. Waterlaat^48,
N.K. Watson^530000-0002-8142-4678,
D. Websdale^610000-0002-4113-1539,
Y. Wei^60000-0001-6116-3944,
J. Wendel^800000-0003-0652-721X,
B.D.C. Westhenry^540000-0002-4589-2626,
C. White^550009-0002-6794-9547,
M. Whitehead^590000-0002-2142-3673,
E. Whiter^530009-0003-3902-8123,
A.R. Wiederhold^620000-0002-1023-1086,
D. Wiedner^190000-0002-4149-4137,
G. Wilkinson^630000-0001-5255-0619,
M.K. Wilkinson^650000-0001-6561-2145,
M. Williams^640000-0001-8285-3346,
M.R.J. Williams^580000-0001-5448-4213,
R. Williams^550000-0002-2675-3567,
Z. Williams^540009-0009-9224-4160,
F.F. Wilson^570000-0002-5552-0842,
M. Winn^12,
W. Wislicki^410000-0001-5765-6308,
M. Witek^400000-0002-8317-385X,
L. Witola^210000-0001-9178-9921,
G. Wormser^140000-0003-4077-6295,
S.A. Wotton^550000-0003-4543-8121,
H. Wu^680000-0002-9337-3476,
J. Wu^80000-0002-4282-0977,
Y. Wu^60000-0003-3192-0486,
Z. Wu^70000-0001-6756-9021,
K. Wyllie^480000-0002-2699-2189,
S. Xian^71,
Z. Xiang^50000-0002-9700-3448,
Y. Xie^80000-0001-5012-4069,
A. Xu^340000-0002-8521-1688,
J. Xu^70000-0001-6950-5865,
L. Xu^4,b0000-0003-2800-1438,
L. Xu^4,b0000-0002-0241-5184,
M. Xu^560000-0001-8885-565X,
Z. Xu^480000-0002-7531-6873,
Z. Xu^70000-0001-9558-1079,
Z. Xu^50000-0001-9602-4901,
D. Yang^40009-0002-2675-4022,
K. Yang^610000-0001-5146-7311,
S. Yang^70000-0003-2505-0365,
X. Yang^60000-0002-7481-3149,
Y. Yang^28,l0000-0002-8917-2620,
Z. Yang^60000-0003-2937-9782,
Z. Yang^660000-0003-0572-2021,
V. Yeroshenko^140000-0002-8771-0579,
H. Yeung^620000-0001-9869-5290,
H. Yin^80000-0001-6977-8257,
C. Y. Yu^60000-0002-4393-2567,
J. Yu^700000-0003-1230-3300,
X. Yuan^50000-0003-0468-3083,
Y Yuan^5,70009-0000-6595-7266,
E. Zaffaroni^490000-0003-1714-9218,
M. Zavertyaev^200000-0002-4655-715X,
M. Zdybal^400000-0002-1701-9619,
F. Zenesini^24,i0009-0001-2039-9739,
C. Zeng^5,70009-0007-8273-2692,
M. Zeng^4,b0000-0001-9717-1751,
C. Zhang^60000-0002-9865-8964,
D. Zhang^80000-0002-8826-9113,
J. Zhang^70000-0001-6010-8556,
L. Zhang^4,b0000-0003-2279-8837,
S. Zhang^700000-0002-9794-4088,
S. Zhang^630000-0002-2385-0767,
Y. Zhang^60000-0002-0157-188X,
Y. Z. Zhang^4,b0000-0001-6346-8872,
Y. Zhao^210000-0002-8185-3771,
A. Zharkova^430000-0003-1237-4491,
A. Zhelezov^210000-0002-2344-9412,
S. Z. Zheng^60009-0001-4723-095X,
X. Z. Zheng^4,b0000-0001-7647-7110,
Y. Zheng^70000-0003-0322-9858,
T. Zhou^60000-0002-3804-9948,
X. Zhou^80009-0005-9485-9477,
Y. Zhou^70000-0003-2035-3391,
V. Zhovkovska^560000-0002-9812-4508,
L. Z. Zhu^70000-0003-0609-6456,
X. Zhu^4,b0000-0002-9573-4570,
X. Zhu^80000-0002-4485-1478,
V. Zhukov^170000-0003-0159-291X,
J. Zhuo^470000-0002-6227-3368,
Q. Zou^5,70000-0003-0038-5038,
D. Zuliani^32,o0000-0002-1478-4593,
G. Zunica^490000-0002-5972-6290.
^1School of Physics and Astronomy, Monash University, Melbourne, Australia
^2Centro Brasileiro de Pesquisas Físicas (CBPF), Rio de Janeiro, Brazil
^3Universidade Federal do Rio de Janeiro (UFRJ), Rio de Janeiro, Brazil
^4Department of Engineering Physics, Tsinghua University, Beijing, China, Beijing, China
^5Institute Of High Energy Physics (IHEP), Beijing, China
^6School of Physics State Key Laboratory of Nuclear Physics and Technology, Peking University, Beijing, China
^7University of Chinese Academy of Sciences, Beijing, China
^8Institute of Particle Physics, Central China Normal University, Wuhan, Hubei, China
^9Consejo Nacional de Rectores (CONARE), San Jose, Costa Rica
^10Université Savoie Mont Blanc, CNRS, IN2P3-LAPP, Annecy, France
^11Université Clermont Auvergne, CNRS/IN2P3, LPC, Clermont-Ferrand, France
^12Département de Physique Nucléaire (DPhN), Gif-Sur-Yvette, France
^13Aix Marseille Univ, CNRS/IN2P3, CPPM, Marseille, France
^14Université Paris-Saclay, CNRS/IN2P3, IJCLab, Orsay, France
^15Laboratoire Leprince-Ringuet, CNRS/IN2P3, Ecole Polytechnique, Institut Polytechnique de Paris, Palaiseau, France
^16LPNHE, Sorbonne Université, Paris Diderot Sorbonne Paris Cité, CNRS/IN2P3, Paris, France
^17I. Physikalisches Institut, RWTH Aachen University, Aachen, Germany
^18Universität Bonn - Helmholtz-Institut für Strahlen und Kernphysik, Bonn, Germany
^19Fakultät Physik, Technische Universität Dortmund, Dortmund, Germany
^20Max-Planck-Institut für Kernphysik (MPIK), Heidelberg, Germany
^21Physikalisches Institut, Ruprecht-Karls-Universität Heidelberg, Heidelberg, Germany
^22School of Physics, University College Dublin, Dublin, Ireland
^23INFN Sezione di Bari, Bari, Italy
^24INFN Sezione di Bologna, Bologna, Italy
^25INFN Sezione di Ferrara, Ferrara, Italy
^26INFN Sezione di Firenze, Firenze, Italy
^27INFN Laboratori Nazionali di Frascati, Frascati, Italy
^28INFN Sezione di Genova, Genova, Italy
^29INFN Sezione di Milano, Milano, Italy
^30INFN Sezione di Milano-Bicocca, Milano, Italy
^31INFN Sezione di Cagliari, Monserrato, Italy
^32INFN Sezione di Padova, Padova, Italy
^33INFN Sezione di Perugia, Perugia, Italy
^34INFN Sezione di Pisa, Pisa, Italy
^35INFN Sezione di Roma La Sapienza, Roma, Italy
^36INFN Sezione di Roma Tor Vergata, Roma, Italy
^37Nikhef National Institute for Subatomic Physics, Amsterdam, Netherlands
^38Nikhef National Institute for Subatomic Physics and VU University Amsterdam, Amsterdam, Netherlands
^39AGH - University of Krakow, Faculty of Physics and Applied Computer Science, Kraków, Poland
^40Henryk Niewodniczanski Institute of Nuclear Physics Polish Academy of Sciences, Kraków, Poland
^41National Center for Nuclear Research (NCBJ), Warsaw, Poland
^42Horia Hulubei National Institute of Physics and Nuclear Engineering, Bucharest-Magurele, Romania
^43Affiliated with an institute covered by a cooperation agreement with CERN
^44DS4DS, La Salle, Universitat Ramon Llull, Barcelona, Spain
^45ICCUB, Universitat de Barcelona, Barcelona, Spain
^46Instituto Galego de Física de Altas Enerxías (IGFAE), Universidade de Santiago de Compostela, Santiago de Compostela, Spain
^47Instituto de Fisica Corpuscular, Centro Mixto Universidad de Valencia - CSIC, Valencia, Spain
^48European Organization for Nuclear Research (CERN), Geneva, Switzerland
^49Institute of Physics, Ecole Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
^50Physik-Institut, Universität Zürich, Zürich, Switzerland
^51NSC Kharkiv Institute of Physics and Technology (NSC KIPT), Kharkiv, Ukraine
^52Institute for Nuclear Research of the National Academy of Sciences (KINR), Kyiv, Ukraine
^53School of Physics and Astronomy, University of Birmingham, Birmingham, United Kingdom
^54H.H. Wills Physics Laboratory, University of Bristol, Bristol, United Kingdom
^55Cavendish Laboratory, University of Cambridge, Cambridge, United Kingdom
^56Department of Physics, University of Warwick, Coventry, United Kingdom
^57STFC Rutherford Appleton Laboratory, Didcot, United Kingdom
^58School of Physics and Astronomy, University of Edinburgh, Edinburgh, United Kingdom
^59School of Physics and Astronomy, University of Glasgow, Glasgow, United Kingdom
^60Oliver Lodge Laboratory, University of Liverpool, Liverpool, United Kingdom
^61Imperial College London, London, United Kingdom
^62Department of Physics and Astronomy, University of Manchester, Manchester, United Kingdom
^63Department of Physics, University of Oxford, Oxford, United Kingdom
^64Massachusetts Institute of Technology, Cambridge, MA, United States
^65University of Cincinnati, Cincinnati, OH, United States
^66University of Maryland, College Park, MD, United States
^67Los Alamos National Laboratory (LANL), Los Alamos, NM, United States
^68Syracuse University, Syracuse, NY, United States
^69Pontifícia Universidade Católica do Rio de Janeiro (PUC-Rio), Rio de Janeiro, Brazil, associated to ^3
^70School of Physics and Electronics, Hunan University, Changsha City, China, associated to ^8
^71Guangdong Provincial Key Laboratory of Nuclear Science, Guangdong-Hong Kong Joint Laboratory of Quantum Matter, Institute of Quantum Matter, South China Normal University, Guangzhou, China, associated to ^4
^72Lanzhou University, Lanzhou, China, associated to ^5
^73School of Physics and Technology, Wuhan University, Wuhan, China, associated to ^4
^74Departamento de Fisica , Universidad Nacional de Colombia, Bogota, Colombia, associated to ^16
^75Ruhr Universitaet Bochum, Fakultaet f. Physik und Astronomie, Bochum, Germany, associated to ^19
^76Eotvos Lorand University, Budapest, Hungary, associated to ^48
^77Van Swinderen Institute, University of Groningen, Groningen, Netherlands, associated to ^37
^78Universiteit Maastricht, Maastricht, Netherlands, associated to ^37
^79Tadeusz Kosciuszko Cracow University of Technology, Cracow, Poland, associated to ^40
^80Universidade da Coruña, A Coruna, Spain, associated to ^44
^81Department of Physics and Astronomy, Uppsala University, Uppsala, Sweden, associated to ^59
^82University of Michigan, Ann Arbor, MI, United States, associated to ^68
^aCentro Federal de Educacão Tecnológica Celso Suckow da Fonseca, Rio De Janeiro, Brazil
^bCenter for High Energy Physics, Tsinghua University, Beijing, China
^cHangzhou Institute for Advanced Study, UCAS, Hangzhou, China
^dSchool of Physics and Electronics, Henan University , Kaifeng, China
^eLIP6, Sorbonne Université, Paris, France
^fUniversidad Nacional Autónoma de Honduras, Tegucigalpa, Honduras
^gUniversità di Bari, Bari, Italy
^hUniversità di Bergamo, Bergamo, Italy
^iUniversità di Bologna, Bologna, Italy
^jUniversità di Cagliari, Cagliari, Italy
^kUniversità di Ferrara, Ferrara, Italy
^lUniversità di Genova, Genova, Italy
^mUniversità degli Studi di Milano, Milano, Italy
^nUniversità degli Studi di Milano-Bicocca, Milano, Italy
^oUniversità di Padova, Padova, Italy
^pUniversità di Perugia, Perugia, Italy
^qScuola Normale Superiore, Pisa, Italy
^rUniversità di Pisa, Pisa, Italy
^sUniversità della Basilicata, Potenza, Italy
^tUniversità di Roma Tor Vergata, Roma, Italy
^uUniversità di Siena, Siena, Italy
^vUniversità di Urbino, Urbino, Italy
^wUniversidad de Alcalá, Alcalá de Henares , Spain
^xFacultad de Ciencias Fisicas, Madrid, Spain
^yDepartment of Physics/Division of Particle Physics, Lund, Sweden
^†Deceased
|
http://arxiv.org/abs/2409.03253v1 | 20240905051328 | SpinMultiNet: Neural Network Potential Incorporating Spin Degrees of Freedom with Multi-Task Learning | [
"Koki Ueno",
"Satoru Ohuchi",
"Kazuhide Ichikawa",
"Kei Amii",
"Kensuke Wakasugi"
] | cond-mat.mtrl-sci | [
"cond-mat.mtrl-sci",
"cs.LG"
] |
§ ABSTRACT
Neural Network Potentials (NNPs) have attracted significant attention as a method for accelerating density functional theory (DFT) calculations. However, conventional NNP models typically do not incorporate spin degrees of freedom, limiting their applicability to systems where spin states critically influence material properties, such as transition metal oxides. This study introduces SpinMultiNet, a novel NNP model that integrates spin degrees of freedom through multi-task learning. SpinMultiNet achieves accurate predictions without relying on correct spin values obtained from DFT calculations. Instead, it utilizes initial spin estimates as input and leverages multi-task learning to optimize the spin latent representation while maintaining both E(3) and time-reversal equivariance. Validation on a dataset of transition metal oxides demonstrates the high predictive accuracy of SpinMultiNet. The model successfully reproduces the energy ordering of stable spin configurations originating from superexchange interactions and accurately captures the rhombohedral distortion of the rocksalt structure. These results pave the way for new possibilities in materials simulations that consider spin degrees of freedom, promising future applications in large-scale simulations of various material systems, including magnetic materials.
§ INTRODUCTION
First-principles calculations based on Density Functional Theory (DFT) have been widely utilized as a powerful tool for understanding electronic structures and material properties <cit.>. Although DFT calculations can accurately predict energies and forces acting on atoms, they are often hindered by high computational costs. This limitation can become a significant bottleneck, particularly for large-scale systems or long-time simulations.
To address this issue, Neural Network Potentials (NNPs) have emerged as a promising alternative to accelerate DFT calculations <cit.>. NNPs learn the relationship between atomic configurations and energies from data obtained through DFT calculations, enabling significant reduction in computational cost while maintaining accuracy comparable to DFT calculations. In particular, NNPs based on the Graph Neural Network (GNN) are well-suited for constructing accurate potential models, as they can effectively capture the local atomic environments <cit.>.
However, most conventional NNPs do not account for spin degrees of freedom, limiting their application to material systems where spin states play a critical role in determining properties, such as transition metal oxides (TMOs). TMOs are known to exhibit diverse magnetic properties due to the presence of transition metal ions with partially filled d-orbitals, and incorporating spin degrees of freedom is crucial for understanding their properties <cit.>. For example, accurate prediction of the energy difference between ferromagnetic (FM) and antiferromagnetic (AFM) states requires proper representation of the potential energy surface depending on the spin configuration. Recently, SpinGNN <cit.> and xDeepH <cit.> have been proposed as NNP models incorporating spin degrees of freedom. These models take spin values as input in addition to atomic configurations and predict spin-dependent potential energies. However, these models require correct spin inputs during prediction, limiting their applicability since correct spin values are often unavailable in realistic scenarios.
To overcome this limitation, this study presents SpinMultiNet, a novel NNP model that utilizes initial spin estimates as input and accurately predicts the spin-dependent potential energy surface. Our model employs multi-task learning to predict energy, forces, and spin simultaneously. This allows the spin latent features to be optimized within the network, enabling highly accurate predictions even when the input spin is only an initial estimate provided by the user. Furthermore, our model is designed to satisfy not only E(3) equivariance but also time-reversal equivariance, which ensures consistent and physically meaningful predictions. These equivariance properties contribute to improved data efficiency and enhanced generalization capability. The main contributions of this work are as follows:
* Development of a spin-dependent NNP model using initial spin estimates as input: We designed a spin-dependent NNP model applicable even when correct spin information is not available a priori.
* Demonstration of high prediction accuracy in TMOs: We applied SpinMultiNet to a dataset of TMOs and demonstrated its ability to accurately predict energy changes due to spin configurations. Specifically, we reproduced the energy ordering of stable spin configurations originating from superexchange interactions and confirmed that the optimized lattice constants of rocksalt TMOs agree well with experimental results.
* Verification of the importance of time-reversal equivariance: Ablation studies revealed that time-reversal equivariance is essential for accurate spin prediction. Additionally, we demonstrated that higher predictive accuracy can be achieved when precise spin values are provided as input.
§ RELATED WORK
In recent years, several NNP models that take spin degrees of freedom into account have been proposed. The magnetic moment tensor potential <cit.> introduces spin degrees of freedom into the moment tensor potential <cit.>, enabling the learning of spin-dependent potentials. Similarly, mHDNNP <cit.> proposes a model that incorporates spin interactions into atom-centered symmetry functions. However, these methods are limited to collinear spins. On the other hand, SpinGNN <cit.> handles noncollinear spins by using the inner products of pairs of noncollinear spins as input to the GNN. SpinGNN leverages the high expressive power of GNNs to construct accurate potential models.
A critical aspect of spin-dependent NNP models is ensuring time-reversal equivariance. Time-reversal equivariance describes how the state of a system changes under the time-reversal operation and is essential for the physically accurate handling of latent spin features. For example, under the time-reversal operation, spin flips its sign while energy remains invariant. SpinGNN ensures the time-reversal invariance of the energy by restricting spin-derived features to scalars only. On the other hand, xDeepH <cit.> achieves physical consistency and higher representational capacity by designing an architecture that is equivariant to the time-reversal operation.
However, existing spin-dependent NNP models still face the challenge of requiring correct spin values as input during prediction. Although DFT calculations can provide correct spin values, relying on them for every NNP prediction is impractical because of the significant computational costs.
Alternatively, NNP models capable of calculating the gradient of energy with respect to spin, i.e., magnetic forces <cit.>, can be used to obtain correct spin values in the ground state by optimizing the spin configuration to minimize the energy.
However, this optimization process is computationally expensive as it requires repeated calculations using optimization algorithms, making it a significant challenge.
In contrast, CHGNet <cit.> outputs magnetic moments without requiring spin values as input. However, CHGNet is highly dependent on the spin states used in the training data and cannot predict energies or magnetic moments for spin states not included in the training data. Moreover, it outputs the same energy for structures with the same atomic configuration but different spin configurations, making it unsuitable for tasks such as predicting the energy difference between FM and AFM states.
SpinMultiNet accurately predicts energies and spin values even from initial spin estimates, without relying on correct spin values obtained from DFT or spin optimization calculations.
This is achieved by performing multi-task learning that simultaneously predicts energy and spin while optimizing spin features in a time-reversal equivariant manner.
This approach enables efficient calculation of energies for various spin configurations, addressing the computational cost challenges of conventional methods.
Table <ref> summarizes the relationship between spin input and output for existing research and SpinMultiNet.
§ METHODS
§.§ Equivariance
§.§.§ E(3) Equivariance
Equivariance refers to the property where the output changes correspondingly when a specific transformation is applied to the input data. For instance, if the input data is rotated, the output of an equivariant function will also rotate accordingly. This property plays a crucial role in processing physical systems and geometric data.
Generally, a function ℒ: 𝒳→𝒴 (𝒳, 𝒴 are vector spaces) is equivariant if the representation D of the group G satisfies the following:
ℒ∘ D^𝒳(g) = D^𝒴(g) ∘ℒ
Here, D^𝒳(g) is the representation of the vector space 𝒳 for element g of the group G.
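To make the definition concrete, the following minimal NumPy sketch (our illustration, not part of the original paper) verifies that the cross product is rotation-equivariant while the vector norm is rotation-invariant; the rotation matrix plays the role of the representation D(g):

```python
import numpy as np

def rotation_matrix(axis, angle):
    """Proper rotation about a unit axis (Rodrigues' formula)."""
    axis = axis / np.linalg.norm(axis)
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)

R = rotation_matrix(np.array([1.0, 2.0, 3.0]), 0.7)
u, v = np.random.randn(3), np.random.randn(3)

# Equivariance: rotating both inputs rotates the output of the cross product (an l=1 quantity).
assert np.allclose(np.cross(R @ u, R @ v), R @ np.cross(u, v))
# Invariance: the norm (an l=0 quantity) is unchanged by the rotation.
assert np.allclose(np.linalg.norm(R @ u), np.linalg.norm(u))
```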
SO(3) equivariance refers to the property of being equivariant to rotation operations in three-dimensional space.
The irreducible representations of SO(3) are known as Wigner D-matrices <cit.>, which are matrices of dimension 2l + 1 for rotation order l. By incorporating SO(3) equivariance into each layer of the model, an overall SO(3) equivariant model can be constructed. This is achieved by combining two steerable vector features using the Clebsch-Gordan tensor product <cit.>.
(𝐮⊗𝐯)^(l)_m = ∑_m_1=-l_1^l_1∑_m_2=-l_2^l_2 C^(l,m)_(l_1,m_1)(l_2,m_2) u^(l_1)_m_1 v^(l_2)_m_2
Here, 𝐮 and 𝐯 are steerable vector features of rotation orders l_1 and l_2, respectively, m is the representation index (m ∈ [-l, l]), and C^(l,m)_(l_1,m_1)(l_2,m_2) is the Clebsch-Gordan coefficient.
A steerable vector feature is a 2l + 1 dimensional vector that takes the form of an irreducible representation of the SO(3) group and can be rotated by applying the Wigner D-matrix <cit.>.
This tensor product has a non-zero value only when l satisfies |l_1 - l_2| ≤ l ≤ |l_1 + l_2|, and the output is also an irreducible representation.
Furthermore, by calculating feature vectors using the interatomic vector r⃗_ij and restricting the tensor product calculation to cases where the parity p (-1 for odd and 1 for even) satisfies the condition p_l = p_l_1p_l_2, it is possible to construct an E(3) equivariant model that incorporates translation and inversion operations <cit.>.
E(3) equivariant models can flexibly represent interactions between scalars, vectors, and higher-order tensors, leading to high expressive power in processing data in three-dimensional space and improved data efficiency.
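As a concrete illustration of the Clebsch-Gordan decomposition used above, the product of two l = 1 vectors splits into l = 0, 1, 2 parts, which are exactly the dot product, the cross product, and the symmetric traceless matrix. The following NumPy sketch (ours, purely illustrative) makes this explicit:

```python
import numpy as np

def cg_decompose_vectors(u, v):
    """Decompose u ⊗ v (two l=1 inputs) into its irreducible parts,
    1 ⊗ 1 = 0 ⊕ 1 ⊕ 2: a scalar, a vector, and a symmetric traceless matrix."""
    scalar = np.dot(u, v)                                   # l = 0 (1 component)
    vector = np.cross(u, v)                                 # l = 1 (3 components)
    sym = 0.5 * (np.outer(u, v) + np.outer(v, u))
    traceless = sym - (np.trace(sym) / 3.0) * np.eye(3)     # l = 2 (5 independent components)
    return scalar, vector, traceless

u, v = np.random.randn(3), np.random.randn(3)
s, w, T = cg_decompose_vectors(u, v)

# Sanity check: trace part + antisymmetric part (the same 3 numbers as the cross product)
# + traceless symmetric part reassemble the full outer product.
antisym = 0.5 * (np.outer(u, v) - np.outer(v, u))
assert np.allclose((s / 3.0) * np.eye(3) + antisym + T, np.outer(u, v))
assert np.isclose(np.trace(T), 0.0)
```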
§.§.§ Time-Reversal Equivariance
Since spin changes its sign under the time-reversal operations, it is important to incorporate this equivariance into the NNP model.
According to xDeepH <cit.>, introducing time-reversal equivariance {I, 𝒯} into the E(3) equivariant model can be achieved by decomposing the tensor product of two spin vectors as 1/2⊗1/2 = 0 ⊕ 1.
Time-reversal equivariance is then achieved by ensuring that the imaginary part of l=0 and the real part of l=1 change their sign under the time-reversal operation, while other components remain unchanged.
In practice, this can be incorporated into Equation (<ref>) by introducing a time-reversal parity t. Each vector feature is labeled with four parameters: l, m, p, and t. For example, r⃗_ij is labeled as (l=1, p=-1, t=1), and m⃗_i as (l=1, p=1, t=-1).
t is treated similarly to p, and the tensor product is calculated only when the condition t_l = t_l_1t_l_2 is satisfied.
Additionally, scalar spin features with (l=0, p=1, t=1), i.e., E(3) ×{I, 𝒯} invariant features, can also be incorporated.
In this study, we added the magnitude of the magnetic moment to the initial node features and the inner product of magnetic moments between neighboring atoms to the edge features.
By incorporating time-reversal equivariance in this manner, we expect the model to represent physically correct spin behavior, leading to improved prediction accuracy.
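The parity bookkeeping described here can be summarized in a few lines of code. The sketch below (our illustration; the actual implementation is not given in the paper) labels each feature block with (l, p, t), applies the time-reversal operation by flipping the sign of time-odd blocks, and checks which tensor-product paths are allowed:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Irrep:
    l: int   # rotation order
    p: int   # spatial parity: +1 (even) or -1 (odd)
    t: int   # time-reversal parity: +1 (even) or -1 (odd)

def time_reverse(features):
    """Apply the time-reversal operation: blocks with t = -1 flip sign."""
    return [(irrep, -vec if irrep.t == -1 else vec) for irrep, vec in features]

def path_allowed(a: Irrep, b: Irrep, out: Irrep) -> bool:
    """A tensor-product path a ⊗ b -> out is allowed only if the triangle
    inequality holds and both parities multiply correctly."""
    return (abs(a.l - b.l) <= out.l <= a.l + b.l
            and out.p == a.p * b.p
            and out.t == a.t * b.t)

# Example: interatomic vector (l=1, p=-1, t=+1) and magnetic moment (l=1, p=+1, t=-1).
r_ij = (Irrep(1, -1, +1), np.random.randn(3))
m_i  = (Irrep(1, +1, -1), np.random.randn(3))
print(path_allowed(r_ij[0], m_i[0], Irrep(0, -1, -1)))   # True: a time-odd pseudoscalar
print(path_allowed(r_ij[0], m_i[0], Irrep(0, +1, +1)))   # False: would break equivariance
```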
§.§ Model Architecture
SpinMultiNet is built upon a GNN. Figure <ref> illustrates the overall architecture of the model. Note that in this paper, any process labeled with E3 represents an E(3) ×{I, 𝒯} equivariant process. These processes were implemented using the <cit.> and <cit.> packages.
For each atom, steerable features are generated using the atomic number Z_i, interatomic vectors r⃗_ij, and initial magnetic moment estimate m⃗_i as input. These features are then fed into an E(3) ×{I, 𝒯} equivariant GNN. The steerable features are iteratively updated by the Interaction Layers, after which specific irreducible representations are extracted and used for predicting the energy and magnetic moments.
While forces acting on atoms can be directly predicted from the l=1 features, in this study, they were calculated from the gradient of energy with respect to atomic positions.
The Interaction Layer is designed to be E(3) ×{I, 𝒯} equivariant, ensuring that the node features of each atom are updated while maintaining equivariance. By stacking multiple Interaction Layers, SpinMultiNet can capture longer-range atomic and spin interactions.
§.§.§ Embedding Layer
Unlike conventional models that do not consider spin degrees of freedom, our model incorporates spin information into the atom embedding. The initial node features 𝐡_i^0 and edge features 𝐞_ij are defined as follows:
𝐡_i^0 = MLP(OneHot(Z_i) || GaussianBasis(|m⃗_i|))
𝐞_ij = MLP(BesselBasis(|r⃗_ij|) || GaussianBasis(m̂_i ·m̂_j))
Here, m̂_i represents the unit vector of the initial magnetic moment estimate. A cutoff function is applied to the Bessel functions to ensure smoothness before and after the cutoff <cit.>.
Spin information is incorporated into the initial node and edge features by concatenating scalar features that are invariant under the time-reversal operation (the magnitude of the magnetic moment and the inner product of two magnetic moments, respectively).
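A minimal PyTorch sketch of such an embedding is shown below. The layer sizes, basis ranges, and cutoff function are illustrative assumptions rather than the values used by the authors:

```python
import torch
import torch.nn as nn

class GaussianBasis(nn.Module):
    """Expand a scalar (e.g. |m_i| or the cosine between two moments) in Gaussian basis functions."""
    def __init__(self, start, stop, num):
        super().__init__()
        self.register_buffer("centers", torch.linspace(start, stop, num))
        self.width = (stop - start) / num

    def forward(self, x):                                   # x: (N,)
        return torch.exp(-((x[:, None] - self.centers) / self.width) ** 2)

class BesselBasis(nn.Module):
    """sin(n*pi*r/r_c)/r radial functions multiplied by a smooth cosine cutoff."""
    def __init__(self, r_cut=5.0, num=8):
        super().__init__()
        self.r_cut = r_cut
        self.register_buffer("n", torch.arange(1, num + 1, dtype=torch.float32))

    def forward(self, r):                                   # r: (E,) interatomic distances
        cutoff = 0.5 * (torch.cos(torch.pi * r / self.r_cut) + 1.0) * (r < self.r_cut)
        rb = torch.sin(self.n * torch.pi * r[:, None] / self.r_cut) / r[:, None]
        return rb * cutoff[:, None]

class SpinAwareEmbedding(nn.Module):
    """h_i^0 = MLP(OneHot(Z_i) || Gaussian(|m_i|)); e_ij = MLP(Bessel(|r_ij|) || Gaussian(cos(m_i, m_j)))."""
    def __init__(self, num_elements=100, hidden=64):
        super().__init__()
        self.num_elements = num_elements
        self.mag_basis = GaussianBasis(0.0, 5.0, 32)        # |m_i| in Bohr magnetons, non-negative
        self.dot_basis = GaussianBasis(-1.0, 1.0, 32)       # cosine of the angle between moments
        self.radial = BesselBasis()
        self.node_mlp = nn.Sequential(nn.Linear(num_elements + 32, hidden), nn.SiLU(),
                                      nn.Linear(hidden, hidden))
        self.edge_mlp = nn.Sequential(nn.Linear(8 + 32, hidden), nn.SiLU(),
                                      nn.Linear(hidden, hidden))

    def forward(self, z, m_vec, edge_vec, edge_index):
        # z: (N,) atomic numbers, m_vec: (N, 3) initial moment estimates, edge_vec: (E, 3) r_ij
        one_hot = nn.functional.one_hot(z, self.num_elements).float()
        h0 = self.node_mlp(torch.cat([one_hot, self.mag_basis(m_vec.norm(dim=-1))], dim=-1))
        m_hat = nn.functional.normalize(m_vec, dim=-1)      # zero moments stay (numerically) zero
        cos_ij = (m_hat[edge_index[0]] * m_hat[edge_index[1]]).sum(dim=-1)
        e_ij = self.edge_mlp(torch.cat([self.radial(edge_vec.norm(dim=-1)),
                                        self.dot_basis(cos_ij)], dim=-1))
        return h0, e_ij
```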
§.§.§ Interaction Layer
The E(3) ×{I, 𝒯} equivariant operation used as the convolution layer in SpinMultiNet is defined as follows:
(M_ij, c)^(l)_m
= ( 𝐡^(l_1)_j,c⊗^w 𝐄^(l_2)(r⃗_ij, m⃗_i, m⃗_j) )^(l)_m
= w ∑_m_1=-l_1^l_1∑_m_2=-l_2^l_2 C^(l,m)_(l_1,m_1)(l_2,m_2) h^(l_1)_j,c, m_1 E^(l_2)_m_2(r⃗_ij, m⃗_i, m⃗_j)
𝐄(r̂_ij, m̂_i, m̂_j) = Y(r̂_ij) ⊕ Y(m̂_i) ⊕ Y(m̂_j)
𝐰 = E3MLP(𝐞_ij)
Here, (M_ij, c)^(l)_m is the l, m element of channel c in the message function, 𝐡 is the latent feature of the node, and Y is the spherical harmonics function. For brevity, parities p and t are omitted.
𝐰 is a weight vector that has a value for each path of the tensor product and is calculated from the edge features 𝐞_ij. To satisfy equivariance, the same weight must be applied for the same l regardless of the value of m.
Each feature has a parity t with respect to the time-reversal operation, and directional information of the magnetic moment is incorporated through 𝐄. An E(3) ×{I, 𝒯} equivariant tensor product is calculated between the node features and 𝐄, and weighted by the value calculated from the edge features. Through this process, the model can capture spatial and spin interactions between atoms. When considering only collinear spin, there is no need to expand the magnetic moment in spherical harmonics; it can simply be input as a time-reversal odd scalar.
The calculated message is aggregated to the central node through the message function, and a non-linear activation function is applied. In this study, a gate-type activation function with time-reversal equivariance <cit.> was used. Figure <ref> shows the visualization of the changes in latent features with respect to input structure rotation and spin inversion.
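Although the paper does not provide code, the weighted tensor-product message of the equations above can be expressed compactly with the open-source e3nn library (whether this is one of the packages cited by the authors is not stated in the extracted text). The sketch below covers only the spatial E(3) part; e3nn has no built-in time-reversal label, so the t bookkeeping from the previous subsection would have to be layered on top. All names are ours, and the moment directions are assumed non-zero:

```python
import torch
import torch.nn as nn
from e3nn import o3

class SpinInteraction(nn.Module):
    """One message-passing step: node features combined with spherical harmonics of
    r_ij, m_i, and m_j through a tensor product, with one learned weight per path
    computed from the invariant edge embedding."""

    def __init__(self, irreps_node="16x0e + 8x1o + 4x2e", sh_lmax=2, edge_dim=64):
        super().__init__()
        self.irreps_node = o3.Irreps(irreps_node)
        self.irreps_sh = o3.Irreps.spherical_harmonics(sh_lmax)
        self.irreps_E = self.irreps_sh + self.irreps_sh + self.irreps_sh   # Y(r_ij) (+) Y(m_i) (+) Y(m_j)
        self.tp = o3.FullyConnectedTensorProduct(
            self.irreps_node, self.irreps_E, self.irreps_node,
            shared_weights=False, internal_weights=False)
        self.weight_mlp = nn.Sequential(
            nn.Linear(edge_dim, 64), nn.SiLU(),
            nn.Linear(64, self.tp.weight_numel))

    def forward(self, h, r_ij, m_hat, edge_attr, edge_index):
        src, dst = edge_index
        def Y(v):
            return o3.spherical_harmonics(self.irreps_sh, v, normalize=True,
                                          normalization="component")
        E = torch.cat([Y(r_ij), Y(m_hat[src]), Y(m_hat[dst])], dim=-1)
        w = self.weight_mlp(edge_attr)                       # one weight per tensor-product path
        messages = self.tp(h[src], E, w)
        aggregated = torch.zeros_like(h).index_add_(0, dst, messages)   # sum over neighbours
        return h + aggregated                                # residual node update
```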
§.§.§ Output Layer
After passing through multiple Interaction Layers, the node features retain sufficient information regarding the structure and spin configurations. From these node features, the energy and magnetic moments are calculated.
The energy is calculated by a linear combination of the components of each node feature that satisfy the following conditions: rotation order l=0, parity p=1, and time-reversal parity t=1. These components are essentially the E(3) ×{I, 𝒯} invariant scalar components.
U_pred = ∑_i ∑_c w_c h_i, c^(l=0, p=1, t=1)
Here, w_c represents the learnable weight parameters, and h_i, c^(l=0, p=1, t=1) represents the E(3) ×{I, 𝒯} invariant scalar component of the node features of atom i.
On the other hand, the magnetic moment is calculated by a linear combination of the components of each node feature that satisfy l=1, p=1, and t=-1. These components represent the latent spin representation reflecting the structural information.
m⃗_i, pred = ∑_c w_c 𝐡_i, c^(l=1, p=1, t=-1)
Here, m⃗_i, pred represents the predicted magnetic moment of atom i, and 𝐡_i, c^(l=1, p=1, t=-1) represents the latent spin representation of the node features of atom i.
For collinear spins, the components with l=0, p=1, and t=-1 are used.
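In code, both readouts are linear maps over the channel dimension of the corresponding irrep blocks, followed by a sum over atoms for the energy. A schematic PyTorch version (ours; the channel counts and the slicing of the irrep blocks are assumptions) is:

```python
import torch
import torch.nn as nn

class Readout(nn.Module):
    """Energy from the E(3)x{I,T}-invariant scalar channels, magnetic moments
    from the (l=1, p=+1, t=-1) vector channels of the final node features."""
    def __init__(self, n_scalar=16, n_vector=8):
        super().__init__()
        self.energy_head = nn.Linear(n_scalar, 1, bias=False)   # the weights w_c of the energy equation
        self.moment_head = nn.Linear(n_vector, 1, bias=False)   # the weights w_c of the moment equation

    def forward(self, scalars, vectors, batch):
        # scalars: (N, n_scalar) invariant channels
        # vectors: (N, n_vector, 3) time-odd l=1 channels
        # batch:   (N,) graph index of every atom
        e_atom = self.energy_head(scalars).squeeze(-1)                    # (N,)
        n_graphs = int(batch.max()) + 1
        energy = torch.zeros(n_graphs, device=e_atom.device).index_add_(0, batch, e_atom)
        moments = self.moment_head(vectors.transpose(1, 2)).squeeze(-1)   # (N, 3)
        return energy, moments
```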
We performed multi-task learning by introducing an auxiliary task of predicting the magnetic moments obtained from DFT calculations, in addition to predicting the energy. This allows the model to learn correct spin information internally and improve the energy prediction accuracy, even if the input magnetic moments are initial estimates.
The loss function for multi-task learning is defined as a weighted sum of the losses for energy, force, and magnetic moment predictions.
ℒ = ℒ_energy + λ_f ℒ_forces + λ_m ℒ_mag
Here, ℒ_energy, ℒ_forces, and ℒ_mag represent the loss functions for energy, force, and magnetic moment predictions, respectively, and λ_f and λ_m represent the weight coefficients for each loss. In this study, ℒ_mag was applied only to magnetic elements.
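A sketch of how this objective could be evaluated in PyTorch is given below, with forces obtained as the negative gradient of the predicted total energy with respect to atomic positions and the spin term masked to magnetic atoms. The batch keys and the model signature are placeholders of ours; the weights λ_f = 1.0 and λ_m = 0.1 are the values reported in the appendix:

```python
import torch

def multitask_loss(model, batch, lambda_f=1.0, lambda_m=0.1):
    """L = L_energy + lambda_f * L_forces + lambda_m * L_mag (MAE losses)."""
    pos = batch["positions"].requires_grad_(True)
    energy, moments = model(batch["atomic_numbers"], pos,
                            batch["initial_moments"], batch["edge_index"])
    # Forces are the negative gradient of the total energy w.r.t. atomic positions.
    forces = -torch.autograd.grad(energy.sum(), pos, create_graph=True)[0]

    # Energy loss is computed per atom, as described in the training details.
    loss_e = ((energy - batch["energy_dft"]) / batch["n_atoms"]).abs().mean()
    loss_f = (forces - batch["forces_dft"]).abs().mean()
    mag = batch["is_magnetic"]                    # spin loss applied to magnetic atoms only
    loss_m = (moments[mag] - batch["moments_dft"][mag]).abs().mean()
    return loss_e + lambda_f * loss_f + lambda_m * loss_m
```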
Through this multi-task learning, the model can predict the energy and magnetic moments corresponding to various spin states, such as ferromagnetic and antiferromagnetic states, and high-spin and low-spin states, by modifying the direction and magnitude of the initial estimates of the input magnetic moments. This mimics the calculation process of first-principles calculation software such as Vienna Ab-initio Simulation Package (VASP) <cit.>, indicating that the input and output results of VASP can be directly used as training data.
§.§ Dataset
The datasets were created using DFT calculations performed with VASP. The detailed settings for the DFT calculations are provided in Appendix <ref>.
First, we created a dataset (Mn-Co-Ni dataset) focusing on rocksalt-type TMOs with the space group Fm3m.
Specifically, using Mn, Co, Ni, or their combinations as transition metal atoms, we performed structural optimizations starting from FM and various AFM configurations to obtain stable crystal structures. Following this, we applied four types of deformation operations to build a dataset that includes a diverse range of atomic configurations:
* Random displacement: Each atomic coordinate was randomly displaced by a small amount.
* Shear strain: Shear strain was applied while maintaining the fractional coordinates of the atoms.
* Tensile strain: Tensile strain was applied while maintaining the fractional coordinates of the atoms.
* Cell volume change: The volume of the crystal lattice was changed while maintaining the fractional coordinates of the atoms.
For each structure with these deformation operations applied, we performed single-point calculations using VASP to calculate the energy, forces, and magnetic moments. The magnetic moments were directly used from the VASP output. Each data point also includes the initial magnetic moments (MAGMOM) obtained from the VASP input file, which are used as inputs to the NNP model.
Finally, we constructed the Mn-Co-Ni dataset, consisting of a total of 29,989 data points.
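The four deformation operations listed above amount to perturbing either the atomic coordinates (random displacement) or the 3×3 lattice matrix with fractional coordinates held fixed (shear, tensile strain, volume change). A NumPy sketch of how such perturbations could be generated (the magnitudes are illustrative, not the values used to build the dataset):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_displacement(cart_coords, sigma=0.05):
    """Randomly displace every Cartesian coordinate by a small amount (in Angstrom)."""
    return cart_coords + rng.normal(0.0, sigma, size=cart_coords.shape)

def shear_strain(cell, gamma=0.02):
    """Apply a shear strain to the lattice; fractional coordinates are kept fixed."""
    strain = np.eye(3)
    strain[0, 1] = gamma
    return cell @ strain

def tensile_strain(cell, epsilon=0.02, axis=0):
    """Stretch the lattice along one axis while keeping fractional coordinates."""
    strain = np.eye(3)
    strain[axis, axis] += epsilon
    return cell @ strain

def volume_change(cell, scale=1.03):
    """Isotropically rescale the cell volume by `scale` (lengths by scale**(1/3))."""
    return cell * scale ** (1.0 / 3.0)
```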
Additionally, we created a dataset focused on the CoO crystal structure (space group Fm3m), referred to as the Co-pair dataset. For 1,000 structures generated by applying random displacements, both FM (ferromagnetic) and AFM (antiferromagnetic) configurations were generated, resulting in 1,000 pairs of structures. In each structure pair, only the spin configurations differ.
Similarly, for these structure pairs, single-point calculations were performed to obtain the training data.
The Co-pair dataset is used to learn the energy difference between different spin configurations for the same atomic configuration.
The Mn-Co-Ni dataset was randomly split into training, validation, and test sets with a ratio of 80%, 10%, and 10%, respectively. The Co-pair dataset was similarly split, ensuring that each structure pair belongs to only one of the splits.
Using these datasets, we trained SpinMultiNet to minimize the loss function defined in Equation (<ref>). The detailed settings for the training are provided in Appendix <ref>.
§ RESULTS
§.§ Model Performance
First, we present the mean absolute errors (MAEs) for the Mn-Co-Ni dataset in Table <ref>.
For comparison, we also show the results of an NNP model without spin input (NequIP <cit.>), with a comparable number of training parameters.
SpinMultiNet showed an improvement in prediction accuracy of 73.2% for energy and 17.9% for forces compared to the model without spin input.
Furthermore, when performing multi-task learning that includes spin output, the prediction accuracy for both energy and forces improved even further compared to single-task learning, despite the optimization cost being allocated to the magnetic moment as well.
This finding suggests that predicting magnetic moments refines the latent representation of input spins, aligning it more closely with the correct spin information obtained from DFT calculations, thereby improving energy prediction accuracy.
Moreover, multi-task learning enables the prediction of magnetic moments during inference. As shown in Figure <ref>(c), the model provides accurate predictions of magnetic moments. However, large prediction errors were observed for some atoms. These errors stem from the misprediction of low-spin Co^2+ species as high-spin states, likely due to the limited number of low-spin Co^2+ species in the dataset. Nevertheless, these instances account for only 0.3% of the total magnetic atoms, indicating that the model accurately predicts magnetic moments for the majority of atoms.
In the Mn-Co-Ni dataset, all structures exhibit slight variations in atomic configurations. Therefore, even a model lacking spin input may be capable of inferring the spin state to some degree based on these structural differences, which could, in turn, reduce the energy prediction error.
To more clearly verify the effect of spin input, we conducted additional experiments using the Co-pair dataset.
The Co-pair dataset contains energy data for both FM and AFM configurations of the same atomic configurations, enabling a clearer demonstration of the importance of the spin input.
Table <ref> shows the MAEs for the Co-pair dataset. The model without spin input exhibits a significant energy prediction error of 26.6 meV/atom and predicts an intermediate energy between the FM and AFM states for all data points. This is a reasonable result, as the model without spin input cannot distinguish between different spin configurations. Conversely, SpinMultiNet demonstrates a very small energy prediction error of 0.403 meV/atom, confirming its ability to clearly distinguish between FM and AFM states.
These results demonstrate that by appropriately considering spin degrees of freedom, SpinMultiNet can predict energy, forces, and magnetic moments with higher accuracy compared to conventional NNP models without spin input.
§.§ Identification of Stable Spin Configurations
SpinMultiNet can predict the energy for any given spin configuration, enabling the identification of the most stable spin configuration in a magnetic structure. In this section, we performed structural optimizations for NiO and MnO with Fm3m rocksalt structures, using FM and two types of AFM configurations (AFM type-I and AFM type-II shown in Figure <ref>(a), visualized using VESTA <cit.>) as initial structures to predict the most stable spin configuration. The structural optimizations were performed without symmetry constraints using <cit.>.
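The extracted text does not name the optimization package. Assuming an ASE-style workflow and a hypothetical SpinMultiNetCalculator wrapper around the trained model, the search over initial spin configurations could look roughly as follows (the lattice constant, moment magnitudes, and which Ni sites are flipped are illustrative; the flip pattern is what distinguishes AFM type-I from type-II ordering):

```python
from ase.build import bulk
from ase.constraints import UnitCellFilter
from ase.optimize import BFGS

# Hypothetical ASE calculator wrapping the trained SpinMultiNet model (illustrative import path).
from spinmultinet.ase import SpinMultiNetCalculator

def relax(atoms, initial_moments):
    atoms = atoms.copy()
    atoms.set_initial_magnetic_moments(initial_moments)
    atoms.calc = SpinMultiNetCalculator()
    BFGS(UnitCellFilter(atoms)).run(fmax=0.01)   # relax positions and cell, no symmetry constraints
    return atoms.get_potential_energy(), atoms.cell.cellpar()

nio = bulk("NiO", "rocksalt", a=4.17, cubic=True)   # conventional cell, 4 Ni + 4 O
ni_sites = [a.index for a in nio if a.symbol == "Ni"]
fm = [2.0 if a.symbol == "Ni" else 0.0 for a in nio]
# An antiferromagnetic guess: flip the moment on half of the Ni sites
# (which Ni sites are flipped selects type-I versus type-II ordering).
afm = list(fm)
for idx in ni_sites[: len(ni_sites) // 2]:
    afm[idx] *= -1.0

for name, moments in {"FM": fm, "AFM": afm}.items():
    energy, cellpar = relax(nio, moments)
    print(name, energy, cellpar)   # compare energies and optimized lattice parameters
```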
Figure <ref>(b) shows the energy after structural optimization for each spin configuration. For both NiO and MnO, the AFM type-II configuration was identified as the most stable spin configuration. This agrees with the experimentally observed antiferromagnetic ground state <cit.>.
In these TMOs, it is known that due to superexchange interactions, the AFM type-II configuration, where spins align parallel within the (111) plane, is more stable than the AFM type-I configuration <cit.>. SpinMultiNet accurately reproduces this energy ordering originating from superexchange interactions.
Table <ref> shows the lattice constants after structural optimization for each spin configuration. In the FM configuration, symmetry was preserved after structural optimization, whereas in the AFM type-I configuration, a distortion along the c-axis was observed. Notably, a rhombohedral distortion was induced in the AFM type-II configuration, altering the space group to R3m. The optimized rhombohedral angle α (= β = γ) was 90.10^∘ and 90.64^∘ for NiO and MnO, respectively. These values are in excellent agreement with the experimentally reported rhombohedral angles (90.08^∘ for NiO and 90.60^∘ for MnO) <cit.>.
These results demonstrate that SpinMultiNet effectively learns the complex, spin-dependent energy landscape and can accurately predict both the stable spin configuration and the associated structural parameters.
This suggests that SpinMultiNet can be a powerful tool for exploring stable spin configurations in complex systems where DFT calculations are computationally expensive and challenging.
§.§ Ablation Study
To further understand the behavior of SpinMultiNet, we performed an ablation study on its architecture and input features. Table <ref> shows the results of the ablation study using the Mn-Co-Ni dataset.
First, to examine the effect of time-reversal equivariance in SpinMultiNet, we trained a version with the spin-related components removed from 𝐄(r̂_ij, m̂_i, m̂_j), making it time-reversal invariant.
The time-reversal invariant model showed a slight increase in MAE for energy and forces by 4.13% and 3.0%, respectively, compared to SpinMultiNet. However, the MAE for the magnetic moment increased significantly, from 0.0076 μ_B to 0.5840 μ_B.
This is because the time-reversal invariant model cannot recognize the inversion of the input spin and incorrectly predicts the sign of the output spin. In contrast, SpinMultiNet (time-reversal equivariant) can correctly change the sign of the output spin and internal features in response to the inversion of the input spin.
Next, to investigate the performance when using correct input spins, we trained a model using the magnetic moments obtained from DFT calculations as input spins. In this case, since the input and output spins are identical, only the energy and forces were used as training targets.
This model achieved a reduction in MAE of 43.6% for energy and 7.32% for forces compared to the model using initial magnetic moment estimates as inputs.
This suggests that using more precise values for the input spins can further enhance the model performance.
Remarkably, even when using a single initial estimate for the magnetic moment of each element (in this study, 3.0 for Mn and 2.5 for Ni), SpinMultiNet demonstrates high performance. The results are comparable to those obtained using the correct magnetic moments, with the difference in the MAE of energy prediction being within 1 meV/atom.
This suggests that SpinMultiNet can accurately predict energies as long as the spin direction is correctly specified, indicating that determining the initial estimates is relatively straightforward.
The results of this ablation study highlight the importance of time-reversal equivariance and spin input values, supporting the validity of SpinMultiNet design.
§ CONCLUSION
In this study, we developed SpinMultiNet, a novel multitasking NNP model which explicitly incorporates spin degrees of freedom. This model can simultaneously predict accurate energies and spin values using initial spin estimates as input, without relying on correct spin values obtained from DFT calculations.
This was achieved by employing multi-task learning to simultaneously predict energy and spin, optimizing the latent representation of spin in the process.
SpinMultiNet accurately captures the spin-dependent energy landscape and can reproduce important physical phenomena such as superexchange interactions. This paves the way for large-scale simulations of various material systems, including magnetic materials, which were challenging for conventional NNP models.
Future challenges include the following two points:
* Validation using larger datasets: While we validated the effectiveness of the model using a relatively small dataset in this study, evaluation using a large and diverse dataset is necessary to verify its applicability to a wider range of material systems. Considering spin degrees of freedom increases the complexity of the energy landscape, requiring more training data than conventional NNP models.
* Improvement of the mapping between initial estimates and converged values: For magnetic moments, the initial estimates and the converged values obtained from DFT calculations have a many-to-one relationship. This means that slightly different (or sometimes significantly different) initial magnetic moment estimates can correspond to the same converged value, complicating the model training. To address this issue, appropriate constraints need to be introduced into the model to learn a proper mapping between initial estimates and converged values.
By addressing these challenges, we expect to develop even more accurate and versatile spin-dependent NNP models.
§ APPENDIX
§ VISUALIZATION OF LATENT FEATURES
Since SpinMultiNet consists of E(3) ×{I, 𝒯} equivariant interaction layers, its latent features also possess this equivariance.
To illustrate this behavior, Figure <ref> shows the changes in the latent features of a Ni atom within a Ni-O two-atom system when the structure is rotated and the spin is flipped.
Here, the latent features have 16×0eE + 8×1oE + 4×2eE + 8×1eO representations. For example, 8×1oE represents 8 channels of vector features with rotation order l=1, parity p=-1 (odd), and time-reversal parity t=1 (even).
The upper part of Figure <ref> shows that while the features in the l>0 region rotate with the input structure, those in the l=0 region remain unchanged. This demonstrates that SpinMultiNet satisfies E(3) equivariance.
Furthermore, the lower part of Figure <ref> shows that when the input spin is flipped, the features in the t=-1 region (time-reversal odd features) are inverted, while those in the t=1 region (time-reversal even features) remain unchanged. This demonstrates that SpinMultiNet satisfies time-reversal equivariance.
Thus, the internal features of SpinMultiNet appropriately transform in response to changes in the input, enabling data-efficient learning.
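For readers who wish to verify this behavior numerically, the following minimal sketch outlines such a check; the `latent_features` accessor, the spin entry of the input batch, and the boolean mask marking time-reversal-odd channels are hypothetical names used for illustration only, not part of the actual SpinMultiNet code.

```python
import torch

def check_time_reversal_equivariance(model, batch, t_odd_mask, atol=1e-5):
    """Flipping the input spins should negate time-reversal-odd latent channels
    and leave time-reversal-even channels unchanged."""
    feats = model.latent_features(batch)        # hypothetical accessor to internal features
    flipped = dict(batch)                       # batch assumed to be a dict of tensors
    flipped["spins"] = -batch["spins"]          # time reversal for collinear spins
    feats_flipped = model.latent_features(flipped)

    expected = feats.clone()
    expected[..., t_odd_mask] = -expected[..., t_odd_mask]
    return torch.allclose(feats_flipped, expected, atol=atol)
```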
§ DFT CALCULATIONS
Spin-polarized DFT calculations were performed using VASP. The Perdew-Burke-Ernzerhof (PBE) functional <cit.> was used as the exchange-correlation functional, and the calculations were based on the GGA+U method with Hubbard U correction. The U-J parameters for Co, Ni, and Mn were set to 3.32 eV, 6.2 eV, and 3.9 eV, respectively.
The plane-wave cutoff energy was set to 520 eV. The k-point mesh was automatically generated using the method implemented in the <cit.> package under the condition of kppvol = 100.
Single-point calculations were performed for each structure to calculate the energy, forces, and magnetic moments. The input magnetic moments were set to 1.0, 2.5, 3.0, and 0.0 μ_B for Co, Ni, Mn, and O, respectively, assuming collinear spins. These values were also used as initial estimates of magnetic moments for input into the SpinMultiNet.
It is important to note that these initial estimates are different from the final magnetic moments obtained through DFT calculations.
This accounts for the difficulty in obtaining correct magnetic moments in advance under realistic simulation scenarios.
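As a rough illustration of how such calculation inputs could be generated, the sketch below assumes the k-point generation package referred to above is pymatgen; the structure file name and the INCAR tags shown (the GGA+U tags are omitted) are illustrative assumptions rather than the exact production settings.

```python
from pymatgen.core import Structure
from pymatgen.io.vasp.inputs import Incar, Kpoints

# Hypothetical input structure; initial collinear moments per element as in the text.
structure = Structure.from_file("POSCAR")  # assumed file name
initial_magmom = {"Co": 1.0, "Ni": 2.5, "Mn": 3.0, "O": 0.0}

# k-point mesh generated from a target density of kppvol = 100.
kpoints = Kpoints.automatic_density_by_vol(structure, kppvol=100)

# Spin-polarized single-point settings; MAGMOM built from per-element initial estimates.
incar = Incar({
    "ENCUT": 520,
    "ISPIN": 2,
    "MAGMOM": [initial_magmom[site.specie.symbol] for site in structure],
})
```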
§ TRAINING DETAILS
SpinMultiNet, with four Interaction Layers, was trained to minimize the loss function defined in Equation (<ref>). The MAE was used as the loss function, and the loss weights for forces and magnetic moments were set to λ_f = 1.0 and λ_m = 0.1, respectively. The energy loss was calculated after converting to per-atom energy.
The Adam optimizer was used with an initial learning rate of 0.01, a batch size of 32, and 400 epochs. The learning rate was decayed to 1 × 10^-4 using a cosine annealing scheduler.
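A minimal sketch of the multi-task objective described above is given below, combining MAE terms for per-atom energy, forces, and magnetic moments with the stated weights; the dictionary keys and tensor layout are assumptions, not the actual SpinMultiNet implementation.

```python
import torch
import torch.nn.functional as F

def multitask_loss(pred, target, n_atoms, lambda_f=1.0, lambda_m=0.1):
    """MAE-based multi-task loss over energy, forces, and magnetic moments.

    `pred` and `target` are assumed to be dicts with keys 'energy' (per structure),
    'forces' (per atom), and 'magmom' (per atom); `n_atoms` holds the number of
    atoms in each structure so the energy term is computed per atom.
    """
    loss_e = F.l1_loss(pred["energy"] / n_atoms, target["energy"] / n_atoms)
    loss_f = F.l1_loss(pred["forces"], target["forces"])
    loss_m = F.l1_loss(pred["magmom"], target["magmom"])
    return loss_e + lambda_f * loss_f + lambda_m * loss_m
```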
For comparison, we trained the NequIP model <cit.>, which does not account for spin degrees of freedom, using the same settings. The number of model parameters was adjusted to be approximately the same as SpinMultiNet (about 4M), and the maximum rotation order was limited to l=2.
The training of these models was performed using NVIDIA V100 GPUs.
|
http://arxiv.org/abs/2409.02849v1 | 20240904161955 | Anomaly Detection in Offshore Open Radio Access Network Using Long Short-Term Memory Models on a Novel Artificial Intelligence-Driven Cloud-Native Data Platform | [
"Abdelrahim Ahmad",
"Peizheng Li",
"Robert Piechocki",
"Rui Inacio"
] | cs.NI | [
"cs.NI"
] |
Anomaly Detection in Offshore Open Radio Access Network Using Long Short-Term Memory Models on a Novel Artificial Intelligence-Driven Cloud-Native Data Platform

Abdelrahim Ahmad (Boldyn Networks), Peizheng Li (Department of Electrical and Electronic Engineering, University of Bristol, United Kingdom), Robert Piechocki (University of Bristol, United Kingdom), Rui Inacio (Boldyn Networks)

Abdelrahim Ahmad is the corresponding author; Abdelrahim Ahmad and Peizheng Li contributed equally to this article. This work was developed within the Innovate UK/CELTIC-NEXT European collaborative project on AIMM (AI-enabled Massive MIMO).
§ ABSTRACT
The radio access network (RAN) is a critical component of modern telecom infrastructure, currently undergoing significant transformation towards disaggregated and open architectures. These advancements are pivotal for integrating intelligent, data-driven applications aimed at enhancing network reliability and operational autonomy through the introduction of cognition capabilities, exemplified by the set of enhancements proposed by the emerging Open radio access network (O-RAN) standards.
Despite its potential, the nascent nature of O-RAN technology presents challenges, primarily due to the absence of mature operational standards. This complicates the management of data and applications, particularly in integrating with traditional network management and operational support systems. Divergent vendor-specific design approaches further hinder migration and limit solution reusability. Addressing the skills gap in telecom business-oriented engineering is crucial for the effective deployment of O-RAN and the development of robust data-driven applications.
To address these challenges, Boldyn Networks, a global Neutral Host provider, has implemented a novel cloud-native data analytics platform. This platform underwent rigorous testing in real-world scenarios using advanced artificial intelligence (AI) techniques, significantly improving operational efficiency and enhancing customer experience. Implementation involved adopting development operations (DevOps) practices, leveraging data lakehouse architectures tailored for AI applications, and employing sophisticated data engineering strategies.
The platform successfully addresses connectivity challenges inherent in offshore windfarm deployments using long short-term memory (LSTM) models for connectivity anomaly detection, and this paper provides detailed insights into the specialized architecture developed for this purpose.
Keywords: Open RAN, Telecom, AI, LSTM, Deep Learning, Big Data, DevOps, MLOps, CI/CD, Data Engineering, Anomaly Detection, Cloud-native private networks
§ INTRODUCTION
Telecommunication networks are essential to many aspects of our lives, driving digital transformation and revolutionizing communication. The benefits of these networks are numerous. Recently, their importance has surged due to the proliferation of various types of user equipment (UE), internet of things (IoT) devices, autonomous operations, and services that require faster, more reliable, resilient, secure, and private connectivity. This has led to a substantial rise in the demand for private networks, amplifying the challenges of managing many smaller, tailored mobile networks to deliver high-quality services.
To address these escalating demands, innovative enhancements in network design have emerged. A recent advance in this arena is the advent of open radio access network (O-RAN) technology. O-RAN aims to disaggregate the monolithic, single-vendor RAN, reducing infrastructure costs and paving the way for network programmability, ultimately leading to autonomous network operations by leveraging natively supported in-network AI techniques and streamlining the complexities of designing, delivering, managing, and operating private networks. It establishes a new framework of standards and principles for wireless networking, emphasizing open standards, interfaces, functions, and interoperability to foster greater market competition.
O-RAN’s primary objective is to reduce vendor lock-in, enhance flexibility, and develop network cognitive functions by leveraging data to optimize network performance and improve its resilience, to achieve cost efficiency, particularly in managing diverse heterogeneous networks. Its key advantages lie in software-defined network (SDN) technologies and virtualized network functions (VNF), which not only slash deployment costs but also enable network programmability for autonomous management. This, in turn, reduces operational complexity and optimizes performance <cit.>.
On the other hand, O-RAN is still a relatively new approach. It comes with many realistic challenges in implementation, such as interoperability with legacy network management systems, data integration issues, immaturity in data processing platforms to produce data-driven applications, management, and other technical complexities. In addition to these challenges, there is a shortage of experienced engineers and an increased number of engineering roles with specific skill sets that are necessary to boost this transformation in RAN architecture.
There is a wide range of applications required in the O-RAN system to improve its functionality, such as predictive maintenance and anomaly detection, energy efficiency optimization, automated network configuration and healing, enhanced quality of service (QoS) and traffic management, enhanced user admission control, dynamic RAN slicing <cit.>. AI and machine learning (ML) approaches are usually considered as the main tools to tackle these challenges <cit.>.
The combination of programmability and AI in O-RAN, from the implementation of xApps and rApps using the interfaces offered by RAN intelligent controllers (RICs), leads to the automation of network management tasks and makes real-time, data-driven decisions <cit.>.
The architectural innovations of O-RAN allow network operators to integrate AI algorithms into the O-RAN network, enabling the use of AI for tasks such as network optimization, troubleshooting, and other complex business problems. By automating these tasks, AI can be used to improve network performance, enhance the customer experience, and reduce costs.
However, developing solutions in O-RAN involves numerous challenges and complexities. These include obtaining a large amount of reliable training data from the network, managing and monitoring AI models for execution and inference, and regulating and implementing update mechanisms for the AI models deployed in the operational support system (OSS) stack.
In theory, the data and model aspects of these challenges can be partially addressed through machine learning operations (MLOps) <cit.>. However, in the context of O-RAN, the challenges are more complex due to multi-purpose applications and multi-platform issues stemming from the multi-vendor nature of O-RAN. The data sources for AI models are vast and intricate, often originating from multiple systems as shown in Fig. <ref>. Consequently, relying on a traditional data solution provided by a single vendor is nearly impossible in the multi-vendor environment of the mobile network operator industry.
This underscores the need for a comprehensive platform and methodology to apply analytics and AI solutions within O-RAN. Such an approach is essential to effectively address these challenges and organize the efforts involved in developing data-driven solutions. These challenges present opportunities for network operators to design and implement a platform capable of tackling these issues whilst being able to proactively test and deploy AI models in O-RAN, to enhance the network’s adaptability and efficiency.
In this paper, we introduce a novel cloud-native, open data-driven platform for O-RAN to address these challenges and streamline the integration of AI applications into the O-RAN management stack. This platform leverages cutting-edge engineering technologies and concepts such as data lakehouse <cit.>, DevOps <cit.>, and MLOps. The design of this platform also considers the growing number of private network deployments, and the increased complexities of these networks and use cases, ensuring future scalability and the capacity to handle the vast amounts of data generated within the network.
The contributions of this paper can be summarized as follows:
* To the best of the authors' knowledge, this is the first holistic cloud-native platform for multi-vendor system integration and services management proposed for O-RAN.
* This cloud-native platform is tightly aligned with the O-RAN architecture for potential AI model implementation and integration.
* This platform fully automates involved infrastructure, setup, data, and AI pipelines using DevOps and GitOps technologies, significantly reducing operational workload and streamlining the development cycle, which enables better collaboration between business owners and developers, leading to more efficient resource utilization.
* The paper presents an existing near real-time business problem related to connectivity to be resolved on an offshore mobile network that uses O-RAN technology, and provides a solution for it using anomaly detection with a long short-term memory (LSTM) model in the proposed data platform. This is the first AI-based solution targeting a use case of an offshore mobile network built using O-RAN technology.
* This paper provides a new cloud-native open data platform architecture that will support multi-vendor O-RAN designs. It also presents the used method of work in the AI lifecycle and other data-driven applications to tackle the lack of specialized human resources and standardization when deploying AI in O-RAN.
The remaining sections of this paper are constructed as follows:
Sec. <ref> presents the background of O-RAN and AI-enabled intelligent networks. Sec. <ref> elaborates on the proposed cloud-native open data platform. The anomaly detection use case leveraging AI techniques is detailed in Sec. <ref>. Then, in Sec. <ref>, we discuss the potential problems of the proposed platform and the plan for its future development. Lastly, Sec. <ref> presents the conclusions of this paper.
§ PRELIMINARIES
In this section, we present the background information regarding the O-RAN technique and its AI applications.
§.§ O-RAN
The RAN is a critical component of a typical mobile communication network, enabling UE to connect to the core network, which then delivers services to users. The evolution of wireless communication systems from 1G to 5G highlights increasing modularity and virtualization of network functionalities.
Key advancements in RAN architecture include distributed RAN (D-RAN), centralized (or Cloud) RAN (C-RAN), and virtual RAN (vRAN). The distinctions among these architectures are detailed in <cit.>.
In the 3GPP 5G new radio (NR) specifications, the traditional base station (BS) is composed of three main components: the centralized unit (CU), distributed unit (DU), and radio unit (RU). The CU and DU together perform the functions of the baseband unit (BBU), while the RU is responsible for signal conversion and radio frequency (RF) transmission.
O-RAN aims to address vendor lock-in issues by promoting the decoupling of hardware and software. This approach advocates for open, standardized interfaces, virtualized network elements, and white-box hardware, driven by principles of intelligence and openness. By doing so, O-RAN seeks to transform the RAN industry, fostering a more flexible and interoperable ecosystem.
§.§.§ Openness of O-RAN
Openness in O-RAN involves adopting standardized interfaces to ensure interoperability, enabling seamless integration of hardware components from various vendors, and fostering a multi-vendor RAN ecosystem. The O-RAN Alliance has issued various specifications to support this initiative.
Technically, O-RAN adheres to 3GPP 5G NR specifications, featuring the CU, DU, and RU. As illustrated in Fig. <ref>, the RU and DU are disaggregated based on the 7.2x split <cit.> and connected via the open fronthaul interface. Further segmentation of the CU results in two logical components: the CU control plane (CU-CP) and the CU user plane (CU-UP), enhancing deployment flexibility and reducing latency concerns.
The DU and CU are interconnected through the open midhaul F1 interface, which is divided into F1-C for control plane communications and F1-U for user plane connectivity.
§.§.§ Intelligence of O-RAN
The intelligence of O-RAN is a pivotal aspect that enhances its functionality through the integration of AI and ML. These advanced technologies enable sophisticated network automation, allowing for dynamic resource allocation, efficient management, and proactive orchestration of network functions and resources. At the heart of this intelligence are the RICs <cit.>, which are designed to host various applications that drive network optimization and network operational and maintenance processes. RICs are categorized into non-real-time (non-RT) RIC and near-real-time (near-RT) RIC, each supporting different types of applications known as rApps and xApps, respectively. It can be seen from Fig. <ref> that the near-RT RIC connects to the O-CU/O-DU via the E2 interface for near-real-time control, while the non-RT RIC communicates with the near-RT RIC through the A1 interface for non-real-time control and AI/ML model updates. Additionally, the O1 interface links the non-RT RIC with other RAN components for overall service management and orchestration <cit.>.
This layered approach ensures that O-RAN can adapt to varying network demands and conditions in real-time, significantly improving performance, reducing operational costs, and enhancing user experience.
§.§ The motivation of deploying AI in O-RAN
AI is becoming increasingly important in O-RAN compared to traditional RAN due to its capability to address complex network demands and enhance overall performance. Several key aspects benefit from the integration of AI <cit.>:
* Reducing Complexity: O-RAN networks have a more complex, disaggregated architecture compared to traditional RAN, making manual management and optimization more challenging. Building traditional applications that process the data is also challenging. AI algorithms can automate and optimize these processes and also compensate for the shortage of skilled engineers available to manage such novel networks <cit.>.
* Real-Time Capability: The new O-RAN architecture supports real-time and near-real-time RICs, allowing AI algorithms to respond to network changes in real time and enabling more efficient and effective management of the network, such as traffic steering <cit.>.
* Cross-Layer Optimization: The intelligence executed in O-RAN is expected to perform cross-layer optimization over the network, which outperforms the classical optimization focusing on solely communication blocks.
* Future Possibilities: The programmability, openness, and disaggregation enable opportunities for innovation especially when utilizing AI. One of the most important ideas that will change the shape of networks is autonomous management which will provide advanced capabilities compared to the traditional methods. Some of these capabilities are as follows:
* Improved Network Performance: AI-based algorithms can be used to optimize network performance by dynamically allocating network resources and adjusting network parameters based on real-time network conditions such as autonomous QoE and QoS resource optimization <cit.>.
* Cost Savings: Fitting autonomous network management into O-RAN reduces the need for human intervention to manage complex networks, an essential concept for scaling up the number and size of O-RAN networks. It also provides additional means to reduce cost by deploying specific AI applications, such as automatic energy saving and efficient resource utilization <cit.>.
* Energy Efficiency Improvement: The AI capability embedded in O-RAN will assist in reducing the overall operational energy consumption of the O-RAN system. Beyond software design that optimizes network elements and control signal configuration, AI supports more flexible and fine-grained network function operation; for instance, toggling carriers and cells off and on in O-RAN can be conducted in the RIC in a non-real-time fashion <cit.>.
In summary, the use of AI in O-RAN allows for more powerful automation, optimization, and insights compared to traditional RAN, making it a key enabler for the development and growth of the O-RAN market <cit.>.
§.§ The challenges in enabling AI in O-RAN network
In this publication, in light of the authors' engineering background, we focus on engineering challenges, notwithstanding the whole plethora of challenges related to legal, regulatory, business models, etc. As we progressed toward implementing AI in O-RAN, we encountered several hurdles and challenges:
* Multi-vendor RAN Model: The multi-vendor model involves deploying and operating RAN equipment and software from different vendors within the same network. This raises significant challenges, especially in centralizing management and collecting data from these systems. Data integration becomes more difficult, delaying the development of AI applications. Additionally, having multiple O-RAN models can isolate each component, limiting the data available for building holistic applications.
* Big Data Characteristics: O-RAN networks generate highly complex, diverse, and voluminous data, such as network performance data, configuration management data, fault management data, infrastructure data, and user equipment trace data. The big data characteristics of these data sources pose challenges in processing and analyzing information in real-time or in large volumes. Moreover, the varied structures and formats of data make integration into existing analytics platforms difficult. Consequently, many existing platforms may not meet the unique data requirements of O-RAN networks, slowing AI model development and complicating the integration of developed models with other O-RAN systems from different vendors.
* Integration with Existing Systems: Building certain AI models with specific algorithms requires data from external sources not directly accessible to the O-RAN platform, such as UE and network functions of other domains such as transmission networks or core networks. This external data is essential for comprehensive analysis and characterization of performance across the overall network ecosystem but poses integration challenges.
* Standardization: The lack of standardization in AI for O-RAN and RIC APIs presents significant challenges. There is no guarantee that developed xApps or rApps will be reusable across different RICs. Additionally, the industry is still in the early stages of establishing a centralized RIC system for the multi-vendor O-RAN model, further complicating standardization efforts.
* Skilled Resources: Managing AI algorithms in O-RAN networks requires specialized skills and expertise, which may not be readily available in the market. A deep understanding of O-RAN is essential for those involved in AI development for O-RAN, making it difficult to find qualified personnel.
The primary challenge lies in the absence of suitable data analytics platforms capable of accommodating the distinct data needs of managing multiple O-RAN networks built using multiple O-RAN vendors. To overcome this obstacle, it is imperative to develop new data analytics platforms specifically designed to meet the unique data demands of O-RAN networks. These platforms must seamlessly integrate with existing O-RAN network systems and associated infrastructure components. They should have the ability to process and analyze vast quantities of complex and varied data to deliver use cases that depend both on real-time (or near-real-time) capabilities, and other use cases that are not real-time in nature, whilst facilitating the integration of data from other non-ORAN diverse sources.
Given that the approach to O-RAN is still an evolving technological concept there is ample opportunity to contribute to the development of more robust systems. These challenges have driven us to create a unique cloud-native data analytics platform. Our goal is to directly address these difficulties and provide an environment that supports the development and execution of AI applications within the O-RAN ecosystem.
§ THE PROPOSED CLOUD-NATIVE OPEN DATA PLATFORM
Before introducing the proposed cloud-native open data platform, we will present the problem statement around the management of multiple O-RAN network instances and platforms. Fig. <ref> depicts a scenario where one communications service provider (CSP) implements multiple O-RAN network instances and platforms to support its ecosystem of mobile private networks (MPNs) and neutral host networks (NHNs). This scenario is based on real-world implementation of networks and services and it is not a theoretical exercise.
The software platform of O-RAN Vendor A in light blue has been selected to build a network system that delivers against the requirements of MPNs. On the other hand Vendor B in green, has been selected to build a system to deliver combined multi-operator RAN (MORAN) services in outdoor high-density demand (HDD) areas, implementing each mobile network operator (MNO) on its own dedicated virtualized network instance. Vendor C has been selected to build MORAN in-building coverage and capacity services, and similarly to the HDD use case, each MNO has been implemented on its own dedicated virtualized network instance. Other potential vendor platforms or software stacks might be chosen in the future to address the specific architectural and functional requirements dictated by new use cases.
There are other network components such as the IP network in dark blue, and IT & Security in yellow (which is shared by all vendors). Other network components are also used like routers, mobile apps, and IoT devices, each of which supports the operation of the network.
These networks are deployed using these different vendors’ systems and support multiple MNOs, in addition to our MPNs, and they are deployed in different geographical areas such as airports, offshore windfarms, stadiums, smart cities, hospitals, etc. Each use case deployment is different in terms of the vendor that is used, the internal design within each vendor's system, the supported MNOs, and the supported network features. This creates a versatile portfolio of network designs that comply with the customers' needs and use case requirements.
The authors of this paper have collaborated closely to tackle the unique challenges that arise in the development cycle of AI/ML models within these O-RAN ecosystems. So far, we have noticed a significant gap: there is a lack of a supportive platform for conducting data management and analytical processes across multiple-vendor-based O-RAN systems and for deploying AI models within them. This absence points to a crucial need for innovation and development in this area, highlighting an opportunity for us to contribute to bridging this gap and advancing the field. Accordingly, we designed and implemented a multi-vendor cloud native open data architecture, which is designed to address the workflow presented in Fig. <ref>.
The platform is used to centralize, normalize, and standardize the management of multi-vendor networks, by integrating the data sources of each sub-ecosystem onto one single overlaying data management system allowing easy access to the integrated data sources via the inbuilt data pipelines that standardize the entire data analytics process. This is in opposition to adopting a casuistic approach to each new network component or network system being deployed.
This novel approach permits splitting the data pipeline creation work to be performed by three different roles:
* The RAN subject matter expert (SME), who understands the business problems to address and is able to describe these in terms of use cases detailing the purpose of the process under development and the topological nature and structure of the data, explain the rationale of the analytical process, and define the criteria for validating the results and what can be classified as a successful outcome from developing/implementing a data pipeline and analytical application.
* Data engineer who understands how to use the data management platform and its tools to create the data pipelines in collaboration with the SME and the data scientist.
* The data scientist will use the continuous integration and continuous deployment (CI/CD) and MLOps techniques to implement the AI/ML models that will be trained to deliver the desired outcomes as defined in collaboration with the SME.
In O-RAN or any operational RAN environment, acquiring, storing, and processing data for model training poses significant challenges. While standard interfaces like E2 defined in the O-RAN architecture provide access to components such as O-DU, O-CU, and others within the network ecosystem, the data retrieved is typically raw and lacks a standardized schema, rendering it unsuitable for direct consumption by AI algorithms.
To effectively leverage AI within O-RAN and its interfaces, a multi-stage process is necessary. Initially, raw data from various sources must be collected, validated, enriched, transformed, and consolidated into an integrated data pool. This prepares the data for processing by using data engineering techniques, such as applying business rules, calculating key performance indicators (KPIs), performing feature engineering, and linking data tables based on network topology mapping. These processes ultimately enable the application of algorithms tailored to specific use cases.
Moreover, an O-RAN network is constructed on top of other system components, including IP networks and cloud server infrastructure. The operation and maintenance of these components are vital for overall network performance and should be seamlessly integrated into a holistic network management process that encompasses all system elements.
The primary goal of this platform is then to streamline and speed up the development and hosting of AI solutions within the O-RAN ecosystem. The effort distribution across the different stages of development of an AI-capable system to manage O-RAN networks, depicted in Fig. <ref>, shows that at the base of the pyramid lies the data acquisition and mediation stage, which involves the biggest share of development effort. It is at this stage that the collaboration work between the three roles mentioned earlier is most intensive, and if done correctly the components developed at this stage will underpin the work done at other stages, facilitating and reducing the effort spent at each stage. Ultimately, the AI model deployment stage will benefit from having all the required components readily available from the start.
This efficiency stems from the elimination of redundant work across multiple network deployments as data migrates to a centralized system that standardizes the data handling processes, enabling the adoption of uniform AI models across different systems provided by different vendors.
Fig. <ref> presents the key stages of AI model development, highlighting the effort involved in achieving the final product. The subsequent stages of the data acquisition and mediation stage involve data storage and processing, which, while less effort-demanding, are vital for improving the performance of the data management process and for preparing datasets to meet the requirements defined by the SME for the analytical stages of the workflow. At the top of the pyramid sits the development of the AI models, built upon the foundations laid by previous stages. This structured approach ensures that developing AI models is more straightforward and effective, supported by the availability of high-quality, systematically organized data. It also increases the usage of standardized components, facilitating the integration of new network instances and the adaptation of existing AI models.
Fig. <ref> depicts the main layers and components of the data management system and its interfaces towards the network functions and management systems. The data mediation layer implements the tools to connect with the multiple data sources, collecting the data and performing initial processing according to required processes to validate and enhance the quality of the datasets and unify the data (files and/or streams) coming from multiple instances of the same data-source type into one coherent pipeline.
The datasets are then stored in the data storage layer and/or streamed to upper layers such as the Data Virtualisation & Processing Layer, Application Layer, or Data Visualisation and Monitoring Layer. Once the data is cleansed, validated, and enriched, it is processed in the processing layer using big data execution engines or virtualization techniques, depending on the AI use case.
The Policies, Control, and Management Layer contains information about the network topology, data mapping, and the roles that are applied to the data in the processing layer to produce richer datasets, features, and KPIs to be used in the AI Layer and/or Visualisation and Monitoring Layer, the single pane of glass.
In the Visualisation Layer, network engineers can access a set of reports and dashboards that combine multiple datasets from multiple data sources, providing information about network performance, configuration, and faults, and addressing analytical use cases such as service assurance and operational situational awareness.
§.§ The platform architecture
Despite the layered depiction, the order or positioning of the layers does not necessarily indicate a hierarchy or sequence. Each layer can be accessed directly, independent of its position in the stack. The layers are broken down as follows:
§.§.§ Data collection agent (DCA)
A DCA is a self-built software application deployed across the network equipment. This software is developed to extract or generate data from a function or interface that is not readily available and is considered important for the implementation of one or a set of use cases.
§.§.§ Data acquisition and mediation layer
This is the layer where the heavy tasks are done to integrate all the deployed networks and supporting systems. The data is collected from all network functions and device data sources, including O-RAN, IoT devices, UEs, customer premise equipment (CPE), etc. This layer deals with data in various formats and structures such as CSV, JSON, XML, unstructured or semi-structured text, APIs, SFTP, streaming systems, SNMP, etc. It unifies the way data is provided for the next layers and creates a stream of coherent information from the disparate content provided by each data source. The data in this phase goes through different initial processing steps to enhance its quality, such as parsing, enriching with additional information and schema, transforming the data format and content, and distributing it to the next layers. This layer is important to standardize the data that is made accessible to other components of the data management platform.
§.§.§ Data storage layer
The collected data is stored in this layer. Depending on the volume and the accessibility requirement for this data by other components, a suitable storage system and table are used. Data lakehouse technologies are employed for sets of big data and relational databases for smaller datasets of data and mapping information.
§.§.§ Data streaming layer
This layer facilitates delivering data to the application layer or processing layer in real-time. This process avoids the latency associated with storing data in the storage layer and subsequently retrieving it, which is important when dealing with online monitoring and decision-making use cases. There’s an independent process for storing this data running in parallel, that doesn’t impact the performance of streaming procedures. Many important applications use streamed data, and they solve complex business problems. In this work, it was implemented for an anomaly detection AI/ML model that requires a continuous stream of data being fed with very low latency. This anomaly detection model is an example of an application where the data is processed in real-time before even being stored in the data lake. These use cases are important when we consider the case of near-RT RIC and RT-RIC where the latency of the decision-making process must be kept in order of magnitude of milliseconds, and data cannot be stored before being processed.
§.§.§ Data virtualization and processing layer
As data arrives at the storage layer, streaming layer, or both, it’s not immediately accessible for all types of processing. Here, two concepts come into play: the big data execution engine and data virtualization. Both share similarities in terms of data processing and access, but they differ in their applications. The execution engine, such as Spark, is used for complex computations on vast amounts of data to calculate network KPIs and perform feature engineering for AI. On the other hand, data virtualization simplifies data access, joining data from different sources like lakehouse objects and databases in a typical SQL-based manner. For instance, we use it to link the calculated KPIs with the network topology information and data from other sources to create views and reports that can be utilized by others in the upper layers of the data management platform.
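As an illustrative sketch of the kind of processing this layer performs (not the platform's actual code), the fragment below aggregates a per-cell KPI from raw performance counters with Spark and joins it with topology information; the table paths, column names, and the use of Delta Lake-backed tables are assumptions.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("kpi-enrichment").getOrCreate()

# Hypothetical lakehouse tables: raw RAN performance counters and network topology.
counters = spark.read.format("delta").load("s3a://lakehouse/ran_counters")
topology = spark.read.format("delta").load("s3a://lakehouse/network_topology")

# Example KPI: hourly average downlink throughput per cell.
kpi = (
    counters
    .withColumn("hour", F.date_trunc("hour", F.col("timestamp")))
    .groupBy("cell_id", "hour")
    .agg(F.avg("dl_throughput_mbps").alias("avg_dl_throughput_mbps"))
)

# Enrich the KPI with site and sector information from the topology mapping.
enriched = kpi.join(topology.select("cell_id", "site_id", "sector"),
                    on="cell_id", how="left")
enriched.write.format("delta").mode("overwrite").save("s3a://lakehouse/kpi_dl_throughput")
```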
§.§.§ Policies, controls and management
We use it to implement business rules and relationships across the four Data layers and the AI application layer. It manages O-RAN fault, configuration, accounting, performance, and security (FCAPS) <cit.> metadata, network topology, other data sources, performance alarm, and complex event alarm definition, AI rules and policies, network API access, and change activities on the network. In addition to that, it’s used as a central mapping layer to standardize data across different vendors into a common set of identifiers, promoting seamless data access and analysis. It improves data coherence and simplifies cross-networks operations.
§.§.§ Application layer
This is the layer where the data-driven application lifecycle is complete. In AI applications, this is the place where SME and data scientists work together on training, testing, publishing and validating an AI product.
§.§.§ Data visualisation layer
This layer is mostly dedicated to implementing business intelligence functions that offer SMEs a set of visualization artefacts such as reports and dashboards that combine data from multiple sources and at different stages of processing organized as per the use case definition. This can be described as the visual interface to monitor the overall system performance and provide situational awareness about network operational and maintenance priorities. This layer implements an interface between the engineer and the AI models, by reporting the actions taken by AI models during their regular operations. More details about this layer can be consulted in <cit.>.
§.§ Features of this platform
The platform is designed to offer flexibility in working with O-RAN data and includes the following features and benefits:
* Unification: The platform can collect, store, and process all types of data from various sources, regardless of volume, velocity, and variety, thanks to its modern open data architecture. This feature contributes to:
* The development of batch and real-time processing for network applications.
* Enabling data analysts to process and access data for advanced analytics tasks.
* Supporting the development and deployment of AI models, paving the way for advanced federated learning in multi-vendor networks.
* Handling operational needs ranging from reporting of complex processed metrics, performance, or fault events to interactive visualization with these dashboards and data cubes.
* Cost-effective: Built on commodity servers, the platform leverages Kubernetes (K8s) container orchestration and other tools to simplify management, reducing the resources required. In addition, using open data architecture and data lakehouse technology reduces costs by consolidating data, optimizing query performance, reducing bandwidth usage, leveraging scalable cloud storage, utilizing cost-effective open-source tools, and improving collaboration and usability.
* Automation and Standardization: Using DevOps and GitOps <cit.> technologies, all infrastructure, setup, data, and AI pipelines are fully automated. This reduces the operational workload for development and deployment, making the development cycle predictable. Business owners can collaborate more effectively with developers to create solutions, ultimately reducing the resources needed.
* Scalability: As the number of deployed O-RAN networks increases, the platform's scalability is crucial. It has been thoroughly evaluated to ensure it meets the increased demands for AI, data processing, and storage.
§.§ Platform core components:
Fig. <ref> depicts the architectural building blocks of the cloud-native data management platform. This platform is further detailed as follows.
The first tier, following a bottom-up order, contains the K8s infrastructure and its automation toolset. We utilized Terraform for defining and provisioning our infrastructure as code, enabling efficient management and automation of the infrastructure resources that run Talos OS, the operating system forming the second tier of the architecture. Talos OS is a modern, Linux-based operating system that is specifically designed for K8s. It provides a secure, minimal, and immutable platform to enhance security, and its major feature is that it automates and simplifies the management and operation of K8s clusters. On top of this, K8s is installed to deploy the microservices and applications that implement the K8s cluster management functions and the data management applicational system (this being the third tier). The following applications and services are implemented in the third tier:
§.§.§ Data management applicational systems
* Apache NiFi is the core of the Acquisition and Mediation layer and is used to automate and manage the flow of data between systems where the data sources are located (across the multiple platforms and their components) and other applications and systems of the data management platform that consume these data. Therefore, it is the main application used to implement the collection and mediation layer of the platform.
* Apache Kafka is the core streaming system and is used to transfer data in real-time from the source to the processing layer.
* In the storage layer, we use the MinIO object storage, which provides an S3-like API. Depending on the data use case, we store the data using one of the open table formats Hudi, Delta Lake, or Iceberg, or even in its raw form.
* The data is later accessible by Apache Spark to perform complex big data processing on data from Streaming, data lakehouse, databases, or from all. Trino is, on the other hand, a virtualization system and it’s used to perform SQL-like queries on any dataset available anywhere in the platform.
* In the application layer, we use Python, Jupyter Notebook, and MLflow to train, test, and validate the AI module before publishing its micro-service deployment in the processing layer. We also use DBT for version-controlled analytics workflows.
* In the control and management phase, we use Apicurio as a schema registry, Hive Metastore to store data catalogues and metadata, and PostgreSQL to store the network topology, rules, and alarm triggers.
* The last stage of the data flow relies on Elasticsearch and Kibana, where we visualize data from different sources, in addition to the applied AI actions and results, in one dashboard that serves as the single pane of glass of the platform.
§.§.§ K8s cluster management tools
There are other systems mentioned in the management toolbar. These tools are used to support the data tools in the data layers.
* Longhorn & Rancher: for k8s and cloud-native storage (data layer for applications) management.
* Prometheus and Grafana: for platform monitoring and alerting.
* NeuVector and Kube-bench: for security and vulnerability checks.
* Velero and Fluentd: for logs and backup management.
* Argo CD and GitHub Actions: for CI/CD and GitOps.
§.§.§ CI/CD pipeline
The development of data pipelines on this architecture can be a complex, tedious, and error-prone process if done manually. A set of tools and systems are used to automate the development and deployment of these pipelines. GitOps methods, such as CI/CD, are used to simplify, automate and manage this process.
Fig. <ref> shows an example of one CI/CD process implemented to develop and deploy the data pipeline of one use case addressed by the platform, providing details about every data engineering and AI model preparation task involved.
The pipeline CI/CD cycle starts by developing the source code and publishing it onto the GitHub repository, which triggers a set of automated workflows that checks the quality of the code and performs security scans. In the next step, the code is built into Docker images following a process known as containerization. All the Docker images are then submitted to a process of security vulnerability scanning, before being stored in the Docker registry. The last stage of the CI process is completed when all the deployment manifests are updated with the new Docker image. The final stage of the cycle is the CD process, which uses ArgoCD to automate the process of fetching the latest changes in the deployment manifests and deploying the new application.
§ THE PROBLEM STATEMENT AND THE PROPOSED SOLUTION
§.§ Problem statement
Fig. <ref> illustrates the use case of deploying a commercial O-RAN-based MPN in an offshore location, serving the UEs located in vessels that navigate around the windfarm attending to wind turbines for infrastructure operational and maintenance activities. These vessels spend most of the time navigating close to the windfarm and carry tens of people who work at sea for periods of 15 days. Therefore, these people rely on this connectivity to do their work, to communicate with their colleagues working onshore and their families, and for their entertainment. The connectivity provided by this network also supports business and operational critical processes of the organizations responsible for operating and maintaining the network, across an area of around 300 km².
Many factors such as weather conditions, sea conditions, distance to the site, and MPN operational faults may affect the quality of the connectivity service as experienced by the end-users, which might impact the: ability of running business and operational critical processes work according to the requirements or the ability of people working productively in their floating offices. Despite these challenges, the MPN’s network operator is responsible for managing, operating, and maintaining the network’s performance as per the contracted service level agreement.
The problem addressed by the solution described in this paper occurred in an MPN service deployed by Boldyn Networks in a windfarm located in the North Sea. This network is composed of 3-cellular sites of 3 sectors each and one cellular carrier per sector, totaling 9 macro cells that cover the whole extension of the windfarm. Each cell operates in the LTE B3 and implements a 20MHz channel. The cellular sites are installed across the windfarm in three different turbines, where commercial-of-the-shelf (COTS) servers are used to run the containerized O-DU, and three O-RUs are installed to implement a sector each. The O-DUs deployed offshore connect, via dark-fibre light-up using long-haul SFPs, to an infrastructure of COTS servers that run the O-CU container.
On the other hand, each relevant vessel’s network is composed of a Wi-Fi network that provides IP connectivity across the vessel. This Wi-Fi network connects to Boldyn’s customer premise equipment (CPE) that acts like an LTE broadband router back-hauling the vessels’ IP traffic to connect to the internet and the MPN customer’s enterprise network. This CPE is installed inside the vessel, near the bridge, and it’s connected to external 4x4 multiple-input-multiple-output (MIMO) antennae that increase the coverage and capacity of the network. Each CPE is dual-modem capable for connection resilience and traffic load balancing reasons and one MPN SIM is inserted in each modem.
The problem resides in the connection recovery procedure implemented by the CPE when one of the modems drops its link to the LTE network. This procedure takes 5 minutes to re-establish an LTE connection to the macro network, which is a long period of time and in some conditions increases the likelihood of disconnections occurring in both modems at the same time. In studying the problem, the following was found:
* These disconnections weren’t always correlated with the performance of the network, the radio link conditions, or the distance to the site.
* A drift in radio link performance between the two modems of the CPE could be observed by monitoring indicators such as latency and throughput, even though most of the time both modems were connected to the same cell and experienced similar radio conditions.
* Updating the configuration of the modem on the CPE via its API, forced the modem connection to restart and established a new radio link improving the performance of the modem in all the indicators.
* The process of restarting the modem is much quicker to implement than the process of re-connection in case of radio link drop and has the additional advantage that can be done pre-emptively whilst the other modem is showing good performance.
* The automation of this process of analysis and decision-making could reduce significantly the number of disconnections thus having a great impact on the quality of experience (QoE) of the users being served by the network system.
The automation of this process enabled us to develop a self-healing mechanism built around an ML model that detects this type of performance anomaly in the modem. The model was trained on available historical performance data merged with the timestamps at which actions were taken to resolve the issue, resulting in a labeled dataset that captures the relationship between patterns of the relevant performance indicators and the need for a decision-making action. Some of the relevant performance indicators were reference signal received power (RSRP), reference signal received quality (RSRQ), IP packet data latency, timestamp, location, the cell to which the modem is connected, and others from different parts of the network.
The process of collecting data from the CPE device via its API every 5 seconds, preparing and enriching it, and streaming it to the application layer for consumption by the ML model for online decision-making and direct provisioning benefited from all the features and capabilities offered by the data management platform.
§.§ The proposed solution
Network anomaly detection or prediction is a complicated task. Anomalies are patterns in data that do not conform to a well-defined characteristic of normal patterns. Anomalies can be classified into three types: (1) a point anomaly is a particular data instance that deviates from the normal pattern of the dataset; (2) a contextual anomaly occurs when a data instance behaves anomalously in a particular context; (3) a collective anomaly happens when a collection of similar data instances behaves anomalously with respect to the entire dataset; the group of data instances is termed a collective anomaly <cit.>. Existing anomaly detection methods include neural networks (NNs), support vector machines, rule-based classification, statistical signal processing methods, and clustering. In this case, the prediction task is based on the historical records of network states and automated actions, so it can be regarded as a point anomaly issue, and an NN is the preferred method. Moreover, to exploit the temporal correlations of the historical samples, the LSTM is the selected learning model.
§.§ Data source
This case study has benefited from utilizing data from a real-life network based on O-RAN standards. The O-CU, O-DU, and O-RU provide FCAPS data, which describes network performance and operational behavior. This FCAPS data is a fundamental building block for the AI model’s development cycle, playing a crucial role in the training and validation process. Once deployed, the AI model uses this data for inference. Another important data source comes from the CPE equipment in the vessel, which provides measurements of the modem’s performance and its location as a time series, mapped with the FCAPS information collected from the O-RAN network.
The CPEs are at sea, and their location is monitored continuously. A couple of months of this data is shown in Fig. <ref>[The latitude and longitude values are removed for the consideration of the users privacy.]. This data shows records of vessels connected to the network even when they are outside the coverage area. It contains many outliers that need to be removed, specifically those indicating locations far from the nearest site.
§.§ LSTM model design
The LSTM model follows an offline-development, online-deployment workflow. The historical data stored by the platform is used for the model's training. Once the trained model meets the training criteria, it is deployed in the subsequent step.
Historical data includes two parts, the network state data and the corresponding actions. Actions here are operations recorded manually by the engineers when the QoS falls below the threshold. Some of the network state data is exported using the template shown in Table <ref>, where WAN ID indicates the ID of the device that needs an action; Carrier is the carrier frequency; LTE-RSRP, LTE-SINR, LTE-RSRQ, Latency, Latitude, and Longitude are UE-related information; and Time refers to the timestamp of the record.
The next anomaly must be forecasted according to the hidden patterns of the historical network state records, so a reasonable solution is to formalize a sequence prediction task, where multiple discrete records on sequential timestamps are cascaded as one training sequence. These training sequences are then labeled using the manual operations: a training sequence is labeled as '0' if an action is needed; otherwise, the label is '1'.
For the features in Table <ref>, the WAN ID, LTE-RSRP, LTE-SINR, LTE-RSRQ, and Latency indicators are taken as training features, while the carrier, latitude, and longitude are discarded because they are unlikely to be correlated with the occurrence of the anomaly. The recording interval of the raw data is 5 seconds. Two months of records were stored and then processed in the cloud-native data platform. The data pre-processing pipeline is illustrated in Fig. <ref>.
First, some records are missing from the raw dataset, so the process pads these missing values and removes records that are obviously out of the normal range. Then, as a de-noising step, the data is downsampled to 1/N of its original rate, and every L consecutive samples are assembled into one training sequence. Lastly, each training sequence is labeled. It is worth mentioning that the labeling criterion is the manual action taken by the engineer, which may lag the first occurrence of the anomaly by 5 to 20 minutes; therefore, an average delay of 10 minutes is applied when labeling the data. Each training sequence is a matrix of size 1×5×L, and the labels are binary values. In this use case, after validation, N=2 and L=6 are reasonable options.
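A minimal sketch of this pre-processing chain is shown below, assuming a pandas DataFrame indexed by timestamp for a single modem; the outlier thresholds and the exact labeling window are illustrative assumptions.

```python
import numpy as np
import pandas as pd

FEATURES = ["WAN ID", "LTE-RSRP", "LTE-SINR", "LTE-RSRQ", "Latency"]
N, L = 2, 6                           # downsampling factor and sequence length
LABEL_SHIFT = pd.Timedelta("10min")   # assumed average lag of the manual action

def build_sequences(df: pd.DataFrame, action_times: pd.DatetimeIndex):
    """Pad missing 5-second records, drop implausible rows, downsample to 1/N,
    cut into length-L windows and label each window by whether a manual action
    follows it within LABEL_SHIFT (0 = action needed, 1 = normal)."""
    df = df.sort_index().asfreq("5s").ffill().bfill()
    df = df[df["LTE-RSRP"].between(-140, -40) & (df["Latency"] > 0)]  # illustrative thresholds
    df = df.iloc[::N]

    X, y = [], []
    for start in range(len(df) - L + 1):
        window = df.iloc[start:start + L]
        end_time = window.index[-1]
        action_needed = ((action_times > end_time) &
                         (action_times <= end_time + LABEL_SHIFT)).any()
        X.append(window[FEATURES].to_numpy(dtype=np.float32))
        y.append(0.0 if action_needed else 1.0)
    return np.stack(X), np.asarray(y, dtype=np.float32)
```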
The architecture of the NN model is depicted in Fig. <ref>, wherein the LSTM module is built on the LSTM implementation in PyTorch, with an input size of 1×5×6 and a hidden size of 2. Two linear layers follow the LSTM module, and the output indicates whether an action should be taken. The last layer adopts the sigmoid activation function.
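A minimal PyTorch sketch consistent with this description is given below; the width of the intermediate linear layer is an assumption, as its exact value is not specified above.

```python
import torch
import torch.nn as nn

class AnomalyLSTM(nn.Module):
    """Sketch of the described architecture: an LSTM over L=6 timesteps of
    5 features with hidden size 2, followed by two linear layers and a sigmoid."""

    def __init__(self, n_features: int = 5, hidden_size: int = 2):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_features, hidden_size=hidden_size, batch_first=True)
        self.fc1 = nn.Linear(hidden_size, 8)   # hidden width of 8 is an assumption
        self.fc2 = nn.Linear(8, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x has shape (batch, L, n_features); the last hidden state summarises the sequence.
        _, (h_n, _) = self.lstm(x)
        out = torch.relu(self.fc1(h_n[-1]))
        return torch.sigmoid(self.fc2(out)).squeeze(-1)
```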
For the model's training, the training/validation dataset is divided according to the 80%/20% rule. The loss function is the binary cross entropy (BCE) <cit.>.
Training is performed on two RTX 2080Ti GPUs, with a batch size of 2048 and Adam as the optimizer. In the batch sampling process, a weighted batch sampler is used because the dataset is imbalanced: labels '1' far outnumber labels '0'. The training dataset described above is used to train the LSTM model, and the test set is used to evaluate the trained model. In the training stage, the training data is shuffled before batch sampling. Fig. <ref> shows the model's accuracy plots for training and test. Convergence is reached after 100 epochs.
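The following sketch illustrates the training setup described above (80%/20% split, BCE loss, Adam, batch size 2048, weighted batch sampling), reusing the AnomalyLSTM sketch from the previous subsection; the learning rate and other unstated details are assumptions.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset, WeightedRandomSampler

# X, y are assumed to come from the pre-processing sketch above.
dataset = TensorDataset(torch.from_numpy(X), torch.from_numpy(y))
n_train = int(0.8 * len(dataset))
train_set, val_set = torch.utils.data.random_split(dataset, [n_train, len(dataset) - n_train])

# Weighted sampling to counter the imbalance between frequent '1' and rare '0' labels.
train_labels = torch.from_numpy(y[train_set.indices]).long()
class_weights = 1.0 / torch.bincount(train_labels).float()
sample_weights = class_weights[train_labels]
loader = DataLoader(train_set, batch_size=2048,
                    sampler=WeightedRandomSampler(sample_weights, num_samples=len(train_set)))

model = AnomalyLSTM()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # learning rate is an assumption
criterion = torch.nn.BCELoss()

model.train()
for epoch in range(100):
    for xb, yb in loader:
        optimizer.zero_grad()
        loss = criterion(model(xb), yb)
        loss.backward()
        optimizer.step()
```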
§.§ Model deployment and validation
The real-world deployment is relatively straightforward using the Boldyn data analytics platform. As illustrated in Fig. <ref>, the development happens in its application layer. For validation and deployment, the model is dockerized into an image, integrated into the DevOps pipeline, and hosted directly in the processing layer. This microservice continuously receives real-time data and feeds it to the LSTM model; the generated actions then trigger the corresponding changes in the network.
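A simplified sketch of such a microservice is shown below; the Kafka topic, broker address, message schema, model file name, and CPE provisioning endpoint are hypothetical and serve only to illustrate the streaming-inference loop.

```python
import json
from collections import deque

import requests
import torch
from kafka import KafkaConsumer

# Hypothetical topic, broker, and model artefact; the real integration details differ.
consumer = KafkaConsumer("cpe-modem-metrics", bootstrap_servers="kafka:9092",
                         value_deserializer=lambda m: json.loads(m.decode("utf-8")))
model = AnomalyLSTM()
model.load_state_dict(torch.load("anomaly_lstm.pt"))
model.eval()

window = deque(maxlen=6)  # rolling window of the last L downsampled records
for message in consumer:
    record = message.value
    window.append([record[k] for k in ("wan_id", "rsrp", "sinr", "rsrq", "latency")])
    if len(window) < window.maxlen:
        continue
    x = torch.tensor([list(window)], dtype=torch.float32)  # shape (1, L, n_features)
    with torch.no_grad():
        score = model(x).item()
    if score < 0.5:  # label '0' means an action is needed
        # Hypothetical provisioning call that restarts the affected modem.
        requests.post("http://cpe-gateway/api/modem/restart",
                      json={"wan_id": record["wan_id"]}, timeout=5)
```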
After deploying the model in production, complaints and network disconnections decreased significantly, to a very acceptable level. Table <ref> shows the improvement in service performance, which validates the model in production.
It is evident that connectivity significantly improved due to proactive measures taken when the model identifies potential connection drops. This has effectively prevented simultaneous loss of connection for both modems. Occasionally, such drops occur concurrently due to weather conditions or specific spatial factors, albeit infrequently and under unique circumstances.
Due to the innovative nature of this research and the emerging O-RAN technology, there is currently no publicly available dataset that closely matches the specific requirements of our study. Consequently, the data used in this research is proprietary and was collected from real-world deployments by Boldyn Networks. This limitation underscores the need for future efforts to develop and share standardized datasets to facilitate broader validation and comparison of AI models in similar contexts.
§ DISCUSSIONS AND FUTURE WORKS
The deployment of the AI-driven application in the proposed cloud-native data analytics platform for an offshore O-RAN network demonstrated significant improvements in connectivity and operational efficiency. The use of LSTM models for real-time anomaly detection effectively reduced network disconnections, enhancing the user experience.
These results highlight the potential of AI in managing complex network environments, particularly in challenging offshore settings.
Deploying consistent AI models across complex, multi-vendor network environments remains a significant challenge, particularly when networks span diverse regions with varying system configurations and architectures. As shown in Fig. <ref>, the centralized AI models, while effective within single-vendor environments, struggle to maintain adaptability and efficiency across different network setups. This underscores the need for more flexible approaches that can ensure re-usability and privacy compliance in such diverse contexts.
To address these challenges, we propose Federated Learning (FL) as a promising solution. FL enables the training of AI models using local data from each deployment location, thus preserving data privacy and enhancing the model's applicability across different environments. This decentralized approach ensures that AI models can be deployed consistently and efficiently, even in multi-vendor, multi-region scenarios.
Our future work will focus on implementing the FL approach across various offshore networks managed by different O-RAN vendors (Fig. <ref>). By doing so, we aim to validate the effectiveness of FL in ensuring consistent performance and privacy compliance across diverse network deployments. The future work will also explore potential optimizations to further enhance the scalability and efficiency of FL in real-world applications, particularly in environments with highly heterogeneous systems and vendor setups.
§ CONCLUSIONS
RAN plays a critical role in modern telecom infrastructure, evolving towards disaggregated and open architectures like O-RAN. These innovations enable the integration of intelligent, data-driven applications to enhance network reliability and operational autonomy. However, the operation of O-RAN networks poses challenges due to immature real-world practices and complexities in managing data and applications across diverse vendor systems.
Boldyn Networks has developed a novel AI-driven cloud-native data analytics platform to address these challenges. Tested with advanced LSTM models for real-time anomaly detection, the platform significantly improves operational efficiency and enhances customer experience. Leveraging DevOps practices and tailored data lakehouse architectures for AI applications, it exemplifies sophisticated data engineering strategies.
The deployment of this platform in an offshore O-RAN network demonstrated significant improvements in connectivity and operational efficiency, validating the model’s effectiveness. However, the reliance on proprietary data highlights the need for standardized datasets to facilitate broader validation and comparison of AI models. Future research should explore the scalability of such AI-driven solutions across diverse, multi-vendor network environments. Implementing FL could ensure consistent AI model performance while preserving data privacy across different regions and system configurations.
This platform demonstrates significant potential for advancing in-RAN AI development. We aim to contribute to the community’s understanding and implementation of complex challenges in this domain, fostering innovations and improvements.
§ ACKNOWLEDGMENT
The authors would like to sincerely thank the following individuals from Boldyn Networks for their invaluable contributions to this paper: Sean Keating, Chief Technology Officer UK & Ireland, for his managerial support; Andrew Conway, Group Director Technology Strategy, Donal O’Sullivan, Head of Product Innovation, and David Kinsella, RAN Solutions Architect, for their technical review of the paper; and Menglin Yao, Data & Software Engineer, and Michael Waldron, DevOps Engineer, for their platform operation and technical support. Their review, constructive comments, and support were instrumental in the development and completion of this work.
§ AUTHORS CONTRIBUTIONS
* Abdelrahim Ahmad: Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Software, Validation, Visualization, Writing - original draft.
* Peizheng Li: Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Software, Validation, Visualization, Writing - original draft.
* Robert Piechocki: Project administration, Supervision, Writing - review & editing.
* Rui Inacio: Conceptualization, Methodology, Validation, Writing - review & editing.
|
http://arxiv.org/abs/2409.03602v1 | 20240905150046 | A combination theorem for hierarchically quasiconvex subgroups, and application to geometric subgroups of mapping class groups | [
"Giorgio Mangioni"
] | math.GR | [
"math.GR",
"math.GT",
"20F65 (Primary) 57K20, 51F30 (Secondary)"
] |
§ ABSTRACT
We provide sufficient conditions for two subgroups of a hierarchically hyperbolic group to generate an amalgamated free product over their intersection. The result applies in particular to certain geometric subgroups of mapping class groups of finite-type surfaces, that is, those subgroups coming from the embeddings of closed subsurfaces.
In the second half of the paper, we study under which hypotheses our amalgamation procedure preserves several notions of convexity in HHS, such as hierarchical quasiconvexity (as introduced by Behrstock, Hagen, and Sisto) and strong quasiconvexity (every quasigeodesic with endpoints on the subset lies in a uniform neighbourhood). This answers a question of Russell, Spriano, and Tran.
I will achieve in my life - Heaven grant that it be not long - some gigantic amalgamation between the two discrepancies so hideously apparent to me. (Virginia Woolf)
§ INTRODUCTION
§.§ A combination theorem for subgroups of HHGs
Given a group G and two subgroups A,B≤ G, it is natural to ask what the subgroup ⟨ A, B⟩_G generated by A and B looks like, and in particular if it is isomorphic to the amalgamated free product A *_C B. In this paper, we prove an amalgamation theorem for when A and B are subgroups of any group G acting “nicely” on a hierarchical space (see Definition <ref> below). This class includes all relatively hyperbolic groups, and all hierarchically hyperbolic groups in the sense of Behrstock, Hagen, and Sisto <cit.> (such as mapping class groups of finite-type surfaces, many 3-manifold groups, many Coxeter and Artin groups, compact special groups…). We give here a special case of the result, postponing the full statement to Section <ref>:
Let (G,) be a hierarchically hyperbolic group, let A, B≤ G be subgroups and let C=A∩ B. Suppose that there exist a constant M≥ 0, a basepoint x_0∈ G, and a domain Y_a∈ for every a∈ (A∪ B)-C, such that the following hold:
* max{_Y_a(Cx_0), _Y_a(aCx_0)}≤ M/10;
* _Y_a(Cx_0, aCx_0)≥ M;
* If a∈ A-C and b∈ B-C, then Y_a and aY_b are transverse;
* In the same setting, _Y_a(Cx_0, bCx_0)≤ M/10.
There exists M_0≥ 0, depending only on (G,), such that, if M≥ M_0, then
⟨ A,B⟩_G≅ A *_C B.
§.§ Amalgamation of geometric subgroups
In understanding the above theorem, one should have in mind the following example. Let G=(S) be the (extended) mapping class groups of a finite-type surface S. Let U, V be two closed, connected, incompressible subsurfaces, such that no connected component of S-U (resp. S-V) is an annulus. These conditions ensure that (U) and (V) naturally embed in G, and we denote the image of such embeddings as geometric embedded subgroups[The terminology "geometric" is due to Paris and Rolfsen <cit.> and denotes the image of the homomorphism (W)→(S) for any closed subsurface W, without further assumptions. Our notation thus denotes the cases when a geometric subgroup is also embedded. If there is a standard name for such subgroups we would be grateful for a reference.]. Let ∂ U∩∂ V=Γ be a common boundary multicurve, and suppose ∂ U-Γ and ∂ V-Γ are “sufficiently entangled” in the complement of Γ, meaning that they are far enough in the curve graph (S-Γ). This condition ensures that the intersection of (U) and (V) is the Dehn Twist flat ^|Γ| supported on Γ. Using the separability of the latter in both (U) and (V), one can find finite-index subgroups A≤(U) and B≤(V), such that every element a∈ A-^|Γ| acts with large translation length on some subsurface Y_a U (Assumption (II)). Furthermore, the entanglement of the boundaries of U and V ensures that, whenever a∈ A-^|Γ| and b∈ B-^|Γ|, the subsurfaces Y_a and Y_b must overlap (Assumption (III)). The above example is analysed more thoroughly in Subsection <ref>, where we prove the following, slightly more general result:
Let S be a connected finite-type surface, and let (U) and (V) be two geometric embedded subgroups, where each of U and V is either connected or a multicurve. Let Γ=∂ U∩∂ V (where, with a little abuse of notation, the boundary of a multicurve denotes its support). Suppose that ∂ U-Γ, ∂ V-Γ are both non-empty, and that
_ (S-Γ)(∂ U-Γ,∂ V-Γ)≥ 4.
Then there exist finite index subgroups A≤(U), B≤(V), intersecting along the Dehn twist flat ^|Γ|, such that
⟨ A,B⟩_(S)≅ A *_^|Γ| B.
§.§.§ Comparison with the literature
Our Theorem <ref> is similar in spirit to Leininger-Reid's result for Veech subgroups along a common multitwist <cit.>. The main difference is that, while the subgroups there are supported on the whole surface S (and indeed every element which does not lie in the intersection is pseudo-Anosov), our result deals with reducible subgroups, and the large translations of the elements are witnessed by pairs of transverse subsurfaces.
Our theorem also covers the case when U and V are multicurves. In this setting, it should be compared to Loa's result about free product of multitwists supported on “far enough” multicurves <cit.>. While our procedure requires to pass to finite-index subgroups with large translation in the annular domains, theirs applies to the whole Dehn Twist flats supported on the multicurves, but only when the intersection U∩ V is empty. We also stress that Loa's result gives more information about the amalgam, including the fact that it is undistorted in (S) and parabolically geometrically finite, in the sense of <cit.>.
However, we point out that our result about multicurves is just a very special case of a theorem of Koberda, which can be used to produce more general, undistorted RAAGs in (S) (see <cit.> and its quantitative version by Runnels <cit.>).
§.§ Preserving convexity
If A and B satisfy some property P, it is natural to ask when the subgroup they generate still enjoys P. The feature we focus on in the second half of the paper is hierarchical quasiconvexity (HQC for short), which is the analogue of quasiconvexity in the HHS world. A hierarchically quasiconvex subgroup of a HHG is itself a hierarchically hyperbolic space, and therefore it enjoys numerous properties regarding, for example, a coarse median structure and a quadratic isoperimetric function <cit.>, its asymptotic dimension <cit.>, and the arrangement patterns of top-dimensional quasiflats <cit.>. Therefore, it is relevant to understand when the subgroup generated by two hierarchically quasiconvex subgroups is again hierarchically quasiconvex. We provide sufficient conditions for our amalgamation procedure to preserve hierarchical quasiconvexity (the exact statement is Theorem <ref>):
Let (G,) be a HHG, let A,B≤ G be two hierarchically quasiconvex subgroups, and let C=A∩ B. Suppose that:
* A and B satisfy the hypotheses of Theorem <ref>, for some M≥ 0;
* A and B fill all squares (Definition <ref>);
* A and B have no drift in the orthogonals (Definition <ref>).
There exists a constant M_0≥ 0, depending on (G,) and the above data, such that, if M≥ M_0, then ⟨ A,B⟩_G≅ A*_C B is hierarchically quasiconvex in (G,).
Roughly, two HQC subgroups A and B fill all squares if, whenever two domains U,V∈ are orthogonal and A has large projection to U while B has large projection to V, the intersection A∩ B also has large projection to one of the two domains. This property is equivalent to the fact that A∪ B is hierarchically quasiconvex (see Lemma <ref>).
Moreover, A and B have no drift in the orthogonals if it does not happen that both A and B have bounded projections to some domain U, which is orthogonal to the domains Y_a and Y_b used to detect the amalgamation. In Subsection <ref> we provide a counterexample where the lack of this property falsifies the conclusion of Theorem <ref>.
In our third and last Theorem, we study when our amalgamation procedure preserves strong quasiconvexity. Recall that, given a metric space X, a subspace Y⊆ X is strongly quasiconvex if every quasigeodesic γ with endpoints on Y lies in a neighbourhood of Y, whose radius only depends on X and on the quasigeodesic constants of γ. Such a subset is also called Morse, as strongly quasiconvex geodesics are exactly the Morse directions. Most of the properties of quasiconvex subsets of hyperbolic spaces hold for strongly quasiconvex subsets of general metric spaces <cit.>. Furthermore, as explored in <cit.>, a subspace of a hierarchically hyperbolic space is strongly quasiconvex if and only if it is hierarchically quasiconvex and enjoys a further assumption, the orthogonal projection dichotomy. In Theorem <ref> we prove that the latter property is preserved by our amalgamation procedure. This way, we provide a possible answer to <cit.>:
Let (G,) be a HHG, let A,B≤ G be two strongly quasiconvex subgroups of G, and let C=A∩ B. Suppose that A and B satisfy the hypotheses of Theorem <ref>, for some constant M≥ 0.
There exists a constant M_0≥ 0, depending on (G,) and the strong quasiconvexity gauge of A and B, such that if M≥ M_0 then ⟨ A,B⟩_G≅ A*_C B is strongly quasiconvex in G.
§.§ Pindaric excursus: relative hierarchical quasiconvexity?
In a relatively hyperbolic group, the "right" notion of convexity of a subgroup is relative quasiconvexity. Indeed, a relatively quasiconvex subgroup inherits a relative hyperbolic structure, and the intersection of two relatively quasiconvex subgroups is again relatively quasiconvex. As both HHGs and relatively hyperbolic groups fall into the category of "relative HHGs", in the sense of Definition <ref> below, one could look for a notion that unifies relative quasiconvexity and hierarchical quasiconvexity.
Let (G,) be a relative HHG, and let _0 be a collection of domains which is closed under nesting and contains every U∈ such that U is not hyperbolic. Formulate a notion of hierarchical quasiconvexity relative to _0 for subgroups of G, such that:
* If (G,) is a HHG and _0=∅, one recovers hierarchical quasiconvexity;
* If (G,ℙ) is relatively hyperbolic and _0=ℙ, one recovers relative quasiconvexity;
* Under suitable conditions on _0, if a subgroup is HQC relative to _0 then it admits a structure of a relative hierarchically hyperbolic space;
* The intersection of two HQC subgroups relative to _0 is HQC relative to _0.
We believe a possible approach would be to generalise the notion of transition points on a geodesic in a relatively hyperbolic group, and then try to emulate the characterisation of relative quasiconvexity from <cit.>.
After one finds the right definition, one could attempt to extend Theorem <ref> to relative HQC subgroups, possibly generalising known combination theorems for relatively quasiconvex subgroups (see, among others, <cit.>).
§.§ Outline
Section <ref> provides the background on hierarchically hyperbolic spaces and groups. In Section <ref> we prove the main amalgamation result, Theorem <ref>, which we then apply to certain geometric subgroups of mapping class groups in Section <ref>.
In Section <ref> we strengthen our result to preserve hierarchical quasiconvexity. The proof of Theorem <ref> is in three steps. In Subsection <ref>, we first recall that a subset of a HHS is HQC if and only if it is almost closed under certain quasigeodesics, called hierarchy paths. Next, in Subsection <ref> we determine under which conditions the union A∪ B of two HQC subgroups is again HQC (see Lemma <ref>). Finally, given a hierarchy path connecting two points of A∪ B, in Subsection <ref> we show that it can be decomposed as a union of hierarchy paths with endpoints on cosets of A∪ B, and the conclusion follows from the hierarchical quasiconvexity of the latter cosets.
Finally, Section <ref> is devoted to the proof of the combination result for strongly quasiconvex subgroups, Theorem <ref>.
§.§ Acknowledgements
Firstly, I would like to thank my supervisor, Alessandro Sisto, for his constant support (even during the summer break) and several hints. Moreover, this paper arose as a side quest in a bigger, slightly unrelated project (talk about serendipity); so I am grateful to Yago Antolin, Matt Cordes, Giovanni Sartori, and Alessandro Sisto for numerous contributions to the first half of the paper, and for letting me publish it in this form.
§ A CRASH COURSE IN HIERARCHICAL HYPERBOLICITY
We start by recalling some notions from the world of hierarchically hyperbolic spaces and groups, first introduced by Behrstock, Hagen, and Sisto in <cit.>.
The quasigeodesic space (X,_X) is a hierarchical space if there exists E≥0, called the hierarchical constant, an index set , whose elements will be referred to as domains, and a set { U| U∈} of geodesic metric spaces ( U,_U), called coordinate spaces, such that the following conditions are satisfied:
* (Projections.)
There is a set {π_U: X→ 2^ U| U∈} of projections mapping points in X to sets of diameter bounded by E in the various U∈. Moreover, for all U∈, the coarse map π_U is (E,E)–coarsely Lipschitz and π_U( X) is E–quasiconvex in U.
* (Nesting.)
is equipped with a partial order , and either =∅ or contains a unique –maximal element, denoted by S. When V U, we say V is nested in U. For each U∈, we denote by _U the set of V∈ such that V U. Moreover, for all U,V∈ with V U there is a specified subset ρ^V_U⊂ U with _ U(ρ^V_U)≤ E. There is also a projection ρ^U_V: U→ 2^ V. (The similarity in notation is justified by viewing ρ^V_U as a coarsely constant map V→ 2^ U.)
* (Orthogonality.)
has a symmetric and anti-reflexive relation called orthogonality: we write U V when U,V are orthogonal. Also, whenever V U and U W, we require that V W. We require that for each T∈ and each U∈_T such that {V∈_T| V U}≠∅, there exists a domain W∈_T-{T}, which we call a container for U inside T, such that whenever V U and V T, we have V W. Finally, if U V, then U,V are not –comparable.
* (Transversality and consistency.)
If U,V∈ are not orthogonal and neither is nested in the other, then we say U,V are transverse, denoted U V. In this case, there are sets ρ^V_U⊆ U and ρ^U_V⊆ V, each of diameter at most E and satisfying the Behrstock inequality:
min{_U(π_U(z),ρ^V_U),_V(π_V(z),ρ^U_V)}≤ E
for all z∈ X.
For U,V∈ satisfying V U and for all z∈ X, we have:
min{_U(π_U(z),ρ^V_U),_ V(π_V(z)∪ρ^U_V(π_U(z)))}≤ E.
The preceding two inequalities are the consistency inequalities for points in X.
Finally, if U V, then _W(ρ^U_W,ρ^V_W)≤ E whenever W∈ satisfies either V W or V W and WU.
* (Finite complexity.)
There exists n≥0, the complexity of X (with respect to ), so that any set of pairwise––comparable elements has cardinality at most n.
* (Large links.)
Let U∈, let z,z'∈ X and let N=__U(π_U(z),π_U(z')). Then there exists {T_i}_i=1,…,⌊ N⌋⊆_U- {U} such that, for any domain T∈𝔖_U-{U}, either T∈_T_i for some i, or _T(π_T(z),π_T(z'))<E. Also, _U(π_U(z),ρ^T_i_U)≤ N for each i.
* (Bounded geodesic image.)
For all U∈, all V∈_U- {U}, and all geodesics γ of U, either _ V(ρ^U_V(γ))≤ E or γ∩ N_E(ρ^V_U)≠∅.
* (Partial realisation.)
Let {V_j} be a family of pairwise orthogonal elements of , and let p_j∈π_V_j( X)⊆ V_j. Then there exists z∈ X, which we call a partial realisation point for the family, so that:
* _V_j(z,p_j)≤ E for all j,
* for each j and each V∈ with V_j properly nested in V, we have _V(z,ρ^V_j_V)≤ E, and
* for each j and each V∈ with V_j transverse to V, we have _V(z,ρ^V_j_V)≤ E.
* (Uniqueness.) For each κ≥ 0, there exists
θ_u=θ_u(κ) such that if x,y∈ X and
_ X(x,y)≥θ_u, then there exists V∈ such
that _V(x,y)≥κ.
We often refer to , together with the nesting and orthogonality relations, and the projections as a hierarchical structure for the space X.
Notice that, if E is a hierarchical constant for (X, ), then so is any E'≥ E. Hence, throughout the paper we will always implicitly assume that every hierarchical constant is strictly positive.
Where it will not cause confusion, given U∈, we will often suppress the projection map π_U when writing distances in U, i.e., given A,B⊆ X and P⊆ U we shall write d_U(A,B) for d_ U(π_U(A),π_U(B)) and d_U(A,P) for d_ U(π_U(A),P). Furthermore, when V_1, V_2∈ are such that V_i U or V_i U for i=1,2, we will write d_U(V_1,V_2) for d_ U(ρ^V_1_U,ρ^V_2_U).
A hierarchical space is
* hierarchically hyperbolic if every coordinate space is E-hyperbolic;
* relatively hierarchically hyperbolic if every coordinate space is either E-hyperbolic or -minimal. Notice that this includes relatively hyperbolic spaces and groups, as explained in <cit.>.
All properties of hierarchically hyperbolic spaces whose proofs do not involve the hyperbolicity of coordinate spaces also hold for hierarchical spaces in general. In particular, the following is proved as in <cit.>, which only uses the partial realisation axiom:
Let (X, ) be a hierarchical space. For every U,V,W∈ such that U V and both ρ^U_W, ρ^V_W are defined, then _W(ρ^U_W,ρ^V_W)≤ 2E.
Combining Lemma <ref> and Axiom <ref> we get:
Let U,V,W∈. Suppose that _V(U,W) is well-defined and strictly greater than 2E. Then U and W are transverse.
Moreover, possibly after enlarging the HHS constant E, we get the following variant of the bounded geodesic image axiom, which is proved by combining the consistency inequalities with the original bounded geodesic image axiom, as in <cit.>:
Let (X, ) be a hierarchical space. Let x,y∈ X and let U,V∈ be such that U V. Then either _U(x,y)≤ E or every geodesic [π_V(x), π_V(y)]⊆ V must pass E-close to ρ^U_V.
Let R≥0 and let (b_U)_U∈∈∏_U∈2^ U be a tuple such that for each U∈, the U–coordinate b_U has diameter ≤ R. Then (b_U)_U∈ is R–consistent if for all V,W∈, we have
min{_V(b_V,ρ^W_V),_W(b_W,ρ^V_W)}≤ R
whenever V and W are transverse, and
min{_W(b_W,ρ^V_W),_V(b_V∪ρ^W_V(b_W))}≤ R
whenever V is properly nested in W.
Later we will need the following, which is <cit.> (notice that it holds for hierarchically hyperbolic spaces):
Let (X,) be a hierarchically hyperbolic space. Then for each R≥1, there exists θ=θ(R) so that, for any R–consistent tuple (b_U)_U∈, there exists x∈ X such that _V(x,b_V)≤θ for all V∈.
Observe that the uniqueness axiom (Definition (<ref>)) implies that the realisation point x for (b_U)_U∈, provided by Theorem <ref>, is coarsely unique.
Let (X,) be a hierarchical space. An automorphism consists of a map g: X→X, a bijection g^♯: → preserving nesting and orthogonality, and, for each U∈, an isometry g^♢(U): U→ (g^♯(U)) for which the following two diagrams commute for all U,V∈ such that U V or U V:
π_g^♯ (U)∘ g=g^♢ (U)∘π_U
and
g^♢ (V)∘ρ^U_V=ρ^g^♯ (U)_g^♯ (V)∘ g^♢ (U).
Whenever it will not cause ambiguity, we will abuse notation by dropping the superscripts and just calling all maps g.
We say that two automorphisms g,g' are equivalent, and we write g∼ g', if g^♯=(g')^♯ and g^♢(U)=(g')^♢(U) for each U∈. Given an automorphism g, a quasi-inverse g for g is an automorphism with g^♯=(g^♯)^-1 and such that, for every U∈, g^♢(U)=g^♢(U)^-1. Since the composition of two automorphisms is an automorphism, the set of equivalence classes of automorphisms forms a group, denoted Aut().
A finitely generated group G acts on a hierarchical space (X,) by automorphisms if there is a group homomorphism G→Aut(). Notice that this induces a G-action on X by uniform quasi-isometries.
If a group G acts on a (relative) HHS (X,), in such a way that the action on X is metrically proper and cobounded and the action on is cofinite, then G is called a (relative) hierarchically hyperbolic group, and any quasi-isometry between G and X given by the Milnor-Švarc Lemma endows G with the (relative) HHS structure of X (possibly with a larger HHS constant).
§ DETECTING AN AMALGAMATED FREE PRODUCT IN A HHG
We are now ready to prove Theorem <ref> from the introduction, in the following extended form.
Let G be a group acting on a hierarchical space (X,), and fix a basepoint x_0∈ X. Let A_1,…,A_n≤ G be subgroups, and let C be a subgroup contained in the intersection ⋂_l=1^n A_l. Suppose that there exists a constant M≥ 100E, where E is a HHS constant of X, and a domain Y_a∈ for every a∈ (⋃_i A_i)-C, such that the following hold:
* max{_Y_a(Cx_0), _Y_a(aCx_0)}≤ M/10;
* _Y_a(Cx_0, aCx_0)≥ M;
* If a∈ A_i-C and b∈ A_j-C where i≠ j, then Y_a and aY_b are transverse;
* In the same setting, _Y_a(Cx_0, bCx_0)≤ M/10.
Then C coincides with the pairwise intersections A_i∩ A_j for every i≠ j, and in particular C = ⋂_l=1^n A_l. Moreover, the natural map
*_C A_∙:= A_1 *_C … *_C A_n→⟨ A_1,…,A_n⟩_G
is an isomorphism.
Given a word w=g_1… g_k c∈_C A_∙-{1}, we shall show that one can find a collection of pairwise transverse domains W_1,…, W_k, one for every subword g_1… g_j of w, such that the projection of g_1… g_j Cx_0 on W_i only changes when one passes from g_1… g_i-1 Cx_0 to g_1… g_i Cx_0. In a way, the domain W_i detects “the i-th step of w”, and as a consequence of assumption <ref> all subwords g_1… g_j with j≥ i+1 cannot undo the translation on W_i. In particular, one gets that _W_k(Cx_0, wCx_0) is greater than some positive constant, depending on M and E, and therefore w is non-trivial in G as it acts non-trivially on (X,).
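In quantitative terms, writing C_j=g_1… g_j-1Cx_0 and W_j=g_1… g_j-1Y_j as in the proof below, the Claims that follow (combined in a Lemma at the end of this section) show that for every index i one has
_W_i(x, C_i)≤ M/10+5E for every x∈ C_1∪…∪ C_i-1 and _W_i(y, C_i+1)≤ M/10+5E for every y∈ C_i+2∪…∪ C_k,
while _W_i(C_i, C_i+1)≥ M. This is the precise sense in which the domain W_i records the i-th letter of w, and no later letter can move the projection to W_i back.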
We now move to the proof of Theorem <ref>. First notice that, whenever i≠ j, C=A_i∩ A_j. Indeed C⊆ A_i∩ A_j, and if a∈ (A_i∩ A_j)-C then by assumption <ref> applied to a=b we would get that _Y_a(Cx_0, aCx_0)≤ M/10, in contrast with assumption <ref>.
Now, we want to prove that the natural epimorphism _C A_∙→⟨ A_1,…,A_n⟩_G is injective. In other words, given any non-trivial word w= g_1… g_k c∈_C A_∙-{1}, where c∈ C and every two consecutive g_i and g_i+1 belong to different factors, we have to show that w≠_G 1. This is clearly true if k≤ 2, so we focus on the case k≥3.
For every i=1,…, k let Y_i=Y_g_i, and set
C_i=g_1… g_i-1Cx_0, W_i=g_1… g_i-1Y_i,
so that
_W_i(C_i,C_i+1)= _Y_i(Cx_0,g_iCx_0)≥ M.
We break the rest of the proof of Theorem <ref> into a series of Claims.
W_i and W_i+1 are transverse for every i=1,…, k-1.
Notice that Y_i and g_iY_i+1 are transverse by assumption <ref>, since g_i and g_i+1 lie in different factors of the amalgamation. Therefore, the domains W_i=(g_1… g_i-1)Y_i and W_i+1=(g_1… g_i-1)g_i Y_i+1 must be transverse as well, since the G-action preserves transversality.
For every i=2, …, k, _W_i(W_i-1, C_i)≤ M/10+E.
Symmetrically, for every i=1, …, k-1, _W_i(W_i+1, C_i+1)≤ M/10+E.
We only prove the first statement, as the second follows analogously. As _W_i-1(C_i-1, C_i)≥ M≥ 4E, by the Behrstock inequality one of the following happens:
* Every point of π_W_i(C_i) is E-close to ρ^W_i-1_W_i. Then the conclusion clearly follows.
* Every point of π_W_i(C_i-1) is E-close to ρ^W_i-1_W_i. Then we have that
_W_i(C_i, W_i-1)≤_W_i(C_i, C_i-1)+E≤ M/10+E,
where we used Assumption <ref>.
For every i=2, …, k-1, _W_i(W_i-1, W_i+1)≥ 4/5M-4E>6E.
We simply notice that
_W_i(W_i-1, W_i+1)≥_W_i(C_i, C_i+1)-_W_i(C_i, W_i-1)-_W_i(C_i+1, W_i+1)-
-_W_i(ρ^W_i-1_W_i)-_W_i(ρ^W_i+1_W_i)≥ 4/5M-4E.
We continue with a general statement about families of pairwise transverse domains in a hierarchical space:
Let (X,) be a hierarchical space, and let {W_1, …, W_k}⊂ be a collection of domains such that W_i is transverse to W_i+1 for every i=1,…, k-1. If _W_j(W_j-1,W_j+1)> 6E for every j=2,…, k-1, then {W_1, …, W_k} are pairwise transverse, and _W_j(W_i,W_r)> 2E for every i<j<r.
Notice that the hypothesis of Lemma <ref> are satisfied by the W_is we are considering, in view of Claims <ref> and <ref> and of the fact that M≥ 100E.
We proceed by induction on k. If k=3 we just need to show that W_1 and W_3 are transverse. This is true since _W_2(W_1,W_3)> 6E, and we can invoke Corollary <ref>.
Now assume that the theorem is true for every collection of at most k-1 elements. In particular, by applying the inductive hypothesis to the collections {W_1, …, W_k-1} and to {W_2, …, W_k}, we get that _W_j(W_i,W_r)> 2E whenever i<j<r and (i,r)≠(1,k). Thus, we only need to show that _W_j(W_1,W_k)> 2E for every j=2,…, k-1, and again Corollary <ref> will imply that W_1 and W_k are transverse. Now
_W_j(W_1,W_k)≥_W_j(W_j-1,W_j+1)-_W_j(W_j-1,W_1)-
-_W_j(W_j+1,W_k)-_W_j(ρ^W_j-1_W_j)-_W_j(ρ^W_j+1_W_j).
Since by Behrstock inequality we have that _W_j(W_j-1,W_1)≤ E, and similarly _W_j(W_j+1,W_k)≤ E, we get that
_W_j(W_1,W_k)> 6E-4E= 2E,
as required.
For every i≠ j and every x∈ C_j, _W_i(x, W_j)≤ E.
We assume that j<i, as the case j>i is dealt with analogously. If by contradiction _W_i(x, W_j)> E, then the Behrstock inequality implies that _W_j(x, W_i)≤ E. Moreover _W_j(W_i,W_j+1)≤ E, either by combining the Behrstock inequality with Lemma <ref> (if j≤ i-2) or because W_i=W_j+1 (if j=i-1). Thus we get
_W_j(x,C_j+1)≤_W_j(x,W_i)+_W_j(ρ^W_i_W_j)+_W_j(W_i,W_j+1)+_W_j(ρ^W_j+1_W_j)+_W_j(W_j+1,C_j+1)≤ M/10+5E,
where we used that _W_j(W_j+1,C_j+1)≤ M/10+5E by Claim <ref>. But then
M≤_W_j(C_j,C_j+1) ≤_W_j(x,C_j+1)≤ M/10+5E,
giving a contradiction as M≥ 100E.
_W_k(C x_0, w Cx_0)=_W_k(C_1, C_k+1)≥ 9/10M-5E>0.
The Claim concludes the proof of Theorem <ref>, because it proves that w acts non-trivially on X, and therefore is non-trivial in G.
This is just a matter of putting all the above Claims together. Indeed, one has that
_W_k(C_1, C_k+1)≥_W_k(C_k,C_k+1)-_W_k(C_k, W_k-1)-_W_k(ρ^W_k-1_W_k∪ρ^W_1_W_k∪ C_1).
The first term of the right-hand side is at least M by assumption <ref>. The second one is at most M/10+E by Claim <ref>. Regarding the third one, we have that
_W_k(ρ^W_k-1_W_k∪ρ^W_1_W_k∪ C_1)≤
≤_W_k(ρ^W_k-1_W_k)+_W_k(W_k-1,W_1)+_W_k(ρ^W_1_W_k∪ C_1)≤ E+E+(2E)=4E,
where we used Claim <ref> to bound the last term. Hence
_W_k(C_1, C_k+1)≥ 9/10 M-5E,
as required.
Before moving forward, we point out the following lemma, which combines some of the above Claims and will be useful later.
For every 1≤ j<i≤ k and every x∈ C_j,
_W_i(x, C_i)≤ M/10+5E.
Symmetrically, for every 1≤ i+1<j≤ k and every x∈ C_j,
_W_i(x, C_i+1)≤ M/10+5E.
This is just a combination of some inequalities from the above proof. Indeed
_W_i(x, C_i)≤_W_i(x,W_j)+_W_i(W_j)+_W_i(W_j, W_i-1)+_W_i(W_i-1)+_W_i(W_i-1, C_i).
The first term is at most E by Claim <ref>; the third term is at most E by the Behrstock inequality, combined with Lemma <ref>; the last term is at most M/10+E by Claim <ref>. Thus
_W_i(x, C_i)≤ 4E+M/10+E=M/10+5E.
The second inequality follows analogously.
§ AMALGAMATION OF GEOMETRIC SUBGROUPS ALONG COMMON BOUNDARIES
We now describe an application of Theorem <ref> to mapping class groups of finite-type surfaces. The subgroups we shall amalgamate are mapping class groups of embedded subsurfaces, which overlap “sufficiently” away from a collection of common boundary components. The precise result is Theorem <ref> below, which in turn is Theorem <ref> from the introduction.
§.§ Notation and setting
We gather here the (fairly standard, but sometimes subtle) notation we shall need to prove Theorem <ref>.
§.§.§ Curves, surfaces, and mapping classes
In what follows, let S be a possibly disconnected surface of finite-type (that is, an oriented, compact surface from which a finite number of points is removed). If S is connected, let (S) be the (extended) mapping class group of S, that is, the group of isotopy classes of self-homeomorphisms of S fixing the boundary pointwise. If instead S is disconnected, its mapping class group is defined as the direct product of the mapping class groups of its connected components.
By curve we denote the isotopy class of an embedding 𝕊^1↪ S which is essential, meaning that it does not bound a disk with at most one puncture. A multicurve is a collection of pairwise disjoint, non-isotopic curves. We often see a multicurve as a subsurface of S, by replacing each curve with a closed annulus whose core is the curve. Given a curve γ, let T_γ be the Dehn Twist around γ (see e.g. <cit.>).
The curve graph S is the simplicial graph whose vertices are all curves on S, and where adjacency corresponds to disjointness. This definition does not apply to some surfaces of small complexity:
* If S is either a sphere with four punctures, or a torus with at most two punctures, then two curves are adjacent in S if and only if their intersection number is minimal among all pairs.
* If S is an annulus then its annular curve graph, which we still denote by S, is a quasiline. We won't need the actual definition, and we refer to <cit.> for further explanations.
The following is an easy consequence of known facts, but we provide a proof for completeness. Recall that a subgroup H of a group G is separable if, for every g∈ G-H, there exists a finite quotient ψ G→G such that ψ(g)∉ψ(H).
Let S be a finite-type surface, and let Γ⊆∂ S be a collection of boundary components. Then the Dehn twist flat ^|Γ| supported on Γ is a separable subgroup of (S).
Let Ŝ be the surface obtained from S by gluing a once-punctured disk onto each boundary component belonging to Γ, and let {p_1,…,p_r} be the punctures added this way. By e.g. <cit.>, the quotient (S)/^|Γ| is isomorphic to the group (Ŝ, {p_1,…,p_r}), which is the subgroup of (Ŝ) of all mapping classes fixing the punctures {p_1,…,p_r} pointwise.
Now let g∈(S)-^|Γ|, and let h be its image in (Ŝ, {p_1,…,p_r}), which is therefore non-trivial. Now, (Ŝ, {p_1,…,p_r})≤(Ŝ), and the latter is residually finite (see e.g. <cit.> and the discussion below it for punctured surfaces); hence we can find a finite quotient G of (Ŝ, {p_1,…,p_r}) where h projects non-trivially. Then, if we take the composition (S)→(Ŝ, {p_1,…,p_r})→ G, we get a finite quotient such that the image of ^|Γ| is trivial while the image of g is not.
§.§.§ The marking graph
Let ℳ(S) be the marking graph of S, as defined in <cit.>. A vertex x of ℳ(S) consists of a multicurve of maximal cardinality, called the support of x and denoted by supp(x), and, for every α∈supp(x), a choice of a set p of diameter at most 1 in the annular curve graph α, called the transversal associated to α. By e.g. <cit.>, which in turn builds on observations from <cit.>, ℳ(S) has the following HHS structure:
* The domain set is the collection of subsurfaces of S;
* Nesting is containment of subsurfaces and orthogonality is disjointness of subsurfaces (up to isotopy);
* For every subsurface U, the associated coordinate space U is the curve graph of U, and the projection π_U:ℳ(S)→ U is the subsurface projection.
Furthermore, (S) acts geometrically on ℳ(S), and therefore inherits the HHG structure described above.
§.§ The result
By subsurface we mean the isotopy class of an embedding U↪ S, where U is a finite-type surface whose connected components cannot be pairs of pants. The boundary ∂ U of a connected, non-annular subsurface U is defined as the closure of U minus its interior: if U is an annulus, with a little abuse of notation we define its boundary as the core of the annulus. The boundary of a disconnected subsurface is the union of the boundaries of its connected components.
If U is a closed subsurface, we can extend every mapping class on U to the identity on S-U, and we get a homomorphism (U)→(S). This map is injective if and only if every curve in ∂ U is essential in S, and no two boundary components are isotopic (see e.g. <cit.>). In this case, we call (U) a geometric embedded subgroup of (S). Our main theorem describes when two such subgroups span a free product, amalgamated along a common boundary Dehn twist flat:
Let S be a connected finite-type surface, and let (U) and (V) be two geometric embedded subgroups, where each of U and V is either connected or a multicurve.
Let Γ=∂ U∩∂ V, and let Γ_U=∂ U-Γ (resp. Γ_V=∂ V-Γ). Suppose that Γ_U, Γ_V are both non-empty, and that _ (S-Γ)(Γ_U,Γ_V)≥ 4.
Then there exist finite index subgroups A≤(U), B≤(V), intersecting along the Dehn twist flat ^|Γ|, such that
⟨ A,B⟩_(S)≅ A *_^|Γ| B.
The proof will highlight some useful techniques to verify the requirements of Theorem <ref>. Such tools often use only the existence of a HHG structure for the ambient group, and can therefore be exported to more general settings.
We start with some considerations on the two subsurfaces and their mapping class groups:
Given two subsurfaces W nested in U and W' nested in V, if neither of the two subsurfaces is a sub-multicurve of Γ, then W and W' are transverse.
If by contradiction W and W' were either disjoint, or one contained in the other, then there would be two curves δ⊂ W and δ'⊂ W', both disjoint from Γ, such that _ (S-Γ)(δ, δ')≤ 1. However, notice that δ would either belong to, or be disjoint from, ∂ U, since W is nested in U, and the same is true for δ' and ∂ V. Therefore
_ (S-Γ)(δ, δ')≥_ (S-Γ)(Γ_U, Γ_V) -2 ≥ 2, contradicting our assumption.
(U)∩(V)=^|Γ|.
Clearly ^|Γ|≤(U)∩(V). Conversely, pick an element g∈(U)∩(V). By e.g. <cit.>, there exists a power k such that g^k=∏_i=1^l g_i, where each g_i is either a partial pseudo-Anosov or a power of a Dehn Twist, and the supports {R_i} of the g_i are all pairwise disjoint, closed subsurfaces.
Now, as g is supported on both U and V, each R_i must be nested in both U and V, and therefore must be nested in Γ by Claim <ref>. In other words, g^k∈^|Γ|.
Now, if one between U and V is a multicurve we immediately get that g∈^|Γ| as well, and we are done. Otherwise, the above argument shows that g projects to a torsion element of (Û), where Û is the surface obtained from U by capping every curve in Γ with a once-punctured disk. However, as ∂ U-Γ is non-empty, Û still has non-empty boundary, so <cit.> yields that (Û) is torsion-free (more precisely, the cited result is stated for surfaces with negative Euler characteristic; however we can always embed (Û) in the mapping class group of such a surface, for example by gluing a sphere with five disks removed along one of the boundary components of Û). Then again g∈^|Γ|, as required.
Next, we produce the subgroups A, B. Extend Γ to a multicurve Γ' of maximal cardinality, and let x_0∈ℳ(S) be any marking supported on Γ'. Let
C_0=sup{_W(x_0,V) | W nested in U, W not a sub-multicurve of Γ}+sup{_W'(x_0,U) | W' nested in V, W' not a sub-multicurve of Γ}.
The above quantity is well-defined, as Claim <ref> tells us that, whenever W is nested in U and is not a sub-multicurve of Γ, we have that W and V are transverse, so that the projection of V to W is well-defined. Furthermore, C_0 is finite. Indeed, let y_V∈ℳ(S) be a marking whose support contains ∂ V. Then, whenever W is nested in U but not in Γ, the projection ρ^V_W coincides with the subsurface projection of y_V to W. In other words,
_W(x_0,V)=_W(x_0,y_V)≤ E·_ℳ(S)(x_0,y_V)+E,
where E is a HHS constant for ℳ(S), and we used that subsurface projections are E-coarsely Lipschitz. We can similarly define a marking y_U whose support contains ∂ U, and use it to bound the second term of Equation (<ref>). Notice that, in view of the above argument, we can rewrite Equation (<ref>) in the following form:
C_0=sup{_W(x_0,y_V) | W nested in U, W not a sub-multicurve of Γ}+sup{_W'(x_0,y_U) | W' nested in V, W' not a sub-multicurve of Γ}.
Now, let F⊂(U) be the subset of all elements a such that, for every Y∈,
_Y(y_U, a(y_U))≤ 12C_0+100E+100
Notice that F is finite, by the fact that (S) acts metrically properly on ℳ(S), combined with the uniqueness axiom (<ref>) for the HHS ℳ(S). By Lemma <ref> there exists a finite-index subgroup A≤(U) containing ^|Γ|, such that if a∈ F∩ A then a∈^|Γ|. One can define B analogously. Notice that
^|Γ|≤ A∩ B≤(U)∩(V)=^|Γ|,
where the last equality is Claim <ref>.
We are finally ready to show that A and B satisfy the hypotheses of Theorem <ref>, and therefore
⟨ A,B⟩_(S)≅ A *_^|Γ| B.
We first describe the subsurfaces Y_g we shall use to prove the Theorem. For every a∈ A-^|Γ| there exists a domain Y_a such that _Y_a(y_U, a(y_U))≥ 12C_0+100E+100, by how we chose A.
We first notice that Y_a must be nested in U. Indeed, the action of (U) fixes ∂ U and every curve α∈supp(y_U) which is disjoint from U, and the transversal for each such α is moved within distance at most 4 in α (see e.g. <cit.>). Furthermore, if a surface Y is either disjoint from U, or properly contains U, then the subsurface projection of both y_U and a(y_U) only depends on the above data. Therefore _Y(y_U, a(y_U))≤ 4 whenever Y is not nested in U.
Furthermore, suppose that every Y_a as above was nested in Γ. Then we could multiply a by a suitable multitwist in ^|Γ| to find an element of (A∩ F)-^|Γ|, contradicting our choice of A. Thus pick any Y_a as above which is nested in U but not in Γ.
Now let M=10C_0+100E+40, and choose x_0 as above. We now verify the four requirements from Theorem <ref>, for every a∈ A-^|Γ| and every b∈ B-^|Γ|:
* <ref> The action of ^|Γ| fixes every curve α∈Γ'-Γ, and the associated transversal (again, up to distance 4 in α). Therefore, as Y_a U but is not nested in Γ, the subsurface projection of ^|Γ| x_0 to Y_a is coarsely the same as that of x_0, and the same is true for a^|Γ| x_0. In particular max{_Y_a(^|Γ| x_0), _Y_a(a^|Γ| x_0)}≤ 4≤ M/10.
* <ref> By the above discussion we get _Y_a(^|Γ| x_0,a^|Γ| x_0)≥_Y_a(x_0,ax_0)-8. Furthermore Equation (<ref>) yields that
_Y_a(x_0,ax_0)-8≥_Y_a(y_U,ay_U)-2C_0-8≥ M.
* <ref> As a^-1Y_a and Y_b are nested in U and V respectively, and neither is a sub-multicurve of Γ (recall that a preserves U and fixes every curve of Γ), Claim <ref> tells us that a^-1Y_a and Y_b are transverse; equivalently, Y_a and aY_b are transverse.
* <ref> Firstly _Y_b(^|Γ| x_0,a^|Γ| x_0)≤_Y_b(x_0,ax_0). Moreover
_Y_a(ax_0, Y_b)≥_Y_a(ax_0, x_0)-_Y_a(x_0, Y_b)-_Y_a(ρ^Y_b_Y_a)≥ M-C_0-E≥ 2E.
Hence by the Behrstock inequality _Y_b(ax_0, Y_a)≤ E. In turn, this means that
_Y_b(x_0, ax_0)≤_Y_b(x_0, Y_a)+ _Y_b(ρ^Y_a_Y_b)+_Y_b(Y_a, ax_0)≤ C_0+2E≤ M/10.
The proof of Theorem <ref> is now complete.
§ A COMBINATION THEOREM FOR HIERARCHICALLY QUASICONVEX SUBGROUPS
This Section is devoted to the proof of Theorem <ref> from the introduction, regarding the hierarchical quasiconvexity of the amalgamated free product of two HQC subgroups A,B of a HHG (G,).
§.§ Quasiconvexity and friends
A subspace Z of a geodesic metric space is R-quasiconvex, for some constant R≥ 0, if every geodesic segment with endpoints on Z is contained in the R-neighbourhood of Z.
For the rest of the section, let (X,) be a HHS.
A subspace Y⊆ X is κ-hierarchically quasiconvex, for some κ [0,+∞)→ [0,+∞), if:
* For every U∈, π_U(Y) is κ(0)-quasiconvex in U;
* Realisation: for every x∈ X and every R∈[0,+∞), if _U(x,Y)≤ R for every U∈ then _X(x,Y)≤κ(R).
It follows from the definition, together with the fact that coordinate projections are coarsely Lipschitz, that if Y and Z are two subspaces of X within Hausdorff distance d, and if Y is κ-hierarchically quasiconvex, then Z is κ'-hierarchically quasiconvex, for some function κ' only depending on κ, d, and (X,). This observation will be used repeatedly throughout the section.
An equivalent definition of hierarchical quasiconvexity, which more closely resembles Definition <ref>, involves being closed under certain quasigeodesic paths, called “hierarchy paths”:
For λ≥ 1, a (not necessarily continuous) path γ [a,b]⊂ℝ→ X is a λ–hierarchy path if
* γ is a (λ, λ)-quasigeodesic,
* for each W∈, the path π_W(γ) is an unparameterised (λ,λ)-quasigeodesic, meaning that it becomes a (λ,λ)-quasigeodesic after precomposing it with an increasing function g:[0,l]→[a,b] mapping 0 to a and l to b.
<cit.> states that any two points of X are connected by a λ_0-hierarchy path, for some constant λ_0 only depending on (X, ). From now on, we will assume that the HHS constant E has been chosen greater than this λ_0.
A subset Y⊆ X is κ–hierarchically quasiconvex if and only if there exists a function Λ [1,+∞)→ [0,+∞) such that every λ-hierarchy path with endpoints on Y is contained in the Λ(λ)-neighbourhood of Y. Moreover, κ and Λ each determine the other.
The following combines <cit.>:
Let Y⊆ X be κ-hierarchically quasiconvex. There exists a coarsely Lipschitz, coarse retraction _Y X→ Y, called the gate on Y, such that, for every x∈ X and every W∈, π_W(_Y(x)) uniformly coarsely coincides with the coarse closest point projection of π_W(x) to the κ(0)-quasiconvex subset π_W(Y)⊆ W.
Given two HQC subspaces A,B, the gate of one onto the other can be characterised in terms of the coarse intersection of A and B:
Let A and B be κ-hierarchically quasiconvex subsets of X, and let _X(A,B)=r. There exist non-negative constants R_0 and D_1, both depending only on κ and r, such that _Haus(N_R_0(A)∩ N_R_0(B), _B(A))≤ D_1.
Furthermore, if G is a group and A, B are subgroups of G, then by <cit.> there exist D_2, depending on R_0 and on a choice of a word metric for G, such that _Haus(N_R_0(A)∩ N_R_0(B), A∩ B)≤ D_2. Thus, for HQC subgroups of a HHG, the gate of one onto the other is within finite Hausdorff distance from the actual intersection:
Let A and B be κ-hierarchically quasiconvex subgroups of a HHG (G,). There exists a constant L, which only depends on (G,) and κ, such that _Haus(A∩ B, _B(A))≤ L.
§.§ Hierarchical quasiconvexity of a union
In hyperbolic spaces, a union of quasiconvex subspaces is again quasiconvex. We provide a proof of this fact for completeness:
Let X be a δ-hyperbolic space, and let Y,Y' be two R-quasiconvex subspaces. Then Y∪ Y' is R'-quasiconvex, where
R'=R+2δ+_X(Y,Y')+1.
Let γ be a geodesic segment with endpoints a,b∈ Y∪ Y'. If a and b are both in Y, or both in Y', then γ is in the R-neighbourhood of Y∪ Y', by quasiconvexity of each subset. Thus, suppose without loss of generality that a∈ Y and b∈ Y'. Let p∈ Y and p'∈ Y' be such that _X(p,p')≤_X(Y,Y')+1, and choose geodesics [a,p], [p, p'], and [p', b]. Then γ lies in the 2δ-neighbourhood of [a,p]∪ [p, p']∪ [p', b], as geodesic quadrangles are 2δ-slim in hyperbolic spaces. In turn, [a,p]∪ [p, p']∪ [p', b] is contained in the (R+_X(Y,Y')+1)-neighbourhood of Y∪ Y', and the conclusion follows.
Now we want to establish an analogue of Lemma <ref> for HQC subspaces of a HHS. More precisely, we shall prove that the necessary and sufficient condition for the union to be HQC is the following. Recall that, given two subspaces C⊆ A of a metric space X, we say that C is R-dense in A if A⊆ N_R(C).
Let (X,) be a HHS, and let A,B⊆ X. We say that A and B fill all squares if there exists a constant T such that, for every U,V∈ such that U and V are orthogonal, either π_U(_A(B)) is T-dense in π_U(A), or π_V(_B(A)) is T-dense in π_V(B).
Notice that, if A and B fill all squares for some constant T, then they also fill all squares for any bigger constant T'≥ T.
Definition <ref> roughly forbids the following situation, which gives the name to the property. Let X=ℝ^2 with the HHS structure coming from the usual Cartesian coordinates, let A be the x-axis and B the y-axis. Both A and B are hierarchically quasiconvex, as they correspond to factors of the product structure. However, A∪ B is not hierarchically quasiconvex, as it does not satisfy the realisation property: any point p∈ℝ^2 has the same x-coordinate as some point in A and the same y-coordinate as some point in B, but can be arbitrarily far from A∪ B. In other words, A and B leave some gaps in the square.
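To make the failure of realisation quantitative in this example: in the standard product HHS structure on ℝ^2, the only domains with unbounded coordinate spaces are the one recording the x-coordinate, call it U, and the one recording the y-coordinate, call it V. For the point p_t=(t,t) with t>0 one then has
_U(p_t,A∪ B)=_V(p_t,A∪ B)=0, while _ℝ^2(p_t,A∪ B)=t,
so no function κ can witness the realisation property of Definition <ref> for A∪ B.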
Let (X,) be a HHS, and let A,B⊆ X be κ-hierarchically quasiconvex subspaces. Then A and B fill all squares (Definition <ref>), for some constant T≥ 0, if and only if A∪ B is κ_∪-hierarchically quasiconvex, where T and κ_∪ each determine the other (together with κ and _X(A,B)).
We split the two implications of Theorem <ref> into Lemmas <ref> and <ref> below.
Let (X,) be a HHS, and let A,B⊆ X be κ-hierarchically quasiconvex subspaces. If A∪ B is κ_∪-hierarchically quasiconvex then A and B fill all squares (Definition <ref>), for some constant T which only depends on (X,), κ, and κ_∪.
Fix a HHS constant E for (X,). Assume by contradiction that A and B do not fill all squares. This means that, for every T∈[0,+∞), one can find two orthogonal domains U,V∈ such that min{_U(A), _V(B)}≥ T, but neither π_U(_A(B)) is T-dense in π_U(A) nor π_V(_B(A)) is T-dense in π_V(B).
From now on, we will say that a quantity is uniform if it does not depend on T, but only on (X,), κ, and κ_∪. We shall eventually choose T greater than every uniform constant we will find along the way, and this will yield the desired contradiction.
Since π_U(_A(B)) is not T-dense in π_U(A), there exists a point a∈ A such that _U(a,_A(B))> T. Similarly, choose b∈ B such that _V(b,_B(A))> T. Now consider the following tuple of coordinates:
x_W=π_W(a) if W is nested in U;
x_W=π_W(b) if W is nested in V;
x_W=π_W(a) if W is orthogonal to both U and V;
x_W=ρ^U_W∪ρ^V_W otherwise,
where, in order to make the definition more compact, we slightly abused the notation by setting ρ^U_W=∅ if U is orthogonal to W, and similarly for V. Notice that the restriction of the tuple {x_W} to the domains W nested in U is E-consistent, as a is a point of X and therefore satisfies the consistency inequalities by Axiom (<ref>); for the same reason the restrictions to the domains nested in V, and to the domains orthogonal to both U and V, are E-consistent as well. Arguing as in <cit.>, one can show that the whole tuple {x_W}_W∈ is K_1-consistent for some uniform constant K_1. Then by the Realisation Theorem <ref> we can find p∈ X and a uniform constant K_2 such that _W(p,x_W)≤ K_2 for every W∈.
There exist uniform constants D≥ 0 and K_3≥ K_2 such that, if T≥ D, then _W(p,A∪ B)≤ K_3 for every W∈.
We refer to how we defined x_W in Equation (<ref>). If either W is nested into U or V, or W is orthogonal to both, then π_W(p) is K_2-close to either π_W(a) or π_W(b), and we have nothing to prove. Thus suppose, without loss of generality, that either U is nested in W or U is transverse to W. Then π_W(p) is K_2-close to ρ^U_W∪ρ^V_W, and in particular
_W(p,U)≤ K_2+_W(ρ^V_W)+_W(U,V)≤ K_2+3E,
where we used that _W(U,V)≤ 2E by Lemma <ref>. Now let P_U be the product region associated to U, as defined in <cit.>. For our purposes, P_U can be thought of as the subspace of all z∈ X such that, for every Y∈ such that U is properly nested in Y or U is transverse to Y, the projection π_Y(z) coincides with ρ^U_Y, up to some error which is bounded in terms of E. By <cit.>, there exist uniform constants D,λ,ν≥ 1 such that, if T≥ D, then there exists a λ-hierarchy path γ connecting a to _A(B)⊆ A which passes ν-close to the product region P_U. Since A is κ-hierarchically quasiconvex, by Lemma <ref> we have that γ is contained in some uniform neighbourhood of A. Thus the distance between A and P_U is uniformly bounded. In turn, since the projection map π_W X→ W is E-coarsely Lipschitz, π_W(A) is uniformly close to π_W(P_U), which in turn uniformly coarsely coincides with ρ^U_W; since _W(p,ρ^U_W)≤ K_2+3E, this shows that _W(p,A) is uniformly bounded, as required.
Since A∪ B is κ_∪-hierarchically quasiconvex, the realisation property for A∪ B tells us that p is κ_∪(K_3)-close to A∪ B, and without loss of generality we can assume that _X(p,A)≤κ_∪(K_3). Then _B(p) is uniformly close to _B(A), as gate maps are coarsely Lipschitz. However, by how gate maps are constructed (see Definition <ref>), we have that π_V(_B(p)) uniformly coarsely coincides with the projection of π_V(p) to the quasiconvex subset π_V(B), and π_V(p) coarsely coincides with π_V(b), which already lies in π_V(B).
Summarising, there is some constant K_4, only depending on (X,), κ, and κ_∪, such that _V(b, _B(A))≤ K_4, and this is against our choice of b if we choose T>K_4.
Let (X,) be a HHS, and let A,B⊆ X be κ-hierarchically quasiconvex subspaces. If A and B fill all squares (Definition <ref>), for some constant T≥ 0, then A∪ B is κ_∪-hierarchically quasiconvex, where κ_∪ only depends on (X,), κ, _X(A,B), and T.
Firstly, by Lemma <ref> the projection π_U(A∪ B) is κ_∪(0)-quasiconvex, where κ_∪(0) is a constant depending on κ(0), _X(A,B), and a HHS constant E for (X,).
Thus we are left to prove the realisation property for A∪ B.
To this purpose, fix R≥ 0, let x∈ X be such that _W(x,A∪ B)≤ R for every W∈, and set
Ω=κ(0)+R+T+E(D+KD+K+4),
where K is the constant from <cit.>, depending only on κ and (X, ), such that for every b∈ B
_X(b, _A(b))≤ KD+K.
We claim that there exists F∈{A,B} such that _W(x,F)≤Ω for every W∈. This will imply that _X(x,A∪ B)≤κ(Ω), so we can set κ_∪(R)=κ(Ω). If this is not the case, let U,V∈ be (necessarily distinct) domains such that _U(x,B)> Ω and _V(x,A)> Ω. Notice that, as Ω≥ R, we must have that _U(x,A)≤ R and _V(x,B)≤ R. There are a few configurations to analyse, depending on the relation between U and V.
* U and V are transverse: By the Behrstock inequality, Definition <ref>.(<ref>), we can assume without loss of generality that _U(x,ρ^V_U)≤ E. Moreover, since Ω≥ 4E and _U(ρ^V_U)≤ E, we have that _U(B,ρ^V_U)> 2E, and again the Behrstock inequality tells us that π_V(B) has diameter at most 3E. Thus
_V(x,A)≤_V(x,B)+_V(B)+_V(A,B)≤ R+3E+E(D+1),
where we used that _V(A,B)≤ E(D+1) as the projection π_V X→ V is (E,E)-coarsely Lipschitz. But then _V(x,A)≤Ω, against our assumption.
* U is nested in V: Since _U(x,B)>Ω≥ E, the bounded geodesic image Lemma <ref> yields that every geodesic connecting π_V(x) to π_V(B) must pass E-close to ρ^U_V. Since _V(x,B)≤ R we get that _V(x,U)≤ R+E, and in turn
_V(U,A)≥_V(x,A)-_V(x,U)-_V(ρ^U_V)≥ 2E+κ(0).
Now, every geodesic γ⊆ V connecting two points of π_V(A) is contained in the κ(0)-neighbourhood of π_V(A), and therefore is at least 2E-far from ρ^U_V. Then again the bounded geodesic image Lemma <ref> tells us that π_U(A) has diameter at most E, and as in the previous case we get that _U(x,B)≤Ω, yielding a contradiction. Notice that the same argument covers the case when V is nested in U.
* U and V are orthogonal: Since A and B fill all squares, we can assume without loss of generality that π_U(_A(B)) is T-dense in π_U(A), so there exists b∈ B such that _U(x,_A(b))≤ R+T.
_U(x,B)≤_U(x,b)≤_U(x,_A(b))+_U(_A(b), b)≤ R+T+E(KD+K+1)≤Ω,
where again we used that π_U is (E,E)-coarsely Lipschitz. This again yields a contradiction.
§.§ Hierarchical quasiconvexity of an amalgam
We devote the rest of the Section to the proof of Theorem <ref>, regarding the amalgamation of HQC subgroups. First, a definition.
Let Z be a subset of the hierarchically hyperbolic
space (X,). Define P^1_λ(Z) to be the union of all λ–hierarchy paths between points in Z. One can then inductively set P^n_λ(Z)=P^1_λ(P^n-1_λ(Z)) for all n≥ 2.
Recall that every two points of X are connected by a λ_0-hierarchy path, where, λ_0 is the constant from Remark <ref>; thus, for every λ≥λ_0 and every n≥ 1, the hull P^n_λ(Z) is non-empty and contains Z.
The following restates the core result of <cit.>, which was crucial in establishing the equivalence between hierarchical quasiconvexity and being closed under hierarchy paths:
There exist N∈ℕ, λ≥λ_0, and κ, all depending only on (X,), such that, for every Z⊆ X, the N-th hierarchy path hull P^N_λ(Z) is κ-hierarchically quasiconvex.
For every θ≥ 0, let H_θ(Z) be the θ-quasiconvex hull of Z, as defined in e.g. <cit.>. By <cit.>, there exists θ_0 such that, for every Z⊂ X and every θ≥θ_0, the hull H_θ(Z) is κ_θ-hierarchically quasiconvex, where κ_θ only depends on θ and the HHS structure. Moreover, <cit.> states that there exist N and λ as above, and a constant θ≥θ_0, such that _Haus(P^N_λ(Z), H_θ(Z))≤ D, where D only depends on λ, θ, and (X, ). Thus, as P^N_λ(Z) is within Hausdorff distance at most D from a κ_θ-hierarchically quasiconvex set, it is itself κ-hierarchically quasiconvex by Remark <ref>, for some function κ depending on κ_θ and D (and therefore, ultimately, only on (X,)).
We now introduce the technical requirement for Theorem <ref>, whose necessity will be discussed in Subsection <ref> below.
Let (G,) be a HHG, let A,B≤ G be two subgroups satisfying the hypothesis of Theorem <ref>, and let C=A∩ B. We say that A,B have no drift in the orthogonals if there exists R≥0 such that the following hold. For every a,b∈ (A∪ B)-C belonging to different subgroups, and for every domain U∈ which is orthogonal to both a^-1Y_a and Y_b, we have that either π_U(A) or π_U(B) is R-dense in U.
Let (G,) be a HHG, let A,B≤ G be two κ-hierarchically quasiconvex subgroups, and let C=A∩ B. Suppose that:
* There exists M≥100E such that A and B satisfy the hypotheses of Theorem <ref>;
* A and B fill all squares (Definition <ref>), for some constant T≥ 0;
* A and B have no drift in the orthogonals (Definition <ref>), for some constant R≥ 0.
There exist a positive constant 𝔐 and a function 𝔨: [0,+∞)→ [0,+∞), both depending only on κ, T, R, and (G,), such that, if M≥𝔐, then ⟨ A,B⟩_G≅ A*_C B is 𝔨-hierarchically quasiconvex in G.
Theorem <ref> is a consequence of the following technical statement:
There exist functions K': [0,+∞)→ [0,+∞) and M: [0,+∞)→ [100E,+∞), depending only on κ, T, R, and (G,), such that the following holds. Let λ be as in Lemma <ref>. For every K≥ 0, if M≥ M(K) then
P^1_λ(N_K(A*_C B))⊆ N_K'(K)(A*_C B).
Set K_0=0, and iteratively define M_n=M(K_n-1) and K_n=K'(K_n-1) for every n=1,…, N, where N is the integer from Lemma <ref>. Now let 𝔐=max_i=1,…, N M_n. If M≥𝔐, then
P^N_λ(A*_C B)⊆ P^N-1_λ(N_K_1(A*_C B))⊆…⊆ N_K_N(A*_C B).
Now, P^N_λ(A*_C B) is κ-HQC, where κ is the function from Lemma <ref> which only depends on (G,). Furthermore, as P^N_λ(A*_C B) and A*_C B are within Hausdorff distance at most K_N, Remark <ref> implies that A*_C B is itself 𝔨-hierarchically quasiconvex, for some function 𝔨 only depending on K_N and κ. As by hypothesis K_N only depends on κ, T, R, and (G,), this concludes the proof of Theorem <ref>.
Let γ be a λ-hierarchy path between points of N_K(A*_C B). If we connect each endpoint of γ with any point of A*_C B within distance K, we get a λ'-hierarchy path γ' which extends γ, for some constant λ'≥λ depending only on K, λ, and the HHS constant of G. Up to the action of A*_C B on itself, we can assume that the endpoints of γ' are the identity element 1 and some w=g_1… g_k c, where c∈ C, g_i∉C for all i=1,…, k, and every two consecutive g_i and g_i+1 belong to different factors of the amalgamation.
A and B satisfy the hypothesis of Theorem <ref>, whose data include a basepoint x_0∈ G and a domain Y_a∈ for every a∈ (A∪ B)-C. As in Notation <ref>, for every i=1,…, k let Y_i=Y_g_i, and set
C_i=g_1… g_i-1Cx_0, W_i=g_1… g_i-1Y_i.
Moreover, for every i=1,…, k let
A_i=
g_1… g_i-1A if g_i∈ A;
g_1… g_i-1B if g_i∈ B.
This way, A_i contains both C_i and C_i+1.
We shall prove Lemma <ref> assuming that M≥ M(K), where
M(K)=.
In the expression above, L=L(κ, ) is the constant from Lemma <ref>, while the constant S=S(E,κ(0),λ',) will be defined in the proof of Claim <ref> below (more precisely, in the paragraph named Case 1).
Now fix any i=2, …, k-1. Recall that _W_i(C_i, C_i+1)≥ M by Assumption <ref> of Theorem <ref>. Combining this with Lemma <ref>, we get that
_W_i(1,w)≥_W_i(C_i, C_i+1)-_W_i(C_i, 1)-_W_i(C_i+1, w)≥ 4M/5-10E.
Notice that, by our choice of M in Notation <ref>, we have that
4M/5-10E≥ 2M/5+2λ'.
In other words, the balls of radius M/5 around π_W_i(1) and π_W_i(w) are at distance at least 2λ'. Then, as the projection of γ' to W_i is a (λ', λ')-quasigeodesic (after reparametrisation), there must be a point ℓ_i∈γ' such that
min{_W_i(1,ℓ_i), _W_i(w,ℓ_i)}>M/5.
Now, the core of the proof is the following:
For every i=2…, k-1, the distance between ℓ_i and A_i is bounded by some constant Ψ, depending on K, κ, T, R, and (G,).
Before proving the Claim, we show that it implies Lemma <ref>. Indeed, we can decompose γ' as a union of λ'-hierarchy paths γ'_i with endpoints {ℓ_i, ℓ_i+1}, where we set ℓ_1=1 and ℓ_k=w. Every γ'_i thus connects two points on some coset of N_Ψ(A∪ B). As A and B T-fill all squares, by Theorem <ref> there exists κ_1, depending on κ and T, such that A∪ B is κ_1-hierarchically quasiconvex; then Remark <ref> implies that N_Ψ(A∪ B) is κ'-hierarchically quasiconvex, where κ' depends on κ_1 and Ψ. Finally, Lemma <ref> implies that each γ'_i is contained in a neighbourhood of a coset of N_Ψ(A∪ B), whose radius only depends on κ' and λ'. Summing everything up, we just proved that, if M≥ M(K), then γ' decomposes as a union of subpaths, whose distance from A *_C B is bounded above only in terms of K, κ, T, R, and (G,).
Recall that, for every i=2,…, k-1, we have a point ℓ_i such that
min{_W_i(1,ℓ_i), _W_i(w,ℓ_i)}>M/5,
and we want to prove that ℓ_i and A_i are uniformly close. To do so, it is enough to prove that, for every U∈, _U(A_i, ℓ_i) is uniformly bounded in terms of K, κ, T, R, and (X,), because then we can apply the realisation property of the κ-HQC A_i. There are five cases to analyse, depending on the relation between U and W_i.
Case 1: U=W_i. Let c∈ C_i be such that
_W_i(1,c)≤_W_i(1,C_i)+E≤ M/10+6E,
where we invoked Lemma <ref>. Similarly, let c'∈ C_i+1 be such that
_W_i(w,c')≤_W_i(w,C_i+1)+E≤ M/10+6E.
Fix three geodesics [π_W_i(1),π_W_i(c)]∪ [π_W_i(c),π_W_i(c')]∪ [π_W_i(c'),π_W_i(w)] ⊂ W_i. As W_i is E-hyperbolic and π_W_i(γ') is a (λ', λ')-quasigeodesic (after reparametrisation), there exists a constant S', depending on E and λ', such that π_W_i(ℓ_i) is S'-close to one of the three geodesics (this is a consequence of e.g. <cit.>, plus the fact that geodesic quadrangles in E-hyperbolic spaces are 2E-thin).
Now, if π_W_i(ℓ_i) is (2S'+6E)-close to [π_W_i(c),π_W_i(c')], then by κ(0)-quasiconvexity of A_i we have that _W_i(ℓ_i, A_i)≤ S, where S := 2S'+6E+κ(0) (this is the constant we use in Notation <ref> to choose M).
Thus suppose by contradiction that π_W_i(ℓ_i) is at least (2S'+6E)-far from the geodesic [π_W_i(c),π_W_i(c')], and without loss of generality we can assume that π_W_i(ℓ_i) is S'-close to some point r on the geodesic [π_W_i(1),π_W_i(c)], as in Figure <ref>. In particular _W_i(r,c)>S'+6E, because otherwise π_W_i(ℓ_i) would be at distance at most (2S'+6E) from π_W_i(c). But then
_W_i(1,ℓ_i)≤_W_i(1,r)+_W_i(r,ℓ_i)=
=_W_i(1,c)- _W_i(r,c)+_W_i(r,ℓ_i)<
< (M/10+6E)- (S'+6E)+S'=M/10<M/5,
contradicting Equation (<ref>).
Case 2: U ⊊ W_i. The projection ρ^U_W_i is well-defined. Furthermore, by Case 1 there is some q∈ A_i such that _W_i(ℓ_i, q)≤ S.
Suppose first that _W_i(ℓ_i, U)> S+2E. Then any geodesic [π_W_i(ℓ_i), π_W_i(q)] inside W_i cannot pass through the E-ball around ρ^U_W_i, so the bounded geodesic image Lemma <ref> tells us that _U(A_i, ℓ_i)≤_U(q, ℓ_i)≤ E, and we are done.
Thus suppose that _W_i(ℓ_i, U)≤ S+2E. As _W_i(1,ℓ_i)>M/5, the triangle inequality yields that
_W_i(1,U)≥_W_i(1,ℓ_i)-_W_i(ℓ_i, U)-_W_i(ρ^U_W_i)> M/5-S-3E=
=M/10+5E + (M/10-S-8E)≥_W_i(1, C_i) + 2E,
where we used that, by our choice of M in Notation <ref>, M/10≥ S+10E, and that _W_i(1, C_i)≤ M/10+5E by Lemma <ref>. This means that any geodesic connecting π_W_i(1) to the closest point in π_W_i(C_i) cannot pass E-close to ρ^U_W_i, and the bounded geodesic image Lemma <ref>, applied to the nested domains U ⊊ W_i, yields that _U(1, C_i)≤ E. Symmetrically, one gets that _U(w, C_i+1)≤ E. The situation in U is depicted in Figure (<ref>).
Now, the projection of A_i inside U is a κ(0)-quasiconvex subset, and π_U(γ') is a (λ', λ')-quasigeodesic (after reparametrisation) whose endpoints π_U(1) and π_U(w) are within distance at most E from π_U(A_i). Therefore, again as a consequence of e.g. <cit.>, the distance between π_U(ℓ_i) and π_U(A_i) is bounded in terms of κ(0), λ', and E.
Case 3: U ⋔ W_i. We claim that both A_i and ℓ_i project uniformly close to ρ^W_i_U inside U. Indeed, _W_i(C_i,C_i+1)≥ M≥ 4E, and in particular one between C_i and C_i+1 is E-far from ρ^U_W_i. Then the Behrstock inequality yields _U(A_i, W_i)≤ E.
In order to bound _U(ℓ_i, W_i), first notice that the projections of 1, ℓ_i, and w to W_i are all at distance at least M/5≥ 4E from each other. Thus, again by Behrstock inequality, at least two of these points project E-close to ρ^W_i_U inside U. If _U(ℓ_i, W_i)≤ E we are done; otherwise
_U(1, w)≤_U(1, W_i)+_U(ρ^W_i_U)+_U(W_i, w) ≤ 3E.
As π_U(ℓ_i) lies on a (λ',λ')-quasigeodesic between π_U(1) and π_U(w), we get that _U(ℓ_i, W_i) is bounded in terms of λ' and _U(1, w)≤ 3E.
Case 4: W_i ⊊ U. Again, we claim that both A_i and ℓ_i project uniformly close to ρ^W_i_U inside U. As pointed out above _W_i(C_i, C_i+1)≥ 4E, so by the bounded geodesic image Lemma <ref> every geodesic in U with endpoints on π_U(C_i) and π_U(C_i+1) must pass E-close to ρ^W_i_U. But π_U(A_i) is κ(0)-quasiconvex, hence _U(W_i, A_i)≤ E+κ(0).
Now, both _W_i(1, ℓ_i) and _W_i(ℓ_i, w) are greater than M/5 ≥ E. Thus any two geodesics [π_U(1), π_U(ℓ_i)] and [π_U(ℓ_i), π_U(w)] inside U must pass E-close to ρ^W_i_U. In turn, π_U(γ') is a (λ',λ')-quasigeodesic (after reparametrisation); therefore, again by e.g. <cit.>, there exists a constant ω≥ 0, depending only on λ' and E, such that the segment of π_U(γ') between π_U(1) and π_U(ℓ_i) is ω-close to the geodesic [π_U(1), π_U(ℓ_i)]. Summing the two facts, we can find a point p∈π_U(γ') such that _U(p,W_i)≤ω+E. Arguing similarly for the other segment of π_U(γ'), we get a point q∈π_U(γ') such that _U(q,W_i)≤ω+E. Now, π_U(ℓ_i) lies on the segment of π_U(γ') between p and q, as in Figure <ref>, and _U(p,q)≤ 2ω+3E. This implies that the distance between π_U(ℓ_i) and ρ^W_i_U is controlled in terms of E, ω, and λ'.
Case 5: U ⊥ W_i. This case is itself split into several subcases, as it also involves the relation between U and both W_i-1 and W_i+1.
Case 5.1: First, notice that neither W_i-1 nor W_i+1 can be nested in U, as otherwise one of them would be orthogonal to W_i.
Case 5.2: Suppose that W_i-1 is also orthogonal to U. As A and B have no drift in the orthogonals (Definition <ref>), one of the following happens:
* If π_U(A_i) is R-dense in U, then in particular _U(ℓ_i, A_i)≤ R, and we are done.
* Otherwise, π_U(A_i-1) is R-dense in U. If _U(A_i-1)≤ T then U has diameter at most T+2R, and again we conclude as _U(ℓ_i, A_i)≤ R+2T. Otherwise, as A_i-1 and A_i fill all squares, we must have that π_U(_A_i-1(A_i)) is T-dense in π_U(A_i-1), and therefore (T+R)-dense in U. Then, as π_U(C_i) coarsely coincides with π_U(_A_i-1(A_i)) by Lemma <ref>, we get that the distance between π_U(ℓ_i) and π_U(C_i)⊆π_U(A_i) is uniformly bounded.
Case 5.3: We are left with the cases when both ρ^U_W_i-1 and ρ^U_W_i+1 are well-defined. First, we notice that
M≤_W_i-1(C_i-1,C_i)≤_W_i-1(C_i-1,U)+_W_i-1(ρ^U_W_i-1∪ρ^W_i_W_i-1) +_W_i-1(C_i, W_i)≤
≤_W_i-1(C_i-1,U)+4E +M/10+E,
where we used Lemma <ref> to bound the projections of the two orthogonal domains U and W_i, and Claim <ref> to bound _W_i-1(C_i, W_i). Therefore
_W_i-1(C_i-1,U)≥ 9/10M-5E.
Furthermore _W_i-1(1,C_i-1)≤ M/10+5E by Lemma <ref>, so
_W_i-1(1,U)≥_W_i-1(C_i-1,U)-_W_i-1(C_i-1)-_W_i-1(1,C_i-1)≥
≥9/10M-5E-M/10-M/10-5E=7/10M-10E.
Notice that both _W_i-1(1,U) and _W_i-1(C_i-1,U) are greater than 2E, by our choice of M in Notation <ref>.
Now we claim that _U(1, C_i-1)≤ 3E. Indeed, if W_i-1 ⋔ U, then the Behrstock inequality (<ref>) yields that both π_U(1) and π_U(C_i-1) are E-close to ρ^W_i-1_U. If instead U ⊊ W_i-1 we notice that any geodesic connecting π_W_i-1(1) to π_W_i-1(C_i-1) lies in the (M/10+5E)-neighbourhood of π_W_i-1(C_i-1), and in particular it cannot pass E-close to ρ^U_W_i-1 as
_W_i-1(C_i-1, U)-(M/10+5E)≥ 4M/5-10E≥ 2E;
thus the bounded geodesic image Lemma <ref> tells us that _U(1, C_i-1)≤ E.
Arguing the exact same way, one gets that _U(w, C_i+2)≤ 3E, so the picture inside U is as in Figure (<ref>).
Now, the projections of A_i-1, A_i, and A_i+1 inside U are all κ(0)-quasiconvex, so their union is (κ(0)+4E+2)-quasiconvex by Lemma <ref>. As π_U(γ') is a (λ', λ')-quasigeodesic (after reparametrisation), whose endpoints π_U(1) and π_U(w) are within distance at most 3E from π_U(C_i-1) and π_U(C_i+2), respectively, we get that π_U(ℓ_i) must be ξ-close to π_U(A_i-1∪ A_i∪ A_i+1), for some constant ξ depending on λ', κ(0), and E.
Now, if π_U(ℓ_i) is (ξ+T)-close to π_U(A_i) then we are done. Otherwise, suppose that _U(ℓ_i, A_i)>ξ+T, so that we can assume without loss of generality that π_U(ℓ_i) is ξ-close to some point q∈π_U(A_i-1). Then by triangle inequality
_U(A_i-1)≥_U(C_i,q)≥_U(C_i,ℓ_i)-_U(ℓ_i, q)≥_U(A_i, ℓ_i)-_U(ℓ_i, q)≥ T.
Moreover,
_W_i(A_i)≥_W_i(C_i, C_i+1)≥ M≥ T.
As A and B fill all squares (Definition <ref>) and W_i ⊥ U,
we have that either π_U(_A_i-1(A_i)) is T-dense in π_U(A_i-1), or π_W_i(_A_i(A_i-1)) is T-dense in π_W_i(A_i). Combining this with Lemma <ref>, which states that the gates coarsely coincide with the intersection, and the fact that projection maps are (E,E)-coarsely Lipschitz, we get that either π_U(C_i) is (T+LE+E)-dense in π_U(A_i-1), or π_W_i(C_i) is (T+LE+E)-dense in π_W_i(A_i).
However _W_i(C_i, C_i+1)≥ M>T+LE+E, again by our choice of M; so we must have that π_U(C_i) is (T+LE+E)-dense in π_U(A_i-1). In turn, this means that
_U(ℓ_i, A_i)≤_U(ℓ_i, C_i)≤_U(ℓ_i, A_i-1)+(T+LE+E)≤ξ+T+LE+E,
and we are done.
The proof of Lemma <ref>, and in turn of Theorem <ref>, is now complete.
§.§ Why no drift?
We now show that the conclusion of Theorem <ref> might not hold if one removes the hypothesis of having no drift in the orthogonals, Definition <ref>. Let ℱ_a,b be the free group on two generators a and b, and let D_x,y=⟨ x,y | x^2=y^2=1⟩ be a copy of D_∞ generated by the involutions x and y. Let
G=ℱ_a,b× D_x_1,y_1× D_x_2,y_2,
and let X be the Cayley graph for G with respect to the generators {a,b, x_1, y_1, x_2, y_2}. The G-action on X, which is a direct product of hyperbolic spaces, makes G into a HHG; in particular, combining <cit.>, we can find a HHG structure whose only domains with unbounded coordinate spaces are the following:
* For every g∈ℱ_a,b, there is a domain gL_a whose coordinate space is g⟨ a⟩a, and a domain gL_b defined analogously;
* The Bass-Serre tree T of the splitting ℱ_a,b=⟨ a⟩ * ⟨ b⟩ is a domain, whose coordinate space is the tree itself;
* Finally, there are two domains W_1 and W_2, whose coordinate spaces are, respectively, the Cayley graphs of D_x_1,y_1 and D_x_2,y_2 with respect to the generating sets {x_1,y_1} and {x_2,y_2}.
The relations between the above domains are as follows: T, W_1, and W_2 are pairwise orthogonal; for every g∈ℱ_a,b, gL_a and gL_b are nested inside T; every two domains which are nested inside T are transverse.
Now let A=⟨ a^N x_1x_2⟩ and B=⟨ b^N y_1y_2⟩, where N is a positive integer to be chosen later. For every g∈ A-{1} let Y_g=L_a, and similarly for every g∈ B-{1} let Y_g=L_b. One can choose N large enough that A and B satisfy the assumptions of Theorem <ref>. Moreover, notice that the projection of A to every domain which is not L_a has diameter bounded by some constant K, while π_L_a(A) is coarsely dense in L_a. In particular, A is hierarchically quasiconvex, and similar considerations hold for B. We also notice that A and B (K+1)-fill all squares, as for every two orthogonal domains U and V we have that min{_U(A),_V(B)}≤ K. However, A and B do not satisfy Definition <ref>, as for example the unbounded domain W_1 is orthogonal to both L_a and L_b but both A and B have bounded projection to W_1.
Finally, A*B is not hierarchically quasiconvex. Indeed, the projection of A*B to the product D_∞^2 = D_x_1,y_1× D_x_2,y_2 is within finite distance from the diagonal ⟨ (x_1y_1,x_2y_2)⟩; hence the projection of A * B to both W_1 and W_2 is coarsely dense, but a point p∈ D^2_∞ can be arbitrarily far from the diagonal. Thus A*B does not satisfy the realisation property from Definition <ref>.
§ COMBINATION OF STRONGLY QUASICONVEX SUBGROUPS
We conclude the paper by studying when our amalgamation procedure preserves the following notion of quasiconvexity:
Let X be a geodesic metric space. A subspace Y⊆ X is Q-strongly quasiconvex, for some function Q: [0,+∞)→ [0,+∞) called the strong convexity gauge of Y, if, given any λ≥ 0, every (λ, λ)-quasigeodesic with endpoints on Y lies in the Q(λ)-neighbourhood of Y.
The above notion is equivalent to quasiconvexity in hyperbolic spaces (see e.g. <cit.>), but it is stronger in general.
We also need the following definition from <cit.>:
For Θ≥ 0, a subset A of an HHS (X,) has the Θ–orthogonal projection dichotomy if for all U,V∈ with U ⊥ V, if _U(A)≥Θ then π_V(A) is Θ-dense in V.
The following Lemma shows how strong quasiconvexity and hierarchical quasiconvexity are related:
Let (X,) be a hierarchically hyperbolic space. A subspace Y is Q-strongly quasiconvex, for some gauge Q, if and only if it is κ-hierarchically quasiconvex and has the Θ-orthogonal projection dichotomy, for some κ and Θ. Moreover, the gauge Q and the pair (κ,Θ) each determine the other.
Let (G,) be a HHG, let A,B≤ G be two Q-strongly quasiconvex subgroups of G, and let C=A∩ B. Suppose that there exists M≥0 such that A and B satisfy the hypotheses of Theorem <ref>, for some choice of Y_a,Y_b∈ for every a∈ A-C and every b∈ B-C.
There exists a positive constant 𝔐≥ 0 and a function 𝔔: [0,+∞)→ [0, +∞), both depending on Q and (G,), such that if M≥𝔐 then ⟨ A,B⟩_G≅ A*_C B is 𝔔-strongly quasiconvex in G.
In view of Lemma <ref>, Theorem <ref> can be rephrased in the following form, which is the one we shall prove:
Let (G,) be a HHG, let A,B≤ G be two κ-hierarchically quasiconvex subgroups of G, and let C=A∩ B. Suppose that:
* There exists M≥100E such that A and B satisfy the hypotheses of Theorem <ref>;
* There exists Θ≥ 0 such that A and B have the Θ-orthogonal projection dichotomy (Definition <ref>).
Then there exist positive constants 𝔐, 𝔗≥ 0 and a function 𝔨: [0,+∞)→ [0, +∞), all depending on κ, Θ, and (G,), such that if M≥𝔐 then ⟨ A,B⟩_G≅ A*_C B is 𝔨-hierarchically quasiconvex in G, and has the 𝔗-orthogonal projection dichotomy.
Firstly, we prove that, if M≥Θ, then the orthogonal projection dichotomy for A and B implies the second and third hypotheses of Theorem <ref> (Definitions <ref> and <ref>), for some constants T and R depending on κ, Θ, and (G,).
A and B fill all squares. Let U,V∈ be such that U ⊥ V, and suppose that
min{_U(A), _V(B)}≥ 2Θ+1.
By the orthogonal projection dichotomy, π_U(B) is Θ-dense in U. This means that, for every a∈ A, there is some b∈ B such that _U(a,b)≤Θ, and in particular _U(a, _A(b))≤ 2Θ+1 as π_U(_A(b)) is defined by taking the coarse closest point projection of π_U(b) onto π_U(A). Hence π_U(_A(B)) is (2Θ+1)-dense in π_U(A), that is, we proved that A and B T-fill all squares for T=2Θ+1.
A and B have no drift in the orthogonals. Next, notice that the Θ-orthogonal projection dichotomy for A and B implies that A and B have no drift in the orthogonals, for R=Θ. Indeed, if b∈ B-C and U∈ is orthogonal to Y_b, then π_U(B) is Θ-dense in U, as we know that _Y_b(B)≥_Y_b(C,bC)≥ M≥Θ.
Orthogonal projection dichotomy. Now assume that
M≥Θ+4E+𝔐,
where 𝔐 is the constant from Theorem <ref> (which in turn depends on κ, E, and the constants T and R from the previous paragraphs). This choice of M grants the hierarchical quasiconvexity of A*_C B. Our final goal is to prove that A*_C B satisfies the 𝔗-orthogonal projection dichotomy, where
𝔗=.
Let U ⊥ V be such that _U(A*_C B)≥𝔗. Up to the action of the group, we can assume that _U(1,w)≥𝔗/2, where w=g_1… g_k c is some element of A*_C B, with every g_i in a different factor than g_i+1. Define A_i, C_i, and W_i as in Notation <ref>.
If _U(A_i)≥Θ for some i then, by the orthogonal projection dichotomy for either A or B, we have that π_V(A_i) is Θ-dense in V. In particular π_V(A*_C B) is 𝔗-dense in V, and we are done.
Thus suppose that max_i=1, …, k{_U(A_i)}< Θ. Our current goal is to find an index j such that, inside U, the factor A_r projects far from both 1 and w whenever the difference |j-r| is sufficiently small, as we will clarify in Equation (<ref>) below.
Let j≤ k be the first index for which
_U(1, A_j)≥𝔗/4-Θ
(such an index exists as, for example, _U(1, A_k)≥_U(1, w)-_U(A_k)≥𝔗/2-Θ). Notice that j>3, as 1∈ A_1 and _U(A_1∪ A_2∪ A_3)≤ 3Θ, which is strictly less than 𝔗/4-Θ by our choice of 𝔗, Equation (<ref>). Furthermore _U(1, A_j-1)≤𝔗/4-Θ, by minimality of j. Hence
_U(A_j, w)≥_U(1, w)-_U(1, A_j-1)-_U(A_j-1∪ A_j)≥𝔗/4-3Θ,
where we used that A_j-1 intersects A_j and therefore _U(A_j∪ A_j-1)≤ 2Θ. Summarising, we found an index j such that
min{_U(1, A_j), _U(A_j,w)}≥𝔗/4-3Θ.
Now notice that, whenever |j-r|≤ 3, we have that
min{_U(1, A_r), _U(A_r,w)}≥𝔗/4-7Θ.
Indeed
_U(1, A_r)≥_U(1, A_j)-_U(⋃_t=j^rA_t),
and as each A_t intersects the next one we have that
_U(⋃_t=j^rA_t)≤∑_t=j^r_U(A_r)≤ (|j-r|+1)Θ≤ 4Θ.
The same argument shows that _U(A_r,w)≥𝔗/4-7Θ, hence the situation in U is as in Figure (<ref>).
Now we look at the relation between U and any W_r for which |j-r|≤ 3.
Containment implies the conclusion. Suppose first that W_r ⊊ U, for some W_r as above. Then V, which was orthogonal to U, is also orthogonal to W_r. As
_W_r(A_r)≥_W_r(C_r, C_r+1)≥ M≥Θ,
the orthogonal projection dichotomy for A_r implies that π_V(A_r) is Θ-dense in V, and we are done. Then from now on assume that U does not contain any W_r as above.
No transversality. Next, we argue that U cannot be transverse to any W_r. Indeed _W_r(C_r, C_r+1)≥ M≥ 4E. Therefore, if U were transverse to W_r, then at least one between C_r and C_r+1 would be at distance greater than E from ρ^U_W_r. By the Behrstock inequality, this would imply that _U(W_r, A_r)≤ E, and in turn
_U(1, W_r)≥_U(1, A_r)-_U(ρ^W_r_U)-_U(W_r, A_r)≥
≥𝔗/4-7Θ-2E≥ 2E,
by how we chose 𝔗 in Equation (<ref>). The same proof gives that _U(W_r,w)≥ 2E. But then the Behrstock inequality would again imply that 1 and w both project E-close to ρ^U_W_r inside W_r, which is impossible.
No consecutive orthogonality. Then we can assume that U is either orthogonal to, or nested in, each W_r with |j-r|≤ 3.
Suppose that both |j-r|≤ 3 and |j-(r-1)|≤ 3. We claim that U cannot be orthogonal to both W_r-1 and W_r. If this was the case, then π_U(C_r) would be R-dense inside U, as A and B have no drift in the orthogonals (Definition <ref>). But this cannot happen, as _U(C_r)≤_U(A_r)≤Θ while ( U)≥_U(1,w)≥𝔗/2, which is strictly greater than 2R+Θ by our choice of 𝔗 in Equation (<ref>).
No alternated nesting. We are left with the case when, whenever |j-r|≤ 3 and |j-(r-1)|≤ 3, U is nested inside one between W_r-1 and W_r. As there are seven domains between W_j-3 and W_j+3, we can find three indices j-3≤ a<b<c≤ j+3 such that U is nested in W_a, W_b, and W_c. The consistency Axiom (<ref>) yields that _W_b(W_a, U)≤ E and _W_b(W_c, U)≤ E. Hence
_W_b(W_a,W_c)≤_W_b(W_a, U)+(ρ^U_W_b)+_W_b(U, W_c)≤ 3E.
This gives a contradiction, because a<b<c and Claim <ref> tells us that _W_b(W_a,W_c) must be at least 6E.
The proof of Theorem <ref> is now complete.
|
http://arxiv.org/abs/2409.03080v1 | 20240904210855 | Explainable AI for computational pathology identifies model limitations and tissue biomarkers | [
"Jakub R. Kaczmarzyk",
"Joel H. Saltz",
"Peter K. Koo"
] | q-bio.TO | [
"q-bio.TO"
] |
§ INTRODUCTION
Digital pathology has emerged as a transformative force in medicine, ushering in an era where computational methods can augment and enhance the diagnostic and prognostic capabilities of pathologists. By digitizing whole slide images (WSIs) of tissue specimens, this field has opened up new avenues for applying advanced machine learning techniques to analyze complex histological patterns and features. The potential impact of computational pathology is far-reaching, promising to improve diagnostic accuracy, standardize interpretation, and uncover novel biomarkers that may inform personalized treatment strategies <cit.>.
Recently, attention-based multiple instance learning (ABMIL) <cit.> has emerged as a powerful approach to analyze WSIs for various pathological tasks, demonstrating performance that often rivals or surpasses that of expert pathologists <cit.>. ABMIL models treat each WSI as a collection of smaller image patches (instances) and use attention mechanisms to identify and focus on the most relevant regions for the task at hand. Importantly, multiple instance learning allows ABMIL models to learn from specimen-level labels, not requiring exhaustive pixel-level annotations, which are time-consuming and costly to obtain<cit.>. This feature makes ABMIL models particularly well-suited for tasks such as cancer detection <cit.>, diagnosis <cit.>, identification of primary cancer origin <cit.>, grading <cit.>, genomic aberration detection <cit.>, molecular phenotyping <cit.>, treatment response prediction <cit.>, and prognostication <cit.>.
However, the widespread adoption of ABMIL models in clinical settings is hindered by challenges in model interpretability and trustworthiness <cit.>. A key limitation lies in the heavy reliance of interpretations based on ABMIL's attention, which is often used as a proxy for understanding model behavior. While attention highlights regions of interest within a WSI, they do not necessarily reflect the direct influence of these regions on model predictions <cit.>. This disconnect between attention and model output can lead to misinterpretations of model behavior, potentially eroding trust in the model's decisions and limiting its clinical utility <cit.>. In addition, post hoc model explanations via attribution methods, such as LIME <cit.> and SHAP <cit.>, make restrictive additive or linear assumptions of individual pixels, which have been argued to not reflect a model's decision making process <cit.>.
To address these challenges, we introduce HIPPO (Histopathology Interventions of Patches for Predictive Outcomes), an explainable AI method designed to enhance trust in ABMIL models and provide deeper insights into their decision-making processes. HIPPO goes beyond traditional attention-based interpretations by quantitatively assessing the impact of specific tissue regions on model predictions. By simulating targeted interventions through the occlusion or inclusion of individual or groups of patches, HIPPO enables a more nuanced understanding of how different histological features influence ABMIL model outputs.
We demonstrate the utility of HIPPO by applying it to two major tasks in computational pathology: metastasis detection and cancer prognosis prediction. In the context of metastasis detection, we evaluate five foundation models in pathology using the CAMELYON16 dataset <cit.>. Our analysis uncovers model-specific limitations and biases that would have remained hidden using attention mechanism alone. We reveal that some models rely heavily on extratumoral tissue for metastasis detection, while others are surprisingly insensitive to small tumor regions. These findings highlight the importance of rigorous model evaluation beyond standard performance metrics and underscore the potential of HIPPO in identifying when and why models might fail.
For cancer prognosis, we apply HIPPO to ABMIL models trained on breast cancer and cutaneous melanoma datasets from The Cancer Genome Atlas. Our results demonstrate that HIPPO can identify tissue regions more strongly associated with prognosis compared to those highlighted by attention. Strikingly, we find that high-attention regions can sometimes have counterintuitive effects on prognostic predictions, further emphasizing the limitations of relying solely on attention for model interpretation. By quantitatively assessing the impact of tumor-infiltrating lymphocytes (TILs) on model predictions, we confirm that the models have captured the known prognostic significance of TILs in both breast cancer and melanoma. This ability to link model behavior to established biological knowledge is crucial for building trust in AI-driven prognostic tools.
Beyond model interpretation, HIPPO opens up new possibilities for in silico experimentation in computational pathology. We showcase this potential by using HIPPO to simulate the effects of autologous TIL therapy in melanoma patients. By digitally replicating TIL-positive regions in specimens with poor prognosis, we demonstrate a proof-of-principle approach for identifying patients who potentially might benefit from this immunotherapy. This application illustrates how HIPPO can bridge the gap between computational predictions and clinically actionable insights.
As computational pathology continues to advance, the need for robust, interpretable, and trustworthy AI models becomes increasingly critical. HIPPO represents a significant step forward in this direction, offering a powerful tool for uncovering the strengths, limitations, and potential biases of ABMIL models in pathology. By providing a more comprehensive understanding of model behavior, HIPPO not only enhances the interpretability of existing models but also paves the way for developing more reliable and clinically relevant AI tools in pathology. As we demonstrate across multiple applications, from metastasis detection to prognostic modeling, HIPPO has the potential to accelerate the translation of computational pathology into clinical practice, ultimately improving patient care and outcomes.
§ RESULTS
§.§ HIPPO: Histopathology Interventions of Patches for Predictive Outcomes
HIPPO is a specimen-level perturbation toolkit that explains weakly-supervised models in computational pathology (Fig. <ref>a). The fundamental goal of HIPPO is to explore counterfactual (i.e., “what if”) scenarios that are infeasible to realize in actual tissue samples. For instance, it would be impractical to directly manipulate the tumor microenvironment of a tissue specimen to understand its effect on a prognostic model. Instead, we can digitally modify a WSI in a way that simulates this intervention. HIPPO enables in silico interventions through the occlusion or inclusion of single or multiple patches, utilizing the resulting ABMIL model predictions as counterfactual outcomes. HIPPO provides quantitative insights into how specific tissue alterations impact pathological assessments through the lens of the AI model. These assessments can include, but are not limited to, patient prognosis, treatment response prediction, metastasis detection, inference of spatial transcriptomics, gene mutation detection, and microsatellite instability identification. Applying HIPPO to ABMIL models enables researchers, regulators, and clinicians to elucidate model behavior and assess the reliability of model outputs in high-risk clinical contexts.
Traditional approaches to digital interventions in medical imaging often require precise segmentation of objects for occlusion or inclusion <cit.>, as well as sophisticated inpainting techniques to maintain image integrity <cit.>. Alternatively, generative AI can generate counterfactual images <cit.>, but the quality of the generated images has not been thoroughly evaluated for histopathology. These manual or AI-assisted methods can introduce covariate shifts when imperfectly executed <cit.>, potentially leading to unreliable model predictions. The key insight for HIPPO is based on how data flows through ABMIL models. A WSI is treated as a bag of permutation-invariant patches, where the number and order of patches are allowed to vary <cit.>. Thus, an intervention can be achieved through two primary perturbation mechanisms: (1) removing specific patches, effectively excising tissue from the input specimen, or (2) including specific patches, simulating the addition of new tissue into the specimen. HIPPO leverages these properties of multiple instance learning models to generate counterfactual examples, bypassing the complexities of direct image manipulation, by creating hypothetical scenarios such as the introduction or removal of tumor patches or regions of tumor-infiltrating lymphocytes (TILs) from a patient's specimen. Understanding when ABMIL models alter their predictions due to interventions provides quantitative insights into their decision-making process, revealing important features and potential biases learned.
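To make the two mechanisms concrete, the following sketch shows how an intervention reduces to editing the bag of patch embeddings before it is passed to the ABMIL model. The snippet is illustrative only and is not the released HIPPO implementation; it assumes a trained model exposed as a callable predict_proba that maps an N×D array of patch embeddings to a probability.

import numpy as np

def occlude(bag, patch_indices):
    # Simulated excision: drop the selected patch embeddings from the bag.
    keep = np.setdiff1d(np.arange(len(bag)), np.asarray(patch_indices))
    return bag[keep]

def include(bag, new_patches):
    # Simulated tissue addition: append embeddings taken from another specimen.
    return np.concatenate([bag, new_patches], axis=0)

def counterfactual_effect(predict_proba, bag, counterfactual_bag):
    # Change in model output attributable to the intervention.
    return float(predict_proba(counterfactual_bag) - predict_proba(bag))

# Hypothetical usage, with `bag` of shape (n_patches, dim) and `tumor_idx` the
# indices of patches overlapping an annotation:
#   effect = counterfactual_effect(model.predict_proba, bag, occlude(bag, tumor_idx))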
HIPPO offers hypothesis-driven and data-driven methods for intervention selection:
* HIPPO-knowledge: choosing a region based on prior knowledge or a well-defined hypothesis and quantifying its effect by removing it from specimens or adding it to specimens without that region and measuring the change in model outputs (Fig. <ref>a).
* HIPPO-attention: quantifying the effect of high attention regions by removing the high attention regions and measuring the change in model outputs (Fig. <ref>b).
* HIPPO-search-high-effect: a greedy search algorithm to identify the regions that maximally drive a prediction. This can be used to identify regions necessary for a model's output (Fig. <ref>c).
* HIPPO-search-low-effect: a greedy search algorithm to identify the regions that lead to the slightest change in model predictions. This can be used to identify the regions that do not change model predictions and, therefore, are not necessary for the output of a model. After removing unnecessary regions, the remaining regions can be considered sufficient for the model prediction (Fig. <ref>c). A minimal sketch of both search variants is given after this list.
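The following sketch is one way the two greedy searches could be organized. It is a naive illustration under the same assumed predict_proba interface as in the sketch above; the function names are ours, and an actual implementation would likely batch or cache these evaluations.

import numpy as np

def hippo_greedy_search(predict_proba, bag, n_steps, high_effect=True):
    # Iteratively delete one patch per step: the patch whose removal changes the
    # output most (high_effect=True) or least (high_effect=False).
    remaining = list(range(len(bag)))
    removal_order, outputs = [], []
    for _ in range(n_steps):
        base = predict_proba(bag[remaining])
        deltas = []
        for i in range(len(remaining)):
            trial = remaining[:i] + remaining[i + 1:]
            deltas.append(base - predict_proba(bag[trial]))
        pick = int(np.argmax(deltas)) if high_effect else int(np.argmin(np.abs(deltas)))
        removal_order.append(remaining.pop(pick))
        outputs.append(predict_proba(bag[remaining]))
    return removal_order, outputs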
HIPPO generates counterfactual outcomes through the lens of an AI model. Thus, if the AI model has limitations in fully capturing complex biological input-output relationships, HIPPO's explanations will reflect these limitations. HIPPO provides unprecedented access to interrogating the biological factors underlying prediction, providing insight into an AI's decision-making process.
With the advent of digital pathology foundation models, it is important to evaluate model robustness, generalizability, and potential biases and understand their limitations. Here, we showcase the breakthroughs made possible by HIPPO in rigorously evaluating models built on top of foundation models for breast metastasis detection and prognosis prediction tasks. We compare five foundation models in metastasis detection and identify model-specific limitations and biases. We also use HIPPO to study the effects of tissue components on prognostic models, demonstrating how HIPPO's capabilities surpass attention in identifying low and high-risk drivers. We also measure the effect of TILs on breast cancer and melanoma patient prognosis and demonstrate digitally that autologous TILs improve predicted prognosis in a subset of melanoma patients, marking exciting progress in the field.
§.§ Do MIL models think tumor is necessary for breast cancer metastasis detection?
Metastasis detection is a well-studied task, with well-defined features (i.e., tumor cells) that drive the label of whether or not a specimen contains metastasis. In a clinical setting, it is critical that metastases are identified; a false negative is unacceptable. Recent studies have shown that ABMIL models have strong performance in metastasis detection<cit.>. However, previous studies have also found that computer vision models can make the correct predictions for the wrong reasons, such as short-cut features or spurious correlations <cit.>. Thus, the degree to which AI models rely on the tumor regions remains to be seen, even for a relatively straightforward task like tumor detection. Understanding this is critical to elucidate the strengths and limitations of ABMIL models for metastasis detection, including potential biases.
To evaluate this, we trained several ABMIL models for breast metastasis detection using the CAMELYON16 dataset <cit.> (Fig. <ref>a). Several pathology foundation models have recently emerged, demonstrating near-human levels in metastasis detection. Here we consider five pathology foundation models (UNI <cit.>, REMEDIS <cit.>, Phikon <cit.>, CTransPath <cit.>, and RetCCL <cit.>). We trained five ABMIL models for each foundation model to distinguish whether or not a specimen contained metastasis. Similar to previously reported results <cit.>, UNI achieved a mean balanced accuracy of 0.982, REMEDIS 0.922, Phikon 0.907, CTransPath 0.858, and RetCCL 0.745. (Fig. <ref>b, Supplementary Table 1).
For HIPPO explainability experiments, we used the best-performing model (out of 5 random initializations) on the test set for each foundation model. The best UNI model achieved balanced accuracy of 1.00, REMEDIS 0.949, Phikon 0.955, CTransPath 0.885, and RetCCL 0.769 (Supplementary Table 2).
In this dataset, expert pathologists finely annotated metastatic regions. This allows us to use HIPPO-knowledge to determine whether metastatic regions are necessary for detecting breast cancer metastasis. Specifically, for patients who were positive for metastasis, we removed the patches that intersected with the tumor annotations, effectively creating a version of the specimen that does not contain metastasis. We compared model predictions before and after the intervention. Specificity was calculated as the ratio of true negatives to all negative samples. In this set of counterfactuals, all specimens were negative, so the specificity represented the proportion of correct negative predictions by the models. Notably, the UNI-based model exhibited the lowest specificity (0.73) in these counterfactual examples despite achieving the highest balanced accuracy on the original test set (1.00). This discrepancy was particularly pronounced in counterfactual specimens that originally contained macrometastases (specificity 0.59), suggesting that the UNI-based ABMIL model uses tissue outside of the tumor region to drive positive metastasis predictions. The REMEDIS-based model exhibited a similar trend, with a specificity of 0.77 in counterfactuals derived from macrometastases. In contrast, the other models showed less dependence on extratumoral tissue (specificity of Phikon-based, 0.86; CTransPath-based, 0.92; RetCCL-based, 0.88), indicating that their predictions are primarily driven by tumor epithelial cells rather than other tissue components (Fig. <ref>c).
We hypothesized that the low specificity of the UNI-based ABMIL model may be attributed to metastasis-induced alterations in the surrounding tumor microenvironment. To investigate this, we used HIPPO-knowledge to progressively remove larger and larger regions surrounding the tumor annotation and quantified the effect on metastasis detection. As the extent of peritumoral tissue removal increased, the UNI-based model was consistently more likely to predict the absence of metastasis. Specificity increased from 0.73 at a dilation of 0 μm, to 0.78 at 64 μm, to 0.80 at 128 μm, to 0.86 at 256 μm, and to 0.88 at 1024 μm. This was driven primarily by macrometastatic specimens, where specificity increased from 0.59 at a dilation of 0 μm to 0.68 at 64 μm, to 0.73 at 128 μm, to 0.82 at 256 μm, to 0.86 at 1024 μm. Notably, other ABMIL models remained largely unaffected by peritumoral tissue removal, highlighting a unique characteristic of the UNI-based model (Supplementary Fig. 1).
In summary, HIPPO enabled a quantitative exploration of the effect of peritumoral tissue on metastasis detection.
§.§ Is tumor sufficient for breast cancer metastasis detection?
While necessity assesses the importance of a feature or feature set, it does not inform whether the feature set is sufficient for model predictions. Metastasis detection models must be able to detect tumor regions no matter how small. Using HIPPO-knowledge, we tested the sufficiency of metastatic regions in two ways: removing all non-tumor patches, and adding tumor regions to normal specimens, in each case measuring the resulting model outputs.
First, we constructed counterfactual specimens (n=49) by removing all non-tumor tissue (i.e., removing patches that did not intersect with expert tumor annotations) and measuring model outputs. With only the tumor present, the true label for these images was “positive”, and the foundation models had the following sensitivity (true positive rate): UNI-based 0.98, REMEDIS-based 0.92, Phikon-based 0.98, CTransPath-based 0.96, RetCCL-based 0.82 (Fig. <ref>d). There is evidence to suggest that extratumoral tissue caused false negative predictions. Four of the five foundation models improved sensitivity when using only tumor tissue in micrometastases compared to the original positive samples, suggesting that extratumoral tissue drove false negative predictions. The sensitivity of CTransPath increased by 25%, Phikon by 4%, REMEDIS by 5%, and RetCCL by 100%. For UNI, however, using original WSIs resulted in a sensitivity of 1.0 on micrometastasis. However, when using only the tumor tissue, one false negative prediction suggested that the UNI-based model may use tissue outside of the metastatic region in its predictions. Critically, this demonstrated that the tumor was insufficient for a positive prediction in this specimen with the UNI-based model and that extratumoral tissue was solely driving the positive prediction. RetCCL had a true positive rate in macrometastases of 0.95 (21 predicted positive of 22 positive specimens). When using only tumor tissue, all macrometastases were detected successfully, demonstrating that tissue outside the metastatic region caused a false negative prediction.
We also evaluated whether tumor was sufficient for metastasis detection by embedding tumor regions in normal specimens. We embedded all patches intersecting with tumor annotations into normal specimens, resulting in 3,920 positive counterfactual examples (80 normal slides × 49 positive slides). Model outputs for these examples were recorded. The UNI-based model had a sensitivity of 0.98, REMEDIS-based 0.86, Phikon-based 0.95, CTransPath-based 0.90, and RetCCL-based 0.63. Positive counterfactuals made with micrometastases were less likely to be detected by most models (UNI-based achieved sensitivity of 0.96, REMEDIS-based 0.75, Phikon-based 0.91, CTransPath-based 0.93, and RetCCL-based 0.40), suggesting that smaller tumors in the context of normal tissue are insufficient for positive metastasis detection (Fig. <ref>e).
The average treatment effect for each metastatic slide was calculated by averaging the model's probability of metastasis across all negative samples. This informs which positive slides can drive positive predictions across individuals. 100% of macrometastases (n=22) led to true positives in UNI-based, REMEDIS-based, Phikon-based, and CTransPath-based models. In the RetCCL-based model, 90% (n=20) of macrometastases had an average true positive effect. Micrometastases (n=27) were less likely to induce positive predictions on average, with 96% (n=26) positive in UNI, 93% (n=25) in Phikon, 81% (n=22) in CTransPath, 74% (n=20) in REMEDIS, and 37% (n=10) in RetCCL.
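As an illustration of this computation, the sketch below (hypothetical names, same assumed predict_proba interface as above) appends the tumor-region embeddings of one positive slide to every negative bag and averages the outputs to obtain that slide's average treatment effect.

import numpy as np

def average_treatment_effect(predict_proba, region_patches, negative_bags):
    # Embed one slide's tumor-region embeddings into each negative specimen and
    # average the resulting probabilities of metastasis.
    outputs = [predict_proba(np.concatenate([bag, region_patches], axis=0))
               for bag in negative_bags]
    return float(np.mean(outputs))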
§.§ Foundation models may miss small breast cancer metastases
To evaluate the sensitivity of ABMIL models to detect metastasis based on the size of the metastasis in a specimen, we analyzed the metastasis-positive specimens from the CAMELYON16 test set. Our methodology involved initially removing all tile embeddings that intersected with expert tumor annotations, effectively rendering the slide negative for metastases. A 128 × 128 μm region of tumor (shown in the right-hand side of Fig. <ref>a) was added to 80 normal specimens and 49 metastasis-removed positive specimens. When the single-patch tumor region was embedded in normal specimens, the REMEDIS-, Phikon-, and RetCCL-based ABMIL models detected 100% of counterfactuals as positive, highlighting their robustness to this small region of tumor. The UNI-based model, on the other hand, failed to detect 41% (n=33) of positive counterfactuals (n=80), and the CTransPath-based model failed to detect 35% (n=28) of positive counterfactuals. A similar trend was observed when the tumor region was embedded into the context of metastatic specimens (i.e., the positive specimen with metastasis removed). The REMEDIS-, Phikon-, and RetCCL-based models detected 100% of positive counterfactuals (n=49), whereas the UNI-based model missed 51% (n=25) and the CTransPath-based model missed 65% (n=32) of positive counterfactual specimens (Fig. <ref>f).
We also sought to quantify the sensitivity of models to each tumor patch in positive specimens, which can shed light on whether tumor patches carry different levels of informativeness for machine learning classifiers. To accomplish this, all tumor patches intersecting with expert tumor annotations were removed. Then, we reintroduced tiles fully within the expert tumor annotation, one at a time, to the tumor-removed specimen and evaluated the model outputs. These model outputs were compared to those when all tumor was removed. While some tumor patches could drive a positive prediction on their own, many could not (Fig. <ref>g for the UNI-based model; other models and specimens are shown in Supplementary Figs. 2-6).
To further quantify the effect of tumor size in metastasis detection, we added tumor patches into normal slides in a graded fashion and measured the sensitivity. All models exhibited a graded effect of tumor size, and UNI exhibited the highest sensitivity (Fig. <ref>h). Models tended to plateau in sensitivity at 0.262144 mm² of tumor (16 patches) added. The RetCCL-based model showed the lowest overall sensitivity and was the least sensitive to smaller tumors.
To identify the largest amount of tumor that would go undetected by a model, we also used a HIPPO search algorithm, HIPPO-search-low-effect. We found that in some cases, regions up to 1.5 mm² could be added into a negative counterfactual while still maintaining a negative detection. Indeed, the tumor patches that were insufficient to drive large effect sizes were largely similar to the sufficient patches, though some insufficient patches contained adipose cells along with tumor epithelial cells (Supplementary Fig. 7).
This shows that there exist regions within tumors that would go unseen by an ABMIL-based metastasis detection model. These biases should be explored further prior to the clinical use of metastasis detection models.
§.§ Non-tumor tissue can cause false positive metastasis detections
Given the effect of peritumoral tissue on UNI-based model predictions, we also evaluated whether peritumoral tissue was sufficient for positive metastasis predictions. A halo of peritumoral tissue was extracted from metastasis-positive specimens (n=49) with a width of 64, 128, 256, or 1024 μm, beginning at either the edge of the expert tumor annotation or 256 μm outside of the tumor annotation. The patches intersecting with the peritumoral halos were added to normal specimens (n=80), resulting in 3920 counterfactual examples (80 normal × 49 positive specimens). Model predictions were averaged across normal specimens, resulting in the average treatment effect of the peritumoral region of each positive specimen. This was evaluated for the UNI-based and Phikon-based models.
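One way to construct such halos is sketched below, under the assumption that the expert annotation is available as a shapely polygon and that patch locations are axis-aligned boxes expressed in the same micron coordinates; the function names are ours and do not correspond to a released API.

from shapely.geometry import box

def halo_patch_indices(tumor_polygon, patch_boxes, start_um, width_um):
    # Annular region between two dilations of the tumor annotation.
    inner = tumor_polygon.buffer(start_um)
    outer = tumor_polygon.buffer(start_um + width_um)
    ring = outer.difference(inner)
    return [i for i, b in enumerate(patch_boxes) if b.intersects(ring)]

# patch_boxes might be built as box(x, y, x + 128, y + 128) for each patch origin
# (x, y) expressed in microns.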
Despite the lack of tumor cells, halos of peritumoral tissue were sufficient to drive a positive metastasis detection. Halos 1024 μm in width beginning from the edge of the expert tumor annotation from 20% (n=10) of positive specimens caused positive predictions in the UNI-based model when embedded in normal specimens, whereas halos from 10% (n=5) of positive specimens caused positive predictions in the Phikon-based model. When starting the 1024 μm halo 256 μm outside of the tumor annotation, there were 14% (n=7) positive predictions in the UNI-based model and 10% (n=5) for the Phikon-based model. Thinner halos of 64 μm beginning 256 μm outside of the tumor annotation also caused positive predictions: 10% (n=5) in the UNI-based model and 6% (n=3) in the Phikon-based model (Supplementary Fig. 8).
These results demonstrate that the models learned an association between peritumoral tissue and the presence of metastasis despite the peritumoral tissue not containing any metastasis. This highlights an important bias that could not have been uncovered using attention alone. HIPPO enabled the quantitative assessment of peritumoral tissue on metastasis detection.
§.§ Adipose tissue can cause false negative metastasis detections
To further investigate patient samples that consistently drove false negative predictions, we visualized the attention maps of the ABMIL models to identify regions that they consider important. For the CTransPath-based model, we observed that attention was concentrated in adipose regions for one such specimen (Figs. <ref>a and <ref>b).
Since attention maps only provide a qualitative visualization of regions in an image that the ABMIL models consider important, it is unclear to what extent adipose tissue directly affects model predictions. We address this with HIPPO-attention. The patches with adipose tissue were removed, and the effect of this perturbation on model outputs was quantified. We found that removing the adipose tissue rescued the true positive prediction in this specimen, suggesting that fat caused the false negative prediction (Figs. <ref>c and <ref>d). To test whether the adipose tissue from this specimen would cause a misclassification in other models and specimens, we added the adipose regions from that specimen into the 48 other positive specimens. We then recorded whether the addition of adipose tissue caused false negative predictions. We found that true positives were flipped to false negatives in 2, 2, 1, 1, and 5 specimens for UNI, REMEDIS, Phikon, CTransPath, and RetCCL, respectively (Figs. <ref>e and <ref>f). This highlights how HIPPO can elucidate biases that cause misclassification. While attention alone could not show that adipose tissue was the cause of misclassification, it was useful for formulating a hypothesis that we could then test with HIPPO-attention. This demonstrates how HIPPO can complement attention-based interpretability analysis to quantitatively test hypotheses about putatively important tissue regions.
§.§ HIPPO identifies shortcut learning when attention struggles
Identifying spurious correlations in deep learning models for medical imaging is crucial to ensure reliable and clinically relevant results. To test HIPPO's ability to identify spurious correlations, we conducted an experiment where we deliberately introduced an artificial bias into the CAMELYON16 dataset (Figs. <ref>a and <ref>b). Specifically, 768 × 768 blue squares were added to all negative images. This mimics the plausible scenario in which a pathologist marks certain slides with a blue marker. However, in doing so, it introduces a strong spurious correlation with labels. We hypothesized that the models would learn that slides were negative if a blue region was present and that slides lacking this blue region are positive (as blue regions are easier to identify compared to more variable tumor regions).
An ABMIL model was trained on the modified training data using UNI embeddings. The model achieved a balanced accuracy of 1.0 on the test set, suggesting the spurious correlations created a trivial prediction task. By performing standard model interpretation using attention, we found that metastatic regions were considered highly important (Fig. <ref>c). However, removing these regions using HIPPO did not alter the model predictions, demonstrating that tumor regions were not important for model predictions despite a strong attention assignment. This highlights an important weakness of attention: the disconnect between attended regions and model predictions.
Knowing that the metastatic regions did not affect model outputs, we used the search algorithm HIPPO-search-high-effect to identify the regions that maximally drove positive tumor predictions in both models using one positive specimen. Given that the model trained with spurious correlations uses the lack of a blue square as a cue for positive specimens, we expected that no individual patches would drive the positive metastasis output and that tumor regions would not have a high effect on the prediction. Indeed, effect sizes were small and evenly distributed across the WSI (minimum 2.1e-05, maximum 0.02, mean 9.4e-05, and median 5.5e-05), indicating that no single region contributed strongly to the model prediction (Fig. <ref>d). By contrast, applying this search algorithm to the model trained on the original CAMELYON16 dataset, we found that patch effect sizes were higher (minimum 3.7e-08, maximum 0.09, mean 1.3e-04, and median 4.9e-08), and high effect patches were within expert tumor annotations (Fig. <ref>e). By tying interpretation analysis directly to predictions, HIPPO-based interpretations may provide more reliable explanations of model predictions.
Shortcut learning is an important bias that must be identified and addressed in deep learning on medical images. In this case, model performance and attention were insufficient to diagnose the shortcut learning. Observational analysis based on attention maps could easily mislead an observer to believe that tumor regions drive model predictions. Quantifying effect sizes of tumor regions using HIPPO addressed these limitations and diagnosed the shortcut learning.
§.§ Refining the search for prognostic tissue biomarkers
Having demonstrated HIPPO's effectiveness in metastasis detection, where the regions of interest are well-defined and were previously annotated by expert pathologists, we extended our investigation to the more complex domain of cancer prognosis. Unlike the clear delineation of tumor regions in metastasis detection, prognostic factors in WSIs are multifaceted and less clearly defined. We applied HIPPO to prognostic models that generate risk scores from WSIs, aiming to identify the tissue regions driving these predictions. Our experiments with HIPPO yielded two key insights. First, HIPPO's search algorithms demonstrated superior ability in identifying tissue patches that consistently and significantly influence risk predictions compared to conventional attention-based methods. While attention mechanisms yielded mixed effects — potentially identifying regions that counterintuitively drive lower risk in otherwise high-risk specimens — HIPPO provided a more consistent, reliable, and quantitative assessment of the regions that drive risk. Second, HIPPO's unique features enable in silico experiments to measure the effects of targeted tissue interventions on prognostic outcomes through the lens of the ABMIL model. HIPPO's potential to accelerate the discovery and validation of prognostic tissue biomarkers is an exciting development in cancer research, potentially bridging the gap between computational predictions and clinical actionability.
We trained prognostic ABMIL models using the PORPOISE framework <cit.>, a computational tool designed for predicting survival outcomes from histopathology images, to predict overall survival from WSIs in breast cancer (TCGA-BRCA) and cutaneous melanoma (TCGA-SKCM) (Supplementary Fig. 9). The same training and validation splits were used as in the original publication. Non-overlapping 128 × 128 μm patches from WSIs were embedded using the UNI model <cit.> (in the original PORPOISE publication, a truncated ResNet50 <cit.> was used). Low and high risk were defined as the first and fourth quartiles of risk scores. High attention regions were defined as the top 1% of attended patches, and HIPPO search algorithms were also used to identify the top 1% of patches by effect size.
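The comparison between attention and HIPPO effect sizes can be sketched as follows, using a hypothetical interface predict_risk_and_attention that returns a scalar risk and one attention weight per patch; the effect of a patch set is measured as the change in predicted risk when that set is deleted from the bag, so a negative value indicates patches that pull the prediction toward lower risk.

import numpy as np

def effect_of_patches(predict_risk_and_attention, bag, patch_idx):
    # Risk contribution of a patch set = risk(full bag) - risk(bag without the set).
    risk_full, _ = predict_risk_and_attention(bag)
    keep = np.setdiff1d(np.arange(len(bag)), patch_idx)
    risk_without, _ = predict_risk_and_attention(bag[keep])
    return float(risk_full - risk_without)

def top_attention_idx(predict_risk_and_attention, bag, top_frac=0.01):
    # Indices of the most-attended patches (top 1% by default).
    _, attention = predict_risk_and_attention(bag)
    k = max(1, int(round(top_frac * len(bag))))
    return np.argsort(attention)[-k:]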
High attention regions drove counterintuitive effects in many specimens, while HIPPO-search-low-effect and HIPPO-search-high-effect identified more robust and consistent drivers of risk. High attention regions in high-risk cutaneous melanoma specimens (n=67) drove lower risk in 45% (n=30) of specimens. HIPPO-search-high-effect, on the other hand, identified regions that all drove higher risk and that more greatly contributed to high-risk predictions (t=3.03, p<0.01, independent t-test). High attention in high-risk breast cancer specimens (n=256) drove lower risk in 40% (n=102) of specimens. Again, HIPPO-search-high-effect consistently identified regions that drove higher risk in the high-risk specimens (t=8.83, p<0.0001, independent t-test) (Fig. <ref>a). High attention regions in low-risk SKCM specimens (n=67) drove higher risk in 10% (n=7). HIPPO-search-low-effect identified regions that all drove lower risk and that more strongly contributed to lower risk predictions (t=-2.30, p<0.05, independent t-test). High attention regions in low-risk BRCA specimens (n=256) drove higher risk predictions in 8% (n=20) of specimens. HIPPO-search-low-effect identified patches that consistently drove lower risk predictions (t=-5.43, p<0.0001, independent t-test) (Fig. <ref>b). This counterintuitive effect underscores that attention scores may not directly relate to model predictions. Thus, interpretations that solely rely on these features may be misguided. HIPPO search algorithms reliably identified the regions that drove risk predictions and may have value as a tool for prognostic biomarker search.
TILs are a well-known prognostic biomarker. We evaluated the necessity and sufficiency of TILs for low-risk predictions in BRCA and SKCM. To test sufficiency, we extracted TIL-positive patches from low-risk specimens and placed them in high-risk specimens. For each high-risk slide, we embedded the TILs from each low-risk slide, and we averaged the model predictions across the low-risk slides to compute the average treatment effect of TILs for each high-risk slide. In high-risk BRCA specimens (n=253, three specimens failed cell detection), the addition of TILs from low-risk specimens decreased the risk by 46% (t=17.95, p < 0.0001, paired t-test) from 0.37 (std. dev. 0.20) to 0.20 (std. dev. 0.15). In SKCM (n=67), the addition of TILs significantly decreased risk by 59% (t=-22.53, p<0.0001, paired t-test) from 0.60 (std. dev. 0.14) to 0.25 (std. dev. 0.08) (Fig. <ref>c). To evaluate the necessity of TILs, we removed TIL-positive patches from low-risk specimens and measured the change in predictions. If TILs were necessary, then risk predictions would increase upon removal of TILs. In BRCA (n=254, two specimens failed cell detection), the removal of TILs significantly increased risk by 179% (t=3.83, p<0.001, paired t-test) from 0.002 (std. dev. 0.001) to 0.005 (std. dev. 0.014). In SKCM (n=67), the removal of TILs increased risk by 98% (t=4.27, p<0.0001, paired t-test) from 0.064 (std. dev. 0.045) to 0.126 (std. dev. 0.123) (Fig. <ref>d). The removal of TILs did increase risk predictions, but the risk predictions did not reach the level of high-risk slides, suggesting that other features in the WSIs were also driving the low-risk predictions. HIPPO facilitated a quantitative evaluation of the role of TILs on prognosis, providing insights beyond those achievable through the attention mechanism of ABMIL.
§.§ Generating hypotheses of which patients may benefit from autologous TIL therapy
Lifileucel is a promising immunotherapy for melanoma that involves isolating TILs from a patient's tumor, replicating the TILs, and infusing them back into the patient[<https://www.fda.gov/news-events/press-announcements/fda-approves-first-cellular-therapy-treat-patients-unresectable-or-metastatic-melanoma>]. In a phase II clinical trial, over 30% of patients responded to the therapy <cit.>. Identifying the patients that might respond to this therapy has the potential to improve patient outcomes and decrease costs (a single treatment may cost over $500,000 <cit.>). Therefore, we sought to explore whether we could emulate this with ABMIL and HIPPO. We conducted in silico experiments to measure the effect of autologous TILs on prognosis. We used the prognostic model for cutaneous melanoma described above, and we studied the high-risk specimens in TCGA-SKCM (n=67 WSIs, n=54 patients). Counterfactuals were designed to model the injection of autologous TILs. In each specimen, TIL-positive patches were replicated 2 ×, 10 ×, 20 ×, and 100 × (Fig. <ref>a). TIL-positive patches were defined using the same heuristic as above (see Methods). The change in model predictions between original specimens and autologous counterfactuals was recorded to measure the effect of additional TILs on prognosis. Cohen's d was also calculated to quantify effect sizes. Importantly, we do not claim to demonstrate the efficacy of autologous TIL therapy through HIPPO and TCGA-SKCM. Rather, we aim to show a proof-of-principle that HIPPO may be used for hypothesis generation.
Autologous TILs significantly lowered predicted risk in a dose-dependent manner. Risk changed by -2.18% (d=-0.50) at 2 × dose (t=-4.06, p<0.001, paired t-test), -10.8% (d=-0.56) at 10 × dose (t=-4.59, p<0.0001, paired t-test), -15.3% (d=-0.62) at 20 × dose (t=-5.06, p<0.0001, paired t-test), and -20.8% (d=-0.67) at 100 × dose (t=-5.49, p<0.0001, paired t-test) (Fig. <ref>b). Increasing the number of TILs by 100 × decreased predicted risk scores by over half in 18% of high-risk specimens. Together, we demonstrated a proof-of-principle in which we use HIPPO to identify patients who may benefit from autologous TIL therapy through improved predicted prognosis following the replication of their TILs.
§ DISCUSSION
In this study, we introduce HIPPO, an explainable AI method designed to enhance the interpretability and trustworthiness of ABMIL models in computational pathology. Our results demonstrate HIPPO's ability to uncover hidden biases, quantify the impact of specific tissue regions on model predictions, and bridge the gap between computational outputs and clinically relevant insights. These findings may have significant implications for the development, regulation, and clinical application of AI in pathology.
One of the key strengths of HIPPO lies in its capacity to reveal model-specific limitations that are not apparent from performance metrics or attention mechanisms alone. In our evaluation of metastasis detection models, we uncovered surprising variations in how different foundation models process histological information. For instance, some models showed a strong reliance on peritumoral tissue, while others demonstrated unexpected insensitivity to small tumor regions. These findings underscore the importance of rigorous model evaluation beyond standard performance metrics and highlight potential pitfalls in clinical deployment.
The revelation that high-attention regions can sometimes have counterintuitive effects on prognostic predictions is particularly striking. This disconnect between attention and model output challenges the common practice of using attention maps as a primary means of model interpretation. Our results suggest that regulatory bodies and clinical teams should exercise caution when relying solely on attention-based explanations and should consider incorporating quantitative impact assessments, such as those provided by HIPPO, in their evaluation processes. For example, one may use HIPPO-knowledge to quantify the effect of high attention regions on model predictions.
HIPPO's ability to verify that models have learned biologically relevant information, as demonstrated by our analysis of TILs, is crucial for building trust in AI-driven prognostic tools. This alignment between model behavior and established biological knowledge provides a foundation for explaining model decisions to clinicians and patients, potentially facilitating the integration of AI tools into clinical workflows. It is also possible to use HIPPO's de novo search to identify sets of patches that expert pathologists could interpret manually to discover new biomarkers and better understand disease progression.
The application of HIPPO to simulate the effects of autologous TIL therapy in melanoma patients showcases the potential for in silico experimentation in computational pathology. As ABMIL models improve, this approach could have far-reaching implications for personalized medicine, offering a computational method to predict treatment responses and guide therapy selection. However, it is important to note that these simulations are based on model predictions and would require extensive clinical validation before they can be considered for patient care.
Our findings suggest several key considerations for the future development and deployment of ABMIL models in clinical settings: (1) Model developers should incorporate robustness to tissue heterogeneity and small tumor regions as explicit design goals, potentially through targeted data augmentation using HIPPO-based counterfactuals; (2) Regulatory approval processes for AI tools in pathology may consider including comprehensive evaluations of model behavior across diverse tissue contexts, going beyond aggregate performance metrics; (3) The implementation of AI tools in clinical practice should be accompanied by clear explanations of model strengths and limitations, with HIPPO-like analyses providing quantitative assessments of model reliability for specific tissue types or patient subgroups; (4) Post-deployment monitoring of AI models should include ongoing analysis of model behavior in real-world settings, with HIPPO offering a means to detect potential shifts in model performance or the emergence of unexpected biases.
While our study demonstrates the potential of HIPPO, several limitations must be acknowledged. First, the counterfactual scenarios generated by HIPPO, while informative, may not always reflect biologically plausible tissue alterations. Future work should focus on refining these interventions to more closely mimic realistic tissue changes. Second, our analysis was limited to a specific set of foundation models and datasets. Broader evaluation across diverse pathology tasks and model architectures is needed to fully characterize the generalizability of our findings. In addition the interpretations offered by HIPPO are inherently bound by the underlying model's capabilities and potential shortcomings in representing complex biological systems.
Looking ahead, several avenues for future research emerge from this work. The integration of HIPPO with multi-modal data, including genomic and clinical information, could provide even richer insights into model behavior and biological relevance. Additionally, exploring the use of HIPPO in guiding model refinement, such as targeted fine-tuning based on identified weaknesses, represents a promising direction for improving model robustness and clinical applicability.
In conclusion, HIPPO represents a major advance in the ability to interpret AI models in computational pathology. By providing a quantitative framework for assessing the impact of specific tissue regions on model predictions, HIPPO offers a powerful tool for uncovering model limitations, verifying biological relevance, and biomarker discovery for various clinical applications. As the field of computational pathology continues to evolve, quantitative methods like HIPPO will be crucial in ensuring that AI tools are deployed responsibly and effectively in healthcare settings.
§ METHODS
§.§ HIPPO toolkit
HIPPO (Histopathology Interventions of Patches for Predictive Outcomes) is an explainable AI toolkit for attention-based multiple instance learning models in computational pathology. It generates counterfactual examples by manipulating whole slide image patches to explain model behavior. In ABMIL, tissue from a WSI is divided into small tiles, which are embedded using a pre-trained model. These patch embeddings serve as input to the ABMIL model, which learns to map bags of patches to specimen-level labels. HIPPO is made possible by two key features of ABMIL: (1) models are invariant to patch order, and (2) models accommodate a variable number of patches. Taking advantage of these features of ABMIL, HIPPO creates counterfactual examples by adding or removing patches, allowing for evaluation of hypothetical scenarios and quantification of the effects of tissue regions on model predictions. This process can be hypothesis-driven (e.g., removing tumor regions to test their necessity in predicting breast metastases), based on model attention (e.g., removing high-attention regions), or identified by greedy search (described below). Patches can also be added to measure their impact, such as introducing tumor-infiltrating lymphocytes (TILs) to TIL-deficient specimens and measuring impact on prognostic models.
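The core intervention can be illustrated with a short sketch. The snippet below is a simplified illustration rather than the released HIPPO API; the function names and the `model.predict` call are assumptions, and it assumes patch embeddings are stored as a NumPy array with one row per patch together with a boolean mask marking the patches of interest.

```python
import numpy as np

def remove_patches(bag: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Counterfactual: drop the embeddings of the masked patches (e.g., tumor)."""
    return bag[~mask]

def add_patches(bag: np.ndarray, new_patches: np.ndarray) -> np.ndarray:
    """Counterfactual: append foreign patch embeddings (e.g., TILs) to the bag."""
    return np.concatenate([bag, new_patches], axis=0)

def intervention_effect(model, bag: np.ndarray, counterfactual_bag: np.ndarray) -> float:
    """Change in the specimen-level prediction caused by the intervention."""
    return model.predict(counterfactual_bag) - model.predict(bag)

# Example (necessity of tumor regions for a metastasis detector):
# bag: (num_patches, embed_dim) embeddings; tumor_mask: (num_patches,) booleans.
# effect = intervention_effect(abmil, bag, remove_patches(bag, tumor_mask))
```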
HIPPO employs a search algorithm to identify patches necessary or sufficient for predictions. This algorithm iteratively measures each patch's effect by removal and model re-evaluation. It can determine necessary patches (those causing the largest prediction drop when removed) or sufficient patches (those causing minimal change when removed). In multi-class prediction, this method can identify patches that drive the model towards specific outcomes. In regression models, the search algorithm can identify patches that either increase the output or decrease the output.
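A minimal sketch of the greedy search loop described above is given below. It is illustrative rather than the exact released implementation, and it assumes a `predict` function mapping a bag of patch embeddings (a NumPy array) to a scalar probability or risk score.

```python
import numpy as np

def greedy_search(predict, bag, num_rounds, maximize=True):
    """Iteratively select the patch whose removal moves the prediction the most.

    With maximize=True this favors "necessary" patches (largest drop when removed);
    with maximize=False it favors patches whose removal changes the output least.
    """
    remaining = list(range(len(bag)))
    selected = []
    for _ in range(min(num_rounds, len(bag) - 1)):
        baseline = predict(bag[remaining])
        effects = []
        for idx in remaining:
            reduced = [i for i in remaining if i != idx]
            effects.append(baseline - predict(bag[reduced]))  # drop caused by removing idx
        best = int(np.argmax(effects)) if maximize else int(np.argmin(effects))
        selected.append(remaining.pop(best))
    return selected
```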
The greedy search experiments in the present report were conducted on NVIDIA A100 GPUs.
§.§ Deep neural network development
We used attention-based multiple instance learning (ABMIL) to learn specimen-level labels from whole slide images. For metastasis detection, we evaluated five different patch encoders: UNI <cit.>, REMEDIS <cit.>, CTransPath <cit.>, Phikon <cit.>, and RetCCL <cit.>. These encoders were used to embed non-overlapping 128 × 128 μm patches, with all encoders utilizing identical patches. We standardized hyperparameters across all ABMIL models, adapting from Chen et al. <cit.>. The architecture comprised a first hidden layer of 512 units and a second of 384 units, incorporating gated attention. During training, we applied a dropout rate of 0.25. The output layer performed binary classification, distinguishing between the presence and absence of metastasis. Models were trained using cross-entropy loss and the Adam optimizer with a learning rate of 1 × 10^-4, following a cosine learning rate scheduler. We used a batch size of 1 without gradient accumulation. Training continued for a maximum of 20 epochs, with the best model selected based on the highest ROC AUC on the validation set. To assess initialization variability, we trained five separate models with different random seeds for each patch encoder. For subsequent experiments, we selected the initialization yielding the highest balanced accuracy on the CAMELYON16 test set for each encoder. We visualized attention heatmaps using QuPath <cit.>. All models were implemented in PyTorch and trained on NVIDIA RTX 2080 Ti GPUs.
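For concreteness, a gated-attention ABMIL classifier consistent with the stated hyperparameters (hidden sizes 512 and 384, dropout 0.25, binary output) can be sketched as follows. This is a simplified re-implementation, not the exact training code; the input dimension of 1024 corresponds to UNI embeddings and would differ for the other encoders.

```python
import torch
import torch.nn as nn

class GatedABMIL(nn.Module):
    def __init__(self, in_dim=1024, hid1=512, hid2=384, n_classes=2, dropout=0.25):
        super().__init__()
        self.embed = nn.Sequential(nn.Linear(in_dim, hid1), nn.ReLU(), nn.Dropout(dropout))
        # Gated attention: tanh and sigmoid branches combined per patch.
        self.attn_v = nn.Sequential(nn.Linear(hid1, hid2), nn.Tanh())
        self.attn_u = nn.Sequential(nn.Linear(hid1, hid2), nn.Sigmoid())
        self.attn_w = nn.Linear(hid2, 1)
        self.classifier = nn.Linear(hid1, n_classes)

    def forward(self, bag):                                   # bag: (num_patches, in_dim)
        h = self.embed(bag)                                   # (num_patches, hid1)
        a = self.attn_w(self.attn_v(h) * self.attn_u(h))      # (num_patches, 1)
        a = torch.softmax(a, dim=0)                           # attention over patches
        slide = (a * h).sum(dim=0)                            # slide-level embedding
        return self.classifier(slide), a.squeeze(-1)          # logits, attention weights
```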
For prognostic models, we used the ABMIL models defined in <cit.>. The model was composed of a linear layer with 512 units, dropout with a rate of 0.25, and a second linear layer of 256 units. Gated attention was used. The model had four outputs, representing hazards at four points in time. Risk scores were calculated as in ref. <cit.> and were in the range [0, 1], where 0 indicates the lowest risk. Models were all implemented in PyTorch, and training was performed on NVIDIA RTX 2080 Ti GPUs.
§.§ Datasets
§.§.§ Breast cancer metastasis dataset
We used the CAMELYON16 dataset <cit.> to study breast cancer metastasis. This dataset consists of 399 images and has fine-grained tumor annotations made by expert pathologists. The training set was split into 90% training and 10% validation, stratified by the label of the specimen (i.e., normal or tumor).
Training set consisted of 143 negative and 100 positive WSIs (52 macrometastases and 48 micrometastases).
The validation set consisted of 16 negative and 11 positive WSIs (6 macrometastases and 5 micrometastases).
We used the pre-defined test set, which consisted of 80 negative and 49 positive WSIs (22 macrometastases and 27 micrometastases).
In the entire dataset, there were 160 metastasis-positive specimens. There was an average tumor area of 12.26 (std. dev. 34.04; minimum 0.008; and maximum 276.09). All 399 slides had pixel spacings between 0.226 and 0.243 μm/px (MPP). The WSIs had 10,250 ± 6,672 patches (mean ± standard deviation), where each patch was 128 × 128 μm.
§.§.§ Prognostic datasets
Prognostic models were trained and evaluated using the invasive breast carcinoma (BRCA) and cutaneous melanoma (SKCM) studies from The Cancer Genome Atlas. In TCGA BRCA, 1,022 WSIs from 956 patients were used (130 death events), and in TCGA SKCM, 268 slides from 230 patients were used (89 death events). Overall survival time and censoring were used, retrieved from the code repository[<https://github.com/mahmoodlab/PORPOISE>] of ref. <cit.>. The training and validation splits for cross validation were accessed from the same code repository. The WSIs in TCGA BRCA had 11,260 ± 6,544 patches (mean ± standard deviation). The WSIs in TCGA SKCM had 14,153 ± 7,471 patches.
§.§ Whole slide image processing
Whole slide images were read using OpenSlide <cit.>, and a modified version of the CLAM toolkit <cit.> was used to segment tissue and calculate patch coordinates. Tissue regions are identified so that computational resources are not spent on the glass (background) regions of the WSI. The image is converted to the HSV color model (hue, saturation, value/brightness). The saturation channel is smoothed and thresholded to create a binary tissue image. Non-overlapping patch coordinates of 128 × 128 μm were calculated within the tissue regions. The CLAM toolkit <cit.> was modified to create patches at uniform physical sizes. The size of a patch in pixels can vary based on the spacing (μm/px, MPP), and the patch size in base pixels is calculated using,
Patch size (px) = Patch size (μm) / WSI spacing (μm/px)
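In code, this conversion amounts to the following small helper (an illustrative sketch; the actual CLAM modification may differ):

```python
def patch_size_pixels(patch_size_um: float, mpp: float) -> int:
    """Convert a physical patch size (micrometers) to pixels for a given WSI spacing."""
    return round(patch_size_um / mpp)

# A 128 um patch on a slide scanned at 0.243 um/px spans about 527 px.
print(patch_size_pixels(128, 0.243))  # 527
```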
The 128 × 128 μm patches were then embedded using five pre-trained models (embedding dimensions in parentheses): UNI (1024) <cit.>, REMEDIS (4096) <cit.>, Phikon (768) <cit.>, CTransPath (768) <cit.>, and RetCCL (2048) <cit.>. When embedding a slide, patches were read directly from the WSI file. A batch of 64 patches was read from the WSI, and then that batch was processed by the model to compute embeddings. The embeddings of all patches in a WSI were concatenated into one array and saved to disk for reuse. These embeddings served as inputs to all ABMIL models in the present report.
§.§ HIPPO experiment details
§.§.§ Testing the necessity of tumor regions
The degree to which tumor regions influence ABMIL models for metastasis detection remains unclear. To test the necessity of tumor regions, all of the tumor in the 49 tumor-positive specimens was removed, and change in model outputs was recorded. The embeddings of all patches intersecting with expert tumor annotations were removed. See Supplementary Fig. 10
for the histogram of the number of patches removed from each WSI. Once all of the tumor patches were removed from the bag of embeddings, the specimen was called “negative” for metastasis. The modified bags of embeddings were run through the model, and outputs were recorded. The true negative rate (specificity) was calculated as the ratio of true negative detections to all negative samples. In this case, as all samples were negative, the true negative rate was the proportion of specimens called negative by the model. This was done for all patch embedding models tested in the present report.
§.§.§ Testing the sufficiency of tumor regions
The sufficiency of tumor regions for metastasis detection remains unclear. We evaluated this in two ways: by evaluating the use of only tumor tissue from positive specimens (n=49), and by embedding metastatic patches from positive specimens (n=49) into negative specimens (n=80). In the first method, we removed all patches that did not intersect with the expert tumor annotations. This evaluated the hypothetical scenario that the specimen contained tumor and no other type of tissue. The labels of all specimens remained “positive”, and model outputs were recorded. Sensitivity was measured as the proportion of positive model predictions.
In the second method, we created counterfactual examples of metastasis-positive specimens (n=3920) from normal specimens (n=80). All combinations were evaluated: the patches that intersected expert tumor annotations from each positive slide were added to each negative slide, making a total of 3920 counterfactual examples (80 negative × 49 positive specimens). Each of these counterfactual examples was labeled “positive” because they contained tumor. These counterfactual examples were then run through the ABMIL models, and outputs were recorded. Sensitivity was measured as the proportion of positive model outputs.
§.§.§ Testing the effect of tumor size
The extent to which tumor size affects specimen-level metastasis detection is incompletely understood. Conventionally, this analysis is limited to existing specimens. We explore a richer set of tumor sizes using counterfactual examples. First, we evaluated the effect of a single 128 × 128 μm tumor region in normal and metastatic specimens. The tumor region was taken from a positive specimen at the coordinates (37878, 63530, 38444, 64096), indicating minimum X, minimum Y, maximum X, and maximum Y. For normal specimens, we added the embedding of this one patch into each of the 80 normal specimens and fed these bags of embeddings to the ABMIL model. Sensitivity was measured as the proportion of positive model predictions. We also evaluated this in the context of positive specimens. First, all tumor patches intersecting with expert tumor annotations were removed, and the single patch embedding was added to the bags of embeddings. 48 positive samples were used – the specimen that the patch came from was not included. Sensitivity was measured as the proportion of positive predictions.
In addition, the effect of each individual tumor patch was evaluated for metastasis detection. In the positive slides (n=49), all tumor patches intersecting expert tumor annotations were removed to render the slide negative for metastasis. Then, each tumor patch that was fully contained by the tumor annotations was added to the bag of embeddings one at a time, and the model's predicted probability of tumor was recorded.
Last, the size of tumor was evaluated by sampling increasing numbers of tumor patches. First, all tumor patches intersecting with expert tumor annotations were removed. Then, tumor patches fully contained by the annotations were randomly sampled and added back to the bag of embeddings. This was evaluated over multiple numbers of sampled patches (i.e., 1, 2, 4, 8, 16, 32, 64). Sensitivity was evaluated as the proportion of positive predictions.
§.§.§ Identifying the largest unseen tumor
Motivated by the graded effect of tumor size on metastasis detection performance, we sought to identify the largest area of tumor that would still result in a negative prediction by the ABMIL models. To accomplish this, we used a HIPPO search algorithm. First, all patches that intersected the expert tumor annotation were removed, to render the specimen “negative” for metastasis. Then, tumor patches were added to the specimen one at a time, and model outputs were assessed. The tumor patch that resulted in the lowest model probability of tumor was kept in the bag, and the next round of the search was initiated. This was repeated until the model probability of tumor was greater than 0.5, which would trigger a positive prediction. The set of tumor patches that were in the bag prior to reaching a threshold of 0.5 was considered the largest area of tumor that could be present while maintaining a negative prediction.
§.§.§ Testing the effect of adipose tissue on metastasis detection
Upon inspection of attention maps for the CTransPath-based metastasis detection model, adipose regions had high attention in a false negative, leading us to hypothesize that adipose regions were driving the false negative in that specimen. Attention alone could not allow us to address this hypothesis, but HIPPO could. The adipose regions were annotated in QuPath. Patches that intersected with the adipose region were removed, while ensuring that no tumor patches were removed. To measure the effect of this adipose tissue in other specimens, patches intersecting with the adipose annotation were added to the other 48 metastasis-positive slides, and the number of changes from true positive to false negative was recorded.
§.§.§ Diagnosing shortcut learning
We sought to evaluate how HIPPO can uncover shortcut learning and how it compares to attention in this regard. To do this, we modified the normal specimens in the CAMELYON16 dataset to include a blue square (a fixed hexadecimal color). This is meant to mimic a plausible real world scenario in which a pathologist marked certain slides with a blue pen. In practice, we embedded one blue square of 128 × 128 μm using the UNI model <cit.> and replicated that embedding 36 times to create a 768 × 768 μm blue region. The embeddings of this blue region were concatenated with the patch embeddings of normal specimens. The specimens with metastasis were not modified. We reasoned that the ABMIL model would learn to distinguish normal from metastatic specimens by the presence of a blue region. To assess whether tumor regions were affecting model predictions in positive specimens, we removed all patches intersecting with tumor annotations in positive specimens and recorded model outputs. To visualize attention maps, we saved patch-wise attention weights in GeoJSON format and visualized the maps in QuPath <cit.>. We also used the search strategy HIPPO-search-high-effect to identify the regions with highest effect sizes de novo. We also did this using a UNI-based ABMIL model trained on the original, unaltered CAMELYON16 dataset with the same hyperparameters and random seed.
§.§.§ Identifying prognostic regions and comparing with attention
We sought to compare the effectiveness of attention and HIPPO for identifying tissue regions related to predicted prognosis. TCGA BRCA and SKCM data were used in these experiments. For attention, regions assigned the top 1% of attention scores were selected. For HIPPO, the search strategy HIPPO-search-high-effect was used to identify the regions most contributing to high risk in high-risk specimens, and the search strategy HIPPO-search-low-effect was used to identify the regions most contributing to low risk in low-risk specimens. Low and high risk were defined as the first and fourth quartiles of predicted risk scores, respectively. The first 1% of patches identified by the HIPPO search algorithms were selected for evaluation. To quantify the effect of the selected regions on predicted prognosis, we calculated the difference between the predicted prognosis on the original specimens and the predicted prognosis on the specimens with the selected regions removed.
Risk contribution of ROI = Risk using original WSI - Risk when ROI is removed
Positive values indicated that the regions contributed to higher risk, and negative values indicated that the regions contributed to lower risk. Independent t-tests were used to assess significance of differences between attention and HIPPO.
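This region-level comparison can be sketched as follows. The helper names are hypothetical; the attention-based selection of the top 1% of patches is shown explicitly, while the HIPPO-selected regions would come from a search loop like the one sketched earlier.

```python
import numpy as np

def top_fraction_by_attention(attention: np.ndarray, frac: float = 0.01) -> np.ndarray:
    """Indices of the top `frac` of patches by attention score."""
    k = max(1, int(round(frac * len(attention))))
    return np.argsort(attention)[-k:]

def risk_contribution(predict_risk, bag: np.ndarray, roi_indices: np.ndarray) -> float:
    """Risk using the original WSI minus risk with the ROI removed."""
    keep = np.setdiff1d(np.arange(len(bag)), roi_indices)
    return predict_risk(bag) - predict_risk(bag[keep])
```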
§.§.§ Effect of TILs on prognostic models
In prognostic models, we measured the effects of tumor-infiltrating lymphocytes (TILs) on model behavior. The number of TILs was quantified using the same approach as Ref. <cit.>. Briefly, HoVer-Net <cit.> was used to outline and label the nuclei in TCGA BRCA and SKCM WSIs. The model labels nuclei as one of six categories: tumor epithelium, lymphocyte, stroma, necrosis, normal epithelium, and unknown. Each 128 × 128 μm patch was called TIL-positive if it contained more than 20 cells, more than 10 immune cells, and more than 5 tumor cells. In TCGA BRCA, HoVer-Net failed for 12 WSIs, some of which were missing pixel spacing information.
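The patch-level heuristic can be written directly. The sketch below is illustrative; mapping "immune cells" to the lymphocyte class and "tumor cells" to tumor epithelium is an assumption, and `cell_counts` is an assumed per-patch tally of HoVer-Net detections by class.

```python
def is_til_positive(cell_counts: dict) -> bool:
    """TIL-positive patch: >20 cells total, >10 immune cells, and >5 tumor cells."""
    total = sum(cell_counts.values())
    return (total > 20
            and cell_counts.get("lymphocyte", 0) > 10
            and cell_counts.get("tumor epithelium", 0) > 5)

print(is_til_positive({"lymphocyte": 14, "tumor epithelium": 8, "stroma": 3}))  # True
```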
We measured the effect of TIL patches on predicted prognosis in TCGA BRCA and SKCM by either removing TILs from low-risk specimens or adding TILs to high-risk specimens, where low-risk was defined as samples in the first quartile of predicted risk and high-risk as samples in the fourth quartile of predicted risk. The predicted prognoses were compared before and after the intervention. To evaluate the sufficiency of TILs for predicting low risk, we added TIL patches from low-risk specimens to high-risk specimens. Risk predictions of the model were recorded, and differences were tested using paired t-tests. To assess the necessity of TIL regions, we removed TIL-positive patches from low risk specimens and measured risk predictions. Differences were tested using paired t-tests.
§.§.§ Evaluating autologous TILs
Autologous TIL therapy is a promising immunotherapy. We explored how HIPPO could be used for hypothesis generation in the context of autologous TILs in high-risk SKCM specimens (n=67). We sought to assess the degree to which prognostic ABMIL models are affected by the number of TILs in a specimen. We do not claim to assess the efficacy of autologous TILs through HIPPO. The embeddings of TIL-positive regions were replicated 2×, 10 ×, 20 ×, or 100 ×, and the change in predicted risk was measured:
Change in Risk = Risk with autologous TILs - Risk with original WSI
Negative values indicated that the addition of TILs decreased risk. The change in risk from baseline was assessed using paired t-tests.
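A sketch of this autologous-TIL counterfactual is given below. The names are hypothetical (including `predict_risk`), and whether the replication factor counts total copies or extra appended copies is an assumption of the sketch rather than a statement about the exact implementation.

```python
import numpy as np

def autologous_til_counterfactual(bag: np.ndarray, til_mask: np.ndarray,
                                  extra_copies: int) -> np.ndarray:
    """Append extra copies of the specimen's own TIL-positive patch embeddings."""
    tils = bag[til_mask]
    return np.concatenate([bag] + [tils] * extra_copies, axis=0)

# change_in_risk = (predict_risk(autologous_til_counterfactual(bag, til_mask, 9))
#                   - predict_risk(bag))
```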
§ DATA AVAILABILITY
The CAMELYON16 dataset is available at <https://camelyon17.grand-challenge.org/Data/> under the CC0 license (public domain). The results shown here are in whole or part based upon data generated by the TCGA Research Network: <https://www.cancer.gov/tcga>. Clinical data and whole slide image files can be accessed at <https://portal.gdc.cancer.gov>. Training and validation splits for prognostic models were accessed at <https://github.com/mahmoodlab/PORPOISE>.
§ CODE AVAILABILITY
A Python package implementing HIPPO is available at <https://github.com/kaczmarj/HIPPO> and is licensed under the terms of the 3-Clause BSD License. HIPPO documentation is published at <https://github.com/kaczmarj/HIPPO> under the terms of the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International copyright license (CC BY-NC-SA 4.0). Model weights and inference code are available at the following repositories: UNI (<https://huggingface.co/MahmoodLab/UNI>), REMEDIS (<https://github.com/google-research/medical-ai-research-foundations>), Phikon (<https://huggingface.co/owkin/phikon>), CTransPath (<https://github.com/Xiyue-Wang/TransPath>), and RetCCL (<https://github.com/Xiyue-Wang/RetCCL>). Model weights for the models trained for this report will be deposited to online repositories.
§ ACKNOWLEDGEMENTS
This research was supported by National Science Foundation (NSF) grant IIS2212046, National Institutes of Health (NIH) grant UH3CA225012, and Stony Brook Profund 2022 seed funding. JRK was also supported by the Medical Scientist Training Program at Stony Brook University and NIH grant T32GM008444 (NIGMS). We would also like to acknowledge the Department of Biomedical Informatics at Stony Brook University and the Simons Center for Quantitative Biology at Cold Spring Harbor Laboratory.
§ AUTHOR CONTRIBUTIONS
JRK and PKK conceived of the method and planned the experiments. JRK wrote the code and ran all experiments. JRK, PKK, and JHS interpreted the results of experiments. JHS and PKK supervised the project. JRK wrote the initial draft of the manuscript. All authors provided feedback on the manuscript and contributed to the final manuscript.
§ COMPETING INTERESTS
The authors declare the following competing interests: J.H.S. is co-founder and chief executive officer of Chilean Wool, LLC. All other authors declare no competing interests.
|
http://arxiv.org/abs/2409.02678v1 | 20240904130645 | Cubic graphs with no eigenvalues in the interval (-1,1) | [
"Krystal Guo",
"Gordon F. Royle"
] | math.CO | [
"math.CO",
"2020: Primary 05C50, Secondary 05C76"
] |
Cubic graphs with no eigenvalues in the interval (-1,1)
Krystal Guo and Gordon F. Royle
September 9, 2024
=============================
§ ABSTRACT
We give a complete characterisation of the cubic graphs with no eigenvalues in the open interval (-1,1). There are two infinite families, one due to Guo and Mohar [Linear Algebra Appl. 449:68–75], the other due to Kollár and Sarnak [Communications of the AMS. 1,1–38], and 14 “sporadic” graphs on at most 32 vertices. This allows us to show that (-1,1) is a maximal spectral gap set for cubic graphs. Our techniques include the examination of various substructures and an application of the classification of generalized line graphs.
MSC2020: Primary 05C50; Secondary 05C76.
keywords: graph eigenvalues, graph classification, graph spectra, spectral gap set
§ OPEN PROBLEMS AND FUTURE WORK
Lower and upper bounds on the maximum value of the HL-index among all graphs with given average degree are given in <cit.>. In the same paper, it is shown that a positive fraction of the eigenvalues of a subcubic graph lie in the interval [-√(2), √(2)].
It is still an open problem to show that the median eigenvalues of any cubic (or subcubic) graph, apart from the Heawood graph, are in the interval [-1,1].
An open sub-problem posed in <cit.> is to show that the median eigenvalues of subcubic planar graphs lie in [-1,1]; this problem was also included in the recent collection of open problems in spectral graph theory, see <cit.>. It is true for subcubic, planar and K_4-free graphs, as shown in <cit.>.
The proofs in this paper rely heavily on the graph being cubic, and so do not apply to non-regular subcubic graphs. However, we suspect that a similar result is true for non-regular subcubic graphs. In particular, we know two infinite families of non-regular subcubic graphs that arise from the Kollár-Sarnak graphs. The first family is obtained from k by deleting w_0, and the second is obtained from k by deleting {w_0,b_k-1} (see <ref> for the vertex-naming convention). In addition, we know four “sporadic” graphs on 8, 10, 14 and 18 vertices but we cannot currently rule out the existence of larger ones.
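The relevant spectral quantities are easy to check numerically for small examples. The sketch below (illustrative only) computes the adjacency eigenvalues of a cubic graph, tests whether any lie in the open interval (-1,1), and reports the two median eigenvalues; for the Heawood graph the median eigenvalues are ±√2, consistent with its role as the known exception in the median-eigenvalue problem.

```python
import numpy as np
import networkx as nx

def spectrum_info(G):
    """Adjacency eigenvalues, whether (-1,1) is free of eigenvalues, and the median pair."""
    eig = np.sort(np.linalg.eigvalsh(nx.to_numpy_array(G)))
    n = len(eig)
    gap_empty = not np.any((eig > -1) & (eig < 1))
    median_pair = (eig[n // 2 - 1], eig[n // 2])   # n is even for cubic graphs
    return eig, gap_empty, median_pair

eig, gap_empty, medians = spectrum_info(nx.heawood_graph())
print(gap_empty, medians)   # True, approximately (-1.414, 1.414)
```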
§ ACKNOWLEDGEMENTS
We would like to thank Brendan McKay who first pointed out the connection to positive semi-definite matrices and whose computations provided independent verification that the list of examples on up to 32 vertices is complete.
K. Guo gratefully acknowledge the support of the Cheryl E. Praeger Visiting Research Fellowship, which facilitated the initiation of this research during a visit to the University of Western Australia.
|
http://arxiv.org/abs/2409.03735v1 | 20240905175031 | LLM-CI: Assessing Contextual Integrity Norms in Language Models | [
"Yan Shvartzshnaider",
"Vasisht Duddu",
"John Lacalamita"
] | cs.LG | [
"cs.LG",
"cs.AI",
"cs.CR",
"cs.CY"
] |
LLM-CI: Assessing Contextual Integrity Norms in Language Models
Yan Shvartzshnaider
York University
Vasisht Duddu
University of Waterloo
John Lacalamita
York University
=======================================================================================================================
§ ABSTRACT
Large language models (LLMs), while memorizing parts of their training data scraped from the Internet, may also inadvertently encode societal preferences and norms. As these models are integrated into sociotechnical systems, it is crucial that the norms they encode align with societal expectations.
These norms could vary across models, hyperparameters, optimization techniques, and datasets. This is especially challenging due to prompt sensitivity–small variations in prompts yield different responses, rendering existing assessment methodologies unreliable.
There is a need for a comprehensive framework covering various models, optimization, and datasets, along with a reliable methodology to assess encoded norms.
We present LLM-CI, the first open-sourced framework to assess privacy norms encoded in LLMs. LLM-CI uses a Contextual Integrity-based factorial vignette methodology to assess the encoded norms across different contexts and LLMs. We propose the multi-prompt assessment methodology to address prompt sensitivity by assessing the norms from only the prompts that yield consistent responses across multiple variants. Using LLM-CI and our proposed methodology, we comprehensively evaluate LLMs using vignette datasets from prior work, examining the impact of model properties (e.g., hyperparameters, capacity) and optimization strategies (e.g., alignment, quantization).
§ INTRODUCTION
Recent advancements in generative models, including large language models (LLMs), have led to significant performance improvements and their adoption in various sociotechnical systems, such as education <cit.> and healthcare <cit.>.
LLMs, which generate responses to input prompts, require vast amounts of data for training scraped from the Internet <cit.>.
However, the training of LLMs has several side effects. LLMs memorize parts of the training dataset, which may include personal or sensitive information <cit.>.
Moreover, during training, LLMs could inadvertently encode societal preferences and norms that directly bias their responses.
A misalignment between norms which are socially acceptable and those which are encoded by an LLM, could cause it to reveal information inappropriately in its responses, thereby violating privacy <cit.>.
Several prior works have quantified these privacy violations from LLMs by identifying personally identifiable information <cit.>, and extracting potentially sensitive training data <cit.>.
However, the orthogonal problem of assessing encoded norms in LLMs has not been explored before.
Understanding the norms encoded in LLMs can help ensure they adhere to socially acceptable norms, prevent inappropriate information leakage, and mitigate social and ethical harms <cit.>.
To address the novel problem of assessing encoded norms in the context of LLMs, we use the theory of contextual integrity (CI) <cit.>. CI defines privacy as the appropriate flow of information according to contextual norms. Prior works have used CI to evaluate societal expectations in various sociotechnical systems <cit.>, including the alignment of LLMs with human annotations <cit.>.
However, the assessment of encoded norms is not trivial.
Firstly, we conjecture that norms vary across different model types, capacities, hyperparameters, and optimization strategies (e.g, alignment <cit.> and quantization <cit.>).
Secondly, LLMs are affected by prompt sensitivity, where minor changes in phrasing can alter responses <cit.>. This issue has not been addressed in prior work <cit.>, making them unsuitable for our evaluation.
To tackle the above challenges, we developed LLM-CI, a modular open-source framework for running various LLMs with different optimizations, hyperparameters, and datasets using CI-based vignettes as prompts. We also introduce the multi-prompt assessment framework, which addresses prompt sensitivity by evaluating norms based only on prompts that produce consistent responses across multiple variants. This approach enables comprehensive and reliable assessment of encoded norms in LLMs.
We claim the following main contributions: we present
* LLM-CI[Code will be available upon publication.], the first open-source framework which supports running various LLMs with different model properties, optimizations, and datasets. (Section <ref>)
* a multi-prompt CI norm assessment methodology to address prompt sensitivity and reliably assess encoded CI norms. (Section <ref>)
* a comprehensive evaluation to assess encoded CI norms in 10 state-of-the-art LLMs and examine the impact of model properties and optimization strategies. (Section <ref>)
§ BACKGROUND AND RELATED WORK
We present a brief primer on LLMs (Section <ref>) and CI (Section <ref>), followed by describing related work at the intersection of CI and LLMs (Section <ref>) and evaluation of socio-technical properties in LLMs (Section <ref>).
§.§ Large Language Models
Current state-of-the-art language models use transformers with billions of model parameters <cit.>. These language text generation models are trained to predict the next tokens in a sentence given previous tokens.
The model learns the distribution Pr(x_1, x_2, …, x_n) = Π_i=1^n Pr(x_i | x_1, …, x_i-1) where x_1, x_2, …, x_n is a sequence of tokens taken from a given vocabulary.
A neural network, f_θ, with parameters θ, is used to estimate this probability distribution by outputting the likelihood of token x_i given by f_θ(x_i | x_1, …, x_i-1).
During training, a language model learns to maximize the probability of the data in a training set containing text documents (e.g., news articles or webpages).
Formally, the training involves minimizing the loss function ℒ(θ) = -logΠ_i=1^n f_θ(x_i | x_1, …, x_i-1) over each training example in the training dataset.
Once trained, a language model can generate new text conditioned on a prompt prefix with tokens (x_1, …, x_i) by iteratively sampling x̂_i+1 ∼ f_θ(x_i+1 | x_1, …, x_i) and then feeding x̂_i+1 back into the model to sample x̂_i+2 ∼ f_θ(x_i+2 | x_1, …, x̂_i+1).
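As a toy illustration of this sampling loop (self-contained, with a dummy next-token distribution standing in for f_θ; the vocabulary and probabilities are invented for the example):

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = ["<s>", "the", "cat", "sat", "."]

def f_theta(prefix):
    """Dummy stand-in for a trained model: a fixed next-token distribution per last token."""
    table = {"<s>": [0.0, 0.7, 0.2, 0.05, 0.05],
             "the": [0.0, 0.05, 0.6, 0.3, 0.05],
             "cat": [0.0, 0.05, 0.05, 0.7, 0.2],
             "sat": [0.0, 0.1, 0.1, 0.1, 0.7],
             ".":   [0.2, 0.2, 0.2, 0.2, 0.2]}
    return np.array(table[prefix[-1]])

tokens = ["<s>"]
for _ in range(4):  # sample x_{i+1} ~ f_theta(. | x_1..x_i) and feed it back
    probs = f_theta(tokens)
    tokens.append(vocab[rng.choice(len(vocab), p=probs)])
print(" ".join(tokens))
```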
Training LLMs is resource- and time-intensive, so pre-trained public models are fine-tuned for specific objectives before deployment. Popular fine-tuning optimization techniques include, alignment for matching with human annotations, and quantization reduce model capacity for efficiency.
Alignment.
Training data scraped from the Internet can include inappropriate content (such as hate speech and stereotypes) <cit.> that LLMs might memorize and reproduce during inference.
To address this drawback, LLMs are aligned through fine-tuning on human-annotated datasets that correct inappropriate responses.
As a result, the model replaces inappropriate responses with standard responses like “I’m sorry, but I cannot...”.
Popular alignment techniques include: instruction fine-tuning <cit.> to follow natural language instructions;
reinforcement learning from human feedback (RLHF) <cit.> using a reward model based on human-annotated data to reward or penalize model responses;
direct preference optimization (DPO) <cit.> which simplifies RLHF by using LLM’s output probabilities to align with human preferences without a reward model.
Quantization. LLMs demand powerful GPUs because of their high capacity. Quantization improves efficiency by reducing the precision of weights and forcing them to take fixed set of values. This allows models to run on smaller devices. Activation-aware quantization <cit.> is one such approach that analyzes activation distributions to retain important parameters and eliminate redundant ones while maintaining utility.
§.§ Contextual Integrity
Contrary to predominant accounts of privacy that focus on aspects such as protecting sensitive information types <cit.>, enforcing access control <cit.> or mandating procedural policies and purposes <cit.>, the theory of CI defines privacy as an appropriate flow of information as governed by established societal norms <cit.>. According to CI, privacy is prima facie violated only when an information flow breaches an established contextual informational norm (aka CI norms or privacy norms), which reflect the values, purposes and function of a given context.
A CI-based assessment of privacy implication of a system or a service involves two main phases: a) identifying the norm breaching flow using CI and b) examining the breach using the CI heuristic to determine how the novel flow contributes the values and purposes of the context.
Identifying the norm breaching flow. The CI framework requires identifying five essential parameters to capture the information flow and the establishes norms in a given context including:
* roles or capacities of senders, subjects, and recipients in the context they operate (like professors in an educational context and doctors in the health context);
* the type of information they share;
* transmission principle to state the conditions, purposes, or constraints under which the information flow is conducted.
A canonical example below describes a typical interaction between a patient and a doctor.
CI Example.
Patient (sender) sharing patient's (subject) medical data (information type) with a doctor (recipient) for a medical check up (transmission principles)
All the five parameter values matter.
A change in any of the values results in a novel information flow.
For instance, if a colleague is the recipient instead of a doctor, or if the information is made public instead of being used for a medical check-up, the flow could constitute a breach of an established social norm.
Examining the breach. After we detect a violation, as part of the normative assessment, we use the CI heuristic to examine the ethical, financial, social and even political implications <cit.>. At the end of the process, we can either discard the novel information flow or modify the existing norm to better reflect the societal values and expectations.
Several works have used the CI framework to gauge and evaluate privacy norms in different social context such as education <cit.>, IoT <cit.>, COVID-19 pandemic <cit.> and natural disasters <cit.>. They employed a survey methodology using CI-based vignettes to gauge the appropriateness of potential information flows. These vignettes are of the form:
<information flow with five parameters>.
How acceptable is the above information flow?
[strongly unacceptable, somewhat unacceptable, neutral, somewhat acceptable, strongly acceptable]
§.§ Contextual Integrity and LLMs
A number of recent studies have applied CI to evaluate LLMs.
<cit.> use CI and theory of mind to evaluate the alignment of LLMs with human-annotated responses. They present ConfAIde, a CI-based benchmark for LLMs with 98 prompts from <cit.>.
Their study shows that LLM responses have low correlation with human annotations, with GPT-4 demonstrating better alignment compared to other models.
In a follow up work, <cit.> have used ConfAIde to investigate the alignment of 16 mainstream LLMs with human annotations.
They find that “most LLMs possess a certain level of privacy awareness” as the probability of LLMs refusing to answer private information increases significantly when they are instructed to follow privacy policies or maintain confidentiality.
Similar to results of <cit.>, they show that Pearson's correlation between human and LLM agreement varies widely and ChatGPT has the highest correlation among other models.
<cit.> evaluate the norms of LLMs when used as agents with a focus on privacy norms in LLM-mediated communication (i.e., LLMs being used to send emails).
They assess how well LLM responses align with crowd-sourced ground truth and measure privacy leakage from out-of-context information sharing.
<cit.> align LLMs with specific legal statutes to evaluate privacy violations and understand complex contexts for identifying real-world privacy risks. They generate synthetic cases and fine-tune their model to improve LLMs' ability to recognize privacy risks in actual court cases. However, their approach relies on limited number of expert-annotated norms and social contexts. To address these gaps, <cit.> develop a comprehensive checklist that includes social identities, private attributes, and existing privacy regulations. Using this checklist, they demonstrate that LLMs can fully cover HIPAA regulations.
<cit.> describe an attack that manipulates LLMs into revealing sensitive information by altering the context, such as fabricating an alien invasion to compel the model to disclose user details for “saving Earth.” Existing defenses like differential privacy and data sanitization fail because they do not account for context, and alignment is susceptible to jailbreaking. They use CI theory to mitigate information disclosures by proposing the use of two separate LLMs: one as a data minimization filter to identify appropriate information to disclose based on context, and the other that interacts with clients using the filtered data.
§.§ Evaluating Sociotechnical Properties
Several benchmarks evaluate various LLMs sociotechnical properties such as toxicity, fairness, bias, sycophancy, privacy, robustness, and ethics <cit.>.
On the other hand, LLMs have been shown to be sensitive to small variations in prompts which can drastically alter responses <cit.>. Previous studies comparing LLM decision-making to human behavior often overlook this sensitivity. <cit.> demonstrate that simple prompt adjustments can make LLMs exhibit more human-like behavior, questioning the reliability of current evaluation methods <cit.>.
There are limited studies consider prompt sensitivity: <cit.> propose generating synthetic prompts for better results. However, this is not suitable for assessing the encoded norms in LLMs as we require to query using CI-based vignettes and not synthetic prompts. Hence, a methodology for accounting for prompt sensitivity is largely an open problem.
There are a number of prior works that focus solely on assessing the leakage of sensitive data, including personally identifiable information to enhance LLMs privacy <cit.>. We can view them as assessment of a single CI parameter (data type), whereas a comprehensive CI approach requires all five parameters to make a privacy violation determination.
Hence, these are orthogonal to our work.
§ PROBLEM STATEMENT
We aim to reliably extract and evaluate the contextual information norms embedded in LLMs. In this section, we present the research questions, challenges, and limitations of using closely related work.
Research Questions.
We pose the following questions:
* How can we develop a comprehensive framework to assess the encoded norms in LLMs at scale?
* What methodology can reliably assess encoded norms?
* How do different factors influence the encoded norms?
Challenges.
To answer the above research questions, we have to address the following two challenges:
* Lack of framework and datasets. Current literature lacks methods for evaluating models with varying capacities, hyperparameters, and optimizations. Furthermore, there are no large datasets with CI-based vignettes.
* Prompt sensitivity. Current approaches for evaluating the responses obtained by simply prompting the model are not reliable due to prompt sensitivity <cit.>.
Hence, we cannot adapt existing evaluation strategies to reliably assess the encoded norms.
We need to address <ref> to answer <ref>, and develop a methodology to address <ref> to answer <ref> and <ref>.
Limitations of Prior Works.
The most closely related studies to our research question are by <cit.>, <cit.>, and <cit.>. As we discussed in Section <ref>, these works use CI-based vignettes to evaluate the alignment for LLMs.
We, however, see several aspects that limit the applicability of their methodology to our research questions:
* Different objectives. Both of these prior works study the alignment of LLMs with human annotations obtained from a user study. In contrast, we assess the CI norms learned by LLMs, not their alignment with human responses.
* Limited data. Their datasets only includes 98 vignettes <cit.> and 493 vignettes <cit.> in a few privacy-sensitive contexts. Moreover, the prompts only consider vignettes with three of the five CI parameters to show the alignment with human feedback. Therefore, it is unclear whether this approach can capture subtle differences in information flows with all parameters that might affect human judgments on acceptability.
* Limited evaluation.
The prompt templates for evaluation are different: to avoid anthropomorphizing LLMs, the prior works framed the prompts based on how people perceive information sensitivity rather than asking what model considers as “acceptable”.
While this works for evaluating alignment with humans, the approach does not assess the encoded norms in LLMs.
Finally, these works do not explicitly evaluate the impact of model properties such as capacity, prompt variants, or optimization techniques like alignment and quantization.
* Methodological differences. These prior work do not address prompt sensitivity, which is a major issue in evaluating LLMs <cit.>. Therefore, we cannot use their methodology to reliably assess the encoded CI norms. Furthermore, based on observations from prior work <cit.>, carefully choosing the prompts could result in a false sense of alignment with ConfAIde's human annotations <cit.> or PrivacyLens' crowd-sourced ground truth <cit.>.
Overall, the framework evaluates norms in LLMs in a general and principled manner and
can be extended to include tasks evaluated by prior work such as measuring privacy leakage <cit.> or human alignment <cit.>.
§ FRAMEWORK
We present LLM-CI, the open-sourced CI norm assessment framework for LLMs to address <ref> and answer <ref>.
Design. Figure <ref> shows the modular design of LLM-CI, which comprises the Vignette, Inference, Clean-up, and Analysis & Plotting modules.
* Vignette module includes the vignette datasets.
Similar to prior work <cit.>, we use a script to create all possible vignettes from the combinations of the five CI parameters and a vignette template (see the generation sketch after this module list).
The resulting vignettes are saved in a file.
LLM-CI includes datasets covering the following contexts:
* IoT devices <cit.>
* COPPA regulations for IoT devices <cit.>
* Internet privacy <cit.>
* location data <cit.>
* public records <cit.>
* privacy as a social contract <cit.>.
For a chosen dataset, the module converts each vignette into a prompt before passing it to the LLM.
Depending on the LLM, the module uses a corresponding prompt template (e.g., [INST] and [/INST] for Llama, and the corresponding chat template for tulu). The prompt template also appends additional text asking for the acceptability of the information flow described in the vignette.
* Inference module requires users to provide a model description, after which it loads the pre-trained weights (e.g., from Huggingface or OpenAI), executes the model, and offers an API for sending prompts and receiving responses.
The model runs on an inference engine for efficient execution with minimal overhead. For Huggingface models, we use vLLM <cit.>, while the OpenAI models run on our custom implementation.
All the model descriptions include their capacity (e.g., 7B for seven billion parameters) along with the fine-tuning optimization used.
We identify three types of model optimization:
* non-aligned models which have been trained on standard datasets but do not include any safety fine-tuning.
* aligned models have been fine-tuned to account for human preferences using DPO.
* quantized models use AWQ to reduce the model capacity for better efficiency.
These models are identified with “AWQ” or “dpo” in the description.
For models like llama-3.1-8B-Instruct and gpt-4o-mini, which use RLHF by default, we do not specify the optimization in the model description.
* Clean-up module filters the responses to extract the relevant text (e.g., the Likert scale value) from verbose responses. For instance, a verbose response from a model might read:
Based on the scenario provided, the answer is: somewhat acceptable. While it is understandable for a smart watch to collect and transmit data related to its owner's child's heart rate, it is important....
We manually reviewed all responses to ensure the cleaning process correctly extracted the appropriate Likert scale value. This was necessary as, in some cases, the model provided all Likert scale responses without specifying the applicable one.
* Analysis & Plotting module generates relevant statistics (e.g., unanswered prompts and counts for each Likert scale) and plots them (e.g., heatmaps to illustrate the impact of varying CI parameters on responses). Additionally, the module can include statistical tests to measure result significance across different models, prompt variants, and optimizations to help infer the encoded norms in LLMs. For a dataset with a normal distribution, we can perform the Analysis of Variance (ANOVA) <cit.> test or the t-test <cit.> to assess the statistical significance of the difference between two or more groups of models. For non-normal data distributions, we can use non-parametric statistical tests such the Friedman test <cit.> to measure the overall differences in models' responses and pairwise comparisons using the Wilcoxon Signed-Rank Test <cit.>.
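The vignette-generation step referenced in the Vignette module description can be sketched as follows. The parameter values below are illustrative only; the released datasets use much larger parameter lists, and the information subject is implicit (the device owner) in this IoT-style template.

```python
from itertools import product

senders = ["a sleep monitor", "a fitness tracker"]
attributes = ["owner's location", "owner's heart rate"]            # information types
recipients = ["the local police", "its manufacturer"]
principles = ["if owner has given consent", "to serve targeted ads"]  # transmission principles

TEMPLATE = ("Please indicate the extent to which this scenario is acceptable: "
            "{sender} records {attribute} which is sent to {recipient} "
            "under the following condition: {principle}.")

vignettes = [TEMPLATE.format(sender=s, attribute=a, recipient=r, principle=p)
             for s, a, r, p in product(senders, attributes, recipients, principles)]
print(len(vignettes), vignettes[0])
```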
Intended Use. A user need only provide the model description and select a context-specific dataset to assess the encoded norms. We envision LLM-CI to have the following users:
* model developers to evaluate their models and compare norms across various optimizations before public release, and to design models which do not leak information in inappropriate contexts (e.g., <cit.>).
* regulators and auditors to assess CI norms in various models amid growing calls for regulation, address privacy and information governance, and
* researchers to better understand encoded norms and design training strategies that aligns encoded norms in LLMs with some socially accepted norms.
Overall, LLM-CI acts as a comprehensive benchmark to compare various models, optimizations, and datasets, which goes beyond prior work <cit.>.
Extending LLM-CI. The modular design of LLM-CI allows users to easily integrate new datasets, prompt templates, models, and plots for analysis. To generate a new dataset, it is sufficient to specify the actors (senders, recipients, subjects), information types, and transmission principles of a given context (see Section <ref> for the five essential CI parameters).
Our (Python) script can generate new CI vignettes corresponding to the new parameters and export them in a parsable file format.
LLM-CI also supports introducing different prompt templates to encapsulate the CI vignettes and adding new models by specifying the model description, taken from the Huggingface library, in our configuration file for evaluation.
Furthermore, the responses which are exported into a dataframe, can be used for additional analysis.
We design to be general, i.e., to evaluate norms in LLMs regardless of their application. Prior works have considered specific cases where they evaluate the alignment with some ground truth human annotations using metrics such as privacy leakage <cit.> or correlation <cit.>.
can be extended by adding an additional module to evaluate these metrics given some reasonable ground truth.
However, to assess the norms encoded in LLMs, knowing ground truth is not necessary as we show in Section <ref> and Section <ref>.
§ METHODOLOGY
We present the experimental setup and hyperparameter choices in for evaluation (Section <ref>), followed by our multi-prompt CI assessment methodology (Section <ref>).
§.§ : Setup and Configuration
Vignette Module. We focus on two datasets: <cit.>, and <cit.>. contains 6912 vignettes to gauge the appropriateness of information flows in the context of IoT devices <cit.>. contains 1800 vignettes to gauge the perceptions of information flows, in the context of IoT devices, prescribed in the Children's Online Privacy Protection Rule (COPPA) <cit.>. Both datasets have vignettes of the form:
[colframe=black,colback=white]
Please indicate the extent to which this scenario is acceptable: a sleep monitor records owner's location which is sent to the local police under the following condition: if owner has given consent.
The answer needs to be strictly only one of the following options (without ANY additional text):
<Likert Scale>
For each prompt using the above template, we generate ten different prompt variants using ChatGPT as part of multi-prompt assessment methodology (see Section <ref>).
Inference Module. We discuss the choice of models and hyperparameters used for prompting.
Model Descriptions. Table <ref> summarizes the models used in our evaluation. We primarily consider pre-trained open-sourced tulu-2 family of models <cit.> to take advantage of the publicly available versions of the model with different capacities and optimizations (DPO and AWQ). Given a base model (e.g., tulu-2-7B or tulu-2-13B), we can evaluate the impact of optimizations on the CI norm assessment.
Temperature parameter.
The responses from LLMs are sensitive to the temperature parameter (t) that controls the predictability of the results. A smaller t results in more predictable output: t=0 produces the most consistent results across multiple runs and t=1 the most unpredictable. We confirm this empirically by comparing the responses for t=0, 1 on both datasets, which are shown in Figure <ref>.
Increasing t led to a change in the distribution of LLM responses.
There was a notable increase in “No Answer” responses and a shift along the Likert scale from “somewhat unacceptable” to “strongly unacceptable” or “neutral.”
Overall, t=0 returned consistent outputs for all models, with the exception of quantized models.
To ensure consistent and deterministic responses, we choose the temperature parameter t=0.
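A minimal sketch of how a single vignette prompt can be issued deterministically to an open model with the Hugging Face transformers library follows; t=0 corresponds to disabling sampling (greedy decoding). The model name and generation settings are illustrative choices, not the exact configuration used in our evaluation.

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "allenai/tulu-2-7b"  # illustrative; any model from Table <ref> could be substituted
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

prompt = ("Please indicate the extent to which this scenario is acceptable: "
          "a sleep monitor records owner's location which is sent to the local police "
          "under the following condition: if owner has given consent.\n"
          "The answer needs to be strictly only one of the following options (without ANY additional text):\n"
          "<Likert Scale>")

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
# do_sample=False yields greedy (deterministic) decoding, matching the t=0 setting.
outputs = model.generate(**inputs, max_new_tokens=16, do_sample=False)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))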
Cleanup Module.
In our evaluation, the clean-up module found the responses for 9.93% (88,268) of the prompts to be invalid.
The majority of invalid responses fall into the following main categories: requests for further context: “based on the information provided, it is difficult to determine the acceptability of the scenario without further context...”; limitation acknowledgments (mostly due to alignment): “as an ai language model, i cannot provide a personal opinion or additional text...”; and nonsensical responses: these mostly consisted of the character “s”, or used a wrong Likert scale value such as “smoothly acceptable” or “strictly acceptable.”
Analysis Module (Statistical Significance Tests). We run several statistical tests to assess the statistical significance of the difference between two or more groups of models. For pairwise comparisons among models, we use the Wilcoxon Signed-Rank Test <cit.>, which is a non-parametric rank test assuming the responses are not normally distributed.
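A minimal sketch of these significance tests using SciPy; the data here are synthetic and only illustrate the shape of the input (one Likert score per information flow and model), not actual results.

import numpy as np
from scipy.stats import friedmanchisquare, wilcoxon

# Synthetic example: rows are information flows, columns are models,
# entries are Likert responses encoded as integers in {-2, -1, 0, 1, 2}.
rng = np.random.default_rng(0)
scores = rng.integers(-2, 3, size=(500, 4))

# Overall difference across the four models (non-parametric, repeated measures).
stat, p_overall = friedmanchisquare(*[scores[:, j] for j in range(scores.shape[1])])

# Pairwise comparison between two models (e.g., a base model vs. its DPO-aligned variant).
stat_pair, p_pair = wilcoxon(scores[:, 0], scores[:, 1])

print(f"Friedman p = {p_overall:.3g}, Wilcoxon (model 0 vs. model 1) p = {p_pair:.3g}")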
§.§ Multi-Prompt CI Assessment Methodology
We first empirically illustrate prompt sensitivity and then describe our proposed novel assessment methodology.
Illustrating Prompt Sensitivity.
We use ChatGPT to rephrase the original prompt template to generate ten different syntactic variants. For example, one prompt says: Please indicate the extent to which this scenario is acceptable: {scenario} and its variation is Please rate how acceptable this scenario is: {scenario}.
For the full list of prompt variants, refer to Table <ref> in the Appendix.
Figure <ref> shows the variance between prompts for each vignette across all LLMs for both and .
All models, except gpt-4o-mini, exhibit variance in their prompt responses that is consistent between the two datasets with occasional outliers.
Specifically, for , we observe a lower variance with the median of 0 to 0.5, and 25% of the prompts returned responses with variance of 0.5 to 1.
The variance in responses follows a similar trend for the compared to the .
We observed that quantized and aligned models tend to have higher variances (e.g., tulu-2-7B-AWQ, tulu-2-dpo-7b and tulu-2-dpo-7B-AWQ), which can be attributed to lower-quality responses, as observed in prior work <cit.>.
Overall, the variance across prompt variants makes it harder to reliably assess the encoded norms.
Proposed Assessment Methodology. To address the prompt sensitivity challenge (<ref>), we propose a methodology that only evaluates norms from prompts with consistent responses across all the prompt variants (addressing <ref>).
We quantify consistency using either simple majority (≥50%) or super majority (≥67%) of responses for each prompt variant. A stricter majority threshold and greater diversity in prompt variants both increase confidence in assessing encoded norms.
[colframe=black,colback=white, title=Multi-prompt assessment methodology.,colbacktitle=gray]
* Select K different variants of a given prompt to query the LLM. Ideally, the prompt variations should cover a wide set of distributions that is likely to be seen in practice.
* Pass all the K+1 prompts to LLMs and track the responses.
* If majority[This includes different types of majority: simple majority, super majority or any other forms] of the K+1 responses are consistent, use the corresponding information flow consistent response for further evaluation.
* Else, hold that vignette and its responses for further evaluation.
* Identify how different CI parameters impact the responses, and the norms that an LLM might have learned from its training data.
In our evaluation, we use a simple majority of prompts with the same response.
We incorporate this methodology in and use it for all subsequent evaluations.
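A minimal sketch of this consistency filter, assuming the cleaned responses for the K+1 variants of one vignette are available as a list; the 0.5 threshold corresponds to the simple majority used here, and 0.67 would give the super-majority variant.

from collections import Counter

def consistent_response(responses, threshold=0.5):
    # Return the majority response for one vignette if it reaches the threshold
    # over all K+1 prompt variants; otherwise return None (the vignette is held out).
    valid = [r for r in responses if r is not None]  # drop invalid/cleaned-out answers
    if not valid:
        return None
    answer, count = Counter(valid).most_common(1)[0]
    return answer if count / len(responses) >= threshold else None

# Example: 11 variants, 7 agree -> the vignette's encoded norm is "somewhat acceptable".
example = ["somewhat acceptable"] * 7 + ["neutral"] * 3 + [None]
print(consistent_response(example))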
§ EVALUATION
We now discuss how to reliably evaluate CI norms and examine the factors influencing these norms to address <ref>, by way of the following two questions:
* How do we assess the encoded CI norms for LLMs? (Section <ref>)
* How do model type, capacity, alignment, and quantization impact our assessment? (Section <ref>)
Choice of Norms/Models Analyzed.
We use a limited subset of information flows and their heatmaps for brevity, as space constraints prevent covering all norms. We randomly selected senders to demonstrate norm analysis and the impact of various factors.
However, can generate complete heatmaps for detailed analysis, some of which are included in the Appendix.
§.§ Assessing Encoded CI Norms in LLMs
We chose gpt-4o-mini and llama-3.1-8B-Instruct, which produced the most consistent responses in our evaluation, to illustrate a subset of encoded norms (<ref>) on .
We omit the evaluation on due to space limitation. Figure <ref> shows a sample output of 's plotting module–a heatmap of the extracted norms for a fitness tracker as a sender.
For a full set of extracted norms, refer to Figure <ref> and <ref> in the Appendix.
The empty (gray) squares in the heatmap represent information flows where could not deduce the corresponding encoded norm due to a lack of sufficient number of prompts with valid responses: ten or more prompts for llama-3.1-8B-Instruct or three prompts for gpt-4o-mini.
Overall, compared to gpt-4o-mini, llama-3.1-8B-Instruct is more conservative in its responses, with the majority of flows deemed “somewhat unacceptable.” A notable exception is the “if the owner has given consent” transmission principle. Under this transmission principle, the llama-3.1-8B-Instruct model viewed most information flows as “somewhat acceptable,” except when the fitness tracker shares information with “government intelligence agencies.” Furthermore, two specific information types—“audio [and video] of the owner”—stand out. The model deemed the information flow “strongly unacceptable” when the information “is stored indefinitely.” This norm stance seems to align with the original survey result in <cit.> that found that “fitness tracker sending recorded audio is considerably less acceptable than the same device sending exercise data.”
This is also reflected in gpt-4o-mini, which sees sharing audio and video information types for a large number of transmission principles as unacceptable. While producing more positive responses overall, gpt-4o-mini consistently views sharing information for advertising or indefinite storage as “somewhat unacceptable” or “strongly unacceptable.”
§.§ Evaluating Influencing Factors
We now evaluate the influence of the following factors on the encoded CI norms <ref>:
[label=*),itemjoin=,]
* model type
* model capacity (7B models vs. 13B models)
* alignment (base models vs. DPO)
* quantization (base models vs. AWQ).
To gauge a factor's influence, we use a multi-dimensional heatmap to show responses for each across four models in both datasets. We compare models with specific factor values, such as the and models to assess the impact of model capacity on norms. To ensure reliable norm extraction, we use a multi-prompt assessment that considers the majority norm across all prompts for each model.
Model Type.
We conjecture that CI vignettes produce different responses for different model types due to differences in their training datasets and prompt sensitivity. We used to extract encoded norms in ten LLMs (see Table <ref> for a complete list) on both and . Figure <ref> shows the distribution of responses for each LLM for a fixed prompt template. The distributions of LLM responses vary significantly, with various biases. For example, the model tulu-2-7B-AWQ produced largely “strongly unacceptable” responses compared to llama-3.1-8B-Instruct where the responses were split between “somewhat unacceptable” and “somewhat acceptable.”
Overall, we noted a significant variability in agreement on norms across various LLMs.
For , tulu-* models provided the same response for only 241 information flows and only five information flows in . The tulu-* and llama-3.1-8B-Instruct models agreed on ten information flows in and none in . The tulu-* and gpt-4o-mini models agreed on 207 information flows and only three in . The llama-3.1-8B-Instruct and gpt-4o-mini models seemed to be the most aligned, agreeing on a total of 2519 information flows in and 1107 in .
As we discuss in Section <ref>, without knowing the exact datasets used to train the LLM models, we can only speculate about the differences for these apparent biases.
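Pairwise agreement counts such as those above can be computed directly from the dataframe of consistent responses; the following is a minimal sketch with a synthetic dataframe, one row per information flow and one column per model.

import pandas as pd

# Synthetic example of majority responses per information flow and model.
df = pd.DataFrame({
    "tulu-2-7B":             ["somewhat unacceptable", "neutral", "somewhat acceptable"],
    "llama-3.1-8B-Instruct": ["somewhat unacceptable", "somewhat acceptable", "somewhat acceptable"],
    "gpt-4o-mini":           ["somewhat unacceptable", "somewhat acceptable", "strongly acceptable"],
})

# Number of information flows on which each pair of models gives the same response.
models = list(df.columns)
for i, m1 in enumerate(models):
    for m2 in models[i + 1:]:
        print(m1, "vs", m2, ":", int((df[m1] == df[m2]).sum()), "flows in agreement")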
Model Capacities. To evaluate the influence of model capacity, we compare the models with the models: tulu-2-7B () with tulu-2-13B (); and tulu-2-dpo-7B () with tulu-2-dpo-13B (). Heatmap in Figure <ref> shows the embedded norms for all four models related to information flows with the senders “a fitness tracker” () and “smart watch” ().
The squares with same color for all four triangles represents consistent responses across all four models.
Conversely, the different triangle colors reflect the inconsistencies in the models' responses.
We first focus on the responses with the same (or similar) color shades for all triangles to understand the norms that are consistent across different model capacities.
For example, in , perhaps reflective of the training dataset, all four models ranked the majority of flows involving “a fitness tracker” sharing information with “the government intelligence agencies" (second column in each section of the corresponding data type) as “somewhat unacceptable” and “strongly unacceptable.” For , all four models viewed information flows involving the transmission principle of “[serving] contextual ads” as “somewhat unacceptable” or “strongly unacceptable.”
This observation aligns with the prior work in <cit.> that found: “Information flows with the transmission principle “if the information is used to serve contextual ads” have negative average acceptability scores across almost all senders, recipients, and attributes.”
Nevertheless, in contrast to the reported result in <cit.>, the models also viewed information flows even if information “is deleted” as “somewhat unacceptable” or “strongly unacceptable.”
For several flows, however, in both and , depending on capacity, the models embed different norms.
In , while the models: tulu-2-7B () and tulu-2-dpo-7B (), consider information flows involving “a fitness tracker” sharing “owner's heart rate” with “other devices at home” or “owners immediate family” in “an emergency situation” or “to perform maintenance” as “strongly unacceptable,” models: tulu-2-13B () and tulu-2-dpo-13B (), deem them “strongly acceptable” (darkred!90darkred!90darkgreen!30darkgreen!30).
In , we observe a similar pattern for “a smart watch” sharing “the owner's child's birthday” with “a third-party service provider” or “[device's] manufacturer” when the “[recipient] implements reasonable procedures to protect the information collected.” The original survey <cit.> suggests similar age-related differences in information flow perceptions. Participants aged 45-65 found certain transmission principles more acceptable than younger groups, especially those familiar with COPPA and owning smart devices. Difference in age and familiarity might be reflected in the training dataset. This, in turn, could've skewed the model's encoded norms.
Model Alignment.
To evaluate the influence of alignment, we compare the responses from the base models: tulu-2-7B () and tulu-2-13B (); with the corresponding aligned models: tulu-2-dpo-7B () and tulu-2-dpo-13B ().
Figure <ref> identifies several illustrative differences. In , there is agreement within the base models and within the aligned models, but disagreement between the two groups, regarding information flows involving “a fitness tracker” sharing “the owner's heart rate” or “owner's location" in “an emergency situation” with all recipients.
The base models view these flows as “strongly unacceptable,” whereas the aligned models deemed them “somewhat acceptable” (darkred!90darkred!90darkgreen!30darkgreen!30).
In , a similar pattern appears with regard to information flows involving “a smart watch” sharing “audio [or video] of its owner's child” with “a third party service provider” or “[device] manufacturer” for all the stated transmission principles (darkred!90darkred!90darkgreen!30darkgreen!30).
We can also observe a full agreement between the base and aligned models for some information flows in but not in .
In , as discussed in the previous section, all models viewed “a fitness tracker” sharing information flows with the “government intelligence agencies” as “strongly unacceptable.”
This is perhaps indicative of a strong sentiment at the time the training datasets were scraped. The original work <cit.> corroborates this assumption: “we included the local police and government intelligence agencies in consideration of recent court cases involving data obtained from IoT and mobile devices”.
Significance of Results.
Table <ref> shows pairwise Wilcoxon Signed-Rank tests that yielded p-values < 0.05 for comparisons of the four model outputs; thus, we can reject the null hypothesis of no difference between the model responses, indicating that there are statistically significant differences.
We can, thus, conclude that model alignment and capacity significantly impact the encoded norms.
Quantization. To evaluate the impact of quantization, we compare the base models: tulu-2-7B () and tulu-2-13B (); with the quantized AWQ models: tulu-2-7B-AWQ () and tulu-2-13B-AWQ (). Figure <ref> shows the heatmap for the four models.
Varying information types and transmission principles can elicit different results depending on whether the model is quantized. For the information flows involving “a fitness tracker” sharing the “owner's heart rate” with the [device] manufacturer “if privacy policy permits it” or “if the owner is notified,” base models and quantized models produced different results (darkred!90yellow!90darkgreen!60darkred!30).
Similarly for , the results differ for the information flow sharing the “child's birthday” with the “third party service provider” or "device manufacturer” if the “privacy policy permits it.”
Furthermore, there are cases where the base models within a specific capacity agree with their quantized counterparts, while disagreeing with models with other capacity.
For example, in , the (base and quantized) models view the information flow involving “a smart watch” sharing “child's emergency contact” with “[device] manufacturer” if “the owner has given a verifiable consent before the information is collected” as “somewhat acceptable” (darkred!90darkgreen!90darkgreen!30darkgreen!30).
However, the models disagree: the base model deems the flow “strongly acceptable,” while the quantized model shows it as “strongly unacceptable.”
We also note instances where the base models and quantized models of equal capacity encode the same norm, for example, in , “a fitness tracker” sharing “the time the owner is home” in the “emergency situation” (darkred!90darkred!90darkgreen!30darkgreen!30). The tulu-2-7B () and tulu-2-7B-AWQ () models deemed this information flow as “strongly unacceptable,” while tulu-2-13B () and tulu-2-13B-AWQ () view it as “somewhat acceptable.”
We see a similar agreement for when sharing the“audio [and video] of the owner's child.”
Significance of Results.
Table <ref> shows pairwise Wilcoxon Signed-Rank tests of the four model outputs that yielded p-values < 0.05 for all comparisons, rejecting the null hypothesis of no difference between the model responses.
We conclude that quantization has a statistically significant impact on the encoded norms.
Alignment & Quantization (A&Q). To evaluate the impact of A&Q, we compare the responses of the A&Q models and those of the base models.
Figure <ref> depicts the responses of tulu-2-7B (), tulu-2-13B (), tulu-2-dpo-7B-AWQ (), and tulu-2-dpo-13B-AWQ ().
We can observe instances of “agreement” or “disagreement” on norms between and models with both alignment and quantization (A&Q).
For example, in , there is an overall agreement that sharing with “government intelligence agencies” is “somewhat unacceptable” or “strongly unacceptable,” with the exception of sharing “the times owner is home” “in an emergency situation.”
For several information flows, there is a disagreement within the base models and within the A&Q models, for example, in , when it comes to “a fitness tracker” sharing “owner's eating habits,” if the “[data] is anonymous” there is no inner agreement for all recipients to various degrees, except for “government intelligence agencies.” Similarly in , both and with A&Q models agree that sharing the “owner's child's call history” with “the third party provider” is mostly “somewhat unacceptable,” or “strongly unacceptable,” with the exception of “a smart watch” sharing this information “to protect child's safety.” In this case, the base models present a polar opposite view, whereas the A&Q models disagree on the level of acceptability (darkred!90darkred!90darkgreen!30darkred!30).
Significance of Results.
Table <ref> shows pairwise Wilcoxon Signed-Rank tests of the four model outputs that yielded p-values < 0.05 for all comparisons. With this result, we can reject the null hypothesis of no difference between model responses, indicating that there are statistically significant differences.
We conclude that applying both alignment and quantization significantly impacts the encoded norms.
§ DISCUSSION AND CONCLUSIONS
Our work builds on prior efforts aimed at the challenging task of evaluating the sociotechnical properties of LLM models. This task requires a deep understanding of both societal factors and the inner workings of the models. We discuss the limitations of our work and suggest future directions.
Encoded norms provenance.
While identifies encoded norms in LLMs, it does not trace their origin. The training datasets significantly impact a model's “view of the world,” and without dataset transparency, LLMs remain black boxes, making it difficult to understand their responses. Therefore, examining the dataset source is crucial to ensure LLMs are trained on valid and socially acceptable norms.
For instance, 82% of the training data for falcon-180B includes a massive scrape of the Web, 6% books, 5% conversations (e.g., from Reddit, StackOverflow, HackerNews), 5% code, and 2% from technical sources like arXiv, PubMed, and USPTO[<https://huggingface.co/tiiuae/falcon-180B#training-data>].
Making the contents available could help shed light on the biases in the models' responses.
CI privacy (ground truth) norms alignment. Having reliably identified encoded norms in LLMs, we can use to detect deviations from socially acceptable ground truths, which may be based on regulations, laws, or survey studies <cit.>.
Our CI-inspired approach can evaluate norms in LLMs across various settings and can be extended to compare with crowd-sourced ground truth on information flow acceptability, similar to <cit.>, to measure privacy leakage or correlation with human annotations as in <cit.> and <cit.>.
To ensure that the LLM norms align with the ground truths, we would need to fine-tune the models using alignment objectives (see Section <ref>). We consider this an area for future work.
Summary.
This paper introduces , the first open-source framework based on CI for evaluating contextual norms in LLMs. We propose a multi-prompt assessment methodology to extract encoded norms while addressing prompt sensitivity. Using , we evaluate norms in 10 LLMs, considering factors like model capacity, alignment, and quantization, and discuss their impact. Our work aims to provide a reliable evaluation guideline for future research.
§ ETHICS STATEMENT
The use of LLMs has societal implications. To evaluate , we use the latest LLMs, which require a large amount of energy and resources to maintain. While our work has relatively little environmental impact because we are not training or fine-tuning the models, we acknowledge that through the use of tools like OpenAI in our research we contribute to the overall negative effect of these systems on the environment.
Our work also carries a social implication: by using CI, brings an additional layer of normative rigor to evaluating LLM-based systems, helping to understand how they contribute to the purposes, values, and functions of the contexts in which they operate.
Furthermore, LLMs can produce auxiliary information in the responses that could contain inappropriate information. This is also the reason we designed the clean-up module to filter the irrelevant information.
§ OPEN SCIENCE
Datasets, scripts, binaries, and source code to reproduce our results will be publicly available upon publication to align with USENIX Security's open science policy.
Additionally, we will submit artifacts for evaluation to ensure its availability, functionality, and reproducibility.
§ ACKNOWLEDGEMENTS
We acknowledge the support of the Natural Sciences and Engineering Research Council of Canada (NSERC), RGPIN-2022-04595, and thank the OpenAI API Researcher Access Program for the credits to evaluate GPT-4 model. Vasisht is supported by David R. Cheriton Scholarship, and Cybersecurity and Privacy Excellence Graduate Scholarship.
§ APPENDIX
|
http://arxiv.org/abs/2409.02717v1 | 20240904135336 | Universality theorems for zeros of random real polynomials with fixed coefficients | [
"Matthew C. King",
"Ashvin Swaminathan"
] | math.PR | [
"math.PR",
"math.NT",
"60G99, 12D10 (primary), 11R04, 11R09 (secondary)"
] |
Universality theorems for zeros of random real polynomials
with fixed coefficients
Matthew C. King and Ashvin A. Swaminathan
September 9, 2024
=====================================================================================
§ ABSTRACT
Consider a monic polynomial of degree n whose subleading coefficients are independent, identically distributed, nondegenerate random variables having zero mean, unit variance, and finite moments of all orders, and let m ≥ 0 be a fixed integer. We prove that such a random monic polynomial has exactly m real zeros with probability n^-b+o(1) as n→∞ through integers of the same parity as m, where b ≈ 0.76 is a positive constant. More generally, we determine conditions under which a similar asymptotic formula describes the corresponding probability for families of random real polynomials with multiple fixed coefficients. Our work extends well-known universality results of Dembo, Poonen, Shao, and Zeitouni, who considered the family of real polynomials with all coefficients random.
As a number-theoretic consequence of these results, we deduce that an algebraic integer α of degree n has exactly m real Galois conjugates with probability n^-b+o(1), when such α are ordered by the heights of their minimal polynomials.
§ INTRODUCTION
Let (a_i)_i ∈ℕ denote a sequence of independent and identically distributed (i.i.d.) random variables of zero mean and unit variance possessing finite moments of all orders. Consider the polynomials
f_n(x) ≔ x^n-1 + ∑_i= 2^n a_n-ix^n-i, and f_n^*(x) ≔∑_i= 1^n a_n-ix^n-i.
Note that f_n is monic with all subleading coefficients random, whereas f_n^* has all coefficients random. For n odd, let P_n (resp., P_n^*) be the probability that f_n(x) (resp., f_n^*(x)) is everywhere positive; i.e., define
P_n ≔ℙ(f_n(x) > 0, ∀ x ∈ℝ), and P_n^* ≔ℙ(f_n^*(x) > 0, ∀ x ∈ℝ).
In <cit.>, Dembo, Poonen, Shao, and Zeitouni (henceforth, DPSZ) study the asymptotic behavior of P_n^* in the limit as n →∞. Strikingly, they prove that there exists a universal positive constant b, independent of n, such that P_n^* = n^-b+o(1) (i.e., the limit lim_n →∞log_n P_n^* exists and equals -b); see Theorem 1.1(a) in loc. cit. While the value of b remains unknown, it was proven by DPSZ that 0.4≤ b≤ 2, and their numerical simulations suggest that b ≈ 0.76. In light of their remarkable result, it is natural to ask the following questions:
* Does the above result of DPSZ for the full family of random polynomials f_n^*(x) admit an analogue for the special family of monic random
polynomials f_n(x)? More specifically, is there a constant b' > 0 such that P_n = n^-b' + o(1)?
* If the answer to the first question is yes, then how is b' related to b? Are they equal?
In this paper, we give an affirmative answer to both of the above questions. Indeed, we prove:
We have that P_n = n^-b + o(1).
More generally, take j ∈ℕ, take n ≥ j to be of parity different from that of j, and let P_n,j (resp., P_n,j^*) be the probability that f_n(x) (resp., f_n^*(x)) has exactly j simple real zeros; i.e., define
P_n,j≔ℙ(#{x ∈ℝ : f_n(x) = 0, f_n'(x) ≠ 0} = j), and P_n,j^* ≔ℙ(#{x ∈ℝ : f_n^*(x) = 0, f_n^*'(x) ≠ 0} = j).
In this setting, one can ask how P_n,j and P_n,j^* behave asymptotically as n →∞. When j = 0, we have that P_n,0 = P_n and P_n,0^* = P_n^*, and the aforementioned asymptotic formulas apply. In <cit.>, DPSZ prove that, for the full family of random polynomials f_n^*(x), the same asymptotic formula holds — i.e., we have that P_n,j^* = n^-b+o(1)— for each j, all the way up to j = o(log n/loglog n); see Theorem 1.2 in loc. cit. Our next result gives the analogue for the family of random monic polynomials f_n(x):
Let j ∈ℕ be such that j ≡ n-1 (mod 2). Then we have that P_n,j = n^-b + o(1) as n →∞. In fact, f_n(x) has at most o(log n/loglog n) real zeros with probability n^-b + o(1).
Roughly speaking, our methods to prove Theorems <ref> and <ref> involve separating the behavior of the leading term of the random monic polynomial from that of the remaining terms, which comprise a fully random polynomial of one smaller degree. The behavior of this lower-degree polynomial can then be analyzed using results obtained by DPSZ in the course of proving their asymptotic formulas in the context of fully random polynomials. It is natural to expect that similar arguments might work much more generally, to prove analogues of Theorems <ref> and <ref> for random polynomials with multiple fixed coefficients. In this paper, we actually prove these more general theorems, the statements of which are given in <ref>, and we deduce Theorems <ref> and <ref> as consequences.
§.§ Generalizations of Theorems <ref> and <ref> to polynomials with multiple fixed coefficients
As alluded to above, our methods allow us to generalize Theorems <ref> and <ref> in two different directions. Firstly, we can fix the values of several coefficients, not just the leading coefficient; and secondly, we can replace the condition of positivity with a much stronger condition. To state these generalizations, we require some further notation. Let k ∈ℤ_> 0 be fixed, let S ⊂{1, …, k} be a subset containing k, and for each i ∈ S, fix a number c_i ∈ℝ; if 1 ∈ S, take c_1 > 0. Consider the polynomial
f_n,S(x) ≔∑_i ∈ S c_ix^n-i + ∑_i ∈{1, …,n}∖ S a_n-ix^n-i,
which has random coefficients with the exception of the terms c_i x^n-i for i ∈ S. Let (γ_n)_n be a sequence of non-random functions on ℝ, and assume that there exists δ > 0 for which n^δ|γ_n(x)| → 0 uniformly over x ∈ℝ. For n odd, let P_n,S,γ_n be the probability that the normalized polynomial
f̂_n,S(x) ≔ f_n,S(x)/√(𝔼(f_n,S(x)^2))
is everywhere bigger than γ_n(x); i.e., define
P_n,S,γ_n≔ℙ(f̂_n,S(x) > γ_n(x), ∀ x ∈ℝ).
Note that demanding f̂_n,S(x) to be everywhere greater than γ_n(x) is a considerably stronger condition than merely asking f_n,S(x) to be everywhere positive!
One cannot reasonably expect that an unconditional analogue of Theorem <ref> would hold with P_n replaced by, say, P_n,S,0 (where by the subscript “0” we mean that γ_n ≡ 0 is taken to be identically zero for each n). Indeed, one can choose the coefficient data— i.e., the data of the set S, the fixed coefficients c_i for i ∈ S, and the distribution of the random coefficients a_i— in such a way that P_n,S,0 = 0 for infinitely many n.[As we detail in Example <ref> (to follow), one way to do this is to choose the distribution of the a_i to be bounded in absolute value by some constant C, and to choose some of the c_i to be considerably less than -C.] We then analyze the positivity of f_n,S(x) on three ranges of |x|. First, when |x|< 1, the fixed terms matter little, and the lower-degree random terms dominate, making f_n,S(x) > 0 with positive probability. The same occurs when |x| -1 is sufficiently large, in which case the leading term is positive and dominates over all other terms. But in the middle range, when 0 ≤ |x| -1 is sufficiently small, the fixed terms can contribute large negative quantities to the value of f_n,S(x), forcing it to be negative regardless of the values of the random coefficients.
The upshot is that, if we are to prove that P_n,S,γ_n obeys an asymptotic similar to that which we obtained for P_n in Theorem <ref>, then we must impose a nontrivial condition on the coefficient data, one that addresses the potential for f_n,S(x) to be negative on the aforementioned “middle range” of |x|. To this end, we shall stipulate the coefficient data be “nice,” where the notion of niceness is defined as follows:
With notation as above, we say that the coefficient data are nice if there exists an even integer s > k such that we have
ℙ(f_s,S(x) ≠ 0, ∀ x s.t. |x| > 1) > 0.
Conditional on the coefficient data being nice, we prove the following generalization of Theorem <ref> about the asymptotic behavior of P_n,S,γ_n:
We have the upper bound P_n,S,γ_n≤ n^-b + o(1). Furthermore, if the chosen data are nice, then we have equality P_n,S,γ_n = n^-b + o(1).
We note that many natural choices of coefficient data are nice. A few interesting examples of such choices are listed as follows:
* Take the a_i to be standard normal random variables, with any choice of fixed coefficients. In fact, the main results of <cit.>, along with Theorems <ref> and <ref> in the present paper, are proven by using strong approximation results to reduce to the Gaussian case; see Theorem <ref> below.
* Take the distribution of the a_i to be arbitrary (of zero mean, unit variance, and finite moments of all orders), but take S to consist entirely of odd numbers and take c_i > 0 for each i ∈ S, so that the fixed terms have even degree and positive coefficients. This includes the case of monic polynomials considered in Theorems <ref> and <ref>. In particular, these theorems follow immediately from Theorem <ref> and <ref> by setting S = {1}, c_1 = 1, and γ_n ≡ 0 for all n.
* Take the distribution of the a_i to have sufficiently large support, relative to the data of the fixed coefficients. More precisely, for every choice of S and (c_i)_i ∈ S, there exists a constant M > 0 depending on these choices such that the coefficient data are nice if (a_i > M) > 0.
As mentioned above, the key to proving Theorem <ref> is to prove the corresponding result where the random coefficients a_i are assumed to be Gaussians. In this case, the result of Theorem <ref> holds under a much weaker assumption than stipulating that the functions γ_n converge everywhere to zero. Indeed, we prove:
Suppose that (a_i)_i are standard normal random variables, that sup{γ_n(x) : x ∈ℝ, n ∈ℕ} < 1, and that there exists a sequence (ε_n)_n of positive real numbers with limit 0 such that
sup{|γ_n(x)| : ||x|-1| ≤ n^-ε_n}→ 0.
as n →∞. Then we have that P_n,S,γ_n = n^-b+o(1).
We also prove the following generalization of Theorem <ref> concerning the probability P_n,j,S that f_n,S(x) has exactly j zeros; once again, the niceness condition is required for the lower bound:
Let j∈ℕ be such that j ≡ n-1 (mod 2). Then we have the upper bound P_n,j,S≤ n^-b + o(1). If the chosen data are nice, then we have equality P_n,j,S = n^-b + o(1). Furthermore, the polynomial f_n,S(x) has at most o(log n/loglog n) real zeros with probability at most n^-b + o(1), with equality if the coefficient data are nice.
It is important to note that our proof of Theorem <ref> does not rely on Theorem <ref> or its proof. Observe that if we take n to be odd and j to be even in Theorem <ref>, we obtain Theorem <ref> in the special case where γ_n ≡ 0. Further taking S = {1} and c_1 = 1 gives a second proof of Theorem <ref>. On the other hand, the much stronger Theorem <ref> cannot be similarly deduced from Theorem <ref>.
Incidentally, our methods allow us to obtain some results in cases where the number of fixed coefficients is allowed to grow slowly with n. A particular case of interest is that of monic polynomials with many consecutive coefficients fixed to be zero. To set this up, let k(n) be a nondecreasing function with values in ℕ, let S_n = {1, …, k(n)}, let c_1 = 1 and c_i = 0 for all i ≥ 2, and consider the random polynomial f_n,S_n(x). Explicitly, we have
f_n,S_n(x) = x^n-1 + ∑_i = k(n)+1^n a_n-ix^n-i.
Then we have the following analogue of Theorem <ref>, which holds in the regime where log k(n) grows slower than √(log n) (with a somewhat weaker condition in the Gaussian case):
Let notation be as above, and suppose that k(n) = n^o(1) in the case of Gaussian coefficients, and that k(n) = n^o(1/√(log n)) in the case of general coefficients. Then we have
P_n, S_n,γ_n = n^-b + o(1).
The following analogue of Theorem <ref> holds in the tighter regime where k(n) grows slower than log n.
Let notation be as above, and suppose that k(n) = o(log n). Let j∈ℕ be such that j ≡ n-1 (mod 2). Then we have
P_n, j,S_n = n^-b + o(1).
§.§ Application to counting algebraic integers
As a consequence of Theorem <ref>, we can deduce asymptotic counts of algebraic integers having few or no real Galois conjugates. Before stating this application, we must define how we count algebraic integers; that is, we must put a height function on them. Given an algebraic integer α, let α' be the unique ℤ-translate of α with trace lying in {0,…, n-1}, and let p(x) = x^n + ∑_i = 1^n p_ix^n-i∈ℤ[x] be the minimal polynomial of α'. Then we define the height of the equivalence class of α to be max_i ∈{1, …, n} |p_i|^1/i.
We say that a nonzero algebraic integer α is j-realizable if it has exactly j real Galois conjugates. For instance, an algebraic integer α of degree n is 0-realizable if it is totally complex and n-realizable if it is totally real. Then Theorem <ref> has the following immediate corollary giving an asymptotic formula for the density of j-realizable algebraic integers:
Let j ∈ℕ be fixed. When algebraic integers α of degree n ≡ j (mod 2) are ordered by height, the density of α that are j-realizable is n^-b+o(1).
Indeed, note that Corollary <ref> follows from Theorem <ref> by taking the distribution of the a_i to be the uniform distribution on [-√(3),√(3)], and taking k = 1, S = {1}, and c_1 = √(3). Note that the factor of √(3) arises because the a_i are required to have unit variance.
The problem of counting algebraic integers satisfying interesting conditions at the archimedean places has been studied before in the literature. Of particular relevance to the present paper is the work of Calegari and Huang <cit.> (cf. the closely related earlier work of Akiyama and Pethő <cit.>), who consider algebraic integers ordered by a somewhat different height function, namely, the absolute value of the largest Galois conjugate. With respect to this height, they determine precise asymptotics for densities of various types of algebraic integers, including the totally complex subfamily, for which they obtain a density ≍ n^-3/8. Proving this result amounts to studying monic real polynomials, all of whose roots have absolute value at most 1; note that such polynomials are special in that they are positive on if and only if they are positive on [-1,1]. On the other hand, by modifying the proof of Theorem <ref>, it is possible to show that a random monic polynomial is positive on [-1,1] with probability n^-b/2 + o(1). As observed in <cit.>, this raises the intriguing question of whether the densities n^-3/8 and n^-b/2 + o(1) agree — i.e., whether b is in fact equal to 3/4, a result that would seemingly concur with the aforementioned numerical simulations performed by DPSZ, which indicated that b ≈ 0.76.
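For illustration, the kind of numerical simulation alluded to above can be sketched for the monic family f_n as follows: for odd n, the event that f_n is everywhere positive coincides with f_n having no real zeros (its degree n-1 is even and its leading coefficient is positive), which can be tested from the roots of each sampled polynomial. This sketch is purely illustrative; the trial count and the tolerance used to classify roots as real are arbitrary choices, and it is not the simulation carried out by DPSZ.

import numpy as np

def estimate_P_n(n, trials=5000, tol=1e-7, seed=0):
    # Monte Carlo estimate of P_n = P(f_n > 0 on all of R) for odd n, where
    # f_n(x) = x^(n-1) + a_(n-2) x^(n-2) + ... + a_0 with i.i.d. standard normal a_i.
    # Since n-1 is even and the leading coefficient is positive, positivity on R
    # is equivalent to f_n having no real zeros.
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(trials):
        coeffs = np.concatenate(([1.0], rng.standard_normal(n - 1)))  # monic, degree n-1
        roots = np.roots(coeffs)
        if not np.any(np.abs(roots.imag) < tol):  # no (numerically) real root
            hits += 1
    return hits / trials

# If P_n = n^(-b+o(1)), then -log(P_n)/log(n) should stabilize near b (numerically, b ≈ 0.76).
for n in (9, 17, 33, 65):
    p = estimate_P_n(n)
    print(n, p, -np.log(p) / np.log(n) if p > 0 else float("nan"))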
Our main results have other applications of arithmetic interest. For instance, by combining this theorem with the main results of <cit.>, one obtains a tighter upper bound on the proportion of superelliptic equations having no integral solutions. Specifically, in <cit.>, the proportion of superelliptic equations having no integral solutions is bounded in terms of various local densities, one of which happens to be the density of monic polynomials of bounded height with a specified number of real roots. To control this density, the trivial bound of 1 is used; however, applying Theorem <ref> instead of the trivial bound would tighten the main results of <cit.> by a factor of o(1). This is in complete analogy with how the results of <cit.> were applied in <cit.> and in <cit.> to obtain bounds on the proportion of hyperelliptic curves having no rational points over odd-degree number fields. Here, the proportion of such “pointless” curves is bounded in terms of various local densities, one of which happens to be the density of not-necessarily-monic polynomials of bounded height with a specified number of real roots (hence the relevance of <cit.>).
§.§ Summary of related earlier work
For at least the past century, mathematicians have been interested in the number of real zeros, say N_n, of random degree n - 1 polynomials of the form
∑_i = 1^n a_n - i A_n - i x^n - i where the A_n - i are deterministic real numbers depending on n and i, and the a_n - i are i.i.d. with zero mean and unit variance, as above. The case of Kac polynomials, which arise from setting each A_n - i = 1 has been of particular interest. In 1932, Bloch and Polya considered a_i chosen uniformly at random from {-1, 0, 1} and showed 𝔼(N_n) = O(n^1/2)<cit.>.
Beginning in 1938, Littlewood and Offord followed this with a series of papers <cit.> studying N_n for three coefficient distributions: standard normal, uniform on [-1, 1], and Bernoulli on {-1, 1}. In each case, they showed that there exist constants A, B > 0 such that the following holds:
A log n/(loglog n)^2≤ N_n ≤ B (log n)^2
w.p. 1 - o(1) as n →∞.
In 1943, Kac found an exact formula for the density function of N_n for any coefficient distribution with zero mean and finite variance, which he used to obtain the estimate 𝔼(N_n) = (2/π + o(1)) log n in the case of the standard Gaussian distribution <cit.>. Kac stated that along with the central limit theorem, his work would give the same formula for 𝔼(N_n) for many other coefficient distributions.
Kac's work was the first suggestion of the universality phenomenon in the context of N_n. In probability theory, universality refers to the case when a limiting random object as n goes to infinity (such as the real number lim_n →∞𝔼(N_n)/log n) is insensitive to the particular distributions of i.i.d. atoms (such as coefficients a_i) from which the random object is obtained. The study of universality for the quantity N_n has attracted significant interest from probabilists and number theorists.
Indeed, it is not only the expectation lim_n →∞𝔼(N_n)/log n that exhibits universality.
In several papers <cit.> ending in Maslova's work in 1974, Ibragimov and Maslova further illuminated the distribution of N_n, culminating in a central limit theorem for N_n.
In 2002, DPSZ demonstrated with their study of ℙ(N_n = 0) that it was not just the limiting shape of the CDF in Maslova's work but also the extreme left tail of the CDF for N_n that exhibits a universal behavior, independent of the law of a_i. When some of the leading coefficients are fixed, our Theorem <ref> reveals a clean dichotomy: unless the choices of fixed coefficients and random coefficient distribution force the problem to be degenerate in a precise way, the limiting behavior of ℙ(N_n = 0) is universal with respect to the random coefficient distribution and the fixed coefficients.
DPSZ state that their interest in P_n^* arose from work of Poonen and Stoll in arithmetic geometry, which found the probability that a random hyperelliptic curve y^2 = f(x) of genus g over ℚ has odd Jacobian <cit.>. Poonen and Stoll reduced the problem to the computation of local probabilities, one for each completion of ℚ, and the archimedean completion required the computation of the probability that y^2 = f(x) has no real points, or, equivalently, that f(x) < 0 holds on all of ℝ. Importantly, this probability arose with the coefficients of f having a uniform distribution. Further, for the application of DPSZ's work in <cit.> and <cit.> and for the application of our main results to <cit.> and our Corollary 10, the relevant case is again that of uniformly distributed a_i. Meanwhile, the work of DPSZ shows that the probability P_n^* is easiest to find when the coefficient distribution is Gaussian. Indeed, for DPSZ, handling the uniform distribution is no easier than handling a general distribution with zero mean, unit variance, and finite moments, and we find the same in our work. Even for the problem of determining asymptotics of 𝔼(N_n), Kac found the Gaussian distribution most tractable and did not solve the uniform case until six years later <cit.>. Thus, when studying the real zeros of f_n, S(x), proving universality reduces the arithmetically interesting case of the uniform distribution to the case of the Gaussian distribution.
The question of how often a polynomial over ℝ has a specified number of zeros admits an interesting analogue over the nonarchimedean completions of ℚ. Specifically, for a prime number p, one can ask: what is the probability that a random polynomial (or random monic polynomial) with p-adic integer coefficients has exactly m roots in the fraction field ℚ_p? This and other related questions were studied in detail by Bhargava, Cremona, Fisher, and Gajović <cit.>, who proved that the desired probability is a rational function of n, m, and p. Intriguingly, they show that this rational function is invariant under p ↦ 1/p; this invariance phenomenon was subsequently demonstrated by G., Wei, and Yin to be a consequence of Poincaré duality for the zeta-functions of certain relevant varieties
<cit.>.
§.§ Organization
The rest of this paper is organized as follows. We start in <ref> by proving Theorem <ref>, and then in <ref>, we deduce Theorems <ref> and <ref> (and hence also Theorem <ref>) from Theorem <ref>. We finish in <ref> by proving Theorems <ref> and <ref> (and hence also Theorem <ref> and Corollary <ref>). Each of <ref>–<ref> is divided into two subsections, the first of which proves the relevant “lower bound”— i.e., that the desired probability is at least n^-b + o(1)— and the second of which proves the corresponding “upper bound”— i.e., that the desired probability is at most n^-b + o(1).
§ PROOF OF THEOREM <REF>— THE CASE OF GAUSSIAN COEFFICIENTS
In this section, we prove Theorem <ref>, which is a version of our main result where the random coefficients (a_i)_i are taken to be standard normal random variables. Where necessary, we highlight how the proof readily adapts to obtain Theorem <ref>.
In the rest of this section, we take k, S, (c_i)_i ∈ S, and (γ_n)_n to be as in the setting of Theorem <ref>. Following notation used in <cit.> for the Gaussian case, we denote the random degree-n polynomial with fixed coefficients as f_n,S^b(x) and the fully random degree-n polynomial as f_n^b,*(x), and we denote the corresponding normalized polynomials as f̂_n,S^b(x) and f̂_n^b,*(x), respectively. The proof makes crucial use of Slepian's lemma, which is a Gaussian comparison inequality and can be stated as follows (c.f. <cit.>):
[Slepian] Let X_t and Y_t be two centered Gaussian processes of equal variance on a subset T ⊂ℝ. (I.e., take X_t and Y_t such that 𝔼(X_t) = 𝔼(Y_t) = 0 and 𝔼(X_t^2) = 𝔼(Y_t^2) for all t ∈ T.) Suppose that X_t has everywhere greater covariance, so that for all s, t ∈ T, we have Cov(X_s, X_t) ≥Cov(Y_s, Y_t). Then, for any λ∈ℝ, we have ℙ(inf_t ∈ T X_t ≥λ) ≥ℙ(inf_t ∈ T Y_t ≥λ).
§.§ Lower bound
To obtain the lower bound, we split into cases according to the parity of k. In <ref>, we handle the case where k is even, and in <ref>, we explain how the argument of <ref> can be modified when k is odd.
§.§.§ The case where k is even
Let k be even. By moving the terms a_n-1x^n-1, …, a_n-kx^n-k to the right-hand side and renormalizing, one readily verifies that the condition f̂_n^b(x) > γ_n(x) is equivalent to the following condition:
f̂_n-k^b,*(x) > γ_n(x)√(1+∑_i ∈ S c_i^2x^2(n-i) + ∑_i ∈{1, …, k}∖ S x^2(n-i)/∑_i=1^n-k x^2(n-k-i)) -∑_i ∈ S c_ix^n-i + ∑_i ∈{1,…,k}∖ S a_n-ix^n-i/√(∑_i = 1^n-k x^2(n-k-i)).
For the purpose of obtaining a lower bound, we can restrict a_n-i to lie within the interval [1/2,1] for each i ∈{1, …, k}∖ S. After making this restriction, we can also strengthen the condition (<ref>) (i.e., make it less probable) by replacing the right-hand side with a larger quantity. To this end, for each i ∈{1,…,k}∖ S, let χ_i →{0,1/2,1} be the function defined as follows:
χ_i(x) = 1, if i is even and x < 0,
1/2, if i is odd,
0, otherwise.
Then define γ_n^*(x) by
γ_n^*(x) ≔γ_n(x)√(1+∑_i ∈ S c_i^2x^2(n-i) + ∑_i ∈{1, …, k}∖ S x^2(n-i)/∑_i=1^n-k x^2(n-k-i)) -∑_i ∈ S c_ix^n-i + ∑_i ∈{1,…,k}∖ S χ_i(x)x^n-i/√(∑_i = 1^n-k x^2(n-k-i)).
Then the condition (<ref>) is implied by the stronger condition f̂_n-k^b,*(x) > γ_n^*(x), and so we have that
ℙ(f̂_n,S^b(x) > γ_n(x), ∀ x ∈ℝ) ≥ℙ(f̂_n,S^b(x) > γ_n(x), ∀ x ∈ℝ; and a_n-i∈ [1/2,1], ∀ i ∈{1, …, k}∖ S)
≥ℙ(f̂_n-k^b,*(x) > γ_n^*(x), ∀ x ∈ℝ; and a_n-i∈ [1/2,1], ∀ i ∈{1, …, k}∖ S)
= ℙ(f̂_n-k^b,*(x) > γ_n^*(x), ∀ x ∈ℝ)×∏_i ∈{1,…, k}∖ Sℙ(a_n-i∈ [1/2,1]).
In (<ref>), the product over i is a nonzero constant, depending only on k and S. Note that this product is trivial in the setting of Theorem <ref>. Thus, it suffices to bound the first probability in (<ref>).
Our assumptions on γ_n(x) in the statement of Theorem <ref> imply that there exist integers τ_n such that logloglog n τ_n log n and such that sup{γ_n(x) : |x| ∈ [1 - ξ_n,(1-ξ_n)^-1]}→ 0, where ξ_n = e^-τ_n.[Here, we say that a a' if a = o(a') as n →∞.] In the setting of Theorem <ref>, this changes slightly: we take τ_n log k(n), instead of logloglog n. Since k is even, the covariance of f_n-k^b,*(x) is everywhere nonnegative,[Note that this is not true in the case where k is odd, which therefore necessitates separate treatment (see <ref>).] which allows us to apply Slepian's lemma. Doing so, we find that
ℙ(f̂_n-k^b,*(x) > γ_n^*(x), ∀ x ∈ℝ) ≥
ℙ(f̂_n-k^b,*(x) > γ_n^*(x), ∀ x ∈ [0,(1-ξ_n)^-1]) ×ℙ(f̂_n-k^b,*(x) > γ_n^*(x), ∀ x ∈ [-(1-ξ_n)^-1,0])×
ℙ(f̂_n-k^b,*(x) > γ_n^*(x), ∀ x ∈ [(1 - ξ_n)^-1,∞)) ×ℙ(f̂_n-k^b,*(x) > γ_n^*(x), ∀ x ∈ (-∞,-(1 - ξ_n)^-1]).
We start by handling the first two probabilities on the right-hand side of (<ref>). The idea is to verify that these probabilities are of the form studied in <cit.>. Indeed, it follows immediately from the results of <cit.> in conjunction with Slepian's lemma that each of the first two probabilities on the right-hand side of (<ref>) is at least n^-b/2+o(1), as long as the following two conditions are satisfied:
sup{γ_n^*(x) : x ∈ [-1+ξ_n,1-ξ_n], n ∈ℕ} < ∞, and
sup{|γ_n^*(x)| : |x| ∈ [1-ξ_n,(1-ξ_n)^-1]}→ 0
as n →∞. To verify the conditions (<ref>) and (<ref>), note that we have the following useful bound:
sup{|x^n-j|/√(∑_i = 1^n-k x^2(n-k-i)): |x| ∈ [0,(1-ξ_n)^-1]} = (1-ξ_n)^-(n-j)/√(∑_i = 1^n-k(1-ξ_n)^-2(n-i))≪√(ξ_n)/(1-ξ_n)^k-j,
where the implied constant is absolute. In the first step of (<ref>), we have used the fact that the quantity being maximized is increasing in |x|. Summing the bound (<ref>) over j ∈{1, …, k} using our assumption that log k τ_n, which implies that k 1/√(ξ_n) = e^τ_n/2 (note that this holds in both the settings of Theorems <ref> and <ref>), we find that
∑_j = 1^k √(ξ_n)/(1-ξ_n)^k-j≪√(ξ_n)×(1 - ξ_n)^-k-1/(1 - ξ_n)^-1-1≪√(ξ_n)×kξ_n/ξ_n≪ k√(ξ_n),
which tends to zero as n →∞. From this, we deduce a few consequences: first, the factor multiplied by γ_n(x) in (<ref>) is bounded for | x |≤ (1 - ξ_n)^-1; and second, the terms being subtracted on the right-hand side of (<ref>) converge to zero uniformly over |x| ≤ (1-ξ_n)^-1. Note that both of these conclusions continue to be true in the setting of Theorem <ref>. The conditions (<ref>) and (<ref>) now follow from our assumptions on γ_n(x).
We now bound the second two probabilities on the right-hand side of (<ref>). Since c_1 > 0 if 1 ∈ S and since χ_1(x) > 0 if 1 ∉S, and because γ_n(x) < 1 for all x ∈ and n ∈ℕ, there exists a constant C > 0, depending only on S and the values of the c_i, such that γ_n^*(x) < C for all x such that |x| ∈ [(1-ξ_n)^-1,∞). Note that C can be taken to be an absolute constant in the setting of Theorem <ref>. Thus, we have that
ℙ(f̂_n-k^b,*(x) > γ_n^*(x), ∀ x ∈ [(1-ξ_n)^-1,∞)) ≥ℙ(f̂_n-k^*(x) > C, ∀ x ∈ [(1-ξ_n)^-1,∞)),
ℙ(f̂_n-k^b,*(x) > γ_n^*(x), ∀ x ∈ (-∞,-(1-ξ_n)^-1]) ≥ℙ(f̂_n-k^*(x) > C, ∀ x ∈ (-∞,-(1-ξ_n)^-1]),
for all n ≫ 1. The probabilities on the right-hand sides of (<ref>) and (<ref>) are of the form studied in <cit.>, where they were each shown to be at least n^o(1). This completes the proof of the lower bound.
§.§.§ The case where k is odd
Now let k be odd. Here, in addition to moving the first k terms of the polynomial f_n^b(x) to the right-hand side, we move the (random) term a_n-k-1x^n-k-1 to the right-hand side as well. As before, we restrict a_n-k-1 to lie within the interval [1/2,1], and we replace a_n-k-1 with χ_k+1(x), and the rest of the argument proceeds exactly as in <ref>.
§.§ Upper bound
We now proceed with the proof of the upper bound. We start in <ref>, where we work out the case in which the fixed coefficients c_i are all zero, and we finish in <ref> by explaining how to deduce the result for general c_i from the case where they are all zero.
§.§.§ The case where c_i = 0 for all i ∈ S
In this section, we assume that c_i = 0 for all i ∈ S. In particular, we may assume without loss of generality that 1 ∉S (recall, on the other hand, that k ∈ S by assumption), and in particular, we are not in the setting of Theorem <ref>, and so we may keep k fixed.
Notation. We first set some notation. To bound the probability (f̂_n,S^b(x) > γ_n(x)), it is useful to obtain bounds on the covariance c_n of f̂_n,S^b, which is given explicitly as follows:
c_n(x, y) = 𝔼(f_n,S^b(x) f_n,S^b(y))/√(𝔼(f_n,S^b(x)^2) 𝔼(f_n,S^b(y)^2))
= (xy)^n-k-1r(xy) + ∑_i = 1^n-k (xy)^n-k-i/√((x^2(n-k-1)r(x^2) + ∑_i = 1^n-k x^2(n-k-i))(y^2(n-k-1)r(y^2) + ∑_i = 1^n-k y^2(n-k-i))),
where we have set r(z) ≔∑_i ∈{1, … , k}∖ S z^k + 1 - i.
We will also have occasion to use the function g defined by
g(x, y) = | xy - 1|/√(| (x^2-1)(y^2-1) |).
Lastly, for the purpose of proving upper bounds, it suffices to restrict our attention to small subintervals of ℝ. To this end, fix δ∈ (0,1/2), and define the four intervals I_1 ≔ [1 - n^-δ,1 - n^-(1-δ)], I_2 ≔I_1^-1, I_3 ≔ -I_2, and I_4 ≔ -I_1. Set
V ≔⋃_i=1^4 I_i and U ≔⋃_i = 1^4 I_i^2 ⊂ V^2.
The proof. Let n be odd or even. Let α_n ∈ [0,n^-δ/2], to be chosen later, and let χ_U : ℝ^2 →{0,1} denote the indicator function of the subset U ⊂ℝ^2. In <cit.>, the upper bound ℙ(f̂_n^b,*(x) > γ_n(x)) ≤ n^-b+o(1) is obtained in the case of Gaussian coefficients by constructing an auxiliary Gaussian process f_n(x) with covariance given by
(1 - α_n)/g(x,y)χ_U(x,y) + α_n.
Let c_n^*(x,y) denote the covariance of f̂_n^*(x) ≔ f_n^*(x)/√(𝔼(f_n^*(x)^2)). It is shown in <cit.> that, for a suitable choice of α_n, the covariance (<ref>) is an upper bound on c_n^*(x,y) for all x,y ∈ V, and that this upper bound specializes to an equality when x = y. Thus, Slepian's lemma implies that to bound ℙ(f̂_n^b,*(x) > γ_n(x)) from above, the process f̂_n^b,*(x) can effectively be replaced with the process f_n(x), which turns out to be easier to work with.
An identical argument applies to prove that ℙ(f̂_n,S^b(x) > γ_n(x)) ≤ n^-b+o(1), so long as we can show that the covariance c_n(x,y) is bounded above by (<ref>), with equality when x = y. We prove this as follows:
For x,y ∈ V, and for all n ≫ 1, there exists α_n ∈ [0, n^-δ/2] such that
c_n(x,y) ≤(1 - α_n)/g(x,y)χ_U(x,y) + α_n,
with equality when x = y.
First, assume (x,y) ∈ U. That we have equality when x = y is obvious — indeed, c_n(x,x) = g(x,x) = 1. Consequently, we may assume that x ≠ y (in particular, we can divide by x -y, as we do multiple times in what follows). Now, by symmetry, it in fact suffices to take x,y ∈I_1 or x,y ∈I_2. We handle each of these cases separately as follows:
Case 1: x,y ∈I_1.
To start, note that the product g(x,y)c_n(x,y) may be rewritten as follows, writing r(z) for (z - 1) r(z):
g(x,y)c_n(x,y) =
|xy-1|/√(|(x^2-1)(y^2-1)|)×(xy)^n-k-1r(xy) + ∑_i = 1^n-k (xy)^n-k-i/√((x^2(n-k-1)r(x^2) + ∑_i = 1^n-k x^2(n-k-i))(y^2(n-k-1)r(y^2) + ∑_i = 1^n-k y^2(n-k-i))) =
sign(xy-1) ×r(xy) + 1 - (xy)^k-n/√((r(x^2) + 1 - x^2(k-n))(r(y^2) + 1 - y^2(k-n))).
Rearranging (<ref>) using (<ref>), and observing that 0 ≤ g(x,y)≪ 1 and g(x,y)c_n(x,y) = 1 + o(1) for x,y ∈I_1 we see that it suffices to prove the following bound:
n^-δ ≫(g(x,y)c_n(x,y))^2 - 1/g(x,y)^2-1 = (1-x^2)(1-y^2) [r(xy)^2-r(x^2)r(y^2)/(x-y)^2 + .
. -r(x^2)x^2 + 2r(xy)xy - r(y^2)y^2/(x-y)^2 + -r(y^2)x^-2n + 2r(xy)x^-ny^-n - r(x^2)y^-2n/(x-y)^2 + (x^-n-y^-n/x-y)^2 ]
The first and second terms in the square brackets on the right-hand side of (<ref>) are easily seen to be ≪ 1 (where the implied constant depends on k = r). The third and fourth terms are each o(1) as n →∞, and the external factor of (1-x^2)(1-y^2) is ≪ n^-2δ.
Case 2: x,y ∈I_2. Here, we prove the stronger result that
g(x,y)c_n(x,y) ≤ 1 (i.e., for this case, we can take α_n = 0). Applying (<ref>), squaring both sides of this stronger inequality, cross-multiplying, and moving the terms to the left-hand side, we see that it suffices to prove that
(r(xy)+1)^2 - (r(x^2)+1)(r(y^2)+1) ≤ 0.
Denote the left-hand side of (<ref>) by L(x,y), and for any integer m ≥ k = r, let L_m(x,y) denote the same expression with r(z) replaced by r(z) + z^m. Then it suffices to prove that L_m(x,y) L_m(x,y) - L(x,y) ≤ 0. We compute that
L_m(x,y)/(x-y)^2 = x^2my^2m + (x^m-y^m/x-y)^2-(x^m+1-y^m+1/x-y)^2 +
2(xy-1/x-y)^2r(xy)x^my^m -(x^2-1)(y^2-1)/(x-y)^2(r(y^2)x^2m+r(x^2)y^2m).
We start by estimating the first line of the right-hand side of (<ref>). For x,y ∈I_2, the first term is 1 + o(1), the second is m^2 + o(1), and the third is -(m+1)^2 + o(1), making for a total of -2m + o(1). As for the second line of (<ref>), assume that r(z) = z^e, where e ∈{1, …, k}. Under this assumption, the second line of (<ref>) may be conveniently reexpressed as follows:
x^{2k}y^{2k}[((x^{m-k+1}-y^{m-k+1})/(x-y))^2 + x^2y^2((x^{m-k-1}-y^{m-k-1})/(x-y))^2-(x^2y^2+1)((x^{m-k}-y^{m-k})/(x-y))^2]
For x,y ∈I_2, in the quantity (<ref>), the first term is (m-k+1)^2 + o(1), the second is (m-k-1)^2 + o(1), and the third is -2(m-k)^2 + o(1), making for a total of 2 + o(1). Thus, when r is a monomial, the second line of (<ref>) contributes 2 + o(1). Since this second line is linear in r, it follows that for any r, we get a contribution of at most 2k + o(1) ≤ 2m-2 + o(1). We conclude that L̃_m(x,y) ≤ (x-y)^2(-2 + o(1)) for all x,y ∈I_2, so taking n ≫ k to be sufficiently large yields the desired inequality.
Finally, consider the case where x,y ∈ V ∖ U. Without loss of generality, we may assume that |x| ∈I_1 and |y| ∈I_2. Then y^-n≪ e^-n^1-δ and x^-n≫ e^n^δ, from which we can apply (<ref>) to deduce that |g(x,y)c_n(x,y)| ≪ e^-n^δ. Since |g(x,y)| ≪ n^1-δ, the desired bound follows.
The upper bound (f̂_n,S^b(x) > γ_n(x)) ≤ n^-b+o(1) is then deduced from Lemma <ref> exactly as the upper bound (f̂_n^b,*(x) > γ_n(x)) ≤ n^-b+o(1) is deduced from <cit.>; we omit the details for the sake of brevity.
§.§.§ The case of general c_i
We now take c_i ∈ℝ for each i ∈ S in such a way that c_1 > 0 if 1 ∈ S. Let f_n,0(x) ≔ f_n,S^b(x) - ∑_i ∈ Sc_ix^n-i, and as usual, let f̂_n,0(x) ≔ f_n,0(x)/√(𝔼(f_n,0(x)^2)). Define γ_n^**(x) by
γ_n^**(x) ≔γ_n(x)√(1+(∑_i ∈ S c_i^2x^{2(n-i)} + ∑_i ∈{1, …, k}∖ S x^{2(n-i)})/∑_{i=1}^{n-k} x^{2(n-k-i)}) -(∑_i ∈ S c_ix^{n-i})/√(∑_{i = 1}^{n-k}x^{2(n-k-i)}).
(Note the distinction between γ_n^*(x), defined in (<ref>), and γ_n^**(x).)
Evidently the condition f̂_n,S^b(x) > γ_n(x) is equivalent to the condition f̂_n,0(x) > γ_n^**(x), so we have that
(f̂_n,S^b(x) > γ_n(x), ∀ x ∈) = (f̂_n,0(x) > γ_n^**(x), ∀ x ∈) ≤(f̂_n,0(x) > γ_n^**(x), ∀ x ∈ V).
Now, it is shown in <ref> that
(f̂_n,0(x) > γ_n^**(x), ∀ x ∈ V) ≤ n^-b+o(1),
as long as |γ_n^**(x)| → 0 uniformly for x ∈ V. To verify that this condition holds, we imitate the bound in (<ref>) to find that
sup{|x^n-j|/√(∑_i = 1^n-k x^2(n-k-i)): x ∈ V} = (1-n^-δ)^-(n-j)/√(∑_i = 1^n-k(1-n^-δ)^-2(n-i))≪n^-δ/2/(1-n^-δ)^k-j.
Summing the bound (<ref>) over j ∈{1,…,k} just as in (<ref>), we obtain a bound of kn^-δ/2, which converges to zero as n →∞. Note that this convergence holds even in the setting of Theorem <ref>, where k = n^o(1).
This completes the proof of Theorem <ref>, as well as the proof of Theorem <ref> in the case where the coefficient distributions are Gaussians.
§ PROOFS OF THEOREMS <REF> AND <REF>— THE CASE OF GENERAL COEFFICIENTS
In <cit.>, the authors use the Komlós-Major-Tusnády (henceforth, KMT) strong approximation theorem (see, e.g., <cit.>) to extend their asymptotic formula for P_n^* in the case of Gaussian coefficients to the case where the coefficient distribution is arbitrary (having zero mean, unit variance, and finite moments of all orders). The rough idea of their argument is to partition the random polynomial into several chunks, and to analyze the contributions of each chunk separately.
The purpose of this section is to deduce Theorems <ref> and <ref> (and hence also Theorem <ref>) from Theorem <ref> by means of an analogous method. The key difference is that one of the chunks of the polynomial contains all of the terms with the fixed coefficients. It is precisely to control the behavior of this chunk that we impose the “niceness” condition introduced in Definition <ref>. The following simple example demonstrates the necessity of the niceness condition:
Let the law of the a_i be the uniform distribution from [-√(3),√(3)], let k = 2, let S = {1,2}, and let c_1 = 1 and c_2 = -2024. Then, no matter what values are taken by the a_i, and for every n ≥ 2, the polynomial f_n,S(x) takes a negative value at x = 2. Indeed, we have in this case that
f_n,S(2) ≤ 2^n-1 - 2024 × 2^n-2 + √(3)×(2^n-2-1) ≤ 2^n-2×(2 - 2024 + √(3)) < 0.
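This inequality is easy to check numerically; the following short script (with sample sizes n of our own choosing) evaluates both the worst-case bound and a few random draws.

```python
import numpy as np

# Check of the example: with c_1 = 1, c_2 = -2024 and the remaining n-2
# coefficients drawn from [-sqrt(3), sqrt(3)], f_{n,S}(2) is always negative.
rng = np.random.default_rng(1)
sqrt3 = np.sqrt(3.0)
for n in (2, 5, 10, 20):
    worst = 2.0 ** (n - 1) - 2024.0 * 2.0 ** (n - 2) + sqrt3 * (2.0 ** (n - 2) - 1)
    draws = []
    for _ in range(5):
        a = rng.uniform(-sqrt3, sqrt3, size=n - 2)        # degrees 0, ..., n-3
        coeffs = np.concatenate([a, [-2024.0, 1.0]])      # ascending degrees
        draws.append(np.polyval(coeffs[::-1], 2.0))       # evaluate at x = 2
    print(n, "worst-case bound:", worst, " largest sampled value:", max(draws))
```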
§.§ Approximation of coefficients by Gaussians via KMT
We start by applying the KMT strong approximation theorem. In doing so, we shall make use of the following observation: for every strictly increasing sequence of nonnegative even integers 0 = k_0 < k_1 < … < k_ℓ and every x ∈ [-1,1], the sequence
{x^k_j - x^k_j+1 : j∈{0,…,ℓ-1}}∪{x^k_ℓ}
forms a probability distribution (i.e., each term is nonnegative, and the sum of the terms is 1). Thus, for any s_0,…,s_k ∈, we have that
|s_0 + ∑_j=1^ℓ (s_j-s_j-1) x^k_j|
= |s_ℓ x^k_ℓ + ∑_j=0^ℓ-1 s_j (x^k_j - x^k_j+1) |
≤max_0 ≤ j ≤ℓ|s_k_j| .
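Both facts used here — that these weights form a probability distribution and the resulting Abel-summation bound — can be confirmed numerically; the following sketch (with an arbitrary choice of exponents and of the s_j) does so.

```python
import numpy as np

# Check: for even 0 = k_0 < ... < k_l and |x| <= 1, the weights
# x^{k_j} - x^{k_{j+1}} (j < l) together with x^{k_l} are nonnegative and sum
# to 1, and Abel summation gives |s_0 + sum_j (s_j - s_{j-1}) x^{k_j}| <= max_j |s_j|.
rng = np.random.default_rng(2)
ks = np.array([0, 2, 6, 10, 18])
for x in (-0.9, -0.3, 0.0, 0.5, 0.99):
    w = np.append(x ** ks[:-1] - x ** ks[1:], x ** ks[-1])
    assert np.all(w >= -1e-12) and abs(w.sum() - 1.0) < 1e-12
    s = rng.normal(size=len(ks))
    lhs = abs(s[0] + np.sum(np.diff(s) * x ** ks[1:]))
    assert lhs <= np.max(np.abs(s)) + 1e-12
print("weights sum to 1 and the Abel-summation bound holds at all test points")
```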
Now, let k ≥ 1, let m ≫ k, and choose any subset S ⊂{1,…,k} that is either empty or contains k. Because 𝔼(a_i)=0 and 𝔼(a_i^2)=1 for each i ∈ℕ, a double application of the KMT strong approximation theorem yields the following result: we can redefine the random variables {a_i : i ∈{0, …, m-1} and m-i ∉S} on a new probability space
with a corresponding sequence of
independent standard normal random variables {b_i : i∈{0,…,m-1} and m-i ∉S}
such that for any p≥ 2, some χ_p > 0 that depends only on p, and all t>0, we have that
ℙ(max_0 ≤ j ≤⌊ (n-1)/2 ⌋|∑_i ∈{0,…,j}, n-2i ∉S(a_2i - b_2i)| ≥ t) +
ℙ(max_0 ≤ j ≤⌊ (n-3)/2 ⌋|∑_i ∈{0,…,j}, n-(2i+1) ∉S(a_2i+1 -b_2i+1)| ≥ t) ≤χ_p n 𝔼|a_0|^p t^-p .
Note that we can express the right-hand side of (<ref>) solely in terms of a_0 because the coefficients a_i are identically distributed. By the triangle inequality, we have
|∑_i ∈{0,…, m-1}
m-i ∉S (a_i - b_i) x^i| ≤|∑_i ∈{0,…, ⌊m-1/2⌋}
m-2i ∉S (a_2i-b_2i) x^2i|
+|∑_i ∈{0,…,⌊m-2/2⌋}
m-(2i+1) ∉S (a_2i+1- b_2i+1)
x^2i+1|
so applying (<ref>) twice — first taking s_j=∑_i ∈{0,…,j}
m-2i ∉S (a_2i-b_2i), and then taking
s_j=∑_i ∈{0,…, j}
m-(2i+1) ∉S(a_2i+1-b_2i+1)— and combining the result with (<ref>) and (<ref>) yields that for all m ≤ n, we have
ℙ(sup_|x| ≤ 1|f_m,S(x)-f_m,S^b(x)| ≥ 2 t ) ≤χ_p n 𝔼|a_0|^p t^-p,
where f_m,S^b is defined by
f_m,S^b(x) ≔∑_i ∈ S c_i x^m-i + ∑_i ∈{1, …, m}∖ S b_m-ix^m-i.
Notice that (<ref>) only covers the region where |x| ≤ 1. To handle the region |x| ≥ 1, we invert x and consider the random polynomial in reverse. Indeed, define
g_m,S(x) ≔ x^m-1 f_m,S(x^-1) = ∑_i∈ S c_i x^i-1 + ∑_i ∈{1, …, m}∖ S a_m-i x^i-1 ,
and similarly define g_m,S^b(x) x^m-1 f_m,S^b(x^-1). By an analogous argument, we also obtain for all m ≤ n the bound
ℙ(sup_|x| ≤ 1|g_m,S(x)-g_m,S^b(x)| ≥ 4t) ≤χ_p n 𝔼|a_0|^p t^-p.
Indeed, bounding |g_m,S-g_m,S^b| amounts to replacing (a_i,b_i) with
(a_m-1-i,b_m-1-i)— which leads us to take s_j = ∑_i ∈{0,…, j}
m-2i ∉S (a_m-1-2i-b_m-1-2i) first, and to take next s_j = ∑_i ∈{0,…, j}
m-(2i+1) ∉S (a_m-1-(2i+1)-b_m-1-(2i+1)). Note that the approximation error doubles from 2t to 4t because the partial sums s_j are now computed in reverse; for instance, we have
max_0 ≤ j ≤⌊m-1/2⌋|∑_ i ∈{0,…,j}
2i+1 ∉S(a_m-1-2i - b_m-1-2i)| ≥ 2t ⟹ max_0 ≤ j ≤⌊m-1/2⌋|∑_ i ∈{0,…,j}
m-2i ∉S(a_2i - b_2i) | ≥ t.
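Before moving on, the reversal relation used above can be confirmed with a quick numerical check; the values of m, S, and the c_i below are toy choices of ours.

```python
import numpy as np

# Check the reversal identity g_{m,S}(x) = x^{m-1} f_{m,S}(1/x): the
# coefficient vector of g is that of f read in reverse order.
rng = np.random.default_rng(3)
m, S, c = 8, {1, 2}, {1: 1.0, 2: -0.7}                  # toy parameters
coef_f = rng.normal(size=m)                              # coefficient of x^d is coef_f[d]
for i in S:
    coef_f[m - i] = c[i]                                 # c_i sits at degree m - i
coef_g = coef_f[::-1]
for x in (0.3, -0.8, 1.5):
    lhs = np.polyval(coef_g[::-1], x)                    # g_{m,S}(x)
    rhs = x ** (m - 1) * np.polyval(coef_f[::-1], 1.0 / x)
    assert abs(lhs - rhs) < 1e-9
print("g_{m,S}(x) = x^(m-1) f_{m,S}(1/x) verified at sample points")
```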
§.§ Notation and setup
In this section, we introduce the notation necessary for us to partition the random polynomial into pieces and complete the proof of Theorem <ref>. Let n, k, S be as in the setting of the theorem, and for m ≫ k, define the following quantities
σ_m,S(x) ≔√(𝔼(f_m,S(x)^2)) and σ̃_m,S(x) ≔√(𝔼(g_m,S(x)^2)),
and following (<ref>), define the normalized random polynomials
f̂_m,S(x) ≔ f_m,S(x)/σ_m,S(x) and f̂^b_m,S(x) ≔ f^b_m,S(x)/σ_m,S(x);
ĝ_m,S(x) ≔ g_m,S(x)/σ̃_m,S(x) and ĝ^b_m,S(x) ≔ g^b_m,S(x)/σ̃_m,S(x).
Following the convention in <ref>, we write f_m ≔ f_m,∅, g_m ≔ g_m,∅, and σ_m ≔σ_m,∅ = σ̃_m,∅.
Just as in <cit.>, we introduce the following list of n-dependent quantities and functions:
* p_n: p_n↑∞, chosen so that χ_p_n𝔼|a_0|^p_n≤ n;
* ϵ_n: ϵ_n↓ 0, ϵ_n≥max{20/p_n,(log n)^-1/2}, with ϵ_n chosen so that 2n^{3ϵ_n}=2^j for some integer j;
* m_n: m_n→∞, m_n=2n^{3ϵ_n};
* γ̅_n: γ̅_n(x)=max{0,γ_n(x),γ_n(x^-1)};
* σ̅_n: σ̅_n(x)=max{σ_n,S(x),σ̃_n,S(x)};
* ρ_n: ρ_n→ 0, ρ_n = sup_|x| ≤ 1-m_n^-1σ̅_n(x) γ̅_n(x), with ρ_n≤ c n^-δ/2;
* r_n: r_n ≔ c n^-δ/2;
* ξ_n: ξ_n(x)=6x^m_nσ_n-2m_n(x)γ̅_n(x).
Partition the interval [-1,1] into
I ≔{ x : |x| ≥ 1-0.5 n^-ϵ_n} and I^c ≔ [-1,1] ∖I. Then the constant 6 in the definition of the function ξ_n(x) in (<ref>) is chosen to ensure that, for all n ≫ 1, we have ξ_n(x) ≥σ̅_n(x) γ̅_n(x) for all x ∈I such that |x| ≥ 1 - m_n^-1. Since r_n ≥σ̅_n(x)γ̅_n(x) for all x such that |x| ≤ 1 - m_n^-1, it follows that 2r_n + ξ_n(x) ≥σ̅_n(x) γ̅_n(x) for all n ≫ 1 and x ∈I.
Next, let
f_n,S=f_n,S^L+f_n,S^M+ f_n,S^H, where
f_n,S^L(x) ≔∑_i=0^m_n-1 a_i x^i,
f_n,S^M(x) ≔∑_i=m_n^n-(m_n+1) a_i x^i,
f_n,S^H(x) ≔∑_i ∈ S c_ix^n-i + ∑_i ∈{1, …, m_n}∖ S a_n-i x^n-i.
Similarly, we let g_n,S=g_n,S^L+g_n,S^M+ g_n,S^H, where for each ∙∈{L,M,H} we define g_n,S^∙(x) ≔ x^{n-1}f_n,S^∙(x^-1).
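The following toy illustration (sizes and coefficients are ours) shows the three chunks reassembling f_{n,S}.

```python
import numpy as np

# Toy illustration of the splitting f_{n,S} = f^L + f^M + f^H: the low chunk
# holds degrees 0..m_n-1, the middle chunk degrees m_n..n-m_n-1, and the high
# chunk the top m_n degrees, where the fixed coefficients c_i live.
rng = np.random.default_rng(4)
n, m_n, S, c = 40, 6, {1, 3}, {1: 2.0, 3: -1.0}
coef = rng.normal(size=n)                                # coefficient of x^d is coef[d]
for i in S:
    coef[n - i] = c[i]

deg = np.arange(n)
low  = np.where(deg < m_n, coef, 0.0)
mid  = np.where((deg >= m_n) & (deg <= n - m_n - 1), coef, 0.0)
high = np.where(deg >= n - m_n, coef, 0.0)
assert np.allclose(low + mid + high, coef)

x = 0.9
ev = lambda v: np.polyval(v[::-1], x)
assert abs(ev(coef) - (ev(low) + ev(mid) + ev(high))) < 1e-12
print("f_{n,S} = f^L + f^M + f^H verified at x =", x)
```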
§.§ Lower bound
We first handle the lower bound in Theorem <ref>, leaving the upper bound to <ref>. With notation as in <ref>, we have the following chain of implications for all n ≫ 1:
f̂_n,S(x) > γ_n(x) , ∀ x ∈ℝ ⟸ f̂_n,S(x) >γ_n(x) , ĝ_n,S(x) > γ_n(x), ∀ x ∈ [-1,1]
⟸ f_n,S^M(x) > ξ_n(x), g_n,S^M(x) > ξ_n(x) : ∀ x ∈I, and
f_n,S^M(x) >-r_n, g_n,S^M(x) >-r_n : ∀ x ∈I^c, and
f_n,S^L(x) >3r_n, g_n,S^L(x) ≥ -r_n : ∀ x ∈ [-1,1], and
f_n,S^H(x) ≥ -r_n, g_n,S^H(x) > 3r_n : ∀ x ∈ [-1,1].
Since the polynomial pairs (f_n,S^L,g_n,S^L), (f_n,S^M,g_n,S^M) and
(f_n,S^H,g_n,S^H) are mutually independent,
P_n,S,γ_n = ℙ( f̂_n,S(x) >γ_n(x), ∀ x ∈ℝ) ≥
ℙ({f_n,S^M (x) >ξ_n(x), g_n,S^M(x) >ξ_n(x), ∀ x ∈I}∩{f_n,S^M(x) >-r_n, g_n,S^M(x) > -r_n, ∀ x ∈I^c}) ×
ℙ( f_n,S^L(x) >3r_n, g_n,S^L(x) ≥ -r_n , ∀ x ∈ [-1,1] ) ×ℙ( f_n,S^H(x) ≥ -r_n, g_n,S^H(x) > 3r_n , ∀ x ∈ [-1,1] ).
Call the three factors in (<ref>)Q_1, Q_2, and Q_3, respectively. The factor Q_1 was estimated in <cit.> using the strong approximation results (<ref>) and (<ref>) in the case S = ∅, and a lower bound of
Q_1 ≥ n^-b+o(1)
was obtained. The factor Q_2 was also estimated in <cit.>, where a lower bound of
Q_2 ≥ n^o(1)
was obtained. (To be clear, in loc. cit., the probabilities Q_i are expressed as Q_1 = Q_1 - 2Q_2 and Q_2 = Q_3 - Q_4, where each Q_i is a certain probability, and they bound the Q_i by controlling each of the Q_i separately.)
As for bounding Q_3, define the following probabilities:
Q_3' ≔ℙ(g_m_n,S(x) > 3r_n, ∀ x ∈ [-1,1]; and x^n-m_nf_m_n,S(x) ≥ -r_n, ∀ x ∈±[1-m_n^-1,1]),
Q_4' ≔ℙ( x^n-m_n f_m_n,S(x) ≤ -r_n, for some x such that |x| ≤ 1 - m_n^-1).
Then Q_3 ≥Q_3' - Q_4', so it suffices to determine lower and upper bounds on Q_3' and Q_4', respectively. To bound Q_4', recall that m_n=2 n^3ϵ_n and ϵ_n → 0. It then follows that
for all n ≫ 1, we have
Q_4' ≤ℙ( sup_|x| ≤ 1 - m_n^-1 |x|^n-m_n|f_m_n(x)| ≥ r_n )
≤ℙ( sup_|x| ≤ 1 - m_n^-1|f_m_n(x)|≥ e^√(n)).
To proceed, we make use of the following lemma (this is <cit.>; see <cit.> for a proof).
Let {T_x : x ∈ [a, b]} be an almost surely continuous
stochastic process with T_a=0.
Assume that 𝔼|T_x-T_y|^2 ≤ K (x-y)^2 for all x, y ∈ [a,b].
Then, we have that
𝔼( sup_x ∈ [a,b] T_x^2 ) ≤ 4 K (b-a)^2.
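As a sanity check, the lemma can be tried out numerically on a centered random polynomial; the parameters below are illustrative, and the bound is far from tight in this example.

```python
import numpy as np

# Monte Carlo check of the lemma for T_x = f_m(x) - f_m(0) with i.i.d. standard
# normal coefficients: on [0, b] with b <= 1 one may take K = sum_{i<m} i^2,
# since E|T_x - T_y|^2 = sum_i (x^i - y^i)^2 <= K (x - y)^2 there.
rng = np.random.default_rng(5)
m, a, b = 50, 0.0, 0.9
K = sum(i ** 2 for i in range(1, m))
xs = np.linspace(a, b, 400)
X = xs[:, None] ** np.arange(1, m)[None, :]

sup_sq = [np.max((X @ rng.standard_normal(m - 1)) ** 2) for _ in range(2000)]
print("empirical E[sup T_x^2] ~", np.mean(sup_sq), " <=  4K(b-a)^2 =", 4 * K * (b - a) ** 2)
```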
Take T_x=f_m_n,S(x)-f_m_n,S(0) = f_m_n,S(x) - a_0. Then, just as in <cit.>, we have the bound
𝔼|T_x-T_y|^2 = 𝔼|f_m_n,S(x) - f_m_n,S(y)|^2 ≤ n^3(x-y)^2.
Using (<ref>) and applying Lemma <ref> two times, once with a=0, b = 1 - m_n^-1 and once with a = -1 + m_n^-1, b = 0, we find that
𝔼( sup_|x| ≤ 1 - m_n^-1 T_x^2 ) ≤ 8 n^3 (1 - m_n^-1)^2.
Upon combining (<ref>) with (<ref>) and applying Markov's inequality multiple times, it follows that for all n ≫ 1 we have
Q_4' ≤ℙ(|a_0| ≥ e^√(n)) + ℙ(sup_|x| ≤ 1 - m_n^-1 T_x^2 ≥ e^{2√(n)}) ≪ e^-√(n)
+ 8n^3 (1 - m_n^-1)^2e^{-2√(n)}≪ e^-n^1/3.
We now turn our attention to bounding Q_3'. For this, we prove the next lemma, from which one readily deduces that
Q_3' ≫ n^-O(ϵ_n),
where the implied constants are fixed.
There exists c<∞ such that for all m=2^{κ+1}, where κ∈ℕ is sufficiently large, we have
ℙ( g_m,S(x) > m^-2, ∀ x ∈ [-1,1]; and x f_m,S(x) ≥ 0, ∀ x ∈± [1-2^-κ,1] ) ≥ m^-c.
The statement and proof of Lemma <ref> draw heavy inspiration from <cit.>, except that the roles of f and g are reversed. This distinction matters not in loc. cit. but is of particular importance in the present article, where we use the niceness condition to prove the lemma.
Define the intervals
J_j ≔{ x : 1-2^-j≤ |x| ≤ 1-2^-(j+1)} for j ∈{1,…,κ-1}
and J_κ≔{ x : 1-2^-κ≤ |x| ≤ 1 }. Let 1 ≪ s ≤κ, and further define J_0 ≔{ x : |x| ≤ 1-2^-s}. We will decompose f_m,S and g_m,S into sums over
κ-s+2
terms involving polynomials f^j and g^j of smaller degree, where j ∈{0,s,s+1,…,κ}.
Specifically, we
write
f_m,S(x) = x^{m-2^s} f^0 (x) + ∑_j = s^κ x^{m-2^{j+1}} f^j(x) and g_m,S(x) = g^0 (x) + ∑_j=s^κ x^{2^j} g^j(x),
where the f^j and g^j are completely determined by the conditions deg f^0 = 2^s-1 = deg g^0 and deg f^j = 2^j - 1 = deg g^j for each j ∈{s, …, κ}.
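This decomposition can be verified directly; the sketch below (with toy values of s and κ chosen by us) reassembles f_{m,S} from the blocks f^j.

```python
import numpy as np

# Check of the dyadic decomposition with m = 2^(kappa+1): the top 2^s
# coefficients form f^0 and, for j = s..kappa, the 2^j coefficients of degrees
# m-2^{j+1}, ..., m-2^j-1 form f^j; together they reassemble f_{m,S}.
rng = np.random.default_rng(6)
s, kappa = 2, 5
m = 2 ** (kappa + 1)
coef = rng.normal(size=m)                          # coefficient of x^d is coef[d]

ev = lambda block, x: np.polyval(block[::-1], x)   # block given in ascending degrees
for x in (0.7, -0.95, 1.1):
    total = x ** (m - 2 ** s) * ev(coef[m - 2 ** s:], x)                  # f^0 piece
    for j in range(s, kappa + 1):
        total += x ** (m - 2 ** (j + 1)) * ev(coef[m - 2 ** (j + 1): m - 2 ** j], x)
    assert abs(total - ev(coef, x)) < 1e-8 * max(1.0, abs(ev(coef, x)))
print("f_{m,S}(x) = x^(m-2^s) f^0(x) + sum_j x^(m-2^(j+1)) f^j(x) verified")
```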
For some sufficiently large Γ_0 > 0 that does not depend on κ, we have
2^j/2 (Γ_0-1) x^2^j -
∑_i=s
i ≠ j^κ 2^i/2 x^2^i≥ 0 , ∀ x ∈J_j, j ∈{s,…,κ}.
Moreover, since 𝔼(a_0)=0 and 𝔼(a_0^2)=1, there exist real numbers α > β > 0 such that ℙ(|a_0-α| ≤β')>0 for every β' ∈ (0,β). Fixing such an α, let M ∈ℕ be an integer to be chosen later, and take s ≫ 1 to be sufficiently large, so that
∑_i=s^∞ 2^i/2 x^2^i≤α/2M , ∀ x ∈J_0.
Note that such an s always exists because, for every x ∈J_0, the sum on the left-hand side of (<ref>) converges.
The sets J_0,J_s,J_s+1,
…,J_κ form a partition of the interval [-1,1]. One checks that for κ large enough,
we have
m^-2≤min{α/2M,inf_x ∈J_j,
j ∈{s,…,κ} 2^j/2 x^2^j}
Combining (<ref>) with (<ref>), we deduce that
{ g_m,S(x) > m^-2, ∀ x ∈ [-1,1] } ⊃⋂_j=s^κ{ g^j(x) > 2^j/2Γ_0 , ∀ x ∈J_j; and g^j(x) ≥ -2^j/2, ∀ x ∈J^c_j }
∩{ g^0(x) > α/M, ∀ x ∈J_0; and
g^0(x) ≥ 0, ∀ x ∈J^c_0 }
Note that for all x ∈ [-1,1], we have
{ x f_m,S(x) ≥ 0 }⊃{ x f^0(x) ≥ -2^κ/2}∩⋂_j=s^κ{ x f^j(x) ≥ 2^j/2Γ_0 }.
The polynomial pairs (f^j,g^j) for j ∈{0,s,…,κ} are mutually
independent of each other, with the pair (f^j,g^j) obeying
the same law as that of (f_2^j,g_2^j) for each j ≠ 0. Consequently, combining (<ref>) and (<ref>) yields that
ℙ( g_m,S(x) > m^-2, ∀ x ∈ [-1,1]; and
x f_m,S(x) ≥ 0, ∀ x ∈J_κ) ≥η_s,κ×∏_j = s^κ q_j,
where the factors η_s,κ and q_j are defined as follows:
η_s,κ ℙ( g_2^s,S(x) > α/M, ∀ x ∈J_0; and
g_2^s,S(x) ≥ 0, ∀ x ∈J^c_0; and
x f_2^s,S(x) ≥ -2^κ/2, ∀ x ∈J_κ)
q_j ℙ( f_2^j (x) > Γ_0 2^j/2, ∀ x ∈J_j; and
f_2^j(x) ≥ -2^j/2, ∀ x ∈J^c_j; and
x g_2^j(x) ≥Γ_0 2^j/2, ∀ x ∈J_κ)
That q_j is uniformly bounded away from zero in κ, independently of j, was established in <cit.>— indeed, note that q_j arises from the pieces of the random polynomials f_m,S and g_m,S that do not involve the fixed coefficients c_i for i ∈ S. It remains to show that η_s,κ is uniformly bounded away from zero in κ, for all sufficiently large κ. By our assumption that the coefficient data is nice, there exists an even integer s' > k, a set of values {v_i : i ∈{0,…, s'-1}∖ S}⊂ℝ, and a small constant β > 0 such that ℙ(|a_i - v_i| ≤β) > 0 for each i ∈{0,…, s'-1} and such that whenever v_i' ∈ [v_i - β, v_i + β] for every i ∈{0, …, s'-1}∖ S, we have
∑_i ∈ S c_i x^i-1 + ∑_i ∈{0, …, s'-1}
i ∉S v_i' x^i > 0, ∀ x ∈ (-1,1).
Indeed, the niceness assumption guarantees that the left-hand side of (<ref>) is nonzero for all x ∈ (-1,1) and v_i' ∈ [v_i - β, v_i + β], and so the fact that c_1 > 0 ensures that it is actually positive. For any ℓ > s'/2, consider the sum of terms
v_s'' x^s' + v_s'+1'x^s'+1 + ⋯ + v_2ℓ'x^2ℓ + v_2ℓ+1' x^2ℓ+1.
If we restrict the coefficients in (<ref>) to satisfy
α + β≥ v_2i' ≥ v_2i+1' ≥α - β, for every i ∈{s'/2,…,ℓ},
then the sum in (<ref>) is nonnegative for all x ∈ [-1,1]. Now, choose s so that 2^s > s', and take ℓ = 2^s-1-1. We claim that, if the coefficients v_i' satisfy v_i' ∈ [v_i - β, v_i + β] for every i ∈{0, …, s'-1}∖ S along with (<ref>), then we have
∑_i ∈ S c_i x^i-1 + ∑_i ∈{0, …, 2^s-1}
i ∉S v_i' x^i > α/M, ∀ x ∈ [-1 + 2^-s, 1 - 2^-s]
for sufficiently small β and large M. Indeed, there exists some ξ∈ (0,1-2^-s) such that the left-hand side of (<ref>) is at least c_1/2 for all x such that |x| ≤ξ. Then for all x such that ξ≤ |x| ≤ 1 - 2^-s, we have that
∑_i = s'^2^s-1α x^i = x^s'×∑_i = 0^2^s - s' - 1α x^i ≥ξ^s'/4×α.
Note that the quantities c_1/2 and ξ^s'/4 are fixed. Thus, if we take β to be sufficiently small, then the sum on the left-hand side of (<ref>) is bounded away from zero by some fraction of α, proving the claim. Finally, with the coefficients v_i' chosen as above and for β sufficiently small, we also have that
|xf_2^s,S(x)| = |∑_i ∈ S c_i x^2^s-i + ∑_i ∈{0, …, 2^s-1}∖ S v_i' x^2^s-1-i| ≤ 2^sC ×α, ∀ x ∈J_κ,
where C > 0 depends on the coefficients c_i, v_i as well as on β. Taking κ to be so large that 2^κ/2 > 2^sC ×α, we conclude that η_s,κ is uniformly bounded away from zero in κ.
It remains to explain how the proof of the lemma adapts to the setting of Theorem <ref>. Because of our assumption that k(n) = n^o(1/√(log n)), the only aspects of the proof that change are the choice of s = s(n), which must be taken so that 2^{s(n)} > k(n), as well as the estimation of the factor η_s(n),κ. But because the subleading fixed coefficients c_2, …, c_k(n) are all equal to zero, it is easy to see that the coefficient data are nice, in the sense that a statement analogous to (<ref>) holds. The rest of the proof is the same.
Combining the bounds on Q_3' and Q_4' obtained in (<ref>) and (<ref>), respectively, we deduce that
Q_3 ≥Q_3' - Q_4' ≥ n^-o(1).
The lower bound P_n,S,γ_n≥ n^-b+o(1)
in Theorem <ref>, as well as the lower bound P_n,S_n≥ n^-b + o(1) in Theorem <ref>,
then follows by substituting the bounds (<ref>), (<ref>), and (<ref>) on the factors Q_1, Q_2, and Q_3 into the right-hand side of (<ref>).
§.§ Upper bound
We finish by handling the upper bound in
Theorem <ref>. Let η_n ≔inf{γ_n(x) : ||x|-1|≤ n^-ϵ_n}, and recall that our assumptions imply that η_n → 0. Then, by applying the strong approximation results (<ref>) and (<ref>) with m = n, t = n^{ϵ_n/4}, and p = p_n, we deduce that
P_n,S,γ_n = ℙ( f̂_n,S(x) >γ_n(x), ∀ x ∈ℝ) ≤ℙ( f̂_n,S(x) >η_n and ĝ_n,S(x) >η_n, ∀ x ∈I)
≤ℙ( f̂^b_n,S(x) > η_n - n^-ϵ_n/4 and ĝ_n,S^b(x) >η_n -n^-ϵ_n/4, ∀ x ∈I) + 2 n^-3≤ n^-b+o(1)
for all n ≫ 1. The second-to-last inequality follows because inf_x ∈Imin{σ_n,S(x),σ̃_n,S(x)}≥ n^{ϵ_n/2} for all n ≫ 1; the last inequality follows by an application of Theorem <ref>, taking the threshold to be η_n-n^-ϵ_n/4 when x ∈I∪I^-1
and -∞ for all other x. This completes the proof of Theorem <ref>.
§ PROOF OF THEOREM <REF> AND <REF>
The purpose of this section is to prove Theorems <ref> and <ref> (and hence also Theorem <ref> and Corollary <ref>). The arguments given in this section are logically independent of the previous two sections, so in particular, we obtain a second proof of Theorem <ref>. We retain notation (n, k, S, and j) as in the setting of the theorem.
§.§ Lower bound
To obtain the lower bound, we split into cases according to the parity of n,j. In <ref>, we treat the case where n-1,j are even, and in <ref>, we handle the case where n-1,j are odd.
§.§.§ The case where n-1,j are even
Let n-1,j be even. Note that by the assumptions of the theorem, the support of the distribution of the random coefficients intersects both intervals (-∞, 0) and (0, ∞). Further, there exists a sequence (p_n)_n with lim_n →∞ p_n = ∞ and 𝔼(|a_0|^p_n) ≤ n. Set ρ_n ≔max{5p_n^-1, (log n)^-1/2}; note that the definitions of p_n and ρ_n are not the same as in <ref>.
Let M ≫ 1 be such that ℙ(|a_i| < M) > 0. Define the set
𝒱≔{v ∈ℝ : ℙ(|a_i-v|<ϵ) > 0, ∀ϵ > 0}⊆ℝ,
and observe that 𝒱≠∅. We begin with the following result, a somewhat weaker version of which is claimed and proven in <cit.>.
There exist a constant C_j(depending only on j and the law of the a_i), a small enough constant ϵ > 0, a constant B > 0, choices of even m = m(n) ∼ C_j ρ_n log n, and a polynomial B(x) = ∑_i = 0^m - 1 b_i x^i, with | b_i |≤ B and b_i ∈𝒱, such that the following holds. Suppose g(x) = ∑_i = 0^n - 1 g_i x^i is any polynomial satisfying the following four properties:
H1: | g_i - b_i | < ϵ for i ∈{0, 1, … , m - 1}
H2: | g_n - 1 - i|≤ M for i ∈{0, 1, … , m - 1}
H3: g_m + g_m + 1 x + ⋯ + g_n - m - 1 x^n - 2m - 1 > n^-1/4σ_n - 2m(x) for all x ∈ℝ
H4: | g_i | < (n-1)^ρ_n-1 for i ∈{0, 1, … , n - 1}.
Then g(x) has exactly j zeros in [0, 1], all of them simple, and is positive on [-1, 0].
This is proved as Step 1 and Step 2 of Lemma 8.2 in DPSZ. Our property H2 replaces the stronger property A2 in DPSZ, but one sees readily in their proofs of Step 1 and Step 2 that in each of the two occurrences where A2 is invoked, only the estimate
| g_n - 1 x^n - 1 + ... + g_n - m x^n - m|≤ m M | x^n - m|
on | x |≤ 1 is necessary. This estimate follows immediately from H2.
We now show that with large enough probability, f_n, S satisfies simultaneously H1, H2, H3, and H4, which we may call satisfying H. The proof is essentially the same as the proof of Lemma 8.1 in DPSZ, but it is nonetheless short and instructive.
Take M > max_i ∈ S |c_i|. The probability that f_n, S satisfies 𝐇 is at least n^-b + o(1).
It is easy to see that, because j = o(log n), we have (𝐇1) ≥ n^o(1) and (𝐇2) ≥ n^o(1). By <cit.> (i.e., the analogue of our Theorem <ref> for fully random polynomials), we have that (H3) ≥ n^-b + o(1). Because H1, H2, and H3 are independent, the probability that they occur together is at least n^-b + o(1). As for H4 note that by Markov's inequality, we have for each i that
(|a_i| ≥ (n-1)^ρ_n-1) = ( |a_i|^p_n-1≥ n^p_n-1ρ_n-1) ≪ n^-5,
so H4 fails with probability O(n^-3). Using the fact that (𝐇) ≥(𝐇1, 𝐇2, 𝐇3) - (not 𝐇4), together with the fact that b < 3, the desired lower bound follows.
Now, let
𝐇' be the condition on g(x) = ∑_i = 0^n - 1 g_i x^i that g_n - i = c_i for each i ∈ S, that
H also holds, and further that
x^m σ_n - 2m(x)/n^1/4 - ∑_i = 0^m - 1 (B + ϵ) | x |^i + ∑_i ∈ S c_i x^n - i +
∑_i ∈{1, … , m}
i ∉S g_n - i x^n - i
> 0, ∀ x such that | x |≥ 1.
Observe that the condition 𝐇' may be obtained from the condition 𝐇 by keeping 𝐇1, 𝐇3, and 𝐇4 intact and by replacing 𝐇2 with the stronger condition that |g_n-1-i| ≤ M for i ∈{0,…,m-1} and (<ref>) holds.
Suppose g(x) = ∑_i = 0^n - 1 g_i x^i satisfies 𝐇'. Then g(x) has exactly j real zeros, all of them simple.
It only remains to be shown that if 𝐇' holds, then g(x) has no real zeros with | x |≥ 1, which holds by combining (<ref>), H1, and H3.
The probability that f_n, S satisfies 𝐇' is at least n^-b + o(1).
By the proof of Lemma <ref>, it suffices to show that f_n, S satisfies (<ref>) with probability at least n^o(1). For | x |≥ 1, we have σ_n - 2m(x) ≥ (n - 2m)^1/2. Consequently, for all n ≫ 1, we have
x^m n^-1/4σ_n - 2m(x) - ∑_i = 0^m - 1 (B + ϵ) | x |^i > 0
for all x such that | x |≥ 1. It now suffices to prove that
∑_i ∈ S c_i x^m - i +
∑_i ∈{1, ... , m}
i ∉S a_m - i x^m - i≠ 0
for x such that |x| > 1 with probability at least n^o(1), but this follows from the niceness condition, just as in the proof of Lemma <ref>.
In the context of Theorem <ref>, it is also clear that if k = k(n) grows as slowly as o(log n), then the above estimates continue to hold — Lemma <ref> needs to be modified to permit fixing more than m coefficients at the top of the polynomial, but this is possible because the behavior of the polynomial on [-1,1] is determined almost entirely by the last m coefficients.
§.§.§ The case where n-1,j are odd
Let n-1,j be odd. Recall that in <ref>, our idea was to adapt the proof given in <cit.> to work in our setting. This was possible because the proof in loc. cit. involves choosing very tiny ranges for the low-degree coefficients and only requires that the high-degree coefficients be bounded. But when n-1,j are odd, the proof given in <cit.> does not similarly adapt to our setting: their argument involves choosing tiny ranges for both the low- and high-degree coefficients of the polynomial, which prevents us from fixing the high-degree coefficients to have the desired values c_i. Thus, to handle the case where n-1,j are odd, we must prove a new analogue of Lemma <ref> that allows us to control the number of zeros using only the low-degree coefficients.
Since the a_i are of zero mean and unit variance, there exist β < 0 < α such that α,β∈V. We may assume without loss of generality that α > -β. Let s ≥ 4 be an even integer such that α + (s-1)β < 0, and define polynomials
Q(x) ≔ -β x^s-1 - ∑_i = 0^s-2α x^i, R(x) ≔ -α - ∑_i = 1^s-1β x^i.
Note that by construction we have Q(x) < 0 for all x such that |x| ≤ 1 and R(1) > 0 > R(-1).
Let δ > 0 be sufficiently small, let r = r(δ) be sufficiently large, and let ϵ = ϵ(r,δ) be sufficiently small; we will choose the values of these quantities, all of which are independent of n, later. Let r_i denote the multiple of s nearest to r^i, for each i ∈{1,…, j}. Let m = m(n) be the multiple of s nearest to 2r_jρ_n log n/|log(1-δ)|.
B(x) = ∑_i = 0^m b_ix^i ≔ (1 + x^s + x^2s + ⋯ + x^r_1-s)Q(x) + (x^r_1 + x^r_1 + s + ⋯ + x^r_2 -s)R(x) +
(x^r_2 + x^r_2+s + ⋯ + x^r_3-s)Q(x) + (x^r_3 + x^r_3 + s + ⋯ + x^r_4 -s)R(x) + ⋯ +
(x^r_j-1 + x^r_j-1+s + ⋯ + x^r_j-s)Q(x) + (x^r_j + x^r_j + s + ⋯ + x^m -s)R(x) + α x^m.
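This construction can be sanity-checked numerically. In the sketch below, all parameters (α, β, s, the r_ℓ, and m, the latter all multiples of s) are small toy values of ours; the script also verifies the sign conditions on Q and R and the telescoped form of (1 − x^s)B(x) recorded later in the proof.

```python
import numpy as np

# Toy check of B(x) and of the telescoping identity
#   (1-x^s) B(x) = Q(x) + sum_l (-1)^l (Q(x)-R(x)) x^{r_l} - R(x) x^m
#                  + alpha x^m (1-x^s).
alpha, beta, s = 1.0, -0.4, 4                      # beta < 0 < alpha, alpha > -beta,
assert alpha + (s - 1) * beta < 0                  # and alpha + (s-1) beta < 0
Q = lambda x: -beta * x ** (s - 1) - sum(alpha * x ** i for i in range(s - 1))
R = lambda x: -alpha - sum(beta * x ** i for i in range(1, s))
assert all(Q(x) < 0 for x in np.linspace(-1, 1, 201)) and R(1) > 0 > R(-1)

j, r, m = 3, [8, 16, 32], 64                       # toy r_1 < ... < r_j and m

def B(x):
    bounds = [0] + r + [m]
    val = sum(sum(x ** d for d in range(bounds[l], bounds[l + 1], s))
              * (Q(x) if l % 2 == 0 else R(x)) for l in range(j + 1))
    return val + alpha * x ** m

for x in np.linspace(-0.99, 0.99, 9):
    lhs = (1 - x ** s) * B(x)
    rhs = (Q(x) + sum((-1) ** l * (Q(x) - R(x)) * x ** r[l - 1] for l in range(1, j + 1))
           - R(x) * x ** m + alpha * x ** m * (1 - x ** s))
    assert abs(lhs - rhs) < 1e-9
print("sign conditions on Q, R and the telescoping of (1-x^s)B(x) verified")
```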
Then we have the following lemma (which also applies in the setting of <cit.> and gives a different proof of their result for fully random polynomials):
For suitable choices of δ, r, and ϵ, suppose that g(x) = ∑_i = 0^n - 1 g_i x^i is any polynomial satisfying the following four properties:
K1: | g_i - b_i | < ϵ for i ∈{0, 1, … , m}
K2: | g_n - 1 - i|≤ M for i ∈{0, 1, … , m - 1}
K3: g_m+1 + g_m + 2 x + ⋯ + g_n - m - 1 x^n - 2m - 2 > n^-1/4σ_n - 2m-2(x) for all x ∈ℝ
K4: | g_i | < (n-1)^ρ_n-1 for i ∈{0, 1, … , n - 1}.
Then g(x) has exactly j zeros in [0, 1], all of them simple, and is negative on [-1, 0].
The proof of the lemma follows the argument given in <cit.> for the proof of Lemma <ref>, and it is likewise divided into three steps. The first step is to prove that g has the desired behavior on [0,1]; the second step is to do the same for [-1,0]; and the third step is to prove the same for ∖ [-1,1]. The third step is entirely identical to that of the proof of Lemma <ref>, so we omit it and focus on the first two steps.
Step 1. The zeros of g(x) in (0,1) are the same as those of F(x) (1 - x^s)g(x), so it suffices to prove the following conditions, which ensure that g(x) has exactly j zeros on [0,1]:
* F(x) < 0 for x ∈ [0,δ^1/r_1]
* F'(x) > 0 for x ∈ [δ^1/r_1,(1-δ)^1/r_1]
* F(x) > 0 for x ∈ [(1-δ)^1/r_1,δ^1/r_2]
* F'(x) < 0 for x ∈ [δ^1/r_2,(1-δ)^1/r_2]
* F(x) < 0 for x ∈ [(1-δ)^1/r_2,δ^1/r_3]
* F'(x) > 0 for x ∈ [δ^1/r_3,(1-δ)^1/r_3]
⋮
* F'(x) > 0 for x ∈ [δ^1/r_j,(1-δ)^1/r_j]
* F(x) > 0 for x ∈ [(1-δ)^1/r_j,2^-1/m]
* g(x) > 0 for x ∈ [2^-1/m,1]
To proceed, first note that for all x such that |x| ≤ (1-δ)^1/r_j, the polynomial F is approximated very well by the polynomial (1-x^s)B(x). Indeed, by conditions 𝐊1 and 𝐊4, we have the following bound for all x such that |x| ≤ (1-δ)^1/r_j and all n ≫ 1:
|F(x) - (1-x^s)B(x)| ≤ (1-x^s) (ϵ(1 + |x| + ⋯ + |x|^m-1) + n^ρ_n(|x|^m+1 + ⋯ + |x|^n))
≤ (1-|x|)^-1(ϵ + n^ρ_n|x|^m+1) ≪ϵ,
where the implied constant depends on r and δ. We are thus led to consider the polynomial (1-x^s)B(x), which can be expanded as follows:
(1-x^s)B(x) = Q(x) + (∑_ℓ = 1^j(-1)^ℓ(Q(x) - R(x))x^r_ℓ) - R(x)x^m + α x^m(1-x^s).
We start by verifying that the signs of F(x) are correct on the relevant intervals. Fix N such that sup_x ∈ [0,1]max{|Q(x)|,|R(x)|}≤ N, and take x ∈ [0,δ^1/r_1]. Then the factors x^r_ℓ, x^m, and x^m-s occurring in (<ref>) are all at most δ in size, so we have
(1-x^s)B(x) ≤ Q(x) + (2j+3)Nδ.
Thus, for all sufficiently small δ, the negativity of Q(x) on [0,1] implies in conjunction with (<ref>) that (1-x^s)B(x) is negative and bounded away from zero for all x ∈ [0,δ^1/r_1]. Using (<ref>), for all sufficiently small ϵ we conclude that F(x) is negative on this interval.
Next, take x ∈ [(1-δ)^1/r_i,δ^1/r_i+1] for some i ∈{1,…, j-1}, and note that we have x^m ≤ x^r_ℓ≤δ for all ℓ > i and x^r_ℓ∈ [1-δ,1] for all ℓ≤ i. Observe that we have the identity
Q(x) + ∑_ℓ = 1^i(-1)^ℓ(Q(x) - R(x)) = QR(x), where QR(x) ≔ Q(x) if i is even and QR(x) ≔ R(x) if i is odd.
Combining (<ref>) with (<ref>), it follows that for all x ∈ [(1-δ)^1/r_i,δ^1/r_i+1] we have
|(1-x^s)B(x) - QR(x)| ≤ (2j+3)Nδ.
Thus, for all sufficiently small δ, the polynomial (1-x^s)B(x) is as close as we like to QR(x), which is negative if i is even, and which is positive if i is odd and r is sufficiently large. That we have the desired behavior on this interval of x then follows from (<ref>) by taking ϵ to be sufficiently small.
Next, take x ∈ [(1-δ)^1/r_j,2^-1/m]— note that on this interval, the bound (<ref>) no longer applies, but it follows from conditions 𝐊1, 𝐊2, and 𝐊3 that for all n ≫ 1,
F(x) - (1-x^s)B(x) ≥ -(1-x^s)(ϵ(1 + |x| + ⋯ + |x|^m) + mM|x|^n-m) ≫ -ϵ,
where the implied constant is positive and depends on r and δ.
Then, from (<ref>) and (<ref>), taking i = j, we deduce that
(1-x^s)B(x) ≥1/2R(x) - 2jNδ.
Thus, for all sufficiently small δ, the polynomial (1-x^s)B(x) is bounded away from zero, and the desired behavior on this interval of x then follows from (<ref>) by taking ϵ to be sufficiently small.
Next, take x ∈ [2^-1/m,δ^1/n]. Applying conditions 𝐊1, 𝐊2, and 𝐊3 to control the three ranges of terms in the polynomial g yields the following lower bound:
g(x) ≥ B(x) - ϵ (1 + ⋯ + |x|^m) - mMx^n-m≥ (x^r_j + x^r_j + s + ⋯ + x^m-s)R(x) - r_jN - ϵ m - mMδ^1/2
≥((m-r_j)/(2s))R(x) - r_jN - ϵ m - mMδ^1/2
Since m = m(n) →∞ as n →∞, and since R(1) > 0, we see the bound in (<ref>) is positive for all n ≫ 1 so long as δ and ϵ are sufficiently small.
Next, take x ∈ [δ^1/n,1]. In this range, conditions 𝐊1, 𝐊2, and 𝐊3 imply that the terms g_m+1x^m+1 + ⋯ + g_n-m-1x^n-m-1 dominate, contributing at least n^1/8 as n →∞. The other terms contribute ≪ m = o(log n) on this range, giving the desired positivity.
We now move on to verifying that the signs of F'(x) are correct on the relevant intervals. We note that for all x ∈ [0,(1-δ)^1/r_j], the derivative F'(x) is approximated very well by the derivative of (1-x^s)B(x). Indeed, by conditions 𝐊1 and 𝐊4, we have the following bound for all x ∈ [0,(1-δ)^1/r_j] and all n ≫ 1:
|F'(x) - d/dx((1-x^s)B(x))| ≤ sx^s-1(ϵ(1 + ⋯ + x^m) + n^ρ_n(x^m+1 + ⋯ + x^n)) +
(1-x^s)(ϵ(1 + 2x + ⋯ + mx^m-1) + n^ρ_n((m+1)x^m + ⋯ + nx^n-1))
≤ s(1-x)^-1(ϵ + n^ρ_nx^m+1) + (1-x)^-2(ϵ + n^ρ_n(m+1)x^m) ≪ϵ,
where the implied constant depends on r and δ. By differentiating (<ref>), we obtain the identity
d/dx((1-x^s)B(x)) = Q'(x) + ∑_ℓ = 1^j (-1)^ℓ((Q'(x) - R'(x))x^r_ℓ + (Q(x)-R(x))r_ℓ x^r_ℓ - 1) - o(1).
Even though our choice of the polynomial B is different from the choice made in <cit.>, and we chose Q and R to be the negatives of the choices made in loc. cit., the identity (<ref>) takes the exact same shape in both settings, and thus their analysis applies with minimal change. In particular, their argument shows that the term (-1)^i(Q(x) - R(x))r_ix^r_i-1 dominates the right-hand side of (<ref>) on the interval [δ^1/r_i,(1-δ)^1/r_i], and the desired behavior then follows from the fact that Q(x) - R(x) is negative and bounded away from zero on the interval [δ^1/r_1,1], so long as r is sufficiently large.
Step 2. The analysis here is similar to that of Step 1, except that both Q(x) and R(x) are negative when x is close to -1, as is the middle range of terms g_m+1x^m+1 + ⋯ + g_n-m-1x^n-2m-2. For the sake of brevity, we just highlight two important places in which the proof in Step 1 needs to be modified here. Firstly, in (<ref>), the contribution of α x^m(1-x^s) cannot be ignored, but we observe that this term is ≪δ on the interval [-2^-1/m,-(1-δ)^1/r_j], so it is negligible as long as δ is sufficiently small. Secondly, we are no longer interested in studying the sign of the derivative F'(x); rather, we must show that F(x) < 0 on the intervals [-(1-δ)^1/r_i,-δ^1/r_i]. To do this, it suffices by (<ref>) (taking ϵ to be sufficiently small) to prove that (1-x^s)B(x) is negative and bounded away from zero on [-(1-δ)^1/r_i,-δ^1/r_i]. Combining (<ref>) and (<ref>), we see that for all x ∈ [-(1-δ)^1/r_i,-δ^1/r_i],
|(1-x^s)B(x) - t_i(x)Q(x) - (1-t_i(x))R(x)| ≤ (2j+1)Nδ,
where t_i(x) = 1 - x^r_i if i is even and t_i(x) = x^r_i if i is odd. Thus, if δ is sufficiently small, it suffices to see that t_i(x)Q(x) + (1-t_i(x))R(x) is negative and bounded away from zero, but this is true for all sufficiently large r because then both Q(x) and R(x) are negative and bounded away from zero.
The lower bound in Theorem <ref> in the case where n-1,j are odd now follows from Lemma <ref> in exactly the same way as the case where n-1,j are even follows from Lemma <ref>. In the context of Theorem <ref>, it is also clear that if k = k(n) grows as slowly as o(log n), then the above estimates continue to hold — Lemma <ref> needs to be modified to permit fixing more than m coefficients at the top of the polynomial, but this is possible because the behavior of the polynomial on [-1,1] is determined almost entirely by B(x).
§.§ Upper bound
Fix δ∈ (0, 1/2), and let V be as defined in (<ref>). Then restricting x to lie in V, we have that
P_n, j≤ℙ(#{x ∈ℝ: f_n,S(x) = 0}≤ j) ≤ℙ(#{x ∈ V: f_n,S(x) = 0}≤ j).
Consider the range T = [δlog n, (1 - δ) log n], chosen so that setting x = w_1(t) = 1 - e^-t for t ∈ T traces out the interval I_1 [1 - n^-δ, 1 - n^-(1 - δ)], while x = w_2(t) = w_1(t)^-1, x = w_3(t) = -w_1(t)^-1, and x = w_4(t) = -w_1(t) trace out the remaining closed intervals I_2, I_3, and I_4 that constitute V.
Divide T into R ⌊ (1 - 2 δ) log n ⌋ unit-length intervals — the last one slightly longer if necessary — with images J_(i - 1) R + 1, ... , J_iR in I_i under w_i. If f_n,S indeed has at most j zeros in V, then there is at least one way of choosing j intervals J_i_1, ... , J_i_j to ignore, so that f_n,S has constant sign on each of the ℓ≤ j + 4 maximal leftover intervals comprising V ∖(J_i_1∪⋯∪ J_i_j). Number these maximal leftover intervals L_1, ... , L_ℓ. A quick calculation reveals that, because j = o(log n/loglog n), there are n^o(1) choices of (i_1, ... , i_j) and n^o(1) choices of (s_i)_i ∈{± 1}^ℓ for the signs of f_n,S on the L_i. Thus, it suffices to obtain an upper bound on the quantity
(min_1 ≤ i ≤ℓinf_x ∈ L_i s_i f_n,S(x) > 0 )
for each choice of intervals J_i_1,…,J_i_j to ignore and each choice of signs (s_i)_i ∈{± 1}^ℓ.
To estimate (<ref>), we split off the first ≈ k terms of f_n,S, leaving behind a fully random polynomial to which we can apply the results of <cit.>. More precisely, let r be the smallest even integer such that r ≥ k.
Then we write
f_n,S(x) = σ_n - r(x) f̂_n - r(x) + ∑_i ∈ S
c_i x^n - i + ∑_i ∈{1, … , r}
i ∉S a_n-i x^n - i.
For s_if_n,S(x) to be positive, s_iσ_n-rf̂_n-r(x) must be at least as large as the negative of the sum of the remaining terms on the right-hand side (<ref>). To bound these remaining terms from above, observe that there exists a constant ξ > 0 such that for all x ∈ V, i ∈{1, …, r}, and n ≫ 1, we have
|x^n-i| ≤ξ× n^-δ/2σ_n - r(x).
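A numerical illustration of this bound is given below; the values of n, δ, and r are illustrative choices of ours, and the computation is done in log space to avoid overflow.

```python
import numpy as np

# Tabulate max over a grid of V (positive side) and over i <= r of
# |x|^{n-i} * n^{delta/2} / sigma_{n-r}(x); boundedness in n illustrates the
# existence of the constant xi.
def log_sigma(logx, num_coeffs):                    # log sqrt(sum_d x^{2d})
    v = 2 * np.arange(num_coeffs)[None, :] * logx[:, None]
    vmax = v.max(axis=1)
    return 0.5 * (vmax + np.log(np.exp(v - vmax[:, None]).sum(axis=1)))

delta, r = 0.2, 4
for n in (100, 400, 1600):
    base = np.geomspace(n ** -(1 - delta), n ** -delta, 200)
    logx = np.log(np.concatenate([1 - base, 1 / (1 - base)]))
    ls = log_sigma(logx, n - r)
    worst = max(np.max(np.exp((n - i) * logx + (delta / 2) * np.log(n) - ls))
                for i in range(1, r + 1))
    print(n, round(float(worst), 3))
```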
Applying the bound in (<ref>) to the terms of degree n-i where i ∈ S, we find for all n ≫ 1 that
ℙ(min_1 ≤ i ≤ℓinf_x ∈ L_i s_i f_n,S(x) > 0 ) ≤ℙ(min_1 ≤ i ≤ℓinf_x ∈ L_i s_i f̂_n - r(x) > - r ξ n^-δ/4) + ∑_i ∈{1, … , r}∖ Sℙ(|a_i| ≥ n^δ/4).
Note that the summation on the right-hand side of (<ref>) is O(n^-3)— indeed, it is O(n^-a) for every a > 0, because the coefficient law has finite moments of all orders. As for the first term on the right-hand side of (<ref>), this is shown in <cit.> to be bounded by n^-(1-2δ)b + o(1). Taking the limit as δ→ 0 yields the desired upper bound. It is also clear that if k = k(n) (and hence r) grows as slowly as o(log n), then the above estimates continue to hold.
§ ACKNOWLEDGMENTS
We thank Kumar Murty, Bjorn Poonen, Qi-Man Shao, and Melanie Matchett Wood for helpful conversations. AS was supported by the National Science Foundation, under the Graduate Research Fellowship, as well as Award No. 2202839.
|
http://arxiv.org/abs/2409.02811v1 | 20240904153057 | Physics Perspectives with the ePIC Far-Forward and Far-Backward detectors | [
"Michael Pitt"
] | physics.ins-det | [
"physics.ins-det",
"hep-ex",
"nucl-ex"
] |
Packing and finding paths in sparse random graphs
Vesna Iršič [mailto:[email protected]@fmf.uni-lj.si, Faculty of Mathematics and Physics, University of Ljubljana, Slovenia]
Julien Portier [mailto:[email protected]@cam.ac.uk, Department of Pure Mathematics and Mathematical Statistics (DPMMS), University of Cambridge, Wilberforce Road, Cambridge, CB3 0WA, United Kingdom]
Leo Versteegen [mailto:[email protected]@gmail.com, Department of Pure Mathematics and Mathematical Statistics (DPMMS), University of Cambridge, Wilberforce Road, Cambridge, CB3 0WA, United Kingdom]
September 9, 2024
============================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
§ INTRODUCTION
The ePIC experiment, which is scheduled to begin in the early 2030s at the future Electron–Ion Collider (EIC) at Brookhaven National Laboratory (BNL), is poised to deepen our understanding of the fundamental structure of visible matter. The primary objectives of the scientific mission of the EIC, as outlined in a 2018 report by the National Academy of Sciences <cit.>, include the determination of the full 3D momentum and spatial structure of nucleons, with a focus on understanding the gluon density and saturation phenomena. Additionally, the ePIC experiment aims to elucidate how the mass and the spin of nucleons and other hadrons arise from strong interactions.
The ePIC experiment will comprise a central 10-meter-long cylindrical barrel detector, covering a rapidity range from -4 to 4. An additional array of detectors that extends approximately 50 meters in the forward (η>4) and backward (η<-4) directions will be incorporated and is essential for achieving key objectives, such as measuring luminosity, tagging low-Q^2 scattered electrons, and measuring both prompt and secondary particles in the rapidity range of η>4. The extended detector array also expands the scope of the physics program beyond what was initially envisioned, significantly enhancing the EIC's research potential. The following section will delve into the details of the forward and backward detector arrays and explore some of the new physics opportunities made possible by these detectors.
§ THE EXTENDED EPIC DETECTOR
§.§ The Far-Backward detectors
§.§.§ Luminosity monitor
Precise cross-sectional measurements place stringent requirements for luminosity determination at the EIC. The luminosity monitor is set to measure the luminosity from the electron–ion elastic bremsstrahlung process ep→ eγ p with a precision better than 1%, as this process has a very large cross-section (∼mb) <cit.>. The detector concept follows a detector design similar to that used at ZEUS, HERA <cit.>, and it employs two distinct methods for counting bremsstrahlung photons: photon conversion into e^+e^- pairs for precise DIS cross-sectional measurements and direct (non-converted) photon detection for monitoring instantaneous collider performance.
Bremsstrahlung photons deviate from the electron beam and exit the beampipe through a 1-cm-thick aluminum window, which acts as an energy filter for them. However, this necessary thickness of the exit window inevitably causes some photon pair conversions, which are then eliminated by a sweeper magnet positioned after the exit window. One percent of the bremsstrahlung photons are converted into e^+e^- pairs using a 1-mm-thin aluminum foil and are then directed into a pair spectrometer by an adjustable spectrometer magnet. The pair spectrometer comprises a tracking layer (AC-LGAD <cit.>) with a spatial resolution of 20 μ m, followed by a scintillating fiber calorimeter with a thickness of 23X_0. The remaining bremsstrahlung photons are measured in a direct-photon calorimeter.
§.§.§ Low-Q2 taggers
Photon virtuality (Q^2) is closely related to the scattering angle of the outgoing electron in eA collisions. In these interactions, the central detector, which covers a rapidity range down to η=-4, has a high acceptance for outgoing electrons with Q^2>1 GeV^2. However, scattered at smaller angles, electrons with low Q^2 values tend to escape detection. Enhancing the detector's ability to cover these small angles would not only allow to probe a wider range of kinematic regions but would also provide valuable insights into processes involving quasi-real photons in the Q^2 range between 10^-3 to 10^-1. Figure <ref> shows the correlation between the rapidity of scattered electrons and Q^2 at the generator level before and after applying track reconstruction using the ePIC detector simulation. While there is high acceptance for Q^2 values below 10^-3, this very low region will likely be dominated by background tracks, which will be the focus of future studies.
§.§ The Far-Forward detectors
All processes of interest at the EIC are associated with the production of very forward particles, necessitating strong detection capabilities for hadrons and photons in the far-forward region (η>4). The far-forward (FF) array includes various detector concepts tailored to meet the demands of the physics program, such as calorimetry for neutrons and photons, silicon sensors for charged particle tracking and timing, and specialized detectors such as Roman pots for detecting protons or nuclear fragments that are very close to the beam. These elements are discussed in detail in this subsection. The layout of the FF detectors is shown in Figure <ref>.
§.§.§ The B0 detector
The B0 detector consists of four evenly spaced silicon tracker layers based on AL-LGAD technology, along with an electromagnetic calorimeter (EMCAL) made up of 2× 2× 20 cm^3 PbOW_4 crystals, all positioned within the bore of the B0 dipole magnet. The entire B0 detector subsystem is designed to measure particles produced at scattering angles between 5.5 and 20 mrad. The current design features a very low material budget in the rapidity range of 5 < η < 5.5, allowing for high acceptance of photons across a broad energy spectrum (greater than 50 MeV), including low-energy de-excitation photons. The EMCAL is planned to achieve an energy resolution of 6–7% and a position resolution of approximately 3 mm. With more than one interaction length, the EMCAL has a detection efficiency of over 50% for neutrons, making it suitable for veto studies. The B0 tracker can measure forward protons with a momentum resolution dp/p of 2–4%.
§.§.§ Forward beamline trackers
The very forward tracker is composed of two detectors: off-momentum detectors (OMDs) and Roman pots (RPs). Charged particles with lower magnetic rigidity experience greater deflection in the magnetic fields compared with nominal beam particles, leading to a displacement from the beam center. In eA collisions with A>1, most of the accelerated spices have A/Z∼ 2 when protons are emitted from the ion breakup, and they will carry half of the energy required to keep the protons in orbit, resulting in a large displacement from the beam center. The OMDs, which are positioned just after the B1 dipole magnet, measure protons and other charged particles with a beam rigidity (x_L) ranging from 30% to 60%. Protons and other charged particles with a beam rigidity above 60% are detectable in the RP detectors, which are placed just a few millimeters from the hadron beam in both the vertical and horizontal directions. The detectors also provide acceptance for scattered protons for up to 5 mrad in all directions.
Both the OMDs and RPs consist of two AC-LGAD-based tracking planes that are spaced two meters apart and are capable of measuring both the hit position and the local scattering angle between two planes. The position of a charged particle and its scattering angle at a given distance from the interaction point (IP) can be determined by considering its kinematics at the IP and applying a transformation matrix, which is evaluated for different beam optics configurations and particle rigidities at different distances from the IP. By inverting this equation, the track coordinates at the detector plane can be translated into the particle's kinematics at the interaction point.
§.§.§ Zero degree calorimeter
Neutral particles produced at scattering angles below 5 mrad propagate in a straight line through an exit window to a dedicated zero-degree calorimeter (ZDC), which is positioned before the B2 dipole magnet, as illustrated in Figure <ref>. The ZDC design includes both electromagnetic and hadronic calorimeters. The electromagnetic section consists of 20-cm-long LYSO or PbOW_4 crystals, which are optimized for detecting soft photons, while the hadronic section will be similar to the ePIC forward hadron calorimeter <cit.>. The reconstruction of neutral particles utilizes a machine-learning-based approach using the HEXPLIT algorithm <cit.>, which meets the detector requirements of an energy resolution of Δ E=50%/√(E)⊕ 5% and a position resolution of Δθ=3mrad/√(E)⊕ 5% for neutrons, as well as an energy resolution of Δ E=5%/√(E)⊕ 3% and a position resolution of 0.5–1 mm for photons, where E represents the energy deposition in units of GeV.
§ PHYSICS PERSPECTIVES
The far-forward and far-backward detectors at the EIC expand the scope of the physics program beyond the initial expectations, enhancing the research potential of the ePIC experiment. These detectors play a crucial role in several key processes. The deeply virtual Compton scattering process and deeply virtual meson production are essential for imaging the transverse spatial distribution of quarks and gluons within a proton during ep collisions. These processes rely on the precise detection of the intact proton, which is often detected in the far-forward region. In ed collisions, short-range correlations between nucleons can be investigated by tagging proton and neutron <cit.>. This method provides insights into the dynamics arising from gluons in the low-x region and how these dynamics depend on the internal configuration of nucleons. The Sullivan process offers a unique opportunity to study the form factors and structure functions of pions and kaons. This process involves the production of an outgoing baryon at very forward rapidities. Saturation effects and the internal structure of nuclei can be explored through diffractive production processes or coherent vector meson production. These phenomena are particularly sensitive to the distribution of gluons within nuclei and provide valuable information about the onset of gluon saturation. The structure of free neutrons and the EMC effect can be investigated by tagging spectators in the interactions of light nuclei, such as e+3He collisions. Spectator tagging allows for the isolation of specific interaction channels, providing a clearer view of neutron structure and the modification of nucleon structure within nuclei. In interactions of heavy nuclei, the high acceptance for soft photons in the far-forward detectors enables groundbreaking measurements of ion de-excitations during coherent eA scattering. These measurements provide insights into the de-excitation processes and the energy dissipation in excited nuclear states that emerge from high-energy collisions.
This section elaborates on two of these processes in more detail, illustrating the enhanced capabilities provided by the far-forward detectors at the EIC.
§.§.§ Coherent Vector Meson Production
One of the golden measurements at the EIC is the study of coherent and incoherent vector meson production processes from heavy nuclei <cit.>. This measurement provides a means of probing the gluon distribution within an ion and is particularly sensitive to saturation phenomena (see, for example, <cit.>). While both coherent and incoherent processes are of great interest, measuring the coherent production processes at high momentum transfer (t) or measuring the incoherent production process at a very small momentum transfer presents significant challenges. The discrimination of incoherent processes is achievable due to the extensive kinematic coverage provided by the far-forward (FF) detector array.
In <cit.>, the authors explored the vetoing capabilities of FF detectors to distinguish between coherent and incoherent production processes. Incoherent processes are characterized by ion breakup and the production of ion fragments, including free protons and neutrons, in the forward direction. Protons were effectively vetoed by the B0 tracker, OMD, and RP detectors, while the B0 EMCAL and ZDC vetoed neutral particles (such as photons and neutrons). As a result, most events that passed the full event selection criteria were identified as incoherent events, primarily through ion de-excitation. The authors demonstrated that the vetoing techniques suppressed incoherent processes by two orders of magnitude, leading to a signal-to-background ratio above unity for events with t values near the first diffractive minimum.
Additionally, most of the remaining background events originate from incoherent processes involving ion de-excitation, where the emitted photons escape detection. These processes, which have never been systematically studied, offer a promising new area of investigation with potential implications for the EIC physics program.
§.§.§ Virtual Compton Scattering (u-channel)
While the DVCS is also considered one of the “golden channels” of the EIC physics program due to its clear interpretation in terms of generalized Parton distributions <cit.>, virtual Compton backward scattering (u-channel) involves a large momentum transfer and may play a significant role in baryon stopping in heavy-ion collisions. The main background for this process is a coherent π^0 production, where π^0→γγ can be misinterpreted as a single photon due to a small scattering angle or if one of the photons escapes detection <cit.>.
The highly segmented ZDC provides powerful discrimination across all energy ranges. The angular separation of the two photons at the beam energy E_beam at the distance of the ZDC from the IP is given by Δ x^γγ = 70· m_π /E_beam meters, which corresponds to separations of 23 cm, 9.5 cm, and 3.4 cm for beam energies of 41, 100, and 275 GeV, respectively. The authors of <cit.> demonstrated that the π^0 background is reduced to a few percent in ep collisions at 18× 275 GeV^2. Moreover, the inclusion of the B0 detector will enhance signal acceptance by a factor of two or ten at a beam energy of 100 and 41 GeV.
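As a quick arithmetic cross-check of the quoted separations (taking m_π^0 ≈ 0.135 GeV; the script is illustrative only):

```python
# Two-photon separation at the ZDC, Delta x = 70 * m_pi0 / E_beam metres.
m_pi0 = 0.135  # GeV
for e_beam in (41, 100, 275):  # GeV
    print(e_beam, "GeV ->", round(100 * 70 * m_pi0 / e_beam, 2), "cm")
# prints about 23.05, 9.45 and 3.44 cm, in line with the 23 cm, 9.5 cm and
# 3.4 cm quoted above.
```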
§ SUMMARY
Comprehensive acceptance studies for all far-forward and far-backward detectors have been conducted, and their performance is well understood based on currently available information. The extended far-forward detector array allows an impressive extension of the nominal physics program <cit.> foreseen with the existing detectors. The focus has now shifted to simulation studies of various processes in preparation for the ePIC Technical Design Report.
99
NAS:report
National Academies of Sciences, Engineering, and Medicine,
“An Assessment of U.S.-Based Electron-Ion Collider Science”,
https://doi.org/10.17226/25171
The National Academies Press, Washington, DC, 2018
Haas:2010bq
T. Haas and V. Makarenko,
“Precision calculation of processes used for luminosity measurement at the ZEUS experiment”,
https://doi.org/10.1140/epjc/s10052-011-1574-9
Eur. Phys. J. C 71 (2011), 1574
Helbich:2005qf
M. Helbich, Y. Ning, S. Paganis, Z. Ren, W. B. Schmidke, F. Sciulli, U. Schneekloth, C. Buttner, A. Caldwell and J. Sutiak,
“The Spectrometer system for measuring ZEUS luminosity at HERA”,
https://doi.org/10.1016/j.nima.2006.06.049
Nucl. Instrum. Meth. A 565 (2006), 572-588
Mandurrino:2020ukm
M. Mandurrino, R. Arcidiacono, M. Boscardin, N. Cartiglia, G. F. Dalla Betta, M. Ferrero, F. Ficorella, L. Pancheri, G. Paternoster and F. Siviero, et al.
“Analysis and numerical design of Resistive AC-Coupled Silicon Detectors (RSD) for 4D particle tracking”,
https://doi.org/10.1016/j.nima.2020.163479
Nucl. Instrum. Meth. A 959 (2020), 163479
Lomnitz:2018juf
M. Lomnitz and S. Klein,
“Exclusive vector meson production at an electron-ion collider”,
https://doi.org/10.1103/PhysRevC.99.015203
Phys. Rev. C 99 (2019) no.1, 015203
Klest:2024xlm
H. Klest,
“Calorimetry for the ePIC Experiment”,
PoS DIS2024 (2024), 276
Paul:2023okc
S. J. Paul and M. Arratia,
“Leveraging staggered tessellation for enhanced spatial resolution in high-granularity calorimeters”,
https://doi.org/10.1016/j.nima.2023.169044
Nucl. Instrum. Meth. A 1060 (2024), 169044
Tu:2020ymk
Z. Tu, A. Jentsch, M. Baker, L. Zheng, J. H. Lee, R. Venugopalan, O. Hen, D. Higinbotham, E. C. Aschenauer and T. Ullrich,
“Probing short-range correlations in the deuteron via incoherent diffractive J/ψ production with spectator tagging at the EIC”,
https://doi.org/10.1016/j.physletb.2020.135877
Phys. Lett. B 811 (2020), 135877
Accardi:2012qut
A. Accardi, J. L. Albacete, M. Anselmino, N. Armesto, E. C. Aschenauer, A. Bacchetta, D. Boer, W. K. Brooks, T. Burton and N. B. Chang, et al.
“Electron Ion Collider: The Next QCD Frontier: Understanding the glue that binds us all”,
https://doi.org/10.1140/epja/i2016-16268-9
Eur. Phys. J. A 52 (2016) no.9, 268
Toll:2012mb
T. Toll and T. Ullrich,
“Exclusive diffractive processes in electron-ion collisions”,
https://doi.org/10.1103/PhysRevC.87.024913
Phys. Rev. C 87 (2013) no.2, 024913
Chang:2021jnu
W. Chang, E. C. Aschenauer, M. D. Baker, A. Jentsch, J. H. Lee, Z. Tu, Z. Yin and L. Zheng,
“Investigation of the background in coherent J/ production at the EIC”,
https://doi.org/10.1103/PhysRevD.104.114030
Phys. Rev. D 104 (2021) no.11, 114030
Burkardt:2002hr
M. Burkardt,
“Impact parameter space interpretation for generalized parton distributions”,
https://doi.org/10.1142/S0217751X03012370
Int. J. Mod. Phys. A 18 (2003), 173-208
Sweger:2023bmx
Z. Sweger, S. Yoo, Z. Zeng, D. Cebra, S. R. Klein, Y. Ji, X. Dong and M. Kim,
“Modeling backward-angle (u-channel) virtual Compton scattering at the future Electron-Ion Collider”,
https://doi.org/10.1103/PhysRevC.108.055205
Phys. Rev. C 108 (2023) no.5, 055205
AbdulKhalek:2021gbh
R. Abdul Khalek, A. Accardi, J. Adam, D. Adamiak, W. Akers, M. Albaladejo, A. Al-bataineh, M. G. Alexeev, F. Ameli and P. Antonioli, et al.
“Science Requirements and Detector Concepts for the Electron-Ion Collider: EIC Yellow Report”,
https://doi.org/10.1016/j.nuclphysa.2022.122447
Nucl. Phys. A 1026 (2022), 122447
|
http://arxiv.org/abs/2409.02698v1 | 20240904133402 | Exact first passage time distribution for second-order reactions in chemical networks | [
"Changqian Rao",
"David Waxman",
"Wei Lin",
"Zhuoyi Song"
] | q-bio.MN | [
"q-bio.MN",
"math.PR"
] |
§ ABSTRACT
The first passage time (FPT) is a generic measure that quantifies when a random quantity reaches a specific state. We consider the FTP distribution in nonlinear stochastic biochemical networks, where obtaining exact solutions of the distribution is a challenging problem. Even simple two-particle collisions cause strong nonlinearities that hinder the theoretical determination of the full FPT distribution. Previous research has either focused on analyzing the mean FPT, which provides limited information about a system, or has considered time-consuming stochastic simulations that do not clearly expose causal relationships between parameters and the system's dynamics. This paper presents the first exact theoretical solution of the full FPT distribution in a broad class of chemical reaction networks involving A + B → C type of second-order reactions. Our exact theoretical method outperforms stochastic simulations, in terms of computational efficiency, and deviates from approximate analytical solutions. Given the prevalence of bimolecular reactions in biochemical systems, our approach has the potential to enhance the understanding of real-world biochemical processes.
A Data Selection Approach for Enhancing Low Resource Machine Translation Using Cross-Lingual Sentence Representations
Nidhi Kowtal *
SCTR's Pune Institute of Computer Technology
Pune, India
[email protected]
Tejas Deshpande *
SCTR's Pune Institute of Computer Technology
Pune, India
[email protected]
Raviraj Joshi
Indian Institute of Technology Madras, India
L3Cube Labs, Pune
Pune, India
[email protected]
September 9, 2024
==============================================================================================================================================================================================================================================================================================================================================
§ INTRODUCTION
The first passage time (FPT) is a fundamental concept that is used to analyze the behavior and dynamics of stochastic processes<cit.>. In biochemical reaction networks, the FPT is a key quantity that refers to the time when a specific event or state first occurs within the network<cit.>.
Examples of the specific event include reaction completion<cit.>, binding or unbinding events<cit.>, protein translocation<cit.>, and state transitions<cit.>. Analyzing the FPT for these events is not just a theoretical exercise; the FPT can provide detailed insight into the timing, efficiency, and reliability of the underlying biochemical reaction system<cit.>. This insight is crucial because it enables not only the understanding of regulatory mechanisms<cit.>, but also their manipulation<cit.>, thereby opening up new possibilities in biochemistry and chemical kinetics.
When counts of the constituent molecules are low, stochasticity and discreteness are inescapable features of chemical kinetics<cit.>. In this context, the stochastic properties of the FPT require a characterization in terms of its probability distribution. However, past theoretical work has primarily focused on deriving the mean FPT and the global FPT<cit.>. This is because the FPT distribution is hard to measure experimentally; the stochastic timings are disguised in cell population measurements due to cell-cell variabilities, and precisions in single-cell measurements are limited by experimental technologies<cit.>. Attention has shifted recently to focus on obtaining the full FPT distribution beyond the mean<cit.>. This distribution provides much more information about the underlying biochemical system. For example, Thorneywork et al. demonstrate that a purely dynamic measurement of the full FPT distribution uncovers that a short-time, power-law regime of the distribution, rather than the mean FPT, reflects the number of intermediate states in an underlying potential energy landscape<cit.>. Therefore, mathematical methodologies for estimating and analyzing the FPT distribution are highly desirable, and complement advances in measurement technologies of the FPT distribution in biochemical systems.
Traditionally, estimating the FPT distributions depends on solving the underlying chemical master equations (CMEs)<cit.>, which are the primary modelling approach of stochastic biochemical systems<cit.>. Typically, CME solutions can be simulated<cit.>, solved approximately<cit.>, or solved exactly<cit.>. Simulation approaches, such as the Gillespie algorithm<cit.>, approximate the solutions of the CME by generating many realizations of the associated Markov process. Such simulations trade time efficiency for accuracy, since producing tens of thousands of sample paths, to expose the underlying distribution, takes time<cit.>.
Approximate methods were developed to solve CMEs, trading off estimation accuracy for time efficiency<cit.>. There are two main classes of such methods: (i) closure schemes<cit.> and (ii) linear mapping methods<cit.>. Under closure schemes, the solutions to CMEs are obtained by approximating higher-order moments of the solution by nonlinear functions of lower-order moments, thereby leading to tractable equations. Under linear mapping, bimolecular reactions are approximated by first-order reactions, allowing simpler, solvable systems. However, even with such approximations, it is a formidable task to elucidate the causal mechanisms regulating FPT distributions, as exhaustive parameter searches are usually unrealistic, and the complex compensatory effects of parameter variations are difficult to clarify<cit.>.
Exact theoretical expressions for the FPT distributions are highly desired, as they may be essential to identifying and quantifying the causal regulatory mechanisms<cit.>. However, this depends on determining time-dependent solutions for the CMEs, which are only known for specific cases<cit.>. To the best of our knowledge, general, exact time-dependent solutions of CMEs are known only for simple reaction systems - with zero and first-order reactions, i.e., linear systems<cit.>.
Time-dependent solutions of CMEs are still challenging for general, non-linear, biochemical reaction networks<cit.>. Even widely presented bimolecular reactions, which constitute one of the simplest core building blocks of a biochemical reaction network, involve highly nonlinear models, making exact CME solutions non-tractable<cit.>. As a result, for biochemical networks with second-order reactions, time-varying CME solutions are only available for specific cases, such as highly simplified systems with only a few states<cit.>, or reversible bimolecular reaction in which the transition matrix has a tridiagonal format<cit.>.
General time-dependent CME solutions are still unknown for systems with bimolecular reactions<cit.>. Matters are even more challenging if there is a large state space and if non-constant reaction rates are involved<cit.>. Generally, second-order reactions can be grouped into two types: A + A → C and A + B → C, and mathematical analysis treats these two types of second-order reactions separately<cit.>. In earlier work, we derived an exact FPT distribution for a general class of biochemical networks involving an A + A → C second-order reaction downstream of two zero/first-order reactions<cit.>. In the present work, we report the first exact distribution of the FPT for a general class of chemical reaction networks with an A + B → C second-order reaction that is downstream of various zero/first-order reactions.
We note that Anderson et al. have recently provided the first exact time-dependent distribution for a general class of reaction networks with higher-order complexes<cit.>. They find that the time-varying solutions of the CMEs will maintain a Poisson-product form. However, this result requires that: (i) the chemical reaction system initially has such a Poisson-product form, and (ii) the system has to satisfy a dynamical and restricted (DR) condition. This DR condition requires that, for any higher-order reactant pairs, the production and consumption are balanced. Generally, the DR condition is very restrictive, and only applies to specific reaction formats with specific kinetic rates and initial conditions. In this work, we present results that are not subject to the DR condition - by allowing the mean of molecular numbers to follow a stochastic process, as described by a stochastic differential equation (SDE)
rather than an ordinary differential equation (ODE).
Our exact analytical results are not only novel and efficient, but also highly practical. They are much more time efficient than stochastic simulations, and are much more accurate than traditional approximation methods, such as linear mapping methods (LMA)<cit.>. As we demonstrate, our methods have applications in diverse chemical reaction networks from different areas. These include gene regulatory networks<cit.> and multi-step transition models<cit.>. Our results thus have broad applicability and real-world significance.
§ PROBLEM FORMULATION
We consider a biochemical system with N chemical species S = [S_1,S_2,...,S_N]^⊤ whose molecules can undergo M+1 chemical reactions, R_m, with m=0,1,2,3,....,M. There are M zero or first-order reactions followed by one second-order reaction, which is of the type A + B → C.
We label the second-order reaction by m=0 and the other reactions (zero- or first-order) by m=1,2,...M. The molecules A and B are the reactants of the second-order reaction, which can be any two different molecular species. We denote these two reactants as S_1 and S_2 without loss of generality.
We shall use the following general notation for such a biochemical system:
y_m·S → y'_m·S (rate a_m(t)),
S_1 + S_2 → * (rate a_0(t)),
where m= 1,…,M label the m'th zero or first-order reaction, while y_m and y_m' are the stoichiometric coefficient vectors of the m'th reaction.
The reaction rate 'constants' are written a_m(t) and a_0(t), and we have assumed these vary with the time, t. The resulting analysis thus includes a broad class of non-linear time-varying biochemical systems with second-order reactions.
Because, in the above biochemical system, the first M reactions are zero or first-order, we require that the sum of the absolute values of all components of y_m, and separately of y_m', should not exceed one; we write these conditions as ‖y_m‖_1≤ 1 and ‖y'_m‖_1≤ 1, respectively.
To begin a derivation of the exact FPT distribution of the second-order reaction, we first define it mathematically. We adopt a standard approach by adding an additional species S_0, the number of which counts the occurrences of the second-order reaction<cit.>. We denote the number of S_0 molecules by x_0 ≡ x_0(t), then x_0 is a non-decreasing function of t. Thus, we modify the system in (<ref>) to:
y_m·S → y'_m·S (rate a_m(t)),
S_1 + S_2 → S_0 (rate a_0(t)).
If at any time t the second order reaction has not occurred, we have x_0(t)=0, and
equivalently the FPT exceeds t. Therefore, the complementary cumulative probability distribution of the FPT can be represented as:
P(FPT>t) = P(x_0(t) = 0).
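For intuition, this identity can be estimated directly by stochastic simulation. The following is a minimal sketch (not the authors' code) of such an estimate for a simple instance of system (<ref>) analysed later in the paper, namely production and degradation of S_1 and S_2 feeding the bimolecular reaction S_1 + S_2 → S_0; the rate values, time horizon and number of runs are placeholder assumptions.

import numpy as np

rng = np.random.default_rng(0)
a1, a2, a3, a4, a0 = 1.0, 0.1, 1.0, 0.1, 0.01   # placeholder rate constants

def sample_fpt(t_max=200.0):
    """Return the first time S1 + S2 -> S0 fires, or np.inf if it does not fire before t_max."""
    t, x1, x2 = 0.0, 0, 0
    while t < t_max:
        props = np.array([a1, a2 * x1, a3, a4 * x2, a0 * x1 * x2])   # reaction propensities
        total = props.sum()
        t += rng.exponential(1.0 / total)
        r = rng.choice(5, p=props / total)
        if r == 0:
            x1 += 1
        elif r == 1:
            x1 -= 1
        elif r == 2:
            x2 += 1
        elif r == 3:
            x2 -= 1
        else:
            return t            # the bimolecular channel fired: this is the FPT
    return np.inf               # censored: no firing on the simulated horizon

samples = np.array([sample_fpt() for _ in range(5000)])
t_grid = np.linspace(0.0, 50.0, 200)
survival_ssa = np.array([(samples > t).mean() for t in t_grid])   # empirical P(FPT > t)

Tens of thousands of such runs are typically needed to resolve the tails of the distribution, which is exactly the computational cost that the exact results derived below avoid.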
§ RESULT
§.§ Analytical representation for the auxiliary chemical master equation
For the system given in (<ref>), we adopt the following notation:
* 𝐗=[[ x_0; 𝐱 ]] denotes the complete state vector of the system, representing the numbers of different species of S^*=[[ S_0; S ]]. The quantity 𝐱=[x_1,⋯,x_N]^⊤ denotes the numbers of different species of S.
* Y_m=[y_m,0, y_m] and Y'_m=[y'_m,0, y'_m] are the stoichiometric coefficient vectors of the m'th reaction in system (<ref>), and y_m=[y_m,1,y_m,2,…,y_m,N], y'_m=[y'_m,1,y'_m,2,…,y'_m,N] denote the parts of the stoichiometric coefficient vectors that exclude the entries y_m,0 and y'_m,0 for the auxiliary species S_0.
* a_0(t) denotes the rate `constant' at time t of the second-order reaction,
while a_m(t) (m=1,…,M) denotes the rate `constant' at time t of the m'th reaction
(which is either a zero or first-order reaction).
The CMEs for the system given in (<ref>) correspond to a differential equation for the probability distribution of X at time t<cit.>:
d P(X, t )/ d t =∑_k=0^M[ P(X-v_k, t ) c_k(X-v_k,t)- P(X, t ) c_k(X,t)],
where v_k is the transition vector for the k'th reaction, and c_k(X,t) is the propensity function for the k'th reaction, i.e., the probability that the k'th reaction occurs in state X, at time t. The
quantity c_k(X,t) equals a_k(t)X^Y_k, where, for any vectors with d components, a vector to the power of a
vector is defined by 𝐮^𝐯 def=∏^d_i=1u_i^v_i and we adopt the convention that 0^0=1. Therefore, for k=0, the reaction is given in (<ref>), thus, c_0(X,t)=a_0(t)x_1x_2. For k=1,…, M, the reactions are first or lower-order, and thus c_k(X,t), k=1,…, M are linear functions of states.
We next present a theorem that gives the exact solution of (<ref>)
in terms of the following variables/notation:
* λ(t)=[λ_1(t),…,λ_N(t)]^⊤ is a column vector containing the mean numbers of all the species in S at any time t (t≥0), and Λ(t)=[λ_0(t); λ(t)] is a column vector containing the mean numbers of all the species in S^* at any time t (t≥0).
* Λ(0)=[λ_0(0); λ(0)] denotes the initial mean number of each species.
* ℳ_1 denotes an N× N matrix with components η_i,j^1, and ℳ_2 denotes an N×1 vector with components η_i^2, where the components are given by
η_i,j^1=∑_m:y_m=e_i a_m(t)(y_m,j^'-y_m,j),   η_i^2=∑_m:y_m=0 a_m(t)(y_m,i^'-y_m,i),
in which e_i is a vector where only element i is 1 and all other elements are 0, while y_m,i^' and y_m,i are the i'th components of y_m^' and y_m, respectively,
* 𝒩_1 and 𝒩_2 denote N× N matrices with all elements zero except the upper-left 2×2 block, with
𝒩_1=( [ a_S 0 ⋯; 0 -a_S ⋯; ⋮ ⋮ ⋱ ] ),   𝒩_2=( [ ia_S 0 ⋯; 0 ia_S ⋯; ⋮ ⋮ ⋱ ] ),
where a_S=√(2a_0)/2.
The theorem reads as follows.
For the system in (<ref>), providing:
* the variables λ and λ_S obey the stochastic
differential equations (SDEs):
dλ =(ℳ_1λ+ℳ_2) dt + 𝒩_1λ dW^1_t+𝒩_2λ dW^2_t,
dλ_S = ( a_S λ_1-a_S λ_2 ) dW^1_t + ( ia_S λ_1+ ia_S λ_2 ) dW^2_t,
subject to λ_S(0)=0, and some initial condition λ(0),
* the new variable λ_0 obeys the SDE:
dλ_0=a_0λ_1λ_2 dt,
subject to λ_0(0)=0,
then if the initial condition of (<ref>) is a distribution of the Poisson-product form:
P(X,0) =Λ(0)^X/X !exp(-λ(0))=∏_i=0^N λ_i(0)^x_i/x_i !exp(-λ_i(0)),
the solution of (<ref>) is
P(X,t)=<Λ(t)^X/X!exp(-Λ(t))exp(λ_0(t)+λ_S(t))>,
where <...> denotes an expectation operation.
Theorem <ref> is proved in Appendix A.
Theorem <ref> gives an exact theoretical expression for the CME solution of a general class of nonlinear biochemical reaction networks with A + B → C type of second-order reactions. Compared to the previous analytical solutions of CMEs by Anderson et al.<cit.>, the key feature is that Theorem <ref> does not require the underlying system to satisfy a dynamically restricted (DR) condition, which considerably constrains the system's structure and parameters. However, the systems we analyze here, as represented by (<ref>), are not subject to the DR condition. This indicates a broader applicability of Theorem <ref> than previous exact theoretical results in the literature.
The key reason why Theorem <ref> can break the DR condition is that we allow the mean of each molecular species (contained in λ and λ_S), to follow stochastic processes, as described by the SDEs of (<ref>)), rather than being smoothly changing continuous variables, with no randomness, whose dynamics are governed by ordinary differential equations (ODEs). As a result, the time-dependent CME solution can be written as the average of a distribution over all λ paths. More details are given in Appendix <ref>.
Another significant aspect of Theorem <ref> is its applicability to time-varying systems where the reaction rates are not constant. This is a particularly complex problem, as solving CMEs with time-varying rates is much more challenging than those with constant reaction rates<cit.>. To date, except for our earlier work that provides an exact FPT distribution for a specific type of nonlinear biochemical reaction network with an A + A → C type of second-order reactions<cit.>, we have not encountered any work that offers exact theoretical results for FPTs with A + B → C type of second-order reactions and time-varying reaction rates. This suggests that Theorem <ref> will lead to new avenues of research in this area.
§.§ Analytical expression for the FPT distribution
One approach to obtain the solution of Eq. (<ref>) is to calculate the expectation by simulation, i.e., by independently solving the SDEs (for λ, λ_S and λ_0) many times and then averaging. This may be very time-consuming. However, Theorem <ref> can lead to an exact
form of the FPT distribution, as given in (<ref>), which can then be numerically evaluated.
The complementary cumulative probability distribution of the FPT
equals the probability of occurrence of states with x_0=0:
P(FPT>t) = ∑_x|x_0=0P(X,t)
=<exp(λ_S)>.
Corollary <ref> follows by substituting P(x,x_0,t) into (<ref>).
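As a brute-force check of this identity, one can integrate the SDEs of Theorem <ref> with an Euler-Maruyama scheme and average exp(λ_S) over many paths. The sketch below (not the authors' implementation) does this for the two-species example network analysed in the next section, started from an empty system so that λ(0)=0; the rates, step size and path count are placeholder assumptions, and the resulting estimate is noisy compared with the moment-based scheme presented next.

import numpy as np

rng = np.random.default_rng(1)
a1, a2, a3, a4, a0 = 1.0, 0.1, 1.0, 0.1, 0.01    # placeholder rate constants
aS = np.sqrt(2 * a0) / 2
dt, n_steps, n_paths = 1e-3, 20_000, 2_000        # horizon t = 20

lam1 = np.zeros(n_paths, dtype=complex)           # lambda_1(0) = 0 (empty initial state)
lam2 = np.zeros(n_paths, dtype=complex)           # lambda_2(0) = 0
lamS = np.zeros(n_paths, dtype=complex)           # lambda_S(0) = 0
surv = []
for _ in range(n_steps):
    dW1 = rng.normal(0.0, np.sqrt(dt), n_paths)
    dW2 = rng.normal(0.0, np.sqrt(dt), n_paths)
    # dlambda = (M1 lambda + M2) dt + N1 lambda dW1 + N2 lambda dW2, with
    # M1 = diag(-a2, -a4), M2 = (a1, a3), N1 = diag(aS, -aS), N2 = diag(i aS, i aS)
    dl1 = (a1 - a2 * lam1) * dt + aS * lam1 * dW1 + 1j * aS * lam1 * dW2
    dl2 = (a3 - a4 * lam2) * dt - aS * lam2 * dW1 + 1j * aS * lam2 * dW2
    dlS = aS * (lam1 - lam2) * dW1 + 1j * aS * (lam1 + lam2) * dW2
    lam1, lam2, lamS = lam1 + dl1, lam2 + dl2, lamS + dlS
    surv.append(np.exp(lamS).mean().real)         # Monte-Carlo estimate of P(FPT > t)
t_grid = dt * np.arange(1, n_steps + 1)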
Equation (<ref>) is a highly compact exact theoretical result. Next, we present numerical methods for calculating it.
§.§ Numerical approximations for the FPT distribution
A direct approach to calculating <exp(λ_S)> in (<ref>) would require solving an infinite set of coupled ODEs (see Appendix <ref> for details). To avoid such an issue, we shall introduce a numerical approximation method based on a Padé approximant of <exp(λ_S)>. This requires the calculation of moments of λ_S(t). As we can show, the n'th moment, <λ_S^n>, has a closed-form expression. This follows because <λ_S^n> is governed by a finite set of coupled ODEs, which results because λ ( (<ref>)) and λ_S ( (<ref>)) follow linear SDEs.
We proceed by constructing a function H(s,t)=<exp(sλ_S(t))>; the quantity
required for Eq. (<ref>) is given by H(1,t).
To approximate H(1,t), we first obtained a Padé approximant of the function H(s,t), denoted as T(s,t), and then set s=1 to calculate T(1,t).
We determined a Padé approximant of H(s,t) using the following procedure<cit.>:
*
Construct the Maclaurin series of H(s,t) in s, truncated at the Ñ'th
term, i.e. T_Ñ(s,t)= ∑_n=0^Ñs^n/n!×∂^n/∂ s^nH(s,t)|_s=0 = ∑_n=0^Ñb_n(t)s^n. The larger Ñ is, the better the Padé approximant is, but at the cost of a more time-consuming algorithm.
* Calculate b_n(t) by determining <λ_S(t)^n>; since ∂^n/∂ s^nH(s,t)|_s=0= <λ_S(t)^n>, b_n(t) is known if <λ_S(t)^n> is calculated. See
below for details on how to calculate <λ_S(t)^n>.
* Find the Padé approximant by determining two polynomials P^*_L̃(s,t) and Q^*_Ñ-L̃(s,t), such that the Maclaurin series of P^*_L̃(s,t)/Q^*_Ñ-L̃(s,t), truncated at the Ñ'th term, equals T_Ñ(s,t).
* Equate the coefficients of P^*_L̃(s,t)/Q^*_Ñ-L̃(s,t) with that of the corresponding polynomial terms of T_Ñ(s,t), then P^*_L̃(s,t) and Q^*_Ñ-L̃(s,t) can be uniquely set by solving Ñ+1 algebraic equations. For example, if P^*_L̃(s,t)=p_0(t)+p_1(t)s+p_2(t)s^2+…+p_L̃(t)s^L̃, and
if
Q^*_Ñ-L̃(s,t) = 1+q_1(t)s+q_2(t)s^2+…+q_Ñ-L̃(t)s^Ñ-L̃,
then the algebraic equations are:
b_0=p_0
b_1+b_0q_1=p_1
b_2+b_1q_1+b_0q_2=p_2
⋮
b_L̃+b_L̃-1q_1+…+b_0q_L̃=p_L̃
b_L̃+1+b_L̃q_1+…+b_0q_L̃-1=0
b_Ñ+b_L̃q_1+…+b_0q_2L̃-Ñ=0 ,
where q_n=0 for n<0 or n>Ñ-L̃.
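A sketch of the last two steps above, assuming the truncated Maclaurin coefficients b_0,…,b_Ñ of H(s,t) at a fixed time t are already available as a numeric array (their computation from the moments <λ_S(t)^n> is described next):

import numpy as np

def pade_at_one(b, L):
    """Given b_0..b_N, build the [L/(N-L)] Pade approximant and evaluate it at s = 1."""
    b = np.asarray(b)
    N = len(b) - 1
    M = N - L                                     # degree of the denominator Q*
    if M == 0:
        q = np.array([1.0])
    else:
        # equations b_n + q_1 b_{n-1} + ... + q_M b_{n-M} = 0 for n = L+1..N (b_k = 0 if k < 0)
        A = np.zeros((M, M), dtype=float)
        for row, n in enumerate(range(L + 1, N + 1)):
            for j in range(1, M + 1):
                if n - j >= 0:
                    A[row, j - 1] = b[n - j]
        q = np.concatenate(([1.0], np.linalg.solve(A, -b[L + 1:N + 1])))
    # p_n = sum_{j=0}^{min(n,M)} q_j * b_{n-j} for n = 0..L
    p = np.array([sum(q[j] * b[n - j] for j in range(min(n, M) + 1)) for n in range(L + 1)])
    return p.sum() / q.sum()                      # P*(1) / Q*(1), the estimate of H(1,t)

For example, with b = [1, 1, 1/2] (a truncated exponential series) and L = 1 this returns 3, the value of the [1/1] Padé approximant of exp(s) at s = 1.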
The key to obtaining a Padé approximant of H(s,t) is the determination of <λ_S(t)^n>. To achieve this, we adopt the following procedure<cit.>.
* Differentiate λ_S(t)^n, using Ito's rule:
d(λ_S^n)=nλ_S^n-1 dλ_S+ n(n-1)/2λ_S^n-2( dλ_S)^2, and substitute dλ_S and ( dλ_S)^2 from the SDEs
in (<ref>). The resulting right-hand side is a polynomial in λ_S^l_0λ^l.
* Keep differentiating all new terms of the form λ_S^l_0λ^l and
substitute dλ_S along with the SDEs in (<ref>), until all terms in the right-hand side of the equations are known.
* Average
the equations for d(λ_S^n) and d(λ_S^l_0λ^l); a set of ODEs are obtained, whose solution yields <λ_S(t)^n>. This set of ODEs is of finite dimension as a consequence of linearity of the SDEs in (<ref>).
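The procedure above can be automated symbolically. The sketch below (an illustration, not the authors' implementation) applies Itô's rule to monomials in λ_S, λ_1, λ_2 for the two-species example network of the next subsection and prints the resulting moment equations d<f>/dt = <Lf>; the rate symbols a_0,…,a_4 are placeholders.

import sympy as sp

a0, a1, a2, a3, a4 = sp.symbols('a0 a1 a2 a3 a4', positive=True)
lam1, lam2, lamS = sp.symbols('lambda1 lambda2 lambdaS')
aS = sp.sqrt(2 * a0) / 2

xs = (lamS, lam1, lam2)
drift = {lamS: 0, lam1: a1 - a2 * lam1, lam2: a3 - a4 * lam2}
g1 = {lamS: aS * (lam1 - lam2), lam1: aS * lam1, lam2: -aS * lam2}                        # dW^1 coefficients
g2 = {lamS: sp.I * aS * (lam1 + lam2), lam1: sp.I * aS * lam1, lam2: sp.I * aS * lam2}    # dW^2 coefficients

def generator(f):
    """Ito generator Lf of the SDE system, so that d<f>/dt = <Lf>."""
    Lf = sum(drift[v] * sp.diff(f, v) for v in xs)
    for v in xs:
        for w in xs:
            Lf += sp.Rational(1, 2) * (g1[v] * g1[w] + g2[v] * g2[w]) * sp.diff(f, v, w)
    return sp.simplify(sp.expand(Lf))

for f in [lamS, lamS**2, lam1, lam2, lam1 * lam2]:
    print(f, '->', generator(f))
# e.g. d<lambda_S^2>/dt = -2 a0 <lambda1 lambda2>, and d<lambda1 lambda2>/dt involves only
# moments of total degree <= 2, so the resulting ODE system closes at any fixed order.

Because the drift is affine and the diffusion is linear in the λ's, each generated right-hand side involves only monomials of the same or lower total degree, which is why the moment system is finite-dimensional (see also Appendix B).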
§.§ Theory validation
We first applied our theory to an exemplar biochemical network that composes four zero/first-order reactions upstream of a second-order reaction:
[ ∅ [ a_2(t) ] a_1(t) S_1
,; ∅ [ a_4(t) ] a_3(t) S_2
,; S_1 + S_2 S_0, ]
We shall determine the FPT distribution via Eq. (<ref>).
We validated our theoretical results by comparing them to stochastic simulation algorithm (SSA)
results, which follow from the Gillespie algorithm<cit.>. To the best of our knowledge, there are no other exact results for comparison (our method is the first that provides an exact FPT distribution for biochemical systems with the network structure of (<ref>)). Thus, to illustrate the method's effectiveness, we compared our method with another recently developed approximation method, namely, the linear-mapping method (LMA)<cit.>, which derives CME solutions of a linearly approximated biochemical network using a mean-field assumption. We chose the LMA as a benchmark because: 1) our method and the LMA method require similar computational time, while moment closure schemes are significantly slower; 2) both our method and the LMA method can be used for biochemical systems with time-varying reaction rates.
Our method is more accurate and robust than the LMA method across a broader range of parameters for the network in (<ref>) with time-constant reaction rates. The LMA and our method can both provide accurate FPT distributions in some parameter ranges (Fig.<ref> A&B). However, as the second-order reaction becomes faster or involves more reactant molecules, our method is the more accurate of the two (Fig.<ref> D vs. Fig.<ref> C and Fig.<ref> F vs. Fig.<ref> E). We used the normalized Wasserstein distance (W-distance) to measure the error between the SSA-simulated and the analytical FPT distributions obtained from the LMA method or our method. The W-distance is normalized against the standard deviation of the SSA-simulated FPT distribution so that it is dimensionless, allowing comparisons across various time scales. The heat maps in Fig. <ref> E&F were fitted from 49 error points, obtained from a 7 × 7 grid of parameter values.
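For reference, a sketch of this error measure, assuming a finite set of SSA-sampled FPTs and an analytical survival curve P(FPT>t) evaluated on a common time grid covering the bulk of the distribution:

import numpy as np

def normalised_w_distance(fpt_samples, t_grid, survival_analytic):
    """1-Wasserstein distance between empirical and analytical FPT distributions, divided by
    the standard deviation of the SSA sample (assumes all sampled FPTs are finite)."""
    s = np.sort(np.asarray(fpt_samples, dtype=float))
    surv_emp = 1.0 - np.searchsorted(s, t_grid, side='right') / len(s)   # empirical P(FPT > t)
    w1 = np.trapz(np.abs(surv_emp - survival_analytic), t_grid)          # W1 = integral |F1 - F2| dt
    return w1 / s.std()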
We further applied our method to the network in (<ref>) with time-varying reaction rates. We restricted consideration to a second-order reaction rate (a_0) that varies with time in a sinusoidal fashion. We tested four cases, where a_0 can be small (Fig. <ref> A-B) or large (Fig. <ref> C-D), and where a_0 can vary quickly (Fig. <ref> A&C) or slowly (Fig. <ref> B&D). Again, with larger a_0 values our method is more accurate (Fig. <ref> C-D), whereas the rate at which a_0 changes has only a slight influence on accuracy (Fig. <ref> A&C vs. Fig. <ref> B&D).
§.§ Applications to real biochemical networks
Our method can be applied to various biochemical systems in different fields. Here, we demonstrate two applications in genetic regulation networks (GRNs)<cit.> and a multistep reaction pathway in Ras activation by a protein, called Son Of Sevenless (SOS)<cit.>.
FPT distribution of a GRN: Gene expression is a fundamental process allowing organisms to create life machinery<cit.>. It is a highly regulated process that involves
the coordinated action of regulatory proteins that bind to specific DNA sequences to activate or repress gene transcription<cit.>. These genetic regulation networks are crucial in cell differentiation, development, and disease<cit.>. Two decades of research have shown that gene-gene or protein-gene interactions are inherently stochastic, leading to cell-cell variations in mRNA and protein levels<cit.>. Thus, analyzing the stochastic timings of the GRNs can be important for understanding cellular phenomena and their function<cit.>.
Here, we analyze a simple GRN system (Fig.<ref> A), which has only one gene that can be in two states, inactive (G) and active (G^*)<cit.>. A protein P, which is generated and degraded dynamically,
can bind to G to activate it, and the activated gene, G^*, may become deactivated after some time. The GRN involves three chemical reactions, including a second-order reaction, as shown in (<ref>). Starting with some substances that can generate protein P and hence G^*, we ask at what time does the protein P activate the gene?
∅ → P (rate a_1),   P → ∅ (rate a_2),
G^* → G,
G + P → G^*.
To answer this question, we must derive the FPT distribution of the second-order reaction, in (<ref>). We applied both the LMA method and our method to solve this problem. Our method is accurate across a wide range of parameters (Fig.<ref> B), whereas the LMA can lead to significant errors when the reaction rate of the second-order reaction is high (Fig.<ref> C).
FPT distribution of a multistep reaction pathway: A second application is a multistep reaction pathway, which is present in many biological and chemical processes, such as enzymatic reactions, the
folding and unfolding of RNA molecules, and the conformational changes of ion channels<cit.>.
We chose a specific multistep reaction pathway<cit.>, the Ras activation by SOS (Fig. <ref>), to illustrate the effectiveness of our method. SOS is a Ras guanine nucleotide exchange factor (GEF) that plays a central role in numerous cellular signaling pathways, such as the epidermal growth factor receptor and T-cell receptor signaling.
SOS is autoinhibited in cytosol and activates only after recruitment to the membrane. There are two phases in the activation (Fig. <ref>A): 1) the release of autoinhibition at the membrane through several membrane-mediated intermediates, by a sequence of first-order chemical reactions, and 2) the binding of Ras at the allosteric site of SOS by a second-order reaction, which enables the activation of Ras<cit.>. What is the time when Ras is activated, i.e., what is the FPT distribution of the second-order reaction of (<ref>)?
A recent study analyzed the FPT distribution of the first phase of Ras activation<cit.>. It revealed how a faster Ras activation timescale is possible by using much slower activating SOS molecules through multistep reactions. The study resolves the odd discrepancy between the long timescale of individual SOS molecules and the much shorter timescale of Ras activation. More importantly, it demonstrates how rare and early SOS activation events dominate the macroscopic reaction dynamics, implying that the full FPT distribution is required for understanding of this phenomenon, rather than just the mean.
Nevertheless, the analysis presented in<cit.> only focused on first-order multistep reactions. In contrast, our method can include the second-order Ras binding phase of the activation process, pushing the analysis one step further. Without loss of generality, we used two steps in the multistep reaction phase of the SOS activation. The system of reactions is represented in (<ref>), where protein S_0 first needs to change to S_1 and then S_* before binding with a protein R to activate Ras. The S_0 and S_1 proteins degrade, with some time constants, and the activated R_* can deactivate to R.
∅ → S_0 (rate a_1),   S_0 → ∅ (rate a_2),
S_1 → ∅,
S_0 → S_1,
S_1 → S_*,
R^* → R,
R + S_* → *.
We applied the LMA, along with our method, to derive the FPT distribution of the binding reaction between S_* and R in (<ref>) across a wide range of parameters. We compared the FPT distributions with the SSA-simulated ones and measured the errors by a scaled W-distance. The W-distance is scaled by the standard deviation of the SSA-simulated FPT distribution, denoted by σ. The scaling makes the W-distance a dimensionless quantity, enabling comparisons across systems with different timescales. The errors are all minimal for all parameter sets tested, with the maximum error at a scale of 10% of σ (Fig. <ref> B). However, the LMA can induce a largely inaccurate FPT distribution with a W-distance of 90% of σ (Fig. <ref> C), nine times the error of our results.
§ DISCUSSION
In this paper we have derived an exact theoretical result for a general class of biochemical networks that involve nonlinearities caused by two-particle collisions. The networks include a second-order reaction of the A+B → C type that is downstream of a series of zero or first-order reactions. We have derived the exact theoretical distribution
for the earliest time the second-order reaction occurs - the first passage time (FPT) distribution of the second-order reaction.
Exact theoretical first passage time distributions have not, previously, been derived for such a system
with a broad range of applications. This is due to a lack of time-dependent solutions
of the distribution for the system's states, as described by a chemical master equation (CME). It is known that solutions of the CME, for general biochemical systems,
only hold for time-constant,
linear systems composed only of zero and first-order chemical reactions. Even the simplest two-particle collisions cause strong nonlinearities that hinder development of theoretical solutions of the CME for a general system. Additionally, there are further difficulties
caused by the time-varying reaction rates.
However, theoretical results for the solution of the CME have advanced to deal with time-varying reaction rates and second-order reactions for specific systems. The most recent theoretical results state that for systems that satisfy a dynamically restricted complex balance condition (DR condition), the solution of the CME maintains a Poisson-product form for all times from the initial time. The solution achieves this by deriving time-varying dynamics of the mean number of each molecular species, as described by a set of deterministic ordinary differential equations (ODEs). However, the DR condition requires that all reactant pairs are balanced between reactants and products. This is strongly restrictive, constraining not only the form of the reactions but also the reaction rates and initial conditions.
By contrast, the results we have presented in this work represent a solution of the CME in its entirety, and are free of the
restrictions of the DR condition. This is achieved by
allowing the mean molecular numbers to be stochastic processes,
rather than having deterministic dynamics; the solution of the CME is obtained by averaging over the stochasticity.
Based on the exact solution of the CME, we derived the full distribution of the FPT, and developed a numerical scheme,
based on Padé approximants, that allows its rapid computation.
Apart from demonstrating the theoretical value of our results, we have shown their practical value.
Our method can accurately compute the FPT distributions of nonlinear systems with A+B → C type of second-order reactions. The results are much more accurate than state-of-the-art linear mapping approximation methods (LMA), which we specifically chose
as a comparison, because it is a recent method that can be applied to time-varying systems, as can our results.
We further demonstrated our method's applicability to various networks, including genetic regulation networks and multi-step reaction pathways. This exhibits the potential of our approach to real-world applications.
While our analysis represents theoretical progress, it is limited to systems that have only one second-order reaction, and what is derived is the FPT of this particular second-order reaction. Therefore, our analysis cannot be applied to biochemical systems that
do not conform to the description given in (<ref>). One such system is the widely-used enzymatic process given by
the Michaelis-Menten reaction, where a first-order reaction follows a reversible second-order reaction.
We plan to extend our analysis to such systems.
Overall, our work is a step forward in the theoretical derivation of exact solutions of full FPT distributions for biochemical networks with second-order reactions. Treating the mean molecular numbers as stochastic, and then averaging the result is, we believe, a new approach to theoretical derivations of the solution of CMEs. Indeed, this approach is the key
that enabled us to derive theoretical solutions of the CME for more general systems, and this may represent
a new way of analyzing more general nonlinear biochemical networks.
plain
§
0.3em section.
§.§.§
0.3em
§ PROOF OF THEOREM 1
In this appendix we give a proof of Theorem 1.
§.§ General procedure to solve the CME, assuming a solution of Poisson-product form
For a general second-order biochemical system, as defined in (<ref>), the CME can be written as (<ref>):
d P(X, t )/ d t =∑_k=0^M[ P(X-v_k, t ) c_k(X-v_k,t)- P(X, t ) c_k(X,t)],
where P(X, t ) is the probability of the system being in state X and time t.
The quantity c_k(X,t) equals the propensity of the k'th reaction at time t, a_k(t)X^Y_k, where a_k is the rate constant and, for any vectors with d components, a vector to the power of a
vector is defined by 𝐮^𝐯 def=∏^d_i=1u_i^v_i, with the convention that 0^0=1.
The quantity v_k is the transition vector of the k'th reaction.
A key question is: if the initial condition of (<ref>) has a Poisson product form:
P(X,0) =∏_i=0^N λ_i(0)^x_i/x_i!exp(-λ_i(0)),
then under what condition does the solution of (<ref>) remain of Poisson product form, i.e.
P(X,t)= ∏_i=0^N λ_i(t)^x_i/x_i!exp(-λ_i(t)),
We define λ(t)=[λ_1(t),…,λ_N(t)]^⊤ to be a column vector containing the mean numbers of all the species in S at any time t (t≥0), and Λ(t)=[λ_0(t); λ(t)] to be a column vector containing the mean numbers of all the species in S^* at any time t (t≥0).
Proceeding by substituting the Poisson product form of (<ref>) into
(<ref>), the left hand side follows by the Chain rule:
d P(X, t )/ d t=P(X,t) ×∑_i=0^N λ_i'(t) (X!/(X-e_i)!Λ(t)^-e_i-1 ),
where e_i is a vector within which only element i is 1, and all other elements are zero.
The right hand side of (<ref>) is given by:
∑_m=0^M[ P(X-v_m, t ) c_m(X-v_m,t)- P(X, t ) c_m(X,t)]
= P(X,t) [ ∑_i=0^N K_i(t) (X!/(X-e_i)!Λ(t)^-e_i-1 ) + ∑_i=0^N∑_j=0^N K_ij(t) ( X!/(X-e_i-e_j)!Λ(t)^-e_i-e_j -1 ) ],
where
K_i(t) =∑_m:Y'_m=e_ia_m(t)Λ^Y_m-∑_m:Y_m=e_ia_m(t)Λ^Y_m,
K_ij(t) =∑_m:Y'_m=e_i+e_ja_m(t)Λ^Y_m-∑_m:Y_m=e_i+e_ja_m(t)Λ^Y_m.
For the solution of (<ref>) to remain Poisson-product form over time, we need to equate the right-hand sides of (<ref>) and (<ref>). The result is:
∑_i=0^N λ_i'(t) (X!/(X-e_i)!Λ(t)^-e_i-1 )
=∑_i=0^N K_i(t) (X!/(X-e_i)!Λ(t)^-e_i-1 )
+∑_i=0^N∑_j=0^N K_ij(t) ( X!/(X-e_i-e_j)!Λ(t)^-e_i-e_j -1 ).
In general, this is a complicated nonlinear problem to solve for the λ_i. In particular, the left-hand side contains only first-order terms of Λ, whereas the right-hand side contains second-order terms of Λ.
§.§ Solution that leads to the dynamical and restricted complex balance (DR) condition
One form of solution of (<ref>) occurs when all of the K_ij coefficients are zero (K_ij = 0 for all
i,j).
Anderson et al. assumed this and found the condition under which this holds (namely the DR condition)<cit.>.
DR condition: For any higher-order reaction complexes z·S, where the stoichiometric vector z satisfies z∈ℕ^N+1_≥ 0 and z_1≥ 2, then the DR condition is:
∑_m:y_m=za_mΛ^z(t)=∑_m:y'_m=za_mΛ^Y_m(t),
where the sum on the left is over those reactions where z·S are reactants, and the right is over those reactions where z·S are products.
When K_ij = 0 for all i and j we can write Eq. (<ref>) as ∑_i=1^N(λ_i^'(t) -K_i(t))( x!/(x-e_i)!λ(t)^-e_i-1) =0 and a solution arises from
λ_i^'= K_i(t).
The dynamics of Λ is then governed by a set of deterministic ODEs:
d/ d tΛ =∑_m=0^Ma_mΛ^Y_m(Y_m'-Y_m).
The DR condition says that any higher-order reactant pairs within the system should be balanced. It is very restrictive for the system to satisfy: it requires that the higher-order reactant pairs remain the same from the reactants to the products and restricts the system's reaction rate coefficients.
Next, we present our method of solution that does not constrain the system by the DR condition.
§.§ General method of solution - without requiring the DR condition
In this work we solve (<ref>) in its entirety, without being restricted by the DR condition. Thus, instead of requiring all K_ij to be zero, as applies under a DR condition, we allow
the sum, that contains the K_ij, to be present
in (<ref>). We incorporate the effects of the sum by 'upgrading' Λ to be a
stochastic process, which we shall subsequently average over.
To proceed, we use Ito's rule:
df(λ)=f'(λ) dλ+1/2f”(λ)( dλ)^2,
when we differentiate the Poisson-product form of (<ref>), and obtain
d P(X, t )=P(X,t) ×[∑_i=0^N (X!/(X-e_i)!Λ(t)^-e_i-1 ) dλ_i(t) + 1/2∑_i=0^N∑_j=0^N (X!/(X-e_i)!Λ(t)^-e_i-1 )(X!/(X-e_j)!Λ(t)^-e_j-1 ) dλ_i(t) dλ_j(t)].
§.§.§ Determination of the stochastic process, Λ
Ultimately, we will equate the averaged right-hand side of (<ref>) with
the right hand side of (<ref>) to determine an SDE that governs the dynamics of the stochastic process, Λ.
We proceed as follows:
* We assume the following form of the SDE that governs the dynamics of Λ
(for reasons that will be made clear, shortly):
d([ λ; λ_0 ])=([ b(Λ,t); b_0(Λ,t) ]) dt + ([ σ(Λ,t); σ_0(Λ,t) ]) dW_t,
where b(Λ,t), b_0(Λ,t) are drift terms, while σ(Λ,t) and σ_0(Λ,t) are diffusion terms. In the above equation: b(Λ,t) is an N component column vector; b_0(Λ,t) is a scalar; W_t is a 2 component column vector
of independent Wiener processes; σ(Λ,t) is an N × 2 matrix;
σ_0(Λ,t) is a row vector with 2 elements.
* We determine b(Λ,t) and b_0(Λ,t) by equating them to K_i(t) in (<ref>), as they are both determined by the zero- and first-order reactions. We obtain:
b(Λ,t)=∑_m: ‖Y_m'‖_1≤ 1, ‖Y_m‖_1≤ 1 a_mλ^y_m(y_m'-y_m)=ℳ_1λ+ℳ_2,
b_0(Λ,t)=a_0λ^y_0(y'_0,0-y_0,0)=a_0λ_1λ_2.
Here ℳ_1 is an N× N matrix with components
η_i,j^1, ℳ_2 is an N component column vector with
components η_i^2, while y_m,i^' and y_m,i are the i'th
component of y_m^' and y_m, respectively, and
η_i,j^1=∑_m:y_m=e_i a_m(t)(y_m,j^'-y_m,j),
η_i^2=∑_m:y_m=0 a_m(t)(y_m,i^'-y_m,i).
* We now explicitly write W_t as W_t=[W^1_t,W^2_t]^⊤, then
σ(Λ,t) · dW_t can then be written as:
σ(Λ,t) · dW_t = σ_1(Λ,t) dW^1_t+ σ_2(Λ,t) dW^2_t
where σ_1(Λ,t) and σ_2(Λ,t) are N component column vectors.
We adopted this particular format because the only second-order reactions that occur
within the system are of the type
A + B→ C, with two different reactants. The form of (<ref>) ensures that K_ij in (<ref>) satisfies:
* K_ij = 0 if i>2 or j>2, which means that species S_k for k=3,4… are not involved in the second-order reaction.
* K_ij = 0 if i=0 or j=0, which means that S_0 is not a reactant in any one reaction. It follows that
σ_0(Λ,t)=0.
* K_ij = 0 if i=j, which means that there are no square terms of the form x!/(x-2e_i)!λ(t)^-2e_i in the right-hand side of (<ref>).
When σ_1(Λ,t) and σ_2(Λ,t) satisfy certain relationships, the non-zero square terms of λ can be eliminated in the right-hand side of (<ref>) (see below).
* K_12 = K_21 = -a_0/2λ_1λ_2, which means that the matrix K = {K_ij} is symmetric, and its components are all products of different components of Λ. To satisfy this constraint, σ_1(Λ,t) and σ_2(Λ,t) are set as linear functions of λ:
σ_1(Λ,t)= 𝒩_1λ
σ_2(Λ,t)=𝒩_2λ,
where 𝒩_1 and 𝒩_2 are N × N square matrices.
* We determine the matrices 𝒩_1 and 𝒩_2, by equating terms of the form X!/(X-e_i-e_j)!Λ(t)^-e_i-e_j in (<ref>) with the K_ij terms in (<ref>). This leads to an algebraic equation, the solution of which sets the forms of 𝒩_1 and 𝒩_2:
1/2(σ_1(Λ,t)·σ_1(Λ,t)^⊤+σ_2(Λ,t)·σ_2(Λ,t)^⊤)=K,
where K is the matrix with elements K_ij (i≥ 1, j≥ 1) in (<ref>). 𝒩_1 and 𝒩_2 are determined as that shown in (<ref>).
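As a quick symbolic sanity check (a sketch, not part of the proof), one can verify that this choice of 𝒩_1 and 𝒩_2 indeed reproduces K_11=K_22=0 and K_12=K_21=-(a_0/2)λ_1λ_2:

import sympy as sp

a0, lam1, lam2 = sp.symbols('a0 lambda1 lambda2', positive=True)
aS = sp.sqrt(2 * a0) / 2
lam = sp.Matrix([lam1, lam2])
N1 = sp.Matrix([[aS, 0], [0, -aS]])
N2 = sp.Matrix([[sp.I * aS, 0], [0, sp.I * aS]])
sigma1, sigma2 = N1 * lam, N2 * lam
K = sp.simplify(sp.Rational(1, 2) * (sigma1 * sigma1.T + sigma2 * sigma2.T))
print(K)   # expected: Matrix([[0, -a0*lambda1*lambda2/2], [-a0*lambda1*lambda2/2, 0]])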
At this point, the dynamics of λ_0 and λ are determined by:
dλ=(ℳ_1λ+ℳ_2) dt + 𝒩_1λ dW^1_t+𝒩_2λ dW^2_t,
dλ_0=a_0λ_1λ_2 dt.
However, on substituting (<ref>) into (<ref>), terms are generated in the right-hand side of (<ref>) but are absent in (<ref>). These terms are λ_1λ_2x!/(x-e_1)!λ(t)^-e_1 and λ_1λ_2x!/(x-e_2)!λ(t)^-e_2.
To eliminate these terms in the right-hand side of (<ref>), we modified the Poisson-product form to P̃(X,t),
as given by
P̃(X,t)=Λ(t)^X/X!exp(-Λ(t))exp(λ_0(t)+λ_S(t)),
where λ_S(t) is a stochastic process to be determined.
Note that P̃(X,t) is not normalized to unity, and thus is not a probability distribution.
However, we will prove later that its average, <P̃(X,t)>, is a distribution; it is the solution of the CME
and obeys the Poisson-product form initial condition of (<ref>).
§.§.§ Determination of λ_S
Changing the Poisson product form from P(X,t) to P̃(X,t) does not change the format of (<ref>):
∑_m=0^M[ P̃(X-v_m, t ) c_m(X-v_m,t)-P̃(X, t ) c_m(X,t)] dt
=P̃(X,t) [ ∑_i=0^N K_i(t) (X!/(X-e_i)!Λ(t)^-e_i-1 ) + ∑_i=0^N∑_j=0^N K_ij(t) ( X!/(X-e_i-e_j)!Λ(t)^-e_i-e_j -1 ) ] dt
where K_i(t) is the drift term of dλ_i.
However, when we differentiate P̃(X,t) using Ito's rule, (<ref>) is changed to a more complex format, as in (<ref>).
Setting I_e=exp(-Λ) and I_p=Λ^X/X! allows us to write
P̃(X,t)= I_eI_pexp(λ_0(t)+λ_S(t)). The quantity dP̃(X, t ) then becomes:
dP̃(X, t )=[I_p d(I_eexp(λ_0(t)+λ_S(t)))+I_eexp(λ_0(t)+λ_S(t)) dI_p]+exp(λ_0(t)) dI_p d(I_eexp(λ_S(t)))
=P̃(X,t) ×[∑_i=0^N K_i(t)(X!/(X-e_i)!Λ(t)^-e_i-1) dt
+∑_i=0^N∑_j=0^N K_ij(t) ( X!/(X-e_i-e_j)!Λ(t)^-e_i-e_j -1 ) dt
+a_S[x_1-x_2] dW^1_t+ ia_S[x_1+x_2] dW^2_t
-( a_S λ_1-a_S λ_2 ) dW^1_t - ( ia_S λ_1+ ia_S λ_2 ) dW^2_t
+ dλ_S+1/2( dλ_S)^2+∑_i=1^2 dλ_i dλ_S-a_0λ_1λ_2 dt]
+exp(λ_0(t)) dI_p d(I_eexp(λ_S(t))),
where
dI_p d(I_eexp(λ_S(t)))
=P̃(X,t) ×[a_0 (λ_2x_1+λ_1x_2 ) dt
+1/2(x_1/λ_1 dλ_1(t) dλ_S(t)+x_2/λ_2 dλ_2(t) dλ_S(t) ) ].
We then determine λ_S by equating the coefficients of the dt terms on the right-hand sides of (<ref>) and (<ref>). This leads to constraints that λ_S has to satisfy:
* dλ_S is a function of dW_t, as (<ref>) and (<ref>) already have
corresponding d t terms
* setting dλ_S = ξ_1(λ,t) dW^1_t + ξ_2(λ,t) dW^2_t
allows dI_p d(I_eexp(λ_S(t))) to be zero, i.e., the right hand side of (<ref>) is zero:
a_0 (λ_2x_1+λ_1x_2 ) dt
+1/2(x_1/λ_1 dλ_1(t) dλ_S(t)+x_2/λ_2 dλ_2(t) dλ_S(t) ) =0
* dλ_S should satisfy the quadratic relation:
1/2( dλ_S)^2+∑_i=1^2 dλ_i dλ_S-a_0λ_1λ_2 dt=0.
(<ref>) and (<ref>) determine that the solution of dλ_S is the sum of the diffusion terms of dλ_1 and dλ_2:
dλ_S = ( a_S λ_1-a_S λ_2 ) dW^1_t + ( ia_S λ_1+ ia_S λ_2 ) dW^2_t.
The introduction of λ_S(t) allows the elimination of the cross-product terms in (<ref>),
corresponding to terms of the form λ_1(t)λ_2(t)X!/(X-e_1)!Λ(t)^-e_1 and λ_1(t)λ_2(t)X!/(X-e_2)!Λ(t)^-e_2 in (<ref>).
We now substitute λ_S into (<ref>) to obtain
dP̃(X, t )=P̃(X,t) ×[∑_i=0^N K_i(t)(X!/(X-e_i)!Λ(t)^-e_i-1) dt
+∑_i=0^N∑_j=0^N K_ij(t) ( X!/(X-e_i-e_j)!Λ(t)^-e_i-e_j -1 ) dt
+a_S[x_1-x_2] dW^1_t+ ia_S[x_1+x_2] dW^2_t]
To prove Theorem <ref>, we needed to equate the right-hand sides of (<ref>) and (<ref>), which means that the stochastic diffusion terms involving dW_t^1 and dW_t^2 should be eliminated. Next, we prove that the average of P̃(X,t) eliminates these stochastic terms, and <P̃(X,t)> satisfies (<ref>), and hence is the solution of the CME in (<ref>).
§.§.§ Eliminating stochastic diffusion terms in (<ref>)
Averaging both sides of (<ref>) yields
d<P̃(X, t )> =<P̃(X,t) ×[∑_i=0^N K_i(t)(X!/(X-e_i)!Λ(t)^-e_i-1) dt
+∑_i=0^N∑_j=0^N K_ij(t) ( X!/(X-e_i-e_j)!Λ(t)^-e_i-e_j -1 ) dt ]>
+<a_SP̃(X,t)[x_1-x_2] dW^1_t+ ia_SP̃(X,t)[x_1+x_2] dW^2_t>
Set σ̃_1(X,λ,t) = a_SP̃(X,t)[x_1-x_2] and σ̃_2(X,λ,t) = ia_SP̃(X,t)[x_1+x_2], we then prove that: <σ̃_1(X,λ,τ) dW^1_τ+σ̃_2(X,λ,τ) dW^2_τ> = 0 (see Proposition <ref>), and hence the stochastic diffusion terms in (<ref>) are eliminated.
Define t_n to be a sequence of stopping times t_n= inf{τ | λ(τ)≥ n} for n=1,2,….
Then, for any t∈[0,t_n], the following expectation is zero:
<σ̃_1(X,λ,t) dW^1_t+σ̃_2(X,λ,t) dW^2_t>=0,
Proof of Proposition <ref>:
From the definition of t_n, when t∈[0,t_n], λ(τ) is bounded, and hence |σ̃_1(X,λ,t)| and |σ̃_2(X,λ,t)| are bounded.
Thus, Z(t)=∫^t_0σ̃_1(X,λ,τ) dW^1_τ+σ̃_2(X,λ,τ) dW^2_τ is a martingale, and according to the properties of martingales, E(Z(t)) = Z(0)= 0, which is equivalent to (<ref>).
Proposition <ref> guarantees that the stochastic diffusion term is eliminated when t<t_n. Next, we prove that <P̃(X, t )> is the solution of the CME in (<ref>).
§.§.§ <P̃(X, t )> is the solution of the CME in (<ref>)
We first prove that <P̃(X, t )> is the solution of the CME in (<ref>) when t<t_n, then we generalise the result as t_n→∞.
When t< t_n, (<ref>) holds, and <P̃(X, t) > is the solution of the CME (<ref>), because (<ref>) becomes:
d<P̃(X,t) > =
<P̃(X,t) ×[∑_i=0^N K_i(t)(X!/(X-e_i)!Λ(t)^-e_i-1) dt
+∑_i=0^N∑_j=0^N K_ij(t) ( X!/(X-e_i-e_j)!Λ(t)^-e_i-e_j -1 ) dt ]>
=∑_m=0^M[ <P̃(X-v_m, t )> c_m(X-v_m,t)-<P̃(X, t )> c_m(X,t)] dt
Define t∧ t_n=min(t,t_n), and denote 𝒫(X,t) as the solution of the CME in (<ref>). (<ref>) means that 𝒫(X,t∧ t_n)=<P̃(X, t∧ t_n ) > at time t∧ t_n.
Next, we prove that when n→∞, we have the limit lim_n→∞ t∧ t_n =t, leading to
<P̃(X,t)>=lim_n→∞<P̃(X,t∧ t_n)>=lim_n→∞𝒫(X,t∧ t_n)=𝒫(X,t),
where the last equal sign stems from 𝒫(X,t), as the solution of CME (<ref>), being a continuous function.
By probability measure, the limit lim_n→∞t∧ t_n=t is equivalent to
lim_n→∞ P(t_n≤ t)→ 0.
From the definition t_n= inf{t | λ(t)≥ n}, (<ref>) is equivalent to
lim_n→∞ P(max_0≤τ≤ tλ(τ)≥ n)→ 0.
Next, we use Lemma <ref> to prove (<ref>).
The linear SDE in (<ref>), that governs λ, satisfies the condition of Lemma <ref>, i.e., (<ref>), because:
1) (<ref>) can be rewritten as
dλ=b(Λ,t) dt + σ(Λ,t) dW_t,
where b is an N column vector of linear functions defined in (<ref>), and σ is an
N× 2 matrix of linear functions defined in (<ref>).
2) Linearity of b and σ leads to b(t,y)^2+σ(t,y)^2 being quadratic functions of the vector
y∈ℝ^N. Thus, (<ref>) holds, which means
<max_0≤τ≤ tλ(τ)^2> ≤ K(λ(0)) e^Ct,
where C is a constant, and K(λ(0)) is a constant that depends on λ(0).
Then by Chebyshev's inequality
P(max_0≤τ≤ tλ(τ)≥ n)≤ P(|max_0≤τ≤ tλ(τ)- μ|≥ n-μ)
≤<max_0≤τ≤ tλ(t)^2>/(n-μ)^2≤K(λ(0)) e^Ct/(n-μ)^2,
where μ=<λ(t_m)> and λ(t_m)=max_0≤τ≤ tλ(τ).
Because
μ^2=<λ(t_m)>^2≤<λ(t_m)^2>≤ K(λ(0)) e^Ct,
we have the limit
lim_n→∞K(λ(0)) e^Ct/(n-μ)^2=0.
Thus, (<ref>) leads to (<ref>)
(Problem 5.3.15 of<cit.>)
Suppose b_i(t,y) and σ_ij(t,y); 1≤ i ≤ d, 1≤ j ≤ r, are progressively measurable functionals from [0,∞)× C[0,∞)^d into ℝ satisfying
b(t,y)^2+σ(t,y)^2≤ K( 1+y^2);
∀ 0≤ t< ∞, y∈ℝ^d,
where K is a positive constant. If (X,W), (Ω,ℱ,P), {ℱ_t} is a weak solution to the SDE
dX=b(t,X) dt+σ(t,X) dW_t,
with <X_0^2m><∞ for some m>1, then for any finite time T>0, we have
<max_0≤ s≤ tX_t^2m>≤ C( 1+<X_0^2m>) e^Ct;0≤ t ≤ T,
where C is a positive constant depend only on m, T, K and d.
§ WHY CALCULATE <λ_S(t)^n> INSTEAD OF <exp(λ_S(t))>?
In this appendix we answer why we calculated <λ_S(t)^n> instead of <exp(λ_S)>.
In the main text, we calculated <λ_S(t)^n> because it can be determined from solution of a finite number of ODEs,
whereas calculating <exp(λ_S)> involves solving an infinite-dimensional system of coupled ODEs.
The details are as follows.
A natural idea to calculate <exp(λ_S)> is to derive its governing ODE by differentiating exp(λ_S):
dexp(λ_S)=exp(λ_S) dλ_S+1/2exp(λ_S)( dλ_S)^2.
Substituting dλ_S and dλ_S^2 in (<ref>) with the SDEs in (<ref>), some polynomial functions of λ show up in the right-hand side of (<ref>). The new polynomial functions have a general format of λ^lexp(λ_S), where l=[l_1,…,l_N] is a vector of positive integer numbers.
To determine terms like λ^lexp(λ_S), we kept deriving their ODEs by differentiating them. This process results in terms with higher exponents of λ, i.e. λ^l'exp(λ_S), where l'_1>l_1:
d(exp(λ_S)λ^l)=exp(λ_S)λ^l dλ_S+l_1exp(λ_S)λ^l-e_1 dλ_1+l_2exp(λ_S)λ^l-e_2 dλ_2+…
+1/2exp(λ_S)λ^l( dλ_S)^2+l_1(l_1-1)/2exp(λ_S)λ^l-2e_1( dλ_1)^2+…
+ l_1exp(λ_S)λ^l-e_1 dλ_S dλ_1+l_2exp(λ_S)λ^l-e_2 dλ_S dλ_2+l_1l_2exp(λ_S)λ^l-e_2-e_2 dλ_1 dλ_2+…,
where e_j (j=1,2,…,N) is a N dimensional vector, whose j'th component equals one, and all other components equal zero.
It can be seen from the right-hand side of (<ref>), that the differentiation process always results
in nonlinear terms with higher exponents of λ, whose expectations are not zero, e.g.
the terms like exp(λ_S)λ^l dλ_S^2.
As a result, the expectations of (<ref>) and (<ref>) compose a set of non-closed-form equations, and directly solving <exp(λ_S)> is dogged by infinite-dimensional ODEs.
To avoid solving an infinite number of ODEs for <exp(λ_S)>, we approximated it by calculating <λ_S(t)^n>. This is achieved by introducing a new function H(s,t) = <exp(sλ_S)>, and one may approximate H(s,t) by its Taylor expansion:
H(s,t)≈ T_Ñ(s,t)=∑_n=0^Ñ s^n/n!×∂^n/∂ s^nH(s,t)|_s=0
=∑_n=0^Ñ s^n/n! <λ_S(t)^n>.
(<ref>) means that we only need to determine <λ_S(t)^n> for every n to determine H(s,t), and H(1,t) is the required FPT distribution.
For practical reasons, we can calculate <λ_S(t)^n>, with n less than a predefined integer, i.e. n=0,1,2…,Ñ (Ñ is the highest order of calculated moments). Since exp(sλ_S(t)) is a transcendental function, we used a Padé approximant to control
the approximation errors. We calculated the Padé approximant via the extended Euclidean algorithm.
It turns out that calculating <λ_S(t)^n> is much easier than calculating <exp(λ_S)>, because the linearity of the SDEs in (<ref>) guarantees that <λ_S(t)^n> is governed by a set of finite-dimensional ODEs.
We derived the ODEs governing <λ_S(t)^n> through a similar process as that of <exp(λ_S)>. We
started differentiating <λ_S(t)^n> by Ito's rule:
d(λ_S^n) =nλ_S^n-1 dλ_S+ n(n-1)/2λ_S^n-2( dλ_S)^2.
Substituting dλ_S and dλ_S^2 in (<ref>) with the SDEs in (<ref>), the right-hand side of (<ref>) would only be polynomial functions of λ_S^l_0λ^l. The linearities of the SDEs guarantee that
dλ_S can be represented by polynomial functions of λ with orders no more than one. Likewise, dλ_S^2 can be represented with polynomial functions of λ with orders no more than 2. As a result, the polynomial order of λ_S^l_0λ^l is not higher than n, i.e. l_0+l_1 ≤ n.
Furthermore, the differentiation of λ_S^l_0λ^l does not increase its polynomial order. By Itô's rule,
d(λ_S^l_0λ^l)=l_0λ_S^l_0-1λ^l dλ_S+l_1λ_S^l_0λ^l-e_1 dλ_1+l_2λ_S^l_0λ^l-e_2 dλ_2…
+l_0(l_0-1)/2λ_S^l_0-2λ^l( dλ_S)^2+l_1(l_1-1)/2λ_S^l_0λ^l-2e_1( dλ_1)^2
+ l_0l_1λ_S^l_0-1λ^l-e_1 dλ_S dλ_1+l_0l_2λ_S^l_0-1λ^l-e_2 dλ_S dλ_2+l_1l_2λ_S^l_0λ^l-e_2-e_2 dλ_1 dλ_2…,
where e_j (j=1,2,…,N) is a N dimensional vector, whose j'th component equals one, and all other components equal zero. Similarly, because of the linearity of (<ref>), the polynomial orders of λ_S and λ of all terms in (<ref>) do not increase.
Therefore, (<ref>) and (<ref>) compose a set of closed-form equations, and <λ_S(t)^n> can be solved by a set of finite-dimensional ODEs.
|
http://arxiv.org/abs/2409.02609v1 | 20240904105011 | Proportionality for Constrained Public Decisions | [
"Julian Chingoma",
"Umberto Grandi",
"Arianna Novaro"
] | cs.GT | [
"cs.GT"
] |
Proportionality for Constrained Public Decisions
Julian Chingoma
ILLC
University of Amsterdam
Amsterdam, The Netherlands
Umberto Grandi
IRIT
Univeristé Toulouse Capitole
Toulouse, France
Arianna Novaro
CES
Université Paris 1 Panthéon-Sorbonne
Paris, France
July 2024
=================================================================================================================================================================================================================================================================
§ ABSTRACT
We study situations where a group of voters need to take a collective decision over a number of public issues, with the goal of getting a result that reflects the voters’ opinions in a proportional manner. Our focus is on interconnected public decisions, where the decision on one or more issues has repercussions on the acceptance or rejection of other public issues in the agenda. We show that the adaptations of classical justified-representation axioms to this enriched setting are always satisfiable only for restricted classes of public agendas. However, the use of suitably adapted well-known decision rules on a class of quite expressive constraints, yields proportionality guarantees that match these justified-representation properties in an approximate sense. We also identify another path to achieving proportionality via an adaptation of the notion of priceability.
§ INTRODUCTION
In many situations of collective decision-making, a group of voters is presented with a set of issues for which they are expected to make a binary choice: typically, deciding to either accept or reject each issue.
This setting has recently been studied under the name of public decisions <cit.> and it is of particular interest due to the real-world scenarios captured by it. Notable examples include: instances of multiple referenda where the public vote directly on the resolution of political issues; group activity planning, where a group of individuals are to choose, as a collective, the activities that the entire group shall partake in; and committee elections, where a set of candidates are in the running for multiple positions on a committee and a group of decision-makers must select the committee members <cit.>.
Given the collective nature of the problem, one of the natural desiderata is that the outcome represents a fair compromise for the participating voters.
Among the numerous possible interpretations of fairness is the one captured by the notion of proportional representation. Proportionality features prominently in many collective choice settings such as that of apportionment <cit.> and the aforementioned committee elections <cit.> while being introduced to richer social-choice models such as that of participatory budgeting (PB) <cit.>. Indeed, even when zooming in on the public decisions task, the goal of producing collective outcomes that proportionally reflect the opinions of the voter population has been drawing increasing attention in recent years <cit.>.
However, a component that has so far not received much attention in this growing literature on proportionality
is the presence of constraints that restrict the possible outcomes that can be returned. In this paper, we focus on answering the question of what one may do when outcomes that would satisfy classical proportionality axioms—and thus be considered fair outcomes—are no longer feasible due to the presence of constraints.
When examining real-world examples of the public-decision model, there are many scenarios where enriching the model with constraints fits naturally: in the case of participatory budgeting, the implementation of one project may be conditional on the acceptance (or rejection) of another; diversity constraints applied to the committee election problem that determine the number of individuals with certain characteristics that may be accepted/rejected; or when selecting the features of some product, only certain feature combinations represent affordable options.
In tackling our task, we build on existing notions of proportionality that have been posed for less rich models and tailor them for the challenges that comes with the existence of constraints.
Naturally, this leads us to also consider constrained versions of collective decision rules proposed in the literature and to investigate the extent to which they meet the requirements of our novel constraint-aware notions of proportionality.
Related work.
We begin by noting that our constrained public-decision model closely resembles that of judgment aggregation and it also naturally fits into the area of collective decisions in combinatorial domains (see <cit.> and <cit.> for general introductions to these two topics, respectively).
Most relevant to our paper is the recent work conducted on fairness in the context of public decisions without constraints <cit.>. <cit.> focused on individually proportional outcomes, thus, our work more closely aligns with that of <cit.> and <cit.> who adapt the notion of justified representation <cit.> from the literature of multiwinner voting (MWV) <cit.>. Moreover, proportionality has also been studied in models of sequential decision-making that are relevant to our own as they can be seen as generalisations of the public-decision model without constraints <cit.>. Amongst these sequential decision-making papers, those of <cit.> and <cit.> relate to our work the most as they also implement justified-representation notions. More recently, <cit.> studied proportionality for a general social-choice model that allows for the modelling of both the unconstrained and constrained versions of the public-decision model. By focusing on the latter, we explore properties that are specifically made for this setting, which in turn allows us to define, and subsequently conduct an analysis of, constrained public-decision rules that are not touched upon by <cit.>. Thus, our results complement their work by showing further possibilities, and also limitations, for proportionality within this constrained public-decision model. We also highlight work by <cit.> who adapted justified representation for the MWV model with arbitrary constraints instead of our focus on the public-decision model. This leads us towards differing approaches in adapting justified representation for constraints and also, analysing quite different rules.
In related fields, previous work studied proportionality in various models that differ from the constrained public-decision model but feature collective choices on interconnected propositions: the belief merging setting <cit.>, interdependent binary issues via conditional ballots <cit.>, and approval-based shortlisting with constraints (presented in a model of judgment aggregation) <cit.>.
Contribution.
We study the extent to which proportionality can be ensured constrained public-decision setting. First, we introduce the notion of feasible group deviations as a building block that allows the translation of existing proportionality axioms—that are based on varying public-decision interpretations of justified representation—for this setting with constraints.
For each of our axioms, we show that although it is challenging to satisfy these properties in general constrained instances, when one hones in on a restricted—yet highly expressive—class of constraints, we can achieve proportionality guarantees that represent approximations of the desirable justified-representation axioms.
In doing so, we also define novel adaptations of recently studied decision rules to our public-decision setting with constraints, namely the method of equal shares (MES) and the MeCorA rule. Finally, we adapt the priceability notion from the MWV literature, which provides another promising route to introduce proportionality into public decisions under constraints.
Paper outline. We begin by detailing the constrained public-decision model in Section <ref>. We continue with Section <ref> where we discuss two known ways in which justified representation is formalised for public decisions, and also present the notion of deviating groups. Then each of sections <ref> and <ref> deals with a particular public-decision interpretation of justified representation. Before concluding in Section <ref>, we deal with our constrained version of the priceability axiom in Section <ref>. Note that all omitted proofs can be found as part of the supplementary material.
§ THE MODEL
A finite set of n voters N = {1,…,n} has to take a collective decision on a finite set of m binary issues = {a_1,…,a_m}. It is typical in the public decisions setting to consider there only being two available decisions per issue but we instead adopt the following, more general setup.
Each issue a_t∈ is associated with a finite set of alternatives called a domain D_t = {d_t^1,d_t^2,…}⊆ X where |D_t| ≥ 2 holds for all t∈[m]. The design decision of going beyond binary issues is motivated by the wider real-life applicability of this model when more than two alternatives are possible for each issue.
Each voter i∈ N submits a ballot b⃗_i = (b⃗_i^1,…,b⃗_i^m) ∈ D_1×…× D_m
where b⃗_i^t = d_t^c indicates that voter i chooses the decision d_t^c for the issue a_t.
A profile B = (b⃗_1,…,b⃗_n) ∈ (D_1×…× D_m)^n is a vector of the n voters' ballots. An outcome w⃗ = (w_1,…,w_m) ∈ D_1×…× D_m is then a vector providing a decision for every issue at stake.
We focus on situations where some constraints limit the set of possible collective outcomes: we denote by ⊆ D_1×…× D_m the set of feasible outcomes. We write (B, ) to denote an election instance. By a slight abuse of notation we also refer to as the constraint, and thus, we refer to elections instances where = D_1×…× D_m as unconstrained election instances. [Note that while we work formally with the constraint being an enumeration of all feasible outcomes, in practice, it is often possible to represent the set of feasible outcomes in more concise forms—via the use of formulas of propositional logic, for example—to help with parsing said constraint and/or speed up computation by exploiting the constraint's representation structure.]
Note that voter ballots need not be consistent with the constraints, i.e., for an election instance (B, ), we do not require that b⃗_i∈ for all voters i∈ N.[This assumption takes our model closer to the particular model of judgment aggregation where the constraints on the output may differ from the constraints imposed on the voters' input judgments <cit.>.]
While not common in work done in the related judgment aggregation model, our assumption that voters' ballots need not correspond to feasible outcomes is common in other settings of social choice. In multiwinner voting, voters can approve more candidates than the committee target size while in participatory budgeting, the sum of the costs of a voter's approved projects may exceed the instance's budget. For our setting, we argue that this approach helps capture real-world, constrained decision-making scenarios where either the constraint is uncertain when voters submit their ballots, or possibly, the voting process becomes more burdensome for voters as they attempt to create ballots with respect to a (possibly difficult to understand) constraint. For example, consider a group of friends deciding on the travel destinations of their shared holiday across the world, visiting one country in each continent. On a booking platform, there are a certain number of locations that can be selected per continent such as: Amsterdam, Paris and Vienna in Europe; Mexico City and Toronto in North America; Cairo, Nairobi and Cape Town in Africa; and so on. Each friend has a preferred combination of cities and their collective itinerary is subject to factors such as their travel budget or the available flight connections between cities. However, as flight costs and connections may change significantly on a day-to-day basis, it may be unclear which combinations of cities are affordable. Therefore, it is not reasonable to impose, by default, the requirement that voter ballots are constraint-consistent.
If needed, we explicitly state when we pivot from this assumption and require that voter ballots be constraint-consistent. At times, we shall restrict ourselves to election instances where D_t = {0,1} holds for every issue a_t. We refer to such cases as binary election instances. When necessary, we explicitly state whether any result hinges on the restriction to binary instances. Given an outcome w⃗ for a binary instance, the vector w⃗̅⃗ = (w̅_1,…,w̅_m) is such that w̅_t = 1-w_t for all issues a_t∈.
Now, consider an outcome w⃗, a set of issues S⊆ and some vector v⃗ = (v_1,…,v_m)∈ D_1×…× D_m (that can be interpreted as either an outcome or voter's ballot). We write w⃗Sv⃗ = (w_1',…,w_m') where w_t' = w_t for all issues a_t∈∖S and w_t' = v_t for all issues a_t∈ S. In other words, w⃗Sv⃗ is the resultant vector of updating outcome w⃗'s decisions on the issues in S by fixing them to those of vector v⃗.
For a given issue a_t ∈ and a decision d∈ D_t, we use N(a_t,d) = {i∈ N |b⃗_i^t = d} to denote the set of voters that agree with decision d on issue a_t.
Given two vectors v⃗, v⃗' ∈ D_1×…× D_m, we denote the agreement between them by v⃗v⃗' = {a_t∈| v_t = v_t'}. Then, the satisfaction that a voter i obtains from an outcome w⃗ corresponds to iw⃗ = |b⃗_iw⃗|, i.e., the number of decisions on which the voter i is in agreement with outcome w⃗.
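As an illustration (and not part of the formal model), the agreement and satisfaction notions can be sketched in a few lines of Python; the tuple-based data layout and the function names below are our own illustrative choices.

def agreement(v, w):
    """Indices of the issues on which two decision vectors agree."""
    return {t for t, (x, y) in enumerate(zip(v, w)) if x == y}

def satisfaction(ballot, outcome):
    """Number of issues on which a voter's ballot agrees with the outcome."""
    return len(agreement(ballot, outcome))

# Two binary issues: voter 1 submits (1, 0), voter 2 submits (0, 1).
profile = [(1, 0), (0, 1)]
print(satisfaction(profile[0], (1, 0)))   # 2
print(satisfaction(profile[1], (1, 0)))   # 0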
§ PROPORTIONALITY VIA JUSTIFIED REPRESENTATION
This section starts with the observation that classical notions of proportionality fall short when considering interconnected decisions (in the upcoming Example <ref>), and then follows with our proposed generalisations of such axioms that deal with constraints.
Ideally, when looking to make a proportional collective choice, we would like to meet the following criteria: a group of similarly-minded voters that is an α fraction of the population should have their opinions reflected in an α fraction of the m issues. We wish to define an axiom for our model that captures this idea within our richer framework.
In the setting of multiwinner voting, this is formally captured with the justified representation axioms, one of the most widely studied being extended justified representation (EJR) <cit.>. Now, when studied in the setting of public decisions, there are two different adaptations that have been considered and we shall look at both. One approach intuitively states that `a group of voters that agree on a set of issues T and represent an α fraction of the voter population, should control α· |T| of the total issues in ' <cit.>. We refer to it as agreement-EJR.
This approach differs from the following that is a more faithful translation of the EJR from multiwinner voting: `a group of voters that agree on, and represent, an α fraction of the issues, and voter population, respectively, should control α· m of the issues in ' <cit.>. The requirements on the voter groups that is present in the latter approach are captured by the notion of cohesiveness and so we refer to this version of EJR as cohesiveness-EJR. Observe that cohesiveness-EJR is stronger than, and implies, agreement-EJR.
Meeting the ideal outlined by both of these notions is not easy in our setting as the constraint could rule out a seemingly fair outcome from the onset.
Suppose there are two issues = {a_1, a_2} with constraint = {(1,0),(0,1)}. Then suppose there are two voters N = {1,2} with ballots b⃗_1 = (1,0), and b⃗_2 = (0,1) (note that voters 1 and 2 are both, on their own, cohesive groups). Here, both aforementioned EJR interpretations require each voter to obtain at least 1 in satisfaction, i.e., deciding half of the two issues at hand. However, there exists no feasible outcome that provides agreement-EJR or cohesiveness-EJR as one voter i∈{1,2} will have satisfaction iw⃗ = 0 for any outcome w⃗∈.
Example <ref> makes clear an issue that we must take into account when defining proportionality properties when there are constraints. That is, a voter group that is an α fraction of the population may lay claim to deciding an α fraction of the issues, but in doing so, they may be resolving, or influencing the decision on, a larger portion of the issues than they are entitled to.
In doing so, we look for meaningful ways to identify, given an outcome w⃗, those voter groups that are underrepresented and can justifiably complain at the selection of outcome w⃗.
The latter is formalised by the following definition which we use to identify the voter group whose displeasure is justified. Specifically, these are groups that can propose an alternative, feasible outcome w⃗^* that yields greater satisfaction for each group member.
Given election instance (B,) and outcome w⃗∈, a set of voters N'⊆ N has an (S,w⃗)-deviation if ∅≠ S ⊆ is a set of issues such that all of the following hold:
* S ⊆b⃗_ib⃗_j for all i,j∈ N' (the voters agree on the decisions on all issues in S).
* S⊆∖b⃗_iw⃗ for all i∈ N'
(the voters disagree with outcome w⃗'s decisions on all issues in S).
* w⃗Sb⃗_i∈ for all i∈ N' (fixing outcome w⃗'s decisions on issues in S, so as to agree with the voters in N', induces a feasible outcome).
Intuitively, given an outcome w⃗, a voter group having an (S,w⃗)-deviation indicates the presence of another feasible outcome w⃗^*≠w⃗ where every group member would be better off. Thus, our goal in providing a fair outcome reduces to finding an outcome where every group of voters that has an (S,w⃗)-deviation is sufficiently represented.
We shall use this (S,w⃗)-deviation notion to convert proportionality axioms from unconstrained settings to axioms that deal with constraints. But first, we look at the following computational question associated with (S,w⃗)-deviations: given an election instance (B,) and an outcome w⃗∈, the problem is to find all groups of voters with an (S,w⃗)-deviation.
Given an election instance (B,) and an outcome w⃗∈, there exists an algorithm that finds all groups of voters N' such that there exists an S⊆ with N' having an (S,w⃗)-deviation, that runs in O(||^2mn) time.
Take (B,) and outcome w⃗∈. Consider the following algorithm that operates in || rounds, assessing an outcome w⃗∈ in each round (with each outcome assessed once throughout): at each round for an outcome w⃗∈, iterate through all other outcomes w⃗^*≠w⃗∈; fix S to be the issues that w⃗ and w⃗^* disagree on; in at most mn steps, it can be checked if there is a set of voters
that agree with w⃗^* on all issues in S which verifies the existence of a
voter group N' with an (S,w⃗)-deviation; keep track of all such groups N'; if all outcomes have been assessed, terminate, otherwise, move to the next outcome. This algorithm takes O(||^2mn) time to complete in the worst case, which is polynomial in the input size given our assumptions.
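A hedged sketch of this enumeration is given below; it assumes the constraint is available as an explicit list of feasible outcomes, and the helper name deviating_groups is our own. For each pair of feasible outcomes it recovers the issue set S on which they differ and the (maximal) voter group that could perform the corresponding deviation.

def deviating_groups(profile, feasible):
    """For every feasible outcome w, collect pairs (group, S) such that the
    voters in `group` agree with some other feasible outcome w* on exactly
    the issues S where w and w* differ, i.e., they have an (S, w)-deviation."""
    m = len(feasible[0])
    result = {}
    for w in feasible:
        found = []
        for w_star in feasible:
            if w_star == w:
                continue
            S = frozenset(t for t in range(m) if w[t] != w_star[t])
            group = frozenset(i for i, b in enumerate(profile)
                              if all(b[t] == w_star[t] for t in S))
            if group:
                found.append((group, S))
        result[w] = found
    return result

# Two voters and two issues, exactly one of which can be accepted.
profile = [(1, 0), (0, 1)]
feasible = [(1, 0), (0, 1)]
for w, devs in deviating_groups(profile, feasible).items():
    print(w, devs)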
We offer the following remark in regards to the nature of Proposition <ref>.
Proposition <ref> can be seen as positive whenever the constraint under consideration is `not too large'. Such an assumption is reasonable for many real-life examples. Consider the quite general, collective task of selecting the features of some product. Our running example of the logo design is an instance of this. Other applicable scenarios include choosing the technical features of a shared computer or the items to be placed in an organisation's common area. In many cases, factors such as a limited budget (or limited space in the case of the common area) may result in very few feature combinations being feasible for said product. These are natural scenarios where we may encounter a `small' constraint (according to our definition) with respect to the number of issues at hand and the size of their domains.
Our goal is to answer the following question: how much representation can we guarantee from some outcome w⃗, to a group of voters that has an (S,w⃗)-deviation and that qualifies as underrepresented?
§ JUSTIFIED REPRESENTATION WITH COHESIVENESS
We now propose the following adaptations of cohesiveness-EJR to public decisions with constraints. To adapt cohesiveness-EJR, we adapt cohesiveness from multiwinner voting in a similar manner as done by <cit.>. We say that a voter group is T-agreeing for some set of issues T⊆ if T ⊆b⃗_ib⃗_j holds for all voters i,j∈ N' and then we define cohesiveness as the following:
For a set of issues T⊆, we say that a set of voters N'⊆ N is T-cohesive if N' is T-agreeing and it holds that |N'| ≥ |T|·n/m.
Using T-cohesiveness, we can define EJR for public decisions with constraints <cit.>.
Given an election (B, ), an outcome w⃗ provides if for every T-cohesive group of voters N'⊆ N for some T⊆ with an (S,w⃗)-deviation for some S⊆ T, there exists a voter i∈ N' such that iw⃗≥ |T|.
Intuitively, deems an outcome to be unfair if there exists a T-cohesive voter group with (i) none of its group members having at least |T| in satisfaction, and (ii) `flipping' outcome w⃗'s decisions on some of the issues in T leads to some other feasible outcome.
We have the following result that can be interpreted as positive when the size of is `not too large'.
Given an election instance (B,) and an outcome w⃗∈, there exists an algorithm that decides in O((max_t∈[m]|D_t|)^m||^3mn) time whether outcome w⃗ provides .
From Proposition <ref> we know that, given an outcome w⃗, we can find all groups with some (S,w⃗)-deviation for some S⊆ in O(||^2mn) time. There can be at most (max_t∈[m]|D_t|)^m(||-1) such groups (recall that max_t∈[m]|D_t| is the maximal size of any issue's domain). Then, for each group N' with an (S,w⃗)-deviation, we can check their size in polynomial time and thus verify whether they are T-cohesive with S⊆ T, and if so, we can check if there exists any voter i∈ N' with iw⃗≥ |T|.
Now, <cit.> have already shown that, in general, cohesiveness-EJR is not always satisfiable in their sequential decisions model. This negative result carries over to the unconstrained public-decision setting. Although we shall, in the sections to follow, analyse the extent to which we can achieve positive results with cohesiveness-EJR in our constrained setting, this negative result motivates the study of the following weaker axiom—which is an adaptation of the multiwinner JR axiom—that can always be satisfied in the public-decision setting without constraints <cit.>.
Given an election instance (B, ), an outcome w⃗ provides if for every T-cohesive group of voters N'⊆ N for some T⊆ with an (S,w⃗)-deviation for some S⊆ T where |S| = |T| = 1, there exists a voter i∈ N' such that iw⃗≥ 1.
Unfortunately, when considering arbitrary constraints, even cannot always be achieved. Note that this even holds for binary election instances.
There exists an election instance where no outcome provides .
Consider the binary election instance with issues = {a_1,a_2} and a constraint = {(0,1),(0,0)}. Suppose that N = {1,2}, where b⃗_1 = (1,1) and b⃗_2 = (1,0). Note that for both outcomes w⃗∈, one voter will have satisfaction of 0 while being a T-cohesive group with an (S,w⃗)-deviation for |S| = |T| = 1. As each voter is half of the population, they may `flip' issue a_2 to deviate towards the alternative feasible outcome, which provides them greater satisfaction than the current one.
Let us now restrict the constraints that we consider. To do so, we introduce notation for the fixed decisions for a set of outcomes C ⊆, which are the issues in whose decisions are equivalent across all the outcomes in C. For a set of outcomes C ⊆, we represent this as:
_fix(C) = {a_t∈|there exists some d∈ D_t such that w_t = d for all w⃗∈ C}.
We say a constraint has the NFD property if _fix() = ∅ holds for .
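For illustration, the fixed-decision set and the NFD test can be computed directly from the explicit list of feasible outcomes; the helper names below are our own.

def fixed_issues(outcomes):
    """Issues whose decision coincides across all given outcomes (the fixed-decision set)."""
    first = outcomes[0]
    return {t for t in range(len(first)) if all(w[t] == first[t] for w in outcomes)}

def has_nfd(feasible):
    """The NFD property holds iff no issue is fixed by the constraint."""
    return not fixed_issues(feasible)

print(has_nfd([(0, 1), (0, 0)]))   # False: the first issue is fixed to 0
print(has_nfd([(0, 1), (1, 0)]))   # True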
At first glance, the NFD property seems to be not just a reasonable requirement but rather a property that should be assumed to hold by default. We argue, however, that by doing so, we would neglect election instances where decisions that are fixed from the get-go may contribute to the satisfaction of voters and, specifically for our goal, these fixed decisions may aid in giving the voters their fair, proportional representation. It is for this reason that we do not restrict ourselves to election instances where the NFD property holds.
Now, we show that with the NFD property, the axiom can always be provided, albeit only for `small' election instances. We begin with cases where the number of feasible outcomes is limited to two.
For election instances (B, ) with || = 2 where has the NFD property, can always be satisfied.
Take some feasible outcome w⃗∈. Observe that when || = 2, if property NFD holds, then the two feasible outcomes differ on the decisions of all issues. Thus, it is only possible for T-cohesive groups with an (S,w⃗)-deviation for |S| ≤ |T| = m to have an allowable deviation from w⃗ to the only other feasible outcome. This means that only the entire voter population has the potential to deviate. And if such a deviation to w⃗' exists, then outcome w⃗' sufficiently represents the entire voter population.
Now we ask the following: can we guarantee when m ≤ 3? We answer in the positive when we restrict ourselves to binary election instances.
For binary election instances (B, ) with m ≤ 3 where the constraint has the NFD property, can always be provided.
The case for m=1 is trivially satisfied so we present the proof as two separate cases where the number of issues is either m=2 or m=3.
Case m = 2: Observe that for two issues (i.e., m = 2) there are 7 possible constraints satisfying the NFD property. Take one such and a feasible outcome w⃗ = (d_x,d_y) ∈ where d_x,d_y∈{0,1}. Let us consider now groups of voters with an (S, w⃗)-deviation over some set of issues S⊆ T who are witness to a violation of . As m = 2, the agreement among voters and the deviation may concern at most two issues, i.e., |S|, |T| ∈{1,2}.
First, consider |T| = 1. Since |S| ≤ |T| and S ≠∅, we have |S| = 1 for any T-cohesive group (which is thus of size |N'| ≥n/2) wishing to perform a (S, w⃗)-deviation from w⃗ to some other feasible outcome w⃗'∈. If there is a voter i ∈ N' such that iw⃗≥ 1, group N' would be sufficiently satisfied—therefore, is ensured and we are done. Otherwise, we have that all i ∈ N' are unanimous and that iw⃗ = 0; hence, b⃗_i = (1-d_x, 1-d_y) for all i ∈ N'. There are two possible outcomes (deviations) that differ from w⃗ in only one coordinate. If neither outcome is in , then no feasible deviation is possible for N' and we are done. Otherwise, assume without loss of generality that w⃗' = (1-d_x, d_y) ∈. Now, if there is a voter i ∈ N ∖ N' such that iw⃗≥ 1, then we are done (as the group N ∖ N' would be sufficiently satisfied if it were T-cohesive for |T| = 1). Else, it means that all voters j ∈ N ∖ N' are unanimous on ballot b⃗_j = (d_x, 1-d_y). But then, since satisfies property NFD, there exists some outcome w⃗”∈ such that w⃗”_2 = 1-d_y. Then, iw⃗”≥ 1 for all i ∈ N and no deviation is possible.
Finally, consider |T| = m = 2. In order for a group N' that is T-cohesive to have a (S,w⃗)-deviation for |S| ≤ |T|, it must be the case that N' = N, and iw⃗ = 0 for all i ∈ N. By property NFD, there must be some outcome w⃗' ≠w⃗∈, and thus iw⃗'≥ 1 for all i ∈ N.
Case m = 3: Let (B, ) be an election instance satisfying the conditions in the statement. We now reason on the existence of possible T-cohesive groups that are a witness to the violation of , for each possible size 1 ≤ |T| ≤ 3 of the set T.
For |T|=1, suppose by contradiction that for all w⃗∈, there is some voter group N' such that |N'|≥n/3 and each voter in N' has satisfaction of 0. Thus, for all voters i ∈ N' we have b⃗_i= w⃗̅⃗. Moreover, for a T-cohesive group with an (S,w⃗)-deviation for |S| = |T| = 1 to be possible, there has to exist a w⃗'∈ whose decisions differ from w⃗ in exactly one issue, i.e., w⃗w⃗' = 2. To fit all these disjoint T-cohesive groups for |T| = 1, one for each outcome in , it must be that n≥ ||·n/3, hence ||≤ 3 must hold. If || = 1, the NFD property cannot be met. If || = 2, then the two feasible outcomes cannot differ in the decision of only one issue while also satisfying the NFD property. For ||=3, to get a T-cohesive voter group with an (S,w⃗)-deviation for |S| = |T| = 1 at every w⃗∈, the three feasible outcomes must differ by at most one decision, contradicting the NFD property.
For |T| = 2, we only consider (S,w⃗)-deviations from a T-cohesive group N' with |S| ∈{1,2}. Consider the case of |S|=1. W.l.o.g., assume w⃗ = (0,0,0) and assume that there exists a T-cohesive group N' (where |N'| ≥ n·2/3) with every voter having satisfaction < 2, with an (S,w⃗)-deviation towards outcome, e.g., w⃗' = (1,0,0). The case for (0,1,0) is similar. Now one of the N' voters has satisfaction of 2. If n/3 voters now have an allowable deviation (satisfaction of 0 with the current outcome), by NFD one of the outcomes {(0,1,0), (1,1,1), (0,1,1), (1,1,0)} must be in . Observe that any of them provides satisfaction of at least 2 to all cohesive groups of |T| = 2, and at least satisfaction 1 to every cohesive group of |T| =1. Now we look at the case for |S| = 2. W.l.o.g., consider the outcome w⃗ = (0,0,0) and assume that there exists a T-cohesive group N' (where |N'| ≥ n·2/3) with an (S,w⃗)-deviation towards outcome, e.g., w⃗' = (1,1,0). Thus, there is some voter i in N' with satisfaction iw⃗'≥2. At this point, the only possible further (S,w⃗)-deviation could arise for |S| = 1 in case there are n/3 voters in N∖N' each having a satisfaction of 0 for w⃗', i.e., each has the ballot (0,0,1) and either one of the outcomes in {(1,0,0),(0,1,0),(1,1,1)} is in . Now take instead that iw⃗' = 2 and consider two cases where either voter i agrees or disagrees with the voters in N∖N' on the decision of issue a_3. First, assume that voter i∈ N' agrees with the voters in N∖N' on issue a_3 (so voter i had the ballot b⃗_i = (1,1,1)). Then if either (0,1,1)∈ or (1,1,1)∈ holds, we have that is provided. And if (0,0,1)∈ holds, then voters in N∖N' are entirely satisfied and the voters in N' may only have an (S,w⃗)-deviation for |S| ≤ |T| = 2 if either (0,1,1)∈ or (1,1,1)∈ holds (as they only `flip' issues they disagree with), which means that is provided. In the second case, assume that voter i∈ N' disagrees with the voters in N∖N' on issue a_3 and so, voter i had the ballot b⃗_i = (1,1,0). This means that iw⃗' = 3 holds, hence, any outcome that the voters in N∖N' propose given they have an (S,w⃗)-deviation for |S| = 1, would be one that provides .
Finally, a T-cohesive group for |T| = 3 implies a unanimous profile; if there exists an allowable (S,w⃗)-deviation for |S| ≤ |T| = 3, then the outcome in maximising the sum of agreement with the profile provides .
We leave it open whether the above result holds if we do not restrict our view to binary election instances. Unfortunately, the good news ends there as we provide an example showing that cannot be guaranteed when we do not have m ≤ 3 (even for binary election instances).
There exists an election instance (B, ) where m>3 and the constraint has the NFD property but no outcome exists.
Suppose there is a binary election instance with a constraint = {w⃗_1, w⃗_2, w⃗_3, w⃗_4} for m=8 such that w⃗_1 = (0,0,0,…,0), w⃗_2 = (0,0,1,…,1), w⃗_3 = (1,1,1,…,1), w⃗_4 = (1,1,0,…,0). Consider now a profile of four voters where b⃗_i = w⃗_i. Given that m=8, note that for every outcome w⃗∈, there exists some voter that deserves 2 in satisfaction by being T-cohesive for |T| = 2 with an (S,w⃗)-deviation but with zero in satisfaction. And by , such a voter would be entitled to at least 1 in satisfaction, so there is no outcome in that provides .
We now turn our attention towards a weakening of that takes inspiration from EJR-1 studied in the context of participatory budgeting.
Given an election (B, ), an outcome w⃗ provides -1 if for every T-cohesive group of voters N'⊆ N for some T⊆ with an (S,w⃗)-deviation for some S⊆ T, there exists a voter i∈ N' such that iw⃗≥ |T| -1.
As implies -1, the results of Propositions <ref> and <ref> immediately apply to -1.
For binary election instances (B, ) with || = 2 where the constraint has the NFD property, -1 can always be provided.
For binary election instances (B, ) with m ≤ 3 where the constraint has the NFD property, -1 can always be provided.
Note that for the computational result for in Proposition <ref>, a simple alteration of the proof given for Proposition <ref> (replacing the value |T| with |T|-1 in the final satisfaction check) yields a corresponding computational result for -1.
Given an election instance (B,) and an outcome w⃗∈, there exists an algorithm that decides in O((max_t∈[m]|D_t|)^m||^3mn) time whether outcome w⃗ provides -1.
For the result of stating that can be provided when m=2 given that NFD holds (see Proposition <ref>), we can show something stronger for -1 by dropping the assumption that the NFD property holds.
For election instances (B, ) with m = 2, -1 can always be provided.
Consider an election over two issues, where a T-cohesive group of voters has an (S,w⃗)-deviation for some outcome w⃗, as per Definition <ref>. Observe that, when m = 2, (S,w⃗)-deviations are only possible for |S|∈{1,2}.
Take a T-cohesive group N' for |T| = 1 with an (S,w⃗)-deviation from w⃗ to some other feasible outcome w⃗'∈. Even if iw⃗ = 0 for every voter i ∈ N', we have iw⃗≥ |T| - 1 = 1-1 = 0, and thus -1 is satisfied.
Take now a T-cohesive group N' for |T| = 2: for them to deviate, it must be the case that N' = N, and iw⃗ = 0 for all i ∈ N. If they have an (S,w⃗)-deviation for |S| = |T| = 2, the outcome w⃗' they wish to deviate to must increase the satisfaction of each voter by at least 1, which satisfies iw⃗≥ |T| - 1 = 2-1 = 1, and thus -1.
Can we show that an outcome providing -1 always exists when there are more than three issues, unlike for ? Unfortunately, this is not the case, even assuming property NFD, as the same counterexample used to prove Proposition <ref> yields the following (so also for binary election instances).
There exists an election instance (B, ) where m > 3 and the constraint has the NFD property but there exists no outcome that provides -1.
We demonstrate that the challenge of satisfying -1 lies in the constraints. To do so, we show that in the setting without constraints, it is always possible to find an outcome that provides -1. To this end, we define a constrained version of MES, a rule that has been studied in the public-decision setting without constraints. Our adaptation allows the prices associated with fixing the outcome's decisions on issues to vary. This contrasts with the unconstrained MES that fixes the price of every issue's decision to n from the outset. And this pricing is determined by a particular pricing type λ.
The rule runs for at most m rounds. Each voter has a budget of m. In every round, for every undecided issue a_t in a partial outcome w⃗^*, we identify those issue-decision pairs (a_t,d) where fixing some decision d∈ D_t on issue a_t allows for a feasible outcome to be returned in future rounds. If no such issue-decision pair exists, then the rule stops. Otherwise, for every such pair (a_t,d), we calculate the minimum value for ρ_(a_t,d) such that if each voter in N(a_t,d) were to pay either ρ_(a_t,d) or the remainder of their budget, then these voters could afford to pay the price λ(a_t,d) (determined by the pricing type λ). If there exists no such value for ρ_(a_t,d), then we say that the issue-decision pair (a_t,d) is not affordable in this round, and if in a round there are no affordable issue-decision pairs, the rule stops.
Otherwise, we update w⃗^* by setting decision d on issue a_t for the pair (a_t,d) with a minimal value ρ_(a_t,d) (breaking ties arbitrarily, if necessary) and have each voter in N(a_t,d) either paying ρ_(a_t,d), or the rest of their budget. Note that may terminate with not all issues being decided and we assume that all undecided issues are decided arbitrarily.
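The following Python sketch illustrates one possible reading of this rule; the names mes_constrained and min_rho are our own, ties are broken by iteration order, and undecided issues are completed with an arbitrary consistent feasible outcome, so it should be read as an illustration rather than a definitive implementation. The example call uses the unit pricing n discussed next.

def min_rho(budgets, price):
    """Smallest rho with sum_i min(rho, b_i) >= price, or None if unaffordable."""
    if not budgets or sum(budgets) < price:
        return None
    bs = sorted(budgets)
    paid_full = 0.0
    for k, b in enumerate(bs):
        rho = (price - paid_full) / (len(bs) - k)
        if rho <= b:
            return rho
        paid_full += b
    return bs[-1]

def mes_constrained(profile, feasible, domains, pricing):
    n, m = len(profile), len(domains)
    budget = [float(m)] * n                # every voter starts with a budget of m
    fixed = {}                             # issue index -> decision fixed so far

    def completable(partial):
        return any(all(w[t] == d for t, d in partial.items()) for w in feasible)

    while len(fixed) < m:
        best = None                        # (rho, issue, decision, supporters)
        for t in range(m):
            if t in fixed:
                continue
            for d in domains[t]:
                if not completable({**fixed, t: d}):
                    continue               # fixing (t, d) rules out all feasible outcomes
                supporters = [i for i in range(n) if profile[i][t] == d]
                rho = min_rho([budget[i] for i in supporters], pricing(t, d))
                if rho is not None and (best is None or rho < best[0]):
                    best = (rho, t, d, supporters)
        if best is None:
            break                          # no affordable issue-decision pair remains
        rho, t, d, supporters = best
        for i in supporters:
            budget[i] -= min(rho, budget[i])
        fixed[t] = d
    # complete any undecided issues with an arbitrary consistent feasible outcome
    return next(w for w in feasible if all(w[t] == d for t, d in fixed.items()))

from itertools import product
domains = [(0, 1)] * 3
feasible = list(product(*domains))         # an unconstrained binary instance
profile = [(1, 1, 0), (1, 0, 0), (0, 1, 1)]
print(mes_constrained(profile, feasible, domains, lambda t, d: len(profile)))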
A natural candidate for a pricing type is the standard pricing of unconstrained MES where the price for every issue-decision pair (a_t,d) is set to λ(a_t,d) = n. And with this pricing, that we refer to as unit pricing λ_unit, we can show that satisfies -1 for unconstrained, binary elections.
For binary election instances, when = {0,1}^m, with unit pricing λ_unit satisfies -1.
Take an outcome w⃗ returned by with unit pricing λ_unit and consider a T-cohesive group of voters N'. Let us assume that for every voter i∈ N', it holds that iw⃗ < |T|-1 and then set ℓ = |T|-1. So to conclude the run of , each voter in N' paid for at most ℓ-1 = |T|-2 issues.
Now, assume that the voters in N' paid at most m/(ℓ + 1) for any decision on an issue. We know that each voter has at least the following funds remaining at that moment:
m - (ℓ - 1)m/ℓ + 1 = 2m/ℓ + 1 = 2m/|T|≥2n/|N'|.
The last step follows from the group N' being T-cohesive. So now we know that the voters in N' collectively hold at least 2n in funds at the end of the rule's run. Thus, we know that at least two issues have not been funded and for at least one of these two issues, at least half of N' agree on the decision of this issue (as the election instance is binary) and they hold enough funds to pay for it; hence, we have a contradiction to the rule terminating.
Now, assume that some voter i in N' paid more than m/(ℓ + 1) for a decision on an issue. Since we know that at the end of 's execution, each voter in N' paid for at most ℓ-1 = |T|-2 issues, then at the round r that voter i paid more than m/(ℓ + 1) for an issue's decision, the voters in N' collectively held at least 2n in funds. Since at least two issues in were not funded, there exists some issue that could have been paid for in round r, where voters each pay m/(ℓ + 1), contradicting the fact that voter i paid more than m/(ℓ + 1) in round r.
So, we have that this group of voters N' cannot exist and that satisfies -1.
This result provides us with an axiom `close to' EJR that we know is always satisfiable when the issues have size-two domains and there are no constraints.
§ JUSTIFIED REPRESENTATION WITH AGREEMENT
Given the mostly negative results regarding the cohesiveness-EJR notion, we move on to justified representation based on agreement. We justify this move as the notion based on agreement is weaker and yields more positive results in the unconstrained setting. Thus, by assessing it here, we are able to establish a baseline of what can be achieved in terms of EJR-like proportionality guarantees in our constrained model. First, we formalise agreement-based EJR with the following axiom.
Given an election (B, ), an outcome w⃗ provides if for every T-agreeing group of voters N'⊆ N for some T⊆ with an (S,w⃗)-deviation for some S⊆ T with |S|≤ |T|·|N'|/n, there exists a voter i∈ N' such that iw⃗≥|N'|/n·|T|.
Now, in more unfortunate news, we find that is not always satisfiable in general. In fact, the counterexample of Proposition <ref> suffices to show this as each voter requires at least 1 in satisfaction for it to be satisfied.
There exists an election instance where no outcome provides (even when the NFD property holds for ).
We now focus on a particular class of constraints as we import agreement-EJR into our setting. Specifically, we consider a class that allows us to talk about how restrictive, and thus how costly, the fixing of a particular issue-decision pair is.
Akin to work by <cit.>, we consider constraints that can be equivalently expressed as a set of implications Imp_, where each implication in Imp_ is a propositional formula with the following form: ℓ_(a_x,d_x)→ℓ_(a_y,d_y). This class of constraints allows us, for instance, to express simple dependencies and conflicts such as `selecting x means that we must select y' and `selecting x means that y cannot be selected', respectively. These constraints correspond to propositional logic formulas in 2CNF.
Take a set of issues = {a,b,c,d,e} for a binary election instance. Here is an example of an implication set:
* Imp_ = {(a,1) → (b,1), (c,1) → (e,0), (d,1) → (e,0)}. Here, accepting a means that b must also be accepted while accepting either c or d requires the rejection of e.
Given a (possibly partial) outcome w⃗∈ and the set Imp_, we construct a directed outcome implication graph G_w⃗ = ⟨ H,E⟩ where H = ⋃_a_t∈{(a_t,d)| d∈ D_t}
as follows:
* Add the edge ((a_x,d_x), (a_y,d_y)) to E if ℓ_(a_x,d_x)→ℓ_(a_y,d_y)∈Imp_ and w_y≠ d_y;
* Add the edge ((a_y,d_y^*), (a_x,d_x^*)) for all d_y^*≠ d_y∈ D_y,d_x^*≠ d_x∈ D_x to E if ℓ_(a_x,d_x)→ℓ_(a_y,d_y)∈Imp_ and w_x = d_x.
Given such a graph G_w⃗ for an outcome w⃗, we use G_w⃗(a_x,d_x) to denote the set of all vertices that belong to some path in G_w⃗ having vertex (a_x,d_x) as the source (note that G_w⃗(a_x,d_x) excludes (a_x,d_x)).
Consider a binary election instance and take a set of issues = {a_1,a_2,a_3,a_4} and the implication set Imp_ = {(a_1,1) → (a_2,1), (a_1,1) → (a_3,1), (a_2,1) → (a_4,1)} of some constraint . Consider the outcome implication graph for w⃗_1 = (0,0,0,0) (vertices with no adjacent edges are omitted for readability):
The graph consists of the directed edges ((a_1,1),(a_2,1)), ((a_1,1),(a_3,1)) and ((a_2,1),(a_4,1)); all other vertices are isolated.
Then, we have G_w⃗(a_1,1) = {(a_2,1), (a_3,1), (a_4,1)} and therefore |G_w⃗(a_1,1)| = 3.
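The construction of the outcome implication graph and of the reachable sets can be sketched as follows; here an implication is encoded as a pair ((x, dx), (y, dy)) of issue-decision pairs with issues indexed by position, and the helper names are our own.

from collections import defaultdict

def implication_graph(implications, outcome, domains):
    """Directed graph on (issue, decision) vertices, following the two edge rules above."""
    edges = defaultdict(set)
    for (x, dx), (y, dy) in implications:
        if outcome[y] != dy:
            edges[(x, dx)].add((y, dy))
        if outcome[x] == dx:
            for dy_star in domains[y]:
                for dx_star in domains[x]:
                    if dy_star != dy and dx_star != dx:
                        edges[(y, dy_star)].add((x, dx_star))
    return edges

def reachable(edges, source):
    """All vertices lying on some path starting at `source` (excluding the source)."""
    seen, stack = set(), [source]
    while stack:
        for nxt in edges[stack.pop()]:
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen - {source}

# The example above: implications (a_1,1)->(a_2,1), (a_1,1)->(a_3,1), (a_2,1)->(a_4,1), w = (0,0,0,0).
domains = [(0, 1)] * 4
imp = [((0, 1), (1, 1)), ((0, 1), (2, 1)), ((1, 1), (3, 1))]
g = implication_graph(imp, (0, 0, 0, 0), domains)
print(reachable(g, (0, 1)))   # {(1, 1), (2, 1), (3, 1)}, so |G_w(a_1,1)| = 3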
Thus, for an issue-decision pair (a_x, d_x), we can count the number of affected issues in setting a decision d_x for the issue a_x. This leads us to the following class of constraints.
Take some constraint expressible as a set of implications Imp_. For some positive integer k≥ 2, we say that is k-restrictive if for every outcome w⃗∈, it holds that:
max{|G_w⃗(a_x,d_x)| | (a_x,d_x)∈⋃_a_t∈{(a_t,d)| d∈ D_t}} = k-1
where G_w⃗ is the outcome implication graph constructed for outcome w⃗ and the implication set Imp_.
Intuitively, with a k-restrictive constraint, if one were to fix/change an outcome w⃗'s decision for one issue, this would require fixing/changing w⃗'s decisions on at most k-1 other issues. So intuitively, when dealing with k-restrictive constraints, we can quantify (at least loosely speaking) how `difficult' it is to satisfy a constraint via the use of this value k. Thus, we can use this value k to account for the constraint's difficulty when designing proportionality axioms.
Before assessing how k-restrictive constraints affect our goal of providing proportionality, we touch on the computational complexity of checking, for some constraint , whether there exists a set of implications Imp_ that is equivalent to . For the case of binary elections, this problem has been studied under the name of Inverse Satisfiability and it has been shown that for formulas in 2CNF, the problem is in <cit.>. So in the remainder of the paper, when we refer to a k-restrictive constraint , we thus assume that is expressible using an implication set Imp_.
We now import the agreement-EJR notion and an approximate variant into our framework with constraints.
Given an election (B, ), some α∈ (0,1] and some positive integer β, an outcome w⃗ provides α--β if for every T-agreeing group of voters N'⊆ N for some T⊆ with an (S,w⃗)-deviation for some S⊆ T with |S|≤ |T|·|N'|/n, there exists a voter i∈ N' such that iw⃗≥α·|N'|/n·|T| - β.
With this axiom, we formalise agreement-EJR to our constrained public-decision model with the presence of the multiplicative and additive factors allowing us to measure how well rules satisfy this notion even if they fall short providing the ideal representation.[Observe that we include the axiom's size requirement on the set S such that a group has an (S,w⃗)-deviation in order to prohibit considering cases such as a single voter only having an (S,w⃗)-deviation for S= while not intuitively being entitled to that much representation.] Note that for the sake of readability, when we have either α = 1 or β = 0, we omit them from the notation when referring to α--β.
Suppose there are four issues = {a_1, a_2, a_3, a_4} and consider a constraint = {(1,1,0,0),(1,1,1,0)}. Then suppose there are two voters with ballots b⃗_1 = (1,1,1,1) and b⃗_2 = (0,0,0,0) so each voter deserves at least 2 in satisfaction according to agreement-EJR. See that outcome w⃗ = (1,1,0,0) provides while the outcome w⃗' = (1,1,1,0) only provides 1/2- as voter 2 only obtains 1 in satisfaction whilst having a sufficiently small (S,w⃗')-deviation for the issue a_3 (deviating to outcome w⃗).
We now analyse with respect to this axiom for the class of k-restrictive constraints. We say that for , the price for an issue-decision pair (a_x,d) given a partial outcome w⃗^* is λ(a_x,d) = n· (|G_w⃗^*(a_x,d)|+1) and we refer to this as a fixed pricing λ_fix. Then we can show the following for binary election instances.
For binary election instances (B,) where is k-restrictive for some k, with fixed pricing λ_fix satisfies 1/k--1.
For a binary election instance (B,) where is k-restrictive, take an outcome w⃗ returned by with fixed pricing λ_fix. Consider a T-agreeing voter group N'. Let us assume that for every i∈ N', it holds that iw⃗ < |N'|/nk·|T|-1 and then set ℓ = |N'|/nk·|T|-1. So, by the end of the rule's run, each voter i∈ N' paid for at most ℓ-1 = |N'|/nk·|T|-2 issues. Note that for a k-restrictive constraint , the maximum price that fixed pricing λ_fix sets for any issue-decision pair is nk (as at most k issues are fixed for a purchase). Now, assume that the voters in N' paid at most m/(ℓ + 1) for any decision on an issue. We know that each voter has at least the following funds remaining at that moment:
m - (ℓ - 1)m/ℓ + 1 = 2m/ℓ + 1 = 2m/|N'|/kn·|T| = 2mnk/|N'||T|≥2nk/|N'|.
We now have that voter group N' holds at least 2nk in funds at the rule's end. Thus, we know that at least k issues have not been funded and for at least one of these k issues, at least half of N' agree on the decision for it (as the election is a binary instance) while having enough funds to pay for it. Hence, we have a contradiction to terminating.
Now, assume that some voter i∈ N' paid more than m/(ℓ + 1) for fixing an issue's decision. Since we know that at the end of 's run, each voter in N' paid for at most ℓ-1 issues, then at the round r that voter i paid more than m/(ℓ + 1), the voters group N' collectively held at least 2nk in funds. Since at least k issues in were not funded, there exists some issue that could have been paid for in round r, where voters each pay m/(ℓ + 1). This contradicts the fact that voter i paid more than m/(ℓ + 1) in round r.
So, we have that this group of voters N' cannot exist which concludes the proof.
Towards an even more positive result, and one where we are not limited to binary election instances, we now provide an adaptation of the MeCorA rule <cit.>. In the unconstrained public-decision model, MeCorA is presented by <cit.> as an auction-style variant of MES that allows voter groups to change the decision of an issue all while increasing the price for any further change to this issue's decision. In our constrained model, groups are allowed to pay for changes to the decisions on sets of issues, as long as these changes represent a feasible deviation.
Take some constant ϵ > 0. Start by setting λ_t = 0 as the current price of every issue a_t∈, endow each voter i∈ N with a personal budget of m and take some arbitrary, feasible outcome w⃗∈ as the current outcome. A group of voters can `update' the current outcome w⃗'s decisions on some issues S⊆ if the group:
(i) can propose, for each issue a_t∈ S, a new price λ_t^*≥λ_t+ϵ,
(ii) can afford the sum of new prices for issues in S, and
(iii) has an (S,w⃗)-deviation.
The rule then works as follows. Given a current outcome w⃗, it computes, for every non-empty S⊆, the smallest possible value ρ_(t,S) for each issue a_t∈ S such that for some N', if voters in N' each pay ρ_S = ∑_a_t∈ Sρ_(t,S) (or their remaining budget), then N' is able to `update' the decisions on every a_t∈ S as per conditions (i)-(iii). If there exists no such voter group for issues S then it sets ρ_S = ∞.
If ρ_S = ∞ for every S⊆, the process terminates and returns the current outcome w⃗. Otherwise, it selects the set S with the lowest value ρ_S (any ties are broken arbitrarily) and does the following:
* updates the current outcome w⃗'s decisions on issues in S to the decisions agreed upon by the voters with the associated (S,w⃗)-deviation,
* updates the current price of every issue a_t∈ S to λ_t^*,
* returns all previously spent funds to all voters who paid for the now-changed decisions on issues in S,
* and finally, for each voter in N', deduct ∑_a_t∈ Sρ_(t,S) from their personal budget (or the rest of their budget).
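A simplified sketch of this auction is given below; it is our own reading of the rule, in which the group that can deviate towards some feasible outcome splits the new prices equally (the rule itself also lets voters instead pay up to the remainder of their budget), and ties are broken by iteration order.

def mecora_constrained(profile, feasible, eps=1.0):
    n, m = len(profile), len(feasible[0])
    budget = [float(m)] * n
    price = [0.0] * m                  # current price lambda_t of each issue
    paid = [dict() for _ in range(m)]  # issue -> {voter: amount currently paid}
    outcome = feasible[0]              # arbitrary feasible starting outcome

    while True:
        best = None                    # (per-voter cost, S, target outcome, group)
        for w_star in feasible:
            if w_star == outcome:
                continue
            S = [t for t in range(m) if w_star[t] != outcome[t]]
            group = [i for i in range(n)
                     if all(profile[i][t] == w_star[t] for t in S)]
            if not group:
                continue               # nobody has this (S, outcome)-deviation
            cost = sum(price[t] + eps for t in S) / len(group)
            if all(budget[i] >= cost for i in group) and \
               (best is None or cost < best[0]):
                best = (cost, S, w_star, group)
        if best is None:
            return outcome             # no group can afford any further update
        cost, S, w_star, group = best
        for t in S:
            for i, amount in paid[t].items():
                budget[i] += amount    # refund the voters who funded the old decision
            share = (price[t] + eps) / len(group)
            paid[t] = {i: share for i in group}
            price[t] += eps            # the decision on t now carries a higher price
            for i in group:
                budget[i] -= share
        outcome = w_star

profile = [(1, 0), (0, 1)]
feasible = [(1, 0), (0, 1)]
print(mecora_constrained(profile, feasible))   # (0, 1) when starting from (1, 0)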
Next, we show that representation guarantees can be achieved on instances with k-restrictive constraints via the use of a modified version of MeCorA_. Moreover, we can drop the restriction to binary election instances that was key for the result of Theorem <ref>. In this MeCorA_ variant, we first partition the voter population into groups where members of each group agree on some set of issues. Then, for each group, its members may only pay to change some decisions as a collective and only on those issues that they agree on. Contrary to MeCorA_, voter groups cannot pay to change some decisions if this leads to the group's members gaining `too much' satisfaction from the altered outcome (i.e., a voter group exceeding their proportional share of their agreed-upon issues, up to some additive factor q that parameterises the rule).
The set of the voters N is partitioned into p disjoints sets N(T_1),…,N(T_p) such that:
(i) for every x∈{1,…,p}, a voter group N(T_x)⊆ N is T_x-agreeing for some T_x⊆, and
(ii) for all x∈{1,…,p-1}, it holds that |N(T_x)|· |T_x| ≥ |N(T_x+1)|· |T_x+1|.
As with MeCorA_, voter groups shall pay to change the decisions of some issues during the rule's execution. However, given the initial partition, during the run of Greedy MeCorA_-q, the voters in N(T_x) may only change decisions for the issues in T_x.
Moreover, if a voter group N(T_x) for some x∈{1,…,p} wishes to change some decisions at any moment during the process, this change must not lead to any voter in N(T_x) having satisfaction greater than |N(T_x)|/n· |T_x| - q with the updated outcome. Besides these two differences, the rule works exactly as MeCorA_.
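As a small illustration (with our own helper names), the two conditions on the initial partition and the satisfaction cap used by this greedy variant can be checked as follows.

def is_valid_partition(partition, profile, n):
    """`partition` is a list of (voter_list, issue_list) pairs covering all voters."""
    covered = set()
    for voters, issues in partition:
        covered.update(voters)
        for t in issues:               # condition (i): the group agrees on all its issues
            if len({profile[i][t] for i in voters}) > 1:
                return False
    weights = [len(v) * len(s) for v, s in partition]
    # condition (ii): groups ordered by non-increasing |N(T_x)| * |T_x|
    return covered == set(range(n)) and weights == sorted(weights, reverse=True)

def satisfaction_cap(group_size, issue_count, n, q):
    """Members of a group may never exceed this satisfaction when paying for changes."""
    return group_size / n * issue_count - q

profile = [(1, 1, 0), (1, 1, 1), (0, 0, 0)]
print(is_valid_partition([([0, 1], [0, 1]), ([2], [0, 1, 2])], profile, 3))  # True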
Now, we can show the following for Greedy MeCorA_-q working on a k-restrictive constraint. For this result, we require the additional assumption that voter ballots represent feasible outcomes in .
For election instances (B,) where voters' ballots are consistent with the constraint and is k-restrictive for some k≥ 2, Greedy MeCorA_-(k-1) satisfies -(k-1).
Take an outcome w⃗ returned by Greedy MeCorA_-(k-1). Assume that w⃗ does not provide -(k-1). Thus, there is a T-agreeing group N' such that iw⃗ < |N'|/n· |T| - k+1 = ℓ holds for every i∈ N'.
Now, consider the partition of voters N(T_1),…,N(T_p) constructed by Greedy MeCorA_-(k-1) to begin its run.
Assume first that there is some x∈{1,…,p} such that N' = N(T_x), i.e., voters N' appear in their entirety in said partition.
We then have T= T_x. Moreover, voters in N' each contribute to at most ℓ decisions at any moment of the run of Greedy MeCorA_-(k-1), as this is the limit the rule imposes on their total satisfaction. We now consider two cases.
Assume that the voters in N' contributed at most m/(ℓ + k - 1) to change some decisions during the rule's execution. It follows that each voter has at least the following funds remaining: m - (ℓ - 1)·m/(ℓ + k - 1)≥nmk/|N'||T|.
In this case, the voters in N' would have at least nmk/|T| in collective funds, so it follows that each distinct (S,w⃗)-deviation available to N' must cost at least nmk/|T|. As N' is T-agreeing, it must be that N' has at least (|T| - ℓ + 1)/k many (S,w⃗)-deviations due to being k-restrictive and as the voters' ballots are consistent with .
Now, consider the case where some voter in N' contributed more than m/(ℓ + k - 1) to change some decisions. The first time that this occurred, the change of decisions did not lead to any voter in N' obtaining a satisfaction greater than ℓ = |N'|/n· |T| - k + 1 (otherwise the rule would not allow these voters to pay for the changes). Thus, each voter in N' must have contributed to at most ℓ -1 issues before this moment. From the reasoning above, it must hold that in this moment, each voter held at least nmk/|N'||T| in funds with there being at least (|T| - ℓ + 1)/k feasible deviations available to N' and each such deviation costing at least nmk/|T|. So in both cases, for the (S,w⃗)-deviations that are present in T that voters in N' wish to make, outcome w⃗'s decisions must have been paid for by voters within the remaining voter population N∖ N'. And so, these decisions must have cost the voters in N∖ N' at least:
nmk/|T|·(|T| - ℓ + 1/k) = nm/|T|·(|T| - |N'|/n· |T| + k)
> nm/|T|·(n|T| - |N'||T|/n) = m(n - |N'|).
However, voters N∖ N' have at most m(n - |N'|) in budget. Thus, the rule cannot have terminated with the voter group N' existing.
Now, assume that the group N' did not appear in their entirety within the partition N(T_1),…,N(T_p) made by Greedy MeCorA_-(k-1). This means that some voter i∈ N' is part of another voter group N(T_x) that is T_x-agreeing such that |N(T_x)|/n· |T_x| ≥N'/n· |T|. Now, recall that for each voter group N(T) in the partition, the voters in N(T) have the same satisfaction to end the rule's execution (as they only pay to flip decisions as a collective). Thus, from the arguments above, it holds for this voter i∈ N'∩ N(T_x) that iw≥|N(T_x)|/n· |T_x| - k +1≥|N'|/n· |T| - k +1, which contradicts the assumption that every voter in N' has satisfaction less than |N'|/n· |T| - k +1.
We now offer another way towards producing proportional outcomes when using k-restrictive constraints. It is a constrained adaptation of the Local Search Proportional Approval Voting (LS PAV) rule from the MWV literature, a polynomial-time computable rule that is known to satisfy EJR <cit.>. In the MWV setting, the rule begins with an arbitrary committee of some fixed size k and in iterations, searches for any swaps between committee members and non-selected candidates that bring about an increase of the PAV score by at least n/k^2. In our model, the PAV score of some feasible outcome w⃗∈ is defined to be PAV(w⃗) = ∑_i∈ N∑_t = 1^iw⃗1/t. We can then lift Local Search PAV to our setting with constraints.
Beginning with an arbitrary outcome w⃗∈ as the current winning outcome, the rule looks for all possible deviations. If there exists an (S,w⃗)-deviation for some voter group to some outcome w⃗'∈ such that PAV(w⃗') - PAV(w⃗)≥n/m^2, i.e., the new outcome w⃗' yields a PAV score that is at least n/m^2 higher than that of w⃗, then the rule sets w⃗' as the current winning outcome. The rule terminates once there exists no deviation that improves on the PAV score of the current winning outcome by at least n/m^2.
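A sketch of this local search, using our own helper names and treating the constraint as an explicit list of feasible outcomes, is the following.

def pav_score(profile, outcome):
    """Sum over voters of the harmonic number of their satisfaction."""
    total = 0.0
    for ballot in profile:
        sat = sum(1 for b, w in zip(ballot, outcome) if b == w)
        total += sum(1.0 / t for t in range(1, sat + 1))
    return total

def ls_pav_constrained(profile, feasible):
    n, m = len(profile), len(feasible[0])
    outcome = feasible[0]                   # arbitrary feasible starting outcome
    improved = True
    while improved:
        improved = False
        for w_star in feasible:
            if w_star == outcome:
                continue
            S = [t for t in range(m) if w_star[t] != outcome[t]]
            # some voter group must actually have this (S, outcome)-deviation
            if not any(all(b[t] == w_star[t] for t in S) for b in profile):
                continue
            if pav_score(profile, w_star) - pav_score(profile, outcome) >= n / m**2:
                outcome, improved = w_star, True
                break
    return outcome

profile = [(1, 1, 1, 1), (0, 0, 0, 0)]
feasible = [(1, 1, 1, 1), (1, 1, 0, 0)]
print(ls_pav_constrained(profile, feasible))   # (1, 1, 0, 0)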
As there is a maximum obtainable PAV score, LS PAV_ is guaranteed to terminate. The question is how long this rule takes to return an outcome when we have to take k-restrictive constraints into account.
For elections instances where is k-restrictive (where k is a fixed constant), LS PAV_ terminates in polynomial time.
We show that given an outcome w⃗, finding all possible deviations can be done in polynomial time for a k-restrictive constraint . This can be done by exploiting the presence of the implication set Imp_. Note that the size of the implication set Imp_ is polynomial in the number of issues. So we can construct the outcome implication graph of Imp_ and the outcome w⃗ in polynomial time. Then for each issue a_t∈, we can find the set G_w⃗(a_t,d) for some d≠ w_t∈ D_t in polynomial time and the issue-decision pairs represent the required additional decisions to be fixed in order to make a deviation from outcome w⃗ by changing the w⃗'s decision on issue a_t to d. Doing this for each issue a_t allows us to find a deviation that can improve the PAV score, if such a deviation exists. With similar reasoning used in other settings <cit.>, we end by noting that since there is a maximum possible PAV score for an outcome, and each improving deviation increases the PAV score by at least n/m^2, the number of improving deviations that LS-PAV_ makes is polynomial in the number of issues m.
Off the back of this positive computational result, we present the degree to which LS PAV_ provides proportional outcomes with regards to the α--β axiom.
For election instances (B,) where the voters' ballots are consistent with the constraint and is k-restrictive for some k≥ 2, LS-PAV_ satisfies 2/(k+1)--(k-1).
For an election instance (B,) where is k-restrictive for k≥ 2, take an outcome w⃗ returned by LS-PAV_ and consider a group of voters N' that agree on some set of issues T. Let us assume that for every voter i∈ N', it holds that iw⃗ < 2/k+1·|N'|/n·|T|-k + 1 and then set ℓ = 2/k+1·|N'|/n·|T|-k + 1. We use r_i to denote the number of outcome w⃗'s decisions that a voter i∈ N agrees with.
For each voter i∈ N∖ N', we calculate the maximal reduction in PAV score that may occur from possible deviations by LS-PAV_ when is k-restrictive. This happens when for each of at most r_i/k deviations, we decrease their satisfaction by k and remove ∑_t=0^k-11/(r_i - t) in PAV score. So for these voters in N∖ N', we deduct at most the following:
∑_N∖ N'r_i/k·(∑_t=0^k-11/r_i - t) ≤∑_N∖ N'r_i/k·(∑_t=1^kt/r_i) = k+1/2·(n - |N'|).
Now, there are |T| - (ℓ - 1) = |T| - ℓ +1 issues that all voters in N' agree on but they disagree with outcome w⃗'s decisions on these issues. Since we assume the constraint is k-restrictive, then for each of these |T| - ℓ +1 issues, they fix at most k-1 other issues and thus, there are at least (|T| - ℓ +1)/k feasible deviations that can be made by LS-PAV_ amongst these issues. For the voters in N', we now consider the minimal increase in PAV score that may occur from these possible deviations by LS-PAV_. For each such deviation, we increase their satisfaction by at least k and thus, for a voter i∈ N', we increase the PAV score by ∑_t=1^k1/(r_i + t). Since for each voter i∈ N' we have r_i≤ℓ - 1, and as there are at least (|T| - ℓ +1)/k feasible deviations in T, it follows that we add at least the following to the PAV score:
|T| - ℓ +1/k·(∑_i∈ N'∑_t=1^k1/r_i + t)≥|T| - ℓ +1/k·(∑_i∈ N'∑_t=1^k1/ℓ + t - 1)
Taking into account that k≥ 2 and ℓ = 2|N'||T|/(n(k+1))-k+1, then with further simplification, we find that at least the following is added to the PAV score:
> n(k+1)/2 - |N'| + n(k+1)/|T|≥k+1/2·(n - |N'|) + n(k+1)/|T|.
So the total addition to the PAV score due to satisfying voters in N' is strictly greater than the PAV score removed for the added dissatisfaction of voters in N∖ N' (which is at most (k+1)(n - |N'|)/2). And specifically, this change in score is at least n(k+1)/|T| > n/|T| and thus, at least one of the (|T| - ℓ +1)/k many deviations must increase the PAV score by more than:
k/|T| - ℓ +1·n/|T|≥1/|T|·n/|T|≥n/|T|^2≥n/m^2.
Thus, LS-PAV_ would not terminate but would instead make this deviation in order to increase the total PAV score. This contradiction shows that such a group N' cannot exist.
With this result, we have a rule that when focused on k-restrictive constraints, is both polynomial-time computable and provides substantial proportional representation guarantees (assuming voter ballots are constraint consistent).
§ PROPORTIONALITY VIA PRICEABILITY
With this section, we offer an alternative to the justified-representation-like interpretation of proportional representation, and this is through the notion of priceability <cit.>. Recent work has shown the promise of this market-based approach for a general social choice model <cit.> and the sequential choice model <cit.>. We look to employ it for constrained public decisions (albeit looking at a weaker priceability axiom than the axiom that <cit.> studied).
Each voter has a personal budget of m and they have to collectively fund the decisions on some issues, with each decision coming with some price. A price system ps = ({p_i}_i∈ N,{π_(a_t,d)}_(a_t,d)∈ H) where H = ⋃_a_t∈{(a_t,d)| d∈ D_t} is a pair consisting of (i) a collection of payment functions p_i: H→ [0, m], one for each voter i∈ N, and (ii) a collection of prices π_(a_t,d)∈ℝ_≥ 0, one for each decision pair (a_t,d) for a_t∈ and d∈ D_t. We consider priceability with respect to outcomes w⃗∈ where decisions are made on all issues.
We say that an outcome w⃗ = (w_1,…,w_m) is priceable if there exists a price system ps such that:
( P1): For all a_t∈ and d∈ D_t, it holds that if d ≠b⃗_i^t we have p_i(a_t,d) = 0, for every i∈ N.
( P2): ∑_(a_t,d)∈ H p_i(a_t,d) ≤ m for every i∈ N where it holds that H = ⋃_a_t∈{(a_t,d)| d∈ D_t}.
( P3): ∑_i∈ N p_i(a_t,w_t) = π_(a_t,w_t) for every a_t∈.
( P4): ∑_i∈ N p_i(a_t,d) = 0 for every a_t∈ and every d≠ w_t∈ D_t.
( P5): There exists no group of voters N' with an (S,w⃗)-deviation for some S⊆, such that for each a_t∈ S:
∑_i∈ N'(m-∑_(a_t',d')∈ H p_i(a_t',d')) > π_(a_t,w_t)
where H = ⋃_a_t∈{(a_t,d)| d∈ D_t}.
Condition (P1) states that each voter only pays for decisions that she agrees with; (P2) states that a voter does not spend more than her budget m; (P3) states that for every decision in the outcome, the sum of payments for this decision is equal to its price; (P4) states that no payments are made for any decision not in the outcome; and, finally, (P5) states that for every set of issues S, there is no group of voters N' agreeing on all decisions for issues in S that collectively holds more in unspent budget than the price of each issue in S, and could thus `update' outcome w⃗'s decisions on the issues in S to ones that they all agree with (where `updating' these issues leads to a feasible outcome). We illustrate priceability in our setting with the following example of a binary election instance.
Take four issues = {a_1, a_2, a_3, a_4} and a constraint = {(1,1,1,1),(1,1,0,0)}. Suppose there are two voters with ballots b⃗_1 = (1,1,1,1) and b⃗_2 = (0,0,0,0). Note that outcome w⃗ = (1,1,1,1) is not priceable as any price system where voter 1 does not exceed her budget would have voter 2 having enough in leftover budget to cause a violation of condition ( P5) (with her entire budget left over, she can afford more than the price of her (S,w⃗)-deviation towards the other feasible outcome). On the other hand, w⃗' = (1,1,0,0) is priceable where we set the price of this outcome's decisions to 1.
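The five conditions can also be checked mechanically for a given price system; the sketch below uses our own encoding, in which payments[i] maps (issue, decision) pairs to the amount voter i pays and prices maps each issue to the price of the decision made on it.

def is_priceable(profile, feasible, outcome, payments, prices):
    n, m = len(profile), len(outcome)
    leftover = [m - sum(payments[i].values()) for i in range(n)]
    for i in range(n):
        for (t, d), amount in payments[i].items():
            if amount > 0 and d != profile[i][t]:
                return False                                    # (P1)
        if sum(payments[i].values()) > m:
            return False                                        # (P2)
    for t in range(m):
        funded = sum(payments[i].get((t, outcome[t]), 0.0) for i in range(n))
        if abs(funded - prices[t]) > 1e-9:
            return False                                        # (P3)
        if any(amount > 1e-9 for i in range(n)
               for (s, d), amount in payments[i].items()
               if s == t and d != outcome[t]):
            return False                                        # (P4)
    for w_star in feasible:                                     # (P5)
        if w_star == outcome:
            continue
        S = [t for t in range(m) if w_star[t] != outcome[t]]
        group = [i for i in range(n) if all(profile[i][t] == w_star[t] for t in S)]
        if group and all(sum(leftover[i] for i in group) > prices[t] for t in S):
            return False
    return True

# A toy check: two issues, each decision of the outcome funded by its supporter at price 1.
profile = [(1, 0), (0, 1)]
payments = [{(0, 1): 1.0}, {(1, 1): 1.0}]
print(is_priceable(profile, [(1, 1), (1, 0), (0, 1)], (1, 1), payments, {0: 1.0, 1: 1.0}))  # True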
The following result gives some general representation guarantees whenever we have priceable outcomes.
Consider a priceable outcome w⃗ with price system ps = ({p_i}_i∈ N,{π_(a_t,d)}_(a_t,d)∈ H) where H = ⋃_a_t∈{(a_t,d)| d∈ D_t}. Then, for every T-cohesive group of voters N'⊆ N for some T⊆ with an (S,w⃗)-deviation for some S⊆ T, it holds that:
∑_i∈ N'iw⃗≥n/q·|T| - |S|
where q = max{π_(a_t,w_t)}_a_t∈ S.
Take a priceable outcome w⃗ and consider a T-cohesive group of voters N'. Suppose that ∑_i∈ N'iw⃗< n/q·|T| - |S| where q = max{π_(a_t,w_t)}_a_t∈ S. As a group, the voters N' have a budget of m|N'|. Now, the voters in N' collectively contributed to at most n/q·|T| - |S|-1 decisions in outcome w⃗, and for each decision, the price was at most q (as q is the price system's maximal price).
m|N'| - q·(n/q·|T| - |S|-1)≥ m·n|T|/m - n|T| + q|S|+q = q·(|S|+1).
Note we made use of the fact that N' is T-cohesive.
Thus, we know that N' has strictly more than q|S| in funds and, for each issue a_t∈ S, holds more in funds than q≥π_(a_t,w_t). This presents a violation of condition ( P5) of priceability. Hence, voter group N' cannot exist.
However, we now must ascertain whether priceable outcomes always exist, regardless of the nature of the constraint. We see that this is possible thanks to the rule we have already defined, namely MeCorA_.
The next result shows that MeCorA_ captures the notion of priceability.
MeCorA_ always returns priceable outcomes.
Let w⃗ = (w_1,…,w_m) be the outcome returned by MeCorA_. We define the following price system ps: For each issue a_t∈, fix the prices π_(a_t,w_t) = π_(a_t,d) = λ_t for all d≠ w_t∈ D_t where λ_t is issue a_t's last MeCorA_ price (before being set to ∞) prior to the rule's termination. Fix the payment functions p_i for each voter to the money they spent to end the execution of MeCorA_. Observe that the priceability conditions ( P1)-( P4) clearly hold: since we have that, to end MeCorA_'s run, voters do not pay for decisions that (i) they do not agree with (condition ( P1)) and (ii) are not made by outcome w⃗ (condition ( P4)); MeCorA_ limits each voter to a budget of m (condition ( P2)); and the sum of payments for decisions made by outcome w⃗ will equal exactly π_(a_t,w_t) = λ_t (condition ( P3)). Now, for condition ( P5), note that if such a group of voters N' existed for some set of issues S, then MeCorA_ would not have terminated as this group of voters could have changed the decisions of these issues in S while increasing each issue's price.
This is a positive result that, combined with that of Proposition <ref>, gives us a rule that always returns us priceable outcomes for any election instance.
§ CONCLUSION
We considered two different interpretations of justified representation from multiwinner voting and adapted them to a public-decision model with constraints.
In analysing the feasibility of the axioms, we devised restricted classes of constraints (the NFD property and simple implications).
While we could show mostly negative results for the satisfaction of cohesiveness-EJR under constraints, we were able to adapt successfully three known rules (MES, Local Search PAV and MeCorA)
to yield positive proportional guarantees that meet, in an approximate sense, the requirements of agreement-EJR. Additionally, we defined a suitable notion of priceability and showed that our adaptation of MeCorA always returns priceable outcomes.
Our work opens up a variety of paths for future research. First, assessing a class of constraints that are more expressive than the simple implications seems a natural starting point in extending our work. Then, on a more technical level, it would be interesting to check if the representation guarantees that are offered by the adapted MES, LS-PAV_ and Greedy MeCorA_-(k-1) still hold for a wider range of election instances. Regarding our adaptation of priceability, the question is open as to whether there are further rules for constrained public decisions that always produce complete priceable outcomes. Given that we opted to represent the constraints as an enumeration of all feasible outcomes, it is natural to ask what occurs to results such as Proposition <ref> when the constraint takes a particular form of representation, e.g., is represented as a Boolean formula of propositional logic. We also note some lingering computational questions such as the computational complexity of (i) computing outcomes for rules such as the adapted MES and Greedy MeCorA_-(k-1) for general constraints, and (ii) of checking whether a given feasible outcome is priceable. Finally, the list of proportionality notions to be tested on the constraints test-bed is not exhausted, with the proportionality degree <cit.> most notably still to be considered.
|
http://arxiv.org/abs/2409.03335v1 | 20240905082105 | Semi-Supervised Sparse Gaussian Classification: Provable Benefits of Unlabeled Data | [
"Eyar Azar",
"Boaz Nadler"
] | stat.ML | [
"stat.ML",
"cs.LG"
] |
Global prescribed-time control of a class of uncertain nonholonomic systems by smooth
time-varying feedback
Kang-Kang Zhang, Bin Zhou, Chenchen Fan, James Lam, Fellow, IEEE
This work was supported by the National Science Found for
Distinguished Young Scholars (62125303), the Science Center Program of
National Natural Science Foundation of China (62188101), the
Fundamental Research Funds for the Central Universities
(HIT.BRET.2021008), and HKU CRCG (2302101740). (Corresponding authors: Bin Zhou)
Kang-Kang Zhang is with the Department of Mechanical Engineering, University of Hong Kong,
Hong Kong, China, and the Department of Computer Science, KU Leuven, B-3001 Heverlee, Belgium; James Lam is with the Department of Mechanical Engineering, University of Hong Kong,
Hong Kong, China;
Bin Zhou is with the Center for Control Theory and Guidance Technology, Harbin Institute of Technology, Harbin, 150001, China; Chenchen Fan is with the Department of Rehabilitation Sciences, The Hong Kong Polytechnic University, Hong Kong, China (email: [email protected], [email protected], [email protected], [email protected]).
Received ?? ; accepted ??
§ ABSTRACT
The premise of semi-supervised learning (SSL) is that combining labeled and unlabeled data yields significantly more accurate models.
Despite empirical successes, the theoretical understanding of SSL is still far from complete.
In this work, we study SSL for high dimensional sparse Gaussian classification.
To construct an accurate classifier, a key task is feature selection, detecting the few variables that separate the two classes.
For this SSL setting, we analyze information theoretic lower bounds for accurate feature selection as well as computational lower bounds,
assuming the low-degree likelihood hardness conjecture.
Our key contribution is the identification of a regime in the problem parameters (dimension, sparsity, number of labeled and unlabeled samples) where SSL is guaranteed to be advantageous for classification.
Specifically, there is a regime
where it is possible to construct in polynomial time an accurate SSL classifier.
However, any computationally efficient supervised or unsupervised learning scheme that separately uses only the labeled or the unlabeled data would fail.
Our work highlights the provable benefits of combining labeled and unlabeled data for
classification and
feature selection in high dimensions.
We present simulations that complement our theoretical analysis.
§ INTRODUCTION
The presumption underlying
Semi-Supervised Learning (SSL) is that more accurate predictors may be learned by leveraging both labeled and unlabeled data. Over the past 20 years, many SSL methods have been proposed and studied
<cit.>. Indeed, on many datasets SSL yields significant improvements over supervised learning (SL) and over
unsupervised learning (UL).
However, there are also cases where unlabeled data does not seem to help.
A fundamental theoretical issue in SSL is thus to understand under which settings can unlabeled data help to construct more accurate predictors and under which its benefit, if any, is negligible.
To address this issue, SSL was studied theoretically under various models.
Several works proved that under a cluster or a manifold assumption,
with sufficient unlabeled data, SSL significantly outperforms SL <cit.>.
In some cases, however, SSL performs similarly to UL (i.e., clustering, up to a label permutation ambiguity).
In addition, <cit.> described a family of distributions
where SSL achieves the same error rate as SL.
In the context of the cluster assumption, a popular model for theoretical analysis is Gaussian classification, in particular binary classification for a mixture of two spherical Gaussians.
In this case, the label Y∈{± 1} has probabilities ℙ(Y=y)=π_y and conditional on a label value y,
the vector x ∈ℝ^p follows a Gaussian distribution,
x | y ∼𝒩(μ_y, I_p)
where μ_1,μ_-1∈ℝ^p are both unknown.
This model and related ones were studied theoretically in supervised, unsupervised and semi-supervised settings, see for example
<cit.>, and references therein.
Without assuming structure on the vectors μ_y or on their difference (such as sparsity), there are
computationally efficient SL and UL algorithms that achieve the corresponding minimax rates.
Moreover, <cit.> proved that
for the model (<ref>), no SSL algorithm simultaneously improves upon the minimax-optimal error rates of SL and UL. In simple words, there do not seem to be major benefits for SSL under the model (<ref>).
In this paper we consider a mixture of two Gaussians in a sparse high dimensional setting.
Specifically, we study balanced binary classification with a sparse difference in the class means,
which is a specific instance of (<ref>).
Here, the joint distribution of a labeled sample
(x, y) is given by
y ∼Unif{± 1}, x | y ∼𝒩(μ_y, I_p).
The class means μ_1, μ_-1∈ℝ^p are unknown,
but their difference Δμ = μ_1- μ_-1
is assumed to
be k-sparse, with k≪ p.
In a supervised setting, model
(<ref>) is closely related to the sparse normal means problem, for which both minimax rates and computationally efficient (thresholding based) algorithms have been developed and analyzed, see e.g. <cit.>.
In an unsupervised setting, inference on the model (<ref>) is closely related to clustering and learning mixtures of Gaussians <cit.>.
A key finding is that in an unsupervised setting with a sparsity assumption, there is a statistical-computational gap <cit.>.
Specifically, from an information viewpoint
a number of unlabeled samples n proportional to k
suffices to accurately cluster and to detect the support of Δμ. However, under various hardness conjectures, unless n ∝ k^2,
no polynomial time algorithm is able to even detect if the data came from
one or from two Gaussians (namely, distinguish between
Δμ = 0 and ‖Δμ‖ = O(1)).
In this work we study the model (<ref>) in a SSL setting,
given L labeled samples and n unlabeled samples, all i.i.d. from (<ref>).
Despite extensive works on the SL and UL settings for the model (<ref>),
the corresponding SSL setting has received relatively little attention so far. This gives rise to several questions:
On the theoretical front, what is the information lower bound for
accurate classification and for recovering the support of Δμ?
On the computational side, is there a computational-statistical gap in SSL?
In addition, are there values of L and n for which SSL is provably beneficial as compared to
SL and UL separately?
Our Contributions.
(i) We derive information theoretic lower bounds for exact support recovery in the SSL setting. As described in Section <ref>, our lower bounds characterize sets of values for the number of labeled and unlabeled samples,
where any estimator based on both types of data is unable to recover the support.
To derive these bounds, we view SSL as a data fusion problem
involving the merging of
samples that come from two measurement modalities: the labeled set and the unlabeled set.
In Theorem <ref> we present a general non-asymptotic information-theoretic result for recovering a discrete parameter in this setting. This general result is applicable to other data fusion problems and may thus be of independent interest.
(ii) We present SSL computational lower bounds. These are based on
the low-degree likelihood ratio hardness conjecture <cit.>,
in an asymptotic setting where dimension p→∞ and
a suitable scaling of the sparsity k, and of the number of labeled and unlabeled samples.
Our main result is that there is a region
of the number of labeled and unlabeled samples,
whereby in a SSL setting, accurate classification and feature selection are computationally hard.
Our analysis extends to the SSL case previous computational lower bounds that were derived only in UL settings.
In particular, if the number of the labeled samples is too small then the statistical-computational gap still remains.
To the best of our knowledge, our work is the first to extend this framework to a SSL setting.
(iii) Building upon (i) and (ii), our key contribution is
the identification of a region where SSL is provably computationally advantageous for
classification and feature selection.
Specifically, in Section <ref> we develop a
polynomial time SSL algorithm, denoted LSPCA,
to recover the support of
Δμ and consequently construct a linear classifier.
We then prove that in a suitable region for the number of labeled and unlabeled samples, LSPCA succeeds in both feature selection and accurate classification.
In contrast, under the low degree ratio hardness conjecture,
any computationally efficient SL or UL schemes, that use only the labeled or unlabeled data separately, would fail.
In Section <ref> we show via simulations the superiority
of LSPCA, in both support recovery and classification error,
in comparison to several SL and UL methods, a self-training SSL scheme and the SSL method of <cit.>.
Figure <ref> summarizes the
picture emerging from our work in combination with previous papers that analyzed the
UL and SL settings of (<ref>) , namely, the x-axis and y-axis in Figure <ref>.
As in prior works we consider a fixed separation
λ = ‖Δμ‖_2^2/4 = O(1), where Δμ is k-sparse.
The
asymptotic setting is that
(k,L,n,p) all tend to infinity with the following scaling, which arises naturally for this problem (see Section <ref>):
the number of labeled samples is
L=⌊ 2k βlog(p)/λ⌋, the number of unlabeled samples scales as n ∝ k^γ/λ^2, and the sparsity scales
as k∝ p^α for some α∈(0,1/2).
The figure shows different regions in the (γ,β) plane, namely as a function of the number of unlabeled and labeled samples, where classification and feature selection are either impossible, hard or computationally easy.
We say that classification is impossible if
for any classifier there is a k-sparse vector Δ whose corresponding accuracy is no better than random.
Similarly, we say that feature selection is impossible if for any estimator Ŝ of size k
there is a k-sparse Δ with support S such that
|Ŝ∩ S|/k→ 0 as p→∞. Feature selection is easy if it is possible to construct in polynomial time a set Ŝ of size k such that |Ŝ∩ S|/k→ 1. This implies that the corresponding classifier has an excess risk that asymptotically tends to zero.
The green region γ≥ 2 follows from <cit.>, since in this case support estimation is computationally feasible using only the unlabeled data.
The region depicted in red is where classification and support recovery are impossible.
The impossibility of support recovery follows from <cit.>, who proved that support recovery is feasible if and only if β>1-α.
The same condition holds for classification as well, as described in the supplement.
The orange and blue regions in Figure <ref> follow from
novel results of this paper.
In the orange region, defined as β<1-α and 1<γ<2, our computational lower bound in Theorem <ref> suggests that any polynomial-time scheme will not succeed in accurate classification.
In the blue region, characterized by β∈(1-γα, 1-α) and 1<γ<2,
our proposed polynomial time SSL method is guaranteed to
construct an accurate classifier. This is proven in Theorem <ref>.
In addition, note that in this regime, the availability of unlabeled data allows to decrease the number of labeled samples by a factor of (1-α)/(1-γα).
Under the low degree hardness conjecture,
in this blue region no computationally efficient SL or UL method that separately analyze the labeled or unlabeled samples, respectively, would succeed.
We conjecture that in the remaining white region, no polynomial-time algorithm exists that is able to recover the support or able to construct an accurate classifier.
In summary, our work highlights the provable computational benefits of combining labeled and unlabeled data for classification and feature selection in a high dimensional sparse setting.
Notation
For an integer p, we write [p] = {1,...,p}.
The cardinality of a set B is |B|.
For a vector v ∈ℝ^p, we denote its restriction to a subset T⊂[p] by v|_T.
For vectors a, b, their inner product is ⟨ a, b ⟩,
and ‖a‖ denotes the ℓ_2 norm of a.
We say that f(p)=ω(g(p)) if for any c > 0, there exists p_0 ≥ 1
such that f(p) > c g(p) for every p ≥ p_0.
§ THEORETICAL RESULTS
In this section we present our first two contributions, namely
an information-theoretic lower bound for exact support recovery of Δμ, and a computational lower bound for classification and support recovery, in a SSL setting.
To this end, in Section <ref>
we first review lower bounds for SL and UL settings. As we were not able to find these precise results in the literature, for completeness we present their proofs, based on Fano's inequality, in the supplementary.
Our main contribution here, described in Section
<ref>, is a SSL lower bound. To derive it, we view SSL as a data fusion problem with two types of data (the labeled set and the unlabeled set). The SSL lower bound then follows by a combination of the lower bounds for SL and UL.
To derive lower bounds, it suffices to consider a specific instance of (<ref>), where the two Gaussian means are symmetric around the origin, with μ_1 = - μ_-1 = μ. Hence,
y ∼Unif{± 1}, x | y ∼𝒩(yμ , I_p).
Here, μ∈ℝ^p is an unknown k-sparse vector with ℓ_2 norm of √(λ).
We denote its support by S = supp(μ) = {i|μ_i≠ 0},
and by 𝕊 the set of all
\binom{p}{k} possible k-sparse support sets.
We denote by 𝒟_L = {(x_i, y_i)}_i=1^L and 𝒟_n = {x_i}_i=L+1^L+n the i.i.d. labeled and the unlabeled datasets, respectively.
To derive information and computational lower bounds for support recovery, it is necessary to
impose a lower bound on min_i∈ S |μ_i|.
As in <cit.>, it suffices to study the set of most difficult k-sparse vectors with such a lower bound on their entries. In our case this translates to the nonzero entries of μ belonging to {±√(λ/k)}.
Clearly, if some signal coordinates had magnitudes larger that √(λ/k), then the problem of detecting them and constructing an accurate classifier would both be easier.
Throughout our analysis, we assume μ is of this form and the sparsity k is known.
All proofs appear in the supplementary.
§.§ Information Lower Bounds (Supervised and Unsupervised)
The next theorem states a non-asymptotic result for exact support recovery in the SL case.
Fix δ∈(0,1).
For any (L,p,k) such that
L < 2(1-δ)k/λlog(p-k+1),
and for any support estimator Ŝ based on 𝒟_L,
it follows that
max_S ∈𝕊 ℙ(Ŝ≠ S) > δ - log 2/log(p-k+1).
<cit.> proved a similar result in an asymptotic regime.
Specifically, they proved that for k = p^α and L= 2 β k/λlog p, approximate support recovery is possible if and only if β>1-α.
Theorem <ref> states a stronger non-asymptotic result for exact support recovery.
It implies that even if β>1-α, it is still impossible to recover the exact support with probability tending to one if β<1.
Next we present
an information lower bound for exact support recovery in UL.
Here we observe n vectors _i from (<ref>) but not their labels y_i.
Fix δ∈ (0,1).
For any (n,p,k) →∞ with k/p→ 0 and
n< 2(1-δ)k/λ^2log(p-k+1)max{1, λ},
for any support estimator Ŝ based on 𝒟_n,
then, as p→∞,
max_S ∈𝕊 ℙ(Ŝ≠ S) ≥δ.
The scaling in Eq. (<ref>)
appeared in several prior works on related problems.
<cit.> showed that for λ<1, with number of samples n<C k/λ^2log(p/k), no clustering method can achieve accuracy better than random.
<cit.>
studied hypothesis
testing whether the data came from a single Gaussian
or from a mixture of two Gaussians.
In Proposition 3 of their paper, they proved that for n ≤k/λ^2log(ep/k) max{1, λ}, any testing procedure is asymptotically powerless. Note that for k = p^α with α < 1, the lower bound derived in <cit.> has a similar form to (<ref>) with a factor of 1 - α, which is slightly smaller.
Thus the bound in (<ref>) is sharper.
§.§ Semi-Supervised Setting
In the SSL case, the observed data consists of two subsets, one with L labeled samples and the other with n unlabeled ones.
We now develop information-theoretic and computational lower bounds for this setting.
The information lower bound is based on the
results in Section <ref>
for SL and UL settings.
The computational lower bound relies on the low-degree likelihood hardness conjecture.
Over the past 10 years, several authors studied statistical-computational gaps for various high dimensional problems.
For the sparse Gaussian mixture (<ref>) both <cit.> and
<cit.> derived such gaps in an UL setting.
To the best of our knowledge, our work is amongst the first to explore computational-statistical gaps in a SSL setting. Our analysis, described below, shows that with relatively few labeled samples, the computational statistical gap continues to hold. In contrast, as we describe in
Section <ref>, with a sufficiently large number of labeled samples, but not enough so solve the problem using only the labeled set, the computational-statistical gap is resolved.
In particular, we present a polynomial time SSL algorithm that bridges this gap.
Information Lower Bounds.
Before presenting results for the mixture model
(<ref>), we analyze a more general case.
We study the recovery of a latent variable S that belongs to a large finite set 𝕊, given
measurements from
two different modalities.
Formally,
the problem is to recover S from two independent sets of samples {x_i}_i=1^N and {z_j}_j=1^M of the following form,
{x_i}_i=1^N ∼ f_x(x|S), {z_j}_j=1^M ∼ g_z(z|S).
Here,
f_x(x|S) and g_z(z|S) are known probability density functions.
These functions encode information on S from the two types of measurements.
In our SSL setting,
x represents an unlabeled sample, whereas z=(x,y) a labeled one,
and S is the unknown support of μ.
Our goal is to derive information lower bounds for this problem.
To this end, we assume that S
is a random variable uniformly distributed over a finite set 𝕊,
and denote by I_x = I(x; S) and I_z = I(z; S) the mutual information of x with S and of z with S, respectively.
Further, recall a classical result in information theory that to recover S from N i.i.d. samples of x, N must scale at least linearly with log |𝕊|/I_x.
A similar argument applies to z.
For further details, see <cit.>.
The following theorem states a
general non-asymptotic information-theoretic result for recovering S from the above two sets of samples.
Hence, it is applicable to other problems involving data fusion from multiple sources
and may thus be of independent interest.
Fix δ∈(0,1).
Let N, M be integers that satisfy
max{N· I_x, M· I_z }< (1-δ)log|𝕊|.
Let N_q = ⌊ q N⌋ and M_q = ⌊(1-q) M⌋, for q∈[0,1].
Then, any estimator Ŝ based on {x_i}_i=1^N_q and {z_j}_j=1^M_q satisfies
ℙ(Ŝ≠ S) > δ - log 2/log |𝕊|.
This theorem implies that with any convex combination of samples from the two modalities, q N from the first and (1-q)M from the second, accurate recovery of S is not possible if N and M are
both too small.
Essentially this follows from the additivity of mutual information.
Combining Theorem <ref> with
the proofs of
Theorems <ref> and <ref> yields the following information lower bound for the semi-supervised case.
Let 𝒟_L and 𝒟_n
be sets of L and n i.i.d. labeled and unlabeled samples from the mixture model (<ref>).
Fix δ∈ (0,1).
Let (L_0,n_0,p,k) →∞, with k/p→ 0, be such that
L_0 < 2(1-δ)k/λlog(p-k+1),
n_0< 2(1-δ)k/λ^2log(p-k+1)max{1, λ}.
Suppose the number of labeled and unlabeled samples satisfy L = ⌊ q L_0⌋ and n = ⌊(1-q)n_0⌋ for
some q∈[0,1].
Then, for any estimator Ŝ based on 𝒟_L ∪𝒟_n,
as p→∞
max_S ∈𝕊 ℙ(Ŝ≠ S) ≥δ.
Computational Lower Bounds.
Our SSL computational lower bound is based on the low-degree framework, and its associated hardness conjecture <cit.>.
This framework was used to derive computational lower bounds for various unsupervised
high dimensional problems including sparse-PCA and sparse Gaussian mixture models
<cit.>.
To the best of our knowledge, our work is the first to adapt this framework to a SSL setting.
For our paper to be self-contained, we first briefly describe this framework and its hardness conjecture. We then present its adaptation to our SSL setting.
The low degree likelihood framework focuses on unsupervised detection problems, specifically
the ability to distinguish between two distributions ℙ and ℚ, given n
i.i.d. samples. Specifically, denote the null distribution of n samples by ℚ_n,
whereby all x_i∼ℚ, and denote by ℙ_n the alternative distribution, with
all x_i∼ℙ.
Under the low-degree framework, one
analyzes how
well can the distributions ℙ_n and ℚ_n be distinguished by a low-degree
multivariate
polynomial f: ℝ^p× n→ℝ.
The idea is to construct a polynomial f
which attains large values
for data from ℙ_n and small values
for data from ℚ_n.
Specifically, the following metric
plays a key role in this framework,
ℒ_n^≤ D := max_deg(f) ≤ D 𝔼_X ∼ℙ_n[ f(X)]/√(𝔼_X ∼ℚ_n[ f(X)^2]),
where the maximum is over polynomials f of degree at most D.
The value ℒ_n^≤ D
characterizes how well degree-D polynomials can distinguish ℙ_n from ℚ_n.
If ℒ_n^≤ D = O(1), then ℙ_n and ℚ_n cannot be distinguished via a degree-D polynomial.
Computational hardness results that use the low-degree framework are based on the following conjecture, which we here state informally, and refer the reader to
<cit.>
for its precise statement.
Let ℚ_n and ℙ_n be two distinct distributions.
Suppose that there exists D = ω(log(pn)) for which ℒ_n^≤ D remains bounded as p →∞.
Then, there is no polynomial-time test T:ℝ^p× n→{0,1} that satisfies
𝔼_X ∼ℙ_n[T(X)] + 𝔼_X ∼ℚ_n[1-T(X) ] = o(1).
In simple words, Conjecture <ref> states that
if ℒ_n^≤ D = O(1) as p→∞, then it is not possible to distinguish between ℙ_n and ℚ_n using a polynomial-time algorithm, as no test has both a low false alarm as well as a low mis-detection rate (the two terms in the equation above).
We now show how to extend this framework, focused on unsupervised detection, to our SSL setting.
To this end, consider L+n samples, distributed according to either
a null distribution ℚ_L+n or
an alternative distribution ℙ_L+n.
In our case, the null distribution
is
ℚ_L+n: x_i = ξ_i∼𝒩(0, I_p) , i∈[L+n],
whereas the alternative belongs to the following set of distributions,
ℙ_L+n: x_i = μ^S + ξ_i , i∈[L],
x_i = y_i μ^S + ξ_i , L< i≤ L+n.
Here, S is uniformly distributed over 𝕊, μ^S(j) = √(λ/k)· 1{j∈ S}, and y_i are unobserved Rademacher
random variables.
The next theorem presents a low-degree bound
for our SSL testing problem.
The scalings of L,n and k with p and λ are motivated by those appearing in Theorems <ref> and <ref>.
Let k=⌊ c_1 p^α⌋, L = ⌊2β k /λlog (p-k)⌋,
n = ⌊ c_2 k^γ/λ^2⌋ and D = (log p)^2, for some β,γ,λ,c_1,c_2 ∈ℝ_+ and α∈ (0, 1/2).
With the null and alternative distributions defined in (<ref>) and (<ref>),
if β<1/2 - α and γ<2, then as p→∞
ℒ_L+n^≤ D^2 = O(1).
Theorem <ref> together with the hardness conjecture <ref> extends to the SSL case previous computational lower bounds that were derived only in
UL settings (β=0)
<cit.>.
Next, we make several remarks regarding the theorem.
SSL statistical-computational gap.
In the rectangular region β<1/2-α and 1<γ<2, depicted in orange in Figure <ref>, under the hardness conjecture <ref>, distinguishing between ℙ and ℚ is computationally hard.
Since testing is easier than variable selection and classification
<cit.>,
in this region these tasks are computationally hard as well.
Tightness of condition γ<2 in
Theorem <ref>.
This condition is sharp, since
for
γ≥2,
namely n≳k^2/λ^2,
the support can be recovered by a polynomial-time algorithm, such as
thresholding the
covariance matrix followed by PCA,
see <cit.>
and <cit.>.
Tightness of condition β<1/2-α. This condition is tight for detection, though not necessarily for feature selection or classification. The reason is that
for β>1/2-α, it is possible to distinguish between ℙ
and ℚ, using only the labeled data <cit.>.
Combining Theorems <ref>-<ref> leaves a rectangular region 1< γ < 2 and 1/2-α < β < 1-α where SSL support recovery is feasible from an information viewpoint, but we do not know if it is possible in a computationally efficient manner. In the next section we present a polynomial time SSL method that in part of this rectangle, depicted in blue in Figure <ref>, is guaranteed to recover S
and construct an accurate classifier.
We conclude with the following conjecture regarding the remaining white region:
Let 𝒟_L, 𝒟_n be sets of L and n i.i.d. labeled and unlabeled samples from the model (<ref>).
Assume, as in Theorem <ref> that
k∝ p^α, L = ⌊2β k/λlog(p-k)⌋ and n ∝k^γ/λ^2.
Then in the white region depicted in Figure <ref>, no polynomial-time algorithm is able to recover the support S or to construct an accurate classifier.
§ SEMI-SUPERVISED LEARNING SCHEME
We present an SSL scheme, denoted LSPCA, for the model (<ref>), that
is simple and
has polynomial runtime.
In subsection <ref> we prove that in the blue region of Figure <ref> it recovers the support, and thus constructs an accurate
classifier. In this region, under the hardness conjecture <ref>, computationally efficient algorithms that rely solely on either labeled or unlabeled data would fail.
Preliminaries. To motivate the derivation of LSPCA, we first briefly review some properties of the sparse model (<ref>).
First, note that the covariance
matrix of x is Σ_x = (1/4)ΔμΔμ^⊤ + I_p. This is a
rank-one spiked covariance model, whose leading eigenvector is Δμ, up to a ± sign.
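This spiked form follows from the law of total covariance: since 𝔼[x|y] = μ_y and Cov(x|y) = I_p, we have
Σ_x = 𝔼[Cov(x|y)] + Cov(𝔼[x|y]) = I_p + (1/4)(μ_1-μ_-1)(μ_1-μ_-1)^⊤ = I_p + (1/4)ΔμΔμ^⊤,
whose unique eigenvalue larger than one equals 1 + ‖Δμ‖_2^2/4, with eigenvector Δμ/‖Δμ‖_2.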
Hence, with enough unlabeled data, Δμ can be estimated by
vanilla PCA on the sample covariance or by some sparse-PCA procedure taking advantage of the sparsity of Δμ.
Unfortunately, in high dimensions with a limited number of samples, these procedures may output quite inaccurate estimates, see for example
<cit.>.
The main idea of our approach is to run these procedures after an initial variable screening step that uses the labeled
data to reduce the dimension.
§.§ The Scheme
Our SSL scheme, denoted LSPCA, stands for Label Screening PCA.
As described in Algorithm 1, LSPCA has two input parameters: the sparsity k and a
variable screening factor
β̃<1.
The scheme consists of two main steps:
(i) removal of noise variables using the labeled samples;
(ii) support estimation
from the remaining variables
using the unlabeled samples via PCA.
Finally, a linear classifier is constructed via the leading eigenvector of the covariance matrix on the estimated support.
The first stage screens variables using only the labeled samples.
While our setting is different, this stage is similar in spirit
to Sure Independence Screening (SIS), which was developed for high-dimensional regression <cit.>.
To this end, our scheme first constructs the vector,
w_L = 1/L_+∑_i:y_i=1 x_i -
1/L_-∑_i:y_i=-1 x_i ,
where L_+=|{i∈[L]:y_i=1}| and L_-=L-L_+.
With a balanced mixture ℙ(Y=±1) = 1/2, it follows that w_L ≈Δμ +
(2/√(L)) 𝒩(0, 𝐈_p).
Hence, w_L can be viewed as a noisy estimate of Δμ.
If the number of labeled samples were large enough, then the top k coordinates of w_L would coincide with the support of Δμ. With few labeled samples, while not necessarily the top-k, the entries of w_L at the support indices still have relatively large magnitudes.
Given the input parameter β̃>0, the scheme retains the indices that correspond to the
largest p^1-β̃ entries in absolute value of w_L.
We denote this set by S_L.
Note that for any β̃>0, this step significantly reduces the dimension
(as β̃>0 then p^1-β̃≪ p). In addition,
as analyzed theoretically
in the next section, for
some parameter regimes, this step
retains
in S_L (nearly all of) the k support indices.
These two properties are essential for the success of the second stage, which we now describe.
The second step estimates the support S using the unlabeled data.
Specifically, LSPCA constructs the sample covariance matrix restricted to the
set S_L,
Σ̂|_S_L = 1/n∑_i=L+1^n+L (x_i - x̄)|_S_L (x_i - x̄)|_S_L^⊤,
where x̄ = 1/n∑_i=L+1^n+L x_i is the empirical mean of the unlabeled data.
Next, it computes the leading eigenvector v̂_PCA of Σ̂|_S_L.
The output support set Ŝ
consists of the k indices
of v̂_PCA
with largest magnitude. Finally, the vector Δμ is (up to scaling) estimated by the leading
eigenvector of Σ̂ restricted to Ŝ, with its sign determined by the labeled data.
After the removal of variables in the first step, the input dimension to the second step is much lower, p̃ = p^1-β̃.
Despite this reduction in dimension, as long as the vector Δμ is sufficiently sparse with k≪p̃, or equivalently α < 1-β̃,
then in the second step our goal is still to find a sparse eigenvector. Hence, instead of vanilla PCA, we may replace the second step by
any suitable (polynomial time) sparse-PCA procedure.
We refer to this approach as LS-SPCA (Labeled Screening Sparse-PCA).
As illustrated in the simulations, for finite sample sizes, this can lead to improved support recovery and lower classification errors.
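To make the two-stage scheme concrete, here is a minimal numpy sketch of the plain-PCA variant. This is illustrative code of ours, not the authors' implementation; function and variable names such as lspca and beta_tilde are our own.

import numpy as np

def lspca(X_lab, y_lab, X_unlab, k, beta_tilde):
    # Stage 1: labeled screening via the vector w_L (difference of class means)
    p = X_lab.shape[1]
    w_L = X_lab[y_lab == 1].mean(axis=0) - X_lab[y_lab == -1].mean(axis=0)
    p_tilde = int(np.ceil(p ** (1.0 - beta_tilde)))
    S_L = np.argsort(-np.abs(w_L))[:p_tilde]        # retained coordinates

    # Stage 2: PCA on the sample covariance of the unlabeled data, restricted to S_L
    Z = X_unlab[:, S_L] - X_unlab[:, S_L].mean(axis=0)
    Sigma_hat = Z.T @ Z / Z.shape[0]
    _, eigvecs = np.linalg.eigh(Sigma_hat)
    v_pca = eigvecs[:, -1]                          # leading eigenvector

    # Support estimate: top-k coordinates of |v_pca|, mapped back to [p]
    S_hat = S_L[np.argsort(-np.abs(v_pca))[:k]]

    # Direction estimate: leading eigenvector of the covariance restricted to S_hat,
    # with its sign fixed using the labeled data
    Z2 = X_unlab[:, S_hat] - X_unlab[:, S_hat].mean(axis=0)
    _, V = np.linalg.eigh(Z2.T @ Z2 / Z2.shape[0])
    v_hat = np.zeros(p)
    v_hat[S_hat] = V[:, -1]
    if v_hat @ w_L < 0:
        v_hat = -v_hat
    return S_hat, v_hat

Replacing the eigendecomposition in the second stage with a sparse-PCA solver gives the corresponding LS-SPCA variant.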
§.§ Support Recovery and Classification Guarantees for LSPCA
Before presenting our main result, we first recall two standard evaluation metrics.
The classification error of a classifier C: ℝ^p →{±1} is defined as
ℛ(C) = ℙ(C(x)≠ y).
Its excess risk is defined as
ℰ(C) = ℛ(C) - ℛ^* = ℛ(C) - inf_C'ℛ(C').
As in <cit.>,
the accuracy of a support estimate Ŝ
is defined by its normalized overlap with the true support,
namely |Ŝ∩ S|/k.
To simplify the analysis, we focus on the symmetric setting where μ_1 = -μ_-1 = μ.
The next theorem presents theoretical guarantees for LSPCA, in terms of support recovery and the excess risk of the corresponding classifier.
Let 𝒟_L, 𝒟_n be labeled and unlabeled sets
of L and n i.i.d. samples
from
(<ref>)
with a k-sparse μ whose non-zero entries are ±√(λ/k).
Suppose that k= ⌊ c_1p^α⌋,
L = ⌊2β k/λlog(p-k)⌋, n = ⌊ c_2k^γ/λ^2⌋,
for some fixed
0<α <1/2, 0<β<1-α, γ>1 and λ,c_1,c_2 ∈ℝ_+.
Let Ŝ, v̂ be the output of Algorithm 1 with input k and screening factor β̃.
If β> 1-γα
and β̃∈ ( 1-γα, β), then
lim_p →∞ ℙ(|S ∩Ŝ|/k≥ 1-ϵ) = 1, ∀ϵ > 0,
and the excess risk of the corresponding classifier C(x) = sign(⟨v̂, x⟩) satisfies
lim_p →∞ℰ (C) = 0.
The interesting region where Theorem <ref> provides a non-trivial recovery guarantee
is the triangle depicted in blue in Figure <ref>.
Indeed, in this region, LSPCA recovers the support and constructs an accurate classifier.
In contrast, any SL algorithm would fail, and under the low degree hardness conjecture,
any computationally efficient UL scheme would fail as well.
To
the best of
our knowledge, our work is the first to rigorously prove the computational benefits of SSL, in bridging the computational-statistical gap in high dimensions.
As mentioned above, we conjecture that in the remaining white region
in Figure <ref>, it is not possible to construct in polynomial time an accurate SSL classifier.
The intuition underlying this conjecture is based on the work of <cit.>, where
in the fully supervised (SL) setting, the authors show that there is a detection-recovery gap.
Namely for a range of number of labeled samples, it is possible to detect that a sparse signal is present, but it is not possible to reliably recover its support. Intuitively, adding a few unlabeled samples should not resolve this gap.
§ SIMULATION RESULTS
We illustrate via several simulations some of our theoretical findings. Specifically,
we compare LSPCA and LS-SPCA to various SL, UL and
SSL
schemes, in terms of both accuracy of support recovery and classification error.
The sparse-PCA method used in LS-SPCA was the iterative proxy update method of <cit.>.
We generate L+n labeled and unlabeled samples according to the model (<ref>) with
a k-sparse μ whose non-zero entries are ±√(λ/k).
The quality of a support estimate Ŝ is measured by its normalized accuracy |Ŝ∩ S|/k.
For all methods compared we assume the sparsity k is known. Hence,
each method outputs a k-sparse unit norm vector μ̂, so its corresponding linear classifier is x ↦ sign(⟨μ̂, x⟩).
Given the model (<ref>), its generalization error is
Φ^c(⟨μ̂ ,μ⟩).
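For completeness, the following short snippet (ours, purely illustrative) draws data from the symmetric model with ±√(λ/k) entries and checks the formula Φ^c(⟨μ̂, μ⟩) against the empirical test error of a unit-norm linear classifier; with λ=3 the Bayes error Φ^c(√λ) is indeed ≈ 0.042.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
p, k, lam = 1000, 30, 3.0
mu = np.zeros(p)
supp = rng.choice(p, size=k, replace=False)
mu[supp] = rng.choice([-1.0, 1.0], size=k) * np.sqrt(lam / k)

# a generic unit-norm estimate mu_hat (here: a noisy, normalized copy of mu)
mu_hat = mu + 0.3 * rng.standard_normal(p)
mu_hat /= np.linalg.norm(mu_hat)

# empirical error of x -> sign(<mu_hat, x>) on fresh samples from the model
n_test = 10_000
y = rng.choice([-1, 1], size=n_test)
X = y[:, None] * mu + rng.standard_normal((n_test, p))
err_emp = np.mean(np.sign(X @ mu_hat) != y)

err_formula = norm.sf(mu_hat @ mu)   # Phi^c(<mu_hat, mu>)
bayes = norm.sf(np.sqrt(lam))        # Phi^c(sqrt(lambda)) ~ 0.042
print(err_emp, err_formula, bayes)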
We run our SSL schemes with β̃= β-(β-(1-γα))/4 which satisfies the requirements of Theorem <ref>.
We present experiments with p=10^5, k= p^0.4=100 and λ=3, though the behavior is similar for other settings as well.
The error of the Bayes classifier is Φ^c(√(λ))≈ 0.042.
We report the average (with ± 1 standard deviation)
of the support recovery accuracy and the classification error
over M=50 random realizations.
All experiments were run on an Intel i7 CPU 2.10 GHz.
We empirically evaluate the benefit of L=200 labeled samples in addition to n unlabeled ones.
We compare our SSL schemes LSPCA and LS-SPCA to the following UL methods, taking all L+n samples as unlabeled:
<cit.>, and <cit.>.
The SSL methods that we compare are <cit.>, and self-training.
The self-training algorithm is
similar to the approach in
<cit.>, but explicitly accounts for the known sparsity k: (i) compute w_L of (<ref>) using the labeled samples, and keep its k largest entries,
denote the result by w_L^(k); (ii)
compute the dot products c_i = ⟨w_L^(k), x_i⟩ and the pseudo-labels ỹ_i = sign(c_i);
(iii) let n_eff be the cardinality of the set {i: |c_i|> Γ}, for some threshold value Γ≥0;
(iv) estimate the support by the top-k coordinates in absolute value of the following vector:
w_self = 1/(L+n_eff) (∑_i=1^L y_i x_i + ∑_i=L+1^L+n 1{|c_i|>Γ}ỹ_i x_i )
In the experiments we used Γ = 0.8, which gave the best
performance.
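A compact sketch of this baseline (our illustrative code, following steps (i)-(iv) above; names such as self_train_support are hypothetical) is given below.

import numpy as np

def self_train_support(X_lab, y_lab, X_unlab, k, gamma=0.8):
    # (i) difference of class means, truncated to its k largest entries (in magnitude)
    w_L = X_lab[y_lab == 1].mean(axis=0) - X_lab[y_lab == -1].mean(axis=0)
    w_Lk = np.zeros_like(w_L)
    top = np.argsort(-np.abs(w_L))[:k]
    w_Lk[top] = w_L[top]

    # (ii) dot products and pseudo-labels on the unlabeled samples
    c = X_unlab @ w_Lk
    y_tilde = np.sign(c)

    # (iii) keep only confident pseudo-labels, |c_i| > Gamma
    conf = np.abs(c) > gamma
    n_eff = int(conf.sum())

    # (iv) combine labeled and confident pseudo-labeled samples
    w_self = (X_lab.T @ y_lab + X_unlab[conf].T @ y_tilde[conf]) / (len(y_lab) + n_eff)
    return np.argsort(-np.abs(w_self))[:k]   # estimated support: top-k of |w_self|

The SL baseline in the comparison corresponds to returning the top-k coordinates of |w_L| directly.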
Also, we implemented the SL scheme that selects the indices of the top-k entries of | w_L| of (<ref>).
As shown in the supplementary,
this is the maximum-likelihood estimator for S based on the labeled data.
Figure <ref>
illustrates our key theoretical result - that in certain cases SSL
can yield accurate classification and feature selection where SL and UL simultaneously fail.
The left panel of Figure <ref> shows the average accuracies of support estimation for the different schemes as a function of number of unlabeled samples n.
Except at small values of n, our SSL schemes achieved the best accuracies out of all methods compared.
The right panel shows the classification errors of the different methods.
The black horizontal line is the error of the Bayes optimal classifier.
As seen in the figure, our SSL schemes come close to the Bayes error while SL and UL schemes have much higher errors.
We present further experiments that empirically illustrate the benefit of using a fixed number of n=1000 unlabeled samples while varying the number of labeled samples L.
Specifically, we compare our SSL algorithms LSPCA and LS-SPCA to the two SSL baselines described above, as well as to the SL scheme, which uses only the L labeled samples.
Figure <ref> illustrates the support recovery accuracies and the classification error as a function of the number of labeled samples L.
As seen in the figure, adding n=1000 unlabeled samples significantly improves the classification and support recovery accuracies.
§ SUMMARY AND DISCUSSION
In this work, we analyzed classification of a mixture of two Gaussians in a sparse high dimensional setting. Our analysis highlighted provable computational benefits of SSL.
Two notable limitations of our work are that we studied a mixture of only two components, both of which are spherical Gaussians. It is thus of interest to extend our analysis to more components and to other distributions.
From a broader perspective, many SSL methods for
feature selection have been proposed and shown empirically to be beneficial,
see for example the review by <cit.>. An interesting open problem is to theoretically prove their benefits,
over purely SL or UL. In particular it would be interesting to find cases where SSL improves over both SL and UL in its error rates, not only computationally.
§ ACKNOWLEDGEMENTS
The research of B.N. was partially supported by ISF grant 2362/22.
B.N. is an incumbent of the William Petschek professorial chair of mathematics.
§ AUXILIARY LEMMAS
We first present several auxiliary lemmas used to prove our theorems. We denote the complement of the standard normal cumulative distribution function by Φ^c(t) = (Z > t), where Z ∼𝒩(0, 1). The following lemma states a well known upper bound on Φ^c.
For any t>1,
Φ^c(t) ≤1/√(2π)t e^-t^2/2.
Suppose {x_i}_i=1^n are i.i.d. Bernoulli random variables, with ℙ[x_i = 1]=q.
Let X denote their sum.
Then, for any δ≥0,
ℙ(X ≥(1+δ) nq) ≤ e^-δ^2 nq/(2+δ),
and for any δ∈ [0,1],
ℙ(X ≤(1-δ) nq) ≤ e^-δ^2 nq/2.
A common approach to prove lower bounds is using Fano's inequality. Here, we use the
following version of Fano’s lemma, see <cit.>.
Let θ be a random variable uniformly distributed over a finite set Θ.
Let z_1, z_2,…, z_n be n i.i.d. samples from a density f(z|θ).
Then, for any estimator θ̂(z_1,…,z_n) ∈Θ,
ℙ(θ̂≠θ) ≥ 1 - (I(θ; Z^n) + log 2)/log |Θ|,
where I(θ; Z^n) is the mutual information between θ and the samples Z^n = (z_1,z_2,…, z_n).
In our proofs we use several well known properties of the entropy function.
For convenience we here state some of them. First, we recall the explicit expression for
the entropy of a multivariate Gaussian.
Let x ∼𝒩(μ, Σ).
Then, its entropy is given by
H(x) = p/2(1+log(2π)) + 1/2 logdet(Σ).
Next, the following lemma states that the multivariate Gaussian distribution maximizes the
entropy over all continuous distributions with the same covariance
<cit.>.
Let x be a continuous random variable with mean μ∈ℝ^p and covariance Σ∈ℝ^p× p, and let y ∼𝒩(μ, Σ).
If the support of x is all of ℝ^p then
H(x) ≤ H(y).
The next lemma states the sub-additive property of the entropy function.
Let x and y be jointly distributed random variables. Then,
H(x, y) ≤ H(x) + H(y).
To prove Theorem <ref> we use the following lemma.
Let λ∈ (0,1), and let k be a positive integer.
Then, for w ∼ N(0, (k-1)/k),
𝔼[tanh(λ + √(λ)w) - (1/2)tanh^2(λ +√(λ)w)] ≤ (1/2)(λ +
3√(λ/k))
Let q(w) = tanh (λ + √(λ)w ) - tanh^2 (λ + √(λ)w ).
In terms of q(w), the left hand side of (<ref>) may be written as follows
[tanh (λ + √(λ)w ) - 1/2tanh^2(λ +√(λ)w )] =
1/2([tanh (λ + √(λ)w) ] +[q(w)]).
We now upper bound the two terms in the RHS of (<ref>).
We start by showing that [q(w)]≤ 3√(λ/k).
Let L_q = max_w∈ℝ|q'(w)| be the Lipschitz constant of the function q(w).
It is easy to show that L_q ≤ 3√(λ).
Let z∼(0,1) be independent of w.
Using the first order Taylor expansion yields
_w,z[q(w ) - q(w +1/√(k)z)] ≤ L_q1/√(k)_z[|z|] =
3√(λ/k)√(2/π)≤ 3√(λ/k).
Next, note that (w +1/√(k)z) ∼ N(0,1).
Therefore
_w,z[q(w +1/√(k)z)]
= ∫_-∞^∞ q(v) e^-v^2/2/√(2π)dv
= ∫_-∞^∞(tanh (λ + √(λ)v ) - tanh^2 (λ + √(λ)v )) e^-v^2/2/√(2π)dv.
Making a change of variables x = λ + √(λ)v, gives
_w,z[q(w +1/√(k)z)] = ∫_-∞^∞(tanh (x) - tanh^2 (x)) e^-(x-λ)^2/2λ/√(2πλ)dx.
Define t_λ(x) = (tanh (x) - tanh^2 (x)) e^-(x-λ)^2/2λ/√(2πλ).
Note that t_λ (x) is an absolutely integrable function which satisfies t_λ(x) = -t_λ(-x) for any λ.
Therefore, the above integral is equal to zero, namely ∫_-∞^∞ t_λ (x)dx = 0.
Inserting this into (<ref>) gives
_w[q(w )] ≤ 3√(λ/k).
Next, we upper bound the first term on the RHS of Eq. (<ref>).
Denote by f_W(w) the probability density function of w∼(0, k-1k).
Then, writing the expectation explicitly gives
[tanh (λ + √(λ)w )]= ∫_-∞^∞tanh (λ + √(λ)w ) f_W(w) dw.
Making the change of variables x = λ + √(λ)w yields
[tanh (λ + √(λ)w )]
= 1/√(λ)∫_-∞^∞tanh(x)
f_W(x-λ√(λ))dx
= 1/√(λ)∫_0^∞tanh(x)
(f_W(x-λ√(λ))- f_W(x+λ√(λ)) )dx.
Since λ>0, it follows that f_W(x-λ√(λ))- f_W(x+λ√(λ)) ≥ 0, for all x≥ 0.
Therefore, to bound this expectation it suffices to construct a function g(x) such that g(x) ≥tanh(x) for any x≥0 and for which we can compute explicitly the corresponding integral.
Consider the function g(x) = x. It is well-known that x≥tanh(x), for all x≥0.
Thus,
[tanh (λ + √(λ)w )]
≤1/√(λ)∫_0^∞x
(f_W(x-λ√(λ))- f_W(x+λ√(λ)) )dx
= 1/√(λ)∫_-∞^∞x
f_W(x-λ√(λ))dx
Substituting w = (x-λ)/√(λ), yields
[tanh (λ + √(λ)w )]≤∫_-∞^∞ (λ + √(λ)w ) f_W(w) dw = [ λ + √(λ)w ] = λ.
Combining (<ref>),(<ref>) and (<ref>) gives
[tanh (λ + √(λ)w ) - 1/2tanh^2(λ +√(λ)w )] ≤1/2(λ+ 3√(λ/k)).
Consider a sequence of classification problems of the form,
y ∼Unif{± 1}, x|y ∼ N(yμ, 𝐈_p),
where the dimension p →∞ but ‖μ‖ =√(λ) is fixed.
Let μ̂_p be a sequence of unit-norm estimators and T_p(x) = sign(⟨μ̂_p, x⟩) the corresponding classifier.
Assume that for every ϵ>0
lim_p →∞ ℙ(⟨μ/√(λ), μ̂_p⟩>1-ϵ ) = 1.
Then, the excess risk of the classifier T_p tends to zero as p tends to infinity.
By definition, the excess risk of the classifier T_p that corresponds to μ̂_p can be written as
ℰ(T_p) = ℙ_ξ∼ N(0, 𝐈_p)(⟨μ̂_p, μ + ξ⟩ <0 ) - Φ^c(√(λ)).
Since ξ is independent of μ̂_p and
μ̂_p has unit norm, then z=⟨μ̂_p, ξ⟩∼𝒩(0,1), and the excess risk may be written as
ℰ(T_p) = 𝒫(z > ⟨μ̂_p, μ⟩) - Φ^c(√(λ)).
Let ϵ>0, and consider the event 𝒜_ϵ = {⟨μ/√(λ), μ̂_p⟩>1-ϵ}.
On the event 𝒜_ϵ we have ⟨μ̂_p, μ⟩ > √(λ)(1-ϵ), and hence
ℰ(T_p) ≤Φ^c(√(λ)(1-ϵ)) - Φ^c(√(λ)),
whereas on the complement of 𝒜_ϵ we only use the trivial bound ℰ(T_p) ≤ 1.
Since by assumption 𝒫(𝒜_ϵ)→ 1 for any ϵ>0, then
the excess risk tends to zero as p →∞.
§ LOWER BOUNDS - PROOFS OF RESULTS IN SECTION <REF>
§.§ Proof of Theorem <ref>
Our proof relies on Fano's inequality, and is conceptually similar to the proof of Theorem 3 in <cit.>.
First, note that for any sub-collection 𝕊̃⊂𝕊, we have the following
max_S ∈𝕊(Ŝ≠ S)≥1/|𝕊̃|∑_S ∈𝕊̃(Ŝ≠ S).
The right hand side in the above display is the error probability of an estimator
Ŝ, where S is considered as a random variable uniformly distributed over the set
𝕊̃. In other words, the right hand side may be written as follows,
( error) =
∑_s ∈𝕊( Ŝ≠ s | S=s) ·(S=s).
In our proof, we consider the following sub-collection
𝕊 := {T ∈𝕊: 1, …, k-1∈ T },
which consists of all k-element subsets that contain the first k-1 support indices {1,… ,k -1}
and one from {k,…,p}.
To lower bound the probability of error, we focus on a specific class of means: given the support S, the mean entries have the form
μ_j = √(λk){j ∈ S}.
So, with S known, μ is deterministic and we write it as μ^S.
In the proof we consider an equivalent model of (<ref>), whose observations are divided by √(λ),
x_i = y_i θ^S + σξ_i, i= 1,…,L ,
where θ^S(j) = (1/√(k))· 1{j ∈ S}, σ = 1/√(λ) and ξ_i ∼𝒩(0, 𝐈_p).
Since for each observation x_i we also know its corresponding label
y_i∈{-1,1}, we may consider the following transformed observations
_i = y_i _i = θ^S + σξ̃_i, i= 1,…,L.
where ξ̃_i=y_i ξ_i has the same distribution as ξ_i.
Denote by X^L, ^L, Y^L the sets of L i.i.d. samples {_i}_i=1^L, {_i}_i=1^L and {y_i}_i=1^L, respectively.
To apply Fano's lemma,
we consider the joint mutual information I((X^L, Y^L); S ).
Note that,
I( (X^L, Y^L); S ) = I(( ^L, Y^L);S) =I( ^L;S),
where the last equality follows from the fact that the labels Y^L are independent of the support S and the observations ^L.
Hence, it is enough to consider the samples ^L from the model in (<ref>).
Let S be a subset chosen uniformly at random from 𝕊.
Then, from Lemma <ref>, it follows that
( error) ≥ 1 - I(^L; S) + log 2/log|𝕊| = 1 - I(^L; S) + log 2/log(p-k+1).
We now derive an upper bound on I(^L ; S).
First, from the relation between mutual information and conditional entropy, I(^L ; S) = H(^L) - H(^L|S).
By the sub-additivity of the entropy function H(^L) ≤ L H().
Also, since the samples _i are conditionally independent given S, the joint entropy H(^L|S) can be expressed as
H(^L|S) = ∑_i∈[L]H(_i|S) = L H(|S).
where is a single observation from the model (<ref>).
Therefore,
I(^L ; S) ≤ L (H() - H(|S)).
By the definition of conditional entropy,
H(|S) =
- ∑_s ∈𝕊 P(S=s) ∫
f( |S) log f(|S) d ,
where f(|S) the probability density function of a single random sample given S.
For any S ∈𝕊, the vector ( | S) is a p-dimensional Gaussian with
mean θ^S and covariance matrix σ^2 𝐈_p. Its entropy is independent of
its mean, and is given by p2(1 + log(2πσ^2) ).
Hence,
H(|S) = p/2(1 + log(2π) + log(σ^2) ).
The final step is to upper bound H(). To this end, note that is distributed as a mixture of (p-k+1) Gaussians, each centered at θ^S for S∈𝕊̃.
Let us denote its mean and covariance by ν_x = [] and Σ = [ (-ν_x)(- ν_x)^T], respectively.
By the maximum entropy property of the Gaussian distribution (Lemma <ref>),
and Eq. (<ref>) for the entropy of a multivariate Gaussian, we have
H() ≤ H(𝒩(ν_x,Σ)) =p/2(1 + log (2π) ) + 1/2logdet(Σ) .
Combining (<ref>), (<ref>) and (<ref>) gives
I(^L;S) ≤L/2(
log (Σ) - p logσ^2
) .
The following lemma, proved in Appendix <ref>, provides an upper bound for log (Σ).
Let Σ be the covariance matrix of the random vector of Eq. (<ref>), with the set S uniformly distributed on 𝕊 of Eq. (<ref>).
Then,
logdet(Σ) ≤ p log(σ^2) + (p-k+1)log(1+ 1/k(p-k+1)σ^2).
Substituting this upper bound into (<ref>) leads to
I(^L;S) = L/2 (p-k+1)log(1+ 1/k(p-k+1)σ^2)
≤L/2kσ^2 = Lλ/2k
where the last inequality follows from log(1+x) ≤ x, for all x>0.
Inserting (<ref>) into (<ref>), implies that a sufficient condition for the error probability to be greater than δ - log 2/log(p-k+1) is Lλ/k < 2(1-δ)log(p-k+1), which completes the proof.
§.§ Proof of Theorem <ref>
To prove Theorem <ref> we use the following lemma, proven in Appendix <ref>.
Let be a random vector from the model (<ref>),
with a vector μ^S, where the random variable S is uniformly distributed over the set 𝕊̃ of Eq. (<ref>), and let I(;S) be their mutual information.
Consider an asymptotic setting where p →∞ and k/p →∞.
Then, for λ <1, and for p and k sufficiently large with k/p sufficiently small
I(;S) ≤1/2λ^2/k(1+o(1)).
First, note that for λ≥1, the information lower bounds proven
in Theorem <ref> and those we aim to prove in Theorem <ref> coincide.
Clearly, Theorem <ref> which considers all possible estimators based on {(_i, y_i)}_i=1^L, includes in particular all unsupervised estimators that ignore the labels.
Hence, for λ≥ 1, Theorem <ref> follows from Theorem <ref>.
Therefore, we consider the case λ<1.
The proof, similar to that of Theorem 1, is also based on Fano’s inequality.
To lower bound the probability of error, we view S as a subset uniformly
distributed over 𝕊, where 𝕊 is the
sub-collection of support sets defined in (<ref>).
Then, using the same arguments as in the proof of Theorem 1,
max_S ∈𝕊(Ŝ≠ S)≥1/|𝕊̃|∑_S ∈𝕊̃(Ŝ≠ S) ≥ 1 - I(X^n;S) + log 2/log |𝕊|,
where X^n = (_1,...,_n) and I(X^n;S) is the mutual information between the n unlabeled samples and S.
We now derive an upper bound for I(X^n;S).
The sub-additivity of the entropy function, and
the fact that _i are conditionally independent given S=s, imply
I(X^n;S) ≤ n(H() - H( |S)) = n I(;S),
where is a single sample from the model (<ref>).
Hence, for p and k sufficiently large, with k/p sufficiently small, combining (<ref>) and (<ref>) gives
I(X^n;S) ≤nλ^2/2k(1+o(1)).
By Fano's bound in (<ref>), the error probability is at least δ if
n< 2(1-δ)k/λ^2 log(p-k+1)
.
§.§ Proof of Theorem <ref>
Recall that the random variable S is uniformly distributed over the set 𝕊.
For X^N_q = {x_i}_i=1^N_q and Z^M_q = {z_j}_j=1^M_q,
their combined mutual information with the random variable S is
I(X^N_q, Z^M_q; S) = H(X^N_q, Z^M_q) - H(X^N_q, Z^M_q|S).
Since the two sets of samples {x_i}_i=1^N_q, {z_j}_j=1^M_q are conditionally independent given S=s, then
I(X^N_q, Z^M_q; S) = H(X^N_q, Z^M_q) - H(X^N_q|S) - H(Z^M_q|S)
= H(X^N_q, Z^M_q) - N_q H(x|S) - M_q H(z|S).
By the sub-additivity property of the entropy function,
H(X^N_q, Z^M_q) ≤ H(X^N_q) + H(Z^M_q)
≤ N_q H(x) + M_q H(z).
Hence, combining (<ref>) and (<ref>) yields
I(X^N_q, Z^M_q; S) ≤
N_q· I_x + M_q· I_z.
Combining Fano's inequality with the upper bound in (<ref>) gives
ℙ(Ŝ≠ S) ≥ 1 - (N_q· I_x + M_q· I_z + log 2)/log|𝕊|.
Finally, since N_q· I_x + M_q· I_z ≤ q N· I_x + (1-q) M· I_z ≤ max{N· I_x, M· I_z} < (1-δ)log|𝕊| by assumption, it follows that ℙ(Ŝ≠ S)> δ - log 2/log|𝕊|
§.§ Proof of Corollary <ref>
To lower bound the error probability, we view S as a random variable uniformly distributed over
the discrete set 𝕊 defined in (<ref>).
By the same arguments as in the proofs of Theorems 1 and 2,
max_S ∈𝕊(Ŝ≠ S)≥_S ∼ U(𝕊)(Ŝ≠ S).
We apply Theorem <ref>, with the set _i of i.i.d. unlabeled samples from (<ref>),
and the second set z_i = (_i,y_i) of i.i.d. labeled samples from model (<ref>).
Next, in the proofs of Theorems <ref> and <ref>
the following upper bounds for I_z and I_x were derived,
I_z ≤λ/2k, I_x ≤ (λ/2k)·min{1,λ}·(1+o(1)).
Finally, by the conditions of the Corollary,
the number of labeled and unlabeled samples satisfy L = ⌊ q L_0⌋ and
n = ⌊(1-q) n_0⌋,
with L_0< 2(1-δ)k/λlog(p-k+1 ) and n_0 <2(1-δ)k/λ^2log(p-k+1 )max{1,λ}.
Hence, for sufficiently large p, combining these conditions with (<ref>) gives
L_0 · I_z, n_0 · I_x ≤
(1-δ)log(p-k+1)
Therefore, by Theorem <ref> the error probability is at least δ.
§.§ Proof of Theorem <ref>
Even though we analyze a SSL setting, the observed data still belongs to the
additive Gaussian noise model (see Section 2 in <cit.>).
This key point allows to simplify the low-degree norm in our setting.
Specifically, let Z = (_1, …, _L+n) be the following set of random vectors, which
are the noise-free underlying signals from the alternative ℙ_L+n
of Eq. (<ref>) in the main text,
_i = { μ^S , i∈[L],
y_i μ^S , L< i≤ L+n.
In the equation above, S is uniformly distributed on 𝕊
(the set of all size-k subsets over p variables),
μ^S is a k-sparse vector with support S and non-zero entries √(λ/k) and y_i are Rademacher random variables.
Similarly, let Z̃ = (_1, …, _L+n) be an independent set of
the underlying noise-free signals, with a possibly different support S̃, and independent labels _i.
Then, by Theorem 1 in <cit.> the low degree norm ℒ_L+n^D^2 can be expressed as
ℒ_L+n^D^2 = _Z,[∑_d=0^D
1/d!( ∑_i=1^L+n⟨_i, _i ⟩)^d
].
Inserting the expressions for _i and _i into the equation above, gives
ℒ_L+n^D^2
= ∑_d=0^D
1/d![
(
∑_i=1^L⟨μ^S , μ^S̃⟩
+
∑_i=L+1^L+n⟨ y_iμ^S , _iμ^S̃⟩)^d
],
where the expectation is over the two random sets S,S̃ and over the random labels y_i and _i.
Since all these random variables are independent, the right hand side above simplifies to
∑_d=0^D
1/d![
(
L ⟨μ^S, μ^S̃⟩ +
∑_i=L+1^L+n y_i _i⟨μ^S, μ^S̃⟩)^d
]
= ∑_d=0^D
1/d![(
⟨μ^S, μ^S̃⟩(L +
∑_i=L+1^L+n y_i _i
)
)^d
] .
Denote R_i = y_i_i, and note that R_i are Rademacher random variables, independent of S and S̃.
Thus, the expectation above can be factored into the product of two separate expectations,
ℒ_L+n^D^2 = ∑_d=0^D
1/d!_S,S̃[
⟨μ^S, μ^S̃⟩ ^d ]
_{R_j}_j[(L +
∑_i=L+1^L+nR_i
)^d
].
We now separately analyze each of these two expectations, starting from the second one.
By the Binomial formula,
[(L +
∑_i=L+1^L+nR_i
)^d
] =
∑_ℓ = 0^ddℓL^d-ℓ[
(
∑_i=L+1^L+nR_i
)^ℓ ] .
Note that for any odd integer ℓ, the ℓ-th moment of the Rademacher's sum is zero. Therefore,
[(L +
∑_i=L+1^L+nR_i
)^d
]
= ∑_ℓ = 0^⌊d/2⌋d2ℓL^d-2ℓ[
(
∑_i=L+1^L+nR_i
)^2ℓ ] .
As analyzed in <cit.>
[
(
∑_i=L+1^L+nR_i
)^2ℓ ]
≤ n^ℓ (2ℓ - 1)!!
,
where (2ℓ -1 )!! = (2ℓ - 1)(2ℓ - 3)⋯ 3· 1 =
(2ℓ)!/2^ℓ ℓ!. Thus,
[(L +
∑_i=L+1^L+nR_i
)^d
]
≤ L^d∑_ℓ = 0^⌊d/2⌋d!/(d-2ℓ)! ℓ!(n/2L^2)^ℓ
= L^d∑_ℓ = 0^⌊d/2⌋dℓ(n/2L^2)^ℓ(d-ℓ)!/(d-2ℓ)!.
Since (d-ℓ)!/(d-2ℓ)!
= (d-ℓ)⋯ (d-2ℓ+1)≤ d^ℓ, then
[(L +
∑_i=L+1^L+nR_i
)^d
]
≤ L^d∑_ℓ = 0^⌊d/2⌋dℓ(n d2L^2)^ℓ
≤ L^d∑_ℓ = 0^ddℓ(n d2L^2)^ℓ
=L^d(1 +nd2L^2)^d
= (L + nd/2L)^d.
Next, we analyze the first expectation
_S,S̃[
⟨μ^S, μ^S̃⟩ ^d ]
in (<ref>).
Recall that μ^S_j = √(λ/k)· 1{j ∈ S}. Hence,
⟨μ^S, μ^S̃⟩ = (λ/k) |S ∩S̃|.
Denote by G = |S ∩S̃| the size of the overlap between the sets.
Then G is a hypergeometric random variable with the following probability distribution,
for 0≤ m≤ k,
ℙ(G = m) = \binom{k}{m}\binom{p-k}{k-m}\binom{p}{k}^-1.
From <cit.> this probability is upper bounded as follows
ℙ(G = m) ≤\binom{k}{m}(k/(p-k))^m.
Therefore,
_S,S̃[
⟨μ^S, μ^S̃⟩ ^d ]
= λ^d/k^d[
| S ∩S̃ | ^d ] =
λ^d/k^d∑_m=0^k m^d (G=m)
≤λ^d/k^d∑_m=0^k m^d km(k/p-k)^m.
Inserting (<ref>) and (<ref>) into (<ref>) gives
ℒ_L+n^D^2 ≤∑_d=0^D λ^d/d! k^d(L+nd/2L)^d ∑_m=0^k m^d km(k/p-k)^m .
In the above expression, since d≤ D, we may upper bound nd/2L by nD/2L. Furthermore,
changing the order of summation between the two sums above, gives
ℒ_L+n^D^2
≤
∑_m=0^k km(k/p-k)^m
∑_d=0^D
1/d!(m(Lλ/k + nλ D/2Lk) )^d
≤∑_m=0^k km(k/p-k)^m
exp(m(Lλ/k + nλ D/2Lk) )
.
According to the conditions of the Theorem, L = ⌊2β k/λlog(p-k)⌋
and n = ⌊ c_2k^γ/λ^2⌋
for some c_2 > 0 and γ < 2.
Hence, for sufficiently large p, L>β k/λlog(p-k).
Then, inserting these values into the above gives
ℒ_L+n^D^2 ≤∑_m=0^k km(k/p-k)^m
exp(m(2βlog(p-k) + c_2 k^γ D/2 β k^2 log(p-k)) ) .
Setting D = (log(p-k))^2, yields
ℒ_L+n^D^2 ≤∑_m=0^k km(k/p-k)^m
exp(m log(p-k)(2β + c_2/2β1/k^2-γ) ).
Since γ<2, for any fixed ϵ > 0 it follows that
c_2k^γ/2β k^2≤ϵ
for sufficiently large k.
Therefore
exp(m log(p-k)(2β + c_2/2β1/k^2-γ) )
≤exp(m log(p-k)(2β +ϵ) )
=(p-k)^m(2β +ϵ)
Combining the above two displays gives
ℒ_L+n^D^2
≤∑_m=0^k km(k/p-k)^m
(p-k)^m(2β +ϵ)
= ∑_m=0^k km(k/(p-k)^1-2β-ϵ)^m
=(1+k/(p-k)^1-2β-ϵ)^k .
Finally, recall that by the assumptions of the theorem, k =⌊ c_1 p^α⌋ for some α∈ (0,1/2), c_1>0 and that β < 1/2 - α.
Choosing ϵ = 1/2 -α -β>0, gives
ℒ_L+n^D^2
≤(1+k/(p-k)^1/2 + α - β)^k .
Since k=⌊ c_1 p^α⌋, then as p→∞, the above behaves as
(1+c_1/p^1/2-β(1+o(1)))^c_1 p^α
Therefore, for β < 1/2-α, as p→∞,
ℒ_L+n^D^2 → O(1).
§ SSL ALGORITHM
§.§ Proof of Theorem <ref>
We prove the theorem, assuming that
is run with the correct sparsity k and
with a slightly smaller screening factor β̃= β-ϵ̃
for a fixed (though potentially arbitrarily small) ϵ̃> 0, which implies the first stage retains a bit
more than p^1-β of the original p variables.
Our proof relies on the following two key properties:
(i) The set S_L of size p̃ = ⌈ p^1-β̃⌉,
which is the output of the first step of ,
contains
nearly all indices of the true support;
(ii) since the reduced dimension p̃≪ n, the leading eigenvector of PCA is asymptotically consistent,
thus allowing recovery nearly all support indices.
The following lemma formally states the first property. Its proof appears in Appendix <ref>.
Let {(_i, y_i)}_i=1^L be L i.i.d. labeled samples from the mixture model (<ref>)
with a vector μ of sparsity k=⌊ c_1 p^α⌋
and nonzero entries ±√(λ/k).
Suppose that L = ⌈2β klog (p-k)/λ⌉, for some β∈ (0,1-α).
Let β̃= β-ϵ̃ where ϵ̃> 0 is sufficiently small
so that β̃> 0.
Let S_L be the indices of the top p̃ = ⌈ p^1-β̃⌉
entries of the vector
_L of Eq. (<ref>).
Then, for any ϵ >0,
lim_p→∞(|S ∩ S_L|/k≥ 1-ϵ) = 1.
As described above, we run with β̃= β - ϵ̃, and denote by S_L the set found by the first step of the algorithm.
By Lemma <ref> this set satisfies Eq. (<ref>).
Denote by Σ|_S_L and Σ̂|_S_L the population and sample covariance matrices restricted to the set of indices S_L.
Note that,
Σ|_S_L = μ|_S_Lμ|_S_L ^⊤ +
I_p̃.
Hence, up to sign, the leading eigenvector of Σ|_S_L is μ|_S_L/‖μ|_S_L‖.
Denote by v̂_PCA the unit norm leading eigenvector of the sample covariance Σ̂|_S_L.
We now show that these two eigenvectors are close to each other.
Indeed, since β̃> 1-αγ,
we have n/p̃ = p^αγ/(λ^2 p^1-β̃) →∞, as p →∞.
Then, combining this observation with Theorem 2.3 in <cit.>, implies that with probability tending to 1,
lim_p→∞|⟨v̂_PCA, μ|_S_L/μ|_S_L⟩|= 1.
Since μ∈ℝ^p is k-sparse with non-zero entries ±√(λ/k), Eq. (<ref>) implies that ‖μ|_S_L‖→√(λ), as p→∞.
Hence,
lim_p→∞|⟨v̂_PCA, μ|_S_L/√(λ)⟩|= 1,
which implies that with the correct sign,
lim_p→∞‖v̂_PCA - μ|_S_L/√(λ)‖ = 0.
From now on, we extend v̂_PCA, which originally had dimension |S_L|, to a p-dimensional vector with zeros in S_L^c. Hence, since ‖μ|_S_L^c‖→ 0, it follows that
lim_p→∞‖v̂_PCA - μ/√(λ)‖ = 0.
Next, let us assume by contradiction that there exist ϵ_0,δ_0 ∈ (0,1) such that for every p∈ℕ, with probability at least δ_0
|S∩Ŝ|/k< 1-ϵ_0.
where Ŝ is the set of top-k coordinates of |v̂_PCA|.
Combining this assumption and Eq. (<ref>), with probability at least δ_0
lim_p→∞‖v̂_PCA |_Ŝ‖^2 =
lim_p→∞1/λ‖μ |_Ŝ‖^2
= lim_p→∞1/λ‖μ |_Ŝ∩ S‖^2
= lim_p→∞|Ŝ∩ S|/k≤ 1-ϵ_0,
where the last inequality follows from the above assumption and |μ_j| = √(λ/k), for all j ∈ S.
Next, from (<ref>) and (<ref>) it follows that for any subset T that satisfies S_L∩ S ⊂ T ⊂ S_L and |T|=k,
lim_p→∞‖v̂_PCA |_T‖^2 = lim_p→∞1/λ‖μ |_T‖^2 = lim_p→∞|S∩ S_L|/k = 1.
However, since Ŝ is the set of the top-k indices of |v̂_PCA|, for any |T|=k, T⊂ S_L
‖v̂_PCA |_Ŝ‖≥‖v̂_PCA |_T‖ ,
which is a contradiction to (<ref>) and (<ref>).
Hence, for any ϵ>0, as p tends to infinity, (|S ∩Ŝ|/k≥ 1-ϵ)→ 1, which completes the first part of the proof.
The second part of the proof follows from combining (<ref>) and Lemma <ref>.
§ PROOFS OF ADDITIONAL LEMMAS
First, note that the mean ν_x = 𝔼[x] = 𝔼_S [ θ^S ] is given by
(ν_x)_j = 1/√(k) for 1≤ j ≤ k-1, and
(ν_x)_j = 1/(√(k)(p-k+1)) for k ≤ j ≤ p.
To derive an explicit expression for the covariance matrix Σ we use the law of total expectation
Σ =
1/p-k+1∑_j=k^p[ (-ν_x)(-ν_x)^⊤ | S=s_j].
where s_j = [k-1]∪{j} is a member of 𝕊̃.
By definition,
((-ν_x) | S=s_j)
= (θ^S_j - ν_x) + σξ
=1/√(k) e_j - 1/√(k)(p-k+1) u + σξ
where
u = [ 0^⊤ _k-1 , 1^⊤ _p-k+1 ]^⊤
and { e_j}_j∈ [p] denote the standard basis of ℝ^p.
Since ξ is independent of S, inserting (<ref>) into (<ref>) gives
Σ = 1/p-k+1∑_j=k^p[
(1/√(k)( e_j - 1/(p-k+1) u )+ σξ)
(1/√(k)( e_j - 1/(p-k+1) u ) + σξ)^⊤]
=1/k(p-k+1)^2(
(p-k+1)∑_j=k^p e_j e_j^T
-( u∑_j=k^p e_j^⊤
+ ∑_j=k^p e_j u^⊤)
+ u u^⊤) + σ^2 𝐈_p
Note that ∑_j=k^p e_j = u. Thus,
Σ
= 1/k(p-k+1)^2(
(p-k+1)∑_j=k^p e_j e_j^T
- u u^⊤) + σ^2 𝐈_p
≼1/k(p-k+1)∑_j=k^p e_j e_j^T + σ^2 𝐈_p
= 1/k(p-k+1)[ 0_(k-1)× (k-1) 0_(k-1)× (p-k+1); 0_(p-k+1)× (k-1) I_(p-k+1)× (p-k+1) ]
+ σ^2 𝐈_p.
Therefore,
logdet(Σ) ≤log((σ^2)^p (1+ 1/k(p-k+1)σ^2)^p-k+1)
= p log( σ^2) + (p-k+1)log(1+ 1/k(p-k+1)σ^2) .
By definition I(x;S) = H(x) - H(x|S). Hence, we first derive expressions for these two terms.
Since x follows the mixture model
(<ref>), it is of the form x = yμ^S + ξ. Given S=s, μ^s is deterministic with μ^s_j = √(λ/k) 1{j ∈ s}.
Thus, the vector (x | S=s) is distributed as a mixture of two Gaussians with centers ±μ^s and identity covariance matrix. Its density is
f(x|S=s) = 1/(2π)^p/2·e^-‖x-μ^s‖^2 /2 + e^-‖x+μ^s‖^2 /2/2 =
e^-(‖x‖^2 + λ)/2/(2π)^p/2·e^-⟨x,μ^s⟩ + e^⟨x,μ^s⟩/2.
By the definition of conditional entropy,
H(|S) =
- ∑_s ∈𝕊 P(S=s) ∫
f( |S=s) log f(|S=s) d.
Given the structure of the vectors μ^s for all s∈𝕊̃, all the integrals in the sum above give the same value.
Therefore, it suffices to consider a single set
s_0 = {1, …,k},
H(|S) = - ∫ f( |S=s_0) log f(|S=s_0) d.
Inserting (<ref>) into (<ref>), gives
H(|S) = -_|s_0[ log( e^-^2 + λ/2/(2π)^p/2cosh (⟨,μ^s_0⟩)
) ] .
Note that for any s ∈𝕊̃, [^2| S=s] = λ+p.
Thus,
H(|S) = C(p,λ)
- _|s_0[ logcosh (⟨,μ^s_0⟩)
].
where C(p,λ) = λ +p/2(1+log(2π)).
Consider the following two independent random variables,
w = 1/√(k)∑_j=1^k-1ξ_j ∼ N(0, k-1/k) and ξ = ξ_k ∼ N(0,1).
For S=s_0,
⟨,μ^s_0⟩ = ⟨ y μ^s_0 + ξ,μ^s_0⟩ =
λ y + √(λ)w + √(λ)/√(k)ξ.
Inserting the above into (<ref>) gives
H(|S) = C(p,λ)
-[ logcosh (λ y + √(λ)w + √(λ/k)ξ)
],
where the expectation is over w,y and ξ.
Note that w,y and ξ are independent random variables with zero mean and symmetric distributions around zero.
Further, recall that y attains the values ± 1 with equal probabilities.
Hence, by a symmetry argument we may set y=1 and take the expectation
only over w and ξ. This gives
H(|S) = λ + p/2(1+log2π)
-[ logcosh(λ + √(λ)w + √(λ/k)ξ)
].
Next, we derive an expression for H(). Recall that depends on
a vector μ^S with S distributed uniformly at random from 𝕊̃
of size p-k+1.
Note that 𝕊̃ = ⋃_ℓ=k^p s_ℓ
where s_ℓ = [k-1]∪{ℓ}.
By the law of total probability
f() =
1/|𝕊|∑_s ∈𝕊 f(|S=s) = 1/p-k+1∑_ℓ=k^p f(|S=s^ℓ) .
Using the same analysis as before, it follows that
f() =
e^-^2 + λ/2/(2π)^p/2·1/p-k+1∑_ℓ=k^p
e^-⟨,μ^s^ℓ⟩ + e^⟨,μ^s^ℓ⟩/2.
Hence,
H(x) = -𝔼[log f(x)] = C(p,λ) - 𝔼[ log( 1/(p-k+1)∑_ℓ=k^p
e^-⟨x,μ^s^ℓ⟩ + e^⟨x,μ^s^ℓ⟩/2)]
We now simplify the expectation in (<ref>).
First, by a symmetry argument, we may assume the label that corresponds to
is simply y=1. Let us
simplify the inner product ⟨, μ^s_ℓ⟩.
⟨,μ^s_ℓ⟩ = ⟨√(λ)μ^S + ξ,μ^s_ℓ⟩
= λ(k-1/k + {ℓ∈ S}/k) + √(λ)w + √(λ)/√(k)ξ_ℓ.
Hence, the expectation above can be written as
[ log( 1/p-k+1∑_ℓ =k^p
e^-(λ(k-1)/k + √(λ)w + √(λ/k)ξ_ℓ + λ/k{ℓ∈ S})
+ e^(λk-1/k + √(λ)w +√(λ/k)ξ_ℓ+ λ/k{ℓ∈ S})/2)]
where the expectation is over S, w and {ξ_ℓ}_ℓ=k^p.
Since S is uniformly distributed over 𝕊̃ and
for any S=s^j the expectation is the same, we may thus set S=s^k = [k], and take the expectation only over w and {ξ_ℓ}_ℓ=k^p.
Next, we decompose the sum inside the logarithm as S_1+S_2, where
S_1 = exp(-λk-1/k - √(λ)w)/2·1/p-k+1∑_ℓ=k^p e^-√(λ/k)ξ_ℓ-λ/k{ℓ = k}
S_2 = exp(λk-1/k + √(λ)w)/2·1/p-k+1∑_ℓ=k^p e^√(λ/k)ξ_ℓ+λ/k{ℓ = k}
We now analyze each of these terms in an asymptotic setting
where p,k →∞ and k = o(p).
To this end we write the sum in S_2 as follows
1/p-k+1∑_ℓ=k^p e^√(λ/k)ξ_ℓ
+
1/p-k+1 e^√(λ/k)ξ_k (e^λ/k -1 )
By the central limit theorem, asymptotically, the first sum may be written as
e^λ/2k + O_P(√(λ)/√(k p)), which follows from the
fact that 𝔼_Z∼𝒩(0,1) [e^t Z] = e^t^2/2.
The second term above is O_P(√(λ)/k p), which is negligible w.r.t. to the
previous O_P term.
Note that its expectation is finite and given by e^λ/2k(e^λ/k -1 )p-k+1.
The sum S_1 can be analyzed similarly. In summary we obtain that
S_1 +S_2 = e^λ/2k·cosh(λk-1/k + √(λ) w) ·(1 + O_P(√(λ)/√(kp)) )
Hence, the expectation above simplifies to
[log(S_1+S_2)] = λ/2k + [logcosh(λk-1/k + √(λ) w)] + O(√(λ)/√(kp))
Inserting the above into (<ref>) gives
H() = C(p,λ) -
[logcosh(λk-1/k + √(λ) w)] + O(√(λ)/√(kp))
.
Next, we derive an upper-bound for the mutual information I(;S) = H() - H(|S). Combining (<ref>) and (<ref>), the constant
C(p,λ) cancels out, and we obtain
I(;S) =[ logcosh(
λ + √(λ)w + √(λ/k)z
) - logcosh(λ(k-1)/k + √(λ)w)
] - λ/2k + O(√(λ)/√(kp)).
=[ g_w(√(λ/k)z ) - g_w (-λ/k) ] - λ/2k +
O(√(λ)/√(kp)).
where g_w(t) = logcosh( λ + √(λ)w + t).
For future use, note that d/dt g_w(t) = tanh(λ + √(λ)w + t).
To upper bound I(;S) we split the expectation in (<ref>) into two parts as follows,
I(;S) = [ g_w(√(λ/k)z ) - g_w(0)] +
[g_w(0) -
g_w (-λ/k)
] -λ/2k +
O(√(λ)/√(kp)).
The Taylor expansion of g_w(t) is given by
g_w(t) = g_w(0) + t tanh(λ + √(λ)w) + t^2/2(1 - tanh^2(λ + √(λ)w)) + t^3/3!g_w^(3)(τ_t).
Here τ_t is some number between 0 and t, and g_w^(3)(τ_t) = -2tanh(λ + √(λ)w + τ_t) + 2tanh^3(λ + √(λ)w + τ_t).
Note that for all t∈ℝ,
t^3/3!g_w^(3)(τ_t)≤2|t^3|tanh(λ + √(λ)w + τ_t)/3! (1 - tanh^2(λ + √(λ)w +τ_t))≤2|t^3|/3!.
Since z,w are independent and [z]=0, it follows that
[ g_w(√(λ/k)z ) - g_w(0)]
≤ [ √(λ/k)z tanh( λ +√(λ)w) + λ/2kz^2 (1 - tanh^2 (λ +√(λ)w )) + 2λ^3/2|z^3|/3! k^3/2]
= λ/2k[1 - tanh^2(λ +√(λ)w )] + O(√(λ^3/k^3)).
For the second term in (<ref>), for any value of w,
by the mean value theorem, it follows that
g_w(0) -
g_w (-λ/k)
= λ/ktanh (λ + √(λ)w +ζ_w)
≤λ/ktanh (λ + √(λ)w)
where ζ_w ∈ [-λ/k,0], and the last inequality follows
from the fact that
tanh(·) is a monotonically increasing function. Hence,
[g_w(0) -
g_w (-λ/k)
]≤λ/k[tanh (λ + √(λ)w )].
Hence, combining (<ref>), (<ref>) and (<ref>), yields
I(;S) ≤λ/2k[1 - tanh^2(λ +√(λ)w )] + λ/k[tanh (λ + √(λ)w )]
- λ/2k +
O(√(λ^3/k^3) +
√(λ)/√(kp))
=λ/k[tanh (λ + √(λ)w ) - 1/2tanh^2(λ +√(λ)w )]
+O(√(λ^3/k^3) +
√(λ)/√(kp)).
By Lemma <ref>,
the expectation above can be bounded as follows:
[tanh (λ + √(λ)w ) - 1/2tanh^2(λ +√(λ)w )]
≤1/2(λ +
3√(λ/k)).
Hence, inserting this upper bound into (<ref>), gives
I(;S) ≤1/2λ^2/k
+O(
√(λ)/√(kp)+ √(λ^3/k^3))
Asymptotically,
as p,k→∞ with k/p→ 0 then 1/k≫1/√(kp).
Thus, the term
1/2λ^2/k
is asymptotically larger than the O(·) terms
in the display above.
Hence, the inequality (<ref>) of the lemma follows.
The main idea of the proof is to show that with a suitable choice of threshold τ,
nearly all entries _L(j) for j∈ S are above this threshold, in absolute value,
whereas the number of noise magnitudes above it is smaller than p̃-k.
Since we prove the lemma for the case of two symmetric Gaussians
μ_1 =
-μ_-1 =
μ,
for simplicity we consider the following formula for x̄_L = (1/L)∑_i=1^L y_i x_i.
With minor adaptations one can also prove the Lemma with the original formula (<ref>) of x̄_L.
Fix ϵ > 0, and let 𝒜_L denote the event that |S_L ∩ S| ≥ k(1-ϵ).
For any threshold τ define the following two events,
ℬ̃(τ) = {∑_j ∈ S{|_L(j)|>τ}> k(1-ϵ) },
and
𝒞̃(τ) = {∑_j ∉ S{|_L(j)|>τ}< p̃- k }.
By their definition, it follows that for any τ > 0
ℬ̃(τ) ∩𝒞̃(τ) ⊆𝒜_L .
Furthermore, note that the two events ℬ̃(τ) and 𝒞̃(τ) are independent.
Hence, to prove that (𝒜_L) → 1, it suffices to prove that for a suitable threshold τ,
(ℬ̃(τ) ∩𝒞̃(τ)) =
( ℬ̃(τ) ) ·( 𝒞̃(τ) ) → 1
In other words, it suffices to show that each of these events occurs with probability tending to one.
We start by showing that [ℬ̃(τ)]→ 1. First, let us define an even simpler event, with the absolute value removed,
ℬ(τ) = {∑_j∈ S{(μ_j) _L(j) > τ} > k (1-ϵ)
}
Clearly ℬ(τ) ⊂ℬ̃(τ) and thus it suffices to show that
(ℬ(τ)) → 1 as p→∞.
To this end, we consider
a threshold of the form τ = √(λ/k) T, with the value of T specified below.
By the sparse mixture model (<ref>), at support coordinates,
sgn(μ_j)·x̄_L(j) = √(λ/k) + sgn(μ_j)/√(L)·ξ_j
where ξ_j∼𝒩(0,1).
Inserting this expression with L ≥2β k log (p-k)/λ into the above, and suppressing the
dependence on τ in ℬ(τ), gives
(ℬ)
≥ (∑_j ∈ S{
1+√(1/2βlog (p-k))ξ_j> T}>k(1-ϵ))
= (∑_j∈ S{ξ_j > -(1-T)√(2βlog(p-k))} > k(1-ϵ) ).
Next, we choose
T = 1 - 1/(2βlog(p-k))^1/4,
and define
q_1 = (N(0,1)>-(2βlog(p-k))^1/4).
Since the ξ_j's are all independent and |S|=k, with this choice of T, Eq. (<ref>) simplifies to
(ℬ)
≥(
Bin(k,q_1) > k (1-ϵ)
)
Note that lim_p→∞q_1 =1. Thus, for sufficiently large p, it holds that
q_1(1-ϵ/2)> 1-ϵ. Hence, instead of the right hand side above we may
bound (Bin(k,q_1) > kq_1(1-ϵ/2)).
Indeed, by Chernoff's bound (<ref>),
(ℬ)≥
1 - e^ϵ^2 kq_1/8
which tends to one as p,k→∞.
Next, we show that the second term in Eq. (<ref>), (𝒞̃), also tends to one
as p→∞.
First of all, since k=⌊ c_1p^α⌋ and α < 1-β < 1-β̃, then k ≪ p^1-β̃, and thus p̃/2 < p̃ -k for sufficiently large p.
Hence, for p sufficiently large, we may instead consider the following event
𝒞(τ) = {∑_j∉ S{|_L(j)| > τ} < p̃/2} = {
Bin(p-k,q_2(τ)) < p̃/2}
where q_2(τ) =2 Φ^c(√(L)τ).
Clearly 𝒞(τ) ⊂𝒞̃(τ), and we now prove that with τ = √(λ/k)T, and T given in (<ref>),
(𝒞(τ)) → 1, by applying a Chernoff bound.
To this end, we write
p̃/2 = q_2 (p-k) (1+δ)
where δ = p̃/2(p-k)1/q_2 - 1.
To use Chernoff's inequality we first need to show that δ≥0.
Indeed, as p→∞, with τ = √(λ/k) T, and using Lemma <ref>
which bounds the tail function Φ^c,
lim_p→∞ (δ+1) =
lim_p→∞p^1-β̃/2(p-k)1/ 2Φ^c(T√(2βlog(p-k)))
≥
√(πβ)/2lim_p→∞ T√(log(p-k))/(p-k)^β̃exp(- T^2 βlog(p-k))
= √(πβ)/2lim_p→∞ T√(log(p-k))/(p-k)^β̃- β T^2
Note that for sufficiently large p,
β̃- β T^2 = β(1-T^2) - ϵ̃=
β/(2βlog(p-k))^1/4(2 - 1(2βlog(p-k))^1/4) - ϵ̃< 2β/(2βlog(p-k))^1/4 - ϵ̃<0.
Combining the above and (<ref>) gives
lim_p→∞ (δ+1) ≥√(πβ)/2lim_p→∞ T √(log(p-k)) = ∞.
Indeed, as shown above, for T given in (<ref>) and sufficiently large p, we have β̃ - β T^2<0.
Therefore, for sufficiently large p, it follows that δ>0.
By Chernoff's bound (<ref>)
(𝒞(√(λ/k)T)) =
(Bin(p-k,q_2) < q_2 (p-k) (1+δ))
≥ 1-e^-δ^2(p-k)q_2/2+δ.
We now prove that the term in the exponent tends to infinity.
First, note that since δ→∞ then δ = p̃/(2(p-k)q_2) -1 ≥p̃/(4(p-k)q_2).
Hence,
lim_p→∞δ^2(p-k)q_2/(2+δ) =
lim_p→∞δ(p-k)q_2≥lim_p→∞p̃/4 = ∞.
Combining the above and (<ref>) gives
lim_p→∞(𝒞(√(λ/k)T)) = 1
which completes the proof.
§ THE MLE IN THE SUPERVISED SETTING
The following proposition presents the form of the maximum likelihood estimator (MLE) for the support S in a fully supervised setting, where the sparsity k is assumed known, and the non-zero entries of μ have magnitude ±√(λ/k).
Let {(_i, y_i)}_i = 1^L be L i.i.d. labeled samples
from model (<ref>) where is k-sparse with non-zero entries
±√(λ/k),
and let _L = 1/L∑_i∈ [L]y_i _i.
Assuming the sparsity k is known,
the MLE for S=() is given by the indices corresponding to the top-k magnitudes of _L.
Under our assumptions, the set of all possible vectors μ has a one-to-one mapping to
a support set S∈𝕊 and a vector D∈{-1,1}^k containing the signs of the k non-zero entries of μ.
We thus denote θ = (S,D), and μ^θ by
μ^θ _j = √(λ)/√(k)D_j, j ∈ S,
0, j ∉S.
Let us denote by p_X,Y(,y; θ) the joint probability density function of a single sample (,y)
from the model (<ref>) with parameter θ.
Since y is a Rademacher random variable with a distribution not dependent on θ, we may write
p_X,Y(,y ;θ) = p_X|Y(|y; θ) p_Y(y) =12 p_X|Y(|y; θ) .
Since |y ∼𝒩(yμ,𝐈_p),
the conditional density p_X|Y(|y; θ) simplifies to
p_X,Y(|y, θ) = 1/(2π)^p/2exp(- - y μ^θ^2 /2 ).
By definition, the maximum-likelihood estimator of θ is given by
θ̂^() =
_θ∑_i=1^L
log p_X,Y(,y ;θ),
Inserting (<ref>) into the above, and using μ ^θ=√(λ) (fixed for all θ), gives
θ̂^() = _θ∑_i=1^L
(-_i - y_i μ^θ^2)=
_θ̂∑_i=1^L ⟨ y_i_i, μ^θ⟩ =
_θ⟨_L , μ^θ⟩.
Therefore, the maximum value of ⟨_L , μ^θ⟩ is obtained for Ŝ^() being the set of indices corresponding to the k largest magnitude
entries of _L, and
D̂^() = {(_L)_j: j ∈Ŝ^()}.
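In code, the estimator of the proposition amounts to a few lines. This is a sketch with our own variable names; X holds the L labeled samples as rows and y the ±1 labels.

```python
import numpy as np

def mle_support_and_signs(X, y, k):
    """Supervised MLE: support = indices of the top-k magnitudes of
    x_bar = (1/L) sum_i y_i x_i, signs = signs of x_bar on that support."""
    x_bar = (y[:, None] * X).mean(axis=0)
    S_hat = np.sort(np.argsort(np.abs(x_bar))[::-1][:k])
    D_hat = np.sign(x_bar[S_hat])
    return S_hat, D_hat
```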
The next proposition shows that with sufficient number of labeled samples the MLE
for S has significant overlap with the true support set S.
Let 𝒟_L = {(_i, y_i)}_i=1^L be a set of L i.i.d. labeled samples from the model (<ref>) with a k-sparse vector μ whose non-zero entries are
±√(λ/k). Assume that for some α∈(0,1), k=⌊ c_1 p^α⌋
and that L = ⌈2β klog (p-k)/λ⌉, for some β∈ (0,1).
Let S_L be the indices of the k largest magnitudes of the vector
_L = 1/L∑_i∈ [L]y_i _i.
If β>1-α, then for every ϵ∈ (0,1),
lim_p →∞(|S ∩ S_L |/k>1-ϵ) = 1.
Fix ϵ > 0, and let 𝒜_L denote the event that |S_L ∩ S| ≥ k(1-ϵ).
For any threshold τ define the following two events,
ℬ̃(τ) = {∑_j ∈ S{|_L(j)|>τ}> k(1-ϵ) },
𝒞̃(τ) = {∑_j ∉ S{|_L(j)|>τ}< kϵ}.
By their definition, it follows that for any τ > 0, ℬ̃(τ) ∩𝒞̃(τ) ⊂𝒜_L.
Since the two events ℬ̃(τ) and 𝒞̃(τ) are independent,
(𝒜_L) ≥(ℬ̃(τ) ∩𝒞̃(τ)) =
( ℬ̃(τ) ) ·( 𝒞̃(τ) )
Hence, for (𝒜_L) → 1, it suffices that for a suitable threshold τ, both probabilities on the right hand side tend to one,
We start with (ℬ̃(τ)). To this end, we define an even simpler event, with the absolute value removed,
ℬ(τ) = {∑_j∈ S{(μ_j) _L(j) > τ} > k (1-ϵ)
}
Clearly ℬ(τ) ⊂ℬ̃(τ) and thus it suffices to show that
(ℬ(τ)) → 1 as p→∞.
By the sparse mixture model (<ref>), for j ∈ S,
sgn(μ_j)·x̄_L(j) = √(λ/k) + sgn(μ_j)/√(L)·ξ_j,
where ξ_j ∼𝒩(0,1).
Combining a threshold value τ = √((λ/k)·(1-α +β)/(2β))
and the assumption that L ≥2β k log(p-k)/λ
with this expression, gives that
( ℬ(τ)) ≥(∑_j ∈ S{ξ_j>- (√(β) - √(1-α+β/2))√(2log p)}>k (1-ϵ)).
Let q_1 be the probability of each event in the above sum,
q_1 = (N(0,1)>-(√(β) - √(1-α+β/2))√(2log p)).
Since the ξ_j's are all independent and |S|=k, with this choice of τ, Eq. (<ref>) simplifies to
( ℬ(τ)) ≥(Bin(k,q_1)>k(1-ϵ)).
Note that since β>1-α, then lim_p→∞q_1 =1. Thus, for sufficiently large p, it holds that q_1(1-ϵ/2)> 1-ϵ.
Hence, the right hand side above may be bounded by (Bin(k,q_1) > kq_1(1-ϵ/2)).
Indeed, by Chernoff's bound (<ref>), (ℬ(τ))→ 1 as p,k→∞, since
(ℬ)≥
1 - e^-ϵ^2 kq_1/8.
Next, we show that the second term in Eq. (<ref>), (𝒞̃), also tends to one
as p→∞.
First, note that
𝒞̃ (τ) = {
Bin(p-k,q_2(τ)) < kϵ}
where q_2(τ) =2 Φ^c(√(L)τ) = 2 Φ^c(√((1-α +β)log(p-k))).
We now prove that (𝒞̃(τ))→ 1, by applying a Chernoff bound.
To this end, we write
kϵ = q_2 (p-k) (1+δ)
where δ = kϵ/(p-k)1/q_2 - 1.
To use Chernoff's inequality we first need to show that δ≥0.
Indeed, as p→∞, using Lemma <ref>
which bounds the tail function Φ^c,
lim_p→∞ (δ+1) ≥lim_p→∞ϵ k/p-k1/ 2 Φ^c(√((1-α +β)log(p-k)))
≥ϵ c_1 √(π(1-α + β)/2)lim_p→∞√(log(p-k))/p^1-αexp(-1-α +β/2log(p-k))
=
ϵ c_1 √(π(1-α + β)/2)lim_p→∞√(log(p-k))/p^1-α - β/2 = ∞,
where the last equality follows from β > 1-α.
By Chernoff's bound (<ref>)
(𝒞̃(τ)) =
(Bin(p-k,q_2) < q_2 (p-k) (1+δ))
≥ 1-e^-δ^2(p-k)q_2/2+δ.
We now prove that the term in the exponent tends to infinity.
Since δ→∞ it follows that
lim_p→∞δ^2(p-k)q_2/2+δ =
lim_p→∞δ(p-k)q_2 ≥lim_p→∞(1+δ)(p-k)q_2/2 =
lim_p→∞kϵ/2 = ∞.
Combining the above and (<ref>) gives that lim_p→∞(𝒞̃(τ)) = 1,
which completes the proof.
The above proposition implies the following result regarding accurate classification:
Let T(x) = ⟨x, w⟩, with w = x̄_L |_S_L, be a linear classifier that is constructed using only the set of labeled samples 𝒟_L. If β > 1-α,
then combining Proposition <ref> and Lemma <ref> implies that the excess risk of T tends to zero as p→∞.
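A minimal sketch of this classifier is given below. Variable names are ours, and thresholding the inner product at zero reflects the symmetry of the two classes.

```python
import numpy as np

def fit_linear_classifier(X_lab, y_lab, S_L):
    """w is the labeled score vector restricted to the screened/estimated set S_L."""
    x_bar = (y_lab[:, None] * X_lab).mean(axis=0)
    w = np.zeros(X_lab.shape[1])
    w[S_L] = x_bar[S_L]
    return w

def predict(X, w):
    return np.sign(X @ w)   # T(x) = <x, w>, classify by its sign
```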
§.§ Impossibility of Classification
In this section we prove a lower bound for classification in the SL setting.
To do that, we consider a slightly different model, known as the rare and weak model.
Here the sparsity of the vector μ is not fixed at exactly k. Instead the vector is generated randomly with entries μ_j = √(λ / k) B_j, where B_j ∼ Ber (ϵ_p) are i.i.d. Bernoulli random variables with
ϵ_p = k/p = p^-(1-α).
The next theorem implies that in the red region (impossible) of figure <ref>, indeed there exists (approximately) k-sparse vectors for which any classifier would asymptotically be no better than random.
Let 𝒟_L={(_i, y_i)}_i=1^L be set of L i.i.d. labeled samples from the rare weak model
𝒩(yμ,I) where all non-zero entries of μ are ±√(λ/k).
Suppose L = ⌈2β k log p /λ⌉ and k ∝ p^α, for some α<1.
If β <1-α then the classification error of any classifier based on 𝒟_L, tends to 1/2 as p→∞.
This proof is similar to the one of <cit.>.
First, let us denote the vector of z-scores by z = (1/√(L))∑_i∈[L]y_i x_i.
Since the entries of μ are generated independently under the rare-weak model,
the entries of z are also independent and all have the same density, which we denote by f(z).
Given z_j = z, we denote the conditional probability that the j-th entry contains a feature by η(z) = (j∈ S| z_j = z).
By Bayes' theorem,
η(z) = (j∈ S) f(z|j∈ S)/f(z) = ϵ_p ϕ(z-τ_p)/((1-ϵ_p)ϕ(z) +ϵ_p ϕ(z-τ_p)),
where τ_p = √(Lλ /k) = √(2βlog p), and ϕ is the density of N(0,1).
From Lemmas 1,2 and 4 in <cit.>, the misclassification error of any classifier T constructed using the set _L, can be bounded as
| (T()≠ y ) - 1/2 |<
C (1-(_z[H(z)])^p )^1/2,
where H(z) =_x[(1 + η(z)( e^√(λ/k)x - λ/2k-1))^1/2]
with x∼ N(0,1), and z ∼ (1-ϵ_p)N(0,1) + ϵ N(τ_p,1).
Our goal is to show that _z[H(z)] = 1+o(1/p), which implies that asymptotically the accuracy of T is no better than random.
First, combining that _x[e^√(λ/k)x - λ/2k - 1] = 0 and the inequality |√(1 + t) - 1 - t/2|≤ Ct^2, for any t>-1, gives
|H(z) - 1|
=
|H(z) - 1- 12[η(z)][e^√(λ/k)x - λ/2k - 1] |
≤
Cη^2 (z) _x[(e^√(λ/k)x - λ/2k - 1)^2] = C η^2 (z) (e^λ/k - 1)
.
For sufficiently large p, e^λ/k-1 ≤ 2 λ/k. Since k∝ p^α, then,
|H(z) - 1| ≤C̃ p^-αη^2 (z).
Hence, to prove that _z[H(z)] = 1+o(1/p), it suffices to show
that _z[η^2 (z)] = o(p^-(1-α)) = o(ϵ_p).
To this end, we first note
[η^2(z)] = (1-ϵ_p)[η^2(w)] + ϵ_p[η^2(w+τ_p)] ≤[η^2(w)] + ϵ_p[η^2(w+τ_p)],
where w∼ N(0,1).
Write (η^2(z)) = I + II + III + IV, where we have split each of the expectations into two separate integrals, with the split at suitably chosen values t_1 and t_2,
I = ∫_-∞^t_1η^2(w) ϕ(w) dw,
II = ∫_t_1^∞η^2(w) ϕ(w) dw,
and
III = ϵ_p∫_-∞^t_2η^2(w+τ_p) ϕ(w) dw,
IV = ϵ_p ∫_t_2^∞η^2(w + τ_p) ϕ(w) dw,
As we see below, the following values will be suitable to derive the required bounds: t_1 = ((1-α +β)/(2β))·τ_p and t_2 = ((1-α-β)/(2β))·τ_p.
Starting with I, note that
I = ∫_-∞^t_1(ϵ_p ϕ(w-τ_p)/(1-ϵ_p)ϕ(w) + ϵ_p ϕ(w-τ_p))^2 ϕ(w) dw
≤∫_-∞^t_1(ϵ_p ϕ(w-τ_p)/(1-ϵ_p)ϕ(w))^2 ϕ(w) dw
For large enough p, the above can be bounded via
I ≤2ϵ_p ^2∫_-∞^t_1ϕ^2(w-τ_p)/ϕ^2(w)ϕ(w)dw
= Cϵ_p ^2∫_-∞^t_1 e^2wτ_p - τ_p^2 - w^2/2dw,
where the equality follows from the definition of ϕ(w).
Completing the square, the above can be written as
I ≤ Cϵ_p ^2∫_-∞^t_1 e^-(w - 2τ_p)^2 / 2 e^τ_p^2 dw.
Changing the variable x = w-2τ_p reads
I ≤ Cϵ_p ^2 e^τ_p^2∫_-∞^t_1-2τ_p e^-x^2 / 2 dx = Cϵ_p ^2 e^τ_p^2Φ^c (2τ_p - t_1)
≤ C ϵ_p ^2 e^τ_p^2 e^-(2τ_p - t_1)^2 /2
Finally, since β<1-α, it follows that C ϵ_p ^2 e^τ_p^2 e^-(2τ_p - t_1)^2 /2 = o(ϵ_p).
Next, since η(w)<1 it follows that
II = ∫_t_1^∞η^2(w) ϕ(w) dz
≤∫_t_1^∞ϕ(w) dw = Φ^c(t_1) ≤ e^-t_1^2/2.
Similar to the above, under the condition β<1-α it holds that e^-t_1^2/2 = o(ϵ_p).
Next, note that
III = ϵ_p∫_-∞^t_2(ϵ_p ϕ(w)/(1-ϵ_p)ϕ(w+τ_p) + ϵ_p ϕ(w))^2 ϕ(w) dw
≤ϵ_p∫_-∞^t_2(ϵ_p ϕ(w)/(1-ϵ_p)ϕ(w+τ_p))^2 ϕ(w) dw.
For large enough p
III ≤
Cϵ_p∫_-∞^t_2(ϵ_p ϕ(w)/ϕ(w+τ_p))^2 ϕ(w) dw
=Cϵ_p ^3∫_-∞^t_2
e^2wτ_p + τ_p^2 - w^2/2dw
.
Completing the square reads
III ≤
Cϵ_p ^3 e^3τ_p^2∫_-∞^t_2
e^-(w-2τ_p)^2 /2dw =
Cϵ_p ^3 e^3τ_p^2Φ^c(2τ_p - t_2) ≤ Cϵ_p ^3 e^3τ_p^2 e^-(2τ_p - t_2)^2 /2.
Since β≤ 1-α it holds that III = o(ϵ_p).
Finally, since η(w)<1, IV can be bounded as follows
IV = ϵ_p ∫_t_2^∞ϕ(w) dw = ϵ_pΦ^c(t_2) ≤ϵ_p e^-t_2^2 /2 .
Again, by β<1-α, IV = o(ϵ_p).
|
http://arxiv.org/abs/2409.02751v1 | 20240904143013 | A Comparative Study of Pre-training and Self-training | [
"Yiheng Wang",
"Jiayu Lin",
"Zuoquan Lin"
] | cs.CL | [
"cs.CL"
] |
A Comparative Study of Pre-training and Self-training
Yiheng Wang
Jiayu Lin
Zuoquan Lin
==================================================
§ ABSTRACT
Pre-training and self-training are two approaches to semi-supervised learning. The comparison between pre-training and self-training has been explored. However, previous works led to confusing findings: self-training was reported to outperform pre-training on some tasks in computer vision, while, contrarily, pre-training was reported to outperform self-training on some tasks in natural language processing, under certain conditions of incomparable settings. We propose, comparatively and exhaustively, an ensemble method to empirically study all feasible training paradigms combining pre-training, self-training, and fine-tuning within consistent foundational settings comparable to data augmentation. We conduct experiments on six datasets, four data augmentation strategies, and imbalanced data for sentiment analysis and natural language inference tasks. Our findings confirm that the pre-training and fine-tuning paradigm yields the best overall performance. Moreover, self-training offers no additional benefits when combined with semi-supervised pre-training.
[Our codes are available at https://github.com/PKUAI-LINGroup/PAS.]
§ INTRODUCTION
Semi-supervised learning (SSL) involves the utilization of both labeled and unlabeled data, typically relies on a constrained amount of labeled data, and improves learning performance through the incorporation of a larger set of unlabeled data (for surveys, see <cit.>). Pre-training and self-training are two approaches in SSL (for surveys, see <cit.>). While pre-training and self-training share similarities that leverage unlabeled data, their methodologies and applications also have distinct differences.
In pre-training, a model is initially trained on a large amount of unlabeled data in a self-supervised way. This pre-trained model is then fine-tuned on smaller labeled data in a supervised way for the specific tasks. Fine-tuning is the supervised component of semi-supervised pre-training. The pre-training and fine-tuning paradigm involves training with unlabeled data and then labeled data, which can continue multiple times. Unsupervised pre-training or self-supervised pre-training refers to the pre-training conducted without subsequent fine-tuning. The pre-training and fine-tuning paradigm yields superior results for specific tasks than unsupervised pre-training. Continual pre-training refers to the pre-training and fine-tuning paradigm conducted as an additional step to continue pre-training on task-specific unlabeled data before fully supervised fine-tuning <cit.>.
In self-training, on the other hand, the teacher model is initially trained on a small set of labeled data. The model then makes predictions on the unlabeled data, and the data points with high-confidence predictions are pseudo-labeled and added to the labeled data, resulting in the student model. The model is trained on this expanded labeled and pseudo-labeled data, and the process is iterated. The teacher and student paradigm involves training first with labeled data and then acquiring high-confidence pseudo-labels from additional unlabeled data. Self-training incorporates a form of label propagation through pseudo-labeling from unlabeled data, effectively extending the labeled data. Pre-training does not involve label propagation, instead, it centers on representation learning through patterns and structures inherent in unlabeled data.
Due to the prominence of pre-trained large language models (LLMs), pre-training remains the best practice under scaling laws (for a survey, see <cit.>). While self-trained large models have yet to emerge, self-training and its interplay with pre-training have garnered increasing research interest.
The comparison between pre-training and self-training has been explored.
In <cit.>, with experiments in computer vision (CV), the finding was that self-training is stronger than pre-training in the following sense: self-training performed effectively in the same setup where pre-training failed. In <cit.>, with experiments in natural language processing (NLP), the finding was that pre-training is stronger than self-training in the following sense: continual pre-training performed better than various self-training methods. These findings led to confusion. The comparison between pre-training and self-training underlying these findings is somewhat unfair and lacks clarity, especially given the different settings and extra techniques involved (for details see <ref>).
In this paper, we revisit the relationship between pre-training and self-training, while also rethinking the limitations that may prevent one from improving the performance of the other. To our knowledge, we are the first to propose an ensemble method to comparatively and exhaustively investigate all feasible training paradigms combining pre-training, self-training, and fine-tuning. In particular, we employ language models, or so-called foundation models (for a survey, see <cit.>)), as consistent foundational settings across all paradigms of ensemble training for downstream tasks. We employ data augmentation techniques to enhance the effectiveness of self-training. We undertake an empirical study to assess the effectiveness of the ensemble paradigms, specifically targeting six datasets, four data augmentation, and imbalanced data for sentiment analysis and natural language inference tasks in NLP. Our contributions are the findings summarized as follows:
(1) We find that semi-supervised pre-training consistently outperforms self-training and all the other training paradigms, exhibiting robust performance across varying intensities of data augmentation.
(2) We find that the combination of pre-training, fine-tuning, and self-training yields no benefit over the pre-training and fine-tuning paradigm. In other words, self-training offers no additional benefits when combined with semi-supervised pre-training.
(3) We find a modest decline in pre-training performance in scenarios characterized by data imbalance; conversely, other training paradigms experienced a significant reduction in efficacy.
§ RELATED WORKS
The relationship between pre-training and self-training has been examined from two perspectives: first, to evaluate the relative strengths of pre-training versus self-training; and second, to investigate how combining these two methods can mutually enhance their overall effectiveness.
Pre-training vs. self-training.
As the first comparative study to challenge the prevailing paradigm of pre-training with self-training <cit.>, this research posited that self-training is stronger than pre-training experienced in CV. Specifically, the self-training demonstrated superior performance compared to the pre-training, particularly under conditions of enhanced data augmentation and increased availability of labeled data for image recognition tasks. Notably, these experiments employed unsupervised pre-training without subsequent fine-tuning. This result contrasts with the strong baseline established by pre-trained language models. It is widely acknowledged that smaller models utilized in these experiments lack the capacity for zero-shot or few-shot learning, a capability present in LLMs <cit.>. The substantial data and strong augmentation leveraged in the self-training are not adequately mirrored in unsupervised pre-training; thus, this comparative discrepancy renders the performance comparisons between pre-training and self-training somewhat inequitable.
In <cit.>, the authors argued that pre-training is stronger than self-training, based on experiments in NLP. Specifically, continual pre-training with or without prompt templates showed superior performance to several self-training methods for natural language understanding tasks. Compared to continual pre-training in a task-specific way, the self-training methods employed back-translation as data augmentation <cit.>. However, a comparison still needs to be made between unsupervised pre-training used in a task-agnostic manner and self-training.
Pre-training & self-training.
Two complementary can be identified in combining pre-training and self-training. One involves utilizing pre-training to enhance self-training <cit.>. The effectiveness of self-training is heavily dependent on the quality of the pseudo labels, underscoring the importance of a high-performing initial teacher model. In this context, the teacher model of self-training is typically initialized using pre-trained language models <cit.>, such as BERT or RoBERTa, as demonstrated in <cit.>. This paradigm enhances model calibration and has gained traction for effectively combining self-training with pre-training, showcasing a strongly additive relationship between the two methods.
The other entails employing self-training to improve pre-training <cit.>. Self-training improved upon pre-training, demonstrating a strong additive effect <cit.>. Self-training with strong data augmentation offered complementary advantages to unsupervised and continual pre-trained language models <cit.>. Notably, most of these experiments did not conduct a comparison with the pre-training and fine-tuning paradigm. The complementary relationship between self-training and pre-training was further explored in <cit.>. In <cit.>, self-training was utilized in a task-specific manner as a form of unsupervised fine-tuning, aimed at improving the performance of zero-shot learning in pre-trained models. Almost all of these self-training methods rely on strong data augmentation. Therefore, it is necessary to consider the effect of data augmentation when comparing pre-training and self-training.
Historically, self-training was first applied in NLP <cit.> (originally back <cit.>). In this work, we contend that a meaningful comparison between pre-training and self-training is achievable only when utilizing consistent foundational settings, particularly language models. This is especially relevant as both NLP and CV serve as downstream tasks that can be analogized to data augmentation. It is important to exclude additional training and techniques specifically developed in prior studies to prevent incomparable settings and potentially conflicting conclusions. We aim to establish a fair comparison between pre-training and self-training within the context of language models. Embracing the pre-training and fine-tuning paradigm is crucial, as it closely mirrors the teacher-student paradigm employed in self-training. Unlike previous studies, we confirm that the pre-training and fine-tuning paradigm achieves the best overall performance, with no additional benefits from combining it with self-training.
§ METHOD
We revisit the comparison and complementarity between pre-training and self-training, while also rethinking the limitations that may prevent one from improving the performance of the other. To this end, comparatively and exhaustively, we propose an ensemble method to study all feasible training paradigms combining pre-training and self-training within consistent foundational settings.
Ensemble principles.
When considering (unsupervised) pre-training and fine-tuning as separate processes, we identify three training components: pre-training, fine-tuning, and self-training. When combining pre-training and self-training, it is crucial to determine whether fine-tuning is included in the training protocol. It's important to recognize that not all combinations of these three components are feasible or effective for training. When designing the ensemble for these three training components, we consider the following principles:
* A training component cannot occur consecutively, as adjacent identical training components are considered the same.
* Pre-training can only serve as the initial component of an ensemble. If the pre-trained model is initialized during training, any prior training becomes irrelevant.
* Self-training requires the unlabeled data in iterations and can only be performed once unless additional unlabeled data becomes available.
Paradigms and notations.
According to the ensemble principles, we list all feasible paradigms of ensemble training. For convenience, we use the abbreviation notations for various paradigms described in <ref>.
We leave F alone, i.e. supervised training, and P, i.e. unsupervised pre-training, as baselines that are not SSL.
Most of the previous works <cit.> belong to PFS (see the <ref>). In <cit.>, the student model was also initialized with the pre-trained model, which can be viewed as a variant of PFS (an analysis see <ref> in the <ref>). In <cit.>, an unsupervised classifier included self-training is similar to PS.
SF, PSF, and PFSF have not been explored in prior research. We examine these paradigms for some considerations. One major challenge in self-training is semantic drift, where accumulating incorrect pseudo labels can misguide the training process over time. A potential solution to this problem is to fine-tune the final student model using labeled data. The complex PFSF is depicted in <ref> (for self-training strategy refer to the explanation below). To some extent, other paradigms can be regarded as special parts of PFSF. As we shall see later, the limitation of PFSF is that increasing training costs does not necessarily bring efficiency.
Self-training.
We use a competitive version of pseudo-labeling by using a self-paced curriculum strategy in the context of self-training <cit.>. Pseudo-labeling is trained incrementally by iteratively propagating labels from labeled data to unlabeled data using the model, re-labeling high-confidence predictions, and retraining with labeled and pseudo-labeled data. Instead of adding all pseudo-labeled data in each iteration in original pseudo-labeling <cit.>, self-pace pseudo-labeling carefully selects a subset of the most confident data to help guide the model towards harder samples in a controlled manner, improving performance. The algorithm is briefly described as follows:
(1) Train: The teacher model is first trained on the labeled data.
(2) Predict: Pseudo-labels are assigned to the unlabeled data using the current model.
(3) Select: A subset of pseudo-labeled data is selected based on their prediction scores and percentile thresholds.
(4) Re-train: The student model is trained from scratch using both labeled and selected pseudo-labeled data.
(5) Repeat: Steps (2-4) are repeated until all data in the dataset have been used during training.
To alleviate concept drift and confirmation bias, the model parameters are reinitialized before each iteration. This ensures that previous erroneous predictions do not accumulate over time (for detail, refer to <cit.>).
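The self-paced pseudo-labeling loop described above can be summarized by the following sketch. It is framework-agnostic: train_from_scratch and predict_proba are placeholder callables standing in for the task-specific training and inference routines, and the 20-point schedule is only an example.

```python
import numpy as np

def self_paced_self_training(X_lab, y_lab, X_unlab, train_from_scratch, predict_proba,
                             step=20):
    """Self-paced pseudo-labeling: iteratively grow the fraction R% of most confident
    unlabeled samples used as pseudo-labels, retraining from scratch each iteration
    to limit confirmation bias."""
    model = train_from_scratch(X_lab, y_lab)              # (1) teacher on labeled data
    for R in range(step, 101, step):                      # self-paced schedule
        probs = predict_proba(model, X_unlab)             # (2) predict pseudo-labels
        pseudo_y = probs.argmax(axis=1)
        conf = probs.max(axis=1)
        n_keep = int(len(X_unlab) * R / 100)
        keep = np.argsort(conf)[::-1][:n_keep]            # (3) top-R% most confident
        X_aug = np.concatenate([X_lab, X_unlab[keep]])
        y_aug = np.concatenate([y_lab, pseudo_y[keep]])
        model = train_from_scratch(X_aug, y_aug)          # (4) re-train from scratch
    return model                                          # (5) stop once all data used
```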
Language models.
We employ language models as consistent foundational settings across all paradigms of ensemble training for various downstream tasks. Specifically, we utilize the transformer-based BERT model as our initial backbone <cit.>.
We value BERT's encoding representation capability, as we do not primarily consider the generation capability of language models. Moreover, we choose the basic BERT model by two key considerations: first, we aim to avoid using stronger pre-trained language models to maintain a level playing field for self-training; second, we know that utilizing pre-trained language models as the initial teacher model enhances the self-training process. The effectiveness of self-training is heavily dependent on the calibration of the teacher model, as inaccurate pseudo-labels generated by the initial teacher can misguide the training of the student model.
Data augmentation.
Data augmentation (DA) artificially increases the size of a training dataset by generating modified versions of existing data points, addressing the challenge of limited labeled data like SSL. Previous research has shown that experiments favoring self-training over pre-training often employed data augmentation to enhance the effectiveness of self-training. Consequently, we investigate the impact of four data augmentation strategies of varying intensities on different paradigms of ensemble training, including natural noise, conditional BERT, and back-translation.
Natural noise is a data augmentation technique in NLP that simulates common human errors, introducing character-level and word-level mistakes to enhance comprehension <cit.>. Conditional BERT addresses data-label mismatch via masked language modeling, allowing it to generate sentences aligned with specific labels during fine-tuning <cit.>. Additionally, back-translation involves translating text to a target language and back to the source to create augmented data that retains the original meaning while varying its form <cit.>, facilitated by tools like Fairseq <cit.>.
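As an illustration of the weakest of these augmentations, a minimal character-level natural-noise sketch is given below. The noise types and rates are our own choices and do not reproduce the exact configuration used in the experiments.

```python
import random

def natural_noise(text, p_swap=0.02, p_drop=0.02, seed=None):
    """Inject simple human-like typos: adjacent-character swaps and deletions."""
    rng = random.Random(seed)
    chars = list(text)
    out, i = [], 0
    while i < len(chars):
        r = rng.random()
        if r < p_drop:                                   # drop a character
            i += 1
            continue
        if r < p_drop + p_swap and i + 1 < len(chars):   # swap adjacent characters
            out.extend([chars[i + 1], chars[i]])
            i += 2
            continue
        out.append(chars[i])
        i += 1
    return ''.join(out)

print(natural_noise("pre-training and self-training", seed=0))
```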
§ EXPERIMENTS
§.§ Datasets
We conduct experiments on two tasks in NLP: sentiment analysis (SA) and natural language inference (NLI). NA identifies the emotions and feelings expressed in text and is a text classification problem with two or more classes. We use four datasets: IMDB <cit.>, SST <cit.>, AG News <cit.> and Elec <cit.>. NLI judges whether the premise and the hypothesis match, and the result can be True, False, and Undetermined. We use two datasets: SNLI <cit.> and MultiNLI <cit.>. The statistics of the datasets are shown in <ref>.
§.§ Implementations
We employ BERT to map input text into a feature space. We attach a linear layer as a classifier atop the BERT model for classification tasks. We utilize BERT in two configurations: BERT-medium, which comprises 8 layers with a hidden size of 512, 8 attention heads, and an intermediate size of 2048 <cit.>, and BERT-base, which comprises 12 layers with a hidden size of 768, 12 attention heads, and an intermediate size of 3072 <cit.>.
For the select step in self-training (refer to <ref>), we retrieve from all unlabeled data the top R% most confident predictions (with R in multiples of 10 or 20), where R increases as the number of iterations grows. As usual, we set the learning rate to 1e-5 and the batch size to 64, and train the model within 20 epochs and 40 epochs for BERT-medium and BERT-base, respectively.
§.§ Results
We conduct experiments for each paradigm of ensemble training on all the datasets to observe the performance. The results are shown in <ref>, from which we can find the following facts:
* Self-training (S) is effective, surpassing the baselines (F and P).
* The pre-training and fine-tuning paradigm (PF) demonstrates the best performance across all the datasets. This verifies the superiority of PF.
* The accuracies of S, PS, and PFS are close, which indicates that initializing the teacher model with pre-training, with or without fine-tuning, is ineffective (see more discussions in <ref>).
* Fine-tuning has either resulted in negligible improvement or a slight decline in the performance of S, PS, and PFS, which indicates that the information in labeled data has already been exploited sufficiently.
§.§ Data augmentation
We perform experiments to assess the effectiveness of varying intensities of data augmentation within ensemble paradigms. We create four data augmentation strategies by integrating natural noise, conditional BERT, and back-translation to ensure increased data augmentation. These strategies are designated as DA1, DA2, DA3, and DA4, as detailed in <ref>, where we write DA0 for no data augmentation for the sake of comparison.
We perform experiments using two datasets: IMDB and SST. We begin by sampling 1,000 instances evenly from each class as labeled data while leaving the unlabeled data unchanged. We then augment the labeled data to a total of 10,000 instances. The objective is to investigate the effects of pre-training intensity and the degree of data augmentation. The findings are detailed in <ref>, <ref>, <ref>, and <ref>.
We've omitted the accuracy of P in these tables due to its trivial nature. The number in the bracket indicates the change magnitude relative to DA0. We find two trends regarding accuracy as the magnitude of data augmentation increases:
* Accuracy initially rises and then declines as the extent of the data augmentation strategy grows.
* Accuracy increases initially and then stabilizes, indicating that moderate data augmentation enhances performance, whereas excessive augmentation is ineffective and may even hinder results.
Additionally, the pre-training and fine-tuning paradigm demonstrates greater stability than other paradigms. It shows resistance to variations in data augmentation magnitude and does not depend on sufficient labeled data.
Our observations reveal that when using pre-trained weights, stronger pre-training knowledge (BERT-base) outperforms weaker pre-training knowledge (BERT-medium) in scenarios with no data augmentation and moderate data augmentation. While most paradigms that utilize pre-training knowledge, apart from PF, struggle with strong pre-training under excessive data augmentation, PF shows improvement. Specifically, PS, PFS, PSF, and PFSF exhibit poorer performance or only marginal increases compared to PF. Employing a stronger pre-training model results in a wider performance gap between PF and the other paradigms.
§.§ Imbalanced data
Data imbalance is a prevalent issue that hinders model performance, prompting us to examine the effectiveness of the paradigms in this context. To create imbalanced training data, we sample from the original datasets and conduct experiments on two datasets: IMDB for binary classification and AG News for four-category classification. The data ratio for IMDB is set at 1:5, while for AG News, the ratio is 1:1:1:7. The results of these experiments are demonstrated in <ref>. The number in the bracket indicates the change magnitude relative to balanced data.
In binary classification, the performance of the paradigms does not experience a significant decline. However, the four-category classification shows a marked drop in performance across the paradigms. Notably, PS and PSF demonstrate greater resilience to the adverse effects of data imbalance compared to PFS and PFSF, while PFS and PFSF maintain more stability than S and SF. Although most other paradigms face substantial decreases, PF maintains consistent performance with a few value changes.
§ DISCUSSIONS
We discuss the reasons behind the failure of PFS in the experiments. Unlike previous studies, we find that self-training and pre-training do not function as complements. Notably, there are instances where PFS performs worse than S. To delve deeper into this issue, we analyze the evaluation accuracy for each iteration in PFS.
As illustrated in <ref> (a)(b) for PFS with random initialization (written as PFS Random-init), i.e. regular self-training described in the <ref>, there is a significant drop in performance during the first iteration, followed by gradual improvements in subsequent iterations, but converges to poor performance. This pattern indicates that the student model in the initial iteration struggles to retain the collective knowledge gained during pre-training. We hypothesize that this may result from inefficient knowledge transfer from the pre-trained teacher model to the student model through pseudo-labels in the process of PFS.
To test our hypothesis, we consider PFS with a student model that has sufficient pre-training knowledge. To inject pre-trained knowledge into the student model, it is initialized with pre-trained parameters and then fine-tuned on labeled and pseudo-labeled data in each iteration (written as PFS Pre-init). We find that this variant of PFS outperforms both PF and S, as depicted in <ref> (c)(d).
This observation illustrates that the PFS with sufficient pre-training knowledge succeeds in improving upon PF and provides evidence to support our hypothesis.
§ CONCLUSIONS
We proposed an ensemble method to empirically explore all feasible training paradigms that combine pre-training, self-training, and fine-tuning with language models. Our study revisited the relationship between pre-training and self-training, while critically examining the limitations that may hinder improvements in either approach.
Our findings indicated that the pre-training and fine-tuning paradigm is the most effective among the various training paradigms. While this is not a discovery, it clarifies existing research on self-training and its interaction with pre-training. This analysis provides valuable insights for future design considerations and assists in selecting the most appropriate learning strategies.
plain
|
http://arxiv.org/abs/2409.03654v1 | 20240905161030 | Tressl's Structure Theorem for Separable Algebras | [
"Gabriel Ng"
] | math.AC | [
"math.AC",
"12H05, 13N99"
] |
§ ABSTRACT
This note presents a generalisation of Tressl's structure theorem for differentially finitely generated algebras over differential rings of characteristic 0 to the case of separable algebras over differential rings of arbitrary characteristic.
Tressl's Structure Theorem for Separable Algebras
Gabriel Ng
====================================================
§ INTRODUCTION
Tressl's structure theorem for differential algebras <cit.> is a result which describes the structure of differentially finitely generated algebras over differential rings of characteristic 0. It has been applied by León Sanchez and Tressl in the study of differentially large fields of characteristic 0 in order to characterise these fields in terms of the existence of points of certain algebras <cit.>.
In this note, we adapt Tressl's proof to generalise this theorem to the case of separable algebras over differential rings of arbitrary characteristic. We assume throughout that all rings are commutative and unital, and differential rings are equipped with m commuting derivations.
§ CHARACTERISTIC SETS
Let us recall the notion of a characteristic set, and some facts regarding differential ideals. For this section, we fix the following notation and objects:
* (R, Δ) is a differential ring in m commuting derivations (of arbitrary characteristic), where Δ = (δ_0,...,δ_m-1);
* Y = (Y_0,...,Y_n-1) is a tuple of n indeterminates;
* 𝒟 = {δ^α : α∈ℕ^m} is the free abelian monoid of differential operators generated by Δ;
* 𝒟Y = {δ Y_i : δ∈𝒟, i < n} is the set of derivatives of variables from Y;
* 𝒟Y^* = {y^p : y ∈𝒟Y, p ∈ℕ_>0} is the set of powers of elements from 𝒟Y;
* R{Y} is the differential polynomial ring in indeterminates Y, identified with R[𝒟Y] as an R-algebra and equipped with the natural derivations extending those on R.
We define the rank of 𝒟Y^* to be the map
rk: 𝒟Y^* →ℕ× n ×ℕ^m ×ℕ_>0
(δ^α Y_i)^p ↦ (|α|, i, α_m-1,...,α_0, p).
We equip Ø = ℕ× n ×ℕ^m ×ℕ_>0 with the lexicographic order, which is well ordered.
For a differential polynomial f ∈ R{Y}∖ R, say that a variable y ∈𝒟Y appears in f if y appears in f considered as an algebraic polynomial in R[𝒟Y]. The leader of f, denoted u_f, is the variable y ∈𝒟Y of maximal rank which appears in f. Denote by u_f^* the highest power of u_f which appears in f. We extend the notion of rank from 𝒟Y^* to R{Y} by setting
rk(f) = rk(u_f^*) ∈Ø.
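For concreteness, the rank of a power of a derivative and its lexicographic comparison on Ø can be sketched as follows. The tuple encoding is ours; Python tuples compare lexicographically, which matches the order on Ø.

```python
def rank(alpha, i, p):
    """Rank of (delta^alpha Y_i)^p, with alpha = (alpha_0, ..., alpha_{m-1})."""
    return (sum(alpha), i) + tuple(reversed(alpha)) + (p,)

# with m = 2 derivations and n = 2 indeterminates:
a = rank((1, 0), 0, 1)   # delta_0 Y_0
b = rank((0, 1), 0, 1)   # delta_1 Y_0
c = rank((1, 0), 1, 1)   # delta_0 Y_1
print(a < b, a < c)      # delta_0 Y_0 has lower rank than delta_1 Y_0 and delta_0 Y_1
```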
Let f, g ∈ R{Y} be differential polynomials, with g ∉R. We say that f is partially reduced with respect to g (also called weakly reduced), if no proper derivative of u_g appears in f. Say that f is reduced with respect to g if f is partially reduced with respect to g, and _u_g(f) < _u_g(g).
We say that f is (partially) reduced with respect to G, where G ⊆ R{Y}∖ R is nonempty, if f is (partially) reduced with respect to every g ∈ G.
We say that a nonempty subset G ⊆ R{Y} is autoreduced if every f ∈ G is reduced with respect to every g ≠ f ∈ G (including the case where G is a singleton).
It is easy to see that if G is autoreduced, then for any distinct f, g ∈ G, u_f ≠ u_g.
Every autoreduced set is finite.
Let ∞ be an element larger than every element of Ø, and equip (Ø∪{∞})^ℕ with the lexicographic order. The rank of an autoreduced set G is defined as follows: Let G = {g_0,...,g_l-1} with rk(g_0) < ... < rk(g_l-1). Define
rk(G) = (rk(g_0),... , rk(g_l-1), ∞, ∞,...)
There is no infinite strictly rank-decreasing sequence of autoreduced sets.
Let 𝔨⊆ R{Y} be a differential ideal not contained in R. Then, by the previous proposition,
{rk(G) : G ⊆𝔨 is autoreduced such that S(g) ∉𝔨 for all g ∈ G}
has a minimum. We call an autoreduced subset G of 𝔨 of minimal rank a characteristic set of 𝔨.
Let G be a characteristic set of M ⊆ R{Y}, and f ∈ M∖ R. Then, f is not reduced with respect to G.
Let f ∈ R{Y}∖ R, and write f = f_d u_f^d + ... + f_1 u_f + f_0, where f_i ∈ R[y ∈𝒟Y : y ≠ u_f], and f_d ≠ 0. The initial of f, denoted I(f), is:
I(f) = f_d.
The separant of f, denoted S(f) is:
S(f) = ∂ f/∂ u_f = d·f_d·u_f^d-1 + ... + f_1.
Finally, for any autoreduced subset G = {g_0,...,g_l-1} of R{Y}, define
H(G) = ∏_i < l I(g_i)· S(g_i),
i.e. H(G) is the product of all initials and separants of elements of G, and
H_G = {∏_i < l I(g_i)^n_i S(g_i)^m_i : n_i, m_i ∈},
i.e. H_G is the set of all products of powers of initials and separants of elements of G.
Let G be an autoreduced set in R{Y} and f ∈ R{Y}. There exists f̃∈ R{Y} which is reduced with respect to G, and H ∈ H_G such that
H · f ≡f̃[G].
If in addition f is partially reduced with respect to G, then there is h ∈ H_G such that
H · f ≡f̃(G).
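For a concrete computation, the initial and separant of a differential polynomial can be read off by viewing it as a univariate polynomial in its leader; below is a small sympy sketch in which the example polynomial (and the choice of leader u and lower-rank derivative v) is ours.

```python
import sympy as sp

u, v = sp.symbols('u v')          # u plays the role of the leader u_f, v a lower-rank derivative
f = v * u**3 + (v**2 + 1) * u + v

d = sp.degree(f, u)
I_f = sp.Poly(f, u).LC()          # initial: coefficient of u^d
S_f = sp.diff(f, u)               # separant: df/du

print(d, I_f, sp.expand(S_f))     # 3, v, 3*u**2*v + v**2 + 1
```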
§ (QUASI-)SEPARABLE RINGS AND IDEALS
We recall the notion of (quasi-)separable R-algebras and ideals from <cit.>.
Let R_0 ⊆ R be rings. We say that R is separable over R_0 if either R is zero, or R is nonzero and:
* R contains no nonzero nilpotent elements;
* For any a_0 ∈ R_0 and b ∈ R with a_0, b ≠ 0, then a_0b ≠ 0 (in particular, R_0 is a domain);
* Either R_0 has characteristic 0, or R_0 has characteristic p > 0 and R^p and R_0 are linearly disjoint over R_0^p.
Let 𝔨⊆ R be an ideal, and let ϕ: R → R/𝔨 denote the quotient map. We say that 𝔨 is separable over R_0 if ϕ(R) is separable over ϕ(R_0).
In particular, we note the special case when L/K is a separable field extension, any K-subalgebra of L is a separable K-algebra.
From now, let S = R{Y} be a finitely generated differential polynomial algebra over R. For a set A ⊆ S, denote the differential ideal generated by A by [A]. For an ideal 𝔨⊆ S and s ∈ S, define the saturated ideal of 𝔨 over s as
𝔨 : s^∞ = {h ∈ S : s^n h ∈𝔨 for some n ∈}
It is easy to verify that if 𝔨 is a differential ideal, then 𝔨:s^∞ is also differential.
Let G be an autoreduced set in S, and 𝔨 be an ideal (not necessarily differential) of S. We say that G is 𝔨-coherent if the following hold:
* 𝔨 has a set of generators partially reduced with respect to G;
* [𝔨] ⊆ ([G] + 𝔨) : H(G)^∞
* For any f, g ∈ G, and v a common derivative of u_f, u_g, say v = θ_f u_f = θ_g u_g, we have that
S(g)θ_f f - S(f)θ_g g ∈ ((G_v) + 𝔨) : H(G)^∞,
where G_v denotes the set of all differential polynomials of the form τ h, where h ∈ G, τ∈Θ, and τ u_h is of lower order than v.
Let 𝔭 be a prime differential ideal of S which is quasi-separable over R, and let G be a characteristic set of 𝔭. Then, there exists a finite set Y' of derivatives of the indeterminates Y, each partially reduced with respect to G, such that setting 𝔭_1 = 𝔭∩ R[Y'], G is S𝔭_1-coherent and 𝔭 = ([G] + S𝔭_1) : H(G)^∞. The set Y' may be replaced by any larger finite set of derivatives of Y partially reduced with respect to G.
Recall that by the definition of a characteristic set, for each a ∈ G, S(a) ∉𝔭. Further, for any g ∈ G, I(g) ∉𝔭 by <cit.>. Since 𝔭 is a prime ideal, then we also have that H_G does not contain any element of 𝔭, as for any g ∈ G, neither I(g) nor S(g) lie in . In particular, H_G does not contain 0.
We will also require an additional lemma which strengthens the above result for partially reduced polynomials:
Let G be a 𝔨-coherent autoreduced set in S, and suppose that for each g ∈ G, S(g) is not a zero-divisor in S. Then, every element of ([G] + 𝔨) : H(G)^∞ which is partially reduced with respect to G lies in ((G) + 𝔨) : H(G)^∞.
In particular, in the case where G is a characteristic set of a prime differential ideal 𝔭⊆ S, we have the following:
Let 𝔭, G and 𝔭_1 be as in Lemma <ref>. Then, any f ∈𝔭 partially reduced with respect to G lies in ((G) + S𝔭_1) : H(G)^∞.
§ THE STRUCTURE THEOREM FOR SEPARABLE ALGEBRAS
We state and prove the main theorem, which is adapted from <cit.>:
Let (S, Δ) be a differential domain, and let (R, Δ) be a differential subring of S, with R Noetherian as a ring, such that:
* S is separable over R; and,
* S is differentially finitely generated as a differential R-algebra.
Then, there exist (not necessarily differential) R-subalgebras P and B of S and an element h ∈ B with h ≠ 0 such that:
* B is a finitely generated R-algebra, and B_h is a finitely presented R-algebra;
* P is a polynomial algebra over R;
* S_h = (B· P)_h, and S_h is a differentially finitely presented R-algebra;
* The homomorphism B ⊗_R P → B · P induced by multiplication is an isomorphism of R-algebras.
§.§ Proof of Theorem <ref>
As S is a differentially finitely generated R-algebra, there is a surjective differential R-algebra homomorphism ϕ: R{Y_1,...,Y_n}→ S. Write Y = (Y_1,...,Y_n), and let 𝔭 = ker(ϕ). As S is a domain, and ϕ is injective on R, we have that 𝔭 is a differential prime ideal of R{Y} with 𝔭∩ R = 0. Further, since S is separable over R, the ideal 𝔭 is separable over R also. Let G be a characteristic set of 𝔭.
By Lemma <ref>, there is a finite set Y' ⊆𝒟Y, whose members are each partially reduced with respect to G, such that G is S𝔭_1-coherent and 𝔭 = ([G] + S𝔭_1) : H(G)^∞, where 𝔭_1 = 𝔭∩ R[Y'].
Define the following:
h ≔ϕ(H(G)),
V ≔{y ∈𝒟Y : y is not a proper derivative of any u_g, g ∈ G},
V_B ≔ Y' ∪{y ∈ V : y appears in some g ∈ G},
B ≔ϕ(R[V_B]),
P ≔ϕ(R[V ∖ V_B]).
Observe that Y' is a subset of V: if a differential indeterminate is a proper derivative of some u_g, then it is not partially reduced with respect to G and thus does not lie in Y'.
Since G is autoreduced, f ∈ R{Y} is partially reduced with respect to G if and only if f ∈ R[V].
The restriction of ϕ to P' = R[V ∖ V_B] is injective.
Suppose we have f ∈ P' ∩. We claim that f is reduced with respect to G. By the previous observation, we have that f is partially reduced with respect to G. Since the leader u_g of each g ∈ G is in V_B, no leader of any g ∈ G appears in f, and thus f is reduced with respect to G. By Lemma <ref>, f ∈ R, and since ∩ R = 0, f = 0, as required.
h ≠ 0.
By the remark following Lemma <ref> and writing G = {g_0,...,g_l-1}, we observe that
H(G) = ∏_i < l I(g_i)· S(g_i)
does not lie in the prime ideal 𝔭, as no I(g_i) or S(g_i) lies in 𝔭. Thus h = ϕ(H(G)) is not 0 in S.
S_h = (B · P)_h.
Let f ∈ R{Y}. By Proposition <ref>, there is f̃∈ R{Y} reduced with respect to G, and H ∈ H_G such that H · f ≡f̃ (mod [G]). Since G ⊆𝔭, we have that [G] ⊆𝔭, thus ϕ(f) ·ϕ(H) = ϕ(f̃) in S.
Since f̃ is reduced with respect to G, we have that f̃∈ R[V], and in particular ϕ(f̃) ∈ B · P.
Further, for each g ∈ G, ϕ(I(g)) and ϕ(S(g)) are units in (B · P)_h, so ϕ(H) is a unit also. Thus, ϕ(f) = ϕ(H)^-1ϕ(f̃) ∈ (B · P)_h, as required.
S_h is a differentially finitely presented R-algebra.
We extend the projection ϕ: R{Y}→ S to ψ: R{Y}[H(G)^-1] → S_h by defining ψ(H(G)^-1) = h^-1. Thus S_h is differentially finitely generated.
Let 𝔮 = ker(ψ). We show that 𝔮 is finitely generated as a differential ideal. As R is Noetherian, R[Y'] is also Noetherian, and thus 𝔭_1 is finitely generated as an ideal. Let A be such a generating set.
Clearly, since ψ extends ϕ, G ∪ A ⊆𝔮. We claim that G ∪ A generates 𝔮 as a differential ideal.
For the reverse inclusion, suppose that f/H(G)^d ∈𝔮 for some f ∈ R{Y} and d ∈ℕ.
Since S_h is a domain, and h^-d is nonzero, we have that ψ(f) = ϕ(f) = 0, i.e. f ∈𝔭. Since 𝔭 = ([G] + S𝔭_1) : H(G)^∞, there is some n ∈ℕ with fH(G)^n ∈ [G] + S𝔭_1 ⊆ [G ∪ A]. Thus, we have that f lies in the differential ideal generated by G ∪ A in R{Y}[H(G)^-1].
B is finitely generated, and B_h is finitely presented as R-algebras.
This is clear.
The above claims prove parts (a), (b) and (c) of Theorem <ref>. It remains to prove (d).
Suppose b_1,...,b_m ∈ B are linearly dependent over P. Then, they are linearly dependent over R.
Suppose we have f_i ∈ R[V_B] with ϕ(f_i) = b_i, and p_i ∈ R[V ∖ V_B], not all in 𝔭, such that q := ∑_i f_i p_i ∈𝔭. We may assume in particular that p_1 ∉𝔭. Since q ∈ R[V], q is partially reduced with respect to G.
Let A again be a finite generating set of 𝔭_1 as an ideal of R[Y']. Then, by Corollary <ref>, there are n ∈ℕ, and h_g for g ∈ G ∪ A such that
H(G)^n q = ∑_g ∈ G ∪ A h_g g.
Since H(G), q and every g ∈ G ∪ A lie in R[V], we may assume that the coefficients h_g also lie in R[V].
Since p_1 ≠ 0, there exists an R-algebra homomorphism ψ: R[V ∖ V_B] → R such that ψ(p_1) ≠ 0. This can be extended to an R[V_B]-algebra homomorphism ψ: R[V] → R[V_B] with ψ(p_1) ≠ 0.
Since all p_i and h_g lie in R[V], we may now apply ψ to the following equation:
H(G)^n( f_1 p_1 + ... + f_k p_k) = ∑_g ∈ G∪ A h_g g.
Since H(G), every f_i and g ∈ G ∪ A lies in R[V_B], these are preserved by ψ, and thus we obtain that
H(G)^n(ψ(p_1) f_1 + ... + ψ(p_k) f_k) = ∑_g∈ G∪ Aψ(h_g) g.
Observe that the right hand side lies in the ideal [G] + S𝔭_1, and thus
ψ(p_1) f_1 + ... + ψ(p_k) f_k ∈ ([G] + S𝔭_1) : H(G)^∞ = 𝔭.
Recall that ψ(p_i) ∈ R, and ϕ is an R-algebra homomorphism. Applying ϕ to the above yields:
ψ(p_1) b_1 + ... + ψ(p_k) b_k = 0.
By previous assumption, we have that ψ(p_1) is not zero, and thus the b_i are linearly dependent over R.
This implies (d) as follows:
The homomorphism B ⊗_R P → B· P induced by multiplication is an isomorphism of R-algebras.
The homomorphism is clearly surjective, and it remains to show injectivity. Let m ∈ℕ be minimal such that there are b_1,...,b_m ∈ B and p_1,...,p_m ∈ P with ∑_i b_i p_i = 0 and x := ∑_i b_i ⊗ p_i ≠ 0. Then, the b_i are linearly dependent over P. By Claim <ref>, the b_i are in fact linearly dependent over R. Thus, there are r_1,...,r_m ∈ R, not all zero, such that r_1b_1 + ... + r_mb_m = 0.
Without loss, assume that r_1 ≠ 0. Then, m > 1, and observe that
r_1b_1 ⊗ p_1 = -(r_2b_2 + ... + r_mb_m) ⊗ p_1,
which gives
r_1x = -(r_2b_2 + ... + r_mb_m) ⊗ p_1 + r_1b_2 ⊗ p_2 + ... + r_1b_m ⊗ p_m.
Collecting terms and rearranging, we have
r_1x = b_2 ⊗ (r_1p_2 - r_2p_1) + ... + b_m ⊗ (r_1p_m - r_mp_1).
We necessarily have that r_1x = 0, otherwise this contradicts the minimality of the choice of m. Let F denote the quotient field of R. Then, in F ⊗_R (B ⊗_R P), we have that
1 ⊗ x = 1/r_1⊗ r_1x = 0.
By Claim <ref>, P is a polynomial R-algebra, hence flat. As the inclusion B → F ⊗_R B is injective, it follows that B ⊗_R P → F ⊗_R B ⊗_R P is injective. Thus, 1 ⊗ x = 0 in F ⊗_R B ⊗_R P implies that x = 0, which is a contradiction.
|
http://arxiv.org/abs/2409.02457v2 | 20240904053925 | On Oriented Diameter of Power Graphs | [
"Deepu Benson",
"Bireswar Das",
"Dipan Dey",
"Jinia Ghosh"
] | math.CO | [
"math.CO",
"cs.DM",
"05C12, 05C20, 05C25, 20D15"
] |
§ ABSTRACT
In this paper, we study the oriented diameter of power graphs of groups. We show that a 2-edge connected power graph of a finite group has oriented diameter at most 4. We prove that the power graph of a cyclic group of order n has oriented diameter 2 for all n≠ 2,4,6. Until our work, to the best of our knowledge, no infinite family of graphs with oriented diameter 2 had been identified except for subclasses of complete graphs. Finally, we give a complete characterization of the oriented diameter of the power graphs of nilpotent groups. This, in turn, gives an algorithm for computing the oriented diameter of the power graph of a given nilpotent group that runs in time polynomial in the size of the group.
§ INTRODUCTION
An orientation of an undirected graph X is an assignment of exactly one direction to each of the edges of X. An orientation is called a strong orientation if any two vertices are reachable from each other by directed paths introduced by the orientation. It is easy to see that a graph with a bridge cannot admit a strong orientation. In 1939, Robbins <cit.> proved that a graph is strongly orientable if and only if it is 2-edge connected[A graph is 2-edge connected if and only if it is bridgeless and connected.].
The diameter of an undirected graph is the maximum distance between any two vertices in the graph. We denote the class of 2-edge connected undirected graphs with diameter d by ℱ_d.
For a directed graph 𝔛, the distance d_𝔛(u,v) of a vertex v from a vertex u is the length of a shortest directed path from u to v. The diameter of a directed graph 𝔛, denoted by diam(𝔛), is the number max_u,vd_𝔛(u,v). We write diam(𝔛):=∞ if there is no directed path from u to v for some pair of vertices u and v in 𝔛. Let X_ be the directed graph obtained from X after introducing the orientation . The oriented diameter OD(X) of X is defined to be the minimum number in the set {diam(X_) | is an orientation of X}. Let OD(ℱ_d):=max {OD(X) | X∈ℱ_d}. Note that OD(X)=∞ if the graph X is not 2-edge connected[
We assume both the diameter and the oriented diameter of a graph with a single vertex to be 0.].
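For very small graphs the quantities defined above can be computed by brute force, which is convenient for checking the examples that appear later; the following Python sketch (all function names are ours) enumerates every orientation, so it is only practical when |E| is small.

from itertools import product
from collections import deque

def directed_diameter(n, arcs):
    # Diameter of the digraph on vertices 0,...,n-1 with the given arcs,
    # or None if some ordered pair of vertices is not joined by a directed path.
    adj = [[] for _ in range(n)]
    for u, v in arcs:
        adj[u].append(v)
    worst = 0
    for s in range(n):
        dist = [None] * n
        dist[s] = 0
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if dist[w] is None:
                    dist[w] = dist[u] + 1
                    queue.append(w)
        if any(d is None for d in dist):
            return None
        worst = max(worst, max(dist))
    return worst

def oriented_diameter(n, edges):
    # OD(X): minimum diameter over all 2^|E| orientations (infinity when X has
    # no strong orientation).  Exponential in |E|, so only for tiny examples.
    best = float('inf')
    for signs in product((0, 1), repeat=len(edges)):
        arcs = [(u, v) if s == 0 else (v, u) for (u, v), s in zip(edges, signs)]
        d = directed_diameter(n, arcs)
        if d is not None:
            best = min(best, d)
    return best

# The 4-cycle is 2-edge connected with diameter 2 and oriented diameter 3:
print(oriented_diameter(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))   # 3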
While Robbins <cit.> provided the necessary and sufficient condition for the existence of a strong orientation of a graph, the paper does not offer any quantitative analysis of the difference in distances between a pair of vertices before and after strongly orienting the graph. In 1978, Chvátal and Thomassen <cit.> accepted this challenge and proved that 1/2 d^2 + d ≤ OD(ℱ_d) ≤ 2d^2+2d. In 2021, Babu et al. <cit.> improved the upper bound to 1.373d^2 + 6.971d - 1 for all d ≥ 8. Both of these works yield polynomial-time algorithms to obtain the required orientation.
The exploration of oriented diameters for classes of graphs with small values of diameter, as well as specific graph classes, was prompted by the quadratic upper bound on the oriented diameter. AT-free graphs <cit.> and chordal graphs <cit.> are popular such graph classes investigated. Attempts were also made to improve the general bound for OD(ℱ_d) provided by Chvátal and Thomassen <cit.> for specific values of d. From a result in <cit.>, it can be seen that OD(ℱ_1) = 3. Chvátal and Thomassen <cit.> proved that OD(ℱ_2) = 6. A tight bound was obtained for OD(ℱ_3) also. The results from <cit.> proved that OD(ℱ_3) = 9. However, exact bounds are not available when d > 3. The current best upper bound is 21, and the lower bound is 12 for OD(ℱ_4) <cit.>. The upper and lower bounds for OD(ℱ_d) when d ≥ 5 also follow from these two works. Moreover, these results demonstrate the challenging nature of determining the oriented diameter for classes of graphs, even when the diameter is very small.
There are several classes of graphs defined in terms of groups, e.g., Cayley graphs, commuting graphs, power graphs, etc. Cameron's survey contains an interesting collection of results on such graphs <cit.>. In this paper, we focus on power graphs of finite groups (<Ref>), which were defined by Chakrabarty et al. <cit.>.
Abawajy et al. <cit.> and Kumar et al. <cit.> gave surveys on power graphs.
Our primary motivation was to investigate if the symmetry structure of the underlying group of a power graph is useful for studying its oriented diameter. In this paper, we provide strong evidence that the algebraic structure is indeed helpful.
The result of Chvátal and Thomassen <cit.> implies that OD(Pow(G))≤ 6 for all 2-edge connected power graphs, as Pow(G)∈ℱ_1 ∪ℱ_2 (see <Ref>). We obtain a tighter upper bound for power graphs by showing that every 2-edge connected power graph has oriented diameter at most 4. Moreover, the condition of Pow(G) being 2-edge connected simply translates to G not having any maximal cyclic subgroup of order 2.
Observe that ℱ_1 consists only of complete graphs K_n. A result in <cit.> proved that a complete graph K_n has oriented diameter 2 when n ≥ 3 and n ≠ 4. On the other hand, OD(K_4)=3. A graph X ∈ℱ_d can have oriented diameter 2 only if d ∈{1, 2} because OD(X) ≥ diam(X).
We show that the power graph of a cyclic group of order n has oriented diameter 2 for all n≠ 2,4,6. Until our work, to the best of our knowledge, no infinite family of graphs belonging to ℱ_2 with oriented diameter 2 had been identified.
Nilpotent groups are important classes of groups that have been studied extensively (see, e.g., <cit.>). We show that the oriented diameter of finite non-cyclic nilpotent groups is either 3 or 4. Moreover, we determine the exact conditions under which the oriented diameter is 3 and 4. Our main result in this paper is a complete group theoretic characterization of the oriented diameter of power graphs of nilpotent groups. We give this characterization in terms of the uniqueness of certain subgroups and the existence of a certain maximal cyclic subgroup.
Next, we focus on the computational problem of computing the oriented diameter of a given graph X. A key result by Chvátal et al. <cit.> showed that it is NP-hard to decide whether a given undirected graph has oriented diameter 2. This leads to the investigation of several versions of the problem by restricting the class of graphs.
Fomin et al. provided an approximation algorithm to orient an AT-free graph X of diameter d with an orientation of diameter at most 2d+11 <cit.>.
Fomin, Matamala and Rapaport <cit.> gave a linear-time approximation algorithm for computing a strong orientation for a chordal graph X with diameter at most one plus twice the oriented diameter of X. Eggemann and Noble <cit.> designed a fixed-parameter tractable (FPT) algorithm that decides if a planar graph X has oriented diameter at most l, where l is the parameter.
We show that the oriented diameter of the power graphs of nilpotent groups can be computed in polynomial time.
It turns out it is rather straightforward to check the conditions in the characterization of the oriented diameter of power graphs of finite nilpotent groups in polynomial time.
Our results on the oriented diameter of power graphs hinge on figuring out interesting combinatorial and algebraic structures of the power graphs. For example, the results on the power graphs of cyclic groups depend on a careful “decomposition” of the graph in “layers” using its subgroup structures, which in turn helps us to apply an inductive approach for constructing a diameter 2 orientation (see <Ref>).
The orientations we construct in this paper depend on careful designs of gadgets (P_4-gadget in <Ref> and C_4-gadget in <Ref>) and their placements in Pow(G) using group theoretic properties (
<Ref>,
<Ref>).
For a nilpotent group G, we prove that for Pow(G) to have oriented diameter 3, the oriented edges of Pow(G) must obey certain uniformity conditions (<Ref>). While proving an important lower bound on OD(Pow(G)) for nilpotent group G, these conditions are crucial for cutting down the number of possibilities of orienting edges in Pow(G) (<Ref>).
§ PRELIMINARIES
For a simple graph X=(V,E), the vertex set of X is denoted by V(X), and the edge set of X is denoted by E(X)
. For basic definitions and notations from graph theory, an interested reader can refer to any standard textbook (e.g., <cit.>). The induced subgraph of X on S⊆ V(X) is denoted by X[S]. We denote a path (both directed and undirected) from u_1 to u_k by the sequence of vertices u_1u_2… u_k.
Let X=(V,E) be an undirected graph. A subset 𝒪⊆ V× V is said to be a partial orientation of X if 𝒪 is obtained by assigning exactly one direction to a subset E' of the edge set E. That is, for all {u,v}∈ E', either (u,v) or (v,u) is in 𝒪.
We use X_𝒪 to denote the directed graph (V,𝒪). Further, we denote the distance from a vertex x to a vertex y in the directed graph X_𝒪 by d_X_𝒪(x,y).
If 𝒪 is a partial orientation of an undirected graph X, then OD(X) ≤ diam(X_𝒪).
The basic definitions and facts on group theory can be found in any standard book (e.g., <cit.>). In this paper, we only consider finite groups. A subset H of a group G is called a subgroup of G if H forms a group under the binary operation of G. This is denoted by H ≤ G.
The number of elements in a group G is called the order of the group, denoted by |G|. The order of an element g in G, denoted by o(g), is the smallest positive integer m such that g^m=e, where e is the identity element.
A group G is called cyclic if G={g, g^2, …, g^m-1, g^m=e} for some g∈ G. The element g is called a generator of G, and we write G=⟨ g ⟩. The set of all generators of a cyclic group G is denoted by gen(G).
For a cyclic group G, |gen(G)|=ϕ(|G|), where ϕ is the Euler's totient function. Recall that ϕ(p_1^_1… p_k^_k)=p_1^_1-1(p_1-1)… p_k^_k-1(p_k-1), where p_i's are distinct primes and _i's are natural numbers.
A subgroup C of G is called a cyclic subgroup if C is cyclic. We call a cyclic subgroup C of G a maximal cyclic subgroup of G if C is not properly contained in any cyclic subgroup of G.
We now state a well-known group theoretic fact that is used extensively in this paper.
A finite cyclic group of order n has a unique subgroup (which is also cyclic) of order d for each divisor d of n.
A group G is called a p-group if the order of each non-identity element is some positive power of p, where p is a prime. We denote the class of groups with prime power order by 𝒫.
For a prime p, if p^m is the highest power of p such that p^m divides |G|, then a subgroup H ≤ G such that |H|=p^m is called a Sylow p-subgroup of G. The direct product of two groups G and H, denoted by G× H, is the group with elements (g,h) where g ∈ G and h ∈ H under the group operation (g_1,h_1)(g_2,h_2)=(g_1g_2,h_1h_2), where the co-ordinate wise operations are the group operations of G and H respectively. A finite group is called a nilpotent group if it is a direct product of its Sylow subgroups. Moreover, each Sylow subgroup is unique in a finite nilpotent group.
We now give the definition of power graphs (see <cit.>).
The power graph of a group G, denoted by Pow(G), is an undirected graph with vertex set G, and edge set E= {{x,y}: y = x^m for some integer m }.
If {x,y} is an edge in Pow(G), then either o(x)|o(y) or o(y)|o(x).
Since e is a dominating vertex of Pow(G), diam(Pow(G))≤ 2.
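For concreteness, the power graph of ℤ_n (written additively, so the "powers" of y are the multiples of y modulo n) can be generated directly from the definition; the helper name below is ours.

from math import gcd

def power_graph_edges(n):
    # Vertices 0,...,n-1 stand for Z_n.  Since <x> consists of the multiples of
    # gcd(x, n) modulo n, y lies in <x> iff gcd(x, n) divides y; {x, y} is an
    # edge iff one of the two containments holds.
    edges = []
    for x in range(n):
        for y in range(x + 1, n):
            if y % gcd(x, n) == 0 or x % gcd(y, n) == 0:
                edges.append((x, y))
    return edges

# Pow(Z_6) has 13 edges; 0 and the generators 1, 5 are adjacent to everything,
# while 3 (the element of order 2) is adjacent only to 0, 1 and 5.
print(power_graph_edges(6))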
We define an equivalence relation ∼ on G as follows: for x,y∈ G, x∼ y if and only if ⟨ x ⟩ = ⟨ y ⟩, i.e., x and y generate the same cyclic subgroup of G. We call this equivalence class generator equivalence class (in short, ge-class). Let us denote the ge-class containing x under ∼ by [x]. Note that [x]=gen(x).
So, all the elements of a ge-class are of the same order. We define the order of a ge-class by the order of any element belonging to the class.
One can easily notice that the size of a class [x] is ϕ(o(x)).
In Pow(G) the following two facts hold:
(i) Each ge-class [x] of G induces a complete subgraph of Pow(G);
(ii) For two ge-classes [x] and [y], if an element x∈ [x] is adjacent to an element y∈ [y] in Pow(G), then every element of [x] is adjacent to every element of [y].
Hence, in this case, it makes sense to say that [x] and [y] are adjacent in the graph Pow(G).
This remark motivates us to formulate the following definition.
Two distinct ge-classes [x] and [y] are called adjacent if x and y are adjacent in Pow(G).
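For instance, in ℤ_12 the ge-classes are {0}, {6}, {4,8}, {3,9}, {2,10} and {1,5,7,11}, of orders 1, 2, 3, 4, 6 and 12 respectively; each class has size ϕ of its order, and, for example, the classes [2]={2,10} and [4]={4,8} are adjacent since 4 ∈⟨ 2 ⟩.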
In <Ref>, we have provided an extended preliminary.
§ ORIENTED DIAMETER OF POWER GRAPHS
We begin the section by stating a necessary and sufficient condition on a finite group for the existence of a strong orientation of the corresponding power graph. The main result of this section is that the oriented diameter of 2-edge connected power graphs is at most 4.
Now, we state one useful result regarding the oriented diameter of complete graphs that is required for our further discussion.
Fomin, Matamala and Rapaport <cit.> proved the following theorem about the oriented diameter of complete graphs.
<cit.>
For every n ≥ 3, OD(K_n)=2 if n ≠ 4, and OD(K_4)=3. Moreover, for every n ≥ 5,
every strong orientation of K_n with diameter 2 can be extended to a strong orientation of K_n+1 with diameter 2 and this extension can be constructed in linear time.
A power graph is 2-edge connected if and only if the underlying group has no maximal cyclic subgroup of order 2.
If a group G has a maximal cyclic subgroup ⟨ g ⟩ of order 2, then Pow(G) has a pendant vertex (a vertex with degree 1) g adjacent to identity, i.e., {e,g} is a bridge in Pow(G).
For the other direction, let {u,v} be a bridge of Pow(G). Now, if none of u and v are identity, then the subgraph induced on {u,v,e} forms a cycle, which is a contradiction to the fact {u,v} is a bridge in Pow(G). We can assume without loss of generality that u=e. If v has a neighbour, say v', in the graph Pow(G), then {u=e,v,v'} makes a cycle, and this again leads to the contradiction that {u=e,v} is a bridge. So, we can assume that e is the only neighbour of v in Pow(G). This means ⟨ v ⟩={v,e} and v ∉⟨ g ⟩ for any g ∈ G ∖{e}. So, ⟨ v ⟩ is a maximal cyclic subgroup of order 2.
According to <Ref>, the power graphs of ℤ_2 and of the dihedral group D_2n are not 2-edge connected and hence are not strongly orientable.
Let X=(V, E) be an undirected graph with a dominating vertex e. Suppose V∖{e} can be partitioned into sets C_1,…, C_m such that each induced subgraph X[C_i] is a complete subgraph with at least two vertices, then OD(X) ≤ 4.
[Figure: An orientation of K_4 with ecc(e)=2.]
Proof. We claim that there is a partial orientation 𝒪 of the given graph X such that the eccentricity [The out-eccentricity of a vertex v of a directed graph 𝔛 is the maximum distance from v to a vertex u in 𝔛. The in-eccentricity of a vertex v of 𝔛 is the maximum distance from a vertex u in 𝔛 to v. The eccentricity of a vertex v of 𝔛 is the maximum of its out-eccentricity and in-eccentricity.] of e in X_𝒪 is 2. This will give us diam(X_𝒪) ≤ 4, which in turn will imply that OD(X) ≤ 4 (due to <Ref>).
Therefore, it is enough to give a partial orientation of each induced subgraph X[C_i ∪{ e }], such that the vertex e has eccentricity 2 in the oriented subgraph X[C_i ∪{ e }].
We observe that for each i, C_i∪{e} induces a complete subgraph of X of size at least 3. If X[C_i ∪{ e }] is a complete subgraph of size n ≠ 4, then by <Ref>, we can orient the subgraph with diameter 2. In particular, e has eccentricity 2 with this orientation.
Otherwise, if C_i={ a,b,c } then we can give an orientation to the induced subgraph X[C_i ∪{ e }] (as shown in <Ref>) with e having eccentricity 2 in the oriented subgraph X[C_i ∪{ e}]. □
The oriented diameter of Pow(G) is at most 4, where G is a finite group with no maximal cyclic subgroup of order 2.
Let S=G∖{e}. Our idea is to partition S into sets C_1,…, C_m such that the condition of <Ref> is satisfied.
To construct C_1, we pick a vertex g ∈ S such that o(g) >2. Such a vertex exists as G does not have any maximal cyclic subgroup of order 2. Let C_1=[g]. Inductively, assume that we have constructed C_1, …, C_l. In S ∖ (C_1 ∪…∪ C_l) we pick a vertex g such that o(g)>2. The process ends if there is no such element. Otherwise, let C_l+1=[g].
Let C_1, …, C_m be the sets created at the end of the process. If S ∖ (C_1 ∪…∪ C_m) is non-empty, it consists of elements of order 2 only. Let y ∈ S ∖ (C_1 ∪…∪ C_m). Since ⟨ y ⟩ is not a maximal cyclic subgroup, y must be generated by some element g of order more than 2. Let g ∈ C_i. Note that no other element y' ∈ S ∖ (C_1 ∪…∪ C_m) can be generated by any element in C_i. Otherwise, it implies that ⟨ g ⟩ contains two elements of order 2, which contradicts <Ref>. Now, as g generates y, the ge-class [g] is adjacent to [y]={y} (by <Ref>). Hence, by <Ref>, C_i ∪{y} induces a clique. We update C_i by C_i ∪{y}. Thus, each y ∈ S ∖ (C_1 ∪…∪ C_m) can be merged to a unique C_j. Now, we apply <Ref> to conclude that OD(Pow(G))≤ 4. Note that if S ∖ (C_1 ∪…∪ C_m) is empty, then we can directly apply <Ref> to obtain the result.
§ ORIENTED DIAMETER OF POWER GRAPHS OF CYCLIC GROUPS
Each cyclic group of order n is isomorphic to _n, where _n is the additive group of integers modulo n.
<Ref> tells that when n ≥ 3, Pow(_n) has at least two dominating vertices. Using this, we prove that Pow(_n), where n ≥ 3, can be given a partial orientation of diameter 3 (see <Ref>).
[Figure: Illustration of a partial orientation of Pow(ℤ_n) with diameter 3.]
<cit.>
Let G be a cyclic group. Then Dom(Pow(G)) consists of all elements in G, if G is of prime power order; otherwise Dom(Pow(G))=gen(G)∪{e}.
The oriented diameter of Pow(ℤ_n) is at most 3, where n ≥ 3.
Proof. From <Ref>, we know that Pow(ℤ_n) has at least two dominating vertices since ℤ_n has ϕ(n)≥ 2 generators. Let d_1 and d_2 be two such dominating vertices. First we orient the edge {d_1,d_2} as (d_1,d_2). Then for any vertex u ∈ℤ_n ∖{d_1,d_2}, we orient the edges {u,d_1}, {u,d_2} as (u,d_1) and (d_2,u) respectively forcing u,d_1,d_2 to form a directed cycle (see <Ref>). It is easy to see that the diameter of this oriented graph is 3.
□
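The partial orientation used in this proof is easy to reproduce and test mechanically; the sketch below (the names are ours, and it assumes the helper directed_diameter from the earlier sketch is in scope) builds exactly this orientation for ℤ_n and confirms that its diameter is 3.

def dominating_pair_orientation(n):
    # Partial orientation from the proof above: d1 and d2 are two dominating
    # vertices of Pow(Z_n) (here two generators, so n >= 3 is assumed); we add
    # the arc (d1, d2) and, for every other vertex u, the arcs (u, d1), (d2, u),
    # so that u, d1, d2 always lie on a directed 3-cycle.
    d1, d2 = 1, n - 1
    arcs = [(d1, d2)]
    for u in range(n):
        if u not in (d1, d2):
            arcs.append((u, d1))
            arcs.append((d2, u))
    return arcs

# Reusing directed_diameter from the earlier sketch:
print(directed_diameter(10, dominating_pair_orientation(10)))   # 3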
Now, we present the main result on the oriented diameter of the power graphs of cyclic groups.
The oriented diameter of Pow(_n) =
0 if n=1
∞ if n=2
3 if n=4,6
2 otherwise
First, we discuss the case when n=6.
The oriented diameter of Pow(ℤ_6) is 3.
We prove here that Pow(ℤ_6) (<Ref>) cannot have oriented diameter 2. The group ℤ_6 has elements 0,1,2,3,4,5, and the vertex 3 (the unique element of order 2) is adjacent exactly to 0, 1 and 5 in Pow(ℤ_6). In any strong orientation, it cannot be the case that all the edges incident to a vertex v are directed outwards from v or inwards to v, so the vertex 3 has either exactly one outward edge or exactly one inward edge. First suppose that 3 has exactly one outward edge. Since 0, 1 and 5 are interchangeable by automorphisms of Pow(ℤ_6), we can assume without loss of generality that the edges incident to 3 are oriented as (3,0), (1,3) and (5,3). Then, to have a directed path of length 2 from 3 to each of the vertices 1, 5, 2 and 4, every such path must start with the edge (3,0), so we need the directed edges (0,1), (0,5), (0,2) and (0,4) respectively. In that case, the only inward edge at 0 is the one from 3, and 2 is not adjacent to 3, so we cannot have a directed path of length 2 from 2 to 0 (see <Ref>), and hence Pow(ℤ_6) cannot have an orientation with diameter 2. The case when the vertex 3 has only one inward edge is symmetric. Therefore, by <Ref>, we have OD(Pow(ℤ_6))=3.
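Since Pow(ℤ_6) has only 13 edges, the case analysis above can also be checked mechanically by enumerating all 2^13 orientations (reusing oriented_diameter and power_graph_edges from the earlier sketches):

print(oriented_diameter(6, power_graph_edges(6)))   # 3, so no orientation has diameter 2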
We now present some tools and techniques which are necessary to prove <Ref>. First, consider the directed subgraph, which is a directed path of length 3, shown in <Ref>. We call it a `P_4-gadget'. As the name suggests, the `P_4-gadget' is a tuple of four vertices (a, b, c, d) with directed edges (a,b), (b,c), (c,d).
For n≥ 4, there exists an optimal orientation of K_n having a P_4-gadget as a subgraph.
<Ref> shows an optimal orientation of K_4 with a P_4-gadget. Figure <ref> shows an optimal orientation of K_5 with a P_4-gadget. In both the figures, the subgraph formed by the directed edges (a,b), (b, c), and (c, d) gives the required P_4-gadget (marked in blue). By Theorem <ref>, this orientation of K_5 can be extended to obtain an orientation of K_n with diameter 2 for n≥ 6.
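For small n such an orientation can also be found by exhaustive search; the following standalone sketch (names are ours) does this for K_5, fixing the P_4-gadget on the vertices 0,1,2,3.

from itertools import product, combinations

def has_diameter_two(n, arcs):
    # In a tournament every pair is adjacent, so paths of length <= 2 suffice.
    out = {v: set() for v in range(n)}
    for u, v in arcs:
        out[u].add(v)
    return all(v in out[u] or any(v in out[w] for w in out[u])
               for u in range(n) for v in range(n) if u != v)

edges5 = list(combinations(range(5), 2))
gadget = {(0, 1), (1, 2), (2, 3)}        # the directed path a -> b -> c -> d
for signs in product((0, 1), repeat=len(edges5)):
    arcs = [(u, v) if s == 0 else (v, u) for (u, v), s in zip(edges5, signs)]
    if gadget <= set(arcs) and has_diameter_two(5, arcs):
        print(arcs)                      # one diameter-2 orientation of K_5 with the gadget
        break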
Let X=(V, E) be an undirected graph such that V=L_T ⊔ L_M ⊔ L_B (where ⊔ denotes disjoint union) and the following properties hold: (a) There is a partial orientation of X[L_T] with diameter at most 2; (b) |L_M| is even, |L_M|≥ 4, and L_M is a set of dominating vertices of X; (c) There is a partial orientation 𝒪_B of the edges of X[L_B] and the edges in E(L_T, L_B) such that there is a directed path of length at most 2 between any two vertices a,b ∈ L_B using only the directed edges in 𝒪_B. Then the oriented diameter of X is 2.
We orient the graph X with the following partial orientations 𝒪_α, 𝒪_β, 𝒪_γ (see <Ref>).
𝒪_α: Since the set L_M induces a clique of size at least 4, by <Ref> there is an optimal orientation of X[L_M] having a P_4-gadget. In 𝒪_α, we include this optimal orientation of X[L_M] along with the optimal orientation of X[L_T] (as per condition (a)) and 𝒪_B (as per condition (c)).
𝒪_β: Pick a P_4-gadget (a,b,c,d) in L_M. Then, for any u ∈ L_T, we put (u,a),(b,u),(u,c),(d,u) in 𝒪_β. Also, depending on the directions of the edges {a,d} and {b,c} given in 𝒪_α, we orient the edges between any vertex r ∈ L_B and a vertex in {a,b,c,d} such that r,b,c as well as r,a,d lie in a directed 3-cycle. For example, if (a,d) ∈𝒪_α, then we put (r,a),(d,r) in 𝒪_β. See <Ref>.
𝒪_γ: When |L_M|≠ 4, partition the set L_M∖{a,b,c,d} into disjoint pairs {v,w}. This partitioning is possible since |L_M| is even. Now, we orient the edges between any vertex r ∈ L_B ∪ L_T and a vertex in {v,w} such that r,v,w lie in a directed 3-cycle. For example, if (v,w) ∈𝒪_α, then we put (r,v),(w,r) in 𝒪_γ.
The case when |L_M|=4 is slightly different and handled as shown in <Ref>.
We now show that using 𝒪_α, 𝒪_β and 𝒪_γ, we indeed get OD(X)=2.
Let X_𝒪 be the directed graph derived after orienting the edges of X using the partial orientations 𝒪_α, 𝒪_β, 𝒪_γ.
It is easy to see that using 𝒪_α, there is a directed path of length at most 2 between any two vertices of L_T (and L_B). The same applies for L_M if |L_M|≥ 5. Whereas, if |L_M|=4, we can see from <Ref> that {a,d} is the only pair of vertices in L_M such that 𝒪_α gives a directed path of length 3 from a to d in X[L_M]. But since (d,a) is in 𝒪_α, we have the directed edges (a,r) and (r,d) for any vertex r∈ L_B while applying the rule of 𝒪_β. Hence, in this case, there is a directed path ard of length 2, which solves our purpose.
From <Ref> it is clear that d_X_𝒪(r,u)=d_X_𝒪(u,r)=2 for any vertex u∈ L_T and for any vertex r∈ L_B. Moreover, for any vertex u∈ L_T, for any vertex y∈{a,b,c,d}⊂ L_M and for any vertex r∈ L_B, we have d_X_𝒪(u,y)=d_X_𝒪(y,u)=2 and d_X_𝒪(r,y)=d_X_𝒪(y,r)=2. Now since every vertex y of L_M∖{a,b,c,d} participates in a directed 3-cycle with any vertex r of L_B as well as with any vertex u of L_T, due to 𝒪_γ (see <Ref>), we have d_X_𝒪(u,y)=d_X_𝒪(y,u)=2 as well as d_X_𝒪(r,y)=d_X_𝒪(y,r)=2.
Hence, diam(X_𝒪)=2 and by <Ref>, we have OD(X)=2.
Let us state a useful fact about the structure of power graphs of cyclic groups.
Let G be a cyclic group and x,y ∈ G. Then {x,y} is an edge of Pow(G) if and only if o(x) | o(y) or o(y)| o(x). Therefore S is a clique in Pow(G) if and only if o(x)|o(y) or o(y)|o(x) for all x,y ∈ S.
If q≥ 3 is a prime, then the oriented diameter of Pow(ℤ_2^α q^β), α, β≥ 1, is 2 except when (α,β,q)=(1,1,3) (i.e., for ℤ_6).
In this proof, we use the fact that a cyclic group H has exactly ϕ(k) elements of order k for each divisor k of |H|. Let G=ℤ_2^α q^β. Let G_j be the subgroup of G of order 2^α q^j, 1≤ j ≤β (since G is cyclic, a unique such G_j exists by <Ref>). The idea is to inductively show that if Pow(G_j) has oriented diameter 2, so does Pow(G_j+1). For this, we apply <Ref> with L_B=G_j, L_M=gen(G_j+1)={x | o(x)=2^α q^j+1}, and L_T=G_j+1∖ (L_B∪ L_M)={x | o(x)=2^k q^j+1, 0≤ k ≤ (α-1)}. The proof is by induction on j.
There are two base cases:
Base cases:
1. (α,q)≠ (1,3). Then, we use j=1 as the base case.
We divide G_1 into three sets L_B={x| o(x)=1 or o(x)=2^k · q where 0 ≤ k < α}; L_M= gen(G_1)={x| o(x)=2^α· q}; L_T={x | o(x)=2^k where 1 ≤ k ≤α}. Using <Ref>, L_B and L_T induce complete subgraphs and, moreover, the corresponding induced subgraphs are isomorphic to K_{2^{α-1}(q-1)+1} and K_{2^α-1} respectively. |L_M|= ϕ(2^α· q)=2^{α-1}(q-1)≥ 4.
2. (α,q)=(1,3). Then, we use j=2 as the base case.
We divide G_2 into three sets L_B={x| o(x)=2 or 2 · 3}; L_M= gen(G_2)={x| o(x)=2 · 3^2}; L_T={x | o(x)=3^k where 0≤ k ≤ 2}. Using <Ref>, L_B and L_T induce complete subgraphs and, moreover, the corresponding induced subgraphs are isomorphic to K_3 and K_9 respectively. |L_M|= ϕ(2 · 3^2)=6.
Now we verify that in both cases, the sets L_B, L_M and L_T satisfy the conditions of <Ref>. Since, in the first case, (α,q) ≠ (1,3), |L_B| and |L_T| are not equal to 2 or 4 for any value of α. So, in both cases, L_B and L_T are either singleton sets or induce complete subgraphs with oriented diameter 2. Hence, it is sufficient to take 𝒪_B as the optimal orientation of X[L_B]. Moreover, in each case, since L_M=gen(G_j), L_M consists of dominating vertices of Pow(G_j) by <Ref>. Hence, by <Ref>, the oriented diameter of Pow(G_j) is 2.
Inductive step: We assume that OD(Pow(G_j))=2 and want to show that OD(Pow(G_j+1))=2. For this, we divide G_j+1 into L_B, L_M and L_T as described in the proof sketch. Now using <Ref> in L_T, any element of order 2^k_1 q^j+1 is adjacent to any element of order 2^k_2 q^j+1, where 0≤ k_1 < k_2 ≤ (α-1). Hence, Pow(G_j+1)[L_T] is a complete subgraph of size at least ϕ(q^2)≥ 6 that can be oriented with diameter 2. The set L_M=gen(G_j+1) contains dominating vertices of Pow(G_j+1). Moreover, as this is not the base case, |L_M|=ϕ(2^α q^j+1)≥ϕ(2^2 · 3^2)=12. Therefore, by <Ref>, OD(Pow(G_j+1))=2.
Hence by mathematical induction Pow(G_β) has oriented diameter 2.
We now state two group theoretic facts which are used in the proof of <Ref> and the proof of <Ref>. For a proof of <Ref>, one can refer to <Ref>.
Let G and H be two finite groups such that gcd(|G|,|H|)=1. If g_1 generates g_2 in G and h_1 generates h_2 in H, then (g_1,h_1) generates (g_2,h_2) in G× H.
<cit.>
If m and n are two relatively prime numbers, then _mn≅_m×_n.
Let H be a cyclic group such that Pow(H) has oriented diameter 2. If gcd(|H|,p)=1, where p≠ 2 is a prime, then Pow(H×_p^α), α≥ 1, can be oriented with diameter 2.
First, we give a proof sketch of the lemma.
Let Γ=Pow(H×ℤ_p^α). We pick elements g_0,…, g_α∈ℤ_p^α such that o(g_i)=p^i. This gives a tower of subgroups {e}=⟨ g_0 ⟩≤…≤⟨ g_α⟩=ℤ_p^α, where e is the identity element of ℤ_p^α. Let G_j=H×⟨ g_j ⟩. Since |H| and |⟨ g_j ⟩| are coprime to each other, by <Ref>, each G_j, 0 ≤ j ≤α, is a cyclic subgroup of H ×ℤ_p^α. These subgroups form a tower of cyclic subgroups G_0≤…≤ G_α. We note that H≅ G_0 and G_α=H×ℤ_p^α. By induction on j, we show that the induced subgraph Γ_j=Γ[G_j]=Pow(G_j) has oriented diameter 2.
As Γ_0 ≅ Pow(H), we have OD(Γ_0)=2. For the inductive step, we use <Ref>. Let L_T=G_j-1. By the induction hypothesis, Γ_j[L_T]=Γ_j-1 has oriented diameter 2, i.e., condition (a) of <Ref> is satisfied. The set of generators of G_j is gen(H)× [g_j]. We pick L_M to be the set of generators gen(H)× ([g_j]∖{g_j}). Since j>0 and p ≠ 2, |[g_j]|=ϕ(p^j)≥ 2. Thus, L_M ≠∅. We finally set L_B=G_j∖ (L_T∪ L_M)=((H∖ gen(H))× [g_j])⊔ (gen(H)×{g_j}). We show conditions (b) and (c) of <Ref> in the main proof.
Now we go into more details of the proof.
We note that |H|∉{2,4} as Pow(H) has oriented diameter 2. Moreover, if |H|=3 then p≥ 5.
Now we show conditions (b) and (c) of <Ref>.
The set L_M being a subset of generators of G_j consists of dominating vertices of Γ_j=Pow(G_j), and |L_M|=|gen(H)|×|[g_j]∖{g_j}| is even since |gen(H)|=ϕ(|H|) is an even number (as |H|≠ 2).
For L_M to satisfy the condition (b) of <Ref>, we also need that, |L_M| ≥ 4. As |H|≠ 2, we have |gen(H)| ≥ 2. But the situation when |gen(H)|=2 and |[g_j]|=2 is problematic since it yields |L_M|=2. Now |[g_j]|=2 happens only if p=3. But in that case, as gcd(|H|,p)=1 and |H| ≠ 2 or 4, |H| must have a prime factor greater than or equal to 5 or |H| must be divisible by 2^3. In that case, |gen(H)| ≥ 4 and hence, |L_M| ≥ 4.
The rest of the proof involves showing that condition (c) of <Ref> is satisfied, i.e., there exists an orientation _B of the edges of Γ[L_B] and the edges in E(L_T, L_B) such that there is a directed path of length at most 2 between any two vertices using only the directed edges in _B.
Observe that L_B =((H∖ gen(H))× [g_j])⊔ (gen(H)×{g_j}) ⊆ G_j∖ G_j-1. Let 𝒪_H be an orientation of Pow(H) having diameter 2. Our idea is to mimic the orientation 𝒪_H of Pow(H) while being oblivious to the second component of a vertex in L_B. In other words, for pairs of vertices (u,g) and (v,g') in L_B, if (u,v) ∈𝒪_H we put ((u,g),(v,g')) in 𝒪_B, else we put ((v,g'), (u,g)) in 𝒪_B. Note that if {u,v} is an edge in Pow(H), then {(u,g), (v,g')} is an edge in Γ_j (this can be verified easily by using <Ref>).
Since there is a directed path of length at most 2 between two distinct vertices u and v in Pow(H), the newly added directed edges in 𝒪_B imply a directed path of length at most 2 between two distinct vertices (u,g'_j) and (v,g”_j), where u ≠ v and g'_j may or may not be equal to g”_j. So, the only remaining case to handle is when u=v, i.e., a pair of vertices in (H ∖ gen(H))× [g_j]. Now, observe that for all u ∈ H ∖ gen(H), the set {u}× [g_j] ⊆ L_B is a clique (due to <Ref>). Now if |[g_j]| ≠ 2, we put the optimal orientation of Γ[{u}× [g_j]] (using <Ref>) in 𝒪_B.
Note that if |[g_j]| ≠ 2 or 4, for any a,b ∈{u}× [g_j], for all u ∈ H ∖ gen(H), we have a directed path of length at most 2. If |[g_j]|=2 or 4, there exist exactly two vertices a=(u,g_j), b=(u,g'_j) in each {u}× [g_j] such that d_Γ_𝒪_B(a,b)=3 (where Γ_𝒪_B is the directed graph (V(Γ),𝒪_B)). To solve this, we use the edges E(L_T,L_B). Let e' be the identity element of H×ℤ_p^α. Since e'∈ H×⟨ g_0 ⟩ = G_0 ⊆ G_j-1, note that e' ∈ L_T and e' is adjacent to all the vertices in L_B.
Now for a fixed u ∈ H ∖ gen(H), we orient the edges {a,e'} and {b,e'} (depending on whether (a,b) ∈𝒪_B or (b,a) ∈𝒪_B) so that a,b,e' create a directed triangle in Γ_𝒪_B.
We do this for all u ∈ H ∖ gen(H). This gives a directed path of length at most 2 for all the remaining pairs of vertices from L_B. Hence, condition (c) of <Ref> is satisfied. Now, we apply <Ref> and get an orientation of Γ_j.
Now we are ready to prove the main result (<Ref>) of this section.
Proof of <Ref>.
The cases when n=1 and n=2 are easy to see. By <Ref> and observing Pow(ℤ_4)=K_4, we have OD(Pow(ℤ_4))=3. We have proved the case for n=6 in <Ref>. Using <Ref> and <Ref>, we get OD(Pow(G))=2 for a cyclic group G ∈𝒫 that is not ℤ_2 or ℤ_4.
Now, we are left with the case when n has at least two prime factors and n≠ 6. Let n=p_1^α_1 p_2^α_2p_3^α_3... p_k^α_k be the prime factorization of n, where p_1,p_2,…,p_k are distinct primes, the α_i's are positive integers and k≥ 2. By <Ref>, we can write ℤ_n= ∏_i∈ Sℤ_p_i^α_i×∏_i∉ Sℤ_p_i^α_i for any S⊆ [k]. One can check that by suitably picking a subset S of size at most 2, we can ensure that the oriented diameter of the power graph of H=∏_i∈ Sℤ_p_i^α_i is 2.
In particular, we take p_1 and p_2 to be the smallest and the largest prime factor of n, respectively. We take a recursive approach to achieve an orientation of Pow(ℤ_n) with diameter 2. If p_1=2, then we start with orienting the power graph of H=ℤ_2^α_1 p_2^α_2 with diameter 2 by applying <Ref>. If p_1>2, then we start with orienting the power graph of H=ℤ_p_1^α_1 with diameter 2. In the first case, we extend H recursively by ℤ_p_3^α_3,…,ℤ_p_k^α_k, and with (k-2) applications of <Ref>, we get OD(Pow(ℤ_n))=2. Whereas, in the second case, we extend H recursively by ℤ_p_2^α_2,…,ℤ_p_k^α_k, and with (k-1) applications of <Ref>, we get OD(Pow(ℤ_n))=2.
□
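The theorem gives a closed form which, for very small n, can be checked against the brute-force computation from the earlier sketches (the function name is ours):

def oriented_diameter_cyclic(n):
    # Closed form for OD(Pow(Z_n)) given by the theorem above.
    if n == 1:
        return 0
    if n == 2:
        return float('inf')
    if n in (4, 6):
        return 3
    return 2

for n in range(1, 7):   # exhaustive search is only feasible for tiny n
    assert oriented_diameter_cyclic(n) == oriented_diameter(n, power_graph_edges(n))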
§ ORIENTED DIAMETER OF POWER GRAPHS OF P-GROUPS
In this section, we study the oriented diameter of power graphs for finite non-cyclic groups from the class 𝒫 (recall that 𝒫={ G | G is a p-group for some prime p}). The main result of this section is <Ref>, where we fully characterize the oriented diameter for the class 𝒫.
The definition of generalized quaternion group Q_2^n of order 2^n can be found in any standard textbook of abstract algebra (for example, see <cit.>). We note that a generalized quaternion group of order 4n for any n can be defined. In this paper, we just need quaternion groups of order 2^n and a few facts about such groups, which we list below.
<cit.>
The generalized quaternion Q_2^n, n≥ 3 contains [Note that Q_2≅_2 and Q_2^2≅_4.] exactly one maximal cyclic subgroup ⟨ x ⟩ of order 2^n-1, and each element outside ⟨ x ⟩ is of order 4.
Moreover, we use the following two statements; one is a lemma by Burnside (<Ref>, 1911) and one is a result from <cit.> in the proof of the next theorem.
<cit.>
Let G be a p-group for a prime p, which is neither cyclic nor generalized quaternion. Then G has at least two subgroups of order p.
<cit.>
Let G ∈𝒫. Then Pow(G)∖{e} is connected if and only if G is either cyclic or generalized quaternion.
(1) Let G ∈𝒫 be neither cyclic nor generalized quaternion. If G has no maximal cyclic subgroup of order 2, then the oriented diameter of Pow(G) is 4; (2) The oriented diameter of Pow(Q_2^n) is 3, where Q_2^n, n≥ 3, is the generalized quaternion group.
(1)
Let Γ= Pow(G). Due to <Ref>, it is sufficient to prove that OD(Γ)≥ 4. By <Ref>, we know that Γ∖{e} is disconnected and hence Γ∖{e} has at least two connected [A connected component of a graph is a maximal connected subgraph of the graph.] components C_1 and C_2. So, there is no undirected e-avoiding path [A path in Pow(G) for a finite group G is e-avoiding if it does not include the vertex corresponding to the identity element e of G.] between a vertex of C_1 and a vertex of C_2 in the graph Γ.
Now we prove that, for any arbitrary orientation 𝒪 of Γ, there are two vertices u_1∈ C_1 and u_2 ∈ C_2 such that d_Γ_𝒪(u_1,u_2) ≥ 4 (recall that Γ_𝒪 denotes the directed graph (V(Γ),𝒪)). For that, let us consider two elements c_1 ∈ C_1 and c_2 ∈ C_2. Without loss of generality, let us assume that (c_1 ,e) ∈𝒪. If (e,c_2) ∈𝒪 then d_Γ_𝒪(c_2,e), d_Γ_𝒪(e,c_1) ≥ 2. Thus, we have d_Γ_𝒪(c_2,c_1) ≥ 4. For the other case, suppose (c_2,e) ∈𝒪. Now, to have d_Γ_𝒪(c_1,c_2) ≤ 3, we must have a vertex d in C_2 such that (e,d), (d,c_2) ∈𝒪. Analogously, to have d_Γ_𝒪(c_2,c_1) ≤ 3, we must have a vertex d' ∈ C_1 such that (e,d'), (d',c_1) ∈𝒪. This gives us d_Γ_𝒪(d',e), d_Γ_𝒪(e,c_2) ≥ 2, which implies d_Γ_𝒪(d',c_2) ≥ 4. So, using 𝒪, there is no directed path of length at most 3 from d' ∈ C_1 to c_2 ∈ C_2.
(2) <Ref> and <Ref> implies that Q_2^n has a unique subgroup, say y, of order 2 . Since any element in Q_2^n∖⟨ x ⟩ (see description of x in <Ref>) belongs to some maximal cyclic subgroup of order 4, there are 2^n-2^n-1/ϕ(4)=2^n-2≥ 2 (as n≥ 3) maximal cyclic subgroups of order 4 in Q_2^n. Hence Q_2^n has at least two maximal cyclic subgroups C_1, C_2 of order 4 and one cyclic subgroup C_3 of order 4 inside the maximal cyclic subgroup ⟨ x ⟩ such that the intersection C_i ∩ C_j={e,y}, 1≤ i < j ≤ 3, where y is the unique element of Q_2^n of order 2. Let the two elements of order 4 in C_i be c_i1 and c_i2 such that 1 ≤ i ≤ 3. Since C_i ∩ C_j={e,y} for i ≠ j, a path between a vertex c_ir, r=1,2 and a vertex in c_js, s=1,2 in Pow(Q_2^n) has to include e or y.
Let Γ = Pow(Q_2^n). First, we show that no orientation of Γ has diameter 2. For the sake of contradiction, let us assume that 𝒪 is an orientation of Γ such that the diameter of Γ_𝒪
is 2. The directed path from c_11 to c_21 of length 2 in Γ_𝒪 must pass through e or y. Without loss of generality we assume that the path goes through e. The case when the directed path from c_11 to c_21 of length 2 in Γ_𝒪 passes through y can be dealt with similarly. Now, since the directed path from c_11 to c_21 of length 2 in Γ_𝒪 passes through e, we have (c_11,e), (e,c_21) ∈𝒪. In this case, the directed path from c_21 to c_11 of length 2 has to pass through y. Hence, we must have (c_21,y), (y,c_11) ∈𝒪. This also implies that (c_12,e), (e,c_22), (c_22,y), (y,c_12) ∈𝒪. Now, to have a directed path of length 2 from c_11 to c_31, we need (e,c_31) ∈𝒪. On the other hand, to have a directed path of length 2 from c_31 to c_21, we need (c_31,e) ∈𝒪. This means we can have a directed path of length at most 2 either from c_11 to c_31 or from c_31 to c_21, but not both. This contradicts the fact that the diameter of Γ_𝒪 is 2.
As we have shown that OD(Pow(Q_2^n)) ≥ 3, if we can give a partial orientation of Pow(Q_2^n) with diameter 3, then OD(Pow(Q_2^n)) = 3 (by <Ref>). Now, we give a partial orientation of Pow(Q_2^n), n≥ 3, with diameter 3. <Ref> shows such a partial orientation of Pow(Q_2^n). In <Ref>, C_1, C_2, …, C_m denote the maximal cyclic subgroups of order 4, where m = 2^n-2. Also, c_i1,c_i2 denote the elements of order 4 in C_i. Note that C_i ∩ C_j={e,y}, for 1≤ i < j ≤ m and C_i ∩⟨ x ⟩ = {e,y}. We partition the set ⟨ x ⟩∖{e,y} into two arbitrary non-empty subsets A and B (note that |⟨ x ⟩| = 2^n-1≥ 4).
We put (e,y) in 𝒪.
For all a ∈ A and for all b∈ B we put the following directions in 𝒪: (b,a), (y,b), (e,b), (a,e), (a,y). Moreover, we put the following directions in 𝒪: (c_i1,e), (c_i1,y), (c_i2,c_i1), (e, c_i2), (y, c_i2), for each i, 1≤ i ≤ 2^n-2. From <Ref>, it is easy to observe that the diameter of Pow(Q_2^n) after applying the orientation 𝒪 is 3.
Hence, the theorem is proved.
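For the smallest case Q_8 the statement can also be confirmed exhaustively: Pow(Q_8) has 16 edges, so all 2^16 orientations can be enumerated with the helpers from the earlier sketches. The vertex labelling below is ours.

# Vertices: 0 = 1, 1 = -1, 2 = i, 3 = -i, 4 = j, 5 = -j, 6 = k, 7 = -k.
# 1 and -1 are dominating vertices of Pow(Q_8); apart from that, each order-4
# element is adjacent only to its inverse (which generates the same subgroup).
q8_edges = ([(0, v) for v in range(1, 8)] +
            [(1, v) for v in range(2, 8)] +
            [(2, 3), (4, 5), (6, 7)])
print(oriented_diameter(8, q8_edges))   # 3, in agreement with part (2) of the theorem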
§ ORIENTED DIAMETER OF POWER GRAPHS OF NILPOTENT GROUPS
Since in the previous section we have dealt with non-cyclic finite groups of 𝒫, in this section, we only consider finite non-cyclic nilpotent groups G such that G ∉𝒫. We fully characterize the oriented diameter of power graphs of all such groups in the main result (see <Ref>) of this section. We write π(G) to denote the set of all prime divisors of |G|. We start our discussion with a fact about finite nilpotent groups, which follows from two group theoretic facts: <Ref> and <Ref> (see <ref>).
Let G be a finite nilpotent group and x,y ∈ G ∖{e} be two elements such that o(x) and o(y) are co-prime to each other. Then o(xy)=o(x)· o(y). Moreover, if M is any maximal cyclic subgroup of a finite non-cyclic nilpotent group G, then p divides |M| for all p ∈π(G).
We now classify the non-trivial ge-classes (defined in <Ref>) of a nilpotent group G into two types based on their orders.
Base class: We call a ge-class [x] with order o(x) divisible by exactly one prime from π(G) a base class. An element from a base class is called a base element. Moreover, for a prime p ∈π(G), a base element is called a p-base element if its order is a positive power of p. We denote the set of all base elements by B and the set of all p-base elements by B_p for a prime p∈π(G).
Non-base class: We call a ge-class with order divisible by at least two primes from π(G) a non-base class. We call an element from a non-base class non-base element. We denote the set of all non-base elements by NB.
In finite nilpotent groups, if [x] and [y] are two ge-classes of order p^k and q^l, respectively, where p and q are distinct primes, then [xy] is the ge-class of order p^kq^l and [xy]=gen(⟨ xy ⟩) (using <Ref>). Moreover, by <Ref>, [xy] is adjacent to both [x] and [y]. By <Ref>, it can be easily observed that a non-base class of order n is adjacent to exactly one base class of order p^i, i≥ 1 where p is a prime and p^i is a divisor of n.
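For example, in the nilpotent group ℤ_12 the ge-classes {6}, {3,9} and {4,8} (of orders 2, 4 and 3) are base classes, while {2,10} and {1,5,7,11} (of orders 6 and 12) are non-base classes; the non-base class {2,10} is adjacent to the base classes {6} and {4,8} and to no other base class, illustrating the last observation.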
The next lemma is similar to Theorem 2.6 <cit.>.
Let G be a non-cyclic nilpotent group with |G|=p^mq^n, where p and q are distinct primes[Note that the condition on |G| in <Ref> is not necessary, but it is enough for our further discussion and makes the presentation simpler.]
and m,n ≥ 1. Let u,v ∈ G ∖{e} such that ⟨ u ⟩∩⟨ v ⟩ ={e } satisfying one of the following conditions: (i) Both u and v are p-base elements; (ii) Both u and v are q-base elements; (iii) Both u and v are non-base elements. Then, any e-avoiding shortest path between u and v in Pow(G) is of length 4.
We first prove the following claim.
Let P be any e-avoiding path between u and v in Pow(G). Then P (including u and v) must contain one p-base element and one q-base element.
Proof of <Ref>: Let a,b ∈ G be two adjacent elements in Pow(G). If there exists a prime p which divides both o(a) and o(b), then ⟨ a ⟩∩⟨ b ⟩ contains a p-order subgroup (because a∩b is a cyclic subgroup and <Ref> holds). Now, let P: ug_1g_2… g_n v be an e-avoiding path between u and v. For the sake of contradiction, assume that every element/vertex of P has its order divisible by p. Then, we can say that ⟨ u ⟩∩⟨ g_1 ⟩∩…∩⟨ g_n ⟩∩⟨ v ⟩ contains a p-order subgroup. This contradicts the fact that ⟨ u ⟩∩⟨ v ⟩={e}. So, there is at least one element in P whose order is not divisible by p. In other words, a q-base element is in P. Similarly, we can say that P contains at least one p-base element.
We now go over the conditions of <Ref> one by one.
(i) Let o(u)=p^α, ≥ 1 and o(v)=p^α ', ' ≥ 1. From <Ref>, any e-avoiding path between u and v in Pow(G) contains at least one element a of order q^β, where β≥ 1. Now by <Ref>, a is not adjacent to either u or v in Pow(G). Hence, any shortest e-avoiding path between u and a is of length at least 2, and similarly, any shortest e-avoiding path between a and v is of length at least 2.
(ii) The proof is similar to (i).
(iii) Let o(u)=p^αq^β, ,≥ 1 and o(v)=p^α 'q^β', ',' ≥ 1. From <Ref>, an e-avoiding path P between u and v in Pow(G), contains at least one element, say a, of order p^r, where r ≥ 1 and one element, say b, of order q^r', where r'≥ 1. Without loss of generality, we can assume that uabv is a subpath of P. Now, an e-avoiding path between a and b is of length at least 2 (since a and b are not adjacent in Pow(G) by <Ref>).
So, the length of P is at least 4.
The next lemma is crucial in cutting down the number of patterns for showing lower-bound of the oriented diameter of groups considered in <Ref>.
(Uniformity lemma)
Let G be a non-cyclic nilpotent group such that |G|=p^mq^n, where p and q are distinct primes and m,n ≥ 1. Let u,v ∈ G∖{e} such that ⟨ u ⟩∩⟨ v ⟩ = {e} satisfying one of the following conditions:
(i) Both u and v are p-base elements ;
(ii) Both u and v are q-base elements ;
(iii) Both u and v are non-base elements .
In an orientation 𝒪 of Pow(G) with diameter 3, if (u,e) ∈𝒪 then (v,e) ∈𝒪. Also, if (e,u) ∈𝒪 then (e,v) ∈𝒪.
Using <Ref>, any undirected path between u and v of length at most 3 in Pow(G) must include the identity e. So, in 𝒪, any directed path between u and v must include e. We prove by contradiction that if (u,e) ∈𝒪 then (v,e) ∈𝒪. So, let us assume that (u,e) ∈𝒪 but (v,e) ∉𝒪. Now consider going from v to u. Since (v,e), (e,u) ∉𝒪, both the directed paths from v to e and from e to u are of length at least 2. This implies that any directed path from v to u is of length at least 4. Hence, this contradicts our assumption that 𝒪 is of diameter 3. So, (u,e) ∈𝒪 implies (v,e) ∈𝒪. The reverse case is similar.
For a finite non-cyclic nilpotent group G ∉𝒫, the oriented diameter of Pow(G) is at least 3.
A finite nilpotent group is a direct product of its Sylow subgroups. As G is non-cyclic, at least one such Sylow subgroup, say Sylow p-subgroup S_p is non-cyclic.
If S_p is not also generalised quaternion, then using <Ref>, we can say that S_p has at least two subgroups P_1 and P_2 of order p. Let P_1=g_1 and P_2=g_2. Now using <Ref>
one can verify that g_1 and g_2 have no common neighbour other than e in Pow(G). Hence, we can conclude that Pow(G) cannot have an orientation of diameter 2.
Now, let S_p=S_2=Q_2^n, n ≥ 3. From the proof of (2) of <Ref>, we know that S_2 has at least three distinct cyclic subgroups C_1,C_2,C_3 of order 4 such that C_i ∩ C_j ={e,y} where 1 ≤ i < j ≤ 3 and y is the unique element of order 2 in S_2. Now, similar to the proof of (2) of <Ref>, we can argue that e and y are the only common neighbours of any vertex of C_i and any vertex of C_j in Pow(G) and OD(Pow(G)) is at least 3.
We now characterise the nilpotent groups for which an oriented diameter of 3 is not possible in <ref>. For that, we use the following notations.
Subset Notations:
Let G be a non-cyclic nilpotent group and |G|=2^mp^n, where p is an odd prime and m,n ≥ 1. We use the notations [x_1],[x_2],…,[x_r] to denote the ge-classes of order 2 and the notations [y_1],[y_2],…,[y_s] to denote the ge-classes of order p. We partition the set B_2 of 2-base elements into sets X_i, 1 ≤ i ≤ r, where X_i={u | u ∈ B_2 and [x_i] ⊆⟨ u ⟩}. Similarly, we partition the set B_p of p-base elements into Y_1,Y_2, …, Y_s.
We partition the set of non-base elements of G into rs sets as follows: A_ij={u ∈ NB | [x_i] ⊆⟨ u ⟩ and [y_j] ⊆⟨ u ⟩} for 1 ≤ i ≤ r and 1 ≤ j ≤ s.
The following fact can be verified using <Ref> and the fact that the intersection of two cyclic subgroups is also a cyclic subgroup. Moreover, it is used to prove <Ref>.
Let i≠ i', j ≠ j'. If u∈ A_ij, v ∈ A_i'j', then ⟨ u ⟩∩⟨ v ⟩={e}.
The following two statements, one an easy observation about power graphs and the other a lemma by Frobenius (1895) <cit.>, are used in the proof of <Ref>.
Let G be a group and x be an element of order p, where p is any prime. Then, for any element y (≠ e) ∈ G such that {x,y} is an edge of Pow(G), we have x ∈⟨ y ⟩.
<cit.> The number of p-order subgroups in a finite group G is k· p +1, where k ≥ 0.
Let G ∉𝒫 be a non-cyclic nilpotent group. If G satisfies all of the following conditions: (a) |G|=2^mp^n, where p is an odd prime, m,n ≥ 1; (b) G has a maximal cyclic subgroup of order 2p^β, for some 1 ≤β≤ n; (c) G has at least two subgroups of order p; (d) G has at least two subgroups of order 2, then the oriented diameter of Pow(G) is 4.
The proof is by contraction and is divided into two main steps: In Step 1, we show that if there is an orientation with diameter 3, then it must follow one of 8 general patterns, which is discussed below. In Step 2, we show that each of these 8 patterns gives a contradiction. As we will see, some of these patterns are just symmetric versions of other patterns.
Step 1: We first note that all maximal cyclic subgroups of G are of order 2^m'p^n', m',n'≥ 1 (by <Ref>). Therefore, by <Ref>, Pow(G) has an orientation with diameter 4. Moreover, by <Ref>, OD(Pow(G))≥ 3.
We show that Pow(G) cannot have oriented diameter 3. For the sake of contradiction, let be an orientation of Pow(G) with diameter 3. We use the same notations A_ij, i∈{1,2,…,r} and j∈{1,2,…,s} as defined above. Since (c) and (d) hold, we note that r,s≥ 3 by <Ref>. We start by picking an element v∈ A_11. Without loss of generality, let (v,e)∈. We show that for any u∈ NB, we must have (u,e)∈.
Let u∈ A_ij where 1<i≤ r, 1<j≤ s. It is easy to see that u∩v= {e}, by <Ref>. So, using <Ref>, we have (u,e)∈. As u is arbitrarily chosen, we have (u,e)∈, for all u∈ A_ij where 1<i≤ r, 1<j≤ s. Now if u ∈ A_1j (or A_i1), we have a set A_kk where k∉{1,j} (or k ∉{i,1}), as r,s≥ 3. Note that for w ∈ A_kk, v∩w={e} and u∩w={e} (by <Ref>). So applying <Ref> to v and w gives (w,e) ∈. Another application of <Ref> on u and w shows that (u,e)∈. As u is arbitrarily chosen, we have (u,e)∈ for all u∈ A_1j (or A_i1) where i ∈{1,2,…,r}, j ∈{1,2,…,s}. The case (e,v)∈ similarly implies that (e,u)∈ for all u∈ NB.
Hence, we have either (u,e) ∈𝒪 for all u ∈ NB, or (e,u) ∈𝒪 for all u ∈ NB. We denote these by NB →{e} and {e}→ NB, respectively. In general, for an orientation 𝒪 and two sets A and B, we write A → B if (a,b)∈𝒪 for all a∈ A, b ∈ B.
In a similar way, using <Ref>, we can show that either (u,e) ∈𝒪 for all u in the set B_2 of all 2-base elements (we use the shorthand B_2 →{e} to denote this case) or (e,u) ∈𝒪 for all u ∈ B_2 (denoted by {e}→ B_2). We can also show that either (u,e) ∈𝒪 for all u in the set B_p of all p-base elements (denoted by B_p →{e}) or (e,u) ∈𝒪 for all u ∈ B_p (denoted by {e}→ B_p).
The above discussion shows that there are 8 possible patterns in 𝒪.
Step 2: Now we will inspect all the patterns one by one.
Pattern 1: NB →{e}, B_2 →{e}, B_p →{e}. This pattern does not yield a strong orientation since there is no outward edge from e.
Pattern 2: {e}→ NB, B_2 →{e}, B_p →{e}.
In this pattern, any directed path containing e from any non-base element to any base element is of length at least 4. Now, we show that there exists at least one pair of vertices a and b such that we cannot have a directed e-avoiding path from a to b of length at most 3. By condition (b), G has a maximal cyclic subgroup C of order 2p^β, for some 1 ≤β≤ n. Now, by condition (c), G has at least two subgroups of order p. Hence by <Ref>, G has a subgroup ⟨ v ⟩ of order p such that C ∩⟨ v ⟩={e}. We need the following claim.
Let C be a maximal cyclic subgroup of G of order 2p^β, β≥ 1. Let u be a non-base element in C and v be an element of order p not in C. Then, there is no e-avoiding path between u and v of length at most 2 in Pow(G). Moreover, if P: uw_1w_2 v is an e-avoiding path of length 3 between u and v in Pow(G), then w_1 has to be the unique element of order 2 in C.
Proof of <Ref>:
Let x and y be elements of order 2 and p in C. Then by <Ref>, ⟨ y ⟩≤⟨ u ⟩≤ C. First, we show that there is no e-avoiding path between u and v of length at most 2 in Pow(G). Since u does not generate v, by <Ref>, {u,v} is not an edge. Now, if possible, let w≠ e be a common neighbour of u and v. By <Ref>, v ∈⟨ w ⟩ and ⟨ v ⟩≤⟨ w ⟩. Moreover, since C ∩⟨ v ⟩={e}, we must have u ∈⟨ w ⟩ (otherwise w ∈⟨ u ⟩, which together with the fact v ∈⟨ w ⟩ implies that v ∈⟨ u ⟩), and hence ⟨ u ⟩≤⟨ w ⟩ and ⟨ y ⟩≤⟨ w ⟩. This contradicts <Ref>. Hence any e-avoiding path between u and v in Pow(G) is of length at least 3.
At first, observe that if o(w_1)=2^α for some α≥ 1, then[This is because in Pow(G) if an element x whose order is a prime power is adjacent to an element y whose order is not a prime power, then x ∈⟨ y ⟩.] w_1 ∈⟨ u ⟩⊆ C. Now, as o(w_1) divides |C| (since the order of an element of a finite group divides the order of the group), we have o(w_1)=2 and hence w_1=x. So, to prove the claim, it is enough to show that p does not divide o(w_1). For the sake of contradiction, let p| o(w_1).
If w_1 ∈⟨ u ⟩, then ⟨ w_1 ⟩ is a subgroup of ⟨ u ⟩ and of C. Also, as p| o(w_1), by <Ref>, ⟨ w_1 ⟩ contains a unique subgroup of order p. So, ⟨ y ⟩≤⟨ w_1 ⟩. If u ∈⟨ w_1 ⟩, then ⟨ y ⟩≤⟨ u ⟩≤⟨ w_1 ⟩. Hence ⟨ y ⟩≤⟨ w_1 ⟩ in both the cases.
To have an e-avoiding path P of length 3 from u to v, we must have a non-identity vertex which is a common neighbour of w_1 and v. Let w_2 be a common neighbour of w_1 and v. As {w_2, v} is an edge of Pow(G) and o(v)=p, by <Ref>, we have v ∈⟨ w_2 ⟩.
Now, the edge {w_1,w_2} implies w_1 ∈⟨ w_2 ⟩ or w_2 ∈⟨ w_1 ⟩. If w_1 ∈⟨ w_2 ⟩ then ⟨ w_2 ⟩ contains ⟨ w_1 ⟩. This implies ⟨ w_2 ⟩ contains two distinct subgroups ⟨ y ⟩ and ⟨ v ⟩ of order p, which is a contradiction to <Ref>. Also, if w_2 ∈⟨ w_1 ⟩, then v ∈⟨ w_1 ⟩ (since v ∈⟨ w_2 ⟩) and ⟨ v ⟩≤⟨ w_1 ⟩. Again, this contradicts <Ref>. So, p does not divide o(w_1). Hence, the claim is proved.
Let C_1 be a maximal cyclic subgroup of order 2p^β containing x as the unique element of order 2 and y_1 as an element of order p. Let u_1 be a non-base element of C_1. Let v_2 be an element of order p outside C_1. Using <Ref>, any directed e-avoiding path of length 3 from u_1 to v_2 must use the edge (u_1,x). So, the path should be of the form u_1xgv_2, where g is some element such that ⟨ g ⟩ contains both x and v_2 (by <Ref>) [Note that g is a non-base element.]. So, to have a directed e-avoiding path from u_1 to v_2 of length 3, we must have (x,g) ∈𝒪 for some non-base element g outside C_1. Now if C_2 is a maximal cyclic subgroup containing g (and hence containing x), then by <Ref> (see <Ref>), |C_2|=2p^β', for some β' ≥ 1. Now, to have a directed e-avoiding path from g to y_1 of length at most 3, by using <Ref>, we must put (g,x) ∈𝒪. This clashes with our previous requirement of (x,g) ∈𝒪. Hence, this pattern is not possible in 𝒪.
Pattern 3: NB→{e}, {e}→B_2, B_p→{e}. Let y and y' be two p-base elements such that [y]≠ [y'], i.e., y∩y' = {e}. Using <Ref>, we can conclude that a directed path P from y to y' of length 3 must pass through e. Moreover, since B_p→{e}, P must have (y,e).
Note that the only outward edges from e are towards the 2-base elements. Also, any directed path from a 2-base element to y' is of at least length 2. This is because y' is a p-base element, and by <Ref>, a 2-base element and a p-base element can not be adjacent in Pow(G). Therefore, the length of a directed path from y to y' is at least 4. Hence, we cannot have an orientation with diameter 3.
Pattern 4: NB→{e}, B_2→{e}, {e}→B_p. As done in Pattern 3, we can similarly argue that there is no directed path of length at most 3 from x to x', where x and x' are 2-base elements and [x]≠[x'].
The last four patterns are symmetric to the first four patterns and can be dealt with using the following simple observation.
Let 𝔛 =(V,E) be a directed graph. Also, let 𝔛_rev=(V, E_rev) be the directed graph where E_rev is the set of edges obtained by reversing the directions of all the edges in E. Then diam(𝔛)=diam(𝔛_rev).
Let be a partial orientation of an undirected graph X=(V,E) and X_ be the directed graph (V,). Moreover, let A ⊆. If diam(X_)=d, then there exists a partial orientation ' of X containing A_rev such that diam(X_')=d.
Pattern 5: {e}→NB, {e}→B_2, {e}→B_p. By <Ref>, this is symmetric to Pattern 1. By `symmetric', we mean that getting a partial orientation containing Pattern 5 with diameter 3 would imply that there is a partial orientation containing Pattern 1 with diameter 3.
Pattern 6: {e}→NB, B_2→{e}, {e}→B_p. By <Ref>, this is symmetric to Pattern 3.
Pattern 7: {e}→NB, {e}→B_2, B_p→{e}. By <Ref>, this is symmetric to Pattern 4.
Pattern 8: NB→{e}, {e}→B_2, {e}→B_p. By <Ref>, this is symmetric to Pattern 2.
So we have shown that none of the 8 patterns is satisfied in an orientation of Pow(G) with diameter 3. Hence, it is proved that if G satisfies the given conditions (a)-(d), then an orientation of Pow(G) with diameter 3 is not possible.
We now state the main result on the oriented diameter of power graphs of finite non-cyclic nilpotent groups which are not in .
Let G ∉ be a finite non-cyclic nilpotent group. Then the oriented diameter of Pow(G) is 3 if and only if G satisfies at least one of the following conditions: (a) |G| is divisible by at least two distinct odd primes; (b) G has no maximal cyclic subgroup of order 2p^β, 1≤β≤ n, where p is an odd prime; (c) G has unique p-order subgroup, where p is an odd prime; (d) G has unique 2-order subgroup. Otherwise, the oriented diameter of Pow(G) is 4.
Examples of groups for each of the conditions are in <Ref>. We first state the following four lemmas: <Ref>, <Ref>, <Ref> and <Ref>, which are used to prove the above theorem.
Let G be a non-cyclic nilpotent group. If |G| is divisible by at least two distinct odd primes, then the oriented diameter of Pow(G) is 3.
Let G be a non-cyclic nilpotent group of order 2^mp^n, where p is an odd prime and m,n ≥ 1. If G has no maximal cyclic subgroup of order 2p^β, 1≤β≤ n, then the oriented diameter of Pow(G) is 3.
Let G be a non-cyclic nilpotent group and |G|=2^mp^n, where p is an odd prime and m,n ≥ 1. If G has unique subgroup of order p, then the oriented diameter of Pow(G) is 3.
Let G be a non-cyclic nilpotent group and |G|=2^mp^n, m, n ≥ 1, where p is an odd prime. If G has a unique subgroup of order 2, then the oriented diameter of Pow(G) is 3.
Proof of <Ref>. If G does not satisfy any of the conditions (a)-(d), then by <Ref>, we have OD(Pow(G))=4. Now consider the opposite direction. The case when G satisfies condition (a) is handled in <Ref>. Now note that for the remaining cases, i.e., when G satisfies condition (b) or (c) or (d), it is enough to consider |G|=2^mp^n, where p is an odd prime and m,n ≥ 1. Hence, by applying <Ref>, <Ref> and <Ref>, the oriented diameter of Pow(G) is 3 when G satisfies condition (b), (c) and (d) respectively.
□
We now prove <Ref> and <Ref> in the rest of this section. The proof techniques of <Ref> and <Ref> are very similar to that of <Ref>, and hence, we have put the proofs of these two lemmas in <Ref> and <Ref> respectively. Now due to <Ref> and <Ref>, in order to prove each of the four lemmas, it is enough to give a partial orientation of diameter 3. The partial orientations used in <Ref>, <Ref> and <Ref> are different but involve some common partial orientations, namely _1,_2 and _3. These common partial orientations are described in <Ref> below. In <Ref>, we prove that _2 and _3 themselves can establish directed paths of length at most 2 between certain sets of vertices. In each of the three lemmas <Ref>, <Ref> and <Ref>, we augment ∪_i=1^3 _i with suitable partial orientations. In contrast, in <Ref> we design a completely different partial orientation.
Let G ∉ be a non-cyclic nilpotent group. The descriptions of partial orientations _1,_2,_3 of Pow(G) are as follows:
[Figure: illustration of _2. A non-base class N of order p^α q^β (α,β≥ 1) is shown together with a base class M of order p, partitioned into M_1 and M_2, and a base class M' of order q, partitioned into M'_1 and M'_2. The directed edges of E({n_1,n_2}, M_1 ∪ M_2) and E({n'_1,n'_2}, M'_1 ∪ M'_2) form two C_4-gadgets.]
_1 : From all u ∈ NB, we orient the edges towards e as (u,e). Also, from e, we orient the edges towards all u ∈ B as (e,u).
_2 : At first, we arbitrarily partition each base class M of odd prime order into two non-empty subsets M_1 and M_2. Let N be a non-base class and p be an odd prime divisor of the order of N. Due to <Ref>, N is adjacent to exactly one base class M of order p.
At first, we mark two elements of N as n_1,n_2 (choices of n_1 and n_2 depend on M, as discussed in the note below).
Now, we make directed 4-cycles in the edges of E(N,M) (see <Ref>) as follows: For all u_1 ∈ M_1 and u_2 ∈ M_2 we put (n_1,u_1), (u_1,n_2), (n_2,u_2), (u_2,n_1) in _2. We call the directed subgraph formed by the directed edges of E({n_1,n_2},M_1∪ M_2) due to _2 - `C_4-gadget'. This naming is due to the presence of several directed C_4 in E({n_1,n_2},M_1∪ M_2) after _2. Moreover we call n_1,n_2 the gadget anchor points in N for M.
Now, the edges in E(M,N) of the form {u,v}, where u∈ M and v ∈ N∖{n_1,n_2} are oriented as (u,v).
Note: While introducing _2, we choose a disjoint pair of gadget anchor points in N for each base class M of odd prime order adjacent to N. This is possible as the number k of base classes of odd prime order adjacent to N equals the number of odd prime divisors of r, where r is the order of N and also noting that |N|=ϕ(r)>2^k≥ 2k.
_3: From any base element of order p^α, α≥ 2, where p is an odd prime in π(G), we orient the edges towards all the adjacent non-base elements.
Let G ∉ be a finite non-cyclic nilpotent group and p,q be two odd prime divisors of |G|. Then using partial orientations _2, _3 as stated in <Ref>, we have the following:
(1) There is a directed path of length 2 between any vertex of a base class of order p and any vertex of a base class of order q using _2, where p and q are distinct.
(2) There is a directed path of length at most 2 from any vertex of a base class of order p^α, α≥ 2 to any vertex of a base class of order q, using _3 and _2, where p and q may or may not be distinct.
We only prove (1) as (2) is similar. Let M and M' be two base classes of order p and q, respectively. Then, we need to show that between any pair of vertices m ∈ M and m' ∈ M', there is a directed path of length 2.
As G is a nilpotent group, mm' is a non-base element (by <Ref>). Let N be the non-base class [mm']. Since mm' is adjacent to both m and m' in Pow(G), N is adjacent to both the base classes M and M' (by using <Ref>). Hence, we have oriented the edges in E(N,M) and E(N,M') as described in _2.
Let n_1,n_2 be the gadget anchor points in N involved in the C_4-gadget with M. Also, let n_3, n_4 be the gadget anchor points in N involved in the gadget with M'. From our discussion in <Ref>, the vertices n_1,n_2,n_3,n_4 are distinct. Now, for any m' ∈ M', we have either (n_3,m') or (n_4,m'). Also, as n_3 and n_4 are not involved in the C_4-gadget with M, we have (m,n_3) and (m,n_4) for all m ∈ M. Hence, we have a directed path of length 2 from any m ∈ M to any m' ∈ M' via n_3 or n_4.
Let G be a finite nilpotent group such that the set of prime divisors π(G) contains 2 and at least two distinct odd primes p and q. Then using <Ref>, any element of order 2^α, α≥ 1, and any element of order p^β, β≥ 1, have one common neighbour of order 2^α p^β q in Pow(G).
Proof of <Ref>: Due to <Ref> and <Ref>, it is sufficient to give a partial orientation of Pow(G) with diameter 3. For that purpose, if |G| is even, then along with partial orientations _1,_2,_3 as stated in <Ref>, we use the following partial orientations (see <Ref>). If |G| is odd, we will see below that the partial orientations _1,_2 , _3 are sufficient.
_4: From any base element of order 2, we orient the edges towards all the adjacent non-base elements of order 2^αp^β, where p is any odd prime in π(G) and α, β≥ 1.
_5: Let N be a non-base class of order 2^α t, where α≥ 2 and t (≠ 1) is co-prime to 2. Also, let M be the unique base class of order 2^2 that is adjacent to N. We orient the edges in E(N,M) similarly to _2 as stated in <Ref>. Here also, the choices of gadget anchor points of N for M depend on M as in <Ref>. In other words, while introducing _5 in a non-base class N of order 2^α t, where α≥ 2 and 2 ∤ t, we select a pair of gadget anchor points {n_1,n_2} in N for the base class of order 2^2 adjacent to N in such a way that neither n_1 nor n_2 has been used as gadget anchor point in N while introducing _2. This is possible as the number k of base classes of odd prime order adjacent to N equals the number of prime divisors of t, and |N|=ϕ(2^α t)>2^(k+1)≥ 2(k+1).
_6: From any base element of order 2^α, α≥ 3, we orient the edges towards all the adjacent non-base elements. Note that these non-base elements are of order 2^δ t, where δ≥α and t (≠ 1) is co-prime to 2.
_7: From any non-base element of order 2p^α q^β, α,β≥ 1, where p and q are any two distinct odd primes in π(G), we orient the edge towards the unique adjacent base element of order 2.
We show an illustration of the introduced partial orientations in <Ref>.
The set B is partitioned into three subsets as follows: (a) R_1: consisting of the elements of order 2; (b) R_2: consisting of the elements of order 2^2 and p, where p is any odd prime in π(G); (c) R_3: consisting of the elements of order 2^α, α≥ 3, and p^β, β≥ 2, where p is any odd prime in π(G). Note that if |G| is odd, then G has no 2-base element. Hence, the region R_1 does not exist, whereas R_2 and R_3 only contain p-base elements, where p is an odd prime in π(G).
Path directions:
First, we list down some necessary observations, which can be argued similarly to the proof of <Ref>. We also use <Ref> for discussing path directions. In the following observations, p is an odd prime in π(G).
Note 1:
There is a directed path of length 2 from any element of order 2^α to any element of order p, using partial orientation _4 (when α=1) or _5 (when α=2) or _6 (when α≥ 3) along with partial orientation _2.
Note 2:
There is a directed path of length 2 from any element of order p^β, β≥ 2 (or, of order p) to any element of order 2 by noting <Ref> and using _3 (or _2) together with _7.
Let Γ=Pow(G) and denote the disjoint union of _1,…,_7 (or _1,…,_3 as required). Then, we use the notation Γ_ to denote the directed graph (V(Γ),).
Moreover, let d(a,b) (we use d(a,b) instead of d_Γ_(a,b) as Γ and are fixed in this context) denote the shortest distance from a vertex a to a vertex b in the directed graph Γ_ and d(a,S)=min{d(a,s) : s ∈ S} denote the shortest distance from a vertex a to a set S in Γ_.
From <Ref>, one can see that d(v,e)=1 for any non-base element v and d(e,u)=1 for any base element u. This also implies that d(v,u)≤ 2, i.e., there is a directed path of length at most 2 from any element v∈ NB to any element u∈ B.
We claim that if u ∈ B=R_1∪ R_2∪ R_3, then d(u,NB)=1. For this, observe that if u ∈ R_1, then there exists some v∈ NB such that (u,v) ∈_4. Similarly, if u ∈ R_2, then there exists some v ∈ NB such that (u,v) ∈_2∪_5 and if u ∈ R_3, then there exists some v ∈ NB such that (u,v) ∈_3 ∪_6.
Noting d(u,NB)=1 for all u∈ B and (v,e)∈_1 for all v∈ NB, we have a directed path of length at most 2 from any element of B to e. Combining such a path with (e,u') ∈_1, where u' is any element in B, we get a directed path of length at most 3 between any two elements of B.
Now, for any non-base element v∈ NB, there exists at least one element u ∈ R_2 such that (u,v)∈_2. Moreover since (e,u)∈_1 for all u∈ R_2 ⊆ B, we have d(e,v) = 2 for all v∈ NB. Now, as (v',e)∈_1 for all v'∈ NB, we get d(v',v) ≤ 1+d(e,v) = 3, i.e., there is a directed path of length at most 3 between any two elements in NB.
Now, we discuss the remaining case, i.e., when the source vertex u is from B, and the destination vertex v is from NB. Since v is a non-base element, o(v) always has at least one odd prime divisor, and hence there exists an element a∈⟨ v ⟩ such that o(a) is an odd prime. So the base class [a] is in R_2 and participates in a C_4-gadget with the non-base class [v] due to _2 (see <Ref>). Now, if o(u)=2^α ,α≥ 1, then using Note 1, we have d(u,a) ≤ 2 for all a ∈ [a]. Further using the C_4-gadget between [a] and [v], we have d(u,v) ≤ 3. Now the case o(u)=p^β, β≥ 1 (where p is an odd prime) is divided into two subcases according to the number of distinct odd prime divisors of o(v). The first subcase is when o(v) is divisible by at least two odd primes p and q. Then, there exists an element c ∈v of order q, and hence, there is a C_4-gadget between [c] and [v] due to _2. Therefore, using the directed path of length 2 from u to any element of [c] as described in <Ref> and the gadget between [c] and [v], we have d(u,v) ≤ 3. Now, consider the second subcase, i.e., when o(u)=p^β, β≥ 1 and o(v) is divisible by only two primes 2 and p. If w is the (unique) element of order 2 in v, then Note 2 implies d(u,w)≤ 2. Moreover, since w ∈ R_1, we have (w,v) ∈_4. This gives us d(u,v) ≤ 3 in this case.□
Proof of <Ref>: Due to <Ref> and <Ref>, it is sufficient to give a partial orientation of Pow(G) with diameter 3. The maximal cyclic subgroups of G are of order 2^α p^β, where 1≤α≤ m, 1≤β≤ n and (α,β) ≠ (m,n) (by <Ref>). Now, if G has no maximal cyclic subgroup of order 2p^k for any 1 ≤ k ≤ n, then we can use <Ref> to prove that OD(Pow(G))=3.
Now we consider the case when G has a maximal cyclic subgroup of order 2p^k, for some 1≤ k ≤ n.
Let x be the unique subgroup of order 2 of G. Since every maximal cyclic subgroup of G contains x, by <Ref>, one can see that each maximal cyclic subgroup of G is of order 2p^β, 1 ≤β≤ n. Note that here G=ℤ_2× S_p (where S_p is the Sylow p-subgroup of G) due to Burnside's lemma (see <Ref>).
Now we claim that in G, a base class M_i of order p^i is adjacent to exactly one non-base class N_i of order 2p^i (see <Ref> for the definition of two ge-classes being adjacent).
One can verify this by using the facts that G has a unique subgroup of order 2, and the intersection of two cyclic subgroups is a cyclic subgroup. On the other hand, using <Ref>, one can verify that each N_i is adjacent to exactly one M_i.
So, there is a matching between the ge-classes of order p^i and 2p^i for all 1 ≤ i ≤ n in G. Let C_ij = M_ij∪ N_ij, where M_ij and N_ij denote the j-th ge-class of order p^i and 2p^i (since G is not cyclic, j>1 for at least one i). Analogously, we match the elements e and x (recall that x is the unique element of G of order 2) and put them in C_0. Now observe that G can be viewed as a disjoint union of C_0 and the sets C_ij, where 1≤ i ≤ n and 1< j. We also partition each N_ij in two non empty subsets {a_ij}, where a_ij is an arbitrary element of N_ij and B_ij=N_ij∖{a_ij} (this can always be done since |N_ij|≥ 2). We now describe a partial orientation in which we orient a subset of the edges in the subgraph induced by the set C_0 ⊔ C_ij=C_0⊔({a_ij}⊔ B_ij⊔ M_ij), for each i and each j as follows:
: In this partial orientation, we put the following directed edges:
(i) (a_ij,b), (e,b) and (b,x), for all b ∈ B_ij;
(ii) (a_ij,e), (e,x) and (x,a_ij);
(iii) (v,u), for all v ∈ N_ij and for all u ∈ M_ij.
See <Ref> for an illustration of .
Path directions:
From <Ref>, it can be observed that using , there is a directed path of length at most 2 between any vertex of C_0 and any vertex of C_ij, for any i and any j.
Note that any vertex c ∈ C_ij has an outward edge either (c,e) to e or (c,x) to x (Recall that x is the unique element of order 2.). Now, we want to exhibit a directed path of length at most 3 between two vertices c ∈ C_ij and c' ∈ C_i'j' where i,i',j,j' are non-zero indices and i (respectively j) may or may not be equal to i' (respectively j'). Without loss of generality, let (c, e) ∈ (The other case can be argued similarly.). Then, we can use the edge (c,e) together with the path from e to c' to have a path from c to c' of length at most 3. Hence, it is shown that there is a directed path of length at most 3 between any two vertices of C ∖ C_0. To have a directed path of length at most 3 between any two vertices of C_0, we use the directed 3-cycle (e,x), (x,a_ij), (a_ij,e) for any i, j.□
Algorithm: Given a nilpotent group G, it is easy to compute the oriented diameter of Pow(G) with the help of the characterization given in this paper. We can compute the order of each element in time linear in |G| <cit.>. Once that is done, checking if a group is cyclic is easy. Checking if G has multiple subgroups of prime order p boils down to checking if it has at least p elements of order p. A cyclic group x is maximal if it is not properly contained in y for any y. This can be tested in polynomial time. We note that nilpotency can be tested in polynomial time <cit.>.
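As an illustration, the characterization can be turned into a brute-force check. The Python sketch below is our own and is not part of the algorithm described above: it assumes the group is supplied as a direct product of cyclic groups (hence abelian and nilpotent), enumerates element orders and maximal cyclic subgroups by brute force, and tests conditions (a)-(d) of <Ref>; membership in the excluded class is not verified, and the examples follow the list given in the appendix.

from math import gcd
from itertools import product
from functools import reduce

def lcm(a, b):
    return a * b // gcd(a, b)

def elem_order(x, moduli):
    # order of x in Z_{n1} x ... x Z_{nk}, written additively
    return reduce(lcm, (n // gcd(a, n) for a, n in zip(x, moduli)), 1)

def cyclic_subgroup(y, moduli):
    o = elem_order(y, moduli)
    return {tuple((k * yi) % n for yi, n in zip(y, moduli)) for k in range(o)}

def is_prime(p):
    return p > 1 and all(p % d for d in range(2, int(p ** 0.5) + 1))

def is_power_of(m, p):
    while m % p == 0:
        m //= p
    return m == 1

def oriented_diameter_pow(moduli):
    # OD(Pow(G)) for non-cyclic G = Z_{n1} x ... x Z_{nk}: 3 iff (a) or (b) or (c) or (d), else 4
    elements = list(product(*[range(n) for n in moduli]))
    orders = [elem_order(x, moduli) for x in elements]
    size = len(elements)
    if max(orders) == size:
        raise ValueError("G is cyclic; the characterization above is for non-cyclic groups")
    odd_primes = [p for p in range(3, size + 1) if size % p == 0 and is_prime(p)]
    subs = {x: cyclic_subgroup(x, moduli) for x in elements}
    # orders of maximal cyclic subgroups (brute force, intended only for small groups)
    max_cyc = {elem_order(y, moduli) for y in elements
               if not any(elem_order(z, moduli) > elem_order(y, moduli) and y in subs[z]
                          for z in elements)}
    cond_a = len(odd_primes) >= 2
    cond_b = not any(m % 2 == 0 and m > 2 and (m // 2) % 2 == 1 and
                     any(is_power_of(m // 2, p) for p in odd_primes) for m in max_cyc)
    cond_c = any(orders.count(p) == p - 1 for p in odd_primes)  # unique subgroup of odd prime order
    cond_d = orders.count(2) == 1                               # unique subgroup of order 2
    return 3 if (cond_a or cond_b or cond_c or cond_d) else 4

print(oriented_diameter_pow([6, 6]))      # Z_{2p} x Z_{2p} with p=3: none of (a)-(d), returns 4
print(oriented_diameter_pow([2, 2, 9]))   # 2-group x Z_{3^2}: unique subgroup of order 3, returns 3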
§ NOTES ON ORIENTED DIAMETER OF ENHANCED POWER GRAPHS AND COMMUTING GRAPHS
As a consequence of our results so far, one can easily note the following results regarding the oriented diameter of two related graph classes, namely enhanced power graphs and commuting graphs. First, we provide the definitions of these two graphs.
The enhanced power graph of a group G, denoted by EPow(G), is an undirected graph with vertex set G, in which two vertices x and y are adjacent if and only if they are in a common cyclic subgroup of G, i.e., there exists z in G such that x,y ∈⟨ z ⟩.
The commuting graph of a group G, denoted by Com(G), is an undirected graph with vertex set G, in which {x,y} is an edge if xy=yx under the group operation.
From definitions, one can easily note that E(Pow(G)) ⊆ E(EPow(G)) ⊆ E(Com(G)). Hence for a finite group, OD(Com(G)) ≤ OD(EPow(G)) ≤ OD(Pow(G)). Therefore, if OD(Pow(G))≤ d, then d is an immediate upper bound for the oriented diameter of Com(G) and as well as EPow(G). Hence, from <Ref>, we have the following straightforward corollary.
Let G be a group without any maximal cyclic subgroup of order 2. Then, the oriented diameter of EPow(G) and Com(G) is at most 4.
Moreover, EPow(G) is a complete graph if and only if G is cyclic, and Com(G) is a complete graph if and only if G is abelian. Hence, it makes sense to study oriented diameter for enhanced power graphs of non-cyclic finite groups and commuting graphs for non-abelian finite groups. Now, from our previous discussion, it is clear that <Ref> and <Ref> yield upper bounds for oriented diameter for corresponding enhanced power graphs and commuting graphs. But since there are more edges in EPow(G) and Com(G) than Pow(G), there is a possibility that the actual value of the oriented diameter is less than these upper bounds. Hence, this leads to the following two natural questions.
Question 1: Can we characterize the oriented diameter of enhanced power graphs of non-cyclic finite nilpotent groups?
Question 2: Can we characterize the oriented diameter of commuting graphs of non-abelian finite nilpotent groups?
alpha
§ EXTENDED PRELIMINARY
Let X=(V,E) be a graph. If S ⊆ V(X), then the subgraph with the vertex set S, and edges in E(X) with both endpoints in S, is called the induced subgraph of X on S, and it is denoted by X[S]. In an undirected graph X, a vertex u is said to be a neighbour of a vertex v (and vice versa) if {u,v}∈ E(X). A vertex u is said to be a dominating vertex of a graph X if for any v ∈ V(X), we have {u,v}∈ E(X). The set of all dominating vertices of a graph X is called the set of dominating vertices, denoted by Dom(X). If S,T ⊆ V(X), then E(S,T) denotes the set of edges {s,t}∈ E(X), i.e., the set of edges with one endpoint from S and another endpoint from T. Moreover, we say the edges between S and T are complete edges, if {s,t}∈ E(X) for all s∈ S and for all t∈ T. A bridge in a connected graph is an edge whose removal disconnects the graph.
In an undirected graph X a path between u_1 and u_k is a sequence u_1u_2… u_k of distinct vertices from V(X) such that {u_i,u_i+1}∈ E(X) for each 1≤ i ≤ (k-1). The length of a path is the number of edges participating in it, i.e., the length of the path u_1u_2… u_k is (k-1). A directed path in a directed graph is defined analogously with the condition (u_i,u_i+1)∈ E() for each 1≤ i ≤ (k-1).
We now state some useful group-theoretic facts.
Let x and y be two non-trivial elements of a group G such that xy=yx. Then (xy)^n=x^ny^n.
Let G be a finite group and x,y ∈ G ∖{e} be two elements such that o(x) and o(y) are co-prime to each other and xy=yx. Then, ⟨ xy ⟩ forms a cyclic subgroup of G of order o(x)· o(y). In particular, o(xy)=o(x)· o(y).
The next property about finite nilpotent groups can be proved from the definition of finite nilpotent groups (see <Ref>).
<cit.>
A finite group is nilpotent if and only if two elements with relatively prime orders commute with each other.
§ APPENDIX
§.§ Proof of <Ref>
Since g_1 generates g_2 in G, there exists a natural number k_1 ≥ 1 such that g_1^k_1=g_2. Moreover, g_1^k_1+m_1· o(g_1)=g_2, where m_1 is an integer. Since h_1 generates h_2 in H, we can similarly write that h_1^k_2+m_2· o(h_1)=h_2, where k_2≥ 1 is a natural number and m_2 is an integer. The element (g_1,h_1) generates (g_2,h_2) if and only if there is an integer x such that (g_1,h_1)^x=(g_2,h_2), i.e., g_1^x=g_2 and h_1^x=h_2. Such x exists if the congruence equations x≡ k_1 (mod o(g_1)) and x≡ k_2 (mod o(h_1)) have a solution. Now, gcd(|G|,|H|)=1 implies that o(g_1) and o(h_1) are co-prime to each other. So, by the Chinese Remainder Theorem (see <cit.>), the above equations have a solution, say l, and we can write (g_1,h_1)^l=(g_2,h_2). Hence, (g_1,h_1) generates (g_2,h_2) in G× H.
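For concreteness, the congruence step can be checked on a toy instance. The following sketch (our own, with hypothetical values) takes G = ℤ_8 and H = ℤ_9 written additively, with g_1 = 1 generating g_2 = 2 (so k_1 = 2) and h_1 = 1 generating h_2 = 3 (so k_2 = 3), and solves the two congruences with sympy.

from sympy.ntheory.modular import crt

# gcd(|G|, |H|) = gcd(8, 9) = 1, so the system below always has a solution
o_g1, o_h1 = 8, 9
k1, k2 = 2, 3

x, modulus = crt([o_g1, o_h1], [k1, k2])  # solve x = k1 (mod 8) and x = k2 (mod 9)
print(x, modulus)                          # 66 72
print(x % o_g1, x % o_h1)                  # 2 3, i.e. (g1, h1)^x = (g2, h2) componentwise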
§.§ An observation for the proof of <Ref>
Let G be a non-cyclic nilpotent group and |G|=2^mp^n, where p is an odd prime and m,n ≥ 1. If C is a maximal cyclic subgroup of G of order 2p^β, 1≤β≤ n, containing a base element x of order 2, then any maximal cyclic subgroup of G containing x is of order 2p^γ, for some γ≥ 1.
Let y ∈ C be an element of order p^β. Note that xy generates C. For the sake of contradiction, we assume that a maximal cyclic subgroup C' of G containing x has order 2^α p^γ where α>1, γ≥ 1. Hence, C' must have an element w of order 2^2 (by <Ref>), and w must generate x (because w generates an element of order 2 of C', and x is the only element of order 2 in C'). Now, using <Ref>, w and y commute with each other. Hence, using <Ref> and a well-known number theoretic fact[The congruence equation az≡ b (mod n) has a solution for z if and only if gcd(a,n) divides b.], wy generates both y and w. This implies that wy generates both x (since w generates x) and y. Therefore, wy generates xy. So, wy generates C. This contradicts that C is a maximal cyclic subgroup of G.
§.§ Examples of Nilpotent Groups corresponding to <Ref>
For each of the conditions of <Ref>, we provide examples of finite non-cyclic nilpotent groups that are not in .
* Only condition (a): G=G_1 × G_2 where G_1 is a p-group and G_2 is a q-group, where p and q are odd primes.
* Only condition (b): G= ℤ_4p^n×ℤ_4p^m where p is an odd prime, and m,n ≥ 1.
* Only condition (c): G= G_1 ×ℤ_p^n where G_1 is a 2-group, p is an odd prime and n ≥ 1.
* Only condition (d): G= Q_8 × G_1 where G_1 is a p-group, and p is an odd prime.
* None of (a)-(d): G= ℤ_2p×ℤ_2p where p is an odd prime.
§.§ Proof of <Ref>
Due to <Ref> and <Ref>, it is sufficient to give an orientation of Pow(G) with diameter 3. Along with the partial orientations _1,_2,_3 given in <Ref>, we also use the following partial orientations:
_4 : From any base element of order 2, we orient the edges towards all the adjacent non-base elements of order 2p^β (where β≥ 1).
_5 : Consider a non-base class N of order 2^α p^β, α≥ 2, β≥ 1. By <Ref>, N is adjacent to only one base class M of order 2^2. Then, we orient the edges of E(M,N) as described in _2 of <Ref>. The choices of anchor gadget points of N for M depend on M as stated in <Ref>. In other words, while introducing _5 in a non-base class N of order 2^α p^β, α≥ 2, β≥ 1, we select a pair of gadget anchor points {n_1,n_2} in N for the base class of order 2^2 adjacent to N in such a way that neither n_1 nor n_2 has been used as gadget anchor point in N while introducing _2 for the base class of order p adjacent to N. This is possible since only one base class of odd prime order is adjacent to N and also |N|=ϕ(2^α p^β)>2^2 = 2·2.
_6 : From any base element of order 2^α, α≥ 3, we orient the edges towards all the adjacent non-base elements. Note that these non-base elements are of order 2^δ p^β, where δ≥α and β≥ 1.
_7 : From any non-base element of order 2^2p^β, β≥ 1, we orient the edge towards the (unique) adjacent element of order 2.
We show an illustration of the given orientations in <Ref>. The set B is partitioned into three subsets as follows: (a) R_1: consisting of the elements of order 2; (b) R_2: consisting of the elements of order 2^2 and p; (c) R_3: consisting of the elements of order 2^α, α≥ 3, and p^β, β≥ 2.
Path directions: First, we point out the following observations, which can be argued similarly to <Ref>:
Note 1:
There is a directed path of length 2 from any element of order 2^α to any element of order p, using _4 (when α=1) or _5 (when α=2) or _6 (when α≥ 3) along with using _2.
Note 2: There is a directed path of length 2 from any element of order p^β, β≥ 1 to any element of order 2^2 using _2 (when β=1) or _3 (when β≥ 2) along with using _5.
Note 3: There is a directed path of length 2 from any element of order p^β, β≥ 1 to any element of order 2 using _2 (when β=1) or _3 (when β≥ 2) along with using _7.
Let Γ=Pow(G) and denote the disjoint union of _1,…,_6. Then, we use the notation Γ_ to denote the directed graph (V(Γ),). Moreover, let d(a,b) (we use d(a,b) instead of d_Γ_(a,b) as Γ and are fixed in this context) denote the shortest distance from a vertex a to a vertex b in the directed graph Γ_ and d(a,S)=min_s ∈ Sd(a,s) denote the shortest distance from a vertex a to a set S in Γ_.
Although other than the path direction from a base element to a non-base element, the path directions are the same as those discussed in the proof of <Ref>, we discuss them here also for the sake of completeness.
From <Ref>, one can see that d(v,e)=1 for any non-base element v and d(e,u)=1 for any base element u. This also implies that d(v,u) ≤ 2, i.e., there is a directed path of length at most 2 from any element v∈ NB to any element u∈ B.
We claim that if u ∈ B=R_1∪ R_2∪ R_3 then d(u,NB)=1. For this, observe that if u ∈ R_1, then there exists some v∈ NB such that (u,v) ∈_4. Similarly, if u ∈ R_2, then there exists some v ∈ NB such that (u,v) ∈_2∪_5 and if u ∈ R_3, then there exists some v ∈ NB such that (u,v) ∈_3 ∪_6.
Noting d(u,NB)=1 for all u∈ B and (v,e)∈_1 for all v∈ NB, we have a directed path of length at most 2 from any element of B to e. Combining such a path with (e,u')∈_1, where u' is any element in B, we get a directed path of length at most 3 between any two elements of B.
Now, for any non-base element v∈ NB, there exists at least one element u ∈ R_2 such that (u,v)∈_2. Moreover since (e,u)∈_1 for all u∈ R_2 ⊆ B, we have d(e,v)= 2 for all v∈ NB. Now, as (v',e)∈_1 for all v'∈ NB, we get d(v',v)≤ 1+d(e,v) = 3, i.e., there is a directed path of length at most 3 between any two elements in NB.
Now, the only case that remains to be discussed is when the source vertex u is from B, and the destination vertex v is from NB. Since v is a non-base element, p | o(v), and hence there exists an element a∈⟨ v ⟩ such that o(a)=p. So the base class [a] is in R_2 and participates in a C_4-gadget with the non-base class [v] due to _2 (see <Ref>). Now, if o(u)=2^α, α≥ 1, then using Note 1, we have d(u,a') ≤2 for all a' ∈ [a]. Further using the C_4-gadget between [a] and [v], we have d(u,v) ≤ 3. Else, consider the case when o(u)=p^β, β≥ 1. If 2^2 ∤ o(v), then Note 2 implies d(u,b) ≤ 2, where b is the (unique) element of order 2 in v. Moreover, since b ∈ R_1, we have (b,v) ∈_4. This gives us d(u,v) ≤ 3 in this case. If 2^2 | o(v), then there exists an element c∈v of order 2^2 and [c] participates in a C_4-gadget with [v]. Now, using Note 3, we have d(u,c') ≤ 2 for all c' ∈ [c]. After that, due to the C_4-gadget between [c] and [v] we have d(u,v) ≤ 3.
§.§.§ Proof of <Ref>
Due to <Ref> and <Ref>, it is sufficient to give an orientation of Pow(G) with diameter 3. For that, along with the partial orientations _1,_2,_3 discussed in <Ref>, we use the following partial orientation.
_4: From any base element of order 2^α, α≥ 1, we orient the edges towards all the adjacent non-base elements. Note that these non-base elements are of order 2^δ p^β, where δ≥α and β≥ 1.
Path directions: First, we point out the following observations, which can be argued similarly to <Ref>:
Note 1: Using _4 together with _2, there is a directed path of length 2 from any base element of order 2^α, α≥ 1 to any base element of order p.
Note 2: Using _3 together with _2, there is a directed path of length 2 from any base element of order p^β, β≥ 2 to any base element of order p.
Let Γ=Pow(G) and denote the disjoint union of _1,…,_4. Then, we use the notation Γ_ according to <Ref>. Moreover, let d(a,b) (we use d(a,b) instead of d_Γ_(a,b) as Γ and are fixed in this context) denote the shortest distance from a vertex a to a vertex b in the directed graph Γ_ and d(a,S)=min_s ∈ Sd(a,s) denote the shortest distance from a vertex a to a set S in Γ_.
Although other than the path direction from a base element to a non-base element, the path directions are the same as those discussed in the proof of <Ref>, we discuss them here also for the sake of completeness.
One can see that d(v,e)=1 for any non-base element v and d(e,u)=1 for any base element u. This also implies that d(v,u) ≤ 2, i.e., there is a directed path of length at most 2 from any element v∈ NB to any element u∈ B.
We claim that if u ∈ B, then d(u,NB)=1. For this, observe that if o(u)=2^α, α≥ 1, then there exists some v∈ NB such that (u,v) ∈_4. Similarly, if o(u)= p, then there exists some v ∈ NB such that (u,v) ∈_2 and if o(u)=p^β, β≥ 2, then there exists some v ∈ NB such that (u,v) ∈_3.
Noting d(u,NB)=1 for all u∈ B and (v,e)∈_1 for all v∈ NB, we have a directed path of length at most 2 from any element of B to e. Combining such a path with (e,u')∈_1, where u' is any element in B, we get a directed path of length at most 3 between any two elements of B.
Now, for any non-base element v∈ NB, there exists at least one element u ∈ B such that o(u)=p and (u,v)∈_2. Moreover since (e,u)∈_1 for all u∈ B, we have d(e,v) = 2 for all v∈ NB. Now, as (v',e)∈_1 for all v'∈ NB, we get d(v',v) ≤ 1+d(e,v) = 3, i.e., there is a directed path of length at most 3 between any two elements in NB.
Now, the only case that remains to be discussed is when the source vertex u is from B, and the destination vertex v is from NB. At first, observe that since by assumption G has a unique subgroup of order p, it has only one base class [a] of order p. Also, the base class [a] participates in a C_4-gadget with the non-base class [v] due to _2. Now, if o(u)=2^α, α≥ 1, then using Note 1, we have d(u,a')=2 for all a' ∈ [a]. Further using the C_4-gadget between [a] and [v], we have d(u,v) ≤ 3. If o(u)=p^β, β≥ 2, then using Note 2 and the C_4-gadget between [a] and [v], we have d(u,v) ≤ 3. If o(u)=p, it is easy to observe that u ∈ [a]. Now, we use the directed edges between [a] and [v], which are in _2. This gives a directed path from any u ∈ [a] to v ∈ NB of length at most 3 (see <Ref>).
|
http://arxiv.org/abs/2409.02461v1 | 20240904060055 | Phase separation in soft repulsive polymer mixtures: foundation and implication for chromatin organization | [
"Naoki Iso",
"Yuki Norizoe",
"Takahiro Sakaue"
] | cond-mat.soft | [
"cond-mat.soft"
] |
1]Naoki Iso
1]Yuki Norizoe
1]Takahiro Sakaue^∗
[1]Department of Physical Sciences, Aoyama Gakuin University
Phase separation in soft repulsive polymer mixtures: Foundation and implication for chromatin organization
[
==========================================================================================================
§ ABSTRACT
Given a wide range of length scales, the analysis of polymer systems often requires coarse-graining, for which various levels of description may be possible depending on the phenomenon under consideration. Here, we provide a super-coarse-grained description, where polymers are represented as a succession of mesoscopic soft beads which are allowed to overlap with others.
We then investigate the phase separation behaviors in mixtures of such homopolymers based on a mean-field theory, and discuss universal aspects of the miscibility phase diagram in comparison with numerical simulations. We also discuss an extension of our analysis to mixtures involving random copolymers, which might be interesting in the context of chromatin organization in the cellular nucleus.
[0]^* Department of Physical Sciences, Aoyama Gakuin University, 5-10-1 Fuchinobe, Chuo-ku, Sagamihara, Japan. E-mail:[email protected]
§ INTRODUCTION
Phase separations in polymer solution and blend have a long history of research due to its importance in fundamental science as well as industrial applications <cit.>. Recently, its pivotal role in the field of biophysics has been recognized as a basic mechanism to organize various cellular and nuclear bodies <cit.>.
Here, given the complexity in biological systems, standard approaches such as the Flory-Huggins theory to analyze the phase separation do not always suffice, and various extensions or modifications may be called for depending on the phenomena under consideration.
In this paper, we provide one such example, where we investigate the phase separation behavior of polymer mixtures made of mesoscopic segments.
Our work has been motivated by the recent attempts to simulate chromatin organization in cellular nucleus.
In ref. <cit.>, Fujishiro and Sasai constructed a polymer model of the whole genome of human cells, where each chromatin is modeled as a succession of soft-core monomers.
Here, individual monomers (beads) represent ∼ 10^2 kbp of DNA, which is much larger than the conventional monomers defined in standard theory or simulation of polymer systems. They argued that the interaction between such mesoscopic segments is soft and repulsive, and the imbalance in such repulsion in systems with e.g., eu- and hetero-chromatic monomers would trigger the phase separation.
Similar modelings of the large scale behavior of chromatin with soft-core potentials naturally arise after the coarse-graining, and hence have been employed in other works as well <cit.>, where the soft potential incorporates the entropic effect relevant to the mesoscopic segments.
How can we describe such phase separation phenomena in chromatin theoretically? The immediate complication lies in the copolymer nature of the chromatin model <cit.>. However, even if we set aside the sequence effect and restrict our attention to a binary mixture of homopolymers, the application of the Flory-Huggins theory is hampered because of the allowed overlap between monomers due to the soft-core nature of the inter-monomer potentials.
A key insight would thus be obtained from the phase behavior of mixtures of soft particles. This problem has been extensively studied by the groups of Likos, Löwen and Kahl <cit.>.
Very recently, Staňo, Likos and Egorov have extended the framework to the system of chains of soft beads<cit.>. Although their primal target is a mixture of linear polymers and ring polymers (or polycatenanes), we expect that the same physics applies to chromatin system, too.
Our first aim is thus to recapitulate and to numerically validate the theoretical framework for the binary mixture of polymers made of soft monomers, which allows one to analyze the phase behavior.
In Sec. <ref>, we introduce the mean-field free energy for our system. From the analysis of the free energy, we present in Sec. <ref>, the miscibility phase diagram and compare it with the result from molecular dynamic (MD) simulations. Sec. <ref> is devoted to discussions on universal aspects of the phase diagram, comparison with a conventional Flory-Huggins theory, and connection to the Gaussian core model. Building on the framework, we also discuss its extension to a system containing copolymers in Sec. <ref>.
§ FREE ENERGY OF THE SOFT REPULSIVE POLYMER MIXTURE
Following Staňo et al. <cit.>, we adopt the following free energy density for the mixture of polymers modeled as a succession of soft beads which represent mesoscopic segments
f/k_BT = (c_a/N_a) ln c_a + (c_b/N_b) ln c_b
+ (1/2) χ_aa c_a^2 + (1/2) χ_bb c_b^2 + χ_ab c_a c_b
where k_BT is thermal energy, c_x and N_x are, respectively, the number density of beads and the chain length (number of beads per chain) of component x (=a or b). Parameters χ_xy (>0) represent the strength of the repulsive interaction between beads x and y.
Note that in this representation, the parameters χ_xy have units of volume, and we measure them in units of the volume of individual beads. In other words, we assume, for simplicity, that the characteristic sizes (σ) of beads a and b are equal, and σ is taken to be the unit of length. Although there is no attraction, the phase separation may be induced by the asymmetry in the repulsion, i.e., χ_aa≠χ_bb. At first sight, Eq. (<ref>) looks like a free energy in the second virial approximation valid for low concentrations. As we shall show below, however, the free energy (<ref>) is capable of describing the phase separation in the concentrated regime (c_a + c_b) σ^3 > 1 (see Sec. <ref> for discussion).
Let us first clarify a mathematical aspect relevant to the phase equilibria condition in the system described by the free energy Eq. (<ref>). If the homogeneous mixture of polymer A and polymer B separates into phase 1 and phase 2, the demixed state is specified by the concentrations of both components in respective phases, i.e., (c_a^(1), c_b^(1)) and (c_a^(2), c_b^(2)). The number of unknowns is thus n_u=4.
On the other hand, the phase equilibrium between the two phases requires the equality of the chemical potentials μ_x^(α)=∂ f/∂ c_x evaluated in each phase (α=1 or 2) for both components (x=a or b), i.e., μ_a^(1)=μ_a^(2) and μ_b^(1)=μ_b^(2), and also the mechanical balance ensured by the equality of the pressure P^(1)=P^(2), where P(c_a,c_b) = c_a [∂ f (c_a,c_b)/ ∂ c_a] + c_b [∂ f (c_a,c_b)/ ∂ c_b] - f(c_a,c_b), leading to n_c=3 conditions that determine the phase equilibria.
Comparing the number of unknowns and that of conditions, we expect that the dimensionality of the phase boundary, i.e., binodal is d_pb=n_u-n_c=1.
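As a minimal numerical sketch of this counting (not part of the original analysis), one may fix c_a^(1) and solve the remaining three conditions for (c_b^(1), c_a^(2), c_b^(2)) with a standard root finder. The parameter values below correspond to the simulation example of the next section through χ_xy ≃ 2.5 ϵ_xy; the function names and the initial guess are ours, and the guess has to be biased toward two distinct phases, since the trivial root with identical phases always satisfies the three conditions.

import numpy as np
from scipy.optimize import fsolve

# (eps_aa, eps_bb, eps_ab) = (2, 1, 1.5), N_a = N_b = 20, and chi_xy ~ 2.5 * eps_xy
Na, Nb = 20, 20
chi_aa, chi_bb, chi_ab = 5.0, 2.5, 3.75    # note chi_ab > sqrt(chi_aa * chi_bb) ~ 3.54

def mu_a(ca, cb):      # d f / d c_a, in units of k_B T
    return (np.log(ca) + 1.0) / Na + chi_aa * ca + chi_ab * cb

def mu_b(ca, cb):      # d f / d c_b
    return (np.log(cb) + 1.0) / Nb + chi_bb * cb + chi_ab * ca

def pressure(ca, cb):  # P = c_a mu_a + c_b mu_b - f
    return ca / Na + cb / Nb + 0.5 * chi_aa * ca**2 + 0.5 * chi_bb * cb**2 + chi_ab * ca * cb

def tie_line(ca1, guess=(0.08, 0.08, 0.55)):
    # fix c_a in phase 1 and solve the three coexistence conditions for (c_b^(1), c_a^(2), c_b^(2));
    # unknowns are log-concentrations so that they remain positive during the iteration
    def eqs(y):
        cb1, ca2, cb2 = np.exp(y)
        return [mu_a(ca1, cb1) - mu_a(ca2, cb2),
                mu_b(ca1, cb1) - mu_b(ca2, cb2),
                pressure(ca1, cb1) - pressure(ca2, cb2)]
    sol, info, ier, msg = fsolve(eqs, np.log(guess), full_output=True)
    cb1, ca2, cb2 = np.exp(sol)
    if ier != 1 or abs(ca2 - ca1) < 1e-4:  # no convergence, or the trivial root (identical phases)
        return None
    return (ca1, cb1), (ca2, cb2)

print(tie_line(0.4))   # one pair of coexisting (c_a, c_b), if the solver converges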
§ PHASE DIAGRAM OF THE SOFT REPULSIVE POLYMER MIXTURE
In Fig. <ref> (a), we show an example of the miscibility phase diagram obtained from the free energy Eq. (<ref>) for fixed interaction parameters.
As we have discussed, the phase diagram is two-dimensional, spanned by c_a and c_b, in which the uniform state (bottom left) and the demixed state (upper right) are separated by a one-dimensional phase boundary. Tie lines, which connect (c_a^(1), c_b^(1)) and (c_a^(2), c_b^(2)) in the demixed state, are negatively sloped, indicating that the phase separation is of segregative type.
As expected, the region for demixing widens with the increase in either chain length or the repulsion strength, see Fig. <ref> (b).
Note that despite the symmetry in the chain length N_a = N_b in the examples shown here, the phase diagram exhibits an asymmetry about the diagonal c_a=c_b. The asymmetry is caused by the difference in physical properties of type a and b beads, which leads to the phase rich in softer beads b being more concentrated than the other. Such a feature can be made more evident by re-plotting the phase diagram in the plane of total concentration c = c_a + c_b and composition ψ = c_a/c (Fig. <ref> (c)).
To check the validity of the free energy prediction, we have performed numerical simulations of the polymer mixture.
Briefly, the system is a mixture of two types of linear homopolymers A and B, where A (B) polymer is made of a succession of N_a (N_b) beads of type a (b) (see Appendix for details of the simulation model). To represent the soft repulsion between monomers, we employ the Gaussian potential, see Eq. (<ref>) in Appendix, where the strength of the repulsion between x-bead and y-bead is ϵ_xy in unit of k_BT.
The numerically determined phase boundary shown in Fig. <ref> (a) is obtained with the interaction strength ϵ_aa=2, ϵ_bb =1, ϵ_ab=1.5 and the chain length N_a = N_b=20, where the overall concentrations are varied from (c_a^(o), c_b^ (o)) = (0.1, 0.1) to (c_a^(o), c_b^ (o)) = (0.5, 0.5).
We find that the mixture is homogeneous at (c_a^(o), c_b^ (o)) = (0.1, 0.1), but develops large concentration fluctuation at (c_a^(o), c_b^ (o)) = (0.25, 0.25), and further increase in concentration leads to a well-defined phase separated structure (Fig. <ref> (b)).
Remarkably, the numerically determined phase diagram resembles that predicted by the free energy analysis (Fig. <ref>). More specifically, we find that the numerical and analytical phase diagrams almost overlap under the correspondence χ_xy/ ϵ_xy≃ 2.5 between interaction parameters in free energy and interaction strengths in simulation.
§ DISCUSSIONS
§.§ General aspects of phase diagram
In Sec. <ref>, we have shown one example of how the phase boundary alters with the change in chain length or the interaction strength (Fig. <ref> (b)). To clarify the dependence of the shape of the phase diagram on system parameters in a more systematic way, it is desirable to find out universal aspects inherent to the model described by the free energy (<ref>).
To this end, we introduce the rescaled concentrations c̃_a = c_a N_a χ_aa and c̃_b = c_b N_b χ_bb, which enables us to rewrite Eq. (<ref>) as
f/k_BT = [1/(N_a^2 χ_aa)] [ c̃_a ln c̃_a + k_1 c̃_b ln c̃_b + (1/2) c̃_a^2 + (1/2) k_1 c̃_b^2 + k_2 c̃_a c̃_b ]
where irrelevant linear terms in concentrations are dropped, and coefficients are
k_1 = (N_a^2/N_b^2)(χ_aa/χ_bb), k_2 = (N_a/N_b)(χ_ab/χ_bb) = √(k_1) χ_ab/√(χ_aa χ_bb)
The above free energy density is invariant under the parameter changes which keep k_1 and k_3 = χ_ab/√(χ_aa χ_bb) constant. These conditions are satisfied by the following transformations
(χ_aa, χ_bb, χ_ab) ⇒ p (χ_aa, k χ_bb, √(k)χ_ab)
(N_a, N_b) ⇒ q (N_a, N_b/√(k))
where p, q, k are positive real numbers.
Therefore, with the change in parameters according to Eqs (<ref>) and (<ref>), the phase diagram drawn in terms of rescaled concentrations remains the same.
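A quick numerical check of this invariance, with arbitrarily chosen positive factors p, q and k (the values below are our own), reads:

import numpy as np

def k_params(chi_aa, chi_bb, chi_ab, Na, Nb):
    k1 = (Na / Nb) ** 2 * chi_aa / chi_bb
    k3 = chi_ab / np.sqrt(chi_aa * chi_bb)
    return k1, k3

p, q, k = 1.7, 0.6, 3.0   # arbitrary positive factors
print(k_params(5.0, 2.5, 3.75, 20, 20))
print(k_params(p * 5.0, p * k * 2.5, p * np.sqrt(k) * 3.75, q * 20, q * 20 / np.sqrt(k)))
# both lines give the same (k_1, k_3), up to floating-point rounding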
A similar analysis has been done by Staňo et al. <cit.>.
Indeed, from the stability analysis of the uniform state, one finds the spinodal curve
(1+c̃_a)(1+c̃_b)/(c̃_a c̃_b) = k_3^2
and the critical point is determined by Eq. (<ref>) together with
c̃_a(1+c̃_a)^3/[c̃_b(1+c̃_b)^3] = k_1
Eq. (<ref>) indicates that a necessary condition for the phase separation is k_3>1, i.e., χ_ab > √(χ_aaχ_bb), and Eq. (<ref>) determines the location of the critical point on the spinodal curve <cit.>.
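Since Eq. (<ref>) can be solved for c̃_b in closed form, the critical point follows from a one-dimensional root search along the spinodal. The sketch below (our own helper names, with the same illustrative parameters χ_xy ≃ 2.5 ϵ_xy as before) implements this and also converts the result back to bare concentrations.

import numpy as np
from scipy.optimize import brentq

Na, Nb = 20, 20
chi_aa, chi_bb, chi_ab = 5.0, 2.5, 3.75
k1 = (Na / Nb) ** 2 * chi_aa / chi_bb
k3 = chi_ab / np.sqrt(chi_aa * chi_bb)

def spinodal_cb(ca):
    # rescaled c_b on the spinodal for a given rescaled c_a (requires k3 > 1)
    den = k3 ** 2 * ca - (1.0 + ca)
    return (1.0 + ca) / den if den > 0 else np.inf

def critical_condition(ca):
    cb = spinodal_cb(ca)
    return ca * (1 + ca) ** 3 - k1 * cb * (1 + cb) ** 3

ca_min = 1.0 / (k3 ** 2 - 1.0) + 1e-6   # the spinodal exists only beyond this rescaled c_a
grid = np.linspace(ca_min, 50.0 / (k3 ** 2 - 1.0), 2000)
vals = [critical_condition(x) for x in grid]
for x0, x1, v0, v1 in zip(grid[:-1], grid[1:], vals[:-1], vals[1:]):
    if v0 * v1 < 0:   # sign change brackets the critical point; refine with brentq
        ca_c = brentq(critical_condition, x0, x1)
        cb_c = spinodal_cb(ca_c)
        print("critical point (rescaled):", ca_c, cb_c)
        print("critical point (bare):    ", ca_c / (Na * chi_aa), cb_c / (Nb * chi_bb))
        break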
In Fig. <ref>, we demonstrate a collapse of the phase diagram upon rescaling.
§.§ Comparison with Flory-Huggins theory
It is instructive to compare the present theory with the standard Flory-Huggins theory for polymer blends.
The Flory-Huggins free energy (per lattice site) for a blend of polymer A and B is written as <cit.> <cit.>
f_ FH/k_BT = ϕ_a/N_alnϕ_a + ϕ_b/N_blnϕ_b + χϕ_a ϕ_b
where ϕ_x is the volume fraction of component x, and a non-dimensional parameter χ measures the nature and the strength of the interaction. The incompressibility condition enforces ϕ_a + ϕ_b =1.
Since χ is usually positive, corresponding to attraction among the like-species, such an interaction acts as a driving force for the phase separation. The free energy (<ref>) reduces to that of a polymer solution in the limit N_b=1, where the component b represents a solvent.
Figure <ref> shows the phase diagram calculated from Eq. (<ref>).
When we fix the interaction parameter at the value χ > χ_c, where χ_c=(√(1/N_a) + √(1/N_b))^2/2 is the critical value for the phase separation, the phase diagram as a function of ϕ_a is a one-dimensional line with two points ϕ_a^(1) and ϕ_a^(2) representing the phase boundaries. If the overall concentration falls in between these two points, the uniform state is metastable (outside the spinodal region) or unstable (inside the spinodal region) and the system phase separates into the A-poor (dilute) and A-rich (concentrated) phases with the volume fractions ϕ_a^(1) and ϕ_a^(2), respectively. Note that the dimensionality of the phase boundary at fixed χ is d_ pb=0, i.e., points, which is a consequence of the equality of the number of conditions (n_c=2, i.e., μ_a^(1)=μ_a^(2) and μ_b^(1)=μ_b^(2)) and the number of unknowns (n_u=2, i.e., ϕ_a^(1) and ϕ_a^(2)).
We note that the set of conditions μ_a^(1)=μ_a^(2) and μ_b^(1)=μ_b^(2) is equivalent to μ_a^(1)=μ_a^(2) and Π^(1)=Π^(2), where Π(ϕ_a) = ϕ_a [df_ FH(ϕ_a)/dϕ_a] - f_ FH(ϕ_a) is the osmotic pressure, the use of which may be more common in polymer solutions, where the component b is regarded as a solvent.
These two methods are equivalent due to the relation
-Π(ϕ_a) v_0 = μ_b(ϕ_a)
which follows from the incompressibility condition <cit.>, where v_0 is the volume of the monomers and solvents.
In contrast, in our description of the polymer mixture with soft potentials, the solvent degrees of freedom are already integrated out, and c_a and c_b are independent variables without constraint, i.e., free from the incompressibility condition. One can conceive that our system under consideration is a three-component system (two solutes A and B plus solvent), and the free energy density (<ref>) represents a mesoscopic description after coarse-graining.
It is also known that the interaction part in the Flory-Huggins free energy initially takes the form (χ_aa/2) ϕ_a^2 + (χ_bb/2) ϕ_b^2 + χ_abϕ_a ϕ_b. Rewriting it into the form of Eq. (<ref>) with a single parameter χ = χ_ab - (χ_aa + χ_bb)/2 is made using the incompressibility condition. Again, it does not apply to our soft polymer description. Since our system originally possesses three components, we naturally need three interaction parameters to characterize the system.
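This reduction can be verified symbolically. In the short check below (our own), the single parameter is read off as minus the coefficient of ϕ_a^2 after eliminating ϕ_b, since χϕ_aϕ_b = χϕ_a - χϕ_a^2.

import sympy as sp

phi_a, x_aa, x_bb, x_ab = sp.symbols('phi_a chi_aa chi_bb chi_ab', positive=True)
phi_b = 1 - phi_a    # incompressibility

interaction = sp.Rational(1, 2) * x_aa * phi_a**2 \
            + sp.Rational(1, 2) * x_bb * phi_b**2 + x_ab * phi_a * phi_b
# the single Flory-Huggins parameter is minus the coefficient of phi_a**2
chi_eff = -sp.expand(interaction).coeff(phi_a, 2)
print(chi_eff)    # chi_ab - chi_aa/2 - chi_bb/2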
We also note that the critical χ parameter in blends of long polymers N_a, N_b ≫ 1 is χ_c → 0 in Flory-Huggins theory. A necessary condition for the phase separation in this limit is thus χ > 0 ⇔χ_ab > (χ_aa+χ_bb)/2. In contrast, as discussed in Sec. <ref>, the corresponding condition in our soft polymer mixtures is χ_ab > √(χ_aaχ_bb) independent of the chain length<cit.>.
§.§ Relation with Gaussian core model
In our model, polymers are described as N successive soft beads, where these beads are already mesoscopic entities with their internal degrees of freedom integrated out. We note here that, starting from a microscopic model, there is a freedom to choose N, i.e., the degree of the coarse-graining.
Although the extreme limit of the choice is N=1, in which individual polymers are described as single soft particles, the validity of such a description may break down at high concentration<cit.>.
It is known that the effective pair potential between two isolated polymer coils in dilute solution can be well approximated by a simple Gaussian potential
U(r)/k_BT = ϵexp( -r^2/R^2)
where ϵ≃ 2 and the width R is of the order of the gyration radius of the coil. The fact that the energy scale of the interaction is of the order of the thermal energy indicates the entropic nature of the interaction. It has been shown that the above potential also provides a reasonable description for the effective interaction in semidilute solutions, where polymers are overlapping.
Thermodynamic properties of a fluid composed of soft particles interacting through Eq. (<ref>), i.e., the Gaussian core model, have been analyzed in detail by Louis et al. <cit.>.
Phase separation in binary mixtures of such fluids has also been extensively studied <cit.>.
As discussed in Sec. <ref>, our free energy (<ref>) can formally be mapped to that case by (N_a, N_b, c_a, c_b) → (1,1,c_a/N_a, c_b/N_b).
The analysis in Sec. <ref> may then indicate that the miscibility phase diagram is intact if we simultaneously transform the interaction parameters as (χ_aa, χ_bb, χ_ab) → (χ_aaN_a^2, χ_bbN_b^2, χ_abN_a N_b).
One may then conclude that the introduction of the “polymerization index" N_a and N_b might be auxiliary for the description of homopolymer mixtures.
However, there are, at least, two reasons we need the polymeric description.
First, the N=1 description is known to suffer from a significant concentration dependence of the effective repulsive interactions once polymer coils start to overlap deep in the semidilute concentration regime. This motivates the multisegment description with N>1 <cit.>, where a suitable choice for N would be guided by the overlapping condition for mesoscopic segments.
Second, once there arises a characteristic length scale in the problem, we need the polymeric description as a succession of beads with appropriate degree of coarse-graining. In Sec. <ref>, we provide one such example, where we analyze the effect of modulation in local physical properties along polymers, (i.e., due to post translational modification) on the phase separation.
Another point deserving comment is the relation between the strength of the interaction potential ϵ_xy and the interaction parameter χ_xy in our free energy (<ref>). We have shown in Sec.<ref> that the simulation results quantitatively match with the free energy prediction under the relation χ_xy/ ϵ_xy≃ 2.5.
Since the free energy (<ref>) takes apparently the same form as the virial expansion up to second order, one may expect that the ϵ_xy - χ_xy relation would be obtained from χ_xy= - ∫ (e^-U_xy(r)/k_BT-1) dr⃗.
As emphasized in ref. <cit.>, however, the free energy (<ref>) is based on the random phase approximation (RPA)<cit.>. As such, it becomes more and more accurate in higher concentration regimes in contrast to the second virial approximation<cit.>. In fact, unlike the virial expansion, the quadratic form of the free energy in concentrations is a consequence of the RPA closure, where the direct correlation functions, which appear in the Ornstein-Zernike relation, are independent of the concentrations. The analysis of the equation of state within RPA leads to the identification χ_xy= (1/k_BT) ∫ U_xy(r) dr⃗ = π^3/2σ_xy^3 ϵ_xy.
Given that the resultant ratio χ_xy/ϵ_xy = π^3/2 is considered to be an upper bound compared to more accurate estimates, e.g., obtained from the hypernetted chain closure <cit.>, we find our result χ_xy/ϵ_xy≃ 2.5 reasonable, providing an overall consistency of the soft-core model description of the phase separation based on the free energy Eq. (<ref>).
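The two identifications can also be compared numerically. The sketch below (our own, for a single pair with σ = 1 and ϵ = 2 in units of k_BT) evaluates the RPA integral, which reproduces the ratio π^3/2≃ 5.6, and the second-virial integral, which is smaller but still above the empirical value ≃ 2.5 quoted above.

import numpy as np
from scipy.integrate import quad

sigma, eps = 1.0, 2.0    # width and strength (in units of k_B T) of the Gaussian potential

betaU = lambda r: eps * np.exp(-(r / sigma) ** 2)

chi_rpa, _ = quad(lambda r: betaU(r) * 4 * np.pi * r**2, 0, 20 * sigma)
chi_vir, _ = quad(lambda r: (1 - np.exp(-betaU(r))) * 4 * np.pi * r**2, 0, 20 * sigma)

print(chi_rpa / eps, np.pi ** 1.5)   # RPA: ratio = pi^(3/2) ~ 5.57, independent of eps
print(chi_vir / eps)                 # second virial: smaller, but still above the fitted ~2.5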
§ MIXTURES WITH RANDOM COPOLYMERS
So far discussed is a foundation for the coarse-grained description of polymer mixture, where polymers are represented as succession of soft mesoscopic beads. In particular, we have focused on the phase behavior of mixture of homopolymers.
As stated in Introduction, however, one of the main motivations to necessitate such a description is its relevance to describe the large scale behavior of chromatin in cellular nucleus. In this section, we would like to discuss a simple extension of our theory, which may be linked to a certain aspect of chromatin organization in living cells.
It is known that interphase chromatin in early embryos is quite homogeneous inside the nucleus, which is, in a certain sense, reminiscent of a uniform solution of homopolymers <cit.>. With the progress of the developmental stage, however, several characteristic structures, such as heterochromatin foci and transcription factories, start to appear <cit.>. Responsible for such structure formation would be a phase separation, which is driven by local alteration of chromatin monomers caused by, e.g., post-translational modifications. The change in the chemical state of chromatin monomers likely induces a modulation of physical properties along the chromatin polymer, which could be represented by a copolymer model.
Since the variation in repulsive forces primarily reflects the difference in density of core-bearing monomers within the coarse-grained segments<cit.>, the segment “a" represents regions where chromatin exists in a relaxed, less condensed state, reminiscent of euchromatin, while the segment “b" corresponds to more condensed regions akin to heterochromatin. The structure formation under consideration could thus be treated as the appearance of copolymers in a matrix of homopolymers. With this in mind, let us consider a mixture of homopolymers H (with length N_h), which consist of type a beads only, and copolymers C (with length N_c), which consist of both type a and type b beads.
The monomer concentrations of the H and C polymers are c_h and c_c, respectively. For analytical tractability in a simple mean-field description, we assume the latter to be a random copolymer, characterized by the fraction α of b beads, i.e., the number of b beads in a copolymer C is α N_c.
The free energy of the mixture is written as
f/k_BT = (c_h/N_h) ln c_h + (c_c/N_c) ln c_c
+ (1/2) χ_hh c_h^2 + (1/2) χ_cc c_c^2 + χ_hc c_h c_c
which takes the same form as Eq. (<ref>) except for the appearance of new interaction parameters.
While χ_hh = χ_aa holds trivially from the definition of the homopolymer H, the other parameters χ_cc and χ_hc, which appear in place of χ_bb and χ_ab, respectively, are nontrivial.
Given the randomness in the sequence of the copolymer, we can evaluate these interaction parameters as mean values of the inter-bead interactions χ_aa, χ_bb, χ_ab:
χ_cc = (1-α)^2 χ_aa + α^2 χ_bb + 2 α (1-α) χ_ab
χ_hc = (1-α) χ_aa + αχ_ab
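To make the use of these mixing rules concrete, the sketch below evaluates χ_cc and χ_hc for a given α and tests the local stability of the free energy (<ref>) through the sign of its Hessian determinant; the spinodal corresponds to a vanishing determinant. The function names and all numerical values are illustrative assumptions, not quantities taken from the paper.

```python
# Sketch: effective interaction parameters of the random copolymer (the two equations
# above) and the local-stability test from the Hessian of the free energy f(c_h, c_c).
import numpy as np

def copolymer_chis(chi_aa, chi_bb, chi_ab, alpha):
    """Mean-field averages over a random sequence with b-bead fraction alpha."""
    chi_cc = (1 - alpha)**2 * chi_aa + alpha**2 * chi_bb + 2 * alpha * (1 - alpha) * chi_ab
    chi_hc = (1 - alpha) * chi_aa + alpha * chi_ab
    return chi_cc, chi_hc

def locally_stable(c_h, c_c, N_h, N_c, chi_hh, chi_cc, chi_hc):
    """True if the Hessian of f(c_h, c_c) is positive definite at (c_h, c_c)."""
    f_hh = 1.0 / (N_h * c_h) + chi_hh          # d^2 f / dc_h^2
    f_cc = 1.0 / (N_c * c_c) + chi_cc          # d^2 f / dc_c^2
    det = f_hh * f_cc - chi_hc**2              # det = 0 defines the spinodal
    return f_hh > 0 and det > 0

chi_aa, chi_bb, chi_ab = 2.5, 7.5, 4.0         # illustrative values with chi_aa < chi_bb
chi_cc, chi_hc = copolymer_chis(chi_aa, chi_bb, chi_ab, alpha=0.5)
print(chi_cc, chi_hc)
print(locally_stable(0.5, 0.5, N_h=20, N_c=20,
                     chi_hh=chi_aa, chi_cc=chi_cc, chi_hc=chi_hc))
```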
In Fig. <ref>, we show phase diagrams obtained from the free energy (<ref>) with Eqs. (<ref>) and (<ref>) for a fixed α. Note that α=0 reduces to a homopolymer solution (with only type a beads), and α=1 corresponds to the blend of homopolymers A and B analyzed in earlier sections. Here we show the cases α=0.3 and 0.5. As expected, the region of phase separation enlarges with the fraction α. In addition, the results depend on the relative stiffness of beads a and b.
As shown, the system is more prone to phase separation when the matrix polymer is the softer one (χ_aa < χ_bb), reflecting the asymmetry in the phase diagram of homopolymer mixtures (Sec. <ref>).
To check the validity of the free energy prediction, we again performed numerical simulations using Gaussian potentials to represent bead-bead soft repulsions. In Figs. <ref> and <ref>, we compare the theoretical phase diagram in Fig. <ref> with simulation results, where the repulsion strengths for Gaussian potentials are set to be ϵ_xy = χ_xy/2.5 between beads x and y as determined from the result of homopolymer mixtures. As shown, the agreement is rather satisfactory, demonstrating that the overall trend of phase separation is well captured by the proposed free energy. In Figs. <ref> and <ref>, we also show the spatial profiles of the monomer concentration c_h of homopolymer together with the corresponding typical snapshots.
§ DISCUSSIONS AND SUMMARY
Numerical simulations of large-scale chromatin organization in the cellular nucleus often adopt highly coarse-grained models, in which the chromatin polymer is represented as a succession of soft beads <cit.>. Unlike a model that employs nucleosomes as the monomers of the chromatin polymer, each bead here represents a substantial number of nucleosomes and is thus regarded as a mesoscopic entity, allowing mutual overlaps with an entropic penalty. It has been shown that the effective interaction between such mesoscopic segments is soft and repulsive, the qualitative features of which are well approximated by the Gaussian potential <cit.>.
We have considered binary mixtures of such soft repulsive polymers and investigated how the imbalance in repulsive interactions between different species leads to phase separation.
After summarizing universal aspects of the phase diagram based on the invariance of the free energy under changes in parameter values, we have extended the theory to mixtures including random copolymers, which may have implications for chromatin phase separation.
As discussed in Sec. <ref>, the random copolymer model is inspired by epigenetic modification of chromatin. This modification is performed and maintained by enzymes and thus involves energy-consuming nonequilibrium processes. In this sense, our description based on an equilibrium framework should be considered a useful effective description for elucidating the impact of phase separation on chromatin organization. The same remark applies to many current biophysical models of chromatin, not only of its structural organization but also of its dynamics. Yet, there are several other works that emphasize possible impacts of various nonequilibrium effects on chromatin <cit.>.
Perhaps some of these effects associated with nonequilibrium activities could be described by effective equilibrium models. Such a strategy may well work for understanding some aspects of the problem, but may fail to capture others. In our opinion, it remains to be seen how and when nonequilibrium factors are critically important in chromatin biophysics.
The same comment would apply to topological constraints, another factor presumably important in chromatin, but not explicitly included in our description <cit.>.
In this regard, it is interesting to note that, as discussed in <cit.>, physics similar to that described in the present paper may be important in blends of non-concatenated ring polymers, where the soft repulsion arises from the so-called topological volume due to topological constraints <cit.>.
As possible extensions of our work, we first note that our theory deals with macro-phase separation and hence does not capture the possible appearance of mesophases. However, the occurrence of “micro-phase separation" is naturally expected in copolymer systems, and its elucidation should provide further insight into the problem of chromatin organization in nuclei.
Secondly, although we have only analyzed bulk properties based on mean-field theory, we expect that the effects of correlations and interfacial properties at the phase boundary and near a confining wall could be analyzed by following the approach outlined in ref. <cit.>. It would be interesting to see how such an analysis compares to the chromatin spatial profile, e.g., near the nuclear membrane.
Finally, we point out that there are several studies on compressibility effects in polymer solutions, which become evident, for instance, in pressure-induced phase separation <cit.>. Here, interesting phenomena such as acousto-spinodal decomposition have been predicted <cit.>. Although a comparison with these studies may be interesting, we note that the loss of the incompressibility condition in our description results from coarse-graining, i.e., from integrating out the solvent degrees of freedom. Therefore, to address kinetic effects, we need to properly take solvent effects into account.
§ APPENDIX
§.§ Simulation model
The system is a mixture of two types of linear homopolymers A and B, where A (B) polymer is made of a succession of N_a (N_b) beads of type a (b).
The potential energy in the system has two contributions. The first is the intrachain bonding potential
U^(b)(r)/k_BT = 1/2 k_b (r-r_0)^2
which acts on the bonded pairs to maintain the linear connectivity of the chain, where r and r_0 denote the separation between bead centers and the natural bond length, respectively. We set the spring constant k_b = 70.0/σ^2 to keep the bond length nearly constant at r_0 = σ, where σ is the unit of length (see below). The thermal energy, k_BT, is chosen as the unit energy in the simulation system. The second is the non-bonded interaction potential, which represents the soft repulsion between monomers. We employ the Gaussian core potential; the pair potential between one bead of type x and another bead of type y reads
U^(int)_xy(r)/k_BT = ϵ_xy exp(-r^2/σ_xy^2)
where ϵ_xy and σ_xy, respectively, measure the strength and the range of the repulsive interaction between x and y beads. For simplicity, we set the range of the repulsive interaction equal for all pair types, i.e., σ_aa = σ_bb = σ_ab = σ, and adopt this range (denoted as σ) as the unit length.
Note that this interaction acts on all bead pairs except for the nearest neighbors along the chain (bonded pairs).
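For reference, the two energy terms can be evaluated directly; the sketch below uses the spring constant and bond length quoted above (the value of ϵ_xy passed in the example is only illustrative).

```python
# The two potential-energy terms of the simulation model, in simulation units
# (energies in k_BT, lengths in sigma).
import numpy as np

SIGMA = 1.0                     # unit length
K_BOND = 70.0 / SIGMA**2        # spring constant of the bonding potential
R0 = SIGMA                      # natural bond length

def bond_energy(r):
    """Intrachain bonding potential U_b(r)/k_BT = (1/2) k_b (r - r_0)^2."""
    return 0.5 * K_BOND * (r - R0)**2

def gauss_energy(r, eps_xy, sigma_xy=SIGMA):
    """Gaussian core repulsion U_xy(r)/k_BT = eps_xy exp(-r^2/sigma_xy^2),
    applied to all non-bonded bead pairs."""
    return eps_xy * np.exp(-(r / sigma_xy)**2)

r = np.linspace(0.0, 3.0 * SIGMA, 7)
print(bond_energy(r))
print(gauss_energy(r, eps_xy=2.0))
```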
Molecular Dynamics (MD) simulations at fixed volume and constant temperature are performed using the LAMMPS package <cit.>. To integrate the equations of motion, we adopt the velocity Verlet algorithm, in which all beads are coupled to a Langevin thermostat with damping constant γ = τ_0^-1, where τ_0 = σ(m/k_BT)^1/2 and m is the bead mass (assumed to be the same for types a and b). This τ_0 is chosen as the unit time. The integration time step is set to 0.01 τ_0. (L_x, L_y, L_z) represents the size of the rectangular parallelepiped system box with periodic boundary conditions; the box is placed in -L_ω/2 < ω < L_ω/2, where ω represents the Cartesian axes x, y, and z. The box size is fixed at (L_x = 48σ, L_y = 16σ, L_z = 16σ), except for the simulation of the A-B homopolymer mixtures at (c_a, c_b) = (0.3, 0.3), for which (L_x = 120σ, L_y = 16σ, L_z = 16σ) is used.
To prepare the initial state, we start from a dilute solution, where equal numbers of homopolymers A and B are distributed in a large cubic box of size (200σ, 200σ, 200σ). We run the simulation at ϵ_aa = ϵ_bb = ϵ_ab = 2.0 (mixture of homopolymers) or 2.5 (mixture with random copolymers) for 2 × 10^7 steps while slowly compressing the system box to the final system size (L_x, L_y, L_z). In this way, we obtain the desired concentration of the polymer mixture, with the A and B polymers homogeneously mixed. For the mixture with copolymers, (1-α) N_c beads in each B-homopolymer in this initial configuration are randomly chosen and turned into type-a beads. We then set the interaction strengths to the appropriate values in the subsequent production run (see below).
Then, we reassign the monomer labels to adjust the initial spatial concentration profile, preparing the phase-separated initial state.
To perform various statistical analyses, we sampled microstates of the system every 1000 steps after the system reached equilibrium, i.e., from 2 × 10^6 steps (simulation runs in Sec. 4) and 3 × 10^6 steps (Sec. 5) after setting the interaction strengths to the appropriate values.
However, for the simulation runs in Sec. 4 at (c_a^(0), c_b^(0)) = (0.3, 0.3), the sampling starts at 9 × 10^6 steps, and for the case in Sec. 5 at α = 0.3, (ϵ_aa, ϵ_bb, ϵ_ab) = (2.5, 0.5, 1.5), (c_h^(0), c_c^(0)) = (0.5, 0.5), the sampling starts at 7 × 10^6 steps.
In all simulations, we collect 1001 independent samples of particle configurations in equilibrium. For the simulation at (c_a^(0), c_b^(0)) = (0.3, 0.3), as the only exception, production runs end at 18 × 10^6 steps, and particle coordinates are sampled every 1000 steps from 9 × 10^6 to 18 × 10^6 steps, during which 9001 independent samples of particle configurations in equilibrium are collected to improve the statistical accuracy in the vicinity of the critical point. We have confirmed that the physical properties of the simulation system do not change significantly when we start the simulation from the homogeneously mixed initial states.
§ CONFLICTS OF INTEREST
There are no conflicts to declare.
§ ACKNOWLEDGEMENTS
We thank M. Sasai and S. Fujishiro for discussions. This work is supported by JSPS KAKENHI (Grants No. JP23H00369, JP23H04290 and JP24K00602).
|
http://arxiv.org/abs/2409.02685v1 | 20240904131655 | RouterRetriever: Exploring the Benefits of Routing over Multiple Expert Embedding Models | ["Hyunji Lee", "Luca Soldaini", "Arman Cohan", "Minjoon Seo", "Kyle Lo"] | cs.IR | ["cs.IR", "cs.AI"] |
Pointwise and uniform bounds for functions of the Laplacian on non-compact symmetric spaces
September 9, 2024
===========================================================================================
§ ABSTRACT
Information retrieval methods often rely on a single embedding model trained on large, general-domain datasets like MSMARCO. While this approach can produce a retriever with reasonable overall performance, models trained on domain-specific data often yield better results within their respective domains. While prior work in information retrieval has tackled this through multi-task training, the topic of combining multiple domain-specific expert retrievers remains unexplored, despite its popularity in language model generation.
In this work, we introduce RouterRetriever, a retrieval model that leverages multiple domain-specific experts along with a routing mechanism to select the most appropriate expert for each query. It is lightweight and allows easy addition or removal of experts without additional training. Evaluation on the BEIR benchmark demonstrates that RouterRetriever outperforms both MSMARCO-trained (+2.1 absolute nDCG@10) and multi-task trained (+3.2) models. This is achieved by employing our routing mechanism, which surpasses other routing techniques (+1.8 on average) commonly used in language modeling.
Furthermore, the benefit generalizes well to other datasets, even in the absence of a specific expert on the dataset.
To our knowledge, RouterRetriever is the first work to demonstrate the advantages of using multiple domain-specific expert embedding models with effective routing over a single, general-purpose embedding model in retrieval tasks[Code in <https://github.com/amy-hyunji/RouterRetriever>].
§ INTRODUCTION
While a single embedding model trained on large-scale general-domain datasets like MSMARCO <cit.> often performs well, research shows that models trained on domain-specific datasets, even if smaller, can achieve superior results within those domains <cit.>.
Moreover, finetuning on MSMARCO after pretraining with contrastive learning can sometimes degrade performance on specific datasets <cit.>. To improve embedding models for domain-specific datasets, previous studies have explored approaches such as data construction <cit.> and domain adaptation methods <cit.>. However, less attention has been paid to leveraging multiple expert embedding models and routing among them to select the most suitable one during inference.
In this work, we introduce RouterRetriever, a retrieval model that leverages multiple domain-specific experts with a routing mechanism to select the most suitable expert for each instance. For each domain, we train gates (experts), and during inference, the model determines the most relevant expert by computing the average similarity between the query and a set of pilot embeddings representing each expert, selecting the expert with the highest similarity score.
RouterRetriever is lightweight, as it only requires training a parameter-efficient LoRA module <cit.> for each expert, resulting in a minimal increase in parameters. Additionally, RouterRetriever offers significant flexibility: unlike a single model that requires retraining when domains are added or removed, it simply adds or removes experts without the need for further training.
Evaluation on the BEIR benchmark <cit.> with various combinations of experts highlights the benefits of having multiple expert embedding models with a routing mechanism compared to using a single embedding model.
When the total amount of training data is kept constant, RouterRetriever consisting only of domain-specific experts, without an MSMARCO expert, outperforms both a model trained on the same datasets in a multi-task manner and a model trained on MSMARCO.
Also, adding domain-specific experts tends to improve performance even when an expert trained on a large-scale general-domain dataset like MSMARCO is already present, suggesting that, despite the capabilities of a general-domain expert, domain-specific experts provide additional benefits, underscoring their importance.
Moreover, RouterRetriever consistently improves performance as new experts are added, whereas multi-task training tends to show performance degradation once a certain number of domains is included. This indicates the advantage of having separate experts for each domain and using a routing mechanism to select among them. Notably, the benefits of RouterRetriever generalize not only to datasets that have corresponding experts but also to additional datasets without specific experts.
We further explore the factors behind these performance benefits.
First, RouterRetriever consistently shows improved performance with the addition of more experts (gates), suggesting that broader domain coverage by experts enhances retrieval accuracy. This trend holds even in an oracle setting, where the gate that maximizes performance is always selected. Notably, adding a new expert for a different domain yields greater performance gains than adding additional experts within the same domain.
Second, we observe that parametric knowledge influences embedding extraction. This observation supports the idea that training with domain-specific knowledge improves the quality of embedding extraction for that domain.
Lastly, the performance difference between an instance-level oracle (which routes each instance to its best expert) and a dataset-level oracle (which routes queries to the expert with the highest average performance for the dataset) suggests that queries may benefit from knowledge of other domains, supporting the effectiveness of our routing technique.
Our results point to potential research opportunities in improving routing techniques among multiple expert retrievers, a direction that leads to the development of a retriever system that performs well across both general and domain-specific datasets.
§ RELATED WORKS
Domain Specific Retriever
There exists substantial research on retrieval models that aim to improve performance on domain-specific tasks.
One approach focuses on dataset augmentation. As domain-specific training datasets are often unavailable and can be costly to construct, researchers have developed methods that either train models in an unsupervised manner <cit.> or fine-tune models on pseudo-queries generated for domain-specific datasets <cit.>.
Another approach is developing domain-specific embeddings. A common approach is training in a multi-task manner over domain-specific datasets <cit.>.
Recent works have aimed to improve domain-specific retrievers by developing instruction-following retrieval models <cit.>; instruction contains such domain knowledge.
Another example is <cit.> which trains a soft token for domain-specific knowledge.
While these methods also aim to extract good representative embeddings for the input text, these methods rely on a single embedding model and produce domain-specific embeddings by additionally including domain-specific knowledge (e.g., appended as instructions) to the input.
RouterRetriever differs from these prior methods by employing multiple embedding models: rather than providing the domain knowledge in the input, it is added to the model as parametric knowledge to produce domain-representative embeddings.
Routing Techniques
Various works have focused on developing domain-specific experts and routing mechanisms to improve general performance in generation tasks.
One approach simultaneously trains experts (gates) and the routing mechanism <cit.>.
Another line of work includes post-hoc techniques that do not require additional training for routing.
Some approaches use the model itself as the knowledge source by training it on domain-specific knowledge <cit.>, incorporate domain-specific knowledge in the token space <cit.>, or select the most relevant source from a sampled training dataset of each domain <cit.>.
Routing techniques have also been investigated for improving generation quality in retrieval-augmented generation tasks; <cit.> explores routing to decide whether to utilize external knowledge and <cit.> focuses on routing to choose among different retrieval approaches.
However, there has been less emphasis on applying these techniques to information retrieval tasks. In this work, we investigate the benefits of leveraging multiple domain-specific experts and routing mechanisms in information retrieval, contrasting this approach with the traditional methods of using a single embedding model trained on a general-domain dataset or multi-task training across various domains. Additionally, we find that simply adapting routing techniques from generation tasks to information retrieval does not yield high performance, underscoring the importance of developing routing techniques tailored specifically for information retrieval.
§ ROUTER RETRIEVER
Constructing Pilot Embedding Library
In this section, we introduce RouterRetriever, a retrieval model composed of a base retrieval model and multiple domain-specific experts (gates). As shown in Figure <ref>, for a given input query, 1 the most appropriate gate is selected using a routing mechanism. Then, 2 the query embedding is generated by passing the query through the selected gate alongside the base encoder.
In the offline time, we train the experts (gates) with domain-specific training datasets and construct a pilot embedding library. This library contains pairs of pilot embeddings for each domain along with the corresponding expert trained on that domain. Please note that this process is performed only once.
During inference (online time), when given an input query, a routing mechanism determines the appropriate expert. We calculate the similarity score between the input query embedding and the pilot embeddings in the pilot embedding library, and then choose the expert with the highest average similarity score.
We use Contriever <cit.> as the base encoder and train parameter-efficient LoRA <cit.> for each domain as the gate for that domain keeping the model lightweight.
For example, in the case of Figure <ref>, RouterRetriever includes a base encoder with three gates (experts): Gate A, Gate B, and Gate C, and Expert Encoder A is composed of the base encoder with Gate A (a LoRA module trained on a dataset from domain A) added. This approach allows for the flexible addition or removal of domain-specific gates, enabling various gate combinations without requiring further training for the routing mechanism.
Experts (Gates)
For each domain D_i, where i = 1, …, T and T is the total number of domains, we train a separate expert (gate) g_i using the corresponding domain dataset. After the training step, we have a total of T different gates, 𝒢 = { g_1, g_2, …, g_T }, with each gate g_i specialized for a specific domain.
Pilot Embedding Library
Given a domain-specific training dataset D_i = {x_1, …, x_k}, where x_j is an instance in D_i, we perform inference using all gates 𝒢 to identify which gate provides the most suitable representative embedding for each instance (lines 4-7 in Alg. <ref>). For each instance x_j, we select g_max, the gate that demonstrates the highest performance, defined as g_max(x_j) = argmax_g ∈𝒢 Performance(g, x_j). This process produces pairs (x_j, g_max) for all instances in the dataset D_i.
Next, we group these pairs by g_max, constructing T groups, one for each domain. Then for each group, we perform k-means clustering with cluster size 1 to get the pilot embedding (line 8-19 in Alg. <ref>).
Specifically, given the constructed pairs (x_j, g_max), we group together the instances that share the same g_max into Group_m, which contains the list of instances x_j whose best gate is g_m. This results in up to T groups, one for each domain (m=1,⋯, T).
If Group_m is not empty, we first extract the embeddings of all instances in the group with the base encoder (BaseModel).
We then apply k-means clustering <cit.> to these embeddings with a cluster size of one. The centroid of this cluster 𝐜_m is taken as the pilot embedding for the group. This results in one pilot embedding per group, yielding a maximum of T pilot embeddings for the training dataset D_i, each associated with a different gate that is the most suitable one for those instances. Note that when Group_m is empty, we do not extract a pilot embedding for it, so the number of pilot embeddings for the training dataset can be less than T.
By repeating this process across all domain-specific training datasets D_1, …, D_T, we obtain up to T pilot embeddings for each gate, one from each domain-specific training dataset (repeating lines 3-19 in Alg. <ref> for all training datasets D_1 ⋯ D_T). Consequently, the pilot embedding library contains a maximum of T^2 pilot embeddings, with each of the T domain-specific training datasets contributing up to T pilot embeddings.
For example, consider a scenario with three experts, each trained on one of the following datasets: SciFact, FiQA-2018, and HotpotQA.
To construct the pilot embedding library, we first perform inference on all training instances used to train the SciFact expert across all three experts to determine which expert produces the most suitable embedding (line 4-7 in Alg. <ref>).
The chosen gate can be any of the three experts.
Next, we group all training instances from the SciFact dataset according to the expert that achieved the highest performance for each instance, resulting in up to three groups: instances where the SciFact expert, the FiQA-2018 expert, or the HotpotQA expert performed best. For each group, we apply k-means clustering with k=1 to compute the centroid, which serves as the pilot embedding for that group. This pilot embedding is added to the pilot embedding library, with the corresponding expert as the key. For example, if the centroid is extracted from the group where the HotpotQA expert performed best, the HotpotQA expert is considered the most suitable expert, even though the instances are from the SciFact dataset. This process adds up to three pilot embeddings to the library, one for each expert (lines 8-19 in Alg. <ref>).
We repeat this process for all domains (the example above is for SciFact; we repeat it for FiQA-2018 and HotpotQA), ultimately creating a maximum of nine pilot embeddings in the library, with up to three pilot embeddings associated with each expert.
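The construction described above can be condensed into a short sketch. Here `experts` is a mapping from gate names to trained expert encoders, `performance(name, x)` is the per-instance retrieval score (e.g., nDCG@10) obtained with that gate, and `base_encode` is the frozen base encoder; these helper names are placeholders introduced for illustration and are not code from the paper.

```python
# Sketch of the pilot-embedding-library construction for one training domain D_i.
import numpy as np

def build_pilot_embeddings(instances, experts, performance, base_encode):
    # 1) label each instance with the gate that achieves the best score on it
    groups = {name: [] for name in experts}
    for x in instances:
        g_max = max(experts, key=lambda name: performance(name, x))
        groups[g_max].append(x)

    # 2) one pilot embedding per non-empty group: the centroid of the base-encoder
    #    embeddings (k-means with a single cluster reduces to the group mean)
    library = {}
    for name, members in groups.items():
        if not members:
            continue
        embs = np.stack([base_encode(x) for x in members])
        library[name] = embs.mean(axis=0)
    return library   # repeating this over all T domains gives up to T pilots per gate
```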
Routing Mechanism
When given an input query, we calculate the similarity between the query embedding extracted from the base encoder and the T^2 pilot embeddings in the pilot embedding library. We then average the similarity scores for T pilot embeddings associated with the same gate, resulting in a mean similarity score for each gate. The gate corresponding to the highest mean similarity score is selected as the most suitable embedding model.
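The routing step therefore reduces to a mean-similarity argmax over gates, as in the sketch below. Cosine similarity is assumed here for concreteness (the text does not commit to a particular similarity function), and the commented lines at the end illustrate the two forward passes, routing followed by expert encoding, discussed later in the Efficiency paragraph.

```python
# Sketch of the routing mechanism at inference time.
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def route(query_emb, pilot_library):
    """pilot_library: {gate_name: [pilot_emb_1, ...]} with up to T pilot embeddings
    per gate, one contributed by each training domain."""
    mean_scores = {gate: np.mean([cosine(query_emb, p) for p in pilots])
                   for gate, pilots in pilot_library.items()}
    return max(mean_scores, key=mean_scores.get)

# Two forward passes overall:
#   gate = route(base_encode(query), pilot_library)       # pass 1: routing
#   final_query_emb = expert_encoders[gate](query)        # pass 2: selected expert
```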
§ EXPERIMENTAL SETUP
Baselines
We compare the performance of RouterRetriever against a model trained on the same datasets in a multi-task manner (Multi-Task) and a model trained on the large-scale general-domain dataset MSMARCO (MSMARCO-Trained). Additionally, following previous works <cit.>, we evaluate performance using two oracle settings: Best Individual and Oracle. The Best Individual setting is a dataset-level oracle that routes all queries in a dataset to the expert with the highest average performance for that dataset, while the Oracle setting is an instance-level oracle that routes each individual instance to its best-performing expert.
We also conduct experiments with various other routing techniques commonly used in language modeling tasks; ExpertClassifierRouter <cit.>, ClassificationHeadRouter <cit.>, and DatasetRouter <cit.>.
ExpertClassifierRouter employs a binary classifier for each gate to calculate the probability of that gate being selected. The gate with the highest probability is chosen for the final selection.
ClassificationHeadRouter uses a single classifier layer to determine the appropriate expert for each instance.
DatasetRouter is the most similar to RouterRetriever, as it selects the gate by retrieving the instance with the highest similarity score. However, there are two key differences: RouterRetriever uses the predicted label, whereas DatasetRouter relies on the original dataset label. Also, RouterRetriever incorporates a clustering step to group instances, while DatasetRouter randomly samples 100 instances from the training dataset.
Further details of the baselines and training methods for each are provided in the supplementary materials.
Dataset
We used datasets in BEIR benchmark <cit.>, which includes 14 datasets across 6 domains: Bio-Medical, Wikipedia, Finance, Misc., Quora, and Scientific[Details of datasets and domains in supplementary].
To train domain-specific gates, we utilize the training sets provided by BEIR. Due to the limited number of datasets with available training sets, we also employed generated queries provided by BEIR[<https://huggingface.co/BeIR>].
The models were evaluated using the test sets. We categorize the datasets in the Misc. domain as separate general domains, Wikipedia as a general domain, and Bio-Medical, Finance, Quora, and Scientific as domain-specific datasets based on how broadly each instance is distributed. As illustrated in Figure <ref>, which shows the embeddings extracted from the pre-trained Contriever model (our base model), datasets in the Misc. domain are often widely dispersed even within the same domain. Although the Wikipedia datasets are generally close to others within the same domain, they also exhibit a broad spread. In contrast, datasets from the Bio-Medical, Finance, Quora, and Scientific domains tend to be more compact and closely clustered.
Hyperparameters
We use the pre-trained Contriever <cit.> as our base encoder and train gates (LoRA) according to the settings in <cit.>, with a rank of 8, an alpha of 32 per gate, thereby training approximately 0.5% of the parameters (about 1M parameters) per gate.
For training, we adopt the few-shot hyperparameters from <cit.>: a learning rate of 1e-4, a batch size of 256 with in-batch negatives, and a maximum of 500 epochs with early stopping.
Gates are applied only to the query encoder, keeping the context encoder frozen, as our focus is on understanding the impact of routing by query instances, thereby eliminating the influence of routing on the context encoder. We also include the results of applying gates to the context encoder in the supplementary materials.
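As a rough illustration of how such a gate could be instantiated with the `peft` library using the rank and alpha quoted above, see the sketch below. The choice of `target_modules` is an assumption made for this sketch (the paper does not list the targeted layers), so the exact trainable-parameter count will differ from it.

```python
# Minimal sketch of a per-domain LoRA gate on the Contriever query encoder.
from transformers import AutoModel
from peft import LoraConfig, get_peft_model

query_encoder = AutoModel.from_pretrained("facebook/contriever")
lora_cfg = LoraConfig(r=8, lora_alpha=32,
                      target_modules=["query", "value"])  # assumed attention projections
gate = get_peft_model(query_encoder, lora_cfg)
gate.print_trainable_parameters()   # small fraction of the full model; the exact count
                                    # depends on which modules are targeted
```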
§ EXPERIMENTAL RESULTS & DISCUSSIONS
§.§ Overall Performance
Table <ref> shows the performance of RouterRetriever compared to baseline models using seven domain-specific gates (AR, MS, HO, NF, SF, QU, and FI)[All gates, except for MSMARCO, are selected to have the smallest training dataset from each domain to ensure that the total amount of training data is equal to that of MSMARCO when excluding the MS gate.].
RouterRetriever outperforms the MSMARCO-trained model, indicating that even with a large-scale general-domain training dataset, incorporating additional domain-specific gates further enhances performance.
Also, when the training data are held fixed, RouterRetriever consistently shows higher performance than the Multi-Task model, which is trained on the same training datasets.
Moreover, RouterRetriever (w/o MS expert), which excludes the MSMARCO gate but maintains the same total number of training examples as the MSMARCO-trained model, still achieves superior performance. These results underscore the importance of having separate embedding models (gates) for each domain and dynamically selecting the most appropriate gate for each query rather than relying on a single model to handle multiple domains.
For additional results with different combinations of experts, please refer to the supplementary material.
§.§ Affect of Dataset Size when Training Experts
Figure <ref> shows the relationship between performance (y-axis) and the number of training samples (x-axis) across various datasets.
For in-domain evaluation datasets, performance generally improves as the number of training samples increases. However, in out-of-domain evaluation datasets, simply increasing the number of training samples does not necessarily lead to better performance.
When the same number of training samples is used, for in-domain cases, experts consistently achieve the highest performance across all evaluation datasets.
Interestingly, for out-of-domain cases, experts perform better when trained on general domains (e.g., Arguana and MSMARCO) compared to domain-specific experts (e.g., SciFact and NFcorpus). We attribute this to the broader coverage and stability of general-domain datasets, as illustrated in Figure <ref>.
These results suggest that while a larger training dataset is generally beneficial for expert in-domain performance, the coverage of the training dataset has a more significant impact on out-of-domain performance.
§.§ Impact of Number of Gates
Figure <ref> shows that adding gates (x-axis) consistently improves the performance of RouterRetriever (y-axis). Notably, RouterRetriever outperforms the MSMARCO-trained model even with just three gates, indicating that, despite not having as diverse or large a training dataset as MSMARCO, the advantage of having multiple embedding models and the ability to select the most suitable one leads to better performance. RouterRetriever also shows a small gap with the Best Individual performance, which is the in-domain performance of each expert (the dataset-wise oracle performance).
The performance in multi-task training tends to fluctuate as the number of domains (gates) increases. We hypothesize that with a large number of domains, the model struggles to find the optimal embedding for general cases due to the high variance across training datasets.
Figure <ref> illustrates the performance when we use 7 gates and increase the number of experts that the model can choose from, selecting the one with the maximum performance for each instance. Oracle represents the performance when the model can route through all 7 gates and choose the best-performing one instance-wise. As the number of gates increases, performance consistently improves. Notably, the rate of improvement is higher when adding gates initially, and as the number of gates grows, the rate of increase diminishes, regardless of the order in which experts are added. We believe this tendency arises because the routing technique tends to be more distracted as more gates are added. Nonetheless, the consistent improvement with additional gates highlights the potential for further enhancement with better routing techniques, emphasizing the importance of investigating these techniques across various expert retrievers. We randomly varied the order and combination of gates in the figure but observed that the trend remained consistent. Details are provided in the supplementary materials.
§.§ Maximum Performing Gate Rates and Analysis of Gate Selection
Figure <ref> shows the rate at which each gate achieves the highest performance across different evaluation datasets. For general-domain datasets (AR, MS, HO), the best-performing gate is often distributed across multiple experts. However, for domain-specific datasets (SF, NF, FI, QU), the best performance is typically achieved by the gate trained specifically on that domain.
This indicates that while gates generally perform well on general-domain datasets, having a domain-specific expert model is essential for achieving high performance in specialized areas.
Figure <ref> shows the rate at which gates are selected by our routing technique for each evaluation dataset. The MS gate is often chosen, likely because, as shown in Figure <ref>, MSMARCO instances are broadly distributed, leading to more generalized pilot embeddings. Since the MS gate performs well across all datasets (as seen in Figure <ref>), this selection seems reasonable. For domain-specific datasets like SF and FI, the routing strongly favors the gate trained on the respective dataset, which we assume is likely because these datasets cluster closely together in Figure <ref>. We add a detailed error analysis of the routing technique in the supplementary.
§.§ Impact of Expert Combinations
We experiment with various combinations of experts to assess their impact on performance. Our findings suggest that broad coverage across domains is critical. Within a single domain, the specific expert chosen does not significantly affect performance as long as it is trained with a sufficient amount of training dataset. Adding an expert from a new domain tends to significantly improve performance while adding additional experts to a domain that already has an expert doesn't yield as much improvement.
Adding domain-specific gates like SciDocs in the Science domain or TREC-COVID in the Bio-Medical domain improves performance for those datasets, with SciDocs increasing from 44.3 to 56.2 and TREC-COVID from 15.1 to 16.1. However, the overall average performance across all datasets remains relatively stable. We also observed that performance tends to improve when using experts trained on larger datasets, consistent with our earlier observation in Figure <ref> that expert performance generally increases with the size of the training dataset. Detailed results are provided in the supplementary materials.
§.§ Various Routing Techniques
We experiment with various routing techniques commonly used in language modeling and compare them with our proposed routing mechanism. Results in Table <ref> show that the routing technique used in RouterRetriever consistently achieves the highest performance.
In fact, the ClassificationHeadRouter and ExpertClassifierRouter approaches tend to underperform compared to using a single expert trained solely on MSMARCO (MSMARCO-Trained). DatasetRouter, which is the closest to RouterRetriever, tends to show higher performance than MSMARCO-Trained but also consistently shows lower performance than RouterRetriever. These results suggest that these routing techniques are not well-suited for information retrieval and may even degrade performance compared to using a single expert. We hypothesize that the differences in the effectiveness of routing techniques between language modeling and information retrieval can be explained from two perspectives. First, in language modeling, experts are often trained to handle distinct tasks, making them easier to differentiate. In contrast, information retrieval involves domain classification, which may be more challenging. Second, in language modeling, routing decisions are often made at the token level, which allows for greater flexibility and reduces the impact of any single choice. However, in information retrieval, where a single representative embedding is required, the choice of expert is made only once per instance, making the process more vulnerable to the routing technique used and thus requiring greater precision.
§.§ General Performance of over various datasets
Table <ref> demonstrates that RouterRetriever consistently outperforms other baselines that rely on a single general-purpose embedding model[Detailed numbers of the table are in the Supplementary.]. This is evident not only on datasets that have their own experts (w/ Experts) but also across various other datasets that do not (w/o Experts).
These findings suggest that the benefits of having multiple experts and routing across them extend well beyond the datasets for which specific experts were trained.
§.§ Where does the benefit come from?
We hypothesize that the benefit of having domain-specific gates comes from the model's tendency to be influenced by its parametric knowledge; models trained on domain-specific datasets are likely to have domain-specific knowledge embedded in their parametric space, enabling them to produce more meaningful embeddings related to those domains.
To test the hypothesis, we conduct experiments with the dataset from <cit.>, which contains, for each instance, both original NQ <cit.> contexts that align with the retriever's parametric knowledge and conflicting contexts. We experiment with RepLlama <cit.> and E5-Mistral <cit.>[We used these two models, rather than Contriever, to ensure that the NQ contexts from <cit.> align with their parametric knowledge] and find that, surprisingly, in all cases the retrievers prefer contexts that align with their parametric knowledge; they consistently retrieve the original NQ contexts over conflicting contexts[We exclude excessively long contexts (contexts with table information) as they tend to introduce bias <cit.>].
This finding supports our hypothesis that embedding models are influenced by parametric knowledge when extracting embeddings; thus, models with knowledge of domain-specific datasets are better able to extract meaningful embeddings relevant to that domain knowledge. Further details are in the supplementary.
§.§ Efficiency
RouterRetriever achieves high efficiency by using parameter-efficient LoRA gates, which account for only about 0.5% of the parameters per gate. This makes the addition of new gates relatively insignificant in terms of parameter count.
In terms of training, it uses the same amount of training data as a multi-task approach. However, unlike multi-task training, which requires retraining the entire model when adding, removing, or changing domains, RouterRetriever allows for these modifications without additional training, as our routing technique is training-free.
However, during inference, computing the query embedding involves two forward passes: the first to identify the appropriate gate (routing), and the second to generate the final query embedding. Improving the computation efficiency of this routing technique is a direction for future work.
§ CONCLUSION
In this paper, we present RouterRetriever, a retrieval model that integrates multiple domain-specific experts with a routing mechanism to extract the most suitable embedding for each query. This approach is both lightweight and flexible, allowing for the addition or removal of experts without additional training. Our experiments demonstrate that it consistently outperforms single embedding models, showcasing the advantages of integrating domain-specific experts. Additionally, it surpasses various widely used routing techniques in language modeling, emphasizing the significance of effective routing for information retrieval tasks. These results highlight the crucial role of domain-specific experts in improving retrieval performance and suggest that combining them with efficient routing techniques can significantly enhance results, potentially approaching oracle performance.
§ ACKNOWLEDGMENTS
We thank Nandan Thakur, Orion Weller, Jiyeon Kim, and Hanseok Oh for helpful discussions and constructive feedback.
§ EXPERIMENTAL SETUP
§.§ Baselines
MSMARCO
This baseline uses a single MSMARCO gate, which is trained on a large-scale, general-domain dataset without any routing techniques applied.
Multi-Task
In this approach, we train a single embedding model on all datasets simultaneously in a multi-task manner. We keep the number of training instances the same for each dataset, downsampling each to match the smallest one.
Best Individual
This represents the oracle performance when selecting the single best-performing gate for each dataset. For example, if the SciFact gate shows the highest overall performance on the SciDocs evaluation dataset compared to other gates, the performance of the SciFact gate is recorded as the best individual performance for SciDocs.
Oracle
This is the oracle performance when selecting the best-performing gate for each individual instance. For example, within the SciDocs dataset, certain instances might achieve the highest performance with the SciFact gate, while others might perform better with the MSMARCO gate. This baseline measures the performance when, for each instance, the gate that yields the best result is selected.
ExpertClassifierRouter
This routing technique, inspired from <cit.>, uses a binary classifier for each gate. For each instance, the classifier calculates the probability of selecting or not selecting a specific gate. The gate with the highest probability of being selected is chosen.
To construct the training dataset, we use the predicted label (g_max) from the Pilot Embedding Library. For each (x_i, g_max) pair, we randomly sample instances where the maximum gate differs, which are used to train the "not choosing the gate" label. The dataset is balanced across labels, with the following number of training instances for each dataset: AR (16,108), FI (1070), SF (1,414), NF (892), HO (4,618), QU (4,326), and MS (4,252). Please note that the training datasets only consist of instances where only a single gate shows maximum performance.
We then train a binary classifier for each gate to predict whether an instance is likely to achieve the highest performance through that gate.
ClassificationHeadRouter
This routing technique, inspired by <cit.>, uses a classification head where the number of labels corresponds to the number of gates. The gate with the highest predicted probability is selected as the one likely to yield the best performance. To ensure balance, we equalize the number of training instances for each label, matching the dataset with the fewest instances (NFcorpus with 892 instances; the other numbers are given in the ExpertClassifierRouter paragraph). As a result, the total number of training instances is 6,244.
DatasetRouter
This routing technique, inspired from <cit.>, is the closest baseline to . It samples 100 training instances from each dataset and when given a query, it retrieves the most relevant instances from these samples. The gate trained on the dataset from which the sample originated is then used.
The key differences between DatasetRouter and RouterRetriever are as follows. (1) RouterRetriever uses the predicted label to map an instance to a gate, while DatasetRouter relies on the original dataset label. For example, if a training instance from MSMARCO performs best with the SciFact gate, RouterRetriever will select the SciFact gate for a similar query, whereas DatasetRouter will select the MSMARCO gate. (2) RouterRetriever incorporates a clustering step, grouping similar instances together and using centroid embeddings, rather than treating each instance individually.
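For contrast, a compressed sketch of this baseline's selection rule is given below; the array names are placeholders, and dot-product similarity is assumed for illustration.

```python
# Sketch of the DatasetRouter baseline: the nearest sampled training instance decides the gate.
import numpy as np

def dataset_router(query_emb, sampled_embs, sampled_labels):
    """sampled_embs: (N, d) base-encoder embeddings of the ~100 instances sampled per
    dataset; sampled_labels: the originating dataset (= gate) of each sampled instance."""
    sims = sampled_embs @ query_emb          # dot-product similarity to every sample
    return sampled_labels[int(np.argmax(sims))]
```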
§.§ Datasets
Stats of Training Dataset
Table <ref> presents the statistics and details of the datasets in the BEIR benchmark, which we used for training and evaluation. We sampled datasets from Quora to ensure that the number of training instances for AR, HO, NF, SF, FI, and QU matches that of MS.
Examples of Oracle
Table <ref> shows examples of questions where a gate from a different dataset outperforms the gate trained on the dataset to which the question belongs. We observe that questions related to biology often achieve higher performance with the NFCorpus gate, while those involving scientific knowledge tend to favor the SciFact gate, and questions requiring arguments perform better with the Arguana gate. This pattern suggests that, even within a single dataset, some instances may be more closely aligned with other datasets, likely because the datasets were not labeled or constructed to avoid overlap with existing datasets.
§.§ Hyperparameters
We trained the Contriever model <cit.> using an asymmetric architecture, where the query encoder encodes the query and the context encoder encodes the context. In our experiments, we fine-tuned only the LoRA (Low-Rank Adaptation) parameters of the query encoder, training approximately 1 million parameters per gate (which accounts for 0.5% of the total model parameters).
For evaluation, we used the NDCG@10 metric, consistent with previous works <cit.>, which measures the ranking quality of the top 10 retrieved documents. All results were calculated using the official BEIR evaluation code.
The experiments were conducted on 8 or fewer A6000 GPUs (each with 40GB of memory). We utilized checkpoints from all pretrained models available on Huggingface[<https://huggingface.co/facebook/contriever>]. The experiments were performed over various combinations of gates, with all random seeds set to 10.
When unfreeze context encoder
In our main experiments, we focus on scenarios where the context encoder is frozen and only the LoRA of the query encoder is trainable, to isolate the impact of routing on the query encoder alone. However, we observe that the overall performance trend remains similar even when the context encoder is not frozen, with the unfrozen models generally achieving higher performance. Table <ref> presents the results when the context encoder is frozen. In these experiments, RouterRetriever consistently outperforms the MSMARCO-trained model and the Multi-Task model.
§ EXPERIMENTAL RESULTS & DISCUSSIONS
§.§ Performance of each gates
To analyze the performance trends of each gate, we evaluate them individually without applying any routing techniques in Table <ref>.
Performance is generally highest when the evaluation dataset matches the training dataset of the gate. Additionally, the performance gap between matching and non-matching datasets is larger for domain-specific datasets (NF, TR, SD, SF, QU, FI). In contrast, gates trained on general-domain datasets (AR, MS, HO) tend to perform well across a broader range of datasets.
§.§ Affect of Number of Pilot Embeddings
We experiment with how the number of pilot embeddings affects performance. In Figure <ref>, we observe that performance tends to degrade as the number of pilot embeddings increases. We hypothesize that this decline is due to the increased number of pilot embeddings becoming distracting, leading to less effective routing decisions.
§.§ Impact of Number of Gates
To investigate the impact of the number of gates, we randomly shuffle the gate order and examine how adding gates affects performance.
The order of gates added in Figure 4 and Figure 5 is AR, FI, SF, NF, HO, QU, and MS.
We tried various other combinations and found that the findings are stable (Figure <ref>): (1) performance tends to increase as more gates are added, and (2) the rate of improvement is higher when gates are added initially and diminishes as the number of gates grows.
§.§ Detailed numbers by gates
In this section, we report detailed performance numbers for different combinations of gates.
Table <ref> shows performance with AR, NF, SF, FI as gates.
Table <ref> shows performance with AR, HO, NF, SF, FI as gates.
Table <ref> shows performance with AR, HO, NF, SF, QU, FI as gates.
Table <ref> shows performance with AR, MS, HO, NF, SF, QU, FI as gates.
Figure 4 shows that, with only three gates, RouterRetriever already outperforms the MSMARCO-trained model; accordingly, in all results, RouterRetriever outperforms the MSMARCO-trained and multi-task baselines.
§.§ Routing Mechanism Error Analysis
Figure 7 illustrates the rate at which the router selects each gate, while Figure 6 shows the rate at which each gate tends to deliver high performance for the dataset. The discrepancy between these two heatmaps highlights the gap between RouterRetriever and the oracle performance.
For Arguana, the maximum gate distribution is evenly spread, and the routing tends to follow this distribution closely.
For Quora, while the maximum gate rate is high overall, the routing often favors the HotpotQA gate in many cases.
For MSMARCO, the gate trained on MSMARCO generally shows high performance, but the routing technique tends to distribute selections across different gates.
For HotpotQA, selecting the HotpotQA gate most frequently results in the highest performance, with MSMARCO being the next best option. The routing technique tends to reflect this pattern.
For SciFact, choosing the SciFact gate is crucial in both cases.
For NFCorpus, selecting the NFCorpus gate is important, yet the routing technique often opts for the Arguana gate in many instances.
For FiQA-2018, the best performance is achieved by selecting the FiQA-2018 gate, and the routing technique successfully identifies this gate most of the time.
We specifically investigated why NFCorpus often fails to select the NFCorpus gate and instead tends to choose the Arguana gate. Upon examining the representative embeddings for Arguana, we found that many of them are confused with Arguana embeddings that were extracted from the NFCorpus dataset. These instances originally belong to NFCorpus but show the highest performance with the Arguana gate, leading to their labeling as Arguana. This suggests that instead of completely removing information about the original dataset, incorporating a weighting factor between the two could further improve performance.
§.§ Generalization to other datasets
We observe that RouterRetriever demonstrates stable performance not only on datasets with corresponding gates but also on those without them. The performance with different numbers of gates is shown in the following tables: Table <ref> (4 gates), Table <ref> (5 gates), Table <ref> (6 gates), Table <ref> (7 gates), and Tables <ref> and <ref> (8 gates).
When using a similar total number of training examples (Table <ref>), RouterRetriever and the MSMARCO-trained model exhibit comparable generalization performance (both at 31.6). However, RouterRetriever achieves higher performance on datasets that have corresponding gates (47.5 for MSMARCO-only vs. 49.3 for RouterRetriever). As more gates are added, both the generalization ability and the performance on datasets with corresponding gates tend to improve (Figure <ref>).
§.§ Where does the benefit come from?
We hypothesize that the advantage of using multiple expert embedding models with routing, rather than a single embedding model, stems from the influence of the training dataset on a model's parametric knowledge, which in turn affects the extracted embeddings. To test this hypothesis, we experimented to determine whether a model tends to prefer contexts that align with its parametric knowledge over those that conflict with it.
We used a dataset released by <cit.>, which includes instances where each context either aligns with or conflicts with the model's parametric knowledge. For each instance with 5-6 contexts, we evaluated which context the model chose based on the highest similarity. Interestingly, in all instances[We excluded excessively long contexts (tables), as they tend to introduce bias <cit.>.], the models consistently preferred the context that aligned with their parametric knowledge. This suggests that the internal knowledge of the model influences how embeddings are extracted, and that having domain knowledge embedded in the model's parameters enhances performance.
|
http://arxiv.org/abs/2409.03082v1 | 20240904211200 | Generalised doubles and simple homotopy types of high dimensional manifolds | ["Csaba Nagy", "John Nicholson", "Mark Powell"] | math.GT | ["math.GT", "math.AT", "57N65, 57Q10 (Primary) 19J10 (Secondary)"] |
|
http://arxiv.org/abs/2409.02513v1 | 20240904082453 | SG-MIM: Structured Knowledge Guided Efficient Pre-training for Dense Prediction | ["Sumin Son", "Hyesong Choi", "Dongbo Min"] | cs.CV | ["cs.CV"] |
Nonequilibrium dynamics of coupled oscillators under the shear-velocity boundary condition
Hidetsugu Sakaguchi
September 9, 2024
==========================================================================================
§ ABSTRACT
Masked Image Modeling (MIM) techniques have redefined the landscape of computer vision, enabling pre-trained models to achieve exceptional performance across a broad spectrum of tasks. Despite their success, the full potential of MIM-based methods in dense prediction tasks, particularly in depth estimation, remains untapped. Existing MIM approaches primarily rely on single-image inputs, which makes it challenging to capture the crucial structured information, leading to suboptimal performance in tasks requiring fine-grained feature representation. To address these limitations, we propose SG-MIM, a novel Structured knowledge Guided Masked Image Modeling framework designed to enhance dense prediction tasks by utilizing structured knowledge alongside images. SG-MIM employs a lightweight relational guidance framework, allowing it to guide structured knowledge individually at the feature level rather than naively combining at the pixel level within the same architecture, as is common in traditional multi-modal pre-training methods. This approach enables the model to efficiently capture essential information while minimizing discrepancies between pre-training and downstream tasks. Furthermore, SG-MIM employs a selective masking strategy to incorporate structured knowledge, maximizing the synergy between general representation learning and structured knowledge-specific learning. Our method requires no additional annotations, making it a versatile and efficient solution for a wide range of applications. Our evaluations on the KITTI, NYU-v2, and ADE20k datasets demonstrate SG-MIM's superiority in monocular depth estimation and semantic segmentation.
§ INTRODUCTION
In the field of computer vision, pre-training with supervised classification on ImageNet <cit.> has long been the gold standard, consistently demonstrating its unmatched effectiveness across a broad spectrum of visual tasks, particularly in tasks related to semantic understanding, such as image classification <cit.>, semantic segmentation <cit.>, and object detection <cit.>.
Building on this foundation, self-supervised pre-training methods—most notably 'Masked Image Modeling' <cit.>, where the model learns to reconstruct randomly masked portions of an image—have become the leading approach, achieving superior performance across a range of downstream tasks.
The success of Masked Image Modeling (MIM) can be attributed significantly to the role of locality inductive bias <cit.>. Contrasted with supervised pre-training, MIM encourages models to aggregate adjacent pixels, thus increasing their ability to capture local features.
Yet, despite their impressive achievements, MIM models often fall short in generalizing effectively to dense prediction tasks such as monocular depth estimation <cit.> and semantic segmentation <cit.>.
This is primarily due to the inherent lack of spatially structured information, such as relational cues between pixels, leading to a deficiency in essential data that must be effectively transferred during pre-training for downstream tasks.
To address this issue, prior MIM models have investigated the integration of multiple modalities or additional images as input sources. These approaches typically employ architectures that naively combine an image with another modality or additional images, treating them as a unified input to the encoder, as illustrated in Figure <ref>(a). For instance, CroCo <cit.> utilizes two images from different viewpoints of the same scene, while MultiMAE <cit.> integrates images with pseudo-depth and segmentation maps within the same architecture.
However, this method of naively merging an image with supplementary data introduces several challenges. (1) First, it creates a discrepancy between the pre-training phase and the fine-tuning phase. During pre-training, the encoder processes multiple inputs, while in fine-tuning, it manages only a single image. This discrepancy restricts the model's ability to effectively leverage the diverse information from additional images and modalities. (2) Furthermore, the model is vulnerable to noise introduced by the supplementary data. Predicted depth and segmentation maps are often employed as additional data, yet directly feeding this unrefined input into the encoder at the pixel level inevitably degrades performance. (3) Finally, naively merging an image with supplementary data increases the information load on the encoder, requiring longer training times. For example, MultiMAE <cit.> demands double the pre-training epochs—1600 compared to the 800 used by models like MAE <cit.> and SimMIM <cit.>.
Building on the aforementioned challenges, we propose a strategically designed architecture that efficiently leverages additional structured data. Our Structured knowledge Guided Masked Image Modeling (SG-MIM) introduces an innovative architecture where the encoder indirectly learns spatially structured information via a lightweight relational guidance framework. By utilizing an independent feature extraction branch, the proposed framework efficiently encodes structured knowledge, effectively bridging the gap between pre-training and downstream tasks. Moreover, unlike existing approaches <cit.> that naively merge inputs at the pixel level, the proposed architecture separately encodes structured information and guides the main image encoder with a feature fusion module at the feature level. This feature-level guidance enhances robustness to noise by filtering out irrelevant information, allowing the model to focus on meaningful patterns and achieve a more comprehensive contextual understanding.
In addition to utilizing a well-designed framework that seamlessly integrates additional structured knowledge with image input, we propose a semantic selective masking approach that introduces heterogeneous masking between different input signals. Our semantic selective masking approach strategically chooses specific patches for masking by considering the balance of learning difficulty. This balanced approach enhances the effectiveness of the relational guidance framework, leading to more robust and efficient feature learning.
Our approach serves as a general solution that operates without the need for additional annotations, offering adaptability and efficiency across a wide range of tasks. Moreover, it facilitates the generation of fine-grained, texture-rich features that substantially boost performance in dense prediction tasks, as highlighted in the analysis presented in Figure <ref>.
In experimental comparisons with other models, SG-MIM consistently demonstrated superior performance, particularly at lower epochs such as 100. Notably, our method achieved an RMSE of 2.04 on the KITTI validation dataset <cit.>, a δ_1 of 0.91 on the NYU-v2 validation dataset <cit.>—where δ_1 represents the percentage of predicted pixels where the ratio between the predicted and true depth is within a threshold of 1.25— and an mIoU of 47.59 on the ADE20K dataset <cit.>, demonstrating superior performance in dense prediction tasks across various backbone models and epochs compared to existing MIM models.
The contributions of our model can be summarized as follows:
* We propose an efficient independent relational guidance framework to address the framework issues of existing models, which often cause discrepancies between pre-training models and downstream tasks and are vulnerable to noise in different modalities.
* We experimentally demonstrate that using a selective guidance masking strategy during pre-training effectively transfers structured knowledge to the image encoder by strategically focusing on patches that best balance the learning difficulty.
* Our method is an off-the-shelf approach with general applicability, capable of integrating into any backbone model without requiring additional annotations. Furthermore, our performance has been validated through diverse experiments on monocular depth estimation and semantic segmentation tasks across various backbones.
§ RELATED WORK
§.§ Masked Image Modeling (MIM)
In the domain of computer vision, self-supervised learning has identified MIM <cit.> as playing a crucial role. Inspired by Masked Language Modeling from BERT <cit.>, MIM has demonstrated impressive performance in visual representation learning <cit.>. This approach involves learning visual representations by restoring pixels missing in images, a method that leverages the concept of learning through reconstruction. The success of MIM can be attributed to its ability to impart locality inductive bias <cit.> to the trained models, enabling the models to aggregate near pixels in the attention heads.
Currently, the MIM approach is exemplified by two main methodologies: MAE <cit.> and SimMIM <cit.>. MAE, utilizing ViT <cit.> as its backbone, operates by inputting only visual image tokens into the encoder and integrating masked tokens just before entering the decoder, where the reconstruction occurs. On the other hand, SimMIM <cit.>, which can use ViT <cit.> or Swin <cit.> as its backbone, introduces both visual image tokens and masked tokens into the encoder, initiating reconstruction from the encoder stage itself. Consequently, the decoder in SimMIM is designed as a lightweight prediction head, distinguishing its architecture from MAE. This diversity in approaches underscores the adaptability and potential of MIM in advancing the field of visual representation learning.
§.§ Variants of MIM
Building on the success of MIM, numerous variations of its structure have been proposed to further extend its capabilities.
Croco <cit.> adopts a cross-view completion strategy, taking as inputs two images of the same scene from different views. Only one input image undergoes masking, and then a siamese encoder <cit.> form is used to encode only the visible parts of the two images. Before entering the decoder, the masked tokens are combined with the encoded visible parts to reconstruct the masked tokens, facilitating learning from this integrated approach.
MultiMAE <cit.> utilizes methods for monocular depth estimation and semantic segmentation tasks to generate pseudo-depth and segmentation maps, which are then integrated with images as inputs. Distinct decoders for each modality are utilized to reconstruct the information, showcasing a comprehensive approach to multimodal visual representation learning. These variations on MIM illustrate the ongoing innovation in the field, aiming to exploit the full potential of self-supervised learning for enhancing visual understanding across a range of applications.
§ PRELIMINARY
Masked Image Modeling (MIM) is a cornerstone technique in self-supervised learning for computer vision, where the model learns to reconstruct randomly masked portions of an input image. This process helps the model acquire general visual representations that are useful across various downstream tasks, such as classification, segmentation, and object detection. The reconstruction loss, typically calculated as L1 or L2 loss between the reconstructed and original pixels, guides the learning process. The loss is formulated as:
L_rec = 1/N∑_i=1^N M_I(i) ·| I_p(i) - I(i) |
where N denotes the total number of masked pixels, I_p(i) represents the reconstructed pixel values, and I(i) denotes the original pixel values. The mask indicator M_I(i) equals 1 if the i-th pixel is masked and 0 otherwise. The encoder, trained through MIM, is then used in downstream tasks, ensuring that the learned features are adaptable to various applications beyond image reconstruction.
§ METHOD
In this section, we introduce the SG-MIM framework, detailing its network architecture and presenting Fourier analysis to show how it enhances fine-grained feature generation and improves performance in dense prediction tasks.
§.§ Overview
While the utilization of additional information during pre-training has been extensively studied, previous network architectures, as illustrated in Figure <ref> (a), have typically relied on naive pixel-level integration. In contrast, SG-MIM leverages structured knowledge <cit.> and adopts an independent network architecture like Figure <ref> (b), by incorporating a relational guidance framework that encodes structured information parallel to the traditional MIM architecture. The framework comprises key components: Selective Guidance Masking and Encoding, which strategically targets patches to adjust learning difficulty; the relational guidance framework, which independently encodes and fuses structured data; Prediction Head and Loss Function, which together optimize the model by combining image reconstruction and structured knowledge prediction to effectively balance general feature learning and structured information capture.
Selective Guidance Masking and Encoding
The input image x ∈ℝ^H × W × C_i is divided into patches x_p ∈ℝ^N × (P^2 · C_i). Similarly, the structured knowledge map is also segmented into patches s_p ∈ℝ^N × (P^2 · C_s). Here, N=HW/P^2 denotes the number of divided patches having a resolution of P× P. C_i=3 and C_s=1 represent channel size, respectively. These patches are then transformed into patch embeddings through their respective linear projections. The image patch embeddings follow the traditional MIM masking strategy, masking the majority of the patches (e.g., 60%).
Meanwhile, the structured knowledge patch embeddings are masked using a semantic selective guidance masking strategy, which ensures that there is no overlap with the masked regions of the input image. By selectively utilizing structured knowledge patches, it ensures that only visible image patches contribute to the estimation of structured details. Furthermore, it prevents the model from trying to infer structured information from invisible image patches, which could unnecessarily complicate the learning process. This approach, grounded in a semantic perspective, focuses on selecting patches that enhance the synergy between structured knowledge and general representation learning.
This masking strategy can be mathematically expressed as follows. Let M_I and M_S represent the masking matrix for the image and structured knowledge patch embeddings, respectively.
Both matrices are of dimension N × 1, consisting of elements in {0, 1}, where 1 indicates an invisible (masked) patch and 0 otherwise.
Our selective masking strategy ensures that no overlap occurs in the masking of the image and structured knowledge map, formalized as M_I,j + M_S,j = 1 each j.
Following this masking strategy, the visible image patch embeddings, along with learnable masked tokens, are input into the transformer encoder <cit.> to create an image latent representation I_F, while the visible structured knowledge patch embeddings are processed by the relational guidance framework to guide the model with structured knowledge. An ablation study in Table <ref> investigates the effects of different masking strategies.
Relational Guidance Framework
The relational guidance framework is a lightweight module designed to encode structured knowledge using MLP layers, specifically aligned with the hierarchical image encoder. By maintaining an independent encoding structure, this module effectively avoids discrepancies with downstream tasks and mitigates the increased learning burden on the encoder.
Our framework receives inputs from the structured knowledge patch embeddings and image latent representations, I_F. It can be divided into two main components: feature extraction comprising shallow MLP layers, which generates structured knowledge features S_F, and a feature fusion module that fuses S_F with the image latent representation I_F.
This shallow feature extraction demonstrates greater efficiency in terms of training complexity (refer to Table <ref>).
Given that structured knowledge contains simpler information compared to images, our method attempts to represent the structured knowledge using shallow MLP layers instead of the computational heavy Transformer encoder <cit.>. This approach mirrors the methodology adopted by PointNet <cit.>, which utilizes MLPs to derive point features from 3D point clouds, highlighting the efficiency of MLPs in processing 3D geometric data.
Also, the feature fusion module facilitates the learning of relationships between the two modalities, enabling the generation of a structured-guided image latent representation I_SF for the visible parts of the image. This is achieved with the help of patches corresponding to areas that are visible in the structured knowledge map (but invisible in the image). The feature fusion module can be implemented as a residual connection structure of a multi-head cross-attention layer with the image latent representation I_F (query) and the structured feature S_F (key and value), as shown in Figure <ref>.
Within a feature fusion module, the query, key, and value projections for each head i are defined as:
Q_i = W^Q_i I_F, K_i = W^K_i S_F, V_i = W^V_i S_F,
where W^Q_i, W^K_i, and W^V_i are learned weights. The multi-head cross-attention mechanism enriches the image features by integrating these projections:
I_SF = Concat(head_1, ..., head_h)W^O + I_F,
where head_i = attention(Q_i, K_i, V_i),
Here, I_SF represents the structured-guided image latent representation, enhanced through multi-head cross attention, combining the outputs from all heads.
Prediction Head and Loss Function
In our SG-MIM model, the image latent representation I_F, processed by the Transformer encoder <cit.>, is fed into a lightweight, one-layer prediction head similar to SimMIM <cit.>. The image reconstruction loss L_I = 1/N∑_i=1^N M_I(i) ·| I_p(i) - I(i) | is calculated using L1 loss between the reconstructed pixels I_p(i) and the target image pixels I(i), where N is the total number of masked pixels, and M_I(i) is derived from traditional MIM masking.
In parallel, the structured-guided latent representation is processed through a separate prediction head designed for handling structured information, resulting in the structured knowledge prediction loss L_S = 1/N∑_i=1^N M_S(i) ·| S_p(i) - S(i) |, which also uses L1 loss to compare predicted structured knowledge S_p(i) with target values S(i).
The total loss function combines these two losses, optimizing the model to learn both general and structured features effectively:
L = λ_I L_I + λ_S L_S,
where λ_I and λ_S balance the contributions of image reconstruction and structured knowledge prediction losses. In our experiments, both weights are set to 1, with an ablation study presented in Table <ref>.
§.§ Fourier Analysis of Feature Maps
We conducted a visualization analysis using Fourier analysis to compare the features produced by SG-MIM and SimMIM. Specifically, the ΔLog amplitude is calculated as the difference between the log amplitude at normalized frequency 0.0π (center) and at 1.0π (boundary). For better visualization, we only provide the half-diagonal components of the two-dimensional Fourier-transformed feature map. Figure <ref> shows that SG-MIM effectively captures high-frequency signals, which facilitates the generation of more detailed features with rich edges and textures. This capability is particularly beneficial for dense prediction tasks, where such fine-grained textural information is crucial for improved performance. The analysis was conducted on the KITTI dataset for depth estimation and the ADE20K dataset for semantic segmentation, demonstrating SG-MIM's superior ability to capture essential high-frequency details across different types of dense prediction tasks.
§.§ Implementation Details
In our pre-training phase, we conducted experiments leveraging Swin-Base <cit.>, Swinv2-Base <cit.>, and ViT-Base <cit.>.
The default input sizes for Swin Transformer and ViT are set to 192 × 192 and 224 × 224, respectively, with a uniform image masking ratio of 0.6 across all tests. The structured knowledge is generated using a DPT-Hybrid <cit.> trained on the OmniData <cit.>.
Training is conducted with a batch size of 1024 on 8 GPUs of NVIDIA RTX 6000 Ada.
Additional experiments and implementation details are available in the Supplementary material.
§ EXPERIMENTS
In this section, we conducted a series of experiments to compare the fine-tuning performance of our model against existing pre-training models <cit.> across a variety of tasks, including monocular depth estimation, semantic segmentation. The experimental setup is organized as follows: we begin with monocular depth estimation experiments, followed by semantic segmentation, and conclude with model efficiency and an ablation study.
§.§ Downstream Task: Monocular Depth Estimation
Data and Setup
For the monocular depth estimation experiments, we utilized the standard dataset splits for both the KITTI <cit.> and NYU-v2 <cit.> benchmarks.
For the KITTI dataset, inspired by GLPDepth <cit.>, we appended a simple depth estimation head consisting of deconvolution layers to the encoder <cit.>. We adopted RMSE as the evaluation metric.
For the NYU-v2 dataset, we employed the DPT <cit.> with encoder <cit.>, evaluating performance with the metric δ_1 <cit.>, e.g., ( d_gt/d_p, d_p/d_gt), which represents the percentage of pixels where the relative depth error is less than 1.25.
Here, d_p and d_gt denote the predicted depth and ground truth depth, respectively.
Result In the performance comparison across downstream models, SG-MIM consistently demonstrates superior results compared to existing MIM models <cit.>. As shown in Table <ref>, SG-MIM improves upon the baseline model, SimMIM <cit.>, across all configurations, including both ViT-Base and Swin-Base backbones, at 100 and 800 epochs (noting that lower RMSE indicates better performance). Additionally, compared to other MIM models, such as MultiMAE <cit.>, which involves a more complex reconstruction task (RGB+D+S), SG-MIM outperforms these models when utilizing the same ViT-Base backbone. Additionally, even though Croco <cit.> uses a larger dataset, specifically the Habitat dataset <cit.>, which includes 1,821,391 synthetic image cross-view pairs, SG-MIM still achieves better performance.
As shown in Table <ref>, we evaluated our model not only against other MIM-based models but also against models specifically designed for monocular depth estimation. In this comparison, both SimMIM and SG-MIM were pre-trained using the Swinv2-Base backbone, with the trained encoder weights transferred to the GLPDepth model for performance evaluation. For representative methods, we included state-of-the-art models such as BinsFormer <cit.> and iDisc <cit.>. Compared to SimMIM using the same downstream model, SG-MIM showed a significant performance improvement at 100 epochs and a slight improvement at 800 epochs. Furthermore, SG-MIM demonstrated comparable or superior performance when compared to state-of-the-art models.
In Table <ref>, where the downstream model is implemented using DPT based on the Vit-Base backbone. Similar to Table <ref>, SG-MIM demonstrates superior performance in the δ_1 metric. Interestingly, contrary to Table <ref>, Croco <cit.> exhibits higher performance among other MIM pre-training models, achieving the same δ_1 score as SG-MIM, while MAE <cit.> shows the lowest performance. However, it should be noted that Croco has been pre-trained with a larger quantity of images <cit.> than other models.
§.§ Downstream Task: Semantic Segmentation
Data and Setup
We conducted semantic segmentation experiments on the ADE20K <cit.> dataset. The UperNet framework <cit.> served as the downstream model, with pre-trained weights loaded into the encoder for finetuning. The performance was evaluated using the mIoU metric, and further details of the experimental setup and results can be found in the Supplementary material.
Result
As shown in Table <ref>, we validated the performance of our model, SG-MIM, on the semantic segmentation task using the ADE20K validation dataset under the same conditions as SimMIM with the SwinV2-Base backbone. Our results demonstrate that SG-MIM achieved an approximately 0.5 higher mIoU score than SimMIM. Additionally, it consistently outperformed other models, such as MultiMAE.
§.§ Model Efficiency
In Table <ref>, we examine the efficiency of SG-MIM based on different feature extraction architectures in the relational guidance framework—MLP layers, Transformer <cit.>, and Siamese Transformer <cit.>—and their performance in monocular depth estimation on the KITTI dataset <cit.>. The Transformer architecture operates independently from the image encoder, while the Siamese Transformer shares weights with the image encoder, indicating a unified processing approach. SG-MIM with MLP-based encoding excels in both training efficiency and RMSE performance. Interestingly, Transformer-based models show lower performance, likely due to their higher capacity requiring longer training times than the 800 epochs used in our experiments. This highlights the suitability of MLPs for capturing structured features efficiently.
§.§ Ablation study
All ablation studies are conducted on the KITTI dataset <cit.>, focusing on the monocular depth estimation using the Swin-Base as a backbone at 100 epochs.
Masking Strategy and Ratio
The study starts with traditional random masking, applied at a 0.6 ratio to both images and structured information. This can complicate the task of estimating structured information for invisible image patches, leading to poorer performance compared to ours, as shown in Table <ref>. However, our selective masking strategy avoids overlap between masked regions in the image and structured information, allowing the model to focus on visible patches and effectively estimate structured details, achieving an RMSE of 2.29, as shown in Table <ref>. We also experimented with adjusting the masking ratio from the default 0.6 to 0.5 and 0.7. Our results indicate that the 0.6 ratio achieves the best performance, yielding an RMSE of 2.29
Loss Weights
In Table <ref>, experiments show that a balanced 1/1 ratio between image reconstruction and structured knowledge prediction losses yields the best RMSE of 2.29. Reducing the weight of the structured knowledge loss results in a progressive decline in performance, highlighting the importance of the relational guidance framework for optimal monocular depth estimation.
§ CONCLUSIONS
In conclusion, SG-MIM enhances Masked Image Modeling by effectively integrating structured knowledge into the pre-training process through a lightweight relational guidance framework. This enables efficient encoding of spatially structured information, reduces noise, and better aligns pre-training with downstream tasks. Additionally, the selective masking strategy manages learning difficulty by focusing on visible image regions, ensuring the model doesn't strain to predict structured details from areas lacking information. This efficient and balanced approach enables the model to generate fine-grained features, leading to improved performance in dense prediction tasks, particularly in depth estimation and semantic segmentation, where SG-MIM outperforms existing methods.
Limitations While SG-MIM effectively integrates structured data into the pre-training process, it is still inherently limited by the 2D nature of traditional MIM frameworks, which focus on reconstruction and prediction within a 2D plane.
Future work will address the limitation by extending the MIM framework to incorporate 3D point cloud data, enabling richer 3D perception and understanding tasks.
|
http://arxiv.org/abs/2409.03292v1 | 20240905065751 | Directional data analysis using the spherical Cauchy and the Poisson-kernel based distribution | [
"Michail Tsagris"
] | stat.ME | [
"stat.ME"
] |
Federated Prototype-based Contrastive Learning for Privacy-Preserving Cross-domain Recommendation
Li Wang, Quangui Zhang, Lei Sang, Qiang Wu, Senior Member, IEEE, and Min Xu^*, IEEE, Member
Li Wang, Qiang Wu, and Min Xu are with the School of Electrical and Data Engineering, University of Technology Sydney, Sydney 2000, Australia. Shoujin Wang is with the institute of data science, University of Technology Sydney, Sydney 2000, Australia. Quangui Zhang is with the School of Artificial Intelligence, Chongqing University of Arts and Sciences, Chongqing 402160, China. *Corresponding author: Min Xu (e-mail: [email protected])
September 9, 2024
=========================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
Abstract
The spherical Cauchy distribution and the Poisson-kernel based distribution were both proposed in 2020, for the analysis of directional data. The paper explores both of them under various frameworks. Alternative parametrizations that offer numerical and estimation advantages, including a straightforward Newton-Raphson algorithm to estimate the parameters are suggested, which further facilitate a more straightforward formulation under the regression setting. A two-sample location test, based on the log-likelihood ratio test is suggested, completing with discriminant analysis. The two distributions are put to the test-bed for all aforementioned cases, through simulation studies and via real data examples comparing and illustrating their performance.
Keywords: Directional data, maximum likelihood, regression, discriminant analysis
MSC: 62H11, 62H30
§ INTRODUCTION
Directional data refers to multivariate data with a unit norm, and its sample space can be expressed as:
𝕊^d={ x∈ℝ^d+1||| x||=1 },
where ||.|| denotes the Euclidean norm. Circular data, when d=1, lie on a circle, whereas spherical data, when d=2, lie on a sphere.
Circular data are met in various disciplines, such as political sciences <cit.>, criminology <cit.>, biology <cit.>, ecology <cit.> and astronomy <cit.> to name a few. Spherical data on the other hand are met in geology <cit.>, environmental sciences <cit.>, image analysis <cit.>, robotics <cit.> and space <cit.>.
Numerous spherical and hyper-spherical distributions have been proposed over time, with the von Mises-Fisher <cit.> and projected normal <cit.> distributions being among the earliest and most prevalent. The spherical Cauchy (SC) <cit.> and the Poisson-kernel based PKB) <cit.> distributions are two recently propositions. Despite these distributions assume rotational symmetry, which may restrict their applicability in certain scenarios, they have proved useful in many situations and for data on the sphere, they seem to perform well on some occasions <cit.>.
In this paper the SC <cit.> and the PKB <cit.> distributions are investigated with regards to five aspects. These are random vectors simulation, maximum likelihood estimation, hypothesis testing about locations, regression modelling and discriminant analysis. Most of the existing distributions cover these cases, with some drawbacks whatsoever, such lack of computational efficiency and lack of available techniques, for instance lack of proper regression models.
Regarding simulation of random vectors, many distributions rely on rejection sampling, such as the von Mises-Fisher (vMF) <cit.> and Kent <cit.> distributions, while others, e.g. the projected normal, the elliptically symmetric angular Gaussian (ESAG) <cit.> and the spherical projected Cauchy <cit.> avoid this. As will be showed later, the SC is straightforward to simulate from, while the PKB requires rejection sampling which can hamper its use in cases where fast generation of random vectors is required.
Efficient maximum likelihood estimation (MLE) is crucial for many reasons, not only for simulation study purposes but also when analysis of large scale data is involved. Estimate of mean direction of the VMF is available in closed form, and its concentration parameter can be estimated via a fast Newton-Raphson (NR) algorithm <cit.>. Estimation of the parameters of the Kent and the projected type of distributions is harder and thus one relies numerical optimizers which typically are slow, especially with increasing dimensions. As will be shown later, MLE with the SC relies on NR, whereas MLE with the PKB relies upon a numerical optimizer. Closely related to the MLE is the calculation of the normalizing constant, which in the cases under study exists in a closed form.
A plethora of hypothesis testing procedures for two or more mean directions has been proposed over the years. <cit.> performed a comparison of some methods for the case of two populations, where the computation of the p-value took place using both asymptotic theory and computer intensive techniques. The tests that did not allow for heterogeneity among the samples, did not perform accurately, i.e. they did not retain the nominal type I error, whereas tests that are independent of this assumption were more precise. The perk of the proposed log-likelihood ratio tests, based on the SC and the PKB distribution, do not assume equality of the concentration parameters.
<cit.> related the mean direction of the projected normal, on the circle, to some covariates without assuming equality of the concentration parameter among the errors (the analogue of the homoscedasticity in linear regression). They allowed the concentration parameter to vary among the residuals. This concept was also used by <cit.> with the ESAG, Kent, and the vMF distribution, and by <cit.> for the projected Cauchy on the circle and the sphere. The same strategy is also proposed here with the two competing distributions.
Finally, the case of discriminant analysis, a.k.a supervised learning, is another interesting topic in the field. <cit.> performed a comparison of maximum likelihood based classifiers and the k-NN algorithm and provided evidence that rotationally symmetric distributions sometimes perform equally well or better than distributions that are elliptically symmetric. As shown later, the SC and PKB perform equally well under the discrimination setting, however, discrimination based on the SC is much faster.
The SC and PKB distributions are presented in the next section, along with the five aforementioned aspects, for which new methods are proposed and discussed in detail. Section <ref> compares the performance of these two distributions, while Section <ref> illustrates their performance using real data. Finally, Section <ref> concludes the paper.
§ THE SPHERICAL CAUCHY AND POISSON-KERNEL BASED DISTRIBUTIONS
A model that appears to be closely related to the classical vMF distribution is the SC distribution <cit.>, which can be seen as the generalisation of the wrapped Cauchy distribution <cit.> to the sphere (and hyper-sphere).
The density of the SC on 𝕊^d is given by
f( y)=C_d(1-ρ^2/1+ρ^2-2 y^⊤ m)^d,
where m∈𝕊^d is the location direction, that controls the mode of the density, ρ∈ [0, 1) plays the role of the concentration parameter and C_d=Γ[(d+1)/2]/2π^(d+1)/2 is the normalizing constant. For the most part of this paper though we will use an alternative parameterization that was used in <cit.>
f( y) =C_d(√(μ^2+1)- y^⊤μ)^-d=
C_d(√(γ^2+1)-α)^-d,
where μ∈ℝ^d+1 is the unconstrained location parameter, α= y^⊤μ, γ=μ, m=μ/γ and ρ=(√(γ^2+1)-1)/γ (γ=2ρ/1-ρ^2). The benefit of this parameterization is that the maximisation with respect to the location parameter is unconstrained.
A second, similar, distribution is the PKB distribution that was proposed by <cit.>, and can also be seen as the the generalisation of the wrapped Cauchy distribution, whose density is given by
f( y)=C_d1-ρ^2/(1+ρ^2-2 y^⊤ m)^(d+1)/2,
where m, according to <cit.> is a vector orienting the center of the distribution. This sentence does not sound accurate and any reference to this parameter of the PKB, similarly to the SC, will be the location parameter.
One may express this distribution in an alternative way, similar to the SC, as a function of the unconstrained location parameter μ
f( y)=C_d(√(γ^2+1)-α)^-(d+1)/2(1-(√(γ^2+1)-1)^2/γ^2)^-(d-1)/2.
Figure <ref> presents the contour plots for some location parameter and two values of the concentration parameter ρ for the SC distribution. The contour plots have the same shape, but the tails of the PKB decay slower than those of the SC distribution. The density value of the SC will be larger than the density value of the PKB, that is, if y^⊤μ<(1 + ρ^2)/2.
§.§ Simulation of directional vectors
In order to simulate random directional vectors y_i from the SC( y; m,ρ), <cit.> proposed a simple procedure that relies upon the uniform distribution and then perform some simple calculations. The fact that no rejection sampling is necessary, a strategy common with other distributions, is an appealing one. The two steps of the algorithm are described below.
* Generate vectors u_i (i=…,n) from the uniform distribution in 𝕊^d.
* Set y_i=( u_i + ρ m)(1-ρ^2∑_j=1^d+1 m_j^2)/∑_j=1^d+1( u_i + ρ m_j )^2 + ρ m.
In order to simulate random directional vectors y_i from the PKB( y; m,ρ), <cit.> proposed a rejection sampling that is computationally more expensive than the simulation of the SC distribution.
Step 1 Set λ = 2 ρ / (1 + ρ^2).
Step 2 Define ω_d(λ, β) = 0.5 (d+1) log1 + √(1 - λ^2)/1 + √(1 - λ^2 / β) - 0.5 log(1-β) and find the β^* that minimizes ω_d(λ, β) in the interval (λ(2λ-1), 1).
Step 4 Set β_1 = β^* / (1 - β^*) and β_2 = - 1 + 1 / √((1 - β^*))
Step 5 Simulate u ∼ U(0,1) and z=(z_1,…,z_d+1), where each z_i is simulated from a standard normal distribution, z_i ∼ N(0,1).
Step 6 Set q=(μ^⊤ z + β_2μ^⊤ z)/
√( z^⊤ z+β_1(μ^⊤ z)^2).
Step 7 If log(u)≤d+1/2[ - log(1 - λ * q) + log(1 - β^* q^2)-log2/1 + √(1 - λ^2/β^*)]
set x←( z + β_2μ^⊤ zμ)/
√( z^⊤ z+β_1(μ^⊤ z)^2), otherwise return to Step 5.
§.§ Maximum likelihood estimation
The log-likelihood for a sample of directional vectors y_i, i=1…,n of the SC distribution, using Eq. (<ref>) is given by
ℓ_SC = nlogC_d - d∑_i=1^nlog(√(γ^2+1)-α_i),
where C_d denotes the normalizing constant. To perform maximum likelihood estimation (MLE) the Newton-Raphson (NR) algorithm can be employed, to maximise ℓ (<ref>). The starting value for the NR algorithm is the sample mean vector for which the log-likelihood value is computed. At each successive step the estimated mean vector is updated via μ^t+1=μ^t+1 - H^-1 J and the algorithm terminates when the difference between two successive log-likelihood values is smaller than some tolerance value ϵ (for instance ϵ=10^-6).
A drawback of this method is that the concentration parameter ρ is embedded within the estimation of the mean vector μ. To deal with this problem an alternative optimization strategy is employed to disentangle the mean direction m from the concentration parameter. The method is a hybrid of the Brent algorithm and of the fixed points iteration algorithm. The relevant log-likelihood (excluding the normalizing constant) of the parameterization in Eq. (<ref>) is given by
ℓ_SC = nlogC_d + ndlog(1 - ρ^2) - d∑_i=1^nlog(1+ ρ^2 - 2ρ y_i^⊤ m).
The steps of the hybrid algorithm are delineated below.
Step 1 Start with an initial mean direction given by m̂=y̅/y̅, where y̅ denotes the sample mean vector.
Step 2 Using this mean direction obtain ρ̂ that maximises the log-likelihood in Eq. (<ref>), using the Brent algorithm <cit.>.
Step 3 For the estimated ρ̂ from Step 2, update the mean direction, using the fixed points iteration algorithm, by maximising the log-likelihood in Eq. (<ref>) under the constraint that the mean direction lies in 𝕊^d-1. The Lagrangian function takes the following form
ℓ_SC = nlogC_d + ndlog(1 - ρ̂^2) - d∑_i=1^nlog(1+ ρ̂^2 - 2ρ̂ y_i^⊤ m) + λ( m^⊤ m - 1 ).
Equating the derivative of (<ref>), with respect to m, to zero, yields
∂ℓ_SC/∂ m= d∑_i=1^n2ρ y_i/1+ ρ̂^2 - 2ρ̂ y_i^⊤ m +2λ m = 0.
The updated mean direction is given by the unit vector parallel to ∑_i=1^nρ̂ y_i/1+ ρ̂^2 - 2ρ̂ y_i^⊤m̂.
Step 4 Repeat Steps 2-3 until the log-likelihood in Eq. (<ref>) improves no more than some tolerance value ϵ.
The strategy employed in Step 3 was also employed by <cit.> in order to estimate the median direction and by <cit.> to obtain the eigenvectors of the Cauchy principal component analysis.
As for the PKB distribution, the second representation (<ref>) eases the calculations, since the log-likelihood can be written as
ℓ_PKB = nlog(C_d) -d+1/2∑_i=1^n log(√(γ^2+1)-α_i) - nd-1/2[log2 + log(√(γ^2+1)-1) - logγ^2].
The derivatives are very similar to those of the SC log-likelihood, with the exception of some extra terms. As for the hybrid algorithm, the mathematics are nearly the same in the SC case.
§.§ Log-likelihood ratio test for equality of two location parameters
In order to test the equality of equal population location parameters, based on two samples[Evidently, the procedure can be generalised to the case of more than two groups.] that follow the SC distribution we will employ the log-likelihood ratio test, without assuming equality of the concentration parameters. Under the null hypothesis one must maximise the following log-likelihood (ignoring C_d) with respect to the common mean direction m_c and the two concentration parameters ρ_1 and ρ_2.
ℓ_0 = n_1dlog(1 - ρ_1^2) - d∑_i=1^n_1log(1+ ρ_1^2 - 2ρ_1 y_1i^⊤ m_c ) + n_2dlog(1 - ρ_2^2) - d∑_i=1^n_2log(1+ ρ_2^2 - 2ρ_2 y_2i^⊤ m_c ),
where n_j and y_ji refer to the j-th sample size and the i-th observation of sample j, respectively, for j=1,2. Under the alternative hypothesis, the two location parameters are not equal and hence the relevant log-likelihood to be maximised is given by
ℓ_1 = n_1dlog(1 - ρ_1^2) - d∑_i=1^n_1log(1+ ρ_1^2 - 2ρ_1 y_1i^⊤ m_1 ) + n_2dlog(1 - ρ_2^2) - d∑_i=1^n_2log(1+ ρ_2^2 - 2ρ_2 y_2i^⊤ m_2 ),
where m_j refers to the location parameter of the j-th sample. In order to maximise ℓ_0 (<ref>) the hybrid maximisation approach is used, whereas for the maximisation of ℓ_1 (<ref>) this is accomplished via the NR algorithm, applied to each sample separately. If the null hypothesis is true, standard likelihood theory states that Λ = 2[ℓ_1(m̂_1, m̂_2, ρ̂_1,ρ̂_2)-ℓ_0(m̃_c, ρ̃_1,ρ̃_2)] χ^2_d, where m̂_1, m̂_2, ρ̂_1,ρ̂_2 denote the estimated parameters under H_1, while m̃_c, ρ̃_1,ρ̃_2 denote the estimated parameters under H_0.
The same strategy was adopted for the case of the PKB distribution as well, with the exception that the formulas are slightly different.
§.§ Regression analysis
When a set of p covariates is present, we link the location parameter to the covariates by μ_i= B^⊤ x_i, where B=(β_1^⊤,…,β_p^⊤) denotes the matrix of the regression coefficients and x_i denotes a row of the design matrix. The relevant log-likelihood, using (<ref>), becomes
ℓ_SC = log(C_d) - d∑_i=1^nlog(√(γ_i^2+1)-α_i),
where α_i= y_i^⊤μ_i and γ_i=μ_i.
The advantage of the parametrization in Eq. (<ref>) is evident in the regression setting. Similarly to <cit.> and <cit.> the errors are anisotropic, because we do not assume a common concentration parameter. Each directional vector has its own concentration parameter γ_i that is linked to ρ_i as mentioned earlier. The matrix of the regression coefficients is again estimated via the NR algorithm.
The log-likelihood of the PKB regression model, using the parameterization from Eq. (<ref>) is written as follows
ℓ_PKB=nlog(C_d) - nd-1/2log2 -d+1/2∑_i=1^n log(√(γ_i^2+1)-α_i) -d-1/2[∑_i=1^nlog(√(γ_i^2+1)-1) - ∑_i=1^nlogγ_i^2],
where α_i and γ_i are the same as in the case of the SC regression.
§.§ Discriminant analysis
Under the maximum likelihood discriminant analysis framework the rule is to allocate a new observation vector z∈𝕊^d in the group whose log-likelihood value has the highest value. In the case of the SC, with two groups, z is allocated to to group 1 iff
log√(μ_2^2+1)- z^⊤μ_2/√(μ_1^2+1)- z^⊤μ_1 > 0
and to group 2 otherwise, where μ_1 and μ_2 denote the location parameter of the first and second group, respectively. Hence, at first the location parameter is estimated for each group separately and then the allocation rule is applied to the new observation z. The case of the PKB distribution the allocation rule is straightforward to write down analytically, and evidently it has a more complicated form
log√(μ_2^2+1)- z^⊤μ_2/√(μ_1^2+1)- z^⊤μ_1 + log√(μ_2^2+1)-1/√(μ_1^2+1)-1 - logμ_2^2/μ_1^2 > 0.
§ SIMULATION STUDIES
Simulation studies were conducted to examine the computational efficiency of the two algorithms used for MLE of the SC distribution. Additionally, the SC distribution was compared to the PKB distribution using spherical and hyper-spherical data. The comparisons between the two distributions were carried out under the previously discussed topics: equality of two location parameters, regression and discriminant analysis settings.
§.§ Computational efficiency of the MLE algorithms
We compare the runtime of the two MLE algorithms, namely the hydrid and the NR described earlier, under a combination of various sample sizes and dimensionalities, when the data have been generetaed from the SC or the PKB distribution. The speed-up factor of NR compared to the hybrid algorithm appears in Table <ref>, where evidently the NR is to be preferred to its opponent, especially with increasing sample sizes. However, with increasing dimensions the NR seems to have a reduced dominance, something which does not come by surprise, since the NR requires the inverse of the Hessian matrix. However, we will note that with circular data (d=1), Kent and Tyler's algorithm <cit.> is faster than the NR algorithm[There is no reason to compare the NR algorithm to R's built in function optim(), and hence we skip all other comparisons.].
We also compared the time required to fit the SC distribution and the time required to fit the PKB distribution, when the data are generated either from the SC or the PKB distribution. Table <ref> shows that the MLE of the SC is faster than the MLE of the PKB distribution, for both cases. In this example, the data were generated from the PKB distribution, but the results are similar even if the data were generated from the SC distribution.
§.§ Hypothesis testing for two location parameters
According to the simulation studies of <cit.>, conducted for circular and spherical data, the heterogeneous approach <cit.>, that does not assume equality among the concentration parameters, was shown to be the optimal test in terms of size attainment. We implemented a smaller scale simulation study to estimate the type I error and the power of the SC log-likelihood ratio test, but we cannot compare it to the heterogeneous approach because the latter compares mean directions. Table <ref> presents the estimated type I error and the estimated power of the tests, for both tests when the data are generated from the SC or the PKB distribution, at various dimensions.
§.§ Regression analysis
For the regression analysis, we adhered to the methodology outlined by <cit.>. We utilized a single covariate to generate n values from a standard normal distribution, linking it to the response directional variable in a linear manner, μ_i= X_i B, for i=1,…,n. Subsequently, data were generated from either the SC or the PKB distribution, with the median and mean directions, respectively, being m_i=μ_i/μ_i. For the SC distribution, the concentration parameter was ρ_i=(√(μ_i^2 + 1) - 1 ) / μ_i, whereas for the PKB distribution, it was κ_i=μ_i <cit.>.
This procedure was iterated for various combinations of sample sizes and dimensionalities, with regression coefficients being estimated using both the SC and PKB regression models. The entire process was repeated 1,000 times, and the average fit of the two models, as measured by the quantity ∑_i=1^n y_i^⊤ŷ_i/n is presented in Table <ref> The fit takes values from 0 up to 1, where higher values indicate better fit. The estimated fits of the regression models are nearly nearly identical.
§.§ Discriminant analysis
Following <cit.> we also simulated data from the SC and the PKB distributions, assuming two groups whose mean directions differ by a specific angle and performed a 10-fold cross-validation protocol to estimate the percentage of correct classification[A key note difference from <cit.> is that we moved on to higher dimensions.]. This process was repeated 1,000 times and the average percentages are presented in Table <ref>. Evidently, regardless of the true data generation mechanism, the classification capabilities of either distribution are nearly the same.
§ REAL DATA ANALYSIS
§.§ Hypothesis testing for two location parameters
The Ordovician dataset <cit.> consists of two groups of 50 measurements, on the sphere, each from L_0^1 axes (intersections between cleavage and bedding planes of F, folds) in Ordovician turbidites, collected in the same sub-domain. Figure <ref> presents the data on the sphere with colours indicating the two groups and Table <ref> presents the MLE for the SC and PKB distributions. Evidently, there are small differences between the two models. Both the SC and PKBD log-likelihood ratio tests though provided high p-values, 0.733 and 0.856, respectively, and so were their relevant bootstrap based p-values, 0.825 and 0.914 for the SC and PKB, respectively.
§.§ Regression analysis
Data regarding crop productivity in the Greek NUTS II region of Thessaly during the 2017-218 cropping year were supplied by the Greek Ministry of Agriculture, also known as farm accountancy data network (FADN) data. The data refer to a sample of 487 farms and initially they consisted of 20 crops, but after aggregation they were narrowed down to 10 crops[A larger version of this dataset was used in <cit.>.]. For each of the 487 farms the cultivated area and the production in each of the 10 crops is known. However, the goal of the paper is to relate the composition of the production (simplicial response, Y) to the composition of the cultivated area (simplicial predictor, X) and for this reason were scaled to sum to unity[The raw data cannot be distributed due to disclosure restrictions.]. The square root was then applied to the compositional data (both the response and the predictor variables) so that they are mapped onto the 10-dimensional sphere.
Including the constant terms, there are 110 regression parameters to be estimated. The SC based regression model required 0.11 seconds to complete, whereas the PKB regression model face numerical instability problems with the Hessian matrix and therefore we had to rely upon a numerical optimizer (the function optim() in R), which required more than 5 minutes to complete. This does not come by surprise as the such optimizers are not designed to work with high dimensional problems.
With regards to the fitting performance, the quantity ∑_i=1^n y_i^⊤ŷ_i/n was equal to 0.958 for the SC regression model and 0.955 for the PKB regression model. Due to the vast computational time required by the PKB regression model we did not perform the 10-fold cross-validation procedure.
§.§ Discriminant analysis
For this task we will consider the Wireless Indoor Localization data set, that is publicly available in the https://archive.ics.uci.edu/dataset/422/wireless+indoor+localizationUCI Machine Learning Repository’s website. The data were collected in indoor space by observing signal strengths of seven WiFi signals visible on a smartphone. The data consist of 2,000 measurements on 7 variables that report the measurements of the WiFi signal strength received from 7 Wi-Fi routers in an office location in Pittsburgh (USA). The grouping variable is one of the four rooms with 500 observations from each room. The WiFi signal strength is measured in dBm, decibel milliwatts, which is expressed as a negative value ranging from -100 to 0. In order to apply the discriminant analysis we first normalized the data (projected them onto the hyper-sphere).
We applied the 10-fold cross-validation process, repeated 50 times to quantify the variance caused by the different splits, computing the percentage of correct classification at each repetition.
Figure <ref> presents the results. The average and median percentage of correct classification were equal to 0.9792 and 0.9790, respectively, for the SC distribution and equal to 0.9775 and 0.9775, respectively, for the PKB distribution.
§ CONCLUSIONS
We investigated two recently proposed distributions, namely the spherical Cauchy and the Poisson-kernel based distribution. Specifically, we suggested an alternative, hybrid, method to estimate the parameters of both distributions and in particularly for the SC distribution we suggested a new parameterization that allows for application of the Newton-Raphson algorithm. We also re-parameterised the density function of the PKB distribution, in the same manner as with the SC, but it did not prove useful for the maximum likelihood estimation of its parameters. However, the new parameterizations facilitated the implementation of regression modelling escaping the (hyper-)spherical constraint and allowing for anisotropic errors as in the case of the ESAG distribution <cit.>. The benefit of the re-parameterized SC distribution is that Newton-Raphson was again implemented for the regression setting, thus yielding computationally efficient estimation of the regression parameters. The aforementioned hybrid method for estimation of the parameters of either distribution enabled the hypothesis testing of location directions between two populations, based on the log-likelihood ratio test, without assuming equal concentration parameters. Finally, we explored the maximum likelihood discriminant analysis using either distribution.
The simulation studies and the real data examples, showcased the performance of each distribution, and provided evidence that both distributions perform similarly. The fact that the SC distribution is easier to simulate values from, is faster when it comes to estimating its parameters, with and without covariates, renders it a better choice, for practitioners and researchers.
Regarding future work we can mention the following. Extension of the hypothesis testing for more than two location parameters is straightforward, the only difficulty is the running time, since as the number of groups increase, so does the computational cost. Secondly, rejection sampling for the PKB using the SC as an envelope function did not work very satisfactorily. We matched the parameters of the SC distribution to those of the PKB distribution and estimated the bound between the ratio of the two distributions. However, the bound increases with increasing dimensionality, plus the accuracy of this method seems less than the rejection sampling already proposed by <cit.>. Model based clustering though, using either distribution is something we are currently working on.
§ APPENDIX
§.§ Difference in the log densities of the SC and PKB distributions
The difference of the log-densities, of PKB and SC, can be written as
logf_SC-logf_PKB = d-1/2[ -log(√(γ^2+1)-α) + log(1-(√(γ^2+1)-1)^2/γ^2)]
= d-1/2[log1+ρ^2-2 y^⊤ m/1-ρ^2+log(1-ρ^2)]
= d-1/2log(1+ρ^2-2 y^⊤ m).
§.§ Derivatives of Eq. (<ref>)
J = ∂ℓ_SC/∂μ = - d∑_i=1^nμ/√(γ^2+1)- y_i√(γ^2+1)-α_i
H = ∂^2ℓ_SC/∂μ∂μ^⊤ = - d∑_i=1^n I_d+1√(γ^2+1)-μμ^⊤/√(γ^2+1)/γ^2+1(√(γ^2+1)-α_i)-(μ/√(γ^2+1)- y_i)(μ/√(γ^2+1)- y_i)^⊤(√(γ^2+1)-α_i)^2.
§.§ Derivatives of Eq. (<ref>)
J = ∂ℓ_PKB/∂μ = - d+1/2∑_i=1^nμ/√(γ^2+1)- y_i√(γ^2+1)-α_i - nd-1/2( μ/√(γ^2+1)√(γ^2+1)-1 -2μ/γ^2)
H = ∂^2ℓ_PKB/∂μ∂μ^⊤ = - d+1/2∑_i=1^n I_d+1√(γ^2+1)-μμ^⊤/√(γ^2+1)/γ^2+1(√(γ^2+1)-α_i)-(μ/√(γ^2+1)- y_i)(μ/√(γ^2+1)- y_i)^⊤(√(γ^2+1)-α_i)^2
- nd-1/2[ I_d+1√(γ^2+1)-μμ^⊤/√(γ^2+1)/γ^2+1(√(γ^2+1)-1)-(μ/√(γ^2+1))(μ/√(γ^2+1))^⊤(√(γ^2+1)-1)^2 - 2 I_d+1γ^2-4μμ^⊤/γ^4].
§.§ Derivatives of Eq. (<ref>)
∂ℓ_SC/∂β_k = - d∑_i=1^nμ_ik x_i/√(γ_i^2+1)- y_ik x_i√(γ_i^2+1)-α_i
∂^2ℓ_SC/∂β_k∂β_l^⊤ = {[ - d∑_i=1^n x_i x_i^⊤√(γ_i^2+1)- x_iμ_ikμ_ik x_i^⊤/√(γ_i^2+1)/γ_i^2+1(√(γ_i^2+1)-α_i)-( x_iμ_ik/√(γ_i^2+1)- y_ik x_i)( x_iμ_ik/√(γ_i^2+1)- y_ik x_i)^⊤/(√(γ_i^2+1)-α_i)^2, if k=l; - d∑_i=1^n- x_iμ_ikμ_il x_i^⊤/√(γ_i^2+1)/γ_i^2+1(√(γ_i^2+1)-α_i)-( x_iμ_ik/√(γ_i^2+1)- y_ik x_i)( x_iμ_il/√(γ_i^2+1)- y_il x_i)^⊤/(√(γ_i^2+1)-α_i)^2, if k ≠ l; ]}
§.§ Derivatives of Eq. (<ref>)
The vector of the first derivative is given by
∂ℓ_PKB/∂β_k =
- d+1/2∑_i=1^nμ_ik x_i/√(γ_i^2+1)- y_ik x_i√(γ_i^2+1)-α_i - d-1/2∑_i=1^nμ_ik x_i/√(γ_i^2+1)√(γ_i^2+1)-1 + d-1/2∑_i=1^n2μ_ik x_i/γ_i^2.
The Jacobian matrix of the second derivatives comprises of
∂^2ℓ_SC/∂β_k∂β_k^⊤ =
- d+1/2∑_i=1^n x_i x_i^⊤√(γ_i^2+1)- x_iμ_ikμ_ik x_i^⊤/√(γ_i^2+1)/γ_i^2+1(√(γ_i^2+1)-α_i)-( x_iμ_ik/√(γ_i^2+1)- y_ik x_i)( x_iμ_ik/√(γ_i^2+1)- y_ik x_i)^⊤/(√(γ_i^2+1)-α_i)^2
- d-1/2∑_i=1^n x_i x_i^⊤√(γ_i^2+1)- x_iμ_ikμ_ik x_i^⊤/√(γ_i^2+1)/γ_i^2+1(√(γ_i^2+1)-1)-( x_iμ_ik/√(γ_i^2+1))( x_iμ_ik/√(γ_i^2+1))^⊤/(√(γ_i^2+1)-1)^2
+ d-1/2∑_i=1^n2 x_i x_i^⊤γ_i^2-4μ_ik x_iμ_ik x_i^⊤/γ_i^4,
∂^2ℓ_SC/∂β_k∂β_l^⊤ =
- d+1/2∑_i=1^n- x_iμ_ikμ_il x_i^⊤/√(γ_i^2+1)/γ_i^2+1(√(γ_i^2+1)-α_i)-( x_iμ_ik/√(γ_i^2+1)- y_ik x_i)( x_iμ_il/√(γ_i^2+1)- y_il x_i)^⊤/(√(γ_i^2+1)-α_i)^2
- d-1/2∑_i=1^n- x_iμ_ikμ_il x_i^⊤/√(γ_i^2+1)/γ_i^2+1(√(γ_i^2+1)-1)-( x_iμ_ik/√(γ_i^2+1))( x_iμ_il/√(γ_i^2+1))^⊤/(√(γ_i^2+1)-1)^2
+ d-1/2∑_i=1^n-4μ_ik x_iμ_il x_i^⊤/γ_i^4.
apalike
|
http://arxiv.org/abs/2409.02799v1 | 20240904151648 | Electronic correlations and spin frustration in the molecular conductors $κ$-(BEDT-TTF)$_2$X probed by magnetic quantum oscillations | [
"Shamil Erkenov",
"Sergej Fust",
"Sebastian Oberbauer",
"Werner Biberacher",
"Natalia D. Kushch",
"Harald Mueller",
"Francis L. Pratt",
"Rudolf Gross",
"Mark V. Kartsovnik"
] | cond-mat.str-el | [
"cond-mat.str-el"
] |
details
Walther-Meißner-Institut, Bayerische Akademie der Wissenschaften, D-85748 Garching, Germany
School of Natural Sciences, Technische Universität München, D-85748 Garching, Germany
Walther-Meißner-Institut, Bayerische Akademie der Wissenschaften, D-85748 Garching, Germany
School of Natural Sciences, Technische Universität München, D-85748 Garching, Germany
Walther-Meißner-Institut, Bayerische Akademie der Wissenschaften, D-85748 Garching, Germany
School of Natural Sciences, Technische Universität München, D-85748 Garching, Germany
Walther-Meißner-Institut, Bayerische Akademie der Wissenschaften, D-85748 Garching, Germany
Institute of Problems of Chemical Physics, Russian Academy of Sciences, Chernogolovka, 142432 Russian Federation
ESRF - The European Synchrotron, F-38043 Grenoble 9, France
ISIS Neutron and Muon Source, STFC Rutherford Appleton Laboratory, Chilton, Didcot OX11 0QX, United Kingdom
Walther-Meißner-Institut, Bayerische Akademie der Wissenschaften, D-85748 Garching, Germany
School of Natural Sciences, Technische Universität München, D-85748 Garching, Germany
Munich Center for Quantum Science and Technology (MCQST), D-80799 Munich, Germany
[email protected]
Walther-Meißner-Institut, Bayerische Akademie der Wissenschaften, D-85748 Garching, Germany
§ ABSTRACT
The layered molecular conductors κ-(BEDT-TTF)_2X are a perfect experimental platform for studying the physics of the Mott transition and related exotic electronic states. In these materials, the subtle balance between various instabilities of the normal metallic state can be efficiently changed by applying a very moderate external pressure or by subtle chemical modifications, e.g. by a replacement of the insulating anion X^-, frequently referred to as “chemical pressure”. A crucially important but still unsettled issue is an exact understanding of the influence of physical and chemical pressure on the electronic structure. Here, we use magnetic quantum oscillations to explore in a broad pressure range the behavior of the key parameters governing the Mott physics, the electronic correlation strength ratio U/t and the spin frustration ratio t'/t in two κ salts, the ambient-pressure antiferromagnetic insulator with X = Cu[N(CN)_2]Cl and the ambient-pressure superconductor with X = Cu(NCS)_2.
Our analysis shows that pressure effectively changes not only the conduction bandwidth but also the degree of spin frustration, thus weakening both the electronic correlation strength and the magnetic ordering instability. At the same time, we find that the replacement of the anion Cu[N(CN)_2]Cl^- by Cu(NCS)_2^- results in a significant increase of the frustration parameter t'/t, leaving the correlation strength essentially unchanged.
Electronic correlations and spin frustration in the molecular conductors κ-(BEDT-TTF)_2X probed by magnetic quantum oscillations
M. V. Kartsovnik
September 9, 2024
================================================================================================================================
§ INTRODUCTION
Layered organic charge-transfer salts have been extensively employed as model systems for exploring the correlation-driven Mott insulating instability and a plethora of associated fascinating phenomena ranging from conventional <cit.> and exotic <cit.> charge- and spin-ordered states, to quantum spin and electric-dipole liquids <cit.>, to valence-bond glass or solid <cit.> phases as well as unconventional superconductivity emerging in the direct proximity to an insulating ground state <cit.>.
Of particular interest is the family κ-(BEDT-TTF)_2X, where BEDT-TTF stands for the radical-cation bis(ethylenedithio)tetrathiafulvalene forming conducting layers alternating with insulating layers of a monovalent anion X^- <cit.>.
The organic molecules in the layers form an anisotropic triangular lattice of dimers with the on-site (intra-dimer) Coulomb repulsion significantly exceeding the nearest- and next-nearest-neighbor (inter-dimer) transfer integrals, U ≫ t,t', see refs. <cit.> for a review and inset in Fig. <ref>(a) below for illustration. This gives rise to a narrow, effectively half-filled conducting band.
Most of the abovementioned electronic states can be realized in these compounds, depending on subtle details of their crystal and electronic band structure, which can be controlled, e.g., by applying a moderate pressure, typically below 1 GPa, or by modifying the insulating anion.
Pressure is known to reduce the electronic correlation strength ratio U/t through increasing the conduction bandwidth, without changing the band filling. Therefore, the pressure-induced transition between the metallic and insulating ground states is generally referred to as a bandwidth-controlled metal-insulator transition (MIT).
The anion replacement in the κ salts has also been widely believed to primarily modify the bandwidth and has therefore been regarded as “chemical pressure”, see, e.g.,
<cit.>.
This interpretation has, however, been questioned by first-principles band structure calculations <cit.>, which suggested that the overall ground state properties of these materials are controlled by the degree of anisotropy of the dimer triangular lattice rather than by the correlation strength ratio U/t. The anisotropy of the triangular lattice, quantified by the ratio t'/t, is one of the key parameters in the physics of the Mott transition. Being directly relevant to the spin frustration, it is crucially important for the magnetic properties of the Mott-insulating state and for the nature of the eventual superconducting state in the adjacent domain of the phase diagram <cit.>. Through its impact on the magnetic ordering instability it should also influence the critical electronic correlation strength required for the MIT, see, e.g., <cit.>.
The first experimental argument in support of the theoretical prediction <cit.> was obtained in the recent comparative study of magnetic quantum oscillations in two κ-(BEDT-TTF)_2X salts with X = Cu[N(CN)_2]Cl and Cu(NCS)_2 (hereafter referred to as κ-Cl and κ-NCS, respectively) under pressure <cit.>. These salts have very similar electronic band structures, but different ambient-pressure ground states <cit.>. κ-Cl is an archetypal antiferromagnetic Mott insulator, which transforms into a metal under a pressure of 20-40 MPa. By contrast, the κ-NCS salt is already metallic at ambient pressure. Should the correlation strength be different in these salts, it must be reflected in the many-body renormalization of the effective mass <cit.>.
However, the recent experiment <cit.> has revealed no difference between the effective masses of the two salts in the pressure interval
40 ≲ p ≲ 100 MPa, corresponding to the close proximity to the MIT in κ-Cl, on the metallic side of its phase diagram. This result suggests that the mass renormalization, hence the correlation strength ratio U/t is approximately the same for both salts.
Given the virtually equal electronic correlation strength, the difference in the ambient-pressure ground states is natural to attribute to a difference in the spin frustration ratio t'/t. This would be fully consistent with the band structure calculations predicting a stronger frustration for the more metallic, though not weaker correlated, κ-NCS salt <cit.>. However, for a conclusive proof, it is important to provide an experimental test for the t'/t ratio in the two salts. Further, for a more accurate comparison of the many-body renormalization effects, an experimental data on the effective masses in a considerably broader pressure range is needed.
To this end, we have carried out a detailed study of magnetic quantum oscillations of interlayer magnetoresistance of the κ-Cl and κ-NCS salts in a broad range of pressures, up to p≈ 1.5 GPa. This range covers both the close neighborhood of the MIT in κ-Cl and the region deep in the normal metallic state, where the electronic correlations are significantly reduced. The data obtained allow us to evaluate both the electronic correlation strength and the degree of spin frustration and to trace their evolution with pressure. In this way, we have obtained a quantitative information on the influence of pressure and anion substitution on the Mott-insulating and magnetic-ordering instabilities in the two prominent members of the κ-(BEDT-TTF)_2X family.
The paper is organized as follows. The next section describes the experimental details and conditions. The experimental results and their discussion are given in Sec. <ref>. We start with the general behavior of the quantum oscillations of magnetoresistance (Shubnikov-de Haas, SdH oscillations) in the κ-Cl salt and its evolution with pressure.
In particular, we present here some details on the oscillation amplitude, possibly related to a pressure-dependent spin-splitting, and on the influence of the weak Fermi surface warping in the interlayer direction. For the κ-NCS salt, there is a vast literature on its high magnetic field properties, including quantum oscillations, see, e.g., refs. <cit.> for a review.
Therefore for this salt we only give a very brief account of the SdH oscillations in the Supplemental Material <cit.>, illustrating how the cyclotron masses were evaluated.
In Sect. <ref> we present detailed data on the pressure dependence of the SdH frequencies and analyze them in terms of the effective dimer model <cit.>. We show that both the anion replacement and the pressure lead to significant changes in the inplane anisotropy reflected, in particular, in the spin frustration ratio t'/t. In Sec. <ref>, the cyclotron effective masses corresponding to the two fundamental SdH frequencies are presented for both salts and compared with each other. Throughout the entire pressure range studied, both salts show very similar mass values. Moreover, the overall mass behavior is remarkably well described by the Brinkman-Rice model, indicating the electron-electron interactions as a dominant mechanism of the pressure-dependent mass renormalization and allowing its quantitative analysis.
Our conclusions are summarized in Sec. <ref>.
§ EXPERIMENTAL
The single crystals used in the experiment were grown electrochemically. The κ-Cl crystals were grown following the procedure described in the literature <cit.>, using the carefully purified commercial BEDT-TTF donor. For the κ-NCS crystals, the BEDT-TTF donor was synthesized according to the specially developed protocol <cit.> using [1,3]-dithiolo-[4,5-d][1,3-dithiole]-2,5-dione (thiapendione, TPD) as a starting compound. This procedure yielded crystals of very high quality (see the Supplemental Material <cit.>), which was particularly important for the observation and quantitative analysis of the high-frequency (F_β) SdH oscillations.
For the measurements, two
crystals of each salt, κ-Cl and κ-NCS, have been selected.
SdH oscillations were studied in the interlayer transport geometry, that is conventional for the layered organics <cit.>. The resistance across the layers was measured using the standard four-probe low-frequency (f ∼ 10 - 300 Hz) a.c. technique. Annealed 20 μm-thick platinum wires, serving as current and voltage leads, were attached to the samples with a conducting graphite paste yielding the contact resistance ∼ 10-30 Ω at low temperatures.
The samples were mounted in a Be-Cu clamp pressure cell and cooled down in ^3He or ^4He variable-temperature inserts placed into a 15 T superconducting solenoid.
The pressure p was evaluated at room temperature and at 15 K using a calibrated n-doped InSb pressure gauge (see the Supplemental Material to ref. <cit.> for details). In the high-pressure range, above 0.8 GPa, the calibration was crosschecked using the p-linear resistance of a manganin wire <cit.> as a reference. In what follows, all the indicated pressure values are those determined at T =15 K. The error in the pressure determination did not exceed 10 MPa at p < 0.2 GPa and 5% at higher pressures.
All the measurements on κ-Cl sample #1 and on both κ-NCS samples were done in a magnetic field applied perpendicular to the plane of conducting layers (crystallographic ac-plane and bc-plane for κ-Cl and κ-NCS, respectively). This is a conventional geometry for probing the inplane charge dynamics in layered materials as the field induces cyclotron orbits in the layer plane. The κ-Cl sample #2 was used partly in the same geometry at lower pressures, p < 0.4 GPa, whereas for the rest of the measurements this sample was tilted by an angle of θ = 25^∘ from the perpendicular field direction, as explained in Sec. <ref>. Taking into account the quasi-2D character of the present materials, we simply multiply the SdH frequencies and cyclotron masses, determined in the tilted fields, by the factor cosθ to obtain the values corresponding to the perpendicular field.
§ RESULTS AND DISCUSSION
§.§ SdH oscillations in κ-Cl: general features
Figure <ref> shows examples of the SdH oscillations recorded for κ-Cl sample #1 at different pressures, at the base temperature of the ^3He cryostat.
For each pressure the oscillating signal is normalized by the respective field-dependent nonoscillating background R_bg obtained by a low-order polynomial fit of the as-measured resistance R(B): R_osc/R_bg≡[ R(B) - R_bg(B) ]/R_bg(B). In full agreement with the previous reports <cit.>,
two fundamental frequencies are observed, revealing the Fermi surface topology typical of the κ salts <cit.>, see also Fig. <ref>(b) below. The lower frequency, F_α, is associated with the classical orbit on the Fermi pocket centered at the Brillouin zone boundary and varies between ≈ 530 and 675 T upon increasing pressure from 75 MPa to 1.5 GPa. The dominant oscillations clearly resolved at all pressures have a higher frequency, F_β∼ 4 kT, corresponding to a cyclotron orbit area equal to that of the first Brillouin zone. This orbit is caused by magnetic breakdown between the α pocket and a pair of open sheets and thus represents the entire Fermi surface (in the two-dimensional, 2D, representation, i.e., the Fermi surface in the plane of the conducting molecular layers) <cit.>.
The β oscillations exhibit pronounced beating, indicating that there are in fact two frequencies close to each other.
This frequency splitting, Δ F_β/F_β∼ 0.01, most likely originates from the maximal and minimal cross-sections of the three-dimensional (3D) Fermi surface cylinder slightly warped in the interlayer direction (see the Supplemental Material of ref. <cit.>), a phenomenon observed earlier on a number of organic <cit.> and inorganic <cit.> layered materials.
For example, at p = 0.3 GPa we find two beat nodes, at 9.50 T and at 12.21 T. From this we roughly evaluate the warping of the Fermi surface: Δ k_F/k_F ≃ Δ F_β/(2F_β) ≈ 0.55 × 10^-2 (here we used the experimental value F_β(0.3 GPa) = 3900 T) <cit.>.
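As a quick illustration of this estimate (a sketch, assuming that adjacent beat nodes are separated by exactly one beat period in inverse field):

# beat nodes quoted above for p = 0.3 GPa, in tesla
B1, B2 = 9.50, 12.21
F_beta = 3900.0                            # T
dF_beat = 1.0 / (1.0 / B1 - 1.0 / B2)      # beat frequency, ~43 T
print(dF_beat / (2.0 * F_beta))            # ~0.55e-2, the quoted warping Delta k_F / k_F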
At increasing pressure up to 1.36 GPa, the beat nodes shift to higher field, indicating an increase of the beat frequency, hence of the Fermi surface warping by ≃ 25%.
A noticeable pressure-induced enhancement of the interlayer coupling is common for the layered organic conductors with their relatively soft crystal lattices.
Interestingly, however, the node positions start to shift down upon further pressurizing beyond 1.4 GPa.
This apparent weakening of the interlayer coupling at high pressures is unusual and may deserve further attention.
One can see that the amplitude of the oscillations in Fig. <ref> varies in a nonmonotonic manner with changing pressure. In Fig. <ref> we plot the p-dependence of the amplitudes of the main peaks in the fast Fourier transform (FFT) spectra of the oscillatory magnetoresistance, in the field window 12 T to 15 T.
The amplitude of the β oscillations, see Fig. <ref>(a), exhibits pronounced minima at p ≃ 0.2 GPa and 0.9 GPa. Simultaneously, the amplitude ratio between the second and first harmonics, A_2β/A_β, shown in Fig. <ref>(b), displays sharp peaks at the same pressures. This behavior is strongly suggestive of the spin-zero effect caused by the periodic modulation of the oscillation amplitude by the spin-splitting factor R_s^(r) = cos[(rπ/2)(m_c/m_0)g], where r is the harmonic index, g is the Landé g-factor averaged over the cyclotron orbit, and m_0 the free electron mass <cit.>. For the quasi-2D organics, this effect has been widely known <cit.> as a periodic vanishing of the fundamental harmonic amplitude (with a simultaneously peaking second harmonic) when rotating the magnetic field, due to the angle-dependent cyclotron mass m_c(θ) = m_c(0)/cosθ. Knowing the effective cyclotron mass, such angle-dependent data can give useful information about the many-body renormalization of the g-factor. It would be interesting to carry out similar measurements on the κ-Cl salt at different pressures. This should provide data for a comparison between the p-dependent renormalization effects on the g-factor and on the effective mass.
The variation of the α-oscillation amplitude with pressure is shown in Fig. <ref>(c). Here we do not see spin-zero dips. A likely reason for that is a significantly lower cyclotron mass, m_c,α≈ m_c,β/2 (see Sect. <ref>), which enters the argument of cosine in the expression for R_s and thus leads to its weaker variation under pressure. It is possible that the nonmonotonic behavior with a maximum near p=0.5 GPa is caused by the spin-splitting factor slowly changing with pressure. On the other hand, we note that the A_α(p) dependence resembles that of A_β(p) in Fig. <ref>(a) once we ignore the modulation of the latter by the oscillating spin-splitting factor. In both cases the amplitudes display a global maximum near 0.5 GPa and a general trend to decrease at high pressures. The mechanisms behind this behavior may be common for the α and β oscillations. For example, one can speculate that the initial increase of the amplitude at low pressures is related to the rapid decrease of both cyclotron masses and concomitant weakening of temperature and scattering damping effects on the quantum oscillations <cit.>. The following slow decrease of the amplitude above 0.5 GPa may come from a pressure-induced enhancement of the Fermi surface warping (i.e. enhancement of the interlayer coupling), which should lead to a decrease of the number of charge carriers contributing in phase to the quantum oscillations.
The beating of the β oscillations and the resulting splitting of the FFT peak obviously affect the precision of determination of the mean Fermi surface area, which we will need in the following for evaluation of the frustration parameter t'/t.
This source of error can be avoided by aligning the sample in the direction corresponding to a maximum in the classical angle-dependent magnetoresistance oscillations (AMRO) <cit.>. At such directions, known also as Yamaji angles, all cyclotron orbits on a weakly warped cylindrical Fermi surface have the same area <cit.>, hence, contribute in phase to the quantum oscillations. As a result, when the sample is turned in a magnetic field, approaching a Yamaji angle, the beat frequency vanishes and the SdH amplitude acquires a local maximum.
This effect is illustrated in Fig. <ref>, where the field-dependent resistance of two samples mounted side by side in the pressure cell but aligned differently with respect to the magnetic field is plotted. Here, panel (a) shows sample #1 aligned with its conducting layers perpendicular to the field. Sample # 2 in Fig. <ref>(b) is tilted from the perpendicular orientation by angle θ≈ 25^∘, which is close to the first Yamaji angle for the β orbit <cit.>. As a result, the amplitude of the β oscillations is strongly enhanced and no trace of beating is seen. The fast Fourier transforms (FFT) of both oscillating signals are shown in Fig. <ref>(c). Here, in order to facilitate a direct comparison between different orientations, we multiply the frequencies by cosθ, thereby reducing them to the values corresponding to θ = 0^∘.
One readily sees that the F_β peak for sample #2 (red curve) is greatly enhanced and, in contrast to sample #1 (black curve), shows no splitting.
Thus, at θ≈ 25^∘ we significantly gain in the accuracy of both the frequency and amplitude of the β oscillations. Therefore, most of the measurements on sample #2 were done at this orientation. Note, however, that at this orientation the information on the Fermi surface warping is lost and the α oscillations are significantly suppressed, in comparison with those in the perpendicular field. Hence, a part of the measurements on sample #2 and all studies of sample #1 were done at θ = 0^∘.
In what follows, we will present the SdH frequencies and effective masses obtained for κ-Cl in both the perpendicular and the tilted orientations.
We also note that due to the very high electronic anisotropy of the κ-NCS salt (see the Supplemental Material <cit.> and references therein) the abovementioned effect of AMRO on the SdH oscillations is absent in this material. Therefore, all measurements on κ-NCS have been done in the perpendicular field geometry.
§.§ p-dependent SdH frequencies and inplane anisotropy
In this section we present a detailed analysis of the SdH frequencies, which are fundamentally determined by the area of the relevant Fermi surface cross-section S_i through the Onsager relation <cit.>, F_i = (ħ/2π e) S_i, with ħ being the reduced Planck constant and e the elementary charge.
Figure <ref>(a) shows the pressure-dependent frequencies of the β oscillations in κ-Cl sample #2 (red symbols) and κ-NCS (blue symbols).
For κ-Cl, the empty circles correspond to the perpendicular field geometry and the filled circles are the data taken at θ = 25^∘ and multiplied by cosθ. Sample #1 was measured simultaneously with #2, but at θ = 0^∘ and showed consistent values within the error bars. For κ-NCS, the filled circles are the averaged values obtained on two samples measured simultaneously; the difference between the samples lies within the indicated error bars.
The stars are the data obtained in our previous dilution-fridge experiment on κ-Cl and κ-NCS at pressures up to 0.1 GPa <cit.>.
As already mentioned, the β oscillations are associated with the magnetic-breakdown orbit with the area equal to the Brillouin zone area.
Indeed, the zero pressure values, F_β^Cl(0) = (3836 ± 5) T and F_β^NCS(0) = (3867 ± 7) T yield the areas (36.62± 0.05) nm^-2 and (36.91± 0.05) nm^-2, respectively,
perfectly coinciding with the low-temperature Brillouin zone areas of these salts <cit.>.
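A minimal numerical check of this conversion via the Onsager relation (standard constants; nothing here beyond the numbers quoted above):

import numpy as np

hbar, e = 1.054571817e-34, 1.602176634e-19   # J s, C

def onsager_area_nm2(F_tesla):
    # S = 2*pi*e*F/hbar, converted from m^-2 to nm^-2
    return 2 * np.pi * e * F_tesla / hbar * 1e-18

print(onsager_area_nm2(3836))   # ~36.6 nm^-2 (kappa-Cl)
print(onsager_area_nm2(3867))   # ~36.9 nm^-2 (kappa-NCS)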
The shapes of the two p dependences in Fig. <ref>(a) look slightly different from each other. However, by plotting the relative change of the frequency under pressure, see Fig. <ref>(b), we find that the difference between the two salts does not exceed the experimental error bars. Moreover, our data are consistent with the quasi-linear p dependence of the Brillouin zone areas [triangles in Fig. <ref>(b)] based on the X-ray data <cit.>. We note that the X-ray studies <cit.> have been done at room temperature. However, their good agreement with our low-temperature SdH data suggests that the compressibility does not change significantly upon cooling.
Plotted in Fig. <ref>(c) is the pressure dependence of the α frequency. For κ-NCS the symbol and color codes are the same as in Fig. <ref>(a). For κ-Cl, the data (circles) have been taken on sample #1 in the perpendicular field geometry. For pressures below 0.1 GPa, the results from our previous dilution fridge experiment <cit.> are added (stars). Both data sets are perfectly consistent with each other. Therefore, we will not distinguish between them in the following.
For κ-NCS our data set is consistent with the early study by Caulfield et al. <cit.>. As was already noticed by those authors, the relative increase of F_α with pressure is much stronger than that of F_β. The κ-Cl salt shows the same, even more pronounced trend.
At lower pressures, p < 0.3 GPa, the α frequency increases at a relative rate of ≃ 0.25 GPa^-1 for κ-NCS and ≃ 0.4 GPa^-1 for κ-Cl [cf. the relative increase rate of the β frequencies in Fig. <ref>(b) is only ≈ 0.04 GPa^-1].
Interestingly, for κ-Cl, the absolute changes of F_α(p) and F_β(p) are virtually the same in this pressure range, see the inset in Fig. <ref>(c). This suggests that, in contrast to the rapidly expanding α pocket, the rest of the Fermi surface remains almost unchanged. At higher pressures, the increase of F_α becomes more moderate, with a slope saturating at ∼ 0.15 GPa^-1.
For κ-NCS, due to the weakness of the β oscillations at p< 0.3 GPa,
a sufficiently accurate comparison of Δ F_α(p) and Δ F_β(p) is difficult. However, qualitatively, the behavior is similar to that of κ-Cl.
The difference in the behaviors of the α and β frequencies is summarized as the p-dependent ratio F_α/F_β in Fig. <ref>(d).
Let us discuss this ratio in terms of the electronic anisotropy of the conducting layers, in other words, in terms of the shape of the 2D Fermi surface.
To this end, we follow the approach <cit.> based on the effective dimer model commonly used for the κ salts. This is a tight-binding model of an anisotropic triangular lattice of BEDT-TTF dimers with the nearest and next-nearest transfer integrals, t and t^', respectively [see inset in Fig. <ref>(a)], and the dispersion relation:
ϵ(𝐤) = 4tcos( k_x x/2) cos( k_y y/2) +2t^'cos( k_y y ).
Here, x and y should be substituted by the crystallographic parameters a(c) and c(b) in κ-Cl(-NCS), respectively.
The above equation is a parametric expression for the Fermi surface which directly determines the ratio between the Fermi surface areas S_α and S_β, hence the ratio F_α/F_β through t^'/t, the spin frustration parameter of the anisotropic triangular lattice of the molecular dimers <cit.>.
Fitting the experimental data in Fig.<ref>(d) with Eq. (<ref>), we evaluate the frustration ratio t^'/t and its dependence on pressure for both salts, as shown in Fig. <ref>(a).
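A minimal numerical sketch of this procedure is given below. It is an illustration rather than the exact fitting code: it assumes half filling of the dimer band, works in reduced units x = y = 1 and t = 1 (the area ratio does not depend on these scales), identifies the α pocket with the lens formed where the unfolded closed orbit protrudes beyond the crystallographic zone boundary, and replaces contour integration by simple counting of grid points.

import numpy as np

def f_alpha_over_f_beta(tp_over_t, n=1200):
    # fundamental domain of the dimer (triangular-lattice) Brillouin zone, reduced units
    kx = np.linspace(-2 * np.pi, 2 * np.pi, n, endpoint=False)
    ky = np.linspace(-np.pi, np.pi, n, endpoint=False)
    KX, KY = np.meshgrid(kx, ky, indexing="ij")
    eps = 4 * np.cos(KX / 2) * np.cos(KY / 2) + 2 * tp_over_t * np.cos(KY)  # dispersion above, t = 1

    EF = np.median(eps)             # Fermi level at half filling of the dimer band
    orbit = eps > EF                # unfolded closed orbit; its area is one conventional BZ
    # alpha pocket: part of the orbit beyond the zone boundary |kx| = pi, folded back into a lens
    S_alpha = (orbit & (np.abs(KX) > np.pi)).mean() * 2.0   # in units of the conventional BZ
    S_beta = orbit.mean() * 2.0                             # ~1 by construction
    return S_alpha / S_beta

for r in (0.5, 0.6, 0.7, 0.8):
    print(f"t'/t = {r:.1f} -> F_alpha/F_beta ~ {f_alpha_over_f_beta(r):.3f}")

Inverting the resulting monotonic dependence of F_α/F_β on t'/t against the measured frequency ratio then gives the frustration parameter at each pressure.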
The first, obvious result is that the frustration in the metallic κ-NCS salt is significantly stronger than in the ambient-pressure Mott insulator κ-Cl. While this difference was predicted by some band structure calculations <cit.>, our data provide, to the best of our knowledge, the first direct experimental evidence for that.
Another important observation is that even our quasi-hydrostatic pressure significantly changes the electronic anisotropy in the conducting layers, leading to an enhancement of the spin frustration.
In a broad pressure range the t^'/t ratio increases with an approximately constant rate of ≃ 0.07 GPa^-1, which even increases below 0.3 GPa, as the system approaches the MIT. The overall increase of the frustration ratio in the studied pressure range is ≳ 20%. In particular, at 1 GPa the frustration in the κ-Cl salt already exceeds the ambient-pressure value for κ-NCS.
Thus, besides the well-known effect of pressure on electronic correlation strength, it is important to take into account its strong influence on magnetic ordering instability in these materials.
In Fig. <ref>(b) we show the Fermi surfaces of κ-Cl and κ-NCS, calculated using Eq. (<ref>) and the experimental SdH frequencies, for the lowest and for the highest pressure. Even though the crystal lattice compressibility is assumed to be isotropic in the layer plane, as it is at room temperature <cit.>, the changes in the Fermi surfaces of both salts are obviously anisotropic. While the α pocket shows a significant increase along its short axis (k_x), the rest of the Fermi surface remains almost the same. Such a behavior is indeed observed experimentally on κ-Cl at pressures of up to 0.3 GPa, as noted above. At higher pressures, however, the absolute changes Δ F_β(p) and Δ F_α(p) deviate from each other, see, e.g., the inset in Fig. <ref>(c). This means that the Fermi surface also expands in the k_y direction. The apparent absence of such expansion in Fig. <ref>(b) is most likely due to a limited precision of the simple effective dimer model employed here. It would be highly interesting to perform a more elaborate analysis confronting our data with an ab-initio band structure calculation taking into account electronic correlations. This, however, appears to be a very challenging task, requiring, furthermore, detailed low-T structural data at high pressures. On the other hand, our main conclusions concerning the comparison of the spin frustration in the present two compounds and their pressure dependence seem to be robust against small quantitative corrections.
§.§ Effective cyclotron masses
The effective cyclotron masses were evaluated from the T-dependence of the amplitude A_i(T) of the fundamental harmonic of the SdH oscillations in a conventional way based on the Lifshitz-Kosevich (LK) theory <cit.>.
Details of the evaluation including some examples are given in the Supplemental Material <cit.>.
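In essence, the procedure amounts to fitting the T-dependence of the fundamental-harmonic amplitude with the LK temperature damping factor R_T = X/sinh X, where X = 2π^2 k_B m_c T/(ħ e B) ≈ 14.69 (m_c/m_0) T[K]/B[T]. The sketch below uses purely illustrative numbers (the field value and amplitude points are assumptions, not our measured data):

import numpy as np
from scipy.optimize import curve_fit

B_EFF = 13.4    # effective field of the FFT window in tesla (illustrative value)

def lk_amp(T, m_c, A0):
    # LK temperature reduction factor: A(T) = A0 * X / sinh(X), X = 14.69 * m_c * T / B
    X = 14.69 * m_c * T / B_EFF
    return A0 * X / np.sinh(X)

T = np.array([0.45, 0.7, 1.0, 1.4, 2.0, 2.8])   # K (illustrative)
A = lk_amp(T, 3.5, 1.0) * (1 + 0.02 * np.random.default_rng(1).normal(size=T.size))

(m_c, A0), _ = curve_fit(lk_amp, T, A, p0=(2.0, 1.0))
print(round(m_c, 2))    # recovers the cyclotron mass in units of m_0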
The results for the mass (in units of the free electron mass m_0) on the β orbit, characterizing the entire Fermi surface, are plotted in Fig. <ref>. Here the blue symbols represent κ-Cl sample #2. The filled circles and diamonds correspond to the data obtained at θ = 0^∘ (field perpendicular to the layers) and at 25^∘ (near the AMRO peak), respectively. The latter are multiplied by cos 25^∘.
Sample #1, measured simultaneously with sample #2, yielded very similar mass values (not shown in Fig. <ref>, for the sake of clarity). The empty circles are the data obtained on sample #1 in our earlier lower-pressure experiment <cit.>.
The green symbols show the mass for the κ-NCS salt obtained in this work (filled triangles) and in Ref. <cit.> (empty triangles).
Due to a larger magnetic breakdown gap, the β oscillations in κ-NCS are relatively weak (see the Supplemetal Material <cit.>), which leads to a larger error bar and stronger scattering of the data.
Within the experimental error, the κ-Cl and κ-NCS salts exhibit the same behavior. The initial rapid decrease of the mass occurring as we move away from the MIT slows down with increasing pressure and saturates above 1 GPa at the level m_c,β≃ 2.5 m_0. Note that this value is close to the band cyclotron mass m_c,β,band = 2.6m_0 <cit.> calculated for κ-NCS from the band structure neglecting many-body interactions. Thus, it appears that at pressures above 1 GPa we have essentially a noninteracting electron system. Taking into account that the only significant change in our materials within the rather narrow range 0 < p < 1 GPa is the electronically driven MIT, it is natural to attribute the effective mass renormalization entirely to electronic correlations.
Other renormalization effects, such as due to electron-phonon interactions, not directly linked to the MIT, are not expected to change notably within this pressure range and, therefore, are most likely insignificant.
The above argument justifies the analysis of the pressure-dependent effective mass in terms of the electronic correlation strength ratio U/t. In particular, the fact that the masses m_c,β in κ-Cl and κ-NCS are very similar at all pressures and approach the same high-p limit m_c,β,band further substantiates the earlier conclusion <cit.>, inferred from lower-pressure data, that the correlation strength is indeed essentially the same in both salts.
Turning to a more quantitative analysis, the previous, low-pressure experiment <cit.> has revealed a simple inverse-linear p-dependence of the mass, which was interpreted in terms of a Brinkman-Rice-like (BR-like) renormalization <cit.>. The present data, obtained in a broader pressure range, reveal a deviation from this behavior starting at p ≃ 0.25 GPa.
This is clearly seen in the inset in Fig. <ref>, where the inverse mass of κ-Cl is plotted and the dashed straight line is the linear fit to the low-pressure data <cit.>. It should be noted that the linear dependence, m_c^-1∝ (p-p_0), was inferred from the BR theory <cit.>, assuming a linear pressure dependence of the correlation strength ratio. This approximation works well in a narrow pressure interval near the MIT, where the change of the ratio U/t does not exceed 1-2% <cit.>.
However, for a broader range one should take into account that the inter-site transfer integral t is significantly more sensitive to pressure than the on-site (intra-dimer) Coulomb repulsion U <cit.>.
In this case, a more reasonable approximation is <cit.> to assume U = const and expand t rather than U/t linearly in pressure:
t(p) ≈ t_0[1 + γ(p-p_0)],
where p_0 is the critical pressure at which the mass diverges in the BR model, t_0 ≡ t(p_0), and γ is a proportionality factor. Further following <cit.>, we set the critical on-site repulsion U_0 proportional to the conducting bandwidth, hence to t(p). Then, the BR renormalization of the effective mass <cit.>, m_c = m_c,band/[1 - (U/U_0)^2 ], can be written as:
m_c =
m_c,band[1-1/[1+γ(p-p_0)]^2]^-1 .
Here, we replaced the usual quasiparticle effective mass m^∗ and band mass m_band considered in the original BR theory <cit.> by the respective cyclotron masses considered in the LK theory of magnetic quantum oscillations, since the many-body renormalization effects are the same in both cases <cit.>.
Despite its rather simple form, Eq. (<ref>) fits the experimental data remarkably well throughout the whole pressure range. This is shown by the red dashed line in Fig. <ref> fitting the κ-Cl data <cit.>.
We, therefore, assume that it provides a realistic estimate of the parameters characterizing the electronic system: m_c,β,band = (2.07 ± 0.1)m_0, p_0 = (-0.28 ± 0.04) GPa, and γ = (0.77 ± 0.11) GPa^-1. The fit to the κ-NCS dataset yields very similar parameters, although with considerably larger error bars, see the Supplemental Material <cit.>. The evaluated band mass is somewhat smaller than the abovementioned calculated value, 2.6m_0 <cit.>, but the difference does not exceed the uncertainty of the band structure calculations <cit.>. The BR critical pressure is comparable to that obtained from an even simpler low-p fit <cit.> shown by the dashed straight line in the inset in Fig. <ref>. Finally, within the present approach, the sensitivity of the electronic correlation strength to pressure is basically determined by the coefficient γ = (dt/dp)/t_0.
The obtained value is an order of magnitude higher than that inferred from the band-structure calculations <cit.>. The calculations yielded the U/t ratio in κ-NCS decreasing by only 5% upon increasing pressure from 0 to 0.75 GPa, which would imply γ≈ 0.07 GPa^-1. A similarly strong disagreement with the theoretical predictions has already been detected in the experimental data taken in a narrow pressure interval very close to the MIT <cit.>. Now it is confirmed to exist over a much broader range where the effective mass is no longer inversely-linear in p and even approaches the noninteracting band mass value.
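For reference, fitting the m_c(p) expression above to a mass-versus-pressure data set is a standard nonlinear least-squares problem; the sketch below uses illustrative points generated from the quoted best-fit parameters rather than the measured data.

import numpy as np
from scipy.optimize import curve_fit

def m_br(p, m_band, p0, gamma):
    # BR mass renormalization with t(p) linearized around p0, as in the expression above
    return m_band / (1.0 - 1.0 / (1.0 + gamma * (p - p0)) ** 2)

p = np.array([0.08, 0.15, 0.3, 0.5, 0.8, 1.1, 1.5])   # GPa (illustrative grid)
m = m_br(p, 2.07, -0.28, 0.77)                        # masses in units of m_0

popt, _ = curve_fit(m_br, p, m, p0=(2.0, -0.3, 0.8))
print(popt)    # ~[2.07, -0.28, 0.77]: (m_band, p0, gamma)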
Thus far, we have considered the cyclotron mass on the magnetic-breakdown β orbit encircling the entire 2D Fermi surface. It is interesting to compare it with the mass on the classical orbit α, which involves only one-half of the charge carriers. In this way we may obtain information on the momentum dependence of the electronic interactions. For example, in another organic salt, κ-(BETS)_2Mn[N(CN)_2]_3, displaying the MIT, the mass renormalization on the α orbit was found to be enhanced in comparison to the rest of the Fermi surface in close proximity to the transition <cit.>. Figure <ref> summarizes our results on m_c,α(p) in κ-Cl and κ-NCS. All the data in the figure have been taken in the perpendicular field configuration, as at the tilted field the α oscillations in κ-Cl were too weak (see Fig. <ref>) for an accurate mass determination.
For κ-Cl, the α mass is almost exactly one-half of the β mass and this relation is virtually independent of pressure. To illustrate this, we plot the ratio m_c,β/m_c,α in the inset in Fig. <ref>. A linear fit to the data (dashed line in the inset) has a very slight slope, (-0.055 ± 0.04) GPa^-1, lying within the error bar. Such a weak variation, even if it reflects a real trend, may be attributed to a weak pressure dependence of the band masses.
The blue dashed line in the main panel of Fig. <ref> is the BR fit according to Eq. (<ref>). It yields the parameters: m_c,α,band = (1.05 ± 0.07)m_0, p_0 = (-0.29 ± 0.04) GPa, and γ = (0.81 ± 0.16) GPa^-1.
As expected, the band mass is approximately one-half of the β band mass obtained above. The other two parameters are very close to those obtained for the β orbit. All in all, we observe no evidence of a difference in the mass renormalization on the α and β orbits. Thus, within the accuracy of our experiment the electronic correlations appear to be momentum-independent in the κ-Cl salt, in contrast to those in κ-(BETS)_2Mn[N(CN)_2]_3.
For the κ-NCS salt (green triangles in Fig. <ref>), the α-mass values lie slightly higher than for κ-Cl <cit.>. The difference is likely caused by a larger size of the α orbit than in κ-Cl (see Sec. <ref>), hence a higher band cyclotron mass. Indeed, the fit with Eq. (<ref>) gives m_c,α,band = (1.18 ± 0.06)m_0, which is 10% higher than in the κ-Cl salt.
As to the other fitting parameters, the sensitivity to pressure, γ = (0.78 ± 0.13) GPa^-1 is almost the same as for κ-Cl, whereas the BR critical pressure p_0 = (-0.33 ± 0.03) GPa is slightly lower.
It is also lower than the value, p_0 ≃ -0.29 GPa, obtained from the fit to the m_c,β(p) dependence in the same salt, see Fig. S4 in the Supplemental Material <cit.>.
The relatively low p_0 value for the α orbit might be a sign of weaker electronic correlations. If so, this would imply that the correlation effects are different on different parts of the Fermi surface in κ-NCS, weaker on the α pocket and stronger on the open sheets. However, we should keep in mind that the mentioned differences in p_0 are small and comparable to the evaluation error bars. Here, the limiting factor is the rather large error bars in the m_c,β(p) dependence caused by the low amplitude of the magnetic-breakdown β oscillations in κ-NCS. For making a definitive conclusion, further measurements at higher magnetic fields, B>30 T, would be very helpful. While high-field quantum oscillations experiments have been done on κ-NCS at ambient pressure, see, e.g., refs. <cit.>, we are unaware of similar measurements under pressure.
§ CONCLUSIONS
Using the SdH oscillation technique, we have been able to trace the evolution of the electronic correlation strength as well as the spin frustration ratio in the κ-(BEDT-TTF)_2X salts with X = Cu[N(CN)_2]Cl and Cu(NCS)_2 in a broad pressure range up to 1.5 GPa corresponding to an almost two-fold change of the conduction bandwidth, according to our estimations.
From the systematic analysis of the SdH amplitude, we have determined the renormalized
effective cyclotron masses. The renormalization is found to be the same for the α and β orbits. This suggests that electronic correlations are homogeneous over the Fermi surface.
Throughout the entire pressure range studied, the behavior of the effective cyclotron mass is remarkably well described by the BR model under the assumption of a linear-in-pressure transfer integral t(p) and a p-independent on-site Coulomb repulsion. This approximation was recently shown to work well for the inorganic 3D Mott insulator NiS_2 at pressures between 3 and 11 GPa <cit.> and also seems to be very reasonable in our case. The sensitivity of the transfer integral to pressure, γ = (dt/dp)/t_0 ≃ 0.8 GPa^-1, was estimated by fitting the experimental data with the model Eq. (<ref>). This result is consistent with the previous estimations based on low-pressure data <cit.>, but is an order of magnitude higher than inferred from the band structure calculations <cit.>. This stark discrepancy challenges our understanding of the correlation effects on the band structure near the bandwidth-controlled MIT.
Our data confirm and further extend to a broad pressure range the earlier finding <cit.> that the correlation strength is the same in the ambient-pressure insulator κ-Cl and in the superconducting κ-NCS. By contrast, the spin frustration turns out to differ considerably in the two salts. To estimate the frustration ratio t^'/t, we use the fact that this parameter is intimately connected with the shape of the Fermi surface and thus can be extracted from the relationship of the SdH frequencies F_α and F_β. To this end we followed the approach based on the effective dimer model <cit.>. The resulting ambient-pressure values are t^'/t ≈ 0.57 and 0.69 for κ-Cl and κ-NCS, respectively. Both values are somewhat higher than those obtained in first-principles band structure calculations <cit.>. It would be highly interesting to revise the calculations, taking into account our results on the SdH frequencies. At the same time, in line with the theoretical predictions, for the κ-NCS salt the t^'/t ratio is significantly higher than for κ-Cl. This result clearly demonstrates the dominant role of the spin frustration in the “chemical pressure” effect within, at least, the present pair of κ salts with different anions. It is interesting to perform a similar study on other κ salts, in particular, on the metallic salt with X = Cu[N(CN)_2]Br isostructural to κ-Cl and on the spin-liquid candidate with X = Cu_2(CN)_3.
Finally, our analysis of the SdH frequencies clearly reveals a considerable pressure effect on the spin frustration. For both salts, the t^'/t ratio increases by ≳ 20% within the studied pressure range. Thus, one has to take into account the influence of pressure on both the electronic correlations and the magnetic ordering instability when studying the electronic phase diagram of our materials and, possibly, of the other κ salts.
We are thankful to K. Kanoda, P. Reiss, and V. Zverev for stimulating and illuminating discussions. The work was supported by the German Research Foundation (Deutsche Forschungsgemeinschaft, DFG) via Grants No. KA 1652/5-1 and No. GR 1132/19-1.
[Kanoda(1997)]kano97 K. Kanoda, Recent progress in NMR studies on organic conductors, Hyperfine Interact. 104, 235 (1997).
[Miyagawa et al.(2004)]kano04 K. Miyagawa, K. Kanoda, and A. Kawamoto, NMR studies on two-dimensional molecular conductors and superconductors: Mott transition in κ-(BEDT-TTF)_2X, Chem. Rev. 104, 5635 (2004).
[Powell and McKenzie(2006)]powe06 B. J. Powell and R. H. McKenzie, Strong electronic correlations in superconducting organic charge transfer salts, J. Phys.: Condens. Matter 18, R827 (2006).
[Seo et al.(2004)]seo04 H. Seo, C. Hotta, and H. Fukuyama, Toward systematic understanding of diversity of electronic properties in low-dimensional molecular solids, Chem. Rev. 104, 5005 (2004).
[Mori(2016)]mori16 T. Mori, Non-stripe charge order in dimerized organic conductors, Phys. Rev. B 93, 245104 (2016).
[Oshima et al.(2017)]oshi17 Y. Oshima, H.-B. Cui, and R. Kato, Antiferromagnetic insulating ground state of molecular π-d system λ-(BETS)_2FeCl_4 (BETS = bis(ethylenedithio)tetraselenafulvalene): A theoretical and experimental review, Magnetochemistry 3, 10 (2017).
[Okazaki et al.(2013)]okaz13 R. Okazaki, Y. Ikemoto, T. Moriwaki, T. Shikama, K. Takahashi, H. Mori, H. Nakaya, T. Sasaki, Y. Yasui, and I. Terasaki, Optical conductivity measurement of a dimer Mott-insulator to charge-order phase transition in a two-dimensional quarter-filled organic salt compound, Phys. Rev. Lett. 111, 217801 (2013).
[Riedl et al.(2021)]ried21 K. Riedl, E. Gati, D. Zielke, S. Hartmann, O. M. Vyaselev, N. D. Kushch, H. O. Jeschke, M. Lang, R. Valentí, M. V. Kartsovnik, and S. M. Winter, Spin vortex crystal order in organic triangular lattice compound, Phys. Rev. Lett. 127, 147204 (2021).
[Lunkenheimer et al.(2012)]lunk12 P. Lunkenheimer, J. Müller, S. Krohns, F. Schrettle, A. Loidl, B. Hartmann, R. Rommel, M. de Souza, C. Hotta, J. A. Schlueter, and M. Lang, Multiferroicity in an organic charge-transfer salt that is suggestive of electric-dipole-driven magnetism, Nat. Mater. 11, 755 (2012).
[Gati et al.(2018)]gati18 E. Gati, J. K. H. Fischer, P. Lunkenheimer, D. Zielke, S. Köhler, F. Kolb, H.-A. K. von Nidda, S. M. Winter, H. Schubert, J. A. Schlueter, H. O. Jeschke, R. Valentí, and M. Lang, Evidence for electronically driven ferroelectricity in a strongly correlated dimerized BEDT-TTF molecular conductor, Phys. Rev. Lett. 120, 247601 (2018).
[Kagawa et al.(2013)]kaga13 F. Kagawa, T. Sato, K. Miyagawa, K. Kanoda, Y. Tokura, K. Kobayashi, R. Kumai, and Y. Murakami, Charge-cluster glass in an organic conductor, Nat. Phys. 9, 419 (2013).
[Sasaki et al.(2017)]sasa17 S. Sasaki, K. Hashimoto, R. Kobayashi, K. Itoh, S. Iguchi, Y. Nishio, Y. Ikemoto, T. Moriwaki, N. Yoneyama, M. Watanabe, A. Ueda, H. Mori, K. Kobayashi, R. Kumai, Y. Murakami, J. Müller, and T. Sasaki, Crystallization and vitrification of electrons in a glass-forming charge liquid, Science 357, 1381 (2017).
[Kanoda and Kato(2011)]kano11 K. Kanoda and R. Kato, Mott physics in organic conductors with triangular lattices, Annu. Rev. Condens. Matter Phys. 2, 167 (2011).
[Shimizu et al.(2016)]shim16 Y. Shimizu, T. Hiramatsu, M. Maesato, A. Otsuka, H. Yamochi, A. Ono, M. Itoh, M. Yoshida, M. Takigawa, Y. Yoshida, and G. Saito, Pressure-tuned exchange coupling of a quantum spin liquid in the molecular triangular lattice κ-(ET)_2Ag_2(CN)_3, Phys. Rev. Lett. 117, 107203 (2016).
[Zhou et al.(2017a)]kano17 Y. Zhou, K. Kanoda, and T.-K. Ng, Quantum spin liquid states, Rev. Mod. Phys. 89, 025003 (2017).
[Shimozawa et al.(2017)]shim17 M. Shimozawa, K. Hashimoto, A. Ueda, Y. Suzuki, K. Sugii, S. Yamada, Y. Imai, R. Kobayashi, K. Itoh, S. Iguchi, M. Naka, S. Ishihara, H. Mori, T. Sasaki, and M. Yamashita, Quantum-disordered state of magnetic and electric dipoles in an organic Mott system, Nat. Comm. 8, 1821 (2017).
[Hassan et al.(2018)]hass18 N. Hassan, S. Cunningham, M. Mourigal, E. I. Zhilyaeva, S. A. Torunova, R. N. Lyubovskaya, J. A. Schlueter, and N. Drichko, Evidence for a quantum dipole liquid state in an organic quasi-two-dimensional material, Science 360, 1101 (2018).
[Urai et al.(2022)]urai22 M. Urai, K. Miyagawa, Y. Watanabe, E. I. Zhilyaeva, S. A. Torunova, R. N. Lyubovskaya, N. Drichko, and K. Kanoda, Anomalously field-susceptible spin clusters emerging in the electric-dipole liquid candidate κ-(ET)_2Hg(SCN)_2Br, Sci. Adv. 8, eabn1680 (2022).
[Riedl et al.(2019)]ried19 K. Riedl, R. Valentí, and S. M. Winter, Critical spin liquid versus valence-bond glass in a triangular-lattice organic antiferromagnet, Nat. Comm. 10, 2561 (2019).
[Shimizu et al.(2007)]shim07 Y. Shimizu, H. Akimoto, H. Tsujii, A. Tajima, and R. Kato, Mott transition in a valence-bond solid insulator with a triangular lattice, Phys. Rev. Lett. 99, 256403 (2007).
[Miksch et al.(2021)]miks21 B. Miksch, A. Pustogow, M. J. Rahim, A. A. Bardin, K. Kanoda, J. A. Schlueter, R. Hübner, M. Scheffler, and M. Dressel, Gapped magnetic ground state in quantum spin liquid candidate κ-(BEDT-TTF)_2Cu_2(CN)_3, Science 372, 276 (2021).
[Pustogow(2022)]pust22 A. Pustogow, Thirty-year anniversary of κ-(BEDT-TTF)_2Cu_2(CN)_3: Reconciling the spin gap in a spin-liquid candidate, Solids 3, 93 (2022).
[Powell and McKenzie(2011)]powe11 B. J. Powell and R. H. McKenzie, Quantum frustration in organic Mott insulators: from spin liquids to unconventional superconductors, Rep. Prog. Phys. 74, 056501 (2011).
[Ardavan et al.(2012)]arda12 A. Ardavan, S. Brown, S. Kagoshima, K. Kanoda, K. Kuroki, H. Mori, M. Ogata, S. Uji, and J. Wosnitza, Recent topics of organic superconductors, J. Phys. Soc. Jpn. 81, 011004 (2012).
[Clay and Mazumdar(2019)]clay19 R. Clay and S. Mazumdar, From charge- and spin-ordering to superconductivity in the organic charge-transfer solids, Phys. Rep. 788, 1 (2019).
[Toyota et al.(2007)]toyo07 N. Toyota, M. Lang, and J. Müller, Low-Dimensional Molecular Metals (Springer-Verlag, Berlin Heidelberg, 2007).
[Ishiguro et al.(1998)]ishi98 T. Ishiguro, K. Yamaji, and G. Saito, Organic Superconductors, 2nd ed. (Springer-Verlag, Berlin Heidelberg, 1998).
[Hotta(2003)]hott03 C. Hotta, Classification of quasi-two dimensional organic conductors based on a new minimal model, J. Phys. Soc. Jpn. 72, 840 (2003).
[Kandpal et al.(2009)]kand09 H. C. Kandpal, I. Opahle, Y.-Z. Zhang, H. O. Jeschke, and R. Valentí, Revision of model parameters for κ-type charge transfer salts: An ab initio study, Phys. Rev. Lett. 103, 067004 (2009).
[Koretsune and Hotta(2014)]kore14 T. Koretsune and C. Hotta, Evaluating model parameters of the κ- and β'-type Mott insulating organic solids, Phys. Rev. B 89, 045102 (2014).
[Mori et al.(1999)]mori99a T. Mori, H. Mori, and S. Tanaka, Structural genealogy of BEDT-TTF-based organic conductors II. Inclined molecules: θ, α, and κ phases, Bull. Chem. Soc. Jpn. 72, 179 (1999).
[Pustogow et al.(2018)]pust18 A. Pustogow, M. Bories, A. Löhle, R. Rösslhuber, E. Zhukova, B. Gorshunov, S. Tomić, J. A. Schlueter, R. Hübner, T. Hiramatsu, Y. Yoshida, G. Saito, R. Kato, T.-H. Lee, V. Dobrosavljević, S. Fratini, and M. Dressel, Quantum spin liquids unveil the genuine Mott state, Nat. Mater. 17, 773 (2018).
[Zhou et al.(2017b)]zhou17 Y. Zhou, K. Kanoda, and T.-K. Ng, Quantum spin liquid states, Rev. Mod. Phys. 89, 025003 (2017).
[Wietek et al.(2021)]wiet21 A. Wietek, R. Rossi, F. Šimkovic, M. Klett, P. Hansmann, M. Ferrero, E. M. Stoudenmire, T. Schäfer, and A. Georges, Mott insulating states with competing orders in the triangular lattice Hubbard model, Phys. Rev. X 11, 041013 (2021).
[Misumi et al.(2017)]misu17 K. Misumi, T. Kaneko, and Y. Ohta, Mott transition and magnetism of the triangular-lattice Hubbard model with next-nearest-neighbor hopping, Phys. Rev. B 95, 075124 (2017).
[Watanabe et al.(2006)]wata06 T. Watanabe, H. Yokoyama, Y. Tanaka, and J.-i. Inoue, Superconductivity and a Mott transition in a Hubbard model on an anisotropic triangular lattice, J. Phys. Soc. Jpn. 75, 074707 (2006).
[Ohashi et al.(2008)]ohas08 T. Ohashi, T. Momoi, H. Tsunetsugu, and N. Kawakami, Finite temperature Mott transition in Hubbard model on anisotropic triangular lattice, Phys. Rev. Lett. 100, 076402 (2008).
[Oberbauer et al.(2023)]ober23 S. Oberbauer, S. Erkenov, W. Biberacher, N. D. Kushch, R. Gross, and M. V. Kartsovnik, Coherent heavy charge carriers in an organic conductor near the bandwidth-controlled Mott transition, Phys. Rev. B 107, 075139 (2023).
[Brinkman and Rice(1970)]brin70 W. F. Brinkman and T. M. Rice, Application of Gutzwiller's variational method to the metal-insulator transition, Phys. Rev. B 2, 4302 (1970).
[Georges et al.(1996)]geor96 A. Georges, G. Kotliar, W. Krauth, and M. J. Rozenberg, Dynamical mean-field theory of strongly correlated fermion systems and the limit of infinite dimensions, Rev. Mod. Phys. 68, 13 (1996).
[Wosnitza(1996)]wosn96 J. Wosnitza, Fermi Surfaces of Low-Dimensional Organic Metals and Superconductors (Springer-Verlag, Berlin Heidelberg, 1996).
[Singleton(2000)]sing00 J. Singleton, Studies of quasi-two-dimensional organic conductors based on BEDT-TTF using high magnetic fields, Rep. Prog. Phys. 63, 1111 (2000).
[Kartsovnik(2004)]kart04 M. V. Kartsovnik, High magnetic fields: A tool for studying electronic properties of layered organic metals, Chem. Rev. 104, 5737 (2004).
[Audouard et al.(2016)]audo16 A. Audouard, J.-Y. Fortin, D. Vignolles, V. N. Laukhin, N. D. Kushch, and E. B. Yagubskii, New insights on frequency combinations and `forbidden frequencies' in the de Haas–van Alphen spectrum of κ-(ET)_2Cu(SCN)_2, J. Phys.: Condens. Matter 28, 275702 (2016).
[sm-()]sm-hp See Supplemental Material at [URL will be inserted by publisher] for a brief account of the SdH oscillations in κ-NCS, description and examples of the cyclotron mass evaluation, and for fitting the p-dependent mass m_c,β in κ-NCS with Eq. (3). The Supplemental Material also includes Refs. <cit.>.
[Caulfield et al.(1994)]caul94 J. Caulfield, W. Lubczynski, F. L. Pratt, J. Singleton, D. Y. K. Ko, W. Hayes, M. Kurmoo, and P. Day, Magnetotransport studies of the organic superconductor κ-(BEDT-TTF)_2Cu(NCS)_2 under pressure: the relationship between carrier effective mass and critical temperature, J. Phys.: Condens. Matter 6, 2911 (1994).
[Pratt(2010)]prat10b F. L. Pratt, Using Shubnikov - de Haas data to estimate the magnetic frustration parameter t'/t in the spin-liquid system κ-ET_2Cu_2(CN)_3, Physica B 405, S205 (2010).
[Williams et al.(1990)]will90 J. M. Williams, A. M. Kini,
author H. H. Wang, author K. D. Carlson, author
U. Geiser, author L. K. Montgomery, author G. J. Pyrka, author D. M. Watkins, and author J. M. Kommers, title title From
semiconductor-semiconductor transition (42 K) to the highest-T_c
organic superconductor, κ-(ET)_2Cu[N(CN)_2]Cl (T_c =
12.54 K), https://doi.org/10.1021/ic00343a003 journal journal Inorg. Chem. volume
29, pages 3272 (year 1990)NoStop
[Urayama et al.(1988)Urayama, Yamochi, Saito, Nozawa, Sugano, Kinoshita, Sato, Oshima, Kawamoto, and Tanaka]uray88
author author H. Urayama, author H. Yamochi,
author G. Saito, author K. Nozawa, author
T. Sugano, author M. Kinoshita, author S. Sato, author K. Oshima, author A. Kawamoto, and author J. Tanaka, title title A new ambient pressure organic superconductor based on BEDT-TTF with T_c higher than 10 K (T_c = 10.4 K), https://doi.org/10.1246/cl.1988.55 journal journal Chemistry Letters volume 17, pages 55 (year 1988)NoStop
superconductor based on bedt-ttf with tc higher than 10 k (tc=10.4 k), https://doi.org/10.1246/cl.1988.55 journal journal Chemistry Letters volume 17, pages 55 (year 1988)NoStop
[Müller and Ueba(1993)]muel93
author author H. Müller and author Y. Ueba, title title A facile synthesis of
bis(ethylenedithio)tetrathiafulvalene, https://doi.org/10.1055/s-1993-25953 journal journal Synthesis volume 9, pages
853 (year 1993)NoStop
[Müller et al.(1997)Müller, Jouan, and Salhi]muel97
author author H. Müller, author C. Jouan, and author F. Salhi, title title BEDT-TTF and related donor molecules - made
easy, https://doi.org/https://doi.org/10.1016/S0379-6779(97)80314-1 journal journal Synth. Metals volume
85, pages 1457 (year 1997)NoStop
[Müller and Bourcet(2022)]muel22
author author H. Müller and author L. Bourcet, title title
[1,3]-dithiolo-[4,5-d][1,3-dithiole]-2,5-dione, https://doi.org/10.1055/s-0040-1720891 journal journal Synthesis volume 54, pages
1817 (year 2022)NoStop
[Nomura et al.(1979)Nomura,
Yamamoto, Ochiai, and Fujiwara]nomu79
author author M. Nomura, author Y. Yamamoto,
author Y. Ochiai, and author H. Fujiwara, title
title The measurement of the resistance of manganin wire with
the cubic-anvil type pressure apparatus, https://doi.org/10.1143/JJAP.18.363 journal journal Jpn. J. Appl. Phys. volume 18, pages 363 (year 1979)NoStop
[Kartsovnik et al.(1995)Kartsovnik, Biberacher, Andres, and Kushch]kart95c
author author M. V. Kartsovnik, author W. Biberacher, author K. Andres, and author N. D. Kushch, title title Shubnikov-de Haas effect in the
organic superconductor κ-(BEDT-TTF)_2Cu[N(CN)_2]Cl under
pressure, http://jetpletters.ru/ps/1222/article_18475.shtml
journal journal JETP Lett. volume 62, pages 905 (year 1995)NoStop
[Yamauchi et al.(1996)Yamauchi, Kartsovnik, Ishiguro,
Kubota, and Saito]yama96
author author Y. Yamauchi, author M. V. Kartsovnik, author T. Ishiguro, author M. Kubota, and author G. Saito, title title Angle-dependent magnetoresistance and
Shubnikov-de Haas oscillations in the organic superconductor
κ-(BEDT-TTF)_2Cu[N(CN)_2]Cl under pressure, https://doi.org/10.1143/JPSJ.65.354 journal journal J. Phys. Soc. Jpn. volume 65, pages 354 (year 1996)NoStop
[Com()]Comment_MB
@noop note The inversion symmetry of the crystal
structure of κ-Cl <cit.> implies no gap between the open
Fermi sheets and the α pocket. However, a small gap,
Δ_MB≃ 1.6 meV, has been predicted to arise due to the
spin-orbit interaction <cit.>, which is fully consistent with the
experimental estimation, see Supplemental Material for Ref.
<cit.>.Stop
[Kartsovnik et al.(1988)Kartsovnik, Kononovich, Laukhin, and Schegolev]kart88b
author author M. V. Kartsovnik, author P. A. Kononovich, author V. N. Laukhin, and author I. F. Schegolev, title title Anisotropy of
magnetoresistance and the Shubnikov-de Haas oscillations in the organic
metal β-(ET)_2IBr_2, http://jetpletters.ru/ps/0/article_16777.shtml journal
journal JETP Lett. volume 48, pages 541 (year 1988)NoStop
[Kang et al.(1989)Kang,
Montambaux, Cooper, Jérome, Batail, and Lenoir]kang89
author author W. Kang, author G. Montambaux,
author J. R. Cooper, author D. Jérome, author
P. Batail, and author
C. Lenoir, title title Observation of giant magnetoresistance oscillations in the
high-T_c phase of the two-dimensional organic conductor
β-(BEDT-TTF)_2I_3, https://doi.org/10.1103/PhysRevLett.62.2559 journal journal Phys. Rev. Lett. volume 62, pages 2559 (year 1989)NoStop
[Weiss et al.(1999a)Weiss, Kartsovnik, Biberacher, Steep,
Balthes, Jansen, Andres, and Kushch]weis99a
author author H. Weiss, author M. V. Kartsovnik, author W. Biberacher, author E. Steep,
author E. Balthes, author A. Jansen, author
K. Andres, and author
N. Kushch, title title Magnetotransport studies of the Fermi surface in the organic
superconductor κ-(BEDT-TTF)_2Cu[N(CN)_2]Br, https://doi.org/10.1103/PhysRevB.59.12370 journal journal Phys. Rev. B volume 59, pages 12370 (year 1999a)NoStop
[Schiller et al.(2000)Schiller, Schmidt, Balthes, Schweitzer, Koo, Whangbo, Heinen, Klausa, Kircher, and Strunz]schi00
author author M. Schiller, author W. Schmidt,
author E. Balthes, author D. Schweitzer, author
H. Koo, author M. H. Whangbo, author I. Heinen, author T. Klausa, author P. Kircher, and author W. Strunz, title title
Investigations of the Fermi surface of a new organic metal:
(BEDT-TTF)_4[ Ni(dto)_2], https://dx.doi.org/10.1209/epl/i2000-00329-2 journal
journal Europhys. Lett. volume 51, pages 82 (year 2000)NoStop
[Bergemann et al.(2000)Bergemann, Julian, Mackenzie, Nishizaki, and Maeno]berg00
author author C. Bergemann, author S. R. Julian, author A. P. Mackenzie, author S. Nishizaki, and author Y. Maeno, title title Detailed topography of the
Fermi surface of Sr_2RuO_4, https://doi.org/10.1103/PhysRevLett.84.2662 journal journal Phys. Rev. Lett. volume 84, pages 2662 (year 2000)NoStop
[Carrington(2011)]carr11
author author A. Carrington, title title Quantum oscillation
studies of the Fermi surface of iron-pnictide superconductors, http://stacks.iop.org/0034-4885/74/i=12/a=124507 journal
journal Rep. Prog. Phys. volume 74, pages 124507 (year 2011)NoStop
[Sebastian and Proust(2015)]seba15
author author S. E. Sebastian and author C. Proust, title title Quantum oscillations in
hole-doped cuprates, https://doi.org/10.1146/annurev-conmatphys-030212-184305 journal journal Annu. Rev. Condens. Matter Phys. volume 6, pages 411 (year
2015)NoStop
[Oliviero et al.(2022)Oliviero, Benhabib, Gilmutdinov,
Vignolle, Drigo, Massoudzadegan, Leroux, Rikken,
Forget, Colson, Vignolles, and Proust]oliv22
author author V. Oliviero, author S. Benhabib,
author I. Gilmutdinov, author B. Vignolle, author
L. Drigo, author M. Massoudzadegan, author M. Leroux, author G. L. J. A. Rikken, author A. Forget, author D. Colson,
author D. Vignolles, and author C. Proust, title title Magnetotransport signatures of antiferromagnetism
coexisting with charge order in the trilayer cuprate
HgBa_2Ca_2Cu_3O_8+δ, https://doi.org/10.1038/s41467-022-29134-6 journal journal Nat. Comm. volume 13, pages
1568 (year 2022)NoStop
[Eaton et al.(2024)Eaton,
Weinberger, Popiel, Wu,
Hickey, Cabala, Pospís̆il, Prokles̆ka, Haidamak,
Bastien, Opletal, Sakai,
Haga, Nowell, Benjamin,
Sechovský, Lonzarich, Grosche, and Valis̆ka]eato24
author author A. G. Eaton, author T. I. Weinberger, author N. J. M. Popiel, author Z. Wu, author A. J. Hickey, author
A. Cabala, author J. Pospís̆il, author J. Prokles̆ka, author T. Haidamak, author G. Bastien, author P. Opletal, author H. Sakai, author Y. Haga, author R. Nowell, author S. M. Benjamin, author V. Sechovský, author G. G. Lonzarich, author F. M. Grosche, and author M. Valis̆ka, title title Quasi-2D Fermi
surface in the anomalous superconductor UTe_2, https://doi.org/10.1038/s41467-023-44110-4 journal journal Nat. Comm. volume 15, pages
223 (year 2024)NoStop
[com(a)]comm_PhSh
@noop note When the Landau-level
spacing becomes comparable to the interlayer transfer integral, the beats of
SdH oscillations acquire an additional field-dependent phase shift, which
breaks the exact 1/B-periodicity <cit.>. In the layered
metal β-(BEDT-TTF)_2IBr_2 displaying a similar anisotropy, the beat
phase is shifted by about 10% of the period at B ∼ 15 T
<cit.>. Thus, a straightforward estimation of the Fermi surface
warping from the positions of two beat nodes leads to an error ≲
10%.Stop
[Singleton et al.(2002)Singleton, Goddard, Ardavan, Harrison, Blundell, Schlueter, and Kini]sing02
author author J. Singleton, author P. A. Goddard, author A. Ardavan,
author N. Harrison, author S. J. Blundell, author
J. A. Schlueter, and author
A. M. Kini, title title Test for interlayer coherence in a quasi-two-dimensional
superconductor, https://doi.org/10.1103/PhysRevLett.88.037001
journal journal Phys. Rev. Lett. volume 88, pages 037001 (year
2002)NoStop
[Shoenberg(1984)]shoe84
author author D. Shoenberg, @noop title Magnetic Oscillations
in Metals (publisher Cambridge University Press, address Cambridge, year 1984)NoStop
[Wosnitza et al.(1992)Wosnitza, Crabtree, Wang, Geiser, Williams, and Carlson]wosn92
author author J. Wosnitza, author G. W. Crabtree, author H. H. Wang,
author U. Geiser, author J. M. Williams, and author K. D. Carlson, title title de Haas–van Alphen studies of the organic
superconductors α-(ET)_2NH_4Hg(SCN)_4 and
κ-(ET)_2Cu(NCS)_2 [with ET =
bis(ethelenedithio)-tetrathiafulvalene], https://doi.org/10.1103/PhysRevB.45.3018 journal journal Phys. Rev. B volume 45, pages 3018 (year 1992)NoStop
[Kovalev et al.(1993)Kovalev, Kartsovnik, and Kushch]kova93
author author A. E. Kovalev, author M. V. Kartsovnik, and author N. D. Kushch, title title Quantum and semi-classical
magnetoresistance oscillaitons in a new organic metal
(BEDT-TTF)_2TlHg(SeCN)_4, @noop journal
journal Solid State Commun. volume
87, pages 705 (year 1993)NoStop
[Meyer et al.(1995)Meyer,
Steep, Biberacher, Christ,
Lerf, Jansen, Joss,
Wyder, and Andres]meye95
author author F. A. Meyer, author E. Steep,
author W. Biberacher, author P. Christ, author
A. Lerf, author A. G. M. Jansen, author W. Joss, author P. Wyder, and author K. Andres, title title High-field de Haas-van
Alphen studies of κ-(BEDT-TTF)_2Cu(NCS)_2, https://doi.org/10.1209/0295-5075/32/8/011 journal journal Europhys. Lett. volume 32, pages 681 (year 1995)NoStop
[Weiss et al.(1999b)Weiss, Kartsovnik, Biberacher, Balthes,
Jansen, and Kushch]weis99b
author author H. Weiss, author M. V. Kartsovnik, author W. Biberacher, author E. Balthes,
author A. G. M. Jansen, and author N. D. Kushch, title title Angle-dependent magnetoquantum oscillations in
κ-(BEDT-TTF)_2Cu[N(CN)_2]Br, https://doi.org/10.1103/PhysRevB.60.R16259 journal journal Phys. Rev. B volume 60, pages R16259 (year 1999b)NoStop
[Sasaki and Fukase(1999)]sasa99
author author T. Sasaki and author T. Fukase, title title Spin splitting at the
high-magnetic-field phase transition of the organic conductor α-(BEDT-TTF)_2KHg(SCN)_4, https://doi.org/10.1103/PhysRevB.59.13872 journal journal Phys. Rev. B volume 59, pages 13872 (year 1999)NoStop
[Wosnitza et al.(2008)Wosnitza, Gvozdikov, Hagel, Ignatchik, Bergk, Meeson, Schlueter, Davis, Winter, and Gard]wosn08
author author J. Wosnitza, author V. M. Gvozdikov, author J. Hagel,
author O. Ignatchik, author B. Bergk, author
P. J. Meeson, author
J. A. Schlueter, author
H. Davis, author R. W. Winter, and author G. L. Gard, title title
Spin-zero anomaly in the magnetic quantum oscillations of a two-dimensional
metal, https://doi.org/10.1088/1367-2630/10/8/083032 journal journal New J. Phys. volume
10, pages 083032 (year 2008)NoStop
[Yamaji(1989)]yama89
author author K. Yamaji, title title On the angle dependence of
the magnetoresistance in quasi-two-dimensional organic superconductors, @noop journal journal J. Phys. Soc.
Jpn. volume 58, pages 1520 (year 1989)NoStop
[Kartsovnik et al.(1990)Kartsovnik, Kononovich, Laukhin,
Pesotskii, and Schegolev]kart90
author author M. V. Kartsovnik, author P. A. Kononovich, author V. N. Laukhin, author S. I. Pesotskii, and author I. F. Schegolev, title title Galvanomagnetic
properties and the Fermi surface of the organic superconductor
β-(ET)_2IBr_2, http://jetp.ras.ru/cgi-bin/e/index/e/70/4/p735?a=list journal
journal Sov. Phys. JETP volume 70, pages 735 (year 1990)NoStop
[Wosnitza et al.(1993)Wosnitza, Crabtree, Williams, Wang, Carlson, and Geiser]wosn93
author author J. Wosnitza, author G. W. Crabtree, author J. M. Williams, author H. H. Wang,
author K. D. Carlson, and author U. Geiser, title title De Haas - van Alphen studies and Fermi surface
properties of organic superconductors (ET)_2X, https://doi.org/10.1016/0379-6779(93)90052-X journal
journal Synth. Met. volume 55-57, pages 2891 (year 1993)NoStop
[Peschansky(2002)]pesc02
author author V. G. Peschansky, title title Galvanomagnetic
phenomena in organic layered conductors, https://doi.org/10.1134/1.1484997 journal journal JETP volume 94, pages 1035
(year 2002)NoStop
[Kartsovnik(2008)]kart08a
author author M. V. Kartsovnik, title title Layered organic
conductors in strong magnetic fields, in @noop booktitle The Physics of Organic Superconductors and Conductors, editor edited by editor A. G. Lebed (publisher Springer Verlag, address Berlin Heidelberg, year 2008) pp. pages 185–246NoStop
[Watanabe et al.(1999)Watanabe, Nogami, Oshima, Ito, Ishiguro, and Saito]wata99
author author M. Watanabe, author Y. Nogami,
author K. Oshima, author H. Ito, author
T. Ishiguro, and author
G. Saito, title title Low temperature superstructure and transfer integrals in
κ-(BEDT-TTF)_2Cu[N(CN)_2]X: X = Cl, Br, https://doi.org/https://doi.org/10.1016/S0379-6779(98)00615-8 journal journal Synt. Metals volume
103, pages 1909 (year 1999)NoStop
[Schultz et al.(1991)Schultz, Beno, Geiser, Wang, Kini, Williams, and Whangbo]schu91
author author A. J. Schultz, author M. A. Beno,
author U. Geiser, author H. Wang, author
A. M. Kini, author
J. M. Williams, and author
M.-H. Whangbo, title
title Single-crystal X-ray and neutron diffraction
investigations of the temperature dependence of the structure of the T_c
= 10 K organic superconductor κ-(ET)_2Cu(NCS)_2, https://doi.org/https://doi.org/10.1016/0022-4596(91)90201-R journal journal J. Solid State Chem. volume 94, pages 352 (year
1991)NoStop
[com(b)]comm_F0
@noop note For the κ-Cl salt
the zero-pressure value was estimated from the linear extrapolation of the
F_β(p) data in the p interval 20 to 200 MPa, see
inset in Fig. <ref>(a).Stop
[Schultz et al.(1994)Schultz, Wang, Williams, Finger, Hazen, Rovira, and Whangbo]schu94
author author A. J. Schultz, author H. H. Wang,
author J. M. Williams, author L. W. Finger, author
R. M. Hazen, author
C. Rovira, and author
M.-H. Whangbo, title
title X-ray diffraction and electronic band structure study of
the organic superconductor κ-(ET)_2Cu[N(CN)_2]Cl at pressures up
to 28 kbar, https://doi.org/https://doi.org/10.1016/0921-4534(94)90577-0 journal journal Physica C: Supercond. volume 234, pages 300 (year
1994)NoStop
[Rahal et al.(1997)Rahal,
Chasseau, Gaultier, Ducasse,
Kurmoo, and Day]raha97
author author M. Rahal, author D. Chasseau,
author J. Gaultier, author L. Ducasse, author
M. Kurmoo, and author
P. Day, title title Isothermal compressibility and pressure dependence of the crystal
structures of the superconducting charge-transfer salt
κ-(BEDT-TTF)_2Cu(NCS)_2 [BEDT-TTF =
bis(ethylenedithio)tetrathiafulvalene], https://doi.org/10.1107/S0108768195012122 journal journal Acta Crystallogr. volume B53, pages 159 (year 1997)NoStop
[Lifshitz and Kosevich(1956)]lifs55
author author I. M. Lifshitz and author A. M. Kosevich, title title Theory of magnetic
susceptibility in metals at low temperatures, http://jetp.ras.ru/cgi-bin/e/index/e/2/4/p636?a=list journal
journal Sov. Phys. JETP volume 2, pages 636 (year 1956)NoStop
[Merino and McKenzie(2000)]meri00a
author author J. Merino and author R. H. McKenzie, title title Cyclotron effective
masses in layered metals, https://doi.org/10.1103/PhysRevB.62.2416
journal journal Phys. Rev. B volume 62, pages 2416 (year
2000)NoStop
[Semeniuk et al.(2023)Semeniuk, Chang, Baglo, Friedemann, Tozer, Coniglio, Gamża, Reiss, Alireza, Leermakers, McCollam, Grockowiak, and Grosche]seme23
author author K. Semeniuk, author H. Chang,
author J. Baglo, author S. Friedemann, author
S. W. Tozer, author
W. A. Coniglio, author
M. B. Gamża, author
P. Reiss, author P. Alireza, author I. Leermakers, author A. McCollam, author A. D. Grockowiak, and author F. M. Grosche, title title Truncated
mass divergence in a Mott metal, https://doi.org/10.1073/pnas.2301456120 journal journal Proc. Natl. Acad. Sci. USA volume 120, pages e2301456120 (year 2023)NoStop
[com(c)]comm_rang
@noop note Three lowest-pressure
points for κ-Cl, taken at p < 40 MPa, were excluded from fitting. These
points belong to the phase-coexistence region of the phase diagram, where an
anomalous acceleration of the pressure dependence is observed <cit.>.
Consistently, these lie above the fitting curve.Stop
[Com()]Comment_mass
@noop note The band cyclotron mass is ultimately
determined by the electron density of states on the Fermi level
<cit.>, which is quite sensitive to the details of the band-structure
calculations and may differ by up to 30% in different works, cf., e.g.,
Refs. <cit.> and <cit.>.Stop
[Zverev et al.(2019)Zverev,
Biberacher, Oberbauer, Sheikin, Alemany, Canadell, and Kartsovnik]zver19
author author V. N. Zverev, author W. Biberacher,
author S. Oberbauer, author I. Sheikin, author
P. Alemany, author E. Canadell, and author M. V. Kartsovnik, title title
Fermi surface properties of the bifunctional organic metal
κ-(BETS)_2Mn[N(CN)_2]_3
near the metal-insulator transition, https://doi.org/10.1103/PhysRevB.99.125136 journal journal Phys. Rev. B volume 99, pages 125136 (year 2019)NoStop
[com()]comm_caul-
@noop note Our data on the α mass in κ-NCS is
about 10% lower than reported by Caulfield et al. <cit.>. While the
reason for the discrepancy between the two datasets is not clear, both
exhibit essentially the same pressure dependence.Stop
[Goddard et al.(2004)Goddard, Blundell, Singleton, McDonald, Ardavan, Narduzzo, Kini, and Sasaki]godd04
author author P. A. Goddard, author S. J. Blundell, author J. Singleton,
author R. D. McDonald, author A. Ardavan, author
A. Narduzzo, author
J. A. Schlueter, author A. M. Kini, and author T. Sasaki, title
title Angle-dependent magnetoresistance of the layered organic
superconductor κ-(ET)_2Cu(NCS)_2: Simulation and experiment, @noop journal journal Phys. Rev. B volume 69, pages 174509 (year 2004)NoStop
[Gutman and Maslov(2008)]gutm08
author author D. B. Gutman and author D. L. Maslov, title title Boson-assisted tunneling
in layered metals, https://doi.org/10.1103/PhysRevB.77.035115
journal journal Phys. Rev. B volume 77, pages 035115 (year
2008)NoStop
[Ho and Schofield(2005)]ho05
author author A. F. Ho and author A. J. Schofield, title title c-axis transport in
highly anisotropic metals: Role of small polarons, https://doi.org/10.1103/PhysRevB.71.045101 journal journal Phys. Rev. B volume 71, pages 045101 (year 2005)NoStop
[Analytis et al.(2006)Analytis, Ardavan, Blundell, Owen, Garman, Jeynes, and Powell]anal06
author author J. G. Analytis, author A. Ardavan,
author S. J. Blundell, author R. L. Owen, author
E. F. Garman, author
C. Jeynes, and author
B. J. Powell, title
title Effect of irradiation-induced disorder on the conductivity
and critical temperature of the organic superconductor
κ-(BEDT-TTF)_2Cu(SCN)_2, https://doi.org/10.1103/PhysRevLett.96.177002 journal
journal Phys. Rev. Lett. volume 96, pages 177002 (year 2006)NoStop
[Hiramatsu et al.(2015)Hiramatsu, Yoshida, Saito, Otsuka, Yamochi, Maesato, Shimizu, Ito, and Kishida]hira15
author author T. Hiramatsu, author Y. Yoshida,
author G. Saito, author A. Otsuka, author
H. Yamochi, author M. Maesato, author Y. Shimizu, author H. Ito, and author H. Kishida, title title Quantum
spin liquid: design of a quantum spin liquid next to a superconducting state
based on a dimer-type ET Mott insulator, https://doi.org/10.1039/C4TC01701C journal journal J. Mater. Chem. C volume 3, pages 1378 (year 2015)NoStop
[Winter et al.(2017)Winter,
Riedl, and Valentí]wint17
author author S. M. Winter, author K. Riedl, and author R. Valentí, title title Importance of spin-orbit coupling in
layered organic salts, https://doi.org/10.1103/PhysRevB.95.060404
journal journal Phys. Rev. B volume 95, pages 060404 (year
2017)NoStop
[Grigoriev et al.(2002)Grigoriev, Kartsovnik, Biberacher,
Kushch, and Wyder]grig02b
author author P. D. Grigoriev, author M. V. Kartsovnik, author W. Biberacher, author N. D. Kushch, and author P. Wyder, title title Anomalous beating phase of
the oscillating interlayer magnetoresistance in layered metals, https://doi.org/10.1103/PhysRevB.65.060403 journal journal Phys. Rev. B volume 65, pages 060403(R) (year 2002)NoStop
[Grigoriev(2003)]grig03
author author P. D. Grigoriev, title title Theory of the
Shubnikov–de Haas effect in quasi-two-dimensional metals, https://doi.org/10.1103/PhysRevB.67.144401 journal journal Phys. Rev. B volume 67, pages 144401 (year 2003)NoStop
[Xu et al.(1995)Xu,
Ching, Jean, and Lou]xu95
author author Y.-N. Xu, author W. Y. Ching,
author Y. C. Jean, and author Y. Lou, title title First-principles calculation of the electronic and
optical properties of the organic superconductor
κ-(BEDT-TTF)_2Cu(NCS)_2, https://doi.org/10.1103/PhysRevB.52.12946 journal journal Phys. Rev. B volume 52, pages 12946 (year 1995)NoStop
[Ferber et al.(2014)Ferber,
Foyevtsova, Jeschke, and Valentí]ferb14
author author J. Ferber, author K. Foyevtsova,
author H. O. Jeschke, and author R. Valentí, title title Unveiling the microscopic nature of
correlated organic conductors: The case of κ-(BEDT-TTF)_2Cu[N(CN)_2]Br_xCl_1-x, https://doi.org/10.1103/PhysRevB.89.205106 journal journal Phys. Rev. B volume 89, pages 205106 (year 2014)NoStop
|
http://arxiv.org/abs/2409.02305v1 | 20240903213216 | Kinesthetic Teaching in Robotics: a Mixed Reality Approach | [
"Simone Macciò",
"Mohamad Shaaban",
"Alessandro Carfì",
"Fulvio Mastrogiovanni"
] | cs.RO | [
"cs.RO"
] |
Kinesthetic Teaching in Robotics: a Mixed Reality Approach
Simone Macciò, Mohamad Shaaban, Alessandro Carfì, Fulvio Mastrogiovanni
============================================================================================
§ ABSTRACT
As collaborative robots become more common in manufacturing scenarios and adopted in hybrid human-robot teams, we should develop new interaction and communication strategies to ensure smooth collaboration between agents. In this paper, we propose a novel communicative interface that uses Mixed Reality as a medium to perform Kinesthetic Teaching (KT) on any robotic platform. We evaluate our proposed approach in a user study involving multiple subjects and two different robots, comparing traditional physical KT with holographic-based KT through user experience questionnaires and task-related metrics.
Human-Robot Interaction, Mixed Reality, Kinesthetic Teaching, Software Architecture.
§ INTRODUCTION
In smart factories, robots are expected to coexist and work alongside humans rather than replace them. This new manufacturing paradigm has led to the development of collaborative robots, which are adaptive and highly versatile platforms <cit.> that can work alongside human workers. Despite its growing popularity, Human-Robot Collaboration (HRC) is still far from reaching maturity, as multiple research facets are yet to be tackled. One such aspect involves developing a structured communication enabling agents to exchange information intuitively <cit.>. As multiple social studies have shown <cit.>, effective bi-directional communication is crucial for successful collaboration, as it allows agents to infer each other's actions, synchronize, and receive appropriate feedback from their teammates. Conversely, poor communication can lead to misunderstandings, failed interactions, and consequent distrust in the robot teammate <cit.>.
Designing a comprehensive communication interface is a complex task that requires selecting an appropriate communicative channel. One of the most promising approaches combines Mixed Reality (MR) with wearable Head-Mounted Displays (HMD), enabling the creation of engaging holographic interfaces where users perceive 3D digital content superimposed onto the surrounding scene <cit.>. This virtual layer can act as a communicative channel to achieve intuitive human-robot communication. In this regard, few works have focused on using MR to preview a robot's intentions and upcoming actions <cit.>, offering helpful visual feedback to the human teammate during collaboration. In our previous work <cit.>, we mainly focused on robot-to-human communication, introducing the concept of communicative act and formalizing the communication for conveying the robot's intentions via holographic cues.
In this paper, we investigate human-to-robot communication by leveraging MR to allow operators to teach robots through holographic communication. In particular, we embrace the Learning from Demonstration (LfD) approach <cit.>, postulating that LfD sessions can be viewed as communication acts aimed at transferring skills from a human operator to a robot teammate through explicit actions or gestures. Specifically, our work is focused on one branch of LfD, namely Kinesthetic Teaching (KT), a well-known teaching technique in which human operators manually drive the robot's arm or end-effector, enabling the machine to learn new actions from direct demonstration. In the context of this work, we claim that such a teaching methodology can be framed into the communicative space introduced in <cit.>. Therefore, throughout the paper, we provide an analytical formalization of KT in the proposed communicative framework and translate it into a modular software component, which enables KT in human-robot interactive scenarios through holographic communication. Our proposed approach, while leveraging MR for intuitive and straightforward communication between humans and robots, adheres to the LfD paradigm, providing a holographic tool to demonstrate skills to the robot teammate in HRC. Furthermore, given the unconstrained nature of the MR space where the KT session takes place, our proposed strategy potentially opens up the possibility of performing KT on any robotic platform compatible with the Unified Robot Description Format (URDF).
In addition to presenting such a holographic-based tool for KT, we evaluate its effectiveness in demonstrating tasks to robots and its perceived user experience (UX). Specifically, we claim that the holographic-based KT approach can serve as a suitable alternative to traditional, hand-guided KT in scenarios where the latter is not available or not implemented for a particular robot platform. To test this hypothesis, we conducted a preliminary user study with 12 subjects and two robots, comparing the two KT alternatives using task-based metrics and UX questionnaires.
The paper is organized as follows. Section <ref> reports a review of relevant literature. Section <ref> formalizes KT inside the holographic communication space, whereas Section <ref> details the implementation of the software components. Section <ref> and Section <ref> respectively discuss the experimental scenario devised to test the holographic KT approach and the user study results. Finally, Section <ref> provides conclusions and possible extensions for this work.
§ BACKGROUND
Over the years, various communication strategies have been explored and adopted in HRC, involving both explicit media (e.g., voice <cit.>, upper limb gestures <cit.>, light and visual cues <cit.>) and implicit ones (e.g. gaze <cit.>, posture and body motions <cit.>). However, most of these approaches have intrinsic limitations and cannot be employed for developing a bi-directional communication interface, thus limiting their adoption to a subset of collaborative applications. For example, human-like communication involving gestures and gaze may be expressive and intuitive, but most collaborative platforms physically lack the features needed to replicate such cues.
With the introduction of Augmented Reality (AR) in mobile devices like smartphones and tablets, a new virtual layer could be exploited by researchers to enable intuitive and straightforward communication between human and robot teammates <cit.>. This approach has become even more relevant with the adoption of MR-HMD devices, which offer a whole new level of immersion and make it possible to develop interfaces for either programming robots' behaviours <cit.> or getting intuitive feedback throughout the interaction <cit.>. In this context, researchers also focused on conveying robot's intentions via MR, evaluating intuitive and expressive strategies for robots to anticipate their actions via holographic cues during interactive tasks efficiently <cit.>.
While extensive research covers how robots can effectively communicate with human teammates via MR, only a few works have explored how we can leverage this holographic medium for intuitive and straightforward human-to-robot communication, particularly in LfD. In this context, popular approaches at LfD rely on computer vision to transfer desired motions using passive observation of human actions <cit.>, or make use of hand-tracking devices to teach skills through teleoperation-based LfD <cit.>. While providing a straightforward communication interface to transfer skills to the robotic teammate, these approaches generally require a structured environment and complex calibration routines, which may limit their application in real-world settings. On the contrary, adopting MR as a communication medium for LfD could mitigate these drawbacks, as MR-HMDs are naturally designed for unstructured environments and could provide similar demonstration capabilities with minimum calibration and setup.
Focusing on the particular branch of KT, some of the earliest attempts at combining KT and MR still relied on the physical robot for hand guidance and demonstration and employed the holographic medium only for later visualizing the learned robot action and for adding constraints to the motion <cit.>. MR-based communication to achieve KT is foreshadowed in <cit.>, where the authors exploit the hand-tracking capabilities of MR-HMD devices to manually drive the individual joints of an industrial robotic manipulator, teaching motions to the machine in the process. Similarly, in <cit.> a system is presented where a tabletop holographic robot can be taught a simple pick-and-place task via holographic hand guidance. Finally, a recent work <cit.> proposed an MR interface for intuitively teaching trajectories to a holographic collaborative manipulator. All of the aforementioned works, however, lack a homogeneous, structured representation of the underlying communication acts allowing operators to transfer skills to the robotic teammate. Additionally, they lack an empirical assessment of the demonstration capacities and perceived users' experience of these solutions.
Therefore, unlike previous research, in the present article we aim at consistently framing KT inside the holographic communication space introduced in <cit.> and present a standalone approach for MR-based KT for any robot which can be described through the URDF format. Furthermore, we provide an experimental evaluation of the communicative capabilities offered by our MR-based KT tool, assessing the learned robot skills in an interactive human-robot task. Finally, the proposed framework, adhering to the open-source paradigm, is made publicly available to other researchers and companies, who can employ it off-the-shelf as an alternative to traditional KT with any URDF-compatible robot, with minimum hardware setup required[<https://github.com/TheEngineRoom-UniGe/RICO-MR/tree/kt>].
§ FORMALIZATION
Recalling the definition provided in <cit.>, we describe communication as the act of conveying or transmitting pieces of information (I) through one or more communicative channels. It is noteworthy to mention that, in general, conveying a single piece of information may involve simultaneously multiple channels to strengthen the clarity of the communicative act itself. For example, human-human communication often combines verbal and gestural media to be meaningful and unambiguous. Following this principle, and denoting M = {m_1, …, m_|M|} the set of all possible communicative media available (e.g., voice, gestures, gaze and so on), we provided the general formulation of a communicative act, namely
C(I, t) = ⋃_i=1^N C_m_i (I, t_i) ,
where t represents the time interval associated with the overall communication, whereas the intervals t_i span the duration of the individual components of the communication act.
Here, we leverage such formalization to frame KT inside the holographic communication space developed for <cit.>. The first step requires identifying the relevant information exchanged during KT sessions. In particular, we argue that the act of KT implies teaching robots about their future states, denoted as τ. Without loss of generality, such a notion of robot state includes the robot's pose x(t) (that is, its position and orientation in the environment) and its joint configuration q(t). Consequently, we can formalize the robot's state as
τ(t) = {x(t), q(t) } .
This, in turn, provides us with a suitable representation of the set of information I which can be conveyed through KT, namely I = {τ(t) } . Having defined the set I, we observe that KT is achieved by hand-guiding the robot's wrist or end-effector. According to our proposed formalism, this act involves a gesture-mediated communication C_gest that enables users to teach robots about their future states in a simple way and can be described as follows:
C_gest(I, t_gest) = 𝒯(t_gest) ,
where 𝒯(t_gest) describes the robot trajectory that is conveyed via gestural guidance during the interval t_gest spanning the KT session and is defined as
𝒯(t_gest) = {τ(t_gest, s), …, τ(t_gest, e) } ,
with t_gest, s and t_gest, e representing the temporal endpoints of the taught robot trajectory.
With this formalization in mind, we claim that KT can be translated and framed into the holographic communication space envisioned in <cit.> by letting users convey robots' trajectories via gestural guidance on a virtual counterpart of the robot. As already mentioned, the unconstrained nature of the MR space allows for such a form of KT while solely relying on the built-in hand-tracking capabilities of the MR-HMD device. Additionally, such decoupling between physical and holographic layers could be particularly effective in production environments, as the operators could leverage the virtual robot to program or teach upcoming tasks, without halting the execution of real robotic chains.
To further strengthen the communicative framework and ensure a more natural interaction, we postulate that adding the vocal medium would improve users' experience, enabling them to control more detailed aspects of the KT session, including the start and stop on the taught robot trajectory, or the possibility to open and close the robot's gripper for teaching pick-and-place actions. According to such modelling, the holographic-based KT process is translated into a communication act combining gestural and vocal interaction and, as such, can be formalized as follows:
C^KT(I, t) = C_gest(I, t_gest) ∪ C_voc(I, t_voc) .
This formalization, combined with equation (<ref>), describes the building blocks of the communication act taking place during the proposed holographic-based KT process. In the following paragraph, these building blocks are translated into modular software components and integrated into a preexisting MR-based architecture.
§ SOFTWARE ARCHITECTURE
The software components developed in the context of this work constitute a modular extension of the open-source architecture, named Robot Intent Communication through Mixed Reality (RICO-MR), which is introduced and detailed in <cit.>. The features described in this paragraph are publicly available under MIT licence in a separate branch of the main RICO-MR repository. A link to the repository is included at the end of Section <ref>.
The proposed architecture exploits functionalities developed for RICO-MR to achieve the holographic KT envisioned in Section <ref>. However, currently, the architecture allows holographic KT with fixed manipulators only. As such, we introduce a simplification in the formalization provided in (<ref>), and we hereafter refer to the notion of robot state to indicate its joint configuration q(t) only.
§.§ Mixed Reality Application
An MR application, built with Unreal Engine 4.27 (UE4) and deployed on the embedded HMD device worn by the user, drives the whole holographic interface. A hand-attached menu enables the user to select robot models from a list of predefined ones, making it possible to load and spawn holographic robots in the environment. Aside from the pre-loaded models that ship with the current architecture version, the list of supported robots can be extended by uploading relevant resources (i.e., URDF files) to a remote repository, which can be customized in the application's settings. As such, it is possible to employ the proposed application to carry out KT with any URDF-compliant robot.
Upon selecting the robot model, users can spawn it in the environment using a QR code as a spatial anchor, taking advantage of Unreal's marker detection capabilities. Along with the robot model, a grey holographic sphere, visible in Fig. <ref>, is spawned and superimposed on the robot's wrist.
This sphere serves as a point of interaction between the human and the robot. Using the hand-tracking capabilities of the HMD, the human can directly manipulate the sphere by controlling its rotation and translation in space. The robot, in turn, follows the sphere and aligns its wrist's pose with it by solving the Inverse Kinematics (IK). To this extent, the Denavit-Hartenberg (DH) parameters necessary for the computation of the IK are extracted from the robot model's URDF and fed to the IK Module, which continuously computes the joint configuration needed to achieve the desired pose of the wrist. Specifically, the IK computation occurs with a rate of 30 Hz. As such, by interacting with the grey sphere and hand-guiding it, users can communicate future robot's states and, consequently, teach trajectories and actions to the robot teammate.
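For exposition only, one possible position-only IK step of this kind is sketched below in Python/NumPy; the actual module runs inside the UE4 application, so the damped least-squares scheme, the numeric Jacobian, and the DH rows, damping factor, and tolerance shown here are simplifying assumptions rather than the implemented solution.

```python
# Minimal damped least-squares IK sketch (position only, for illustration).
import numpy as np

def dh_transform(a, alpha, d, theta):
    ct, st, ca, sa = np.cos(theta), np.sin(theta), np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.0,      sa,       ca,      d],
                     [0.0,     0.0,      0.0,    1.0]])

def wrist_position(dh_rows, q):
    """dh_rows: list of (a, alpha, d, theta_offset) tuples parsed from the URDF."""
    T = np.eye(4)
    for (a, alpha, d, theta0), qi in zip(dh_rows, q):
        T = T @ dh_transform(a, alpha, d, theta0 + qi)
    return T[:3, 3]

def solve_ik(dh_rows, q, target, iters=50, damping=0.05, tol=1e-4):
    """Iterate the joint configuration towards the holographic sphere position."""
    q = np.asarray(q, dtype=float).copy()
    for _ in range(iters):
        error = target - wrist_position(dh_rows, q)
        if np.linalg.norm(error) < tol:
            break
        J = np.zeros((3, q.size))  # numeric Jacobian of the wrist position
        for j in range(q.size):
            dq = np.zeros(q.size); dq[j] = 1e-6
            J[:, j] = (wrist_position(dh_rows, q + dq) - wrist_position(dh_rows, q)) / 1e-6
        q += J.T @ np.linalg.solve(J @ J.T + damping**2 * np.eye(3), error)
    return q
```

A complete implementation would also track the sphere's orientation and respect joint limits; both aspects are omitted here for brevity.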
Consistently with the formalization given in Section <ref>, a voice interface is also active inside the MR application. Four basic commands are available, ensuring that the user can control the start/stop of the KT session and the open/closed state of the robot's gripper, offering the possibility to teach more complex motions such as pick-and-place or handover actions.
§.§ Recording and Playback
While the MR application provides the holographic interface to perform KT, recording and subsequent playback of the robot's actions are respectively managed through Apache Kafka and the Robot Operating System (ROS) <cit.> framework. On the one hand, we take advantage of Kafka, an open-source, high-performance data streaming platform, for input/output data exchange with the MR application. Kafka provides numerous advantages for real-time data streaming applications, including cloud integration and scalability, and it has been adopted for developing RICO-MR <cit.>. In this context, we use Kafka to stream the robot's states at a rate of 20 Hz, beginning as soon as the user signals the start of the KT session through a vocal command.
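As an illustration, the producer side of such a stream could look as follows; this Python sketch relies on the kafka-python client, whereas the actual streaming is performed by the UE4 application, and the broker address, topic name, and message fields are assumptions rather than the architecture's exact schema.

```python
# Illustrative 20 Hz producer of robot states (hypothetical topic and schema).
import json, time
from kafka import KafkaProducer  # kafka-python client

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda m: json.dumps(m).encode("utf-8"))

def stream_state(joint_positions, gripper_closed):
    # One robot state tau(t): joint configuration plus gripper flag.
    producer.send("kt_robot_states", {
        "stamp": time.time(),
        "q": list(joint_positions),
        "gripper_closed": gripper_closed,
    })

# Recording loop, entered on the "start" voice command and left on "stop";
# here three dummy states are streamed at 20 Hz.
for q in ([0.0] * 7, [0.1] * 7, [0.2] * 7):
    stream_state(q, gripper_closed=False)
    time.sleep(1.0 / 20)
producer.flush()
```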
On the other hand, two ROS nodes act respectively as Buffer for the robot trajectory streamed through Kafka and Playback of the recorded motion. The Buffer Node subscribes to the Kafka topic to access the robot's states, and it saves them to file for later execution. To this end, a ROS-Kafka Interface has been developed to convert incoming Kafka messages into their equivalent ROS representation. Finally, the Playback Node forwards state commands to the internal low-level controller of the robot at the same rate as the recording to reproduce the desired motion.
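A minimal sketch of the playback side is given below (Python/rospy); the topic name, message type, and on-disk session format are assumptions, since the real Playback Node forwards state commands to the specific low-level controller interface of each robot.

```python
# Illustrative playback of a buffered KT session at the recording rate (20 Hz).
import json
import rospy
from sensor_msgs.msg import JointState

def playback(session_file="kt_session.json"):
    rospy.init_node("kt_playback")
    pub = rospy.Publisher("/robot/joint_command", JointState, queue_size=10)
    rate = rospy.Rate(20)      # same rate as the recording
    with open(session_file) as f:
        states = json.load(f)  # list of {"q": [...], "gripper_closed": bool}
    for state in states:
        if rospy.is_shutdown():
            break
        msg = JointState()
        msg.header.stamp = rospy.Time.now()
        msg.position = state["q"]
        pub.publish(msg)
        rate.sleep()

if __name__ == "__main__":
    playback()
```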
§ EXPERIMENTAL VALIDATION
§.§ Hypotheses and Experimental Scenario
The experimental campaign carried out in this study aims to determine if our proposed holographic KT approach can act as a suitable alternative to standard, physical KT, both in terms of demonstration capabilities and perceived user experience.
To achieve our goal, we devised a human-robot interactive scenario to compare traditional physical kinesthetic teaching (KT), where the operator manually controls the robot's kinematic chain, with our proposed holographic approach. To ensure more generalized results, we conducted experiments using two different robots. In particular, we opted for Baxter <cit.> from Rethink Robotics and Tiago++ <cit.> from Pal Robotics, both being well-known platforms adopted in relevant research studies <cit.> and natively endowed with the necessary software and hardware components to achieve physical KT. Similarly, the HMD platform employed for rendering the holographic medium is a Microsoft HoloLens 2, a popular MR headset offering many features, including state-of-the-art hand tracking and voice interaction.
From a formal point of view, to provide a thorough comparison between physical KT and holographic KT, we have come up with the following hypotheses, which have been evaluated through preliminary user study:
H1 There is no observable difference between actions taught through physical or holographic KT, namely the two approaches provide equivalent communicative power, leading to similar playback outcomes;
H2 No difference can be observed in terms of temporal overhead when demonstrating actions through either physical or holographic KT;
H3 No difference can be observed between the two approaches in terms of perceived UX during the demonstration process.
Regarding the interactive task employed to evaluate the two KT alternatives, a simple stacking task has been devised. Specifically, the human should use KT to teach a sequence of pick-and-place actions aimed at stacking four cubes on top of each other according to a predefined order. Fig. <ref> depicts the experimental scenario, showing a user in the middle of a physical KT session with the Baxter robot.
§.§ User Study
We carried out a within-subject experimental campaign with K=12 volunteers (9 males and 3 females), aged between 21 and 32 (Avg = 26.3, StdDev = 3.07) and having limited or no experience with MR and HMD devices. The subjects were divided into two groups: the first group performed the experiment with Tiago++, while the second group used Baxter. In both groups, subjects were asked to perform the KT session in two different experimental conditions, namely
C1 Without wearing the HMD and performing physical, hand-guided KT.
C2 Wearing the HMD and performing holographic KT.
To avoid introducing unwanted biases, the starting experimental condition for each subject was randomized. Participants were initially instructed on the stacking task and assigned an arbitrary order for the cubes to be collected. Then, they performed their first trial, in condition C1 or C2. However, before beginning the experiment with HMD on (i.e., condition C2), subjects were also briefly instructed on how to interact with the HoloLens holographic menus and interface. Then, once accustomed, they proceeded to carry out their trial. Subsequently, each subject repeated the experiment in the opposite condition. To achieve a consistent KT experience, the holographic interface in condition C2 also included four virtual cubes placed coherently with their real-world counterparts, as shown in Fig. <ref>. Such virtual cubes were physics-enabled and behaved like the real ones, aiding the participant in recording the holographic KT session. In both cases, the voice interface was active for controlling the start/stop of the KT session and the open/closed state of the robot's gripper. However, while in condition C2 the vocal interface was embedded into the MR application running on the HoloLens 2, in condition C1 it was simulated thanks to a Wizard of Oz approach.
After successfully completing each KT session, the playback phase was manually triggered, causing the robot to reproduce the taught action. This phase allowed us to rank the KT session quantitatively by combining two distinct variables, useful in evaluating H1 and H2. On the one hand, we counted the number of cubes successfully stacked by the robot during playback. As such, we were able to evaluate the communicative capabilities of each KT alternative, assessing how well the combination of vocal and gestural interface translated into the corresponding robot action. On the other hand, we recorded the duration of each demonstration session and employed such quantity to compare the two KT techniques in terms of time necessary to teach the full stacking task.
Finally, after completing their trials, each participant was required to fill out the User Experience Questionnaire (UEQ) <cit.>, a well-known survey useful for ranking and comparing interactive products. In particular, such a questionnaire allows grading the UX of a given product through six evaluation scales, namely attractiveness, perspicuity, efficiency, dependability, stimulation and novelty. In accordance with hypothesis H3, to provide a consistent comparison between the two KT techniques, each participant compiled the UEQ twice, thus evaluating both physical and holographic KT sessions from a UX point of view.
§ RESULTS
We hereby report and discuss the results obtained from our preliminary user study. In particular, we observed that, regardless of the robot, the two groups of subjects achieved comparable results when teaching the stacking task in both experimental conditions. As such, Fig. <ref> reports only the aggregated results, comparing conditions C1 and C2 without discerning the interactions occurred with Tiago++ or Baxter. The histograms show the percentage of playback sessions where the robot successfully stacked a certain number of cubes. For example, in both experimental conditions, around 40% of the subjects achieved a flawless KT, resulting in the robot successfully stacking all four cubes while replaying the taught trajectory.
By observing the plots of Fig. <ref>, it is possible to note how physical and holographic KT yielded comparable results. Taking into account that such distributions could not be assumed normal, we chose to perform a statistical evaluation of the two conditions via a non-parametric test, namely a one-tailed Wilcoxon signed-rank test <cit.>. The test provided a statistic W = 20, with p-value > 0.3. This result was compared with the critical value W_c obtained from the literature <cit.> by fixing the population size K and the significance level α = 0.05. The corresponding critical value was W_c = 17. Observing the condition W > W_c, we could not reject the null hypothesis.
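For reference, this kind of paired, one-tailed comparison can be reproduced with off-the-shelf statistical libraries; the SciPy sketch below is purely illustrative, and the per-participant score vectors are placeholders rather than the actual study data.

```python
# Illustrative paired, one-tailed Wilcoxon signed-rank test between conditions.
from scipy.stats import wilcoxon

cubes_physical    = [4, 3, 4, 2, 3, 4, 1, 4, 3, 2, 4, 3]  # condition C1 (hypothetical)
cubes_holographic = [4, 3, 3, 2, 4, 4, 2, 3, 3, 2, 4, 3]  # condition C2 (hypothetical)

W, p = wilcoxon(cubes_physical, cubes_holographic, alternative="greater")
print(f"W = {W}, p = {p:.3f}")  # W is then compared against the tabulated critical value W_c
```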
This result may indicate that our initial hypothesis H1 was correct, suggesting that the two communicative interfaces (i.e., physical and holographic) ensure consistent performances while executing KT.
Regarding the overall time needed to perform KT, we observed that in condition C2 participants were always slower because of their limited expertise with MR devices. As such, we performed a differential analysis by computing, for each participant, the difference in time taken to complete the KT session between conditions C2 and C1. These results are reported in Fig. <ref>. The boxplots highlight that holographic KT lasted, on average, 44 and 32 seconds longer than the corresponding physical sessions for Tiago++ and Baxter, respectively. Compared with the average times measured to complete the physical KT sessions with the two robots, the MR-based approach introduced a mean temporal overhead of 37% and 33%, respectively.
Statistically, this result is corroborated by a one-tailed t-test carried out on the original distributions, which yielded p-values < 0.05, therefore enabling us to reject the null hypothesis for H2. Nevertheless, although these preliminary results suggest that the holographic demonstration process is slower than the physical one, we argue that the individuals' limited experience with MR devices played a major role in increasing the time taken to teach the stacking task. Consequently, further study could be undertaken with a more expert population to corroborate or revisit this finding.
Nonetheless, Fig. <ref> shows no significant difference between temporal overheads when using one robot or the other. This result is also confirmed by a one-tailed t-test on the two differential distributions, which yielded a p-value > 0.2. In other words, the overhead introduced by the MR medium was consistent across the two robots.
Finally, Fig. <ref> reports the results obtained from the UEQ questionnaires, grouped per evaluation scale and robot type. Here, scores range in the interval [-3, 3], with positive values indicating features that users appreciate given a particular interface. Specifically, Fig. <ref> and <ref> highlight that both KT approaches provided comparable results in terms of efficiency and perspicuity (i.e., how intuitive and pragmatic the interface appeared to users), regardless of the robot employed. Such results are corroborated by statistical analysis performed through the Kruskal-Wallis test <cit.>, a non-parametric ANOVA. The test yielded, for both scales, p-values > 0.05, indicating no significant difference between the distributions. Again, this result could suggest that the hypothesis H3 was correct, with both KT strategies leading to similar perceived UX. It is also worth mentioning that holographic KT scored particularly well in terms of attractiveness, stimulation and novelty, suggesting that participants found the interaction with the holographic environment more engaging and original compared to the physical one. The only scale where holographic KT did a slightly worse job is dependability, which measures how safe and predictable the users perceive a given interface. In this case, physical KT was still perceived as more predictable, particularly with the robot Baxter, compared to the MR-based approach, which nonetheless obtained positive scores with both robots.
§ CONCLUSIONS
In this paper, we proposed a novel communicative interface based on MR to achieve KT with any URDF-compatible robotic manipulator platform. We built on top of our previous works and expanded our communicative framework <cit.> to account for holographic-based KT as a form of human-to-robot communication. Then, we presented a software architecture translating the formalization into a practical MR application running on embedded HMD devices. We compared holographic KT with standard, physical KT in a preliminary user study involving multiple subjects and two different robots. The results suggest that holographic KT behaves comparably to physical KT, achieving similar task-based performances and user experience. This finding suggests that the proposed methodology could be adopted as a suitable alternative to physical KT in experimental and manufacturing scenarios, decoupling the demonstration process and enabling operators to program robot tasks in the MR space, without halting the production flow of the machine.
In future works, we will evaluate whether these findings can be generalized by conducting user studies on a wider population, considering different robots, and more structured human-robot interaction scenarios where the individual is required to teach more complex tasks through holographic KT.
|
http://arxiv.org/abs/2409.03288v1 | 20240905065059 | Enhancing Clinical Data Warehouses with Provenance and Large File Management: The gitOmmix Approach for Clinical Omics Data | [
"Maxime Wack",
"Adrien Coulet",
"Anita Burgun",
"Bastien Rance"
] | q-bio.QM | [
"q-bio.QM"
] |
Enhancing Clinical Data Warehouses with Provenance and Large File Management: The gitOmmix Approach for Clinical Omics Data
Maxime Wack1,2,3,4,
Adrien Coulet1,2,
Anita Burgun1,2,5,
Bastien Rance1,2,3,⋆
1 Centre de Recherche des Cordeliers, Inserm, Université Paris Cité, Sorbonne Université, Paris, France
2 Inria Paris, Paris, France
3 Department of Biomedical Informatics, Hôpital Européen Georges Pompidou, AP-HP, Paris, France
4 Centre Hospitalier National d'Ophtalmologie des Quinze-Vingts, IHU FOReSIGHT, 75012 Paris, France
5 Imagine Institute, Inserm UMR 1163, Université Paris Cité, Paris, France
⋆ corresponding author:
Abstract
Background
Clinical data warehouses (CDWs) are essential in the reuse of hospital data in observational studies or predictive modeling.
However, state-of-the-art CDW systems present two drawbacks. First, they do not support the management of large data files, which is critical in medical genomics, radiology, digital pathology, and other domains where such files are generated.
Second, they do not provide provenance management or means to represent longitudinal relationships between patient events.
Indeed, a disease diagnosis and its follow-up rely on multiple analyses.
In these cases, no relationship between the data (e.g., a large file) and its associated analysis and decision can be documented.
Method
We introduce gitOmmix, an approach that overcomes these limitations, and illustrate its usefulness in the management of medical omics data.
relies on
(i) a file versioning system: git, (ii) an extension that handles large files: git-annex, (iii) a provenance knowledge graph: PROV-O, and (iv) an alignment between the git versioning information and the provenance knowledge graph.
Results
Capabilities inherited from git and git-annex enable retracing the history of a clinical interpretation back to the patient sample, through supporting data and analyses.
In addition, the provenance knowledge graph, aligned with the git versioning information, enables querying and browsing provenance relationships between these elements.
Conclusion
gitOmmix adds a provenance layer to CDWs, while scaling to large files and being agnostic of the CDW system.
For these reasons, we think that it is a viable and generalizable solution for omics clinical studies.
Keywords: phenotyping, clinical texts, feature extraction, reproducible computing, open science
§ GRAPHICAL ABSTRACT
§ INTRODUCTION
Background
With the rise of personalized medicine, patient omics data such as RNA or whole genome sequencing (WGS) enrich traditional clinical data, and by consequence find their place in electronic health records (EHR) and clinical data warehouses (CDW) <cit.>.
In this perspective, CDWs enriched with omics data offer an alternative to prospective cohorts for translational studies, i.e., studies typically searching for genotype – phenotype associations such as genetic profiling of sub-groups of diseases or drug responses <cit.>.
CDW-based translational platforms present two main drawbacks that motivated this work.
The first drawback is the lack of management of data provenance.
These platforms record patient events (such as observations, interventions, decisions) about patients in chronological order, but they seldom explicitly record historical relationships between these events.
Accordingly, the questions “what are the observations that supported this decision?”, and inversely “what decisions were made from this observation?” cannot be answered by these systems. The second drawback is the lack of management of large data files.
Relating a clinical decision, such as a diagnosis, to the content of a large file, such as the files of a whole-genome sequencing, remains difficult with CDWs.
However, these two functionalities are crucial for the management of clinical omics studies.
In computer science, data provenance is defined as the documentation of where data comes from, and how it was transformed <cit.>.
Among other aspects, provenance facilitates reproducibility in research <cit.>, the ability to obtain the same results by applying the same procedure to the same data <cit.>.
For this reason, standards and tools for data provenance have been developed <cit.> and widely adopted in fields such as bioinformatics <cit.>, but only parsimoniously diffused to medical informatics.
However, provenance and reproducibility are crucial for applications such as clinical decision support tools and their successful transfer to clinical practice.
This is particularly true when results are generated from prone-to-error biotechniques, potentially requiring several runs before confirming their validity.
One reason for the lack of data provenance management in medical informatics is its absence from CDW systems.
Most successful CDW models and their implementations, such as the i2b2 star model <cit.>, the OMOP Common Data Model (CDM) <cit.>, or the eHOP model in France <cit.>, do not support detailed provenance.
The management of large files, e.g., larger than several hundred megabytes or longer than 10 thousand lines, is also limited in CDWs.
This is mainly due to their use of relational database management systems, which are not designed to handle large files.
When such files are supported, they are usually stored aside from the CDW, and the CDW stores a unique reference to the file, such as a URL.
This is prone to inconsistencies and missing data, as file location and availability rely on independent file management systems, not synchronized with the CDW.
Objective and motivation
Our objective is to design an approach that allows storing large data files involved in clinical diagnoses and decisions, the relations between these data and diagnoses and decisions, as well as potential relations between clinical diagnoses and decisions; and to provide ways to query those relations and access the underlying data.
A common issue is to identify links between facts.
For example, identifying patients with a liver metastasis within a cohort of lung cancer patients.
This group of patients is not simply the set of patients with both diagnoses, as proof of the causal link between the original tumor and metastasis is necessary.
Our approach should enable querying that specific relation unambiguously, as well as retrieving the supporting data (e.g., the digital pathology images of the primary lesion) and analyses (e.g., the search for variants in sequencing data associated with tracking the disease progression) for that relation.
Accordingly, it would help identify patients satisfying complex inclusion criteria by querying the CDW in a more clinically meaningful way.
A core requirement of translational research is to access data obtained from high-throughput experiments and associated clinical data.
Our approach should enable finding all the observations related to a condition and its longitudinal follow-up, as well as retrieving the corresponding data.
More generally, it would allow querying longitudinal information to access follow-up results or decisions (or, inversely, past data that motivated a decision).
Proposed solution
gitOmmix allows provenance tracing, large file management, and the encoding of longitudinal relationships in CDWs, by combining:
(i) the file versioning system git and its git-annex extension to manage large file histories,
(ii) a knowledge graph to encode provenance metadata,
(iii) a data model providing an alignment between these two systems, mapping data with metadata.
The rest of the article is organized as follows:
Section 2 presents the building blocks of our approach;
Section 3 presents gitOmmix itself;
Section 4 illustrates its use for the management of clinical omics studies.
§ MATERIAL
§.§ Semantic Web tools for data and provenance
The Semantic Web proposes a set of standards and tools that facilitate sharing, linking, and processing data by associating them with formally defined semantics <cit.>.
This work relies on three Semantic Web standards: RDF (Resource Description Framework) <cit.>, SPARQL (SPARQL Protocol And RDF Query Language), and PROV-O (PROV Ontology).
RDF, the Semantic Web standard for encoding knowledge graph, is a data model that represents data in the form ⟨subject, predicate, object⟩ triples, to describe a binary relation associating a subject and an object.
SPARQL is a query language for RDF knowledge graphs <cit.>.
PROV-O is a standard ontology recommended since 2013 by the W3C for the encoding of provenance metadata <cit.>.
PROV-O is built around three main concepts: Entity, Activity, and Agent.
Entities represent physical or virtual objects, such as data sets or atomic elements of data.
Entities can be generated or modified by Activities.
Activities are realized by Agents.
Entities can also be directly attributed to Agents.
(Figure <ref>a)
Adopting Semantic Web technologies provides additional tools contributing to the adherence to the FAIR principles <cit.>.
§.§ git and git-annex
git is a distributed open-source file versioning system created in 2005 to support the Linux kernel development and now ubiquitously used in software development <cit.>.
git traces historical changes within files in a directory, called a repository.
It uses a directed acyclic graph (DAG) structure, the git graph, to record repository states, called commits.
Because repositories are distributed and thus need to follow independent changes in various locations, branching and merging of histories is permitted, and is a core mechanism of collaborative development in software engineering.
File additions, removals, or modifications are recorded in commits, which are accompanied by a commit message describing the changes assigned to a commit author.
Commits are uniquely identified in a repository by a cryptographic hash code, which can be seen as a signature of the content of the commit.
Any commit in the history of a repository and its associated files can be retrieved from the corresponding unique commit hash code.
git has originally been designed to trace changes in source code files, usually relatively small text files, but does not scale to large files.
git-annex, a third-party extension to git, has been created to overcome this limitation and handle large files <cit.>.
git-annex stores the designated files contents aside from the git repository and takes over the management of those files, while still recording the historical information by tracing a reference to the file within git.
It provides its own operations for adding and retrieving files, supporting a range of popular efficient file-hosting back-ends.
The stored reference is a cryptographic hash of the file content, making git-annex a content-addressable file store: any change in a file has the consequence of modifying its cryptographic hash, enabling the unique identification of multiple versions of the same file.
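To make the content-addressing idea concrete, the following minimal Python sketch (our own illustration, unrelated to git-annex's actual key format or storage back-ends; file names and contents are invented) stores file contents under the hash of their bytes and retrieves them by that key; identical contents always map to the same key, and any modification yields a new one:

import hashlib
from pathlib import Path

STORE = Path("annex-store")          # illustrative location of the content store
STORE.mkdir(exist_ok=True)

def put(path: str) -> str:
    """Store a file under the SHA-256 of its content and return that key."""
    data = Path(path).read_bytes()
    key = hashlib.sha256(data).hexdigest()
    (STORE / key).write_bytes(data)  # identical contents map to the same key
    return key                       # the repository only needs to keep this reference

def get(key: str) -> bytes:
    """Retrieve file content from its content-derived key."""
    return (STORE / key).read_bytes()

# any change to a file yields a new key, so every version stays addressable
Path("sample_v1.txt").write_text("ACGTACGT")
key_v1 = put("sample_v1.txt")
assert get(key_v1) == b"ACGTACGT"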
§ METHODS - GITOMMIX
We designed gitOmmix with three main components:
* a data model that records and semantically links clinical data and decisions that are related in terms of provenance,
* a system that traces changes in data, pointing at their up-to-date clinical interpretations,
* an association between the data model and the system to ensure a progressive encoding of data provenance, at the time of data changes.
We defined a set of operations to build, manage, and query patient data history represented with the data model.
The data model uses the PROV-O concepts of Agent, Activity, and Entity, and various possible relations between those concepts.
We use them to represent clinical data and their provenance relations:
the Agent concept represents data providers, which can either be a human or a machine;
the Activity concept represents analyses, software runs, or other methods that produce one data element;
the Entity concept represents any data element recorded in a CDW, associated or not with files.
We extend entities into five subtypes: patients, samples, data, results, and diagnoses.
Patients and samples are considered as data elements because, in the context of a CDW, they are indeed identifiers of patients or samples.
Sample is a general name encompassing identifiers of biological samples, but also of images or audio recordings.
Diagnoses can be any kind of clinical decision, but we restrict our study to diagnoses only, for simplicity.
The most central relation of PROV-O, linking Entities together, is wasDerivedFrom.
The wasRevisionOf relation is also used in the specific case of derivations that are data modifications.
Figure <ref>a illustrates these concepts and relations, and their use to represent data elements of a CDW and how one might derive from another.
This derivation is a many-to-many relation, as a sample can generate multiple data elements, and multiple results can lead to a single diagnosis.
This relation between two entities is the atomic block that is repeated to build sequences of data elements derived from a patient, as illustrated in Figure <ref>b.
In our model, a diagnosis wasDerivedFrom a result, which wasDerivedFrom some data, which wasDerivedFrom a sample, which wasDerivedFrom a patient.
In this figure and throughout the rest of the article, we adopt the PROV-O prescribed shapes to distinguish between Entities, Activities and Agents.
In addition we use different colors to distinguish entities: blue for data, green for results, and red for diagnoses.
Other entities are kept blank.
For a more concrete example, a diagnosis of diabetes (an ICD10 code in the CDW), was derived from a laboratory result of high blood glucose concentration (a LOINC code), which in turn was derived from a blood glucose analysis (identified by an internal lab number), which was derived from a blood sample (a nursing procedure code).
In the specific case of modifications, updates, or invalidations of diagnoses, results, or data, the wasDerivedFrom relation is replaced by wasRevisionOf.
Providers and methods can optionally be added to further document the derivation relationship between entities. Accordingly, an entity E_1 wasAttributedTo a provider P, which wasAssociatedWith a method M.
E_1 wasGeneratedBy M, which used a previous entity E_2.
For example, E_1 is a WGS assay attributed to a lab technician P, themselves associated with a short read sequencing activity M, which generated the sequence files E_1 using the patient sample E_2.
Implementing patient data history with git
In gitOmmix, we propose to rely on the git versioning system to trace patient data elements and associated clinical decisions.
Each patient is represented with its own single git repository, the git commit graph progressively built with new data, changes, and interpretations.
Small and large data files associated with patient observations are referenced, and relationships between data elements and decisions are implemented using commits and git branching mechanisms.
To facilitate maintaining this structure, we describe two “layers": a sample layer for the histories of data derived from every sample acquired from the patient (e.g., a biological sample, an image, or an audio file), and a diagnosis layer for the relations between clinical decisions.
These two layers are illustrated in Figure <ref>.
The sample layer encompasses sample branches.
Each new sample acquisition is materialized by a new git branch in the patient's git graph.
Each data and result derived from a sample is sequentially added to that sample branch as new commits.
Multiple revisions of these data or results can be added to a sample branch, possibly invalidating a previous version.
The second layer encompasses diagnosis branches.
Clinical diagnoses are materialized by new branches on top of the sample branches, following a different construction rule.
Diagnosis commits can derive from one or multiple results (thus from multiple sample branches) using the git branch merging function.
Such a merge represents the joint contribution of multiple results to a single diagnosis.
Diagnoses can be further revised, combined, or invalidated by new merges of new results or diagnoses.
In git, branches are pointers to the latest commit in that branch, called the HEAD.
In a sample branch, the HEAD points to the most up-to-date information and data related to that sample.
In a diagnosis branch, the HEAD points to the most up-to-date diagnosis.
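The two-layer layout can be mimicked with plain git commands; the sketch below (branch names, identifiers, and commit messages are invented for the example, and the actual gitOmmix operations differ) drives git through Python to build a toy patient repository with one sample branch and a diagnosis branch that merges it:

import os
import subprocess
import tempfile

def git(*args):
    subprocess.run(["git", *args], check=True)

os.chdir(tempfile.mkdtemp(prefix="patient-p001-"))
git("init", "-q")
git("config", "user.name", "Demo Provider")          # the commit author records the provider
git("config", "user.email", "provider@example.org")
git("checkout", "-q", "-b", "patient")
git("commit", "--allow-empty", "-q", "-m", "patient: p001")

# sample layer: one branch per sample; data and results are appended as commits
git("checkout", "-q", "-b", "sample/biopsy-01")
git("commit", "--allow-empty", "-q", "-m", "data: wgs-assay-01")
git("commit", "--allow-empty", "-q", "-m", "result: variant-call-01")

# diagnosis layer: a branch whose HEAD is the up-to-date diagnosis,
# merging the sample branch(es) whose results support it
git("checkout", "-q", "-b", "diagnosis/lung-scc", "patient")
git("merge", "-q", "--no-ff", "sample/biopsy-01", "-m", "diagnosis: lung-scc")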
Alignment between the data model and patient git graph
Sequences of provenance relations represented with the data model can be aligned to git commit graphs, as illustrated in Figure <ref>b.
In this alignment, each PROV-O Entity corresponds to a commit in the git graph, and derivation relationships between Entities corresponds to parent-child commit relationships.
We implement this alignment by reusing the structure git offers to associate metadata to commits.
Indeed, every commit has an author, a date, and a message composed of a subject and a body.
We use the author and date to record the provider and date, respectively.
The message subject records the entity type and its id, and the message body records the associated metadata, encoded in Turtle RDF.
For example, when adding a biopsy sample to a patient, an RDF pattern describing the new sample and its derivation from the patient is written into the commit message body.
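The exact triples are not reproduced here; the following sketch, written with rdflib, illustrates what such a pattern could look like. The PROV-O terms are standard, but the namespace, identifiers, and the optional attribution triple are assumptions made purely for this example, not the actual gitOmmix output:

from rdflib import Graph, Namespace
from rdflib.namespace import RDF

PROV = Namespace("http://www.w3.org/ns/prov#")
EX = Namespace("http://example.org/patient/p001/")   # illustrative patient namespace

g = Graph()
g.bind("prov", PROV)
g.bind("ex", EX)

patient = EX["patient"]
sample = EX["sample/biopsy-01"]
g.add((patient, RDF.type, PROV.Entity))
g.add((sample, RDF.type, PROV.Entity))
g.add((sample, PROV.wasDerivedFrom, patient))              # the sample derives from the patient
g.add((sample, PROV.wasAttributedTo, EX["provider/dr-a"]))  # optional provider attribution

print(g.serialize(format="turtle"))                        # Turtle text stored in the commit body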
Using this commit metadata, the formal representation of provenance is preserved and closely associated with the corresponding data files, with the relationships between entities mirrored in the git graph structure.
Concatenating all the commit message bodies of a particular git history builds the corresponding RDF knowledge graph by incrementally adding nodes and relations.
The resulting knowledge graph has the advantage of offering query and reasoning facilities beyond those provided by git alone.
Examples of patient data histories
Figure <ref> illustrates the possible events in a patient history and their representation in .
The `Simple' box in Figure <ref> illustrates the trivial case of a diagnosis obtained from a single sample and a single biomedical analysis.
A sample branch is created (blank node); a data file is added with a commit (blue node); a result is added with a new commit (green node).
Next, a diagnosis is added on top of the result by the creation of a new diagnosis branch (red node).
Target nodes are the HEAD of their respective branch.
The `New data' box in Figure <ref> illustrates the case of new data or results obtained from the same sample.
Those are added sequentially, accumulating the information produced from a single sample in the same history.
The graph structures of the RDF and git graphs can slightly differ here, as the git history stays linear while the RDF graph splits, as data always derives from the sample.
The `Update data' box in Figure <ref> illustrates the case of data or results updating or replacing previous ones. Those are added sequentially, as in the previous case, with the use of the wasRevisionOf relation instead of wasDerivedFrom.
In the case of invalidation, the invalidated entity is additionally documented with a temporal relationship invalidatedAtTime, and optionally with a wasInvalidatedBy relationship to the method that invalidated it.
Note that multiple entities of the same kind can be invalidated at once by a single new entity.
In all those cases, accessing the repository “at" a diagnosis gives access to all the files in the version that was used to lead to that diagnosis.
The `Update diagnosis' and `Combine diagnoses' boxes in Figure <ref> illustrate when a diagnosis emerges from the combination of multiple exam results, possibly originating from multiple samples, and from previous partial or erroneous diagnoses.
This is achieved in git using the merge operation to combine entities contributing to this diagnosis.
In both the update and combine cases a new diagnosis branch is created.
The update diagnosis case is useful for aggregating more information related to a single diagnosis, or combining previous partial or symptomatic diagnoses into an etiological diagnosis.
The combined diagnoses case can be used to signal that multiple co-existing diagnoses are related through a syndromic diagnosis.
The new diagnosis can either stay unchanged, when the new analyses remain compatible with that diagnosis, or be a different one derived from the additional information.
Figure S1 in Supplementary Data shows the actual git graph (a) and PROV graph (b) of the implementation of these operations in .
A script listing all the commands needed to build this repository is provided with the software package.
§ RESULTS
We implemented gitOmmix as a set of operations and queries that can be called from a command-line interface.
These commands are mostly abstractions over the underlying git commands managing the repositories.
Editing git graphs and associated provenance
gitOmmix allows users to formulate simple
commands such as “add the NGS assay files of this patient's lung biopsy".
Each command triggers a series of git operations and enriches the commit metadata with provenance.
Note that gitOmmix only enriches the git graph and metadata and does not suppress any of it, following the philosophy of version control systems.
Accordingly, gitOmmix provides operations to add, revise, or invalidate elements (patients, samples, data, results, diagnoses).
These operations can be combined to enable more complex ones, such as “add the variant calling result derived from the NGS assay of this lung biopsy, and make it revise the inconclusive pathology report produced earlier".
Queries with gitOmmix
gitOmmix provides three main types of queries, illustrated in Figure <ref>.
(1) retrieving the provenance of any entity in the patient history.
This is supported by the simple fact that visiting a repository “at" a commit accesses all the files accumulated up to that commit.
Large files stored in the annex are downloaded on demand only.
git logging facilities enable provenance to be further narrowed to specific entities, time periods, providers, etc.
Figure <ref>(a) illustrates this type of query listing all the data that contributed to an input diagnosis.
(2) retrieving the most up-to-date data, results, or diagnoses of a branch.
As each sample and diagnosis is represented as a branch containing its whole history, navigating to its HEAD returns the most recent version of data, results, and diagnoses, as illustrated Figure <ref>(b).
Chained with the first operation, this enables retrieving the additional data that documents the evolution of a diagnosis, , a pathology report that documents the recurrence of a lung cancer.
(3) returning the timeline of a patient's successive diagnoses. Because the RDF provenance is represented in a piece-wise manner in the metadata of each entity, any subgraph of provenance can be built by concatenating a selection of pieces.
The resulting RDF can in turn be returned, queried in SPARQL, or used to produce graph visualizations.
The timeline operation displayed in Figure <ref>(c) is implemented with a SPARQL query that only selects diagnoses and diagnosis-diagnosis relations.
Indeed, in addition to these three common queries, gitOmmix supports running arbitrary SPARQL queries on patient histories.
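As an illustration, a diagnosis-timeline query of the kind described above could be expressed along the following lines; the prov: property path is standard PROV-O, but the ex: prefix, the kind predicate, and the toy graph are assumptions made purely for this sketch, not the query shipped with gitOmmix:

from rdflib import Graph

# toy provenance graph, as it could be rebuilt by concatenating commit message bodies
TTL = """
@prefix prov: <http://www.w3.org/ns/prov#> .
@prefix ex:   <http://example.org/p001/> .
ex:dx-partial a prov:Entity ; ex:kind "diagnosis" .
ex:dx-final   a prov:Entity ; ex:kind "diagnosis" ;
              prov:wasDerivedFrom ex:dx-partial .
"""

QUERY = """
PREFIX prov: <http://www.w3.org/ns/prov#>
PREFIX ex:   <http://example.org/p001/>
SELECT ?later ?earlier WHERE {
  ?later   ex:kind "diagnosis" .
  ?earlier ex:kind "diagnosis" .
  ?later   prov:wasDerivedFrom+ ?earlier .   # keep only diagnosis-to-diagnosis links
}
"""

g = Graph().parse(data=TTL, format="turtle")
for later, earlier in g.query(QUERY):
    print(earlier, "->", later)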
A real-world clinical case
We consider the previously published case report of a patient with a metastatic HPV-induced high grade anal intraepithelial neoplasia (HGAIN) <cit.>.
We schematized the gitOmmix graph associated with this patient history in Figure <ref>:
(1) The patient was diagnosed with an HGAIN during high-resolution anoscopy that led to the realization of a biopsy and pathologist confirmation of the diagnosis, completed with PCR identification of HPV6, 11, and 16.
(2) The HGAIN was subsequently surgically resected, with confirmed free resection margins.
(3) Two years later, the patient presented feverish back pains accompanied by weight loss, that were misdiagnosed at first.
After a couple of months and a new feverish episode, a new bone biopsy led the pathologist to a diagnosis of squamous cell carcinoma (SCC) metastasis.
Further investigations using immunohistochemical and PCR assays detected HPV16 DNA in the biopsy tissue.
This allowed linking the vertebral lesion to the HGAIN despite the absence of other signs of anal SCC contemporary to the metastasis diagnosis.
(4) Furthermore, as the patient had been participating in a research protocol that included the collection of plasma samples, those samples were retrospectively analyzed using quantitative digital droplet PCR, showing the presence of HPV16 circulating DNA in increasing blood concentration between the two diagnoses.
(5) Further investigations using HPV capture and NGS showed that the exact same HPV16 subvariant was detected throughout all samples, from the initial HGAIN to the vertebral metastasis.
The actual PROV graph generated for this example is shown in Figure S2 of Supplementary Data.
Implementation
An implementation of gitOmmix is available at <https://www.github.com/gitOmmix/gitOmmix> as open-source software.
It is implemented in bash and offers a user interface in the form of a command-line tool which includes help and auto-completion.
It relies on the command-line versions of git, git-annex, the rapper and roqet command-line tools from <https://librdf.org> for RDF file management, and graphviz <cit.> for graph visualization.
§ DISCUSSION
Integration to clinical data warehouses
gitOmmix is designed to enrich CDWs to enable the support of data provenance and large files.
Entities within a patient's repository are uniquely identified by an automatically generated identifier (a SHA1 hash of the git commit introducing the entity).
Thus, the pair (patient id, unique hash) is an unambiguous reference to an object in an instance of gitOmmix.
Accordingly, entities can be referenced in a CDW by associating the corresponding unique hashes to observations (e.g., the sample associated with the surgical procedure, the data file associated with a biological analysis, the full result report associated with an image exam, the diagnosis associated with a stay in the billing system).
In observation-based models such as i2b2, this can be achieved by re-using an existing column
to store the associated hash.
In other scenarios where no free column is available, adding a specific column to the schema for this purpose is sufficient and does not interfere with the CDW.
In CDWs using the OMOP CDM, although a similar mechanism could be used, a cleaner implementation using a new concept (e.g., “gitommix_hash") and a fact_relationship linking the observation to its hash would be preferable.
It is consequently possible to navigate back and forth between gitOmmix and the CDW as necessary, filtering on patient identifier and commit hash in the observations table on the CDW side or targeting the commit hash on the gitOmmix side.
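As a purely illustrative sketch of this linking (the table layout, column name, concept code, and hash value below are invented, not a prescribed schema), the CDW side only needs to carry one extra attribute per observation:

import pandas as pd

# minimal observation extract; in i2b2 this could be an existing free column,
# in OMOP a dedicated concept plus a fact_relationship row (names here are invented)
observations = pd.DataFrame([
    {"patient_id": "p001", "concept": "LOINC:2345-7", "value": 11.2,
     "gitommix_hash": "3f5a9c1e"},          # commit hash of the corresponding entity
])

# the pair (patient_id, gitommix_hash) unambiguously addresses one commit,
# so one can jump from the CDW row back into the patient's repository
row = observations.iloc[0]
print("patient:", row.patient_id, "-> commit:", row.gitommix_hash)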
Related works
In a 2017 article <cit.>, Murphy et al. described three methods for combining clinical and genomic data within the i2b2 CDW.
The first one involves integrating genomic results as structured data using the Sequence Ontology.
The second one uses the i2b2/tranSMART platform with its ad hoc ontology and data storage mechanism.
The third one uses a NoSQL database containing functionally annotated results, linked to i2b2 via a custom i2b2 module.
All those methods involve transformations of the genetic results, making it impossible to access the primary data; and do not embed links between results, making it impossible to track longitudinal relationships between assays.
Our solution is architecturally similar to the third method, in that it adds an external layer to the CDW.
However, it is more independent as it does not rely on a specific CDW implementation and does not necessitate in-depth adaptation of the CDW, but only relies on common tools.
gitOmmix is compatible with any data schema or controlled vocabulary.
It could be added seamlessly to the first described method using the Sequence Ontology, linking each structured result to its source data.
Various initiatives have tentatively added structured representations of genomic data in the OMOP CDM, such as the genomic CDM (G-CDM) <cit.>.
This complements gitOmmix by enabling further structured representation of the data hosted in gitOmmix.
Limitations
The current implementation of gitOmmix is a proof of concept and for this reason presents some limitations.
First, it is local and centralized, and does not yet support the management of shared and remote repositories, as permitted by git and git-annex and intended for gitOmmix.
Second, its interface is still rudimentary and limited to experienced users.
Advantages
However, our solution has multiple advantages over previously described systems, by giving the ability to integrate arbitrarily large data and by enabling the representation of relations between these data points.
It does so without relying on an entirely new paradigm around data representation in CDW, or needing heavy adaptations to the system currently in use.
It acts as a plug-in solution, agnostic from the CDW system in use, and is supported by standards and tools that were originally designed to address the specific issues at hand: keeping record of file histories, managing large files, and tracing provenance in a formal way.
gitOmmix, as well as the tools it relies on, is open source and backed by open standards.
This allows gitOmmix to readily benefit from the capabilities of these tools, enabling powerful file management, clearly defined semantics, reasoning capabilities, and interoperability.
Although our system prescribes a general structure to its data model, it does not restrict the usage of additional features from the underlying systems.
For example, the base PROV-O triples generated for each entity can be supplemented at will to construct richer provenance graphs.
git-annex supports a diverse collection of file storage back-ends to host arbitrarily large files, locally or remotely, on-premise or in the cloud, making it possible to benefit from efficient storage solutions.
Regarding scalability, as git-annex separates the management of large files from the management of the repository, repositories themselves stay very light in terms of memory and thus responsive to queries.
And because each patient exists as its own git repository, operations can be inherently parallelized by running as many processes as needed.
Perspectives
By enriching data with provenance metadata and enabling access to versioned source data files, gitOmmix contributes to a better adherence to FAIR principles in the management of complex clinical data.
In particular, it ensures findability by assigning persistent and unambiguous identifiers, providing rich metadata, and a standard way to search within this metadata; accessibility by making source data available through git; interoperability by relying on standard knowledge representations; and reusability by adding detailed provenance and allowing access to all versions of the data.
For these reasons, gitOmmix allows reproducibility and consistency in conducting translational studies, particularly retrospective studies based on large data that are more and more routinely collected during care.
In addition, it may also benefit clinical care as it documents clinical decisions explicitly and in a FAIR format <cit.>.
On the technical side, using established standards and tools allows for the addition of features supported by those tools.
For example, git enables authors to cryptographically sign their commits, which could be used to add a layer of security to the tracing of provenance.
git repositories can contain references to other git repositories using submodules.
Using submodules could allow even richer provenance tracing by directly referencing the actual analysis code, pipeline, or tool at the version in which it was used to produce observations.
§ CONCLUSION
We introduce gitOmmix, a relatively simple and lightweight system combining semantic web, file versioning, and content-addressable distributed file storage to represent and manage provenance and large source data in clinical data warehouses.
It includes all the functions required to build a patient's history graph and store associated files, navigate and query history using SPARQL, and retrieve the specific files related to any event.
We base our proposition on widely accepted systems and a model leveraging the shared DAG structure underlying these systems.
We provide a proof-of-concept implementation demonstrating the feasibility and practical use of gitOmmix, and illustrate its use with a real-world use case about diagnosis based on clinical omics data.
It is open to contributions and will be extended to support additional functions.
§ ABBREVIATIONS
CDW: Clinical Data Warehouse
ctDNA: circulating DNA
DAG: Directed Acyclic Graph
ddPCR: digital droplet Polymerase Chain Reaction
EHR: Electronic Health Record
FAIR: Findable, Accessible, Interoperable, Reusable
G-CDM: Genomic Common Data Model
HGAIN: High Grade Anal Intraepithelial Neoplasm
HPV: Human Papilloma Virus
i2b2: Informatics for Integrating Biology and the Bedside
ICD-10: International Classification of Diseases, 10th revision
LOINC: Logical Observation Identifiers Names and Codes
NLP: Natural Language Processing
NoSQL: non-SQL
OMOP CDM: Observational Medical Outcomes Partnership Common Data Model
OWL: Web Ontology Language
POC: Proof of Concept
PROV-O: Provenance Ontology
RDF: Resource Description Framework
RDFS: RDF Schema
SCC: Squamous Cell Carcinoma
SHA-1: Secure Hash Algorithm 1
SPARQL: SPARQL Protocol and RDF Query Language
WGS: Whole Genome Sequencing
§ AUTHORS CONTRIBUTIONS
MW: conceptualization, software, visualization, writing (original draft, review and editing).
AC: supervision, conceptualization, writing (review and editing).
AB: writing (review and editing), validation, funding acquisition.
BR: supervision, conceptualization, funding acquisition, writing (editing and review)
§ FUNDING RESOURCES
We benefit from a government grant managed by the Agence Nationale de la Recherche under the France 2030 program, reference ANR-22-PESN-0007 ShareFAIR.
§ ACKNOWLEDGMENTS
Dr. Hélène Péré and Dr. David Veyer for the fruitful discussions about their inspiring clinical projects.
Linus Torvalds for creating Linux and git, on which this contribution is based.
|
http://arxiv.org/abs/2409.03560v1 | 20240905141851 | Dynamic Hybrid Beamforming Designs for ELAA Near-Field Communications | [
"Mengzhen Liu",
"Ming Li",
"Rang Liu",
"Qian Liu"
] | eess.SP | [
"eess.SP"
] |
|
http://arxiv.org/abs/2409.03084v1 | 20240904211432 | Quantum geometric protocols for fast high-fidelity adiabatic state transfer | [
"Christian Ventura Meinersen",
"Stefano Bosco",
"Maximilian Rimbach-Russ"
] | quant-ph | [
"quant-ph",
"cond-mat.mes-hall"
] |
QuTech, Delft University of Technology, Lorentzweg 1, 2628 CJ Delft, The Netherlands
§ ABSTRACT
Efficient control schemes that enable fast, high-fidelity operations are essential for any practical quantum computation. However, current optimization protocols are intractable due to stringent requirements imposed by the microscopic systems encoding the qubit, including dense energy level spectra and cross talk, and generally require a trade-off between speed and fidelity of the operation. Here, we address these challenges by developing a general framework for optimal control based on the quantum metric tensor. This framework allows for fast and high-fidelity adiabatic pulses, even for a dense energy spectrum, based solely on the Hamiltonian of the system instead of the full time evolution propagator and independent of the size of the underlying Hilbert space. Furthermore, the framework suppresses diabatic transitions and state-dependent crosstalk effects without the need for additional control fields. As an example, we study the adiabatic charge transfer in a double quantum dot to find optimal control pulses with improved performance. We show that for the geometric protocol, the transfer fidelities are lower bounded by ℱ>99% for ultrafast 20 ns pulses, regardless of the size of the anti-crossing.
Quantum geometric protocols for fast high-fidelity adiabatic state transfer
Maximilian Rimbach-Russ
September 9, 2024
===========================================================================
§ INTRODUCTION
Coherent control of quantum information is the central part of the advancement of emerging quantum technologies such as quantum processors, quantum sensors, and quantum communication <cit.>. However, the inherently fragile nature of quantum states makes their coherent control a challenging task. Much research is dedicated to finding so-called quantum optimal control protocols, that allow fast and high-fidelity operations by appropriately shaping the control pulses <cit.>. Optimized initialization and readout protocols are of particular interest, as they are an integral part of any error correction algorithm <cit.>.
To achieve a fast and high-fidelity protocol, one has to carefully compose pulse shapes to avoid undesired transitions, which are summarized in shortcut-to-adiabaticity methods <cit.>. Through the addition of new control fields, one can suppress these transitions <cit.>. However, this approach requires additional experimental overhead and precise control of new driving parameters. Approximate methods, based on the minimization of diabatic transitions, circumvent new control fields and only affect the experimentally accessible parameters while providing fast and quasiadabatic (fast-QUAD) protocols <cit.>. Unfortunately, these methods cannot be straightforwardly generalized to bigger parameter spaces and beyond transitions between two energy levels. Most protocols are based on classical optimization problems, which are computationally challenging for larger systems <cit.> and hence make purely numerical methods unattractive. Geometric approaches, including the space curve quantum control <cit.>, allow for a simple geometric understanding of noisy time dynamics <cit.>. Notwithstanding, this geometric picture is limited because it relies on the computation of the time evolution operator and is hence constrained to small system sizes <cit.>. In addition, the derived pulse shapes are natively discontinuous due to the closed-curve and closed-area constraints of the formalism. Similar constraints on the control field to suppress first-, and second-order errors can also be found in <cit.>.
In this work, we develop a general geometric approach to provide a general framework, based on geodesics provided by the quantum metric tensor <cit.>, that can be generalized to any multi-level Hamiltonian and allows for fast and high-fidelity adiabatic operations. We refer to this approach as the geometric fast-QUAD. Our geometric fast-QUAD does not require imposing any new external control fields like in the counter-diabatic approach, it is resistant to miscalibration errors and allows for fast operations even in dense energy landscapes. It reduces undesired level transitions and the state-dependent crosstalk in the qubit operation and initialization/readout phases. In addition, since it does not require the computation of the full time-ordered evolution operator, we can easily adjust it to allow for operational flexibility of different quantum platforms. We show the advantages of the geometric fast-QUAD through an optimal protocol for initialization and readout of semiconductor spin-qubits.
Semiconductor spin qubits are a platform for quantum computing based on confined semiconductor quantum dots with a promise to be scalable <cit.>, with long coherence times <cit.>, operability at high temperatures <cit.>, and their similarity in fabrication to the classical semiconductor industry <cit.>. However, their small size can lead to dense energy spectra that may hinder fast and high-fidelity quantum control. For reading out and initializing such qubits, the measured signal is typically enhanced through spin-to-charge conversion techniques <cit.>. Common spin-to-charge techniques, such as Pauli-Spin-Blockade (PSB), rely on an adiabatic transition between a spin and a charge state <cit.> passing through multiple anticrossings. We illustrate advantages of the geometric fast-QUAD through optimizing the PSB initialization and readout in a double quantum dot (DQD) with experimentally feasible parameters.
The paper is structured as follows. In Section <ref>, the general framework is introduced, starting with the geometric formalism, relating optimal protocols to geodesics, and applying it to a general qubit Hamiltonian. Subsequently, in Section <ref>, upon reviewing the general DQD model in the presence of strong spin-orbit interaction <cit.>, an effective three-level Hamiltonian describing the readout and initialization subject to PSB is introduced. In addition, we include decoherence sources to provide a detailed analysis of the geometric fast-QUAD.
§ QUANTUM GEOMETRIC FORMALISM: FAST-QUASIADIABATIC DYNAMICS AS GEODESICS
§.§ The quantum metric
Optimal control schemes rely on the control of parameters x^μ=(x^1,x^2…, x^n) of the physical Hamiltonian to provide high-fidelity state transfer. The task of optimizing the fidelity of state transfer can be captured in the geometric structure of the Hilbert space through the quantum metric tensor <cit.>. The quantum metric tensor g_μν describes the infinitesimal distance between two pure states via the local infidelity (up to second order in parameter changes) <cit.>
1-|⟨ψ(x)|ψ(x+dx)⟩|^2≈1/2g_μν(x)dx^μ dx^ν.
Here, x^μ∈ℳ, where ℳ is the set of all possible parameter values, define a set of parameters that define an embedding for the set of pure states P(ℋ)=ℋ/U(1). For Greek indices, we will opt for the Einstein summation convention. The quantum metric tensor constitutes the real and symmetric part of the full quantum geometric tensor q_μν=g_μν+iΩ_μν, whose antisymmetric component Ω_μν is related to the Berry curvature, which captures topological effects <cit.> allowing for a possibility to straightforwardly connect quantum dynamics and topology. The quantum metric tensor, with respect to a given target state, |ψ_0⟩, can conveniently be written in terms of the Hamiltonian Ĥ, its eigenvalues E_n, and eigenvectors {|ψ_n⟩}
g_μν=∑_n≠ 0ψ_0∂_μĤψ_nψ_n∂_νĤψ_0/(E_n-E_0)^2= q_μν,
where ∂_μ = ∂/∂ x^μ is the derivative with respect to the parameters of the Hamiltonian.
§.§ Minimal energy fluctuations and geodesic equations
The quantum metric tensor g_μν allows us now to connect fast and quasi-adiabatic (fast-QUAD) dynamics with the geometry of the parameter space. For coherent population transfer, experimentally controlled parameters can be written as x^μ(t). State transfer is then given by a path connecting the set of initial parameter values x^μ_i≡ x^μ(0) to some final set x^μ_f≡ x^μ(t_f). The task of fast and high-fidelity population transfer then relates to the optimization problem of finding an optimal path x^μ_geo(t) between these two points described by the following functional
ℒ[x,ẋ,t]=∫_0^t_f dt √(g_μν(x)(dx^μ/dt)(dx^ν/dt)).
This functional describes the length of a path x^μ(t) parametrized by time t and can be minimized for functions x^μ_geo(t) (See Fig. <ref>) that solve the Euler-Lagrange equations, which in this context are known as the geodesic equations. For adiabatic protocols, the geodesics have a conserved quantity, namely the total energy, which leads to the geometric adiabatic condition (See Appendix <ref>)
g_μν(x)(dx^μ/dt)(dx^ν/dt)=δ^2≪ 1.
Here δ can be interpreted as the adiabaticity parameter. Since the above relationship minimizes the local infidelity (<ref>), the geodesics also minimize the energy fluctuations <cit.>
σ^2_E=⟨Ĥ^2⟩-⟨Ĥ⟩^2≈ g_μν(x)(dx^μ/dt)(dx^ν/dt)
motivating the name of the adiabaticity δ=√(σ^2_E). If we restrict ourselves to a single parameter x^μ=ε(t), we can solve for the adiabaticity parameter as follows
δ = 1/t_f∫_ε(0)^ε(t_f)dε √(g_εε)=ℒ[ε]/t_f≪ 1.
Therefore, adiabatic protocols can be understood as paths that minimize locally the length of the path that they trace out, i.e. short geodesics with respect to the time t_f. The above equation also converges to the quantum speed limit bound for pure states as found in <cit.>. Finally, this allows us to draw a connection between adiabatic dynamics and geometry. To find an optimal time evolution of ε(t), we need to solve
g_εεε̇^2=∑_n≠ 0|⟨ψ_0|∂_εĤ|ψ_n⟩|^2/(E_n-E_0)^2(dε/dt)^2=δ^2.
Unsurprisingly, this is similar to the known fast-QUAD equation <cit.>, differing from the historical fast-QUAD equation only by an additional exponent of 2 of the energy splitting in the denominator <cit.>. In contrast, however, it allows for a clear extension to multiple energy levels. In the remainder of the article, we refer to Eq. (<ref>) as geometric fast-QUAD equation.
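As a concrete recipe, the geometric pulse can be generated numerically from the Hamiltonian alone. The sketch below is our own illustrative implementation (not taken from a published code base): it evaluates g_εε from the eigendecomposition, fixes δ from the path length divided by the desired pulse time, and integrates dε/dt = δ/√(g_εε); the two-level avoided crossing at the end is a toy example in arbitrary units.

import numpy as np
from scipy.integrate import solve_ivp, trapezoid

def metric_eps(H, dH, eps):
    """g_εε of the ground state, evaluated from the spectrum of H(ε)."""
    E, V = np.linalg.eigh(H(eps))
    return sum(abs(V[:, 0].conj() @ dH(eps) @ V[:, n])**2 / (E[n] - E[0])**2
               for n in range(1, len(E)))

def geometric_pulse(H, dH, eps0, epsf, tf, npts=400):
    """Integrate dε/dt = ±δ/√(g_εε), with δ set by (path length)/tf."""
    s = np.sign(epsf - eps0)
    grid = np.linspace(eps0, epsf, 2001)
    length = s * trapezoid([np.sqrt(metric_eps(H, dH, e)) for e in grid], grid)
    delta = length / tf                                     # adiabaticity parameter
    rhs = lambda t, e: [s * delta / np.sqrt(metric_eps(H, dH, e[0]))]
    sol = solve_ivp(rhs, (0.0, tf), [eps0], t_eval=np.linspace(0.0, tf, npts))
    return sol.t, sol.y[0]

# toy avoided crossing H(ε) = (ε/2)σ_z + Ωσ_x; all values in arbitrary units
Omega = 1.0
H  = lambda e: np.array([[ e / 2.0, Omega], [Omega, -e / 2.0]])
dH = lambda e: np.array([[ 0.5, 0.0], [0.0, -0.5]])
t, eps_geo = geometric_pulse(H, dH, -20.0, 20.0, tf=10.0)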
§.§ Two-level system
The geometric structure of Hilbert space allows us to optimize adiabatic population transfer. For instance, a two-level Hamiltonian in cylindrical coordinates (ρ, ϕ, z)
Ĥ_Pauli=[ z ρ e^-iϕ; ρ e^iϕ -z ],
leads to the quantum metric tensor resembling the Bloch sphere (see Appendix <ref>)
[g_μν(θ,ϕ)]= 1/4[ 1 0; 0 sin^2 θ ].
Here θ=arctan2(ρ,z) describes the azimuthal angle of the Bloch sphere. Figure <ref> shows the simulated probability p(t_f)=|⟨ψ_0(t_f)|ψ(t_f)⟩|^2 under adiabatic evolution for the standard linear protocol ρ_linear(t)=(ρ_f-ρ_0) t/t_f+ρ_0 and the geometric protocol as defined in Eq. (<ref>). Using the quantum adiabatic protocol (<ref>) we find analytically that <cit.>
θ_geo(t)=(θ_f-θ_0) t/t_f+θ_0,
where θ_0,θ_f are the initial and final values of θ(t). Remarkably, our analytic expression θ_geo(t) and the fully numerically simulated pulse ρ_geo(t) cannot be distinguished. In both cases, because of the minimization of the energy fluctuations, the transfer errors arising from undesired diabatic transitions are drastically reduced with respect to the linear protocol.
The geometric fast-QUAD can be easily extended to an arbitrary multi-level system as illustrated in Eq. (<ref>). In addition, the quantum metric tensor only scales with the number of control parameters and is hence also useful for large systems, making it a reliable tool for analyzing and optimizing large-scale quantum architectures.
§ APPLICATION: CHARGE TRANSFER IN A DOUBLE QUANTUM DOT
Given the advantages of the quantum metric tensor, we aim to apply the geometric fast-QUAD for the adiabatic initialization and readout processes. We model an effective model for a double quantum dot (DQD) system that may be used directly for the initialization and readout of singlet-triplet qubits <cit.>. After providing a brief overview of the microscopic model in <ref>, we investigate a truncated two-level model of the full model in Sec. <ref>, to extend the previous result of a two-level Landau-Zener problem to one in the presence of ST_- coupling <cit.>. Following, in Sec. <ref>, we will introduce a low-dimensional effective DQD model, which captures the spin-to-charge transition, while taking into account the spin state. Using this model, we will aim to provide a detailed analysis and comparison of the geometric fast-QUAD with the linear protocol under coherent and non-unitary noise. In the remaining text, we will work in units of ħ.
§.§ Full model
The results in the single qubit case (Fig. <ref>) can be extended to the full 6x6 DQD <cit.>, consisting of two spins in the lowest orbitals of two quantum dots. The Hamiltonian is a sum of the Fermi-Hubbard Hamiltonian and the Zeeman Hamiltonian
Ĥ_DQD=Ĥ_FH+Ĥ_Zeeman,
where the spin-degenerate part is described by the Fermi-Hubbard model
Ĥ_FH = -Ω∑_ij,σ(ĉ_i,σ^†ĉ_j,σ +h.c.) +∑_⟨ ij⟩U_ijn̂_in̂_j
+∑_j (U/2n̂_j(n̂_j-1)+V_jn̂_j)
where ĉ^†_j,σ(ĉ_j,σ) creates (annihilates) a fermion on site j with spin σ. The fermionic number operator is n̂_j=∑_σĉ^†_σ,jĉ_σ,j, U and U_ij are the intra- and inter-dot Coulomb repulsion, Ω is the tunnel coupling originating from the overlap of the wavefunctions in nearby quantum dots, and V_j are the chemical potentials in each dot. The spin degeneracy is lifted through the Zeeman term
Ĥ_Zeeman = 1/2μ_B ∑_j ℬ⃗^j ·σ⃗^j,
where μ_B is the Bohr magneton, σ⃗=(σ_x,σ_y,σ_z)^T is the Pauli vector consisting of the conventional Pauli matrices, ℬ^j_a=∑_b 𝒢^j_ab B_b the effective magnetic field and 𝒢^j_ab the g-tensor at site j. The indices a, b =x,y,z run over the spatial components. Additionally, we define E_a=E_a,1+E_a,2 and Δ E_a=E_a,1-E_a,2 as the total Zeeman energy and the Zeeman splitting difference, which may arise due to a spatially varying g-factor as usually found in germanium-based platforms <cit.> or a magnetic field gradient as appears in silicon-based architectures with an additional micromagnet <cit.>. The matrix representation in the full basis is given in Appendix <ref>.
§.§ Truncated two-level model
Here, we restrict our optimization protocol to suppress only the transition of the ground state to the closest state. This way we effectively work in a low-energy two-dimensional subspace of the full 6x6 DQD, which in the eigenbasis takes the schematic form
Ĥ_DQD≈∑_n,m = 0,1H_n,m|ψ_n⟩⟨ψ_m|.
The state |T_-⟩=|↓↓⟩ is initialized by shifting the detuning ε(t), which is the difference of the left and right chemical potentials of each dot ε:=V_L-V_R. Due to the small ST_- anti-crossing, as found in <cit.> for out-of-plane magnetic fields, we again find that the geometric fast-QUAD is superior to the linear pulse (See Fig. <ref>). Even under this simplification, we report a transfer fidelity of >99.99% after around 150 ns pulse time.
§.§ Three-level model
To obtain a simplified model that captures spin-to-charge conversion (See Fig. <ref>(a)), we restrict ourselves to a double dot system with a magnetic field pointing purely in the z-direction, neglecting the fully polarized states |T_±⟩, and focus on the singlet-triplet basis with two charge degrees of freedom (Fig. <ref>(b)). The Hilbert space is spanned by the following basis states
|S(2,0)⟩ =ĉ^†_L,↑ĉ^†_L,↓|vac⟩
|S(1,1)⟩ =1/√(2)(ĉ^†_L,↑ĉ^†_R,↓- ĉ^†_L,↓ĉ^†_R,↑)|vac⟩
|T_0(1,1)⟩ =1/√(2)(ĉ^†_L,↑ĉ^†_R,↓+ ĉ^†_L,↓ĉ^†_R,↑)|vac⟩,
where (n_L, n_R) describes the number of charges in the left and right dots, respectively, and |vac⟩ represents the vacuum state. In this subspace, we find that the matrix representation of the DQD Hamiltonian, in the above basis, is
Ĥ(t)=[ U-ε(t) Ω 0; Ω 0 Δ E_Z; 0 Δ E_Z 0; ].
The energy spectrum of the Hamiltonian in Eq. (<ref>) is seen in Fig. <ref>(c), which displays an anti-crossing at ε=U. The size of the anti-crossing is now determined by the combination of the tunnel coupling Ω and the Zeeman splitting difference Δ E_Z. In contrast to the Landau-Zener-Majorana-Stueckelberg anticrossing, the energy spectrum is not symmetric in the detuning ε. Also, the existence of a third energy level makes it possible for diabatic transitions from the ground state to two upper energy eigenstates.
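For completeness, a minimal numerical setup of this three-level model is sketched below; the parameter values are placeholders chosen only to expose the anti-crossing near ε=U, and the Hamiltonian and its ε-derivative can be fed directly into the geodesic construction sketched earlier.

import numpy as np

U, Omega, dEz = 1000.0, 20.0, 2.0          # placeholder energies (ħ = 1 units)

def H_dqd(eps):
    """Three-level Hamiltonian in the {S(2,0), S(1,1), T_0(1,1)} basis."""
    return np.array([[U - eps, Omega, 0.0],
                     [Omega,   0.0,   dEz],
                     [0.0,     dEz,   0.0]])

dH_dqd = lambda eps: np.diag([-1.0, 0.0, 0.0])   # ∂H/∂ε

# energy levels across the detuning sweep; the anti-crossing sits near eps = U
detunings = np.linspace(U - 200.0, U + 200.0, 401)
levels = np.array([np.linalg.eigvalsh(H_dqd(e)) for e in detunings])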
§.§ Incoherent dynamics
During coherent spin-to-charge conversion, a dominant error source are diabatic transitions in the vicinity of the anti-crossings, where the energy level difference is minimal. Our geometric protocol Eq. (<ref>) is designed to minimize such errors. However, non-unitary dynamics arise due to couplings with ambient degrees of freedom, which alter the time evolution and hence may affect the optimal pulse shape. In the following, we will describe two ubiquitous noise types that may affect the protocol, low- and high-frequency charge noise.
One of the best-known types of noise in semiconducting and superconducting devices is the appearance of low-frequency noise <cit.>, whose noise spectral density follows S(f)∝ 1/f. Under sufficient conditions, the noise spectral density is the Fourier transform of the auto-correlation of the noise. For the Hamiltonian in Eq. (<ref>), the noise spectral density arises from the correlation function of the fluctuations of the detuning parameter δε(t), which we include as a perturbation
δĤ(t)=-δε(t) |S(2,0)⟩⟨S(2,0)|=-δε(t) Π̂_S(2,0),
where δε is drawn from a Gaussian distribution δε∼𝒩(0,σ^2), which is centered at zero with a variance given by σ^2. In the simulations, we will model this behavior by fluctuating boundary conditions of the pulse shape ε_0,f→ε_0,f+δε <cit.> and illustrate the differences between a noisy and a noiseless time evolution.
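Concretely, this quasistatic error model can be simulated by Monte-Carlo averaging over shifted boundary conditions; in the sketch below, run_protocol stands for whatever coherent simulation of the transfer fidelity one uses (e.g., the geodesic pulse from above), and the numbers are placeholders.

import numpy as np

rng = np.random.default_rng(1)
sigma = 5.0                                  # standard deviation of δε (placeholder units)

def average_fidelity(run_protocol, eps0, epsf, tf, nsamples=200):
    """Monte-Carlo average over quasistatic offsets of the pulse boundaries."""
    fids = []
    for _ in range(nsamples):
        d = rng.normal(0.0, sigma)           # δε ~ N(0, σ²), frozen during one run
        fids.append(run_protocol(eps0 + d, epsf + d, tf))
    return np.mean(fids)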
To study the high-frequency noise, we will adopt the Lindblad master equation, which describes non-unitary evolution of a quantum system subject to Markovian noise. It takes the form
ρ̂t=-i [Ĥ,ρ̂]+∑_j ( L̂_jρ̂L̂_j^† -1/2{L̂_j^†L̂_j, ρ̂}),
where L̂_j are the conventional Linblad operators. Explicitly, we describe dephasing from high-frequency noise via the dephasing jump operator <cit.>
L̂_dephasing =√(1/2T_2)[ 1 0 0; 0 -1 0; 0 0 -1; ].
Here the Lindblad operator acts on the charge states (1,1) and (2,0). The strength of the dephasing is captured by the decoherence time T_2. Since the dephasing operator has real entries and is Hermitian, we can also shift the Lindbladian to obtain the equivalent dephasing operator
L̂_dephasing'= L̂_dephasing + √(1/2T_2) 1̂= √(2/T_2) Π̂_S(2,0),
which will generate the same time dynamics. Since for spin qubits relaxation is several orders of magnitude longer than dephasing for spin <cit.> and charge qubits <cit.>, we neglect it in our analysis. Note that relaxation usually benefits adiabatic charge transfer of the ground-state as it counteracts the diabatic transitions to energetically excited states, thus effectively improving the population transfer fidelity.
Using the Uhlmann fidelity ℱ defined as
ℱ(ρ, σ)=(Tr √(√(ρ)σ√(ρ)))^2,
we can determine the overlap of the lowest energy eigenstate |ψ_0(t)⟩ and the time-evolved one under the non-unitary evolution given by (<ref>) for the linear and geometric state transfer protocols.
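A self-contained sketch of such a simulation is given below; it propagates the Lindblad master equation above with a simple fixed-step integrator, uses the shifted dephasing operator, and evaluates the Uhlmann fidelity against the instantaneous ground state. The parameter values are placeholders, and eps_of_t stands for whichever pulse (linear or geometric) is being tested.

import numpy as np
from scipy.linalg import sqrtm

U, Omega, dEz, T2 = 1000.0, 20.0, 2.0, 50.0            # placeholder parameters (ħ = 1)
P = np.diag([1.0, 0.0, 0.0]).astype(complex)            # projector on |S(2,0)>
H0 = np.array([[U, Omega, 0], [Omega, 0, dEz], [0, dEz, 0]], dtype=complex)
L = np.sqrt(2.0 / T2) * P                                # shifted dephasing operator

def H(eps):
    return H0 - eps * P                                  # detuning enters as -ε|S(2,0)><S(2,0)|

def rhs(rho, eps):
    """Right-hand side of the Lindblad master equation."""
    comm = H(eps) @ rho - rho @ H(eps)
    diss = L @ rho @ L.conj().T - 0.5 * (L.conj().T @ L @ rho + rho @ L.conj().T @ L)
    return -1j * comm + diss

def uhlmann(rho, target):
    s = sqrtm(rho)
    return np.real(np.trace(sqrtm(s @ target @ s)))**2

def transfer_fidelity(eps_of_t, tf, nsteps=20000):
    """RK4 propagation of ρ; nsteps must resolve the largest energy scale (here U·dt ≪ 1)."""
    dt = tf / nsteps
    w, v = np.linalg.eigh(H(eps_of_t(0.0)))
    rho = np.outer(v[:, 0], v[:, 0].conj())              # start in the ground state of H(ε(0))
    for k in range(nsteps):
        t = k * dt
        k1 = rhs(rho, eps_of_t(t))
        k2 = rhs(rho + 0.5 * dt * k1, eps_of_t(t + 0.5 * dt))
        k3 = rhs(rho + 0.5 * dt * k2, eps_of_t(t + 0.5 * dt))
        k4 = rhs(rho + dt * k3, eps_of_t(t + dt))
        rho = rho + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    w, v = np.linalg.eigh(H(eps_of_t(tf)))
    target = np.outer(v[:, 0], v[:, 0].conj())
    return uhlmann(rho, target)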
§.§ Results
For concreteness, we will focus in our analysis only on the initialization process, as the readout process is directly provided by the reverse pulse shape. The initial state is the singlet state in the (2,0) charge state and is adiabatically pulsed to the desired final state. Under coherent evolution, the only error source is due to undesired diabatic transitions, inducing interference effects that reduce the transfer fidelity. We scan multiple pairs of parameters and simulate the transfer error 1-p(t_f) using the protocol in Eq. (<ref>) and the linear protocol. For the geometric protocol, we generate an appropriate adiabaticity using the boundary conditions of the detuning (ε_0,ε_f) and the pulse time t_f with Eq. (<ref>) and then feed the numerically solved pulse ε_num(t) into our Hamiltonian or Lindbladian based time evolution, depending on whether we want to study unitary or non-unitary dynamics.
Unitary dynamics Figure <ref> shows the results of the geometric pulse for the initialization sequence of the |↓↑⟩ state of the Hamiltonian (<ref>). The transfer error is reduced as a function of the pulse time, since the evolution becomes more adiabatic at larger pulse times. We observe the advantage of using the geometric fast-QUAD over the linear protocol as a reliable protocol for circumventing coherent errors, even for very small anti-crossings and extremely fast pulse times. Strikingly, we observe as a common trend that the geometric fast-QUAD for t_f>20 ns yields a transfer fidelity ℱ>99% for all investigated settings of tunnel couplings Ω and Zeeman splitting differences Δ E_Z. Note that these results, for the same parameter settings besides the detuning boundary conditions, also hold for the adiabatic readout protocol, as the energy is conserved along these paths.
Non-unitary dynamics
In addition to providing higher fidelities for very short pulse times and dense energy spectra, our protocol is also highly resilient against quasistatic noise, as seen in Fig. <ref>, where we plot the susceptibility of the transfer fidelity with respect to detuning fluctuations. Notably, after 20 ns the effects of the quasistatic noise on the fidelity drop below 10^-4, even for strong fluctuations of δε=5 GHz. In Fig. <ref> we also show that the geometric protocol is robust against pulse miscalibration regarding Ω→Ω + δΩ, for small δΩ and for pulse times longer than 20 ns. Remarkably, assuming larger tunnel couplings for the pulse leads to smaller deviations from the calibrated result. This may be caused by favorable interference effects in the initial ramp towards the anti-crossing. Namely, assuming a smaller anti-crossing will usually result in an initial fast ramp ending in a slow-down at the anti-crossing, leading to contributions that do not interfere destructively beyond the anti-crossing.
Lastly, we compare the optimal control sequences of the linear and geometric protocol given some fixed decoherence time T_2 in Fig. <ref>. Here, we compute the linear and geometric fast-QUAD pulse shapes and simulate the Uhlmann fidelity between the lowest energy eigenstate |ψ_0(t_f)⟩ and the resulting mixed density matrix ρ(t_f) under Lindbladian time evolution. For a fixed interval of allowed pulse operation times t_f≤50 ns, we find the maximum Uhlmann fidelity at fixed decoherence time and compare the linear and geometric protocol. The maximum Uhlmann fidelity ℱ(t_f^*) is given by the optimal pulse operation time t_f^* for any given decoherence time T_2. The linear protocol will always perform best at the longest allowed pulse times to suppress the diabatic transitions which dominate in this regime. On the other hand, the geometric fast-QUAD will be optimal at ultrafast operation times (t_f^*<10 ns) to simultaneously reduce the coherent errors and the incoherent errors through dephasing. Therefore, the geometric fast-QUAD will always outperform the linear protocol under fixed decoherence rates.
§ CONCLUSION
In this work, we established a relationship between the quantum geometric structure of the Hilbert space and quasiadiabatic time dynamics. Our main focus lay in providing a general framework to deal with coherent errors arising from undesired diabatic transitions between multiple energy levels while operating at fast pulse times. Special emphasis was put on applying these methods to enable fast and high-fidelity adiabatic initialization and readout for a DQD system, which is integral for the minimization of state-dependent crosstalk. Nevertheless, we stress that the framework is applicable to all quantum systems with non-degenerate eigenvalues and only requires optimization on the level of the Hamiltonian and not the time-ordered evolution operator. For coherent evolution, we found that independent of the parameter configuration, the geometric fast-QUAD provides an upper bound on the transfer error at 10^-2 for pulse times of 20 ns. Miscalibration of the pulse and quasistatic noise did not yield significant deviations. Errors arising due to dephasing effects were studied, and it was found that the geometric fast-QUAD was always superior to the linear protocol with respect to the Uhlmann fidelity, while allowing ultrafast operation times.
Nevertheless, further efforts in the understanding of the quantum geometric approach have to be made, especially with a focus on including the effects of noise directly into the formalism, understanding the geometric difference between pure and mixed states, and clarifying the impact of a non-abelian connection and degenerate eigenstates on transfer protocols. So far, the case of mixed states has been tackled only for full and finite-rank density matrices <cit.>, making it challenging to capture non-unitary time evolution and systems with infinite-dimensional Hilbert spaces. Even so, the quantum geometry of parameter space will provide new opportunities for ultrafast adiabatic operations, allowing for significant improvements in the coherent processing of quantum information and accelerating the advancement of emerging quantum technologies.
§ ACKNOWLEDGMENTS
We thank the members of the Veldhorst, Scappucci, and Vandersypen groups for helpful discussions on practical applications. Additionally, we are grateful for discussions with Amanda Seedhouse and Edmondo Valvo about the theoretical model. This research was partly supported by the EU through the H2024 QLSI2 project and partly sponsored by the Army Research Office under Award Number: W911NF-23-1-0110. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Office or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein.
§ FUNDAMENTALS OF QUANTUM RIEMANNIAN GEOMETRY
Bloch sphere from quantum geometry
To define the metric g we need to find a basis t_μ(x) that spans the tangent space T_ρ̂ P(ℋ). A natural choice is given by the set of traceless and Hermitian matrices
t_μ(x)=∂_μρ̂(x)=|∂_μψ⟩⟨ψ|+|ψ⟩⟨∂_μψ|,
where we assume that the density matrices are pure, ρ̂(x)=|ψ(x)⟩⟨ψ(x)|, and that the derivative is with respect to the parameters x^μ. We can define the quantum geometric tensor as the Killing form on the tangent space T_ρ̂ P(ℋ),
g_μν = 1/2 Tr(t_μ t_ν)
= Re[⟨∂_μψ|∂_νψ⟩] - ⟨∂_μψ|ψ⟩⟨ψ|∂_νψ⟩.
We note that the quantum geometric tensor (QGT) has certain symmetries, which, in part, will constrain our dynamics. First, the QGT is invariant under shifts in the ground state energy, Ĥ→Ĥ+ω(x)1̂, which is the known invariance that only energy differences are measurable and is the expected invariance under U(1). Secondly, the QGT does not change if we rescale the Hamiltonian globally with a function Ω(x), i.e. Ĥ→Ω(x)Ĥ, which we will refer to as conformal invariance. We also need to rescale the time variable to not affect the time dynamics. The conformal invariance will constrain our dynamics to a (ℳ-1)-dimensional subspace. To see this, we will work through the example in the main text: a general 2×2 Hamiltonian can be written in the Pauli basis, which in polar coordinates (ρ, ϕ, z) takes the form
Ĥ_Pauli=[ z ρ e^-iϕ; ρ e^iϕ -z ],
where we note that, for pure states, dim P(ℋ_Pauli) = 2 < ℳ = 3, as pure states can be fully described by the angles (θ, ϕ) on the Bloch sphere. This condition restricts the notion of ℳ being an embedding of the projective Hilbert space P(ℋ_Pauli), as the map is no longer injective. Due to the conformal invariance, however, we may restrict ourselves to subspaces that span the projective Hilbert space and hence form a well-defined embedding. For instance, if we identify x^μ={ρ, ϕ, z}, then we may find a function Ω(x) such that we can reduce the number of parameters. If we want to work in the subspace of x^μ={ρ, z}, we find that the quantum metric tensor is singular, i.e. det g = 0, which alludes to the fact that the embedding is ill-defined. This feature can be seen by the fact that there is no non-trivial function Ω(x) that removes the ϕ-dependence. On the other hand, the subsets x^μ={ρ, ϕ} and x^μ={ϕ, z} can be well-defined. For instance, if Ω(x)=z and we redefine ρ/z→ρ, the Pauli Hamiltonian takes the form
Ĥ_Pauli=[ 1 ρ e^-iϕ; ρ e^iϕ -1 ],
which leads to a non-singular quantum metric tensor
[g_μν(ρ, ϕ)]= 1 /4(1+ρ^2)[ 1/(1+ρ^2) 0; 0 ρ^2 ],
which captures the fact that the embedding is well-defined. This can also be seen by the fact that now dim P(ℋ_Pauli) = ℳ. This metric is the metric on the Bloch sphere, as can be seen if we use ρ = tanθ and use the transformation rule for the quantum metric tensor to arrive at
[g_μν(θ,ϕ)]= 1/4[ 1 0; 0 sin^2 θ ].
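As a quick numerical cross-check of the pure-state construction above, the metric can be evaluated directly from g_μν = 1/2 Tr(∂_μρ̂ ∂_νρ̂) by finite differences. The sketch below (not from the paper) does this for the standard two-level state parametrized by (θ, ϕ) and reproduces the Bloch-sphere metric quoted above; the state definition and step size are illustrative assumptions.

```python
# Finite-difference evaluation of the pure-state quantum metric
# g_{mu nu} = (1/2) Tr(d_mu rho d_nu rho) for an illustrative two-level state.
import numpy as np

def state(theta, phi):
    # Standard Bloch-sphere parametrization of a qubit pure state (assumption).
    return np.array([np.cos(theta / 2), np.exp(1j * phi) * np.sin(theta / 2)])

def quantum_metric(params, eps=1e-6):
    def rho(p):
        psi = state(*p)
        return np.outer(psi, psi.conj())
    n = len(params)
    d_rho = []
    for mu in range(n):
        dp = np.zeros(n)
        dp[mu] = eps
        d_rho.append((rho(params + dp) - rho(params - dp)) / (2 * eps))
    return np.real(np.array([[0.5 * np.trace(d_rho[a] @ d_rho[b])
                              for b in range(n)] for a in range(n)]))

# Example: quantum_metric(np.array([0.7, 0.3])) is close to diag(1/4, sin(0.7)**2 / 4),
# matching the Bloch-sphere metric above.
```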
Beltrami identity, Killing charges and energy fluctuations
To derive that adiabatic geometric condition (<ref>) we start by simplifying the length functional in the main text via the Cauchy-Schwarz relation to the following functional <cit.>
ℒ'[x,ẋ,t]=∫_0^t_f dt [g_μν(x)ẋ^μẋ^ν],
where the integrand can be understood as a Lagrangian L[x,ẋ,t] and the functional as the action. If the Lagrangian does not explicitly depend on time, i.e. ∂L/∂t = 0, then Beltrami's identity holds,
ẋ^α (∂L/∂ẋ^α) - L = const.
The left-hand side is the expression of the Hamiltonian and hence, in this case, Beltrami's identity is a consequence of conservation of energy. Computing the partial derivative of the Lagrangian above using ∂ẋ^μ/∂ẋ^α = δ^μ_α, we find the adiabatic-geometric condition
g_μν(x)ẋ^μẋ^ν=const.
Another way to see this identity is that conservation laws arise due to symmetries. As we are considering unitary systems, we have time-reversal symmetry and hence energy conservation. The connection between symmetry and conserved charges in the geometrical context is illustrated by the Killing vectors ξ^μ. Each Killing vector has an associated conserved charge <cit.>
∂_t Q_ξ = ∂_t ( g_μν(x) ξ^μẋ^ν)=0.
If the Killing vector is proportional to the tangent vector, ξ^μ∝ẋ^μ, we also find the adiabatic-geometric relation. This shows the explicit relation between energy conservation and geometry. In order to find the geodesics of a manifold one only needs ℳ-1 Killing vector fields <cit.>, which aligns with the parameter subspace after the constraint due to the conformal invariance of the QGT.
§ DQD EIGENSPECTRUM AND EFFECTIVE 2X2 HAMILTONIAN
We want to describe the effective dynamics of the DQD Hamiltonian in Eq. (<ref>). The total Hamiltonian describing an array of quantum dots is given by
Ĥ_DQD=Ĥ_FH+Ĥ_Zeeman,
where these two Hamiltonians can be written as <cit.>
Ĥ_DQD = [ U + ε 0 0 -Ω Ω 0; 0 U - ε 0 -Ω Ω 0; 0 0 E_Z Δ E_X -Δ E_X 0; -Ω -Ω Δ E_X Δ E_Z 0 Δ E_X; Ω Ω -Δ E_X 0 -Δ E_Z -Δ E_X; 0 0 0 Δ E_X -Δ E_X -E_Z ]
if we constrain ourselves to the charge states (1,1), (2,0), and (0,2) including the spin states {|↑↑⟩, |↑↓⟩, |↓↑⟩, |↓↓⟩}. In addition, we define E_j = E_j,1 + E_j,2 and ΔE_j = E_j,1 - E_j,2 with j = X, Y, Z for each component of the Pauli vector, and we set the y-component to zero for simplicity. The energy spectrum of the above Hamiltonian is plotted in Figure <ref>.
As seen in the main text, when restricting to the singlet and triplet sectors for the charge configurations (2,0) and (1,1), we find the Hamiltonian (<ref>). Given that the Zeeman splitting difference ΔE_Z is usually a much smaller energy scale than the tunnel coupling or detuning, we want to find an effective 2-dimensional model. Using a standard Schrieffer-Wolff transformation we find that the effective 2d model is spanned only by the singlet sector in the two different charge states (1,1) and (2,0):
Ĥ_eff(t)=[ -ε(t)(1 -𝒥^2) Ω(1-𝒥^2/2); Ω(1-𝒥^2/2) 0 ],
where 𝒥^2 = ΔE_Z^2/Ω^2 is the expansion parameter. For this 2×2 Hamiltonian we can find the fast-QUAD equation explicitly in the regime of 𝒥≪1, up to second order in the Zeeman splitting difference,
(1+3𝒥^2) Ω^2+ε(t)^2/(Ω^2+ε(t)^2)^5/2(εt)=δ/Ω.
In the above equation, we rescaled Ω→Ω/2 for readability. For 𝒥=0 we recover the fast-QUAD equation for the Landau-Zener model as seen in <cit.>. The difference between the pulse shapes of the effective 2d and the full 3d Hamiltonian is plotted in Figure <ref>. We observe that the derived pulse shapes deviate significantly between the effective and the full model.
§ NUMERICAL METHODS
Here we outline the numerical methods used in the main text, including the generation of the pulse shapes, the Hamiltonian, and Lindblad master equation solvers.
§.§ Pulse shape interpolation
First, we compute the quantum metric tensor using Eq. (<ref>). Making use of the quantum adiabatic condition (<ref>), we obtain a differential equation for the pulse in terms of the adiabaticity parameter δ. Once we have chosen the boundary conditions of the pulse, we can compute the adiabaticity parameter using Eq. (<ref>) and hence solve the differential equation for the pulse consistently between these boundary conditions.
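A minimal sketch of this procedure is given below (not the authors' code). It assumes a user-supplied single-parameter metric g(ε) — the Landau-Zener-like form used here is only a placeholder — fixes the adiabaticity parameter δ from the boundary values and the total pulse time, and then integrates dε/dt = δ/√(g(ε)) to obtain the pulse shape.

```python
# Sketch of the geometric pulse-shape construction under the constant-rapidity
# condition sqrt(g(eps)) * d(eps)/dt = delta. All names are illustrative.
import numpy as np
from scipy.integrate import quad, solve_ivp

def metric(eps, omega=1.0):
    # Placeholder single-parameter quantum metric g_{eps eps}(eps); replace with
    # the metric computed from the system Hamiltonian.
    return omega**2 / (4.0 * (eps**2 + omega**2) ** 2)

def geometric_pulse(eps_i, eps_f, t_f, g=metric, n_pts=400):
    # delta is fixed by the geodesic length between the boundary detunings
    # divided by the total pulse time.
    length, _ = quad(lambda e: np.sqrt(g(e)), eps_i, eps_f)
    delta = length / t_f
    # Integrate d(eps)/dt = delta / sqrt(g(eps)) between the boundary values.
    sol = solve_ivp(lambda t, e: delta / np.sqrt(g(e)), (0.0, t_f), [eps_i],
                    t_eval=np.linspace(0.0, t_f, n_pts), rtol=1e-9, atol=1e-12)
    return sol.t, sol.y[0], delta
```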
§.§ Hamiltonian simulation
For the numerical simulations of the coherent population transfer, we first generate a pulse according to the boundary conditions for initialization or readout. Next, we insert the numerically solved pulse ε_num(t) into the Hamiltonian operator and then solve the Schrödinger equation (in units of ħ=1)
i ∂_t|ψ(t)⟩ = Ĥ[ε_num(t)]|ψ(t)⟩.
Given the initial state |ψ_0(t=0)⟩, we evolve the state and project it onto the lowest energy eigenstate at the final pulse time to see whether coherent errors occurred. For that, we compute the transfer probability p(t_f)=|⟨ψ_0(t_f)|ψ_geo(t_f)⟩|^2, where
|ψ_geo(t_f)⟩ = 𝒯exp(-i∫_0^t_f dt Ĥ[ε_num(t)]) |ψ_0(t=0)⟩.
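The following sketch illustrates this simulation step, assuming an illustrative two-level Hamiltonian builder in place of the full DQD model and a callable pulse ε_num(t); function and parameter names are not from the paper.

```python
# Minimal sketch of the coherent-transfer simulation (hbar = 1).
import numpy as np
from scipy.integrate import solve_ivp

def hamiltonian(eps, omega=1.0):
    # Placeholder two-level Hamiltonian H[eps]; replace with the DQD model.
    return np.array([[-eps, omega], [omega, 0.0]], dtype=complex)

def transfer_probability(eps_of_t, t_f, omega=1.0):
    # Start in the lowest-energy eigenstate of H[eps(0)].
    _, vecs = np.linalg.eigh(hamiltonian(eps_of_t(0.0), omega))
    psi0 = vecs[:, 0].astype(complex)
    rhs = lambda t, psi: -1j * hamiltonian(eps_of_t(t), omega) @ psi
    sol = solve_ivp(rhs, (0.0, t_f), psi0, rtol=1e-9, atol=1e-12)
    psi_geo = sol.y[:, -1]
    # Project onto the lowest-energy eigenstate at the final detuning.
    _, vecs_f = np.linalg.eigh(hamiltonian(eps_of_t(t_f), omega))
    return np.abs(np.vdot(vecs_f[:, 0], psi_geo)) ** 2
```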
§.§ Lindblad simulation
For the numerical simulations of the Lindblad master equation, we switch to the vectorized form. We choose to use vectorization by row, which means that, for instance,
ρ = [ a b; c d ]→ρ =vec[ρ]= [ a; b; c; d ]
In this notation, the Lindblad master equation can be written as a linear equation
∂_t ρ = ℒ̂·ρ,
where the Lindbladian takes the form
ℒ̂=-i(Ĥ⊗1̂-1̂⊗Ĥ^T)
+∑_jL̂_j⊗L̂_j^* - 1/2(L̂_j^†L̂_j⊗1̂ + 1̂⊗ [L̂_j^†L̂_j]^T ).
Note that the expression explicitly depends on the basis chosen, as the transpose is basis-dependent. To compute the success of the state transfer protocols we define the Uhlmann fidelity
ℱ(ρ, σ)=( √(√(ρ)σ√(ρ)))^2.
We will use this to quantify the overlap between the time-evolved pure initial state |ψ_0(0)⟩≈|S(2,0)⟩ and the mixed state at the end of the non-unitary evolution, ρ(t_f). In this case, the fidelity simplifies to
ℱ(ρ(t_f), |ψ_0(t_f)⟩⟨ψ_0(t_f)|) = ⟨ψ_0(t_f)|ρ(t_f)|ψ_0(t_f)⟩.
Defining ψ_0 = vec[|ψ_0(t_f)⟩⟨ψ_0(t_f)|], we find that the Uhlmann fidelity reduces to
ℱ(t_f) = |ψ_0^†·ρ(t_f)|.
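A compact sketch of the vectorized workflow is given below: it assembles the row-vectorized Lindbladian of Eq. (<ref>), propagates the vectorized density matrix, and evaluates the reduced fidelity against a pure target state. For simplicity the sketch assumes a time-independent Lindbladian (a single matrix exponential); a time-dependent pulse requires stepping through piecewise-constant segments. All names are illustrative.

```python
# Row-vectorized Lindbladian, propagation, and reduced Uhlmann fidelity.
import numpy as np
from scipy.linalg import expm

def lindbladian(H, jump_ops):
    # vec is taken row-wise, matching the convention used above.
    d = H.shape[0]
    I = np.eye(d)
    L = -1j * (np.kron(H, I) - np.kron(I, H.T))
    for Lj in jump_ops:
        LdL = Lj.conj().T @ Lj
        L += np.kron(Lj, Lj.conj()) - 0.5 * (np.kron(LdL, I) + np.kron(I, LdL.T))
    return L

def evolve_and_fidelity(L, rho0, psi_target, t_f):
    # Propagate the row-vectorized density matrix with a constant Lindbladian.
    rho_vec = expm(L * t_f) @ rho0.reshape(-1)
    target_vec = np.outer(psi_target, psi_target.conj()).reshape(-1)
    # <psi|rho|psi> as an inner product of the two vectorized operators.
    return np.real(np.vdot(target_vec, rho_vec))
```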
Full-time evolutions are shown in Fig. <ref>. We observe that the geometric fast-QUAD provides a better overlap with the energy eigenstate for weak dephasing. Figure <ref> shows the procedure to obtain the optimal control simulation in Fig. <ref>. We start by simulating the Uhlmann fidelity for a fixed decoherence time T_2 for pulse times t_f∈[0,50] ns and extract the highest overlap for both the linear (dashed line) and geometric (full line) protocols. These are shown in the figure as blue circles/red stars for two exemplary decoherence times T_2 = 1 ns and 100 ns, respectively. Note that the geometric protocol provides a higher fidelity at shorter pulse times. From each simulation, therefore, we extract the fidelity ℱ(t_f^*), the pulse time t_f^* at which the highest fidelity is reached, and the corresponding decoherence time T_2.
http://arxiv.org/abs/2409.03542v1 | 20240905140656 | Risk-based Calibration for Probabilistic Classifiers | [
"Aritz Pérez",
"Carlos Echegoyen",
"Guzmán Santafé"
] | cs.LG | [
"cs.LG"
] |
Risk-based Calibration for Probabilistic Classifiers
Aritz Pérez, Carlos Echegoyen and Guzmán Santafé
Aritz Pérez is at the Basque Center for Applied Mathematics, 48009 Bilbao, Spain. Email: [email protected]
Carlos Echegoyen and Guzmán Santafé are with the Spatial Statistics Group and INAMAT^2, Public University of Navarre, 31006 Pamplona, Spain. Email: {carlos.echegoyen, guzman.santafe}@unavarra.es
September 9, 2024
====================================================================================================================================================================================================================================================================================================================================================================
§ ABSTRACT
We introduce a general iterative procedure called risk-based calibration (RC) designed to minimize the empirical risk under the 0-1 loss (empirical error) for probabilistic classifiers. These classifiers are based on modeling probability distributions, including those constructed from the joint distribution (generative) and those based on the class conditional distribution (conditional). RC can be particularized to any probabilistic classifier provided a specific learning algorithm that computes the classifier's parameters in closed form using data statistics. RC reinforces the statistics aligned with the true class while penalizing those associated with other classes, guided by the 0-1 loss. The proposed method has been empirically tested on 30 datasets using naïve Bayes, quadratic discriminant analysis, and logistic regression classifiers. RC improves the empirical error of the original closed-form learning algorithms and, more notably, consistently outperforms the gradient descent approach with the three classifiers.
Supervised classification, probabilistic classifier, empirical risk minimization, iterative learning algorithm, gradient descent.
§ INTRODUCTION
Supervised classification is one of the most crucial problems in machine learning, entailing the acquisition of a classifier that minimizes the risk for the 0-1 loss (error). A classifier essentially maps input variables to a set of class labels. However, since the true probability distribution is unknown, we cannot compute the error, and classifiers have to be learned from i.i.d. data according to the true distribution. Traditionally, the learning problem is tackled by minimizing empirical surrogates of the error <cit.>. Support vector machines (SVM) <cit.> and logistic regression (LR) <cit.> exemplify such approaches, minimizing the average hinge and negative log loss on training samples, respectively.
Three main approaches to constructing classifiers are discriminative, conditional, and generative <cit.>. Discriminative approaches directly model decision boundaries between classes, typified by SVMs. Conditional methods, on the other hand, construct class conditional distributions and determine a classifier by selecting the class label with the maximum probability, as seen in LR. LR achieves this by minimizing the average negative log loss through gradient descent (GD). Generative classifiers present a third alternative, rooted in a joint distribution framework and utilizing Bayes' rule to get class conditional distributions <cit.>. We call probabilistic classifiers conditional and generative approaches.
Unlike their counterparts, generative classifiers do not rely on minimizing an empirical error surrogate but instead on measures quantifying data fitness, such as the maximum likelihood principle. To this category belong classifiers from the exponential family, like Quadratic Discriminant Analysis (QDA), and those grounded in Bayesian networks under varied assumptions, including discrete Bayesian networks <cit.> and conditional Gaussian networks <cit.>. Notably, the direct impact of the likelihood function on classification performance can become negligible as the dimensionality increases <cit.>. Efforts have been made to learn generative classifiers by minimizing the average negative log loss for discrete Bayesian networks through GD <cit.>.
However, gradient descent could suffer several drawbacks, particularly those concerning constrained parameters. Violations of these constraints may yield invalid parameter values, numerical instability, and reduce model performance. Techniques like projection, parameter transformation <cit.>, or Lagrange multipliers <cit.> are indispensable for enforcing constraints but may introduce convergence challenges that hinder efficient optimization. Therefore, specialized methods become imperative to mitigate these challenges and ensure reliable learning of probabilistic classifiers with constrained parameters. Additionally, GD requires differentiable objective functions, typically addressing empirical error through surrogate losses like the negative log loss <cit.>. Furthermore, GD may entail high computational costs; for instance, computing the inverse of a covariance matrix represents a computationally intensive operation that can limit practical applications in high-dimensional domains.
One key advantage of discriminative classifiers concerning generative ones is that they directly model the decision boundary between classes, avoiding the more complex task of estimating the joint distribution, which often results in smaller errors. Additionally, discriminative classifiers tend to require fewer assumptions about the underlying data distribution, making them more flexible and generally more effective when the true data distribution is complex or unknown.
On the other hand, generative classifiers offer several key advantages. First, they provide comprehensive data modeling by capturing the joint probability distribution of input features and class labels, offering a better understanding of the data-generating process <cit.>. Generative classifiers handle missing data effectively by marginalizing the missing values, enhancing robustness in practical scenarios <cit.>. Generative models can also generate synthetic data samples <cit.>, aiding in data augmentation and anomaly detection. Moreover, they facilitate the integration of prior knowledge and domain expertise through prior distributions in a Bayesian framework. They also fit well into a Bayesian decision theory framework, optimizing decisions under uncertainty <cit.>. Additionally, generative classifiers often perform better with smaller training set sizes, because they approach their best performance faster, possibly with sizes logarithmic in the number of parameters <cit.>. Finally, when the class conditional distribution is accurately modeled within the joint distribution, generative classifiers can provide optimal predictions for a given cost-sensitive loss function without further adjustments <cit.>. Conditional classifiers represent an intermediate step between discriminative and generative approaches. They directly model the class conditional distribution while avoiding the need to model the marginal distribution of the input features, which is irrelevant for classification <cit.>.
In this work, we present a method that combines the strengths of both probabilistic and discriminative approaches. The proposed method, called risk-based calibration (RC), is designed for learning the parameters of probabilistic classifiers, ensuring their performance is comparable to discriminative classifiers while preserving the advantages derived from modeling probability distributions. RC focuses on minimizing the empirical risk for the 0-1 loss (empirical error) by using learning algorithms that compute the parameters in closed form from data statistics, such as maximum log-likelihood (ML) or maximum a posteriori (MAP) learning procedures.
The rest of the paper is organized as follows. Section <ref> provides the background of the proposal. Section <ref> presents a detailed and formal explanation of the proposed method. Section <ref> introduces the datasets used in the experiments as well as some common aspects of all the experiments. Sections <ref> and <ref> show empirical results for generative and conditional classifiers respectively. Section <ref> summarizes the main conclusions of the current work. Finally, in the appendices, the reader can find further details on the connection of with previous methods (Appendix <ref>), maximum a posteriori estimation of the parameters (Appendix <ref>), the implementation of GD (Appendix <ref>), and additional experimental results (Appendix <ref>).
§ PRELIMINARIES
§.§ Supervised classification
The supervised classification task involves learning a classifier from a training data set that minimizes the expected loss (risk). However, this is often unreliable since we do not know the underlying probability distribution, and the problem is reformulated using surrogate functions of the available training data, such as the average loss in training (empirical risk) <cit.>.
Let X ⊂ ℝ^n and Y={1, ⋯, r} be the input space and the set of class labels, respectively. A classifier h is a function from instances to labels, h: X→Y, and the set of classifiers is denoted by H. Classifier families can be defined in terms of a particular functional form of their parameters. We denote by H_Θ={h(·;θ): θ ∈Θ} the family of classifiers with parameter space Θ⊂ℝ^d. One of the simplest classifiers is the linear discriminant model <cit.>
h(x)= arg max_y θ_y^T · (1,x),
for every input x ∈X, where θ_y=(θ_0,y,θ_1,y,⋯,θ_n,y) ∈ℝ^n+1 and n is the dimension of x. In a linear discriminant model, the decision boundaries that separate the classes are linear functions of the inputs, given in terms of hyperplanes in the input space X. A more general form of linear discriminant function is given by
h(x)= arg max_y θ^T ·ϕ(x,y),
where θ ∈ℝ^d are the parameters and ϕ(x,y): X,Y↦ℝ^d is the feature mapping. Intuitively, the feature mapping defines what is relevant for classifying x. The linear discriminant function of Eq. <ref> corresponds to the parameters θ=(θ_1, ⋯, θ_r), with θ_y ∈ℝ^n+1 for y ∈Y, and the one-hot class encoding of the linear function ψ(x)=(1,x),
ϕ(x,y)= (1[y=1]·ψ(x), ⋯, 1[y=r]·ψ(x)),
where 1[·] is the indicator function that takes value one when its argument is true, and zero otherwise. We call Eq. <ref> with the linear function ψ(x)=(1,x) the linear feature mapping.
The loss function measures the discrepancy between the predicted class labels and the true class labels. Formally, the loss of a classifier h evaluated at an instance (x,y) is a function l: H,(X,Y) → [0,∞). The natural loss in classification is the 0-1 loss, also known as the misclassification loss, l_01(h,(x,y))= 1[y≠ h(x)]. Formally, the goal of supervised learning can be defined as selecting the classifier h ∈H that minimizes the risk under the 0-1 loss (expected 0-1 loss or error):
min_h ∈H E_p^* [l_01(h,(x,y))],
where p^* ∈Δ(X,Y) is the underlying (unknown) distribution of the data. The classifier that minimizes the error is named the Bayes classifier.
In practice, the supervised classification problem is adapted to be tractable. Generally, following a divide-and-conquer approach, the supervised classification problem is addressed for a specific parametric family of classifiers H_Θ, where Θ is the support of the parameters. This approach enables the development of efficient learning algorithms that leverage the functional form of the chosen classifier family. Besides, the adaptations of the supervised classification problem typically involve minimizing a surrogate for the error. In the standard supervised classification settings, the underlying distribution of the data p^* ∈Δ(X,Y) is unknown, and we have access to a supervised training set, (X,Y) ∈X^m ×Y^m, with i.i.d. instances according to p^*, (X, Y)={(^i,y^i)}_i=1^m. Often, the empirical risk under the 0-1 loss (the average of the 0-1 loss in the training data or empirical error) is used as a surrogate for the true error, and the learning reduces to the empirical error minimization:
min_h ∈H_Θ 1/m ∑_x,y ∈ X,Y l_01(h,(x,y)).
This work is focused on the minimization of the empirical error. For alternatives to the empirical error minimization, see robust risk minimization approaches <cit.>.
Even when the supervised classification problem is restricted to a particular parametric family, optimizing the empirical error can be challenging, and therefore it is often replaced by an alternative loss with suitable properties for its minimization. An example of such modifications is learning logistic regression by minimizing the empirical negative log loss, l_log(h,(x,y))= -log h(y|x), facilitated by its differentiability.
§.§ Generative classifiers and closed-form learning algorithms
Conditional classifiers are constructed upon a class conditional distribution. The usual approach to learning conditional classifiers is focused on obtaining an accurate model of the class conditional distribution, h(· | x) ∈Δ(Y) for each x ∈X, and the classification corresponds to the class label with the highest probability, h(x):= arg max_y h(y|x). A typical example of conditional classifiers is LR,
h(y|x) ∝ exp{θ^T ·ϕ(x,y)},
for y ∈Y and for each x ∈X.
The generative classifiers are constructed upon a joint probability distribution h(,y) ∈Δ(X,Y), which by the Bayes rule obtains the class conditional distribution h(y|)=h(,y)/∑_y' ∈Yh(,y'). Generative classifiers are fundamentally motivated by their ability to represent the Bayes classifier, assuming they accurately capture the conditional distribution. Generative classifiers are usually constructed upon a joint distribution from a parametric family, such as the exponential family. Common examples of generative classifiers are the quadratic discriminant analysis (QDA) and the classifiers based on Bayesian networks <cit.>. Generative classifiers focus the learning on obtaining a good estimate of the joint distribution, and thus, they use surrogates for the empirical risk indirectly related to classification.
Hence, generative classifiers are typically learned by maximizing the log-likelihood of the joint distribution. In contrast, this work proposes a learning procedure guided by classification performance.
The current work is mainly devoted to generative classifiers with a closed-form learning algorithm based on statistics obtained from data. We say that a learning algorithm, a, has closed-form when it is a function composition of a statistics mapping function s: X^m,Y^m →^k and a parameter mapping function θ: ^k →Θ, a:= θ∘ s. Statistics mapping summarizes the relevant information in the training data (X,Y) into k statistics, which are used to compute analytically the d parameters of classifier. Generative classifiers based on the exponential family have closed-form algorithms that maximize the likelihood of the training data. In this family, the feature mapping corresponds to statistics mapping, ϕ(·)=s(·), and therefore k=d. Usually, statistics mapping s(·) and feature mapping ϕ(·) are closely related, however, the statistics mapping can include more terms than the feature mapping (see for instance Section <ref>).
From here on, we consider that the statistics mapping involved in the closed-form learning algorithm is additively decomposable, i.e., for (X,Y) ∈X^m,×Y^m, we have that s =s(X,Y)=∑_,y ∈ X,Y s(,y), where with a slight abuse in the notation s(X,Y) and s(,y) denotes the statistics mapping over a training set (X,Y) and over an instance (,y) respectively.
§.§ Examples of generative classifiers and learning algorithms
Next, we illustrate how the statistics mapping, s(·), manages the statistics calculated from the training sets with two well-known generative classifiers: naïve Bayes (NB) for discrete variables, and quadratic discriminant analysis (QDA) for continuous variables. Most of the classifiers use one-hot encoding for the class-related statistics, and this is the case for both NB and QDA. The statistics mapping for both classifiers can be given by the class one-hot encoding
s(x,y)= (1[y=1]·ψ(x),⋯ ,1[y=r]·ψ(x)),
where ψ(x) extracts the statistics from the features that typically correspond to those required to compute the zeroth, first, and second moments.
In the case of NB with discrete variables, the i-th input feature, x_i, has support X_i={1,...,r_i} for i=1,...,n. NB assumes that the input features are independent given the class variable, which leads to the classification rule for x:
h(x)= arg max_y p(y)·∏_i=1^n p(x_i|y),
where p(y) is the marginal probability of the class label y and p(x_i|y) is the probability of the i-th input variable taking the value x_i given the class label y. These conditional distributions are assumed to be categorical, and their parameters can be given by the maximum likelihood estimates obtained from the counting statistics. In NB, ψ(x)=(1, 1[x_1=1],...,1[x_1=r_1],...,1[x_n=1],...,1[x_n=r_n]). For NB the statistics obtained from data are s(X,Y)=(s_0,1, s_1,1,1,...,s_1,r_1,1,...,s_n,r_n,1,...,s_0,r, s_1,1,r,...,s_1,r_1,r,...,s_n,r_n,r), where s_0,y is the statistic associated with the label y of the class variable and s_i,x_i,y is the statistic associated with the value x_i of input i given the label y. The maximum likelihood parameter mapping is given by p(y)=s_0,y/∑_y' ∈Y s_0,y' and p(x_i|y)=s_i,x_i,y/∑_x_i' ∈X_i s_i,x_i',y for y ∈Y, i=1,...,n and x_i=1,...,r_i.
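For concreteness, a small sketch of the NB statistics mapping and ML parameter mapping described above is given below, assuming categorical inputs encoded as zero-based integers; the function names are illustrative and not from the released repository.

```python
# Count-based statistics and ML parameter mapping for discrete naive Bayes.
import numpy as np

def nb_statistics(X, Y, cards, r):
    # cards[i] is the number of categories of feature i; r is the number of classes.
    s0 = np.zeros(r)                              # class counts s_{0,y}
    s = [np.zeros((c, r)) for c in cards]         # per-feature counts s_{i,x_i,y}
    for x, y in zip(X, Y):
        s0[y] += 1
        for i, xi in enumerate(x):
            s[i][xi, y] += 1
    return s0, s

def nb_ml_parameters(s0, s):
    p_y = s0 / s0.sum()
    p_xi_given_y = [si / si.sum(axis=0, keepdims=True) for si in s]
    return p_y, p_xi_given_y
```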
On the other hand, the QDA's classification rule is given by:
h(x)= arg max_y p(y)· |Σ_y|^-1/2· exp{-1/2 (x-μ_y)^T Σ_y^-1 (x-μ_y)},
where μ_y and Σ_y are the mean vector and covariance matrix of x given the class y. This classifier is learned by estimating the maximum likelihood mean vector and covariance matrix using the statistics mapping given by ψ(x)=(1, x, x^2), with x^2= x·x^T being an n × n matrix. The statistics obtained from data are s(X,Y)= (s_0,1, s_1, s^2_1,...,s_0,r, s_r, s^2_r) with s_y'=∑_x,y ∈ X,Y 1[y=y']·x and s^2_y'=∑_x,y ∈ X,Y 1[y=y']·x^2 for y' ∈Y. The components 1, x, and x^2 are used to get the maximum likelihood estimates of p(y), μ_y, and Σ_y using the parameter mapping p(y)=s_0,y/∑_y' ∈Y s_0,y', μ_y= s_y/s_0,y and Σ_y= s^2_y/s_0,y - μ_y ·μ_y^T, for each class label y ∈Y.
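The corresponding QDA parameter mapping can be sketched as follows, assuming the statistics are stored as arrays of per-class counts, sums of x, and sums of x·x^T; this is an illustrative implementation, not the authors' code.

```python
# QDA ML parameter mapping from the statistics (s0_y, s_y, s2_y).
import numpy as np

def qda_ml_parameters(s0, s1, s2):
    # s0: (r,) counts, s1: (r, n) sums of x, s2: (r, n, n) sums of x x^T.
    p_y = s0 / s0.sum()
    mu = s1 / s0[:, None]
    sigma = s2 / s0[:, None, None] - np.einsum('yi,yj->yij', mu, mu)
    return p_y, mu, sigma
```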
Using the statistics mapping described here, we can also adapt the parameter mapping for NB and QDA to create a closed-form algorithm that maximizes the a posteriori distribution using conjugate priors (see Appendix <ref> for further details).
§ RISK-BASED CALIBRATION
The risk-based calibration algorithm (RC) is an iterative heuristic method to improve the empirical risk under the 0-1 loss of a probabilistic classifier using a closed-form learning algorithm. This method is founded on the very basic intuition of modifying the statistics used by the closed-form learning algorithm, guided by the stochastic 0-1 loss for each instance. The stochastic 0-1 loss of the probabilistic classifier h at (x,y) is given by
l_s01 (h,(x,y)) = 1-h(y|x)= ∑_y'≠ y h(y'|x),
that corresponds to the expected loss of a randomized classifier that selects
label y with probability h(y|x) for y ∈Y.
We aim at finding the parameters θ^* that minimize the empirical risk of the stochastic 0-1 loss (empirical stochastic error). Given a training set (X,Y), the empirical stochastic error is zero when h(y|x)=1 for all x,y ∈ X,Y. Intuitively, it is possible to lead the statistics s, and thus the model parameters, θ(s), towards the optimal classifier by strengthening the statistics s(x,y) and weakening s(x,y') for x,y ∈ X,Y and every y' ∈Y with y' ≠ y. The strengthening-weakening update is given by the classifier's stochastic 0-1 loss at the point (x,y).
We propose to shrink the empirical stochastic error by raising h(y|x) and by dropping h(y'|x) for (x,y) according to the stochastic 0-1 loss, by calibrating the set of statistics s used to obtain the model parameters. The calibration of s is performed by adding s(x,y) with a weight 1 - h(y|x) (strengthening h(y|x)), and by subtracting s(x,y') with a weight h(y'|x) for all y'≠ y (weakening h(y'|x)). The strengthening-weakening calibration is directly given by Eq. <ref>. Combining both terms, given the classifier h, we have the following updating rule of the statistics s given (x,y):
s= s + s(x,y) - ∑_y' ∈Y h(y'|x) · s(x,y').
Given the data (X,Y) and the classifier h, and due to the additive nature of the statistics s, the updating rule is simply given by:
s= s + s(X,Y) - s(X,h),
where s(X,h)= ∑_x ∈ X ∑_y' ∈Y h(y'|x)· s(x,y') is the probabilistic estimate of the statistics given the probabilistic classifier h.
The procedure is described in Algorithm <ref>, where, at each iteration, h corresponds to the probabilistic classifier with parameters θ(s), and lr > 0 is the learning rate.
Input: a training set (X,Y)
1. s ← s(X,Y)
2. Repeat:
3.   s ← s - lr ·(s(X,h) - s(X,Y)), for h with parameters θ(s)
4. Until the stop criterion is met
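A generic sketch of this loop is shown below. The three callables (per-instance statistics mapping, closed-form parameter mapping, and class-conditional probability evaluation) are assumptions standing in for s(·), θ(·), and h(y|x); a practical implementation would vectorize the inner loops and may track the best iterate across iterations.

```python
# Generic sketch of the RC loop in Algorithm 1 (zero-based class labels).
import numpy as np

def risk_based_calibration(X, Y, stats_point, params, predict_proba,
                           n_classes, lr=0.1, n_iter=64):
    # s(X, Y): additively decomposable statistics of the labeled data (fixed).
    s_data = np.sum([stats_point(x, y) for x, y in zip(X, Y)], axis=0)
    s = s_data.copy()                     # initialization (step 1)
    for _ in range(n_iter):
        theta = params(s)                 # closed-form parameter mapping
        P = predict_proba(theta, X)       # (m, n_classes) matrix of h(y'|x)
        # Probabilistic statistics s(X, h): soft counts under the classifier.
        s_model = np.zeros_like(s_data)
        for i, x in enumerate(X):
            for y in range(n_classes):
                s_model += P[i, y] * stats_point(x, y)
        s = s - lr * (s_model - s_data)   # strengthening-weakening update
    return params(s)
```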
The initialization of the statistics (step 1 in Algorithm <ref>) can be arbitrary, as long as the statistics remain consistent and produce valid parameters. However, we recommend initializing them using s(X,Y). The statistics obtained from training data provide a more competitive starting point from the empirical error point of view. However, it is possible to try different runs using bootstrap samples from the training data to avoid the convergence to a poor local optima.
The computational cost of each iteration is given by the statistics mapping s(·) and the parameter mapping θ(·). The computational complexity of the statistics mapping is linear in the number of training samples m and the dimension of the statistics k, O(m·k), while the computation of the parameter mapping is independent of m and is usually linear in the number of statistics and parameters, O(k + d). Due to the additively decomposable assumption for the statistics mapping, it is possible to speed up the RC by using stochastic and minibatch versions, which process subsets of the data in each iteration.
A relevant property of RC is that the sample size of the updated statistics remains invariant across iterations. This is because the strengthening-weakening strategy comes from the stochastic 0-1 loss (Eq. <ref>) and satisfies ∑_y' ∈Y (1[y=y'] - h(y'|x))= 0 for any x,y ∈X,Y. However, discrepancies between the class conditional probabilities given by the classifier and the class labels in the training set can sometimes lead to invalid parameters. To address this, we simply allow the learning process to continue by updating only those statistics that would result in valid parameters, freezing the rest.
The RC is a general-purpose algorithm for learning probabilistic classifiers that presents some connections to other existing methods proposed in the literature. The analysis of the connections with three iterative learning algorithms is presented in Appendix <ref>: discriminative frequency estimate for classifiers based on discrete Bayesian networks <cit.>, GD for LR, and the TM algorithm for generative classifiers from the exponential family <cit.>.
§ DATASETS AND EXPERIMENTAL SETTING
In this section, we will introduce the datasets used in the experiments and some common elements to all the experiments in the current paper. All the experiments are focused on the minimization of the empirical error.
The implementations of the classifiers, learning algorithms, and experiments are available online in the public Python repository at <https://gitlab.bcamath.org/aperez/risk-based_calibration>.
We utilize 30 public available datasets <cit.>, each characterized by different numbers of instances (m) and variables (n).
Table <ref> provides an overview of the datasets used for the experiments, where the "Index" column serves as the identifier referenced along the empirical analysis. This comprehensive collection covers a wide range of domains and complexities, ensuring a robust evaluation of the proposed method across diverse real-world scenarios. Note that the 512 features of the datasets {3, 4, 8, 20, 30} are provided by ResNet18 <cit.> pre-trained deep neural networks for image classification problems. These specific datasets contain a larger number of instances and input variables compared to the others.
All algorithms have been run 64 iterations with a fixed learning rate of lr= 0.1. The performance is measured in terms of the empirical error. We use two closed-form learning algorithms for the probabilistic classifiers: Maximum Likelihood (ML) and maximum a posteriori (MAP) (see Section <ref> and <ref>, and Appendix <ref>, respectively). The parameters of the classifiers are also initialized according to ML and MAP, respectively.
The proposed method, RC, is compared with GD using the same parameter initialization. The main results are summarized in tables indexed by dataset with the following description of the columns: the "ML" (or "MAP") column shows the empirical error given by the ML (or MAP) parameters and constitutes the initialization of RC and GD; the "RC" and "GD" columns provide the minimum empirical error reached by RC and GD in 64 iterations, respectively, and the best result for each dataset is highlighted in bold; the "Iter" columns contain the iteration at which RC and GD reach the minimum empirical error in 64 iterations. The "Reach" column shows the number of iterations required by RC to achieve an error that is less than or equal to the lowest error obtained by GD in 64 iterations. A smaller reach indicates that the error reduction of RC is steeper and faster than that of GD. The "avg." row presents the average errors and numbers of iterations across the 30 datasets, and the average reach is computed over those datasets in which RC obtains an error that is less than or equal to the lowest error of GD.
§ EXPERIMENTS WITH NB AND QDA
NB and QDA were introduced in Section <ref> as illustrative examples of generative classifiers, alongside the closed-form algorithm that RC utilizes to perform the calibration of these classifiers. In this section, we summarize the empirical results obtained with NB and QDA.
§.§ RC for NB
NB is a classification model that deals with discrete variables. To adapt the datasets from Table <ref> for use with NB models, each continuous variable is discretized into 5 categories according to a k-means strategy <cit.>, where the values in each bin belong to the same cluster. The results are presented in Tables <ref> and <ref>.
According to Table <ref>,
RC improves the ML initialization in 27 out of 30 datasets. In all these cases, RC obtains lower errors than GD. The average reach of RC is 5, indicating that it produces a steeper and faster empirical error reduction than GD.
In Table <ref>, when MAP learning is used, RC reduces the initial error across 21 datasets, with RC proving superior to GD in 18 of them. The average reach is 5. The behavior of RC is clearly superior to that of GD for learning NB.
§.§ RC for QDA
The experimental results obtained with QDA are summarized in Tables <ref> and <ref> for ML and MAP, respectively.
According to Table <ref>,
RC improves the ML initialization in 29 out of 30 datasets. In these datasets, RC obtains lower errors than GD, except in (5) where they tie. The average reach in this scenario is 6. In Table <ref>, when MAP learning is used, RC also reduces the initial error in 29 datasets. In all these datasets, RC achieves lower errors than GD except in three cases, obtaining equal results in two of them. The average reach is also 6. Again, the behavior of RC is clearly superior to that of GD for learning QDA.
§ RC FOR LOGISTIC REGRESSION
One of the most popular conditional classifiers is LR (see Eq. <ref>), which is typically learned using GD to minimize the empirical risk under the negative log loss. In this section, we present a closed-form algorithm for LR, enabling its learning using RC. The proposed closed-form learning algorithm is based on a generative formulation of LR under parametric assumptions.
§.§ Closed-form algorithm for LR
Let h(x,y)= h(y)∏_i=1^n h(x_i|y), where h(y) is a categorical distribution with parameters p_1,...,p_r, and h(x_i|y) is a Gaussian distribution with mean μ_i,y and variance σ_i^2 for i=1,...,n and y ∈Y. This generative classifier corresponds to the naïve Bayes classifier based on conditional Gaussian networks <cit.> under the homoscedasticity assumption, i.e., the variance of each continuous variable does not depend on the class. The connection between both models is explained and analyzed in <cit.>. By the Bayes rule, h(x,y) leads to the conditional class distribution:
h(y|x) ∝ exp{ϕ(x,y)^T·θ - c_y},
where ϕ(x,y) is the linear feature mapping and θ=(θ_1,...,θ_r) are the parameters with θ_y=(θ_0,y, θ_1,y,⋯,θ_n,y) for y ∈Y; θ_0,y= log p_y, θ_i,y= μ_i,y/σ_i^2, and c_y= ∑_i=1^n μ_i,y^2/(2σ_i^2) + 1/2 logσ_i^2 for y=1,...,r and i=1,...,n. By grouping terms, the conditional class distribution corresponds to Eq. <ref> with θ=(θ_1,...,θ_r) and θ_0,y= log p_y - c_y for y=1,...,r.
The statistics mapping is simply given by the concatenation of the linear feature mapping and the squares of the input features (x_1^2,...,x_n^2), s(x,y)= (ϕ(x,y), x_1^2,...,x_n^2), with s(X,Y)= (s_0,1, s_1, ⋯, s_0,r, s_r, s^2) and s^2=(s_1^2,...,s_n^2)=∑_x ∈ X(x_1^2,...,x_n^2). The parameter mapping corresponds to p(y)= s_0,y/∑_y' s_0,y', the mean vector μ_y= (μ_1,y,...,μ_n,y)= s_y/s_0,y for y ∈Y, and the variance vector σ^2= (σ_1^2,...,σ_n^2)= s^2/∑_y'∈Y s_0,y' - ∑_y' ∈Y p(y')·μ_y'^2 with μ_y^2=(μ_1,y^2,...,μ_n,y^2).[In this work, we consider ψ(·)=(1,x) to be linear in x, but the proposal can be easily extended to an arbitrary ψ(·) by using its corresponding class one-hot feature mapping (Eq. <ref>) and a statistics mapping corresponding to ϕ(x,y) concatenated with the squares of ψ(·), s(x,y)=(ϕ(x,y), ψ(x)^2).]
Using the same statistics mapping, we can adapt the parameter mapping to create a closed-form algorithm that maximizes a posteriori distribution using the conjugate priors (see Appendix <ref> for further details).
§.§ Experiments using for LR
Next, we provide the set of experiments on the minimization of the empirical error for LR. The experiments have been performed with RC using the closed-form algorithms given in <ref> (ML and MAP) and with GD of the empirical risk under the negative log loss. RC and GD start from the same initialization, corresponding to the ML and MAP parameters obtained from data. Additionally, in Appendix <ref> we include results for random initialization that highlight the robustness and efficiency of RC.
Tables <ref> and <ref> show the performance of RC for ML and MAP, respectively. RC reduces the initial error in 29 datasets for both initializations, obtaining better results than GD in 24 out of 29 cases with ML and in 25 out of 29 cases with MAP. The average reach of RC is 13 and 16, for ML and MAP respectively.
Once again, the results show the ability of RC to improve the closed-form algorithms and its superiority with respect to GD for learning LR.
§ CONCLUSIONS
This work proposes an iterative learning algorithm called risk-based calibration (RC) to minimize the empirical error of probabilistic classifiers.
RC can be used to learn any probabilistic classifier, whether generative or conditional, as long as the classifier has a closed-form learning algorithm that involves two steps: statistics collection from data and analytical parameter computation from the obtained statistics. Typical examples include classifiers from the exponential family, such as naïve Bayes and quadratic discriminant analysis, which can benefit from the RC algorithm. Additionally, we show how to use the proposed procedure to learn the logistic regression classifier. The main difference between RC and other alternatives is that RC focuses on the calibration of the statistics rather than the parameters. This calibration is performed using information from the stochastic 0-1 loss function, ensuring that the learning process directly targets the minimization of the empirical error.
In the experiments, RC consistently achieved lower empirical errors than the gradient descent approach when learning naïve Bayes, quadratic discriminant analysis, and logistic regression classifiers. This demonstrates the effectiveness and robustness of RC in learning probabilistic classifiers by minimizing the empirical error. The ability of RC to reduce the empirical error reveals its potential as a preferred choice when learning probabilistic classifiers from data.
The implementations of the classifiers, learning algorithms, and experiments are available online in the public Python repository at <https://gitlab.bcamath.org/aperez/risk-based_calibration>.
§ CONNECTIONS BETWEEN AND OTHER METHODS
This section shows the connection of with three iterative learning procedures: Discriminative frequency estimate for learning classifiers based on discrete Bayesian networks, GD for LR, and the TM algorithm for generative classifiers from the exponential family.
§.§ Discriminative frequency estimate
Discriminative Frequency Estimate (DFE) <cit.> is used for learning the parameters of classifiers based on discrete Bayesian networks <cit.>, such as naïve Bayes (NB). The main motivation for DFE is that learning discrete Bayesian networks based on gradient descent over the empirical risk under the negative log loss <cit.> is computationally demanding. DFE is an iterative procedure where, at each iteration, the statistics used to derive the parameters of a discrete Bayesian network classifier are updated. Using our notation, the updating rule is given by:
s= s + ∑_x,y ∈ X,Y (1-h(y|x))· s(x,y).
This corresponds to a heuristic that strengthens the statistics s(x,y) according to the stochastic 0-1 loss. The main difference from RC in the context of discrete Bayesian network classifiers is that DFE does not weaken the incorrect statistics associated with the instance (x,y), s(x,y') for y' ≠ y. An important negative consequence is that DFE increases the equivalent sample size of the statistics s at each iteration by O(ϵ_s01· m), where ϵ_s01 is the empirical risk under the stochastic 0-1 loss of the classifier obtained in the previous iteration. This can have dramatic effects with large training sets or when the number of iterations for convergence is large. The main limitation of DFE is that it only applies to discrete Bayesian network classifiers.
In Table <ref> of Appendix <ref> we have summarized an experimental comparison between RC using ML and DFE for learning NB. These results clearly show that RC outperforms DFE.
§.§ The connection between and gradient descent for LR
In the case of LR, RC is equivalent to GD for the empirical risk under the negative log loss when RC uses a particular closed-form learning algorithm. This algorithm is related to the empirical average of the feature mapping used by LR. In Section <ref>, we present LR with the linear feature mapping, while here we consider Eq. <ref> with an arbitrary function ψ(·).
Let h be a classifier given by Eq. <ref> for the feature mapping given in Eq. <ref> with arbitrary ψ(·), a= θ∘ s be a learning algorithm for h, and (X,Y) be a training set of size m. The RC update rule is equivalent to GD of the empirical risk under the log loss if the learning algorithm a is given by the statistics mapping s(x,y)= ϕ(x,y) and the parameter mapping θ(s)=s/m.
On the one hand, the RC updating rule with learning rate lr>0 is given by:
s_y'= s_y' - lr·∑_x,y ∈ X,Y ψ(x)· (h(y'|x)- 1[y'=y]),
for y' ∈Y. Then, using the parameter mapping, we have that
θ = θ - lr/m·∑_x,y ∈ X,Y ∑_y' ∈Y ϕ(x,y') · (h(y'|x) - 1[y'=y]).
On the other hand, the gradient descent of the average negative log loss of h, R(h)=-1/m ∑_x,y ∈ X,Y log h(y|x), with respect to θ is
δ R(h)/δθ = 1/m ∑_x,y ∈ X,Y ∑_y'∈Y ϕ(x,y')· (h(y'|x)-1[y'=y]),
which leads to the same parameter updating rule as RC.
§.§ The connection between and TM
The TM <cit.> is an iterative algorithm for maximizing the conditional likelihood using maximum likelihood learning procedures. The conditional log-likelihood is proportional to the empirical risk under the negative log loss. TM is a general-purpose algorithm that can be used for both regression (Y⊂ℝ, continuous) and classification (Y={1,⋯ , r}, categorical).
At each iteration t, the TM solves two steps:
* T-step: Compute the gradient of the marginal log-likelihood with respect to the parameters:
δ LL(X;θ)/δθ,
with LL(X;θ)= ∑_x ∈ X log ∑_y ∈Y h(x,y;θ).
* M-step: solve the maximization problem.
θ^(t+1)= arg max_θ LL(X,Y;θ) - (δ LL(X;θ^(t))/δθ)·θ,
where δ LL(X;θ^(t))/δθ denotes the derivative of the marginal log-likelihood with respect to the parameters evaluated at θ^(t).
The T-step is based on an approximation to the conditional log-likelihood function, obtained by linearizing the marginal log-likelihood.
In classification, Y={1,⋯ , r}, the M-step is equivalent to finding θ so that the next equality holds:
δ LL(X,Y;θ)/δθ = δ LL(X,h(· | X;θ^(t)))/δθ,
with LL(X,h(· | X;θ))=∑_x ∈ X ∑_y ∈Y h(y|x;θ) ·log h(x,y;θ).
u^(t+1)= u^(t) + u(X,Y) - u(X,h^(t)),
where the parameters of the model at iteration t+1 are given by the maximum likelihood sufficient statistics s=(u^(t+1),v^(0)), with v^(0) = v(X). In summary, for the particular case of generative classifiers from the exponential family using the maximum likelihood learning algorithm, and a learning rate of lr=1, TM and RC are equivalent.
We believe the main reason for the limited use of TM is the difficulty of its implementation. This is mainly due to the necessity of understanding the intricate details of the exponential family model, such as minimal sufficient statistics and those dependent or independent of the class variable. For instance, one of the few examples of TM usage is outlined in <cit.>, where the authors used it to learn classifiers based on Bayesian networks with categorical variables. The primary methodological challenge in that work involves updating the minimal sufficient statistics of conditional distributions over categorical variables while maintaining their consistency. This unnecessarily complicates the implementation of the learning method, particularly since its complexity strongly depends on the number of states of the categorical variables. This complexity is apparent when compared to RC, which merely requires the iterative application of maximum likelihood learning with probabilistically labeled data.
§ MAXIMUM A POSTERIORI PARAMETER MAPPING
An alternative to the maximum likelihood parameter estimation is a Bayesian estimation of the parameters. In Bayesian estimation, we assume a prior for the distribution of the parameters. Then, given the data, we obtain the posterior distribution of the parameters and select the parameters of its mode (maximum a posteriori parameters, MAP). For certain, parametric distributions, there are prior distributions over their parameters that allow obtaining the MAP parameters in closed form.
NB is based on the categorical distribution and the Bayesian conjugate of the categorical distribution is the Dirichlet distribution. Let's take the categorical distribution p(y) for y ∈Y={1,...,r} and the prior Dirichlet distribution for its parameters
θ=(θ_1,...,θ_r) ∼ Dir(α),
with hyperparameters α=(α_1,...,α_r). The posterior distribution of the parameters of p(y) after observing Y={y_i}_i=1^m is given by:
θ | Y ∼ Dir(α + (m_1,...,m_r)),
where m_y'= ∑_y ∈ Y 1[y=y'] for y' ∈Y. The MAP corresponds to:
θ'_y= (m_y + α_y - 1)/(m + α - r),
with α= ∑_y=1^r α_y. By taking α_y= m_0/r +1 for y ∈Y, we have the more intuitive MAP
θ'_y= (m_y + m_0/r)/(m + m_0),
where m_0 can be interpreted as the equivalent sample size of the prior. In the experiments with MAP we have taken m_0= r. The same analysis follows for all the conditional distributions that are involved in NB, p(x_i|y) for y ∈Y, and i=1,...,n with x_i ∈{1,...,r_i}.
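As an illustration, the MAP estimate above amounts to adding m_0/r pseudo-counts per label; a one-line sketch with illustrative names:

```python
# MAP estimate of a categorical distribution with m0 pseudo-counts in total.
import numpy as np

def categorical_map(counts, m0):
    r = len(counts)
    return (counts + m0 / r) / (counts.sum() + m0)
```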
QDA is based on a categorical distribution p(y) and n-dimensional Gaussian density functions p(|y) for y ∈Y. The Bayesian conjugate of the parameters of a multivariate Gaussian distribution is the normal distribution for the mean and the inverse-Wishart distribution for the covariance:
μ|Σ∼ N(μ_0,1/κ_0Σ)
Σ∼ W^-1(T_0, ν_0)
The posterior distribution for the parameters given the observations X={x_1,...,x_m} is given by
μ|Σ∼N(μ', 1/κ'Σ)
Σ∼W^-1(Σ',ν')
Then, the MAP parameters for the mean and the covariance matrix are
μ' = (κ_0 μ_0 + m μ̂)/(κ_0 + m)
Σ' = (T_0 + m Σ̂ + (κ_0 · m/(κ_0 + m))·(μ̂-μ_0)· (μ̂-μ_0)^T)/(ν_0+m+n+1),
where μ̂ and Σ̂ are the sample mean and covariance matrix. We propose the following re-parametrization in terms of m_1, m_2 ≥ 0, κ_0= m_1, T_0= m_2Σ_0 and ν_0=(m_2- n -1), for the sake of interpretability. Under this re-parametrization and neglecting the term (κ_0 · m)(μ̂-μ_0)· (μ̂-μ_0)^T/((ν_0+m+n+1)·(κ_0 + m)) because usually κ_0<<m, we have the next intuitive MAP parameters in terms of prior mean vector μ_0 and covariance matrix Σ_0 with weights m_1 and m_2, respectively:
μ'= (m_1·μ_0 + m μ̂)/(m_1 + m)
Σ'= (m_2·Σ_0 + m Σ̂)/(m_2 + m)
Here, m_1 and m_2 can be interpreted as the equivalent sample size of the priors for the mean vector and covariance matrix, respectively. In the experiments with MAP we have taken m_1=m_2=10.
This MAP estimate is used for the parameters of p(x|y) for y ∈Y, while the MAP parameters of p(y) are obtained using the same procedure as for NB. We have followed a similar approach for the parameter mapping used in the closed-form algorithm of LR.
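A sketch of the re-parametrized Gaussian MAP estimates described above, shrinking the sample mean and covariance towards the priors μ_0 and Σ_0 with weights m_1 and m_2 (illustrative names, not the authors' code):

```python
# Re-parametrized MAP estimates for a Gaussian class-conditional component.
import numpy as np

def gaussian_map(X, mu0, sigma0, m1=10.0, m2=10.0):
    m = X.shape[0]
    mu_hat = X.mean(axis=0)
    sigma_hat = np.cov(X, rowvar=False, bias=True)   # ML covariance (divide by m)
    mu_map = (m1 * mu0 + m * mu_hat) / (m1 + m)
    sigma_map = (m2 * sigma0 + m * sigma_hat) / (m2 + m)
    return mu_map, sigma_map
```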
§ GRADIENT DESCENT
In this section, we give the details of the gradient descent updating rule for the empirical risk under the negative log loss for the three classifiers considered in the experiments of this paper. In addition, we explain the transformations used to satisfy the parameter constraints. All the classifiers have been expressed in the exponential-family form,
h(y|x) ∝ exp{η^T· s(x,y) + A_y(η)}.
This simplifies obtaining a clear expression for the gradient of the parameters and puts all the considered models into a comparable form.
The exponential family form of the conditional probability modeled by NB is given by the feature mapping corresponding to the one-hot encoding of each feature and the class,
ϕ(x,y)=(1[y=1], 1[y=1]·1[x_1=1],...,1[y=1]·1[x_1=r_1],..., 1[y=1]·1[x_n=1],...,1[y=1]·1[x_n=r_n],..., 1[y=r], 1[y=r]·1[x_1=1],...,1[y=r]·1[x_n=r_n]), and the parameters
η=(η_0, η_1,1,...,η_n,1,... ,η_1,r,...,η_n,r) with η_0=(η_0,y= log p(y))_y=1^r and η_i,y=(η_i,x_i,y = log p(x_i|y))_x_i=1^r_i for i=1,...,n and y ∈Y. For NB, A_y(η)=0. The gradient descent updating rule is given by:
η_0,y'= η_0,y' - lr/m ∑_x,y ∈ X,Y (h(y'|x)- 1[y'=y]),
η_i,x_i',y'= η_i,x_i',y' - lr/m ∑_x,y ∈ X,Y 1[x_i= x_i']·(h(y'|x) - 1[y'=y]),
for y'=1,...,r, i=1,...,n, and x'_i=1,...,r_i.
Then, after applying the gradient descent updating rule, the natural parameters are transformed into probabilities by exponentiation and by projecting them into the simplex. Alternatives to the projection to the simplex include using softmax to obtain proper probability distributions. Unfortunately, by using these transformations the descent in the average negative log loss is no longer guaranteed.
The exponential-family distribution form of the conditional probability modeled by QDA is given by the feature mapping corresponding to the class one-hot coding ϕ(x,y)=(1[y=1], 1[y=1]·x, 1[y=1]·x·x^T,..., 1[y=r], 1[y=r]·x, 1[y=r]·x·x^T); the parameters η=(η_0,η_1,η_2) with η_0= (η_0,y=log p(y))_y=1^r, η_1= (η_1,y= Σ_y^-1·μ_y)_y=1^r and η_2=(η_2,y= -1/2 ·Σ_y^-1)_y=1^r, being μ_y and Σ_y the mean vector and covariance matrix conditioned to y; and A_y(η)= 1/4·η_2,y^-1·η_1,y·η_1,y^T ·η_2,y^-1 - 1/2·η_2,y^-1. The gradient descent updating rules are given by:
η_1,y' = η_1,y' - lr/m·∑_x,y ∈ X,Y (h(y'|x) - 1[y=y']) ·(x + 1/2 ·η_2,y'^-1·η_1,y'),
η_2,y' = η_2,y' - lr/m·∑_x,y ∈ X,Y (h(y'|x) - 1[y=y']) ·(x·x^T - 1/4 ·η_2,y'^-1·η_1,y'·η_1,y'^T ·η_2,y'^-1 + 1/2· tr(η_2,y'^-1)),
for y'=1,...,r, and being tr(·) the trace of a matrix. Then, after every gradient descent updating the natural parameters η_0 are transformed into probabilities by exponentiation and by projecting them into the simplex; η_2,y is transformed into the covariance matrix Σ_y=-1/2·η_2,y^-1 and is ensured to be a positive semi-definite matrix by: i) obtaining the singular value decomposition, ii) guaranteeing that all the eigenvalues are no smaller than ϵ= 10^-2, and iii) reconstructing the matrix using the constrained eigenvalues. Again, by transforming the obtained parameters to fulfill their associated constraints (probabilities and covariance matrices) we can not ensure that the average negative log loss descents.
LR is directly given in the exponential-family distribution form, η=θ. In this work, we simply consider the feature mapping corresponding to the one-hot class encoding ϕ(x,y)=(1[y=1], 1[y=1]·x,..., 1[y=r], 1[y=r]·x) and the parameters θ=(θ_0,θ_1) with θ_0=(θ_0,y∈ℝ)_y=1^r and θ_1=(θ_1,y∈ℝ^n)_y=1^r. In this model A_y(θ)=0. The gradient descent updating rule for LR is:
θ = θ - lr/m ∑_x,y ∈ X,Y ∑_y' ∈Y ϕ(x,y')·(h(y'|x) - 1[y'=y]).
In this model, the parameters have no constraint, and thus a monotonic descent of the average negative log loss is guaranteed, but not a descent in terms of the empirical error.
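For completeness, below is a vectorized sketch of this LR gradient-descent step with the one-hot class encoding of the linear feature map (Φ contains a leading column of ones); this is an illustrative implementation, not the released code.

```python
# One gradient-descent step on the average negative log loss for LR.
import numpy as np

def lr_gd_step(theta, Phi, Y, lr):
    # theta: (r, n+1); Phi: (m, n+1); Y: integer labels in 0..r-1.
    logits = Phi @ theta.T                         # (m, r)
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    P = np.exp(logits)
    P /= P.sum(axis=1, keepdims=True)              # h(y'|x) via softmax
    onehot = np.eye(theta.shape[0])[Y]             # (m, r)
    grad = (P - onehot).T @ Phi / Phi.shape[0]     # (r, n+1)
    return theta - lr * grad
```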
§ ADDITIONAL RESULTS
§.§ Convergence curves in mnist
Next, we show the curves of the evolution of the empirical error with respect to the number of iterations of and GD in mnist dataset (20) for NB, QDA, and LR. The error curves represent 128 iterations.
In Figure <ref>, we show the error curves of using ML and MAP for learning NB. can reduce the error of the initial model using ML and MAP parameters. with MAP converges to a local optimum in 60 iterations, while with ML does not reach convergence in 128 iterations. However, the behavior using ML is better than MAP for mnist.
Figure <ref> shows the error curves of QDA with ML and MAP. RC behaves similarly with ML and MAP: it shows a significant reduction in error in the first iterations and achieves an error close to the minimum in less than 30 iterations.
Figure <ref> shows the error curves of LR with ML and MAP. RC with ML and MAP shows a similar behavior. In 128 iterations, RC reduces the error from 0.14 to 0.05, and the curve's downward slope suggests that the error would keep decreasing if more iterations were used. The error curve of RC decreases faster than that obtained with GD.
§.§ Random initialization of parameters
Table <ref> shows the behavior of RC for LR with random initialization and updating the parameters using ML. In this case, the column "Random" contains the empirical error when the model is initialized at random. RC obtains lower errors than GD in 27 datasets. In two of the three remaining datasets, both algorithms tie. The results from Table <ref> suggest that RC reaches the best GD result, on average, in 19 iterations.
The minimum errors and the number of iterations required to obtain them with RC using ML and random initialization are similar (see Table <ref>). Figure <ref> and Figure <ref> show the evolution of the error with random and ML initialization in mnist, respectively. When departing from random initialization, RC produces a steeper descent in the first iterations and obtains an error similar to the ML initialization in only 2 iterations. These results suggest that RC exhibits a robust behavior independently of the initial parameters.
§.§ Comparison with DFE
Table <ref> summarizes the comparison between RC with ML as the closed-form learning algorithm and the discriminative frequency estimate (DFE) for discrete NB (see Appendix <ref>). RC achieves lower errors in 25 datasets and equal errors in 5. The average reach of RC is 12, indicating that it produces a steeper and faster empirical error reduction than DFE. These results clearly show that RC using ML is better than DFE for learning NB.
|
http://arxiv.org/abs/2409.02426v1 | 20240904041402 | Diffusion Models Learn Low-Dimensional Distributions via Subspace Clustering | [
"Peng Wang",
"Huijie Zhang",
"Zekai Zhang",
"Siyi Chen",
"Yi Ma",
"Qing Qu"
] | cs.LG | [
"cs.LG",
"cs.CV"
] | |
http://arxiv.org/abs/2409.02663v1 | 20240904124602 | Generalized Individual Q-learning for Polymatrix Games with Partial Observations | [
"Ahmed Said Donmez",
"Muhammed O. Sayin"
] | cs.GT | [
"cs.GT",
"cs.SY",
"eess.SY"
] |
|
http://arxiv.org/abs/2409.02653v1 | 20240904122844 | Skip-and-Play: Depth-Driven Pose-Preserved Image Generation for Any Objects | [
"Kyungmin Jo",
"Jaegul Choo"
] | cs.CV | [
"cs.CV"
] |
Skip-and-Play: Depth-Driven Pose-Preserved Image Generation for Any Objects
Kyungmin Jo
KAIST
Daejeon, Korea
[email protected]
Jaegul Choo
KAIST
Daejeon, Korea
[email protected]
September 9, 2024
====================================================================================================================
Figure (teaser): Our method, Skip-and-Play (SnP), generates images of any objects from either image prompts (top) or text prompts (bottom), reflecting the given poses of conditions. While a depth (DP)-conditional ControlNet generates images reflecting object shapes from the condition, SnP produces images where the shapes reflect the prompt rather than the condition, despite employing the same model without additional training. For instance, when using the prompt "pig" and the depth map of a horse image as the condition, ControlNet produces a pig with the shape of a horse, while SnP does not. Extra results and the full text prompts are in the Supplementary (Suppl.).
§ ABSTRACT
The emergence of diffusion models has enabled the generation of diverse high-quality images solely from text, prompting subsequent efforts to enhance the controllability of these models. Despite the improvement in controllability, pose control remains limited to specific objects (e.g., humans) or poses (e.g., frontal view) due to the fact that pose is generally controlled via camera parameters (e.g., rotation angle) or keypoints (e.g., eyes, nose).
Specifically, camera parameters-conditional pose control models generate unrealistic images depending on the object, owing to the small size of 3D datasets for training. Also, keypoint-based approaches encounter challenges in acquiring reliable keypoints for various objects (e.g., church) or poses (e.g., back view).
To address these limitations, we propose depth-based pose control, as depth maps are easily obtainable from a single depth estimation model regardless of objects and poses, unlike camera parameters and keypoints. However, depth-based pose control confronts issues of shape dependency, as depth maps influence not only the pose but also the shape of the generated images.
To tackle this issue, we propose Skip-and-Play (SnP), designed via analysis of the impact of three components of depth-conditional ControlNet on the pose and the shape of the generated images. To be specific, based on the analysis, we selectively skip parts of the components to mitigate shape dependency on the depth map while preserving the pose. Through various experiments, we demonstrate the superiority of SnP over baselines and showcase the ability of SnP to generate images of diverse objects and poses. Remarkably, SnP exhibits the ability to generate images even when the objects in the condition (, a horse) and the prompt (, a hedgehog) differ from each other.
§ INTRODUCTION
With the advent of large-scale text-to-image diffusion models <cit.>, one can generate diverse high-quality images from given text. However, since these models primarily rely on text for adjusting the generated images, subsequent research has shifted focus towards enhancing their controllability by incorporating image prompts for content control <cit.>, as well as extra conditions for structure or pose control <cit.>.
Despite remarkable advances in the controllability of diffusion models, pose controllability remains limited, notably enabling it only on specific objects (e.g., a human) or poses (e.g., near the frontal view) due to the fact that pose is commonly controlled through camera parameters (e.g., rotation angle) or keypoints (e.g., eyes, nose).
Specifically, approaches <cit.> using camera parameters for pose control generate realistic images of only a limited scope of objects compared to models <cit.> trained on large-scale 2D datasets <cit.>, primarily due to the limited objects in 3D datasets <cit.>. Additionally, keypoint-based pose control studies <cit.> face difficulties in applying to diverse objects and poses, stemming from the absence of reliable keypoints. For example, the difficulty of defining keypoints for the pose of churches hinders generating images of them from keypoints. Similarly, depicting side views of humans using keypoints is complicated, often failing in the generation of side views compared to the frontal views (the fifth row in <ref>).
To enable generating images of any objects reflecting the given poses accurately, we propose depth-based pose control for two reasons: 1) accessibility, and 2) accuracy. While obtaining camera parameters and keypoints necessitate training distinct estimation models for each class of object (, human, chair), depth can be universally acquired using a single depth estimation model <cit.> for any objects.
Also, while keypoints lack 3D information due to their projection onto a 2D plane, depth inherently encodes 3D spatial information, making it more suitable for controlling pose (<ref>), defined by rotations and translations in 3D space. For the same reason, depth maps are superior for pose control to other structural control conditions such as segmentation maps, edge maps, etc.
However, since depth maps contain information not only about the pose but also about the shape, images generated using them as conditions inherit both poses and shapes of them.
For instance, generating an image of a hedgehog guided by a depth map of a horse image results in a hedgehog with a horse-like shape (the last example of ControlNet-DP in <ref>). For this reason, previous studies <cit.> have utilized depth not for pose control but for structure control.
To overcome this issue, we introduce Skip-and-Play (SnP), designed through a comprehensive analysis of the effects of three key components of ControlNet on the pose of the generated images: 1) the time steps using ControlNet, 2) the features generated from ControlNet using negative prompts, and 3) the ControlNet features passed to each decoder block. By selectively skipping a part of three elements, SnP enables the image generation of various objects reflecting the specified pose dictated by depth, without having a depth-dependent shape.
To sum up, our key contributions are as follows:
* We propose utilizing depth for pose control in a diffusion model, as depth is obtainable for any objects and poses and inherently encodes 3D information, making it suitable for representing poses defined in this space.
* We propose Skip-and-Play, designed by the empirical insights of depth-conditional ControlNet, to generate images reflecting the given pose without the shape being dependent on the depth map.
* We experimentally demonstrate the superiority of our model, both qualitatively and quantitatively, compared to previous studies on pose control in diffusion models.
§ RELATED WORK
Pose-guided Image Generation.
After the inception of Generative Adversarial Networks (GANs), a concerted effort has been made to generate images reflecting given poses.
3D GANs <cit.> and 3D diffusion models <cit.> directly manipulate poses by training Neural Radiance Fields <cit.>-based networks using datasets composed of images and the corresponding camera parameters.
Unlike 3D models, there are also studies that control poses in 2D space. SeFa <cit.> controls pose in pre-trained GANs by decomposing their weights.
Several studies <cit.> control poses of the images by moving the features of keypoints towards target positions through test-time optimization. Other approaches <cit.> generate human images guided by estimated keypoints of the reference images obtained via keypoint detection models <cit.>.
However, these direct pose control methods face challenges in generating realistic images or accurately reflecting poses. Specifically, training them requires datasets that pair images with corresponding camera parameters or keypoints, complicating the construction of datasets with diverse objects and resulting in unrealistic images depending on the target objects. Moreover, models that use a limited number of keypoints for pose control often struggle to achieve precise pose accuracy.
Structure-guided Image Generation.
Unlike the pose-guided generation methods, studies have indirectly guided poses of generated images by using structures containing pose information. Diffusion-based image-to-image translation <cit.> and editing <cit.> models generate new domain or style images while preserving the structure of the reference image by injecting attention from the reference into the new image. SDEdit <cit.> adds noise to the reference image and generates an image from it through a denoising process. Also, several approaches <cit.> add networks to reflect the structure of given conditions, such as segmentation maps, edge maps, and depth maps, to the generated images. These structure-guided image generation methods can generate images of desired poses, however, they face the issue of controlling not only the pose but also the shape due to the shape information in the structural control conditions.
Image Generation from Rough Conditions.
Recent models <cit.> have emerged that generate images from rough conditions, reducing the need for precisely aligned conditions in controllable generation methods <cit.>.
LooseControl <cit.> generates images reflecting the prompt from depth maps composed of 3D boxes, rather than precise shapes of objects.
SmartControl (SC) <cit.>, closely related to SnP, uses an additionally trained control scale predictor (SCP) to adjust local control scales for ControlNet feature maps. Specifically, it reduces the weights of areas conflicting between the condition and the prompt, ensuring faithful reflection of the given condition while guiding conflicting areas to reflect the prompt.
These models are designed to generate images from rough conditions, not to control pose, thus they do not accurately reflect the pose of the condition.
To the best of our knowledge, we are the first to utilize depth for pose control in diffusion models. Despite using depth for control, we generate images with shapes reflecting the content of the prompt across various objects, surpassing previous studies (<ref>).
§ PRELIMINARY
ControlNet.
To enhance the controllability of existing pre-trained diffusion models, ControlNet <cit.> adds a ControlNet encoder E_C that takes conditions c_i (e.g., an edge map) as inputs to diffusion models, which consist of the encoder E and the decoder D of a UNet <cit.>.
The architecture of the ControlNet encoder E_C is the same as the encoder E, except for additional zero convolutions to the output of each block and four convolution layers for the condition c_i.
For reflecting the condition c_i in the generated images, ControlNet utilizes it along with the input z_t at the time step t and a prompt c to obtain outputs ϵ_θ as follows:
ϵ_θ(z_t, t, c, c_i) = D(E(z_t, t, c), E_C(z_t, t, c, c_i)).
In this process, the features generated from the ControlNet encoder E_C are added to the corresponding features from the encoder E before passing to the decoder D.
In the case of applying classifier-free guidance <cit.>, two outputs ϵ_θ^+ and ϵ_θ^- are estimated using the positive c^+ and negative prompts c^-, respectively, as follows:
ϵ^+_θ(z_t, t, c^+, c_i) = D(E(z_t, t, c^+), E_C(z_t, t, c^+, c_i)),
ϵ^-_θ(z_t, t, c^-, c_i) = D(E(z_t, t, c^-), E_C(z_t, t, c^-, c_i)),
where the positive c^+ and negative prompts c^- refer to the conditions to be included and excluded, respectively, in the generated image.
Using two outputs, the final output ϵ_θ is defined as:
ϵ_θ(z_t, t, c^+, c^-, c_i) = ϵ_θ^-(z_t, t, c^-, c_i)
+ s · (ϵ_θ^+(z_t, t, c^+, c_i) - ϵ_θ^-(z_t, t, c^-, c_i)),
where s is the guidance scale with a value greater than 1.
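To make the guidance step concrete, the following sketch (ours; unet and controlnet_encoder are generic stand-ins for the two networks, not the API of a specific library) combines the two estimates exactly as in the equations above.

def cfg_controlnet_step(unet, controlnet_encoder, z_t, t, c_pos, c_neg, depth, s):
    # Classifier-free guidance with ControlNet: both the positive and the negative
    # branch receive the ControlNet features computed from the condition (here a depth map).
    feats_pos = controlnet_encoder(z_t, t, c_pos, depth)   # E_C(z_t, t, c+, c_i)
    feats_neg = controlnet_encoder(z_t, t, c_neg, depth)   # E_C(z_t, t, c-, c_i)
    eps_pos = unet(z_t, t, c_pos, control=feats_pos)       # epsilon_theta^+
    eps_neg = unet(z_t, t, c_neg, control=feats_neg)       # epsilon_theta^-
    return eps_neg + s * (eps_pos - eps_neg)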
§ METHOD
We elucidate the methodology for generating images that reflect the poses of the conditions and the contents of prompts.
To reflect the pose of the conditions, we adopt depths for two reasons: 1) accessibility, and 2) accuracy. Specifically, depths are easily obtainable for any objects and poses using a single depth estimation model <cit.>, unlike camera parameters or keypoints. Additionally, unlike 2D projected keypoints, depths inherently encode 3D spatial information, enabling more precise control of poses defined in 3D space (<ref>).
For depth-conditional image generation, we adopt ControlNet <cit.> based on Stable Diffusion (SD) <cit.> as a baseline to reflect the pose of the given condition.
In this section, we first provide an analysis of depth-conditional ControlNet in <ref>, followed by an explanation of SnP designed based on this analysis (<ref>).
For the experiments in this section, we utilize the IP-Adapter <cit.> to employ image prompts, aiming to discern whether the characteristics of the generated images originate from the prompt or the condition. Although we use image prompts for analysis, our approach is not restricted to image prompts and can also utilize text prompts (<ref>).
§.§ Analysis of ControlNet on the Pose of Image
Depths provide information not only about the pose but also about the shape, resulting in depth-dependent shapes in images generated by depth-conditional ControlNet (<ref>).
To mitigate this problem and reflect contents including the shapes from the prompts (the results of SnP in <ref>),
inspired by <cit.>, we thoroughly analyze the influence of three components of ControlNet on the pose of the generated images: 1) time step using ControlNet, 2) ControlNet features generated using the negative prompt (NP), and 3) ControlNet features passed to each decoder block (DB).
Time Steps using ControlNet.
Since the shape of the generated image is determined during the initial time steps <cit.>, the simplest way to minimize the influence of depths on the shape of the generated images is to halt the use of ControlNet at early time steps as follows:
ϵ_θ(z_t, t, c, c_i) =
    ϵ_θ(z_t, t, c, c_i),   if t ≤ λ_t,
    ϵ_θ(z_t, t, c),        otherwise,
where λ_t is a threshold of time steps using ControlNet.
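In a sampling loop this gating is a single conditional on the position along the denoising trajectory; the sketch below is an illustration with assumed helper objects (scheduler, unet, controlnet_encoder), not the authors' code.

def denoise_with_gated_controlnet(unet, controlnet_encoder, scheduler, z, c_pos, depth, lambda_t=0.2):
    # Use ControlNet only during the first lambda_t fraction of the denoising steps.
    num_steps = len(scheduler.timesteps)
    for i, t in enumerate(scheduler.timesteps):
        progress = i / num_steps                        # 0 at the first (noisiest) step
        if progress <= lambda_t:
            feats = controlnet_encoder(z, t, c_pos, depth)
            eps = unet(z, t, c_pos, control=feats)
        else:
            eps = unet(z, t, c_pos)                     # ControlNet skipped
        z = scheduler.step(eps, t, z)                   # one denoising update
    return z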
As depicted by the blue line in <ref>, the pose error exhibits different patterns depending on whether the last time step (ts) using ControlNet is below or over 0.4.
Specifically, if ControlNet is used beyond this point, the pose error decreases slightly, but depths not only affect the pose of the generated image but also has a significant impact on their shapes (<ref>). Conversely, if we halt the use of ControlNet before this point, the generated image adopts a shape akin to the prompt rather than the depth map (<ref>), but the pose of the generated image deviates from that of the depth map (the blue line in <ref>).
This indicates that both the pose and shape are simultaneously affected and altered by depths in ControlNet, thus merely adjusting the time steps for applying ControlNet does not generate images that reflect both the pose from the depth map and the shape from the prompt.
However, although adjusting the time steps using ControlNet is insufficient for reflecting the pose and shape in the generated image from the depth map and prompt, respectively, ceasing the use of ControlNet early enough can mitigate the effect of depth on the shape of the generated images (<ref>).
Thus, to decrease the impact of depth on the shape of images, in SnP, we control the usage of ControlNet based on time steps to ensure that ControlNet features are applied until early time steps.
Nevertheless, this leads to the pose of the depth map not being accurately reflected in the generated images, as previously mentioned. To address this issue, we shift our attention to the ControlNet features generated from the negative prompt.
ControlNet Features Obtained from Negative Prompt.
According to ControlNet <cit.>, removing the feature maps E_C^- = E_C(z_t, t, c^-, c_i) obtained from the ControlNet encoder E_C using a negative prompt boosts the reflection of conditions c_i in the generated images.
Taking it one step further, we have found that eliminating E_C^- enhances the reflection of the poses of the condition without compromising the reflection of the prompt in the generated images regardless of the time steps λ_t using ControlNet (the orange line in <ref>). For example, when ControlNet is used up to 0.2 time steps, utilizing E_C^- results in an average pose error of 14.42 degrees, whereas removing E_C^- lowers the pose error to 6.58 degrees. On the other hand, the content reflection evaluated based on the CLIP cosine similarity is similar in both cases.
The effects of removing E_C^- on the pose of the generated images can be explained by comparing the noise estimation process of classifier-free guidance in terms of the usage of E_C^-.
Compared to the outputs estimated using E_C^- in <ref>, the outputs estimated without using E_C^- is calculated as
ϵ_θ(z_t, t, c^+, c^-, c_i) = ϵ_θ^-(z_t, t, c^-)
+ s · (ϵ_θ^+(z_t, t, c^+, c_i) - ϵ_θ^-(z_t, t, c^-)).
According to GLIDE <cit.>, the classifier-free guidance can be interpreted as moving the output of each time step away from ϵ_θ^- towards the direction of ϵ_θ^+.
Based on this explanation, we can intuitively elaborate on the effect of removing E_C^- on the reflection of conditions.
When using E_C^-, in <ref>, the condition c_i is applied to the generated images along with the negative prompt c^- in the first term on the right-hand side, and in the next term, the output moves in the direction from applying c^- to c^+.
Conversely, in <ref>, removing E_C^-, the output moves in the direction from applying c^- to simultaneously applying both c_i and c^+, with s amplifying this movement.
Thus, c_i and c^+ are more jointly and rapidly applied to the generated images when removing E_C^- compared to using it.
This tendency is also apparent in the visual results when E_C^- is utilized and omitted. In <ref>, the images depict the denoised image predicted at each time step, with applying ControlNet until 0.2 time step. When comparing the outcomes before halting the use of ControlNet (images on the left of the blue dashed line), the removal of E_C^- (bottom) benefits a smooth integration of pose and prompt reflection. In contrast, the use of E_C^- (top) yields precise pose reflection but insufficient prompt reflection, leading to depth-dependent shape issues.
Furthermore, removing E_C^- ensures pose consistency even after terminating the use of ControlNet.
ControlNet Features for Each Decoder Block.
We assess the impact of each feature map generated from every block in the ControlNet encoder E_C on the pose of the images and have found that only a subset of blocks significantly influence the pose of the generated images.
Specifically, we generate images using only the feature map of one block at a time and compare the pose error between the generated images and depth maps. Also, we divide the evaluation into two cases (<ref>): one where the features of the middle block (MB) are used (orange line) and the other where they are not used (blue line).
As a result, only two blocks—specifically, the MB and the block corresponding to the fourth decoder block—influence the pose of the generated images.
To be specific, the MB has the most significant impact on the pose of the generated images, followed by the fourth block in the decoder. The remaining blocks have minimal influence on the pose. Also, as shown in <ref>, the MB only impacts the pose, whereas the fourth block impacts both the pose and the shape.
According to our analysis, the blocks that influence the pose of the generated images vary depending on the baseline model and are independent of the type of condition. Refer to the Suppl. for more details.
§.§ Skip-and-Play
Based on the empirical insights obtained via the analysis (<ref>), we propose a new approach called Skip-and-Play (SnP) for pose-preserved image generation for any objects by reducing the influence of the depth on the shapes of generated images.
As shown in <ref>, we skip on a part of the three components in ControlNet explained in <ref>.
Specifically, to minimize influence of the depth condition on aspects other than the pose of the generated images, we apply ControlNet features to the pose-related DB and use ControlNet up to λ_t. Also, we use NP only for the encoder E to accurately reflect the pose of depth maps in the generated images even in the early time steps.
In addition, we optionally apply the Weight Map Control Module (WCM) to reduce the influence of the depth maps on the shape of objects in the generated images. The WCM detects edges of the depth map and assigns lower weights to these areas to minimize their impact on shape. Specifically, we use an edge detector <cit.> on the depth condition to identify edges, then expand these edges through dilation and invert them. Since depth maps, unlike images, are smoothed and lack fine details, this process effectively identifies the boundaries between objects and the background. Next, we resize the results to match the resolution of ControlNet features and rescale the values to ensure they fall within a specific range. Our analysis indicates that applying weights above a certain threshold to ControlNet features minimizes their impact on pose while primarily influencing shape. Thus, we adjust the weight maps accordingly before applying them to the ControlNet features. Refer to the Suppl. for more details.
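The weight-map computation of the WCM can be sketched as follows; the edge-detector thresholds, the kernel size, and the [w_min, 1] rescaling range are placeholder choices of ours, not values reported in the paper.

import cv2
import numpy as np

def weight_map_from_depth(depth, feat_hw, w_min=0.4, kernel_size=7):
    # Build a per-pixel weight map that down-weights ControlNet features near the
    # object boundaries of the depth condition; feat_hw is the (height, width) of the features.
    depth_u8 = cv2.normalize(depth, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    edges = cv2.Canny(depth_u8, 50, 150)                 # boundaries in the depth map
    kernel = np.ones((kernel_size, kernel_size), np.uint8)
    edges = cv2.dilate(edges, kernel, iterations=1)      # expand the edges
    weights = 1.0 - edges.astype(np.float32) / 255.0     # invert: low weight on edges
    weights = cv2.resize(weights, (feat_hw[1], feat_hw[0]))  # match the feature resolution
    return w_min + (1.0 - w_min) * weights               # keep the weights above a threshold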
§ EXPERIMENTAL RESULTS
In this section, we delve into our experimental findings. We begin by substantiating the superiority of SnP through both quantitative and qualitative comparisons with pose-guided and rough conditional image generation models in <ref>.
Following that, we compare the performance of SnP with methods that indirectly control pose via structure (<ref>).
Also, we show that despite utilizing a depth as a conditioning factor, SnP generates images with shapes more closely aligned with the prompts than depth conditions (<ref>).
In <ref>, we conduct ablation studies based on combinations of components in SnP and show validity of SnP not only on SD <cit.> used in our analysis but also on SDXL <cit.>.
Lastly, in <ref>, we show the superiority of depth-based pose control over keypoint-based pose control.
Refer to the Suppl. for additional qualitative results, experimental settings, and implementation details.
§.§ Comparison of Direct Pose Control
To show the superiority of SnP, we compare the quantitative and qualitative results of SnP to those of four baseline models: Zero 1-to-3 (Z123) <cit.>, DragDiffusion (DD) <cit.>, OpenPose (OP) <cit.> conditional ControlNet (CN) <cit.>, and SmartControl (SC) <cit.>.
Since our goal is to generate images reflecting the given pose, we select three diffusion models that directly control pose for image generation as baselines. Zero-1-to-3 controls pose using camera parameters, while DragDiffusion and ControlNet control pose using keypoints.
Additionally, we utilize SC, which generates images from rough conditions, as a baseline. Although it does not aim to directly control pose, it reflects conditions by reducing ControlNet feature weights only in areas that conflict with the prompt. This aligns with the concept of generating images that reflect the pose of the given conditions and the content of the prompt, making it suitable as a baseline. For a fair comparison, we use depth as the input condition for SC.
Since Zero-1-to-3 and DragDiffusion focus on altering the pose of a given image, for a fair comparison, we employ image prompts for ControlNet, SmartControl, and SnP. Furthermore, since OpenPose-conditioned ControlNet only targets humans, we evaluate models utilizing the human face dataset, FFHQ <cit.>. However, since in-the-wild datasets often consist of images that are mostly biased toward frontal poses and have narrow pose ranges, we construct the PoseH dataset from images rendered with a uniform pose distribution from a single 3D mesh to evaluate pose reflection across various angles. Refer to the Suppl. for details about datasets.
§.§.§ Quantitative Comparison.
The quantitative comparison is based on three metrics: a pose error, CLIP cosine similarity <cit.>, and Frechet Inception Distance (FID) <cit.>. We calculate the pose error between the ground truth pose and the estimated pose of generated images from the off-the-shelf pose estimation model <cit.>.
As depicted in <ref>, despite controlling pose using depths, SnP excels at accurately reflecting the given poses of conditions compared to all baselines, especially models directly controlling pose.
This highlights the advantage of leveraging depths for controlling poses defined in 3D space, in contrast to 2D keypoint-based pose control methods such as DragDiffusion <cit.> and ControlNet-OP <cit.>, which aligns with the results in <ref>. Zero-1-to-3 <cit.> directly controls pose via camera parameters, which leads to high pose accuracy expectations. However, due to training on a limited 3D dataset, it fails to generate realistic images, resulting in degraded pose estimation performance.
SmartControl exhibits lower pose errors than other baselines by adopting depth as the condition. However, its training on a small dataset occasionally leads to failures to preserve the pose accurately, resulting in higher pose errors compared to the training-free SnP.
§.§.§ Qualitative Comparison.
We also compare SnP to baselines qualitatively in <ref>, which aligns with the results in <ref>. Specifically, Zero-1-to-3 generates the most unrealistic images due to training on a 3D dataset containing limited objects. On the other hand, DragDiffusion uses LoRA <cit.>, allowing it to create the most realistic images reflecting the image prompts, but pose control via moving points is ineffective, especially when the distance between the poses of the given image and the target is far. ControlNet-OP can generate photorealistic images of a given pose, but, in cases like side views, it creates images with completely different poses due to the failure of OP detection (the fifth and sixth column in <ref>). Like ControlNet-OP, SmartControl fails to maintain the pose of the condition in some cases as it reflects the pose of the image prompt.
In contrast to baselines, our proposed model generates pose-preserved photorealistic images reflecting the image prompt.
§.§ Comparison to Structure-based Pose Control
In this section, we compare the performance of SnP with structure-guided image generation models, namely Plug-and-Play (PnP) and ControlNet (CN) conditioned depth (DP). Unlike the aforementioned studies, these models that generate images by controlling structure do not aim at controlling poses, and there are no restrictions on target objects. Therefore, rather than comparing the pose accuracy for specific objects, we qualitatively compare SnP with these models across various objects. As depicted in <ref>, structure-guided image generation models, as mentioned earlier, reflect both pose and shape from the condition to the generated images. Hence, the generated images resemble the shape of the given condition more than the given prompt.
For example, PnP and ControlNet-DP struggle to generate various chair images because they rely on the structure within the given condition.
Furthermore, images generated by both PnP and ControlNet-DP using the face of a leopard as the reference consistently feature ears resembling those of the leopard, irrespective of the species of the target animal.
On the other hand, SnP controls poses using depth conditions but reduces the dependence of shapes on these conditions, resulting in images that reflect the given prompts in shape while maintaining the poses from the depth conditions.
§.§ Effects on the Shape of Generated Images
Compared to ControlNet-DP, SnP generates images having shapes affected more by the prompt than by the depth condition. To reveal the effectiveness of SnP, we compare the qualitative results of it and ControlNet-DP on various objects. Specifically, we sample the reference images from two datasets <cit.> consisting of car and church images, respectively, and generate images using depth conditions extracted from these reference images and various text prompts. As described in <ref>, while ControlNet-DP generates images with shapes similar to the condition, images generated by SnP reflect the pose from the condition but have the shape more influenced by the prompt than by the condition.
§.§ Ablation studies
We conduct ablation studies on the baseline models and the combination of four components of SnP: 1) time steps (TS) using CN, 2) CN features generated from negative prompts (NP), 3) CN features passed to each decoder block (DB), and 4) Weight Map Control Module (WCM).
We evaluate models based on the pose error and CLIP scores to assess pose and prompt reflection, respectively.
In the results of SD in <ref>, even combined with other components, NP and TS still positively influence pose and prompt reflection, respectively. Comparing the results of using all three components (Skip3) with TS+NP, DB slightly compromises pose but positively affects prompt reflection. Additionally, the optionally applied WCM shows a similar trend as DB. These results are also evident in the visual outcomes (<ref>). Furthermore, we conduct the same experiment with SDXL, and the results, excluding those of DB+TS, show a similar trend to SD 1.5. With both models, applying three components yields the best performance.
§.§ Effects of Depth on Pose Reflection
To demonstrate the superiority of depth-based pose control, we compare its accuracy in pose control against the commonly used keypoints, generally obtained from OpenPose (OP).
For this, we meticulously assess the accuracy of pose reflection from the reference image to the generated image across two conditions.
To be specific, we generate images using either OP or depth (DP) extracted from the given reference images and then compare the poses between the generated and provided images utilizing an off-the-shelf pose estimation model <cit.>. For this, we randomly sample 100 images from FFHQ <cit.> with a uniform pose distribution, and use them as reference images. From each condition extracted from the reference image, we generate 10 images to evaluate the pose reflection.
As depicted in the left graph of <ref>, employing the DP as input of ControlNet for pose control better preserves the given pose compared to using the OP as input. Furthermore, as demonstrated in the right graphs of <ref>, utilizing the DP as input consistently reflects the given poses across various pose ranges, while the pose error increases dramatically as the view moves away from the frontal view when using the OP as input of ControlNet.
§ CONCLUSION
In this paper, we propose Skip-and-Play to generate images reflecting given poses across various objects. Specifically, we introduce depth-based pose control as opposed to the keypoints or camera parameters used in previous works for two reasons: 1) depth maps can be effortlessly obtained regardless of objects or poses, and 2) depth conditions inherently encode 3D spatial information, making them beneficial for controlling pose accurately in 3D space. However, the usage of the depth condition for pose control positions a challenge as it influences both the pose and shape of the generated images. To address this, we analyze the influence of the three components of the depth-conditional ControlNet on the shape and pose of generated images: 1) time steps using ControlNet, 2) ControlNet features obtained from negative prompts, and 3) ControlNet features passed to each decoder block. Based on empirical insights from the analysis, we design SnP by selectively skipping a part of three components.
Our experimental results demonstrate that SnP outperforms diffusion-based pose control models, qualitatively and quantitatively. While previous models are limited to generating images for specific objects or a restricted range of poses, SnP generates images across various objects and poses.
Our model is not free from limitations caused by leveraging the prior knowledge of ControlNet for pose-preserved image generation. Specifically, poses that are not adequately represented in ControlNet remain challenging for SnP to accurately express. This limitation arises from using ControlNet without additional training, but it can be mitigated as the performance of ControlNet improves.
ieee_fullname
|
http://arxiv.org/abs/2409.03497v1 | 20240905131024 | Quantum features of the transport through ion channels in the soft knock-on model | [
"Mateusz Polakowski",
"Miłosz Panfil"
] | physics.bio-ph | [
"physics.bio-ph"
] |
Faculty of Physics, University of Warsaw, Pasteura 5, 02-093 Warsaw, Poland
§ ABSTRACT
Ion channels are protein structures that facilitate the selective passage of ions across the cell membranes of living organisms. They are known for their high conductance and high selectivity. The precise mechanism behind these two seemingly contradictory features is not yet firmly established. One possible candidate is quantum coherence. In this work we study the quantum model of the soft knock-on conduction using the Lindblad equation, taking into account the non-hermiticity of the model. We show that the model exhibits a regime in which high conductance coexists with high coherence. Our findings second the role of quantum effects in the transport properties of the ion channels.
Quantum features of the transport through ion channels in the soft knock-on model
Mateusz Polakowski and Miłosz Panfil
September 9, 2024
=================================================================================
§ INTRODUCTION
Ion channels form a large family of protein complexes whose role is to regulate the flow of different ions across cellular membranes. They are a ubiquitous feature among all excitable cells and can be found in both the largest animals and the smallest bacteria. Since their discovery, they have been the subject of intense study, with dozens of channel families and subfamilies identified along the way. Among them, there are selective potassium channels, which are mainly responsible for reestablishing the resting membrane potential after the upstroke of the action potential <cit.>. One of their most significant features is that in their open state they permeate potassium ions at a near-diffusion rate of around 10^8 ions/s, while being 10,000 times more selective in favour of potassium ions over sodium <cit.>.
Structurally, ion channels are protein complexes inserted into the cell membrane. They are made of multiple subunits, which spatial arrangement forms a pore, through which ions can flow in or out of the cell. The narrowest section of that pore is called selectivity filter (SF), as it is believed that this is the place where the discrimination between ions takes place. In KcsA, the filter is located near the extracellular mouth of the channel. It is made of a sequence of five residues, T_75 V_76 G_77 Y_78 G_79, which are located on the P-loop. Four P-loops of the four subunits delimit a narrow pathway, just 12 Å in length and around 3 Å in diameter, through which the ions must pass in order to reach the extracellular solution. The backbone carbonyl oxygens of Thr 75, Val 76, Gly 77 and Tyr 78 as well as the side-chain hydroxyl of Thr 75 point directly into this pathway, creating four binding sites inside the SF. These binding sites are conventionally labeled S1 through S4, with S1 being at the extracellular site of the SF. Each of the binding sites is able to accommodate one fully dehydrated K^+ ion, which is then coordinated by eight negatively charged oxygen atoms, four above the ion and four below it <cit.>. Although the gating mechanism varies greatly across different types of potassium channels, the amino-acid sequence of the selectivity filter is very similar in all of them <cit.>.
Currently, there are two competing models of ion permeation through potassium ion channels. In the first one, the ions move through the pore separated by water molecules; the binding sites are alternately occupied by ions and water molecules. This model is known as water knock-on or soft knock-on model. It is based on the assumption that due to the electrostatic repulsion between the ions inside the selectivity filter, there can be at most two ions in the SF at any given time. For this reason, the earlier crystallographic experiments showing potassium ions in all four binding sites were interpreted as a superposition of S1-S3 and S2-S4 ion configurations <cit.>. In the competing model, called hard knock-on, the conduction occurs without the presence of water. The ions jump between the binding sites as a result of the short-range ion-ion interaction inside the SF. This model is supported by more recent studies: molecular dynamics simulations <cit.>, X-ray diffraction measurements <cit.> and solid-state NMR experiments <cit.>. Although recent computational and experimental findings seem to favour the hard knock-on model, the debate has not been concluded, especially since the experimental observation of the ion motion through the channel has remained elusive. Hence, the soft knock-on has not been disproved and is still widely studied <cit.>. Moreover, exploring its general properties may be beneficial with regard to other ion channel families, which could employ this conduction mechanism <cit.>.
The discovery of long lived quantum coherence in photosynthetic energy transfer demonstrated the importance of quantum phenomena in transport processes of biological systems <cit.>. This quantum coherence survives despite the dephasing noise originating from fluctuations of an environment. Furthermore, in certain cases, the environmental coupling enhances the efficiency of the system, with its optimal regime being neither completely quantum, nor completely classical <cit.>.
Since ion channels operate on the sub-nanoscale, both spatially and temporally, they are a natural candidate to look for the quantum effects. Vaziri and Plenio suggested that quantum coherence in SF may play a role in selectivity and conduction process <cit.>. They proposed a model in which an ion can hop between the adjacent sites via quantum tunneling or thermal activation. Interplay between quantum coherence and dephasing noise is then essential for conduction properties. De March et al. took this idea further by including Coulomb repulsion between the ions inside SF <cit.>. Summhammer et al. <cit.> found that quantum mechanical MD simulations yield higher conduction rates compared to classical MD. Salari et al. claim that the classical coherence between carbonyl groups oscillations is not sufficient to explain high conduction rates and selectivity <cit.>. In <cit.> Salari et al. investigated the possibility of quantum interference of potassium ions in neighbouring channels. However, they concluded that the coherence times are too short to play any significant role.
In this paper we focus on the water-mediated transport method and taking inspiration from the work of Seifi et al. <cit.>, we represent the transport sequence as a three state system, resembling quantum spin-1 system. We then study the system using the adjusted Lindblad framework to account for the non-hermiticity of the model. We also consider a generalization of the three states model to account for bidirectional transition rates.
Our work is organized as follows. We start with the short description of the Lindblad equation for Hermitian and non-Hermitian Hamiltonians and measures of coherence. We then introduce in details two models of the transport through ion channels. In the following section we present the results and discuss the late-time values of conductance and coherence. The results show that the dynamics of the system has two regimes which differ in the properties of the stationary states. We explain this phenomena using the so-called effective Hamiltonian technique known from the non-Hermitian quantum mechanics. We conclude our work with a summary and further perspectives.
§ LINDBLAD EQUATION AND COHERENCE QUANTIFIERS
Biological phenomena occur almost invariably in a relatively hot and complex environment, which is the opposite of the ideal quantum mechanical setting. The system that is of interest to us interacts with the environment around it and this interaction can drastically change its dynamics. Very large, or even infinite, number of degrees of freedom of the environment means that it is impossible to efficiently describe the whole system only by means of Hamiltonian and the von Neumann equation for the density matrix, even if the total Hamiltonian is known to us (which is usually not the case anyway). Instead, one can describe the evolution of the system of interest, called the open quantum system, with the non-unitary master equation. In the Markovian limit the general form of the Lindblad equation is
dρ/dt = - i [H, ρ] + D[ρ] ,
where is the density matrix of the open system, H its Hamiltonian and
D[ρ] = ∑_k γ_k ( L_k ρ L^†_k - 1/2{ L^†_k L_k, ρ})
is the Lindblad superoperator <cit.>. We have assumed ħ to be one. The first term on the right side of Lindblad equation describes the unitary evolution of the density matrix. The second part describes non-unitary evolution due to interactions with the environment and is responsible for the loss of coherence. The curly brackets denote the anticommutator, the operators L_k are called the Lindblad jump operators and the coefficients γ_k are real and non-negative.
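For later reference, a dissipator of this form is straightforward to code directly; the sketch below (plain NumPy, with operator lists and rates as our own calling convention) evaluates the right-hand side of the Lindblad equation for a Hermitian H.

import numpy as np

def dissipator(rho, jump_ops, rates):
    # D[rho] = sum_k gamma_k (L_k rho L_k^dag - 1/2 {L_k^dag L_k, rho})
    out = np.zeros_like(rho)
    for L, gamma in zip(jump_ops, rates):
        LdL = L.conj().T @ L
        out += gamma * (L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL))
    return out

def lindblad_rhs(rho, H, jump_ops, rates):
    # standard Lindblad equation (hbar = 1) for a Hermitian Hamiltonian H
    return -1j * (H @ rho - rho @ H) + dissipator(rho, jump_ops, rates)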
When using the non-Hermitian Hamiltonian we need to adjust the description of the evolution of the density matrix. An appropriate framework (hereafter we refer to it as the adjusted framework) was proposed by Brody and Graefe <cit.>. The time evolution of a density matrix is then given by
dρ/dt = - i [ℋ, ρ] - {Γ, ρ} + 2 ρ Tr(Γρ) ,
where a non-hermitian Hamiltonian H is decomposed into a sum of Hermitian and anti-Hermitian parts
H = ℋ - i Γ,
with two Hermitian operators
ℋ = 1/2( H + H^†),
Γ = i/2( H - H^†) .
The first two terms in the evolution equation above can be easily inferred by inspecting the standard von Neumann equation for the evolution of the density matrix. However, it turns out that with these two terms alone, the trace of a density matrix would not be conserved. Therefore, a third term is needed that ensures the conservation of the trace. This guarantees the retention of the probabilistic interpretation of the density matrix and allows calculating statistical averages of operators. If the Hamiltonian ℋ is already Hermitian, then Γ = 0 and one recovers the traditional equation of motion.
To obtain the complete formula for the evolution of the density matrix, we add the Lindblad superoperator D[ρ] to the right-hand side of the non-Hermitian evolution equation above, and therefore work within what Zloshchastiev and Sergi call a hybrid formalism <cit.>. The total evolution of the density matrix is governed by the equation
dρ/dt = - i [ℋ, ρ] - {Γ, ρ} + 2 ρ Tr(Γρ) + 𝒟[ρ] ,
This is the equation that we use to model the dynamics of the ion channels. We will specify the Hamiltonian and Lindblad jump operators L_k when discussing the details of the soft knock-on model. For now, we turn our attention to measures of coherence.
The framework for quantifying coherence was laid out by Baumgratz et al. <cit.>. For a function C: 𝒮 (ℋ) ⟶ℝ_+, that maps a density matrix into the set of non-negative real numbers, to be a valid coherence measure the following conditions have to be satisfied:
(C1) C( δ) = 0 for every δ∈ℐ,
where ℐ is the set of completely incoherent states in a given basis, i.e. diagonal density matrices. A stronger condition can be demanded, namely that C(ρ) is non-zero if, and only if, ρ contains any coherence. The coherence measure should not increase under the incoherent operations, i.e. operations that cannot create coherence from incoherent states:
(C2a) C(ρ) ≥ C(Φ_ICPTP(ρ)) for all incoherent completely positive trace-preserving maps Φ_ICPTP;
(C2b) C(ρ) ≥ ∑_n p_n C(ρ_n) with ρ_n = K_n ρ K^†_n / p_n and p_n = Tr(K_n ρ K^†_n), for any set of Kraus operators { K_n } satisfying ∑_n K^†_n K_n = 1 and K_n ℐ K^†_n ⊆ ℐ.
Finally, state mixing should only decrease coherence, and therefore the coherence measure should be a convex function,
(C3) non-increasing under mixing of quantum states (convexity): ∑_n p_n C(ρ_n) ≥ C(∑_n p_n ρ_n) for any set of quantum states {ρ_n} and any p_n ≥ 0 such that ∑_n p_n = 1.
Sometimes, a term coherence quantifier is used for the functions satisfying C1-C3 , while coherence measures satisfy two additional conditions: uniqueness and additivity under tensor products <cit.>.
So far several coherence quantifiers and measures have been identified, including ones based on l_p-norms, affinity of coherence, robustness of coherence, coherence cost and so on <cit.>. One canonical measure of coherence is distillable coherence. It describes the optimal number of maximally coherent states |Ψ_d⟩, which can be obtained from a given state in the asymptotic limit. The maximally coherent state |Ψ_d⟩ of the dimension d is defined as
|Ψ_d⟩ = 1/√(d)∑_i=1^d |i⟩,
where { |i⟩} is a chosen basis. Its uniqueness comes from the fact that any d × d state can be obtained from |Ψ_d⟩ by means of incoherent operations. Distillable coherence, also known as the relative entropy coherence introduced by Baumgratz et al. <cit.>, assumes a simple form
C_d(ρ) = S(Δ[ρ]) - S(ρ) ,
where S(ρ) = -Tr(ρ log_2 ρ) is the von Neumann entropy, and Δ is the dephasing operator, which returns only the diagonal part of the density matrix <cit.>.
Another popular coherence quantifier is the l_1-norm of coherence, given by the sum of absolute values of the off-diagonal elements
C_l_1(ρ) = ∑_i≠j |ρ_ij| .
In this work we will use the distillable coherence, which from now on we simply denote by C(ρ). However, both of the coherence measures have been employed to study quantum biology phenomena, including avian magnetoreception <cit.> and transport through ion channels <cit.>.
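Both quantifiers are simple to evaluate numerically; the sketch below (ours, plain NumPy) computes them for a given density matrix.

import numpy as np

def von_neumann_entropy(rho):
    # S(rho) = -Tr(rho log2 rho), computed from the eigenvalues of rho
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]          # drop numerical zeros
    return float(-np.sum(evals * np.log2(evals)))

def distillable_coherence(rho):
    # relative entropy of coherence: C_d(rho) = S(diag(rho)) - S(rho)
    dephased = np.diag(np.diag(rho))
    return von_neumann_entropy(dephased) - von_neumann_entropy(rho)

def l1_coherence(rho):
    # l1-norm of coherence: sum of moduli of the off-diagonal elements
    return float(np.sum(np.abs(rho)) - np.sum(np.abs(np.diag(rho))))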
§ MODEL
In the soft knock-on model the transport process of the ion can be conceptualized as proceeding through 3 states. The three states in question are: potassium ions at the sites two and four and water molecules at the sites one and three, denoted |1⟩; potassium ions at the sites one and three and water molecules at the sites two and four, denoted |2⟩; a potassium ion at the site two and water molecules at sites one and three with an additional potassium ion exiting the ion channel and another one entering the selectivity filter at the site number four, denoted |3⟩. The jump of a potassium ion from the fourth site to the extracellular solution is considered a transition from |3⟩ to |1⟩ <cit.>. The three configurations are presented graphically in the figure <ref>.
For simplicity, we will assume ħ to be one. The Hamiltonian of the system reads
H = ω_0 S_z + c ( |2⟩⟨1|+ |3⟩⟨2| + |1⟩⟨3| ) ,
where ω_0 is a transition frequency, S_z is the z-component of the spin-1 operator and c is the coherent transition rate which we assume to be the same between different pairs of states. The Lindblad operator describing the coupling with the environment is of the form
𝒟[ρ] = γ( S_- ρ S_+ - 1/2{ S_+ S_-, ρ}) ,
where S_± = 1/√(2)( S_x ± i S_y ) are the spin-1 ladder operators.
The question regarding the value range of the parameters used in the model is a difficult one. Vaziri and Plenio provided the first estimation <cit.>. They argued that the effective hopping rate parameter c_eff = c^2/ω_0 should be of the order of the transfer rate of the channel, which is in the range of 10^6 - 10^8 s^-1. However, they pointed out that this is valid while c ≪ω_0, with ω_0 not larger than 10^12 s^-1. They themselves set ω_0 to be an order of magnitude larger than c, with c ∼ 10^9 s^-1. De March et al. <cit.> opt for a much larger c ∼ 10^11 - 10^13 s^-1, to compensate for the electrostatic repulsion and retain the expected effective hopping rate c_eff. The dephasing rate γ varied from a fraction of c up to 100c. However, note that in the mentioned papers, the authors work with a model based on a tight-binding chain, in which the ions hop independently between the subsequent binding sites. Therefore, transferring these values one-to-one to the "soft knock-on" model may not be completely justified. Although a revised, in-depth discussion would be highly beneficial, currently established values should still provide qualitative results. Hence, we shall take the parameters used by Seifi et al. as the starting point but also go beyond them to probe the whole parameter space of the model.
We note that the Hamiltonian above is non-Hermitian and therefore using the standard framework for the density matrix evolution may lead to unphysical results. This becomes apparent when we extend the range of parameters beyond the one used by <cit.>. By numerically solving the standard Lindblad equation for the density matrix evolution,
for a value of c comparable to that of ω_0 and with the initial condition being the maximally coherent state
|ψ⟩ = 1/√(3)(|1⟩ + |2⟩ + |3⟩) ,
we can clearly see the formalism completely breaks down. Figure <ref> compares the evolution of the diagonal elements of the density matrix for c=1 × 10^7 s^-1 (solid lines) and c=6 × 10^7 s^-1 (dashed lines). Beyond a certain threshold, the system becomes unstable and the matrix elements, instead of settling down, start oscillating with an increasing amplitude. But even for a more conservative choice of parameters the problems emerge when inspecting the coherence of the system. Even for relatively small values of c, the relative entropy of coherence ventures into the negative values (cf. inset in the fig. <ref>), which should not happen under normal circumstances, as per definition of coherence measure.
This clearly shows that the problems of the standard formalism are not only of conceptual nature – no guarantee of the total probability conservation – but generically lead to unphysical results.
As we will see, the adjusted framework cures these problems at the minimal cost of introducing an extra term in the evolution equation. For the completeness of the presentation we report here the two Hermitian operators resulting from the decomposition of the original Hamiltonian,
ℋ = ω_0 S_z + c/2( |2⟩⟨ 1|+ |3⟩⟨ 2| + |1⟩⟨ 3| + h.c.) ,
Γ = i c/2( |2⟩⟨ 1|+ |3⟩⟨ 2| + |1⟩⟨ 3| - h.c.) ,
as required for the adjusted formalism. Here h.c. denotes Hermitian conjugate.
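To illustrate how the hybrid evolution equation can be integrated for this model, the sketch below (plain NumPy with simple Euler stepping; the parameter values are only representative choices from the ranges quoted above, and the step size is our own) propagates ρ(t) from the maximally coherent state.

import numpy as np

# basis ordering {|1>, |2>, |3>}; spin-1 operators with S_± = (S_x ± i S_y)/sqrt(2)
Sz = np.diag([1.0, 0.0, -1.0]).astype(complex)
Sp = np.diag([1.0, 1.0], k=1).astype(complex)      # S_+
Sm = Sp.conj().T                                    # S_-

def hybrid_rhs(rho, Hh, Gam, gamma):
    # -i[H, rho] - {Gamma, rho} + 2 rho Tr(Gamma rho) + D[rho]
    comm = Hh @ rho - rho @ Hh
    anti = Gam @ rho + rho @ Gam
    diss = gamma * (Sm @ rho @ Sp - 0.5 * (Sp @ Sm @ rho + rho @ Sp @ Sm))
    return -1j * comm - anti + 2.0 * rho * np.trace(Gam @ rho) + diss

omega0, c, gamma = 1e8, 1e7, 1e7                    # representative values (s^-1)
hop = np.zeros((3, 3), dtype=complex)
hop[1, 0] = hop[2, 1] = hop[0, 2] = 1.0             # |2><1| + |3><2| + |1><3|
H = omega0 * Sz + c * hop
Hh = 0.5 * (H + H.conj().T)                         # Hermitian part
Gam = 0.5j * (H - H.conj().T)                       # anti-Hermitian part, H = Hh - i Gam

psi = np.ones(3, dtype=complex) / np.sqrt(3)        # maximally coherent initial state
rho = np.outer(psi, psi.conj())
dt, nsteps = 1e-11, 200000                          # dt must resolve 1/omega0
for _ in range(nsteps):
    rho = rho + dt * hybrid_rhs(rho, Hh, Gam, gamma)
    rho = 0.5 * (rho + rho.conj().T)                # enforce Hermiticity against drift
populations = np.real(np.diag(rho))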
§.§ Four states model
The three states model is special because transitions between different states are unidirectional. Whereas directionality is crucial to model the transport phenomenon, one might wonder how stable are the predictions under introducing transitions in both ways (not necessarily with the same rate). This is the motivation between introducing the four states model.
The simplest way to include transitions in both ways is to add a fourth state in which there is one ion in the S3 site of the selectivity filter, one ion entering the S1 site from the extracellular solution, and another ion leaving the channel through the cavity. We denote this state |0⟩ for convenience. All permitted transitions are presented graphically in the fig. <ref>. The coefficients c_in and c_out denote the transition rates in the pathways from the outside to the inside and vice versa. The rest of the Hamiltonian and the dissipator remain the same, but the operators are now in the spin 3/2 representation of the su(2) algebra. We will solve and analyze the four-state model in the next section in detail. For now let us confront it with the three states model on a qualitative level.
For c_in = 0 we essentially obtain the same system as in the previous sections. Still some differences can arise from the different forms of the ladder operators S_± and the z-coordinate spin operator S_z. Without the coupling and for c_in = c_out = c none of the pathways is favoured in terms of coherent transitions, but one of them may still be favoured due to the self-energy ω_0 S_z. The addition of the coupling, as we already know, greatly changes the dynamics. But even though the system is now bidirectional, the exclusion of some transitions makes the Hamiltonian non-Hermitian. Therefore, we have to use the adjusted framework once again.
§ RESULTS
Figure <ref> presents the evolution of the diagonal elements of the density matrix evaluated using the adjusted framework. For small values of c the dynamics of the system remain similar to the ones obtained without the adjusted framework. For γ = 0 behaviour of the system is periodic. When incoherent coupling is included, the system tends to a quasi-stationary state (oscillations of probability become negligible), with state |3⟩ most probable. For c ∼ω_0 the system is now stable, but its behaviour is significantly different. The system also assumes quasi-stationary state, but the probability of the state |3⟩ is considerably lower than one. Furthermore, the system becomes quasi-stationary even without the coupling, i.e. for γ = 0. We also find that the relative entropy of coherence is now always non-negative and that it remains strictly positive asymptotically (cf. fig <ref>).
Because the system is asymptotically quasi-stationary, it is sensible to ask about the asymptotic behaviour of the observables as functions of the parameters of the model. In addition to the coherence measure C, we can investigate the conduction rate of the system, which we define as I(t) = c ρ_3,3(t) (or I(t) = c_in ρ_3,3(t) - c_out ρ_0,0(t) for the four-state model). The results for the two models and both quantities are presented in the figures <ref> and <ref> respectively. It is no surprise that increasing the coherent hopping rate increases the asymptotic coherence, and increasing the incoherent coupling with the environment decreases it. However, the results indicate a more nuanced dynamics.
In both models the system exhibits two regimes. The self-energy regime occurs for relatively small values of c. It can be associated with the situation where, after a sufficient time, it is almost certain to find the system in the state |3⟩, which hints at very fast transitions through the states |1⟩ and |2⟩. Hence, asymptotic conduction increases linearly with c. Furthermore, the value of the coupling constant γ (supposing it is positive) plays no role in the asymptotic behaviour of the observables, since state |3⟩ wins nearly all probability anyway. Yet, coupling with the environment is essential, since for γ = 0 the system does not achieve any quasi-stationary state and the probabilities of different states oscillate in time with frequency and amplitude governed by c and ω_0.
In the second regime, the stationary state is achieved even without the coupling, i.e. for γ = 0. There, given constant transition frequency ω_0, the probabilities in the stationary state are determined by the interplay between incoherent coupling rate γ and coherent hopping rate c. The larger the coherent hopping rate c, the more uniform the probabilities of different states tend to be. In this regime, the system dwells longer in the states |1⟩ and |2⟩, relatively to the state |3⟩. In the limiting case, where the probabilities are equal, conduction occurs steadily through all three states. The overall speed of conduction increases with c. Coupling constant γ tends to favour the state |3⟩ by faster transition through the first two states. Changing ω_0 changes the transition point between the two regimes.
The main difference between the three- and four-state models is that in the latter the conduction reaches a plateau and eventually starts decreasing as c increases. The reason is that, as the transition rate increases, transitions along the intracellular "in" pathway become more and more prominent and eventually start to contest the extracellular "out" pathway.
In the following section we offer an explanation for the existence of the two regimes based on the spectrum of the Hamiltonians.
§ HAMILTONIAN EIGENVALUES
Consider the Hamiltonian of the model itself, without the coupling to the environment described by the Lindblad operator. Despite its non-hermiticity, we can still solve for its spectrum. In fact, due to the relatively easy form of the Hamiltonian in the three-state model, we can obtain an analytical solution for its eigenvalues
ε_1 = 2 · 3^1/3·ω_0^2 + 2^1/3·α^2/3/6^2/3·α^1/3,
ε_2 = -(3^1/3 + 3^5/6 i) ω_0^2 + 2^1/3·α^2/3/6^2/3·α^1/3,
ε_3 = - (3^1/3-3^5/6 i) ω_0^2 + (-2)^1/3·α^2/3/6^2/3·α^1/3,
where
α = 9c^3 + √(81 c^6 - 12 ω_0^6).
The spectrum is shown in the fig. <ref> for ω_0 = 1 × 10^8 s^-1. The blue color denotes the real parts of the eigenvalues, and the orange shows the imaginary parts. At some point two of the eigenvalues coalesce, and their imaginary parts become non-zero. Such points are called exceptional and they play an important role in, among others, 𝒫𝒯-symmetric non-Hermitian quantum mechanics, photonics and optics <cit.>. This transition happens at the point, where α becomes purely real, i.e. at c = (12/81)^1/6ω_0 ≈ 0.72742 ω_0. A similar thing happens in the four-state model, with two of the eigenvalues remaining real beyond the exceptional point (not shown).
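As a quick numerical check of this picture, the sketch below builds the three-state Hamiltonian as an explicit 3×3 matrix and scans its eigenvalues across the exceptional point. The matrix form H = ω_0 S_z + c(|2⟩⟨1| + |3⟩⟨2| + |1⟩⟨3|), with S_z = diag(1, 0, -1) in the basis |1⟩, |2⟩, |3⟩, is our reading of the model rather than a verbatim quote of its Hamiltonian; its characteristic polynomial ε^3 - ω_0^2 ε - c^3 = 0 is consistent with the eigenvalues and the exceptional point quoted above.

```python
import numpy as np

def three_state_hamiltonian(c, omega0=1e8):
    """Non-Hermitian three-state Hamiltonian: self-energy omega0 * S_z plus
    unidirectional hopping c (|2><1| + |3><2| + |1><3|), in the basis |1>, |2>, |3>."""
    Sz = np.diag([1.0, 0.0, -1.0])               # spin-1 S_z with hbar = 1
    hop = np.zeros((3, 3), dtype=complex)
    hop[1, 0] = hop[2, 1] = hop[0, 2] = 1.0      # |2><1|, |3><2|, |1><3|
    return omega0 * Sz + c * hop

omega0 = 1e8                                     # s^-1, as in the figure discussed above
for ratio in (0.5, 0.72742, 0.9):                # below, near and above the exceptional point
    eigvals = np.linalg.eigvals(three_state_hamiltonian(ratio * omega0, omega0))
    print(f"c = {ratio:.5f} w0:", np.round(np.sort_complex(eigvals) / omega0, 4))
```

Below the exceptional point the three printed eigenvalues are real; above it, two of them acquire opposite imaginary parts, matching the behaviour described above.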
Turning on the coupling to the environment has an impact on the dynamics of the system. Nevertheless, the two regimes in this case should be the result of a similar phenomenon. To this end, we need to incorporate into the Hamiltonian, at least partially, the Lindblad term. This can be achieved by a technique known from non-Hermitian quantum mechanics, where it is a common practice to partially include the Markovian part into the effective Hamiltonian. The effective Hamiltonian then becomes non-Hermitian (if it has not been already). The prescription is <cit.>,
H_eff = H - i/2∑_k Γ_k L^†_k L_k ,
where L_k are the Lindblad operators appearing in the Lindblad equation introduced earlier. Then the density matrix evolves according to the effective Hamiltonian with the addition of the so-called quantum jump terms Γ_k L_k ρ L^†_k <cit.>. We introduce such an effective Hamiltonian in the three-state model
H_eff = H - i/2γ S_+ S_-
and plot its eigenvalues for γ = 0.5 × 10^7 s^-1. The result is shown in the fig. <ref>. Although the imaginary parts of the eigenvalues are now shifted and therefore non-zero, we can still identify two distinct regimes. In the first one, the imaginary parts are orders of magnitude smaller than the real parts and vary slowly. In the second one two of them start to diverge quite rapidly.
As we will now show the eigenvalue structure determines the dynamics and, more specifically, is responsible for distinct asymptotic regimes observed in fig. <ref> and <ref>.
Let us start with the evolution of the density matrix for the non-Hermitian closed system, as given by the adjusted time-evolution equation introduced earlier. Simple manipulations yield
∂ρ/∂ t = - i (H_effρ - ρ H_eff^†) + i ρ Tr(H_effρ - ρ H_eff^†) ,
with the second term enforcing the normalization Tr ρ = 1. At large times, the dynamics is dominated by the eigenstates with the largest imaginary eigenvalue. This can be seen by writing the first term in the energy eigenbasis, where it becomes -i ( E_a - E_b^*)ρ_ab. Here, we introduced eigenstates |a⟩ such that H_eff|a⟩ = E_a |a⟩ with E_a potentially complex and ρ_ab = ⟨ a |ρ |b⟩. Therefore, at large times the system evolves towards maximizing the imaginary part of E_a - E_b^*. However
max_a, b im (E_a - E_b^*) = 2max_a im E_a ,
which shows that this is indeed achieved by choosing an eigenstate with the largest imaginary eigenvalue. The eigenspectrum, as shown in Fig. <ref>, reveals that the largest imaginary eigenvalue undergoes a transition from mainly flat and small to quickly increasing defining the two regimes visible in the relative entropy of coherence and conductance.
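To make this argument concrete, the following sketch integrates the normalized non-Hermitian evolution equation above (without jump terms) for the effective three-state Hamiltonian, reusing three_state_hamiltonian from the earlier sketch; the initial state, step size and parameter values are arbitrary illustrative choices, not taken from the paper.

```python
import numpy as np

def evolve_rho(H_eff, rho0, dt=1e-11, steps=50000):
    """Integrate d(rho)/dt = -i(H_eff rho - rho H_eff^dag) + i rho Tr(H_eff rho - rho H_eff^dag)
    with a fourth-order Runge-Kutta step; the trace term keeps Tr(rho) = 1."""
    def rhs(rho):
        comm = H_eff @ rho - rho @ H_eff.conj().T
        return -1j * comm + 1j * rho * np.trace(comm)
    rho = rho0.astype(complex)
    for _ in range(steps):
        k1 = rhs(rho); k2 = rhs(rho + 0.5 * dt * k1)
        k3 = rhs(rho + 0.5 * dt * k2); k4 = rhs(rho + dt * k3)
        rho = rho + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    return rho

omega0, gamma = 1e8, 0.5e7
Sp = np.zeros((3, 3)); Sp[0, 1] = Sp[1, 2] = np.sqrt(2.0)   # spin-1 raising operator S_+
rho0 = np.diag([1.0, 0.0, 0.0])                              # start in state |1>
for c in (0.1 * omega0, 2.0 * omega0):                       # the two regimes discussed above
    H_eff = three_state_hamiltonian(c, omega0) - 0.5j * gamma * (Sp @ Sp.conj().T)
    rho_late = evolve_rho(H_eff, rho0)
    print(f"c/w0 = {c / omega0:.1f}: diag(rho) ~", np.round(np.real(np.diag(rho_late)), 3))
```

At late times the state is dominated by the eigenvector with the largest imaginary eigenvalue, which is the mechanism behind the two regimes described above.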
§ DISCUSSION
In this work, we have studied the soft knock-on model of transport through ion channels in the quantum-mechanical setting. We have argued that the non-hermiticity of the Hamiltonian requires modifying the usual Lindblad formalism. We have used the "hybrid" formalism of <cit.> which applies to non-hermitian systems coupled to an environment. The results revealed two regimes of the model visible at long times. At small transition rates, c/ω_0 ≪ 1, the stationary state is indifferent to the coupling with the environment. In this regime conductance increases linearly with the transition rate c. On the other hand, for c > ω_0 the coherence and conductance of the stationary state depend on the environmental noise. The noise increases the transport, but at the same time has a detrimental effect on the coherence. Still, the coherence, as witnessed by the relative entropy, remains large enough for the transport phenomena to be in a quantum regime.
Thus, the results show there is no contradiction between the high level of coherence and high conductance. Depending on the values of the parameters of the model, the transport properties of ion channels are determined either by the internal conductance mechanism or by the interplay between it and the external noise. Resolving which way the transport occurs in actual ion channels would require a more precise estimation of the parameters from the experiments. We have also offered an explanation of the existence of the two regimes based on the spectrum of the effective Hamiltonian. As a by-product of using the adapted formalism for the evolution of the density matrix, we have solved the problem of unphysical results as seen in <cit.>.
From a wider perspective, we have demonstrated that the "hybrid" formalism is an effective tool to describe non-Hermitian open systems, which are widespread in biophysical context.
Finally, it would be also interesting to perform further analysis for the competing hard knock-on model, beyond the already existing work <cit.>, which might help discriminate between the two based on experimental evidence.
Acknowledgments: We thank Bert de Groot for useful discussions.
|
http://arxiv.org/abs/2409.02209v1 | 20240903182338 | Estimand-based Inference in Presence of Long-Term Survivors | [
"Yi-Cheng Tai",
"Weijing Wang",
"Martin T. Wells"
] | stat.ME | [
"stat.ME"
] |
1 Department of Statistics and Data Science, Cornell University, Ithaca, NY, USA
2 Institute of Statistics, National Yang Ming Chiao Tung University, Taiwan, ROC
§ ABSTRACT
In this article, we develop nonparametric inference methods for comparing survival data across two samples, which are beneficial for clinical trials of novel cancer therapies where long-term survival is a critical outcome. These therapies, including immunotherapies or other advanced treatments, aim to establish durable effects. They often exhibit distinct survival patterns such as crossing or delayed separation and potentially leveling-off at the tails of survival curves, clearly violating the proportional hazards assumption and rendering the hazard ratio inappropriate for measuring treatment effects. The proposed methodology utilizes the mixture cure framework to separately analyze the cure rates of long-term survivors and the survival functions of susceptible individuals. We evaluate a nonparametric estimator for the susceptible survival function in the one-sample setting. Under sufficient follow-up, it is expressed as a location-scale-shift variant of the Kaplan-Meier (KM) estimator. It retains several desirable features of the KM estimator, including inverse-probability-censoring weighting, product-limit estimation, self-consistency, and nonparametric efficiency. In scenarios of insufficient follow-up, it can easily be adapted by incorporating a suitable cure rate estimator.
In the two-sample setting, besides using the difference in cure rates to measure the long-term effect, we propose a graphical estimand to compare the relative treatment effects on susceptible subgroups. This process, inspired by Kendall's tau, compares the order of survival times among susceptible individuals. The proposed methods' large-sample properties are derived for further inference, and the finite-sample properties are examined through extensive simulation studies. The proposed methodology is applied to analyze the digitized data from the CheckMate 067 immunotherapy clinical trial.
Keywords: Cure mixture model, Estimand, Immunotherapy, Insufficient follow-up, Kaplan-Meier estimator, Kendall's tau, Nonproportional hazards, Self-consistency, Sufficient follow-up, Two-sample comparison
§ INTRODUCTION
In clinical trials for novel cancer therapies, these treatments often exhibit distinct effect profiles compared to conventional treatments such as chemotherapy, likely due to their different pharmacological mechanisms <cit.>. For instance, immunotherapy stimulates the immune system to attack cancer cells, while targeted therapies disrupt cancer cell growth by focusing on specific genetic alterations or proteins, thus minimizing harm to healthy cells. Chemotherapy, often used as the control group in these trials, primarily aims to directly destroy rapidly dividing cancer cells. These varying mechanisms of action lead to different temporal effects on patient outcomes, which are evident in the survival curves. The crossing or delayed separation and subsequent leveling-off in the tails of survival curves, as shown in many reports, clearly violate the proportional hazards assumption.
In such scenarios, the log-rank test may lose power, and the hazard ratio becomes an inappropriate measure for evaluating treatment effects <cit.>.
While novel therapies aim to improve long-term survival, the presence of both long-term survivors and patients who do not respond favorably to treatments introduces complexity into clinical trial analyses <cit.>. The heterogeneity in patient responses often makes interpreting trial results challenging <cit.>.
In accordance with the ICH guidelines <cit.>, formulating suitable estimands for clinical trials involving long-term survivors is crucial. This is essential for the accurate analysis and interpretation of trial results, particularly when traditional statistical methods are inadequate.
Empirical evidence indicates that the long-term benefits of promising novel therapies might be limited to a subset of patients <cit.>. This observation aligns with the mixture formulation approach commonly used in analyzing survival data with a cure fraction <cit.>.
It assumes that there are two groups, one with the potential to be “cured,” and the other “uncured” or susceptible subjects. In this context, the term 'cure' does not necessarily imply the complete eradication of the disease but may instead represent a long-term survival probability that indicates no further risk of the event of interest.
Therefore, we adopt this framework to develop suitable estimands for separately measuring the treatment effects in the two sub-populations.
Nonparametric inference of the mixture cure model requires that the follow-up period is sufficiently long to ensure the potential observation of maximum lifetimes in the susceptible sub-population, necessary for the purpose of identifiability <cit.>. In the medical field, landmark trials that aim to establish the long-term efficacy of treatments often include results from extended follow-up periods.
An example is the CheckMate 067 trial, which enrolled 1,296 patients across 137 sites globally and extended its follow-up to at least 6.5 years, providing crucial insights into the long-term outcomes and safety of the treatments <cit.>. On the other hand, statistical methods for scenarios of insufficient follow-up have also been developed. Escobar-Bach and colleagues have utilized extrapolation techniques from extreme value theory to reduce the bias due to missing tail information <cit.>. Furthermore, the paper by Maller et al. reviews recent progress in cure mixture models <cit.>.
In the presence of covariates, various approaches to analyzing the cure mixture model have been explored. Statistical Methods in Medical Research featured a special issue on this topic, edited by Balakrishnan in 2017 <cit.>.
In a two-sample setting, we assess treatment effects by examining differences across key measures. The difference in cure fractions highlights the impact on long-term survivors, while comparisons of survival times among susceptible sub-groups allow us to evaluate the treatment's efficacy for those who do not achieve cure status.
For the latter comparison, the Cox-TEL method, which utilizes a Taylor expansion technique to bridge Cox proportional hazards (PH) and PH cure models for data with long-term survival, adjusts the hazard ratio to more accurately quantify the treatment effect among the two susceptible groups <cit.>. However, the Cox-TEL approach relies on the proportional hazards assumption for the two susceptible groups, an assumption that may be applicable only in certain limited clinical trial scenarios.
In Section <ref>, we first review the cure mixture framework and related existing results. We then focus on a location-scale-shift variant of the Kaplan-Meier (KM) estimator to estimate the susceptible survival function. With sufficient follow-up, we demonstrate that this estimator possesses several useful representations that parallel those of the KM estimator. These include the product-limit form, the Inverse Probability of Censoring Weighting (IPCW) expression, and a self-consistency equation. We also establish the theoretical properties of this latency survival estimator, including weak convergence and nonparametric efficiency. If follow-up is not sufficient, we demonstrate that the susceptible survival function can be estimated by incorporating the cure rate estimator proposed by Escobar-Bach and Van Keilegom <cit.>. Their method involves adding a compensating component to the tail of the KM estimator.
Section 3 examines the cure mixture formulation within a two-sample framework. Beyond assessing long-term survivor outcomes by comparing differences in cure rates between treatment groups, we introduce a graphical estimand that captures the temporal impact of treatments on susceptible groups. The proposed method is a modification of the approach originally developed by Tai et al. <cit.>. This plot offers a clear interpretation of the relative treatment effect on the susceptible groups over time, which is preferred to a single number summary such as an average hazard rate or restricted mean survival rate <cit.>. Our approach does not make any additional assumptions about the relationship between the groups under comparison, enhancing its versatility compared to the Cox-TEL approach.
An extensive simulation study examining the finite-sample properties of the proposed methodology is presented in Section 4, and an application of this methodology to the CheckMate 067 clinical trial is detailed in Section 5. The technical details, proofs of theorems, and additional numerical results are provided in the Supplementary Material.
§ ESTIMATION OF SUSCEPTIBLE SURVIVAL FUNCTION
In the one-sample setting, let T be the failure time with the survival function S(t) = Pr(T > t). Define ξ as the indicator of susceptibility such that T < ∞ if ξ = 1 and T = ∞ if ξ = 0. Under the mixture formulation, S(t) can be written as
S(t) = S_a(t) (1-η) + η,
where η = Pr(ξ = 0) represents the cure fraction, and S_a(t) = Pr(T > t | ξ = 1) denotes the survival function for susceptible individuals, also known as the latency survival function. Under right censoring, observations that are temporarily censored may become mixed with cured ones (long-term survivors), which can impact the identifiability of η and S_a(t). Let C be the censoring time with the survival function G(t) = Pr(C > t).
Assume that C is proper, with G(∞) = 0, and that T and C are independent and do not experience simultaneous jumps. Observed variables include X = min(T, C) and δ = I(T ≤ C), where I(A) is the indicator function that equals 1 if the event A is true and 0 otherwise. Denote ζ_C and ζ as the right end points of the supports of C and T|ξ = 1, respectively. The condition of sufficient follow-up is related to the requirement that ζ≤ζ_C, which means that the duration of follow-up is long enough to observe the largest event time among susceptible individuals <cit.>.
§.§ Nonparametric Analysis of the Cure Mixture Framework: A Review
Denote (T_i,C_i,ξ_i) (i = 1,…,n) as identically and independently distributed replications of (T,C,ξ).
Observed variables include (X_i, δ_i), where X_i = T_i ∧ C_i and δ_i = I(T_i ≤ C_i) (i = 1, …, n). Let 0 < t_(1) < … < t_(K) be the distinct ordered failure times, t_(0) = 0 and K be the number of distinct failure points.
The KM estimator of S(t) can be written as the product-limit form:
Ŝ(t) = ∏_k: t_(k)≤ t(1 - d_(k)/y_(k)),
where d_(k) = ∑_i=1^n I(X_i = t_(k), δ_i = 1) and y_(k) = ∑_i=1^n I(X_i ≥ t_(k)) (k = 1, …, K).
The KM estimator possesses several desirable properties.
It has been demonstrated that F̂(t) = 1- Ŝ(t) can also be expressed as an IPCW estimator <cit.> such that
F̂(t) = ∑_i=1^n I(X_i ≤ t, δ_i = 1)/nĜ(X_i) = 1/n∑_k: t_(k)≤ td̃_(k),
where d̃_(k) = d_(k)/Ĝ(t_(k)) and Ĝ(t) is the KM estimator of G(t) given by
Ĝ(t) = ∏_k:u ≤ t(1 - ∑_i=1^n I(X_i = u, δ_i = 0)/∑_i=1^n I(X_i ≥ u)).
The mass assigned to t_(k) is given by
Ŝ(Δ t_(k)) ≡Ŝ(t_(k-1)) - Ŝ(t_(k)) = d_(k)/nĜ(t_(k)) = 1/nd̃_(k),
where Ŝ(t_(0)) = Ŝ(0) =1.
When the largest observation is a censored observation, Ŝ(t_(K)) remains greater than 0, indicating that the KM curve reaches a plateau or levels off.
Under sufficient follow-up with ζ≤ζ_C, the cure fraction η can be estimated by
η̂ = Ŝ(t_(K))= 1- ∑_k=1^K d̃_(k)/n.
Maller and Zhou established the properties of η̂, such as consistency and asymptotic normality, given specific regularity conditions <cit.>.
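For concreteness, a minimal numpy sketch of the product-limit estimator and of the plateau estimate η̂ = Ŝ(t_(K)) is given below; the data are made-up toy values and the function names are ours.

```python
import numpy as np

def km_survival(time, event, grid):
    """Kaplan-Meier estimate of the survival function evaluated at each point of `grid`;
    `event` is 1 for an observed failure and 0 for a right-censored observation."""
    time, event = np.asarray(time, float), np.asarray(event, int)
    out = np.ones(len(grid), dtype=float)
    for k, t in enumerate(np.asarray(grid, float)):
        s = 1.0
        for u in np.unique(time[(event == 1) & (time <= t)]):
            d = np.sum((time == u) & (event == 1))     # failures at u
            y = np.sum(time >= u)                      # number at risk at u
            s *= 1.0 - d / y
        out[k] = s
    return out

# toy data: early failures mixed with heavily censored observations at the end of follow-up
x = np.array([0.3, 0.5, 0.7, 0.9, 1.1, 1.4, 2.5, 2.5, 2.5, 2.5])
delta = np.array([1, 1, 1, 0, 1, 1, 0, 0, 0, 0])
t_K = x[delta == 1].max()                              # largest observed failure time
eta_hat = km_survival(x, delta, np.array([t_K]))[0]    # plateau value of the KM curve
print("estimated cure fraction:", round(eta_hat, 3))
```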
Under insufficient follow-up, Escobar-Bach and Van Keilegom <cit.> propose a new estimator under the assumption that T|ξ = 1 belongs to the maximum domain of attraction of an extreme value
distribution. To estimate cure rates, they suggest utilizing the tail of the KM estimator, enhanced with a compensating term derived from extrapolation techniques in extreme value theory. Their formula is given by:
η̌_b = η̂ - [Ŝ(b t_(K)) - Ŝ(t_(K))]/[b̌_γ - 1],
where b ∈ (0,1) is a scaling factor, γ is an extreme value index that controls the tail behavior of S_a(t) and
b̌_γ = [Ŝ(b t_(K)) - Ŝ(b^2 t_(K))]/[Ŝ(t_(K)) - Ŝ(b t_(K))].
Consequently, the survival function can be estimated by directly substituting an estimator of η, either η̂ or η̌_b, into equation (<ref>). The referenced paper suggests using bootstrap resampling to select the value of b <cit.>.
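Continuing the toy example, η̌_b is a simple arithmetic combination of three values of the Kaplan–Meier curve. The sketch below reuses km_survival and the toy data from above, with an arbitrary choice of b (in practice b would be selected by the bootstrap procedure just mentioned).

```python
def eta_check(x, delta, b):
    """Tail-compensated cure-rate estimate following the formula above; `b` in (0,1)
    would in practice be chosen by the bootstrap procedure mentioned in the text."""
    t_K = x[delta == 1].max()
    S = lambda t: km_survival(x, delta, np.array([t]))[0]
    b_gamma = (S(b * t_K) - S(b**2 * t_K)) / (S(t_K) - S(b * t_K))
    return S(t_K) - (S(b * t_K) - S(t_K)) / (b_gamma - 1.0)

print("adjusted cure fraction (b = 0.6):", round(eta_check(x, delta, 0.6), 3))
```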
§.§ The Susceptible Survival Estimator under Sufficient Follow-Up
Under sufficient follow-up, substituting η̂ for η in equation (<ref>) and applying it to equation (<ref>) yields the estimator:
Ŝ^LS_a(t) = (Ŝ(t) - η̂)/(1- η̂),
which can be interpreted as a location-scale-shift variant of Ŝ(t).
We now demonstrate that Ŝ^LS_a(t) in (<ref>) is equivalent to the alternative IPCW and product-limit estimators:
Ŝ^W_a(t)= ∑_i=1^n I(X_i > t, δ_i = 1)/n(1-η̂)Ĝ(X_i) =
1/n̂_a∑_k: t_(k) >td̃_(k)
= ỹ_(k)/n̂_a
and
Ŝ_a^PL (t) = ∏_k: t_(k)≤ t(1 - d̃_(k)/ỹ_(k)),
where n̂_a = n(1-η̂), and ỹ_(k) is the adjusted number at risk at t_(k) given by
ỹ_(k)=∑_i=1^n I(X_i ≥ t_(k),δ_i=1)/Ĝ(X_i-) = ∑_j=k^Kd̃_(j).
Note that ỹ_(1) = ∑_k=1^Kd̃_(k)= n̂_a. The expression Ŝ_a^W(t) in (<ref>) is an IPCW estimator of S_a(t), with the mass assigned to each observed failure point adjusted by the estimated sample size n̂_a for the susceptible group. In addition, Ŝ_a^PL (t) in (<ref>) is an adjusted variant of the product-limit estimator in (<ref>).
Based on mass calculations, it can be shown that the three expressions in (<ref>)-(<ref>) are identical. Specifically, from the product-limit expression in (<ref>), the mass at t_(k+1) is given by:
Ŝ_a^PL(Δ t_(k+1)) = Ŝ_a^PL (t_(k)) d̃_(k+1)/ỹ_(k+1).
Since Ŝ^PL_a(t_(0)) =Ŝ^PL_a(0) =1, Ŝ^PL_a(Δ t_(1)) = d̃_(1)/ỹ_(1). It is straightforward to show that Ŝ^PL_a(t), Ŝ^W_a(t) and Ŝ^LS_a(t) are equal and can be written as
Ŝ_a(t) = ∑_i=1^n I(X_i > t, δ_i = 1)/n(1-η̂)Ĝ(X_i)
= 1/n̂_a∑_k: t_(k)>td̃_(k) = ỹ_(k)/n̂_a.
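The equivalence of the location-scale-shift and IPCW forms is easy to confirm numerically. The sketch below, reusing km_survival and the toy data above, evaluates both forms on a small grid and returns identical values.

```python
def susceptible_survival_ls(x, delta, grid):
    """Location-scale-shift form (S_hat(t) - eta_hat) / (1 - eta_hat)."""
    eta_hat = km_survival(x, delta, np.array([x[delta == 1].max()]))[0]
    return (km_survival(x, delta, grid) - eta_hat) / (1.0 - eta_hat)

def susceptible_survival_ipcw(x, delta, grid):
    """Equivalent IPCW form: failure masses 1/G_hat(X_i) beyond t, normalised by n_hat_a."""
    eta_hat = km_survival(x, delta, np.array([x[delta == 1].max()]))[0]
    G_at_x = km_survival(x, 1 - delta, x)      # censoring survival curve at each X_i
    n_a = len(x) * (1.0 - eta_hat)
    return np.array([np.sum(((x > t) & (delta == 1)) / G_at_x) / n_a for t in grid])

grid = np.array([0.0, 0.4, 0.8, 1.0, 1.2])
print(np.round(susceptible_survival_ls(x, delta, grid), 4))
print(np.round(susceptible_survival_ipcw(x, delta, grid), 4))   # the two forms agree
```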
We observe that the effect of long-term survivors on the estimation of S_a(t) simply involves reducing the sample size to the susceptible subgroup. Consider the following three risk sets:
R(t) = {i: X_i ≥ t | i=1,…,n},
R_a(t) = {i: X_i ≥ t, ξ_i=1 | i=1,…,n},
R̃(t) = {i: X_i ≥ t, δ_i=1 | i=1,…,n}.
When there are long-term survivors with ξ_i=0, not all members in R(t) are susceptible to the event of interest. However, in the presence of censoring, the susceptible risk set R_a(t) is not fully recoverable from the available data. Given that R̃(t) ⊂ R_a(t) ⊂ R(t), R̃(t) can be used as a proxy for R_a(t). To account for selection bias among observations in R̃(t), the inverse-probability-of-censoring weighting technique is employed to estimate the hazard rate of T|ξ = 1 at time t_(k) as d̃_(k)/ỹ_(k), which explains (<ref>).
The theoretical properties of Ŝ_a(t) outlined in this subsection are instrumental for extending the analysis to different censoring scenarios, such as interval censoring, and offer a rationale for employing bootstrap re-sampling techniques for additional inferential objectives.
In a remarkable article, Gill <cit.> showed that the nonparametric maximum likelihood estimator is often determined as the solution of the likelihood equations for a collection of smooth parametric submodels. These equations are, in fact, precisely the “self-consistency” equations introduced by Efron <cit.>. Consequently, in many settings, a solution of the self-consistency equation is equivalent to the nonparametric maximum likelihood estimator, and in general settings, the nonparametric maximum likelihood estimator will be asymptotically efficient.
Following the approach of Strawderman and Baer <cit.>, we derive the self-consistency property for
Ŝ^PL_a(t) in the cure model setting. The proposed self-consistency equation is a data-dependent function that represents the expected number of observations which are both susceptible and surviving at each time point in the sample. This function is expressed in terms of the target function S_a(t) and specific nonparametric estimates. The representation of the self-consistency function, along with its proof, is provided in Section 1 of the Supplementary Materials. The adaptability of the self-consistency property to interval-censored data and more complex censoring schemes presents a potential avenue for future extensions within the cure mixture framework
<cit.>.
We further study the weak convergence and nonparametric efficiency of the susceptible survival function. The weak convergence property of Ŝ_a(t) is given in Theorem <ref> and the property of nonparametrically efficiency is stated in Theorem <ref>.
Assume that 0 ≤η < 1, that the at-risk process Y(t) = ∑_i=1^n I(X_i ≥ t) satisfies
sup_0≤ t < ∞|Y(t)/n - S(t)G(t)|=o_P(1)
and
∫_0^ζdF(u)/G(u-)<∞.
Under sufficient follow-up with ζ≤ζ_C, √(n)(Ŝ_a(t) - S_a(t)) converges weakly to a mean-zero Gaussian process with t∈[0,ζ].
Note that when ζ < ζ_C, the condition in equation (<ref>) is satisfied. Therefore, it is necessary to verify condition (<ref>) only when ζ=ζ_C. The sufficiency of the condition in (<ref>) is discussed in detail by Gill <cit.>.
The theory of regular and asymptotically linear (RAL) estimators can be applied to analyze Ŝ_a(t). This estimator is represented in terms of the efficient influence function, a concept introduced in the referenced book <cit.>.
The influence function of the RAL estimator, which possesses the lowest asymptotic variance (van der Vaart, 1998, Theorem 25.20), corresponds to that of the asymptotically efficient estimator <cit.>.
Recall that the estimators Ŝ^PL_a(t), Ŝ^W_a(t), and Ŝ^LS_a(t) all equal Ŝ_a(t) in (<ref>).
The efficient influence function form of a survival function estimate for general right-censored data is detailed in Section 4 of the book by Robins and Rotnitzky <cit.> and on page 133 of the book by Van der Laan and Robins <cit.>.
Specializing those results for the model in (<ref>) and the estimation of η gives the efficient influence function of Ŝ_a(t).
ψ_t(X,δ)=I(X>t,δ=1)/(1-η)G(X-)+∫_0^ζq_t(u)/(1-η)π(u)dM_C,1(u)-S_a(t),
where π(u)=lim_n→∞ Y(u)/n,
q_t(u) = lim_n →∞∑_i=1^n I(X_i>t,δ_i=1,X_i≥ u)/{nG(X_i-) },
and
M_C,i(t)=I(X_i≤ t,δ_i=0)-∫_0^tI(Y(u)>0)-dG(u)/G(u-).
Under sufficient follow-up with ζ≤ζ_C, Ŝ_a(t) is regular and asymptotically linear (RAL) with the influence function
ψ_t(X, δ) = δ{I(X>t) - S_a(t)}/(1-η)G(X) + ∫_0^ζq_t(u)/π(u) dM_C,1(u),
where π(u) and q_t(u) are the probability limits of Y(u)/n and
∑_i=1^nδ_iI(X_i≥ u) [I(X_i>t) - S_a(t)]/n(1-η)G(X_i),
respectively, λ_c(t) is the hazard function of C, and the censoring martingale is
M_C,i(t) = I(X_i ≤ t, δ_i = 0) - ∫_0^t I(X_i ≥ u) λ_c(u)du.
It follows that Ŝ_a(t) is nonparametrically efficient and √(n)(Ŝ_a(t) - S_a(t)) converges in distribution pointwise to a mean-zero normal random variable for t∈[0,ζ].
The representation in Theorem <ref> is reminiscent of the representation of the KM estimator process as an identically and independently distributed process, as given by Lo and Singh <cit.>. This representation justifies the bootstrap method for estimating the standard error of functionals of the susceptible survival function estimate and its quantiles. Based on the bootstrap resampling approach, it provides a way of constructing confidence intervals (bands) for the unknown parameters (functionals of the distribution or quantile function). The proofs of the two theorems above are provided in Subsections 2.2 and 2.3 of the Supplementary Material.
§.§ The Susceptible Survival Estimator under Insufficient Follow-Up
Under conditions of insufficient follow-up, by substituting η̌_b from equation (<ref>) into equation (<ref>), we obtain:
Š_a^LS(t;b) = (Ŝ(t) - η̌_b)/(1 - η̌_b).
The properties of Š_a^LS(t;b) are determined by those of Ŝ(t) and η̌_b. For detailed properties of η̌_b, refer to Escobar-Bach and Van Keilegom <cit.>. Although η̌_b does not ensure convergence to the true cure rate η, it helps mitigate the underestimation typically associated with η̂ under insufficient follow-up.
As the duration of follow-up increases and approaches ζ, the estimator η̌_b becomes more accurate in approximating the true cure rate η. This improvement in the accuracy of η̌_b directly enhances the performance of Š_a^LS(t;b), making it more reliable in estimating the survival function S_a(t). In the supplementary materials, we offer heuristic discussions on the properties of Š_a^LS(t;b), with a particular focus on analyzing its bias relative to S_a(t).
§ TWO-SAMPLE APPLICATION
We propose a flexible alternative by adapting the methodology of Tai et al. <cit.>, which does not rely on specific model assumptions such as proportional hazards. Specifically, the tau process, which measures the relative performance of two groups under comparison, is defined as follows:
τ(t) = ∫_0^t S_1(u) dF_0(u) - ∫_0^t S_0(u) dF_1(u).
Each component of the integrand of τ(t) represents the process where, at each failure time in one group, the survival probability of the other group is evaluated at that specific time point. The function τ(t) sums these differences in survival probabilities up to time t, providing a cumulative measure of the disparity between the two groups over time. A positive value of τ(t) indicates that the treatment group (Group 1) exhibits a better effect up to time t. Note that τ(t) is a unitless and model-free measure, which makes it a robust and interpretable treatment-effect estimand, especially in the presence of non-proportional hazards <cit.>. Let τ̂(t) denote the estimator of τ(t).
In the absence of a cure, such that Pr(T < ∞) = 1, τ(∞) represents Kendall's tau correlation between the group indicator and the failure time. When cure is a possibility, the cross-comparison via the difference in the integrands of τ(t) is blurred by the presence of long-term survivors, as S_1(u) and S_0(u) may plateau at different levels. Specifically, when Group 1 exhibits a much higher cure rate than Group 0, leading to S_1(u) ≫ S_0(u) at large values of u, the differences in early stages may become obscured. In the next subsection, we define another tau process to compare event times between the two susceptible subgroups. We then explore its relationship with τ(t), as well as the cure rates η_0 and η_1.
§.§ Tau Process for Susceptible Subgroups
Under the two-sample setting, let (T_ℓ,C_ℓ,ξ_ℓ) be the failure time, censoring time and the indicator of susceptibility for Group ℓ with ℓ=0,1. The mixture model described in (<ref>) is adapted for each subgroup with ξ_ℓ = 1, such that
S_ℓ(t) = S_a,ℓ(t) (1-η_ℓ) + η_ℓ,
where S_ℓ(t) = Pr(T_ℓ > t), S_a,ℓ(t) = Pr(T_ℓ > t | ξ_ℓ = 1) and η_ℓ = Pr(ξ_ℓ = 0) for ℓ = 0,1. Define F_a,ℓ(t) = 1- S_a,ℓ(t) as the distribution function of T_ℓ|ξ_ℓ = 1 (ℓ = 0,1). The long-term treatment effect can be described by the difference of η_1 and η_0. The treatment effect on the susceptible groups is evaluated by quantifying the difference between S_a,1(t) and S_a,0(t).
To characterize the treatment effect on the two susceptible groups, we introduce the susceptible tau process as follows:
τ_a(t)=∫_0^tS_a,1(u)dF_a,0(u)-∫_0^tS_a,0(u)dF_a,1(u).
For susceptible patients who do not ultimately achieve a cure, a positive value of τ_a(t) suggests that the treatment may still prolong the time until the occurrence of the unfavorable event of interest. Given that τ(t) = Pr(T_0 < T_1 ∧ t) - Pr(T_1 < T_0 ∧ t),
we may think of τ_a(t) as follows:
τ_a(t) = 1/{(1-η_0)(1-η_1)}[ ( Pr(T_0 < T_1 ∧ t) - η_1 F_0(t)) - ( Pr(T_1 < T_0 ∧ t) - η_0 F_1(t)) ],
where -η_1 F_0(t) reflects the adjusted probability, excluding comparisons between susceptible subjects who die before t in group 0 and long-term survivors in group 1; -η_0 F_1(t) reflects the adjusted probability, excluding comparisons between susceptible subjects who die before t in group 1 and long-term survivors in group 0; and {(1-η_0)(1-η_1)}^-1 serves as a normalizing constant.
Note that the signs of τ(t) and τ_a(t) may differ, indicating a potential reversal in effects between susceptible individuals and those considered long-term survivors. Specifically, we can write
τ(t)=(1-η_0)(1-η_1)τ_a(t)+(1-η_0)η_1F_a,0(t)-(1-η_1)η_0F_a,1(t).
Consider a situation that η_1 - η_0 is far greater than zero but S_a,1(t) < S_a,0(t) (equivalently, F_a,1(t) > F_a,0(t)). Equation (<ref>) indicates that it is still possible for τ(t) > 0 while τ_a(t) < 0. In certain oncology clinical trials, this scenario may correspond to a situation where a subset of patients in Group 1 does not respond favorably to immunotherapy, resulting in their classification as short-term survivors, with even shorter survival.
It has been argued that high-risk individuals are more likely to be depleted over time, which can make the hazard ratio susceptible to selection bias <cit.>. Additionally, differences in cure rates may not fully capture treatment effects in the early stages of a trial, which are crucial for assessing overall efficacy. Therefore, to provide a comprehensive assessment of treatment effects, we recommend using multiple measures, including τ(t), τ_a(t), and η_1 - η_0.
§.§ Estimation of τ_a(t)
Observed data in the two-sample setting can be denoted as (X_ℓ,i, δ_ℓ,i) for i = 1, …, n_ℓ and ℓ=0,1.
Let X̃_ij = X_0,i∧ X_1,j and O_ij=I(X_0,i<X_1,j,δ_0,i=1)+I(X_0,i>X_1,j,δ_1,j=1). Under sufficient follow-up, η̂_ℓ is a legitimate estimator of η_ℓ for ℓ = 0,1. The resulting estimator of τ_a(t) is given by
τ̂_a(t) = ∑_i,jψ̂_ij(t)ŵ_ij/n_0 n_1(1-η̂_0)(1-η̂_1),
where
ψ̂_ij(t) = O_ij sign(X_1,j-X_0,i)I(X̃_ij≤ t)/{Ĝ_0(X̃_ij)Ĝ_1(X̃_ij)},
and
ŵ_ij = [(1 - η̂_0)Ŝ_a,0(X_0,i)/(1 - η̂_0)Ŝ_a,0(X_0,i) + η̂_0]^(1 - δ_0,i)×[(1 - η̂_1)Ŝ_a,1(X_1,j)/(1 - η̂_1)Ŝ_a,1(X_1,j) + η̂_1]^(1 - δ_1,j).
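The following sketch is a direct, unoptimized transcription of the double sum defining τ̂_a(t), reusing km_survival from the earlier sketch; it recomputes Kaplan–Meier curves inside the loops, so it is intended only to make the formula concrete, and all names are ours.

```python
def tau_a_hat(x0, d0, x1, d1, t):
    """Plug-in estimate of tau_a(t) from two right-censored samples; an unoptimised
    transcription of the double sum above, recomputing KM curves on the fly."""
    eta0 = km_survival(x0, d0, np.array([x0[d0 == 1].max()]))[0]
    eta1 = km_survival(x1, d1, np.array([x1[d1 == 1].max()]))[0]
    G0 = lambda u: km_survival(x0, 1 - d0, np.array([u]))[0]
    G1 = lambda u: km_survival(x1, 1 - d1, np.array([u]))[0]
    Sa0 = lambda u: (km_survival(x0, d0, np.array([u]))[0] - eta0) / (1 - eta0)
    Sa1 = lambda u: (km_survival(x1, d1, np.array([u]))[0] - eta1) / (1 - eta1)
    total = 0.0
    for i in range(len(x0)):
        for j in range(len(x1)):
            xij = min(x0[i], x1[j])
            observed_order = (x0[i] < x1[j] and d0[i] == 1) or (x0[i] > x1[j] and d1[j] == 1)
            if not observed_order or xij > t:
                continue
            psi = np.sign(x1[j] - x0[i]) / (G0(xij) * G1(xij))
            w = 1.0 if d0[i] == 1 else (1 - eta0) * Sa0(x0[i]) / ((1 - eta0) * Sa0(x0[i]) + eta0)
            w *= 1.0 if d1[j] == 1 else (1 - eta1) * Sa1(x1[j]) / ((1 - eta1) * Sa1(x1[j]) + eta1)
            total += psi * w
    return total / (len(x0) * len(x1) * (1 - eta0) * (1 - eta1))

print(round(tau_a_hat(x, delta, x + 0.2, delta, t=1.5), 3))   # toy two-sample comparison
```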
The asymptotic property of τ̂_a(t) is stated in Theorem <ref>, and its asymptotic variance can be obtained using the bootstrap approach, with the proof provided in Subsection 2.3 of the Supplementary Material.
Suppose that the conditions of Theorem <ref> hold for both groups, and let n_1/n converge to p_1 as n_0 and n_1 tend to infinity, where 0 < p_1 < 1. If E[ψ_ij(t)w_ij]^2 is finite for t up to min(ζ_0, ζ_1), then √(n)(τ̂_a(t) - τ_a(t)) converges pointwise to a mean-zero normal random variable.
One can apply a bootstrap approach for right censored data to develop inferential procedures. The validity of such bootstrap confidence interval and tests follow using classical arguments for right censored data <cit.>. The processes τ̂(t) and τ̂_a(t) can be implemented using the R package 'tauProcess' <cit.>.
Under insufficient follow-up, the estimator of η_ℓ can be obtained by modifying Equation (<ref>) for the two-sample setting, and is denoted as η̌_ℓ, b_ℓ for ℓ = 0, 1.
The corresponding estimator of τ_a(t) can be modified by replacing η̂_ℓ with η̌_ℓ, b_ℓ, and is denoted as τ̌_a(t; b_0, b_1).
Note that the tail behaviors of the two groups may differ, necessitating the separate estimation of b_0 and b_1 in practical applications. The implementation procedure suggested by Escobar-Bach and Van Keilegom will be summarized in the data analysis section <cit.>.
§ SIMULATION STUDY
In the first design, we assess the performance of the estimators for S_a(t) and η under both sufficient and insufficient follow-up conditions. In the second simulation design, we evaluate the estimators of τ_a(t) for two scenarios—crossing and non-crossing survival functions S_a,0(t) and S_a,1(t), with η_ℓ values of 0.2 and 0.4 for ℓ = 0, 1. For both simulation settings, we analyze the average bias of each estimator, along with the bootstrap standard deviation estimates and the empirical coverage probabilities of the estimated confidence intervals. These results are derived using 2000 bootstrap resamples across 500 simulation runs. Additional simulation results are detailed in the Supplementary Material.
§.§ Finite-Sample Performance of Estimators for S_a(t) and η
In the initial design, we assess the finite-sample performance of estimators for S_a(t) and η, setting η = 0.2 and 0.4. To model T|ξ = 1 with bounded support, we employ a Beta distribution with parameters α_1 = 1 and α_2 = 3.
We first discuss the results under conditions where ζ = ζ_C and (<ref>) is met, with the censoring variable C following a Uniform[0, 4] distribution. As shown in Table <ref>, the average biases of Ŝ_a(t) and η̂ are close to zero, and the bootstrap estimates of the standard deviation for Ŝ_a(t) closely match the empirical estimates.
The empirical coverage probabilities are also around the 95% nominal level. Note that confidence intervals are wider under η = 0.4 compared to η = 0.2. Additional analyses based on other scenarios are presented in the Supplementary materials.
[Insert Table <ref> & Table <ref>]
Under insufficient follow-up, the censoring variable C follows a Uniform[0, 0.8] distribution.
We evaluate the performance of Š_a^LS(t;b_*) using η̌_b_* to estimate η, where b_* is selected such that the corresponding estimator η̌_b closely matches the average outcome of a bootstrap experiment <cit.>. This choice minimizes the deviation between η̌_b and the average bootstrap estimator across multiple resampled datasets.
Compared with the performance of η̂ shown in Table <ref>, Š_a^LS(t;b_*) exhibits slight bias, as indicated in row (a) of the last column in Table <ref>.
Nevertheless, the performance of Š_a^LS(t;b_*) remains satisfactory as an estimator of S_a(t), although the coverage probability tends to deviate more from 95% as t increases.
§.§ Finite-Sample Performance of τ̂_a(t)
In this subsection, we present results for τ̂_a(t) under conditions of sufficient follow-up. Note that under insufficient follow-up, the performance of τ̌_a(t; b_0, b_1) is influenced by the shapes of the unobserved tail distributions. Therefore, we apply this modified estimator in our data analysis but exclude it from the simulations.
Figures S4 and S5 in the Supplementary material depict two non-crossing susceptible survival functions with the same cure rates and the corresponding τ_a(t). We present the results for the case with apparent disparity, and the other case is provided in the Supplementary material.
From Table <ref>,
we observe that the average bias of τ̂_a(t) is almost zero. The bootstrap estimates of the standard deviation of τ̂_a(t) closely align with the empirical estimates, and the empirical coverage probabilities are close to the 95% nominal level.
Notice that the lengths of the confidence intervals are associated with the values of η_0 and η_1. Higher cure rates lead to wider confidence intervals for τ_a(t). According to Table <ref>, τ̂_a(1) yields a significant result for testing H_0: τ_a(1) = 0.
[Insert Table <ref>]
In the Supplementary Material, Figures S6 and S7 depict scenarios where the susceptible survival functions intersect at a susceptible survival probability of approximately 0.5. The former corresponds to η_0=η_1 = 0.2, and the latter corresponds to η_0=0.2, η_1 = 0.4. The corresponding simulation results are summarized in Table <ref>. The performance of τ̂_a(t) is similar to that of the previous settings. The result from the last column indicates that there is no significant result for testing H_0: τ_a(1) = 0 based on τ̂_a(1).
[Insert Table <ref>]
In Figure S8 of the Supplementary Material, we present simulation results that examine the situation where η_1 > 0 and η_0 = 0. These results confirm the validity of the inference procedure based on τ̂_a(t).
§ DATA ANALYSIS
The CheckMate 067 trial, a randomized, multicenter, phase 3 study conducted from July 2013 to March 2014, evaluated three different immunotherapy strategies for advanced melanoma in a cohort of 945 patients. Participants were assigned to receive either a combination of nivolumab and ipilimumab, nivolumab alone, or ipilimumab alone, with each regimen designed to boost the immune system's ability to combat the disease.
First, we apply our proposed methodology by digitizing the overall survival KM curves for the three treatment groups based on the 4-year report <cit.>.
The estimated cure rates based on the tail values of the KM curves are 0.516 for the combination treatment, 0.452 for nivolumab alone, and 0.272 for ipilimumab alone. Given that the 4-year data may not represent sufficient follow-up, we also applied the extrapolation method proposed by Escobar-Bach and Van Keilegom <cit.>. This approach provided cure rate estimates based on η̌_b_*, yielding values of 0.481 for combination therapy, 0.421 for nivolumab alone, and 0.199 for ipilimumab alone.
The difference in cure rates between the combined group and 'ipilimumab alone' is significant, calculated as 0.516 - 0.272 = 0.244 (0.164, 0.323) with p-value 1.93× 10^-9 using the KM tail estimates, and 0.481 - 0.199 = 0.282 (0.128, 0.445) with p-value 3.97× 10^-4 using the modified estimator. Figure <ref> offers a graphical assessment, showcasing the KM curves, τ̂(t), as well as both versions of the estimated susceptible survival functions and the susceptible tau process. In plot (A), the combined group shows a higher survival curve and exhibits a higher cure rate. Plot (B) shows that τ̂(t) becomes positive starting from the third month onward.
Using η̂_ℓ as the estimate of η_ℓ, plots (C) and (D) display Ŝ_a,ℓ(t) for ℓ = 0, 1 and τ̂_a(t), respectively.
Using η̌_ℓ,b_*,ℓ as the estimate of η_ℓ, plots (E) and (F) display Š_a,ℓ^LS(t;b_*,ℓ) for ℓ = 0, 1 and τ̌_a(t; b_*,0, b_*,1), respectively. By comparing the last two rows of Figure <ref>, we observe that the modified cure rate estimators do not significantly impact the estimation of the susceptible survival functions and the susceptible tau process.
Although the combined therapy clearly outperforms 'ipilimumab alone' based on the KM estimators and τ̂(t), the analysis of susceptible groups indicates reversal relationships. This suggests that while a larger proportion of patients in the combined therapy group are long-term survivors, those who do not achieve a durable effect have similar or slightly shorter survival times compared to their counterparts in Group 0.
The difference in cure rates between 'nivolumab alone' and 'ipilimumab alone' is significant, calculated as 0.452 - 0.272 = 0.18 (0.101, 0.260) with p-value 9.08 × 10^-6 using the KM tail estimates, and 0.421 - 0.199 = 0.222 (0.067, 0.384) with p-value 0.005 using the modified estimator. According to Figure <ref>, although the 'nivolumab alone' group exhibits a higher cure rate, the estimated susceptible survival functions are roughly similar. The estimated curves of τ_a(t) demonstrate their proximity to zero and even suggest a slight reversal, as observed in the third row.
[Insert Figure <ref> & Figure <ref>]
§ CONCLUDING REMARKS
Long-term survivorship, often referred to as cure, is a common outcome in various fields. In the context of cancer treatment, the sustained effectiveness of immunotherapy and other advanced options has generated optimism among both patients and healthcare professionals.
However, understanding the potential heterogeneity in patient responses to these treatments remains a crucial aspect of devising appropriate treatment strategies for the right individuals. Cure mixture models provide a useful framework that allows for separate evaluation of long-term survivors and susceptible individuals who do not achieve the desired long-term status.
While the estimation of the cure rate assumes sufficient follow-up, which may not always be attainable, extreme value theory provides methods to develop new estimators <cit.>. These estimators can reduce bias in the tail estimates of the KM curves under conditions of insufficient follow-up.
Under sufficient follow-up, we demonstrate that the estimator of the susceptible survival function, Ŝ_a(t), inherits many favorable properties of the KM estimator, which paves the way for further extensions across various data structures and settings.
Under insufficient follow-up, the location-scale-shift version adapts the modified cure rate estimator suggested by Escobar-Bach and Van Keilegom <cit.>. To evaluate the effect on long-term survivors, we recommend using the difference in cure rates. Additionally, our proposed graphical estimand, τ_a(t), offers insights into the treatment effects over time for individuals who have not been cured, highlighting the timing and impact of the therapy. The susceptible tau process can be estimated nonparametrically, provided that a suitable cure rate estimator is available.
Building on these methods, employing multiple estimands in clinical trial analysis provides a clear framework that enhances understanding of survival outcomes. This approach not only enriches the analysis but also informs future therapeutic strategies and research directions.
§ ACKNOWLEDGEMENTS
We express our gratitude to Robert Strawderman and Benjamin Baer for their invaluable discussions on self-consistency and efficiency theory. Additionally, we are thankful to Mikael Escobar-Bach and Ingrid Van Keilegom for kindly providing the code for their cure rate estimator.
The authors declare no potential conflict of interests.
Wang's research received support from the National Science and Technology Council of Taiwan through grants 111-2118-M-A49-008 and 112-2118-M-A49-002. Wells’ research was partially supported by National Institutes of Health awards R01GM135926 and 1P01-AI159402.
|
http://arxiv.org/abs/2409.02357v1 | 20240904005936 | Volume bounds for hyperbolic rod complements in the 3-torus | [
"Norman Do",
"Connie On Yu Hui",
"Jessica S. Purcell"
] | math.GT | [
"math.GT",
"57K32 (primary) 57K10, 57K35, 57Z15 (secondary)"
] |
Volume bounds for hyperbolic rod complements in the 3-torus
School of Mathematics, Monash University, VIC 3800, Australia
[email protected]
School of Mathematics, Monash University, VIC 3800, Australia
[email protected]
School of Mathematics, Monash University, VIC 3800, Australia
[email protected]
§ ABSTRACT
The study of rod complements is motivated by rod packing structures in crystallography. We view them as complements of links comprised of Euclidean geodesics in the 3-torus. Recent work of the second author classifies when such rod complements admit hyperbolic structures, but their geometric properties are yet to be well understood. In this paper, we provide upper and lower bounds for the volumes of all hyperbolic rod complements in terms of rod parameters, and show that these bounds may be loose in general. We introduce better and asymptotically sharp volume bounds for a family of rod complements. The bounds depend only on the lengths of the continued fractions formed from the rod parameters.
§ INTRODUCTION
The present work is motivated by the notion of rod packing structures in crystallography. In 1977, O'Keeffe and Andersson observed that many crystal structures can be described as a packing of uniform cylinders, representing linear or zigzag chains of atoms or connected polyhedra <cit.>. In 2001, O'Keeffe et al. classified some of the simplest so-called rod packings in terms of arrangements in Euclidean space <cit.>. Rod packings have also appeared in the biological science and materials science literature <cit.>.
In previous work, the second and third authors initiated the mathematical study of rod packing structures using techniques from 3-dimensional geometry and topology, considering them as links in the 3-torus <cit.>. Using the theory of links in the 3-sphere, they identified an infinite family of rod complements in the 3-torus that admit complete hyperbolic structures. Following this, the second author provided a complete classification of the geometric structures on rod complements in the 3-torus <cit.>. As a consequence of this work, checking the hyperbolicity of a rod complement reduces to a linear algebra problem involving certain parameters that specify the rods and the way that they interleave. In particular, many rod complements are hyperbolic or have a hyperbolic rod complement component in their JSJ decomposition. It is thus natural to further study the hyperbolic geometry of rod complements, which we initiate in this paper by considering their volumes.
The Mostow–Prasad rigidity theorem asserts that a complete hyperbolic metric on a finite-volume hyperbolic 3-manifold is unique, so hyperbolic volume is a topological invariant. For a rod complement in the 3-torus, each rod has an associated direction in the unit cube fundamental region of the 3-torus. We encode the direction of each rod by integer vector coordinates, which we call rod parameters. Our most general result provides upper and lower volume bounds in terms of the number of rods and their rod parameters.
Theorem <ref>.
Let R_1, R_2, …, R_n be disjoint rods in the 3-torus whose complement is a hyperbolic 3-manifold M. After applying a linear homeomorphism and renumbering, if necessary, we may assume that there is a positive integer k < n such that R_k+1, R_k+2, …, R_n are exactly the (0,0,1)-rods. Suppose that R_i has direction vector (p_i, q_i, z_i), for i = 1, 2, …, n. Then we have the inequalities
n v_tet < vol(M) ≤ 8 v_tet( ∑_1 ≤ i < j ≤ k |p_i q_j - p_j q_i| + ∑_1 ≤ i ≤ k( gcd(p_i,q_i) - 1 ) ),
where v_tet ≈ 1.01494 is the volume of the regular ideal tetrahedron.
The lower bound derives from a volume bound proved by Adams, which applies to any cusped hyperbolic 3-manifold <cit.>. Such a bound can be loose in general; indeed, we find families of rod complements for which the number of rods is fixed at n = 3, but for which the volumes approach infinity.
The upper bound uses more recent results of Cremaschi and Rodríguez Migueles <cit.>, which can be applied to many complements of geodesic links in Seifert fibred spaces. Again, such a bound can be loose, even when restricted to rod complements; there are families of rod complements for which the volumes are bounded but for which the right side of the inequality above grows to infinity.
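For a given configuration of rods, the right side of the inequality above is elementary to evaluate. The short sketch below computes the bracketed quantity from the (p_i, q_i) parameters of the rods that are not (0,0,1)-rods, reading the two sums as |p_i q_j - p_j q_i| and gcd(p_i, q_i) - 1.

```python
from math import gcd

def upper_bound_bracket(directions):
    """Bracketed quantity in the upper bound above, computed from the (p_i, q_i)
    parameters of the rods that are not (0,0,1)-rods."""
    pq = [(p, q) for (p, q, z) in directions if (p, q) != (0, 0)]
    cross = sum(abs(p1 * q2 - p2 * q1)
                for a, (p1, q1) in enumerate(pq)
                for (p2, q2) in pq[a + 1:])
    return cross + sum(gcd(abs(p), abs(q)) - 1 for (p, q) in pq)

# a (1,0,0)-rod, a (0,1,0)-rod and a (2,3,0)-rod together with any number of (0,0,1)-rods
print(upper_bound_bracket([(1, 0, 0), (0, 1, 0), (2, 3, 0), (0, 0, 1)]))   # 1 + 3 + 2 + 0 = 6
```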
Thus, while <ref> provides reasonable initial bounds that may be strong in certain cases, they are somewhat unsatisfying in general. It would be desirable to have upper and lower volume bounds that depend linearly on the same quantity. For example, hyperbolic volumes of 2-bridge knots <cit.>, alternating knots <cit.>, and highly twisted knots <cit.> are known to be bounded above and below by linear functions of the number of twist regions. For all of these knot complements, the upper bound is asymptotically sharp. The lower bound is asymptotically sharp in the 2-bridge case <cit.>, and sharp, realised by the Borromean rings, in the alternating case <cit.>. Similarly, there are upper and lower volume bounds for adequate knots in terms of coefficients of coloured Jones polynomials <cit.>. There are also upper and lower volume bounds for fibred 3-manifolds in terms of a quantity related to the action of the monodromy map <cit.>, with analogous results for cusp volumes <cit.>. One would like to obtain such results for rod complements.
While we have not obtained coarse volume bounds of this form in general, we do find improved, asymptotically sharp volume bounds for infinite families of rod complements in terms of the lengths of the continued fractions formed from their rod parameters. These lengths of continued fractions can remain the same when rod parameters increase hugely.
Theorem <ref>.
Let R_1, R_2, …, R_n be disjoint rods in the 3-torus whose complement is M, where n ≥ 3. Suppose that R_n has direction vector (0,0,1) and for i < n, R_i has direction vector (p_i, q_i, 0), with (p_i, q_i) ≠ (p_i+1, q_i+1) for i = 1, 2, …, n-2 and (p_n-1, q_n-1) ≠ (p_1,q_1). Suppose that R_1, R_2, …, R_n-1 are positioned from top to bottom in the unit cube representation of the 3-torus. Let [c_i1; c_i2, …, c_im_i] be a continued fraction expansion for p_i/q_i. Then M is hyperbolic and its volume satisfies the asymptotically sharp upper bound
vol(M) ≤ 2 v_oct∑_i=1^n-1 m_i,
where v_oct ≈ 3.66386 is the volume of the regular ideal octahedron.
Suppose in addition that
C = min_1 ≤ i ≤ n-1, j ≥ 2{ |c_ij|, |c_i1-c_(i-1) 1| }≥ 6,
where c_01 is interpreted as c_(n-1)1. Then the volume satisfies the lower bound
vol(M) ≥( 1 - 4π^2/(C^2+4))^3/2 · 2 v_oct∑_i=1^n-1 m_i.
<ref> leads to the following consequences.
Corollary <ref>.
There exists a sequence of hyperbolic rod complements with bounded volume, but for which the upper bound of <ref> grows to infinity.
Corollary <ref>.
There exists a sequence of hyperbolic rod complements, each with three rods, whose volumes grow to infinity.
The structure of the paper is as follows.
* In <ref>, we introduce some terminology, notation and foundational results that are used throughout the paper. These pertain to rod complements, continued fractions and homeomorphisms from the n-dimensional torus to itself.
* In <ref>, we provide general volume bounds for all hyperbolic rod complements in the 3-torus (<ref>). The upper bound is in terms of the rod parameters, while the lower bound is only in terms of the number of rods.
* In <ref>, we introduce the notion of nested annular Dehn filling in the 3-torus.
* In <ref>, we use the notion of nested annular Dehn filling to provide more refined volume bounds for a particular class of rod complements (<ref>). This is sufficient to exhibit a family of rod complements with bounded volumes for which the upper bound of <ref> grows to infinity (<ref>) and another family with bounded number of rods whose volumes grow to infinity (<ref>).
* In <ref>, we conclude with brief discussion of open questions that are motivated by the present work.
§.§ Acknowledgements
We thank José Andrés Rodríguez-Migueles for helpful conversations. This work was partially supported by the Australian Research Council grant DP240102350.
§ PRELIMINARIES
§.§ Rod complements
We consider the 3-torus 𝕋^3 as the unit cube [0,1] × [0,1] × [0,1] in 3-dimensional Euclidean space, with opposite faces glued identically, as in <cit.>. Its universal cover is ℝ^3 and it inherits the Euclidean metric from ℝ^3.
A rod is the projection of a Euclidean straight line with rational slope in ℝ^3 to 𝕋^3 under the covering map.
For n a positive integer, an n-rod complement is the complement of n disjoint rods in the 3-torus. When n is unspecified, we refer to such a manifold simply as a rod complement.
Let p, q, z be integers, not all zero, with gcd(p,q,z) = 1. A (p,q,z)-rod is a geodesic in 𝕋^3 that has (p,q,z) as a tangent vector. We call (p,q,z) a direction vector of the rod, where we consider (p,q,z) only up to a change of sign. We call the integers p, q, z the rod parameters of the rod.
A rod complement is said to be hyperbolic if it admits a complete hyperbolic structure; for further details on hyperbolic geometry, see for example <cit.>. In previous work, the second author classified exactly when rod complements are hyperbolic, Seifert fibred or toroidal.
Let R_1, R_2, …, R_n be disjoint rods in 𝕋^3.
The rod complement 𝕋^3 ∖ (R_1 ∪ R_2 ∪⋯∪ R_n) is:
* Seifert fibred if and only if all rods have the same direction vector; and
* toroidal if
* the direction vectors of the rods all lie in the same plane; or
* there exist two distinct rods that are linearly isotopic in the complement of the other rods.
In case (3)(b), suppose without loss of generality that R_n-1 and R_n are linearly isotopic in the complement of the other rods. Then an essential torus encircling the linearly isotopic rods will cut the rod complement into a solid torus containing R_n-1 and R_n, and a new rod complement with rods R_1, R_2, …, R_n-1. So if there were three linearly independent rods to begin with, there would be a unique hyperbolic rod complement appearing as a component of the JSJ decomposition; see <cit.>. The upshot of this discussion is that rod complements are very commonly hyperbolic, in a certain sense.
Observe that in a hyperbolic rod complement, there may be several rods with the same direction vector, provided that for any two such rods, at least one other rod intersects the linear annuli bound by them. Two or more rods with the same direction vector are said to be parallel.
§.§ Continued fractions
Let p, q be nonzero relatively prime integers. The rational number p/q can be expressed as a finite continued fraction
p/q = [c_1; c_2, …, c_m] ≔ c_1 + 1/(c_2 + 1/(c_3 + ⋯ + 1/c_m)) ,
where c_1 is an integer and c_2, …, c_m are non-zero integers. The integers c_1, c_2, …, c_m are called coefficients or terms of the continued fraction and the number m is called the length of the continued fraction.
Observe that a continued fraction expansion for a given rational number is not unique. For example, the rational number 7/4 can be expressed in several ways, including [1; 1, 3], [1; 1, 2, 1] and [2; -4]. The upper bound of <ref> is strengthened by using continued fraction expansions that have minimal length. In particular, if m ≥ 2, we do not allow c_m = 1 in the continued fraction expansion above.
Note that the length of the continued fraction [0] = 0/1 is one. For convenience, we define the “empty” continued fraction [ ] = 1/0 and consider its length to be zero.
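A short Python sketch (ours, not part of the paper) evaluates a continued fraction from its list of terms and confirms, for instance, that the three expansions of 7/4 mentioned above agree; the function name is our own choice.

```python
from fractions import Fraction

def cf_value(terms):
    """Evaluate a continued fraction [c_1; c_2, ..., c_m] with integer terms."""
    value = Fraction(terms[-1])
    for c in reversed(terms[:-1]):
        value = c + 1 / value
    return value

# The three expansions of 7/4 given above all evaluate to the same rational number.
for terms in ([1, 1, 3], [1, 1, 2, 1], [2, -4]):
    print(terms, cf_value(terms))   # each prints 7/4
```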
The rational numbers whose continued fraction expansions we consider arise as slopes on the 2-torus. We consider the 2-torus ^2 as the unit square [0,1] × [0,1] in 2-dimensional Euclidean space, with opposite faces glued identically. Its universal cover is ^2 and it inherits the Euclidean metric from ^2.
Let p and q be integers, not both zero, with (p, q) = 1. A simple closed geodesic on ^2 is said to have slope p/q or to be a (p,q)-curve if it is isotopic to the projection of a line in ^2 with slope q/p. Observe that our definition of slope on the torus is the reciprocal of the corresponding slope on the plane. We defined slope of simple closed geodesics in this way because of our choices of notations in Section <ref>.
§.§ Homeomorphisms of the n-torus
The following are useful results concerning homeomorphisms of the n-dimensional torus ^n. The statements are well-known, but short proofs have been provided for completeness.
For n≥ 2, an element A ∈ SL(n, ℤ) induces a homeomorphism from 𝕋^n to itself.
The element A ∈ SL(n,ℤ) gives rise to a homeomorphism from ℝ^n to itself that sends the integer lattice ℤ^n to itself. In particular, it takes the standard basis of ℝ^n to a basis formed by the columns of A, whose coordinates are integers. This produces a new fundamental domain for the torus. The induced homeomorphism simply maps the standard fundamental domain of the torus to this new fundamental domain via A.
In fact, it is known that when n=2 or n=3, (n, ) is the mapping class group of ^n. (The result for n=2 appears in <cit.> while the result for n=3 follows from work of Hatcher <cit.>.)
Given a rod complement in the 3-torus that contains an (a,b,c)-rod R, there exists an element of SL(3, ℤ) that sends (a, b, c) to (0, 0, 1). By <ref>, we may change the fundamental region of the 3-torus to ensure that R is a (0,0,1)-rod. In the rest of the paper, we often assume without loss of generality that one of the rods in a rod complement has direction vector (0,0,1).
Let n≥ 2 be an integer. Suppose that a_n = (a_1n, a_2n, …, a_nn)^⊺ is a nonzero vector in ℤ^n ⊂ ℝ^n with gcd(a_1n, a_2n, …, a_nn) = 1. Then there exist vectors a_1, a_2, …, a_n-1 in ℤ^n such that det(a_1, a_2, …, a_n) = 1.
We prove the result by induction on n. Suppose that a_2 = (a_12, a_22)^⊺ is a nonzero vector in ℤ^2 with gcd(a_12,a_22) = 1. By Bézout's lemma, there exist integers a_11, a_21 such that a_11a_22-a_21a_12 = 1. So defining a_1 = (a_11, a_21)^⊺ leads to det(a_1, a_2) = 1. This proves the base case n = 2.
Now let n ≥ 3 be an integer. Suppose that a_n = (a_1n, a_2n, …, a_nn)^⊺ is a nonzero vector in ℤ^n with gcd(a_1n, a_2n, …, a_nn) = 1. Without loss of generality, suppose that a_nn≠ 0, so that the truncated vector â_n ≔ (a_2n, a_3n, …, a_nn)^⊺ is nonzero. Let
d ≔ gcd( a_2n, a_3n, …, a_nn).
Since gcd(a_1n, d) = gcd(a_1n, a_2n, …, a_nn) = 1, by Bézout's lemma, there exist integers s and t such that s d - t a_1n = 1.
Set a_11 = s and
(a_21, a_31, …, a_n1) ≔ (t/d) â_n^⊺ = (t/d)( a_2n, a_3n, …, a_nn).
Since (1/d) â_n ∈ ℤ^n-1 and gcd( a_2n/d, a_3n/d, …, a_nn/d) = 1, by induction there exist â_2, â_3, …, â_n-1 in ℤ^n-1 such that det(â_2, â_3, …, â_n-1, (1/d) â_n) = 1.
Now define a_1 ≔ (s, (t/d) â_n^⊺)^⊺, a_2 ≔ (0, â_2^⊺)^⊺, …, a_n-1 ≔ (0, â_n-1^⊺)^⊺. Then by expanding along the first row, we find that
det( a_1, a_2, …, a_n-1, a_n)
= a_11 det(â_2, …, â_n-1, â_n) + (-1)^1+n a_1n det((t/d) â_n, â_2, …, â_n-1)
= s d det(â_2, …, â_n-1, (1/d) â_n) + (-1)^(1+n)+(n-2) a_1n t det(â_2, …, â_n-1, (1/d) â_n)
= sd - a_1n t
= 1.
This concludes the induction.
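To make the induction concrete, the following Python sketch (our illustration; the function names are ours and the input is assumed to satisfy gcd(p,q,r) = 1 with r ≠ 0) carries out the n = 3 case of the construction.

```python
from math import gcd

def bezout(a, b):
    """Extended Euclid: return (x, y) with a*x + b*y == gcd(a, b)."""
    if b == 0:
        return (1 if a >= 0 else -1, 0)
    x, y = bezout(b, a % b)
    return (y, x - (a // b) * y)

def det3(M):
    (a, b, c), (d, e, f), (g, h, i) = M
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

def extend_to_sl3(p, q, r):
    """Build an integer 3x3 matrix of determinant 1 whose last column is (p, q, r),
    following the lemma's induction (assumes gcd(p, q, r) = 1 and r != 0)."""
    d = gcd(q, r)                 # gcd of the truncated vector (q, r)
    s, mt = bezout(d, p)          # s*d + mt*p == 1, so with t = -mt we get s*d - t*p == 1
    t = -mt
    # base case n = 2: find (x, y) with x*(r//d) - y*(q//d) == 1
    x, my = bezout(r // d, q // d)
    y = -my
    col1 = (s, t * q // d, t * r // d)
    col2 = (0, x, y)
    col3 = (p, q, r)
    return [[col1[i], col2[i], col3[i]] for i in range(3)]

M = extend_to_sl3(2, 4, 3)
print(M, det3(M))   # [[1, 0, 2], [0, -1, 4], [0, -1, 3]] with determinant 1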
For fixed n ≥ 2, all 1-rod complements in the n-torus are homeomorphic.
Let R be a rod in the n-torus whose fundamental region is [0,1]^n. Suppose a_n = (a_1n, a_2n, …, a_nn)^⊺ is the direction vector of R. We may translate the rod R so that it intersects the origin. As R is a simple closed curve, we must have gcd(a_1n, a_2n, …, a_nn) = 1. By <ref>, there exist vectors a_1, a_2, …, a_n-1 in ℤ^n⊂ℝ^n such that
det(a_1, a_2, …, a_n) = 1.
Hence, the matrix (a_1, a_2, …, a_n) lies in SL(n,ℤ) and by <ref>, it induces a homeomorphism that maps the (0, 0, …, 0,1)-rod to the a_n-rod. Therefore, any 1-rod complement 𝕋^n ∖ R is homeomorphic to 𝕋^n ∖ R_z, where R_z represents a standard (0, 0, …, 0, 1)-rod.
§ VOLUME BOUNDS FOR ALL ROD COMPLEMENTS
In this section, we obtain upper and lower bounds on the volumes of all hyperbolic rod complements.
An n-rod complement in the 3-torus with k ≥ 1 parallel rods is an (n-k)-rod complement in the Seifert fibred space 𝕋^2_k×𝕊^1, where 𝕋^2_k is a torus with k punctures.
Let M be an n-rod complement in the 3-torus with k parallel rods R_1, R_2, …, R_k. Suppose that these parallel rods have direction vector (a, b, c), where a, b, c are integers such that (a, b, c) = 1. By <ref>, there exist integers f, g, h, p, q, r such that
det[ a f p; b g q; c h r ] = 1
⇒ [ a f p; b g q; c h r ]∈ SL(3,ℤ).
By <ref>, such a matrix represents an orientation-preserving homeomorphism of ^3 sending the rods with direction vectors (1,0,0), (0,1,0), (0,0,1) to rods with direction vectors (a,b,c), (f,g,h), (p,q,r), respectively.
Define T ⊂^3 to be a 2-torus spanned by the vectors (f,g,h) and (p,q,r). Note that T∖ (R_1∪ R_2∪⋯∪ R_k) is a k-punctured torus. As the homeomorphism represented by the above matrix sends the standard fundamental region of the 3-torus to the fundamental region spanned by the vectors (a,b,c), (f,g,h), (p,q,r), M is homeomorphic to an (n-k)-rod complement in the Seifert fibred space T ∖ (R_1∪ R_2 ∪⋯∪ R_k) ×^1.
We now prove the general volume bounds for hyperbolic rod complements stated in <ref>.
From <ref>, we deduce that n ≥ 3. Adams proved that an n-cusped hyperbolic 3-manifold M with n ≥ 3 satisfies the inequality vol(M) > n, which is the desired lower bound <cit.>.
We obtain the upper bound using a result of Cremaschi and Rodríguez-Migueles <cit.>. They prove that for a link ℒ in an orientable Seifert fibred space N over a hyperbolic 2-orbifold O in which ℒ projects injectively to a filling geodesic multi-curve 𝒞 ⊆ O, one has the volume bound
vol(N ∖ ℒ) < 8 i(𝒞, 𝒞).
Here, i(𝒞, 𝒞) denotes the geometric self-intersection number of 𝒞.
In our particular setting, <ref> asserts that M is homeomorphic to a k-rod complement in the Seifert fibred space
N = ( T ∖ (R_k+1∪ R_k+2∪⋯∪ R_n) ) ×𝕊^1,
where T ⊆ 𝕋^3 is a 2-torus such that the intersection number between R_n and T is 1. Denote by ℒ the k-component link R_1 ∪ R_2 ∪⋯∪ R_k in N. Here, R_i is a (p_i, q_i, z_i)-rod with (p_i, q_i) ≠ (0, 0) for i = 1, 2, …, k.
Let 𝒫 : N → T ∖ (R_k+1∪ R_k+2∪…∪ R_n) be the bundle projection map. Note that the link ℒ projects to 𝒫(ℒ), a union of k rods in the base space T ∖ (R_k+1∪ R_k+2∪⋯∪ R_n), which is a 2-torus in 𝕋^3 with n-k punctures. The rod R_i projects to a (p_i,q_i)-curve on this punctured torus.
After a small deformation of the rods, we may ensure that their projections intersect transversely, with at most two arcs meeting at each intersection point. Any pair of projections 𝒫(R_i) and 𝒫(R_j) intersect at least |p_iq_j-p_jq_i| times; see for example <cit.>. The (p_i,q_i)-curve 𝒫(R_i) intersects itself at least gcd(p_i, q_i) - 1 times. Hence, the total geometric intersection number of 𝒫(ℒ) is
∑_1 ≤ i < j ≤ k |p_i q_j - p_j q_i| + ∑_1 ≤ i ≤ k ( gcd(p_i,q_i) - 1 ).
Thus, applying the result of Cremaschi and Rodríguez-Migueles leads to the upper bound.
The upper volume bound in <ref> depends on the choice of rod that is sent to the (0,0,1)-rod via a homeomorphism of ^3. For example, if we consider four rods R_1, R_2, R_3, R_4 with direction vectors (2,4,3), (5,7,1), (9,8,6), (0,0,1), respectively, <ref> will give us an upper volume bound 8 × 50. Using the constructive proof of <ref>, we obtain the following matrices in (3,) that map (0,0,1) to R_1, R_2, R_3, respectively.
[ 1 0 2; 0 -1 4; 0 -1 3 ] [ 1 0 5; 0 1 7; 0 0 1 ] [ -4 0 9; -4 -1 8; -3 -1 6 ]
By taking the inverses of these matrices and computing the new rod parameters, we now obtain upper volume bounds of 8 × 116, 8 × 114, and 8 × 132, respectively. We naturally take the minimum among all such choices to obtain a suitable upper bound.
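The intersection count in the upper bound is easy to check mechanically; here is a small Python sketch of ours (the helper name is our own) reproducing the value 50 for the original rod parameters above.

```python
from math import gcd
from itertools import combinations

def crossing_bound(slopes):
    """Intersection count in the upper bound: sum over pairs of |p_i q_j - p_j q_i|
    plus, for each rod, gcd(p_i, q_i) - 1."""
    pairs = sum(abs(p1*q2 - p2*q1) for (p1, q1), (p2, q2) in combinations(slopes, 2))
    selfs = sum(gcd(p, q) - 1 for p, q in slopes)
    return pairs + selfs

# Slopes (p_i, q_i) of the rods (2,4,3), (5,7,1), (9,8,6), with R_4 = (0,0,1).
print(crossing_bound([(2, 4), (5, 7), (9, 8)]))   # 50, giving the upper bound 8 x 50
```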
§ NESTED ANNULAR DEHN FILLING IN THE 3-TORUS
We will show that neither the upper nor lower bound of <ref> can be part of a two-sided coarse volume bound in terms of the given parameters. That is, we exhibit a family of rod complements with fixed number of cusps whose volumes grow to infinity as well as a family of rod complements with bounded volume for which the intersection number in the upper bound of <ref> grows to infinity. For both of these results, we use the machinery of annular Dehn filling.
Let A be an annulus embedded in a 3-manifold M, with boundary curves L^+ and L^-. Let μ^± denote a meridian of N(L^±) and let λ^± denote a longitude of N(L^±) that is parallel to A.
For an integer n, define (1/n)-annular Dehn surgery to be the process of drilling N(L^+) and N(L^-) from M, performing (+1/n)-Dehn filling on N(L^+) and performing (-1/n)-Dehn filling on N(L^-).
The surgery can be realised by cutting along A, performing n Dehn twists along the core of A in the anticlockwise direction (where the induced orientation puts L^+ on the right of the core of A), and then regluing; see for example <cit.>.
If the curves L^± are already drilled, such as in the case of a link complement, define (1/n)-annular Dehn filling along A to be the process of performing (+1/n)-Dehn filling on L^+ and performing (-1/n)-Dehn filling on L^-, where the framing on the link components is as above.
In our case, we perform annular Dehn filling on an annulus bounded by a pair of parallel rods in the 3-torus. Note that parallel rods bound many annuli in ^3. The following result confirms that the resulting link is well defined, regardless of our choice of annulus.
Let R^+ and R^- be parallel rods in ^3 that form the boundary of two non-isotopic annuli A^+ and A^- with disjoint interiors. Suppose that A^+ is the annulus oriented with R^+ on the right of the core, under the induced orientation from ^3. Then (1/n)-annular Dehn filling on A^+ and (-1/n)-annular Dehn filling on A^- result in homeomorphic manifolds.
More generally, suppose that A_1 and A_2 are disjoint annuli with A_1 cobounded by rods R_0 and R_1, with R_1 to the right, and A_2 cobounded by R_1 and a rod R_2, with R_1 to the left. Let M be the result of performing (1/n)-annular Dehn filling on A_1 followed by (1/m)-annular Dehn filling on A_2. Then M is also the result of performing (-1/n)-Dehn filling on R_0, followed by (1/(n-m))-Dehn filling on R_1, followed by (1/m)-Dehn filling on R_2, when R_0 ≠ R_2. If R_0=R_2, then the Dehn filling coefficient on R_0=R_2 is 1/(m-n).
Let N^+ be the manifold obtained by 1/n-annular Dehn filling A^+ and let N^- be the manifold obtained by -1/n-annular Dehn filling A^-. The fact that N^+ and N^- are homeomorphic follows from the fact that the link complements have the same Dehn surgery coefficients. Thus, the results of the Dehn fillings must be homeomorphic.
To prove the more general statement, we again consider the Dehn surgery coefficients. Annular Dehn filling first along A_1 gives surgery slope μ + nλ on R_1 and μ - n λ on R_0, where μ denotes a meridian and λ is parallel to A_1. Then performing 1/m-annular Dehn filling along A_2 adjusts the surgery slope on R_1 by subtracting m longitudes, giving μ + (n-m)λ. It gives a surgery slope of μ +mλ on R_2 when R_0 ≠ R_2. When R_0 = R_2, the slopes combine as on R_1 to give μ - (n-m)λ.
Let m be an even positive integer. Consider a unit cube fundamental region of ^3. For each i = 1, 2, …, m/2, let (R_2i-1^+, R_2i-1^-) be a pair of (1,0,0)-rods bounding a vertical xz-plane annulus within the unit cube, with R_2i-1^+ above and R_2i-1^- below. Let (R_2i^-, R_2i^+) be a pair of (0,1,0)-rods bounding a vertical yz-plane annulus with R_2i^- above and R_2i^+ below. A rod R is said to be sandwiched along the xy-plane by nested pairs of rods with order (R_1^+, R_2^-, …, R_m-1^+, R_m^-) if and only if R lies in an xy-plane and the rods are positioned from top to bottom in the unit cube in the order
(R_1^+, R_2^-,…, R_m-1^+, R_m^-, R, R_m^+, R_m-1^-, …, R_2^+, R_1^-).
Similarly, for m an odd positive integer, we can say R is sandwiched along the xy-plane by nested pairs of rods with order (R_1^+, R_2^-, …, R_m-1^-, R_m^+) if and only if R lies in an xy-plane and the rods are positioned from top to bottom in the unit cube in the order
(R_1^+, R_2^-,…, R_m-1^-, R_m^+, R, R_m^-, R_m-1^+, …, R_2^+, R_1^-).
See the top-left picture of <ref> for an example of a rod sandwiched by nested pairs of rods with m = 3.
Let p and q be integers with (p,q)=1. Suppose that [c_1;c_2, …, c_m] is a continued fraction expansion of p/q. If m is even, consider a (1,0,0)-rod R_x sandwiched along the xy-plane by nested pairs of rods with order
(R_1^+, R_2^-, …, R_m-1^+, R_m^-).
If m is odd, consider a (0,1,0)-rod R_y sandwiched along the xy-plane by nested pairs of rods with order
(R_1^+, R_2^-, …, R_m-1^-, R_m^+).
Sequentially apply (1/c_i)-annular Dehn filling to the pair (R_i^+, R_i^-) of rods, starting with i = m and ending with i = 1. Then the rod R_x for m even (respectively, R_y for m odd) is transformed to a (p,q,0)-rod.
We will focus on the case when the length m of the continued fraction is odd. The argument for m even follows similarly.
Starting with the (0,1,0)-rod R_y and applying (1/c_m)-annular Dehn filling to (R_m^+, R_m^-) transforms the (0,1,0)-rod R_y to a (c_m, 1, 0)-rod R^(1). See the first and second pictures of <ref> for an example.
The (c_m, 1, 0)-rod R^(1) intersects the annulus bounded by R_m-1^- and R_m-1^+ a total of c_m times. Applying (1/c_m-1)-annular Dehn filling to (R_m-1^+, R_m-1^-) transforms the (c_m, 1, 0)-rod R^(1) into a (c_m, 1+c_m c_m-1, 0)-rod R^(2). See the second and third pictures of <ref> for an example. Observe that the ratio of the rod parameters satisfies
(1+c_m c_m-1)/c_m = c_m-1 + 1/c_m.
The (c_m, 1+c_m c_m-1, 0)-rod R^(2) intersects the annulus bounded by R_m-2^+ and R_m-2^- a total of 1+c_m c_m-1 times. Applying (1/c_m-2)-annular Dehn filling to (R_m-2^+, R_m-2^-) transforms the (c_m, 1+c_m c_m-1,0)-rod R^(2) into a (c_m + (1+c_m c_m-1)c_m-2, 1+c_m c_m-1,0)-rod R^(3). See the third and fourth pictures of <ref> for an example. Now observe that the ratio of the rod parameters satisfies
(c_m + (1+c_m c_m-1)c_m-2)/(1+c_m c_m-1) = c_m-2 + c_m/(1+c_m c_m-1) = c_m-2 + 1/(c_m-1 + 1/c_m).
Continuing in this way, we apply (1/c_m-3)-annular Dehn filling, (1/c_m-4)-annular Dehn filling, and so on, until we finally apply (1/c_1)-annular Dehn filling. Each successive annular Dehn filling prepends a term to the continued fraction expansion for the ratio of the rod parameters. Hence, the final rod R^(m) has direction vector (p,q,0), where p/q = [c_1; c_2, …, c_m].
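The bookkeeping in this proof amounts to a simple alternating recurrence on the rod parameters. The following Python sketch (ours; the function names are our own) reproduces it and checks the outcome against the continued fraction value, recovering (5,3) as in the example below.

```python
from fractions import Fraction

def nested_filling_direction(cf_terms):
    """Track the (p, q) parameters of the core rod under the (1/c_i)-annular
    Dehn fillings, applied from i = m down to i = 1."""
    m = len(cf_terms)
    # For m odd the core rod starts as a (0,1,0)-rod, for m even as a (1,0,0)-rod.
    p, q = (0, 1) if m % 2 == 1 else (1, 0)
    update_p = (m % 2 == 1)   # the innermost filling changes p exactly when m is odd
    for c in reversed(cf_terms):
        if update_p:
            p += c * q
        else:
            q += c * p
        update_p = not update_p
    return p, q

def cf_value(terms):
    value = Fraction(terms[-1])
    for c in reversed(terms[:-1]):
        value = c + 1 / value
    return value

print(nested_filling_direction([1, 1, 2]))      # (5, 3)
print(cf_value([1, 1, 2]) == Fraction(5, 3))    # True
```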
Note that <ref> holds for any continued fraction expansion of p/q, without any restriction on the signs of the terms.
Let p and q be integers with (p,q) = 1. Suppose that [c_1; c_2, …, c_m] is a continued fraction expansion of p/q. Define (p,q)-nested annular Dehn filling to be the process of performing the sequence of (1/c_i)-annular Dehn fillings from i = m to i=1 on the rod R_x or R_y, as described in <ref>. The rod R_x or R_y is called the core rod of the nested annular Dehn filling. The rods R_i^+ and R_i^- for i = 1, 2, …, m are called the filling rods of the nested annular Dehn filling.
For example, consider (p,q)-nested annular Dehn filling with (p,q) = (5,3), using the continued fraction expansion p/q = 5/3 = [1;1,2]. Since the number of terms is odd, we start with a (0,1,0)-rod R_y sandwiched along the xy-plane by nested pairs of rods with order (R_1^+, R_2^-, R_3^+), as shown in the top-left picture of <ref>. After applying (1/2)-annular Dehn filling to the pair of innermost red rods (R_3^+, R_3^-), we obtain the rod complement shown in the top-right picture of <ref>. Then after applying (1/1)-annular Dehn filling to the pair of green rods (R_2^+, R_2^-), we obtain the rod complement shown in the bottom-left picture of <ref>. Finally, after applying a (1/1)-annular Dehn filling to the outermost pair of red rods (R_1^+, R_1^-), we obtain the rod complement shown in the bottom-right picture of <ref>. The result is a single rod with direction vector (5,3,0).
Any rod that does not intersect the annulus used in annular Dehn filling is unaffected by the filling. In particular, such rods maintain their direction vectors. This straightforward observation is crucial for our use of annular Dehn fillings below.
§ ASYMPTOTICALLY SHARP VOLUME BOUNDS
With nested annular Dehn filling introduced in the last section, we can now proceed to show some asymptotically sharp volume bounds for a family of rod complements.
Let R_1, R_2, …, R_n be disjoint rods in ^3 with n ≥ 3. Suppose that R_n has direction vector (0, 0, 1) while each of the other rods R_i has direction vector of the form (p_i, q_i, 0). If any two neighbouring rods, ordered by z-coordinate, are not parallel, then the rod complement ^3 ∖ (R_1 ∪ R_2 ∪⋯∪ R_n) is hyperbolic.
The direction vectors of rods R_1, R_2, R_n are linearly independent, since R_1 and R_2 are not parallel, and R_n is orthogonal to the plane spanned by the direction vectors of R_1 and R_2. Since no two neighbouring rods are parallel, each pair of disjoint parallel rods are not linearly isotopic in the complement of the other rods. Thus, the result follows from <ref>.
A standard rod complement is the complement of a finite number of rods in ^3, each with direction vector (1,0,0), (0,1,0) or (0,0,1).
A standard parent manifold of a rod complement ^3 ∖ (R_1 ∪ R_2 ∪⋯∪ R_n) is a standard rod complement from which ^3 ∖ (R_1 ∪ R_2 ∪⋯∪ R_n) can be obtained after a finite sequence of Dehn fillings.
Let R_1, R_2, …, R_n be disjoint rods in ^3 with n ≥ 3. Suppose that R_n has direction vector (0, 0, 1) while each of the other rods R_i has direction vector of the form (p_i, q_i, 0). Suppose that p_i/q_i has a continued fraction expansion with m_i terms. Let E denote the number of (p_i,q_i,0)-rods with even m_i and let O denote the number of (p_i,q_i,0)-rods with odd m_i. Then there exists a standard rod complement M with E (1,0,0)-core rods and O (0,1,0)-core rods together with 2∑_i=1^n-1 m_i filling rods such that ^3 ∖ (R_1 ∪ R_2 ∪⋯∪ R_n) can be obtained by applying (p_i, q_i)-nested annular Dehn filling to the core rods of M for i = 1, 2, …, n-1.
For i = 1, 2, …, n-1, since (p_i, q_i) = 1, <ref> and <ref> ensure that the (p_i, q_i, 0)-rod R_i can be obtained by applying a (p_i, q_i)-nested annular Dehn filling to one of the E+O core rods. The 2m_i filling rods sandwiching the core rod will be removed in the process of Dehn filling. Observe that a (p_i, q_i)-nested annular Dehn filling does not affect the isotopy classes of rods disjoint from the associated annuli. Hence, after applying n-1 nested annular Dehn fillings on the E+O = n-1 core rods, we obtain a 3-manifold homeomorphic to ^3 ∖ (R_1 ∪ R_2 ∪⋯∪ R_n).
<ref> provides an explicit procedure to obtain a standard parent manifold of a rod complement with the particular form for which the result applies. The manifold M in <ref> is a standard parent manifold of ^3 ∖ (R_1 ∪ R_2 ∪⋯∪ R_n). Note that for each sandwich of a nested annular Dehn filling, the outermost pair of filling rods are (1,0,0)-rods. Between each pair of adjacent (possibly the same) sandwiches, the bottom filling rod of the top sandwich is linearly isotopic to the top filling rod of the bottom sandwich, so there is a natural choice of essential plane annulus between these two filling rods. To obtain a hyperbolic standard parent manifold, we cut along any such essential plane annuli in M. An example of a hyperbolic standard parent manifold is shown in <ref>.
Consider a standard parent manifold M with exactly one (0,0,1)-rod and m≥ 2 additional rods, which alternate between (1,0,0)-rods and (0,1,0)-rods. Then M is hyperbolic and can be decomposed into m regular ideal octahedra. Thus, its volume is vol(M) = m v_oct, where v_oct ≈ 3.66386 is the volume of the regular ideal octahedron.
The fact that M is hyperbolic follows from <ref>. Alternatively, one can construct the hyperbolic structure directly as follows. Cut M along an xz-plane torus, a yz-plane torus, and all xy-plane tori that contain (1,0,0)-rods or (0,1,0)-rods. We obtain m three-dimensional balls, each with six arcs removed from the boundary. By shrinking these arcs, one obtains m ideal octahedra; see <ref>.
We can assign a complete hyperbolic metric on M by setting each ideal octahedron to be regular. Such a polyhedron has dihedral angles equal to π/2. The gluing of the octahedra identifies four such dihedral angles around each edge and tiles each cusp by Euclidean squares, so one obtains a complete hyperbolic structure; see <cit.>. The volume of M is then m v_oct, the sum of the volumes of the octahedra.
Let M be a hyperbolic standard parent manifold. The fundamental region of the torus cusp boundary corresponding to each filling rod of M is a Euclidean rectangle formed by gluing two squares corresponding to cusp neighbourhoods of ideal vertices of octahedra. The meridian forms one of the sides of the rectangle, running along one edge of each square. The longitude forms the other side of the rectangle, running along an edge of one of the squares. Finally, there exists a choice of horoball neighbourhoods with disjoint interiors for the rod complement such that the meridian has length 2 and the longitude has length 1.
Consider how the octahedra in the proof of <ref> fit together. Since the rod complement can be decomposed into ideal octahedra, the cusps corresponding to the filling rods are tiled by Euclidean squares that are cusp neighbourhoods of the ideal vertices of the octahedra.
Note that each horizontal rod R meets exactly two octahedra: one above the xy-plane containing R, which we cut along to obtain the decomposition, and one below. The meridian μ runs once through each and can be isotoped to run through the xz- or yz-plane as in the left of <ref>. Hence, it lies on faces of the two octahedra. Thus, the meridian forms a closed curve running along one edge in each of the two squares corresponding to the two octahedra.
The longitude may be isotoped to run through a single octahedron, say the one above the xy-plane containing R, as in the left of <ref>. Thus, it forms one side of a cusp square. Finally, observe that the square is glued to itself by the identity, with one side glued to the opposite side.
The cusp is a Euclidean rectangle, comprised of two squares, with the meridian running along the long edge of the rectangle and the longitude running along the short edge.
It remains to argue that the lengths of the meridian and the longitude are 2 and 1, respectively. To do so, we show that we can choose horoballs about the cusps of M with disjoint interiors such that when we intersect with the ideal octahedra, the boundary of the intersection is a collection of squares, each with side length 1. The horoball expansion we use is the same as that appearing in <cit.> or <cit.>.
That is, each edge e of the octahedron borders two triangular faces. The midpoint of the edge e with respect to one of the triangles is the unique point on the edge e that lies on a perpendicular hyperbolic geodesic running from the opposite vertex to e; see the right of <ref>. Since our ideal octahedron is regular, the midpoints obtained from either adjacent triangle agree. When the vertices of the ideal triangle are placed at 0, 1 and ∞, the midpoint has height 1. If we place a regular ideal octahedron containing a side with vertices at 0, 1, and ∞, the midpoints of each of the edges meeting infinity also have height 1. This remains true after applying a Möbius transformation taking any vertex to infinity. Thus, we may expand horoballs about each ideal vertex to the height of the midpoints of the four edges meeting that vertex. This gives a collection of horoballs that are tangent exactly at the midpoints of edges, with disjoint interiors. The boundary of each horoball meets the octahedron in a square of side length 1. Finally, since the octahedra are glued in such a way that cusp squares glue to cusp squares with the same side lengths, this gluing must preserve this choice of horoballs. Hence, these define horoball neighbourhoods with disjoint interiors and lengths as claimed.
Let M be a hyperbolic standard parent manifold, with slope 1/n on one of the horizontal rods. Then in the horoball neighbourhood described in <ref>, the length of the slope is √(n^2+4).
The slope 1/n runs once along a meridian and n times along the longitude. In the universal cover of the cusp torus, it can be lifted to an arc with one endpoint at (0,0) and the other at (2,n). The meridian and longitude are orthogonal, with the meridian of length 2 and the longitude of length 1. Hence, length of the slope is √(n^2+2^2).
We are now ready to prove the coarse volume bound discussed in the introduction.
We now prove the refined volume bounds of <ref>.
By <ref>, the manifold M must be hyperbolic.
We construct standard parent manifolds with ideal octahedral decompositions. By <ref>, there exists a standard rod complement N with n-1 core rods and ∑_i=1^n-1 2m_i filling rods such that M can be obtained by applying a (p_i,q_i)-nested annular Dehn filling to each of the core rods of N. Observe that the outermost pair of filling rods for each nested annular Dehn filling are (1,0,0)-rods. Each of the two outermost filling rods for each nested annular Dehn filling will be linearly isotopic to an outermost filling rod for another nested annular Dehn filling. By cutting along the essential annuli arising from all of these linear isotopies, we obtain a standard parent manifold N' with exactly one (0,0,1)-rod, namely R_n, and alternating (1,0,0)-rods and (0,1,0)-rods.
By <ref>, N' has a decomposition into ∑_i=1^n-1 2m_i regular ideal octahedra and it admits a complete hyperbolic structure.
We obtain M=𝕋^3 ∖ (R_1 ∪ R_2 ∪⋯∪ R_n) by Dehn filling the standard parent manifold N'. Since Dehn filling decreases volume <cit.>, we obtain the bound
vol(M) < vol(N') = v_oct ∑_i=1^n-1 2m_i.
Furthermore, this bound is asymptotically sharp. Taking larger and larger values for the coefficients c_ij of the continued fraction expansion while fixing the lengths m_i will produce Dehn fillings of the same parent manifold whose volumes converge to that of the parent manifold.
For the lower bound, we consider the slopes of the Dehn filling. These are of the form 1/c_ij for filling components with 2 ≤ j ≤ m_i. For the outermost filling rods, the coefficient of the Dehn filling combines the 1/c_i1 from one side with -1/c_(i-1)1 from the other side, as in <ref>. Thus, the slope is 1/(c_i1-c_(i-1)1).
By <ref>, for any integer ℓ, the length of the slope 1/ℓ on a filling rod is √(ℓ^2 + 4). So under the hypotheses required for the lower bound, the minimum length slope will be at least √(6^2+4) > 2π. We may now apply a theorem of Futer, Kalfagianni and Purcell, which states that if the minimum slope length is larger than 2π, then the volume change under Dehn filling is a multiple of the volume of the unfilled manifold <cit.>. In our case, this leads to
vol(M) ≥( 1 - 4π^2/(C^2+4))^3/2 2 v_oct ∑_i=1^n-1 m_i.
The upper bound of <ref> motivates one to seek an efficient expression for such rod complements, with the complexity measured by ∑_i=1^n-1 m_i, the sum of the lengths of the continued fractions. One may simultaneously switch each (p_i,q_i,0)-rod to a (q_i,p_i,0)-rod, which may change ∑_i=1^n-1 m_i. Recall that we allow negative terms in our continued fractions, as per the discussion in <ref>. Typically, one obtains shorter continued fractions this way than if one restricts to using positive integers as terms.
We now prove <ref>: there is a family of rod complements of bounded volume for which the upper bound of <ref> grows to infinity.
For n a positive integer, let R_1^(n) be an (n,1,0)-rod, let R_2 be a (0,1,0)-rod, and let R_3 be a (0,0,1)-rod. These rods satisfy the hypotheses of the first part of <ref>. Note that the continued fraction associated to the rod R_1^(n) is n/1 = [n]. Thus, in the notation of <ref>, we have m_1 = 1 for any choice of n and we also have m_2 = 1. So the upper bound of <ref> implies that
vol( 𝕋^3 ∖ (R_1^(n)∪ R_2 ∪ R_3) ) ≤ 4 v_oct.
On the other hand, we have (p_1, q_1) = (n, 1) and (p_2, q_2) = (0, 1), so
|p_1q_2 - p_2q_1| = n, which is unbounded as n grows to infinity.
We now prove <ref>: there exists a sequence of hyperbolic rod complements, each with three rods, whose volumes grow to infinity.
Define the sequence of rational slopes
p_k/q_k = [k; k, k, …, k]  (k terms)
for k ≥ 6. For example, we have
p_6/q_6 = [6; 6, 6, 6, 6, 6] = 53353/8658,
p_7/q_7 = [7; 7, 7, 7, 7, 7, 7] = 927843/129949,
p_8/q_8 = [8; 8, 8, 8, 8, 8, 8, 8] = 18674305/2298912.
Let R_1^(k) be a (p_k, q_k, 0)-rod, let R_2 be a (0,1,0)-rod, and let R_3 be a (0,0,1)-rod. Let M_k = ^3 ∖ (R_1^(k)∪ R_2 ∪ R_3) be the associated rod complement. Using the notation of <ref>, we have m_1 = k, m_2 = 1, and C = k ≥ 6. So <ref> implies that
vol(M_k) ≥( 1 - 4π^2/(k^2+4))^3/2 2 v_oct (k+1) > ( 1 - 4π^2/(6^2+4))^3/2 2 v_oct k > 0.01091 k.
Since the right side grows to infinity with k, the volume of M_k also grows to infinity.
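The numerical constants above are easy to verify; here is a brief Python sketch of ours (v_oct denotes the volume of the regular ideal octahedron, and the convergent recurrence is used for the slopes p_k/q_k).

```python
from math import pi

v_oct = 3.66386   # volume of the regular ideal octahedron (approx.)

# Constant in the lower bound, evaluated at k = 6.
const = (1 - 4 * pi**2 / (6**2 + 4)) ** 1.5 * 2 * v_oct
print(const)      # about 0.01091, so vol(M_k) grows at least linearly in k

# The slopes p_k/q_k = [k; k, ..., k] (k terms) via the convergent recurrence.
def all_k_terms(k):
    h_prev, h = 1, k      # numerators
    g_prev, g = 0, 1      # denominators
    for _ in range(k - 1):
        h_prev, h = h, k * h + h_prev
        g_prev, g = g, k * g + g_prev
    return h, g

print(all_k_terms(6))     # (53353, 8658)
print(all_k_terms(7))     # (927843, 129949)
```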
§ FURTHER DISCUSSION
Our results on the volumes of rod complements suggest various natural questions worthy of further exploration, such as the following.
Do there exist two-sided coarse volume bounds for all rod complements in terms of the rod parameters?
By <ref>, such bounds cannot depend only on the number of rods nor on the number of intersections of the rods in a particular projection. It would be natural to wonder whether two rod complements with the same rod parameters have volumes with bounded ratio.
Does hyperbolic volume distinguish rod complements up to homeomorphism?
It would be surprising if any two rod complements with the same hyperbolic volume were necessarily homeomorphic. It is well-known that hyperbolic volume does not distinguish hyperbolic 3-manifolds in general. In particular, mutation of cusped hyperbolic 3-manifolds can change its homeomorphism class, but necessarily preserves the hyperbolicity and volume <cit.>. An example of mutation involves cutting along an essential embedded 4-punctured sphere bounding a tangle in a ball, rotating the ball via a certain involution, and then regluing. Mutation can also be performed with respect to surfaces of other topologies that possess a suitable involution. It is not immediately obvious whether rod complements contain such embedded essential surfaces along which mutation can be performed.
Does there exist a rod complement with a non-trivial mutation?
|
http://arxiv.org/abs/2409.02463v1 | 20240904062611 | Combined voltage assignments, factored lifts, and their spectra | [
"C. Dalfó",
"M. A. Fiol",
"S. Pavlíková",
"J. Širáň"
] | math.CO | [
"math.CO"
] |
Combined voltage assignments,
factored lifts, and their spectra
C. Dalfó^a, M. A. Fiol^b, S. Pavlíková^c, and J. Širáň^d
^aDepartament. de Matemàtica
Universitat de Lleida, Igualada (Barcelona), Catalonia
[email protected]
^bDepartament de Matemàtiques
Universitat Politècnica de Catalunya, Barcelona, Catalonia
Barcelona Graduate School of Mathematics
Institut de Matemàtiques de la UPC-BarcelonaTech (IMTech)
[email protected]
^c Inst. of Information Engineering, Automation, and Math., FCFT,
Slovak Technical University, Bratislava, Slovakia
[email protected]
Department of Mathematics and Descriptive Geometry, SvF
Slovak University of Technology, Bratislava, Slovak Republic
[email protected]
§ ABSTRACT
We consider lifting eigenvalues and eigenvectors of graphs to their factored lifts, derived by means of a
combined voltage assignment in a group. The latter extends the concept of (ordinary) voltage assignments known from regular coverings and corresponds to the cases of generalized covers of Potočnik and Toledo (2021) in which a group of automorphisms of a lift acts freely on its arc set. With the help of group representations and certain matrices over complex group rings associated with the graphs to be lifted, we develop a method for the determination of the complete spectra of the factored lift graphs and derive a sufficient condition for lifting eigenvectors.
Keywords : Lift graph, voltage assignment, group representation, spectrum.
MSC 2020 : 05C25, 05C50.
§ INTRODUCTION
A well-known and prolific construction of new graphs from old relies on (regular) graph covers, made popular in the past through the monograph by Gross and Tucker <cit.>. Algebraically, one starts with a `base graph' equipped with a `voltage assignment' on its arcs in a group, which gives rise to an `ordinary lift' with vertex- and edge-set being a product of the vertex- and edge-set of the base graph with the voltage group, and with incidence defined in such a way that the voltage group acts freely on the vertex-set of the lift.
Conversely, if one has a graph with a group of automorphisms acting freely on vertices, then the graph arises as a regular lift as indicated (and the base graph is simply a quotient of the given graph by its group of automorphisms in question). Notable examples of such a situation are Cayley graphs, which admit a group of automorphisms acting regularly on the vertex set, being thus ordinary lifts of one-vertex graphs with loops and/or semi-edges attached.
The versatility of ordinary lift graphs is exemplified across various research areas of graph theory, spanning fundamental problems like the degree/diameter problem to intricate theorems such as the Map Color Theorem. The advantages of covering construction lie in the fact that, in a number of important situations, the properties of the lift can be conveniently expressed in terms of the properties of a base graph and the voltage assignment.
A notable advancement in understanding lift graphs is the methodology developed by some of the authors alongside Miller and Ryan <cit.>, enabling the determination of spectrum and eigenvectors. Subsequent extensions of this method encompass its adaptation to digraphs <cit.>, its generalization to arbitrary lifts of graphs <cit.>, and its further expansion to deal with the universal adjacency matrix of such lifts <cit.>.
Our aim is to further extend these advancements by introducing the concept of a `factored lift,' which is motivated by replacing the free action of a subgroup of automorphisms on the vertex set in the description of ordinary lifts with a free action on the arc set. Such a viewpoint is a special but important case of the recently developed general approach to coverings by Potočnik and Toledo <cit.> (allowing arbitrary subgroups of automorphisms). The concept, independently introduced by Reyes, Dalfó, Fiol, and Messegué <cit.> and originally referred to as `overlift', represents a significant generalisation akin to permutation voltage lifts, with implications for broader theoretical and practical applications.
The structure of this paper is as follows. In the next section we give definitions and a formal statement of the equivalence between factored lifts and quotients by a free action of an automorphism group of a graph on its arcs.
Section <ref> is devoted to lifts of walks and their enumeration. Our main results on lifts of spectra and eigenvectors from base graphs to factored lifts are in Section <ref>. In Section <ref> we illustrate our results on two examples, followed by concluding remarks in Section <ref>.
§ COMBINED VOLTAGE ASSIGNMENTS AND FACTORED LIFTS
Let Γ be a finite graph with vertex set V=V(Γ) and arc set A=A(Γ), and let G be a finite group. Let α: A→ G be a voltage assignment on Γ in the usual sense <cit.>, that is, with α satisfying α(a^-)=α(a)^-1 for any arc a∈ A and its reverse a^-. Let ω be a function assigning to every vertex u∈ V a subgroup G_u of G. The pair (α,ω) is a combined voltage assignment on the base graph Γ, and the graph thus becomes a combined voltage graph, with voltage group G.
The factored lift Γ^(α,ω) is a graph with vertex set V^(α,ω) consisting of all pairs (u,H), where u∈ V and H∈ G/G_u = {hG_u | h∈ G}, the set of left cosets of G_u in G. The arc set A^(α, ω) of the factored lift is defined as follows. For a pair u,v∈ V of adjacent vertices in the base graph Γ, let uv denote the set of arcs of Γ emanating from u and terminating at v (not necessarily distinct from u). Let a∈uv be an arc carrying a voltage α(a)∈ G. Then, for every h∈ G, the arc a together with the element h determine a unique arc (a,h)∈ A^(α, ω) in the factored lift, emanating from the vertex (u,hG_u)∈ V^(α,ω) and terminating at the vertex (v,hα(a)G_v)∈ V^(α, ω). Equivalently, if a∈uv is an arc in the base graph Γ carrying voltage α(a)∈ G and if H∈ G/G_u and K∈ G/G_v are left cosets, then
(u,H) --(a,h)--> (v,K) for every h∈ H and each K such that hα(a)∈ K .
As a simple example, Fig. <ref> shows a combined voltage graph on the cyclic group _4 (on the left) and the resulting factored lift (the graph of an octahedron) in the centre.
If the subgroup G_u < G is trivial for every vertex u of Γ, a factored lift reduces to an ordinary lift Γ^α with vertex set V^α = {(u,h) | u∈ V, h∈ G}, in which for every arc a ∈uv of Γ and every h∈ G the pair (a,h) is an arc in the ordinary lift Γ^α from the vertex (u,h) to the vertex (v,hα(a)), see Gross and Tucker <cit.>. The way a factored lift Γ^(α,ω) arises from an ordinary lift Γ^α should now be obvious. Indeed, for a general assignment ω: u↦ G_u and for every u∈ V one identifies left G_u-orbits in the fibre {(u,h) | h∈ G} of Γ^α to form vertices (u,hG_u), but making no identification among the existing arcs. This process can be regarded as a `factorisation' induced by `local left actions' of the subgroups G_u<G for u∈ V; hence the term factored lift.
To illustrate this by the example of Figure <ref>, if all the vertices of the voltage graph are assigned the trivial group, the obtained (standard) lift graph is shown in Figure <ref> (right). The factored lift in the middle of Figure <ref> is then obtained by identifying the pairs {(v,0),(v,2)} and {(v,1),(v,3)} to a single vertex each, resulting in the graph of an octahedron.
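The construction can also be checked mechanically. The following Python sketch (ours) encodes the voltage data of this example explicitly (a loop at u of voltage 1, two arcs from u to v with voltages 0 and 1, G_u trivial and G_v of order 2 in ℤ_4) and confirms that the factored lift is a 4-regular graph on 6 vertices with 12 edges, i.e. the octahedron.

```python
# Combined voltage graph of the example: vertices u, v over G = Z_4.
n = 4
G = range(n)
subgroup = {'u': (0,), 'v': (0, 2)}
# Arcs listed with both orientations: (tail, head, voltage).
arcs = [('u', 'u', 1), ('u', 'u', 3),
        ('u', 'v', 0), ('v', 'u', 0),
        ('u', 'v', 1), ('v', 'u', 3)]

def coset(x, h):
    """Left coset h + G_x, encoded as a frozenset of residues mod 4."""
    return frozenset((h + g) % n for g in subgroup[x])

vertices = {(x, coset(x, h)) for x in subgroup for h in G}
edges = set()
for (x, y, a) in arcs:
    for h in G:
        edges.add(frozenset({(x, coset(x, h)), (y, coset(y, h + a))}))

degrees = sorted(sum(w in e for e in edges) for w in vertices)
print(len(vertices), len(edges), degrees)   # 6 12 [4, 4, 4, 4, 4, 4]
```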
The factorisation induces a graph epimorphism f: Γ^α→Γ^(α,ω), given simply by (u,h) ↦ (u,H) for H∈ G/G_u such that h∈ H (we have deliberately replaced g by h in the notation for future use). A more detailed description of the action of f is on the following diagram:
(u,h)      --(a,h)-->   (v,hα(a))
  ↓ f                       ↓ f
(u,hG_u)   --(a,h)-->   (v,hα(a)G_v)
This factorisation results in [G:G_u] vertices of the form (u,H) for left cosets H of G_u in G, each as an f-image of the |G_u| vertices (u,h) for h∈ H. Every vertex (u,H) then has valency |G_u|d_u, where d_u is the valency of u in Γ. Moreover, the group G acts on the factored lift Γ^(α,ω) by left multiplication as a subgroup of automorphisms. The action is free on the arc set A^(α,ω), and is transitive on every fibre F(u)={(u,H)∈ V^(α, ω) | H∈ G/G_u}, with G_u being the stabiliser of the vertex (u,G_u)∈ V^(α,ω) for every u∈ V. In particular, this action of G produces the base graph Γ as a G-quotient of its factored lift; formally, it gives rise to a graph isomorphism Γ≅Γ^(α, ω)/G.
We remark that combining left cosets of G with right multiplication of elements of G by α(a) in (<ref>) is essential for the algebra in the factorisation and morphisms to work.
Factored lifts turn out to be a special case of the generalised voltage assignments and lifts, introduced by Potočnik and Toledo <cit.>, where the objects assigned to arcs are also allowed to be cosets of the voltage group rather than individual elements of this group. In this generalisation, both vertices and arcs of a lift are `indexed' by subgroups of the voltage groups, subject to several technical conditions. The difference between our treatment and the (more general) one in <cit.> is the use of right cosets and left multiplication by voltages in <cit.> versus left cosets and right multiplication by voltages in (<ref>); the latter agrees with the way ordinary voltage graphs and lifts have been introduced in the monograph by Gross and Tucker <cit.>. We refer the interested reader to <cit.> for more details; here, we state a version of Theorem 4 of <cit.> that applies to our setting.
Let Γ^* be a graph and let G be a group of automorphisms of Γ^* that acts freely on the arcs of the graph. Then, on the quotient graph Γ=Γ^*/G with vertex set V and arc set A there exists a voltage assignment α: A→ G with the property that α(a^-) = α(a)^-1 and a function ω that assigns to every u∈ V a subgroup G_u of G, such that the factored lift Γ^(α, ω) is isomorphic to the original graph Γ^*.
Thus, combined lifts are, on the one hand, a generalisation of ordinary voltage lifts, dealing with a group of automorphisms acting freely on vertices, to the situation of a group of automorphisms acting freely on arcs. On the other hand, combined lifts are a special case of the general lifts of <cit.> that allow for an arbitrary action of a group of automorphisms. For completeness, there is another well-known generalisation of ordinary lifts, which are the so-called relative or permutation lifts, see Gross and Tucker <cit.> and Dalfó, Fiol, Pavlíková and Širáň <cit.>. This generalisation, however, is in terms of coverings —it extends consideration of regular coverings arising from ordinary lifts to general coverings— and does not refer to group actions.
§ COUNTING LIFTS OF WALKS
Lifts of walks are of central importance in the study of coverings. In an ordinary lift Γ^α of a base graph Γ with a voltage assignment α in a group G, every walk W in Γ starting at a vertex u lifts, for each h∈ G, to a unique walk W_h in Γ^α starting at the vertex (u,h). Here `lifting' means that the projection π: Γ^α→Γ given by erasing the group coordinate maps arcs of the walk W_h bijectively onto those of W. The situation is, however, a bit different in a factored lift Γ^(α,ω) arising from Γ by a combined voltage assignment (α,ω) in G, and this is what we now aim to explain.
By the definition of a factored lift, for every h∈ G, an arc a∈uv of Γ lifts to the arc (a,h) in Γ^(α,ω) emanating from the vertex (u,hG_u) and terminating at the vertex (v,hα(a)G_v). But for a fixed h and an arbitrary g∈ G_u, every arc of the form (a,hg) emanates from the same vertex (u,hG_u) since hgG_u=hG_u. This way, one obtains |G_u| arcs (a,hg) in the factored lift, all emanating from the vertex (u,hG_u) and projecting onto the arc a of Γ by the same projection π as above. Note, however, that their terminal vertices (v, hgα(a)G_v) may be different.
A useful way to look at lifts of arcs by combined voltage assignments is to imagine that every arc a∈uv of the base graph has been assigned a set β of |G_u| voltages of the form β(a) = gα(a) for g∈ G_u, inducing the |G_u| lifts (a,hg) of the arc a, all emanating from the same vertex (u,hG_u) but terminating at possibly distinct vertices (v,hβ(a)G_v)=(v,hgα(a)G_v).
This feature propagates when one continues to follow arcs along a walk in the base graph Γ. To see this, let W=a_1a_2… a_ℓ be a walk of length ℓ in Γ consisting of ℓ consecutive arcs, where a_j∈v_jv_j+1 for j∈ [ℓ] = {1,2,…, ℓ}. Recalling the notation ω(u)=G_u for every vertex u of Γ, let us choose ℓ elements g_j∈ω(v_j) for j∈ [ℓ] in an arbitrary way. Further, let h_1=1 and for j∈ [ℓ] we recursively define h_j+1 = h_jg_j α(a_j). Then, the walk W lifts to a walk W in Γ^(α,ω) of the form
W = (a_1,h_1g_1)(a_2,h_2g_2)… (a_j,h_jg_j) … (a_ℓ,h_ℓg_ℓ),
where, for every j∈ [ℓ], the arc (a_j,h_jg_j) of Γ^(α,ω) starts at the vertex (v_j,h_jω(v_j)) and ends at the vertex (v_j+1,h_jg_jα(a_j)ω(v_j+1)) coinciding with (v_j+1,h_j+1ω(v_j+1)) due to the definition of the sequence (h_j)_j∈ [ℓ]. It may be checked that every lift of a walk arises this way. In particular, our walk W gives rise to ∏_j∈ [ℓ]|ω(v_j)| lifts of W in Γ^(α,ω) as in (<ref>), each projecting onto W by π; their count matches the number of choices of elements g_j∈ω(v_j) for j∈ [ℓ]. Note that if ω is a trivial assignment, this recovers the unique walk-lifting property for ordinary lifts.
In the special case when the walk W in Γ as above is closed, that is, when v_1=v_ℓ+1, the lift W given by (<ref>) is a closed walk in Γ^(α,ω) if and only if ω (v_ℓ+1)=ω(v_1) and, at the same time, h_ℓ+1∈ω(v_1). But by our recursion (with h_1=1), the last condition means that
h_ℓ+1 = g_1α(a_1)g_2α(a_2)… g_ℓα(a_ℓ) ∈ω(v_1) .
Further, as g_1 already belongs to ω(v_1), it follows from (<ref>) that there is an element g̅_1∈ω(v_1) such that
g̅_1α(a_1)g_2α(a_2)… g_ℓα(a_ℓ) = e,
where e is the unit element of G. Equation (<ref>) may be usefully interpreted by recalling the modified voltages β from Remark <ref>, with values in the sets G_uα(a)={gα(a) | g∈ G_u} for arcs α emanating from u in the base graph. Namely, if one lets β(a_1)= g̅_1α(a_1) and β(a_j)= g_jα(a_j) for j∈{2,3,…,ℓ}, then the product β(W) = β(a_1)β(a_2) …β(a_ℓ), representing the total voltage (also known as `net' voltage) of the walk W under the assignment β accumulated by multiplying voltages as one moves along arcs of W in Γ, is the unit element e∈ G. Moreover, multiplying equation (<ref>) from the left by an arbitrary element g∈ G_u or a direct reference to (<ref>) gives a one-to-one correspondence between the closed walks in Γ of length ℓ rooted at v_1 that have net voltage e on the one hand, and net voltage g on the other hand. We summarize this observation for future reference.
Let Γ be a base graph equipped with a combined voltage assignment (α,ω) in a group G, and let β be the corresponding `set voltage assignment' on Γ introduced in Remark <ref>. Then, in the factored lift Γ^(α,ω), the lifts of closed base-graph walks W of length ℓ rooted at a vertex u and of net voltage β(W)=g for any particular g∈ G_u are in a one-to-one correspondence with lifts of walks W with the same parameters but with net voltage β(W)=e, the unit element of G.
Our intended study of lifts of eigenvectors and eigenvalues from a combined voltage graph to a factored lift requires introducing further notation, the origins of which come from <cit.>. Given a combined voltage graph Γ=(V,A) of order k under the assignment (α, ω) in a voltage group G we first assign to it a k× k matrix 𝐁 = 𝐁(Γ;α,ω) indexed with the set V, entries of which are elements of the complex group algebra ℂ[G] of G. For every u∈ V, we first introduce a specific element G_u^+ ∈ℂ[G] by letting G_u^+=∑_g∈ G_ug. With this in hand, for every u,v∈ V the (u,v)^ th element of 𝐁 is defined by
𝐁_u,v = ∑_a∈uvG_u^+α(a) = G_u^+∑_a∈uvα(a) ,
where, as before, uv is the set of all arcs from u to v in Γ, with 𝐁_u,v=0 if uv=∅.
The matrix 𝐁 = 𝐁(Γ;α,ω) associated with a combined voltage graph Γ by (<ref>) enables us to determine the number of closed walks of a given length and rooted at a given vertex in the factored lift Γ^(α, ω) as follows.
Assume that, for a given ℓ≥ 0, the (u,u)^ th entry of the ℓ^ th power 𝐁^ℓ of the matrix 𝐁=𝐁(Γ;α,ω) is equal to the element (𝐁^ℓ)_uu=∑_g∈ Gb_g^(ℓ)g of the group algebra ℂ[G]. Then, the number n(u,ℓ) of closed walks of length ℓ in the factored lift Γ^(α,ω), rooted at a vertex (u,G_u), is equal to
n(u,ℓ)=|G_u|· b_e^(ℓ),
where e is the identity element of G.
We begin by pointing out that, by (<ref>), every entry of the u^ th row of 𝐁 is a left multiple by the element G_u^+=∑_g∈ G_ug of the group algebra ℂ[G]. Because of this, every arc a emanating from a vertex u in Γ may be viewed as being equipped with a set of voltages β(a)= G_uα(a) = {gα(a); g∈ G_u} as stated in Remark <ref>. This set of voltages may, in turn, be identified with the element G_u^+α(a) ∈ℂ[G] constituting one term in the definition of the entry 𝐁_u,v for a specific arc a∈uv. For the rest of the argument, assume that a vertex u of Γ has been fixed.
Invoking Remark <ref> again and making use of the obvious interpretation of entries of a power of a matrix, it follows that for g∈ G the coefficient b_g^(ℓ) of the group-algebra element (^ℓ)_uu= ∑_g∈ Gb_g^(ℓ)g is equal to the number of closed walks W in Γ of length ℓ, rooted at u and of net `set voltage' β(W) = g. In particular, for g=e, the coefficient b_e^(ℓ) counts the number of closed walks W in Γ of length ℓ, rooted at u but with trivial net voltage β(W)=e. But such walks are in a one-to-one correspondence with closed walks of length ℓ in Γ^(α,β), rooted at the vertex (u,G_u), and having the form (<ref>) for v_1=u, with g_1 replaced by g̅_1 from (<ref>) to have net voltage e in the projection to Γ. Finally, by Remark <ref>, there is a one-to-one correspondence between closed walks of length ℓ rooted at (u,G_u) in the factored lift, with projections onto the base graph having net voltages respectively e and g for an arbitrary g∈ G_u. This translates to the fact that b_e^(ℓ) = b_g^(ℓ) for every g∈ G_u and so the number of closed walks of length ℓ in the factored lift, rooted at a vertex (u,G_u), is equal to |G_u|· b_e^(ℓ), as claimed.
As an illustration of Proposition <ref>, consider again the example of Figure <ref> for the cyclic voltage group G=ℤ_4=⟨ g | g^4=e⟩ with G_u={e} and G_v={e,g^2}. The matrix 𝐁 from (<ref>) associated with the combined voltage graph on the left-hand side of Figure <ref> has the form
𝐁 = [ g+g^-1  e+g ; (e+g^2)(e+g^-1)  0 ] = [ g+g^-1  e+g ; e+g+g^-1+g^2  0 ].
Taking ℓ=5, one may check that (𝐁^5)_uu=176g^3+160g^2+176g+160 and (𝐁^5)_vv=80g^3+80g^2+80g+80. By Proposition <ref>, for the number of closed walks rooted at (u,G_u) and at (v,G_v) in the factored lift one obtains n(u,5)=|G_u|· b_e^(5)=1·160=160 and n(v,5)=|G_v|· b_e^(5)=2· 80=160. The two values coincide as the factored lift is vertex-transitive.
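This computation is easy to reproduce with a few lines of Python (our sketch, not the authors' code): elements of the group algebra of ℤ_4 are stored as coefficient vectors of length 4 and multiplied by cyclic convolution.

```python
n = 4  # group Z_4, with elements e, g, g^2, g^3

def gmul(x, y):
    """Product in the group algebra of Z_4 (cyclic convolution of coefficients)."""
    z = [0] * n
    for i, xi in enumerate(x):
        for j, yj in enumerate(y):
            z[(i + j) % n] += xi * yj
    return z

def gadd(x, y):
    return [a + b for a, b in zip(x, y)]

def gsum(items):
    total = [0] * n
    for it in items:
        total = gadd(total, it)
    return total

def matmul(A, B):
    k = len(A)
    return [[gsum(gmul(A[i][t], B[t][j]) for t in range(k)) for j in range(k)]
            for i in range(k)]

e, g, g2, g3 = [1,0,0,0], [0,1,0,0], [0,0,1,0], [0,0,0,1]

B = [[gadd(g, g3), gadd(e, g)],
     [gsum([e, g, g2, g3]), [0, 0, 0, 0]]]

P = B
for _ in range(4):
    P = matmul(P, B)           # P = B^5
print(P[0][0], P[1][1])        # [160, 176, 160, 176] and [80, 80, 80, 80]
# Walk counts: n(u,5) = 1*160 = 160 and n(v,5) = 2*80 = 160.
```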
§ VOLTAGE GROUP REPRESENTATIONS, LIFTS OF SPECTRA
To explain connections between factored lifts and representations of voltage groups, let Γ=(V,A) be a base graph of order k with a combined voltage assignment (α,ω) in a group G. Let ρ be a complex irreducible representation of G in ℂ^d of dimension d=d(ρ). Recalling the matrix 𝐁=𝐁(Γ; α, ω) defined by (<ref>) in the previous section, we now link this matrix with the representation ρ by introducing a dk× dk complex block matrix 𝐁(ρ). For every ordered pair (u,v) of vertices of V the (u,v)^ th block entry of 𝐁(ρ) is defined to be the d× d matrix
𝐁_u,v(ρ) = ∑_a∈uv ∑_h∈ G_u ρ(hα(a)),
where the sum of matrices is defined in the usual way; the block entry _u,v(ρ) is the all-zero d× d matrix if uv=∅. (We will assume throughout that the indexation within the d× d blocks of _u,v(ρ) by the set {1,2,…,d} is the same across all the k^2 blocks of this kind, which themselves are indexed by pairs of elements of V.)
In order to simplify the forthcoming calculations, for G_u and its ℂ[G]-variant G_u^+ = ∑ _g∈ G_u g introduced in the previous section, we let
ρ(G_u) = ρ(G_u^+) = ∑_h∈ G_uρ(h) .
Combined with the fact that ρ is a group homomorphism, the notation of (<ref>) enables one to rewrite the defining equation (<ref>) in the form
𝐁_u,v(ρ) = ρ(G_u) ∑_a∈uvρ(α(a)),
which can be advantageously interpreted by saying that the u^ th row of the matrix 𝐁(ρ) is a left multiple by the d× d factor ρ(G_u) which is `constant' for any fixed u∈ V.
In fact, when ρ_0 is the trivial representation, then 𝐁(ρ_0) is a quotient matrix of a regular (or equitable) partition of the factored lift graph, where cells correspond to fibres F(u)={(u,H)∈ V^(α, ω) | H∈ G/G_u} introduced in the previous section; see the example in Figure <ref>. In particular, the largest eigenvalue of 𝐁(ρ_0) corresponds to the spectral radius of the factored lift.
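For the running ℤ_4 example, the quotient matrix and its spectrum are quickly checked numerically; the sketch below is ours (numpy assumed), using that the irreducible representations of ℤ_4 are the one-dimensional maps ρ_r(g) = i^r.

```python
import numpy as np

def B_rho(r):
    """B(rho_r) for the two-vertex example: loop at u of voltage g,
    arcs u -> v with voltages e and g, G_u trivial, G_v = {e, g^2}."""
    w = 1j ** r                      # rho_r(g)
    rho = lambda k: w ** k           # rho_r(g^k)
    Buu = rho(1) + rho(-1)
    Buv = rho(0) + rho(1)
    Bvu = (rho(0) + rho(2)) * (rho(0) + rho(-1))
    return np.array([[Buu, Buv], [Bvu, 0]])

print(np.round(np.linalg.eigvals(B_rho(0)), 6))   # quotient matrix [[2,2],[4,0]]: eigenvalues 4 and -2
for r in range(4):
    print(r, np.round(np.linalg.eigvals(B_rho(r)), 6))
# Collecting all eigenvalues and discarding two zeros leaves {4, 0, 0, 0, -2, -2},
# the spectrum of the octahedron, consistent with the spectral result proved below.
```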
To work with group representations, let Irr(G) be a complete set of irreducible representations of a finite group G. Let H<G be an arbitrary subgroup of G and, for any ρ∈ Irr(G) with dimension d(ρ), let ρ(H) = ∑_h∈ Hρ(h). That is, ρ(H) is the sum of d(ρ)-dimensional complex matrices ρ(h) taken over all elements h∈ H. We note that, in general, ρ(H) may be the zero-matrix, although, of course, all the matrices ρ(h) for h∈ H are non-singular. We make use of the following result of <cit.> obtained earlier by the authors of the present paper.
For every group G and every subgroup H<G of index n=[G:H] one has
∑_ρ∈ Irr(G) d(ρ)· rank(ρ(H)) = n.
We also need some preparation for working with column vectors of dimension a multiple of d, say, dℓ for some ℓ≥ 1. A complex vector f of dimension dℓ will be represented in the form f^⊤=(f_1,f_2, …,f_ℓ)^⊤, where, for t∈ [ℓ] = {1,2,…,ℓ}, each f_t is a d-dimensional column vector called a d-segment of f. For such a vector f of dimension dℓ and for every j∈ [d]={1,2,…, d}, the j-section of f will be the ℓ-dimensional column vector f_[j] given by f_[j]^⊤= (f_1,j, f_2,j, …,f_ℓ,j)^⊤, where f_t,j is the j-th coordinate of f_t for each t∈ [ℓ].
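In matrix terms, passing from d-segments to j-sections is just a reshape followed by a transpose; a two-line numpy illustration of ours:

```python
import numpy as np

d, ell = 2, 3
f = np.arange(d * ell)            # a vector of dimension d*ell = 6
segments = f.reshape(ell, d)      # row t is the d-segment f_t
sections = segments.T             # row j is the j-section f_[j]
print(segments)                   # [[0 1] [2 3] [4 5]]
print(sections[0])                # first section: [0 2 4]
```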
Suppose now that a dk-dimensional complex column vector f is an eigenvector of our dk× dk matrix 𝐁(ρ) for some complex eigenvalue λ. Assuming consistent indexation of d× d blocks of 𝐁(ρ) and d-segments of f by the vertex set V, for every v∈ V the v^ th d-segment of f will be denoted f(v).
It remains to introduce a condition for eigenvectors, which we refer to in what follows. We say that, for a d-dimensional representation ρ of G, an eigenvector f for some eigenvalue of the dk× dk dimensional matrix 𝐁(ρ) satisfies the condition (C) if the following is fulfilled:
(C)   ρ(h) f(v) = c_v   for every v∈ V and every h∈ G_v,
where c_v is a d-dimensional column vector that may depend on v but is constant over the elements h∈ G_v. The important fact is that the condition (C) implies that the dk^ω-dimensional vector f^+, with k^ω d-segments indexed by k^ω ordered pairs (v,hG_v) for v∈ V and hG_v∈ G/G_v, given by f^+(v,hG_v) = ρ(h) f(v), is well defined and does not depend on representatives of cosets in G/G_v.
We are now ready to state and prove our first result, linking eigenvectors and eigenvalues of a factored lift with representations of its voltage group.
Let Γ be a graph on a set V of k vertices, with a combined voltage assignment (α,ω) in a finite group G that assigns a subgroup G_v of G to every vertex v∈ V, and let k^ω = ∑_v∈ V[G:G_v] be the number of vertices of the factored lift Γ^(α,ω). Let ρ be a complex irreducible representation of G of dimension d≥ 1 and let 𝐁(ρ) be the associated complex dk× dk matrix defined by (<ref>). Further, let f be a dk-dimensional eigenvector of 𝐁(ρ) for some complex eigenvalue λ of 𝐁(ρ), with d-segments f(v) for v∈ V, which fulfils the condition (C). Then:
(i)
The dk^ω-dimensional vector f^+ with k^ω d-segments indexed by k^ω ordered pairs (v,hG_v) for v∈ V and hG_v∈ G/G_v, given by f^+(v,hG_v) = ρ(h) f(v), is well defined and does not depend on representatives of cosets in G/G_v.
(ii)
For every j∈ [d] the j-section f^+_[j] of f^+ is a k^ω-dimensional eigenvector of the factored lift Γ^(α,ω) for the same eigenvalue λ as above.
(iii)
Let S be the system of k^ω linear equations of the form c ρ(h) f(v) = 0 for v∈ V and hG_v∈ G/G_v for an unknown row vector c=(c_1,c_2,…,c_d) of dimension d. The set of j-sections {f^+_[j] | j∈ [d]} is linearly independent if and only if the system S has only a trivial solution c = 0. In particular, this is satisfied if V contains a subset U of d vertices such that the d-segments f(u) for u∈ U are linearly independent.
Part (i) is a consequence of the condition (C), and so we move on to (ii). Our assumption that f is a column eigenvector of 𝐁(ρ) for an eigenvalue λ is, with the help of (<ref>), equivalent to stating that the d-segments f(v) of f for v∈ V satisfy
λ f(u) = ∑_v∼ u 𝐁(ρ)_u,v f(v) = ∑_v∼ u ρ(G_u) ∑_a∈uvρ(α(a)) f(v)
for every vertex u∈ V. We also assume that f satisfies our assumption (C) as stated in (<ref>). For such a λ and f we introduce a new column vector f^+ of dimension dk^ω whose d-segments, indexed by the k^ω pairs (v,gG_v)∈ V^(α,ω), are defined by
f^+(v,hG_v)= ρ(h) f(v) for every v∈ V and every h∈ G ;
the fact that f^+ is well defined is a direct consequence of the assumption (C). Multiplying (<ref>) by the d× d matrix ρ(g) from the left and using (<ref>) with (<ref>) gives
λρ(g) f(u) = ∑_v∼ u ∑_a∈uv ∑_h∈ gG_u ρ(h)ρ(α(a)) f(v) .
Using now the equations (<ref>), one sees that (<ref>) is equivalent to the statement that
λ^+(u,gG_u) = ∑_v∼ u∑_a∈uv∑_h∈ gG_u^+(v,hα(a)G_v) .
The important conclusion that follows from (<ref>) is that, for every j∈ [d], the j-section ^+_[j] of ^+ is a k^ω-dimensional vector satisfying the equation
λ^+_[j](u,gG_u) = ∑_v∼ u∑_a∈uv∑_h∈ gG_u^+_[j](v,hα(a)G_v) .
But (<ref>) says precisely that, for every j∈ [d], the j-section ^+_[j] is an eigenvector of the factored lift ^(α,ω) which belongs to the same eigenvalue λ we started with.
For part (iii), the j-sections ^+_[j] for j∈ [d] form a linearly independent set if and only if the linear combination c_1^+_[1] + c_2^+_[2] + ⋯ + c_d^+_[d] results in a k^ω-dimensional zero vector only in the trivial case, that is, when c = (c_1,c_2,…,c_d) is a zero row vector. This must hold for every d-segment of the j-sections, but since by (<ref>) the (u,gG_u)-th coordinate of ^+_[j] is the j-th coordinate (ρ(g)(u))_j of the d-segment ρ(g)(u), the above linear combination equates to a zero vector if and only if the following system of k^ω equations (for every vertex (u,gG_u) of the factored lift)
c_1(ρ(g)(u))_1 + c_2(ρ(g)(u))_2 + ⋯ + c_d(ρ(g)(u))_d = 0
has only the trivial solution, namely, the zero row vector c. But the system (<ref>) may simply be rewritten in the form c·ρ(g)·(u) = 0. This implies the validity of the statement (iii), including the particular observation about a subset U of d linearly independent d-segments (u) for u∈ U.
If one is interested only in calculating the spectrum sp(^(α,ω)) of a factored lift, it turns out that it is sufficient to consider the spectra sp((ρ)) of the complex matrices (ρ), taken over a complete set of irreducible complex representations ρ of the voltage group G, as our next result shows.
Let (α,ω) be a combined voltage assignment on a graph =(V,A) with k vertices in a group G and let n_u=|G:G_u| for every u∈ V, and with the order of the factored lift ^(α,ω) equal to k^ω= ∑_u∈ Vn_u. Let G have order n, with ν conjugacy classes, and let {ρ_r : r=0,1,…,ν-1} be a complete set of complex irreducible representations of G, of dimensions d(ρ_r)=d_r, so that ∑_r=0^ν-1d_r^2=n. Let B be the multiset of eigenvalues
B = ⋃_r=0^ν-1 d_r · sp((ρ_r)).
of cardinality ∑_r=0^ν-1kd_r^2=kn. Then, the following statements hold:
(i)
The multiset B contains at most k^ω=∑_u∈ Vn_u non-zero eigenvalues.
(ii)
The spectrum of the factored lift ^(α,ω) is the multiset B∖ Z, where Z is a multiset containing kn-k^ω zeros.
For part (i), let u∈ V be a vertex with voltage subgroup ω(u)=G_u<G. Then, for a given irreducible representation ρ_r of dimension d_r, the u^ th block-row of (ρ_r), which is a d_r× d_rk matrix denoted (ρ_r)_u in what follows, is a multiple of the matrix ρ_r(G_u)=∑_h∈ G_uρ_r(h) by the equation (<ref>). Thus, rank((ρ_r)_u)≤rank(ρ_r(G_u)), and hence the number of non-zero eigenvalues of the entire d_rk× d_rk matrix (ρ_r), which is bounded by its rank, does not exceed ∑_u∈ Vrank(ρ_r(G_u)). With the help of Proposition <ref>, this gives at most
∑_r=0^ν-1d_r∑_u∈ Vrank(ρ_r(G_u)) =
∑_u∈ V∑_r=0^ν-1d_r rank(ρ_r(G_u)) =
∑_u∈ V[G:G_u]=k^ω
non-zero eigenvalues in the multiset B, as claimed.
To establish (ii), we use the irreducible characters χ_r associated with each irreducible representation of G. Let be the adjacency matrix of ^(α,ω), with entries a_(u,G_u)(v,G_v) for u,v∈ V. It is well known that the total number of rooted closed walks of length ℓ in the factored lift is equal to the trace of the ℓ^ th power ^ℓ of , with elements a^(ℓ)_(u,G_u)(v,G_v). But the same trace is also equal to the sum of the ℓ^ th powers of the eigenvalues (counted with multiplicities) of . This implies that
tr(^ℓ) =∑_u∈ Vn_u a_(u,G_u)(u,G_u)^(ℓ) =∑_λ∈ sp(^(α,ω))λ^ℓ,
where the middle part of (<ref>) is a consequence of the fact that left multiplication by elements of G induces automorphisms of the factored lift that act transitively on its fibres.
By Proposition <ref>, the u^ th diagonal entry of ^ℓ is a |G_u|-multiple of the coefficient b_e^(ℓ) at the `identity term' e of the ℂ[G] group-ring element (^ℓ)_uu = ∑_g∈ Gb_g^(ℓ)g. On the other hand, in <cit.>, it was proved that, in terms of characters, the same
`identity' coefficient b_e^(ℓ) admits the following evaluation:
b_e^(ℓ)=1/n∑_r=0^ν-1d_rχ_r((^ℓ)_uu).
Combining (<ref>) with the aforementioned result of Proposition <ref> gives
a_(u,G_u)(u,G_u)^(ℓ)=|G_u|b_e^(ℓ)=1/n_u∑_r=0^ν-1d_rχ_r((^ℓ)_uu).
Consequently, putting the pieces together and using μ as a variable for eigenvalues of the matrices (ρ_r), one obtains
∑_λ∈ sp(^(α,ω))λ^ℓ = tr(^ℓ)
= ∑_u∈ Vn_u a_(u,G_u)(u,G_u)^(ℓ)
=∑_u∈ V∑_r=0^ν-1 d_rχ_r((^ℓ)_uu)
=∑_r=0^ν-1d_r∑_u∈ V tr(((ρ_r)^ℓ)_uu)
=∑_r=0^ν-1d_r∑_μ∈ sp((ρ_r))μ^ℓ,
where the leftmost sum contains k^ω=∑_u∈ Vn_u terms, as opposed to the formally nk terms of the rightmost sum, but by part (i) one is free to remove kn-k^ω zeros from the last sum without affecting the equality. Hence, since such `adjusted equalities' hold for every ℓ=0,1,…,k^ω-1, the multisets of eigenvalues of the factored lift ^(α,ω) and those in B must coincide up to a multiset Z of kn-k^ω zeros (see, for instance, Gould <cit.>). This completes the proof.
§ ILLUSTRATION EXAMPLES
Consider the graph ^(α,ω) shown on the left-hand side of Figure <ref>.
The partition shown in the centre of Figure <ref> indicates that the graph is a factored lift induced by an action of a dihedral group D_3=⟨ a,b | a^3= b^2- (ab)^2=e⟩ of order 6, and the corresponding combined voltage graph Γ is shown on the right-hand side of the same figure. The matrix =(), introduced in general by (<ref>), is in this case the 3× 3 matrix shown in (<ref>):
=(
[ 0 e 0; e a+a^-1+b e; 0 e 0 ]).
A complete set of irreducible complex representations of the group D_3 presented above consists of the trivial and alternating 1-dimensional representations ρ_0 and ρ_1 together with a 2-dimensional representation ρ_2 with ζ a primitive complex 3-rd root of 1; they are all displayed in Table <ref>. To save space, we use the symbols I, Dia(x,y) and Off(x,y) for the identity matrix, a diagonal matrix with entries x,y (from top left to bottom right), and an off-diagonal matrix with entries x,y (from top right to bottom left), all of dimension 2.
In this example one has ω(u)=G_u=D_3, ω(v)=G_v={e} and ω(w)=G_w=⟨ a⟩, so ρ_0(G_u)=6 and ρ_1(G_u)=0, ρ_0(G_v)=ρ_1(G_v)=1, and ρ_0(G_w)=ρ_1(G_w)=3, while, for example, (ρ_1)_u,v=0 and (ρ_1)_v,v = ρ_1(G_v)(ρ_1(a)+ρ_1(a^-1)+ρ_1(b))=1. The 3× 3 matrices (ρ_0) and (ρ_1) are thus given by
(ρ_0)=(
[ 0 6 0; 1 3 1; 0 3 0 ]), (ρ_1)=(
[ 0 0 0; 1 1 1; 0 3 0 ]).
The entries of (ρ_2) are determined similarly, bearing in mind that this time they are 2× 2 blocks; for example, (ρ_2)_u,v is the sum of all the six matrices appearing in the last row of Table <ref>. An evaluation of (ρ_2)_u,v gives
(ρ_2)=(
[ 0 0 ∑_i=0^2 ζ^i ∑_i=0^2 ζ^i 0 0; 0 0 ∑_i=0^2 ζ^-i ∑_i=0^2 ζ^-i 0 0; 1 0 ζ+ζ^-1 1 1 0; 0 1 1 ζ+ζ^-1 0 1; 0 0 ∑_i=0^2 ζ^i 0 0 0; 0 0 0 ∑_i=0^2 ζ^-i 0 0; ]).
Eigenvalues and eigenvectors of (ρ_r) for r=0,1,2 are listed in Table <ref>, taking into the account that the non-zero eigenvalues of (ρ_0) and (ρ_1) are, respectively, μ^(0)_1,2= 3(1±√(5))/2 and μ^(1)_1,2= (1±√(13))/2.
Observe that for r=0,1, except for the eigenvalues 0 marked by an additional star (0^*), the eigenvectors corresponding to all the remaining eigenvalues of (ρ_0) and (ρ_1) satisfy the condition (C) in (<ref>) for d=1, since ρ_0|_G_u=ρ_0|_G_w=ρ_1|_G_w=1. This, however, does not hold for the eigenvector of (ρ_1) that belongs to the starred eigenvalue 0^* because (u)≠ 0 and ρ_1|_G_u≠1 (that is, ρ_1(h)(u) is not constant for h∈ G_u).
Something similar happens for the eigenvectors of (ρ_2) with the marked zero eigenvalues (0^*), as neither ρ_2|_G_u nor ρ_2|_G_w is a non-zero constant. On the other hand, the first and last eigenvectors of (ρ_2), that is, those corresponding to the eigenvalues 0 and -2, do satisfy the condition since, in both cases, (u)=(w)=(0,0)^⊤. By Theorem <ref>, these eigenvalues have multiplicity 2 in the factored lift graph.
To see in more detail what happens in the case of the eigenvalue -2 for (ρ_2), the 2-dimensional vectors constituting the corresponding 6-dimensional eigenvector are (u)=(0,0)^⊤, (v)=(-1,1)^⊤, and (w)=(0,0)^⊤; here we use transposes to save space. Following Theorem <ref>, to construct the eigenvectors _1^+ and _2^+ of the factored lift, one subsequently calculates
^+(u,G_u) = ^+(w,G_w) = (0,0)^⊤ ,
^+(v,eG_v) = I(-1,1)^⊤ = (-1,1)^⊤ ,
^+(v,aG_v) = Dia(ζ,ζ^-1)(-1,1)^⊤ = (-ζ,ζ^-1)^⊤ ,
^+(v,a^2G_v) = Dia(ζ^2,ζ^-2)(-1,1)^⊤ = (-ζ^2,ζ^-2)^⊤ ,
^+(v,bG_v) = Off(1,1)(-1,1)^⊤ = (1,-1)^⊤ ,
^+(v,abG_v) = Off(ζ,ζ^-1)(-1,1)^⊤ = (ζ,-ζ^-1)^⊤ ,
^+(v,a^2bG_v) = Off(ζ^2,ζ^-2)(-1,1)^⊤ = (ζ^2,-ζ^-2)^⊤ .
Then, taking the first or the second entries, we obtain _1^+ and _2^+, respectively:
_1^+ =(0,-1,-ζ,-ζ^2,1,ζ,ζ^2,0,0)^⊤;
_2^+ =(0,1,ζ^-1,ζ^-2,-1,-ζ^-1,-ζ^-2,0,0)^⊤,
which can be checked to be eigenvectors for the eigenvalue -2 of the factored lift ^(α,ω) of Figure <ref>. In summary, the spectrum of the factored lift ^(α,ω) of Figure <ref> is
sp(^(α,ω))={ 3(1+√(5))/2, (1+√(13))/2, 0^[3], (1-√(13))/2, 3(1-√(5))/2, -2^[2] } .
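As a numerical cross-check of Theorem <ref> on this example, the following numpy sketch (ours; the matrices simply transcribe the three matrices displayed above) assembles the multiset B with each spectrum counted d_r times and keeps the k^ω = 9 eigenvalues of largest modulus:

import numpy as np

zeta = np.exp(2j * np.pi / 3)
s = 1 + zeta + zeta**2                      # sum of the three powers of zeta (= 0)
w = zeta + zeta**(-1)                       # = -1

B0 = np.array([[0, 6, 0], [1, 3, 1], [0, 3, 0]], dtype=complex)
B1 = np.array([[0, 0, 0], [1, 1, 1], [0, 3, 0]], dtype=complex)
B2 = np.array([[0, 0, s, s, 0, 0],
               [0, 0, np.conj(s), np.conj(s), 0, 0],
               [1, 0, w, 1, 1, 0],
               [0, 1, 1, w, 0, 1],
               [0, 0, s, 0, 0, 0],
               [0, 0, 0, np.conj(s), 0, 0]], dtype=complex)

# multiset B: spectrum of each matrix repeated d_r times; then keep the k^omega = 9
# eigenvalues of largest modulus (the kn - k^omega = 9 discarded entries are zeros)
eigs = np.concatenate([np.repeat(np.linalg.eigvals(B), d)
                       for B, d in zip([B0, B1, B2], [1, 1, 2])])
spectrum = eigs[np.argsort(-np.abs(eigs))][:9]
print(np.round(spectrum.real, 3))
# approx: 4.854, 2.303, -2, -2, -1.854, -1.303, 0, 0, 0, matching the spectrum above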
Consider the graph ^(α,ω) shown on the left-hand side of Figure <ref>.
The corresponding matrix is the 2× 2 matrix shown in (<ref>), where Σ(G_u)=e+b.
=(
[ (Σ(G_u))(a+a^-1) Σ(G_u); e a+a^-1 ]).
The matrices (ρ_i) for i=0,1,2 are shown in (<ref>) and (<ref>):
(ρ_0)=(
[ 4 2; 1 2; ]), (ρ_1)=(
[ 0 0; 1 2; ]),
(ρ_2)=(
[ ζ+ζ^-1 ζ+ζ^-1 1 1; ζ+ζ^-1 ζ+ζ^-1 1 1; 1 0 ζ+ζ^-1 0; 0 1 0 ζ+ζ^-1 ])=(
[ -1 -1 1 1; -1 -1 1 1; 1 0 -1 0; 0 1 0 -1 ]).
The corresponding eigenvalues and eigenvectors are listed in Table
<ref>.
The eigenvector of (ρ_1) not satisfying the condition (C) corresponds again to the eigenvalue 0 with an asterisk. But observe that although the eigenvectors of (ρ_2) corresponding to the eigenvalues 0^+ and 0^++ do not satisfy this condition, their sum, that is, the vector ^*=(1,1,1,1)^⊤ does! This is because ρ_2(h)^*(u) and ρ_2(h)^*(v) are constant for every h∈ gG_u={g,gb}. Then, by Theorem <ref>, the sum eigenvector ^* produces two eigenvectors in the factored lift that correspond to the eigenvalue 0. The same reasoning applies to the eigenvector of the eigenvalue -3 so that it also satisfies the condition (<ref>). This example was chosen to illustrate the interesting feature that may happen: A sum of two eigenvectors for the eigenvalue 0 may satisfy the condition (C) even if neither of the two vectors does.
Finally, the spectrum of the factored lift ^(α,ω) is
sp(^(α,ω))={ 3+√(3), 2, 3-√(3), 0^[2], -1^[2], -3^[2] }.
To show that the eigenvectors of the factored lift Γ^(α,ω) of Figure <ref>
are linearly independent, we apply part (iii) of Theorem <ref>. Indeed, such eigenvectors are obtained from the matrix product , where is a 12× 12 matrix with block form
= (_0 | _1 | _2) = (
[ _0,1 Ø _1,1 Ø _2,1 Ø _2,2 Ø; Ø _0,1 Ø _1,1 Ø _2,1 Ø _2,2 ]),
where
_0,1 =(ρ_0(e),ρ_0(a),ρ_0(a^2),ρ_0(b),ρ_0(ab),ρ_0(a^2b))^⊤
=(1,1,1,1,1,1)^⊤,
_1,1 =(ρ_1(e),ρ_1(a),ρ_1(a^2),…,ρ_1(a^2b))^⊤=(1,1,1,-1,-1,-1)^⊤,
_2,1 =(ρ_2(e)_1,ρ_2(a)_1,ρ_2(a^2)_1,…,ρ_2(a^2b)_1)^⊤
=
(
[ 1 z z^2 0 0 0; 0 0 0 1 z z^2 ])^⊤,
_2,2 =(ρ_2(e)_2,ρ_2(a)_2,ρ_2(a^2)_2,…,ρ_2(a^2b)_2)^⊤
=
(
[ 0 0 0 1 z^-1 z^-2; 1 z^-1 z^-2 0 0 0 ])^⊤.
Note that, as required in the proof, _2,1 and _2,2 are formed, respectively, out of the first and second rows of the matrices ρ_2(e), ρ_2(a), ρ_2(a^2), and so on. The matrix in this case is a 12× 12 matrix with block form = (_0,_1,_2), where _0 = _0, _1=_1, and _2 = (_2,_2), so that
=(
[ _0 Ø Ø Ø; Ø _1 Ø Ø; Ø Ø _2 Ø; Ø Ø Ø _2 ]).
Here, one needs to be careful about the indexation of rows and columns to align eigenvectors with the corresponding eigenvalues. In accordance with the proof of Theorem <ref>, for each ρ_i∈{ρ_0,ρ_1,ρ_2} of dimension d_i, the d_ik× d_ik matrix _i is formed by a choice of the corresponding eigenvectors of (ρ_i). To proceed, we choose to list the eigenvalues in the order 3+√(3), 3-√(3), 2, 0, -1, -3, 0, 0, together with a choice of the corresponding eigenvectors as follows:
_0 = (
[ 1+√(3) 1-√(3); 1 1 ]), _1 = (
[ 0 -2; 1 1 ]),
_2 =
(
[ 0 -2 1 1; 0 -2 1 0; -1 1 1 1; 1 1 1 0 ]),
where, from left to right, the columns of _0 correspond to eigenvalues 3+√(3) and 3-√(3), the columns of _1 to the eigenvalues 2 and 0, and finally the columns of _2 correspond to the eigenvalues -1, -3,0 and 0, respectively. Then, we get
=(
[ 1+√(3) 1-√(3) 0 -2 0 -2 1 1 0 -2 1 0; 1+√(3) 1-√(3) 0 -2 0 -2z z z 0 -2z^-1 z^-1 0; 1+√(3) 1-√(3) 0 -2 0 -2z^2 z^2 z^2 0 -2z^-2 z^-2 0; 1+√(3) 1-√(3) 0 2 0 -2 1 0 0 -2 1 1; 1+√(3) 1-√(3) 0 2 0 -2z z 0 0 -2z^-1 z^-1 z^-1; 1+√(3) 1-√(3) 0 2 0 -2z^2 z^2 0 0 -2z^-2 z^-2 z^-2; 1 1 1 1 -1 1 1 1 1 1 1 0; 1 1 1 1 -z z z z z^-1 z^-1 z^-1 0; 1 1 1 1 -z^2 z^2 z^2 z^2 z^-2 z^-2 z^-2 0; 1 1 -1 -1 1 1 1 0 -1 1 1 1; 1 1 -1 -1 z z z 0 -z^-1 z^-1 z^-1 z^-1; 1 1 -1 -1 z^2 z^2 z^2 0 -z^-2 z^-2 z^-2 z^-2 ]).
Then, the eigenvectors of the lift are obtained, first removing the columns 4,8,12, where the first and fourth rows (corresponding to the elements of G_u={e,b}) are different, and second removing the rows 1,2,3 (whose entries are equal to the rows 4,5,6, respectively):
(
[ 1+√(3) 1-√(3) 0 0 -2 1 0 -2 1; 1+√(3) 1-√(3) 0 0 -2z z 0 -2z^-1 z^-1; 1+√(3) 1-√(3) 0 0 -2z^2 z^2 0 -2z^-2 z^-2; 1 1 1 -1 1 1 1 1 1; 1 1 1 -z z z z^-1 z^-1 z^-1; 1 1 1 -z^2 z^2 z^2 z^-2 z^-2 z^-2; 1 1 -1 1 1 1 -1 1 1; 1 1 -1 z z z -z^-1 z^-1 z^-1; 1 1 -1 z^2 z^2 z^2 -z^-2 z^-2 z^-2 ]).
§ CONCLUDING REMARKS
In Section <ref> we presented a method for determining the complete spectrum of a factored lift from the spectrum of a special matrix that reflects the structure of the base graph together with a combined voltage assignment in a group, and whose entries lie in the complex group algebra of the voltage group. For a similar derivation of the eigenvectors of the lift, we derived a sufficient condition, and in Section <ref>, we illustrated the complexity of the situation with lifting eigenvectors in general.
In a way, this is a bit of a paradox when one considers the previous work <cit.> of the same set of authors. The two papers offer a complete description of permutation lifts of both spectra and eigenspaces, together with details of the underpinning theory in <cit.>. While permutation lifts represent a generalization of ordinary lifts in a completely different direction compared with factored lifts (as explained in Section <ref>), it still feels like a paradox that our methods enable a complete description of lifts of spectra but not lifts of all the eigenspaces. A complete determination of the latter remains open.
The factored lifts of base graphs equipped with a combined voltage assignment in a given group, considered in this paper, are an equivalent way of studying the quotients of graphs that admit a free action of a given group on arcs. As alluded to earlier, an investigation of quotients of graphs by general subgroups of automorphisms and a formal study of their reconstruction by `general lifts' was set out by Potočnik and Toledo in <cit.>. The question of determination of spectra and eigenspaces in such a completely general setting is also open.
§.§ Acknowledgment
The first two authors' research has been supported by
AGAUR from the Catalan Government under project 2021SGR00434 and MICINN from the Spanish Government under project PID2020-115442RB-I00.
The second author's research was also supported by a grant from the Universitat Politècnica de Catalunya, with references AGRUPS-2022 and AGRUPS-2023. The third and fourth authors acknowledge support of this research from the APVV Research Grants 19-0308 and 22-0005 and the VEGA Research Grants 1/0567/22 and 1/0069/23.
agrr07
K. Audenaert, C. Godsil, G. Royle, and T. Rudolph,
Symmetric squares of graphs,
J. Combin. Theory B 97 (2007) 74–90.
dfmr17
C. Dalfó, M. A. Fiol, M. Miller, and J. Ryan, On quotient digraphs and voltage digraphs, Australasian J. Combin. 69 (2017), no. 3, 368–374.
dfmrs19
C. Dalfó, M. A. Fiol, M. Miller, J. Ryan, and J. Širáň,
An algebraic approach to lifts of digraphs,
Discrete Appl. Math. 269 (2019) 68–76.
dfps21
C. Dalfó, M. A. Fiol, S. Pavlíková and J. Širáň, Spectra and eigenspaces of arbitrary lifts of graphs, J. Algebraic Combin. 54 (2021) 651–672.
dfps23
C. Dalfó, M. A. Fiol, S. Pavlíková, and J. Širáň,
On the spectra and eigenspaces of the universal adjacency matrices of arbitrary lifts of graphs,
Linear Multilinear Algebra 71(5) (2023) 693–710.
dfs19
C. Dalfó, M. A. Fiol, and J. Širáň,
The spectra of lifted digraphs,
J. Algebraic Combin. 50 (2019)
419–426.
ffhhuw12
R. Fabila-Monroy, D. Flores-Peñaloza, C. Huemer, F. Hurtado, J. Urrutia, and D. R. Wood,
Token graphs,
Graphs Combin. 28 (2012), no. 3, 365–380.
g99
H. W. Gould, The Girard-Waring power sum formulas for symmetric functions and Fibonacci
sequences, Fibonacci Quart. 37 (1999), no. 2, 135–140.
gt87
J. L. Gross and T. W. Tucker,
Topological Graph Theory,
Wiley, New York (1987).
pt21
P. Potočnik and M. Toledo,
Generalised voltage graphs,
European J. Combin. 94 (2021) 103313.
rdfm23
M. A. Reyes, C. Dalfó, M. A. Fiol, and A. Messegué,
On the spectra of token graphs of cycles and other graphs,
Linear Algebra Appl. 679 (2023) 38–66.
|
http://arxiv.org/abs/2409.03714v1 | 20240905171908 | Simulating the Galactic population of axion clouds around stellar-origin black holes: Gravitational wave signals in the 10-100 kHz band | [
"Jacob R. Sprague",
"Shane L. Larson",
"Zhiyuan Wang",
"Shelby Klomp",
"Andrew Laeuger",
"George Winstone",
"Nancy Aggarwal",
"Andrew A. Geraci",
"Vicky Kalogera"
] | astro-ph.HE | [
"astro-ph.HE",
"gr-qc"
] |
^1Center for Interdisciplinary Exploration and Research in Astrophysics (CIERA), Department of Physics and Astronomy, Northwestern University, Evanston, Illinois 60208
^2Center for Fundamental Physics, Department of Physics and Astronomy, Northwestern University, Evanston, Illinois 60208, USA
^3The Division of Physics, Mathematics and Astronomy, California Institute of Technology, Pasadena, California 91125, USA
^4Department of Physics and Astronomy, University of California, Davis, Davis, California 95616, USA
§ ABSTRACT
Ultralight scalar fields can experience runaway `superradiant' amplification near spinning black holes, resulting in a macroscopic `axion cloud' which slowly dissipates via continuous monochromatic gravitational waves. For a particular range of boson masses, 𝒪(10^-11 – 10^-10) eV, an axion cloud will radiate in the 10 – 100 kHz band of the Levitated Sensor Detector (LSD). Using fiducial models of the mass, spin, and age distributions of stellar-origin black holes, we simulate the present-day Milky Way population of these hypothetical objects. As a first step towards assessing the LSD's sensitivity to the resultant ensemble of GW signals, we compute the corresponding signal-to-noise ratios which build up over a nominal integration time of 10^7 s, assuming the projected sensitivity of the 1-m LSD prototype currently under construction, as well as for future 10-m and 100-m concepts. For a 100-m cryogenic instrument, hundreds of resolvable signals could be expected if the boson mass μ is around 3×10^-11 eV, and this number diminishes with increasing μ up to ≈ 5.5×10^-11 eV. The much larger population of unresolved sources will produce a confusion foreground which could be detectable by a 10-m instrument if μ∈ (3-4.5)×10^-11 eV, or by a 100-m instrument if μ∈ (3-6)×10^-11 eV.
Simulating the Galactic population of axion clouds around stellar-origin black holes:
Gravitational wave signals in the 10 - 100 kHz band
Jacob R. Sprague^1, Shane L. Larson^1, Zhiyuan Wang^2, Shelby Klomp^2, Andrew Laeuger^3, George Winstone^2, Nancy Aggarwal^4, Andrew A. Geraci^2, and Vicky Kalogera^1
(The LSD Collaboration)
September 9, 2024
===================================================================================================================================================================================================
§ INTRODUCTION
The era of gravitational-wave (GW) astronomy is in full-swing. During their first three observing runs, the GW interferometers Advanced LIGO and Advanced Virgo detected 90 compact binary coalescences (CBC) involving neutron stars (NS) and stellar-mass black holes (BH) <cit.><cit.><cit.>. The most notable events included the first NS-NS merger (GW170817) <cit.>, the first highly-asymmetric binary (GW190412) <cit.>, the first merger with an intermediate-mass BH remnant (GW190521) <cit.>, and the first object in the mass gap separating the most massive neutron stars from the lowest-mass BH's (GW190814) <cit.>. The first half of the fourth observing run has already seen a new lower-mass-gap event (GW230529) <cit.>.
Adding to the excitement, evidence for a stochastic background has been reported in the 15-year dataset from the North American Nanohertz Observatory for Gravitational Waves (NANOGrav) <cit.>. The most well-motivated scenario for the origin of this background is the extragalactic population of inspiralling supermassive BH binaries.
Finally, the launch of the Laser Interferometer Space Antenna (LISA) in the mid-2030's will open up the millihertz band for exploration. The Galactic population of compact binaries, and the extragalactic population of supermassive BH binaries and extreme-mass-ratio-inspirals (EMRI's), are all highly anticipated LISA sources <cit.>.
These observatories cover multiple windows in the GW spectrum from the nanohertz up to several hundred Hz. The push to higher frequencies is now underway, with cosmic strings, axion clouds, primordial black hole (PBH) binaries, and early-universe stochastic backgrounds as the main science drivers <cit.>.
One such concept, currently in development at Northwestern University, is the Levitated Sensor Detector (LSD). With sensitivity to GW's at tens to hundreds of kHz, the LSD employs optically-trapped micron-scale disks as GW sensors. The instrument is a Michelson interferometer with two perpendicular 1-meter Fabry-Pérot arm cavities. In each arm, a disk is levitated at an antinode of a standing-wave formed by two counter-propagating beams. The trapped object behaves like a driven damped harmonic oscillator, with the corresponding trap frequency being widely-tunable with laser intensity. The periodic changes in arm-length induced by a GW manifest as a periodic shift in the position of the antinode. If the trap frequency matches the GW frequency, the levitated sensor is resonantly driven <cit.> <cit.> <cit.>.
As a resonant detector, the LSD is well-suited to search for continuous monochromatic signals. A popular scenario involves the interaction between spinning black holes and `ultralight' bosonic fields – i.e. those with masses several orders-of-magnitude smaller than an electron-volt (eV). Such fields can extract rotational energy from spinning BH's via `superradiant amplification' of certain bound-states <cit.>. The result is a macroscopic cloud of bosons all living in the same state – commonly known as a `gravitational atom' or `axion cloud' <cit.>. These oscillating non-axisymmetric clouds generate continuous monochromatic GW's at a frequency primarily determined by the boson's mass. Tens to hundreds of kHz corresponds to μ = 𝒪(10^-11 - 10^-10) eV.
This scenario can be realized with physics beyond-the-Standard-Model (BSM). For example, a large number of ultralight fields may occur as a result of the compactification of extra dimensions <cit.>. One of these may be the QCD axion – the pseudoscalar boson proposed to solve the strong-CP problem <cit.><cit.><cit.>. The axion is a Goldstone boson of a spontaneously-broken global symmetry which acquires a small mass through non-perturbative effects. Its mass, μ, is determined by the energy scale f_a associated with the broken symmetry <cit.>,
μ≈ 6× 10^-10 eV(10^16 GeV/f_a)
where 10^16 GeV≡Λ_GUT is the grand unification (GUT) scale. An axion of mass 𝒪(10^-10) eV corresponds to f_a being at the GUT scale. However, as we will see in Sec. <ref>, signals in the LSD band are only expected up to ≈ 32 kHz, corresponding to a 6.6×10^-11 eV boson.
At boson masses 𝒪(10^-11) eV, superradiance occurs optimally for BH's with masses between 0.1 and a few solar masses. Sub-solar BH's may exist as PBH's <cit.>, and BH's in the 1 - 4 M_⊙ range might be formed dynamically in binary neutron star mergers <cit.>, accretion-induced collapsing neutron stars <cit.>, or supernovae with unusually high fallback <cit.> <cit.>.
The 1 - 4 M_⊙ range of BH masses is gradually being populated by microlensing candidates, X-ray binary candidates, and GW events such as GW190814 <cit.> and GW230529 <cit.>. Since the mass distribution for these objects is still unknown, we will limit our attention to stellar-origin BH's with masses between 5 and 20 M_⊙, typical of BH's in X-ray binary systems.
As a first step towards building the LSD search pipeline, we simulate the Galactic population of axion clouds with 5-20 M_⊙ BH hosts. The essential data returned by these simulations are the gravitational-wave frequency & dimensionless strain amplitude emitted by each cloud. Together with the LSD's projected sensitivity curve, we estimate the number of resolvable signals, i.e. those whose signal-to-noise ratio (SNR) rises above a given threshold after a coherent observation time T_coh = 10^7 s (a little less than four months). We adopt the idealization of a `freely-floating' detector orbiting the Milky Way at the same radius as the Solar System, but not situated on a rotating planet orbiting a star. In doing so, we neglect the amplitude and frequency modulations induced by the Earth's sidereal rotation and by its orbital motion in the Solar System. Our results establish a baseline from which a more in-depth analysis, including the aforementioned modulations, can be undertaken in future work.
In Sections II & III, we introduce the essential physics of axion clouds and their GW emission. To simulate the population of axion clouds, we require a model of the stellar-origin BH population. The parameters of a black hole – mass, spin, age, and location in the Milky Way – are taken to be independent random variables, and we discuss their distributions in Section IV. The procedure for determining whether a BH of given mass, spin, and age presently hosts an axion cloud is described in Section V. The simulated cloud populations, and the corresponding ensembles of GW signals, are discussed in Section VI. Section VII provides a summary of the results, as well as tasks for future work. Throughout the paper, we adopt the metric signature (-, +, +, +), and we retain all factors of G, c, and ħ. We hope our decision not to set physical constants to unity will make this work more accessible to those unaccustomed to the conventions of fundamental physics theory.
§ SUPERRADIANT BOUND-STATES
The creation of macroscopic clouds around spinning black holes can occur for any massive bosonic field. The simplest scenario, and the one we adopt, is that of an electrically-neutral massive scalar field freely propagating in the Kerr spacetime; we denote the BH mass and dimensionless spin by M and χ≡ Jc/(GM^2), respectively (J is the BH angular momentum). We also assume no self-interactions to avoid complications such as the bosenova instability <cit.>. The scalar field then obeys the Klein-Gordon equation <cit.>,
[g^μν∇_μ∇_ν - m_*^2]Φ(x, t) = 0
where the constant m_* has dimensions of inverse length; in the quantum theory of a scalar field, the physical meaning of m_* is 1/λ_c, where λ_c≡ħ/(mc) = ħ c/μ is the reduced Compton wavelength of the boson, m is the mass of the particle, and μ = mc^2.
In Boyer-Lindquist coordinates, and with the ansatz
Φ(x, t) = e^-i ω te^i m ϕ S(θ) R(r)
the Klein-Gordon equation separates into two ordinary differential equations (ODE's) for R(r) and S(θ). We seek a bound-state solution which is `in-going' at the event horizon – i.e. a solution which goes to zero at infinity and looks like an in-going-wave at the horizon. The in-going boundary condition causes the eigenfrequency ω to be complex,
ω = ω_R + iω_I
with the consequence that bound-states must either grow or decay:
e^-i ω t = e^-i (ω_R + iω_I) t = e^-i ω_R te^ω_I t
Φ(x, t) = e^ω_I t[e^-i ω_R te^i m ϕ S(θ) R(r)]
For ω_I > 0, the field amplitude grows exponentially. A necessary and sufficient condition for the growth of a bound-state with azimuthal number m is that the event horizon's angular speed Ω_H (times m) be faster than the oscillation of the field <cit.>,
ω_R < mΩ_H
This requirement is called the `superradiance condition'. As the field amplitude grows, the BH loses rotational energy, and Ω_H decreases until the inequality becomes an equality. At that point, the superradiant growth ceases, and the resultant bound-state slowly dissipates by emitting GW's.
It is conventional to define a dimensionless `coupling parameter' α as the ratio of the BH's gravitational radius r_g to the reduced Compton wavelength λ_c of the scalar field:
α= r_g/λ_c = GM/c^2μ/ħ c = GMμ/ħ c^3
The `weak-coupling' limit, defined by α≪ 1, corresponds to the Compton wavelength of the boson being much larger than the characteristic size r_g of the BH. In this limit, the bound-state energy, given by the real part of ω, can be written in closed-form <cit.>:
ħω_R = μ[1 - α^2/(2n^2) + ((2l - 3n + 1)/(l + 1/2) - 1/8)α^4/n^4 +
2mχα^5/(n^3 l(l + 1/2)(l + 1)) + ...]
The small `fine-structure' corrections beyond the leading 1 depend on the angular momentum of the cloud and the spin of the BH. The quantity in large square brackets depends on the BH & boson masses only through their dimensionless product α. This motivates the introduction of a dimensionless eigenfrequency ξ = ξ_R + iξ_I:
ξ_R≡ħω_R/μ ξ_I≡ħω_I/μ
Once we have computed ξ over a sufficiently large region of the (α, χ) parameter space for all bound-states {n, l, m} of interest, we can freely plug-in any BH masses and axion masses of our choosing. For example, taking M=10^7 M_⊙ and μ = 10^-17 eV, we get α = 0.748. The same value is obtained taking M=10 M_⊙ and μ = 10^-11 eV. The essential consequence is that, for a given BH spin χ, the same set of superradiant bound-states exists for both scenarios.
From a practical point of view, this also means the superradiance condition
ξ_R < mχ/[2α(1 + √(1 - χ^2))]≡ξ_crit
becomes a tool for rapidly determining, for a given parameter set {μ, M, χ}, which states are superradiant.
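In practice this filter is a one-line check per mode; a minimal Python sketch (ours; constants rounded, function names illustrative) reads:

import numpy as np

HBAR_C_EV_M = 1.973269804e-7          # hbar*c in eV*m
RG_PER_MSUN_M = 1476.6                # G*Msun/c^2 in meters

def coupling_alpha(M_msun, mu_eV):
    # alpha = r_g/lambda_c = G*M*mu/(hbar*c^3)
    return M_msun * RG_PER_MSUN_M * mu_eV / HBAR_C_EV_M

def xi_crit(m, alpha, chi):
    # right-hand side of the superradiance condition above
    return m * chi / (2.0 * alpha * (1.0 + np.sqrt(1.0 - chi**2)))

print(coupling_alpha(10.0, 1.0e-11))  # ~0.748, the value quoted in the text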
Additionally, far from the BH where relativistic effects are negligible, the radial equation reduces to (r measured in units of λ_c) <cit.>
[-1/(2r^2)d/dr(r^2d/dr) - α/r + l(l + 1)/(2r^2) + (1 - ξ^2)/2]R(r) = 0
which is the radial Schrödinger equation for a non-relativistic particle in a Coulomb potential V(r) = α/r – hence the moniker `gravitational atom'. The α^2/(2n^2) term in Eq. <ref> is precisely the `hydrogen atom' solution to Eq. <ref>. The complete small-α solutions to the full Klein-Gordon equation have been computed order-by-order using the method of matched asymptotic expansions <cit.>,
ξ_R = 1 - α^2/(2n^2) + ((2l - 3n + 1)/(l + 1/2) - 1/8)α^4/n^4 +
2mχα^5/(n^3 l(l + 1/2)(l + 1)) + ...
ξ_I = 2(1 + √(1 - χ^2))[mχ/(2α(1 + √(1 - χ^2))) - ξ_R]α^(4l + 5)
× 2^(4l + 1)(n + l)!/[n^(2l + 4)(n - l - 1)!]·[l!/((2l)!(2l + 1)!)]^2
×∏_j = 1^l[j^2(1 - χ^2) + (mχ - 2αξ_R(1 + √(1 - χ^2)))^2]
In general, ξ is a function of {n, l, m, α, χ}. For fixed α and χ, the superradiance rate is largest when m = l = n - 1. We consider only such bound states in our simulations of the Galactic axion cloud population, reducing the parameter list to {n, χ, α}.
Since our fiducial model of the Galactic BH population will assume M ≥ 5 M_⊙, as well as boson masses 𝒪(10^-11 - 10^-10) eV, the corresponding values of α are always greater than 1, but still of order unity. In this `intermediate' regime, there are no closed-form solutions for ξ. As detailed in Appendix A, we must resort to the series-solution method for solving the radial Klein-Gordon equation. The coefficients of the infinite-series ansatz obey a three-term recurrence relation whose solution is equivalent to the solution of a corresponding non-linear continued-fraction equation <cit.> <cit.>.
Denoting the peak mass of the cloud as M_c, the cloud's growth timescale is given by <cit.>
τ_c≡τ_nlmln N = τ_nlmln(M_cc^2/μ)
with N the number of bosons in the cloud, and τ_nlm the reciprocal of the superradiance rate,
τ_nlm≡1/Γ_nlm, Γ_nlm≡ 2ω_I
τ_nlm is the e-folding timescale, and we follow the authors of <cit.> in taking τ_c as the time to fully grow the bound-state. The factor of two in Γ_nlm occurs because the cloud's density is proportional to the 00-component of the stress-energy, ρ∝ T^0_0∝exp(2ω_It).
As the cloud grows, the BH gradually loses mass and angular momentum. The growth timescales are long enough to permit an adiabatic treatment of the BH's evolution <cit.>. The metric can be thought of as Kerr with slowly changing M and χ. Denoting the initial BH parameters as (M_i, χ_i), the cloud's mass is
M_c≡ M_i - M_f
and the hole's final mass & spin (M_f, χ_f) are given by <cit.><cit.>
M_f = M_i[m^3 -√(m^6 - 16m^2ξ_R^2α_i^2(m - ξ_Rα_iχ_i)^2)]/[8ξ_R^2α_i^2(m - ξ_Rα_iχ_i)]
χ_f = (M_i/M_f)^2[χ_i - m M_c/(ξ_Rα_i M_i)]
Since our simulation of the Galactic axion cloud population requires us to follow the evolution of each BH-cloud system, – of which there could be millions – we save computation time by relying on these expressions for the final BH parameters.
The final mass & spin become the new parameters (M_f→ M_i, χ_f→χ_i) for determining which bound-state will grow after the present cloud has dissipated. For our simulations, the superradiance condition (Eq. <ref>) is used to determine, from the set {1, 2, 3, ...}, the smallest value of m for which superradiance occurs. The final state of the BH-boson system at the cessation of cloud growth is determined by Eqs. <ref>, <ref>, and <ref>.
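In a population code these decrements reduce to a few lines; the following Python transcription of the two expressions above (ours; variable names illustrative) returns the saturated BH parameters and the cloud mass:

import numpy as np

def saturated_bh(M_i, chi_i, m, xi_R, alpha_i):
    A = xi_R * alpha_i                      # = omega_R G M_i / c^3
    B = m - A * chi_i
    M_f = M_i * (m**3 - np.sqrt(m**6 - 16.0 * m**2 * A**2 * B**2)) / (8.0 * A**2 * B)
    M_c = M_i - M_f                         # cloud mass
    chi_f = (M_i / M_f)**2 * (chi_i - m * M_c / (A * M_i))
    return M_f, chi_f, M_c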
§ GRAVITATIONAL WAVES FROM AXION CLOUDS
At a particle-physics level, GW production by axion clouds can be understood in terms of two processes: annihilation of two bosons to a single graviton (with the BH absorbing the recoil momentum), and downward-transitions between bound-states <cit.>. However, just as superradiance is a purely classical kinematic effect, the GW emission can also be understood classically in terms of the cloud's time-dependent quadrupole moment. That being said, the GW signals considered in this work correspond to the annihilation channel.
Since our simulation of the Galactic axion cloud population requires us to compute the GW amplitude for each cloud, – of which there could be millions – we save computation time by relying on semi-analytic formulas for the amplitudes <cit.> <cit.>. Following <cit.>, the GW signal seen by a detector with perpendicular arms takes the general form
h(t) = F_+(t)a_+cos[ϕ(t)] + F_×(t)a_×sin[ϕ(t)]
where F_+(t) and F_×(t) are the detector's angular pattern functions. The amplitudes a_+/× are expanded in terms of spheroidal harmonics with spin-weight s = -2,
a_+/× = -∑_l̃≥ 2l h_0^(l̃)[_-2S_l̃,m̃,ω±_-2S_l̃,-m̃,-ω]
where ω = 2ω_R is the GW angular frequency, (l, m) refer to the scalar bound-state, and (l̃, m̃) refer to the GW modes, with l̃≥ 2l and m̃ = 2m. For each mode, there is a polarization-independent characteristic amplitude h_0^(l̃) <cit.>:
h_0^(l̃) = (c^4/G)(M_c/M_f)·𝒜_l̃m̃(α_i, χ_i)/(2π^2 M_f f^2 d)
where f is the GW frequency, d is the source distance, and the 𝒜_lm(α, χ) are dimensionless numerical factors which measure how much energy is carried by each mode. The corresponding luminosity in each mode is given by
E_GW(l̃, m̃, ω) = c^5/(4π G)·(c^3/(GM_fω))^2·(M_c^2/M_f^2)·𝒜_l̃m̃^2(α_i, χ_i)
In principle, the coefficients 𝒜_lm must be computed numerically by solving the Teukolsky equation governing linear perturbations of the Kerr metric. The authors of <cit.> express E_GW in the form
E_GW = c^5/GM_c^2/M_f^2d E/dt
and invoke an analytic solution for dE/dt which is formally valid for α≪ l, and which remains a good approximation up to α∼ l <cit.>:
dE/dt = [16^(l + 1) l(2l - 1) Γ^2(2l - 1) Γ^2(n + l + 1)]/[n^(4l + 8)(l + 1)Γ^4(l + 1)Γ(4l + 3) Γ^2(n - l)]·α_f^(4l + 10)
where Γ is the gamma function, and α_f denotes the value of α corresponding to the final mass of the BH (i.e. after the cloud has finished growing),
α_f = α_iM_f/M_i
Comparing Eqs. <ref> and <ref>, we see that 𝒜_l̃m̃∝√(dE/dt), allowing us to express h_0^(l̃) directly in terms of dE/dt. Restricting ourselves to the dominant mode m̃ = l̃ = 2l, we obtain a closed-form solution for the characteristic amplitude, which we use throughout to compute the GW amplitudes of the axion clouds resulting from our simulations (we will drop the superscript (2l) henceforth),
h_0^(2l)(d) = [GM_c/(c^2d)]·2√(π)·M_i/(ξ_Rα_iM_f)·√(d E/dt)
The corresponding GW frequency is given by
f = ω/2π = 1/2π2μ/ħξ_R≡ f_0ξ_R
f = f_0ξ_R
where we've introduced the zeroth-order frequency f_0 = ω_0/2π, ω_0≡ 2μ/ħ.
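Collecting the last few equations, the initial amplitude of a given cloud can be evaluated with a short routine such as the following sketch (ours, not the collaboration pipeline; constants rounded, names illustrative; M_c in M_⊙, distances in kpc):

from math import gamma, pi, sqrt

G_SI, C_SI = 6.674e-11, 2.998e8
MSUN_KG, KPC_M = 1.989e30, 3.086e19

def dE_dt(n, l, alpha_f):
    # dimensionless flux of the dominant mode, Eq. above (valid up to alpha ~ l)
    num = 16.0**(l + 1) * l * (2 * l - 1) * gamma(2 * l - 1)**2 * gamma(n + l + 1)**2
    den = n**(4 * l + 8) * (l + 1) * gamma(l + 1)**4 * gamma(4 * l + 3) * gamma(n - l)**2
    return (num / den) * alpha_f**(4 * l + 10)

def h0_initial(M_c, M_i, M_f, xi_R, alpha_i, n, l, d_kpc):
    # characteristic strain h_0^(2l) of Eq. above
    alpha_f = alpha_i * M_f / M_i
    prefac = G_SI * M_c * MSUN_KG / (C_SI**2 * d_kpc * KPC_M)
    return prefac * 2.0 * sqrt(pi) * (M_i / (xi_R * alpha_i * M_f)) * sqrt(dE_dt(n, l, alpha_f))

# stellar-mass clouds in the LSD band typically give h_0 of order 1e-26 at kpc distances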
It is often remarked that the GW frequency is set by twice the axion mass, f ≈ 2μ/h. We see that this is, indeed, true in the small-α limit by noting that ξ_R→ 1 as α→ 0 (Fig. <ref>, Eq. <ref>). The frequency monotonically decreases with increasing α, and for axion clouds in the kHz band, with stellar-mass BH hosts (where α is generically greater than 1), GW frequencies can be upwards of 10% smaller than the nominal value f_0.
Eq. <ref> gives the frequency as measured in the rest-frame of the axion cloud. For an observer located elsewhere in the Milky Way, the measured signal is Doppler-shifted due to the differential rotation of the Galaxy. We assume all bodies in the Galaxy move in the azimuthal direction, v = v_ϕϕ̂, and we assume the following Galactic rotation curve <cit.> (r, in kpc, is the cylindrical radial distance from the Galactic center):
v_ϕ(r) (km/s) =
265 - 1875(r - 0.2)^2 r < 0.2
225 + 15.625(r - 1.8)^2 0.2 < r < 1.8
225 + 3.75(r - 1.8) 1.8 < r < 5.8
240 r > 5.8
Denoting the source-frame frequency as f_s, the non-relativistic Doppler-shifted frequency we observe is
f_obs = (1 - v_r/c)f_s
where v_r is the line-of-sight component of the relative velocity between source and observer. v_r is defined to be positive when the source and observer are moving away from each other.
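A direct transcription (ours) of the rotation curve and of the Doppler shift above is:

def v_phi_kms(r_kpc):
    # piecewise Galactic rotation curve above; r in kpc, result in km/s
    if r_kpc < 0.2:
        return 265.0 - 1875.0 * (r_kpc - 0.2)**2
    elif r_kpc < 1.8:
        return 225.0 + 15.625 * (r_kpc - 1.8)**2
    elif r_kpc < 5.8:
        return 225.0 + 3.75 * (r_kpc - 1.8)
    return 240.0

def f_observed(f_source_hz, v_r_kms, c_kms=2.998e5):
    # non-relativistic Doppler shift; v_r > 0 when source and observer recede
    return (1.0 - v_r_kms / c_kms) * f_source_hz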
When a cloud finishes growing, it emits GW's whose initial amplitude h_0 is given by Eq. <ref>. As the cloud dissipates, the amplitude decreases as <cit.>
h(t) = h_0/1 + t/τ_GW
where τ_GW is the time for h to drop to half its initial value.
§ THE GALACTIC POPULATION OF ISOLATED STELLAR-ORIGIN BLACK HOLES
With the results of the previous sections in hand, we can follow the `superradiance history' of any given BH – i.e. we can determine the sequence of scalar field bound-states, their growth & dissipation timescales, the BH mass and spin decrements, and, above all, the GW frequency & amplitude of each successive cloud. To simulate the entire Galactic population of axion clouds, we must assign each BH a mass, spin, age, and location – taken to be independent random variables – in accordance with known or assumed distributions.
Our knowledge of the stellar-origin BH mass distribution is informed by mass measurements in X-ray binary systems <cit.> <cit.> <cit.>, microlensing events <cit.>, and astrometry <cit.>, as well as through modelling of the complex physics of core-collapse supernovae <cit.>. Known BH's typically have masses between 5 M_⊙ and 20 M_⊙, and power-law models are favored when fitting the mass function of low-mass X-ray binaries <cit.>. Not coincidentally, the massive stars which produce BH remnants are also characterized by a power-law distribution, ψ(M)dM ∝ M^-2.35dM – the `Salpeter' function. We will assume M_BH to be Salpeter-distributed on the interval 5 - 20 M_⊙.
BH spins have been measured in several X-ray binaries <cit.>, but none have been measured for isolated BH's. In the case of binaries, the distribution of spin magnitudes is more-or-less uniform, so we take the BH spin to be uniformly distributed, χ∼ U[0, 1].
The stellar content of the Milky Way can be divided into three primary regions – the thin disk, the thick disk, and the central bulge. The age distribution of stellar-origin BH's is tied to the star formation history in each region. As the Milky Way's star formation history is a topic of ongoing research, we take an agnostic approach by assigning each BH an age of 10^x yr, with x uniformly distributed on an interval which varies between the three Galactic regions. For the thin disk and thick disk, we take x ∼ U[3, log_10(8×10^9)] and x ∼ U[3, 10], respectively <cit.>. For the bulge, we assign each BH an age 10^x yr, with x ∼ U[9, log_10(13×10^9)] <cit.>.
We assume black holes are distributed in space according to the mass profiles of the disks & bulge described in Ref. <cit.>. Both disks have the same axisymmetric form, with the corresponding scale lengths, scale heights, and surface densities quoted in Table <ref>:
ρ_disk(r, z, ϕ) = Σ_d, 0/(2 z_d) e^-z/z_d e^-r/R_d
The bulge is also axisymmetric, with the corresponding parameters also given in Table <ref>:
ρ_b = ρ_b,0/(1 + r'/r_0)^αe^-(r'/r_cut)^2
r' ≡√(r^2 + (z/q)^2)
We apportion the BH's among the three Galactic regions according to the fractions f_thin, f_thick, and f_bulge, defined by f_i = M_i/∑_iM_i, i ∈ {thin, thick, bulge}. The disk masses are obtained by integrating ρ_disk, with the radial integral cut-off at 25 kpc, and the vertical integral cut-off at 3 scale heights. This gives 3.97×10^10 M_⊙ and 1.5×10^10 M_⊙ for the thin and thick disks, respectively. We take the bulge mass to be 8.9×10^9 M_⊙, the value quoted in <cit.>. The corresponding f_i are 62%, 24%, 14%, respectively. We will assume the Galactic population of N_BH BH's to be apportioned likewise: 62% in the thin disk, 24% in the thick disk, and 14% in the bulge.
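Drawing the BH parameters described in this section can be sketched as follows (ours; the dictionary of log-age bounds simply transcribes the intervals above, and the random seed is illustrative):

import numpy as np

rng = np.random.default_rng(1)

def sample_salpeter_mass(n, m_min=5.0, m_max=20.0, gam=2.35):
    # inverse-CDF draw from psi(M) dM ∝ M^-2.35 dM on [m_min, m_max] (Msun)
    u = rng.uniform(size=n)
    a, b = m_min**(1.0 - gam), m_max**(1.0 - gam)
    return (a + u * (b - a))**(1.0 / (1.0 - gam))

LOG10_AGE_BOUNDS = {"thin": (3.0, np.log10(8.0e9)),
                    "thick": (3.0, 10.0),
                    "bulge": (9.0, np.log10(1.3e10))}

def sample_bh(n, region="thin"):
    lo, hi = LOG10_AGE_BOUNDS[region]
    return (sample_salpeter_mass(n),
            rng.uniform(0.0, 1.0, size=n),          # dimensionless spin chi
            10.0**rng.uniform(lo, hi, size=n))      # age in years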
§ SIMULATION PROCEDURE
The simulation is a procedure by which, for a given axion mass, and from an initial population of N_BH BH's sprinkled throughout the Milky Way, we determine the number N_c of extant axion clouds. Each simulation outputs the physical properties, distances, and the GW frequencies & amplitudes of the N_c clouds.
At the outset, each BH is assigned a mass, spin, and age. We will illustrate the procedure with an example, and then summarize the procedure with a flowchart: Taking μ = 4×10^-11 eV, consider the evolution of a 5 M_⊙, χ = 0.95 BH with an age of 10^8 yrs. The superradiance condition, Eq. <ref>, determines which bound-state grows first.
ξ_R = 1.03 ξ_crit = 0.24 (m = 1)
ξ_R = 0.76 ξ_crit = 0.48 (m = 2)
ξ_R = 0.89 ξ_crit = 0.73 (m = 3)
ξ_R = 0.94 ξ_crit = 0.97 (m = 4)
Since ξ_R > ξ_crit for m = 1, 2, 3, the first superradiant bound-state is n = 5, l = m = 4, and it grows on a timescale of τ_c = 3.4 yrs. The BH's mass and spin are decreased to 4.94 M_⊙ and 0.938, respectively. Once the cloud has finished growing, it dissipates on a timescale τ_GW = 0.8 yrs. The time from the BH's birth to the cloud's dissipation is only τ_c + τ_GW = 4.2 yrs, leaving plenty of time for new clouds to develop. We denote by t_r the time remaining to the present. In this case, t_r = 10^8 - 4.2 ≈ 10^8 yrs.
The next bound-state is n = 6 with τ_c = 3536 yrs. The BH's mass and spin are decreased to 4.67 M_⊙ and 0.84, respectively. Once the cloud has finished growing, it dissipates on a timescale τ_GW = 2445 yrs. At this point, t_r = 9.9994×10^7 yrs – still plenty of time left for further superradiance.
The next (and final) bound-state is n = 7 with τ_c = 6×10^7 yrs. The BH's mass and spin are decreased to 4.5 M_⊙ and 0.74, respectively. Once the cloud has finished growing, t_r = 3.9×10^7 yrs remain. The dissipation timescale τ_GW = 7×10^7 yrs. Since τ_GW > t_r, the n = 7 cloud is still present today. It has an initial mass M_c = 0.16 M_⊙, and it radiates at f = 18.9 kHz. Placing the source at d = 1 kpc (for example), the initial strain amplitude h_0 = 10^-26 (Eq. <ref>). The signal observed today was emitted d/c = 3300 yrs ago, so the corresponding amplitude h(t) = 6.9×10^-27 (Eq. <ref> with t = t_r - d/c).
Our simulation of the Galactic cloud population consists of applying the foregoing procedure to each of the BH's in the galaxy. If a given BH only permits a bound-state whose growth timescale is greater than the age of the universe (τ_c > τ_uni = 1.38×10^10 yr), the host BH is removed from the simulation.
Our criterion for whether a given cloud is still present today is τ_GW > t_r. For each black hole, there are only two final options: Either a cloud has finished growing and is still present today, or a cloud is growing on a timescale greater than the age of the universe.
Those BH's with an extant cloud are assigned a location in the Milky Way (Eqs. <ref> and <ref>). Earth is assigned to an arbitrary, but fixed, point on the circle of radius 8.3 kpc in the Galactic midplane. For a cloud located at distance d, we check the inequality ct_r > d to determine if there has been enough time for GW's to propagate to Earth since the cloud formed. Those clouds for which d > ct_r are presently unobservable, and we retain only those clouds for which ct_r > d. We summarize this section with the following flowchart:
For a given μ, M_BH, χ, and BH age, find the lowest superradiant value of n. → If τ_c > τ_uni, the BH is removed from the simulation.
→ Otherwise, the dissipation timescale τ_GW determines whether a new cloud will start growing in accordance with τ_GW > t_r (cloud still present) or τ_GW < t_r (cloud has dissipated, and a new cloud begins growing).
→ Repeat the previous steps until one of two possibilities is obtained: a.) A cloud is growing with τ_c > the age of the universe, or b.) A cloud is still present & radiating GW's today.
→ If the cloud hasn't dissipated yet, assign it a random position, and compute the GW strain at Earth's location only if the travel-time inequality ct_r > d is true.
§ GW'S FROM THE AXION CLOUD POPULATION
§.§ Cloud populations
The total number of stellar-origin black holes has been estimated to be 𝒪(10^8) from the Milky Way's supernova rate of 𝒪(1) century^-1 <cit.>, and from population-synthesis estimates <cit.>. We take N_BH = 10^8, bearing in mind that the true number could be larger by a factor of a few, or even another order-of-magnitude <cit.>. We have simulated the axion cloud population for μ = (3, 3.5, 4, 4.5, 5, 5.5, 6, 6.5)×10^-11 eV.
The output of a simulation is a collection of all extant BH-cloud systems in the Milky Way. Those BH's which have experienced the growth of a single cloud are described by a list comprising the BH age, the initial and final values of the BH mass & spin, the bound-state {n, l, m, ξ_R, ξ_I}, the cloud's properties – mass M_c, growth timescale τ_c, and dissipation timescale τ_GW, – the source distance d, and the GW frequency and amplitude (f, h). BH's which have experienced the growth of multiple bound-states are each characterized by a set of such lists, one per bound-state. The GW frequency and amplitude are only computed for the extant cloud, all previous bound-states having already dissipated.
For a given axion mass, the number of extant clouds is a random variable whose mean and standard deviation are estimated by performing twenty simulations with 5×10^6 BH's per simulation, computing the sample mean & sample standard deviation of N_c over the 20 trials, and then multiplying them by 20 and √(20), respectively.
An ensemble of GW signals from axion clouds is a scatter plot in the h vs. f plane, as in Figs. <ref>, <ref>, and <ref>. The distribution of amplitudes and frequencies is not random, but consists of well-defined bands corresponding to the various occupied bound-states. The lowest bound-state resulting from our simulations is n = 6, reflecting the general difficulty for stellar-mass BH's to produce clouds in the LSD band.
Also reflecting this difficulty is the rapid decline in the number of clouds N_c with increasing boson mass μ (Fig. <ref>). For μ=3×10^-11 eV, N_c = (9.323±0.007)×10^5, while at μ=6.5×10^-11 eV, the number has dropped to 130±10. N_c goes to zero around 6.6× 10^-11 eV, corresponding to a nominal upper limit of ≈ 32 kHz for signals expected in the LSD band. Higher-frequency signals could occur from BH's with M_BH < 5 M_⊙, especially in light of the recent discoveries of lower-mass-gap objects.
In the introduction (Sec. <ref>), we noted a potential connection between the QCD axion and the GUT scale Λ_GUT (Eq. <ref>): An axion of mass 𝒪(10^-10) eV corresponds to f_a≈Λ_GUT. If the solution to the strong-CP problem is tied to GUT phenomenology, then discovery of an 𝒪(10^-10) eV axion would be an exciting, albeit indirect, form of evidence for grand unification. The number of clouds in the Milky Way dropping to zero around 6.6× 10^-11 eV would seem to preclude the possibility of detecting an 𝒪(10^-10) eV axion – and, by extent, of probing GUT-scale physics with the LSD. Lower-mass-gap BH's could produce clouds at higher μ, thereby reviving hopes of finding a GUT-scale axion. Another possibility is that Λ_GUT is model-dependent, giving rise to a range of possible values including 10^17 GeV, which corresponds to 𝒪(10^-11) eV bosons.
§.§ Resolvable signals
The standard result for coherent detection of a continuous monochromatic signal, h(t) = h_0cos (ω t), is that the signal-to-noise ratio (SNR) ρ grows as the square-root of the coherent integration time T_coh <cit.>
ρ = h_0√(T_coh)/√(S_n(f))
where √(S_n(f)) is the one-sided amplitude spectral density (ASD) of the detector noise (the `sensitivity curve') evaluated at the GW frequency, and the trapping frequency of the levitated sensor is constant during the entire observation time. Although the LSD is an Earth-bound detector for which the observed signal is modulated by the Earth's daily (diurnal) rotation and orbital motion, we, instead, compute the SNR for the idealized case of a detector freely orbiting the Milky Way at the same radius as the Solar System (i.e. not attached to a planet or star system). This scenario isolates the intrinsic sensitivity of the LSD to continuous-wave signals from incidental factors, such as the Earth's rotational and orbital periods.
Taking T_coh = 10^7 s, and with the projected sensitivity curves for the current 1-m LSD prototype, as well as for future 10-m and 100-m versions <cit.>, we compute the corresponding SNR's for all sources in the galaxy. We count those with ρ > ρ_t as resolvable, and we adopt the threshold ρ_t = 10 (Fig. <ref>).
The `loudness' of a signal is determined primarily by the source distance. The distance, in turn, is a random variable determined by the randomly-assigned position vector (Eqs. <ref> and <ref>) of the source. Thus, for a given set of extant clouds, the number of individually-resolved sources N_res will vary each time we re-assign their position vectors. We estimate the mean & standard deviation of N_res for a given population of extant clouds by laying them down in the Galaxy N_reshuffle = 100 times and counting how many are resolvable in each `re-shuffling'. The mean & standard deviation are then computed as
N_res = 1/N_reshuffle∑_i = 1^N_reshuffleN_res, i
σ_res = √(1/(N_reshuffle - 1)∑_i = 1^N_reshuffle(N_res, i - N_res)^2)
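Schematically (a sketch of ours; `asd` stands for an interpolant of the projected √(S_n(f)) curve and is a placeholder, as is the list of re-drawn amplitudes):

import numpy as np

def count_resolvable(h0, f, asd, T_coh=1.0e7, rho_t=10.0):
    # coherent SNR of Eq. above for each monochromatic source
    rho = np.asarray(h0) * np.sqrt(T_coh) / asd(np.asarray(f))
    return int(np.sum(rho > rho_t))

# counts = [count_resolvable(h0_i, f, asd) for h0_i in reshuffled_amplitudes]
# N_res_mean, sigma_res = np.mean(counts), np.std(counts, ddof=1)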
With a 100-m detector, assuming μ = 3× 10^-11 eV, N_res = 600 with σ_res = 20. In the most pessimistic case (μ = 5.5× 10^-11 eV), there are only 𝒪(1) resolvable signals, and we have not estimated the associated uncertainty. The 10 - 26 kHz range is where we expect resolvable signals to be present for a 100-m LSD. For a 10-m instrument, 𝒪(1) resolvable signals appear at μ = 3× 10^-11 eV, while a 1-m instrument does not have the required sensitivity to detect individual sources.
In the event a continuous monochromatic signal is detected by the LSD, we will have to answer the question: Is this signal from an `axion cloud' – a superradiant bound-state of a scalar (spin-0) field – or from a cloud involving a spin-1 (`Proca') field? In general, Proca fields give rise to stronger GW signals than scalar fields <cit.>. As a result, we would expect resolvable signals from Proca clouds to be found at greater distances than those from scalar clouds. For the 100-m detector, with μ = 3×10^-11 eV, the resolvable signals are depicted in terms of their SNR's and source distances in Fig. <ref>. The vast majority are less than 3 kpc away. Turning this on its head, the detection of a continuous monochromatic signal with an inferred distance significantly greater than 3 kpc could be a potential indicator of a spin-1 field.
§.§ Unresolved signals
For all boson masses, the majority of GW signals have amplitudes less than 10^-23, with the weakest having h = 𝒪(10^-29). The unresolvable signals incoherently combine to form a Galactic confusion foreground which manifests as an excess noise in the detector. As before, we neglect the diurnal and annual modulations of the background, and instead provide a preliminary estimate of the foreground's strength compared to the nominal 1-m, 10-m, and 100-m LSD sensitivity curves. In a strain-frequency plot (e.g. Fig. <ref>), we bin the cloud amplitudes (with bin width δ f = 10^-2f_c, where f_c is the center frequency of a given bin, and the factor 10^-2 is the fractional full-width-at-half-maximum (FWHM) of the trapped object's response function around f_c), and we associate an rms amplitude, defined as follows, with each bin.
We start by creating a bin centered on the frequency of the cloud with the smallest GW frequency in a strain-frequency plot, e.g. Fig. <ref>. All axion clouds emit monochromatic signals,
h_i(t) = h_0,icos(2π f t + ϕ_i)
where the phases ϕ_i are uniformly-distributed between 0 and 2π, and i runs over all clouds in the bin. The squared sum of all signals in the bin is time-averaged over a period T_c = 1/f_c, where f_c is the frequency at the center of the bin; The result is a dimensionless time-averaged power associated with that bin. The square-root of the power represents an effective amplitude h_eff of the confusion foreground in the bin,
h_eff= √(1/T_c∫_0^T_cdt [∑_ih_i(t) ]^2)
We then create a new bin with center frequency f_c|_new and width δ f|_new,
f_c|_new = f_c|_old + 10^-2δ f|_old
δ f|_new = 10^-2f_c|_new
and we compute h_eff for this bin. The center frequency is shifted rightwards by a fraction (arbitrarily chosen to be 10^-2) of the previous bin width so that adjacent bins overlap, ensuring some degree of continuity in h_eff vs. f. We continue until we reach the rightmost end of the cloud population. Each bin is then characterized by an ordered pair (f_c, h_eff) (Fig. <ref>).
A preliminary method for estimating the LSD's sensitivity to the confusion foreground is to treat each pair (f_c, h_eff) as if they were the frequency and amplitude of a hypothetical monochromatic signal whose corresponding effective SNR ρ_eff, computed via Eq. <ref>, is then compared to a threshold ρ_t. We continue to require ρ_t = 10. The numerator and denominator of Eq. <ref> (h_eff√(T_coh) and √(S_n(f_c)), respectively) are shown separately in Fig. <ref>, and their ratio (the SNR) is shown in Figs. <ref>, <ref>, and <ref> for the 1-, 10-, and 100-m instruments, respectively.
We find that a single 1-m LSD does not appear to have the required sensitivity to detect the foreground for any value of μ. A 10-m detector could detect the foreground with ρ_eff = 𝒪(10) if the axion mass μ∈ (3-4)×10^-11 eV, while in the (4-4.5)×10^-11 eV range, only the peak of the foreground rises to the threshold, and just barely so (Fig. <ref>). A 100-m instrument could detect the foreground with large ρ_eff if μ∈ (3-6)×10^-11 eV (Fig. <ref>). In the range (3-3.5)×10^-11 eV, the peak value of ρ_eff is 𝒪(10^3), and remains 𝒪(10^2) up to 5× 10^-11 eV.
§ CONCLUSION
We have produced Galactic-scale populations of the hypothetical GW sources known as `axion clouds' with the axion mass chosen to correspond to frequencies in the 10-100 kHz band. By computing superradiant bound-states up to n = 9, we have accounted for nearly all clouds with growth timescales less than the age of the universe.
The largest number of clouds occurs for the lightest boson mass capable of producing GW's at the frequencies of interest. This was to be expected, as superradiance occurs more readily for small α∝μ M_BH. For a BH of mass M ≥ 5 M_⊙, the smallest value of α is obtained with the smallest allowed boson mass, 3×10^-11 eV. In this most optimistic case, the total number of extant clouds is close to 1 million.
The population of axion clouds has been assumed to be spatially-distributed within the Milky Way in the same way as the stellar disks and central bulge. Statistically, some may be near enough that the continuous monochromatic signal can be detected by observing over a long enough period of time, e.g. 10^7 s, such that the SNR rises above a given threshold ρ_t; we have imposed a stringent threshold ρ_t = 10, but we leave it for future work to determine the most appropriate threshold for our search pipeline. For a 100-m instrument, several hundred resolvable signals are predicted to occur if μ≈ 3×10^-11 eV, but this number could be upwards of an order-of-magnitude larger if the total number of stellar-origin BH's is also larger than we have assumed (see the comment made at the beginning of Sec. VI). For a 10-m detector, only 𝒪(1) resolvable signals occur in our simulation at μ = 3×10^-11 eV.
Meanwhile, the ensemble of unresolved signals produces a confusion foreground which is estimated to be detectable with potentially large SNR by a 100-m LSD, assuming μ∈(3-6)×10^-11 eV, or by a 10-m instrument at moderate SNR, assuming μ∈(3-4.5)×10^-11 eV.
Finally, we note the following limitations of this work, as well as directions for future work: First, since isolated BH's have no EM counterpart, we do not know, ahead of time, the direction to these GW sources. Targeted & directed searches for axion clouds will, therefore, not be possible for isolated BH's, and we must resort to blind searches. A full template search will require us to pixelate the sky, and it is an open question what is the smallest angular size Δθ for which the number of templates in a blind all-sky search is not prohibitively large: We do not want the time required for data analysis to become larger than the four-month observation period. Transverse proper motions will also need to be accounted for if, over the observation time, a source moves out of the pixel on the sky it was in initially.
Diurnal and annual modulations of the GW frequency & amplitude must also be included in our detection scheme. The angular dependence of the detector sensitivity, encoded in the detector's pattern functions, produces a daily modulation of the signal amplitude. The rotational and orbital motions of the Earth produce time-varying Doppler shifts. The Doppler modulation can be corrected, but doing so requires knowledge of the source position to a precision determined by the observation time and GW frequency.
In this work, we have used the SNR as a baseline detection statistic. We have not yet determined what are the most appropriate detection statistics & criteria for the monochromatic signals from individual clouds. Moreover, our treatment of the confusion foreground has not accounted for the intrinsic anisotropy of the signal: The axion clouds will be distributed throughout the disks and bulge of the Milky Way, so the strength of the foreground will vary over the sky in a complicated way. Additionally, searches for stochastic signals typically involve an `excess-power' method, as well as cross-correlation between multiple detectors. Plans to build a second 1-m instrument at UC Davis (in addition to the Northwestern detector) are in development, so while a single 1-m detector might not have the requisite sensitivity, the prospects for a two- or multi-detector scheme are an exciting avenue of future study.
We would like to thank Vedant Dhruv for making public his Mathematica notebook for scalar bound-states in Kerr. We also thank Timothy Kovachy for clarifying issues of numerical precision when using Mathematica's root-finding routines; and Richard Brito for several clarifying discussions on axion clouds & their gravitational-wave emission. JS, AG, and SL are supported by the W.M. Keck Foundation. AG, GW, and NA are supported in part by NSF grants PHY-2110524 and PHY-2111544, the Heising-Simons Foundation, the John Templeton Foundation, and ONR Grant N00014-18-1-2370. NA is partially supported by the CIERA Postdoctoral Fellowship from the Center for Interdisciplinary Exploration and Research in Astrophysics at Northwestern University and the University of California-Davis.
SL is also supported by EPSRC International Quantum Technologies Network Grant EP/W02683X/1 and is grateful for EPSRC support through Standard Research Studentship (DTP) EP/R51312X/1. VK is supported by a CIFAR Senior Fellowship and through Northwestern University through the D.I. Linzer Distinguished University Professorship. A.L. is supported by the Fannie and John Hertz Foundation.
This work used
the Quest computing facility at Northwestern.
§ SUPERRADIANT BOUND-STATES
The creation of an axion cloud corresponds to an instability of the Kerr space-time due to the presence of a massive scalar field. The amplifying mechanism, `superradiance', is the Penrose process in which rotational energy is extracted by a bosonic wave rather than by a particle. In the process, the Kerr BH loses mass and angular momentum, subject to the condition that its `irreducible mass' does not decrease.
In the Penrose scenario, a particle travelling through a BH's ergoregion can split in two, one of which falls into the hole, while the other escapes to infinity. If the orbital angular momentum of the infalling particle is of opposite sign to that of the hole, the BH loses rotational energy to the escaping particle: Energy has been extracted from the ergoregion.
The story for waves runs analogously: An incident wave with amplitude ℐ splits into a part transmitted into the BH (with amplitude 𝒯) and a part which escapes (the reflected wave with amplitude ℛ). If the transmitted wave is counter-rotating, the rotational energy of the BH decreases, leading to an outgoing wave with ℛ > ℐ.
The novelty of a massive scalar field is that its mass acts like a mirror: Unlike a massless field, a massive field can become trapped in a bound-orbit, leading to continuous extraction of rotational energy. The end result of the runaway amplification is a macroscopic scalar field bound-state – the `axion cloud'. In an astrophysical context, rather than a wave incoming from infinity, the initial seed for superradiance can be any arbitrary quantum fluctuation in the scalar field, even if the field is in its classical ground state <cit.><cit.>. As a result, the growth of an axion cloud begins immediately after the birth of a BH.
An axion cloud's binding energy (which determines the GW frequency) and growth timescale depend on the dynamics of the scalar field. For the scenario we have adopted, the field obeys the Klein-Gordon equation on the Kerr space-time. The Kerr metric describes an axisymmetric, neutral, and rotating black hole:
ds^2 = -[1 - 2GMr/c^2ρ^2]c^2dt^2 - 4GMar sin^2θ/c^2ρ^2cdtdϕ
+ ρ^2/Δdr^2 + ρ^2dθ^2 + [r^2 + a^2 + 2GMa^2rsin^2θ/c^2ρ^2]sin^2θ dϕ^2
where M is the BH mass, J is the BH angular momentum, ρ^2≡ r^2 + a^2cos^2θ, a ≡ J/(Mc) is the Kerr parameter, and Δ≡ r^2 - 2r_gr + a^2, where we have defined the gravitational radius r_g≡ GM/c^2. In terms of the dimensionless Kerr parameter, χ≡ a/r_g = Jc/(GM^2), the inner and outer horizons – the two roots of Δ = (r - r_+)(r - r_-) – are
r_± = r_g[1 ±√(1 - χ^2)]
It follows that χ is restricted to the interval
0 < χ < 1
The event horizon is located at r = r_+, and the angular velocity of the horizon is
Ω_H = c χ/2 r_+
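As a small numerical aside, these horizon quantities can be evaluated directly (a Python sketch in SI units with rounded constants; the function name is illustrative):

import numpy as np

G, c = 6.674e-11, 2.998e8
M_SUN = 1.989e30

def kerr_horizon(M_solar, chi):
    # Outer horizon radius r_+ = r_g (1 + sqrt(1 - chi^2)) and horizon angular
    # velocity Omega_H = c chi / (2 r_+) for a BH of mass M (solar masses)
    # and dimensionless spin 0 < chi < 1.
    r_g = G * (M_solar * M_SUN) / c**2
    r_plus = r_g * (1.0 + np.sqrt(1.0 - chi**2))
    omega_H = c * chi / (2.0 * r_plus)
    return r_plus, omega_H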
The scalar field obeys the Klein-Gordon equation,
[∇_μ∇^μ - m_*^2]Φ(x, t) = 0
where ∇_μ is the covariant derivative with respect to the Kerr metric, and, as mentioned in the text, m_* has the quantum-mechanical interpretation as the reciprocal of the boson's Compton wavelength. In Boyer-Lindquist coordinates, the Klein-Gordon equation is separable via the ansatz
Φ(x, t) = Re[ e^-i ω te^i m ϕ S(θ) R(r)]
Invoking the identity
∇_μ∇^μΦ = 1/√(-g)∂_μ[√(-g) g^μν∂_νΦ]
√(-g) = ρ^2sinθ
the Klein-Gordon equation separates into two 2^nd-order linear homogeneous ODE's for R(r) and S(θ):
𝒟_θ[S] + [χ^2α^2(ξ^2 - 1 )cos^2θ -m^2/sin^2θ + Λ]S(θ) = 0
𝒟_r[R] + [α^2ξ^2(r^2 + χ^2)^2 - 4χ m αξ r + m^2χ^2
- Δ(α^2r^2 + χ^2α^2ξ^2 + Λ) ]R(r) = 0
𝒟_θ≡1/sinθd/dθ[sinθd/dθ], 𝒟_r≡Δd/dr[Δd/dr]
We have expressed the decoupled equations in terms of the dimensionless variables (χ, α and ξ) used in the main text. The radial coordinate in <ref> is measured in units of r_g.
Bound-state solutions must go to zero at infinity and be in-going at the event horizon. The in-going condition means that R(r) ∝ e^-ikr_* as r_*→ -∞, with r_* the Kerr tortoise coordinate which maps the event horizon to -∞,
dr_*/dr = r^2 + a^2/Δ
This means that plane waves at the event horizon (r_*→ -∞) can only move `to the left', i.e. into the black hole.
The spectra of both bound-states and BH quasi-normal modes can be found via Leaver's continued-fraction method <cit.> <cit.>. The radial function R(r) is represented by an infinite series,
R(r) = (r - r_+)^{-iσ}(r - r_-)^{iσ + β - 1}e^{qr}∑_{n = 0}^{∞}a_n((r - r_+)/(r - r_-))^n
σ = α(1 + √(1 - χ^2))(ξ - ξ_crit)/√(1 - χ^2)
q = α√(1 - ξ^2)
β = α^2(1 - 2ξ^2)/q
(The quantity we denote by β is the same as the quantity denoted by χ in Ref. <cit.>.) With this ansatz, <ref> implies a three-term recurrence relation for the unknown coefficients a_n,
α_n a_{n+1} + β_n a_n + γ_n a_{n-1} = 0, n = 1, 2, …
a_1 = -β_0/α_0a_0
where the coefficients α_n, β_n and γ_n are defined by
α_n = n^2 + (c_0 + 1)n + c_0
β_n = -2n^2 + (c_1 + 2)n + c_3
γ_n = n^2 + (c_2 - 3)n + c_4
and c_0, c_1, c_2, c_3 and c_4 are given by
c_0 = 1 - 2 i αξ -2 i (αξ -m χ/2)/√(1-χ ^2)
c_1 = -4 + 4i[αξ - iα√(1-ξ ^2)(1 + √(1 - χ^2))]
+ 4i (αξ -m χ/2)/√(1 - χ^2) - 2 [α ^2 ξ ^2+α ^2 (1-ξ^2)]/α√(1 - ξ^2)
c_2 = 3 - 2i αξ - 2 [α^2 (1 - ξ^2)-α^2 ξ^2]/α√(1-ξ^2) - 2 i (αξ -m χ/2)/√(1 - χ^2)
c_3 = 2i (αξ - iα√(1 -ξ^2))^3/α√(1 - ξ^2) + χ^2α^2(1 - ξ^2)
- Λ_lm - 1 + 2√(1 - χ^2)(αξ - iα√(1 - ξ^2))^2
+ 2imχα√(1 - ξ^2) - (αξ - iα√(1 - ξ^2))^2/α√(1 - ξ^2)
+ 2α√(1 - ξ^2)√(1 -χ^2)
+ 2i/√(1 - χ^2)[1 + (αξ - iα√(1 - ξ^2))^2/α√(1 - ξ^2)][αξ - mχ/2]
c_4 = (αξ - iα√(1 - ξ^2))^4/α^2 (1 - ξ^2) + 2iξ(αξ - iα√(1 - ξ^2))^2/√(1 - ξ^2)
- 2i (αξ - iα√(1 - ξ^2))^2 (αξ
-mχ/2)/α√(1 - ξ^2)√(1 - χ^2)
The series coefficients are related by an infinite continued-fraction <cit.>
a_{n+1}/a_n = -γ_{n+1}/(β_{n+1} - α_{n+1}γ_{n+2}/(β_{n+2} - …))
Continued-fractions are commonly written in the slightly-less cumbersome notation
a_{n+1}/a_n = -γ_{n+1}/β_{n+1}- α_{n+1}γ_{n+2}/β_{n+2}- α_{n+2}γ_{n+3}/β_{n+3}- …
Since a_1/a_0 = -β_0/α_0, we obtain a condition whose roots are the desired bound-state frequencies:
β_0 - α_0γ_1/β_1 -α_1γ_2/β_2 -α_2γ_3/β_3 -… = 0
Strictly speaking, the radial and angular eigenvalues, ξ and Λ, must be found simultaneously. Leaver's method can also be applied to <ref> <cit.>, resulting in a continued-fraction condition analogous to <ref>. We then have two equations for the two unknowns.
Conveniently, we can reduce the root-finding problem to merely solving <ref> by using the Mathematica function `SpheroidalEigenvalue'. With the change of variable z = cosθ, and in terms of the following quantities,
γ^2≡χ^2α^2(1 - ξ^2)
λ≡Λ - γ^2
the angular equation <ref> takes the standard form implemented in Mathematica:
(1 - z^2)d^2S/dz^2 - 2zdS/dz + [γ^2(1 - z^2) + λ - m^2/1 - z^2]S(z) = 0
`SpheroidalEigenvalue' yields λ, and `SpheroidalPS' yields S(z). The continued-fraction equation <ref>, with Λ replaced by γ^2 + `SpheroidalEigenvalue', can then be solved for ξ with the Mathematica function FindRoot.
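Outside of Mathematica, the same root-finding step can be sketched in a few lines of Python (the coefficient functions, truncation depth and starting guesses are placeholders; in particular, coeffs must implement the α_n, β_n, γ_n above with the spheroidal eigenvalue already folded into c_3):

def bound_state_condition(xi, coeffs, n_max=300):
    # coeffs(xi) must return three callables giving alpha_n, beta_n, gamma_n
    # for the trial eigenvalue xi (alpha, chi and m are assumed fixed inside
    # coeffs). The continued fraction is truncated at depth n_max and
    # evaluated bottom-up.
    a_n, b_n, g_n = coeffs(xi)
    tail = 0.0 + 0.0j
    for n in range(n_max, 0, -1):
        tail = a_n(n - 1) * g_n(n) / (b_n(n) - tail)
    return b_n(0) - tail

def find_xi(coeffs, x0=0.98 + 1e-9j, x1=0.985 + 1e-9j, tol=1e-12, itmax=200):
    # Secant iteration in the complex plane, playing the role of FindRoot.
    # Bound states have Re xi slightly below unity (q = alpha sqrt(1 - xi^2)
    # must have a positive real part), which motivates the starting guesses.
    f0, f1 = bound_state_condition(x0, coeffs), bound_state_condition(x1, coeffs)
    for _ in range(itmax):
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tol:
            return x2
        x0, f0, x1, f1 = x1, f1, x2, bound_state_condition(x2, coeffs)
    return x1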
For our axion cloud simulations, we have needed to compute ξ for bound-states up to, and including, n=9. As an example, we have plotted the real & imaginary parts of the n=8 bound-state in Figs. <ref>, <ref> and <ref>; Figs. <ref> and <ref> are analogous to Figs. <ref> and <ref>.
|
http://arxiv.org/abs/2409.02892v1 | 20240904173446 | Accelerating Kaluza-Klein black hole and Kerr/CFT correspondence | [
"Haryanto M. Siahaan"
] | hep-th | [
"hep-th",
"gr-qc"
] | |
http://arxiv.org/abs/2409.03605v1 | 20240905151140 | SegTalker: Segmentation-based Talking Face Generation with Mask-guided Local Editing | [
"Lingyu Xiong",
"Xize Cheng",
"Jintao Tan",
"Xianjia Wu",
"Xiandong Li",
"Lei Zhu",
"Fei Ma",
"Minglei Li",
"Huang Xu",
"Zhihu Hu"
] | cs.CV | [
"cs.CV",
"cs.MM"
] |
South China University of Technology
Guangzhou
China
[email protected]
[1]
Zhejiang University
Hangzhou
China
[email protected]
South China University of Technology
Guangzhou
China
Huawei Cloud Computing Technologies Co., Ltd
Shenzhen
China
Huawei Cloud Computing Technologies Co., Ltd
Shenzhen
China
Peking University
Shenzhen
China
Guangdong Laboratory of Artificial Intelligence and Digital Economy (SZ)
Shenzhen
China
Huawei Cloud Computing Technologies Co., Ltd
Shenzhen
China
Huawei Cloud Computing Technologies Co., Ltd
Shenzhen
China
Corresponding author.
South China University of Technology
Guangzhou
China
[email protected]
§ ABSTRACT
Audio-driven talking face generation aims to synthesize video with lip movements synchronized to input audio. However, current generative techniques face challenges in preserving intricate regional textures (skin, teeth). To address the aforementioned challenges, we propose a novel framework called SegTalker to decouple lip movements and image textures by introducing segmentation as intermediate representation. Specifically, given the mask of image employed by a parsing network, we first leverage the speech to drive the mask and generate talking segmentation. Then we disentangle semantic regions of image into style codes using a mask-guided encoder. Ultimately, we inject the previously generated talking segmentation and style codes into a mask-guided StyleGAN to synthesize video frame. In this way, most of textures are fully preserved. Moreover, our approach can inherently achieve background separation and facilitate mask-guided facial local editing. In particular, by editing the mask and swapping the region textures from a given reference image (e.g. hair, lip, eyebrows), our approach enables facial editing seamlessly when generating talking face video. Experiments demonstrate that our proposed approach can effectively preserve texture details and generate temporally consistent video while remaining competitive in lip synchronization. Quantitative and qualitative results on the HDTF and MEAD datasets illustrate the superior performance of our method over existing methods.
<ccs2012>
<concept>
<concept_id>10010147.10010178.10010224.10010225</concept_id>
<concept_desc>Computing methodologies Computer vision tasks</concept_desc>
<concept_significance>500</concept_significance>
</concept>
<concept>
<concept_id>10002951.10003227.10003251.10003256</concept_id>
<concept_desc>Information systems Multimedia content creation</concept_desc>
<concept_significance>500</concept_significance>
</concept>
</ccs2012>
[500]Computing methodologies Computer vision tasks
[500]Information systems Multimedia content creation
SegTalker: Segmentation-based Talking Face Generation with Mask-guided Local Editing
Zhihu Hu
September 9, 2024
====================================================================================
§ INTRODUCTION
Talking face generation, which aims to synthesize facial imagery precisely synchronized with input speech, has garnered substantial research attention in the field of computer vision and multimedia <cit.> due to its numerous applications, including digital humans, virtual conferencing and video dubbing <cit.>.
There are many attempts to realize high-fidelity talking face. Early approaches first predict mouth shapes from speech using recurrent neural networks, then generate the face conditioned on the shapes <cit.>. Recent end-to-end methods directly map speech spectrograms to video frames leveraging different intermediate representations <cit.>. Zhang et al. <cit.> takes advantage of 3D Morphable Models (3DMMs), a parametric model that decomposes expression, pose, and identity, to transfer facial motions. Zhou et al. <cit.> employs the landmark as the representation. Meshry et al. <cit.> factorize the talking-head synthesis process into spatial and style components through coarse-grained masks, but they do not facilitate texture disentanglement and facial editing. More recently, Kicanaoglu et al. <cit.> perform unsupervised vector quantization on intermediate feature maps of StyleGAN to generate abundant semantic regions for local editing. Despite improvements in photo-realism, current talking face methods still face challenges in preserving identity-specific details such as hair, skin textures and teeth. Furthermore, within the current landscape of talking face generation methods, there is no single technique that can concurrently accomplish facial editing and background replacement. Our method elegantly incorporates facial editing into talking face generation in an end-to-end manner through the intermediate representation of segmentation.
In this paper, we aim to design a unified approach that realizes controllable talking face synthesis and editing. We propose a novel framework termed SegTalker that explicitly disentangles textural details from lip movements by utilizing segmentation. Our framework consists of an audio-driven talking segmentation generation (TSG) module, followed by a segmentation-guided GAN injection (SGI) network to synthesize animation video. We utilize a pre-trained network <cit.> to extract segmentation as prior information to decompose semantic regions and enhance textural details, seamlessly enabling fine-grained facial local editing and background replacement. Specifically, given the input image and speech, we first conduct face parsing to obtain the segmentation. Subsequently, the TSG module extracts image and speech embeddings and then combines these embeddings to synthesize a new segmentation with lips synchronized to the input speech. After that, the SGI module employs a multi-scale encoder to project the input face into the latent space of StyleGAN <cit.>. Each facial region has a set of style codes for different layers of the StyleGAN generator. Then we inject the synthesized mask and style codes into the mask-guided generator to obtain the talking face. In this way, the structural information and textures of facial components are fully disentangled. Furthermore, facial local editing can be accomplished by simply modifying the synthesized mask or swapping the region textures from a given reference image, achieving seamless integration with talking face synthesis.
In summary, our contributions are:
* We propose a novel framework that utilizes segmentation as an intermediate representation to disentangle the lip movements with image reconstruction for talking face generation, achieving consistent lip movements and preserving fine-grained textures.
* We employ a multi-scale encoder and mask-guided generator to realize the local control for different semantic regions. By manipulating the masks and smoothly swapping the textures, we can seamlessly integrate the facial local editing into the talking face pipeline and conduct swapping background.
* Experiments on HDTF and MEAD datasets demonstrate our superiority over state-of-the-art methods in visual quality, ID preservation and temporal consistency.
§ RELATED WORK
§.§ Audio-driven Talking Face Generation
Talking face generation, which aims to synthesize photo-realistic video of a talking person giving a speech as input, has garnered increasing research attention in recent years. With the emergence of generative adversarial networks (GANs) <cit.>, many methods <cit.> have been proposed for synthesizing animation video. In terms of the intermediate representations, the existing works can be categorized into landmark-based, 3D-based and others. In the landmark-based methods, Suwajanakorn et al. <cit.> use recurrent neural network (RNN) to build the mapping from the input speech to mouth landmark, and then generate mouth texture. Zhou et al. <cit.> combine LSTM and self-attention to predict the locations of landmarks. Zhong et al. <cit.> utilizes transformer to predict landmarks, then combines multi-source features (prior information, landmarks, speech) to synthesize talking face. Recently, DiffTalk <cit.> takes speech and landmarks as conditioned inputs and utilizes a latent diffusion model <cit.> to generate talking faces. For 3DMM-based method, SadTalker <cit.> learns realistic 3D motion coefficients for stylized audio-driven single-image talking face animation, achieving high-quality results by explicitly modeling audio-motion connections. Some styleGAN-based method <cit.> such as StyleHEAT <cit.> leverages a pre-trained StyleGAN to achieve high-resolution editable talking face generation from a single portrait image, allowing disentangled control via audio. More recently, the emergence of neural radiance field (NeRF) provides a new perspective for 3D-aware talking face generation <cit.>.
However, these intermediate representations have difficulty capturing fine-grained details and preserving identity-specific details, e.g., teeth and skin textures, which degrades the visual quality heavily. Wav2Lip <cit.> adopts the encoder-decoder architecture to synthesize animation videos. However, there are conspicuous artifacts and a low resolution in the synthesized videos. In this work, we employ a novel representation, segmentation, to disentangle lip movement from image reconstruction, and further extract per-region features to preserve texture details.
§.§ GAN Inversion
GAN inversion aims to invert real images into the latent space of a generator for reconstruction and editing. Several StyleGAN inversion methods have been proposed; they can typically be divided into three major groups: 1) gradient-based optimization of the latent code <cit.>, 2) encoder-based methods <cit.> and 3) fine-tuning methods <cit.>. The gradient-based optimization methods directly optimize the latent code using gradients from the loss between the real image and the generated one. The encoder-based methods train an encoder network over a large number of samples to directly map the RGB image to a latent code. The gradient-based optimization methods always give better performance while the encoder-based methods cost less time. The fine-tuning methods make a trade-off between the above two and use the inverted latent code from the encoder as the initialization for further optimization. However, existing works focus on global editing and cannot provide fine-grained control of local regions. Our method uses a variation of <cit.> to realize local editing by manipulating a novel 𝒲^c+ <cit.> latent space.
§ PROPOSED METHODS
§.§ Overview
To tackle the lack of regional textures in talking face generation, we explicitly disentangle semantic regions by introducing segmentation mechanism. Leveraging segmentation as an intermediate representation, our approach decouples audio-driven mouth animation and image texture injection. The speech is solely responsible for driving the lip contours, while the injection module focuses on extracting per-region textures to generate the animation video. The overall framework of our proposed model, termed SegTalker, is illustrated in <ref>. The pipeline consists of two sub-networks: (1) talking segmentation generation (TSG) and (2) segmentation-guided GAN injection network (SGI), which are elaborated in <ref> and <ref>, respectively.
§.§ Talking Segmentation Generation
The first proxy sub-network is the talking segmentation generation (TSG) module. Given speech and an image frame, this network first employs a parsing network <cit.> to extract the mask and then synthesizes the talking segmentation. The original network generates 19 categories in total. For the sake of simplicity, we merge the same semantic classes (e.g. left and right eyes), resulting in 12 final classes. During pre-processing, video is unified to 25fps with speech sampled at 16kHz. To incorporate temporal information, following <cit.>, the global and local features are extracted as the speech embedding. We employ a mask encoder to extract the visual embedding from two masks: a pose source and an identity reference. The two masks are concatenated in the channel dimension. The pose source aligns with the target segmentation but with the lower half occluded. The identity reference provides facial structural information of the lower half to facilitate training and convergence. Without considering textural information, the model only focuses on learning the structural mapping from speech to lip movements.
We employ a CNN-based network to extract the embedding of a 0.2-second audio segment whose center is synchronized with the pose source. Similar to text, speech always contains sequential information. To better capture temporally relevant features, we employ the pre-trained AV-Hubert <cit.> as part of the audio encoder to extract long-range dependencies. AV-Hubert has been pre-trained for audio-visual alignment <cit.>, so the extracted embedding is very close to the semantic space of the video. When using AV-Hubert to extract the audio embedding, we only need to feed the speech while the visual signal is masked. Specifically, given a 3s speech chunk, we feed it into the Transformer-based AV-Hubert to produce contextualized speech features. We then extract the feature embedding corresponding to the given image segment. Given the mixed speech embedding and visual embedding, the generator synthesizes the final talking segmentation. We adopt U-Net <cit.> as the backbone architecture. In addition, skip connections and transposed convolutions are utilized for feature fusion and up-sampling.
Given a mask synthesized by the model and the ground truth mask, we employ two types of losses to improve generation quality i.e., the reconstruction loss and the syncnet loss.
Reconstruction Loss
Unlike previous generative tasks that often synthesize RGB images and adopt an L1 loss for reconstruction, talking segmentation involves generating a segmentation where each pixel denotes a particular class. To stay consistent with the semantic segmentation task, we employ the cross entropy loss as our reconstruction loss. Given N_i generated masks y_i and ground truth ŷ_i with M region categories, the cross entropy loss is defined as:
ℒ_𝒸ℯ = -1/N_i∑_i^N_i∑_c=1^My_iclog(ŷ_ic)
where y_ic denotes the one-hot encoded vector for the i-th generated segmentation belonging to the c-th category. For generated segmentation, different classes occupy varying proportions of areas. Semantically important regions like lips and eyes constitute small fractions, while background dominates most areas. To mitigate the class imbalance issue, a weighted cross entropy is formulated as:
ℒ_𝓌-𝒸ℯ = -1/N_i∑_i^N_i∑_c=1^M w_c y_iclog(ŷ_ic)
where w_c denotes the weight for the corresponding category and is determined on the inverse proportionality of the areas of different regions on the whole dataset.
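In practice this weighted objective is the standard class-weighted cross-entropy; a minimal PyTorch sketch is given below (tensor shapes and the exact weight normalization are illustrative choices rather than the precise training configuration):

import torch
import torch.nn.functional as F

def weighted_ce_loss(logits, target, class_areas):
    # logits: (N, M, H, W) raw scores over the M semantic regions.
    # target: (N, H, W) integer ground-truth segmentation.
    # class_areas: (M,) tensor of average pixel fractions of each region over
    # the dataset; the weights w_c are taken inversely proportional to them.
    w = 1.0 / class_areas.clamp(min=1e-6)
    w = w / w.sum() * len(w)          # normalize so the mean weight is 1
    return F.cross_entropy(logits, target, weight=w)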
SyncNet Loss
The reconstruction loss mainly restores images at the pixel level without effective semantic supervision. Therefore, we train a segmentation-domain SyncNet from <cit.> to supervise lip synchronization. During training, a speech chunk is randomly sampled from speech sequences, which can be either synchronized (positive example) or unsynchronized (negative example). The SyncNet consists of a speech encoder and a mask encoder. For the mask, we use one-hot encoding as input and concatenate T_v masks along the channel dimension. Specifically, the SyncNet takes inputs of a window T_v of consecutive lower-half frames and a speech segment S. After passing through the speech encoder and mask encoder, 512-dim embeddings s = E_speech(S) and m = E_mask(T_v) are obtained respectively. Cosine similarity distance and binary cross entropy loss are then calculated between the embeddings. The losses are formally defined as:
P_sync = s · m/max(||s||_2 ·||m||_2, ϵ)
ℒ_𝓈𝓎𝓃𝒸 = 1/N∑_i^N -log(P^i_sync)
where P_sync is a single value between [0, 1] and N is the batch size. ϵ is used to prevent division by zero.
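The cosine-similarity probability and sync loss above can be written compactly in PyTorch (a sketch for a batch of synchronized pairs; the encoders themselves are omitted):

import torch

def sync_loss(speech_emb, mask_emb, eps=1e-8):
    # speech_emb, mask_emb: (N, 512) embeddings of synchronized speech/mask pairs.
    num = (speech_emb * mask_emb).sum(dim=1)
    den = (speech_emb.norm(dim=1) * mask_emb.norm(dim=1)).clamp(min=eps)
    p_sync = (num / den).clamp(min=eps, max=1.0)   # kept inside (0, 1]
    # For unsynchronized (negative) pairs the BCE target is 0, i.e. -log(1 - p_sync).
    return -torch.log(p_sync).mean()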
We train the lip-sync expert on the HDFT dataset <cit.> with a batch size of 8, T_v = 5 frames, S = 0.2s segment, using the Adam optimizer with a learning rate of 1e-4. After approximately one day of training, the model converges. Our expert network eventually achieves 81% accuracy on the test set.
§.§ Segmentation-guided GAN Injection
The second sub-network, illustrated in <ref>, is the segmentation-guided GAN injection (SGI) network. Given a portrait and its corresponding mask, SGI first encodes the image into the latent space to obtain the latent code, then inverts the generated latent code back to the image domain through style injection.
There exist various latent spaces such as 𝒲, 𝒲+ and 𝒮 space. Many works <cit.> have investigated their representational abilities from the perspectives of distortion, perception, and editability. Here, we choose 𝒲^c+ space, a variation of 𝒲+ space originated from <cit.> as representation of latent code. To leverage this representation, a powerful encoder is required to accurately map each input image to a corresponding code. Although many encoders <cit.> have been proposed, they focus on extracting global latent code for global editing, such as age, emotion, making them unsuitable for textures disentanglement and local editing. To this end, we adopt a variation of <cit.> for latent code extraction. The encoder utilizes a feature pyramid network (FPN) <cit.> for feature fusion, ultimately generating fine-grained, medium-grained, and coarse-grained feature maps at three different scales. The mask is then resized to match each feature map. Subsequently, a global average pooling (GAP) is employed to extract semantic region features according to the segmentation, resulting in multi-scale style vectors. These are concatenated and further passed through an MLP to obtain the 𝒲^c+ style codes.
Specifically, given a source image I and its corresponding mask M, we first utilize a multi-scale encoder E_ϕ to obtain the feature maps F = [F_i]_i=1^N at different resolutions:
F = E_ϕ(I)
Here N is equal to three. We then aggregate per-region features based on the mask M and features F. Specifically, for each feature map F_i, we first downsample the mask to match the feature map size, then perform global average pooling (GAP) to aggregate features for different regions:
u_ij = GAP(F_i ∘ (Down(M)_i = j)), {j = 1, 2, ..., C}
where u_ij denotes the averaged feature of region j in feature map i, C is the number of semantic regions, GAP(·) is the global average pooling over the selected region, Down(·) is the downsampling operation to align with F_i, and ∘ is the element-wise product. Subsequently, the multi-scale feature vectors {u_ij}^N_i=1 of region j are concatenated and passed through a multi-layer perceptron (MLP) to obtain the style codes:
s_j = MLP([u_ij]_i=1^N)
where s_j denotes the style code of j-th categories. Then, the mask and style codes s∈ℝ^C×18×512 are fed into the mask-guided StyleGAN generator to synthesize the talking face. For the detailed architecture of the Mask-guided StyleGAN, please refer to the supplement.
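The masked pooling and MLP mapping above can be sketched as follows (a simplified PyTorch illustration; using one small MLP per region and a flattened 18×512 output are choices made for the sketch, not necessarily the exact implementation):

import torch
import torch.nn.functional as F

def region_style_codes(feats, mask, mlps):
    # feats: list of N multi-scale feature maps F_i of shape (B, C_i, H_i, W_i).
    # mask : (B, H, W) integer segmentation with C semantic regions.
    # mlps : list of C modules mapping the concatenated per-region features
    #        to the 18 x 512 style codes of the W^c+ space.
    codes = []
    for j in range(len(mlps)):
        parts = []
        for F_i in feats:
            m = (F.interpolate(mask[:, None].float(), size=F_i.shape[-2:],
                               mode="nearest") == j).float()   # Down(M)_i = j
            area = m.sum(dim=(2, 3)).clamp(min=1.0)
            parts.append((F_i * m).sum(dim=(2, 3)) / area)     # masked GAP -> u_ij
        codes.append(mlps[j](torch.cat(parts, dim=1)))         # style code s_j
    return torch.stack(codes, dim=1)                           # (B, C, 18*512)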
Prior Learning
To seamlessly integrate SGI into the overall framework, we randomly select a mask from the images within a 15-frame range of the input image. Through such a training strategy, the model can learn the priors of semantic regions like teeth and eyes. Specifically, when given an image with a closed mouth but a randomly selected mask corresponding to a visible-teeth state, the model learns to capture the prior information of the teeth and can naturally connect with the TSG module.
Loss Functions SGI is trained with a series of weighted objectives: pixel-wise and LPIPS losses for perceptual quality, an ID loss to prevent identity drift, a face parsing loss and an adversarial loss. The details are described in our Appendix.
§ EXPERIMENTS
§.§ Experimental settings
Dataset Since StyleGAN <cit.> typically generates high resolution images, e.g. 512 or 1024, while most existing talking face datasets have a lower resolution of 256 or below, we opt to train on the HDTF dataset <cit.> for high-quality talking face synthesis. The HDTF dataset is collected from YouTube website published in the last two years, comprising around 16 hours of videos ranging from 720P to 1080P resolution. It contains over 300 subjects and 10k distinct sentences. We collected a total of 392 videos, with 347 used for training and the remaining 45 for testing. The test set comprises videos with complex backgrounds and rich textures, thereby offering a comprehensive evaluation of the model performance.
Metrics
We conduct quantitative evaluations on several widely used metrics. To evaluate the lip synchronization, we adopt the
confidence score of SyncNet <cit.> (Sync) and Landmark Distance around mouths (M-LMD) <cit.>. To evaluate the accuracy of generated facial expressions, we adopt the Landmark Distance on the whole face (F-LMD). To evaluate the quality of generated talking face videos, we adopt PSNR <cit.>, SSIM <cit.>, FID <cit.> and LPIPS <cit.>. To measure the Temporal coherence of generated videos, we employ FVD <cit.>. Higher scores indicate better performance for Sync, PSNR, and SSIM, while lower scores are better for F-LMD, M-LMD, FID, LPIPS and FVD.
Implementation Details
We use PyTorch <cit.> to implement our framework. We train the TSG module on a single NVIDIA A100 GPU with 40GB, while the SGI module is trained on 4 NVIDIA A100 GPUs. In stage 1, we crop and resize faces to 512×512. Speech waveforms are pre-processed into mel-spectrograms with a hop length of 12.5ms, a window length of 50ms, and 80 mel bins. The batch size is set to 20 and the Adam solver with an initial learning rate of 1e-4 (β_1=0.5, β_2=0.999) is utilized for optimization. In stage 2, we set the batch size to 4 for each GPU and initialize the learning rate as 1e-4 with the Adam optimizer (β_1=0.9, β_2=0.999). The generator is initialized with StyleGAN weights <cit.>. In the first stage, we train the model with only the cross entropy loss for approximately 50K iterations, then incorporate the expert SyncNet to supervise lip movements for an extra 50K iterations. In the second stage, we train the model for 400K iterations.
§.§ Experimental Results
Qualitative Talking Segmentation Results
In the first sub-network, we visualize the talking segmentation results illustrated in <ref>. It can be observed that the generated segmentations effectively delineate distinct facial regions, even elaborating details such as earrings. Additionally, the synthesized lips exhibit strong synchronization with the ground truth. Subsequently, the high-quality segmentations produced by TSG are utilized as guidance for the SGI to deliver the final output.
Quantitative Results
We compare several state-of-the-art methods: Wav2Lip <cit.>, SadTalker <cit.> (3DMM-based), DiffTalk <cit.> (diffusion-based), StyleHEAT <cit.> (StyleGAN-based) and AD-NeRF <cit.> (NeRF-based). We conduct the experiments in the self-driven setting on the test set, where the videos are not seen during training. Among these methods, the head poses of Wav2Lip, DiffTalk, and SegTalker are fixed in their samples. For the other methods, head poses are randomly generated. The results of the quantitative evaluation are reported in <ref>.
Our method achieves better visual quality and temporal consistency, and also shows comparable performance in terms of lip synchronization metrics. Since DiffTalk takes the ground truth landmarks as conditional input, it is reasonable for DiffTalk to achieve the lowest LMD in the self-driven setting. However, DiffTalk performs poorly in frame-to-frame coherence, especially with significant jitter in the mouth region (see supplementary video). In synchronization, despite scoring slightly lower on metrics relative to Wav2Lip, our method achieves a similar score to the ground truth videos. Furthermore, our method outperforms existing state-of-the-art approaches on both pixel-level metrics such as PSNR and high-level perceptual metrics including FID and LPIPS, thereby achieving enhanced visual quality. We additionally measure the FVD metric and our FVD score is the best. This means that our method is able to generate temporally consistent and visually satisfying videos. This is largely attributed to the implementation of the SGI module. By explicitly disentangling different semantic regions via segmentation, SGI can better preserve texture details during image reconstruction. Moreover, our method is the only approach that can simultaneously achieve facial editing and background replacement, which will be discussed in the following section.
Qualitative Results
To qualitatively evaluate the different methods, we perform uniformly sampled images from two synthesized talking face videos which are shown in <ref>. Specifically, the ground truth videos are provided in the first row where synthesized images of different methods follow the next and ours are illustrated in the bottom row. In comparison to Wav2lip <cit.>, our results exhibit enhanced detail in the lip and teeth regions.
SadTalker <cit.> employs single-frame animation, which inevitably causes background movement and generates artifacts when warping motion sequences. Additionally, it cannot handle scenarios with a changing background. The incorporation of segmentation in our approach allows high-quality background replacement. DiffTalk <cit.> can generate visually satisfying results; however, diffusion-based methods still face significant challenges in terms of temporal consistency. The mouth area of DiffTalk is prone to shaking, which leads to poor lip synchronization performance. StyleHEAT <cit.> is also a StyleGAN-based approach, but it cannot directly drive speech to generate a talking face video. Instead, it requires the assistance of SadTalker to extract features from the first stage, then warps the features to generate video. Therefore, the quality of the video generated by StyleHEAT is limited by the quality of the output generated by SadTalker.
AD-NeRF <cit.> is a NeRF-based method capable of generating a high-quality head part, but artifacts consistently exist at the connection between the head and neck. Moreover, its inference is time-consuming (10s per image) and requires fine-tuning for each speaker (about 20 hours).
In addition, to demonstrate the generalization ability of the proposed method, we conduct validation on another dataset, as illustrated in <ref>. Compared to other methods, our method excels in preserving texture details, particularly in fine-grained structural regions such as teeth.
In contrast, our method can produce more realistic and high-fidelity results while achieving accurate lip sync, satisfactory identity preservation and rich facial textures. For more comparison results, please refer to our demo videos in the supplement materials.
Disentangled Semantics Visualization
To demonstrate the disentanglement of the model across different semantic regions, we employ t-distributed stochastic neighbor embedding (t-SNE) <cit.> visualization to illustrate the per-region features, as depicted in <ref>(a). For clarity, we select eight sufficiently representative regions (which appear in all videos) and utilize the mask-guided encoder to extract style codes from these semantic regions. In <ref>(a), each region is marked with a distinct color. As shown, the style codes of the same region cluster in the style space and different semantic regions are clearly separated. This demonstrates that our mask-guided encoder can accurately disentangle different region features. Furthermore, in <ref>(b), we visualize the features of different IDs within a particular region to demonstrate the capability of the encoder. It can be seen that the style codes of different IDs are fully disentangled and our model can learn meaningful features.
Facial Editing and Swapping Results
Our method also supports facial editing and background swaps while generating video. Given a reference image and a sequence of source images, our method can transfer the candidate region texture to the source images. As depicted in <ref>, we illustrate three local editing tasks, including fine-grained hair editing, lip makeup, and eyebrow modifications. Besides, we can also manipulate blinking in a controllable manner by simply editing the eye regions of the mask, as illustrated in <ref>. Compared with existing blink methods, our method does not require a specialized module for blink editing and additionally enables other types of local editing, substantially enhancing model applicability and scalability. Additionally, our model intrinsically disentangles the foreground and background, allowing for seamless background swapping and widening the application scenarios of talking faces. As shown in <ref>, with a provided reference background image and a video segment, we can not only generate synchronized talking face video but also achieve video background swapping, resulting in high-fidelity and photo-realistic video.
§.§ Ablation Study
In this section, we perform an ablation study to evaluate the two sub-networks, which are shown in <ref>. We develop three variants by modifying the framework components corresponding to the two sub-networks: 1) w/o prior learning, 2) w/o cross entropy and 3) w/o SyncNet.
The first component is prior learning. Without prior learning, the method produces poor visual quality. This mechanism offers structural prior information for the mouth and teeth regions, which helps the model learn personalized details of these areas.
The second component is the cross entropy. Without cross entropy, the method exhibits very poor performance in both lip synchronization and visual quality. By employing cross-entropy loss instead of L1 loss, we overcome the issue of erroneous segmentation predictions around region boundaries, improving the model's control over different semantic areas. Furthermore, cross entropy also facilitates learning lip movements from speech, exhibiting a certain extent of lip synchronization.
The last component is the SyncNet, which is employed to reinforce the model's learning of the mapping from speech to lip movements. The performance of visual quality is comparable to the baseline when we do not apply SyncNet. However, removing SyncNet leads to poor lip synchronization performance, which is demonstrated in <ref>.
§.§ User Study
We conduct a user study to evaluate the performance of all the methods. We choose 20 videos with 10-second clips as our test set. These samples contain different poses, ages, backgrounds and expressions to show the generalization of our method. We invite 15 participants and let them choose the best method in terms of face quality, lip synchronization, identity preservation and overall naturalness. The results are shown in <ref>. It demonstrates that our model outperforms other methods across multiple dimensions.
§ CONCLUSION
In this paper, we present a new framework, SegTalker, for talking face generation, which disentangles the lip movements and textures of different facial components by employing a new intermediate representation of segmentation. The overall framework consists of two sub-networks: the TSG and SGI networks. TSG is responsible for the mapping from speech to lip movement in the segmentation domain and SGI employs a multi-scale encoder to project the source image into per-region style codes. Then, a mask-guided generator integrates the style codes and the synthesized segmentation to obtain the final frame. Moreover, by simply manipulating different semantic regions of the segmentation or swapping textures from a reference image, our method can seamlessly integrate local editing and support coherent background swapping.
|
http://arxiv.org/abs/2409.02613v2 | 20240904110557 | Performance and tolerance study of the rectilinear cooling channel for a muon collider | [
"Ruihu Zhu",
"Chris Rogers",
"Jiancheng Yang",
"He Zhao",
"Cheng Guo",
"Jiangdong Li"
] | physics.acc-ph | [
"physics.acc-ph"
] |
Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou 730000, China
University of Chinese Academy of Sciences, Beijing 100049, China
[email protected]
STFC Rutherford Appleton Laboratory, Didcot OX11 0QX, United Kingdom
[email protected]
Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou 730000, China
University of Chinese Academy of Sciences, Beijing 100049, China
Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou 730000, China
University of Chinese Academy of Sciences, Beijing 100049, China
Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou 730000, China
University of Chinese Academy of Sciences, Beijing 100049, China
Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou 730000, China
University of Chinese Academy of Sciences, Beijing 100049, China
§ ABSTRACT
The muon collider has the potential to be a powerful tool for the exploration of frontiers in particle physics. In order to reach high luminosity, the 6D emittance of the muon beam needs to be reduced by several orders of magnitude. The cooling process for a muon collider involves two parts; initial six-dimensional cooling and final transverse cooling. This paper focuses on the former and proposes a conceptual design of the rectilinear cooling channel with additional dipole magnets. In this paper, we first introduce a general method for designing the rectilinear cooling channel. Subsequently, we apply this method to develop two rectilinear cooling channels before and after a bunch merging system. Furthermore, we investigate the impact on cooling performance by employing π-mode RF cavities and considering the effect of errors in the magnetic and RF fields.
Performance and tolerance study of the rectilinear cooling channel for a muon collider
Jiangdong Li
September 9, 2024
======================================================================================
§ INTRODUCTION
Particle physicists seek to collide particles at the highest possible energy with high luminosity. Historically, this has been achieved using electron-positron colliders, which additionally offer clean events enabling precision studies. However, achieving multi-TeV collision energies is challenging for electron colliders due to the small mass of the electron, resulting in significant energy loss from synchrotron radiation <cit.>. Hadron colliders have also been used. Synchrotron radiation is suppressed in hadron colliders, owing to the large proton mass, but the wide proton parton distribution leads to a reduction in the energy of collision products relative to the centre of mass energy. Muons, on the other hand, have a much larger mass compared to electrons, making them less affected by synchrotron radiation. Additionally, muons have electron-like properties, making muon colliders a promising choice for high-energy physics <cit.>.
A technical challenge for the muon collider arises from the large emittance of the initial muon beam. This emittance significantly exceeds the acceptance limits of downstream accelerator components and is unsuitable for achieving high luminosity collisions, requiring a dedicated cooling channel to reduce the beam emittance. Additionally, due to the extremely short lifetime of muons (~2μs in the rest frame), the cooling process must be completed before the muons decay completely. This requirement makes ionization cooling the only feasible method to cool the muons. Ionization cooling can be classified into two types: 4D cooling and 6D cooling. In 4D ionization cooling, when the muon beam passes through an absorber, it simultaneously loses transverse and longitudinal momentum, with the longitudinal momentum restored by RF cavities. Consequently, the transverse phase space of the muons decreases over time. 4D ionization cooling was demonstrated by the Muon Ionization Cooling Experiment (MICE) collaboration <cit.>. This arrangement does not achieve reduction of longitudinal emittance. In order to realize 6D ionization cooling, a dipole field and a wedge-shaped absorber are envisaged to be introduced into the apparatus <cit.>. This setup ensures that particles with higher longitudinal momentum traverse a thicker part of the wedge, leading to greater longitudinal momentum loss. Consequently, both longitudinal and transverse emittance can be reduced simultaneously.
Four main types of ionization cooling channel have been developed in the past. The first one is a ring-shaped cooling channel which uses tilted solenoids to generate the dispersion and bend the beam <cit.>. Simulations indicate that it successfully reduces the 6D emittance of the muons by a factor of ~50. However, significant challenge remains with injection into and extraction from the ring. To address this challenge, another cooling channel, known as the Guggenheim design <cit.>, was proposed. Simulation results indicate that it achieves nearly the same cooling performance as the cooling ring. However, as the cooling cells in Guggenheim are set on a vertical helix, it will be very difficult to construct. The third one is the helical cooling channel. This is also a 6D cooling channel, but it uses helical magnets and homogeneous absorbers instead of solenoids and wedge absorbers <cit.>. The fourth one is rectilinear cooling channel. In this design, the components of the cooling cell are the same as those used in the cooling ring or Guggenheim. However, in the rectilinear design, the cooling cells are arranged along a straight line. The rectilinear channel, initially proposed by Balbekov <cit.>, has a much simpler geometry compared to the Guggenheim design. This simplicity makes it easier to construct. Additionally, unlike the fixed focusing and dispersion in the cooling ring, the rectilinear channel allows for adjustment of these parameters at different stages. This flexibility enables the rectilinear channel to reduce the emittance of muons to much smaller values <cit.>. For these reasons, we choose the rectilinear cooling channel as the baseline in our studies.
For the design of the rectilinear cooling segments before and after the bunch merging system in this paper, uncoupled RF cells are used. These RF cells are short and have a small RF phase difference between adjacent cells. Although shorter RF cells have a higher transit time factor, leading to more acceleration for a given electric field (see details in Section <ref>), each uncoupled RF cell requires a cryo-module feed-through, which complicates the engineering process. The International Muon Collider Collaboration (IMCC) proposes a 6D muon cooling demonstrator <cit.> using π-mode RF <cit.>, where the accelerating phase difference between two adjacent RF cells is π. Sites such as CERN in Switzerland, Fermi National Laboratory in the US and the High Intensity heavy-ion Accelerator Facility <cit.> in China are under consideration. π-mode RF offers numerous advantages, such as its compact waveguide structure and the requirement for only one RF coupler and cryo-module feed-through to supply all the RF cells. However, it also has some disadvantages, including high RF power requirements for each coupler and a low transit time factor. Since a low transit time factor might negatively impact beam dynamics, it is crucial to examine the impact on cooling performance with π-mode RF.
A tolerance study including magnetic and RF errors is also performed to assess the robustness of the cooling lattice. It is important to note that this tolerance study is conducted in only one cooling stage with π-mode RF for convenience, but the results are expected to hold for designs with uncoupled RF cells as well.
This paper is structured as follows: Section <ref> provides a review of the theory of 6D ionization cooling. Section <ref> presents the parameters and simulation results of the proposed rectilinear cooling channel. In Section <ref>, we discuss the simulation results using π-mode RF and analyze how magnetic and RF errors influence the cooling performance. Finally, Section <ref> summarizes the conclusions drawn from this study.
§ PRINCIPLES OF 6D IONIZATION COOLING
When it undergoes ionization cooling, a muon beam gradually loses both transverse and longitudinal momentum owing to the ionization of atoms within the absorber material. RF cavities restore longitudinal momentum but not transverse momentum. As a result, the muons’ momentum becomes more parallel, leading to a reduction in emittance. The evolution of transverse emittance is described as follows <cit.>:
dε_T/ds=-1/β^2dE_μ/dsε_T/E_μ+1/β^3β_TE_s^2/2E_μm_μc^2L_R
where ε_T is the normalized transverse emittance, E_μ is the muon beam energy, m_μ is the muon mass, β is the muon particle velocity, c is the speed of light, β_T is the transverse beta value, dE_μ/ds is the energy loss per unit length, L_R is the radiation length of absorber material and E_s is the characteristic scattering energy (~13.6 MeV).
The energy loss dE_μ/ds can be estimated by the Bethe-Bloch equation <cit.>:
dE_μ/ds=4πN_Ar_e^2m_ec^2ρZ/A[1/β^2ln(Kγ^2β^2)-1-δ/2β^2]
where r_e is the classical electron radius, ρ is the density, N_A is Avogadro's number, A is the atomic weight, Z is the atomic number and m_e is the electron mass. K=2m_ec^2/I and I is the mean excitation energy. δ is the density effect factor which is negligible for the muons with longitudinal momentum being around 200 MeV.
The first term of Eq. (<ref>) can be interpreted as the cooling from energy loss due to atomic ionization, while the second term represents the heating from the Coulomb scattering. The equilibrium transverse emittance is defined when dε_T/ds in Eq.(<ref>) is 0 <cit.>:
ε_T,eq=β_TE_s^2/2βm_μc^2L_R|dE_μ/ds|
From Eq. (<ref>), it is evident that in order to achieve a lower equilibrium transverse emittance, two key factors should be considered. Firstly, the focusing at the absorber should be strong, indicated by a smaller transverse beta value. Secondly, the absorber material should possess a large product of L_R and |dE_μ/ds|, which is satisfied by materials with low atomic numbers, such as liquid hydrogen and lithium hydride.
For a muon collider, it is crucial to also decrease the longitudinal emittance to meet the required acceptance of downstream accelerator components. To achieve longitudinal cooling, a well-known scheme called emittance exchange is employed. This involves using wedge-shaped absorbers and introducing dispersion. The introduction of dispersion causes the beam to spread transversely, allowing particles with higher momentum to pass through a thicker part of the absorber and lose more energy, thereby reducing the longitudinal emittance at the cost of increasing the transverse emittance. The value of dispersion must be carefully chosen to simultaneously achieve both transverse and longitudinal cooling, also known as 6D cooling.
For the 6D cooling using the wedge, formulas for the evolution of transverse and longitudinal emittance have been provided in <cit.> and are shown as below:
dε_T/ds=-g_T/β^2E_μdE_μ/dsε_T+β_TE_s^2/2β^3m_μc^2L_RE_μ
dε_L/ds=-g_L/β^2E_μdE_μ/dsε_L+β_L/2d<ΔE^2>/ds
where g_T and g_L are the transverse and longitudinal partition numbers, respectively, and β_L is the longitudinal beta function. They can be expressed as follows <cit.>:
g_T=1-D/w
g_L=2γ^2-2ln[K(γ^2-1)]/γ^2ln[K(γ^2-1)]-(γ^2-1)+D/w
β_L=√(λ_RFβ^3γm_μc^2α_p/2πeV^'cosφ_s)
where D is the dispersion, w is the distance between the beam center and the apex of the wedge, α_p is the slip factor, which can be estimated as -1/γ^2 (γ being the Lorentz factor), the usual linac approximation, since the rectilinear cooling channel is roughly a linac, V^' is the average RF gradient, φ_s is the RF phase and λ_RF is the RF wavelength.
The partition numbers g_T and g_L describe how the total damping from cooling is distributed between the transverse and longitudinal planes. The longitudinal beta function β_L represents the focusing strength in the longitudinal plane.
The second term in Eq. (<ref>) arises from random fluctuations in the energy loss known as energy straggling. It can be approximately described as <cit.>:
d<ΔE^2>/ds=4π(r_eγm_ec^2)^2n_e(1-β^2/2)
The equillibrium emittance can be derived when dε/ds=0, so, from Eq. (<ref>) and Eq. (<ref>), we get <cit.>:
ε_T^eq=β_TE_s^2/2|dE_μ/ds|βg_Tm_μc^2L_R
ε_L^eq=β_Lm_ec^2γ^2(1-β^2/2)/2g_Lβm_μc^2[ln(Kγ^2β^2)/β^2-1]
The expressions for the evolution of transverse and longitudinal emittance can also be obtained from Eq. (<ref>) and Eq. (<ref>) <cit.>:
ε_i(s)=(ε_i,0-ε_i,eq)exp(-s/L_cool,i)+ε_i,eq
where i=T or L corresponding to the transverse or longitudinal direction, ε_i,0 is the initial emittance and L_cool,i is the cooling length shown in Eq. (<ref>).
L_cool,i=(g_i/β^2E_μ<dE_μ/ds>)^-1
where dE_μ/ds with angular brackets is the energy loss averaging over the full transport length.
Eqs. (<ref>) to (<ref>) are used to calculate the theoretical emittance at the end of each rectilinear cooling stage discussed in section <ref>.
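As an illustration, the exponential approach to the equilibrium emittance above can be evaluated with a small helper (a Python sketch; the argument list and units are bookkeeping choices, not constraints of the theory):

import numpy as np

def stage_emittance(eps0, eps_eq, g, length, E_mu, beta, dEds_avg):
    # eps0, eps_eq : initial and equilibrium normalized emittance (consistent units).
    # g            : partition number (g_T or g_L).
    # length       : stage length [m].
    # E_mu         : muon total energy [MeV].
    # beta         : particle velocity in units of c.
    # dEds_avg     : energy loss averaged over the full transport [MeV/m].
    L_cool = beta**2 * E_mu / (g * dEds_avg)          # cooling length
    return (eps0 - eps_eq) * np.exp(-length / L_cool) + eps_eq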
§ LINEAR LATTICE OPTICS AND GENERAL DESIGN METHODS
§.§ Lattice function and beam dynamics
Studying the fundamental lattice features and beam dynamics in the rectilinear cooling channel is essential as it provides valuable guidance for the design and simulation of the channel, aiding in optimizing its performance and efficiency. We analyze the lattice function and beam dynamics of the channel without absorbers and RF.
§.§.§ Transverse beta function, phase advance and momentum acceptance
The beta function is a crucial parameter in accelerator physics as it represents the focusing strength of the external magnetic field. In the case of solenoids, due to cylindrical symmetry, it is more common to use the transverse beta function β_T instead of individual beta function β_x and β_y. The transverse beta function evolves as <cit.>:
2 β_T β_T'' - (β_T')^2 + 4 β_T^2 k^2 - 4 = 0
where k is the solenoid focusing strength,
k = qB_z/(2p_z)
where q is the charge of the particle, B_z is the longitudinal magnetic field and p_z is the longitudinal momentum.
One can solve Eq. (<ref>) periodically to obtain the value of transverse beta function along the cooling cell.
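One simple way to do this numerically is a one-parameter shooting method: integrate the beta-function equation from a symmetry point of B_z^2 with β_T' = 0 and adjust the starting value until β_T' also vanishes at the next symmetry point. The sketch below assumes an illustrative sinusoidal on-axis B_z, cell length and momentum, not this paper's lattice.

import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# Assumed illustrative lattice: a 2 m cell with a sinusoidal on-axis B_z of
# 2.5 T peak amplitude and muons of p_z = 200 MeV/c.
L_CELL, B0, PZ_GEV = 2.0, 2.5, 0.200

def k_of_z(z):
    # Solenoid focusing strength k = q B_z / (2 p_z); in practical units
    # k [1/m] = 0.2998 B_z[T] / (2 p_z[GeV/c]).
    bz = B0 * np.sin(2.0 * np.pi * z / L_CELL)
    return 0.2998 * bz / (2.0 * PZ_GEV)

def rhs(z, y):
    # Envelope equation rewritten as beta'' = (beta'^2 - 4 beta^2 k^2 + 4) / (2 beta).
    beta, dbeta = y
    return [dbeta, (dbeta**2 - 4.0 * beta**2 * k_of_z(z)**2 + 4.0) / (2.0 * beta)]

def dbeta_at_half_cell(beta0):
    # Start at a symmetry point of B_z^2 with beta' = 0; the matched solution
    # also has beta' = 0 at the next symmetry point, half a cell downstream.
    sol = solve_ivp(rhs, (0.0, L_CELL / 2.0), [beta0, 0.0], rtol=1e-9, atol=1e-12)
    return sol.y[1, -1]

# The bracket is chosen by hand so that it contains the matched value.
beta_matched = brentq(dbeta_at_half_cell, 0.2, 1.5)
print(f"matched beta_T at the cell boundary ~ {beta_matched:.3f} m")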
Phase advance φ is defined as:
φ=∫1/β_Tdz
When the phase advance is close to or equal to nπ (n=1,2,3,...), the transverse beta function increases significantly, leading to an integer or half-integer resonance.
It is evident that the phase advance of the cooling cell is dependent on the beam longitudinal momentum, as indicated by Eq. (<ref>), Eq. (<ref>) and Eq. (<ref>).
The momentum acceptance is defined as the range of beam longitudinal momentum values where the phase advance lies between nπ and (n+1)π.
It is crucial to ensure that the momentum acceptance remains sufficiently large (e.g., 5 or 6 times larger than the RMS longitudinal momentum spread) in order to minimize particle loss.
§.§.§ Closed orbit and dispersion
Given the symmetry of the magnetic field in the y direction around the middle point of the cooling cell (shown in Fig. (<ref>)), it is necessary for the closed orbits in both the x and y directions to exhibit symmetry as well. This symmetry requires that the derivatives of closed orbits at the boundary of the cooling cell are zero. Consequently, this simplifies the search for closed orbits, as the initial momenta in both the x and y directions can be set to zero. If the beam center is placed on the closed orbit, most particles in the beam will perform betatron oscillation around the closed orbit, resulting in the overall beam motion being nearly periodic and leading to reduced particle loss. The horizontal and vertical dispersion components can be calculated from the closed orbit as:
D_x = Δx/(Δp/p)
D_y = Δy/(Δp/p)
where Δx and Δy are the differences of the closed orbit in the x and y directions, respectively, and Δp is the momentum difference relative to the reference particle.
§.§ Cooling channel design process
The design process consists of four steps as follows:
a) Calculation of transverse beta function and momentum acceptance: The first step is to choose an appropriate transverse beta function value at the center of the wedge and a sufficient momentum acceptance. We use the well-known differential evolution algorithm <cit.> to adjust the relevant solenoid parameters (position, current density and length); the target functions to be minimized are given below (a short optimization sketch follows at the end of this subsection):
f=(β_T-β_T,ref)^2+(φ_low-π)^2
f=(β_T-β_T,ref)^2+(φ_low-2π)^2+(φ_high-π)^2
where β_T,ref is the transverse beta function value we choose, φ_low and φ_high denote the phase advances of the cooling cell, which are obtained from the lowest and highest momenta of the chosen momentum acceptance. Eq. (<ref>) is for the phase advance of the cooling cell below π and Eq. (<ref>) is for the phase advance between π and 2π.
b) Calculation of closed orbit and dispersion: The second step is to determine the value for the dispersion which decides the emittance exchange rate. As the dispersion is calculated from closed orbit difference shown in Eqs. (<ref>) and (<ref>), we need to find the closed orbit first. The target function used to find the closed orbit is:
f=(x_final-x_init)^2+(y_final-y_init)^2
The dispersion is controlled by the strength of the dipole field. It should be noted that the cell lattice used in this step is without the wedges and RF cavities.
c) Obtaining the list of RF parameters: The dispersion in the RF cavities region results in a coupled transverse and longitudinal beam motion. This means that finding the proper RF parameters involves more than just compensating for energy loss in the wedge absorber. It also requires maintaining the same closed orbit as in step b). We manually set the dispersion and choose the maximum accelerating gradient and accelerating phase as two variables. The wedge absorber length is adjusted manually in a certain range. The target function for finding the closed orbit in this step is the same as Eq. (<ref>). Additionally, since selecting the correct longitudinal momentum of the reference particle affects the timing of the RF cavities, we iterate over a certain range of the reference particle’s z-momentum to obtain several lists of RF parameters. In this step, for each wedge absorber length, several lists of RF parameters corresponding to different reference momenta are obtained.
d) Running the multi-particle tracking simulation: The lists obtained from step c) are utilized as inputs for the RF cavities, and the multi-particle tracking simulation is initiated. G4Beamline-3.08 <cit.> is employed to complete the tracking simulation, while the emittance calculation is carried out using the code Ecalc9f <cit.>. The physics model chosen in G4Beamline for the tracking simulations includes all relevant physics processes such as multiple scattering, energy straggling, energy loss, and muon decay. A 4σ cut is applied in Ecalc9f for the emittance calculation. We introduce a merit factor to select the best outcome and quantify cooling efficiency. It is described as <cit.>:
M(s) = T(s)^2 / [ (ε_T(s)/ε_T(0)) √(ε_L(s)/ε_L(0)) ]
where T is the transmission, ε_T(s) and ε_T(0)
are the normalized transverse emittance at a specific position and start of the cooling section, respectively, while ε_L(s) and ε_L(0) refer to the normalized longitudinal emittance. This merit factor is indicative of the improvement in luminosity arising from the cooling provided by the rectilinear cooling channel.
All calculations and simulations in the above steps are performed in parallel using two AMD EPYC 7642 processors with a total of 96 cores.
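The sketch below illustrates the matching in step a) with scipy's differential evolution. The function periodic_optics is a hypothetical placeholder for a periodic-optics solver (for example, the matched solution of the beta-function equation above) returning the beta function at the wedge and the phase advance per cell; the reference values and parameter bounds are assumed for illustration only.

import numpy as np
from scipy.optimize import differential_evolution

BETA_REF = 0.70                      # desired beta_T at the absorber [m] (assumed)
PZ_NOMINAL, PZ_LOW = 0.200, 0.145    # nominal / lowest accepted momentum [GeV/c] (assumed)

def make_target(periodic_optics):
    # periodic_optics(solenoid_params, pz) is a user-supplied placeholder that
    # returns (beta_T at the wedge, phase advance per cell) for one momentum.
    def target(solenoid_params):
        beta_wedge, _ = periodic_optics(solenoid_params, PZ_NOMINAL)
        _, mu_low = periodic_optics(solenoid_params, PZ_LOW)
        return (beta_wedge - BETA_REF) ** 2 + (mu_low - np.pi) ** 2
    return target

# Bounds per coil on (position [m], current density [A/mm^2], length [m]), two coils:
bounds = [(0.0, 1.0), (20.0, 120.0), (0.05, 0.4)] * 2

# result = differential_evolution(make_target(periodic_optics), bounds,
#                                 maxiter=200, popsize=20, tol=1e-6,
#                                 seed=1, polish=False)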
§ LATTICE PARAMETERS AND TRACKING STUDIES OF THE RECTILINEAR COOLING CHANNEL
§.§ Layout of the cooling cell
The basic lattice layout of one cell used in this paper is depicted in Fig. (<ref>) which resembles the previous design <cit.>. However, instead of tilting solenoids to generate dipole field, separate dipole magnets are incorporated to facilitate tuning of the dipole field, for example during commissioning.
As shown in Fig. (<ref>), each cooling cell consists of solenoids with opposite polarity, dipole magnets for dispersion generation, RF cavities for beam energy loss compensation and liquid hydrogen (LH_2) wedge absorbers. This paper utilizes two types of cooling cell layouts, referred to as A-type and B-type, which are shown in Fig. (<ref>) and Fig. (<ref>). The primary difference between these layouts lies in the period of the squared longitudinal magnetic field, B_z^2. For the A-type layout, the period is half the length of the cooling cell, whereas for the B-type layout, the period is the full length of the cooling cell.
The A-type lattice has the advantage that the transverse beta function at the center is equal to that at the start and end of the cooling cell, enabling the placement of a wedge absorber at the middle of the cell. Assuming fixed energy loss in wedge absorbers, this arrangement results in a reduction of the length of each wedge absorber, consequently lowering the average transverse beta function as described by Eq. (<ref>).
β_T,ave = (1/L) ∫_0^L β_T dz
where L is the length of the absorber.
The B-type lattice can achieve a smaller transverse beta function compared to the A-type, which aids in further emittance reduction. However, the momentum acceptance of the B-type layout is smaller than that of the A-type.
Windows are also taken into account for both the liquid hydrogen absorbers and RF cavities, although they are not displayed in Fig. (<ref>). Beryllium (Be) is chosen as the window material for the absorber due to its low atomic number, which has minimal impact on the cooling performance. Although there is some risk associated with using beryllium in conjunction with liquid hydrogen, an R&D program is currently underway to establish safe design parameters for beryllium. Beryllium is also selected as the window material for the RF because it can increase the operational gradient <cit.>. For the absorber, the window thickness varies from 300 μm (stage 1 in pre-merging section) to 40 μm (stage 10 in post-merging section). For the RF, the window thickness varies from 120 μm (stage 1 in pre-merging section) to 10 μm (stage 10 in post-merging section). The geometry of the absorber and RF windows are both simple disks. Each absorber and RF cavity is enclosed with two windows at both ends. The windows of the wedge absorbers are oriented to completely cover the sides of the triangular prism-shaped wedges.
In characterizing the fringe field of the dipole magnets, we employ the expressions derived from <cit.>. The fringe field components are described by the following equations:
B_y = B_0 (1 + e^{a_1 z} cos(a_1 y)) / (1 + 2 e^{a_1 z} cos(a_1 y) + e^{2 a_1 z})
B_z = -B_0 e^{a_1 z} sin(a_1 y) / (1 + 2 e^{a_1 z} cos(a_1 y) + e^{2 a_1 z})
where B_0 is the nominal dipole field strength, a_1 is a coefficient set to 5 in the simulations, and z and y are the coordinates normalized by the dipole magnet aperture. It is also noteworthy that while only the integrated dipole field significantly influences the particles' closed orbit, utilizing fully Maxwellian fringe field expressions is always more physical.
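A direct implementation of the two fringe-field expressions is sketched below; evaluating it far inside and far outside the magnet gives a quick sanity check of the limits (B_y, B_z) → (B_0, 0) and (0, 0).

import numpy as np

def dipole_fringe_field(z, y, B0=1.0, a1=5.0):
    # z, y are normalized by the dipole aperture; returns (B_y, B_z).
    denom = 1.0 + 2.0 * np.exp(a1 * z) * np.cos(a1 * y) + np.exp(2.0 * a1 * z)
    b_y = B0 * (1.0 + np.exp(a1 * z) * np.cos(a1 * y)) / denom
    b_z = -B0 * np.exp(a1 * z) * np.sin(a1 * y) / denom
    return b_y, b_z

# Deep inside the magnet the field tends to (B0, 0); far outside it vanishes:
print(dipole_fringe_field(-3.0, 0.1), dipole_fringe_field(3.0, 0.1))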
§.§ Design of the pre-merging cooling section
We use the output beam file from the front end of the previously proposed Neutrino factory <cit.> as the starting point for our tracking simulation, following a similar approach to the previous design <cit.>. However, we have made changes by selecting 352 MHz and 704 MHz as our RF frequencies instead of the 325 MHz and 650 MHz used in the previous design. To match the new 352 MHz RF, we adjust the input beam by compressing it in time by a factor of 325/352 and stretching its z-momentum by 352/325. This ensures the input beam matches the new 352 MHz RF frequency while keeping its longitudinal phase space volume conserved.
A 4-stage rectilinear channel is utilized to cool the beam to meet the required initial emittance for the bunch merging system. Given the large emittance of the input beam, it is necessary to avoid over-focusing in the first stage. Therefore, a relatively large beta function value of 70 cm is chosen at the wedge. The layout of the cooling cell in stage 1 is depicted in Fig. (<ref>), including two solenoids with opposite polarity, two positive dipole magnets, six 352 MHz RF cells, and two liquid hydrogen wedge absorbers. Detailed parameter information is provided in Table <ref>. Fig. (<ref>) illustrates the on-axis B_z generated from the solenoid coils in G4Beamline and the B_y generated from Eqs. (<ref>) and (<ref>) of the cooling cell in stage 1 in the pre-merging section. The shape of the on-axis B_z is sinusoidal and flips at the middle of the cell to eliminate angular momentum accumulation. The maximum on-axis B_z is 2.5 T, corresponding to a large beta function value compared to later cooling cells. As shown in Fig. (<ref>), the beta functions at the start, middle, and end of the cooling cell are the same, with wedge absorbers placed at these three positions. Fig. (<ref>) shows the dependence of the transverse beta function at the wedge absorber and the phase advance of the cooling cell on momentum. The transverse beta function at the wedge absorber is approximately proportional to the z-momentum. The phase advance at 145 MeV/c exceeds π, indicating a momentum acceptance above 145 MeV/c. It can be seen from Fig. (<ref>) that the momentum acceptance gradually decreases throughout these four stages. Fig. (<ref>) shows how the closed orbit changes in the x and y directions for momenta of 200 MeV/c, 210 MeV/c, and 190 MeV/c. As expected, only the x-direction orbit varies noticeably with different momenta since the dipole field acts only in the y-direction. Fig. (<ref>) shows that dispersion mainly exists in the x-direction, while the y-direction dispersion remains nearly zero. Stage 1 terminates at 104.4 m and connects with a later stage which has a smaller beta function value at the wedge absorber. Stage 4 has the highest B_z field (7.2 T for the on-axis field) and smallest transverse beta function (23 cm at the wedge absorber) before the bunch merging in order to decrease the transverse and longitudinal emittance of the muon beam to the required values for the bunch merging system (normalized transverse emittance ~=1.3 mm and normalized longtudinal emittance ~=1.7 mm) <cit.>. It is worth mentioning that we double the RF frequency from 352 MHz to 704 MHz for stages 3 and 4. This allows for an increase in the RF accelerating gradient and a decrease in the longitudinal beta function, as indicated by Eq. (<ref>). This adjustment contributes to reducing the longitudinal emittance.
The final emittance, transmission and reference momentum of each stage are listed in Table <ref>. The reference momentum refers to the longitudinal momentum of the reference particle, which is used for the timing of the RF cells. The evolution of the transverse and longitudinal emittance is shown in Fig. (<ref>). As depicted in Fig. (<ref>), a significant spike is evident at the junction of stage 3 and stage 4. This spike is due to the longitudinal mismatching as we start to use the 704 MHz RF in stage 4. In summary, the pre-merging cooling section consists of 4 stages with a total length of 362.8 m. It effectively reduces the transverse and longitudinal emittance from 16.96 mm and 45.53 mm to 1.239 mm and 1.741 mm, respectively, with an overall transmission rate of 49.6% including the muon decays. The particle distribution in phase spaces at the beginning and end of the cooling section is illustrated in Fig. (<ref>), (<ref>) and (<ref>). As shown in Fig. (<ref>), the centers of the initial and final beams are noticeably different. This difference arises because the beam center approximately follows the closed orbit, which reduces as the solenoid focusing increases in later stages. It is encouraging to observe from Fig. (<ref>) that the merit factor increases at the end of each stage, indicating that each cooling stage is well-designed. Similar to the emittance evolution in Fig. (<ref>), the merit factor in Fig. (<ref>) drops significantly at the start of stage 3, mostly because of the longitudinal mismatching resulting from the sudden jump in RF frequency from 352 MHz to 704 MHz.
§.§ Design of the post-merging cooling section
After the bunch merging system, both the transverse and longitudinal emittance of the muon beam increase by a factor of ~4 <cit.>. We choose to maintain the phase advance of the cooling cell in the post-merging section between π and 2π to achieve a smaller transverse beta function. Despite the fact that this choice results in a narrower longitudinal momentum acceptance, the significantly smaller initial longitudinal emittance in the post-merging section, compared to the pre-merging section, allows for such an approach. As the period of B_z^2 for the cooling cells of all stages in this section is the length of the cooling cell, the layout shown in Fig. (<ref>) is adopted for all cooling cells in this section. It is also important to note that a single pair of solenoid coils with opposite polarity is used in each cooling cell from stages 1 to 3. Two pairs are used in stages 4 and 5, and three pairs are employed from stages 6 to 10. The use of multiple pairs of coils helps reduce the current density in the coils. Since the output beam of the post-merging rectilinear cooling section serves as the input beam for the final cooling section through deceleration, our objective is to minimize the emittance of the output beam of the post-merging rectilinear cooling section while mitigating beam loss. In other words, our aim is to ensure that the merit factor defined in Eq.(<ref>) increases along the channel.
A 10-stage rectilinear cooling channel is used in the post-merging section and its main parameters are summarized in Table <ref>. Given that the initial transverse emittance of this section is approximately one third of that in the pre-merging section, a smaller transverse beta function value of 35 cm is chosen for stage 1. The on-axis field profile of stage 1 is illustrated in Fig. (<ref>). Figure (<ref>) depicts the evolution of the transverse beta function along the cooling cell in stage 1. In comparison with the pre-merging A-stage 1, the curve shape remains similar but has a smaller value at the start and end due to higher B_z. Fig. (<ref>) illustrates the transverse beta function and phase advance versus momentum in stage 1. The phase advance of the 150 MeV/c and 238 MeV/c beams approaches 2π and π, respectively, indicating a momentum acceptance range from 150 MeV/c to 238 MeV/c. A shorter cooling cell is used in later stages to achieve tighter focusing and reduce the transverse beta function. However, tighter focusing means the cell has a poorer longitudinal momentum acceptance if the increase in magnetic field strength does not scale with the reduction of the cooling cell length. From Fig. (<ref>), the momentum acceptance generally decreases throughout the stages. Therefore, it is important to gradually decrease the cell length and transverse beta function in each stage to match the momentum spread of the muon beam with the momentum acceptance. The final cooling system requires the input transverse emittance to be less than 0.3 mm, which is achieved at the end of stage 8. However, we find it is still possible to further reduce the emittance by adding two more stages with moderate particle loss. The evolution of emittance and merit factor are shown in Fig. (<ref>) and Fig. (<ref>), respectively. The particle distribution in the phase space of the post-merging section is displayed in Fig. (<ref>), (<ref>) and (<ref>). Here, we provide a brief explanation of the initial beam distribution. The bunch merging process occurs in two stages: longitudinal and transverse merging. Initially, three bunches are merged longitudinally into one so that three distinct sets of particles can be seen in the longitudinal phase space shown in Fig. (<ref>). Following this, seven bunches are merged transversely, with the particle distribution in the x-y plane illustrated in <cit.>, which yields the substructure seen in Figs. (<ref>) and (<ref>).
We also calculate the theoretical emittance for the end of each stage in the post-merging section from Eqs. (<ref>) and (<ref>) and the results are shown in Table <ref>. The initial emittance in Eq. (<ref>) is equal to the simulated emittance at the end of each stage. Compared with the simulation results shown in Table <ref>, the largest discrepancy between theory and simulation is 23.9%. This discrepancy between the simulation and theoretical predictions is expected, given that the theory is entirely linear and treats transverse and longitudinal beam motion separately. In most cases, the theory yields higher output emittance at the end of each stage. This discrepancy arises primarily because the particles with higher amplitudes are lost on the beam pipe, which the theory does not account for.
In summary, the 10-stage post-merging cooling section is able to reduce the normalized transverse and longitudinal emittance of the muon beam from 5.129 mm and 9.991 mm to 0.1396 mm and 1.558 mm, respectively. The channel length is 487.26 m with the transmission of 28.5% including the muon decays. The output transverse emittance of this updated design is half that of the previous design <cit.>. A lower initial transverse emittance is always beneficial for the final cooling. Previous simulation studies on final cooling have successfully reduced the normalized transverse emittance of the muon beam to 55 μm <cit.>. The current studies initiated by the International Muon Collider Collaboration (IMCC) aspire to surpass this achievement, aiming for a value as low as 25 μm <cit.>. We anticipate that the output muon beam of this updated post-merging rectilinear cooling channel design will significantly facilitate the final cooling system to achieve the goal of a normalized transverse emittance of 25 μm.
§ TRACKING SIMULATION USING Π-MODE RF AND ERROR ANALYSIS
When a particle traverses the RF structures of the cooling channel, the phasing between adjacent cells is required to match the time of flight across each cell. The RF cell length is related to the frequency and particle velocity according to
L = (βc/(2πf)) Δϕ_RF
where L is the length of a RF cell, Δϕ_RF is the relative phase between adjacent RF cells (e.g., π/2, π...), βc is the velocity of the muon beam and f is the RF frequency. In the lattice described in section <ref>, adjacent cells have a phase difference around π/2. Each cavity is expected to have an individual power coupler and feed-through to the cavity.
In order to simplify the engineering, coupled RF cells with a phase difference of π can be used. In this case, one power coupler and feed-through is required for an entire structure, with adjacent cells coupled either through the iris or through the cavity walls. In order to achieve a correct phasing of the beam, their length must be doubled compared to that of π/2-mode RF cells.
The transit time factor describes the degradation in effective voltage resulting from phase variation of the cavity during passage of the beam across the cavity. The transit time factor is given by
T = sin(ω_rf L/(2βc)) / (ω_rf L/(2βc))
where ω_rf is the angular frequency of the RF, L is the length of the RF and βc is the speed of the reference particle. If the length of the cavity is extended, the transit time factor is reduced so that higher RF gradients are required to restore the energy loss in the absorber, as seen from Eq. (<ref>).
ΔE_rf=N_rfTV_rfLsinφ_s
where ΔE_rf is the energy gain from the RF, N_rf is the number of RF cells, T is the transit time factor, L is the length of each RF cell and φ_s is the RF phase.
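The snippet below evaluates the cell length, transit time factor and resulting gradient scale factor for π/2-mode and π-mode cells at an assumed frequency and velocity; the numbers are illustrative rather than the operating point of any stage.

import numpy as np

C = 299_792_458.0

def cell_length(f_rf, beta, dphi):
    # L = beta c * dphi / (2 pi f)
    return beta * C * dphi / (2.0 * np.pi * f_rf)

def transit_time_factor(f_rf, L, beta):
    # T = sin(w L / (2 beta c)) / (w L / (2 beta c)) with w = 2 pi f
    x = 2.0 * np.pi * f_rf * L / (2.0 * beta * C)
    return np.sin(x) / x

f_rf, beta = 704e6, 0.87     # assumed illustrative frequency and velocity
for dphi, label in [(np.pi / 2.0, "pi/2-mode"), (np.pi, "pi-mode")]:
    L = cell_length(f_rf, beta, dphi)
    T = transit_time_factor(f_rf, L, beta)
    print(f"{label}: L = {L * 100:.1f} cm, T = {T:.3f}, gradient scale ~ {1.0 / T:.2f}x")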
Due to the simplicity of the power couplers, the International Muon Collider Collaboration will use π-mode RF for a 6D muon cooling demonstrator. As the initial emittance of stage 5 in the post-merging cooling section is similar to that in <cit.>, we choose the cooling cell in stage 5 as a baseline to check the impact of π-mode RF on cooling performance. Since the length of one π-mode RF cell is 18.8 cm, nearly double that of the normal mode RF cell discussed in Section <ref>, we have increased the length of the cooling cell from 80 cm to 90 cm. The lattice parameters and tracking results of the emittance and transmission for the two cases of π-mode and normal mode RF are summarized in Table <ref> and Table <ref>, respectively. In order to only investigate if π-mode RF influences the cooling performance, we maintain identical magnetic field and wedge absorber settings in both cases, with the only differences being in the RF length, gradient, and phase. The tracking results in Table <ref> indicate that there is no obvious difference in cooling performance between the normal and π-mode cases. From Table <ref>, it is evident π-mode RF has higher peak gradient and phase compared with the normal mode, which is due to a lower transit time factor.
To understand the robustness of the cooling lattice, error analysis studies are conducted. Since the beam emittance and lattice parameters differ in each cooling stage, detailed results on error analysis would vary. For convenience, this analysis is performed on the π-mode lattice in this paper. Error analysis for other cooling stages will be conducted in future work.
Two sources of errors are considered: those originating from the solenoid coils and the RF cells. For the solenoid coils, errors are classified into three types: current, position, and rotation. For the RF cells, errors are classified into four types: gradient, phase, position and rotation. The steps of a simulation for a specific error are as follows: (a) Generate random numbers (errors) from a Gaussian distribution truncated at 3 standard deviations from the mean; (b) Apply these errors to the solenoid coils or RF cells in the simulation; (c) Repeat steps (a) and (b) for 100 iterations; (d) Average the output emittance and transmission from these 100 simulation results. For clarity, values of different types of errors in the following paragraphs and Figs. (<ref>), (<ref>) and (<ref>) denote the RMS values used to generate these random errors. The error bars in Figs. (<ref>), (<ref>) and (<ref>) represent the 95% confidence interval of the estimated mean values.
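A compact sketch of this error-scan loop is given below. The function run_tracking is a hypothetical placeholder for one full tracking run of the perturbed lattice (e.g. in G4Beamline) returning the 6D emittance and transmission; the error magnitudes are supplied by the caller.

import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(0)

def truncated_gaussian(rms, size):
    # Zero-mean errors with the given RMS, truncated at +/- 3 sigma.
    return truncnorm.rvs(-3.0, 3.0, loc=0.0, scale=rms, size=size, random_state=rng)

def error_scan(run_tracking, n_coils, rms_current, rms_pos, rms_rot, n_seeds=100):
    # run_tracking(errors) is a placeholder for one tracking simulation of the
    # perturbed lattice; it must return (eps_6d, transmission).
    results = []
    for _ in range(n_seeds):
        errors = {
            "current": truncated_gaussian(rms_current, n_coils),      # relative
            "position": truncated_gaussian(rms_pos, (n_coils, 3)),    # mm
            "rotation": truncated_gaussian(rms_rot, (n_coils, 3)),    # degrees
        }
        results.append(run_tracking(errors))
    eps_6d, transmission = np.mean(results, axis=0)
    return eps_6d, transmission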
Fig. (<ref>) illustrates the variation in normalized 6D emittance and transmission due to different types of errors in the solenoid coils. Fig. (<ref>) addresses current errors, with the horizontal axis representing percentage changes in the solenoid coil current relative to the nominal values. Fig. (<ref>) focuses on position errors, with errors added to both transverse (x and y) and longitudinal (z) positions of the coils in the simulations. Fig. (<ref>) addresses rotation errors, where the coils are randomly rotated around the x, y, and z axes. Fig. (<ref>) depicts the impact of combined current, position, and rotation errors in the solenoid coils, with the horizontal axis representing five cases corresponding to different error magnitudes: (0%, 0 mm, 0°), (0.1%, 0.1 mm, 0.01°), (0.2%, 0.2 mm, 0.02°), (0.3%, 0.3 mm, 0.03°), and (0.4%, 0.4 mm, 0.04°). It can be estimated from Figs. (<ref>), (<ref>) and (<ref>) that, when only one type of error is applied, the thresholds for current, position, and rotation errors to begin noticeably degrading the cooling performance (with a transmission reduction >=1%) are approximately 0.6%, 0.3 mm, and 0.03°, respectively. For the combined errors, it can be seen from Fig. (<ref>) that the threshold is approximately (0.2%, 0.2 mm, 0.02°). It should be noted that position and rotation errors deserve the most attention, as it is possible for these errors to reach the threshold values in reality. This suggests that a correction scheme may be necessary. However, controlling current errors to under 0.2% is entirely feasible.
Fig. (<ref>) illustrates the variation in normalized 6D emittance and transmission due to different types of errors in the RF cavities. Compared to the errors in the solenoid coils, the errors in the RF cavities are made deliberately much larger in the simulations in order to show the degradation in cooling performance caused by RF errors. Fig. (<ref>) shows gradient errors, with the horizontal axis representing percentage changes in the RF gradient relative to the nominal values. When gradient error reaches a very large value, around 10%, it begins to impact the cooling performance. Fig. (<ref>) shows phase errors. The emittance is less sensitive to phase errors compared to gradient errors, with the transmission reduced by only about 1% when the phase error reaches about 7°. Fig. (<ref>) depicts position errors, with errors added to both transverse (x and y) and longitudinal (z) positions of the RF cavities. Even when the position error reaches an extreme value of 7 mm, the transmission remains unaffected, and the 6D emittance increases by only about 4%. Fig. (<ref>) shows rotation errors, where the RF cavities are randomly rotated around the x, y, and z axes. Similar to the case of position errors, the transmission remains largely unaffected by rotation errors. In practice, RF errors are typically not as large as those depicted in Fig. (<ref>). The purpose of scanning to very large errors is to demonstrate the excellent robustness of the cooling performance against RF errors. The influence on cooling performance caused by RF failure is also simulated, as shown in Fig. (<ref>). Here, "one RF cavity" refers to all RF cells within a single π-mode RF cavity. Since π-mode RF cells are coupled, a failure in one RF cell causes the entire RF cavity to fail. Simulations show that in the case of one RF cavity failure, the 6D emittance increases by about 6% and the transmission decreases by 1%. In the extreme case of four RF cavities failing, the 6D emittance increases by about 22% and the transmission decreases by 6%.
§ CONCLUSION
In this paper, a general method is introduced for designing the rectilinear cooling channel. Using this method, two rectilinear cooling channels with separate dipole magnets before and after a bunch merging system are designed. The output emittance of the segment before bunch merging meets the requirements of the downstream system. The segment after bunch merging achieves an output transverse emittance that is half of what was achieved in previous studies, aiding in achieving a lower output emittance in the final cooling section. The cooling performance employing π-mode RF cavities is investigated. Simulations indicate no significant difference in cooling performance between π-mode and normal mode RF cavities, except that π-mode RF requires a higher gradient due to a lower transit time factor. Error analysis regarding magnetic and RF errors has been conducted for the demonstrator-like B-stage 5. Simulation results indicate that the lattice of this stage is highly robust against RF errors, including gradient, phase, position, and rotation, unless these errors reach extremely large values, which are nearly impossible in reality. For magnetic errors, when the current, position and rotation errors are applied simultaneously, the threshold which corresponds to about 1% reduction in transmission is (0.2%, 0.2 mm, 0.02°). It is feasible to control the RMS value of current errors to under 0.2% in actual practice, but controlling the RMS value of position and rotation errors to under 0.2 mm and 0.02°, respectively, might be more challenging.
For future studies, it will be interesting to investigate whether using lower frequency RF (e.g., 176 MHz) for stage 1 before bunch merging influences cooling performance. The transmission in stage 1 before bunch merging is lower compared to the other stages due to the constrained iris radius of the RF cavities. RF cavities operating at 176 MHz are larger than those at 352 MHz, potentially allowing for larger irises that may yield improved transmission. Additionally, beyond error analysis for only a demonstrator-like B-stage 5, it would be valuable to conduct a comprehensive error analysis for the two rectilinear cooling channels before and after bunch merging.
The authors would like to thank Scott Berg for his valuable feedback on the manuscript. We also appreciate the discussions with Scott Berg, Alexej Grudiev, Siara Sandra Fabbri, and other members of the International Muon Collider Collaboration. This work is supported by the China National Funds for Distinguished Young Scientists (Grant No. 12425501). Ruihu Zhu also acknowledges funding from the China Scholarship Council (File No. 202304910408).
§ SIMULATION FILES
The beam input files, Ecalc9f control files, lattice files and beam emittance calculations for all stages in the pre-merging and post-merging sections presented in Section <ref> are available at https://github.com/MuonCollider-WG4/rectilinear/tree/main/2024-8-12_lattice_files.
§ SELECTION OF CONDUCTOR MATERIAL FOR SOLENOID COILS
REBCO at 20 K has been selected as the conductor material for all coils in the rectilinear cooling channel discussed in this paper. This choice allows for higher magnetic fields at potentially lower costs, and a detailed engineering design is currently being investigated by the magnet design group of the International Muon Collider Collaboration <cit.>. Fig. (<ref>) presents the engineering critical current density alongside the current density utilized in the design described in Section <ref>, plotted against the radial magnetic field. As shown in Fig. (<ref>), the current density employed in the coils throughout all stages of the design remains below the engineering critical values.
|
http://arxiv.org/abs/2409.03577v1 | 20240905143105 | CHIRPs: Change-Induced Regret Proxy metrics for Lifelong Reinforcement Learning | [
"John Birkbeck",
"Adam Sobey",
"Federico Cerutti",
"Katherine Heseltine Hurley Flynn",
"Timothy J. Norman"
] | cs.LG | [
"cs.LG"
] |
§ ABSTRACT
Reinforcement learning agents can achieve superhuman performance in static tasks but are costly to train and fragile to task changes. This limits their deployment in real-world scenarios where training experience is expensive or the context changes through factors like sensor degradation, environmental processes or changing mission priorities. Lifelong reinforcement learning aims to improve sample efficiency and adaptability by studying how agents perform in evolving problems. The difficulty that these changes pose to an agent is rarely measured directly, however. Agent performances can be compared across a change, but this is often prohibitively expensive. We propose Change-Induced Regret Proxy (CHIRP) metrics, a class of metrics for approximating a change's difficulty while avoiding the high costs of using trained agents. A relationship between a CHIRP metric and agent performance is identified in two environments, a simple grid world and MetaWorld's suite of robotic arm tasks. We demonstrate two uses for these metrics: for learning, an agent that clusters MDPs based on a CHIRP metric achieves 17% higher average returns than three existing agents in a sequence of MetaWorld tasks. We also show how a CHIRP can be calibrated to compare the difficulty of changes across distinctly different environments.
§ THE VALUE OF MEASURING CHANGE
Humans are experts at adaptation. When we detect changes in the world, we can predict their consequences and replan our behaviours accordingly. In contrast, Reinforcement Learning (RL) agents perform poorly when change occurs; their adaptation is a product of trial and error rather than anticipation. While RL agents have outperformed humans in controlled conditions, they require vast amounts of experience to do so, and lose their ability rapidly as conditions begin to vary <cit.>.
These weaknesses mean RL agents are inapplicable in problems with a wide variety of potential conditions or tasks. Consider an aerial drone designed to monitor a hazardous area. The drone could suffer sensor or actuator damage, be exposed to dangerous weather and adversaries, or be assigned a new mission. This variety precludes pre-training against all possibilities and instead requires agents to achieve human-like adaptation from small samples of experience.
In Lifelong Reinforcement Learning (LRL), agents are exposed to change to test whether they exhibit fast adaptation and fast remembering <cit.>, avoid interference <cit.>, or incrementally learn upon prior knowledge <cit.>. However, we rarely analyze the relationship between change and agent performance directly. Understanding this relationship could help agents achieve these qualities through proactive behavioural adjustments that mitigate a change's potential impact.
Metrics calculated from agent performance, such as regret, could be used for this purpose but require agents to be trained for each change to be measured; this is prohibitively expensive for more than a handful of examples and requires the designer to be aware of all the possible changes. To our knowledge, no performance-based metric has been used to quantify the difficulty of MDP change. This is likely due to the high computational costs involved <cit.>.
Model-based metrics measure differences in Markov Decision Process components, avoiding the need for trained agents, but are unsuitable for other reasons. In <cit.>, restricted Boltzmann machines are trained to represent MDPs, with the MDP distance defined as a difference between model weights. This becomes prohibitively expensive as the number of measurements needed increases. Other metrics require a calculation over every state in both MDPs, such as <cit.>'s method for finite MDPs, and are also inapplicable for continuous state or action spaces.
Although variation in these components is the cause of degraded agent performance, the link between model- and performance-based metrics has not yet been investigated, likely due to restrictive assumptions like finite state-action spaces or their sometimes considerable calculation costs <cit.>.
Metrics between MDPs are rarely used for lifelong reinforcement learning. Arguably the closest to the CHIRP concept is Lipschitz Lifelong Reinforcement Learning <cit.>, in which a pseudometric is defined on the transitions of finite MDPs to be used for lifelong learning. However, no method for efficiently estimating this metric for complex MDPs is discussed.
This paper introduces Change-Induced Regret Proxies (CHIRPs), a class of metrics that use the difference between Markov Decision Process components to approximate their impact on agent performance; an example CHIRP is constructed from the Wasserstein distance between MDP transition distributions and a relationship between it and agent performance is identified in a simple gridworld and MetaWorld's suite of robotic arm tasks. The value of CHIRPs in lifelong reinforcement learning is demonstrated by an agent that reuses policies across MDPs clustered by CHIRP value and an example calibration of a CHIRP across environments is used to allow benchmark-agnostic difficulty comparisons for the first time.
§ PRELIMINARIES
The standard Reinforcement Learning (RL) problem is defined as a Markov Decision Process (MDP) ℳ = {𝒮, 𝒜, ℛ, 𝒫, γ} with state space 𝒮, action space 𝒜, reward space ℛ: 𝒮×𝒜×𝒮→ℝ, transition probability density function p(s', r | s, a) ∈ [0, 1] and discount rate γ∈ [0, 1] <cit.>.
An RL agent is tasked with learning an action policy π: 𝒮→𝒜 which maximises the returns, the discounted sum of future rewards G_t = ∑_k=0^∞γ^k R_t+k+1 from the current timestep t.
Lifelong Reinforcement Learning modifies the RL MDP by allowing its components[We treat the discount rate γ as a static component due to its common treatment as a tunable hyper-parameter rather than a pre-defined aspect of the problem.] to vary in time: ℳ_L(t) = {𝒮(t), 𝒜(t), ℛ(t), 𝒫(t), γ}. These components may change discretely, continuously, or as a semi-continuous mixture. As ℳ_L(t) can always be represented as a sequence of static MDPs (in the extreme case as a unique MDP per timestep), we use static MDPs ℳ_i to denote versions of ℳ_L(t) as it evolves through time.
In episodic RL, a policy π's performance in MDP ℳ_a can be judged by its expected returns over s ∈𝒮^0_a, the set of possible initial states: 𝔼[G_t |𝒮^0_a, π]. In Lifelong RL, the expected returns of a policy also depend upon the current components of ℳ_L(t); for clarity below, we omit 𝒮^0_a as part of the definition of ℳ_a and include ℳ_a as an explicit dependence of the expectation: 𝔼[G_t |ℳ_a, π].
§ DEFINING A TARGET FOR PROXY MEASUREMENT: SCALED OPTIMAL POLICY REGRET
A natural choice for measuring the impact of an MDP change is regret, the drop in an agent's returns <cit.>. However, comparing returns is unsuitable when the two MDPs have different minimum and maximum returns. Return and reward bounds can change independently; a transition function change can block high-reward states from being visited, lowering the maximum returns without changing the reward function.
To avoid this issue, we propose Scaled Optimal Policy Regret (SOPR) to account for the returns' scales in each MDP:
SOPR(ℳ_i, ℳ_j) = ∑_π^+_i ∈Π^+_i[ (𝔼[G_t |ℳ_j, π^+_j ] - 𝔼[G_t |ℳ_j, π^+_i ]) / (𝔼[G_t |ℳ_j, π^+_j ] - 𝔼[G_t |ℳ_j, π^-_j ]) ],
where π^+_j is an optimal returns-maximising policy of ℳ_j and π^-_j is an optimal returns-minimising policy of ℳ_j. Note that SOPR(ℳ_i, ℳ_j) ∈ [0, 1] for any pair of MDPs where SOPR is calculable.
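In practice the expectations above are estimated from sampled episode returns. A minimal sketch for a single representative optimal policy of ℳ_i (rather than the full sum over Π^+_i) is given below.

import numpy as np

def estimate_sopr(returns_opt_j, returns_cross, returns_min_j):
    # returns_opt_j : sampled returns of M_j's own (near-)optimal policy in M_j
    # returns_cross : sampled returns of M_i's (near-)optimal policy run in M_j
    # returns_min_j : sampled returns of a returns-minimising policy in M_j
    best, worst = np.mean(returns_opt_j), np.mean(returns_min_j)
    return (best - np.mean(returns_cross)) / (best - worst)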
§.§.§ When is SOPR calculable?
By inspection of (<ref>), SOPR(ℳ_i, ℳ_j) is defined if and only if the four expectations are defined. It is reasonable to assume three of the four terms are defined as they are expectations of policies in their respective MDPs. This leaves only the expectation that `crosses' the MDPs, 𝔼[G_t |ℳ_j, π^+_i ] to be analyzed.
Equation (<ref>) is the standard definition <cit.> of expected returns for a single state s_j ∈𝒮_j. For 𝔼[G_t |ℳ_j, π^+_i ] to be defined, (<ref>) must be defined for all initial states of ℳ_j. This implies that 𝒫_j(s', r | s, a) must be defined ∀ a ∈π^+_i(a|s), and therefore that 𝒜_i ⊆𝒜_j. Additionally, π^+_i(a|s') must be defined ∀ s' ∈𝒮_j, therefore s' ∈𝒮_j ⟹ s' ∈𝒮_i, i.e. 𝒮_j ⊆𝒮_i.
Less formally, SOPR(ℳ_i, ℳ_j) is calculable so long as ℳ_i's optimal policies are executable in every state of ℳ_j.
𝔼[G_t |s_j, π^+_i ] =
∑_a π^+_i(a|s_j) ∑_s_j', r 𝒫_j(s_j', r | s_j, a)[r + γ𝔼[G_t+1 | s_j', π^+_i ]].
§ CHANGE-INDUCED REGRET PROXY (CHIRP) METRICS
A desirable proxy for SOPR would have the following properties:
* Positively correlated with SOPR: as the measured proxy distance between MDPs grows, the drop in performance should increase.
* Monotonicity with SOPR: It is undesirable to have a single value of the proxy metric map to multiple SOPR values. A monotonic relationship between them would avoid this.
* Computational efficiency: There is little value in an estimator of performance-based metrics with comparable computational costs.
* Captures change across MDP components: In lifelong RL we may encounter changes in any MDP component.
With these requirements in mind, we modify <cit.>'s existing metric for use as an example CHIRP. In their work, a distance d'(s_i, s_j) between states in MDPs ℳ_i and ℳ_j is defined as the maximum Wasserstein distance between transitions. This is aggregated over all state pairs to define a distance between MDPs,
d(ℳ_i, ℳ_j) = min_k=1, …, |𝒮_1|,
t=1, …, |𝒮_2|∑_k=1^|𝒮_1|∑_t=1^|𝒮_2|γ_ktd'(s_k, s_t),
with γ_kt a constraint not discussed here. This is intractable for continuous state and action spaces and is difficult to estimate from samples as the minimum is taken over state spaces 𝒮_1 and 𝒮_2.
Instead, we use the Wasserstein distance between the transition distributions directly and propose a sampling method for its approximation. Generally, the 1-Wasserstein (W_1) distance between two probability distributions 𝒫_i(ℝ_d), 𝒫_j(ℝ_d) is defined as,
W_1(𝒫_i, 𝒫_j) = inf_γ∈Γ(𝒫_i, 𝒫_j)∫_ℝ^d ×ℝ^d||𝐱 - 𝐲||_2 γ(δ𝐱,δ𝐲),
where γ∈Γ(𝒫_i, 𝒫_j) is the set of all joint probability distributions with marginal distributions 𝒫_i and 𝒫_j <cit.>. Informally, the W_1 distance is often called the `earth mover's distance', the minimal earth (probability mass) that must be moved from one pile, 𝒫_i, to produce another, 𝒫_j.
Focusing upon MDPs, we define the W_1-MDP distance as the W_1 distance between transition probability density functions,
W_1(ℳ_i, ℳ_j) = W_1(𝒫_i(s', r |s, a), 𝒫_j(s', r |s, a))
= inf_γ∈Γ(𝒫_i, 𝒫_j)∫_ℝ^d ×ℝ^d||𝐬^∗_i - 𝐬^∗_j||_2 γ[δ𝐬^∗_i,δ𝐬^∗_j],
with 𝐬^∗_i = (𝐬', r | 𝐬, 𝐚)_i distributed by 𝒫_i(𝐬', r | 𝐬, 𝐚).
This metric is still incalculable in continuous state and action spaces, as it requires evaluating 𝒫_i and 𝒫_j for all possible (s, a) pairs. However, W_1(𝒫_i, 𝒫_j)'s infimum is evaluated over the distributions 𝒫_i and 𝒫_j rather than (<ref>)'s minimum over state pairs. By estimating 𝒫_i and 𝒫_j with empirical distributions 𝒫̂_i, 𝒫̂_j from transition samples, we can estimate W_1(𝒫_i, 𝒫_j) with W_1(𝒫̂_i, 𝒫̂_j) so long as a suitable sampling scheme is identified.
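For two equal-size sets of transition samples with uniform weights, the optimal coupling in the W_1 problem is an assignment, so the estimate W_1(𝒫̂_i, 𝒫̂_j) can be computed exactly with a matching solver. A minimal sketch:

import numpy as np
from scipy.spatial.distance import cdist
from scipy.optimize import linear_sum_assignment

def empirical_w1(samples_i, samples_j):
    # W_1 between two equal-size empirical distributions: each row is one
    # transition sample (s', r) drawn from the same state-action pairs in the
    # two MDPs.  With uniform weights the optimal coupling is a matching.
    cost = cdist(samples_i, samples_j, metric="euclidean")
    rows, cols = linear_sum_assignment(cost)
    return cost[rows, cols].mean()

# Tiny illustration with made-up 3-dimensional transition samples:
rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, size=(50, 3))
b = rng.normal(0.5, 1.0, size=(50, 3))
print(f"estimated W1 ~ {empirical_w1(a, b):.3f}")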
§.§ Approximating the W_1-MDP Distance with Sampling
The accuracy and precision of W_1(𝒫̂_i, 𝒫̂_j) depends upon how representative our transition samples are. A natural approach would be to randomly sample state-action pairs to construct the 𝒫̂_i and 𝒫̂_j distributions. We propose an alternate scheme based on the intuition that sampling transitions from a range of low and high reward states may better represent an optimal policy's trajectory than random samples.
We first sample the base MDP's reward bounds and use this as a target for Monte Carlo Cross-Entropy (MCCE) sampling of state-action pairs <cit.>. The state-action pair is then executed in both MDPs n_t times to capture any stochasticity of transitions. The result is two sets of transition samples of size n_s × n_t; these sets form our empirical distributions 𝒫̂_i, 𝒫̂_j. The W_1 distance between these samples, W_1(𝒫̂_i, 𝒫̂_j) is the estimate of W_1(ℳ_i, ℳ_j).
Algorithm <ref> documents this sampling scheme; results comparing this scheme to random sampling are presented for two environments below.
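A simplified sketch of this pipeline is given below. The selection of state-action pairs is a stand-in for the MCCE step (here, pairs whose base-MDP rewards are closest to a spread of target levels), and env.evaluate / env.transition are hypothetical wrappers that can reset the environment to an arbitrary state; empirical_w1 is the helper sketched above.

import numpy as np

def estimate_w1_mdp(env_i, env_j, candidate_sa, reward_targets, n_t=5):
    # candidate_sa: list of (state, action) pairs; reward_targets: n_s reward
    # levels spread between the sampled reward bounds of the base MDP.
    rewards = np.array([env_i.evaluate(s, a) for s, a in candidate_sa])
    chosen = [candidate_sa[int(np.argmin(np.abs(rewards - t)))] for t in reward_targets]

    samples_i, samples_j = [], []
    for s, a in chosen:
        for _ in range(n_t):
            samples_i.append(env_i.transition(s, a))   # concatenated (s', r)
            samples_j.append(env_j.transition(s, a))
    return empirical_w1(np.array(samples_i), np.array(samples_j))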
§ VALIDATING W_1-MDP AS A CHIRP
Before using the W_1-MDP distance as a CHIRP, we must establish whether it exhibits the desired characteristics listed above. It is unlikely that W_1-MDP is perfectly correlated with SOPR in every case or that the relationship is the same in every environment, so understanding a CHIRP's limits is vital for its correct use.
§.§ Validation in SimpleGrid
A bespoke toy environment inspired by MiniGrid <cit.> is designed for initial validation. SimpleGrid is a 20 × 20 square grid implemented in the gymnasium framework <cit.>, illustrated in Figure <ref>. MDP variants are constructed by varying the agent and goal positions. The agent's state space is 4-dimensional and is constructed from the (x, y) coordinates of the agent and goal: s = ((x, y)_agent,(x, y)_goal). At each time step the agent chooses an action from 𝒜 = {up, down, left, right} to minimize its Manhattan distance to the goal,
r(s, a, s') = -C|(x, y)_agent - (x, y)_goal|_1,
with C as a scaling constant determined by the agent's initial position.
The results in Figure <ref> show SOPR and W_1-MDP distances from a base MDP to 10,500 MDPs with random agent and goal positions. The Pearson correlation coefficient indicates SOPR is strongly positively correlated with W_1-MDP (ρ=0.875, p<0.001) and is strongly monotonic (r_s=0.861, p<0.001) by Spearman's rank correlation.
Figure <ref> shows a strong linear relationship between our CHIRP and SOPR, but SimpleGrid is a trivial example where W_1-MDP can be calculated in full. For this to be useful in practice, the viability of estimating W_1-MDP has to be established, either through random sampling or through Algorithm <ref>'s reward-shaped sampling.
Our analysis of the two sampling methods is primarily focused on variance, rather than bias; in this case, a bias when estimating a proxy metric is relatively unimportant compared to the estimates' variance, as a systematic bias is easily removed during calibration.
Estimates of the true distances measured for Figure <ref> were calculated under the two sampling schemes with n_s=15 and n_t=1; in contrast, the full calculation uses 1,296 states. The biases and variances for both schemes are provided in Table <ref>.
Random sampling produces estimates of W_1-MDP with lower variance than reward-shaped sampling; reward-shaped sampling also has a statistically significant (p<0.001) bias under a two-sided t-test with a null hypothesis of zero-mean errors. The bias is small when compared to the scale, however: 0.079 is 0.76% of the median W_1-MDP of 10.3.
Though reward-shaped sampling achieves a higher variance, these results validate it as a viable sampling scheme. A similar analysis is also performed below in Metaworld with contrasting results.
§.§ Verification in MetaWorld
SimpleGrid has provided evidence that the CHIRP concept is valid and that W_1-MDP can be estimated through sampling. The SimpleGrid environment is unrepresentative of real-world problems, however; we therefore choose MetaWorld <cit.>, a suite of robotic arm tasks, as a more representative setting for further testing.
MetaWorld's continuous state and action spaces make calculating W_1-MDP impossible. The correlation between W_1-MDP and SOPR must be analyzed using estimates of W_1-MDP, requiring us to choose a sampling scheme: reward-shaped or random.
Though accuracy is indeterminable, estimates' variance under each sampling scheme can still be measured. Using 270 standard deviations of estimates for both methods over the 10 MetaWorld tasks (Figure <ref>) with n_s ∈{25, 50}, n_t ∈{1, 5, 10}, we find that estimates using reward-shaped sampling had a smaller average standard deviation of 0.331 in comparison to random sampling's 0.384, a 14% reduction (p<0.001).
We use reward-shaped sampling in MetaWorld due to its lower variance and the argument presented above of the ease of adjusting for systematic bias during calibration.
To measure SOPR's relationship with W_1-MDP, 200 soft actor-critic agents were trained on one of 10 MetaWorld tasks, with 20 agents per task. After training, their weights were frozen. Each agent was exposed to increasing state and action errors to simulate sensor and actuator degradation. The SOPRs between these MDPs were estimated using the highest and lowest returns observed across all agents to avoid the otherwise extreme cost of calculating SOPR.
Figure <ref> shows the aggregated Ŵ_1-SOPR relationship across the 200 agents and 10 tasks. A positive monotonic relationship exists with a moderate positive correlation (ρ=0.60, p<0.001) and monotonicity (r_s=0.70, p<0.001). The variance of SOPR values within each bin is wider than for SimpleGrid; this stems from the tendency of agents to either succeed or fail in MetaWorld, with fewer examples of mediocre returns than in SimpleGrid.
The relationship displayed in Figure <ref> is not linear, and total performance loss (a SOPR of 1) occurs at much smaller CHIRP distances compared with SimpleGrid's results. Therefore, our CHIRP measurements are incomparable across environments in their raw form; we address this with calibration further below.
§ LIFELONG REINFORCEMENT LEARNING WITH CHIRPS IN METAWORLD
Analyzing a CHIRP's correlation, bias, and variance with SOPR is useful, but lacking context. What degree of correlation is required for use, and does the variance in Figure <ref> prohibit this?
To test our CHIRP's value to lifelong reinforcement learning we modify <cit.>'s Lifetime Policy Reuse (LPR) agent. In LPR, a multi-armed bandit learns a strategy to reuse k policies across n tasks with k <= n. The bandit and k policies begin untrained and are learned with standard algorithms such as PPO <cit.> and Q-learning <cit.>.
Intuitively, LPR's bandit can be seen as finding groups of MDPs over which a single policy can achieve good returns; the learned groupings may be similar to clustering MDPs with low CHIRP values together. Therefore, the need to learn the mapping from experience (and suffer sub-optimal choices during this) could be avoided with mappings pre-computed from CHIRP metrics.
We formalize this idea as CHIRP Policy Reuse (CPR). To pre-compute CPR's reuse strategy, the CHIRP values between the n(n-1)/2 unique MDP pairs are estimated, producing a distance matrix as in Figure <ref>. k-medoids clustering <cit.> is applied to this matrix to identify clusters with low intra-cluster CHIRP values. Figure <ref> visualizes the resulting clusters.
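A plain k-medoids pass over the precomputed CHIRP distance matrix is enough for this step; a small self-contained sketch is given below (a library implementation with a precomputed-distance option could be used instead).

import numpy as np

def k_medoids(dist, k, n_iter=100, seed=0):
    # Plain k-medoids (Voronoi iteration) on a precomputed distance matrix,
    # e.g. the pairwise CHIRP estimates between the n MDPs.
    rng = np.random.default_rng(seed)
    n = dist.shape[0]
    medoids = rng.choice(n, size=k, replace=False)
    for _ in range(n_iter):
        labels = np.argmin(dist[:, medoids], axis=1)
        new_medoids = medoids.copy()
        for c in range(k):
            members = np.flatnonzero(labels == c)
            if members.size:
                new_medoids[c] = members[np.argmin(dist[np.ix_(members, members)].sum(axis=1))]
        if np.array_equal(new_medoids, medoids):
            break
        medoids = new_medoids
    return medoids, np.argmin(dist[:, medoids], axis=1)

# Example usage with a 10x10 CHIRP distance matrix D and k = 6 policies:
# medoids, policy_of_mdp = k_medoids(D, k=6)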
Comparing Figures <ref> and <ref> with the tasks shown in Figure <ref> provides some insight into the task space; MDPs 5 through 9 all share low CHIRP values and visually appear to be similar tasks, and are all clustered together. 3 and 4 are similar, while 0, 1 and 2 are more distinct from other MDPs. With only 3 policies to share between 5 MDPs, the most distinct MDPs of 0 and 2 have dedicated policies assigned, while 1, 3, and 4 share the last policy.
We compare CHIRP Policy Reuse (CPR) against Lifetime Policy Reuse (LPR) and two other LRL methods; Lifelong Reinforcement Learning with Modulating Masks (Mask-LRL) <cit.> and Lifelong Policy Gradient Learning of Factored Policies for Faster Training Without Forgetting (LPG-FTW) <cit.>, two approaches to LRL that are distinct from policy reuse. These methods were each chosen based on their strong results in MetaWorld and Continual World <cit.>, a MetaWorld derivative. A variant of CPR is also included in these comparisons, `Bandit CPR'. This agent uses CPR's clusters to initialize a multi-armed bandit as in LPR.
To test each method's sample efficiency and robustness, random sequences of the ten MetaWorld MDPs were generated with a p=0.1 probability of changing at the end of each episode. Each method will be judged based on the lifetime average returns they achieve (Equation (<ref>)). The task sequences were limited to T=1×10^6 timesteps; the shorter duration and frequent MDP changes require agents to demonstrate high sample efficiency and robustness to change.
R̅ = (1/T) ∑_t=0^T R_t.
Every method but the Mask-LRL approach has a policy library size parameter; the three policy reuse methods have a fixed library size of k, while LPG-FTW grows its library until a limit k is reached. To select k for each method, a search over k ∈{2, 4, 6, 8, 10} was used to identify the value which maximized Equation <ref> across 20 agents and T=3×10^5 timesteps.
With each method's best k determined, 20 new agents per method were trained for T=1×10^6 timesteps resulting in Table <ref>. CPR achieves the highest lifetime average return of 1516.1 ± 1398.7, 17% higher than the next best method with a mean increase of 220.6 (p<0.001) above LPR. Figure <ref> shows a rolling average of episodic performance; CPR also consistently achieves higher returns on average than other methods.
All experimentation was performed on Intel Xeon Gold 6138 and AMD Ryzen 5600X CPUs with 16Gb RAM.
§.§.§ Why don't LPR agents learn to outperform CPR's fixed strategy?
CPR's results are significantly higher than both the LPR and Bandit CPR methods; this is surprising, as methods that update their reuse strategies should outperform strategies fixed on imperfect information like a proxy metric.
Analyzing one LPR agent's (k=6, R̅ = 1120) task-policy mapping against the equivalent medoid clustering result where k=6 shows that LPR is susceptible to becoming trapped in local optima. The LPR agent reused five policies across the ten MDPs, leaving one policy `spare'. This suboptimality is unlikely to be escaped in future learning; the spare policy's episodic returns are lower in every MDP than at least one other policy, and it is only used when selected by the bandit's epsilon-greedy exploration. Its lack of selection causes a lack of training, exacerbating this policy's low performance, and cementing its exclusion from the strategy.
§.§.§ Why do Mask-LRL and LPG-FTW perform poorly?
The rolling averages in Figure <ref> show a significant difference in performance between Mask-LRL and LPG-FTW and the reuse methods.
One cause for this may be the interleaving of tasks in our benchmarking. In Mask-LRL and LPG-FTW's original benchmarking, agents are trained against each MDP in a single block of one million time steps, for ten million total. This approach allows the agents to converge to optimal policies in each task individually.
The frequent MDP changes in our testing may provide specific difficulties to these methods. In LPG-FTW, each new task is solved in part by combining solutions to previous tasks. Mask-LRL similarly combines prior tasks' maps to transfer learning into new tasks. If the agent has not had the opportunity to converge on previous tasks, then the new tasks will be learned from combining prior task solutions that are themselves changing. This likely provides an additional level of difficulty to the learning problem.
§ CALIBRATING A CHIRP TO COMPARE DIFFICULTY ACROSS ENVIRONMENTS
Comparing Figures <ref> and <ref>'s distinctly different CHIRP-SOPR relationships highlights how they can only be assumed to hold in the `local area' of the MDP. This illustrates the need for calibration when comparing across distinctly different problems.
As an example of calibration, two B-spline functions were fit to the data from figures <ref> and <ref>; the results are indicated by the dashed lines in these figures. These functions are effectively a predictive relationship between the `raw' CHIRP values and SOPR; we refer to these SOPR predictions as the `calibrated CHIRP' in Figures <ref> and <ref>.
Figure <ref>'s results for SimpleGrid exemplify a desirable calibration: a calibrated CHIRP value can be used directly as an estimate of SOPR. Combined with MetaWorld's calibrated results in <ref>, the calibrated CHIRP values can be compared across environments to determine a predicted SOPR. Calibrating CHIRPs in this way permits novel comparisons of difficulty that are agnostic to the benchmarking environments used.
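As an illustration of this calibration step, the sketch below fits a smoothing spline mapping raw CHIRP values to measured SOPR and then uses it as the calibrated CHIRP; the data points and smoothing parameter are placeholders, not the values behind the figures.

import numpy as np
from scipy.interpolate import UnivariateSpline

raw_chirp = np.array([0.1, 0.4, 0.8, 1.3, 1.9, 2.6, 3.4])        # example W1-MDP values
observed_sopr = np.array([0.02, 0.10, 0.24, 0.41, 0.55, 0.71, 0.83])

# Cubic smoothing spline standing in for the B-spline fit described above.
calibration = UnivariateSpline(raw_chirp, observed_sopr, k=3, s=1e-3)

def calibrated_chirp(chirp_value):
    # Predicted SOPR for a raw CHIRP value, comparable across environments.
    return float(calibration(chirp_value))

print(calibrated_chirp(1.0))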
§.§ Understanding the limits of a CHIRP-SOPR relationship
The relationship between a proxy metric and its target cannot be assumed to be globally fixed; it is important both to calibrate to local conditions and, conversely, to understand where an established relationship stops holding.
To highlight this, we discuss one inherent limitation of choosing W_1-MDP as our example CHIRP. The Wasserstein metric is symmetric by definition; it cannot capture non-symmetry in SOPR. In general, SOPR(ℳ_i, ℳ_j) ≠SOPR(ℳ_j, ℳ_i) as two MDPs' optimal policies may not achieve the same returns in the other MDP. Table <ref> shows the non-symmetry of SOPR for MDPs A and B shown in Figure <ref>.
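The symmetry of the metric is easy to see numerically; in the sketch below the one-dimensional samples standing in for two MDPs are invented for illustration only.

from scipy.stats import wasserstein_distance

summary_mdp_a = [0.0, 0.2, 0.5, 0.9]
summary_mdp_b = [0.1, 0.4, 0.8, 1.5]

d_ab = wasserstein_distance(summary_mdp_a, summary_mdp_b)
d_ba = wasserstein_distance(summary_mdp_b, summary_mdp_a)
assert d_ab == d_ba  # symmetric by construction, unlike SOPR(M_i, M_j)
print(d_ab, d_ba)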
§ CONCLUSIONS
Lifelong Reinforcement Learning studies the impacts of change on agents, but our understanding of this relationship is currently limited. Directly measuring agent performance across many changes is too costly, and model-based metrics are generally unsuitable for non-trivial problems.
To our knowledge, this paper is the first to investigate whether measuring a change directly can be predictive of its performance impact. We used one example to demonstrate the utility of a potential class of proxy metrics, CHIRPs, in two ways: lifelong reinforcement learning and benchmark-agnostic difficulty comparisons.
For lifelong reinforcement learning, MDPs were clustered for policy reuse based on their CHIRP values. Despite this agent using a fixed policy, it achieved a 17% performance increase above the next best method tested. We also include a simple calibration of a CHIRP to two environments to discuss how change-induced difficulty can be compared across otherwise incomparable problems.
|
http://arxiv.org/abs/2409.03150v1 | 20240905005747 | Field Theory of Non-Newtonian Turbulence | [
"Esteban Calzetta"
] | physics.flu-dyn | [
"physics.flu-dyn",
"hep-ph"
] |
[email protected] de Buenos Aires, Facultad de Ciencias Exactas y Naturales, Departamento de Física, Buenos Aires, Argentina,
and CONICET-Universidad de Buenos Aires, Instituto de Física de Buenos Aires (IFIBA), Buenos Aires, Argentina § ABSTRACT
Providing a compelling derivation of Kolmogorov turbulence is a fascinating open challenge in field theory. Here, we pose a more modest question: if we had a field-theoretic description of Kolmogorov turbulence, could we use it to describe deviations caused, for example, by adding a polymer additive or by relativistic corrections? To investigate this issue, we assume a description of developed, homogeneous, and isotropic turbulence along the lines of Martin, Siggia, and Rose, and we work out the first corrections to the equal-time, two-point spectrum caused by adding non-Newtonian terms to the fluid stress tensor. While the results are not conclusive, they show a promising resemblance to turbulent spectra found in both experiments and large-scale numerical simulations.
Field Theory of Non-Newtonian Turbulence
Esteban Calzetta
September 9, 2024
========================================
§ INTRODUCTION
Almost since the inception of the Kolmogorov theory of turbulence <cit.>, there have been attempts to derive it from the Navier-Stokes equations by treating them as a non-relativistic field theory <cit.>. While substantial contributions have been made <cit.>, it is fair to say that this is still an open challenge <cit.>. But we can also ask whether, if we had a field theoretic description of Kolmogorov turbulence, we could use it to investigate other situations of interest, such as turbulence in strongly relativistic fluids <cit.>, or in viscoelastic fluids <cit.>.
Actually, there are grounds to expect that such an extension of the theory is feasible. Some time ago Goldenfeld, Gioia, Chakraborthy and others <cit.> proposed a quantitative link between friction in a wall bounded flow and the turbulent spectrum in isotropic turbulence. Subsequently the original authors and others showed that the relevant correlation between friction and spectrum subsisted in more general situations, such as flows in different number of dimensions <cit.>, time dependent flows <cit.>, and polymeric solutions <cit.>. In the present work we similarly ask whether a theory built to describe Kolmogorov turbulence under the Navier-Stokes equations, may still be relevant when the fundamental equations are perturbed.
Concretely, we shall begin by assuming that the Navier-Stokes equations (NSE) provide the “bare” description of the flow and will use the Martin - Siggia and Rose (MSR) formalism <cit.> to build an “effective action” (EA) <cit.> from which the full correlations of the theory may be derived. If the EA were computed in a diagrammatic expansion it would reproduce the formalism by Wyld and Lee, see <cit.>. Actually, we shall never need to write down this effective action, but only assume we know what the full correlations are in homogeneous, isotropic Kolmogorov turbulence.
Then we shall perturb the NSE by the addition of non-Newtonian terms in the fluid stress tensor <cit.>. The particular perturbation we shall consider arises in models of viscoelastic behavior, such as characteristic of polymeric solutions <cit.>. We shall show in Appendix <ref> that this model also describes the nonrelativistic limit of a conformal fluid <cit.>. We then work out the first order correction to the spectrum from the perturbed EA. We compare this result with spectra found in viscoelastic turbulence both in experiment and numerical simulations.
This paper is organized as follows. Next section <ref> presents the basic notations and the equations of the fluid, both the NSE and the perturbed one. To make the work self-contained, we have included a basic introduction to the MSR approach and the EA therefrom. In section <ref> we proceed to compute the kernels which are necessary to write down the perturbed Schwinger-Dyson equations. In section <ref> we solve the Schwinger-Dyson equations and find the energy spectrum.
We conclude with some brief final remarks in section <ref>.
We have included three appendices. In Appendix <ref> we show how the model in Section <ref> describes the nonrelativistic limit of a conformal fluid. In Appendix <ref> we discuss how to account for the random Galilean invariance <cit.> of the NSE in the EA formalism, a subject that we left out of the main text for simplicity, but has deep implications for the development of the theory. Appendix (<ref>) fills in the details of one of the derivations in the text.
§ THE MODEL
The model is represented by the equations
Q^j=V^j_,t+V^kV^j_,k+P^jk_,k+1/ρP^,j=0
Q^jk=τ_1[P^jk_,t+V^lP^jk_,l]-τ_2[P^jlV^k_,l+V^j_,lP^lk]+P^jk+νΣ^jk=0
V^j_,j=P^jk_,jk=0
where V^j is the incompressible fluid velocity, P^ij is the stress tensor, P is the pressure, μ is the constant fluid mass density, Σ^jk is the shear tensor
Σ^jk=V^j,k+V^k,j
and ν is the kinematic viscosity. When τ_1,2→ 0 at fixed ν we get an ordinary Newtonian fluid. When τ_1=τ_2=τ, the derivative terms in the second of eqs. (<ref>) add up to the upper convected derivative of p^ij. When τ_2=0 it reduces to a material derivative, which is the case that describes the nonrelativistic limit of a conformal fluid, see Appendix <ref>. In this note we shall assume τ_1=τ_2=τ.
We note that the right hand side of equations (<ref>) ought to display stochastic sources necessary to put the fluid in motion. However, since we wish to work in the regime where fluid fluctuations are self-sustained, we shall not consider these sources explicitly.
To be able to derive equations (<ref>) from a variational principle we introduce Lagrange multipliers A_j and B_jk such that A^j_,j=0, and write
S=∫ d^3ydt {A_jQ^j+B_jkQ^jk}
We delete the pressure term from Q^j, since it integrates to zero anyway.
§.§ The MSR EA and its perturbations
We see that the action functional eq. (<ref>) depends on four different fields, the physical fields V^j and P^ij and the auxiliary fields A_j and B_ij. This diversity makes for a rather complex field theory.
To avoid unnecessary complications, we shall adopt an scheme based on three levels of description. Eqs. (<ref>) and (<ref>) belong to the first level, where we treat both physical and auxiliary fields as distinct. In the second level, however, we drop this distinction and gather together the physical fields into a single string V^a=( V^j,P^jk), and similarly the auxiliary fields into a string A_a=(A_j,B_jk). For higher compression, in the third level of description we regard all variables as components of a single object X^J=( V^a,A_a). In the second and third levels space-time indexes are included into the indexes a,J and we apply Einstein's convention to sums over indexes, both discrete and continuous.
Given an action S[ X] we define a generating functional
e^iW[ J] =∫ DX e^i( S[ X] +J_KX^K)
where the J_K are a string of external sources. Differentiation yields the mean fields
X̅^J=δ W/δ J_J
We shall work under conditions where symmetry forces all background fields to zero, namely homogeneous, isotropic turbulence. Further differentiation produces the higher cumulants, in particular the two-point correlations
δ^2 W/δ J_Jδ J_K=i⟨ X^JX^K⟩
where we are already using that the mean fields vanish. It is convenient to choose the mean fields, rather than the sources, as independent variables. To achieve this, we introduce the effective action Γ as the Legendre transform of the generating functional
Γ[ X̅] =W[ J] -J_KX̅^K
whereby we get the equations of motion for the mean fields
δΓ/δX̅^J=-J_J
Differentiating eq. (<ref>) with respect to the mean fields and using eq. (<ref>) we get
δ^2 Γ/δX̅^JδX̅^K⟨ X^KX^L⟩ =iδ^L_J
δ^L_J denotes the identity operator in the corresponding functional space. Similarly, from eq. (<ref>) we get
⟨ X^JX^K⟩δ^2 Γ/δX̅^KδX̅^L =iδ^J_L
These are the Schwinger-Dyson equations of the theory. From either of these equations we can derive the two-point correlations from the effective action.
§.§ Auxiliary and physical fields
We will now elaborate on the analysis above by distinguishing physical fields V^a from auxiliary fields A_a. We also distinguish the external sources J_a coupled to physical fields from the sources K^a coupled to auxiliary fields. The action eq. (<ref>) is written as
S=A_aQ^a[V]
The equations of motion Q^a are causal and we assume (<cit.>)
Detδ Q^a/δ V^b= constant
We may choose the constant to be 1. The generating functional eq. (<ref>) is expanded into
e^iW[J,K]=∫ DADV e^i(A_aQ^a[V]+A_aK^a+J_aV^a)
Observe that
W[0,K]=0
identically, so all the expectation values of products of auxiliary fields vanish. Eq. (<ref>) becomes
([ Γ_,A̅_aA̅_b Γ_,A̅_aV̅^b; Γ_,V̅^aA̅_b Γ_,V̅^aV̅^b ])([ 0 ⟨ A_bV^c⟩; ⟨ V^bA_c⟩ ⟨ V^bV^c⟩ ])=i([ δ^a_c 0; 0 δ^c_a ])
This implies that Γ_,A̅_aV̅^b and ⟨V̅^bA̅_c⟩ are non singular, since
Γ_,A̅_aV̅^b⟨ V^bA_c⟩=iδ^a_c
and then it must be
Γ_,V̅^aV̅^b=0
when the mean auxiliary fields vanish.
§ COMPUTING THE EA WITH THE BACKGROUND FIELD METHOD
According to the usual rule <cit.>, the EA is the classical action plus a “quantum” correction
Γ=S+Γ_Q
To compute Γ_Q, we split all fields into a background value plus a fluctuation X^J→X̅^J+ x^J etc., expand the action eq. (<ref>) and discard terms independendent or linear in the fluctuations. Then
Γ_Q=(-i)ln∫ Dx^J e^i(S[x]+X̅^JS̅_J[x]+J_QJx^J)
where S is just the action eq. (<ref>) evaluated on the fluctuation fields, and S̅ is linear on the background fields. The sources J_Q enforce the constraints that the expectation value of the fluctuations vanish ⟨ x^J⟩=0. For this reason all one particle insertions in the diagrammatic evaluation of the effective action vanish, and it is enough to consider one-particle irreducible graphs only.
Expanding the exponential, we see that the contribution to the quadratic part of the EA from Γ_Q is
Γ_Q^(2)=i/2X̅^JX̅^K⟨S̅_J[x]S̅_K[x]⟩
where
⟨𝒳⟩=∫ Dx^J e^i(S[x]+J_QAx^A)𝒳
We are using that ⟨S̅_J[x]⟩=0 at zero background fields, which is easily verified.
Expanding
S=S_0+τ S_1
then to first order in τ we have
⟨𝒳⟩=⟨𝒳⟩_0+iτ⟨𝒳S_1⟩_0
where
⟨𝒳⟩_0=∫ Dx^J e^i(S_0[x]+J_QAx^A)𝒳
The S̅_J may be similarly expanded
S̅_J=S̅_0J+τS̅_1J
Therefore, up to first order in τ we get
Γ_Q^(2)=i/2X̅^JX̅^K{⟨S̅_0J[x]S̅_0K[x](1+iτ S_1)⟩_0+2τ⟨S̅_0J[x]S̅_1K[x]⟩_0}
We shall assume viscosity effects are negligible in computing Γ_Q, whereby
S_0 = ∫ d^3yds [a_j(v^j_,t+v^kv^j_,k+p^jk_,k)+b_jkp^jk]
S_1 = ∫ d^3yds b_jk(p^jk_,t+v^lp^jk_,l-p^jlv^k_,l-v^j_,lp^lk)
Because S_0 only contains b_jk in the combination b_jkp^jk, we find a Novikov-type formula
⟨ p_jk𝒳⟩_0=i⟨δ𝒳/δ b_jk⟩_0
To apply this formula, we use
δ b_jk(y,s)/δ b_j'k'(y',s')=δ(s-s')Δ^j'k'_jk(y-y')
where
Δ^j'k'_jk(y-y')=1/2{δ^j'_jδ^k'_k+δ^j'_kδ^k'_j}δ(y-y')
For our purposes we need the noise kernels, when both X̅^J and X̅^K are auxiliary fields, and the self-energies, when one is an auxiliary field and the other a physical field. The noise kernels are computed with the substitutions
X̅^J=A̅_j⇒S̅_J= S^j_a(x,t)=S^j_0a(x,t)=(v^kv^j_,k)(x,t)
or
X̅^J=B̅_jk⇒S̅_J= S^jk_b(x,t)=S^jk_1b(x,t)=(v^lp^jk_,l-p^jlv^k_,l-v^j_,lp^lk)(x,t)
We see that expressions containing S^jk_b display more p^jk than b_jk factors, therefore according to the Novikov formula are obviously zero. So
δ^2Γ_Q/δA̅_j(x,t)δB̅_j'k'(x',t')=δ^2Γ_Q/δB̅_jk(x,t)δB̅_j'k'(x',t')=0
The calculation of δ^2Γ_Q/δB̅_jn(x,t)δV̅^k(y,t') involves S^jk_b and
X̅^J=V̅^j⇒S̅_0J= S_0vj(x,t)=(a_kv^k_,j-a_k,jv^k)(x,t)
X̅^J=V̅^j⇒S̅_1J= S_1vj(x,t)=[a_lkp^kl_,j+2(b_jkp^kl)_,l](x,t)
Once again we see that there are more p^jk than b_jk factors, therefore
δ^2Γ_Q/δB̅_jn(x,t)δV̅^k(y,t')=0
Let's spell out the expectation values involved in computing δ^2Γ_Q/δA̅_j(x,t)δV̅^k(y,t'):
⟨(v^lv^j_,l)(x,t)(a_mv^m_,k-a_m,kv^m)(y,t')(1+iτ S_1)⟩_0+2τ⟨(v^lv^j_,l)(x,t)[a_mnp^mn_,k+2(a_knp^mn)_,m](y,t')⟩_0
Now
⟨(v^lv^j_,l)(x,t)(a_mv^m_,k-a_m,kv^m)(y,t')iτ S_1⟩_0=⟨(v^lv^j_,l)(x,t)[a_mnp^mn_,k+2(a_knp^mn)_,m](y,t')⟩_0=0
We conclude that there are no first order corrections to the noise and dissipation kernels, which remain at their Kolmogorov values
δ^2Γ_Q/δA̅_j(x,t)δV̅^k(y,t')=δ(t-t')∫d^3k/(2π)^3e^ik(x-y)Δ^k_jκ_k
where on dimensional grounds
κ_k=ν_0(ϵ k^2)^1/3, where ν_0 is a dimensionless constant, and ϵ is the constant in Kolmogorov's 4/5 law, which we also assume (<cit.>)
⟨[r̂_j(v^j(r)-v^j(0))]^3⟩=-4/5ϵ r
We also have
δ^2Γ_Q0/δA̅_j(x,t)δA̅_k(y,t')=iδ(t-t')∫d^3k/(2π)^3e^ik(x-y)Δ^kjN_k
Since A_j has dimensions of L^-4T, N_k has dimensions of L^5T^-3. Because N_k is analytical and isotropic near k≈ 0, there N_k≈ k^2. It peaks at a scale k_c≈ 1/L, where L is the linear dimension of the flow. In the inertial range, N_k may depend only on ϵ and k, so N_k≈ϵ/k^3. We interpolate between these behaviors as
N_k=N_0ϵ k^2/( k_c^2+k^2)^5/2, where N_0 is a dimensionless constant.
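As a numerical sanity check (not part of the derivation), the sketch below evaluates these bare kernels for illustrative values of ν_0, N_0, ϵ and k_c, and verifies that the combination k^2 N_k/κ_k entering the spectrum computed below reproduces the Kolmogorov k^-5/3 scaling in the inertial range.

import numpy as np

nu0, N0, eps, k_c = 1.0, 1.0, 1.0, 1.0   # illustrative values only

def kappa(k):
    return nu0 * (eps * k**2) ** (1.0 / 3.0)

def noise(k):
    return N0 * eps * k**2 / (k_c**2 + k**2) ** 2.5

k = np.logspace(1, 3, 200)               # inertial range, k >> k_c
E0 = k**2 * noise(k) / kappa(k)
slope = np.polyfit(np.log(k), np.log(E0), 1)[0]
print(slope)                             # ~ -5/3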
To compute variations with respect to P^km we need
X̅^J=P̅^jk⇒S̅_J= S_1pkm=-[v^lb_km,l+b_klv^l_,m+b_mlv^l_,k]
It is clear that δ^2Γ_Q/δB̅_jn(x,t)δP̅^km(y,t')=O(τ^2) at least.
The only remaining kernel we need to compute the Schwinger-Dyson equations to first order in τ is
δ^2Γ_Q/δA̅_j(x,t)δP̅^km(y,t')=-τ⟨(v^lv^j_,l)(x,t)[v^nb_km,n+b_knv^n_,m+b_mnv^n_,k](y,t')⟩_0
We assume this kernel is local in time to enforce random Galilean invariance, and then on dimensional grounds (P^km has units of L^2T^-2, B_km has units of L^-5T)
δ^2Γ_Q/δA̅_j(x,t)δP̅^km(y,t')=i/2τδ(t-t')∫d^3k/(2π)^3e^ik(x-y)[Δ^k_jk^m+Δ^m_jk^k]Λκ_k
where Λ is dimensionless
Λ≈α N_0(k/k_c)^2/3
where
α=1/12π^2Γ[13/6]Γ[1/3]/Γ[5/2]≈ 0.018
See appendix (<ref>).
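The quoted value of α can be checked directly:

from math import gamma, pi

alpha = gamma(13 / 6) * gamma(1 / 3) / (12 * pi**2 * gamma(5 / 2))
print(alpha)   # ~ 0.018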
§ COMPUTING THE ENERGY SPECTRUM
Because of eq. (<ref>), the velocity-velocity correlation reduces to
⟨ v^j(x,t)v^k(x',t')⟩=i∫ d^3yds d^3y'ds' ⟨ v^j(x,t)a_l(y,s)⟩δ^2Γ/δA̅_l(y,s)δA̅_m(y',s')⟨ a_m(y',s')v^k(x',t”)⟩
To find the causal propagator, we ought to solve the system
∫ d^3ydt'{δ^2Γ/δA̅_j(x,t)δV̅^k(y,t')⟨ v^k(y,t')a_l(x',t'')⟩+δ^2Γ/δA̅_j(x,t)δP̅^km(y,t')⟨ p^km(y,t')a_l(x',t'')⟩}=iΔ^j_l
∫ d^3ydt'{δ^2Γ/δB̅_jn(x,t)δV̅^k(y,t')⟨ v^k(y,t')a_l(x',t'')⟩+δ^2Γ/δB̅_jn(x,t)δP̅^km(y,t')⟨ p^km(y,t')a_l(x',t'')⟩}=0
The relevant kernels are computed in eqs. (<ref>), (<ref>), (<ref>), (<ref>) and (<ref>), so we may finally write down the Schwinger-Dyson equations. Write
⟨ v^j(x,t)a_k(x',t')⟩=∫d^3k/(2π)^3dω/(2π)e^i[k(x-x')-ω(t-t')]Δ^j_kG[k,ω]
⟨ p^jm(x,t)a_k(x',t')⟩=i∫d^3k/(2π)^3dω/(2π)e^i[k(x-x')-ω(t-t')][Δ^j_kk^m+Δ^m_kk^j]G'[k,ω]
Then
[-iω+κ_k]G-k^2[1+Λκ_kτ]G' = i
ν G+[-iωτ+1]G' = 0
Eliminating G',
[-iω+κ_k+ν k^2[1+Λκ_kτ]/[1-iωτ]]G=i
The correction is only meaningful when iω≈κ, so we may write
[-iω+κ_k+ν k^2[1+Λκ_kτ]/[1-κτ]]G=i
To first order in τ,
[-iω+κ_k+ν k^2+ν k^2κτ[1+Λ]]G=i
In the inertial range ν k^2≤κ, so we may write further
{-iω+κ_k[1+ν k^2τ(1+Λ)]}G=i
Once G is known, we may compute
⟨ v^j(x,t)v^k(x',t')⟩=∫d^3k/(2π)^3dω/(2π)e^i[k(x-x')-ω(t-t')]Δ^jkG_1[k,ω]
where
G_1[k,ω]=|G[k,ω]|^2N_k
The energy spectrum is computed from the coincidence limit of the velocity-self correlation
E[k]=k^2∫dω/(2π)G_1[k,ω]=k^2N_k/κ_k[1+ν k^2τ(1+Λ)]
We plot a typical spectrum in fig. (<ref>)
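For orientation, a short numerical sketch of this spectrum; the values of ν, τ and the constants entering Λ are assumptions chosen only to display the depletion above the non-Newtonian scale discussed in the next section.

import numpy as np

nu0, N0, eps, k_c = 1.0, 1.0, 1.0, 1.0
nu, tau, alpha = 1.0e-4, 1.0e-2, 0.018     # illustrative parameters

def kappa(k):
    return nu0 * (eps * k**2) ** (1.0 / 3.0)

def noise(k):
    return N0 * eps * k**2 / (k_c**2 + k**2) ** 2.5

def spectrum(k):
    Lam = alpha * N0 * (k / k_c) ** (2.0 / 3.0)
    return k**2 * noise(k) / (kappa(k) * (1.0 + nu * k**2 * tau * (1.0 + Lam)))

k_NN = (nu * tau) ** -0.5
k = np.array([0.1 * k_NN, k_NN, 10.0 * k_NN])
print(k_NN, spectrum(k) * k ** (5.0 / 3.0))   # compensated spectrum drops above k_NN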
§ FINAL REMARKS
Eq. (<ref>) shows that non-Newtonian effects become relevant above the scale
k_NN=√(1/ντ)
It is important to compare this scale with the scale
k_D=(ϵ/ν^3)^1/4
which marks the upper limit of the inertial range. Observe that we may write
ϵ/ν=Re/T^2
where T≈ L/V is the revolving time of the largest eddies. Then
k_NN/k_D=1/(Re)^1/4√(T/τ)
which shows that non-Newtonian effects could be relevant well within the inertial range for turbulent enough flows.
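As an illustration, the sketch below evaluates this ratio for an assumed Reynolds number, large-eddy turnover time and relaxation time; the numbers are placeholders, not fits to any experiment.

Re = 1.0e6          # assumed Reynolds number
T_eddy = 1.0        # large-eddy turnover time (arbitrary units)
tau = 1.0e-2        # relaxation time of the non-Newtonian stress

ratio = Re ** -0.25 * (T_eddy / tau) ** 0.5
print(ratio)        # < 1 puts k_NN inside the inertial range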
Our goal in this paper has been to explore whether the well known difficulties of field theoretic approaches to turbulence were an intrinsic limitation of field theory, or just confined to Kolmogorov turbulence. We have attempted to show that if one had a field theory capable to account for the Kolmogorov spectrum and 4/5 Law, then this theory would retain some predictive power even in more general situations.
A more close to Earth goal was to establish a bridge between the theory of turbulence in non-Newtonian and in relativistic turbulence, with the hope that our growing understanding of the former could guide us in the exploration of the latter.
I thank P. Mininni for multiple talks.
E. C. acknowledges financial support from Universidad de Buenos Aires through Grant No. UBACYT
20020170100129BA, CONICET Grant No. PIP2017/19:11220170100817CO and ANPCyT Grant No. PICT 2018: 03684.
§ NON-NEWTONIAN FLUID AS THE NON-RELATIVISTIC LIMIT OF A CONFORMAL FLUID
We consider a relativistic fluid of massless particles.
At the macroscopic level, the theory is described by the energy-momentum tensor (EMT) T^μν. Adopting the Landau prescription for the four velocity u^μ and the energy density ρ
T^μ_νu^ν=-ρ u^μ
and observing that T^μν is traceless, we are led to write
T^μν=ρ[u^μu^ν+1/3Δ^μν+Π^μν]
where
Δ^μν=η^μν+u^μu^ν
and
Π^μ_νu^ν=Π^μ_μ=0
We must also provide an entropy flux. For an ideal fluid, namely when Π^μ_ν=0, the entropy density is
s=s_0=1/T(ρ+P)
where T is the temperature and P=ρ/3 is the pressure. From the thermodynamic relation
s_0=∂ P/∂ T
we conclude that
ρ=σ_SB T^4
for some constant σ_SB. The entropy flux is then
S_0^μ=s_0u^μ
When we consider the real fluid, Π^μ_ν≠0, we observe that because of (<ref>) we cannot make a vector out of u^μ and Π^μν. Therefore it makes sense to write
S^μ=su^μ
The entropy density ought to be maximum when the fluid is in equilibrium, namely when Π^μν vanishes. So at least close to equilibrium we should have
s=4/3σ_SBT^3e^-3/2λΠ^μνΠ_μν
for some dimensionless constant λ. If we further write
T=T_0e^δ
Then the conservation laws are
0 = δ_,νu^ν+1/3u^ν_,ν+1/4Π^μνu_μ,ν
0 = δ_,ν[Δ^μν+3Π^μν]+u^μ_,νu^ν+3/4Δ^μ_ρΠ^ρν_,ν
On the other part, positive entropy creation yields
0≤1/3S^μ_μ=u^ν[δ_,ν-λΠ^ρσΠ_ρσ,ν]+1/3u^ν_,ν
which using the conservation laws and the transversality of Π^ρσ may be written as
Π^ρσ[λ u^νΠ_ρσ,ν+1/8σ_ρσ]≤ 0
where
σ^ρσ=[Δ^ρμΔ^σν+Δ^ρνΔ^σμ-2/3Δ^ρσΔ^μν]u_μ,ν
is the covariant form of the shear tensor eq. (<ref>).
Therefore, positive entropy creation is achieved by adopting the Cattaneo-Maxwell equation
λ u^νΠ^ρσ_,ν+1/t_RΠ^ρσ+1/8σ^ρσ=0
We shall now consider the nonrelativistic limit. We write explicitly x^0=ct and
u^μ = ( 1,u^k/c) /√(1-u^2/c^2)
Π^μν = ( [ Π_lmu^lu^m/c^2 Π_klu^l/c; Π_jmu^m/c Π_jk ]) +Π_lmu^lu^m/c^2/3-u^2/c^2( [ u^2/c^2 u^k/c; u^j/c δ_jk ])
where Π^j_j=0. Observe that
Δ^μ_ν=1/1-u^2/c^2( [ -u^2/c^2 u^k/c; -v_j/c δ^jk+(u^ju^k-u^2δ^jk)/c^2 ])
The first nontrivial terms in the energy conservation equation are of order 1/c and read
0=δ_,t+u^jδ_,j+1/3u^j_,j+1/4Π^jkv_j,k
From the momentum conservation equation we get
0=δ_,k[δ^jk+3Π^jk]+3/4Π^jk_,k+1/c^2[u^j_,t+u^ku^j_,k]
The Cattaneo-Maxwell equation (<ref>) yields
λ[Π^jk_,t+u^lΠ^jk_,l]+1/t_RΠ^jk+1/8(v_j,k+v_k,j)=0
A consistent nonrelativistic limit requires δ,Π^jk∝ 1/c^2. Then from energy conservation we get u^j_,j=0 to lowest order. Let us write
u_j = v_j+1/c^2ϕ_,j
Π^jk = 4/3c^2p^jk
δ = 1/c^2ϵ
λ = 3c^2/32ντ
t_R = 32ν/3c^2
where v^j_,j=0.
Collecting again the leading terms we get
0=ϵ_,t+v^jϵ_,j+1/3Δϕ+1/3v^j,kp_jk
0=ϵ_,j+p^jk_,k+[v^j_,t+v^kv^j_,k]
Taking the divergence of this equation we get
0=Δϵ+v^k,jv^j_,k+p^jk_,jk
so we may write a scalar-free equation of motion
Q^l=Δ^l_j[v^j_,t+v^kv^j_,k+p^jk_,k]=0
where
Δ_jk=δ_jk-∂_jΔ^-1∂_k
Finally
Q^jk=τ[p^jk_,t+v^lp^jk_,l]+p^jk+ν[v_j,k+v_k,j]=0
We may define a mass density
μ=ρ/c^2
Then μ is constant to order 1/c^2. We see that ϵ=P/μ, where P is the non-constant part of the pressure. P is not a dynamical variable but it is determined from the constraint
0=1/μΔ P+ p^jk_,jk+v^k_,jv^j_,k
We see that we recover equations (<ref>) in the particular case τ_2=0.
§ RANDOM GALILEAN INVARIANCE
Let us go back to the action functional eq. (<ref>) and the corresponding generating functional eq. (<ref>), whose Legendre transform yields the 1PI effective action Γ, eq. (<ref>).
This construction misses the fact that the equations of motion (<ref>) are random galilean invariant, that is, they are invariant under the transformation
v^j(x^j,t)→ v^j(x^j-ϵ^j(t),t)+ϵ̇^j(t)
p^jk(x^j,t)→ p^jk(x^j-ϵ^j(t),t)
A_j(x^j,t)→ A_j(x^j-ϵ^j(t),t)
A_jk(x^j,t)→ A_jk(x^j-ϵ^j(t),t)
where ϵ^j(t) is an arbitrary time dependent field. Of course we are using that
∫ d^3x A_jϵ̈^j=∫ d^3x A_j∂^j(ϵ̈_kx^k)=0
For this reason the path integral defining the generating functional, eq. (<ref>), is redundant. To eliminate the overcounting, we consider the non-invariant function
P^j(t)=∫ d^3x μ v^j
Assuming that μ transforms as μ(x^j,t)→μ(x^j-ϵ^j(t),t) we see that
P^j→ P^j+Mϵ̇^j(t)
where M is the total mass of the fluid. We now observe that
1=∫ Dϵ^j detδ P^j[ϵ]/δϵ^kδ(P^j[ϵ]-C^j)
Introducing this identity into the path integral, we can take the ϵ integral out as a constant factor (for this we make a change of variables within the integral, with unit Jacobian), integrate over the C^j with a Gaussian weight and exponentiate the determinant introducing Grassmann variables ζ_j and η^j, where now
e^iW[Z_a,H^a,z_a,h^a]=∫ DX^aDA_a e^i(S_RGI+Z_aX^a+H^aA_a+z_aη^a+h^aζ_a)
where
S_RGI=∫ dtd^3x {A_jQ^j+A_jkQ^jk}+1/2α∫ dt P_jP^j+i∫ dt ζ_jMη̇^j
Note that the ghost fields are decoupled. This action is still invariant under a BRST transformation defined as follows: the matter and auxiliary fields transform as in a random galilean transformation with parameter ϵ^j=θη^j, where θ is a Grassmann constant, ζ_j transforms into ζ_j+iθ P_j/α, and η^j is invariant. We thus obtain the Zinn-Justin equation
∫ d^dxdt {δΓ/δ v^j(η^l(t)v^j_,l(x^l,t)-η̇^j(t))+δΓ/δ p^jkη^l(t) p^jk_,l(x^l,t)+δΓ/δ A_jη^l(t)A_j,l(x^l,t) + δΓ/δ A_jkη^l(t)A_jk,l(x^l,t)}-i/α∫ dt δΓ/δζ_jP_j(t)=0
Since the integral over ghost fields is just a decoupled Gaussian integral, we have
δΓ/δζ_j=iMη̇^j
Moreover
∫ d^dxdt η^l(t)v^j_,l(x^l,t)δ/δ v^j∫ dt P_k(t)P^k(t)=0
So eq. (<ref>) is consistent with
Γ=Γ_0+1/2α∫ dt P_j(t)P^j(t)
where Γ_0 is independent of α. Then Γ_0=lim_α→∞Γ, namely, it is an effective action without the Fadeev-Popov procedure. This implies that Γ_0 is identically zero when the auxiliary fields vanish, independently of the physical fields.
Taking a derivative of eq. (<ref>) with respect to η^j we see that the Zinn-Justin equation for Γ_0 is local in time, and so it must be Γ_0 itself.
In the presence of the gauge-fixing term ⟨ vA⟩ and ⟨ vv⟩ are unchanged, and now
⟨ AA⟩=μ^2/αδ(k)/ω^2+ν^2[0]
Because the velocity-velocity correlation vanishes at zero momentum, this does not affect the perturbation theory at non-vanishing momenta. This means there is no loss of generality if we take α→∞.
§ DERIVATION OF EQ. (<REF>)
We may estimate Λ as follows. First note that at τ=0, p^jk becomes a Lagrange multiplier enforcing the constraint
b_jk=1/2[a_j,k+a_k,j]
Now use a quasi-Gaussian approximation to get
δ^2Γ_Q/δA̅_j(x,t)δP̅^km(y,t')=-τ/2⟨(v^lv^j_,l)(x,t)[v^n[a_k,mn+a_m,kn]+[a_k,n+a_n,k]v^n_,m+[a_m,n+a_n,m]v^n_,k](y,t')⟩_0 = -τ/2⟨(v^lv^j_,l)(x,t)[∂_m(v^na_k,n)+∂_k(v^na_m,n)+a_n,kv^n_,m+a_n,mv^n_,k](y,t')⟩_0 = -τ/2{∂^2/∂ y^m∂ y^n[⟨ v^l(x,t)v^n(y,t')⟩⟨ v^j_,l(x,t)a_k(y,t')⟩+⟨ v^l(x,t)a_k(y,t')⟩⟨ v^j_,l(x,t)v^n(y,t')⟩]. + .⟨ v^l(x,t)a_n,k(y,t')⟩⟨ v^j_,l(x,t)v^n_,m(y,t')⟩+⟨ v^l(x,t)v^n_,m(y,t')⟩⟨ v^j_,l(x,t)a_n,k(y,t')⟩+(k↔ m)}
We use the Kolmogorov values
⟨ v^j(x,t)a_k(x',t')⟩=∫d^3k/(2π)^3Δ^j_ke^[ik(x-x')-κ_k(t-t')]θ(t-t') ⟨ v^j(x,t)v^k(x',t')⟩=∫d^3k/(2π)^3Δ^j_ke^[ik(x-x')-κ_k|t-t'|]N_k/2κ_k
Where N_k has been defined in eq. (<ref>). Then
δ^2Γ_Q/δA̅_j(x,t)δP̅^km(y,t')=iτ/2θ(t-t')∫d^3k/(2π)^3e^ik(x-y) [ (k_mk_n)∫d^3k'/(2π)^3Δ_k'^lne^-[κ_k'+κ_(k-k')](t-t')(k-k')_lΔ^j_(k-k')kN_k'/2κ_k'. + (k_mk_n)∫d^3k'/(2π)^3Δ_k'^jne^-[κ_k'+κ_(k-k')](t-t')k'_lΔ^l_(k-k')kN_k'/2κ_k' + ∫d^3k'/(2π)^3Δ_k'^jne^-[κ_k'+κ_(k-k')](t-t')k'_lk'_mΔ^l_(k-k')n(k-k')_kN_k'/2κ_k' + .∫d^3k'/(2π)^3Δ_k'^lne^-[κ_k'+κ_(k-k')](t-t')k'_mΔ^j_(k-k')n(k-k')_l(k-k')_kN_k'/2κ_k']+(k↔ m)
Since the k' integral is dominated by the infrared band, we approximate k'≪ k δ^2Γ_Q/δA̅_j(x,t)δP̅^km(y,t')=iτ/2θ(t-t')∫d^3k/(2π)^3e^ik(x-y) [ (k_mk_n)∫d^3k'/(2π)^3Δ_k'^lne^-κ_k(t-t')k_lΔ^j_(k)kN_k'/2κ_k'. + (k_mk_n)∫d^3k'/(2π)^3Δ_k'^jne^-κ_k(t-t')k'_lΔ^l_(k)kN_k'/2κ_k' + ∫d^3k'/(2π)^3Δ_k'^jne^-κ_(k)(t-t')k'_lk'_mΔ^l_(k)nk_kN_k'/2κ_k' + .∫d^3k'/(2π)^3Δ_k'^lne^-κ_(k)(t-t')k'_mΔ^j_(k)nk_lk_kN_k'/2κ_k']+(k↔ m)
The k' integral now has spherical symmetry, so it simplifies to
δ^2Γ_Q/δA̅_j(x,t)δP̅^km(y,t')=iτ/2θ(t-t')∫d^3k/(2π)^3e^ik(x-y) [ (k_mk_n)∫d^3k'/(2π)^3Δ_k'^lne^κ_k(t-t')k_lΔ^j_(k)kN_k'/2κ_k'. + .∫d^3k'/(2π)^3Δ_k'^jne^-κ_(k)(t-t')k'_lk'_mΔ^l_(k)nk_kN_k'/2κ_k']+(k↔ m)
Using the symmetry again this becomes (we also filter out a term proportional to k^j)
δ^2Γ_Q/δA̅_j(x,t)δP̅^km(y,t')=iτ/2θ(t-t')∫d^3k/(2π)^3e^ik(x-y) [ 2/3k_mk^2Δ^j_(k)k∫d^3k'/(2π)^3 e^-κ_k(t-t')N_k'/2κ_k'. + .1/15Δ^j_mk_k∫d^3k'/(2π)^3e^-κ_(k)(t-t')N_k'k'^2/2κ_k']+(k↔ m)
The dominant integral is the first. We also approximate
e^-κ_k(t-t')≈δ(t-t')/κ_k
The final step is a rescaling k'=rk whereby
δ^2Γ_Q/δA̅_j(x,t)δP̅^km(y,t')=iτ/2δ(t-t')∫d^3k/(2π)^3e^ik(x-y)κ_k[ k_mΔ^j_(k)k+ k_kΔ^j_(k)m] ∫d^3r/(2π)^3 N_0 r^4/3/3[ r_c^2+r^2] ^5/2
where r_c=k_c/k. Computing the integral we find eq. (<ref>).
99
Chandra S. Chandrasekhar, The theory of turbulence, editado por E. Spiegel, Springer (2011).
MonYag71 A. S. Monin and A. M. Yaglom, Statistical Fluid Mechanics, MIT Press, 1971.
FRIS95 U. Frisch, Turbulence, the Legacy of A. N. Kolmogorov (Cambridge University Press, Cambridge, England, 1995).
Pop00 S. B. Pope, Turbulent Flows, Cambridge UP, 2000.
EF10 G. Eyink and U. Frisch, Robert H. Kraichnan, in P. Davidson et al. (editors)A Voyage through Turbulence (Cambridge U.P., Cambridge, 2011).
MCCO94 W.D. McComb, The Physics of Fluid Turbulence (Clarendon Press, Oxford, 1994).
MCCO04 W.D. McComb, Renormalization Methods (Clarendon Press, Oxford, 2004).
Tsin09 A. Tsinober, An informal conceptual introduction to turbulence (Springer, Dordretch, 2009).
ED18 G. L. Eyink and Th. D. Drivas, Cascades and Dissipative Anomalies in Relativistic Fluid Turbulence,
Phys. Rev. X 8, 011023 (2018).
EC1 Esteban Calzetta, Fully developed relativistic turbulence, Phys. Rev. D 103, 056018 (2021).
Zhang21 Zhang, Y. B., Bodenschatz, E., Xu, H. and Xi, H. D. Experimental
observation of the elastic range scaling in turbulent flow with
polymer additives. Sci. Adv. 7, eabd3525 (2021).
SZ22 Khalid M. Saqr and Iham F. Zidane, On non‑Kolmogorov turbulence
in blood flow and its possible role
in mechanobiological stimulation, Scientific Reports. 12, 13166 (2022).
MT21 Mitishita, R. S., MacKenzie, J. A., Elfring, G. J. and Frigaard, I. A.
Fully turbulent flows of viscoplastic fluids in a rectangular duct.
J. Non-Newtonian Fluid Mech. 293, 104570 (2021).
Ros23 Rosti, M. E., Perlekar, P. and Mitra, D. Large is different:
non-monotonic behaviour of elastic range scaling in polymeric
turbulence at large Reynolds and Deborah numbers. Sci. Adv. 9,
eadd3831 (2023).
ACR23 Abdelgawad MS, Cannon I, Rosti ME. Scaling and intermittency in turbulent flows of elastoviscoplastic fluids. Nature Physics. 2023 Jul;19(7):1059-63.
GioCha06 G. Gioia and P. Chakraborty, Phys. Rev. Lett. 96, 044502 (2006).
Turbulent friction in rough pipes and the energy spectrum of the phenomenological theory,
Gol06 N. Goldenfeld, Phys. Rev. Lett 96, 044503 (2006).
Roughness-induced critical phenomena in a turbulent flow,
GioBom02 G. Gioia and F. A. Bombardelli, Phys. Rev. Lett. 88, 014501 (2001).
Scaling and similarity in rough channel flows,
GiChBo06 G. Gioia, P. Chakraborty and F. A. Bombardelli, Phys. Fluids 18, 038107 (2006).
Rough-pipe flows and the existence of fully developed turbulence,
MehPou08 M. Mehrafarin and N. Pourtolami, Phys. Rev. E77, 055304 (R) (2008)
Imtermittency and rough-pipe turbulence,
GutGol08 N. Guttenberg and N. Goldenfeld, Physical Review E79, 065306 (2009).
The friction factor of two-dimensional rough-pipe turbulent flows,
Gioi09 G. Gioia, N. Guttenberg, N. Goldenfeld and P. Chakraborty, Nature Physics 6, 438 (2010).
The turbulent mean-velocity profile: it is all in the spectrum, arXiv: 0909.2714 (2009).
Cal09 E. Calzetta, Phys. Rev. E 79, 056311 (2009).
Friction factor for turbulent flow in rough pipes from Heisenberg's closure hypothesis,
Tran09
T. Tran, P. Chakraborty, N. Guttenberg, A. Prescott, H. Kellay, W. Goldburg, N. Goldenfeld
and G. Gioia,
Macroscopic effects of the spectral structure in turbulent flows, Nature Physics 6, 438-441 (2010).
Cal12 E. Calzetta, Extension of the momentum transfer model to time-dependent pipe turbulence, Phys. Rev. E 85, 026305 (2012).
Cal10 E. Calzetta, Drag reduction by polymer additives from turbulent spectra, Phys. Rev. E 82, 066310 (2010).
MSR73 P.C. Martin, E.D. Siggia and H.A. Rose, Statistical Dynamics of Classical Systems, Phys.
Rev. A8 423 (1973).
Eyink G.L. Eyink, Turbulence Noise, J. Stat. Phys. 83, 955 (1996).
Kamenev A. Kamenev, Field Theory of Non-Equilibrium Systems, Cambridge University Press,
Cambridge, U.K. (2011).
JZEC J. Zanella and E. Calzetta, Renormalization group and nonequilibrium action in stochastic
field theory, Phys. Rev. E 66 036134 (2002).
DeDGia06 C. de Dominicis and I. Giardina, Random fields and spin glasses: a field theory approach (Cambridge University Press, Cambridge, England, 2006).
Ram07 J. Rammer, Quantum field theory of nonequilibrium states (Cambridge University Press, Cambridge (England), 2007)
CalHu08 E. Calzetta and B-L. Hu, Nonequilibrium Quantum Field Theory (Cambridge University Press, Cambridge, England, 2008).
Kovtun P. Kovtun, Lectures on hydrodynamic
fluctuations in relativistic theories, J. Phys. A 45,
473001 (2012).
KMR P. Kovtun, G.D. Moore and P. Romatschke, Towards an effective action for relativistic
dissipative hydrodynamics, JHEP 07, 123 (2014).
HKR M. Harder, P. Kovtun and A. Ritz, On thermal
fluctuations and the generating functional
in relativistic hydrodynamics, JHEP 07, 025 (2015).
Haehl18 F. M. Haehl, R. Loganayagam and M. Rangamani, Effective action for relativistic hydrodynamics: fluctuations, dissipation, and entropy inflow, JHEP 10, 194 (2018).
MGKC N. Mirón Granese, A. Kandus and E.
Calzetta, Field Theory
Approaches to Relativistic
Hydrodynamics. Entropy 24,
1790 (2022).
Wyld61Wyld, H.W., Jr. Formulation of the Theory of Turbulence in an
Incompressible Fluid. Ann. Phys. 1961, 14, 143–165.
Lee65 L. L. Lee, A Formulation of the Theory of Isotropic Hydromagnetic
Turbulence in an Incompressible Fluid, Ann. Phys. 32, 292 (1965).
BSMCC13 A. Berera, M. Salewski and W. D. McComb, Eulerian field-theoretic closure formalisms for fluid turbulence, Phys. Rev. E 87, 013007 (2013).
GS72 R. J. Gordon and W. R. Schowalter, Anisotropic Fluid Theory: A Different Approach to the Dumbbell Theory of
Dilute Polymer Solutions, Trans. Soc. Rheol. 16, 79 (1972).
DE86 M. Doi and S. F. Edwards, The theory of polymer dynamics (Clarendon Press, Oxford, 1986).
Sar07 Saramito, P. A new constitutive equation for elastoviscoplastic
fluid flows. J. Non-Newtonian Fluid Mech. 145, 1–14 (2007).
MS15 Alexander Morozov and Saverio E. Spagnolie, Introduction to Complex Fluids, in S. Spagnolie (ed.), Complex Fluids in Biological Systems (Springer, New York, 2015).
Lum69 J. L. Lumley, Drag reduction by additives, Ann. Rev. Fluid Mech. 1, 367 (1969).
Virk75 P. S. Virk, Drag reduction fundamentals. AIChE J. 21, 625–656 (1975).
Gen90 P. G. de Gennes, Introduction to polymer dynamics (Cambridge UP, Cambridge (England), 1990)
Bir87 R. B. Bird, C. F. Curtiss, R. C. Armstrong and O. Hassager, Dynamics of Polymer Liquids, Vol 2, (John Wiley, New York, 1987)
RZ13 L. Rezzolla and O. Zanotti,
Relativistic Hydrodynamics
(Oxford University Press, Oxford, 2013).
RR19 P. Romatschke and U. Romatschke,
Relativistic fluid dynamics in and out equilibrium - Ten years of progress in theory and numerical simulations of nuclear collisions (Cambridge University Press, Cambridge (England), 2019).
LCEC L. Cantarutti and E. Calzetta, Dissipative-type theories for Bjorken and Gubser
flows, International Journal of Modern Physics A
Vol. 35, 2050074 (2020).
PR54 I. Proudman and W. H. Reid, “On the decay of a normally distributed and
homogeneous turbulent velocity field,” Philos. Trans. R. Soc. London, Ser.
A 247, 163 1954.
ChM07 Henry Chang, Robert D. Moser, An inertial range model for the three-point third-order velocity correlation, PHYSICS OF FLUIDS 19, 105111 2007
KZ18 A. V. Kopyev and K. P. Zybin, Exact result for mixed triple two-point
correlations of velocity and velocity gradients in isotropic turbulence, Journal of Turbulence, DOI:
10.1080/14685248.2018.1511055 (2018).
Krai64 R. Kraichnan, Phys. Fluids 7, 1723 (1964)
Kolmogorov's hypotheses and eulerian turbulence theory
HorLip79 H. Horner and R. Lipowsky, On the Theory of Turbulence: A non Eulerian Renormalized Expansion, Z. Physik B 33, 223 (1979).
BH05 A. Berera, and D. Hochberg, Galilean invariance and homogeneous anisotropic randomly stirred flows, PHYSICAL REVIEW E 72, 057301 2005.
BH07 A. Berera and D. Hochberg, Gauge symmetry and Slavnov-Taylor identities for randomly stirred fluids, Phys. Rev. Lett. 99, 254501 (2007).
BH09 A. Berera, and D. Hochberg, Gauge fixing, BRS invariance and Ward identities
for randomly stirred flows, Nuclear Physics B 814 [FS] (2009) 522–548.
Zin93 J. Zinn-Justin, Quantum Field Theory and Critical Phenomena (Clarendon Press, Oxforf, 1993).
Ras99 H. O. Rasmussen, A new proof of Kolmogorov's 4/5-law, Phys. Fluids 11, 3495 (1999).
Sre95 K. R. Sreenivasan, On the universality
of the Kolmogorov constant,
Phys. Fluids 7 (11), 2778 (1995).
|
http://arxiv.org/abs/2409.02175v1 | 20240903180002 | Primordial black holes from an aborted phase transition | [
"Wen-Yuan Ai",
"Lucien Heurtier",
"Tae Hyun Jung"
] | astro-ph.CO | [
"astro-ph.CO",
"gr-qc",
"hep-ph"
] |
KCL-PH-TH-2024-46
CTPU-PTC-24-28
[email protected]
[email protected]
Theoretical Particle Physics and Cosmology,
King’s College London, Strand, London WC2R 2LS, UK
[email protected]
Particle Theory and Cosmology Group, Center for Theoretical Physics of the Universe,
Institute for Basic Science (IBS), Daejeon, 34126, Korea
§ ABSTRACT
We propose a new mechanism of primordial black hole formation
via an aborted phase transition during the early matter-dominated stage of reheating after inflation.
In reheating, induced by the decay of a pressureless fluid dominating the Universe at the end of inflation, dubbed the reheaton, the temperature of the radiation bath typically increases, reaching a maximum temperature T_ max, and then decreases.
We consider a first-order phase transition induced by the increase of the temperature that is aborted as T_ max is higher than the critical temperature but not sufficiently high for the bubble nucleation rate to overcome the expansion of the Universe.
Although bubbles never fully occupy the space, some may be nucleated and expand until the temperature once again decreases to the critical temperature.
We argue that these bubbles shrink and disappear as the temperature drops further, leaving behind macroscopic spherical regions with positive density perturbations.
These perturbed regions accrete the surrounding matter (reheatons) and eventually collapse into primordial black holes whose mass continues to grow until the onset of radiation domination.
We estimate the abundance of these primordial black holes in terms of the bubble nucleation rate at T_ max, and demonstrate that the abundance can be significantly large from a phenomenological perspective.
Primordial black holes from an aborted phase transition
Tae Hyun Jung
Received ; accepted
==========================================================
Introduction—Primordial black holes (PBHs) are black holes that form in the early Universe in a non-stellar way (see Ref. <cit.> for a recent review).
Their possible existence throughout cosmic history has rich phenomenological implications <cit.>
and a broad mass range of PBHs are compelling candidates for the dark-matter component of the Universe <cit.> that might be on the verge of being probed using solar ephemerides precision measurements <cit.>.
Moreover, PBHs could also explain a variety of conundrums, including the recently observed microlensing signal candidates, the correlations in the cosmic infrared and X-ray backgrounds, and the origin of the supermassive black holes in galactic nuclei at high redshift <cit.>.
Moreover, it is possible that the LIGO/Virgo black hole mergers <cit.> have a primordial origin <cit.>.
So far, most of the PBH formation mechanisms
involved the gravitational collapse of large curvature perturbations generated during inflation. To generate such large curvature perturbations, the inflation model is required to have peculiar features, e.g., an inflection point or a plateau in a small field range of the potential <cit.>, a potential hill <cit.>, multiple phases of inflation or hybrid inflation <cit.>, a non-canonical kinetic term <cit.>, multifield inflation <cit.>, light spectator fields <cit.>, and other possibilities (e.g. <cit.>).
In addition, PBH formation has been considered in connection with preheating after inflation <cit.> although their formation in this context was recently questioned <cit.>.
Long after the idea was suggested in Refs. <cit.>, recent works reconsidered the possibility that PBHs may also be formed from a first-order phase transition (FOPT)
<cit.>. This idea was then further investigated in Refs. <cit.>. This possibility is particularly exciting, as FOPTs are naturally present in many particle physics models and have far-reaching phenomenological consequences, such as the emission of a stochastic gravitational wave background.
In this Letter, we propose a new PBH formation mechanism in which an FOPT occurs while the Universe's temperature increases during reheating after inflation. This FOPT is thus a heating phase transition <cit.> rather than a cooling phase transition that occurs as the temperature decreases in the early Universe.
The special ingredient of our scenario is an abortion of the FOPT
assuming that the maximal temperature reached in reheating is higher than the critical temperature but lower than the temperature that guarantees the phase transition to complete.
In the following, we introduce the specifics of the aborted phase transition, explain how PBH can form in this setup, and relate the PBH mass and abundance to the dynamics of the perturbative reheating and the phase transition sector considered.
Reheating sector—
Before going into the details, let us be clear in our setup.
When inflation ends, we consider the Universe to be filled with a pressureless fluid slowly decaying into particles that quickly get thermalized, producing a relativistic plasma.
We refer to this decaying matter component as the reheaton, χ.
As χ decays, the radiation sector's temperature first increases, reaching the maximal temperature T_ max, and decreases as the Universe expands.
The temperature evolution in terms of the scale factor a can be described by <cit.>
T(a) = c_1 T_ max[(c_2 a/a_ max)^-3/2-(c_2 a/a_ max)^-4]^1/4 ,
where a_ max is the scale factor at T=T_ max, c_1=2^6/5(5· 27^1/5)^-1/4≈ 1.30 and c_2=2^6/5 3^-2/5≈ 1.48.
Denoting by Γ_χ the decay width of the reheaton, matter domination lasts until the plasma temperature reaches the reheating temperature T_ RH∼√(Γ_χ M_ Pl), with M_ Pl=2.4× 10^18 GeV being the reduced Planck mass, below which radiation domination starts.
In general, there is no direct relation between T_ RH and T_ max, as the value of the latter depends on the time at which the reheating starts, so it is natural to assume that there is a large hierarchy between them.
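For reference, a short numerical sketch of this temperature history with the quoted c_1, c_2; it only illustrates that T(a) peaks at a=a_ max with T=T_ max and falls off on both sides.

import numpy as np

c1 = 2 ** 1.2 * (5 * 27 ** 0.2) ** -0.25    # ~ 1.30
c2 = 2 ** 1.2 * 3 ** -0.4                   # ~ 1.48

def T_over_Tmax(a_over_amax):
    x = c2 * a_over_amax
    return c1 * (x ** -1.5 - x ** -4.0) ** 0.25

a = np.linspace(0.75, 20.0, 500)
T = T_over_Tmax(a)
print(T.max(), a[np.argmax(T)])             # maximum ~1 reached at a/a_max ~ 1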
Aborted phase transition—
Now, let us consider a real scalar field ϕ which breaks a symmetry[
The symmetry in our scenario actually does not play any role, and it is conceivable that a phase transition does not involve any symmetry being restored or broken.
However, we will maintain this terminology of symmetry restoration/breaking throughout for the sake of intuitive discussion.
], spontaneously, by getting a nonzero vacuum expectation value.
Assuming that the scalar sector undergoes an FOPT along the temperature change, one can define three characteristic temperatures that play an important role:
the critical temperature, T_c at which two local minima are degenerate, the spinodal (binodal) temperature T_1 (T_2) above (below) which the potential barrier disappears (see Fig. <ref> for the schematic description of thermal effective potential V_T(ϕ) at each temperature).
During inflation, the temperature is zero, and ϕ is stabilized in the symmetry-breaking vacuum assuming that the inflation scale is not too large compared to the curvature scale of the potential.
While the thermal bath is heated,
the scalar potential V(ϕ) receives thermal corrections and there can be two types of phase transitions in general.
During the change of T=0→ T_ max>T_c, the symmetry-restoring vacuum becomes more stable compared to the symmetry-breaking vacuum, and the phase transition occurs.
This phase transition is called symmetry-restoring, or heating phase transition (see, e.g. Ref. <cit.>, for related discussions in various contexts). In previous studies, it is assumed that the heating phase transition is completed and that the Universe settles down in the symmetry-restoring phase. Then, as the temperature drops back, the symmetry-breaking vacuum becomes more stable again, and the symmetry-breaking (or cooling) phase transition starts at the bubble nucleation temperature.
On the contrary, in this Letter, we assume that T_ max is greater than the critical temperature T_c, but not large enough
to make the bubble nucleation rate catch up with the spacetime expansion.
Thus, the phase transition is aborted at T_ max by the temperature's turning around.
Bubbles can still be formed, but since they never collide with each other, they just expand during T>T_c and shrink back when T<T_c.
We argue that these bubbles eventually lead to PBH formation and that the abundance of such PBHs can be significant.
Fate of bubbles in the aborted phase transition
—
Initially, the bubble grows since the free energy density difference, Δ V_T
≡ V_T(ϕ_b)-V_T(ϕ_s), is positive, where ϕ_b and ϕ_s denote the symmetry-breaking and restoring extrema of the thermal effective potential, respectively.
Once the wall starts expanding, the perturbed plasma would backreact to the wall, creating a backreaction force _ back(v_w), which has a dependence on the wall velocity.
In general, a terminal velocity exists and should be reached after a short acceleration period, determined by Δ V_T=_ back(v_w) <cit.>.
As the temperature changes, Δ V_T also changes, so the wall velocity adiabatically follows the terminal velocity at each temperature; it reaches a maximal value at T_ max and decreases as T decreases.
At T=T_c, since the terminal velocity becomes v_w=0 by definition, this is the moment when the bubble stops expanding and has the largest comoving radius, which we denote as r_c,2.
The subscript c,2 will be used to indicate quantities estimated at the critical temperature reached for the second time throughout this Letter. The critical temperature was reached for the first time during the temperature-increasing process, T=0 → T_ max, for which we use the labeling of c,1.
We can estimate r_c,2 as r_c,2=∫_t_ nuc^t_c,2 dt' v_w(t')/a(t')∼v̅(η_c,2-η_ nuc) where η is the conformal time defined via dt= a dη, and the subscript nuc indicates quantities estimated at the time when this bubble is nucleated.
In a matter-dominated universe, we have H(a)∝ a^-3/2
and thus
r_c,2∼v̅ (η_c,2 -η_ nuc) = 2 v̅/a_c,2 H_c,2[
1 - (a_ nuc/a_c,2)^1/2].
This shows that the comoving radius at t_c,2 is of the order of the comoving Hubble radius r_H=1/(a H).
Afterwards, at T<T_c, the net pressure Δ V_T becomes negative, and the bubble starts shrinking. The bubble wall velocity stays following its terminal (negative) velocity, which induces fluid motion of the plasma in this region.
This shrinking occurs slightly below T_c, while the vacuum energy difference is comparable to the pressure of the radiation plasma. Therefore, the bubble wall does not run away, as shown in more detail in the supplementary material, Section <ref>.
In the absence of any runaway during both its expansion and contraction phases, the energy budget of the bubble wall's kinetic motion is negligible. To understand what happens in this region, we can thus focus on the balance between vacuum and thermal energy, where the latter should be understood to include the fluid's bulk motion.
During bubble expansion, i.e. when T>T_c, the thermal energy is first transferred into vacuum energy which redshifts slower than the radiation plasma with cosmic expansion.
Later on, the bubble's contraction at T<T_c converts vacuum energy back to thermal energy.
Therefore, the energy density of the region perturbed by the wall's motion is greater than the unperturbed region far away from the nucleation site.
Because the Universe is still matter-dominated as we assume T_ max≳ T_c ≫ T_ RH,
the presence of such an overdensity easily leads to PBH formation via the post-collapse accretion mechanism <cit.> (see also Refs. <cit.> for PBH formation during matter domination in a variety of different aspects).
After the bubble completely disappears,
the overdense region, with a macroscopic size as large as the comoving radius r_c,2 defined in Eq. (<ref>), creates a gravitational potential and triggers an accretion of reheaton into this region.
The accretion of the reheaton (which is pressureless in our setup) finally leads to the whole region collapsing into a black hole with an initial mass of order 10^-2 M_H <cit.> where M_H=4π M_ Pl^2/H is the Hubble mass for a given background expansion rate H.
As shown in Ref. <cit.>, after it forms, the black hole quickly increases in mass by absorbing the surrounding matter.
Once the PBH mass reaches about one Hubble mass, the rapid accretion is expected to be slowed down, and the mass simply follows the scaling of one Hubble mass M_ BH∼ M_H ∝ a^3/2.
This mass-growing process ends when radiation domination starts. Eventually, the final PBH mass is simply determined by the value of the Hubble mass at the time of the reheating, which we evaluate by considering that there is a matter-radiation equality at T_ RH, giving
M_PBH ∼3.5×10^-12 M_⊙ α( 10^5 GeV /T_RH )^ 2
( 100/g_*(T_RH) )^ 1/2 ,
where g_*(T_ RH) is the number of effective relativistic degrees of freedom present in the plasma at the reheating time, and α≲ 1 is an efficiency factor, which we take to be 𝒪(0.1) for simplicity.
As can be seen from Eq. (<ref>), M_ PBH is insensitive to the phase transition properties and solely determined by the value of T_ RH once they are formed. The distribution of PBHs formed from an aborted phase transition is thus expected to be monochromatic.
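A one-line sketch of this mass estimate (T_ RH in GeV, with α and g_* set to the reference values used above):

def pbh_mass_solar(T_RH_GeV, g_star=100.0, alpha=0.1):
    # Final PBH mass in solar masses, following the estimate above.
    return 3.5e-12 * alpha * (1.0e5 / T_RH_GeV) ** 2 * (100.0 / g_star) ** 0.5

print(pbh_mass_solar(1.0e5))   # ~ 3.5e-13 M_sun for T_RH = 10^5 GeV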
Note that the time scale of the PBH formation and mass-growing process is around the Hubble time scale <cit.> at the time when the density perturbation is generated. Therefore, as long as T_ max≳ T_c ≫ T_ RH, there is enough time for the PBH to form and grow.
The chronology of our PBH formation scenario is schematically summarized in Fig. <ref>, where one can also check our notations for important events.
PBH abundance—
The PBH relic abundance can be estimated by counting the expected number of symmetry-restoring bubble nucleations during the aborted phase transition.
It is thus sensitive to the bubble nucleation rate per unit volume, Γ(T)∼ T^4 e^-S_3/T, where S_3 is the minimal energy of the scalar configuration to make a thermal escape from the local minimum, which can be obtained by the three-dimensional Euclidean action of the O(3) bounce solution <cit.>.
Since the phase transition is aborted, Γ is maximized at the moment where T=T_ max, and most of the symmetry-restoring bubbles are nucleated around this time.
To be specific, let us consider a sufficiently large comoving total volume V.
The number of nucleated bubbles at time t_ nuc, corresponding to a_ nuc, is given by
Ṇ_ PBH (a_ nuc)= ạ_ nuc/a_ nuc H(a_ nuc)×V a_ nuc^3 Γ(T(a_ nuc)) .
Integrating it from t_c,1 to t and dividing the result by Va(t)^3 gives the integrated number density at t
n_ PBH(t)=(a_ max/a(t))^3 ∫_a_c,1^a(t)(a_ nuc/a_ max)^2 Γ(T(a_ nuc))ạ_ nuc/a_ max H(a_ nuc) .
From this, one can obtain the PBH dark matter fraction f_ PBH=n_ PBH(t_ today) M_ PBH/(ρ_c Ω_ DM).
Here, we proceed with a rough estimation using a model-independent approach with the following approximations.
First of all, we take the Taylor expansion of S_3/T around T_ max in the log scale;
S_3/T ≃. S_3/T|_T=T_max
- β̂_max ln(T/T_max) ,
where we define the rapidity parameter β̂_ max as
β̂_max ≡ - d(S_3/T)/dln T|_T=T_max .
Then we can approximate Γ(T) as Γ(T) ≃Γ(T_ max) (T/T_ max)^β̂_ max+4.
In the Supplementary Material, we evaluate S_3/T and β̂_ max in the case of the so-called Abelian Higgs model and obtain β̂_ max around 10^4–10^6. We use this value as a benchmark in what follows.
In addition, using Eq. (<ref>) to evaluate T(a), we obtain in the limit a≈ a_ max
T(a)
≃
T_ maxexp[-3/4( a/a_ max-1 )^2 ] .
Although (<ref>) and (<ref>) are only valid around T_ max, we checked numerically that they lead to a good approximation for f_ PBH as long as β̂_ max>50 because the largest contribution to f_ PBH comes from Γ(T_ max).
Then, Eq. (<ref>) can be approximated as
n_PBH(t_RH)
≃√( 4π/3β̂_max)
(a_max/a_RH)^3
Γ(T_max)/H_max,
for β̂_ max>50.
Assuming that the PBH yield is unchanged after reheating temperature, we obtain f_ PBH as
f_PBH = M_PBH n_PBH/s/ρ_DM/s
∼1 α
( T_RH/10^5 GeV )
( ( Γ(T_max)/H_max^4 )/10^-16 )
( 10^5/β̂_max )^ 1/2
( a_RH/a_max/10^2)^ 3/2 ,
where the observed dark matter relic abundance is taken to be ρ_ DM/s ≃ 0.4 eV.
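A sketch of this estimate, with the reference values used in the text treated as defaults; the inputs are free parameters of the scenario, not predictions.

def f_pbh(T_RH_GeV, rate_over_H4, alpha=0.1, beta_max=1.0e5, a_ratio=1.0e2):
    # Scaling formula for the PBH dark-matter fraction quoted above.
    return (alpha * (T_RH_GeV / 1.0e5) * (rate_over_H4 / 1.0e-16)
            * (1.0e5 / beta_max) ** 0.5 * (a_ratio / 1.0e2) ** 1.5)

print(f_pbh(1.0e5, 1.0e-16))   # ~ 0.1 for these reference values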
In Fig. <ref>, we depict Γ(T_ max)/H_ max^4 required to give a sizable f_ PBH for different M_ PBH (or T_ RH) for α=0.1, a_ RH/a_ max=10^2 and β̂_ max=10^5, taking g_*(T) to be the Standard Model value <cit.>.
We also show relevant constraints coming from the null observation of PBH evaporation signal (cyan), lensing by PBHs (purple), gravitational waves (blue), and accretion (green), taken from Ref. <cit.>.
The dotted line on the right edge represents the lower bound of T_ RH≳ 5 MeV (and thus an upper bound of M_ PBH) coming from the big bang nucleosynthesis <cit.> while the one on the left edge depicts the critical PBH mass M_⋆≃ 5× 10^14 gram below which PBHs evaporate completely before the present <cit.>.
For masses smaller than M_⋆, we also indicate constraints from BBN (pink) and CMB anisotropies (orange) on evaporating PBHs <cit.>.
As one can see from this figure, a broad range of values for Γ(T_ max)/H_ max^4 lead to an abundance of PBHs that is of phenomenological interest, including PBHs that could constitute the whole dark matter of our Universe.
Summary and Discussion—
In this Letter, we have proposed a new PBH formation mechanism in aborted phase transition during reheating.
A symmetry-restoring bubble is nucleated and expands during T>T_c, and it shrinks back as the temperature drops below T_c.
This generates a macroscopic size of over-density perturbation with a spherical symmetry, which eventually collapses into a PBH via the post-collapse accretion mechanism during matter domination.
The mass of PBHs formed in this process grows quickly by absorbing the surrounding matter, and its final mass is determined by T_ RH as given in Eq. (<ref>).
We estimate the PBH abundance (<ref>) in terms of the bubble nucleation rate around T_ max parametrized by the effective rapidity parameter β̂_ max at T_ max, and show that it can be sizable in the aspect of phenomenology.
Our findings rely on the post-collapse accretion mechanism <cit.>, in which a small overdensity accretes the matter present in the Hubble patch during a matter-dominated era, leading to the formation of a black hole.
However, the formation of the black hole and its mass growth may be partially impeded by the velocity dispersion that can be either from the inhomogeneity of the surrounding matter or the non-sphericity of the initial density fluctuation as discussed in Refs. <cit.>.
In our case, the spherical symmetry is guaranteed because the bubble nucleation rate is maximized at an O(3) symmetric profile along the transition surface in the field configuration space (see, e.g. Ref. <cit.> and references therein).
We expect that even if small non-sphericities exist during nucleation, they get smoothed out due to the interaction of the bubble with the background quasi-homogeneous plasma during its expansion and contraction dynamics.
It is also conceivable that a velocity dispersion of matter (the reheaton in our case) may arise from the small inhomogeneities generated during inflation.
We leave a detailed investigation of all these effects for future work.
We thank Shao-Jiang Wang for the helpful discussions. The work of WYA was supported by EPSRC [Grant No. EP/V002821/1]. The work of LH is supported by the STFC (grant No. ST/X000753/1).
The work of THJ was supported by IBS under the project code, IBS-R018-D1.
Supplemental Material
§ AN EXAMPLE MODEL TO EVALUATE THE PHASE TRANSITION RAPIDITY PARAMETER
In this section, we consider a benchmark model and obtain β̂_ max.
The model we consider is a simple Abelian Higgs model where a complex scalar field Φ is charged under a U(1) gauge interaction with a charge unity.
We further assume that the theory is classically scale invariant, so the tree-level potential is given by
V(Φ) = λ|Φ|^4,
where λ is the self-quartic coupling.
The spontaneous symmetry breaking is radiatively generated as originally shown in Ref. <cit.>.
To include the loop effects conveniently, we take the RG scale μ=μ_* which is defined by λ(μ_*)=0.
The existence of such μ_* is guaranteed by the positive beta function of λ coming from the gauge boson loop.
Denoting ϕ for the radial degree of Φ, the one-loop effective potential can be written as
V(ϕ) = δλ/4ϕ^4 + 1/4β_λ ϕ^4 logϕ/μ_*,
where δλ = 3 g^4/16π^2(log g^2 - 5/6) and β_λ=6g^4/16π^2 with g being the gauge coupling.
This potential is minimized at v_ϕ = e^1/6μ_*/g and the potential energy difference is given by Δ V_0 = 3 e^2/3/128π^2μ_*^4.
We include the thermal correction coming from the gauge boson loop,
ΔV_T≠0 = 3T^4/2π^2
J_B(m_V^2/T^2),
with the field-dependent gauge boson mass m_V=gϕ and the J_B function given by
J_B(y^2) = ∫_0^∞ dx x^2 log[ 1- e^-√(x^2+y^2) ].
Note that the scalar-loop contribution vanishes due to our choice of RG scale, λ(μ_*)=0.
In this specific setup, we find two important model properties.
First, T_c is independent of the size of gauge coupling g since the zero-temperature potential energy difference is given by Δ V_0 = 3 e^2/3/128π^2μ_*^4 independently of g.
Second, the binodal temperature T_1 (where the potential barrier disappears) is also g-independent.
This is because the effective field range of the thermal correction ϕ_ eff, T has the same coupling dependence as v_ϕ∼μ_*/g; ϕ_ eff, T can be estimated by m_V(ϕ_ eff, T) ∼ T, so ϕ_ eff, T∼ T/g.
We numerically find that T_c≃ 0.37 μ_*
and T_1 ≃ 0.44 μ_*.
For our PBH formation scenario, T_ max must be between T_1 and T_c, which is not impossible although it requires tuning (note that there are already multiple coincidences of time scales in the standard cosmology).
These properties can be changed by including additional fields.
For instance, if we include a Dirac fermion ψ that couples to ϕ via a Yukawa interaction, T_c decreases while T_1 does not change much.
This can increase the ratio of T_1 and T_c, reducing the required level of tuning T_ max.
For a temperature between T_c and T_1, we obtain the bounce action by using the CosmoTransitions <cit.>.
The result of S_3/T is given in the left panel of Fig. <ref>.
Then, we obtain the rapidity parameter as shown in the right panel of Fig. <ref>, which shows that 10^4 ≲β̂_ max≲ 10^6.
§ DYNAMICS OF A SYMMETRY-RESTORING BUBBLE
In this section, we show that the bubble wall typically reaches a terminal velocity, i.e., has a non-runaway behavior, in both the expansion and contraction stages.
Bubble wall dynamics is a highly complicated subject, requiring one to solve the Boltzmann equations for the particle distribution functions (which are integro-differential equations), the background scalar equation of motion, and the fluid equations for the hydrodynamics <cit.>. To determine whether or not a bubble wall runs away, i.e. accelerates all the way until colliding with another bubble, friction in the γ_w→∞ limit is usually compared to the vacuum energy difference <cit.> (although this may not always be valid <cit.>).
Below, we also do a similar analysis, using the simple Bödeker-Moore criterion <cit.>.
§.§ Bubble expansion (T>T_c)
Let us first consider bubble expansion.
Note that, for T>T_c, the vacuum energy inside the bubble is greater than outside, i.e. Δ V_0 <0, so the vacuum energy always gives a negative pressure that tries to contract the bubble.
On the other hand, the thermal pressure difference Δ V_T≠ 0 is positive, and this is the driving force of the bubble expansion.
When a bubble is formed at T>T_c, the bubble wall gets accelerated since the net pressure is positive (by the definition of T_c).
When the bubble wall velocity is nonzero, the thermal driving force is reduced.
This can be seen from the fact that, in the example of 1-to-1 transmission processes, the momentum transfer in the wall-rest frame decreases as the fluid velocity increases; Δ p_z = √(p_z^2 + Δ m^2) - p_z ∼Δ m^2/2p_z where z is the direction of the bubble wall propagation, p_z is the momentum of a particle coming toward the bubble wall from outside, and Δ m^2>0 is the mass-squared difference.
Therefore, as velocity increases, the thermal driving force decreases until it reaches the equilibrium with the vacuum energy pressure.
As pointed out in Ref. <cit.>, the thermal driving force has a nonzero asymptotic value in the v_w →∞ limit, which we also call the Bödeker-Moore thermal force
P_BM = ∑_i C_i g_i c_i Δ m_i^2 T^2/24 ,
where c_i=1 (1/2) for bosons (fermions). Here g_i is the number of internal degrees of freedom of species i that couple with the scalar ϕ, and Δ m_i^2 is the difference of the squared-mass in broken and symmetric phases. C_i is approximately given by
C_i T^2/24 ≈ T^2/24 if m_i^ out≪ T, and C_i T^2/24 ≈ [1/(2 m_i^ out)] (m_i^ out T/2π)^3/2 e^-m_i^ out/T if m_i^ out≫ T,
with m_i^ out being the mass outside of the wall, i.e. in the broken phase.
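A rough numerical sketch of this pressure is given below; the particle content (a single gauge boson with g_i = 3, c_i = 1) and the numerical values are illustrative assumptions rather than quantities taken from the model above, and the piecewise prefactor is only a crude interpolation between the two asymptotic limits.

```python
# Rough sketch of P_BM with the C_i prefactor above; particle content and
# numbers are illustrative assumptions.
import numpy as np

def C_T2_over_24(m_out, T):
    """C_i T^2/24: light limit for m_out << T, Boltzmann-suppressed form for m_out >> T."""
    if m_out < T:
        return T**2 / 24.0
    return (1.0 / (2.0 * m_out)) * (m_out * T / (2.0 * np.pi))**1.5 * np.exp(-m_out / T)

def P_BM(species, T):
    """P_BM = sum_i g_i c_i dm2_i (C_i T^2/24); species = [(g_i, c_i, dm2_i, m_out_i), ...]."""
    return sum(g * c * dm2 * C_T2_over_24(m_out, T) for g, c, dm2, m_out in species)

T = 0.4                      # in units of mu_* (illustrative)
m_out = 3.0 * T              # heavy in the phase outside the wall
print(P_BM([(3, 1.0, m_out**2, m_out)], T))
```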
If m_i^ out is larger than the temperature, which is the case for our model considered in the last section, P_BM would be suppressed because the number density of those heavy particles is Boltzmann-suppressed.
This means that the asymptotic value of the driving force is small, ensuring the existence of equilibrium with Δ V_0 at some velocity.
We note that P_BM is the force caused only by the 1→ 1 processes. There can be additional forces caused by particle-production processes <cit.>, i.e., when a particle splits into two or more particles as it transits across the wall. These next-to-leading-order forces may behave as true friction as in a cooling phase transition <cit.>. We also note that hydrodynamic effects can induce a barrier in the frictional pressure at the Jouguet velocity <cit.>. All these factors would just make our conclusion more solid.
§.§ Bubble contraction (T<T_c)
For T<T_c, the dynamics of the bubble wall can be understood in the usual way although our bubble is still symmetry-restoring and contracts. Actually, the contraction process under consideration can be likened to the contraction of a false-vacuum bubble (sometimes referred to as a false-vacuum island) in a cooling and symmetry-breaking FOPT.
During contraction, the vacuum energy difference accelerates the bubble wall velocity while the thermal effect acts as friction.
In this case, when the wall velocity increases, the friction increases and has an asymptotic value of P_ BM <cit.>.
Thus, if |Δ V_0| < P_ BM, there exists a terminal velocity where the friction and Δ V_0 make an equilibrium.
Before proceeding, note that the temperature range in our process is all around T_c.
As shown in the previous section, T_ max/T_c < T_1/T_c cannot be large from the model-building perspective, and therefore the temperature at which the bubble shrinks and disappears, which we denote T_ zero, should also be close to T_c.
Now let us again consider the large-γ_w limit. In bubble contraction, the driving force is
P_driving = |Δ V_0| ,
while the Bödeker-Moore thermal force <cit.>
P_friction = P_BM = ∑_i g_i c_i Δ m_i^2 T^2/24 ∼ g_⋆,ϕ Δ m^2 T^2/24 ,
acts as a true friction, where g_⋆,ϕ is the effective number of degrees of freedom that couple to ϕ.
On the other hand, we have the relation Δ V_0 ≃ g_⋆,ϕ (π^2/90) T_c^4, which is smaller than P_BM for Δ m^2 > T^2.
Therefore, we conclude that the bubble wall still does not run away even without taking into account friction from 1-to-2 or 1-to-many processes and hydrodynamic obstruction <cit.>.
|
http://arxiv.org/abs/2409.02380v1 | 20240904020104 | Nodeless superconductivity and topological nodal states in molybdenum carbide | [
"Tian Shang",
"Yuting Wang",
"Bochen Yu",
"Keqi Xia",
"Darek J. Gawryluk",
"Yang Xu",
"Qingfeng Zhan",
"Jianzhou Zhao",
"Toni Shiroka"
] | cond-mat.supr-con | [
"cond-mat.supr-con",
"cond-mat.str-el"
] |
Preprint: September 9, 2024,
These authors contributed equally[Corresponding authors:
] [email protected]
Key Laboratory of Polar Materials and Devices (MOE), School of Physics and Electronic Science, East China Normal University, Shanghai 200241, China
These authors contributed equally
Co-Innovation Center for New Energetic Materials, Southwest University of Science and Technology, Mianyang, 621010, People's Republic of China
School of Science, Southwest University of Science and Technology, Mianyang 621010, P. R. China
Key Laboratory of Polar Materials and Devices (MOE), School of Physics and Electronic Science, East China Normal University, Shanghai 200241, China
Key Laboratory of Polar Materials and Devices (MOE), School of Physics and Electronic Science, East China Normal University, Shanghai 200241, China
Center for Neutron and Muon Sciences, Paul Scherrer Institut, CH-5232 Villigen PSI, Switzerland
Key Laboratory of Polar Materials and Devices (MOE), School of Physics and Electronic Science, East China Normal University, Shanghai 200241, China
Key Laboratory of Polar Materials and Devices (MOE), School of Physics and Electronic Science, East China Normal University, Shanghai 200241, China
[Corresponding authors:
][email protected]
Co-Innovation Center for New Energetic Materials, Southwest University of Science and Technology, Mianyang, 621010, People's Republic of China
Center for Neutron and Muon Sciences, Paul Scherrer Institut, CH-5232 Villigen PSI, Switzerland
Laboratorium für Festkörperphysik, ETH Zürich, CH-8093 Zürich, Switzerland
§ ABSTRACT
The orthorhombic molybdenum carbide superconductor with T_c = 3.2 K was investigated by muon-spin rotation and relaxation (µSR) measurements and by
first-principle calculations. The low-temperature superfluid density, determined by transverse-field µSR, suggests a fully-gapped superconducting state in Mo_2C, with a zero-temperature gap Δ_0 = 0.44 meV and a magnetic penetration depth λ_0 = 291 nm. The time-reversal symmetry is preserved in the superconducting state, as confirmed by the absence of an additional muon-spin relaxation in the zero-field µSR spectra. Band-structure calculations indicate that the density of states at the Fermi level is dominated by the
Mo 4d-orbitals, which are marginally
hybridized with the C 2p-orbitals over a wide energy range.
The symmetry analysis confirms that, in the absence of spin-orbit coupling (SOC), Mo_2C hosts twofold-degenerate nodal surfaces and fourfold-degenerate nodal lines.
When considering SOC, the fourfold-degenerate nodal lines cross the Fermi level and contribute to the electronic properties.
Our results suggest that, similarly to other phases of carbides, also the orthorhombic
transition-metal carbides host topological nodal states and may be potential candidates for future studies of topological superconductivity.
Nodeless superconductivity and topological nodal states in molybdenum carbide
Toni Shiroka
=============================================================================
§ INTRODUCTION
The possibilities offered by topological superconductors, ranging from hosting Majorana fermion quasiparticles to potential applications in topological quantum computing <cit.>, have stimulated researchers to explore different routes to realize them.
The most obvious approach consists in introducing extra carriers into a topological insulator to achieve superconductivity (SC). This route has been frequently attempted
in the copper- or strontium-intercalated Bi_2Se_3 topological insulator <cit.>. Another approach utilizes the proximity effect between a conventional s-wave superconductor and a topological insulator or semiconductor <cit.>.
The surface states of a topological insulator can lead to a two-dimensional superconducting state with a p+ip pairing at the interfaces, known to support Majorana bound states at the vortices <cit.>. For instance, evidence of topological SC has been reported in
NbSe_2/Bi_2(Se,Te)_3 heterostructures <cit.>, where NbSe_2 represents a typical fully-gapped superconductor.
Despite continued efforts to identify topological SC following the aforementioned approaches, the intricacy of heterostructure fabrication, the rarity of suitable topological insulators, and the inhomogeneity or disorder effects induced by carrier doping have considerably constrained the investigation and potential applications of topological SC.
A more attractive way to achieve topological SC is to combine
superconductivity and a nontrivial electronic band structure in the same material.
Clearly, it is of fundamental interest to be able to identify such new superconductors with nontrivial band topology, but
with a simple composition.
For example, topologically protected surface states have been found in superconducting CsV_3Sb_5 <cit.>, β-PdBi_2 <cit.>, and PbTaSe_2 <cit.>,
all of which are good platforms for studying topological SC.
In this respect, the binary transition-metal carbides (TMCs)
represent another promising family of materials.
TMCs exhibit essentially four different solid phases, which include the α (Fm3m No. 225)-,
β (Pbcn, No. 60)-, γ (P6m2, No. 187)-, and
η (P6_3/mmc, No. 194)-phase <cit.>. The γ-phase is noncentrosymmetric, while the other three are centrosymmetric. Due to the lack of space inversion, the γ-phase TMCs exhibit exotic topological features. The unconventional three-component fermions with surface Fermi arcs were experimentally observed in γ-phase WC <cit.>.
By applying external pressure, the topological semimetal MoP (isostructural to WC) becomes a superconductor, whose T_c rises up to 4 K (above 90 GPa) <cit.>, thus representing a candidate topological superconductor. Unfortunately, no SC has been observed in WC yet, but the γ-phase MoC (similar to WC and also not superconducting) was predicted to be a topological nodal-line semimetal with drumhead surface states <cit.>. The α-phase TMCs show a relatively high T_c value and some of them
were also predicted to exhibit nontrivial band topologies <cit.>.
For example, NbC and TaC are fully-gapped superconductors with T_c = 11.5 and 10.3 K, respectively <cit.>.
At the same time, theoretical calculations suggest that
α-phase TMCs are nodal line semimetals in the absence of spin-orbit coupling (SOC) <cit.>.
As for the molybdenum carbides, although their SC was already reported in the 1970s <cit.>, their physical properties have been overlooked due to difficulties in synthesizing clean samples. Only recently, the α-phase MoC_x (x < 1) and η-phase Mo_3C_2 (with T_c = 14.3 and 8.5 K) could be synthesized under high-temperature and high-pressure conditions (1700 ^∘C, 6–17 GPa) <cit.> and their superconducting properties studied via different techniques.
To date, the electronic properties of the other phases
of molybdenum carbides (e.g., the β-phase) remain mostly unexplored.
In this paper, we report on the superconducting properties of the
β-phase Mo_2C, investigated via magnetization- and muon-spin relaxation and rotation (µSR)
measurements. In addition, we also present numerical density-functional-theory (DFT) band-structure calculations.
We find that Mo_2C exhibits a fully-gapped superconducting state,
while its electronic band structure suggests that
it hosts twofold-degenerate nodal surfaces and fourfold-degenerate nodal lines.
Therefore, the β-phase
TMCs (of which Mo_2C is a typical example)
may be potential candidates for future studies of topological SC,
similar to the other TMC phases.
§ EXPERIMENTAL AND NUMERICAL METHODS
First, we tried to synthesize the β-phase Mo_2C by arc melting
Mo slugs (99.95%, Alfa Aesar) and C rods (99.999+%, ChemPUR).
Similarly to previous studies <cit.>, the obtained
polycrystalline samples showed a mixture of different phases, both before and after the annealing. Akin to the α-phase, the β-phase Mo_2C can be synthesized also under high-temperature and high-pressure conditions (1500–2300 K, 5 GPa) <cit.>.
However, the resulting Mo_2C samples have a rather low superconducting volume fraction.
Because of these issues, all our measurements were performed
on high-purity Mo_2C powders (99.5%) produced by
Alfa Aesar.
For the µSR investigation,
the powders were pressed into pellets, while for the magnetization
measurements, performed on a 7-T Quantum Design magnetic property
measurement system, loose powders were used.
Room-temperature x-ray powder diffraction (XRD)
was performed on a Bruker D8 diffractometer using Cu Kα radiation.
The µSR measurements were carried out at the multipurpose surface-muon spectrometer (Dolly) at the πE1 beamline of the Swiss muon source at Paul Scherrer Institut (PSI), Villigen, Switzerland.
The Mo_2C pellets were mounted on 25-µm thick copper foil to
cover an area 6–8 mm in diameter. The µSR spectra comprised
both transverse-field (TF) and zero-field (ZF)
measurements, performed upon heating the sample.
The µSR spectra were analyzed by means of the software package <cit.>.
The phonon spectrum and the electronic band structure of Mo_2C
were calculated via DFT, within the generalized gradient approximation (GGA) of Perdew-Burke-Ernzerhof (PBE) realization <cit.>, as implemented in the Vienna ab initio Simulation Package (VASP) <cit.>. The projector augmented wave (PAW) pseudopotentials were adopted for the calculation <cit.>. Electrons belonging to the outer atomic configuration were treated as valence electrons, here
corresponding to 6 electrons in Mo (4d^55s^1) and 4 electrons in C (2s^22p^2). The kinetic energy cutoff was fixed to 400 eV.
For the three different crystal structures of Mo_2C, the atomic
positions and the lattice constants were fully relaxed for the
calculations of the phonon dispersion spectrum.
The force convergence criterion was set to 1 meV.
For the structure optimization calculations, Monkhorst-Pack grids
of 16 × 16 × 10, 14 × 11 × 13, and
19 × 19 × 21 k-points were used for the space groups
P6_3/mmc, Pbcn, and P3̅1m, respectively.
To obtain the force constants and phonon spectra, we used the
density functional perturbation theory (DFPT)
in combination with the Phonopy package <cit.>.
A supercell of 2× 2× 2 was adopted for the calculation of force constants.
To calculate the phonon spectrum, the Brillouin zone integration
was performed on a Γ-centered mesh of 10 × 10 × 7,
7 × 5 × 6, and 6 × 6 × 7 k-points for
the space groups P6_3/mmc, Pbcn, and P3̅1m, respectively.
In the P6_3/mmc case, considering that only half of the 2a sites are occupied by C atoms, we simplified the structure such that the C atoms fully occupy only the corner sites of the unit cell.
The spin–orbit coupling (SOC) was fully considered in our calculation.
After optimizing the parameters, the electronic- and phononic band structures, as well as the density of states (DOS)
were calculated.
§ RESULTS AND DISCUSSION
§.§ Crystal structure
The phase purity and the crystal structure of Mo_2C powders were
checked via XRD measurements at room temperature (see Fig. <ref>).
Unlike the arc-melted Mo_2C <cit.>, the purchased Mo_2C
powders show a clean phase. Several phases of molybdenum carbides
have been reported, which exhibit cubic, orthorhombic, hexagonal,
and trigonal structures <cit.>.
In our case, the XRD pattern of Mo_2C, was analyzed by means of
the FullProf Rietveld-analysis suite <cit.> to find that
only the latter three structures, with space groups
Pbcn (No. 60, orthorhombic), P6_3/mmc (No. 194, hexagonal), and
P3̅1m (No. 162, trigonal), reproduce the data reasonably well.
In the insets we depict the corresponding crystal structures,
known as β-, η-, and ζ-phases, respectively.
Among these, the β-phase exhibits the best agreement with
the measured XRD pattern, here reflected in the smallest
goodness-of-fit factor (see Table <ref>).
Moreover, both the η- and ζ-phases fail to reproduce some
of the low-intensity reflections. For instance, as illustrated
in the inset of Fig. <ref>(a), while neither the η- nor
the ζ-phases admit a reflection at 2θ≈ 30^∘,
the β-phase captures this reflection quite well.
In conclusion, the Rietveld refinements suggest that the investigated Mo_2C powders
adopt an orthorhombic structure with space group Pbcn, as
further confirmed by the calculated phonon-dispersion spectrum (see below). Furthermore, no impurity phases could be detected, indicating a good sample quality.
The refined crystal-structure information and atomic coordinates for
all the three different phases are listed in
Tables <ref> and <ref>.
§.§ Magnetization measurements
We first characterized the SC of Mo_2C powders by magnetic susceptibility, carried out in
a 5-mT field, using both field-cooled (FC) and zero-field-cooled (ZFC) protocols. As indicated by the arrow in Fig. <ref>(a), a clear diamagnetic signal appears
below the superconducting transition at T_c = 3.2 K. The reduced
T_c value compared to the previously reported T_c ∼ 6 K
(whose SC fraction was less than 1%)
is most likely due to the varying C-content <cit.>.
Such a variable T_c against C-content has been previously reported
in the α- and η-phase of molybdenum carbides <cit.>,
where the light C atom is expected to modify significantly the electron-phonon
coupling and the phonon frequencies and, ultimately, also T_c.
A large diamagnetic response (i.e., χ_V∼ -0.8 at 2 K)
indicates a bulk SC in Mo_2C, as further confirmed by our TF-µSR measurements.
The field-dependent magnetization curves M(H), collected at a few temperatures below T_c, are plotted in the inset of Fig. <ref>(b). The estimated lower
critical fields H_c1 as a function of temperature are summarized in Fig. <ref>(b). This yields a lower critical field μ_0H_c1 = 6.4(2) mT for Mo_2C at zero temperature (see solid line).
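For readers wishing to reproduce such an extrapolation, a minimal fitting sketch is shown below. It assumes the common empirical form H_c1(T) = H_c1(0)[1 - (T/T_c)^2]; the text does not specify which model underlies the solid line, and the data points in the sketch are placeholders, not the measured values.

```python
# Minimal sketch of an H_c1(T) extrapolation, assuming the empirical parabolic law;
# the data points below are made up for illustration only.
import numpy as np
from scipy.optimize import curve_fit

T_data = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])     # K  (illustrative)
Hc1_data = np.array([6.2, 5.8, 5.1, 4.0, 2.6, 0.8])   # mT (illustrative)

model = lambda T, Hc1_0, Tc: Hc1_0 * (1.0 - (T / Tc)**2)
popt, pcov = curve_fit(model, T_data, Hc1_data, p0=[6.4, 3.2])
print(popt)   # [H_c1(0) in mT, T_c in K]
```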
§.§ Transverse-field MuSR
To study the gap symmetry and superconducting pairing of Mo_2C,
we performed systematic TF-µSR measurements in an applied
field of 30 mT (i.e., much higher than H_c1) at various temperatures.
In a TF-µSR measurement, the magnetic field is applied perpendicular to the muon-spin direction, leading to the precession of the muon spin.
By performing TF-µSR, one can quantify the additional field-distribution broadening due to the flux-line lattice (FLL) and, thus, determine the superfluid density in type-II superconductors.
Figure <ref>(a) plots two representative superconducting- and normal-state TF-µSR spectra for Mo_2C.
The enhanced muon-spin relaxation in the superconducting state is clearly visible
and it is due to the formation of a FLL during the field-cooling process, which generates an inhomogeneous field distribution <cit.>.
The broadening of field distribution in the superconducting state is clearly reflected in the fast Fourier transform (FFT)
of the TF-µSR spectra [see Figs. <ref>(b)-(c)].
To describe the field distribution, the TF-µSR spectra can be modelled using <cit.>:
A_TF(t) = ∑_i=1^n A_i cos(γ_μ B_i t + ϕ) e^- σ_i^2 t^2/2 +
A_bgcos(γ_μ B_bg t + ϕ).
Here A_i, A_bg and B_i, B_bg
are the initial asymmetries and local fields sensed by implanted muons in the
sample and sample holder,
γ_μ/2π = 135.53 MHz/T
is the muon gyromagnetic ratio, ϕ is a shared initial phase, and σ_i
is the Gaussian relaxation rate of the ith component.
Here, we find that Eq. (<ref>) with n = 2 [solid line in Fig. <ref>(b)] shows a better agreement with the experimental data than with n = 1 [dashed line in Fig. <ref>(b)].
In the normal state, the derived muon-spin relaxation rates σ_i(T) are small and independent of temperature while, below T_c, they start to increase due to the onset of the FLL
and the increased superfluid density (see inset in Fig. <ref>).
The effective Gaussian relaxation rate σ_eff can be calculated from σ_eff^2/γ_μ^2 = ∑_i=1^2 A_i [σ_i^2/γ_μ^2 - (B_i - ⟨ B ⟩)^2]/A_tot <cit.>, where ⟨ B ⟩ = (A_1 B_1 + A_2 B_2)/A_tot and A_tot = A_1 + A_2. By considering the constant nuclear
relaxation rate σ_n in the narrow temperature
range (∼0.3–5 K) investigated here, confirmed also
by ZF-µSR measurements (see Fig. <ref>), the
superconducting Gaussian relaxation rate can be extracted from
σ_sc = √(σ_eff^2 - σ^2_n).
The effective magnetic penetration depth λ_eff can
then be calculated using σ_sc^2(T)/γ^2_μ = 0.00371Φ_0^2/λ_eff^4(T) <cit.>.
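The conversion from σ_sc to λ_eff can be written compactly as in the sketch below; the example relaxation rate is an assumed illustrative number rather than a fitted value.

```python
# Sketch of the sigma_sc -> lambda_eff conversion quoted above.
import numpy as np

gamma_mu = 2.0 * np.pi * 135.53e6   # muon gyromagnetic ratio, rad s^-1 T^-1
Phi0 = 2.067833848e-15              # magnetic flux quantum, Wb

def lambda_eff(sigma_sc):
    """sigma_sc in s^-1; returns lambda_eff in metres, from
    sigma_sc^2/gamma_mu^2 = 0.00371 Phi0^2 / lambda_eff^4."""
    return (0.00371 * Phi0**2 * gamma_mu**2 / sigma_sc**2) ** 0.25

print(lambda_eff(1.27e6) * 1e9)   # ~290 nm for sigma_sc ~ 1.27 us^-1 (assumed value)
```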
Figure <ref> summarizes the temperature-dependent inverse square of magnetic penetration depth λ_eff^-2(T), which is proportional to the superfluid density ρ_sc(T).
The various models used to analyze the ρ_sc(T) data, are generally described by the relation:
ρ_sc(T) = 1 + 2 ⟨∫^∞_Δ_kE/√(E^2-Δ_k^2)∂ f/∂ EdE ⟩_FS.
Here, f = (1+e^E/k_BT)^-1 is the Fermi function and ⟨⟩_FS represents an average over the Fermi surface <cit.>; Δ_k(T) = Δ(T) δ_k is an angle-dependent
gap function, where Δ is the maximum gap value and δ_k is the
angular dependence of the gap, equal to 1, cos2ϕ, and sinθ
for an s-, d-, and p-wave model, respectively, where ϕ
and θ are the azimuthal angles. The temperature-dependent gap is assumed to follow Δ(T) = Δ_0 tanh{1.82[1.018(T_c/T-1)]^0.51} <cit.>, where Δ_0 is the zero-temperature gap value.
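A minimal numerical implementation of the s-wave version of this model, using the quoted Δ_0 = 0.44 meV and T_c = 3.2 K, is sketched below; the substitution E = √(Δ^2 + x^2) is only a convenience to remove the integrable endpoint singularity.

```python
# Sketch of the s-wave superfluid-density model with the quoted gap interpolation.
import numpy as np
from scipy.integrate import quad

kB = 8.617333e-2   # Boltzmann constant in meV/K

def gap(T, D0=0.44, Tc=3.2):
    if T >= Tc:
        return 0.0
    return D0 * np.tanh(1.82 * (1.018 * (Tc / T - 1.0)) ** 0.51)

def rho_sc(T, D0=0.44, Tc=3.2):
    """rho = 1 + 2 * int_D^inf E/sqrt(E^2-D^2) (df/dE) dE, with E = sqrt(D^2 + x^2)."""
    D = gap(T, D0, Tc)
    if D == 0.0:
        return 0.0
    def integrand(x):
        E = np.sqrt(D**2 + x**2)
        return -1.0 / (4.0 * kB * T * np.cosh(E / (2.0 * kB * T))**2)  # df/dE
    val, _ = quad(integrand, 0.0, 50.0 * kB * T)
    return 1.0 + 2.0 * val

for T in (0.3, 1.0, 2.0, 3.0):
    print(T, rho_sc(T))   # ~1 at low T, -> 0 approaching T_c
```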
The s- and p-wave models (see black solid and red dashed lines in Fig. <ref>) yield the same zero-temperature magnetic penetration depth λ_0 =291(3) nm, but different zero-temperature energy gaps Δ_0 = 0.44(1) and 0.60(1) meV, respectively. The magnetic penetration depth of the β-phase Mo_2C is much higher than that of the α-phase MoC_x (λ_0 ∼ 132 nm) and the η-phase Mo_3C_2 (λ_0 ∼ 197 nm) <cit.>.
A possible d-wave model (green dash-dotted line in Fig. <ref>) provides a gap size Δ_0 = 0.55(1) meV. This is comparable to the p-wave model,
but λ_0 = 255(3) nm is much shorter than that of both the s- and p-wave models.
As can be clearly seen in Fig. <ref>, below ∼1.2 K, the d-wave model deviates significantly from the experimental data.
At the same time, also the p-wave model shows a poor agreement with data in the 0.7–1.6 K range. The s-wave model, on the other hand,
reproduces the experimental data quite well over the entire temperature
range studied. In addition, the temperature-independent
λ_eff^-2(T) at T < 1 K (i.e., about T_c/3) definitely
excludes possible gap nodes and suggests that a fully-gapped
superconducting state occurs in Mo_2C.
§.§ Zero-field MuSR
ZF-µSR is one of the few techniques sensitive enough to detect the tiny
spontaneous magnetic field occurring below the superconducting transition temperature.
Similarly, it is suitable also for detecting a possible short-range magnetic order or magnetic fluctuations. In view of this,
we performed also ZF-µSR measurements on Mo_2C. The ZF-µSR spectra collected in the normal- and superconducting states of Mo_2C are presented in Fig. <ref>.
The lack of a fast decay and of coherent oscillations in the ZF-µSR data confirms the absence of magnetic order and/or fluctuation in Mo_2C. As a consequence, owing to the absence of magnetic
fields of electronic origin, the muon-spin relaxation is
mainly due to the randomly oriented nuclear magnetic moments.
Considering that both Mo and C atoms have relatively small nuclear moments (<1 µ_n), Mo_2C exhibits a very
weak muon-spin relaxation. Therefore, the ZF-µSR spectra can be modeled by a Lorentzian-type Kubo-Toyabe relaxation function G_KT = [1/3 + 2/3(1 -Λ_ZFt) e^-Λ_ZFt] <cit.>, where Λ_ZF represents the zero-field Lorentzian relaxation rate.
As shown by solid lines in Fig. <ref>, the ZF-µSR spectra of Mo_2C were fitted to A_ZF(t) = A_s G_KT + A_bg, where A_s is the same as A_i in Eq. (<ref>). The obtained muon-spin relaxation rates are Λ_ZF = 0.021(1) µs^-1 at 0.3 K and 0.019(1) µs^-1 at 10 K.
Obviously, the relaxation rates are almost identical in the superconducting- and the normal state of Mo_2C, differing less than
their standard deviations. The absence of an additional muon-spin relaxation below T_c definitely excludes a possible time-reversal symmetry (TRS) breaking in the superconducting state of Mo_2C.
Hence, combined with TF-µSR data, our ZF-µSR
results suggest a conventional fully-gapped bulk SC with a preserved TRS in the β-phase Mo_2C superconductor.
§.§ Band-structure calculations
According to XRD refinements (see Fig. <ref>), in Mo_2C, the orthorhombic crystal structure shows the best agreement with the
XRD pattern. To confirm the crystal structure of Mo_2C, we performed comparative first-principle calculations of the phonon dispersion spectra of Mo_2C by using the space groups Pbcn (β-phase), P6_3/mmc (η-phase), and P3̅1m (ζ-phase), respectively.
As shown in Figs. <ref>(a)-(c), no soft phonon modes could be
identified in the spectra of these structures, implying that all of them are dynamically stable and can be synthesized experimentally. This is consistent with the mixture of different phases
we find in the samples obtained by arc melting. The calculated total energies versus the unit-cell volumes are summarized in Fig. <ref>(d)
for the three crystal structures. Among them, the β-phase Mo_2C has the lowest energy at the equilibrium volume, while this is highest for the η-phase. Therefore, the β-phase molybdenum carbides can be stabilized at a relatively low pressure and temperature compared to the other phases <cit.>.
Since both the experiment and the theory confirm that Mo_2C adopts the β-phase, we calculated the electronic-band structures solely for this phase. The theoretical results for the other phases can be found elsewhere <cit.>.
The calculated electronic band structures for the β-phase
Mo_2C, are summarized in Fig. <ref>.
Close to the Fermi level, the electronic bands are dominated by the 4d-orbitals of Mo atoms, while the contribution from the C 2p-orbitals is almost negligible. Indeed, over a wide range of energies,
the contribution from the C-2p orbitals is less than 4.4%. This situation is also reflected in the DOS shown in the right panels.
The estimated DOS at the Fermi level is about 1.93 states/(eV f.u.) [= 7.72 states/(eV cell)/Z, with Z = 4, the number of Mo_2C formula units per unit cell]. Such a relatively high DOS suggests a
good metallicity for Mo_2C, consistent with previous electrical resistivity data <cit.>. After including the SOC the bands separate, since SOC breaks the band degeneracy and brings one of the bands closer to the Fermi level [see Fig. <ref>(b)]. The band splitting due to the SOC is rather weak, here visible only along the Z–U line near the Fermi level. Although the band splitting along the S–Y line is quite significant, these bands are too far away from the Fermi level to have any meaningful influence on the electronic properties of Mo_2C. In the β-phase Mo_2C, the band splitting E_SOC is up to 100 meV. This is comparable to the α-phase NbC, but much smaller than in TaC <cit.>.
The Pbcn space group of Mo_2C is nonsymmorphic and it has an inversion symmetry.
After inspecting the band structure without SOC across the whole Brillouin zone [see Fig. <ref>(a)], the bands along the X–S–Y,
Z–U, and R–T–Z directions turn out to be
twofold degenerate, while the bands along U–R are fourfold
degenerate (without considering the spin degree of freedom).
By using symmetry arguments <cit.>, the X–U–S–R and U–R–Z–T planes are twofold degenerate nodal surfaces due to the combined
presence of a screw rotation and time-reversal symmetries.
The fourfold degenerate nodal lines along U–R are protected
by the combination of glide-mirror- and PT symmetries.
In the presence of SOC, the fourfold-degenerate bands are broken into two twofold-degenerate bands [see Fig. <ref>(b)],
except for the fourfold-degenerate nodal lines along the
R–T–Z direction, which are protected by a combination of glide-mirror- and PT symmetries.
Therefore, similar to other phases of carbides <cit.>, β-Mo_2C with nodal lines crossing the Fermi level could also be material candidates for future studies of topological superconductivity.
Among the 8 bands crossing the Fermi level, only two of them
contribute significantly to the DOS and have the largest Fermi surfaces (FSs). These two bands are highlighted in purple and cyan
in Fig. <ref>(a), and their corresponding FSs are depicted in Figs. <ref>(b) and (c), respectively. Clearly, these two bands form distinct FSs,
even though both are due to Mo 4d-orbitals.
The purple band exhibits two small hole pockets near the Brillouin center, which are much smaller than the analogous electron
pocket of the cyan band. Near the Brillouin boundary of the purple band two cylinder-like FSs extend along the Γ–Z direction. By contrast, in the cyan band, such FSs extend along the Γ–Y direction.
Clearly, the FSs of the orthorhombic Mo_2C are more three dimensional and more complex than those of the α-phase TMCs.
In the latter case, the largest FSs consist of three cylinders along the k_i (i = x, y, z) directions.
Such cylinder-like FSs originate from the strong hybridization between the transition metal d-orbitals and C p-orbitals.
By contrast, the p–d hybridization is rather weak in the orthorhombic Mo_2C.
The cylinderlike FSs are known to play an important role in the SC of high-T_c iron-based materials <cit.>. This may also be the case for α-phase TMCs, which have relatively
high T_c values in comparison to other carbide phases.
§ DISCUSSION
Now, we briefly discuss the different phases of molybdenum carbides.
To date, there are mainly two phases of molybdenum carbides that have been reported to become superconducting at low temperature, namely α-MoC_x and η-Mo_3C_2. The γ-MoC and ζ-phase Mo_2C adopt a noncentrosymmetric hexagonal and a centrosymmetric trigonal structure, respectively, but no SC has been reported in these phases yet. Recent theoretical work predicts that by introducing hole carriers, the γ-phase MoC could show SC with a T_c up to 9 K <cit.>. Here, by using the µSR technique, we reveal
that the β-phase Mo_2C instead represents the third
member of the molybdenum carbides to show bulk SC.
Among these, the α- and β-phases show the highest and the lowest T_c, i.e., ∼ 15 K <cit.> and ∼ 3.2 K, respectively, while the η-phase Mo_3C_2 shows an intermediate T_c of 7.4 K <cit.>.
The highest T_c in the α-phase TMCs is most likely due to their
strong p–d hybridization and, thus, to an enhanced electron-phonon coupling.
We recall that, the strong p–d hybridization produces large
cylinder-like FSs <cit.>, which play an important
role also in the SC of high-T_c iron-based materials <cit.>.
As for the β-phase Mo_2C, band-structure calculations indicate
a rather weak p–d hybridization (see Fig. <ref>), which
may justify their comparatively low T_c value.
The low-temperature superfluid density, determined by TF-µSR
in our study, suggests a fully-gapped superconducting state in the β-phase Mo_2C. A µSR study has not yet been performed in the α-phase MoC_x and η-phase Mo_3C_2.
This is related to the difficulties in synthesizing sufficient amounts of material under the demanding conditions (1700 ^∘C, 6–17 GPa) required in these cases <cit.>.
According to our previous TF-µSR studies, the α-phase NbC and TaC also exhibit a fully-gapped superconducting state <cit.>. We expect also the α-phase MoC_x to show similar SC properties to NbC and TaC.
In fact, the electronic specific heat of α-phase MoC_x
(and η-phase Mo_3C_2) shows an exponential temperature dependence
in the superconducting state, consistent with a nodeless SC <cit.>. Further, the small
zero-temperature energy gap (Δ_0 < 1.76 k_BT_c)
and a reduced specific-heat jump at T_c (ΔC/γT_c < 1.43) suggest a weakly coupled SC in the various phases of superconducting molybdenum carbides. Taking into account the preserved TRS in the superconducting state, as well as an upper critical field H_c2 well below the Pauli limit <cit.>, we conclude that the molybdenum
carbides exhibit a spin-singlet pairing, independent of
their crystal structure (phase).
Finally, we discuss the topological aspects of molybdenum carbides. The α-phase MoC, possesses a nonzero ℤ_2 topological invariant and Dirac surface states <cit.>.
The isostructural NbC, contains three closed node lines in the bulk band structure (without considering SOC) of its first Brillouin zone. These are protected by time-reversal and space-inversion symmetry <cit.>.
In case of a large SOC, such nodal loops become gapped. Since the 4d Nb and
Mo atoms exhibit a weaker intrinsic SOC than the 5d Ta atoms,
the SOC effects should be modest in both NbC and MoC. Consequently, the node lines — predicted by calculations neglecting SOC effects — are most likely preserved in both the above carbides. As such, the α-phase MoC and NbC might
be good candidates for observing the exotic two-dimensional surface states. Further, although the γ-phase MoC is not superconducting in its pristine form, it is predicted to be a topological nodal-line material, exhibiting drumhead surface states.
After introducing hole carriers, its SC can be tuned to reach a T_c of up to 9 K <cit.>.
Since the γ-phase MoC adopts a noncentrosymmetric hexagonal structure, it can be classified as a topological Weyl semimetal. Indeed, three-component fermions were experimentally observed in the γ-phase MoP and WC <cit.>. By applying external pressure, the topological semimetal MoP becomes a superconductor, whose T_c reaches 4 K (above 90 GPa) <cit.>, thus representing
a possible candidate topological superconductor. Here, we also find that the β-phase Mo_2C hosts twofold-degenerate nodal surfaces and fourfold-degenerate nodal lines near the Fermi level. In the case of SOC, the fourfold degenerate
nodal lines cross the Fermi level and, hence, could contribute to the
superconducting pairing. In general, all the various phases of molybdenum carbides are promising for studying topological superconductivity.
§ CONCLUSION
To summarize, we studied the superconducting properties of
Mo_2C mostly by means of the µSR technique,
as well as via numerical band-structure calculations.
The latter show that the phonon dispersion spectrum of Mo_2C
provides the lowest total energy in case of the orthorhombic β-phase
(with Pbcn space group), a result consistent with the experiment.
Magnetization measurements confirm the bulk superconductivity
of Mo_2C, with a T_c of 3.2 K. The temperature dependence of the superfluid density reveals a nodeless
superconducting state, which is well described by an isotropic s-wave
model. The lack of spontaneous magnetic fields below T_c indicates
that time-reversal symmetry is preserved in the superconducting state of Mo_2C.
Electronic band-structure calculations suggest that the density of states
at the Fermi level is dominated by the Mo-4d electrons, while the
contribution of the C-2p electrons is negligible over a broad
energy range. As a consequence, the p–d hybridization is rather weak
in the β-phase Mo_2C, resulting in a relatively low
T_c value. Topological nodal states including nodal surfaces and nodal lines could be identified
in the Mo_2C electronic band structure near the Fermi level.
This finding, together with the intrinsic superconductivity,
suggests that the β-phase Mo_2C, too, is a potential candidate for
studies of topological SC, similar to the other phases of molybdenum carbides.
The authors thank Weikang Wu for fruitful discussions.
This work was supported by the Natural Science Foundation of Shanghai
(Grant Nos. 21ZR1420500 and 21JC1402300), Natural Science
Foundation of Chongqing (Grant No. CSTB-2022NSCQ-MSX1678), National
Natural Science Foundation of China (Grant No. 12374105), Fundamental
Research Funds for the Central Universities, and the Schweizerische
Nationalfonds zur Förderung der Wissenschaftlichen
Forschung (SNF) (Grant Nos. 200021_169455 and No. 200021_188706).
We also acknowledge the allocation of beam time at the Swiss muon source (Dolly µSR spectrometer).
|
http://arxiv.org/abs/2409.02408v1 | 20240904032122 | Force-Limited Control of Wave Energy Converters using a Describing Function Linearization | [
"Rebecca McCabe",
"Maha Haji"
] | eess.SY | [
"eess.SY",
"cs.SY",
"42A10, 93C10",
"I.6.3; G.1.2"
] |
[footnoteinfo]This material is based on work supported by National Science Foundation Graduate Research Fellowship Grant No. DGE–2139899.
First]Rebecca McCabe
Second]Maha N. Haji
*Sibley School of Mechanical and Aerospace Engineering,
Cornell University,
Ithaca, NY 14853 USA
[First]e-mail: [email protected]
[Second]e-mail: [email protected]
§ ABSTRACT
Actuator saturation is a common nonlinearity. In wave energy conversion, force saturation conveniently limits drivetrain size and cost with minimal impact on energy generation. However, such nonlinear dynamics typically demand numerical simulation, which increases computational cost and diminishes intuition. This paper instead uses describing functions to approximate a force saturation nonlinearity as a linear impedance mismatch. In the frequency domain, the impact of controller impedance mismatch (such as force limit, finite bandwidth, or parameter error) on electrical power production is shown analytically and graphically for a generic nondimensionalized single degree of freedom wave energy converter in regular waves. Results are visualized with Smith charts. Notably, systems with a specific ratio of reactive to real mechanical impedance are least sensitive to force limits, a criterion which conflicts with resonance and bandwidth considerations. The describing function method shows promise to enable future studies such as large-scale design optimization and co-design.
Wave energy converters, constrained control, systems with saturation, nonlinear and optimal marine system control, describing functions, impedance mismatch, linearization.
§ INTRODUCTION
§.§ Motivation
Ocean wave energy converters (WECs) are an immature yet promising source of renewable energy to decarbonize coastal grids and offshore systems. Maximum power transfer requires impedance matching of WEC controls, powertrain, and hydrodynamics, but plant uncertainty, controller bandwidth, and physical constraints prevent perfect matching. Actuator force limits are especially relevant given waves' high-force, low-speed nature and the scaling of device cost with force. WEC controllers must maximize power while obeying force limits, although the resulting nonlinearity requires computationally costly numerical optimization. There is a lack of rapid and intuitive methods suitable for early tradeoff analysis. To this end, the present paper demonstrates linear analytic treatment of force saturation limits and other sources of impedance mismatch.
§.§ Literature Review
Prior studies on WEC powertrain constraints investigate position, velocity, force, rate of change of force, and power flow direction. Constrained numerical optimization is typically used. For example, <cit.> review model predictive control while <cit.> apply pseudo-spectral methods.
A minority of work tackles the problem analytically. <cit.> use the Pontryagin principle to show that a bang-singular arc-bang controller is optimal, revealing that saturating the unsaturated optimal solution can still be optimal in certain cases.
They derive analytical piecewise expressions for the optimal control force, but finding the corresponding power still requires numerical simulation.
<cit.> present a geometric tool to analyze simultaneous force and position constraints. They provide analytical relationships between power and root-mean-square signals, which relate to upper and lower bounds on constraint values, but the bounds are not tight. <cit.> introduce another geometric tool which accounts only for position constraints and focuses on hydrodynamics over powertrain. The tool is visually and mathematically similar to a Smith chart. To the authors' knowledge, no prior work uses describing functions to address WEC constraints, though <cit.> use them to model a WEC with nonlinear damping, and <cit.> use the same technique to model drag, calling it a Fourier approximation. Outside of wave energy, <cit.> apply describing functions to model velocity and torque saturation on an impedance-controlled robot and experimentally validate the results.
§.§ Paper Outline and Contribution
Section <ref> presents the paper's first contribution: application of linear theory to analyze the relationship between impedance mismatch, signal amplitude, and power for a WEC in regular waves. Explicit analytical expressions are derived, and results are visualized on a Smith chart. As the second contribution, section <ref> suggests that constraints typically solved numerically, such as force limits, be linearized with describing functions in order to apply the previous section's results. The approximation is derived and discussed in the context of impedance mismatch.
These contributions provide three main benefits. First, they offer an intuitive process for rapid tradeoff analysis early in the design process, with the opportunity to apply standard linear frequency-domain tools. Second, they are computationally efficient enough to integrate with more expensive techniques like design optimization or control co-design, either directly or as an initial guess for solvers of higher fidelity dynamics. Third, the results are analytical and differentiable, providing gradients for sensitivities or to accelerate convergence of outer optimizations.
§ PEAK LIMITING IN THE LINEAR CASE
§.§ General Impedance-Mismatched System
The analysis starts with a generic linear system modeled as a Thévenin equivalent circuit with AC voltage source V_th and complex source impedance Z_th, shown in Fig. <ref>.
The load impedance Z_L is to be selected, with the conflicting goals of maximizing average power transfer P_L and minimizing the peak amplitude of load current |I_L| or voltage |V_L|. Maximum power transfer occurs when there is impedance matching, meaning Z_L = Z_th^* where * indicates complex conjugate. The load average power, peak voltage, and peak current at this matched point, denoted P_L^m, |V_L^m|, and |I_L^m| respectively, are found as:
P_L^m = |V_th|^2/(8 Re(Z_th)),
|V_L^m| = |V_th| |Z_th|/(2 Re(Z_th)),
|I_L^m| = |V_th|/(2 Re(Z_th))
where Re(·) denotes the real part and Im(·) the imaginary part.
To consider all possibilities of the unmatched case, we set Z_L = z Z_th^* for arbitrary complex number z. The space of z can be visualized using a Smith chart, where Re(z) is on a curved horizontal axis and Im(z) is on a curved vertical axis. The axes are curved such that the chart can be simultaneously read as a standard polar plot of the complex reflection coefficient Γ, which is a transformation of z defined as Γ = (z-1)/(z+1). The impedance-matched case of z=1, Γ = 0 is found at the center of the plot, the minimum voltage at z=0, Γ = -1 on the left, and the minimum current at z →∞, Γ = 1 on the right.
The average power, peak voltage, and peak current in the unmatched case can be found using standard circuit techniques and expressed as fractions of their matched counterparts. <cit.> derives the power ratio:
P_L/P_L^m = 1 - |Γ|^2
This relationship is visualized on the Smith chart in Fig. <ref>. As the impedance ratio z gets further away from the impedance-matched condition z=1 at the center of the circle, the power lowers quadratically.
The corresponding voltage and current ratios are:
|V_L|/|V_L^m|, |I_L|/|I_L^m| = √((|Γ|^2 + 2 ϵ Re(Γ) + 1)/(α^2 |Γ|^2 + 2 α Im(Γ) + 1))
where ϵ indicates sign: ϵ=1 for voltage ratio and ϵ=-1 for current ratio, and α = Im(Z_th)/Re(Z_th) is a parameter related to the phase of the source impedance. These relationships are visualized on the Smith charts in Fig. <ref> (a) and (b), where contours show the ratios as a function of z (and thus Γ) for various values of α. Only positive α (inductive Z_th) is shown for brevity. The contours for negative α (capacitive Z_th) can be found by reflecting the graphs over the horizontal axis (Im(Γ) = 0) due to symmetry. On the Smith charts, points where the voltage and current ratios exceed 1 are shaded. These points are undesirable because they absorb less power than the baseline z=1, Γ=0 matched case while having higher peaks. The optimal contours (lowest voltage and current ratios for a given power ratio) are traced out with dashed lines. Because the optimal voltage reduction path requires decreasing the impedance and the optimal current reduction path requires increasing the impedance, it is not possible to follow both paths simultaneously. For sufficiently high values of |α|, it is possible to reduce both current and voltage simultaneously (i.e. avoid the shaded region of both Smith charts in Fig. <ref> (a) and (b)), although for α=0, decreasing voltage implies increasing current and vice versa. This tradeoff is explored further in section (c) of Fig. <ref>, a pareto front showing the nondominated combinations of the voltage, current, and power ratios.
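A short numerical helper implementing the power and amplitude ratios is sketched below; the checks at the matched point and at the z→0 and z→∞ extremes follow directly from the expressions above.

```python
# Helper evaluating the power ratio and the amplitude ratios for a given
# reflection coefficient Gamma and source-impedance phase parameter alpha.
import numpy as np

def ratios(Gamma, alpha):
    """Return (P_L/P_L^m, |V_L|/|V_L^m|, |I_L|/|I_L^m|) for complex Gamma."""
    g2 = abs(Gamma)**2
    den = alpha**2 * g2 + 2.0 * alpha * Gamma.imag + 1.0
    P = 1.0 - g2
    V = np.sqrt((g2 + 2.0 * Gamma.real + 1.0) / den)
    I = np.sqrt((g2 - 2.0 * Gamma.real + 1.0) / den)
    return P, V, I

print(ratios(0.0 + 0.0j, alpha=1.0))   # matched point: (1, 1, 1)
print(ratios(-0.999 + 0j, 1.0)[1])     # voltage ratio -> 0 as z -> 0
print(ratios(0.999 + 0j, 1.0)[2])      # current ratio -> 0 as z -> infinity
```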
The optimal paths (·)^opt of Fig. <ref> are derived by setting the derivative of the voltage and current ratios (<ref>) with respect to ∠ Γ equal to zero, for ∠ Γ = tan^-1(Im(Γ)/Re(Γ)):
∠ Γ^opt = 2 tan^-1[(α^2 |Γ|^2 + 1)/(σ + ϵα (1 + |Γ|^2))] + ϵ cos^-1[-2α|Γ|/σ]
where σ = √((α^2|Γ|^2 + 1)^2 + α^2 (|Γ|^2 + 1)^2) and sign indicator ϵ=1 obtains the angle at minimum voltage and ϵ=-1 minimum current. <cit.> obtained a similar result for a two-port system. Further manipulation of (<ref>) shows that the minimum-voltage and minimum-current angles are supplementary: ∠ Γ^opt,V+ ∠ Γ^opt,I= π. Using this relation and plugging into (<ref>) reveals that the ratios are equal at their respective optima: |V_L|/|V_L^m|^opt,V= |I_L|/|I_L^m|^opt,I. Intuitively, voltage and current reductions are symmetric.
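These optimality claims are easy to verify numerically by brute force over the angle of Γ; the sketch below does so for one illustrative pair (α, |Γ|) = (2, 0.3) and compares against the closed-form angle above.

```python
# Brute-force check of the optimal-angle expression and of the supplementary-angle
# and equal-minima claims, for one illustrative (alpha, |Gamma|) pair.
import numpy as np

def ratios(Gamma, alpha):
    g2 = abs(Gamma)**2
    den = alpha**2 * g2 + 2.0 * alpha * Gamma.imag + 1.0
    V = np.sqrt((g2 + 2.0 * Gamma.real + 1.0) / den)
    I = np.sqrt((g2 - 2.0 * Gamma.real + 1.0) / den)
    return V, I

alpha, r = 2.0, 0.3
phi = np.linspace(-np.pi, np.pi, 100001)
V, I = ratios(r * np.exp(1j * phi), alpha)
phiV, phiI = phi[np.argmin(V)], phi[np.argmin(I)]

sigma = np.sqrt((alpha**2 * r**2 + 1.0)**2 + alpha**2 * (r**2 + 1.0)**2)
def phi_opt(eps):   # eps = +1 (min voltage), -1 (min current)
    return (2.0 * np.arctan((alpha**2 * r**2 + 1.0) / (sigma + eps * alpha * (1.0 + r**2)))
            + eps * np.arccos(-2.0 * alpha * r / sigma))

print(phiV, phi_opt(+1))      # should agree
print(phiI, phi_opt(-1))      # should agree
print(phiV + phiI, np.pi)     # supplementary angles
print(np.min(V), np.min(I))   # equal minima
```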
The optimal contours are aggregated in Fig. <ref> to show the tradeoff between power and voltage/current. Interestingly, as |α| grows (i.e., as Z_th becomes less resistive and more reactive), there is less of a power penalty for a given voltage or current reduction. In other words, power in the pure reactive Z_th case is least sensitive to voltage and current limits. For fixed (Z_th), (<ref>) shows the baseline matched power is independent of α, suggesting that plant design should maximize |α|.
Note that most of the optimal contours of Fig. <ref> (a) and (b) require a z with nonzero imaginary part, i.e. a Z_L that is more or less reactive than the impedance-matched case, rather than merely scaled up or down. This highlights the difference between “constrained optimal control" and “optimal constrained control." The former refers to the unconstrained optimal controller that has been scaled or saturated until it meets the constraint, while the latter refers to the controller that is optimal for the constrained problem, which is distinct under the present assumption of linear control. Scaling down or saturating the control signal computed with the unconstrained optimal impedance yields a signal with a fundamental amplitude identical to one computed with a proportionally scaled linear control impedance. This enforces (z)=0, which is evidently non-optimal for all but α=0. This is distinct from the complex z in the optimal profiles of (<ref>) and Fig. <ref>, which would be considered optimal constrained control. Interestingly, <cit.> show that the optimal constrained and constrained optimal controllers are identical if nonlinear control is allowed. In summary, classical theory reveals the effect of impedance mismatch on power, current, and voltage, informing the choice of a load impedance to balance power generation and peak limiting constraints.
§.§ Wave Energy Converters
Applying the preceding analysis to a WEC requires a Thévenin equivalent circuit for WEC dynamics. This study assumes a single degree of freedom floating body coupled to a power take-off (PTO) with a drivetrain and a linear or rotational synchronous surface permanent magnet electric generator. Substituting a hydraulic or other impedance is straightforward provided the system remains linear.
The generator model is non-ideal and the objective is electrical power. The frequency domain WEC dynamics are:
((m+A)s^2 + B_h s + K_h) X + F_P = F_e Body
G τ_PTO = F_PTO, G s X = Ω Gear ratio
τ_PTO = (B_d + K_d/s) Ω + τ_gen PTO
τ_gen = K_t I, V = I(R + s L) - K_t Ω Generator
V = (B_c + K_c/s) I Controller
P_elec = 0.5 (I^* V) Power
with Laplace variable s, mass m, added mass A, hydrodynamic damping B_h, hydrostatic stiffness K_h, WEC position X, power take-off and wave excitation forces F_P and F_e, effective gear ratio G, drivetrain mechanical stiffness and damping K_d and B_d, generator torque and rotation speed τ_gen and Ω, generator torque constant K_t, generator q-axis current and voltage I and V, generator resistance and inductance R and L, controller stiffness and damping K_c and B_c, and average electrical power P_elec. The controller acts between V and I rather than τ_gen and Ω as is more typical, providing equivalent dynamics and making the relevant Thévenin equivalent more convenient to define. B_d and K_d can capture mooring and drag forces. Frequency dependence of hydrodynamic coefficients A, B_h, and F_e is omitted since this study assumes regular waves. Fig. <ref> shows a block diagram of the dynamics.
Since electrical, not mechanical, power is the focus, choose the controller as the load impedance: Z_L = Z_c = B_c + K_c/s. Thus the Thévenin equivalent load voltage/current are the generator q-axis voltage/current: V_L = V, I_L = I. Thévenin parameters from <cit.> are then:
Z_th = Z_w + K_t^2 G^2/Z_m, |V_th| = K_t G F_e/|Z_m|
with winding impedance Z_w = R + s L and mechanical impedance Z_m = B_h + G^2B_d + (m+A)s + (K_h + G^2 K_d)/s.
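A compact numerical construction of these Thévenin quantities at a single wave frequency is sketched below; all parameter values are illustrative placeholders rather than a calibrated device.

```python
# Sketch of the Thevenin reduction of the WEC model at one wave frequency.
# Parameter values are illustrative placeholders.
import numpy as np

def thevenin(w, m, A, Bh, Kh, Bd, Kd, G, Kt, R, L, Fe):
    s = 1j * w
    Zm = Bh + G**2 * Bd + (m + A) * s + (Kh + G**2 * Kd) / s   # mechanical impedance
    Zw = R + s * L                                             # winding impedance
    Zth = Zw + Kt**2 * G**2 / Zm
    Vth = Kt * G * Fe / abs(Zm)                                # amplitude only
    return Zth, Vth

Zth, Vth = thevenin(w=1.0, m=1e4, A=5e3, Bh=2e3, Kh=5e4, Bd=1.0, Kd=0.0,
                    G=50.0, Kt=10.0, R=0.5, L=5e-3, Fe=1e4)
alpha = Zth.imag / Zth.real
P_matched = abs(Vth)**2 / (8.0 * Zth.real)
print(Zth, alpha, P_matched)
```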
Substituting (<ref>) into (<ref>) and applying the Haskind relation between B_h and F_e yields the matched power:
P_L^m = G_0 (J/k) 𝒟/[1 + (ℛ/𝒟)(1+α_m^2)], where G_0 = 1 for heave and G_0 = 2 for surge/pitch,
with incident energy density J, wavenumber k, and gain G_0. <cit.> show this applies for any WEC shape. Nondimensional quantities ℛ=R B_h/(K_t^2 G^2), 𝒟=B_h/(B_h+G^2B_d), and α_m = Im(Z_m)/Re(Z_m) are the normalized resistance and ratios of hydrodynamic to total damping and reactive to real impedance, respectively.
This matched power is not possible when there is controller impedance mismatch, either intentional with the intent to obey a given amplitude limit, or unintentional due to parameter uncertainty or controller bandwidth limitations in broadband waves. The mismatched power P_L is found by multiplying the matched power P_L^m in (<ref>) by the power ratio in (<ref>). The required reflection coefficient Γ to meet constraints is found via (<ref>) or graphically via Fig. <ref> and <ref>. Doing so requires an expression for α, found via (<ref>):
α = Im(Z_th)/Re(Z_th) = [ℒℛ(1+α_m^2) - 𝒟α_m]/[ℛ(1+α_m^2) + 𝒟]
with ℒ=ω L/ R. Typically ℒ≈ 0 since the wave period 2 π / ω far exceeds the winding electrical time constant L/R.
Section <ref> allows choice of Γ to limit q-axis variables V and I, but limiting other quantities may be desired. Limiting V has little direct utility. Instead, a limit on phase voltage V_s represents the voltage of a vector drive which affects the generator's torque-speed curve; a limit on position X represents actuator stroke or kinematic constraints; and a limit on apparent power S sizes the PTO including reactive power. These amplitudes are:
X = F_e/(s[Z_m + K_t^2 G^2/(Z_w - z Z_th^*)]), V_s^2 = V^2 + (L p Ω I)^2
S_max,min/P_L^m = P_L/P_L^m±|V_L|/|V_L^m||I_L|/|I_L^m|√(1+α^2)
where p is the number of machine poles. Smith plots similar to Fig. <ref> could be made to visualize the effect of these limits, but more parameters besides α must be swept to visualize the design space. Meanwhile, a generator force limit of |F_gen| ≤ F_max is achieved with a q-axis current limit of |I| ≤ I_max = F_max/K_t G, which is well-captured by the earlier parameterization. For brevity, the rest of this paper focuses on current (force) limits, which drive PTO cost and are of primary design interest. Thus, Γ is selected to set current ratio |I_L|/|I_L^m| = I_max/|I_L^m|, thereby enforcing the limit.
Besides facilitating control, the formulation also informs plant design. By inspection of (<ref>), the maximum-power plant design in the matched (unconstrained) case is (ℛ,𝒟,α_m)^opt=(0,1,0), intuitively minimizing loss due to resistance, friction, and reactive power. α_m relates to the uncontrolled mechanical damping ratio ζ and natural frequency ω_n as α_m = (ω^2-ω_n^2)/(2 ζωω_n), so the α_m=0 plant resonates (ω=ω_n) passively at the wave frequency.
Amplitude limits make design more complicated. Substituting (ℛ,𝒟,α_m)^opt into (<ref>) gives α=0, but section <ref> established that high |α| is desirable to minimize the effect of amplitude limits on power. Specifically, if ℒ=0, maximum |α| requires α_m^2=1+𝒟/ℛ instead of α_m=0. Meanwhile, the well-known Bode-Fano limit implies that for good broadband matching, |α| must instead be minimized, implying either α_m=0 or |α_m|→∞. Therefore, the three goals of maximizing matched power (min|α_m|), minimizing the effect of constraints (max|α|), and maximizing bandwidth (min|α|) conflict. For the best tradeoff, the plant α_m must maximize P_L across the wave spectrum, the subject of future control co-design work.
§ SATURATION NONLINEARITIES
Section <ref> considered a linear impedance mismatch Z_L = z Z_th^*. In linear control, the current waveform is sinusoidal, so the fundamental amplitude equals the peak amplitude and never exceeds current limit I_max.
Nonlinear control allows non-sinusoidal current, increasing the fundamental.
§.§ Describing Functions
Consider a nonlinear saturation control law of the form
I_L,sat(t) = I_max sat(I_temp(t)/I_max)
where sat is the unit saturation function and I_temp(t) is the time domain output of a linear impedance controller I_temp = V_L,sat/Z_C, stored temporarily in controller memory and not physically realized.
Fig. <ref> illustrates this nonlinearity. Unlike the unsaturated case (a), the orange saturation blocks in cases (b)-(d) create harmonics. If I_temp(t) is sinusoidal, I_L,sat(t) is a saturated sine, shown in (e). Technically, I_temp is nonsinusoidal because I_L,sat harmonics propagate through linear blocks, shown with multiple orange lines in (b). However, the second-order low-pass plant dynamics Z_th substantially reduce harmonics in downstream quantities V_S,sat, V_L,sat, and I_temp. Thus, (c)-(d) neglect harmonics of I_temp. Cases (c) and (d) differ in the location of the approximation: (c) depicts the sinusoidal-input describing function method from <cit.>, taking the fundamental of the saturated signal, and (d) is the higher order sinusoidal-input describing function from <cit.>, preserving harmonics of V_L,sat. The higher order method is useful if the low-pass assumption is poor, which may be true for broadband WECs. The saturated-sine current signal is decomposed into a sum of harmonics:
I_L,sat(t) ≈∑_n |I_L,sat,n| sin(n ω t + ψ)
where ψ is the same phase as I_temp(t), since saturation does not alter phase. The nth harmonic amplitude |I_L,sat,n| is defined as f_sat,n |I_temp|. The saturation factor f_sat,n is found via Fourier analysis:
f_sat,n =
1,  for ℐ ≥ 1, n = 1
0,  for ℐ ≥ 1, n ≠ 1
(2/π)(ℐ√(1 - ℐ^2) + θ),  for ℐ < 1, n = 1
0,  for ℐ < 1, n = 2, 4, ...
(4/π)[n√(1 - ℐ^2)sinθ - ℐcosθ]/[n(n^2 - 1)],  for ℐ < 1, n = 3, 5, ...
where ℐ=I_max/|I_temp| and θ = n sin^-1ℐ. Fig. <ref> depicts the first 7 harmonics. As the signal saturates, the fundamental decreases and higher harmonics emerge.
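As a quick numerical sanity check of these coefficients, the short Python sketch below evaluates f_sat,n from the closed-form cases and compares it with a direct Fourier integral of a clipped sine. The value ℐ = 0.6 and all variable names are illustrative assumptions of ours, not values from the paper.

```python
import numpy as np

def f_sat(n, I):
    """Saturation factor f_sat,n: nth-harmonic amplitude of the saturated sine
    divided by the unsaturated amplitude |I_temp|.  I is the ratio
    I_max/|I_temp| and theta = n*arcsin(I), as in the text."""
    if I >= 1.0:
        return 1.0 if n == 1 else 0.0
    theta = n * np.arcsin(I)
    c = np.sqrt(1.0 - I**2)
    if n == 1:
        return (2.0 / np.pi) * (I * c + theta)
    if n % 2 == 0:
        return 0.0
    return (4.0 / np.pi) * (n * c * np.sin(theta) - I * np.cos(theta)) / (n * (n**2 - 1))

# Cross-check against a direct Fourier integral of the clipped sine.
I = 0.6                                      # assumed example ratio I_max/|I_temp|
t = np.linspace(0.0, 2.0 * np.pi, 200001)
x = np.clip(np.sin(t), -I, I)                # saturated sine in units of |I_temp|
dt = t[1] - t[0]
for n in range(1, 8):
    b_n = np.sum(x * np.sin(n * t)) * dt / np.pi   # Fourier sine coefficient
    print(n, f_sat(n, I), b_n)               # the two columns should agree
```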
At each harmonic frequency n, the saturation block is approximated by gain f_sat,n, so the equivalent load impedance is Z_L = Z_C / f_sat,n rather than the unsaturated Z_L = Z_C. This makes the nondimensional impedance for the nth harmonic z_n = Z_C / (f_sat,n Z_th^*).
Now the response is the sum of linear responses at different frequencies:
P_L = ∑_nP_L,n(z_n)
In classic describing functions, only the fundamental current is used: I_L,sat≈ I_L,sat,1. This system contains only one harmonic, allowing the use of the tools from section <ref>, though it is still only quasi-linear because f_sat,1 depends on amplitude. The fundamental of the saturated current is a factor f_sat,1/ℐ above the peak, thus nonlinear control allows higher current ratios for the same current limit. The resulting power increase is found from Fig. <ref> by moving on the x-axis a factor of f_sat,1/ℐ higher than a given starting point. If |I_temp|≫ I_max, then ℐ→0 and f_sat,1/ℐ→4/π, yielding a square wave with fundamental 4/π higher than is possible in linear control.
To compute ℐ otherwise, the dynamics are used to find complex-valued I_temp:
I_L,sat,1/I_temp = (V_th - V_L,sat)/(I_temp Z_th) = (V_th - Z_C I_temp)/(I_temp Z_th)
Controller impedance Z_C is still undecided. A procedure aiming to maximize power might use equations (<ref>) and (<ref>) to select Z_C so the fundamental |I_L,sat,1| approaches its unconstrained value |I_L^m|. However, invoking Pontryagin's principle avoids this effort. <cit.> show analytically that steady-state optimal nonlinear control of a force-limited WEC is simply unconstrained optimal control with saturation, so Z_C=Z_th^*. (<ref>) is then solved using the amplitude condition f_sat,1 = |I_L,sat,1/I_temp| and the phase condition arg(I_L,sat,1/I_temp) = 0. Combining with (<ref>) and (<ref>) gives a transcendental equation. If an analytical solution is desired, the arcsine can be algebraically approximated.
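One way the resulting transcendental equation could be solved numerically is sketched below: with Z_C = Z_th^*, the amplitude and phase conditions combined with the circuit relation reduce to f_sat,1(ℐ) I_temp Z_th + Z_th^* I_temp = V_th, which is handed to a root finder over the real and imaginary parts of I_temp. The Thévenin parameters, current limit, and seed are assumed example values; the paper does not prescribe this particular solver.

```python
import numpy as np
from scipy.optimize import fsolve

# Assumed example Thevenin parameters and current limit.
Z_th = 2.0 + 1.5j
V_th = 10.0 + 0.0j
I_max = 1.0
Z_C = np.conj(Z_th)              # optimal controller impedance per the text

def f_sat1(I):
    """Fundamental (n = 1) saturation factor of the describing function."""
    if I >= 1.0:
        return 1.0
    return (2.0 / np.pi) * (I * np.sqrt(1.0 - I**2) + np.arcsin(I))

def residual(v):
    """Real/imaginary parts of f_sat,1*I_temp - (V_th - Z_C*I_temp)/Z_th = 0."""
    I_temp = v[0] + 1j * v[1]
    r = f_sat1(I_max / abs(I_temp)) * I_temp - (V_th - Z_C * I_temp) / Z_th
    return [r.real, r.imag]

I0 = V_th / (2.0 * Z_th.real)    # unconstrained matched current as a seed
sol = fsolve(residual, [I0.real, I0.imag])
I_temp = sol[0] + 1j * sol[1]
I_fund = f_sat1(I_max / abs(I_temp)) * abs(I_temp)
print("I_temp =", I_temp, " |I_L,sat,1| =", I_fund, " vs peak limit", I_max)
```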
§.§ Limitations and Extensions
The sinusoidal-input describing function assumes a saturated-sine current waveform. This is reasonable for WECs (a) in regular waves, (b) with low-pass dynamics |Z_th(nω)| ≪ |Z_th(ω)| for n≥3. Critically, (a) fails in a broadband ocean environment. Accuracy in irregular waves remains to be tested. <cit.> suggest that sinusoidal-input describing functions still apply for some non-sinusoidal excitation, but <cit.> show that systems with multiple sinusoids as input require other methods. Even for regular waves, the filter assumption (b) could be violated for broadband WECs, such as small WECs, or for narrow-band WECs analyzed far below the resonant frequency. This should be assessed with representative frequency-dependent hydrodynamics. Meanwhile, adding multiple constraints is unproblematic since arbitrarily many amplitudes may be analyzed/limited in the linear system, as long as all nonlinearity is filtered by subsequent low-pass dynamics. However, bump-stop position saturation is one problematic example where the nonlinearity occurs after the dynamics. Future work includes an error analysis compared to fully nonlinear simulations.
§ CONCLUSION
This work presented an analytical method to handle WEC generator force (current) constraints. First, linear theory was reviewed, and key relationships for an impedance-mismatched Thévenin-equivalent circuit were visualized on Smith charts and Pareto fronts. Then, specific wave energy dynamics were introduced, highlighting the implications of nondimensional resistance, damping, and mechanical impedance on power and sensitivity to amplitude limits. Next, describing function theory for the saturation nonlinearity was introduced, using the plant's low-pass nature to justify neglecting higher harmonics.
A variety of future work is possible. The effect of irregular waves and frequency-dependent dynamics on describing function accuracy should be investigated. The linear framework for voltage and current limits could easily extend to other constraints like position, velocity, phase voltage, or apparent power, and multiple degrees of freedom could be considered. When combined with cost estimates, this offers a powerful strategy for techno-economic tradeoff analysis and design optimization. All in all, while analytical approximations do not take the place of nonlinear numerical optimization, they provide the intuition and computational speed that may unlock improved WEC designs with substantial climate impact.
The code for this work is available open-source at <https://github.com/symbiotic-engineering/IFAC_CAMS_2024/>.
|
http://arxiv.org/abs/2409.02819v1 | 20240904153754 | Efficient Simulation of 1D Long-Range Interacting Systems at Any Temperature | [
"Rakesh Achutha",
"Donghoon Kim",
"Yusuke Kimura",
"Tomotaka Kuwahara"
] | quant-ph | [
"quant-ph",
"cond-mat.dis-nn",
"cond-mat.quant-gas",
"cond-mat.stat-mech",
"math-ph",
"math.MP"
] |
^1
Analytical quantum complexity RIKEN Hakubi Research Team, RIKEN Center for Quantum Computing (RQC), Wako, Saitama 351-0198, Japan
^2
Department of Computer Science and Engineering, Indian Institute of Technology (Banaras Hindu University), Varanasi, 221005, India
^3
PRESTO, Japan Science and Technology (JST), Kawaguchi, Saitama 332-0012, Japan^4
RIKEN Cluster for Pioneering Research (CPR), Wako, Saitama 351-0198, Japan
§ ABSTRACT
We introduce a method that ensures efficient computation of one-dimensional quantum systems with long-range interactions across all temperatures. Our algorithm operates within a quasi-polynomial runtime for inverse temperatures up to β = polylog(n). At the core of our approach is the Density Matrix Renormalization Group algorithm, which typically does not guarantee efficiency. We have created a new truncation scheme for the matrix product operator of the quantum Gibbs states, which allows us to control the error analytically. Additionally, our method is applied to simulate the time evolution of systems with long-range interactions, achieving significantly better precision than that offered by the Lieb-Robinson bound.
Efficient Simulation of 1D Long-Range Interacting Systems at Any Temperature
Tomotaka Kuwahara^1,3,4
September 9, 2024
============================================================================
Introduction.—
Unraveling the complex patterns of quantum many-body systems presents one of the most significant challenges in modern physics. Among these challenges, characterizing the thermal equilibrium quantum state (also known as the quantum Gibbs state) at zero and non-zero temperatures stands as a central target. These states are generally difficult to manage in high-dimensional systems as mentioned in previous studies <cit.>. However, a variety of techniques have been developed to address their properties in one-dimensional systems <cit.>.
Notably, techniques such as the transfer matrix method used in classical Ising models <cit.> and its generalization to 1D quantum systems <cit.> have been well-recognized. More recently, the Density Matrix Renormalization Group (DMRG) algorithm has emerged as a promising approach <cit.>, enabling the numerical construction of the matrix product operator (MPO) that encapsulates all information about the equilibrium states <cit.>. Despite its promise, providing a rigorous justification for the precision guarantee in the DMRG algorithm remains a critical open problem at both zero and non-zero temperatures <cit.>. Nevertheless, several methods have been proposed to develop a `DMRG-type' algorithm to construct the MPO with guaranteed accuracy and time efficiency <cit.>.
In one-dimensional systems at non-zero temperatures, the construction of the MPO for short-range interacting systems has received extensive attention. The pioneering method, based on the cluster expansion technique by Molnar et al. <cit.>, introduced a polynomial-time algorithm. This algorithm works with a runtime scaling as (n/ϵ)^𝒪(β), where n is the system size, to approximate the 1D quantum Gibbs state by the MPO within an error ϵ. Subsequent advancements have refined the error bounds, with the leading-edge approach achieving a runtime of e^𝒪̃(β) + 𝒪̃(√(βln(n/ϵ))) <cit.>. This technique utilizes the imaginary-time adaptation of the Haah-Hastings-Kothari-Low (HHKL) decomposition, originally devised for real-time evolution <cit.>.
Extending from the study of short-range interactions in one-dimensional (1D) systems, we explore the realm of long-range interactions, characterized by power-law decay. These interactions impart high-dimensional traits to systems, manifesting behaviors not observed in short-range interacting systems. Notably, phenomena such as phase transitions, typically associated with higher dimensions, can occur even in 1D systems with long-range interactions <cit.>.
Additionally, long-range interactions are increasingly relevant in contemporary many-body physics experiments, such as those involving ultracold atomic systems <cit.>, highlighting their practical significance <cit.>.
Beyond the realm of short-range interacting systems, the complexity analysis of long-range interactions presents substantial challenges, especially at low temperatures. Most existing methods are tailored specifically for short-range interactions, and their direct application to long-range systems often results in a loss of efficiency <cit.>. While the cluster expansion method proves effective at high temperatures <cit.>, providing an efficient algorithm to analyze the properties of the quantum Gibbs state in systems with long-range interactions, it falls short at low temperatures. Notably, it does not offer an approximation by the MPO, leaving the computational approach at low temperatures as an unresolved issue.
In this letter, we introduce an efficiency-guaranteed method for constructing the MPO based on the DMRG algorithm. Our method achieves a runtime of:
e^𝒪(βln^3(n/ϵ)) (general cases),
and for Hamiltonians that are 2-local, as defined in Eq. (<ref>) below, the runtime improves to:
e^𝒪̃(βln^2(n/ϵ)) (2-local cases).
These results demonstrate a quasi-polynomial time complexity under conditions where β=polylog(n), a regime where the quantum Gibbs states typically approximate the non-critical ground state <cit.>.
Furthermore, while our primary focus has been on constructing MPOs for quantum Gibbs states, our approach is equally applicable to real-time evolutions.
To our knowledge, this is the first rigorous justification of a classical simulation method with guaranteed efficiency for managing the real/imaginary-time dynamics of long-range interacting Hamiltonians.
In terms of technical advancements, the traditional DMRG algorithm faced challenges in accurately estimating the truncation errors of bond dimensions during iterative steps <cit.> (see also Ref. <cit.>). To address this, we developed a new algorithm that captures the core mechanics, as illustrated in Fig. <ref>. Our approach comprises two main steps: (i) constructing the MPO at high temperatures with a precisely controlled error, using arbitrary Schatten p-norms, and (ii) merging high-temperature quantum Gibbs states into their low-temperature counterparts. The success of the first step allows us to apply techniques from Refs. <cit.> in the second step.
Hence, the primary challenge lies in the initial construction of the MPO. We begin by constructing the quantum Gibbs state in independent small blocks and iteratively merge them into larger ones.
This approach retains the spirit of the original DMRG algorithm developed by White <cit.>, focusing on local interactions and iterative improvements.
A key innovation of our algorithm is the efficient control of the precision in the merging process, quantified by the general Schatten p-norm (see Proposition 2 below).
Setup.— We consider a one-dimensional quantum system composed of n qudits, each in a d-dimensional Hilbert space. The total set of sites in this system is denoted by Λ, i.e., Λ = {1, 2, 3, …, n}. We define a k-local Hamiltonian H for this system by:
H = ∑_|Z| ≤ k h_Z, max_i ∈Λ∑_Z : Z ∋ i‖h_Z‖≤ g,
where Z ⊂Λ, and |Z| represents the cardinality of Z, meaning the number of sites involved in each interaction term h_Z. The norm ‖·‖ denotes the operator norm, which is the maximum singular value of the operator. For any arbitrary subset L ⊆Λ, the subset Hamiltonian H_L is defined to include only those interaction terms involving sites in L, specifically:
H_L = ∑_Z:Z ⊂ L h_Z.
To elucidate the assumptions underpinning our analysis, let us consider a decomposition of the total system Λ into subsets A and B such that Λ = A ⊔ B. Here, subset A is defined as [i, i']∩Λ with 1 ≤ i < i' ≤ n. We assume that the norm of the boundary interaction between A and B, denoted by ∂ h_A, is finite:
‖∂ h_A‖≤g̃, ∀ A, ∂ h_A := H - (H_A + H_B),
where g̃ represents an 𝒪(1) constant. This condition is more general than the assumption of power-law decay of interactions. If we specifically consider the power-law decay condition in the form:
∑_Z: Z ∋{i, i'}‖h_Z‖≤ J/|i - i'|^α for i≠ i',
then the condition in (<ref>) is satisfied as long as α > 2
(see Supplementary materials <cit.>).
Throughout the paper, we analyze the quantum Gibbs state as follows:
ρ_β := 1/Z_βe^-β H, Z_β := tr(e^-β H).
We aim to approximate the Gibbs state ρ_β by a MPO, which is generally described as follows:
M = ∑_s_1, …, s_n = 1^d ∑_s_1', …, s_n' = 1^d tr(M_1^[s_1, s_1'] M_2^[s_2, s_2']⋯ M_n^[s_n, s_n']) |s_1, s_2, …, s_n⟩⟨ s_1', s_2', …, s_n'|,
where each {M_j^[s_j, s_j']}_j,s_j,s_j' is a D × D matrix, with D referred to as the bond dimension.
As measures of the approximation, we often use the Schatten p-norm as follows:
‖O‖_p :=[tr(|O|^p)]^1/p
with O an arbitrary operator and |O| = √(O^†O). For p=1, it is equivalent to the trace norm, and for p=∞, it corresponds to the operator norm.
We denote the bond dimension of the MPO form of the Hamiltonian by D_H, which is at most n^k d^k in general <cit.>.
In particular, if we restrict ourselves to a 2-local Hamiltonian in the form of:
H = ∑_i < i'1/|i - i'|^α∑_ξ, ξ' = 1^d^2 J_ξ,ξ' P_i,ξ⊗ P_i',ξ',
where {P_i,ξ}_ξ=1^d^2 denotes the operator bases at site i and J_ξ,ξ' are the corresponding coefficients, the Hamiltonian can be approximated by an MPO with bond dimension D_H = c_H ln^2(n/ϵ), where c_H = 𝒪(1), up to an error of ϵ (see Supplementary material <cit.>).
We note that any subset Hamiltonian H_L is also described by an MPO with the bond dimension D_H of the global Hamiltonian.
Main Result.—
Our task is to develop a time-efficient method to approximate the quantum Gibbs state using an MPO with a fixed bond dimension.
The main result concerning the time-efficient construction of the MPO is as follows:
Theorem.
For an arbitrary β, we can efficiently compute the MPO M_β that approximates the unnormalized Gibbs state up to an error ϵ:
‖e^-β H - M_β‖_p ≤ϵ‖e^-β H‖_p, ∀ p.
The bond dimension of M_β and the time complexity are given by
e^C_0 βln^3(n/ϵ),
where C_0 is an 𝒪(1) constant.
Typically, we consider ϵ = 1/poly(n), which leads to a time complexity of e^𝒪(βln^3 (n)).
If we consider the 2-local Hamiltonian in Eq. (<ref>),
the time complexity qualitatively improves to e^C_0 ln^2(n/ϵ) lnln (n/ϵ).
From the theorem, we can efficiently simulate the quantum Gibbs state in long-range interacting systems <cit.>.
In the following, we detail the explicit calculation steps for constructing the MPO M_β, which is also illustrated in Fig. <ref> (a toy numerical sketch of these steps is given right after the list):
* Initial State Preparation: We begin with a product state of the quantum Gibbs states of small blocks, each containing 2 sites, at an inverse temperature β_0. This temperature β_0 is chosen to be smaller than 1/(24gk^2). The exact MPO representation of the initial state has a bond dimension at most d^2, where d is the dimension of the local Hilbert space.
* Merging Operation: Adjacent quantum Gibbs states on blocks are merged using a merging operator Ψ. This operator is approximated by Ψ̃ using truncated expansions as described in Eq. (<ref>), up to m_0-th order, where m_0 = ln(n/ϵ). Consequently, the bond dimension of the MPO form of Ψ̃ becomes D_H^m_0, with D_H representing the bond dimension of the Hamiltonian.
* Approximation of High-Temperature Gibbs State: By repeating the merging processes log_2(n) times, we achieve the global quantum Gibbs state e^-β_0 H. This high-temperature quantum Gibbs state is approximated by the MPO M_β_0, which has a bond dimension D_H^m_0 ln(n).
* Approximation of Low-Temperature Gibbs State: The high-temperature quantum Gibbs state is connected (β/β_0) times to yield the MPO (M_β_0)^β/β_0, serving as the approximation of the desired quantum Gibbs state e^-β H.
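As an illustration of Steps 1–4, the following toy sketch replays the scheme with dense matrices standing in for the MPOs on a 4-qubit chain: the block Gibbs operators are prepared exactly, merged once with the truncated merging operator Ψ̃, and the result is powered up to the target inverse temperature and compared with the exact Gibbs operator. The model, couplings, β_0, β, and truncation order m_0 are our own illustrative choices, and dense linear algebra replaces the tensor-network bookkeeping of the actual algorithm.

```python
import numpy as np
from math import factorial
from scipy.linalg import expm

# Toy chain parameters (illustrative assumptions).
n, alpha, beta0, beta, m0 = 4, 2.0, 0.05, 0.8, 12
X = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)
I2 = np.eye(2, dtype=complex)

def op(site_ops):
    """Kronecker product of single-site operators placed on the n-qubit chain."""
    out = np.eye(1, dtype=complex)
    for k in range(n):
        out = np.kron(out, site_ops.get(k, I2))
    return out

def H_subset(sites):
    """Subset Hamiltonian H_L: only couplings internal to the block `sites`."""
    H = np.zeros((2**n, 2**n), dtype=complex)
    for i in sites:
        for j in sites:
            if i < j:
                w = 1.0 / abs(i - j)**alpha
                H += w * (op({i: X, j: X}) + 0.5 * op({i: Z, j: Z}))
    return H

def merging_operator(sites_A, sites_B):
    """Truncated merging operator: the double Taylor series of
    exp(-beta0*H_AB) exp(+beta0*(H_A+H_B)) kept up to total order m0."""
    H_AB = H_subset(list(sites_A) + list(sites_B))
    H_sum = H_subset(sites_A) + H_subset(sites_B)
    Psi = np.zeros_like(H_AB)
    for s1 in range(m0 + 1):
        for s2 in range(m0 + 1 - s1):
            Psi += (beta0**(s1 + s2) / (factorial(s1) * factorial(s2))) * \
                   np.linalg.matrix_power(-H_AB, s1) @ np.linalg.matrix_power(H_sum, s2)
    return Psi

# Step 1: exact Gibbs operators of the two 2-site blocks (dense stand-ins for MPOs).
left, right = [0, 1], [2, 3]
G = expm(-beta0 * H_subset(left)) @ expm(-beta0 * H_subset(right))

# Steps 2-3: one merge of the adjacent blocks approximates e^{-beta0 H}.
M_beta0 = merging_operator(left, right) @ G

# Step 4: chain beta/beta0 copies of M_beta0 to reach the target temperature.
M_beta = np.linalg.matrix_power(M_beta0, int(round(beta / beta0)))

exact = expm(-beta * H_subset(range(n)))
err = np.linalg.norm(M_beta - exact, 'nuc') / np.linalg.norm(exact, 'nuc')
print("relative trace-norm error after merging and powering:", err)
```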
In the next section, we will focus on analytically estimating the precision error and the time complexity resulting from these approximations.
Finally, we discuss the preparation of quantum Gibbs states on a quantum computer.
In conclusion, the time complexity (or circuit depth) for this preparation is also given by the quasi-polynomial form, i.e., e^C_0 βln^3(n/ϵ).
To achieve this, we first purify the Gibbs state into the form of a thermofield double state <cit.>, which can also be well-approximated by a Matrix Product State (MPS) with the same bond dimension as that of M_β <cit.>.
For the quantum circuit representation of the MPS, we refer to Ref. <cit.>, where it is generally shown that an arbitrary MPS with bond dimension χ can be constructed using a circuit with depth n ×poly(χ). By setting χ = e^C_0 βln^3(n/ϵ) for the quantum Gibbs state's preparation, we achieve the desired circuit depth.
Proof of main theorem.—
We here provide the outline of the proof, and we defer the details to Supplementary materials <cit.>.
We follow Steps 1 to 4 above and estimate the precision error depending on the bond dimension.
We denote the Gibbs state of the block L_s^(q) as e^-β_0 H_s^(q), where H_s^(q) serves as an abbreviation for the subset Hamiltonian H_L_s^(q) = ∑_Z : Z ⊂ L_s^(q) h_Z on block L_s^(q). Under this notation, e^-β_0 H_1^(q_0) = e^-β_0 H represents the Gibbs state of the entire system, encompassing all interactions within the system.
We then estimate the error arising from the merging operations.
In general, we define a merging operator Ψ that connects two subsets, A and B, as follows:
Ψ = e^-β_0 H_AB e^β_0 (H_A + H_B),
which implies the relation e^-β_0 H_AB = Ψ e^-β_0 H_A e^-β_0 H_B. Here, Ψ facilitates the merging of the Gibbs states of A and B to form the combined Gibbs state on the larger block AB, as illustrated in Fig. <ref>(a).
In this context, H_AB = H_A + H_B + ∂ h_A, where ∂ h_A represents the boundary interactions between systems A and B, as defined in Eq. (<ref>).
To construct an MPO with a small bond dimension for Ψ, we first approximate Ψ using a polynomial expansion in terms of β_0.
However, the independent approximations of e^-β_0 H_AB and e^β_0 (H_A + H_B) require polynomial degrees of at least (β_0H_AB)^1/2 and (β_0H_A + β_0H_B)^1/2, respectively <cit.>. This results in a demand for sub-exponentially large bond dimensions for an accurate approximation.
The key idea here is that by combining e^-β_0 H_AB and e^β_0 (H_A + H_B) the truncation order is significantly reduced for a good approximation of Ψ.
We adopt the following polynomial approximation of the merging operator (<ref>):
Ψ̃ = ∑^m_0_m=0β_0^m∑_s_1+s_2=m (-1)^s_1 H_AB^s_1(H_A+H_B)^s_2/(s_1!s_2!),
where we apply the Taylor expansion for each of e^-β_0 H_AB and e^β_0 (H_A + H_B) and truncate the terms with β_0^m for m>m_0.
Then, we prove the following precision error:
Proposition 1. If β_0 is smaller than 1/(24gk^2), we achieve the approximation of
‖Ψ - Ψ̃‖≤δ_0 for m_0≥log_2 (c_0/δ_0),
with c_0=e^g̃/(6gk^2), which is an 𝒪(1) constant.
To derive the MPO form of Ψ̃, we first create individual MPOs for each term in the expansion (<ref>). These individual MPOs are then combined to obtain the MPO representation of Ψ̃.
Given the MPO form of the Hamiltonians H_AB and H_A + H_B, the bond dimension required for the term H_AB^s_1(H_A + H_B)^s_2 is at most D_H^s_1D_H^s_2 = D_H^m, where s_1 + s_2 = m. The total bond dimension required to represent all terms up to m_0 in the expansion is calculated as:
∑_m=0^m_0 (m+1) D_H^m ≤ (m_0+1)^2 D_H^m_0 =: D̃_δ_0.
Thus, we construct the MPO with the bond dimension D̃_δ_0 to approximate the merging operator Ψ for any arbitrary adjacent subsets A and B.
After constructing the MPO approximation of the merging operator, we proceed to find the MPO for the combined Gibbs state.
We in general consider the merging of e^-β_0 H_2s-1^(q-1) and e^- β_0 H_2s^(q-1), which is given by
e^-β_0 H_s^(q)= Ψ_s^(q-1) e^-β_0 H_2s-1^(q-1) e^-β_0 H_2s^(q-1).
Our purpose is to estimate the approximation by Ψ̃_s^(q-1) M_2s-1^(q-1) M_2s^(q-1) when approximate MPOs M_2s-1^(q-1) and M_2s^(q-1) are given for e^-β_0H_2s-1^(q-1) and e^-β_0 H_2s^(q-1), respectively.
For simplicity, we denote Ψ_s^(q-1) as Ψ and Ψ̃_s^(q-1) as Ψ̃, by omitting the indices s and q.
Proposition 2. Let M_2s - 1^(q - 1) and M_2s^(q - 1) represent the MPOs approximating e^-β_0 H_2s - 1^(q-1) and e^-β_0 H_2s^(q-1), respectively, such that
‖e^-β_0 H_i^(q-1) - M_i^(q-1)‖_p ≤ϵ_q-1‖e^-β_0 H_i^(q-1)‖_p
∀ p with i = 2s-1, 2s.
Using the approximate merging operator Ψ̃ with ‖Ψ - Ψ̃‖≤δ_0, the merged quantum Gibbs state e^-β_0 H_s^(q) can be approximated as:
‖e^-β_0 H_s^(q) - M_s^(q)‖_p ≤ ϵ_q‖e^-β_0 H_s^(q)‖_p,
with ϵ_q = a_2 δ_0 + a_1 ϵ_q-1,
where the MPO M_s^(q) is constructed as M_s^(q) = Ψ̃ M_2s-1^(q-1) M_2s^(q-1), and a_1 and a_2 are 𝒪(1) constants with a_1 > 1.
This shows that ϵ_q and ϵ_q-1 are related by a linear equation, leading to a recursive relation on q. By solving this recursive relation, we find:
ϵ_q ≤ a_2 δ_0 q a_1^q-2,
using the fact that ϵ_1 = 0, because the MPO description of the initial state is exact.
We then obtain the MPO approximation M_β_0 of the Gibbs state e^-β_0 H, which results in the following error ∀ p:
e^-β_0 H - M_β_0_p ≤ϵ'_0 e^-β_0 H_p
with ϵ'_0:= a_2 δ_0 q_0 a_1^q_0-2.
The MPO M_β_0 = M_1^(q_0) is constructed through q_0 approximate merging processes Ψ̃, starting from the Gibbs states in layer 1. Consequently, the bond dimension increases by a factor of D̃_δ_0^q_0, where D̃_δ_0 was defined as the bond dimension of Ψ̃ in Eq. (<ref>).
If we consider the error ‖Ψ - Ψ̃‖≤δ_0 = ϵ/poly(n), then the bond dimension becomes D̃_δ_0^q_0 = D_H^ln^2 (n/ϵ), as q_0 = log_2(n) and D̃_δ_0 = D_H^ln (1/δ_0).
Finally, we use the MPO M_β_0 for the high-temperature Gibbs state e^-β_0 H to approximate the target low-temperature Gibbs state e^-β H, as illustrated in Fig. <ref>(b). We combine the (β/β_0) MPOs M_β_0 to approximate the low-temperature Gibbs state, resulting in:
(M_β_0)^β/β_0≈ e^-β H.
Consequently, the bond dimension is multiplied (β/β_0) times, leading to a final required bond dimension of the order D_H^βln^2 (n/ϵ).
To derive the error bound, we use the inequality from <cit.>, which states the following: for fixed positive integers p_1 and p_2, if
e^-β_0H - M_β_0_p_1p_2≤ϵ' e^-β_0H_p_1p_2,
then
e^-p_1β_0H - M_β_0^p_1_p_2≤ (3e/2)p_1ϵ' e^-p_1β_0H_p_2.
We have already derived the MPO approximation for a general Schatten p-norm in (<ref>).
Therefore, we can obtain
e^-β H - (M_β_0)^β/β_0_p ≤ϵ_βe^-β H_p,
where
ϵ_β = 5(β/β_0) ϵ'_0.
Since a_1^q_0 = poly(n) with q_0 = log_2(n), the choice of δ_0 = ϵ/poly(n) yields the main precision bound in Eq. (<ref>), where we defined M_β = (M_β_0)^β/β_0 and assume β<n without loss of generality <cit.>.
By estimating each of the algorithm steps, we can see that the runtime is also upper-bounded by D_H^βln^2 (n/ϵ) (see Supplementary materials <cit.>).
We thus prove the main theorem. □
Extension to Real-Time Evolution.—
In our method, we did not rely on the fact that the inverse temperature β is a real number. Therefore, it is possible to generalize our approach to cases where β is a complex number. By considering the case where β = it, we can address the simulation of real-time evolution.
Given an MPO M_t that approximates e^-iHt, we can estimate the error as:
‖e^-iHt|Ψ⟩ - M_t|Ψ⟩‖≤‖e^-iHt - M_t‖_∞ for an arbitrary quantum state |Ψ⟩.
By following our algorithm, we can construct an MPO M_t for e^-iHt that satisfies the error bound ‖e^-iHt - M_t‖_∞≤ϵ‖e^-iHt‖_∞=ϵ. The bond dimension and time complexity required are given by a quasi-polynomial form of e^tln^3 (n/ϵ).
Compared to methods based on the Lieb-Robinson bound, our approach significantly reduces the time complexity. Assuming the Lieb-Robinson bound as ‖[O_i(t), O_j]‖≤ t^η/r^α for α > 2 <cit.>, the time cost to calculate the average value of a time-evolved local observable O_i(t) scales as e^(t^η/ϵ)^1/(α-1).
Summary and outlook.—
In this letter, we have presented an algorithm that achieves quasi-polynomial time complexity in approximating the quantum Gibbs state of 1D long-range interacting systems using MPOs. This was accomplished by adopting a DMRG-type method, as depicted in Fig. <ref>.
We note that achieving quasi-polynomial complexity is likely the best attainable result in general <cit.>.
However, there is hope to achieve polynomial time complexity for the 2-local cases, as described in Eq. (<ref>). Identifying the optimal time complexity for simulating long-range interacting systems remains an important open question.
Another intriguing avenue for exploration is whether our method can be extended to cases where the power-law decay is slower than r^-2 (α < 2), where the assumption of finite boundary interactions in Eq. (<ref>) may no longer hold. In specific cases, this condition can still be recovered. For instance, in fermion systems with long-range hopping and short-range fermion-fermion interactions, the condition in Eq. (<ref>) holds for α > 3/2 <cit.>. Another interesting class is the non-critical quantum Gibbs state, where power-law clustering is satisfied. In such systems, the MPO approximation is expected to hold for α > 1, since the mutual information for any bipartition obeys the area law <cit.>.
All the authors acknowledge the Hakubi projects of RIKEN.
T. K. was supported by JST PRESTO (Grant No.
JPMJPR2116), ERATO (Grant No. JPMJER2302),
and JSPS Grants-in-Aid for Scientific Research (No.
JP23H01099, JP24H00071), Japan.
Y. K. was supported by the JSPS Grant-in-Aid for Scientific Research (No. JP24K06909).
This research was conducted during the first author’s internship at RIKEN, which was supervised by the last author.
Supplementary Material for "Efficient Simulation of 1D Long-Range Interacting Systems at Any Temperature"
Rakesh Achutha^1,2, Donghoon Kim^1, Yusuke Kimura^1 and Tomotaka Kuwahara^1,3,4
^1 Analytical quantum complexity RIKEN Hakubi Research Team, RIKEN Center for Quantum Computing (RQC), Wako, Saitama 351-0198, Japan
^2 Department of Computer Science and Engineering, Indian Institute of Technology (Banaras Hindu University), Varanasi, 221005, India
^3 PRESTO, Japan Science and Technology (JST), Kawaguchi, Saitama 332-0012, Japan
^4 RIKEN Cluster for Pioneering Research (CPR), Wako, Saitama 351-0198, Japan
§ SEVERAL BASIC STATEMENTS
§.§ Operator norm of the boundary interaction for long-range interacting systems when α > 2
We assume in the main text that the boundary interaction is bounded:
∂ h_A≤g, ∀ A ⊂Λ,
where the boundary interaction ∂ h_A is defined as the interaction acting on both subsystems A and B, derived by subtracting the Hamiltonians H_A and H_B, which act exclusively on A and B, from the total Hamiltonian,
∂ h_A := ∑_Z: Z ∩ A ≠∅, Z ∩ B ≠∅ h_Z = H - (H_A + H_B).
We demonstrate that this assumption is inherently satisfied for long-range interactions with a power law decay of α > 2:
Considering the power-law decay of interactions in the form
∑_Z: Z ∋i,i'h_Z≤J/|i-i'|^α i≠ i',
we prove that g̃ in Eq. (<ref>) is 𝒪(1) as long as α>2.
Proof of Lemma <ref>.
Let us consider two adjacent subsystems, A and B.
From the definition of ∂ h_A, we obtain
‖∂ h_A‖ ≤∑_Z: Z ∩ A ≠∅, Z ∩ B ≠∅‖h_Z‖≤∑_i ∈ A∑_i' ∈ B∑_Z: Z ∋ i, i'‖h_Z‖≤∑_i ∈ A∑_i' ∈ B J/|i-i'|^α≤∑_i=1^∞∑_j=1^∞ J/(i+j - 1)^α = Jζ(α - 1),
where ζ(s) is the Riemann zeta function. For α > 2, we can choose g̃ = Jζ(α - 1) = 𝒪(1).
This completes the proof. □
§.§ MPO approximation of the 2-local Hamiltonians
We generally obtain the MPO forms of k-local Hamiltonians which have the bond dimensions at most of n^k d^k.
In this section, we show that the bond dimension is improved in the specific cases of the 2-local Hamiltonians.
For a 2-local Hamiltonian of the form
H = ∑_1 ≤ i < i' ≤ n1/|i - i'|^α∑_ξ, ξ' = 1^d^2 J_ξ,ξ' P_i, ξ⊗ P_i', ξ',
where {P_i, ξ}_ξ = 1^d^2 are operator bases on site i and J_ξ,ξ' are constants, there exists a Hamiltonian H̃ that approximates H with an error bounded by H - H̃≤ϵ_H, and admits an MPO representation with bond dimension
D_H̃ = c_Hln^2(n/ϵ_H),
where c_H is an 1 constant.
Remark.
Here, the MPO form for the Hamiltonian H̃ is not exactly the same as that of the original Hamiltonian H.
We hence need to take into account the error between H and H̃ for the simulation of the quantum Gibbs states (see Sec. <ref>).
Proof of Lemma <ref>.
We approximate r^-α as an exponential series in accordance with Ref. <cit.> (we cannot directly employ Ref. <cit.> since it treats only the regime r<1), as follows:
|r^-α - ∑_s∈ℤ e^α s x e^-e^s x r| ≤ϵ for r≥ 1,
x = 2π/(ln(3)+αln[1/cos(1)] + ln(1/ϵ)).
We then consider the finite truncation as s∈ℤ_m= [-m,m].
We obtain
∑_s∈ℤ e^α s x e^-e^s xr - ∑_s∈ℤ_m e^α s x e^-e^s xr ≤∑_s<-m e^α s x + ∑_s>m e^α s x -e^s x.
We assume that m satisfies
α m x - e^m x≤ -α m x ⟶ e^m x≥ 2α m x ⟶ m x ≥ -ProductLog(-1/(2α)) ⟶ m ≥ (2/x)ln(2α),
where ProductLog(x) is equivalent to the Lambert W function W(x), defined by x=W(x)e^W(x).
Under the assumption, we reduce the inequality (<ref>) to
∑_s∈ℤ e^α s x e^-e^s xr - ∑_s∈ℤ_m e^α s x e^-e^s xr ≤2∑_s>m e^-α s x = 2e^-α x m/e^α x-1 = 2/e^α x-1ϵ/2α^2α
with
m=⌈2/xln(2α /ϵ) ⌉ .
Under the above choice, we obtain
r^-α - ∑_s∈ℤ_m e^α s x e^-e^s xr ≤ϵ+ 2/e^α x-1ϵ/2α^2α≤ζϵ ,
where ζ is an 𝒪(1) constant for α≥ 2.
Based on this approximation, we define the Hamiltonian H̃ as:
H̃ = ∑_1 ≤ i < i' ≤ n∑_s∈ℤ_m e^α s x e^-e^s x|i - i'| ∑_ξ, ξ' = 1^d^2 J_ξ,ξ' P_i, ξ⊗ P_i', ξ'.
We then obtain
H- H̃≤ζϵ∑_1 ≤ i < i' ≤ n∑_ξ, ξ' = 1^d^2 J_ξ,ξ'P_i, ξ⊗ P_i', ξ'≤J̅ n^2ζϵ ,
with
J̅ := ∑_ξ, ξ' = 1^d^2 |J_ξ,ξ'| ,
where we set the operator bases such that P_i, ξ=1 for ∀ i and ∀ξ.
To ensure an approximation error of ϵ_H for H- H̃, we let J̅ n^2ζϵ=ϵ_H
(or ϵ=ϵ_H/(J̅ζ n^2)), which implies
m= ⌈2/xln(2α /ϵ) ⌉≤ c'_Hln^2(n/ϵ_H),
where c'_H is a constant of order 𝒪(1).
Moreover, it has been shown that each Hamiltonian of the form ∑_1 ≤ i < i' ≤ n e^α s x e^-e^s x |i - i'| ∑_ξ, ξ' = 1^d^2 J_ξ,ξ' P_i, ξ⊗ P_i', ξ'
with exponential decay admits an MPO representation with a bond dimension of 𝒪(d^2) <cit.>. Therefore, the MPO representation of H̃ can be constructed by combining the MPOs of each term, resulting in a total bond dimension of:
D_H̃ = c_Hln^2(n/ϵ_H)
for d=𝒪(1), where c_H is an 𝒪(1) constant.
This completes the proof. □
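As a sanity check of the exponential-sum approximation used in this proof, the following sketch evaluates ∑_{s=-m}^{m} e^{α s x} e^{-e^{s x} r} with the step x and truncation m chosen as above and reports the worst-case deviation from r^{-α} on r ∈ [1, 50]. The values α = 2.5 and ϵ = 10^{-6} are illustrative assumptions of ours.

```python
import numpy as np

alpha, eps = 2.5, 1e-6                       # illustrative choices
x = 2.0 * np.pi / (np.log(3.0) + alpha * np.log(1.0 / np.cos(1.0)) + np.log(1.0 / eps))
m = int(np.ceil((2.0 / x) * np.log(2.0 * alpha / eps)))
s = np.arange(-m, m + 1)

r = np.linspace(1.0, 50.0, 500)
terms = np.exp(alpha * s[:, None] * x) * np.exp(-np.exp(s[:, None] * x) * r[None, :])
approx = terms.sum(axis=0)
print("m =", m, " max |r^-alpha - sum| on [1,50]:", np.max(np.abs(r**(-alpha) - approx)))
```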
§ PROOF OF MAIN PROPOSITIONS AND TECHNICAL DETAILS
§.§ Proof of Proposition 1 in the main text
We now present the proof of Proposition 1 in the main text.
For the convenience of readers, we recall the setup.
We define a merging operator Ψ that connects two subsets A and B as follows:
Ψ = e^-β_0 H_AB e^β_0 (H_A + H_B), H_AB = H_A + H_B + ∂ h_A,
which leads to the relation
e^-β_0 H_AB = Ψ e^-β_0 H_A e^-β_0 H_B.
Consider the polynomial expansion of the merging operator Ψ in powers of β_0, given by:
Ψ = e^-β_0 H_AB e^β_0 (H_A + H_B) = e^-β_0 H_AB e^β_0 (H_AB- ∂ h_A)
= ∑_m=0^∞β_0^m ∑_s_1 + s_2 = m1/s_1! s_2! (-H_AB)^s_1 (H_AB - ∂ h_A)^s_2.
We truncate the above series to m_0 terms and denote the resulting truncated series as Ψ̃:
Ψ̃ = ∑_m=0^m_0β_0^m ∑_s_1 + s_2 = m1/s_1! s_2! (-H_AB)^s_1 (H_AB - ∂ h_A)^s_2.
Then, if β_0 is smaller than 1/(24gk^2), we achieve the approximation
‖Ψ - Ψ̃‖≤δ_0 for m_0 ≥log_2 (c_0/δ_0),
where c_0 =e^g̃/(6gk^2) is a constant of 𝒪(1).
Proof of Proposition <ref>.
We derive the approximation of the truncation of the expansion.
To this end, we first reformulate each term in the expansion (<ref>) by employing an alternative expansion, specifically using the interaction picture:
e^β_0 (H_AB - ∂ h_A) = e^β_0 H_AB𝒯( e^-∫_0^β_0∂ h_x dx),
where we define ∂ h_x := e^- x H_AB∂ h_A e^x H_AB and 𝒯 is the time-ordering operator. We then obtain
Ψ = e^-β_0 H_AB e^β_0 (H_AB- ∂ h_A)
= 𝒯( e^-∫_0^β_0∂ h_x dx) = ∑_s = 0^∞ (-1)^s∫_0^β_0 dx_1∫_0^x_1 dx_2⋯∫_0^x_s-1 dx_s ∂ h_x_1∂ h_x_2⋯∂ h_x_s.
By applying the decomposition
∂ h_x = ∑_p=0^∞(-x)^p/p!ad_H_AB^p (∂ h_A) =: ∑_p=0^∞(-x)^p/p!∂ h^(p),
the m-th order term in the expansion of Ψ in powers of β_0, which includes the contribution from β_0^m in Eq. (<ref>), is expressed as
Ψ_0 := 1,
Ψ_m := ∑_s=1^m ∑_p_1 + p_2 + ⋯ + p_s = m - s (-1)^s ∫_0^β_0 dx_1∫_0^x_1 dx_2⋯∫_0^x_s-1 dx_s (-x_1)^p_1/p_1!⋯(-x_s)^p_s/p_s! ∂ h^(p_1)⋯∂ h^(p_s)
for ∀ m ≥ 1.
As both Eq. (<ref>) and Eq. (<ref>) are expansions in powers of β_0, the term Ψ_m defined above corresponds to the mth order term in Eq. (<ref>):
Ψ_m = β_0^m ∑_s_1 + s_2 = m1/s_1! s_2! (-H_AB)^s_1 (H_AB - ∂ h_A)^s_2,
which we aim to upper-bound.
The norm of Ψ_m is upper-bounded as
Ψ_m ≤∑_s=1^m β_0^m-s∑_p_1 + p_2 + ⋯ + p_s = m - s∫_0^β_0 dx_1 ∫_0^x_1 dx_2⋯∫_0^x_s-1 dx_s ∏_j=1^s 1/p_j!∂ h^(p_j)
≤β_0^m ∑_s=1^m 1/s!∑_p_1 + p_2 + ⋯ + p_s = m - s∏_j=1^s 1/p_j!∂ h^(p_j).
From Theorem 2.1 in <cit.>, if H is a k-local and g-extensive Hamiltonian and A is a r-local operator, then
[H, A]≤ 6gkr A.
Since H_AB and ∂ h_A are k-local Hamiltonians, ad_H_AB^p-1 (∂ h_A) is at most kp-local. Using the inequality (<ref>), we get
∂ h^(p) = ad_H_AB^p (∂ h_A) ≤ 6gk(kp) ad_H_AB^p-1 (∂ h_A) .
By iterating the above process for ad_H_AB^p' (∂ h_A) with p' ≤ p-2 and using (<ref>), we obtain
∂ h^(p)≤ (6gk^2)^p p! g̃ =: C^p p! g̃,
where we define C := 6gk^2. Substituting Eq. (<ref>) into Eq. (<ref>), we get
‖Ψ_m‖ ≤β_0^m ∑_s=1^m (C^m-sg̃^s/s!)∑_p_1 + p_2 + ⋯ + p_s = m - s 1 = β_0^m ∑_s=1^m (C^m-sg̃^s/s!) \binom{m-1}{s-1}.
Using \binom{m-1}{s-1}≤ 2^m,
Ψ_m ≤ (2Cβ_0)^m ∑_s=1^m1/s!g̃/C^s≤ (2Cβ_0)^m e^g̃/C.
Given the assumption β_0 ≤1/24gk^2 = 1/4C, it follows that
Ψ_m≤ 2^-m e^g̃/C,
which exhibits exponential decay with respect to m.
Thus, the truncation error resulting from approximating Ψ by Ψ̃ is bounded by
Ψ - Ψ̃≤∑_m=m_0+1^∞Ψ_m≤∑_m = m_0 + 1^∞ 2^-m e^g̃/C = e^g̃/C 2^-m_0.
Letting c_0 =e^g̃/C = e^g̃ / (6gk^2), to achieve the desired error Ψ - Ψ̃≤δ_0, it is sufficient to choose m_0 such that Ψ - Ψ̃≤ c_0 2^-m_0≤δ_0, i.e.,
m_0 ≥log_2 c_0/δ_0.
This completes the proof of Proposition <ref>. □
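A small numerical illustration of this proposition is sketched below: random Hermitian blocks stand in for H_A, H_B, and the boundary term, the exact merging operator Ψ is computed with matrix exponentials, and the truncated series Ψ̃ is evaluated for increasing m_0, showing the rapid decay of ‖Ψ - Ψ̃‖. The block dimensions, β_0, and the strength of the stand-in boundary term are our own toy choices and are not tied to the constants g, k, g̃ of the statement.

```python
import numpy as np
from math import factorial
from scipy.linalg import expm

rng = np.random.default_rng(0)

def rand_herm(d):
    """Random Hermitian matrix, a generic stand-in for a block Hamiltonian."""
    M = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (M + M.conj().T) / 2.0

dA = dB = 4                                   # two 2-qubit blocks (toy sizes)
H_A = np.kron(rand_herm(dA), np.eye(dB))
H_B = np.kron(np.eye(dA), rand_herm(dB))
dH = 0.5 * rand_herm(dA * dB)                 # stand-in for the boundary term
H_AB = H_A + H_B + dH
beta0 = 0.02                                  # small inverse temperature

Psi = expm(-beta0 * H_AB) @ expm(beta0 * (H_A + H_B))

def Psi_truncated(m0):
    """Double Taylor series of Psi kept up to total order m0 (the operator Psi~)."""
    T = np.zeros_like(Psi)
    for s1 in range(m0 + 1):
        for s2 in range(m0 + 1 - s1):
            T += beta0**(s1 + s2) / (factorial(s1) * factorial(s2)) * \
                 np.linalg.matrix_power(-H_AB, s1) @ np.linalg.matrix_power(H_A + H_B, s2)
    return T

for m0 in range(0, 9, 2):
    print(m0, np.linalg.norm(Psi - Psi_truncated(m0), 2))   # operator-norm error
```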
§.§ Proof of Proposition 2 in the main text
In this subsection, we prove Proposition 2 in the main text, which concerns the approximation of the merged quantum Gibbs state.
Let M_2s - 1^(q - 1) and M_2s^(q - 1) represent the MPOs approximating e^-β_0 H_2s - 1^(q-1) and e^-β_0 H_2s^(q-1), respectively, such that
e^-β_0 H_2s-1^(q-1) - M_2s-1^(q-1)_p ≤ϵ_q-1e^-β_0 H_2s-1^(q-1)_p,
e^-β_0 H_2s^(q-1) - M_2s^(q-1)_p ≤ϵ_q-1e^-β_0 H_2s^(q-1)_p,
for an arbitrary Schatten p-norm.
Using the merging operator Ψ̃_s^(q-1), which provides an approximation to Ψ_s^(q-1) with the error bounded by Ψ_s^(q-1) - Ψ̃_s^(q-1)≤δ_0,
the merged quantum Gibbs state e^-β_0 H_s^(q) can be approximated by the newly constructed MPO M_s^(q) based on M_s^(q) = Ψ_s^(q-1) M_2s-1^(q-1) M_2s^(q-1) as
e^-β_0 H_s^(q) - M_s^(q)_p ≤ϵ_qe^-β_0 H_s^(q)_p,
with the error
ϵ_q = a_2 δ_0 + a_1 ϵ_q-1.
Here, a_1 and a_2 are 𝒪(1) constants with a_1 > 1, defined as
a_1 := 12 e^g̃/(4gk^2) , a_2 := 2 e^g̃/(24gk^2) .
Proof of Proposition <ref>. The merging operator is defined as Ψ_s^(q-1) = e^-β_0 H_s^(q) e^β_0 (H_2s - 1^(q-1) + H_2s^(q-1)), and the approximate merging operator Ψ̃_s^(q-1) is given by Eq. (<ref>).
With the MPO defined as M_s^(q) = Ψ̃_s^(q-1) M_2s-1^(q-1) M_2s^(q-1), we proceed with the following inequality:
e^-β_0 H_s^(q) - M_s^(q)_p = Ψ_s^(q-1) e^-β_0 H_2s-1^(q-1) e^-β_0 H_2s^(q-1) - Ψ̃_s^(q-1) M_2s-1^(q-1) M_2s^(q-1)_p
≤Ψ_s^(q-1) - Ψ̃_s^(q-1) e^-β_0 H_2s-1^(q-1) e^-β_0 H_2s^(q-1)_p
+ Ψ̃_s^(q-1)e^-β_0 H_2s-1^(q-1) e^-β_0 H_2s^(q-1) - M_2s-1^(q-1) M_2s^(q-1)_p
≤Ψ_s^(q-1) - Ψ̃_s^(q-1)_∞e^-β_0 H_2s-1^(q-1) e^-β_0 H_2s^(q-1)_p
+ Ψ̃_s^(q-1)_∞e^-β_0 H_2s-1^(q-1) e^-β_0 H_2s^(q-1) - M_2s-1^(q-1) M_2s^(q-1)_p.
From Ref. <cit.>, we can prove for arbitrary operators X and Y
‖e^X+Y -e^X‖_p ≤ ‖Y‖ e^‖Y‖‖e^X‖_p .
By applying the inequality and substituting -β_0 H_s^(q) for X and β_0 H_s^(q) - β_0 H_2s^(q-1) -β_0 H_2s-1^(q-1) for Y, we obtain ‖Y‖≤β_0‖∂ h_A‖≤β_0 g̃ from assumption (<ref>) and
e^-β_0 H_2s-1^(q-1) e^-β_0 H_2s^(q-1) - e^-β_0 H_s^(q)_p ≤ e^β_0 g̃e^-β_0 H_s^(q)_p.
The triangle inequality leads to
e^-β_0 H_2s-1^(q-1) e^-β_0 H_2s^(q-1)_p
≤e^β_0 g̃+1e^-β_0 H_s^(q)_p
≤ 2 e^β_0 g̃e^-β_0 H_s^(q)_p .
From the above inequality and the condition Ψ_s^(q-1) - Ψ̃_s^(q-1)_p ≤δ_0, we reduce inequality (<ref>) to
e^-β_0 H_s^(q) - M_s^(q)_p
≤ 2δ_0 e^β_0 g̃e^-β_0 H_s^(q)_p + Ψ̃_s^(q-1)_∞e^-β_0 H_2s-1^(q-1) e^-β_0 H_2s^(q-1) - M_2s-1^(q-1) M_2s^(q-1)_p.
To estimate the second term on the RHS of (<ref>),
we use the following mathematical lemma:
If two operators O_1 and O_2 act on disjoint supports, with their respective approximations Õ_1 and Õ_2 defined on the supports of O_1 and O_2, satisfying
O_1 - Õ_1_p ≤ϵO_1_p,
O_2 - Õ_2_p ≤ϵO_2_p,
with ϵ<1, we then obtain
O_1 O_2 - Õ_1 Õ_2_p ≤ 3 ϵO_1 O_2_p.
Proof of Lemma <ref>. When O_1 and O_2 operate on disjoint supports, we have O_1 O_2_p = O_1_p O_2_p. By leveraging this fact along with Eqs. (<ref>) and (<ref>), we derive
O_1 O_2 - Õ_1 Õ_2_p ≤O_1 (O_2 - Õ_2)_p + (O_1 - Õ_1) Õ_2_p
= O_1_p O_2 - Õ_2_p + O_1 - Õ_1_p Õ_2_p
≤ϵO_1_p O_2_p + ϵO_1_p Õ_2_p.
The relation Õ_2_p≤O_2_p + O_2 - Õ_2_p≤ (1 + ϵ) O_2_p and ϵ^2≤ϵ leads to
O_1 O_2 - Õ_1 Õ_2_p ≤ϵO_1_p O_2_p + ϵ (1 + ϵ) O_1_p O_2_p = (2 ϵ + ϵ^2) O_1_p O_2_p ≤ 3 ϵO_1_p O_2_p.
This completes the proof of Lemma 3.
□
Since e^-β_0 H_2s-1^(q-1) and e^-β_0 H_2s^(q-1) act on disjoint supports (i.e., L_2s-1^(q-1)∩ L_2s^(q-1) = ∅), applying the Lemma 3 yields:
e^-β_0 H_2s-1^(q-1) e^-β_0 H_2s^(q-1) - M_2s-1^(q-1) M_2s^(q-1)_p ≤ 3 ϵ_q-1e^-β_0 H_2s-1^(q-1) e^-β_0 H_2s^(q-1)_p
≤ 6 ϵ_q-1 e^β_0 g̃e^-β_0 H_s^(q)_p,
where we use the upper bound (<ref>) in the second inequality.
We then upper-bound the norm of Ψ̃_s^(q-1), which is the truncated series up to m_0 terms in the expansion of Ψ_s^(q-1) as in Eq. (<ref>).
By using the upper bound (<ref>), which gives Ψ_m≤ 2^-m e^g̃/C, we obtain
Ψ̃_s^(q-1)≤∑_m = 0^m_0Ψ_m≤∑_m=0^m_0 2^-m e^g̃/C≤ 2 e^g̃/C=2 e^g̃/(6gk^2) .
We thus derive the upper bound
Ψ̃_s^(q-1)_∞e^-β_0 H_2s-1^(q-1) e^-β_0 H_2s^(q-1) - M_2s-1^(q-1) M_2s^(q-1)_p
≤ 12 e^g̃/(6gk^2)+β_0 g̃ϵ_q-1e^-β_0 H_s^(q)_p .
By combining the above inequality with (<ref>), we have
e^-β_0 H_s^(q) - M_s^(q)_p
≤ 2δ_0 e^β_0 g̃ + 12 e^g̃/(6gk^2)+β_0 g̃ϵ_q-1e^-β_0 H_s^(q)_p
≤ 2δ_0 e^g̃/(24gk^2) + 12 e^g̃/(4gk^2)ϵ_q-1e^-β_0 H_s^(q)_p ,
where we use β_0 ≤ 1/(24gk^2).
Thus, we establish the desired inequality (<ref>), along with Eqs. (<ref>) and (<ref>).
This completes the proof of Proposition 2.
□
§.§ Explicit estimation of the bond dimension
From Proposition 2, we see that ϵ_q follows a recursive relation, with ϵ_1 = 0 since the blocks in layer 1 have exact MPOs. Using ϵ_1=0, the recursive relation yields the following relations:
ϵ_q = a_2 δ_0 + a_1 ϵ_q-1 = a_2 δ_0 (1 + a_1 + a_1^2 + ⋯ + a_1^q-2) + a_1^q-1ϵ_1 = a_2 δ_0 (a_1^q-1 - 1/a_1 - 1) + 0 ≤ a_2 δ_0 q a_1^q-2.
Given that e^-β_0 H_1^(q_0) = e^-β_0 H and M_β_0 := M_1^(q_0), Proposition 2 implies
e^-β_0 H - M_β_0_p ≤ϵ_q_0e^-β_0 H_p.
Thus, using the inequality (<ref>) and q_0= log_2 (n), we obtain the MPO approximation of the high-temperature Gibbs state e^-β_0 H given by:
e^-β_0 H - M_β_0_p ≤ a_2 δ_0 q_0 a_1^q_0-2e^-β_0 H_p
≤ a_2 δ_0 n^log_2 (2a_1)e^-β_0 H_p.
Starting from the approximate MPO M_β_0 for the high-temperature Gibbs state, we construct the MPO for the low-temperature Gibbs state. By concatenating the MPO M_β_0(β / β_0) times, we obtain (M_β_0)^β / β_0≈ e^-β H.
To derive the error bound for the low-temperature Gibbs state, we begin by considering the approximation in the (pQ)-norm, where Q = β / β_0. Given that we have already established the MPO approximation for any general Schatten p-norm, it follows that
e^-β_0 H - M_β_0_pQ ≤ a_2 δ_0 n^log_2 (2a_1)e^-β_0 H_pQ.
We now use the inequality from <cit.>, which states:
For fixed positive integers p_1 and p_2, if e^-β_0 H - M_β_0_p_1 p_2≤ϵ' e^-β_0 H_p_1 p_2, then
e^-p_1 β_0 H - M_β_0^p_1_p_2≤3e/2 p_1 ϵ' e^-p_1 β_0 H_p_2.
Applying this to our case, with p_1 =Q= β / β_0 and p_2 = p, we get:
e^-β_0 Q H - (M_β_0)^Q_p ≤3e/2 Q a_2 δ_0 n^log_2(2a_1)e^-β_0 Q H_p ≤ 5 (β/β_0) a_2 δ_0 n^log_2(2a_1)e^-β_0 Q H_p.
Now we choose
δ_0 = β_0ϵ/5β a_2 n^log_2 (2a_1).
Since a_2, β_0, and a_1 are 𝒪(1) constants, we have δ_0 = ϵ/poly(n) under the assumption β < n. By selecting δ_0 as in Eq. (<ref>), we obtain the final desired inequality,
e^-β H - M_β_p ≤ϵe^-β H_p.
In the following, we denote the bond dimensions of the MPOs M_β_0 and M_β by D_β_0 and D_β, respectively.
The MPO M_β_0 is constructed by (q_0-1) approximate merging processes of Ψ̃ starting from the layer 1.
The bond dimension of the MPO for the Gibbs states in layer 1 is d^2. This bond dimension increases by D̃_δ_0 with the application of the merging operator Ψ̃ at each subsequent layer, where D̃_δ_0 is defined as the bond dimension of Ψ̃ in Eq. (<ref>).
Hence, the final bond dimension after (q_0-1) mergings becomes d^2 D̃_δ_0^q_0-1.
From Eq. (<ref>), we have D̃_δ_0 = (m_0 + 1)^2 D_H^m_0≤ D_H^2m_0, where m_0 is chosen as
m_0 = log_2(c_0 / δ_0)
= log_25 c_0 β a_2 n^log_2 (2a_1)/β_0ϵ
according to Proposition 1, with a_0 > 1 ensuring that m_0 is an integer.
This yields
D_β_0 = d^2 D̃_δ_0^q_0-1≤D̃_δ_0^q_0≤ D_H^2 m_0 q_0.
Using Eq. (<ref>) and q_0= log_2 (n), we get
2 m_0 q_0 ≤ 2log_25 c_0 β a_2 n^log_2 (2a_1)/β_0ϵlog_2 (n)
= 2/ln^2(2)ln5 c_0 β a_2 n^log_2 (2a_1)/β_0ϵln (n)
≤ 5 ln5 c_0a_2/β_0+ log_2 (4a_1)ln(n/ϵ)ln (n)
≤ 5 ln5 c_0a_2/β_0+ log_2 (4a_1)ln^2 (n/ϵ)=:b_1 ln^2 (n/ϵ) (n/ϵ)≥ e,
with
b_1:= 5 ln5 c_0a_2/β_0+ log_2 (4a_1) ,
where we use β < n and define b_1, which is an 𝒪(1) constant.
We recall the definitions of β_0, c_0, a_1 and a_2 as follows:
β_0 ≤1/24gk^2, c_0 =e^g̃/(6gk^2) , a_1:= 12 e^g̃/(4gk^2) , a_2 := 2 e^g̃/(24gk^2) .
Hence, the order of D_β_0 becomes D_H^𝒪(ln^2(n/ϵ)), which directly leads to the following bond dimension D_β for the MPO M_β corresponding to the low-temperature Gibbs state:
D_β = D_β_0^β / β_0≤ D_H^b_1(β/β_0 ln^2(n/ϵ))≤ D_H^(b_1/β_0)βln^2(n/ϵ) (n/ϵ)≥ e.
Hence the order of the bond dimension of M_β is D_H^βln^2 (n / ϵ) since b_1, β_0 are 𝒪(1) constants.
§.§ Estimation of the time complexity
We finally estimate the time complexity of constructing the MPO M_β. We begin with the construction of M_β_0. Given two MPOs M_1 and M_2 with bond dimensions D_1 and D_2, respectively, the time cost to calculate M_1 M_2 is given by n (D_1 D_2)^2 d^3.
The total number of merging processes is given by
2^q_0-1 + 2^q_0-2 + ⋯ + 1 = 2^q_0 - 1 < n,
and each merging process requires a time cost of at most n D_β^4 d^3,
where we use D_β_0 is smaller than D_β, i.e., the final bond dimension of the MPO M_β.
Thus, the total time cost for constructing M_β_0 is bounded by n^2 D_β^4 d^3.
For each multiplication of M_β_0 and M_β_0^s with s ≤β/β_0, the time cost is also upper-bounded by n D_β^4 d^3. Therefore, the time cost to calculate (M_β_0)^β/β_0 is at most n (β/β_0) D_β^4 d^3.
In total, the time cost to construct M_β is upper-bounded by
(n^2 + nβ/β_0) D_β^4 d^3 = D_H^βln^2 (n/ϵ).
By applying D_H = n^k d^k in general cases and D_H = c_H ln^2(n/ϵ) in 2-local cases, we confirm the desired time complexity.
§.§ MPO construction for the 2-local cases
In the 2-local case, the original total Hamiltonian H can be approximated by H̃ as an exponential series, following Sec. <ref>, such that H - H̃≤ϵ_H for any arbitrary ϵ_H. The MPO for the approximate Hamiltonian H̃ is then constructed according to Sec <ref>, satisfying
e^-βH̃ - M̃_β_p ≤ϵ/3e^-βH̃_p,
where we assume ϵ≤1.
Here the bond dimension for M̃_β is given by D_H^(b_1/β_0)βln^2(3n/ϵ) with D_H=c_H ln^2(n/ϵ_H).
We estimate the error between e^-β H and e^-βH̃ using Ref. <cit.>, which states that for arbitrary operators X and Y
‖e^X+Y -e^X‖_p ≤ e^‖Y‖‖Y‖·‖e^X‖_p .
By applying the inequality and substituting -β H for X and βH - βH̃ for Y, we get
‖e^-β H - e^-βH̃‖_p ≤ e^β‖H̃-H‖β‖H̃-H‖·‖e^-β H‖_p ≤ e^βϵ_Hβϵ_H‖e^-β H‖_p ≤ 3βϵ_H‖e^-β H‖_p,
where we choose ϵ_H such that ϵ_H≤ 1/β.
For a given β, if we choose ϵ_H ≤ϵ/ (6β)[From this inequality, the condition βϵ_H≤ 1 is satisfied because ϵ≤ 1.], we obtain
e^-β H - e^-βH_p ≤ϵ/2e^-β H_p.
Using inequalities (<ref>) and (<ref>),
e^-βH- M_β_p ≤ϵ/3e^-βH̃_p ≤ϵ/3e^-βH-e^-β H_p + e^-β H_p ≤ϵ/3ϵ/2 + 1e^-β H_p
≤ϵ/2e^-β H_p ,
where we use ϵ≤ 1 to derive ϵ/2 + 1≤ 3/2.
Therefore, using inequalities (<ref>) and (<ref>), we finally obtain the following error
e^-β H- M_β_p ≤e^-β H-e^-βH_p + e^-βH-M_β_p ≤ϵe^-β H_p.
This gives us the approximation MPO for the final Gibbs state e^-β H for the 2-local case.
|
http://arxiv.org/abs/2409.02652v1 | 20240904122827 | Novel Approach for solving the discrete Stokes problems based on Augmented Lagrangian and Global Techniques: Application to Saddle-Point Linear Systems from Incompressible flow | [
"A. Badahmane",
"A. Ratnani",
"H. Sadok"
] | math.NA | [
"math.NA",
"cs.NA"
] |
A. Badahmane^1, A. Ratnani^1, H. Sadok^2
[1]The UM6P Vanguard Center, Mohammed VI Polytechnic University, Benguerir 43150, Lot 660, Hay Moulay Rachid, Morocco.
[2]LMPA, Université du Littoral Côte d'Opale, 50 Rue F. Buisson, BP 699 - 62228 Calais cedex, France.
§ ABSTRACT
In this paper, a novel augmented Lagrangian preconditioner based on global Arnoldi is proposed for accelerating the convergence of Krylov subspace methods applied to linear systems of equations with a block three-by-three structure; such systems typically arise from discretizing the Stokes equations using mixed finite element methods. In practice, the components of velocity are always approximated using a single finite element space. More precisely, in two dimensions, our new approach is based on a standard space of scalar finite element basis functions to discretize the velocity space. This componentwise splitting can be shown to induce a natural block three-by-three structure.
A spectral analysis is established for the exact versions of these preconditioners. Finally, the numerical results show that our novel approach is more efficient and robust for solving discrete Stokes problems. The efficiency of the new approach is evaluated by measuring computational time.
Stokes equation, saddle point problem, Krylov subspace method, global Krylov subspace method, augmented Lagrangian-based preconditioning.
§ INTRODUCTION
The Stokes problem is discretized using conforming finite element spaces X^h⊂ Q_2 and Q^h_1⊂ Q_1 that satisfy the inf-sup condition for the Stokes velocity and pressure such as Taylor–Hood elements <cit.>.
The discrete form of the weak formulation can be cast as a block linear system of the form:
𝒜_3× 3𝐮=[ A O B_x^T; O A B_y^T; B_x B_y O ][ u_x; u_y; p ]
=
[ f_x; f_y; g ]_b,
where n_u=2n and n_p denote, respectively, the dimensions of the velocity and pressure finite-dimensional spaces, with n_u+n_p=N.
Here A∈ℝ^n× n is the scalar-Laplacian matrix; it is worth noting that A is a symmetric positive definite (SPD) matrix. The n_p × n matrices B_x and B_y represent weak derivatives in the x and y directions,
and f_x, f_y and g are given vectors. Moreover, we assume, as is typically the case in applications of the Stokes problem, that n_u>>n_p. The increasing popularity of mixed finite element methods for Stokes and Navier-Stokes flows has been a significant source of saddle-point systems such as the one in (<ref>). A major source of applications for saddle-point problems can be found in <cit.>.
In general, owing to the large dimension and sparsity of the matrices A and B, it is sensible to solve systems (<ref>) by iterative methods. Additionally, since the coefficient matrix A is nonsingular, numerous effective methods have been put forward, such as the successive overrelaxation (SOR)-like methods <cit.>, variants of the Uzawa-type methods <cit.>, and the Hermitian and skew-Hermitian splitting (HSS) method, initially introduced by Bai, Golub, and Ng in <cit.>. Additionally, the PHSS iteration method has been presented in <cit.>. For a more in-depth understanding of the works related to stationary iterative methods, please refer to <cit.>.
Generally speaking, iterative methods are more attractive than direct methods in terms of both storage requirements and computing time. In order to solve the linear system (<ref>) efficiently, we often use effective preconditioning techniques to accelerate Krylov subspace methods, such as the GMRES method <cit.>. As is well known, a clustered spectrum of the preconditioned matrix often results in a rapid rate of convergence for Krylov subspace methods. Therefore, to achieve a rapid convergence rate and improve computational efficiency, a large number of efficient iteration methods and preconditioning techniques have been presented in recent years, such as the block triangular preconditioner applied to the augmented linear system <cit.>, the augmented Lagrangian-based preconditioning technique for a class of block three-by-three linear systems <cit.>, and so forth. This paper is organized as follows. An example of modelling that leads to this type of system is outlined in Section 1. Section 2 introduces the 3×3 strategy. In Section 3, we recall and define the 2×2 strategy. Numerical tests are then presented to show the effectiveness of the proposed preconditioners, in particular in the presence of inexact solvers. Finally, we conclude with a brief summary in Section 5.
§.§ The Problem Setting
The Stokes equation describes the flow of a viscous fluid and is used in various fields, including aerodynamics, propulsion, and biomedical fluid analysis. In many cases, finding an exact solution to the Stokes equation can be challenging, so we often use numerical methods to approximate the solution <cit.>. Their discretization results in a linear system, as shown in Eq. (<ref>). In the incompressible case, the Stokes equation can be written as follows :
-∇⃗^2u⃗ +∇⃗ p = 0⃗ in Ω,
∇⃗·u⃗ = 0 in Ω.
The variable u⃗ is the unknown velocity field, the scalar function p is the unknown pressure field. It is important to acknowledge that the Laplacian and divergence operators are defined in <cit.>.
The first equation in Eq. (<ref>) represents conservation of the momentum of the fluid (and so is the momentum equation), and the second equation enforces conservation of mass. We consider the problem posed on a domain Ω of dimension d=2 with boundary conditions ∂Ω=∂Ω_D∪∂Ω_N defined by
[ u⃗ = w⃗ on ∂Ω_D,∂u⃗/∂ n-n⃗p= s⃗ on ∂Ω_N, ]
where :
* w⃗: is the vorticity variable, given by:
w⃗ = ∇⃗×u⃗,
where × is the curl operator,
* s⃗: function depends on the outflow boundary to ensure that mass is conserved,
* n⃗: the outward-pointing normal to the boundary,
* ∂u⃗/∂ n: denotes
the directional derivative in the normal direction.
In practice, the d components of velocity are always approximated using a single finite element space <cit.>; the discrete formulation of Eq. (<ref>) can then be expressed as a two-by-two partitioning of the discrete Stokes system, in which the system matrix is a saddle-point matrix defined as follows:
𝒜_2×2x=([ A B^T; B 0 ])([ u; p ])=([ f; g ])_h,
where A∈ℝ^n_u× n_u is the vector-Laplacian matrix (it is worth noting that A is a symmetric positive definite (SPD) matrix), B∈ℝ^n_p× n_u is the divergence matrix with rank(B^T)=n_p, and f∈ℝ^n_u and g∈ℝ^n_p are given vectors.
§ MOTIVATION:
The main motivation of this work is the following: instead of using a single finite element space to discretize the velocity space and obtain the two-by-two partitioning (<ref>), we use a standard space of scalar finite element basis functions {ϕ_j}_j=1^n, set n_u = 2n, and define the velocity basis set
{ϕ⃗_1, …, ϕ⃗_2n} := {(ϕ_1, 0)^T, …, (ϕ_n, 0)^T, (0, ϕ_1)^T, …, (0, ϕ_n)^T}.
This component-wise splitting can be shown to induce a natural block three-by-three partitioning of the discrete Stokes system (<ref>); for more details, we refer to <cit.>.
Specifically, with
u:= ([u_x]_1, …, [u_x]_n, [u_y]_1, …, [u_y]_n),
(<ref>) can be rewritten as :
[ A O B_x^T; O A B_y^T; B_x B_y O ][ u_x; u_y; p ]
=
[ f_x; f_y; g ],
where the n × n matrix A is the scalar Laplacian matrix (discussed in detail in <cit.>), and the n_p × n matrices B_x and B_y represent weak derivatives in the x and y directions,
where
A = [a_ij], a_ij=∫_Ω∇ϕ_i·∇ϕ_j,
B_x = [b_x,ki], b_x,ki=-∫_Ωψ_k∂ϕ_i/∂ x,
B_y = [b_y,kj], b_y,kj=-∫_Ωψ_k∂ϕ_j/∂ y.
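For readers who want to experiment with this block structure, the following SciPy sketch assembles the 3×3 saddle-point matrix above and its augmented Lagrangian variant with scipy.sparse.bmat. The blocks A, B_x, B_y are random sparse stand-ins rather than assembled finite element matrices, and Q (hence Q^{-1}) is taken to be the identity purely for illustration.

```python
import scipy.sparse as sp

n, n_p, gamma = 50, 20, 1.0
A = sp.random(n, n, density=0.1, random_state=0)
A = (A @ A.T + sp.identity(n)).tocsr()            # SPD stand-in for the scalar Laplacian
B_x = sp.random(n_p, n, density=0.2, random_state=1).tocsr()
B_y = sp.random(n_p, n, density=0.2, random_state=2).tocsr()
Q_inv = sp.identity(n_p, format='csr')            # stands in for Q^{-1} (Q is SPD in general)

# Block three-by-three saddle-point matrix of the discrete Stokes system.
A33 = sp.bmat([[A, None, B_x.T],
               [None, A, B_y.T],
               [B_x, B_y, None]], format='csr')

# Augmented (grad-div / Lagrangian) variant: each velocity block is replaced
# by A + gamma * B^T Q^{-1} B for the corresponding derivative matrix.
A33_aug = sp.bmat([[A + gamma * (B_x.T @ Q_inv @ B_x), None, B_x.T],
                   [None, A + gamma * (B_y.T @ Q_inv @ B_y), B_y.T],
                   [B_x, B_y, None]], format='csr')
print(A33.shape, A33_aug.shape)                   # both are (2n + n_p, 2n + n_p)
```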
§ MATHEMATICAL BACKGROUND:
Given a square matrix A, the set of all eigenvalues (spectrum) of A is denoted by σ(A). When the spectrum of A is real, we use λ_min(A) and λ_max(A) to respectively denote its minimum and maximum eigenvalues. When A is symmetric positive (semi)definite, we write A ≻ 0 (A ≽ 0). In addition, for two given matrices A and B, the relation A ≻ B (A ≽ B) means A - B ≻ 0 (A - B ≽ 0). Finally, for vectors x, y, and z of dimensions n, m, and p, (x; y; z) will denote a column vector of dimension n+m+p.
In this paper, I will denote the identity matrix, specifying its size as appropriate to the context.
§ 3×3 STRATEGY FOR SOLVING THREE-BY-THREE LINEAR SYSTEM (<REF>)
The 3×3 strategy, based on the motivation outlined in Section 1, is designed to solve the three-by-three saddle-point problem (<ref>).
The 3×3 strategy can significantly reduce the computational cost compared with using the 2×2 strategy for solving the classical structure of the saddle-point problem (<ref>). The preconditioning technique helps to improve the convergence rate of the Krylov subspace methods.
This strategy is motivated by the use of a set of standard scalar finite element basis functions within a defined space, aimed at obtaining the three-by-three partitioning of the saddle-point matrix (<ref>).
§.§ Novel Augmented Lagrangian-based preconditioning and global
techniques:
Krylov subspace methods (such as GMRES) in conjunction with suitable preconditioners are frequently the method of choice for computing approximate solutions of such linear systems of equations.
First, problem (<ref>) is reformulated as the equivalent augmented system 𝒜̅_3× 3𝐮̅= 𝐛̅, where
𝒜̅_3× 3=
[ A + γ B^T_x Q^-1B_x 0 B_x^T; 0 A + γ B^T_y Q^-1B_y B^T_y; B_x B_y 0 ],
and 𝐛̅ = (f_x+γ B^T_x Q^-1g; f_y + γ B^T_y Q^-1g; g), with Q being an arbitrary SPD matrix and γ > 0 a user-defined parameter. Evidently, the linear system of equations
𝒜̅_3× 3𝐮̅= 𝐛̅ is equivalent to 𝒜_3× 3𝐮 = 𝐛. This approach is inspired by the effectiveness of employing grad-div stabilization and augmented Lagrangian techniques to solve saddle-point problems.
§.§.§ Preconditioning:
In this section, we investigate a new augmented Lagrangian-based preconditioning and global approach for solving (<ref>). Left preconditioning of (<ref>) gives the following new linear system:
𝒫^-1𝒜̅_3× 3𝐮̅ = 𝒫^-1𝐛̅,
where 𝒫 is one of the preconditioners below:
* 𝒫_γ, α, x : is the augmented Lagrangian preconditioner in the x direction.
* 𝒫_γ, α, y : is the augmented Lagrangian preconditioner in the y direction.
The following two constraint-type preconditioners were proposed for accelerating the convergence of Krylov subspace methods, given as follows:
𝒫_γ, α,x =
[ A + γ B^T_x Q^-1B_x 0 B_x^T; 0 A + γ B^T_x Q^-1B_x (1 - γα^-1) B^T_y; 0 0 -α^-1Q ],
𝒫_γ, α,y =
[ A + γ B^T_y Q^-1B_y 0 B_x^T; 0 A + γ B^T_y Q^-1B_y (1 - γα^-1) B^T_y; 0 0 -α^-1Q ],
where α and γ are prescribed positive parameters.
§.§.§ Algorithmic implementation of the augmented Lagrangian preconditioners 𝒫_γ, α,x and 𝒫_γ, α,y.
In this part, we display the algorithmic implementation of 𝒫_γ, α,x and 𝒫_γ, α,y,
in which, inside the Krylov subspace methods, the SPD subsystems are solved inexactly by the preconditioned conjugate gradient (PCG) method using loose tolerances. More precisely, the inner PCG solver for linear systems with coefficient matrix A, A + γ B^T_xQ^-1 B_x and A + γ B^T_yQ^-1 B_y is terminated when the relative residual norm falls below 10^-6 or when the maximum number of 100 iterations is reached. The preconditioner for PCG is an incomplete Cholesky factorization constructed using the ichol function with opts.type = 'ict' and drop tolerance 10^-2.
In the following parts, we work on specific problems. At every step, a Krylov subspace method such as GMRES is used in combination with the augmented Lagrangian preconditioner to solve the saddle-point problem (<ref>).
We summarize the implementation of preconditioners
𝒫_γ, α,x and 𝒫_γ, α,y in
Algorithms 1 and 2.
For the linear systems corresponding to A+ γ B^T_x Q^-1 B_x and A+ γ B^T_y Q^-1 B_y, we distinguish between two approaches:
* Approach I. Since A+ γ B^T_x Q^-1 B_x is an SPD matrix, we solve the linear systems corresponding to this matrix independently by the preconditioned conjugate gradient (PCG) method; the action of the matrix is formed inside PCG through matrix-vector products, with incomplete Cholesky preconditioning, ichol(A).
To implement the preconditioners 𝒫_γ, α,x and 𝒫_γ, α,y, we
use the following algorithms:
* The subsystems corresponding to (A+ γ B^T_x Q^-1 B_x) are solved by the PCG method. Within the PCG process, we perform a sequence of matrix-vector products, first multiplying vectors by B_x, then by Q^-1, and then by B^T_x; a sketch of this matrix-free application is given at the end of this subsection.
We use the steps described in Algorithm 1 to implement Algorithm 2.
* Approach II.
In steps 2 and 3 of Algorithms 1 and 2, the objective is not to solve each subsystem independently, but instead to utilize the 𝒫GCG method <cit.> for solving a linear system with several right-hand sides of the following form:
(A+ γ B^T_y Q^-1 B_y)𝒳=ℋ,
where 𝒳 and ℋ are both n×2 matrices. The columns of 𝒳 are denoted 𝒳^(1)=x and 𝒳^(2)=y, the columns of ℋ are denoted ℋ^(1)=r_1-B_x^Tz and ℋ^(2)=r_2-(1-γα^-1)B_y^Tz,
𝒳_0 is the initial guess for the solution of (<ref>), and R_0=ℋ-(A+ γ B^T_y Q^-1 B_y)𝒳_0 is the initial residual.
By leveraging the structure of the augmented Lagrangian-based preconditioners 𝒫_γ, α, x,
𝒫_γ, α, y and Approach II, in the rest of the paper we refer
to the new preconditioners as 𝒫_γ, α,x,G and 𝒫_γ, α,y,G, where
* 𝒫_γ, α,x,G: denotes the global augmented Lagrangian preconditioner in the x direction.
* 𝒫_γ, α,y,G: denotes the global augmented Lagrangian preconditioner in the y direction.
To implement the preconditioners 𝒫_γ, α,x,G and 𝒫_γ, α,y,G, we
use the following algorithms:
* The subsystem with several right-hand sides corresponding to (A+ γ B^T_x Q^-1 B_x) is solved by the 𝒫GCG method. Within the 𝒫GCG process, we perform a sequence of matrix-vector products, first multiplying vectors by B_x, then by Q^-1, and then by B^T_x.
We apply a similar approach as in Algorithm 3 to implement Algorithm 4.
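The matrix-free application of the augmented block can be sketched as follows. This is an illustrative Python stand-in, not the MATLAB implementation used for the experiments: the operator A + γ B_x^T Q^-1 B_x is applied through the product sequence B_x, Q^-1, B_x^T, and a bare-bones PCG with the stopping rule quoted earlier (relative residual below 10^-6 or at most 100 iterations) solves each subsystem. The preconditioner handle apply_M is left generic, standing in for the incomplete Cholesky factor ichol; for Approach II, the column loop in solve_multiple_rhs marks the place where a global (block) method such as 𝒫GCG would act on both right-hand-side columns jointly. Q is again assumed diagonal.

```python
import numpy as np

def apply_augmented(A, Bx, Q_diag, gamma):
    """v -> (A + gamma B_x^T Q^{-1} B_x) v via successive products with B_x, Q^{-1}, B_x^T."""
    Qinv = 1.0 / Q_diag
    return lambda v: A @ v + gamma * (Bx.T @ (Qinv * (Bx @ v)))

def pcg(apply_A, b, apply_M=lambda r: r, rtol=1e-6, maxiter=100):
    """Bare-bones preconditioned CG; apply_M stands in for the ichol preconditioner."""
    x = np.zeros_like(b)
    r = b - apply_A(x)
    z = apply_M(r)
    p = z.copy()
    rz = r @ z
    nb = np.linalg.norm(b)
    for _ in range(maxiter):
        if np.linalg.norm(r) <= rtol * nb:      # relative residual below 1e-6
            break
        Ap = apply_A(p)
        alpha = rz / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        z = apply_M(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

def solve_multiple_rhs(A, Bx, Q_diag, gamma, H):
    """Approach II subsystem (A + gamma B_x^T Q^{-1} B_x) X = H with an n x 2 block H;
    here solved column by column, where a global (P)GCG method treats both columns at once."""
    op = apply_augmented(A, Bx, Q_diag, gamma)
    return np.column_stack([pcg(op, H[:, j]) for j in range(H.shape[1])])
```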
§ 2×2 STRATEGY FOR SOLVING TWO-BY-TWO LINEAR SYSTEM (<REF>)
In this strategy, we employ a single finite element space to discretize the velocity field and achieve the two-by-two partitioning (<ref>).
§.§ Novel Augmented Lagrangian-based preconditioning and global
techniques:
The iterative solution of the discrete Stokes equations has attracted considerable attention in recent years. Here we limit ourselves to discussing solution algorithms based on preconditioned Krylov subspace methods. In [13], constraint-type preconditioners of this kind were proposed for accelerating the convergence of Krylov subspace methods.
First, problem (<ref>) is reformulated as the equivalent augmented system 𝒜̅𝐮 = 𝐛̅, where
𝒜̅_2× 2=
[ A + γ B^T Q^-1B B^T; B 0 ],
and 𝐛̅ = (f+ γ B^T Q^-1g; g), with Q being an arbitrary SPD matrix and γ > 0 a user-defined parameter. Evidently, the linear system of equations
𝒜̅_2× 2𝐮 = 𝐛̅ is equivalent to 𝒜𝐮 = 𝐛. This approach is motivated by the success of grad-div stabilization and augmented Lagrangian techniques for solving saddle-point problems.
§.§.§ Preconditioning:
In this section, we investigate a new augmented Lagrangian-based preconditioning and global approach for solving (<ref>). The idea of preconditioning is to transform the linear system (<ref>) into another one that is easier to solve. Left preconditioning of (<ref>) gives the following new linear system:
𝒫^-1_γ,α𝒜̅_2× 2𝐮 = 𝒫^-1_γ,α𝐛̅,
where 𝒫_γ,α is given as follows
𝒫_γ, α=
[ A + γ B^T Q^-1B (1-γα^-1)B^T; 0 -α^-1Q ],
To apply the preconditioner, we need to solve systems of the following form:
* We first compute y from the second block row, by solving the system with coefficient matrix -α^-1Q.
* The matrix A+ γ B^T Q^-1 B is SPD, and we solve the corresponding system iteratively by the PCG method. Within the PCG process, we
perform a sequence of matrix-vector products, first multiplying vectors by B, then by Q^-1, and then by B^T; a sketch of this two-step application is given below.
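A minimal sketch of this two-step application of 𝒫_γ,α (assuming, as before, a diagonal Q; solve_Aaug stands for any inexact inner solver for A + γ B^T Q^-1 B, such as the PCG sketch above) is:

```python
import numpy as np

def apply_P_inv(solve_Aaug, Q_diag, B, gamma, alpha, r1, r2):
    """Apply P_{gamma,alpha}^{-1} to a residual (r1; r2) by block back-substitution.
    solve_Aaug(rhs) is any (inexact) solver for A + gamma B^T Q^{-1} B;
    Q is assumed diagonal (Q_diag) so its inverse is explicit."""
    y = -alpha * (r2 / Q_diag)                              # second block row: -(1/alpha) Q y = r2
    x = solve_Aaug(r1 - (1.0 - gamma / alpha) * (B.T @ y))  # first block row, SPD solve
    return x, y
```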
§.§ Spectral analysis
The distribution of eigenvalues and eigenvectors of a preconditioned matrix is closely connected to how quickly Krylov subspace methods converge. Hence, it is valuable to analyze the spectral characteristics of the preconditioned matrix 𝒫_γ,α^-1𝒜̅_2× 2. In the upcoming theorem, we estimate
lower and upper bounds for the eigenvalues of the preconditioned
matrix 𝒫_γ,α^-1𝒜̅_2× 2.
Let the preconditioner 𝒫_γ,α be defined as in (<ref>). Then the eigenvalues of 𝒫_γ,α^-1𝒜̅_2× 2 are all real, positive and bounded. Furthermore
the matrix 𝒫_γ,α^-1𝒜̅_2× 2 is diagonalizable and has n_p+1 distinct eigenvalues
{1,λ_1,...,λ_n_p}.
Assume that λ represents an eigenvalue of the preconditioned matrix and u̅=(u;p) is the associated eigenvector. In order to deduce the distribution of eigenvalues, we analyze the following generalized eigenvalue problem
𝒜̅_2× 2u̅ =λ𝒫_γ,αu̅.
(<ref>) can be reformulated as follows
{[ (1-λ)(A+γ B^TQ^-1B)u+(1+λ(γα^-1-1))B^Tp =0,; Bu =-λα^-1 Qp. ].
In the case where λ=1, equation (<ref>) is always satisfied for u∈Null(B); consequently, there exist n_u-n_p linearly independent eigenvectors ([ u^(i); 0 ]), i=1,…,n_u-n_p, corresponding to the eigenvalue 1, where u^(i)∈Null(B).
If λ=1 and u=0, from the second equation of (<ref>) it can be deduced that p=0. This conflicts with the initial assumption that the column vector (u; p) is an eigenvector of the preconditioned matrix 𝒫_γ,α^-1𝒜̅_2× 2. If λ≠ 1 and p=0, from the first equation of (<ref>) it can be deduced that u must be 0. This contradicts the initial assumption that (u; p) is an eigenvector of the preconditioned matrix, and therefore u≠ 0 and p≠ 0.
Since λ≠ 1, from (<ref>) we further obtain :
p =-α/λQ^-1 B u.
Substituting p from the above relation in the first equation of (<ref>), we get :
λ^2(A+γ B^T Q^-1 B)u-λ(A+α B^T Q^-1 B )u +α B^T Q^-1 Bu = 0.
Premultiplying (<ref>) by u^T/u^Tu gives:
(a+γ q)λ^2-(a+α q )λ +α q=0,
which can be written
λ^2-bλ+c=0,
where a, q, b and c are given as follows:
a = u^T A u/u^Tu,
q=u^T B^T Q^-1 B u/u^Tu, b=a+α q/a+γ q and c=α q/a+γ q.
As a result, it is immediate to see that the roots of (<ref>) are real and positive, given by
n_p eigenvalues
λ_1 = b - √(b^2 - 4c)/2 and n_p eigenvalues
λ_2 = b + √(b^2 - 4c)/2 of the preconditioned matrix.
After some manipulations, λ_1 and λ_2 can be shown to satisfy the
following inequalities:
λ_1 ≥2λ_min(B^T Q^-1 B) /λ_max(A)+(1+α-γ)λ_max(B^T Q^-1 B),λ_2 ≤2αλ_max(B^T Q^-1 B) /λ_min(A)+(α-γ)λ_min(B^T Q^-1 B).
§ NUMERICAL RESULTS
In this section, we report on the performance of inexact variants of the proposed block preconditioners using a test problem taken from <cit.>, which corresponds to a 2D Stokes flow problem. The computations are performed on a computer with an Intel Core i7-10750H CPU @ 2.60 GHz processor and 16.0 GB RAM using MATLAB R2020b. In Tables 1 and 2, we report the total required number of outer GMRES iterations and the elapsed CPU time (in seconds) under "Iter" and "CPU", respectively. The total number of inner PCG iterations used to solve the subsystems with coefficient matrices (A+ γ B^T_x Q^-1 B_x) and (A+ γ B^T_y Q^-1 B_y) is reported under "Iter_pcg". No restart is used for either GMRES iteration. The initial guess is taken to be the zero vector and the iterations are stopped as soon as
‖𝒜x_k - b‖_2 ≤ 10^-7‖ b‖_2,
where x_k is the computed k-th approximate solution. In the tables, we also include the relative error and relative residual
Err := ‖ x_k - x^*‖_2/‖ x^*‖_2 ,
and
Res := ‖𝒜x_k - b‖_2/‖ b‖_2 ,
where x^* and x_k are respectively, the exact solution and its approximation obtained in the k-th iterate. In addition, we have used right-hand sides corresponding to random solution vectors.
L-shaped two-dimensional domain Ω_∟, parabolic inflow boundary condition, natural outflow boundary condition. Consider the Stokes equation system (<ref>) posed in Ω_∟=( -1,5)×( -1,1). In this scenario there is a slow flow in a rectangular duct with a sudden expansion, a configuration often referred to as "flow over a backward-facing step". A parabolic profile is imposed on the inflow boundary (x=-1; 0≤ y ≤ 1), Dirichlet no-flow (zero velocity) boundary conditions are imposed on the walls, and the Neumann condition (<ref>) is again applied at the outflow boundary (x = 5;-1<y< 1).
We use the Q_2-P_1 mixed finite element approximation from the IFISS library <cit.> to discretize this problem in Ω_∟, where:
* Q_2: biquadratic finite element approximation on rectangles for the velocity,
* P_1: linear finite element approximation for the pressure,
and the nodal positions of this mixed finite element are illustrated in Fig. <ref>:
Then we obtain the nonsingular saddle point problem (<ref>).
The numerical results of the 3×3 strategy with Approaches I and II for the tested example are listed in Tables 1 and 3.
In Tables 2 and 4, we list numerical results with respect to Iter, CPU and Res for the 2×2 and 3×3 strategies.
§ IN THE CASE Γ=1E-04 AND Α=1E+01.
It can be seen numerically that Approach II incorporated with the 𝒫_γ, α,x,G and 𝒫_γ, α,y,G preconditioners is more efficient than Approach I incorporated with 𝒫_γ, α,x and 𝒫_γ, α,y, in terms of both iteration count and CPU time.
Table 2 reports the corresponding results of the two strategies with the proposed preconditioners, which show that the 3×3 strategy with 𝒫_γ, α,x,G performs much better than the 2×2 strategy with 𝒫_γ, α, especially for large problems. Numerical results are reported in Table 3 for the tested methods with respect to the number of outer iteration steps, inner iteration steps and elapsed CPU time in seconds, denoted as "Iter", "Iter_pcg" and "CPU", respectively.
Approach II incorporated with 𝒫_γ, α, y, G outperforms Approach I with 𝒫_γ, α, y in efficiency, concerning both iteration steps and CPU times. Moreover, Approach II is more economical and is superior to the other two preconditioners regarding execution time, especially for relatively large problems.
§ IN THE CASE Γ=1E-02 AND Α=1E+01.
It was observed in all the tables that the 2× 2 and 3× 3 strategies with the inexact augmented Lagrangian-based preconditioners exhibit faster convergence for smaller values of γ. However, for large γ the total timings increase, due to the fact that the condition numbers of the blocks (A+ γ B^T_y Q^-1 B_y) and (A+ γ B^T_x Q^-1 B_x) grow as γ increases. The 3× 3 strategy incorporated with the 𝒫_γ, α,x,G and 𝒫_γ, α,y,G preconditioners
demonstrates significantly better performance.
This superiority is observed across various comparisons with the 2× 2 strategy incorporated with 𝒫_γ, α. Moreover, the 3× 3 strategy consistently requires less CPU time for convergence. Therefore, it can be concluded that the convergence behavior of the 3× 3 strategy with 𝒫_γ, α,x,G and 𝒫_γ, α,y,G outperforms that of the other methods.
To discretize problem (<ref>) using the Taylor-Hood Q_2-Q_1 mixed finite element approximation in Ω_∟, we utilize the nodal positions of Q_2-Q_1 from the IFISS library <cit.>, where:
* Q_1: denotes a bilinear finite element approximation on rectangles,
and the nodal positions of Q_2-Q_1 are shown in Fig. <ref>:
Then we derive the nonsingular saddle point problem (<ref>).
To further confirm the effectiveness of the 3× 3 strategy incorporated with 𝒫_γ, α, x, G or 𝒫_γ, α, y, G preconditioners, numerical results of the 2× 2 and 3× 3 strategies incorporated with various preconditioners,
with respect to Iter, Iter_pcg, CPU, Res and Err for saddle point problems with different values of l,
are reported in the following Tables.
§ IN THE CASE Γ=1E-04 AND Α=1E+01.
It can be observed from Tables 10 and 11 that the 𝒫_γ, α, x, G-GMRES and 𝒫_γ, α, y, G-GMRES
methods have a clear advantage in CPU time compared with the 𝒫_γ, α, x-GMRES and 𝒫_γ, α, y-GMRES methods, which shows that with Approach II the total timings are much smaller than with Approach I. The results in Tables 2 and 4 likewise indicate that
applying the 3× 3 strategy with Algorithms 3 and 4 to solve the problem with several right-hand sides (A+ γ B^T_x Q^-1 B_x)𝒳=ℋ or (A+ γ B^T_y Q^-1 B_y)𝒳=ℋ needs less computing time than using the 2× 2 strategy with Algorithms 1 and 2.
§ IN THE CASE Γ=1E-02 AND Α=1E+01.
By comparing the results in Tables 13, 14, 15 and 16, it can be seen that our proposed strategy incorporated with the preconditioned 𝒫_γ, α, x, G-GMRES and 𝒫_γ, α, y, G-GMRES methods succeeds in producing high-quality approximate solutions in all cases, and that the 3× 3 strategy incorporated with the preconditioned 𝒫_γ, α,x,G-GMRES and 𝒫_γ, α,y,G-GMRES methods outperforms the classical 2× 2 strategy incorporated with the preconditioned 𝒫_γ, α-GMRES method in terms of Iter and CPU times. Moreover, the numerical results in the tables above show that the 3× 3 strategy incorporated with the preconditioned 𝒫_γ, α,x,G-GMRES and 𝒫_γ, α,y,G-GMRES methods with proper α and γ remains very efficient even for larger problems.
§ CONCLUSION
In this paper, we introduce a new class of augmented Lagrangian preconditioners based on the global conjugate gradient (GCG) method for solving three-by-three linear systems, focusing on systems arising from finite element discretizations of the Stokes flow problem. Numerical experiments on a challenging 2D model problem indicate that the corresponding inexact preconditioner combined with the 3×3 strategy achieves significantly faster convergence than previous versions of the augmented Lagrangian-based preconditioner.
Future work will concentrate on replacing the incomplete Cholesky inner preconditioners with multilevel preconditioners to enhance the scalability of the global conjugate gradient.
Arno
W.E. Arnoldi, The principle of minimized iterations in the solution of the matrix eigenvalue problem, Quart. Appl. Math., 9 (1951), pp. 17-29.
badahmane
A. Badahmane, A.H. Bentbib and H. Sadok, Preconditioned global Krylov subspace methods for solving saddle point problems with multiple right-hand sides, Electron. Trans. Numer. Anal., 51 (2019), pp. 495-511.
AHSS
Z.-Z. Bai, G.H. Golub, Accelerated Hermitian and skew-Hermitian splitting iteration methods for saddle-point problems, IMA J. Numer. Anal., 27 (2007), pp. 1-23.
PHSS
Z.-Z. Bai, G.H. Golub and J.-Y. Pan, Preconditioned Hermitian and skew-Hermitian splitting methods for non-Hermitian positive semidefinite linear systems, Numer. Math., 98 (2004), pp. 1-32.
HSS
Z.-Z. Bai, G.H. Golub and M.K. Ng, Hermitian and skew-Hermitian splitting methods for non-Hermitian
positive definite linear systems, SIAM J. Matrix Anal. Appl., 24 (2003), pp. 603-626.
Wang
Z.-Z. Bai, Z.-Q. Wang, On parameterized inexact Uzawa methods for generalized saddle point problems, Linear Algebra Appl., 428 (2008), pp. 2900–2932.
Parlett
Z.-Z. Bai, B.N. Parlett, Z.-Q.Wang, On generalized successive overrelaxation methods for
augmented linear systems. Numer. Math., 102 (2005), pp. 1–38.
Benzi
M. Benzi, G. H. Golub and J. Liesen, Numerical solution of saddle point problems, Acta
Numerica, 14 (2005), pp. 1-137.
Bramble
J.H. Bramble, J.E. Pasciak, A.T. Vassilev, Analysis of the inexact Uzawa algorithm for saddle point
problems. SIAM J. Numer Anal., 34 (1997), pp. 1072–1092
Vuik
G. Ebadi, N. Alipour, and C. Vuik, Deflated and augmented global Krylov subspace methods for the
matrix equations, Appl. Numer. Math., 99 (2016), pp. 137–150
benzi2024
F. P. A. Beik, M. Benzi, An augmented Lagrangian-based preconditioning technique for a class of block three-by-three linear systems, Applied Mathematics Letters., 149 (2024).
beik2022
F. P. A. Beik, M. Benzi,
Preconditioning techniques for the coupled Stokes-Darcy problem: Spectral and field-of-values analysis, Numerische Mathematik, 150 (2022), pp. 257-298.
Benzi2
M. Benzi, G. H. Golub and J. Liesen, Numerical solution of saddle point problems, Acta
Numerica, 14 (2005), pp. 1-137.
Elman
H.C. Elman, D.J. Silvester and A.J. Wathen, Finite Elements and Fast Iterative Solvers with Applications in Incompressible Fluid Dynamics, Oxford University Press, New York., (2005).
Golub2
G.H. Golub, X. Wu, J.-Y. Yuan, SOR-like methods for augmented systems, BIT Numer. Math., (2001), pp. 71-85.
Guo
P. Guo, C.-X. Li, S.-L. Wu, A modified SOR-like method for the augmented systems, J. Comput.
Appl. Math., 274 (2015), pp, 58–69.
GBICG
K. Jbilou, H. Sadok and A. Tinzefte, Oblique projection methods for linear systems with multiple right-hand sides, Electron. Trans. Numer. Anal., 20 (2005), pp. 119-138.
Ichol:Manteuffel
T.A. Manteuffel, An incomplete factorization technique for positive definite linear systems, Math. Comput., 34 (1980), pp. 473-497.
Wang
N.-N. Wang, J.-C. Li, A class of new extended shift-splitting preconditioners for saddle
point problems, J. Comput. Appl. Math., 357 (2019), pp. 123-145.
saad:2003
Y. Saad, Iterative Methods for Sparse Linear Systems, SIAM, Philadelphia, PA., (2003).
Saad
Y. Saad, M.H. Schultz, A generalized minimal residual algorithm for solving nonsymmetric linear systems, SIAM J. Sci. Stat. Comp., 7 (1986), pp. 856-869.
saad1993flexible
Y. Saad, A flexible inner-outer preconditioned GMRES algorithm, SIAM Journal on Scientific Computing, 14 (1993), pp. 461–469.
Zhang
J.-J. Zhang, J.-J. Shang, A class of Uzawa-SOR methods for saddle point problems, Appl. Math
Comput., 216 (2010), pp. 2163–2168.
|
http://arxiv.org/abs/2409.02994v1 | 20240904180003 | Spherical Evolution of the Generalized Harmonic Gauge Formulation of General Relativity on Compactified Hyperboloidal Slices | [
"Christian Peterson",
"Shalabh Gautam",
"Alex Vañó-Viñuales",
"David Hilditch"
] | gr-qc | [
"gr-qc"
] |
§ ABSTRACT
We report on the successful numerical evolution of the compactified
hyperboloidal initial value problem in general relativity using
generalized harmonic gauge. We work in spherical symmetry, using a
massless scalar field to drive dynamics. Our treatment is based on
the dual-foliation approach, proceeding either by using a height
function or by solving the eikonal equation to map between frames.
Both are tested here with a naive implementation and with
hyperboloidal layers. We present a broad suite of numerical
evolutions, including pure gauge perturbations, constraint violating
and satisfying data with and without scalar field matter. We present
calculations of spacetimes with a regular center. For black hole
spacetimes we use excision to remove part of the black hole
interior. We demonstrate both pointwise and norm convergence at the
expected rate of our discretization. We present evolutions in which
the scalar field collapses to form a black hole. Evolving nonlinear
scalar field perturbations of the Schwarzschild spacetime, we
recover the expected quasinormal frequencies and tail decay rates
from linear theory.
^1CENTRA, Departamento de Física, Instituto Superior Técnico IST,
Universidade de Lisboa UL, Avenida Rovisco Pais 1, 1049 Lisboa, Portugal
^2International Centre for Theoretical Sciences (ICTS), Survey No. 151,
Shivakote, Hesaraghatta Hobli, Bengaluru - 560 089, India
Spherical Evolution of the Generalized Harmonic Gauge
Formulation of General Relativity on Compactified Hyperboloidal
Slices
David Hilditch^10000-0001-9960-5293
Received xx; accepted xx
================================================================================================================================
§ INTRODUCTION
Asymptotic flatness is the natural assumption under which to model
isolated systems in general relativity (GR) <cit.>. It may be
formalized in a variety of ways, but common to all is the existence of
a region far from the `center' in which the metric becomes ever closer
to that of the Minkowski spacetime. This leads to the definition of
future null infinity, ℐ^+, which can be thought of as the
collection of endpoints of future directed null geodesics within this
asymptotic region. Future null infinity is crucial in various
mathematical definitions and, crucially for astrophysics, is the place
where gravitational waves (GW) can be unambiguously defined. It is
therefore of vital importance to numerical relativity (NR) to have
access to it.
The most common approach used in NR is to solve the Einstein field
equations (EFEs) in a truncated domain with a timelike outer boundary,
evaluating an approximation to outgoing waves on a set of concentric
spheres and then extrapolate this data to infinity at fixed retarded
time. Eventually waves computed in this way will be affected by
artificial boundary conditions. There are however several proposals to
include ℐ^+ within the computational domain directly. One
popular suggestion is to solve the field equations on compactified
outgoing null-slices. Depending on whether or not data from the
characteristic domain couples back to the method used to treat the
central region, this is called either
Cauchy-Characteristic-Matching (CCM) or
Cauchy-Characteristic-Extraction (CCE). For details see the
review <cit.>. Recent numerical work can be found
in <cit.>. Well-posedness and
numerical convergence analysis of CCE and CCM setups in Bondi-like
gauges can be found in <cit.>.
Another proposal, pioneered by Friedrich <cit.>, is
to foliate spacetime via hyperboloidal slices. These are by definition
spacelike hypersurfaces that terminate at null infinity rather than
spatial infinity like Cauchy slices. For numerical applications
hyperboloidal slices can be combined with a compactified radial
coordinate. Following <cit.> this strategy is now completely
standard for perturbative work in a range of applications, see for
instance <cit.>. The essential subtlety
in working with compactified coordinates, which do the work of
bringing ℐ^+ to a finite coordinate distance, is that they
introduce divergent quantities into the problem. Fortunately in the
asymptotically flat setting these can be off-set by the smallness
coming from decay near ℐ^+. The specific rates therefore
matter. For applications to full GR there are two broad approaches to
this regularization. The first is to introduce curvature quantities as
variables and then work with conformally related variables. This
ultimately leads to the conformal Einstein field
equations <cit.>, which have the advantage of complete
regularity at ℐ^+. Numerical work using the conformal EFEs
is reviewed in <cit.>,
see <cit.>. The second broad category is to
work with evolved variables involving at most one derivative of the
metric, which is more standard in NR. Unfortunately there is no known
formulation of this type that is completely regular. Instead we have
to cope with expressions that are formally singular, but which are
expected to take finite limits at ℐ^+. In this sense such
formulations exist as an edge-case for numerical applications. Key
contributions in this setting include those of
Zenginoğlu <cit.>, who uses harmonic coordinates
for full GR, those of Moncrief and
Rinne <cit.>, who offered a partially
constrained formulation with elliptic gauge conditions, of Bardeen,
Sarbach, Buchman and later Morales <cit.>, with
a frame based approach, and of Vañó-Viñuales and
collaborators <cit.>, who employ
variations of the popular moving-puncture gauge. All of these setups
use a conformally related metric but without making curvature an
evolved variable.
We turn now to give an overview of the approach we follow here, which
was proposed in <cit.>. We work with compactified
hyperboloidal coordinates x^μ, but with evolved variables
associated with a coordinate tensor basis _μ
and dX^ν, as one would obtain in the standard solution of
the Cauchy problem. The idea is to take the evolved variables to
include a rescaling that knocks out their leading order decay, and to
then arrive at equations of motion that are as regular as possible. We
work in the second broad category discussed above, introducing at most
first derivatives of the metric as evolved quantities. The generalized
harmonic gauge (GHG) formulation is among the most popular in use in
NR. It is symmetric hyperbolic, possessing a very simple
characteristic structure with speed of light propagation. We thus take
the uppercase coordinates X^α to be generalized
harmonic, so that □X^α=F^α, with gauge
source functions F^α which we can choose freely. Working
in the heuristic asymptotic systems setting of
Hörmander <cit.> as applied to great effect in the proof
of nonlinear stability of Minkowski <cit.>, it was found that
specific constraint addition <cit.> and choices for the gauge
source functions <cit.>
can be expected to improve the leading asymptotic decay of solutions
to GHG within a large class of initial data.
In the GHG formulation the metric components satisfy a system of
coupled nonlinear wave equations. Therefore, to assess the numerical
feasibility of our approach, we previously studied model systems of
both linear and nonlinear type, focusing in particular on the GBU and
GBUF systems, which were constructed to capture the asymptotic leading
behavior of GR in GHG. Promising numerical results have been presented
both in spherical symmetry <cit.> and full
3d <cit.>. Therefore, here we move on to give a thorough
treatment of spherical GR.
In section <ref> we give details of our geometric
setup, the formulation of GR that we employ, and our approach to
solving for constraint satisfying initial data. Afterwards, in
section <ref>, we briefly discuss our numerical
implementation and then present a suite of hyperboloidal evolutions of
full GR, placing particular emphasis on convergence tests. Results are
given with `pure' hyperboloids and with hyperboloidal
layers <cit.>, for both a height-function and an eikonal
approach to the construction of the hyperboloidal slices. Our
evolutions include various different types of physical initial data,
including pure gauge waves, constraint violating and satisfying data,
spacetimes with and without scalar field matter, perturbations of the
Minkoswki and Schwarzschild spacetimes. For the latter we examine both
quasinormal modes (QNMs) and late time power-law tail decay. We also
present results for initial data that start from a regular center and
then collapse to form a black hole. Section <ref>
contains our conclusions. Latin indices are abstract. Unless otherwise
stated, underlined Greek indices refer to the X^μ basis,
while standard Greek refer to that of x^μ. The metric is taken to
have mostly + signature. Geometric units are used throughout.
§ GEOMETRIC SETUP AND THE EINSTEIN FIELD EQUATIONS
We work in explicit spherical symmetry with spherical polar
coordinates X^μ = (T,R,θ,ϕ). As we develop
our formalism we implicitly assume that in these coordinates the
metric asymptotes to the standard form of Minkowski in spherical
polars near both spatial and future null infinity. Such coordinates
are guaranteed to exist by any reasonable definition of asymptotic
flatness, but for now we do not try to establish the slowest possible
decay that could be dealt with, and instead focus on the inclusion of
a large class of physical spacetimes. The first two variables, C_+
and C_-, are defined by requiring that the vectors
ξ^a = _T^a + C_+ _R^a , ξ^a = _T^a+C_-_R^a
are null. Next, the function δ is defined by demanding that the
covectors
σ_a = e^-δξ_a , σ_a = e^-δξ_a
are normalized by
σ_a _R^a = -σ_a _R^a = 1 .
Finally, we define the areal radius
R≡ e^ϵ/2 R .
Due to spherical symmetry, all
variables { C_+, C_-, δ, ϵ} are functions of (T,R)
only. With all these elements, the metric takes the form
(g_μν) = (
[ 2 e^δ C_+ C_-/C_+ - C_- e^δ(C_- + C_+ )/C_- -C_+ 0 0; e^δ(C_- +C_+ )/C_- - C_+ 2 e^δ/C_+ - C_- 0 0; 0 0 R^2 0; 0 0 0 R^2 sin^2 θ; ]) .
As usual we denote the Levi-Civita derivative of g_ab
by ∇_a.
The metric is split naturally as
g_ab = g_ab + g_ab ,
where g_ab is the {T,R} part of the metric
and g_ab is the metric defined on a sphere of radius R
at time T. These metrics are converted into projection operators
onto their respective subspaces by raising one of their indices by the
inverse metric g^ab.
Our variables admit a simple interpretation. C_± are the local
radial coordinate lightspeeds in coordinates X^μ. The
variable δ determines the determinant of the
two-metric g_ab in these coordinates through
g = e^2δ ,
and ϵ parameterizes the difference between the coordinate and
areal radii.
In spherical symmetry the stress-energy tensor reduces to
( T_μν ) = (
[ T_TT T_TR 0 0; T_TR T_RR 0 0; 0 0 T_θθ 0; 0 0 0 T_θθ sin^2 θ; ]) ,
with (T_TT,T_TR,T_RR,T_θθ) being functions
of (T,R) only. In trace reversed from, the field equations are
R_ab = 8 π( T_ab - 1/2 g_ab T_c^c ) ,
where R_ab is the Ricci tensor of g_ab. Defining D_a as the
covariant derivative associated with g_ab, and
contracting equation (<ref>) with our null-vectors and tracing
in the angular sector, we get
- D_σ D_σR
+ D_σR(D_σ - D_σ) C_+/κ
= 4 π R T_σσ ,
- D_σ D_σR
+ D_σR (D_σ - D_σ) C_-/κ
= 4 πR T_σσ ,
1/2□_2 δ - D_a[e^δ/κ^2(σ^aD_σ C_-
- σ^aD_σC_+) ]
+ 2/R^3M_MS
+ e^δ/κ^3[ D_σC_+ D_σ C_-
- D_σ C_+ D_σ C_- ]
= 8 π T_θθ/R^2
+ 8 πe^δ/κT_σσ ,
□_2R^2 - 2
+ 16 πe^δ/κR^2 T_σσ
= 0 ,
where we use the notation D_σ≡σ^aD_a
and D_σ≡σ^aD_a to denote directional
derivatives along the null-vectors σ
and σ. Likewise subscripts σ and σ
denote contraction with these vectors on that slot of the respective
tensor. For the two-dimensional d'Alembert operator in the TR plane
we write □_2≡g^abD_aD_b and define the
shorthand κ≡ C_+ - C_-. Finally, the Misner-Sharp
mass <cit.> is given by
M_MS≡ (1/2) R(
2e^δ/κ (D_σR) (D_σR) + 1 ) .
The difference between the projected d'Alembert operator and the
full 3+1 dimensional version, defined
by □≡ g^ab∇_a∇_b, is
(□-□_2)φ = - 2/Re^δ/κ( D_σRD_σφ
+ D_σRD_σφ)
when acting on a spherically symmetric function φ. We observe that
the field equations are highly structured when expressed in these
variables. For instance, null directional derivatives D_σ
and D_σ of C_± always appear with a κ^-1
prefactor and, taking this into account, outside of the principal part
the variable δ appears only in the
combination e^δ/κ. Regularity at the origin is discussed
below.
Due to spherical symmetry the two radial null-vectors σ^a
and σ^a must be tangent to outgoing and incoming geodesic
null-curves. They satisfy
D_σσ^a =(D_bσ^b)σ^a
=κ^-1[(D_σ-D_σ)C_+]σ^a ,
D_σσ^a = (D_bσ^b)σ^a
= κ^-1[(D_σ-D_σ)C_-]σ^a .
We shall consider a minimally coupled massless scalar field as the
matter model, whose equation of motion is
□ψ≡ g^ab∇_a∇_bψ = 0 .
The null components of the stress energy tensor for this matter
content are given by
T_σσ = (D_σψ)^2 ,
T_σσ
= (D_σψ)^2 ,
T_σσ = 0 ,
T_θθ = e^δ/κR^2
D_σψ D_σψ .
§.§ Generalized Harmonic Gauge
In GHG the coordinates satisfy wave
equations □X^α=F^α. In practice this is
imposed by rewriting these wave equations and defining constraints
that measure whether or not they are satisfied. This results in
C^μ≡Γ^μ + F^μ =
0 ,
which we will refer to as GHG or harmonic constraints,
where Γ^μ = g^νλ Γ^μ_νλ are the contracted
Christoffels with
Γ^μ_νλ
= 1/2 g^μρ
(_νg_ρλ
+ _λ g_νρ
- _ρ g_νλ) ,
and F^μ's are the gauge source functions, and we recall
that □X^α=-Γ^α. When the GHG
constraints are satisfied throughout the evolution, the EFEs are
equivalent to the reduced Einstein equations (rEFEs),
R_ab - ∇_(a C_b) + W_ab = 8π(
T_ab - 1/2 g_ab T_c^c ) ,
where the constraint addition tensor W_ab=W_(ab)(C_c) may be any
tensor constructed from the harmonic constraints with the property
that W_ab(0)=0, so that constraint propagation is maintained. As
usual, curved parentheses in subscripts denote the symmetric
part. With this adjustment, metric components satisfy nonlinear
curved-space wave equations. In spherical symmetry, the only free
components of F^μ are F^T and F^R, or equivalently the
null components F^σ≡ F^aσ_a
and F^σ≡ F^aσ_a. The angular components
of these constraints are satisfied identically, provided that we
choose
F^θ = R^-2θ , F^ϕ = 0 .
The null components of these constraints then read
C^σ ≡ C^aσ_a = F^σ
+2D_σC_+/κ
-2D_σR/R
C^σ ≡ C^aσ_a = F^σ
-2D_σ C_-/κ
-2D_σR/R
The gauge source functions F^σ and F^σ will later
be used to help impose asymptotic properties of solutions to the rEFEs
towards ℐ^+, but we can already keep in mind
that F^σ≃ F^σ≃ 2/R, so that the first
and third terms in the right-hand-sides of these equations cancel each
other to leading order near ℐ^+. Demanding regularity of
the rEFEs at the origin will furthermore restrict the limit of these
functions at the origin.
In GHG the metric components would be naively expected to decay like
solutions to the wave equation. Incoming null-derivatives of C_+
and ϵ therefore ought to decay at best
like R^-1 as we head out to ℐ^+. The
harmonic constraints (<ref>), however,
assert that they should in fact be equal to terms that decay
faster. Following <cit.>, who used asymptotic expansions, it
is possible to include constraint additions in the rEFEs so that even
when the constraints are violated, these two specific incoming
derivatives are expected to decay more rapidly. Making such constraint
addition and subsequently redefining the constraint addition
tensor W_ab, we can write the rEFEs as,
D_σ(2/κR^2D_σC_+ )
+RD_σ(R F^σ)
-D_σR^2D_σ C_+/κ
-R^2 W_σσ
= -8πR^2T_σσ ,
D_σ(2/κR^2D_σ C_-)
-RD_σ(RF^σ)
-D_σR^2D_σ C_-/κ
+R^2 W_σσ
= 8πR^2T_σσ ,
□_2δ + D_a(g^a_bF^b)
+ 2e^δ/κ^3[ D_σC_+ D_σ C_-
- D_σ C_+ D_σ C_- ]
+2/R^2(1-2M_MS/R)
+2e^δ/κW_σσ
= 16 π T_θθ/R^2 ,
□_2R^2 - 2 -2R^2W_θθ
= -16πe^δ/κR^2 T_σσ .
When performing a fully first-order reduction of these equations in
terms of null derivatives, a commutator between σ^a
and σ^a derivatives has to be calculated. The action of
this commutator acting on a general spherically symmetric function f
is
[σ,σ]^a =1/κ
(D_σ C_- - D_σC_+)(σ^a - σ^a )
+(D_σδ)σ^a
-(D_σδ) σ^a .
§.§ Regularizing the origin
When describing a spacetime with regular origin in spherical-like
coordinates the metric components have a well-defined parity. Diagonal
components are even functions of R, whereas TR components are
odd. Translating these conditions to our variables we find
that C_+(T,R) + C_-(T,R) is an odd function of R,
whereas C_+(T,R) - C_-(T,R), δ(T,R) and ϵ(T,R) are
even. These parity conditions imply
C_± (T,-R) = -C_∓ (T,R) .
Derivatives inherit parity in the obvious way. Regular initial data
moreover require
lim_R → 0[ ϵ(T,R) - δ(T,R)
+ ln (κ(T,R)/2) ] = O(R^2) .
Since all the terms in the above limit are even functions of R, this
limit is automatically satisfied by imposing the condition
ϵ(T,0) = δ(T,0) - ln(κ(T,0)/2) .
In other words, this condition says that the 0th order term in
Taylor's expansion of ϵ at the origin should satisfy the
above condition.
All these parity conditions assure regularity of the EFEs at the
origin if the initial data are smooth there. To ensure a similar
regularity for the rEFEs, we need to choose F^μ such
that C^μ and its first derivatives remain regular
there. By definition, F^T and F^R are even and odd functions
of R, respectively. From the expressions of the
constraints (<ref>), it can be seen that
the necessary and sufficient conditions on constraints for a regular
origin are that F^T should be regular and F^R must take the
leading limit 2e^-ϵ/R at the origin, with an additional
regular odd function permitted. These conditions translate to the null
components as
F^σ≃2e^-ϵ/2/R ,
F^σ≃ -2e^-ϵ/2/R ,
near the origin.
The constraint addition tensor as defined
in (<ref>) also needs to be regular at the
origin. Since the rEFEs are already regular with the previous choices,
and the purpose of constraint addition is to regularize the
asymptotics, we just take a sufficiently rapidly vanishing constraint
addition tensor at the origin. With the adjusted definitions
of W_ab in (<ref>), this corresponds instead
to
W_σσ = (e^-δ_R C_+
+ D_σ(ϵ +2ln R))C^σ ,
W_σσ = (e^-δ_R C_-
+ D_σ(ϵ +2ln R))C^σ ,
W_θθ = Re^δ/κ R^2(D_σR C^σ + D_σR C^σ ) ,
W_σσ = 0 .
§.§ Gauge Sources and Regularization
at ℐ^+
Below we will change from the generalized harmonic
coordinates X^α to compactified hyperboloidal
coordinates x^α. Pushing the field
Eqs. (<ref>) through this change will only result
in a set of PDEs that can be treated numerically if solutions decay
fast enough. As discussed above, the asymptotic decay of C_+^R
and ϵ can be influenced by adding suitable combinations of
the harmonic constraints to the field equations,
see <cit.>. These have been incorporated
into (<ref>), so that
W_σσ = W_σσ =
W_θθ = W_σσ = 0 ,
already includes damping terms in the field equations that, at least
within a large class of initial data, should result in decay
like D_σC_+=D_σϵ=O(R^-2)
near ℐ^+. Ideally, we would like to obtain similar
improved decay (beyond that expected for the wave equation) for the
variables C_- and δ. The remaining tool we have to achieve
this is to use the gauge source functions F^σ
and F^σ. Following the approach of <cit.>, we take,
F^σ = 2/R
+ 2p/R(e^δ -1) ,
F^σ = -2/R
-p/R(1+C_-)
+ 1/Rf_D .
In spherical vacuum the choice p=0 and f_D=0 grants δ
and C_- asymptotic decay like solutions to the wave equation,
whereas p=1 should give improved decay on incoming null derivatives
like that of C_+ and ϵ. Once we introduce the scalar field
the situation is more complicated because, without care, the slow
decay of T_σσ results in poor decay for the
incoming lightspeed
like C_-∼-1+(lnR)/R. In 3+1 dimensions
without symmetry gravitational waves induce similar behavior.
Fortunately, this shortcoming of plain harmonic gauge can be overcome
by using the gauge driver function f_D to absorb the logarithmic
terms. Model problems for this have been studied both in spherical
symmetry and in full 3d <cit.>. In particular, we take the equation of motion
□f_D -2/χ(R)∂_T f_D
- 32 π (∂_T ψ)^2 = 0 ,
for the gauge driver f_D. The basic idea is that by insisting on a
wave-equation principal part hyperbolicity is guaranteed whilst
simultaneously the second term suppresses the natural radiation field
associated with the wave operator, and the third forces it equal to a
desired value that eradicates the slowest decay in the worst behaved
of the Einstein equations (the wave equation for C_-
in (<ref>)). Details can be found in the
references above. Here we have defined χ≡√(1+R^2), so
that χ(R) is an even function of R, χ∼ 1 near the
origin and χ∼ R near ℐ^+. When starting from black
hole initial data we adjust slightly the
choice (<ref>) based on compatibility
with the Schwarzschild solution in a reference coordinate system. The
specifics are explained below.
We have described separately how the choice of gauge and constraint
addition have to be taken in order to have regularity at the two
potentially problematic ends, the origin and ℐ^+. To
transition smoothly from the origin choice (<ref>) to the
asymptotic choice (<ref>) in applications we multiply the
former (origin choice) by a function ι(R) that is
identically 1 in an open region containing the origin and decays as
a Gaussian asymptotically, namely
ι(R) =
1 , R<R_0
e^-((R-R_0)/σ_0)^4 , R≥ R_0 .
We multiply the latter (asymptotic choice) by 1-ι(R).
§.§ First order reduction and rescaling
In our numerical implementation we use a first order reduction of the
field equations. For this we introduce the first order reduction (FOR)
variables
θ^±≡D_σ C_±/κ ,
θ^±≡D_σ C_±/κ ,
ζ^+ ≡ D_σ ζ ,
ζ^- ≡ D_σ ζ ,
where ζ stands for either δ, ϵ, ψ or f_D.
Parity conditions for the FOR variables are easily obtained from the
definitions (<ref>) and the parity conditions of
the metric components described above, plus the fact that the scalar
field ψ and the gauge driver f_D are even functions
of R. These conditions are
θ^± (T,-R) = - θ^∓ (T,R) ,
θ^± (T,-R) = - θ^∓ (T,R) ,
ζ^+(T,-R) = ζ^-(T,R) , ζ^-(T,-R) = ζ^+(T,R) .
Time derivatives of these variables satisfy the same parity
conditions, whereas radial derivatives flip the signs on the right
hand sides.
Introducing these variables creates new constraints. According to our
definitions, they read
C_C_±≡_R C_±
- e^δ(θ^± - θ^±) ,
C_ζ≡_R ζ - e^δζ^+ - ζ^-/κ .
As discussed earlier, at least within a large class of initial data, a
particular behavior of the variables is expected as ℐ^+ is
approached. Therefore, in order to obtain O(1) variables throughout
the whole domain, we rescale the evolved functions according to their
expected asymptotic decay. We define
C̃_±≡χ (C_±∓ 1) ,
Θ^±≡χ^2θ^± ,
Θ^±≡χθ^± ,
Z ≡χζ ,
Z^+ ≡χ^2 ζ^+ ,
Z^- ≡χζ^- ,
where the ∓ 1 in the C̃_± is their Minkowski value, and
so it is only this difference that decays. In words, the rule for the
definitions is that rescaling by the given powers of χ maps from
the “ζ” variables and their null derivatives to the
capitalized variables “Z”. In our numerical implementation all of
the evolution equations and constraints are written in terms of these
rescaled variables.
For brevity we do not state the full symmetric hyperbolic equations of
motion for the rescaled reduction variables, but they are
straightforwardly derived, and can be found in the Mathematica
notebooks <cit.> that accompany this
paper. Instead, to illustrate the procedure and the basic shape of the
expressions obtained in the reduction, in particular
at ℐ^+, let us consider a variation of the `ugly' model
equation employed in earlier work, namely
□φ + 2 p R^-1(D^aR)D_aφ =S ,
with p a non-negative integer and where the source term S may be
thought of as decaying like R^-3
near ℐ^+. (The constant p
in (<ref>) corresponds to the value
of p in (<ref>) for the variables δ and C_-). As
elsewhere, we assume sphericity, but now with an arbitrary given
asymptotically flat metric using the same notation as above. We
introduce the rescaled variable =Rφ, and
define rescaled reduction variables
^+=R^2D_ξφ , ^-=RD_ξφ .
This gives rise to the reduction constraint
C_φ =_Rφ-κ^-1e^δR^-1
[R^-1^+-^-] .
The equations of motion for the reduction variables can then be
written as
D_σ^- + pR^-1(D_σR)^-
+(1+p)R^-2(D_σR)^+
-1κ(D_σC_+-D_σC_-)^-
+ κ2RS = 0 ,
D_σ^+
- (1-p)R^-1(D_σR)^+
+(1+p)DR^-
-1κ(D_σC_+-D_σC_-)^+
+κ2R^2S = 0 .
Introducing the change to hyperboloidal coordinates we effectively
have to multiply the first of these equations by a power R'∼ R^n,
with n∈(1,2]. If the fields have wave equation asymptotics (p=0)
and S=O(R^-3), this creates no formally singular
terms. If instead p is taken to be a positive integer, suitable
initial data allows for ^- to decay faster
than O(R^-1), but this still renders the second term in
the first of these equations formally singular. With our first order
reduction, the EFEs are similar to this system with either p=0 or
with p=1, so that all formally singular terms appear with an
identical structure and can be straightforwardly treated by
L'Hôpital's rule.
§.§ Hyperboloidal Foliations and the
Dual-Frame Formalism
The purpose of the present work is to perform the numerical evolutions
within hyperboloidal slicings of spacetime. To do so, we need to
introduce the appropriate changes of coordinates that define them.
Following the dual-foliation strategy of <cit.>,
this change is made without transforming the tensor basis we use. In
our setting, this gives greater freedom in the choice of coordinates
but without interfering with hyperbolicity, and ultimately permits us
to include ℐ^+ within the computational domain.
The first step in our construction is to introduce a compactification
so that we can bring R→∞ to a finite coordinate
distance. This is accomplished by defining a new coordinate r
through
R(r) = r_m + r-r_m/Ω(r)^1/n-1Θ(r-r_m)
,
Ω(r) =1-(r-r_m)^2/(r_ℐ-r_m)^2 ,
1< n ≤ 2 ,
where Θ is the Heaviside function, so
that R→∞ corresponds
to r→ r_ℐ. For simplicity we
take r_ℐ = 1. The compactification is the identity in
the range r∈ [0,r_m]. Note that R is a monotonically increasing
function of r, and so it is invertible. The derivative diverges
asymptotically at the rate dR/dr≡ R'∼ R^n, so that the
parameter n serves to control the rate of
compactification <cit.>. By definition a hyperboloidal
slice is one which remains spacelike everywhere but terminates at null
infinity. We construct our hyperboloidal time coordinate according to
two strategies, which we explain in the following subsections.
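As an illustration of this compactification, the following short Python sketch (not the evolution code; the parameter values r_m, r_ℐ and n below are placeholders) evaluates R(r) and R'(r) from the definition and checks the growth rate R'∼ R^n:

```python
import numpy as np

def compactification(r, r_m=0.4, r_scri=1.0, n=2.0):
    """R(r) and dR/dr from Eq. (<ref>): identity for r <= r_m and
    R = r_m + (r - r_m) Omega^{-1/(n-1)} beyond, with
    Omega = 1 - (r - r_m)^2/(r_scri - r_m)^2; valid for 0 <= r < r_scri."""
    r = np.asarray(r, dtype=float)
    d, w = r - r_m, r_scri - r_m
    Omega = 1.0 - (d / w) ** 2
    R = np.where(r <= r_m, r, r_m + d * Omega ** (-1.0 / (n - 1.0)))
    dR = np.where(r <= r_m, 1.0,
                  Omega ** (-1.0 / (n - 1.0))
                  + (2.0 * d ** 2 / (w ** 2 * (n - 1.0))) * Omega ** (-1.0 / (n - 1.0) - 1.0))
    return R, dR

r = np.linspace(0.0, 0.999, 200)
R, dR = compactification(r)
# near r_scri the ratio R'/R^n approaches a constant, confirming R' ~ R^n
print(np.allclose(dR[-5:] / R[-5:] ** 2.0, dR[-1] / R[-1] ** 2.0, rtol=0.2))
```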
*The height function approach: The first method we use to
construct hyperboloidal time employs a height function. Here, level
sets of the time coordinate are explicitly lifted up by a given
function in a spacetime diagram relative to those of the harmonic time
coordinate T, which in contrast are taken to terminate at spatial
infinity. For this we define a time function that asymptotes to
retarded time through
t = T - H(R) ,
where H(R) is called the height function because it encodes how the
new slices are lifted. Recalling the expected
asymptotics D_σ C_+= D_σ C_+ = O(R^-2)
of the outgoing coordinate lightspeed, it follows that the associated
`mass-term' at ℐ^+
C̃_+|_ℐ^+ = m_C_+ ,
is constant. Imposing that the outgoing coordinate lightspeed c_+ in
the hyperboloidal coordinates is bounded <cit.> requires knowledge of this mass-term. We therefore
choose
H(R) = R - m_C_+ln R - r .
Observe that the term involving m_C_+ mimics the tortoise
coordinate of Schwarzschild spacetime asymptotically. The appearance
of m_C_+ here is the reason that we need improved asymptotic decay
in C_+, since if the mass-term were time dependent the
height-function ansatz as defined in (<ref>) would
fail.
If we take initial data such that C_+ decays sufficiently fast, in
particular so that m_C_+≡ 0, we recover the definition used
in earlier numerical works to treat systems of linear and nonlinear
wave equations in the Minkowski spacetime <cit.>. The inclusion of the term m_C_+ here
is important both to guarantee that the outgoing radial coordinate
lightspeed in the compactified hyperboloidal coordinates is O(1),
but also, as illustrated in Figure <ref>, to obtain the
desired global structure of the slices.
With this construction, the Jacobians to change from (T,R)
to (t,r) derivatives are given by
_R = 1/R'(r)_r
+ ( 1 - 1/R'(r)
- m_C_+/ R(r) ) _t ,
_T = _t ,
where we see that the mass-term places a correction that dominates the
term coming from the compactification itself.
*The eikonal approach: Our second approach to construct a
hyperboloidal time coordinate is by defining
t = u + r
where u satisfies the eikonal equation
∇^cu∇_cu = 0 .
As explained in <cit.>, demanding
that ∇^a u ∝ξ^a leads to having outgoing coordinate
lightspeeds identically one in the compactified hyperboloidal
coordinates. This means that hyperboloidal slices built this way adapt
dynamically so that we can control outgoing speeds and should help
avoid undesirable coordinate red and blue-shift effects on outgoing
signals.
The idea in the eikonal approach is to derive
equation (<ref>) and project it
along σ^a to get an evolution equation
for U^-≡∇_σu, which can then be solved
alongside the rEFEs, while the
condition -∇^a u ∝σ^a gives a fixed functional form
to U^+≡ D_σ u in terms of U^- and the metric
variables. The equation we get for U^- is of advection-type, with
principal part decoupled from the principal part of the rEFEs, so
symmetric-hyperbolicity of the composite system is trivially
preserved.
Similarly to the previous sections, the choice we take for the
function u that satisfies the eikonal equation can only be made
asymptotically, as parity of u at the origin is complicated and in
any case is incompatible with black hole excision. The reason for this
is that since the eikonal coordinates forces the outgoing lightspeeds
to be identically one, we necessarily need boundary conditions at the
black hole boundary. To overcome these obstacles, we place a
source S in the right-hand-side of equation (<ref>),
instead solving
∇^cu∇_cu = S ,
and choosing S so that u ≃ T-R near the origin/horizon and
satisfies the eikonal equation identically only
near ℐ^+. This reduces the Jacobians to the identity
when R is small whilst allowing them to take the desired form
near ℐ^+. (See <cit.> for full details).
Concretely, if we decompose the vector u^a≡ -∇^a u with
the identity
u^a = e^δ/κ ( U^- σ^a + U^+ σ^a )
the evolution equation for U^- reads
u^a D_a U^- = ( u^a D_a σ^b )u_b
+ 1/2D_σS .
When the expressions on the right-hand-side are expanded out we see
that they in fact contain incoming null derivatives
of C_+. Rewriting this equation in the compactified hyperboloidal
coordinates requires multiplying by R^n, so the improved asymptotic
decay of C_+ is needed just as in the height function setting. The
sourced eikonal equation (<ref>) places a constraint
for U^+ of the form
U^+ = -κ/2U^-e^-δS .
The Jacobians in this setting that change from (T,R) derivatives
to (t,r) ones are
_R = 1/R'(r)_r + ( e^δ/κ (U^+ - U^-)
+ 1/R'(r)) _t ,
_T = e^δ/κ
(C_+ U^- - C_- U^+)_t .
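For reference, these coefficients can be transcribed directly into a pointwise evaluation; the following Python sketch uses the same notation, with all inputs, including the eikonal source S, assumed to be supplied:

```python
import numpy as np

def eikonal_jacobian(C_plus, C_minus, delta, U_minus, S, dRdr):
    """Jacobian coefficients of the eikonal slices: d/dR = a_r d/dr + a_t d/dt and
    d/dT = b_t d/dt, with U^+ fixed by the sourced eikonal constraint (<ref>)."""
    kappa = C_plus - C_minus
    U_plus = -kappa * np.exp(-delta) * S / (2.0 * U_minus)
    a_r = 1.0 / dRdr
    a_t = np.exp(delta) / kappa * (U_plus - U_minus) + 1.0 / dRdr
    b_t = np.exp(delta) / kappa * (C_plus * U_minus - C_minus * U_plus)
    return a_r, a_t, b_t, U_plus
```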
§.§ Bondi Mass
In the spherical setting, for each hyperboloidal slice, the Bondi
mass <cit.> can be found by taking the limit of the
Misner-Sharp mass as one tends to ℐ^+. From our
expression for M_MS, eq. (<ref>), this
leads to the expression
M_B ≡lim_R→∞ M_MS
= lim_R→∞1/4 RE^-
+1/8( 2C̃^- -2C̃_+
+ 4Δ -2E^+
+(C̃_+ + C̃_-)E^- + E^-E^+
- 4E +3E^-E )
Recalling that ϵ has improved asymptotic decay, the first
term is formally singular, meaning it attains a finite limit by a
product of a term that diverges, R, and a term that decays at least
as O(R^-1), namely E^-. As opposed to previous formally singular
terms appearing in the equations of motion, the evaluation of this
term by use of L'Hôpital's rule is cumbersome. However, since at the
continuum level all physical quantities are defined up to constraint
addition, and noting that precisely this reduction variable appears in
the σ^a component of the GHG constraints
(eq. (<ref>)), the expression
for M_B can be regularized by a constraint addition that
asymptotes at leading order to -R^2 C^σ/4. With
this particular choice the regularized expression for M_B
reads instead
M_B = 1/4( -C̃_+ -C̃_- +F_D
- E^+ -2Θ^- )
where clearly all terms are now regular O(1) at ℐ^+. It
is this expression that will be
used when we evaluate M_B for our numerical simulations.
The Bondi mass has to satisfy two requirements for a physical solution
of the EFEs. First it has to be non-negative, and second it should be
monotonically decreasing as radiation leaves the spacetime
through ℐ^+. This last property can be deduced at the
continuum level by the Bondi mass-loss formula, which in terms of our
matter model, variables, our particular constraint addition and
choice of gauge reads
Ṁ_B = -π (Ψ^-)^2 .
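In practice these two expressions serve as simple diagnostics. The following sketch (Python; the argument names mirror the rescaled variables of the text and are otherwise assumptions, and it is not the diagnostic code of our implementation) evaluates M_B from data at the ℐ^+ grid point and checks the mass-loss law against a finite-difference time derivative of a recorded time series:

```python
import numpy as np

def bondi_mass(Ct_plus, Ct_minus, F_D, E_plus, Theta_minus):
    """Regularized Bondi mass of Eq. (<ref>) from the rescaled variables at scri+."""
    return 0.25 * (-Ct_plus - Ct_minus + F_D - E_plus - 2.0 * Theta_minus)

def mass_loss_residual(t, M_B, Psi_minus):
    """Pointwise residual of dM_B/dt = -pi (Psi^-)^2 for time series taken at scri+."""
    return np.gradient(M_B, t) + np.pi * Psi_minus ** 2
```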
§.§ Constraint Satisfying Initial Data
The evolution Eqs. (<ref>) are equivalent to
the EFEs only when the GHG constraints are identically satisfied over
the entire domain. As these constraints satisfy a system of coupled
second-order nonlinear homogeneous hyperbolic PDEs <cit.>, they remain satisfied throughout the evolution if they
are satisfied in the initial data up to their first time
derivative. To obtain this formal evolution system satisfied by the
harmonic constraints we just have to take a divergence of the
rEFEs (<ref>). Being a free evolution scheme,
the GHG formulation of the EFEs provides us with no other way to
ensure that these constraints are satisfied throughout the evolution.
Recall that we employ two sets of coordinates,
namely X^α, which are used to define the tensor basis
for the GHG formulation, and the compactified hyperboloidal
coordinates x^α. To avoid confusion in our discussion of the
constraints it is therefore most convenient to rely on the rEFEs in
abstract index form as
in (<ref>). Trace-reversing the rEFEs and
contracting once with n^a, the future pointing unit normal to our
hyperboloidal (constant t) slices gives
∇_n C_a = 2 ℳ_a
+ (n_a γ^bc - n^c γ_a^b) ∇_b C_c
+ 2 W_abn^b-n_aW ,
with ℳ_a=(G_ab-8π T_ab)n^b similar to the
expression given in <cit.>, W the trace of the
constraint addition tensor W_ab and γ_ab the spatial
metric. Here C_a is the one form of the GHG constraints defined in
the X^α tensor basis by (<ref>),
and ℳ_a = 0 encodes the Hamiltonian and momentum
constraints associated with the hyperboloidal foliation. From this
expression we observe that it is sufficient to choose initial data
that satisfy the harmonic constraints together with data that satisfy
the standard Hamiltonian and momentum constraints as usual. From this
we then need to reconstruct the evolution variables employed in our
formulation.
Given the spatial metric and extrinsic curvature associated with t
in the initial data, a general strategy for the latter step can be
formulated within the language of <cit.>. In broad strokes, this
would involve constructing appropriate projections of the Jacobians
that map between the two coordinate tensor bases, choosing the lapse
function of the uppercase T foliation, together with its
Lie-derivative along the uppercase normal vector to the T-foliation,
and then combining these quantities to build the uppercase spatial
metric and extrinsic curvature. From there we could choose the
uppercase shift vector and switch to our desired choice for the
evolved variables. This warrants a careful treatment without symmetry,
but for now we settle on a bespoke spherical approach sufficient for
our present needs.
We begin with a collection of useful expressions. First we express the
spatial metric and extrinsic curvature associated with the time
coordinate t in terms of our variables. To accomplish this, we
treat t as a general time coordinate that defines a foliation
and r to be a radial coordinate on those slices. Later, we shall
take the special case of hyperboloidal coordinates. The uppercase
coordinates (T,R) are then taken as
T ≡ T(t,r) , R ≡ R(r) .
The metric in the lowercase coordinates can then be defined in terms
of the Jacobians J_μ^μ≡_μ X^μ as
g_μν = J_μ^μ J_ν^ν g_μν ,
with g_μν given
in (<ref>). Denoting t and r derivatives by
dot () and prime ('), respectively, and the future directed
unit normal to the constant t hypersurfaces by n^a, the
standard ADM variables lapse, shift and spatial metric, denoted in the
standard notation, are expressed in terms of the GHG ones as,
α = e^δ/2 R' Ṫ √(κ)/√(2) √(R' - C_- T') √(R' - C_+ T') ,
β^r = - Ṫ( (C_+ + C_-)
R' - 2 C_+ C_- T' )/2 (R' - C_- T')
(R' - C_+ T') ,
γ_ij = (
[ γ_rr 0 0; 0 R^2 e^ϵ 0; 0 0 R^2 e^ϵsin ^2 θ; ]) ,
with
γ_rr = 1/γ^rr
= 2 e^δ/κ(R'-C_- T')
(R'-C_+ T') .
Similarly, the extrinsic curvature, given
by K_ij = - _n γ_ij/2, takes the form
K_ij = (
[ K_rr 0 0; 0 K_θθ 0; 0 0 K_θθ sin ^2 θ; ]) ,
with
K_rr≡β^r γ_rr' + 2 (β^r)' γ_rr - γ̇_rr/2 α ,
K_θθ≡R^2 e^ϵ/α( R' β^r/R + (β^r ϵ' - ϵ̇)/2) .
Plugging (<ref>) into the former of these
expressions results in an evolution equation involving our variables.
We also define the normal and radial derivatives for the massless
scalar field ψ described above as
ψ_n ≡_n ψ = n^μ_μψ ,
ψ_r ≡_r ψ ,
and the corresponding charge and current densities by
ρ_ψ = 1/2( ψ_n^2 + γ^rr ψ_r^2 ) ,
j^r_ψ = γ^rr ψ_n ψ_r .
The nontrivial components of the vector ℳ_a are then
ℋ≡ 2 ℳ_n ≡
2 n^a n^b ( R_ab - 1/2 g_ab R - 8 π T_ab) ,
and
ℳ_r ≡ n^a γ_r^b ( R_ab
- 1/2 g_ab R - 8 π T_ab) .
In terms of the above variables,
and ^γ K_rr≡ K_rrγ^rr
and ^γ K_T ≡ K_θθγ^θθ,
the Hamiltonian and momentum constraint equations are expressed as
2 ^γ K_T^2 + 4 ^γ K_rr ^γ K_T
- 16 πρ - 2 (R' (γ^rr)'+ 2 R” γ^rr)/R
+ 2 (e^-ϵ - R'^2 γ^rr)/R^2
- ((γ^rr)'
+ 6 R' γ^rr/R) ϵ'
- 3/2γ^rr ϵ'^2
- 2 γ^rr ϵ” = 0 ,
and
- 8 π j_r + 2 R'/R
(^γ K_T - ^γ K_rr)
+ (^γ K_T - ^γ K_rr) ϵ'
+ 2 ^γ K_T' = 0 ,
respectively. Along with the GHG constraints these are the equations
that need to be satisfied by initial data to ensure we obtain
solutions of GR. Our bespoke procedure for spherical initial data is
divided into three steps:
Step 1: Reformulate the Hamiltonian and momentum constraints,
eqs. (<ref>) and (<ref>). Here, we work
assuming that the initial matter distribution ψ_n and ψ_r
and the Jacobian are given.
Step 2: Formulate the GHG
constraints (<ref>). At this stage one
can choose initial data for the gauge driver f_D defined
in (<ref>).
Step 3: Make a concrete choice for the foliation and solve the
constraints. Here, we can specify T(t,r) at t = 0 to construct
the initial slice of interest, which is hyperboloidal in our
case. This way, we keep the entire procedure general and keep the
first two steps agnostic to the choice of foliation.
We see that both Hamiltonian and momentum constraint equations are
formally singular at the origin and at ℐ^+. To regularize
these equations, we extensively use the fact that spherically
symmetric vacuum solutions, like Minkowski and Schwarzschild, solve
these equations exactly. To facilitate this, we shall take the
general solution to a nontrivial matter distribution to be corrections
over these vacuum solutions. This idea has been extensively explored
in a series of works about formulations of the constraint
equations <cit.> with different PDE character.
The GHG constraint equations are much easier to solve due to the fact
that they contain time derivatives of our evolved fields C_±,
which can simply be set. Trivial data for f_D leads to undesired
asymptotics for Ċ_-. Nonetheless, the latter can be fully
controlled by carefully choosing f_D in the initial data, which we
shall do while solving for the GHG constraints.
A significant advantage of this approach is that all these results
apply to any spacetime foliation. For our specific interests, we shall
restrict ourselves to the hyperboloidal foliations constructed via the
height function and eikonal approach described above, and study the
nonlinear perturbations of Minkowski and Schwarzschild backgrounds.
§.§.§ Minkowski perturbations
For the Minkowski spacetime in global inertial time and standard
spherical polar coordinates, C_+ = - C_- = 1
and δ = ϵ = 0, all with vanishing time derivatives. This
gives a simplified form to the ADM variables defined in
Eqs. (<ref>)-(<ref>), that
we shall denote with the subscript “MK", for Minkowski, for
example α_MK. We will utilize these values to simplify
the general constraint equations.
As mentioned above, our first step is to solve the Hamiltonian and
momentum constraint equations. We choose to solve them
for γ^rr and ^γ K_T, respectively, as these
equations are linear in these variables. Assigning the Minkowski
values obtained above to ^γ K_rr and ϵ, and taking
the substitution
^γ K_T = ^γ K^MK_T
+ K_T^(1)/R ,
j^r = j^r_ψ ,
reduces the momentum constraint equation to
(K_T^(1))' = 4 π R ψ_n ψ_r .
To set ^γ K^MK_rr and ^γ K^MK_T
and so forth, we choose the Minkowski values for our evolved variables
just stated, then apply the Jacobian transformations given above. This
equation is regular whenever ψ_n ψ_r ∼ 1/R in the
initial data. Thereby, the entire impact of the current j_r is now
contained in the correction term K_T^(1). Setting
moreover ψ_n = 0 and imposing the boundary
condition K_T^(1) = 0 at the origin, we obtain the trivial
solution for K_T^(1) in the initial data. Further, taking
γ^rr = (γ_MK)^rr
+ γ^(1)/R'^2 R ,
ρ = ρ_ψ ,
gives the simple form
(γ^(1))' = - 4 π R^2 R' ψ_r^2
( (γ_MK)^rr
+ γ^(1)/R R') ,
to the Hamiltonian constraint equation, which is regular
whenever ψ_r ∼ 1/R in the initial data. For this class of
initial data, all information of the initial matter
distribution ψ_r is now encoded in γ^(1). This is the
final equation we solve for generating the Hamiltonian and momentum
constraint satisfying initial data in the Minkowski case.
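A minimal quadrature of this ODE is sketched below. The names are ours, the vacuum inverse metric is supplied as a callable, and we read the bracketed factor as the full γ^rr built from the substitution above; this is an illustration rather than the production solver.

import numpy as np
from scipy.integrate import solve_ivp

def solve_gamma1(r_grid, psi_r, R, R_pr, gamma_rr_MK_inv):
    # Quadrature of the ODE for gamma^(1), reading the bracketed factor as
    # gamma^{rr} = (gamma_MK)^{rr} + gamma^(1)/(R'^2 R).
    # psi_r, R, R_pr and gamma_rr_MK_inv are callables of r.
    def rhs(r, y):
        gamma1 = y[0]
        Rv, Rp = R(r), R_pr(r)
        corr = gamma1 / (Rp**2 * Rv) if Rv > 0.0 else 0.0   # regular centre
        g_rr_inv = gamma_rr_MK_inv(r) + corr
        return [-4.0 * np.pi * Rv**2 * Rp * psi_r(r)**2 * g_rr_inv]
    sol = solve_ivp(rhs, (r_grid[0], r_grid[-1]), [0.0],
                    t_eval=r_grid, rtol=1e-10, atol=1e-12)
    return sol.y[0]   # the Misner-Sharp mass of the data is then -gamma1/2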
The second step involves solving the GHG
constraints (<ref>). We will solve them
for Ċ_+ and Ċ_- using the gauge source functions given
in (<ref>), along with all the
simplifying assumptions and solutions to the Hamiltonian and momentum
constraint equations given above. We further take p = 0 in the gauge
source functions and C_+ = - C_- = 1 in the initial data, which
translates to the ADM variables as
α = ( α_MK √(γ_rr (γ_MK)^rr))
, β^μ = β^μ_MK .
Now we observe that the trivial data for the gauge driver f_D gives
data for Ċ_- which is nonzero and O(1)
at ℐ^+. This is incompatible with our regularization
scheme adopted from <cit.>. As previously noted, the data for f_D is
sufficient to establish the asymptotic behavior of Ċ_- in the
initial data to any desired level. For our convenience, we opt for a
choice that causes it to decay more rapidly than any inverse power
of R. One such choice is
f_D = - ( γ^(1)/R'^2 R
- β_MK^r/Ṫ R'(1 - √(1 + (γ_MK)_rr γ^(1)/R R'^2)) ) ×
2 χ(R) R' (R' + T')/R ,
and ḟ_D = 0. Interestingly, this data depends on the
foliation. This exhausts all our requirements to build the data. To
sum up, we have assigned Minkowski values to C_±, ϵ
and ^γ K_rr, which translates to δ̇. The
Minkowski values are determined by transforming the global inertial
values of our evolved variables through the appropriate Jacobians. We
must then solve the constraints to obtain Ċ_±, γ^rr
and ^γ K_T. The last two translate to δ
and ϵ̇ in the data. We have also established data for
the matter and the gauge driver while ensuring that the asymptotic
properties of the ADM variables are satisfied.
The final step is choosing the Jacobians. We consider both the height
function and eikonal approaches to define the foliation T(t,r) and
keep the same compactification R(r) defined above in both cases. For
Minkowski perturbations, we set m_C_+ = 0 in the height
function (<ref>). In our numerical
implementation we simply integrate out the resulting ODEs.
The Misner-Sharp mass corresponding to this class of initial data is
M_MS^MK = -1/2γ^(1) .
Interestingly, this expression for the mass and the rescalings in the
correction terms in (<ref>) and (<ref>) are
independent of the foliation. As we conclude from
Eq. (<ref>), and observe in
Figure <ref>, M_MS remains positive
semi-definite everywhere and O(1) whenever ψ_r falls off at
least like 1/R towards ℐ^+. We will consider the more
general case with ψ_n ≠ 0 in the future.
§.§.§ Schwarzschild perturbations
Our procedure for perturbed black hole initial data is very similar.
The Schwarzschild metric in Kerr-Schild coordinates is given by
(g_SS)_μν dX^μ dX^ν = -( 1 - 2M/R ) dT^2 + (4M/R) dT dR
+ ( 1 + 2M/R ) dR^2 + R^2 ( dθ^2 + sin^2 θ dφ^2 ) .
Here, M represents the mass of the black hole and the
subscript “SS" stands for Schwarzschild. Comparing this with
the metric in (<ref>), we observe that for the
Schwarzschild metric in Kerr-Schild coordinates, we have
C_+ = (1 - 2M/R)/(1 + 2M/R) ,
C_- = -1 ,
δ = ϵ = 0 ,
all with vanishing time derivatives. This gives the associated ADM
variables that we denote by the subscript “SS".
Like before, we set the Schwarzschild values for ^γ K_rr
and ϵ in the initial data and take
^γ K_T = ^γ K^SS_T + (K_SS)_T^(1)/R
to get the momentum constraint equation similar
to (<ref>). Once again, we set ψ_n = 0 to get
the trivial solution (K_SS)_T^(1) = 0 in the initial
data. Likewise, taking
γ^rr = (γ_SS)^rr
+ (γ_SS)^(1)/R'^2 R ,
gives an equation for (γ_SS)^(1) similar
to (<ref>). This equation is regular, as before, with
a solution that is regular throughout the domain. Thus we again embed
all information of the initial matter distribution
in (γ_SS)^(1) and solve it to generate the
Hamiltonian and momentum constraints satisfying initial data.
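The quadrature sketched for the Minkowski case can be reused here; only the vacuum inverse metric changes. Under our reading of the expressions above (Kerr-Schild values C_+ = (1-2M/R)/(1+2M/R), C_- = -1, δ = 0), a possible helper is:

def gamma_rr_SS_inv(r, M, R, R_pr, T_pr, kappa=1.0):
    # Kerr-Schild value of gamma^{rr}, obtained by inverting the expression
    # for gamma_rr given earlier with delta = 0 and the Schwarzschild C_+/-.
    C_p = (1.0 - 2.0 * M / R(r)) / (1.0 + 2.0 * M / R(r))
    C_m = -1.0
    return kappa / (2.0 * (R_pr(r) - C_m * T_pr(r)) * (R_pr(r) - C_p * T_pr(r)))

# gamma1_SS = solve_gamma1(r_grid, psi_r, R, R_pr,
#                          lambda r: gamma_rr_SS_inv(r, M, R, R_pr, T_pr))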
We again take the simplifying conditions
α = ( α_SS √(γ_rr (γ_SS)^rr))
, β^μ = β^μ_SS ,
to get the Schwarzschild values for C_± and solve the GHG
constraints for Ċ_± with the gauge source functions
F^σ_SS = F^σ + (C_+ - 1)/R ,
and F^σ given
in (<ref>) with p = 0. We,
additionally, take the following initial data for the gauge driver
f^SS_D = - ( γ^(1)/R'^2 R
- β_SS^r/Ṫ R'(1 - √(1 + (γ_SS)_rr (γ_SS)^(1)/R R'^2)) ) ×
2 R' (R' + T')
,
and ḟ_D = 0 to correct the asymptotics of Ċ_-. This
specific choice of f_D gives trivial data for Ċ_- and,
interestingly, reduces to (<ref>) for
vanishing M and χ(R) = R. This exhausts all our requirements to
generate the initial data.
As before, we consider both constructions of hyperboloidal foliation
defined above. We take m_C_+ = -4M in the height function.
The Misner-Sharp mass in this case is given by,
M_MS^SS = M -1/2 (γ_SS)^(1) ,
with (γ_SS)^(1) having similar properties as in the
Minkowski case, and effectively increasing the mass of the initial
data.
§ NUMERICAL EVOLUTIONS
Our numerical implementation lies within the infrastructure used in
earlier works, the most similar systems being <cit.>. The method itself is
entirely standard, so we give just a brief overview. Evolution is performed
with the method of lines and fourth-order Runge-Kutta. We use second
order finite differences in space. Our first order reduction makes the
treatment of the origin quite subtle. Following our earlier work we
have adapted Evans' method <cit.> (see also <cit.>
for variations) in the obvious manner for the metric components (and
their reduction variables) as well as the scalar field. We define in
particular two second order accurate finite differencing operators as
acting on some given grid-function f,
D_0 f = ( f_i+1 - f_i-1 ) / (2h) ,
D̃ f = (p+1) ( r^p_i+1 f_i+1 - r^p_i-1 f_i-1 ) / ( r^p+1_i+1 - r^p+1_i-1 ) ,
with h the grid-spacing. The parameter p here is not directly
related to those that appeared
in (<ref>)
and (<ref>). Since we are focused in this work on
proof-of-principle numerics, we have not tried to extend the code
beyond second order accuracy. This remains an important task for the
future, but there is no particular reason to expect difficulties in
doing so. Terms in the evolution equations
like ∂_rψ + (p/r)ψ are treated with D̃,
whereas plain derivatives are approximated by D_0. As mentioned
above, ghostzones to the left of the origin are populated by
parity. With this said, the EFEs still contain formally singular terms
at the origin. These are managed by application of L'Hôpital's
rule. At ℐ^+ we do not require continuum boundary
conditions, and so it is permissible simply to shift the finite
differencing stencils to the left. To minimize reflections from the
outer boundary we use truncation error matching,
(D f)_N = ( f_N-4 - 5 f_N-3 + 10 f_N-2 - 11 f_N-1 + 5 f_N ) / (4h) .
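A direct transcription of these operators might look as follows; the grouping of the Evans-type operator is as we read it above, the boundary stencil uses the coefficients and prefactor quoted in the text, and the array layout is our own choice.

import numpy as np

def D0(f, h):
    # centred second-order first derivative (interior points only)
    df = np.zeros_like(f)
    df[1:-1] = (f[2:] - f[:-2]) / (2.0 * h)
    return df

def D_evans(f, r, p):
    # Evans-type operator approximating f' + p f / r near the origin
    df = np.zeros_like(f)
    num = r[2:]**p * f[2:] - r[:-2]**p * f[:-2]
    den = r[2:]**(p + 1) - r[:-2]**(p + 1)
    df[1:-1] = (p + 1) * num / den
    return df

def D_scri(f, h):
    # one-sided stencil at the outermost grid-point, truncation-error matched
    return (f[-5] - 5*f[-4] + 10*f[-3] - 11*f[-2] + 5*f[-1]) / (4.0 * h)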
The idea is that by matching the form of the finite differencing error
at the boundary with that of the interior, high frequency errors are
reduced. Formally singular terms at ℐ^+, which appear in
our first order reduction in exactly the same shape as those of
the GBUF model we studied earlier <cit.>, are likewise
treated with L'Hôpital's rule. As is
standard when using the GHG formulation, we use excision to treat the
black hole, should there be one. The strategy is to keep the apparent
horizon, the position where the expansion of outgoing null-geodesics
vanishes, within the computational domain with the boundary itself
remaining outflow, so that no boundary continuum conditions are
needed. Due to the relatively simple dynamics in the strong-field
region in our experiments, this can be done without the full control
system machinery <cit.> used in binary spacetimes, and
without any careful imposition of the outflow
condition <cit.>. Instead we simply monitor the position
of the apparent horizon, which is very simple in spherical symmetry
(see <cit.> for a textbook discussion), and monitor the
coordinate lightspeeds at the boundary to check that the outflow
condition is satisfied. Typically we take the domain to extend a small
number of points inside the apparent horizon. To compute derivatives
at the excision boundary we populate ghostzones by fourth order
extrapolation. When evolving initial data that collapse to form a
black hole we must use our two finite differencing operators. If
instead we start from data that already contains a black hole we can
do away with the D̃ operator. We have implemented both
possibilities. We study numerical evolution of various different
choices of initial data. They are described in turn in the following
subsections.
§.§ Gauge Perturbations
As a first test we perform numerical evolutions of gauge perturbations
of Minkowski spacetime. The simplest procedure to input this type of
initial data is to construct our slices with hyperboloidal
layers, as explained in <cit.>, so the nontrivial field
content lies inside the region where the change of coordinates, in
this case from Minkowski in global inertial time and spherical polars,
to compactified hyperboloidal coordinates, is simply the
identity. Concretely, we use the same
expression (<ref>) for the height function
with m_C_+=0 and we take r_m=0.4. In
the 3+1 language we place a Gaussian perturbation in the lapse with
amplitude A and zero shift. In terms of our variables this
corresponds to
C_+(0,r) = 1 + A e^-R(r)^2/σ_0^2 ,
C_-(0,r) = -1 - A e^-R(r)^2/σ_0^2 ,
δ(0,r) = ln ( 1 + A e^-R(r)^2/σ_0^2 ) ,
ϵ(0,r) = 0 ,
ψ(0,r) = 0 ,
f_D(0,r) = ∂_T f_D(0,r) = 0 .
where we take σ_0^2=0.02.
The form of the perturbation differs from that in <cit.>
only in that the present perturbation is centered at the
origin. Observe that the choice of GHG implies an evolution equation
for the lapse and shift. In our variables this corresponds to a
non-zero time derivative of the shift which is equivalent to
∂_T C_+(0,r) = -2A e^-2R^2/σ_0^2/σ_0^2 ( A + e^R^2/σ_0^2 ) R ,
∂_T C_-(0,r) = -2A e^-2R^2/σ_0^2/σ_0^2 ( A + e^R^2/σ_0^2 ) R ,
∂_T δ(0,r) = 0 , ∂_T ϵ(0,r) = 0 .
With these expressions we construct the FOR fields to perform the
numerical evolutions. By construction, this initial data satisfies the
constraints up to numerical error.
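For illustration, the gauge-pulse data above can be populated on a grid as in the following sketch; the amplitude A is left as a free argument and the names are ours.

import numpy as np

def gauge_pulse_data(R, A, sigma0_sq=0.02):
    gauss = A * np.exp(-R**2 / sigma0_sq)
    C_plus  =  1.0 + gauss
    C_minus = -1.0 - gauss
    delta   = np.log(1.0 + gauss)
    eps     = np.zeros_like(R)
    psi     = np.zeros_like(R)
    f_D     = np.zeros_like(R)
    # time derivative of C_+ and C_- implied by the GHG choice of lapse and shift
    dT_C = -2.0 * A * np.exp(-2.0 * R**2 / sigma0_sq) / sigma0_sq \
           * (A + np.exp(R**2 / sigma0_sq)) * R
    return C_plus, C_minus, delta, eps, psi, f_D, dT_C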
Evolving the data, the dynamics look similar in both the height
function and eikonal setup. The gauge pulses initially propagate
outwards and through ℐ^+ with no sign of reflection. They
leave behind small perturbations in the evolved fields which later
gradually decay. We chose to work with the gauge parameters p=0
in (<ref>), meaning that the
variables C_+ and ϵ should have improved asymptotic decay
relative to the wave equation, whilst C_- and δ should behave
like solutions to the wave equation. Since the scalar field vanishes
identically in this case, the gauge driver field f_D, to which we feed
trivial initial data, does too. Even at moderate resolution the method
successfully runs for long-times, at least until t=10^4 in code
units. The differences in asymptotics of the variables is well
captured by the numerical evolution, and the harmonic constraints
remain small throughout. Evolving for instance with 200 radial
grid-points, at t=10 the harmonic constraints are of
order O(10^-8). In Figure <ref> we
present snapshots of the evolved fields for an evolution with eikonal
Jacobians with compactification parameter n=1.5 at this
resolution. These evolutions exhibit both pointwise and norm
convergence. To demonstrate this, in
Figure <ref> we plot snapshots
of rescaled differences at three resolutions for the same setup as in
the snapshots of Figure <ref>. To avoid
overpopulating the figure we have taken the sum of absolute value of
these differences over all four of the fields plotted in the first
figure. The data are perfectly compatible with second order
convergence as desired. These results are in excellent qualitative
agreement with those of <cit.>, with a different
formulation of GR.
Interestingly, not all compactification parameters n behave in the
same manner numerically at the moderate resolution we employ. Despite
the continuum freedom to choose it at the analytical level, only for
values n≤ 1.5 do we get the appropriate norm-convergence with
increasing resolution. This can be understood from the empirical
observation that for bigger values the reduction fields get sharp
features at the grid-point at ℐ^+, therefore affecting the
precise numerical cancellation that needs to happen in order for this
scheme to work. We believe that by adjusting the compactification we
will be able to achieve convergent results for the entire range
of n, but since this fact is seen in all of our subsequent numerical
evolutions, in practice we work in the range n∈ [1.25,1.5] here.
Pure gauge wave evolutions on the Schwarzschild background behave
similarly, but since physical dynamics, which we discuss thoroughly
below, necessarily excite gauge waves we do not present them in detail
here.
§.§ Constraint Violating Initial Data
We now move on to our hardest set of numerical tests, which comprise
of constraint violating initial data with non-vanishing scalar
field. It is important to consider constraint violating data because
in general numerical error violates the constraints in any
free-evolution setup, and so we must be confident that at least
reasonably small finite errors of this type will not cause a
catastrophic failure of the method. Since these initial data also excite
gauge pulses, all aspects of the solution space, including gauge,
constraint-violating and physical degrees of freedom, are probed. In all of the following
tests we see similar results for height function and eikonal Jacobian,
so we present a selection of representative plots from each.
*Minkowski perturbations: In the following we performed
tests both with and without layers, with r_m = 0.4 and r_m=0. Both
behave similarly, so we present results only for the case r_m=0. We
begin by perturbing the Minkowski metric, setting Gaussian data on all
variables according to
C_+(0,r) = 1 + C_0 e^-R(r)^2 ,
C_-(0,r) = -1 - C_0 e^-R(r)^2 ,
δ(0,r) = δ_0 e^-R(r)^2 ,
ϵ(0,r) = (δ_0 - ln(1 + C_0)) e^-R(r)^2 ,
ψ(0,r) = ψ_0 e^-R(r)^2 ,
f_D(0,r) = 0 ,
with vanishing time derivatives. The initial data for the rescaled FOR
variables are then calculated accordingly. Therefore in this first
test the reduction constraints should remain satisfied at the
continuum level. When evolving with the eikonal Jacobians we choose
data appropriate for the Minkowski spacetime in global inertial
coordinates, namely
U^-= e^-δ(0,r)( 2 + C_0e^-R(r)^2) ,
with the variable U^+ taken from the sourced eikonal equation
itself (<ref>).
The coefficients of the Gaussian in the initial data for C_± are
chosen to give the correct parity at the origin. Similarly, the
coefficient of the Gaussian in the data for ϵ is chosen from
the regularity
condition (<ref>) at the origin
at T = 0. Unsurprisingly, if we take data that violate parity the
code is observed to crash near the origin in finite time.
As a first test we kept the initial perturbations small enough to
avoid complete gravitational collapse. For simplicity, we
took C_0 = δ_0 = ψ_0 = 10^-3. These data do not satisfy
the GHG, Hamiltonian or Momentum constraints. The magnitude of the GHG
constraint violation in the initial data is 10^-4, comparable with
the scalar field itself. We performed numerical evolutions with both
the height-function and eikonal Jacobians, with several different
values of the compactification parameter n within the range
specified at the end of the previous subsection. In all cases the time
evolution is comparable to the gauge wave discussed above in
section <ref>. The initial constraint
violation initially grows and eventually decays, so that by t=10 it
has reduced by 1 order of magnitude. Similar comments apply to
the L^2 norm of the constraints. These data are clearly physically
wrong from the outset since the Bondi mass initially vanishes despite
the fact that the scalar field is non-trivial, and even takes negative
values during the evolution. We observe this behavior both in the
original (<ref>) and
regularized (<ref>) expressions for the Bondi
mass, with the former not even varying monotonically due to constraint
violations.
Another important difference between these experiments and the pure
gauge waves is that the gauge driver variable f_D now actually
varies. Recall that the purpose of the gauge driver is to prevent
log-terms, which are known to afflict plain harmonic gauge, from
appearing in the variable C_-. Using the gauge driver
condition (<ref>) we see no evidence that these
log-terms are present. Given the rescaling in the definition of our
variables (<ref>) such log-terms would result
in divergence of the solution, and so should be very obvious. To check
this, we evolved the constraint violating
data (<ref>) without using the gauge
driver, with identical initial data, and find that the growth
in C̃_- is indeed evident, as can be seen in
Figure <ref>. In fact, evolutions without the gauge driver
can only be performed on a staggered grid, as having a grid-point
at ℐ^+ leads to an explosion of the simulation after the
first time step. All of our simulations except this were performed
with a grid-point at future null infinity. Earlier work has
been presented with a combination of these two setups. For instance
the evolutions of <cit.> were performed on a staggered
grid, whereas in <cit.> the grid setup was identical to that
in the majority of our simulations.
Next we switched the gauge driver back on. We performed four numerical
evolutions, doubling resolution each time, so that we could examine
multiple curves. Pointwise convergence of the fields at early times
looks very much like that presented in
Figure <ref>. In
Figure <ref> we plot the norm self-convergence
rate from these long experiments. For the test itself we apply the
standard technique, injecting the higher resolution data on to the
coarse resolution grid, taking the norm of the differences between the
very high (V) and high (H) resolutions, the high and medium (M)
resolutions, and finally medium and low (L) resolutions. We then
plot
Q_1 =log_2(||M-L||/||H-M||) ,
Q_2=log_2(||H-M||/||V-H||) ,
as a function of time, from which it can be seen that the simulations
converge at second order as we increase resolution, as expected with
our discretization. Concretely, given a field Z(t,r) and its two
associated first order reduction fields Z^+ , Z^-, see the
discussion around (<ref>), the
continuum limit of the norm we use is,
∫[ r^2 Z^2
+ ( R' R^2/χ^2 )
( (2R'-1)/(2R') χ^2 (Z^+)^2
+ 1/(2R') (Z^-)^2 ) ] dr .
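A sketch of how the rates and the norm might be computed is given below; the inputs to convergence_rates are the norms of the resolution differences already injected onto the coarse grid, and the grouping of the R' and χ weights follows our reading of the expression above.

import numpy as np

def convergence_rates(norm_LM, norm_MH, norm_HV):
    # Q1 and Q2 from the norms of (medium-low), (high-medium), (very high-high)
    return np.log2(norm_LM / norm_MH), np.log2(norm_MH / norm_HV)

def reduction_norm(r, Z, Zp, Zm, R, Rpr, chi):
    # discrete version of the norm above for a field Z and its reduction
    # variables Z^+ and Z^-
    w = (Rpr * R**2 / chi**2) * (((2.0*Rpr - 1.0) / (2.0*Rpr)) * chi**2 * Zp**2
                                 + (1.0 / (2.0*Rpr)) * Zm**2)
    return np.trapz(r**2 * Z**2 + w, r)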
No attempt has been made to tune to the threshold of collapse, but
increasing the amplitude of the data to ψ_0 = 0.355 leads to
apparent horizon formation. As explained above, we also implemented
excision, so that once an apparent horizon is formed, the black hole
interior can be taken out of the evolved region. As a visualization
technique, we put all the evolved variables to zero inside the excised
region so that we can identify where the apparent horizon was found.
Examples of this are shown in
Figure <ref>. Performing longer evolutions we
see that the data appear to settle down, although slow dynamics
continue. At time t=50 substantial constraint violation remains, and
pointwise is in fact substantially larger than the scalar field and
its reduction variables.
*Schwarzschild perturbations: Next we take initial data of
a similar type to (<ref>), but now
built as a perturbation on top of the Schwarzschild spacetime. To do
so we adjust several details in our evolution setup, focusing
henceforth exclusively on the case r_m=0.
In order to do excision as previously described, we need to start from
horizon penetrating coordinates. We take the Schwarzschild solution in
Kerr-Schild form, which we recall was given above in our variables in
Eq. (<ref>).
Since this is an exact solution of the EFEs, it is desirable to have a
choice of gauge for which (<ref>) is static in local
coordinates, so that dynamics in the numerical evolution come from the
departure from it. To achieve this we adjust the gauge
sources (<ref>), in which, as
mentioned above (<ref>), the only modification is
F^σ_SS = F^σ + (C_+ - 1)/R .
It is easy to see that expressions (<ref>) are an exact
solution of the rEFEs with the previous choice of gauge. This fact is
reflected numerically in the sense that numerical evolutions with this
exact initial data remain unchanged up to numerical error for long
times, up to t∼ 10^3M, even at very modest resolution. Due to the
way gauge conditions are constructed, this has not yet been achieved
with the approach followed in <cit.>. We observe that the new
term does not alter the asymptotics of the evolved fields, and that we
still need the gauge driver f_D in order to regularize the C_-
field in the presence of a scalar field.
For our next numerical test we put constraint violating Gaussian
perturbations on top of the solution (<ref>), where we take
the BH mass M=1 as defining our units here. Analogous to the
previous constraint-violating case, the data we start our simulations
with is
C_+(0,r) = (1-2/R)/(1+2/R)
+ C_p0 e^-(R-3)^2 ,
C_-(0,r) = -1 + C_m0 e^-(R-3)^2 ,
δ(0,r) = δ_0 e^-(R-3)^2 ,
ϵ(0,r) = ϵ_0 e^-(R-3)^2 ,
ψ(0,r) = ψ_0 e^-(R-3)^2 ,
f_D(0,r) = 0 ,
U^-= e^-δ(0,r)
( 1 - C_-(0,r) )
with the first-order-reduction fields computed assuming vanishing time
derivatives. Note that the Gaussians are now centered at R=3 and
that in this case there is no relation between ϵ and the
amplitude of the other fields, or between C_+ and C_-, since
we do not have a regular center in this case. We have performed
successful evolutions of this data using again both the
height-function and eikonal Jacobians with several values of n. In
broad terms these data develop in a manner similar to the previous
setups, in that part of the initial pulses still propagate
out to infinity with O(1) speeds. Of course in this case part of the
field content also accretes on to the black hole. At infinity the
scalar field moreover clearly exhibits the direct signal, ringing and
tail phases. These data are particularly important as our first test
in which several of the evolved fields have non-trivial `mass-terms'
at infinity. We observe no particular difficulty in their numerical
treatment, finding both pointwise and norm convergence just as
convincingly as in the previous cases with a regular center. As an
example of this, in
Figure <ref> we show the
norm self-convergence rates with this setup.
To check the effect of the logarithmic mass term with the height
function change of coordinates (<ref>), we
performed also tests with that term omitted. We find that the signal
tends to accumulate at large R without ever leaving the domain,
leading for the evolution to crash in finite time, compatible with our
understanding from above in section <ref> that
without the mass-term included these slices have the `wrong' global
structure, terminating instead at spatial infinity, as depicted in
Figure <ref>. It is clear that the inclusion of the
`mass-terms' is a fundamental ingredient in the method.
Despite the success in this suite of configurations, we do expect that
if we were to take the initial constraint violations, of whatever
type, sufficiently large then we could cause our numerical method to
fail. We have not attempted to do so, however, since, first, the same
statement would be true even in standard Cauchy evolutions and second,
the task of the hyperboloidal region is to cope with a combination of
stationary features and outgoing waves, with the metric variables
decaying out to ℐ^+. If we were to face a situation in
application in which large errors in the wavezone induced a failure of
the method, either more resolution is needed, or else the wavezone,
which is only loosely defined, ought to be taken to `start' further out
and therefore the parameters for the hyperboloidal layer adjusted.
§.§ Constraint Satisfying Initial Data
The rEFEs are a set of wave equations for all the metric variables, so
up to this point we have successfully tested our regularization
techniques, allowing us to numerically extract the wave signal
at ℐ^+, the ultimate goal of this project. However, in
order to have a physical spacetime that simultaneously solves the EFEs
we need to satisfy all the constraints, namely, GHG, Hamiltonian,
Momentum and FOR constraints, as explained in
section <ref>. Therefore, as a final test
of our scheme, we move on to evolve constraint satisfying initial data
representing perturbations of the Minkowski and Schwarzschild
spacetimes.
*Minkowski perturbations: We begin by constructing initial
data for a spacetime that can be thought of as a perturbation of the
Minkowski spacetime. We first take the height function approach for
constructing the initial hyperboloidal slice, with m_C_+≡ 0,
as demanded by our method explained in
section <ref>. We
choose ψ(0,r) = ψ_0 e^-R^2 and ψ_n=0,
with ψ_0 = 10^-3 in order to avoid complete gravitational
collapse. With these choices we numerically generate the
solution γ^(1).
A simple way to generate constraint satisfying initial data for
hyperboloidal slices built with the eikonal approach is to
choose U^+ and U^- initial data so that the
Jacobians (<ref>) match the height function
ones (<ref>). Observe that the choice C_+≡ 1
automatically implies that the U^+ constraint,
Eq. (<ref>), is satisfied.
With these details taken care of, we evolve the initial data with both
the eikonal and height-function equations of motion. The basic
dynamics qualitatively resemble those of the constraint violating
case, with regular fields both at the origin and ℐ^+. For
this reason we do not present snapshots in space. Instead, in the
first panel of Figure <ref> we plot the scalar field
waveform at ℐ^+, where we clearly see the field decaying
at late times.
One of the most stringent tests of the physics in the present case is
the evaluation of the Bondi mass, Eq. (<ref>) both
for the initial data and the time development. As previously
mentioned, this should be a non-negative and monotonically decreasing
function of time. Both properties can be seen from the middle panel of
Figure <ref>, from which we see that we initially
start with a positive constant value until the time radiation leaves
the domain through ℐ^+. The left and middle panels indeed
indicate that the spacetime asymptotes to the Minkowski spacetime as
the scalar field leaves the numerical domain.
Within our setup the only physical radiation comes from the scalar
field, so our methods can only be claimed successful if this signal is
well-captured numerically. As for the constraint violating initial
data in section <ref>, we performed norm
self-convergence test for the constraint satisfying data, obtaining
similar results to those shown in
Figure <ref>. Focusing instead on the radiation
field, our proxy for gravitational waves, in the third panel of
Figure <ref>, we plot the absolute value of the
rescaled differences of the scalar field at the grid-point
at ℐ^+ as a function of time. The overlapping of the
three curves in this case shows that the errors of this radiation
signal decrease at the expected rate with increasing resolution, thus
implying that in the limit of infinite resolution we tend to the real
physical solution.
We proceeded to modify the initial data for the scalar field
to ψ=ψ_0 e^-R^2/σ_0^2, with ψ_0=0.8
and σ_0=0.6, in order to generate an apparent horizon
dynamically, which we see at time t∼ 0.5 in code units.
Qualitatively, the evolved fields look much like those presented in
Figure <ref>. Interestingly our evolved fields
appear somewhat more regular near ℐ^+ than those
of <cit.> in the same physical setup. In contrast to the
constraint violating collapse, the Bondi mass remains positive and
monotonically decreasing for all times settling to a non-zero value
for late times. The apparent horizon mass is non-decreasing. After
black hole formation the code continues to run without problems for at
least t∼ 10^3M at moderate resolutions, where M is the Bondi
mass at late times.
*Schwarzschild perturbations: In order to generate
constraint satisfying initial data for Schwarzschild spacetime
perturbations we follow again the steps mentioned in
section <ref>. We started by generating
an initial data slice with the height function Jacobians. In the present
case we no longer take m_C_+=0. However, in order for the scheme
to work we need to know the constant m_C_+ exactly, and therefore
we take C_+ identical to the Schwarzschild solution previously
mentioned. We have experimented with various different initial data
choices for the scalar field compatible with our present procedure for
the constraints.
In order to evolve using eikonal hyperboloidal slices in the present
case we again generated initial data for U^+ and U^- so that
eikonal Jacobians match initially the height function
ones. Importantly, the matching of the Jacobians gives a unique
solution for the initial data for U^+ and U^-, so the U^+
constraint (eq. (<ref>)) will not be
satisfied for a generic given S. To overcome this issue we rather
take eq. (<ref>) as defining the
function S. This choice of S does vanish asymptotically, so with
this approach the outgoing radial coordinate lightspeed in lowercase
coordinates still goes to unity, which is the desired property when we
use the eikonal Jacobians.
The basic dynamics proceed as expected, with part of the scalar field
accreting on to the black hole, and the rest gradually propagating out
to null infinity. As a specific example, we take a Gaussian profile
for the scalar field centered at R≃2.1 M, with M=1 from the
reference solution which we perturb,
with ψ(0,r) = ψ_0 e^-(R-2.1)^2/σ^2, σ=0.2
and ψ_0=10^-4. The outcome of a long evolution of this data is
shown in Figure <ref>, where we plot the scalar field
value at ℐ^+ as a function of time for the full non-linear
evolutions with the height function Jacobians. From this we see that
we recover the expected behavior from linear scalar perturbations on
top of Schwarzschild, where we see that the spherically-symmetric
quasi-normal mode of ψ is in good agreement with the first part
of the data, while late time evolution decays as t^-2. This is in
qualitative agreement with the earlier free-evolution results
of <cit.> under different gauges on a staggered grid
(see Figure 3 of <cit.> and Figure 8.24 in <cit.>). For
comparison we also performed evolutions in the Cowling approximation,
which corresponds to taking Schwarzschild spacetime as the fixed
background and evolving the scalar field on top. As expected,
decreasing the amplitude ψ_0 in the initial data for the
non-linear evolutions makes the fitting of the frequencies and tail of
linear theory increasingly accurate.
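As an illustration of the fitting just described, one can fit the ℐ^+ waveform with a damped sinusoid at early times and a power-law tail at late times; the window edges and initial guesses below are placeholders, not values from our analysis.

import numpy as np
from scipy.optimize import curve_fit

def ringdown(t, A, tau, omega, phi):
    return A * np.exp(-t / tau) * np.cos(omega * t + phi)

def tail(t, B, n):
    return B * t**(-n)   # Price-type decay, expected n ~ 2 here

def fit_waveform(t, psi_scri, t_ring_end, t_tail_start):
    early, late = t < t_ring_end, t > t_tail_start
    p_qnm, _ = curve_fit(ringdown, t[early], psi_scri[early],
                         p0=(np.max(np.abs(psi_scri[early])), 10.0, 0.5, 0.0))
    p_tail, _ = curve_fit(tail, t[late], np.abs(psi_scri[late]), p0=(1.0, 2.0))
    return p_qnm, p_tail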
We performed long evolutions of this data with the eikonal Jacobians
as well. They also show quasi-normal mode ringing and tail decay, but
fitting for the known frequencies and tail rate is more involved since
the time at ℐ^+ used in our evolutions then needs to be
post-processed to make a fair comparison, in particular to the Bondi
time coordinate. This is also an issue in the approach
of <cit.>. We postpone a detailed comparison both of these two
cases and of the effect of nonlinearities on the QNM frequencies to
future work.
The Bondi mass of these evolutions has qualitatively the same behavior
as the one displayed in the center panel of
Figure <ref>, namely, it is a strictly positive and
monotonically decaying function of time, where most of its decay
happens when the scalar field leaves the domain
through ℐ^+. Importantly, M_B takes a
value ≃1 at late times, which is close to the value we started
with for the constant M of the background perturbed by the scalar
field. Finally, pointwise convergence at ℐ^+ as a function
of time looks qualitatively the same as in the third panel of
Figure <ref>.
§ CONCLUSIONS
Continuing our research program towards the inclusion of future null
infinity in the computational domain in full 3d numerical relativity,
here we presented an implementation of spherical GR in GHG that uses
the dual-foliation formalism to get all the way out. The strategy is
to take the evolved variables to be equivalent to those that would be
solved for in the standard Cauchy problem, subjected to a rescaling to
obtain non-trivial O(1) quantities, and then to change to
compactified hyperboloidal coordinates. In this way, the hope is to
extend the computational domain to null infinity in a manner that
leaves the numerical treatment in the strong-field region absolutely
unchanged, and in the future without symmetry. We examined a broad
suite of initial data, including gauge waves, constraint violating and
satisfying configurations. We considered spacetimes with a regular
center and dynamical black holes, in which case we use the excision
method to remove the interior. In all cases our initial data were
posed on hyperboloidal slices. The coordinate transformation was
managed either by the use of a height-function or by solving the
eikonal equation, both with a given radial compactification containing
a parameter 1<n≤2 that controls how fast the transformation is
made. We have examples in which the compactification takes effect
immediately from the origin, and others which employ hyperboloidal
layers, where it takes effect only further out.
To construct constraint satisfying data from regular equations with
scalar field matter, we treated the overall solution to be a
perturbation of the Minkowski or Schwarzschild spacetimes, taking
suitable initial data for the gauge drivers. In both cases, those
corrections turned out to possess a geometrical and physical
interpretation in terms of the Misner-Sharp mass. We believe that this
technique can be generalized, first to drop simplifying assumptions
within spherical symmetry, but also to full 3d, both of which are
kept for future work.
We find convincing evidence for numerical convergence across the
entire suite of spacetimes we considered. Although we did not make any
push for precision here, for physical initial data we did find good
compatibility vis-à-vis expected frequencies and rates, for instance
in QNMs and Price decay.
It is gratifying to see the line of reasoning developed across the
direct precursors to this study work bear fruit in full GR, even if
only in the spherical setting. In brief, in <cit.> it was
observed that to obtain equations of motion regular enough to treat on
compactified hyperboloidal slices, the coordinate light-speed
variable C_+ needs to display decay beyond that expected for
solutions of the wave-equation. It was then argued in <cit.>
that this could be achieved in the GHG formulation, even when the
constraints are violated, by appropriate constraint addition to the
field equations. In plain harmonic gauge it is known that either slow
decay of the stress-energy tensor or the presence of gravitational
waves serve as an obstruction to decay of the metric
components. In <cit.> it was observed that this
shortcoming of the gauge can be overcome by the use of carefully
chosen gauge source functions. In parallel, numerical studies were
performed with model problems with the same asymptotics as in GR in
GHG. These taught us first <cit.> a convenient choice of
reduction variables, second <cit.> the importance of using
truncation error matching at null infinity, and
third <cit.> that the general strategy to suppress
log-terms is indeed viable in practice.
The interplay between the mathematical and numerical works has also
been important in getting to this stage. For instance,
in <cit.> the suggestion was to force improved asymptotics
for all metric components except for those associated with
gravitational waves. In practice however, with the GBUF model problem,
it was found that this approach would make convergence in the numerics
difficult. (For this reason we have worked here with p=0
in (<ref>) for the C_-
variable). Evidently, all of these ingredients played an important
role in treating spherical GR.
Although the results presented are an important milestone in our
research program, open questions remain both in spherical symmetry and
more generally. So far we have done nothing to chase down sharp
conditions in the required asymptotics of our method. In our current
setup we insist, for instance, on hyperboloidal initial data such that
the radiation field from our scalar matter is O(1)
at ℐ^+. But it is known even in the Minkowski spacetime
that `reasonable' Cauchy data for the wave equation can result in
solutions with logarithmically growing radiation
fields <cit.>. In the future we wish
to understand more clearly the class of data that allows for the
inclusion of ℐ^+ in the computational domain, and how that
class sits within the broader choice that allows analogous growth in
the radiation fields. It would be good also to formulate conditions,
ideally necessary and sufficient, on the choice of gauge that would
allow for the inclusion of ℐ^+ in the computational
domain, even within the better class of data. On a more technical
level we wish to achieve successful numerical evolutions with the most
aggressive compactification parameter n=2, to work without the first
order reduction, and to switch to higher order and pseudospectral
methods. Despite these questions, in view of the 3d toy-model results
of <cit.> and those presented here for spherical GR, we
believe that the essential pieces are now in place to achieve, in the
near-term, the goal of 3d numerical evolutions of full GR in GHG on
compactified hyperboloidal slices.
§ ACKNOWLEDGEMENTS
The authors thank Sukanta Bose, Miguel Duarte, Justin Feng, Edgar
Gasperin, Prayush Kumar and Anil Zenginoğlu for helpful
discussions and/or comments on the manuscript.
The Mathematica notebooks associated with this work can be found
at <cit.>.
The authors thank FCT for financial support through Project
No. UIDB/00099/2020 and for funding with DOI
10.54499/DL57/2016/CP1384/CT0090, as well as IST-ID through Project
No. 1801P.00970.1.01.01. This work was partially supported by the ICTS
Knowledge Exchange Grants owned by the ICTS Director and Prayush
Kumar, Ashok and Gita Vaish Early Career Faculty Fellowship owned by
Prayush Kumar at the ICTS. Part of the computational work was
performed on the Sonic cluster at ICTS. SG's research was supported by
the Department of Atomic Energy, Government of India, under project
no. RTI4001, Infosys-TIFR Leading Edge Travel Grant, Ref. No.:
TFR/Efund/44/Leading Edge TG (R-2)/8/, and the University Grants
Commission (UGC), India Senior Research Fellowship. Part of this work
was done at the University of the Balearic Islands (UIB), Spain, the
Department of Mathematics at the University of Valencia and the
Astrophysical and Cosmological Relativity department at the Albert
Einstein Institute, aka the Max-Planck Institute for Gravitational
Physics in Potsdam-Golm. SG thanks Sascha Husa, Isabel Cordero
Carrión and Alessandra Buonanno for local hospitality and travel
support at these institutions.
Detection and characterization of TOI-3568 b
Laboratório Nacional de Astrofísica, Rua Estados Unidos 154, 37504-364, Itajubá - MG, Brazil, [email protected]
Institut d'Astrophysique de Paris, CNRS, UMR 7095, Sorbonne Université, 98 bis bd Arago, 75014 Paris, France
Universidad Nacional de Córdoba - Observatorio Astronómico de Córdoba, Laprida 854, X5000BGR, Córdoba, Argentina
Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET), Godoy Cruz 2290, CABA, CPC 1425FQB, Argentina
Observatoire de Haute Provence, St Michel l'Observatoire, France
Universidade Federal do Rio de Janeiro, Observatório do Valongo, Ladeira do Pedro Antônio, 43, Rio de Janeiro, RJ 20080-090, Brazil
Instituto de Astronomía, Universidad Nacional Autónoma de México, Ciudad Universitaria, Ciudad de México, 04510, México
International Center for Advanced Studies (ICAS) and ICIFI (CONICET), ECyT-UNSAM, Campus Miguelete, 25 de Mayo y Francia, (1650) Buenos Aires, Argentina.
NASA Ames Research Center, Moffett Field, CA 94035, USA
Research Institute for Advanced Computer Science, Universities Space Research Association, Washington, DC 20024, USA
Canada-France-Hawaii Telescope, CNRS, 96743 Kamuela, Hawaii, USA
Université de Montréal, Département de Physique, IREX, Montréal, QC H3C 3J7, Canada
Department of Astronomy & Astrophysics, University of Chicago, Chicago, IL, USA
Center for Astrophysics Harvard & Smithsonian, 60 Garden Street, Cambridge, MA 02138, USA
Aix Marseille Univ, CNRS, CNES, LAM, 38 rue Frédéric Joliot-Curie, 13388 Marseille, France
Université Grenoble Alpes, CNRS, IPAG, 414 rue de la Piscine, 38400 St-Martin d'Hères, France
Université de Toulouse, CNRS, IRAP, 14 avenue Belin, 31400 Toulouse, France
NASA Exoplanet Science Institute, Caltech IPAC, 1200 E. California Blvd., Pasadena, CA 91125, USA
LESIA, Observatoire de Paris, Université PSL, CNRS, Sorbonne Université, Université Paris Cité, 5 place Jules Janssen, 92195 Meudon, France
Department of Physics and Kavli Institute for Astrophysics and Space Research, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
Department of Astronomy, Wellesley College, Wellesley, MA 02481, USA
Universidade de São Paulo, Instituto de Astronomia, Geofísica e Ciências Atmosféricas (IAG), Departamento de Astronomia, Rua do Matão 1226, Cidade Universitária, 05508-900, SP, Brazil
NASA Goddard Space Flight Center, 8800 Greenbelt Road, Greenbelt, MD 20771, USA
Royal Astronomical Society, Burlington House, Piccadilly, London W1J 0BQ, United Kingdom
Anton Pannekoek Institute for Astronomy, University of Amsterdam, Science Park 904, 1098 XH Amsterdam, The Netherlands
Landessternwarte, Zentrum für Astronomie der Universität Heidelberg, Königstuhl 12, D-69117 Heidelberg, Germany
SETI Institute, Mountain View, CA 94043, USA
The sub-Jovian desert is a region in the mass-period and radius-period parameter space, typically encompassing short-period ranges between super-Earths and hot Jupiters, that exhibits an intrinsic dearth of planets. This scarcity is likely shaped by photoevaporation caused by the stellar irradiation received by giant planets that have migrated inward. We report the detection and characterization of TOI-3568 b, a transiting super-Neptune with a mass of 26.4±1.0 M_⊕, a radius of 5.30±0.27 R_⊕, a bulk density of 0.98±0.15 g cm^-3, and an orbital period of 4.417965 (5) d situated in the vicinity of the sub-Jovian desert. This planet, orbiting a K dwarf star with solar metallicity, was identified photometrically by the Transiting Exoplanet Survey Satellite (TESS). It was characterized as a planet by our high-precision radial velocity monitoring program using MAROON-X at Gemini North, supplemented by additional observations from the SPICE large program with SPIRou at CFHT. We performed a Bayesian MCMC joint analysis of the TESS and ground-based photometry, MAROON-X and SPIRou radial velocities, to measure the orbit, radius, and mass of the planet, as well as a detailed analysis of the high-resolution flux and polarimetric spectra to determine the physical parameters and elemental abundances of the host star. Our results reveal TOI-3568 b as a hot super-Neptune, rich in hydrogen and helium, with a core of heavier elements with a mass between 10 and 25 M_⊕. We analyzed the photoevaporation status of TOI-3568 b and found that it experiences one of the highest EUV luminosities among planets with a mass M_p<2 M_Nep, yet it has an evaporation lifetime exceeding 5 Gyr. Positioned in the transition between two significant populations of exoplanets on the mass-period and energy diagrams, this planet presents an opportunity to test theories concerning the origin of the sub-Jovian desert.
TOI-3568 b: a super-Neptune in the sub-Jovian desert
E. Martioli<ref>,<ref>
R. P. Petrucci <ref>,<ref>
E. Jofré <ref>,<ref>
G. Hébrard <ref>,<ref>
L. Ghezzi <ref>
Y. Gómez Maqueo Chew <ref>
R. F. Díaz <ref>
H. D. Perottoni <ref>
L. H. Garcia <ref>
D. Rapetti <ref>,<ref>
A. Lecavelier des Etangs <ref>
L. de Almeida <ref>
L. Arnold <ref>
É. Artigau <ref>
R. Basant <ref>
J. L. Bean <ref>
A. Bieryla <ref>
I. Boisse <ref>
X. Bonfils <ref>
M. Brady <ref>
C. Cadieux <ref>
A. Carmona <ref>
N. J. Cook <ref>
X. Delfosse <ref>
J.-F. Donati <ref>
R. Doyon <ref>
E. Furlan <ref>
S. B. Howell <ref>
J. M. Jenkins <ref>
D. Kasper <ref>
F. Kiefer <ref>,<ref>
D. W. Latham <ref>
A. M. Levine <ref>
D. Lorenzo-Oliveira <ref>
R. Luque <ref>
K. K. McLeod <ref>
J. Melendez <ref>
C. Moutou <ref>
Y. Netto <ref>
T. A. Pritchard <ref>
P. Rowden <ref>
A. Seifahrt <ref>
G. Stefánsson <ref>
J. Stürmer <ref>
J D. Twicken <ref>,<ref>
Received xxxx ; accepted xxxx
§ INTRODUCTION
Exoplanets with sizes ranging between Jupiters and super-Earths and orbiting their host stars at short periods <5 days are notably rare, giving rise to what is termed the sub-Jovian or Neptunian desert <cit.>. The prevailing hypotheses to explain this intrinsic deficit in the planet population involve high-eccentricity migration and stellar irradiation <cit.>. This combination is believed to cause planetary inflation or the erosion of primary atmospheres of exoplanets through photoevaporation, imposing constraints on planet mass and radius depending on their distance from the star.
Recent confirmations of several exoplanets residing within the sub-Jovian desert <cit.> showed that this region is not completely barren. Instead, it represents a less probable condition for planets to exist. The sparse presence of planets within the sub-Jovian desert suggests that the formation and evolutionary trajectories of these exoplanets may have taken a unique path, diverging from those observed in more densely populated regions of parameter space. Ongoing surveys optimized for redder stars, such as the Transiting Exoplanet Survey Satellite <cit.>, are detecting an increasing number of planets in the sub-Jovian desert, particularly around smaller stars <cit.>. Increasing this sample is important to test physical mechanisms involved in planet formation across a range of stellar parameters, including spectral types, masses, metallicities, and galactic populations.
Here, we present the discovery and characterization of TOI-3568 b, a hot super-Neptune orbiting a K-dwarf star located in the sub-Jovian desert, within a transitional region between the populations of hot Jupiters and short-period super-Earths, where planets are notably scarce. This discovery emerged from a program aimed at identifying and characterizing planetary systems within the thick-disk galactic population. Stars from distinct galactic populations exhibit differences in kinematics and chemical composition, potentially leading to variations in the frequency of giant planets as suggested by the planet-metallicity correlation <cit.>. We selected TOI-3568.01 as a planet candidate from the TESS Object of Interest (TOI) catalog <cit.>, based on its high thick disk to thin disk membership probability (TD/D = 3.58) determined through kinematic classification <cit.>. However, our analysis showed that the nature of this system is more consistent with what is typically associated with the thin disk.
This paper is organized as follows. In Section <ref> we present the observations and data reduction; in Sections <ref> and <ref> we present the characterizations of the star and the planet, respectively; in Section <ref> we discuss the characteristics of this new planetary system in the context of the population of exoplanets; and we conclude in Section <ref>.
§ OBSERVATIONS AND DATA REDUCTION
§.§ TESS photometry
TOI-3568 (TIC 160390955) was first observed by the TESS in sector 15 with a cadence of 30 minutes. A planet candidate with a 4.42 d period was identified in the MIT Quick Look Pipeline (QLP) faint star transit search <cit.>. An alert for TOI-3568.01 was issued by the TESS Science Office on 23 June 2021. We performed a pre-analysis of the 30-minute cadence TESS Full-Frame Image (FFI) data using the community Python package Lightkurve[<https://docs.lightkurve.org>] <cit.> to obtain the photometric time series. We inspected the light curve data and analyzed the transits, where we employed the methods described in <cit.> and obtained a well-constrained model for the planetary parameters. Thus, we concluded that the events observed by TESS were probably planetary. We submitted TOI-3568 to the TESS Director’s Discretionary Targets (DDT 062, PI: E. Martioli), where we were able to include it for observations in sectors 55 and 56 in the 2-min cadence mode. A subsequent search of the 2-min data from sectors 55 and 56 by the TESS Science Processing Operations Center (SPOC) pipeline detected the planetary signature of TOI-3568 b. The difference image centroiding test <cit.> located the host star to lie within 1.1±2.9 arcsec of the source of the transits. Table <ref> shows the log of TESS observations of TOI-3568.
For the observations of TOI-3568 obtained in sectors 55 and 56, we first used the Presearch Data Conditioning (PDC) flux time series <cit.> processed by the SPOC pipeline at NASA Ames Research Center <cit.> obtained from the TESS data products available in the Mikulski Archive for Space Telescopes (MAST)[<mast.stsci.edu>]. Figure <ref> shows the TOI-3568's target pixel files (TPF) for sectors 15, 55, and 56. It highlights the pixels used in the aperture for photometry and marks the positions of sources in the field from Gaia DR3's catalog <cit.>.
We calculated our final TESS photometry using an optimized systematic error correction algorithm, following the methodology of Rapetti et al. (in prep.). We use an adaptation of the Pixel Level Decorrelation <cit.> technique implemented in the PLDCorrector class of Lightkurve. This method employs (i) a spline polynomial fit to describe stellar variability; (ii) Principal Component Analysis (PCA) eigenmodes to model the background light; and (iii) the PLD technique to account for pointing and mechanical effects.
To account for the background as described in (ii), we wish to begin with calibrated pixels that are not corrected for the background. We thus add the background flux estimated by SPOC into the calibrated and background-removed pixel values in the original TPF before the correction. Since PLD might preserve the mean of the uncorrected light curve after the regression, to recover the true mean flux level of the corrected light curve we apply a flux level adjustment. We adjust the corrected flux by subtracting a constant level calculated as the third dimmest median pixel flux value times the number of pixels. This is inspired by the scalar background bias method applied in the SPOC pipeline. We then adjust the flux for crowding by non-target stars and for the fraction of the target star flux captured in the photometric aperture using the methods of <cit.> and the crowding and flux fraction values provided by the SPOC pipeline. We also use the flux fraction to scale the flux errors, but not the crowding since its effect on the flux errors is negligible.
Before applying the PLD corrector, we add the background flux and errors estimated by the SPOC pipeline back onto the Simple Aperture Photometry (SAP) light curve. Flux level, fraction and crowding adjustments are applied to the corrected light curves. To automatically optimize the selection of parameter values for the corrector, we evaluate the resulting light curve using the Savitzky-Golay Combined Differential Photometric Precision (sgCDPP) proxy algorithm <cit.> implemented in Lightkurve, for durations of 30, 60, 120, 160, and 200 minutes. For a grid of corrector parameter values (for further details on the parameters and the grid, see Rapetti et al. (in prep.)), we calculate the harmonic mean (HM) of these quantities and select the corrected light curve that minimizes the HM. Using this analysis, we were able to recover segments of the data that were initially excluded by the SPOC pipeline due to scattered light. The TESS light curves are illustrated in Figure <ref> along with the results of our model, as outlined in Section <ref>.
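A schematic version of this optimization is sketched below; the keyword names passed to the corrector are placeholders for whichever parameters are being varied (spline knots, number of PCA terms, etc.), and the cadence is that of the 2-min data.

import numpy as np
from itertools import product
from lightkurve.correctors import PLDCorrector

def harmonic_mean(values):
    v = np.asarray(values, dtype=float)
    return len(v) / np.sum(1.0 / v)

def best_pld_correction(tpf, param_grid, durations_min=(30, 60, 120, 160, 200),
                        cadence_min=2.0):
    best_score, best_lc = np.inf, None
    keys = list(param_grid)
    for combo in product(*(param_grid[k] for k in keys)):
        lc = PLDCorrector(tpf).correct(**dict(zip(keys, combo)))
        cdpps = []
        for d in durations_min:
            c = lc.estimate_cdpp(transit_duration=max(1, int(d / cadence_min)))
            cdpps.append(getattr(c, "value", c))   # strip astropy units if present
        score = harmonic_mean(cdpps)
        if score < best_score:
            best_score, best_lc = score, lc
    return best_lc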
§.§ Ground-based photometry
Ground-based time-series photometry was collected as part of the TESS Follow-up Observing Program Sub Group 1 [<https://tess.mit.edu/followup/>] <cit.> which uses a customized TESS version of the TAPIR software package <cit.> for observation planning. The data reduction and aperture photometry were performed using AstroImageJ <cit.> and the light curves are available on ExoFOP[https://exofop.ipac.caltech.edu].
We observed a full transit event of TOI-3568 b on 2022-12-13 using the 0.7 m telescope at the Wellesley College Whitin Observatory [<https://www.wellesley.edu/whitin-observatory>] (WCWO) in MA, USA. Images were taken in a Sloan Digital Sky Survey (SDSS) r' filter using 30 s exposures, and photometry was extracted using a circular aperture with a radius of 2.7 arcsec. The aperture was small enough to exclude the light from the two nearest GAIA DR3 stars with projected separations of 3.7 arcsec and 5.7 arcsec, the latter of which is bright enough to have caused the TESS event if it had been an eclipsing binary. These uncontaminated r' data are used along with the TESS light curves in the joint analysis in Section <ref>.
§.§ High contrast imaging
High-angular resolution observations of candidate systems hosting transiting exoplanets can aid in identifying blended sources within sub-arcsecond scales. These sources might produce a false positive transit signal, particularly if the source is an eclipsing binary. The EXOFOP-TESS website[https://exofop.ipac.caltech.edu] reports five high contrast imaging observations, spanning from the optical range at 562 nm to the near-infrared (NIR) at 2.2 μm, as summarized in Table <ref>. These observations indicate the absence of a close-in companion with sufficient brightness to generate a false positive signal.
Figure <ref> shows the 5-σ sensitivity curves derived from observations made with the `Alopeke dual-channel speckle imaging instrument on Gemini-N (PI: Howell). These observations were obtained with a pixel scale of 0.01 arcsec per pixel and a full width at half maximum (FWHM) resolution of 0.02 arcsec. The data were processed with the speckle pipeline <cit.>. `Alopeke performed simultaneous speckle imaging at 562 (54) nm and 832 (40) nm. The results from these observations effectively eliminate the possibility of any companion with a contrast of Δmag<6.42 at 0.5 arcsec separation at 832 nm. This wavelength range is particularly pertinent to the photometry and spectroscopic observations presented in this paper, as it overlaps with the spectral sensitivity of TESS and MAROON-X.
§.§ MAROON-X spectroscopy
MAROON-X is a high-resolution spectrograph (λ / Δλ∼ 85000) operating in the optical range (500-920 nm) and installed on the 8.1-m Gemini North telescope atop Maunakea, Hawaii <cit.>. The spectrograph is fiber-fed, highly stabilized, and bench-mounted. It was designed to achieve sub-m s^-1 radial velocity (RV) precision.
We obtained 34 spectra of TOI-3568 under program ID GN-2022A-Q-207/-Q-113 (PI: R. Petrucci) with an exposure time of 720 s and using MAROON-X in its single mode of operation. These spectra were collected over ten different nights spanning from 2022-04-08 to 2022-07-26. The strategy of obtaining ∼3 spectra per visit was precautionary, to mitigate the impact of potential outliers. On average, these observations yielded a peak signal-to-noise ratio (S/N) per spectral element of 51±11 in the blue arm and 70±16 in the red arm.
The MAROON-X raw data have been reduced using the standard procedure implemented in the instrument Python3 pipeline <cit.>. This procedure involved bias and background subtraction, order tracing and the extraction of one-dimensional wavelength-calibrated spectra. Wavelength solutions and instrumental drift corrections were based on the simultaneous calibration data of a stabilized Fabry–Pérot etalon <cit.>, which allows order-by-order drift corrections at the sub– level. The flux-weighted midpoint of each observation was used to calculate the barycentric corrections.
We analyzed the spectra using the SpEctrum Radial Velocity AnaLyser (SERVAL) pipeline <cit.>, which employs the template-matching algorithm to extract precise relative RVs. The blue and red channels of MAROON-X are reduced separately, producing independent RVs, as presented in Table <ref>. The blue channel RVs show a root mean square (RMS) dispersion of 9.7 m s^-1 and a median error of 1.9 m s^-1, whereas the red channel shows an RMS dispersion of 9.5 m s^-1 and a median error of 3.5 m s^-1.
We co-added all individual MAROON-X observations shifted to the same stellar reference frame to obtain a master spectrum with high S/N. Our spectroscopic analysis in Section <ref> uses this master spectrum.
In addition, we employ a reference solar spectrum obtained from observations of sunlight reflected by the asteroid Vesta on the night of 2022-04-27 under program ID GN-2022A-Q-227 (PI: Y. Netto). These observations were carried out adopting the same MAROON-X setup (S/N ∼ 400 at 600 nm) to ensure precision in determining the stellar parameters and chemical abundances of TOI-3568 through a differential analysis.
§.§ SPIRou spectro-polarimetry
TOI-3568 was observed by the SpectroPolarimètre Infra-Rouge (SPIRou)[ <http://spirou.irap.omp.eu> and <https://www.cfht.hawaii.edu/Instruments/SPIRou/>] under the large program SPIRou Legacy Survey - Consolidation & Enhancement (SPICE[<http://spirou.irap.omp.eu/Observations/The-SPIRou-Legacy-Survey>]; PI: Jean-François Donati) on nights spanning from 2022-11-14 to 2022-11-21. SPIRou is a stabilized high-resolution near infrared (NIR) spectropolarimeter <cit.> mounted on the 3.6 m Canada-France-Hawaii Telescope (CFHT) atop Maunakea, Hawaii. It is designed for high-precision velocimetry to detect and characterize exoplanets and it provides a full coverage of the NIR spectrum from 950 nm to 2500 nm at a spectral resolving power of λ / Δλ∼ 70000.
We observed TOI-3568 with SPIRou/CFHT at five different epochs, where we obtained a total of 20 spectra with an individual exposure time of 900 s. These observations were carried out in the circular polarization mode (Stokes V), where each set of four exposures provides a polarimetric spectrum. The peak S/N per spectral element varied between 40 and 65, with a median of 59. The air mass of the observations ranged from 1.1 to 1.4 and the Barycentric Earth Radial Velocity (BERV) ranged from -20.0 to 20.7 km s^-1.
The SPIRou data have been reduced using the APERO pipeline v.0.7.284 <cit.>, which produces 1D optimally extracted fluxes that underwent detector gain and artifact corrections, wavelength calibration, blaze correction, and correction for telluric atmospheric absorption. Additionally, the pipeline computed polarimetric Stokes V and null spectra.
The flux spectra have been analyzed using the line-by-line (LBL) method of <cit.>, wherein a high-S/N template spectrum of HD 189733 observed by SPIRou was used as a reference to obtain the RVs. Table <ref> shows the SPIRou RVs, which have an RMS of 12.8 m s^-1 and a median error of 12.4 m s^-1.
§ STELLAR CHARACTERIZATION
We carried out a study to derive the host star properties, as detailed in the following sections. Table <ref> presents a summary of the stellar parameters of TOI-3568.
§.§ Atmospheric parameters
The fundamental atmospheric parameters (T_eff, log g, [Fe/H], and v_t) of TOI-3568 were determined by imposing a strictly line-by-line differential spectroscopic equilibrium of neutral and singly-ionized iron lines relative to the Sun <cit.>. To perform this process automatically, we employed the q2 program[The code is available at <https://github.com/astroChasqui/q2>] <cit.>. The iron line list, as well as the atomic parameters, namely the excitation potential (EP) and oscillator strengths (loggf), are the same as in <cit.>, and the equivalent widths (EWs) of the MAROON-X spectra of TOI-3568 and the Sun (reflected sunlight from Vesta) were manually measured by fitting Gaussian profiles using the splot task in IRAF[IRAF is distributed by the National Optical Astronomy Observatories, which are operated by the Association of Universities for Research in Astronomy, Inc., under cooperative agreement with the National Science Foundation.]. The solar values were kept fixed at (T_eff, log g, [Fe/H], v_t) = (5777 K, 4.44 dex, 0.0 dex, 1.0 km s^-1). The resulting stellar parameters are T_eff=4969±45 K, log g=4.63±0.08 dex, [Fe/H]=-0.01±0.02 dex, and v_t=0.77±0.13 km s^-1.
As an independent check, we also derived fundamental parameters using the excitation and ionization equilibria of Fe I and Fe II lines technique, but without using a strictly line-by-line differential analysis <cit.>. We measured the EWs automatically with the ARES code <cit.> and the pipeline from <cit.> that uses the 2017 version of the MOOG code[https://www.as.utexas.edu/ chris/moog.html] <cit.> and a line list with solar log gf values derived using the same setup (see <cit.> for further details).
In excellent agreement with the parameters obtained with the differential technique, we obtained T_eff=4897±47 K, log g=4.52±0.11 dex, [Fe/H]=0.02±0.02 dex, and v_t=0.52±0.16 km s^-1.
We determined the projected stellar rotation velocity (v_rot sin i_⋆) based on the spectral synthesis of relatively isolated iron lines using the code iSpec <cit.> and following the procedure of <cit.>. Adopting the calibration of <cit.> to determine a macroturbulence velocity of 1.06 km s^-1, we find v_rot sin i_⋆ = 1.4±0.4 km s^-1. However, considering the resolving power of MAROON-X (R = 85,000), we adopt an upper limit of 2 km s^-1.
§.§ Mass, radius, and age
We derived stellar mass, radius, and age using Yonsei-Yale (YY) stellar isochrones <cit.>, as described in <cit.>. This was accomplished via the q2 pipeline, using as input the spectroscopic T_ eff and [Fe/H] obtained from the differential method, Gaia DR3 parallax, and V-mag (corrected for extinction[Visual extinction (Av) is computed as a function of the stellar distance and the galactic coordinates (l, b) by interpolating in the tables given by <cit.> using Frédéric Arenou’s online calculator (<https://wwwhip.obspm.fr/cgi-bin/afm>).]). We obtained an age of 6.1±3.7 Gyr, a mass of 0.780±0.021 M_⊙, and a radius of 0.720±0.013 R_⊙.
This code also provides the trigonometric gravity, which allows us to perform a consistency check on the spectroscopic log g values. Here, q2 yields log g = 4.62 ± 0.02 dex, which is in excellent agreement with the estimates found from the spectroscopic equilibrium.
As a consistency check, we also employed the 1.3 version of the PARAM web interface[<http://stev.oapd.inaf.it/cgi-bin/param_1.3>] that performs a Bayesian estimation of stellar parameters <cit.> based on PARSEC isochrones <cit.>. As input, we employed the same parameters as those used above in q2. In good agreement, within the errors, with our estimations from q2, PARAM returned an age of 4.2±3.7 Gyr, a mass of 0.75±0.02 M_⊙, a radius of 0.69±0.01 R_⊙, and log g = 4.60 ± 0.02 dex.
§.§ Longitudinal magnetic field
We performed a least squares deconvolution (LSD) analysis and computed the longitudinal magnetic field (B_ℓ) on individual SPIRou spectra for Stokes I, Stokes V, and null polarizations, using the methodologies introduced by <cit.> and implemented by <cit.>. The resulting mean LSD profiles are depicted in the top panels of Figure <ref>, while the time series of B_ℓ is illustrated in the bottom panel. Table <ref> shows the values of B_ℓ. The Stokes V profile is featureless, indicating the absence of a Zeeman signature for this star. The average longitudinal magnetic field, B_ℓ=-2.5±14.6 G, is consistent with a null detection of a magnetic field in TOI-3568, suggesting its magnetic inactivity.
§.§ Activity
To investigate activity in TOI-3568, we measured three spectral index proxies for chromospheric activity in the MAROON-X spectra: the Ca infrared triplet (Ca IRT), the H-α (available in both the blue and red channels), and the Na D1 and D2 doublet. We found no significant correlations between these indices and RVs, nor with the FWHM in either the blue or red channels, indicating that the RV variations are likely not caused by stellar activity. We measured the median values and standard deviations of the CaIRT and NaD indices as follows: CaIRT1 = 0.446±0.004, CaIRT2 = 0.332±0.004, CaIRT3 = 0.447±0.005, NaD1 = 0.175±0.004, and NaD_2 = 0.206±0.002. For H-α, we measured 0.281 ± 0.005 in the blue channel and 0.283 ± 0.007 in the red channel, resulting in a mean value of 0.282 ± 0.004. This represents a variation in H-α of 1.4% over a baseline of 100 days, indicating that TOI-3568 appears to have low levels of chromospheric activity.
We also looked for signs of stellar variability in the 2-min TESS PLD data from sectors 55 and 56. To do so, we carried out the analysis of the light curve with the tools in the lightkurve package. As a first step, based on the transit parameters obtained in our analysis (see Section <ref>), we removed all the points falling within the transits of TOI-3568 b. Then, we ran two algorithms on the resulting light curve: the Lomb-Scargle periodogram <cit.> and, an additional independent method, the Auto-Correlation Function <cit.>. After applying the criteria described in <cit.> to assess if a detected signal is real, we determined that no significant peak indicating periodic variability was found and, hence, there is no evidence for rotational modulation in the data. This may be a consequence of a small spot coverage on the stellar surface, or due to the existence of a rotation period longer than ∼14 d, which is likely undetected in the TESS data. Additionally, no flare candidate was detected on the 2-min cadence light curve by the Altaipony code <cit.>, optimized to search for sporadic events. A careful by-eye inspection of the TESS photometry confirms these results.
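As an illustration, the periodogram part of this search can be written with Lightkurve as below, where lc_masked stands for the transit-masked 2-min light curve and the period limits are indicative of the range probed.

# Lomb-Scargle rotation search on the transit-masked light curve
pg = lc_masked.to_periodogram(method="lombscargle",
                              minimum_period=0.5, maximum_period=14)
candidate_period = pg.period_at_max_power
pg.plot()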
The absence of signs of variability suggests that TOI-3568 is an inactive star. This aligns with the mature age value obtained in this study (see Table <ref>) and with the low levels of chromospheric and magnetic activity measured from our spectropolarimetric data.
§.§ Chemical composition
We measured line-by-line differential abundances relative to solar ([X/H]) abundances of 20 elements other than iron, including C, O, Na, Mg, Al, Si, S, K, Ca, Sc, Ti, V, Cr, Mn, Co, Ni, Cu, Zn, Y, and Ba. This was achieved through EW measurements and by employing the curve-of-growth approach with the MOOG program (abfind driver) using the q2 code. The EWs were manually measured using the splot task in IRAF and the adopted line list and atomic parameters were taken from <cit.>. Hyperfine splitting was taken into account for Sc, V, Mn, Co, Cu, Y, and Ba. The O abundance was computed from the 7771-5 Å IR triplet, adopting the non-LTE corrections by <cit.>.
Similar to the previous section, as an independent check, we also performed a non-strictly differential analysis to determine elemental abundances for TOI-3568 following <cit.> and <cit.>.
The EWs were automatically measured for elements with more than three spectral lines using the code ARES <cit.> and an updated line list based on solar log gf values that will be described in a future paper [The line list can be made available upon request directly from the authors if needed before the referred publication is released.] that investigates possible correlations between planetary properties and the chemical abundances of their stellar hosts. For elements with three or less lines, we manually measured the EWs with the splot task in IRAF.
The abundances were calculated using the MOOG code and abfind and blends drivers for the elements without and with hyperfine splitting, respectively. The blends driver was also used to determine the abundance from the [O I] 6300 Å line in order to take the contamination from Ni into account. The non-LTE corrections for the O I 7771-5 Å IR triplet were taken from <cit.>.
Table <ref> lists both line-by-line differential abundances and non-differential values.
The uncertainties were determined considering the standard deviation of the mean abundances (for elements with three or more lines) as well as the contributions of the uncertainties on the atmospheric parameters. In order to calculate these contributions, we vary each atmospheric parameter in turn by ±1σ and calculate new abundances for all elements. We then subtract these new values from the original ones and obtain two differences caused by the variation of each atmospheric parameter. The contribution of each atmospheric parameter for the final uncertainty is taken as the maximum of these two differences. Finally, we add in quadrature the contributions of each of the four atmospheric parameters as well as the standard deviation of the mean abundances. For elements with only one or two lines, the latter is not considered. Figure <ref> shows the abundances as a function of atomic number. There is a very good agreement between both sets of results, which are consistent within 1σ. We also notice the good agreement between the O abundances obtained from different indicators as well as between neutral and ionized species for Sc, Ti and Cr, which further shows that the ionization equilibrium achieved during the determination of the atmospheric parameters is robust.
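This error budget can be summarized by the short sketch below; the function and variable names are ours, and the inputs are the abundance changes obtained when each of the four atmospheric parameters is varied by +1σ and -1σ.

import numpy as np

def abundance_uncertainty(delta_plus, delta_minus, line_scatter=0.0):
    # delta_plus / delta_minus: abundance changes for each atmospheric parameter
    # varied by +1 sigma / -1 sigma; line_scatter: standard deviation of the mean
    # abundance (set to 0.0 for elements with only one or two lines)
    contributions = [max(abs(p), abs(m)) for p, m in zip(delta_plus, delta_minus)]
    return float(np.sqrt(np.sum(np.square(contributions + [line_scatter]))))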
We also determined the lithium abundance by performing a spectral synthesis of the Li I feature at 6707.8 Å. We adopted the line list from <cit.> and a similar methodology, but with two differences: (1) we determined the Gaussian broadening (that considers the combined effects of the instrumental profile, stellar rotation and macroturbulence) using the Fe I line at 6703.567 Å and (2) we kept the abundances of Fe, C and Si as free parameters (using the previously determined values as initial guesses) for the fit. As we can see in Figure <ref>, the best fit shows no distinguishable Li feature and the residuals are smaller than 0.5%. We are only able to determine an upper limit of A(Li) ≤ 0.45 and this value is consistent with those of stars with similar effective temperatures <cit.>.
§.§ Stellar population membership
As mentioned in the introduction, TOI-3568 was classified as a thick disk star by <cit.>. To independently check this previous classification, we determined the galactic population membership in two ways. First, we performed a kinematic classification by computing the Toomre diagram (e.g., <cit.>). To identify regions dominated by each galactic population in the v_ϕ vs. √(v_R^2 + v_z^2) plane, we utilized the Galaxia model <cit.> with updated velocity distributions and local fractions of the thick disk and stellar halo (<cit.>). This is illustrated in Figure <ref>, where the thin disk, thick disk, and halo are represented by red, grey, and blue shaded areas, respectively. The kinematic characteristics of TOI-3568 are consistent with those typically associated with nearby thin-disk stars. We also applied a formalism similar to that of <cit.> (see <cit.> for more details) to kinematically classify the galactic component of TOI-3568. Stars are classified as thick disk in this method when they show thick-disk-to-thin-disk (TD/D) membership ratios higher than 10, a condition not met by TOI-3568 (TD/D = 1.9).
Furthermore, we estimated the orbital properties of TOI-3568, following the description provided by <cit.>. TOI-3568 is on a prograde orbit with a high angular momentum L_z = 1.2 × 10^3 kpc km s^-1, mild eccentricity e = 0.37, and it reaches Z_max = 0.08 kpc from the galactic plane. Additionally, TOI-3568 is situated in a region of E–L_Z space predominantly occupied by disk stars. Although TOI-3568 exhibits an uncommon eccentricity, the other orbital parameters are similar to those of thin disk stars. Understanding the mechanism that led to this star's eccentricity is beyond the scope of this paper.
On the other hand, we also performed a classification of TOI-3568 based on its chemical composition. Generally, thick disk stars are metal-poor and enhanced in α elements <cit.>. Figure <ref> shows the [α/ Fe][Here “α” indicates the average abundance of Ca, Mg, Si, and Ti.] versus [Fe/H] for TOI-3568 in comparison with 1111 FGK dwarfs observed within the context of the HARPS GTO planet search program for which precise abundances of α-elements are available <cit.>. In this figure, the dashed line chemically separates thick disk stars, which are overabundant in α elements, from thin disk stars, which have a lower α-element content at a given [Fe/H].
In line with the kinematic classification, the α content of TOI-3568, with a [α/ Fe] value of 0.061±0.029 dex from the differential method (and [α/ Fe]=0.05±0.06 dex for the non-differential approach), does not exhibit significant enhancement for its metallicity. Therefore, TOI-3568 falls within the thin-disk region, albeit near the transition zone. For comparison, in Figure <ref> we include bona fide thick disk stars with transiting planets for which there are alpha-element abundances available derived from high-resolution optical spectra.
Additional support to our kinematic and chemical Galaxy population classification of TOI-3568 is based on its age. Generally, thick-disk field stars of the Milky Way are older than about 10–11 Gyr <cit.>. Hence, the age of TOI-3568, estimated from the isochrones analysis (approximately in the range of 1-9 Gyr considering the errors in both methods, Sec. <ref>) would be more consistent with those of thin-disk stars <cit.>. Moreover, as a further check to the ages from isochrones estimated above, we utilize the age-[Y/Mg] relation from <cit.> to obtain the age of TOI-3568 employing our measured abundances. This yields an age of 7.6±1.2 Gyr, consistent with the values derived from the isochrones analysis. Therefore, our kinematic and chemical analysis indicate that TOI-3568 is a thin disk (or thin/thick disk transition object) rather than a thick disk star as indicated previously by <cit.>.
§ PLANET DETECTION AND CHARACTERIZATION
§.§ Analysis of TESS photometry data
We analyzed the TESS data from sectors 15, 55, and 56 using methods outlined in <cit.>. This involved fitting a transit model along with a baseline polynomial within selected windows around each transit of TOI-3568 b. We selected six transits with a 30-minute cadence from sector 15 and thirteen transits with a 2-minute cadence from sectors 55 and 56. On average, each transit window contained 32 data points in sector 15 and 468 data points in sectors 55 and 56. This window size includes approximately twice as many out-of-transit data points on each side as in-transit ones. This was found to be a reasonable balance for a non-active star, providing enough baseline data to constrain transit parameters accurately and to allow for precise modeling of the baseline using a low-order polynomial. In this case, we use a first-order polynomial. As illustrated in Figure <ref>, we have also modeled the TESS photometry data with a Gaussian Process (GP) regression using a quasi-periodic kernel as in <cit.>. The primary purpose of employing the GP in this analysis is for detrending, as it does not constrain any significant periodicity. Since the GP approach doesn't show significant improvements compared to the window approach, we opt for the latter for the rest of our analysis for the sake of simplicity.
Our transit model is calculated using the code BATMAN <cit.>, adopting a linear limb-darkening law. The quality of our ground-based WCWO photometry is not sufficient to constrain the limb-darkening on its own. To simplify, we adopted a single coefficient for both TESS and WCWO photometry, which is a reasonable approximation given the significant overlap between the TESS and r-band passes. The posterior distribution of transit parameters is sampled using a Bayesian Markov chain Monte Carlo (MCMC) framework with the package emcee <cit.>. We use uninformative priors for the transit parameters, shown in Table <ref>.
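A minimal sketch of such a transit model with BATMAN is given below; the scaled semi-major axis and the inclination are rough assumptions for illustration, not our fitted values.

import numpy as np
import batman

params = batman.TransitParams()
params.t0 = 0.0            # time of inferior conjunction [d]
params.per = 4.4178        # orbital period [d]
params.rp = 0.067          # planet-to-star radius ratio
params.a = 14.5            # semi-major axis in stellar radii (assumed)
params.inc = 89.0          # orbital inclination [deg] (assumed)
params.ecc = 0.0
params.w = 90.0
params.limb_dark = "linear"
params.u = [0.85]          # single linear limb-darkening coefficient

t = np.linspace(-0.1, 0.1, 1000)                     # time from mid-transit [d]
flux = batman.TransitModel(params, t).light_curve(params)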
§.§ Joint analysis of RVs and photometry to obtain the system's parameters
We calculate the Generalized Lomb-Scargle <cit.> periodogram for the MAROON-X and SPIRou RVs (see Tables <ref> and <ref>), as illustrated in Figure <ref>. The highest power is detected at 4.4178 d with a false alarm probability (FAP) below 10^-14, coinciding precisely with the periodicity of transits observed in the TESS photometry data alone. This suggests that our RV data detect the signal of the star's reflex motion induced by the orbit of the planet TOI-3568 b.
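The periodogram can be reproduced with astropy's floating-mean Lomb-Scargle implementation, as sketched below; t, rv, and rv_err stand for the combined RV time series and the frequency limit is illustrative.

import numpy as np
from astropy.timeseries import LombScargle

ls = LombScargle(t, rv, rv_err)                          # floating mean, GLS-like
frequency, power = ls.autopower(maximum_frequency=1.0)   # periods of 1 d and longer
best_period = 1.0 / frequency[np.argmax(power)]
fap = ls.false_alarm_probability(power.max())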
We therefore perform a Bayesian MCMC joint analysis of RVs and photometry data to determine the system's parameters employing the same approach as in <cit.>. We calculate the log-likelihood for a model that includes both the transits and the RV orbit using a Keplerian model as in <cit.>, allowing for the simultaneous fitting of the TESS photometry data within the transit windows, the ground-based single-transit photometry data, and the RV data. The RV model includes an independent systemic velocity for each RV data set. In the case of MAROON-X blue and red data, this velocity represents only a systematic offset with respect to the template spectrum and therefore should be close to zero. On the other hand, the SPIRou systemic RV should be close to the real radial velocity of the system, although the value obtained in our analysis is not absolutely calibrated.
Initially, the parameters are fitted using an optimization least-squares (OLS) code with initial parameters obtained from an iterative preliminary analysis. We include a white noise jitter term for each RV dataset, which is fitted only in the OLS analysis. This is followed by sampling the posteriors using the Bayesian MCMC framework implemented with the emcee package. We run 20,000 iterations with 50 random walkers, discarding the first 5,000 samples as burn-in. The priors and posteriors for each parameter are presented in Table <ref>, where the best-fit values are considered to be the mode of the distribution, and the errors are the 34th percentile on each side of the median. Using these fit parameters and the stellar parameters from Table <ref>, we derived other planetary quantities, which are also listed in Table <ref>. Note that we employ bounded priors for the time of conjunction and orbital period, which have sufficiently wide bounds in the uniform distribution to ensure minimal impact on the posterior distribution. The best-fit orbit model for TOI-3568 b shows an eccentricity of 0.035±0.021, consistent within 2.5σ with a circular orbit, which is not uncommon for close-in exoplanets.
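To make the sampling step concrete, the sketch below fits a circular Keplerian orbit to the RVs alone with emcee; the actual analysis additionally fits the transit photometry, the eccentricity, and one systemic velocity per instrument. All variable names, bounds, and starting values here are illustrative.

import numpy as np
import emcee

def rv_model(theta, t):
    K, P, t0, gamma = theta
    return gamma - K * np.sin(2.0 * np.pi * (t - t0) / P)

def log_prob(theta, t, rv, rv_err):
    K, P, t0, gamma = theta
    if K < 0 or not (4.0 < P < 5.0):          # crude bounded priors
        return -np.inf
    resid = rv - rv_model(theta, t)
    return -0.5 * np.sum((resid / rv_err) ** 2)

# t, rv, rv_err = ...  combined RV time series (placeholders)
p0 = np.array([12.0, 4.4178, 2459700.0, 0.0]) + 1e-4 * np.random.randn(50, 4)
sampler = emcee.EnsembleSampler(50, 4, log_prob, args=(t, rv, rv_err))
sampler.run_mcmc(p0, 20000, progress=True)
samples = sampler.get_chain(discard=5000, flat=True)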
Figure <ref> illustrates both the TESS and ground-based photometry data, and the best-fit model for the selected windows around the transits. Figures <ref> and <ref> illustrate the MAROON-X and SPIRou RV data and the orbit model. The final RMS of the photometry residuals is 3.6 ppt, while for the RV residuals it is 3.5 m s^-1. The MCMC samples, their correlations, and the posterior distributions for each parameter are illustrated in Figure <ref> in Appendix <ref>.
§.§ Limits on additional planets from TESS photometry
We explored the detection limits of additional transiting planets by performing an injection-recovery test in the TESS light curve of TOI-3568. To do so, we employed the 2-min TESS PLD photometry residuals from sectors 55 and 56 obtained after removing the baseline GP model multiplied by the best-fit transit model for TOI-3568 b determined in Section <ref>.
We used the BATMAN code <cit.> to generate synthetic transit signals that were injected into the TESS PLD photometry residuals. For all of these simulated planets, a linear limb-darkening law, equatorial transits (b = 0) and circular orbits (e = 0) were assumed. The value adopted for the limb-darkening coefficient, u_0 = 0.85, was extracted from Table <ref>. We surveyed the planetary radius–orbital period parameter space, R_P–P, in the ranges of 0–11 R_⊕ with steps of 1 R_⊕ and 1–34 d with a 3 d step, respectively, adopting a multi-phase approach that allows five different values of T_c for each R_P–P combination. As in previous works <cit.>, to detect the injected signals, we ran the Transit Least Squares code <cit.>, an optimized algorithm to search for periodic transits from time-series photometry. A positive planet detection was considered when the recovered orbital period is within 5% of any half multiple of the injected period. In Figure <ref>, we present the detectability map resulting from our injection-recovery test.
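One trial of this test can be sketched as below; the stellar mass and radius entering the Kepler scaling are those from Table <ref>, while the helper function, its argument names, and the restriction to the first few half multiples are our own simplifications.

import numpy as np
import batman
from transitleastsquares import transitleastsquares

def inject_and_recover(time, flux, period_d, rp_rearth, t0,
                       mstar_msun=0.78, rstar_rsun=0.72):
    # Build the synthetic transit (equatorial, circular, linear limb darkening)
    params = batman.TransitParams()
    params.t0, params.per = t0, period_d
    params.rp = rp_rearth * 6371.0 / (rstar_rsun * 696340.0)       # Rp/Rstar
    a_au = mstar_msun ** (1 / 3) * (period_d / 365.25) ** (2 / 3)  # Kepler's third law
    params.a = a_au * 215.03 / rstar_rsun                          # a in stellar radii
    params.inc, params.ecc, params.w = 90.0, 0.0, 90.0
    params.limb_dark, params.u = "linear", [0.85]
    injected = flux * batman.TransitModel(params, time).light_curve(params)

    # Search with Transit Least Squares and apply the 5% half-multiple criterion
    results = transitleastsquares(time, injected).power()
    multiples = 0.5 * np.arange(1, 5) * period_d
    return bool(np.any(np.abs(results.period - multiples) < 0.05 * multiples))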
From this plot it can be seen that planets with a radius larger than ∼4.0 R_⊕ and periods ≲25 d have recovery rates of more than 80%, hence, we can exclude the presence of such additional objects in the system. For similar-sized planets but orbital periods longer than 25 d, the chances of detection are between 0 and 100%. As we consider longer periods, fewer transits of a given planet are expected. Here, the chances of detection can drop abruptly, from high to low percentages, if one or more transits are not detected (for example, if the event falls in the gap between orbits). In the parameter space corresponding to planet sizes smaller than ∼4.0 R_⊕, most of the recovery rates are lower than 20%. This indicates that small planetary objects might still exist that would remain undetected in the present data.
§ DISCUSSIONS
§.§ Characterization of TOI-3568 b
Considering the planet-to-star radius ratio of R_p/R_⋆=0.067±0.003 and the stellar radius obtained from our stellar analysis, we derive the true physical radius of TOI-3568 b as 5.30±0.27 R_⊕, which is approximately 1.37 times the size of Neptune. The RV semi-amplitude of 12.1±0.4 m s^-1 implies a planet mass of 26.4±1.0 M_⊕, approximately 50% larger than Neptune's mass. We estimate a bulk density of 0.98±0.15 g cm^-3. In Figure <ref> we show the mass-radius diagram for the known exoplanets with masses in the range 0.5-500 M_⊕ and radii in the range 0.7-25 R_⊕. Comparison of the evolutionary models by <cit.> for Hydrogen-Helium (H/He) rich planets at 0.045 au and ages between 1 and 10 Gyr reveals that TOI-3568 b is likely an H/He-dominated planet with a core of heavier elements, with a mass between 10 and 25 M_⊕.
With an orbital period of 4.4 days and at an orbital distance of 0.0485±0.0004 au, we estimated the equilibrium temperature for TOI-3568 b as in <cit.>, assuming a uniform heat redistribution and an arbitrary geometric albedo of 0.1, which gives T_ eq=899±12 K. Therefore this planet belongs to the hot super-Neptune class, a rare type of planet.
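For reference, the quoted value follows from the standard equilibrium-temperature relation, as the short check below shows (uniform heat redistribution and a geometric albedo of 0.1, with the stellar values from Table <ref>):

import numpy as np

Rsun, au = 6.957e8, 1.496e11                         # metres
Teff, Rstar, a, A = 4969.0, 0.720 * Rsun, 0.0485 * au, 0.1
Teq = Teff * np.sqrt(Rstar / (2.0 * a)) * (1.0 - A) ** 0.25
print(round(Teq))                                     # ~899 K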
§.§ TOI-3568 b in the sub-Jovian desert
In Figure <ref>, we explore the mass-period diagram, illustrating the population of exoplanets. The color contrasts in this plot represent the detection rate rather than the actual occurrence rate, which ideally should reflect the fundamental physics of planet formation. However, selection effects, primarily due to the limited sensitivity of radial velocity and transit surveys, are significant only in the regime of long periods and small masses. Therefore, the scarcity of planets at short periods and masses above a few hundredths of a Jupiter mass should reflect the intrinsic planetary population in this region. We highlight the boundaries of the sub-Jovian desert as defined by <cit.>. TOI-3568 b lies within the boundaries of the desert, albeit in a region that has a low detection rate, yet is not entirely devoid of planets. TOI-3568 b falls near the lower boundary of the sub-Jovian desert, which is thought to be caused by photoevaporation <cit.>. As pointed out by <cit.>, this lower boundary is becoming more blurred as we detect more planets around that region, raising the question of whether this boundary needs to be reevaluated. Regardless of the existence of a real desert in this region, TOI-3568 b is notably positioned right within the transition between the populations of hot-Jupiters and super-Earths, thus having potential importance in investigating the origin of this natural segmentation of planet populations.
§.§ Evaporation status of TOI-3568 b
Photoevaporation is thought to play an important role in the distribution of planets in the mass-period diagram <cit.>. However, this diagram does not take into account different stellar types. To assess the evaporation status of TOI-3568 b, we plot it on the energy diagram proposed by <cit.>. This diagram explores the relationship between the potential energy of the planet and the extreme ultraviolet (EUV) luminosity it receives from its parent star. These two competing sources of energy control the evaporation rate of the planet throughout its lifetime.
As in Eq. 10 of <cit.>, we consider the planet's potential energy E_ p', including tidal forces from the gravitational interaction with the parent star, and calculate the EUV luminosity following his recipe. For simplicity, we assume a constant mean EUV luminosity and adopt a correction factor of γ=6 to account for the time variation in the energy flux received by the planet. This approximation becomes more problematic for very young systems and very hot stars, but both types of objects account for a small fraction of the exoplanets that we present in our analysis.
Figure <ref> reproduces the energy plot from <cit.>, showing up-to-date exoplanet data from the <exoplanet.eu> catalog for transiting planets with measured masses and highlighting planets in three mass regimes: Jupiter-like (M_p > 2 M_ Nep), Neptune-like (0.25 M_ Nep < M_p < 2 M_ Nep), and Earth-like (M_p < 0.25 M_ Nep). TOI-3568 b stands out as one of the super-Neptunes (M_p = 1.54±0.06 M_ Nep) with the highest levels of EUV luminosity, receiving dE_ EUV/dt > 10^40.6 erg Gyr^-1. The dashed lines in Figure <ref> represent planet lifetimes of 0.1, 5, and 10 Gyr. The regions below these lines are considered evaporation-forbidden regions, where planets receive more EUV energy than is needed to fill their potential well, and thus would evaporate in less than 0.1, 5, or 10 Gyr, respectively.
As in the mass-period diagram, this energy diagram also reveals two distinct populations of exoplanets: one population consists of more massive planets with high potential energy and high EUV luminosity. These planets accumulate in the top-left part of the diagram, above the forbidden region at 5 Gyr, with the lower boundary clearly sculpted by photoevaporation. A second population of less massive planets (Mp<2M_ Nep) is accumulated in a lower EUV luminosity regime in the bottom-right part of the diagram. TOI-3568 b lies at the transition between these two populations, making it difficult to clearly classify it into one category or the other. Although being in a region that is not significantly populated, TOI-3568 b is above a lifetime of 5 Gyr, which is consistent with the mature age of the system. However, the question remains: why is this region less populated? As pointed out by <cit.>, there are likely other formation mechanisms sculpting the lower mass end, causing this natural gap in the population. TOI-3568 b appears to be an important planet for probing this gap. However, a more detailed analysis of this topic is beyond the scope of this paper.
§ CONCLUSIONS
We report the discovery of the transiting exoplanet TOI-3568 b, an inflated hot super-Neptune situated in the sub-Jovian desert. Using TESS and ground-based photometry, MAROON-X optical spectra, and SPIRou NIR spectropolarimetry, we determined the orbit of the planet and the physical parameters of the system. The star is a quiet and mature K dwarf with an effective temperature of 4969±45 K and nearly solar metallicity. Our analysis identifies this star as part of the transitional population between the galactic thin and thick disk, exhibiting characteristics more consistent with the thin disk population. Although our observations did not confirm this candidate as a member of the galactic thick disk, it emerged as an interesting discovery of a rare super-Neptune situated in a region of the mass-period diagram with a low occurrence rate of planets. TOI-3568 b is likely an H/He-dominated planet with a core of heavier elements with a mass between 10 and 25 M_⊕, and it lies at the lowest point between the two significant populations of hot-Jupiters and super-Earths. We analyzed the photoevaporation status of TOI-3568 b, finding that this planet experiences a high regime of EUV luminosity for its mass range, making it one of the planets with the highest EUV luminosities among those with a mass M_p<2 M_ Nep. However, this planet is not in an evaporation-forbidden region, as its status is still consistent with a planet having an evaporation lifetime exceeding 5 Gyr. Perhaps the most interesting aspect of this planet is that it is not a common type of planet. It lies in a transition region in both the mass-period and energy diagrams, an area with a dearth of planets that cannot be explained by photoevaporation.
This work is based on observations obtained at the Gemini Observatory, which is operated by the Association of Universities for Research in Astronomy, Inc., under a cooperative agreement with the NSF on behalf of the Gemini partnership: the US National Science Foundation (NSF), the Canadian National Research Council (NRC), the Chilean Agencia Nacional de Investigación y Desarrollo (ANID), the Brazilian Ministério da Ciência, Tecnologia e Inovação, the Argentinean Ministerio de Ciencia, Tecnología e Innovación, and the Korea Astronomy and Space Institute (KASI).
This work is based on observations obtained at the Canada-France-Hawaii Telescope (CFHT) which is operated from the summit of Maunakea by the National Research Council of Canada, the Institut National des Sciences de l'Univers of the Centre National de la Recherche Scientifique of France, and the University of Hawaii. Based on observations obtained with SPIRou, an international project led by Institut de Recherche en Astrophysique et Planétologie, Toulouse, France.
E.M. acknowledges funding from Fundação de Amparo à Pesquisa do Estado de Minas Gerais (FAPEMIG) under project number APQ-02493-22 and research productivity grant (PQ) number 309829/2022-4 awarded by the National Council for Scientific and Technological Development (CNPq), Brazil.
The University of Chicago group acknowledges funding for the MAROON-X project from the David and Lucile Packard Foundation, the Heising-Simons Foundation, the Gordon and Betty Moore Foundation, the Gemini Observatory, the NSF (award number 2108465), and NASA (grant number 80NSSC22K0117). The Gemini observations are associated with programs ID GN-2022A-Q-207/-Q-113.
Funding for the TESS mission is provided by NASA’s Science Mission Directorate. We acknowledge the use of public TESS data from pipelines at the TESS Science Office and at the TESS Science Processing Operations Center. Resources supporting this work were provided by the NASA High-End Computing (HEC) Program through the NASA Advanced Supercomputing (NAS) Division at Ames Research Center for the production of the SPOC data products. DR was supported by NASA under award number NNA16BD14C for NASA Academic Mission Services. TESS data presented in this paper were obtained from the Mikulski Archive for Space Telescopes (MAST) at the Space Telescope Science Institute.
This work has made use of data from the European Space Agency (ESA) mission Gaia (<https://www.cosmos.esa.int/gaia>), processed by the Gaia Data Processing and Analysis Consortium (DPAC, <https://www.cosmos.esa.int/web/gaia/dpac/consortium>). Funding for the DPAC has been provided by national institutions, in particular, the institutions participating in the Gaia Multilateral Agreement.
This work made use of tpfplotter by J. Lillo-Box (publicly available in www.github.com/jlillo/tpfplotter), which also made use of the python packages astropy, lightkurve, matplotlib and numpy.
KKM acknowledges support from the New York Community Trust Fund for Astrophysical Research.
§ POSTERIOR DISTRIBUTIONS OF MODEL PARAMETERS
This appendix presents, in Figure <ref>, the MCMC samples and the final posterior distributions of the free parameters used in our joint analysis of the TESS and ground-based photometry, as well as the MAROON-X and SPIRou RVs of TOI-3568.
Proceedings of the ASME 2024 International Mechanical Engineering Congress and Exposition
IMECE2024
November 17–21, 2024
Portland, OR
IMECE2024-145280
Georg [email protected], [email protected],
Ekagra Gupta2,
Andrey Morozov2
1Federal Institute for Occupational Safety and Health (BAuA), Dresden, Germany
2University of Stuttgart, Germany
A Practical Approach to Evaluating the Adversarial Distance for Machine Learning Classifiers
§ ABSTRACT
Robustness is critical for machine learning (ML) classifiers to ensure consistent performance in real-world applications where models may encounter corrupted or adversarial inputs. In particular, assessing the robustness of classifiers to adversarial inputs is essential to protect systems from vulnerabilities and thus ensure safety in use. However, accurately computing adversarial robustness has proven challenging for complex ML models and high-dimensional data. Furthermore, evaluations typically measure adversarial accuracy on specific attack budgets, limiting the informative value of the resulting metrics.
This paper investigates the estimation of the more informative adversarial distance using iterative adversarial attacks and a certification approach. Combined, the methods provide a comprehensive evaluation of adversarial robustness by computing estimates for the upper and lower bounds of the adversarial distance. We present visualisations and ablation studies that provide insights into how this evaluation method should be applied and parameterised. We find that our adversarial attack approach is effective compared to related implementations, while the certification method falls short of expectations. The approach in this paper should encourage a more informative way of evaluating the adversarial robustness of ML classifiers.
§ INTRODUCTION
Robustness is a prerequisite for the safe and secure implementation of ML in high-risk applications. The recently published proposal for a horizontal European regulation on artificial intelligence (AI), the "AI Act", explicitly addresses robustness as an essential property <cit.>. High-risk AI on the European market will be required to meet certain levels of accuracy, robustness and cybersecurity to ensure a trustworthy and safe application according to Article 15 <cit.>. Robustness is the ability of an AI module to cope with erroneous, noisy, unknown, or adversarially constructed input data <cit.>. Specifically, adversarial robustness (AR) refers to robustness against "attempts to deceive the AI module by means of carefully chosen harmful input" <cit.>. AR can be interpreted as a worst-case scenario for robustness under the threat of an attacker. Thus, AR is required by two of the above-mentioned requirements of the AI Act at the same time: cybersecurity and robustness. The AI Act encourages the development of benchmarks and measurement methodologies "to address the technical aspects of how to measure the appropriate levels of accuracy, robustness and cybersecurity" <cit.>. However, despite ongoing research and discussion as required by the AI Act <cit.>, there exists a gap in the development of standardised, widely applicable and meaningful measurement methodologies for assessing the robustness of ML systems <cit.>.
Current research typically evaluates adversarial robustness based on accuracy, which requires the prior definition of an attack budget and solely reports a discrete 0-or-1 success rate. Based on the above motivation to develop measures of adversarial robustness, this paper focuses on a less common but more informative metric: (minimal) Adversarial Distance (AD), for which we compute estimates for upper and lower bounds (see Figure <ref>). We find that many popular methods, or their implementations, are ill-suited for accurately estimating adversarial distance. It appears that while the techniques for AD evaluation are available, their correct application using a popular software package is not straightforward.
Our contributions to this problem are as follows:
* We propose an efficient attack algorithm for estimating adversarial distance, which serves as a baseline measure.
* We combine this baseline with other appropriate approaches in an "estimation ensemble", theoretically bounding the true adversarial distance above and below,
* We perform experiments on the parameterisation, effectiveness and computational efficiency of the proposed estimation methods, evaluating two differently robust models on an image classification task[Code available: https://github.com/georgsiedel/adversarial-distance-estimation].
§ PRELIMINARIES
Given a classifier f, an adversarial perturbation is typically defined as the minimal perturbation r that changes the estimated label f(x):
Δ_adv(x, f) = min_{r ∈ ℝ^d} ‖ r ‖_p subject to f(x) ≠ f(x + r)
where x ∈ℝ^d is a data point and p is a norm to quantify the perturbation <cit.>. Although adversarial perturbations have been defined according to other and more general distance measures <cit.>, L_2 and L_∞ are the most common norms used for adversarial robustness evaluation <cit.>.
Δ_adv(x, f) is then called the robustness of f at x. With 𝔼_μ defined as the expected value over all x sampled from distribution μ, the overall robustness of f is defined as:
ρ_adv(f) = 𝔼_μ(Δ_adv(x, f)).
In words, it is defined as the average norm of the minimal perturbation required to change the predictions of f across all data points <cit.>. For this reason, this measure is also referred to as the (mean) adversarial distance <cit.>.
Research on adversarial robustness evaluation has focused on methods that find such adversarial perturbations through attacks, while solving (<ref>) directly is a less popular research direction in machine learning <cit.>. Adversarial examples found by an attack are always a guaranteed upper bound on the true minimal adversarial perturbation, no matter how tight they are.
For the purpose of benchmarking models and defences on datasets, most adversarial attack and certification approaches define a specific attack budget ϵ instead of trying to find the minimal adversarial distance <cit.>. This attack budget bounds the norm of the maximum allowed perturbation. The adversarial robustness is then evaluated according to the probability of correct classification using this given attack budget:
P_adv(f) = 𝔼_μ( f(x')=y for all x' ∈ℬ(x, ϵ) ).
where ℬ(x, ϵ) is a norm ball defining the given attack budget and y is the true label for x. P_adv(f) is typically called "adversarial accuracy" or "astuteness" <cit.>, while "adversarial risk" is its opposite term. Adversarial accuracy can be estimated with a selected attack and attack budget on a given test dataset.
The evaluation approach (<ref>) is quite different from (<ref>). For adversarial accuracy evaluation, attacks can always fully exploit their attack budget. Reporting adversarial accuracy is well comparable for a fixed attack type and attack budget. However, the metric is less informative than (<ref>), as it simply sums up the successes or failures of an attack as 0 or 1. For example, consider two models that could behave very differently under different attacks or in real-world applications: The first is a classifier that is robust just below the given attack budget on all tested data points, and the second is not robust at all. Both would obtain an adversarial accuracy of 0% according to (<ref>), but different mean adversarial distances according to (<ref>), providing a more nuanced and more informative assessment of the models adversarial robustness. Despite its advantages, adversarial distance is only rarely reported in basic research such as <cit.> and is absent from the evaluations of most adversarial defense methods.
§ RELATED WORK
§.§ Adversarial Distance Calculation through Attacks
There exist approaches that pave the way for estimating the upper bound adversarial distance according to (<ref>) by focusing on generating minimal adversarial perturbations instead of using the entire attack budget.
Single-step adversarial attacks are generally not well suited for estimating tight minimal adversarial perturbations, as they generate a deceptive example in one step. One such attack is the Fast Gradient Sign Method (FGSM) <cit.>, where a single perturbation is applied to the original input along the gradient of the classifier. In a non-iterative setting, the attack will use its full attack budget ϵ, producing suboptimally large minimal perturbation estimates (see Figure <ref>).
Iterative attacks, on the other hand, incrementally alter the input while staying within the predefined attack budget. As shown in Figure <ref>, these methods push the perturbation towards the decision boundary. Popular iterative adversarial attacks include Projected Gradient Descent (PGD) <cit.> and Basic Iterative Method (BIM) <cit.>. Iterative methods do not need to use their entire attack budget epsilon, as they could potentially early stop once they have successfully changed the class. The smaller the step size, the more applicable they become for estimating tight minimal perturbations according to (<ref>).
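For illustration, both attack types are available in ART, as sketched below with arbitrary parameter values; classifier and x_test stand for an ART-wrapped model and a batch of inputs. A single-step FGSM always spends the full budget eps, whereas an iterative attack moves in steps of size eps_step, although, as discussed later, the ART PGD implementation does not stop once the class has changed.

import numpy as np
from art.attacks.evasion import FastGradientMethod, ProjectedGradientDescent

fgsm = FastGradientMethod(estimator=classifier, norm=np.inf, eps=8 / 255)
x_adv_fgsm = fgsm.generate(x=x_test)                 # one step, full budget

pgd = ProjectedGradientDescent(estimator=classifier, norm=np.inf,
                               eps=8 / 255, eps_step=1 / 255, max_iter=40)
x_adv_pgd = pgd.generate(x=x_test)                   # iterative, stays within eps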
The authors of <cit.> emphasise adversarial distance as a refined measure of robustness beyond adversarial accuracy. They propose an adversarial attack called DeepFool for L_2 norm, which estimates a tight minimal perturbation using an iterative attack that stops when the prediction changes.
A related approach to DeepFool is NewtonFool <cit.>. This method also aims to create minimal perturbations, meaning that they are in principle suitable for adversarial distance estimation.
The authors of <cit.> propose the Carlini-Wagner (CW) attack that uses an iterative search strategy to find close adversarial perturbations. Their method is reported to be particularly effective for the L_2 norm. The authors are also among the few to report adversarial distance for successful attacks in their experiments.
§.§ Robustness Certification
In contrast to upper bound adversarial distance calculation through attacks, certification methods[Note that certification of classifier robustness is unrelated to efforts of official institutions and authorities that may certify products.] aim at identifying a lower bound on the true minimal adversarial perturbation, namely a norm distance for which no adversarial perturbation exists. One line of research is on using formal mathematical proofs to verify robustness <cit.>.
Another is robust defense approaches that also claim theoretically guaranteed certified robustness. Sometimes, such papers plot guaranteed adversarial accuracy over various sizes of adversarial perturbations <cit.>. When evaluated for a sufficiently continuous set of perturbation sizes, such plots can be interpreted as an estimate of the cumulative distribution function for the lower bounds of the adversarial distances of all data inputs.
§.§ Clever Score Metric
The CLEVER (Cross Lipschitz Extreme Value for nEtwork Robustness) score <cit.> is a popular and scalable robustness certification approach that estimates a lower bound on adversarial distance. It represents an attack-agnostic metric applicable to all neural network classifiers. It is based on the principle of Lipschitz continuity <cit.>.
The CLEVER method uniformly generates an additional number of samples in a specified p-norm neighbourhood around the original input, as shown in Figure <ref>. It then assigns the samples to a specified number of batches and calculates the maximum norm of the local gradients of the samples in the batch. These maximum gradients follow a reverse Weibull distribution according to the principles of extreme value theory <cit.>. From this distribution, the maximum gradient, which represents the cross-Lipschitz constant, can be estimated as the finite right tail of the Weibull distribution using maximum likelihood estimation. The cross-Lipschitz constant can be used to derive a robustness lower bound for the data point, below which the class cannot be changed, implying adversarial robustness.
It should be noted that CLEVER is an estimation based on statistical sampling and does not provide guarantees.
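In practice, the score can be computed with the ART implementation mentioned in the next subsection; a sketch for a single untargeted estimate is shown below, where classifier is an ART-wrapped model, x is one input as a numpy array, and the sampling parameters are illustrative.

import numpy as np
from art.metrics import clever_u

score = clever_u(classifier, x, nb_batches=50, batch_size=500,
                 radius=0.3, norm=np.inf, pool_factor=10)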
§.§ Robustness Toolbox
There are several popular libraries that implement approaches for adversarial attack, defence, and robustness estimation. One such open source library is the Adversarial Robustness Toolbox (ART) <cit.>, implemented in Python.
ART provides an implementation of CLEVER and several adversarial attacks, including those mentioned above as well as HopSkipJump (HSJ) <cit.> or ElasticNet (EAD) <cit.>. On paper then, the ART toolbox provides all the means to compute upper and lower bounds of (<ref>).
However, we show in section <ref> that many attacks, or their implementation in ART, are ill-suited for accurate adversarial distance estimation for one of several reasons. Some iterative approaches do not provide sufficiently tight perturbations, such as the implementation of DeepFool. Other iterative methods, such as the implementation of all PGD variants, do not stop at class change, but use their full attack budget ϵ instead. ART even provides the metric "Empirical Robustness", which implements a wrapper for some iterative attacks to stop them early in order to estimate the adversarial distance. However, our results show that this metric only works for an iterative FGSM attack, and the resulting perturbation is not competitively tight.
Looking back at the state of the art, we find that calculating upper bounds of adversarial distance in particular is relatively uncommon in the literature. It is no wonder then that implementations for practitioners in a popular package such as ART do not work effectively.
§ PROPOSED ALGORITHM
Based on our unsatisfactory findings of the minimal adversarial distance calculation in a common robustness toolbox such as ART, we propose a simple approach to adversarial distance calculation. Our method, described in Algorithm (<ref>), can use any attack that returns an intermediate result even if no adversarial example is found. The attack generates a perturbed image x_adv in norm p using the input image x and the specific attack parameter ϵ_step. The algorithm repeats this generation up to max_iters times. After each iteration, it checks whether f(x_adv) diverges from y, which indicates a successful adversarial attack. An early stopping mechanism stops the attack in this case, allowing the Algorithm (<ref>) to estimate the minimum adversarial perturbation of the image x.
It uses an early stopping function to extract the tightest possible adversarial perturbation for this attack.
Note that the principle of this early stopping function has been described in <cit.> for the DeepFool attack. However, any attack that returns intermediate results after each iteration would be possible (e.g. FGSM, NewtonFool, DeepFool). In this study, we use a simple 1-step PGD attack on the (<ref>) algorithm. While this recombination seems obvious, we will see in the experiments that ART does not provide a functionality that leads to the same results.
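A condensed sketch of this construction with the ART PGD attack is given below; the input x is assumed to carry a batch dimension, y is the true label as an integer, and the choice of eps_step is discussed in the results section.

import numpy as np
from art.attacks.evasion import ProjectedGradientDescent

def adversarial_distance(classifier, x, y, eps_step=1e-3, max_iters=500, norm=np.inf):
    step = ProjectedGradientDescent(estimator=classifier, norm=norm,
                                    eps=eps_step, eps_step=eps_step, max_iter=1)
    x_adv = x.copy()
    for _ in range(max_iters):
        x_adv = step.generate(x=x_adv)               # one PGD step from the current point
        if np.argmax(classifier.predict(x_adv)) != y:
            break                                    # early stop at class change
    return np.linalg.norm((x_adv - x).reshape(-1), ord=norm)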
Algorithm (<ref>) extends our approach to a comprehensive evaluation of the adversarial distance of a classifier. It performs two adversarial attacks on each image in the test set, the first using Algorithm (<ref>), the second being a selected second attack that estimates tight minimal perturbations. The choice of the second attack depends on the norm p and is described in section <ref>. For both attacks, Algorithm (<ref>) computes the adversarial distance between the original image x and its adversarial example x_adv according to the norm p. The algorithm selects the smaller of the two distances for each image, identifying it as the most effective minimal perturbation. Algorithm (<ref>) serves as a baseline, while the second, norm-specific attack may produce even tighter minimal perturbations. From the adversarial distances of all data points, the maximum perturbation distance is identified as radius^max_p. This value is used as the sampling radius for calculating the CLEVER score for the same classifier f. The algorithm then outputs adversarial distances and corresponding CLEVER scores for all data points, ordered by increasing size of the adversarial distances. Overall, Algorithm (<ref>) provides both upper and lower bounds on the adversarial distance for the classifier and the given data points. The real value should lie between the adversarial distance and the CLEVER score, although only the adversarial distance is a guaranteed and trustworthy upper bound.
§ EXPERIMENTS
We evaluate the effectiveness of our adversarial distance computation with experiments on the CIFAR-10 image classification dataset <cit.>.
We compare two pre-trained models according to the Wide Residual Network 28-4 architecture described in <cit.>. The "standard" model is trained with a typical Pytorch training pipeline using only dropout and standard random crop and flip data augmentations as described in <cit.>. The "robust" model is additionally trained with a combined set of random data augmentations, including Mixup <cit.>, TrivialAugment <cit.>, and random p-norm noise injections <cit.>. It is therefore expected to be more robust. Although the model is not trained for adversarial robustness, which <cit.> is a much harder goal to achieve than robustness against random corruptions, the adversarial distance metric should be able to measure the higher robustness of the robust model, even if both models are not adversarially trained. We also compare an "adversarial" model of similar architecture from <cit.> loaded from <cit.>, which is expected to yield much higher adversarial distance.
The computations for which we report runtime evaluations were performed on the data science platform Kaggle, using a NVIDIA P100 as a GPU.
We evaluated adversarial distances for three common norms:
* L_1 Distance (1-Norm) is defined as:
‖x - x_adv‖_1 = ∑_i|x_i - x_adv_i|
L_1 perturbations make intense changes to a few pixels.
* L_2 Distance (2-Norm / Euclidean distance) is defined as:
‖x - x_adv‖_2 = √(∑_i(x_i - x_adv_i)^2)
L_2 perturbations are more evenly spread across all pixels.
* L_∞ Distance (∞ Norm) is defined as:
‖x - x_adv‖_∞ = max_i |x_i - x_adv_i|
This is probably the most common adversarial distance metric. This type of attack tends to be the least visible, as it limits the maximum change to any pixel in the image uniformly.
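All three distances can be computed from the flattened image tensors, for example:

import numpy as np

def adversarial_distance(x, x_adv, p):
    # p = 1, 2 or np.inf selects the L_1, L_2 or L_infinity perturbation size.
    return np.linalg.norm((x_adv - x).ravel(), ord=p)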
Algorithm (<ref>) should change the class on all images, and we initially set max_iters = 500, a value high enough for the standard and robust models. For the adversarial model, this attack budget is not high enough, so it was increased to max_iters = 10000. The overall maximum perturbation ϵ is then defined by Algorithm (<ref>) to be max_iters * ϵ_step. The only important parameter to tune in this algorithm is ϵ_step, for which we report sensible parameters in section <ref>.
We have compared our Algorithm (<ref>) with PGD with several potential second attacks. HSJ, EAD and CW do not return intermediate results if they cannot find an adversarial example after one iteration. They are therefore not suitable to be plugged into our (<ref>) algorithm, but can still find tight adversarial perturbations on their own. We run them with 40 iterations (100 for the adversarial model), as done in <cit.> for the CW attack, and the runtime is reported with this parameterisation. For DeepFool and NewtonFool it makes no difference whether we plug them into Algorithm (<ref>) or let them run independently with the same ϵ_step. We also tested the iterative FGSM implementation in ART, which is deliberately designed to evaluate adversarial distance. Finally, we experimented with ART implementations of different PGD variants and AutoAttack <cit.>. These attacks never stop early when changing a class, even when plugged into our Algorithm (<ref>). They are therefore inapplicable for adversarial distance computation and are not reported upon.
§ RESULTS
In sections <ref> and <ref> we present results on a reasonable parameterisation of Algorithm (<ref>) and the CLEVER computation. In sections <ref> to <ref> we present results for adversarial distances on the three norms. We justify which attack is effective enough to be used as a second attack and evaluate the results of the CLEVER score lower bound estimation for all norms.
§.§ Parameterization of Algorithm 1 with PGD
In a practical framework, 1/255 represents the smallest real-world step size feasible in the pixel space. However, for a theoretical white-box evaluation of adversarial robustness, this value can be adjusted. Figure <ref> shows a clustering effect for high values of ϵ_step, leading to inaccurate discrete overestimates of adversarial distance and requiring a precise parameterization of ϵ_step for our approach. The smaller ϵ_step becomes, the higher the resolution and the tighter the minimal perturbation is for the approach according to Algorithm (<ref>).
Figure <ref> shows how the average L_∞ minimal perturbation on 20 selected images becomes smaller as ϵ_step is reduced. At the same time, more steps are required for each image to change the class, increasing the computation time and revealing a trade-off between performance and computation time. Since the overall computation time for Algorithm (<ref>) is relatively small, we choose a relatively small step size of ϵ_step = 0.0003 for L_∞ distance and 0.005/0.2 for L_2/L_1 respectively. In addition to the trade-off described above, this choice should also take into account the robustness of the image classifier.
§.§ Parameterization of Clever Score
Table <ref> shows that a small number of samples and batches for the CLEVER calculation leads to a larger and, assuming it is a lower bound, less tight mean CLEVER score. Specifically, the lower the number of samples and batches, the higher the proportion of images with a CLEVER score (lower bound) above the adversarial distance (upper bound). For this "Err." ratio of points, CLEVER cannot be a correct lower bound and the metric is not sound. In contrast, setting the number of samples to 1024 and the batch size to 500, as parameterised in the original paper <cit.>, improves the fraction of correct CLEVER scores. It comes at high computational cost, which scales linearly with the number of samples and the number of classes in the dataset. The decision to use a large sample size must be made keeping in mind that this configuration is the most computationally expensive evaluation of all in this paper. From Table <ref> we also find that for the L_2-norm the error ratio of the CLEVER score is lowest for the adversarially trained model. We discuss this phenomenon in more detail in section <ref>.
§.§ Adversarial Distances in L_∞
Figure <ref> shows the results of adversarial distance estimation by multiple L_∞-norm attacks on the standard model for 20 images. The EAD, DeepFool and NewtonFool attacks show high variability, often resulting in high perturbation distances. The (iterative) FGSM also fails to consistently induce minimal perturbations, which is surprising since it is the basis of PGD and should work similarly to our Algorithm (<ref>) with PGD. CW for L_∞ and HSJ generate solid adversarial perturbations.
The HSJ attack outperforms PGD in perturbing some of the 20 images displayed in Figure <ref>, although it is not as effective on average. For the L_∞ norm we therefore chose HSJ as the second attack. The advantage of HSJ is that it is a black-box method and only requires access to the output of the classifier, not to its internal gradients. However, HSJ requires more time per step compared to PGD, underlining the greater efficiency of PGD. Thus, in situations where time efficiency is a priority, PGD seems to be the more sensible choice in L_∞.
§.§ Adversarial Distances in L_2
Under the L_2 norm, we choose CW as the second attack, as it tends to find the tightest adversarial perturbations (see Figure <ref>). CW is expected to produce tight adversarial perturbations in L_2, as its sophisticated loss function is known to be effective in L_2. The Algorithm (<ref>) with PGD is still competitive with CW, and produces tighter estimates for some inputs.
Again, the precision of CW as a second attack comes at the cost of its computational time, as it takes about 40 times as long to compute as PGD on the standard model, as shown in Figure <ref>.
§.§ Adversarial Distances in L_1
Under the L_1 norm, we choose EAD as the second attack. EAD produces significantly tighter adversarial perturbations compared to Algorithm (<ref>) with PGD on almost all data points and on average as can be seen in Figure <ref>. This makes EAD a preferred choice for adversarial distance estimation in L_1, despite being computationally more expensive compared to PGD.
§.§ CLEVER Score as a Lower Bound
The figures <ref> to <ref> show the comparison between CLEVER (for its most reliable 1024-500 parameter setup) and the minimal adversarial attack distance on 500 images, plotted for all 3 norms for the standard and robust models as well as for the adversarial model on L_2-norm. The images are sorted by adversarial distance as returned by Algorithm (<ref>). Misclassified points are assigned an adversarial distance of 0. Ideally, we expect the CLEVER scores to be just below the adversarial distance for most points. For the standard model and L_1 norm, it is clearly visible that CLEVER massively underestimates the adversarial distance, with many images having scores of 0. In contrast, for the robust model and L_1 norm, CLEVER seems to work better, although many images still have CLEVER scores of 0. For the standard model on L_2 and L_∞ as well as the adversarial model on L_2, CLEVER gives reasonable estimates, with about 15% of the CLEVER scores higher than their respective adversarial distances, indicating an incorrect lower bound estimate. For the robust model, CLEVER scores are unreliable on both norms. For L_2, about 18% of the CLEVER scores are incorrect lower bounds and most of the rest are close to zero. For L_∞, 54% are incorrect lower bounds.
The plot for the adversarial model also shows that a larger ratio of points is misclassified, as the model's clean accuracy is lower. In addition, 4% of points cannot be successfully attacked by our Algorithm (<ref>); in this case, we assign those points the maximum adversarial distance found for any other point (see the top right corner of the diagram). An overview of all mean CLEVER scores for all parameter setups compared to the mean adversarial attack distance can be found in Table <ref>.
§ DISCUSSION
Our results shed light on the effectiveness of a selection of evaluation methods for adversarial distance estimation. First, we emphasise that most of this evaluation was carried out with two models that were not trained to be adversarially robust. A comparison with the adversarial model shows clearly that the latter is much more robust in terms of adversarial distance according to our method.
In our experiments, we used implementations of attacks from only one popular adversarial robustness toolbox. It may be that another toolbox has already built an estimator like Algorithm (<ref>).
In our experiments, we found our iterative attack algorithm with early stopping and a small step size, to be an effective baseline in terms of computational efficiency and adversarial distance estimates compared to a number of other adversarial attacks. Surprisingly, it is particularly effective compared to the ART implementations of FGSM <cit.> with early stopping and DeepFool <cit.>, both deliberately designed to estimate exactly this minimal adversarial distance.
We found that our estimation algorithm should be supported by a second algorithm that provides tight estimates of adversarial distances instead of using its entire attack budget. There are several reasons for this:
* Models may be trained adversarially using one particular method, and that method may then be less effective at estimating their adversarial distance.
* Ensembles of adversarial attacks are state of the art for robustness evaluation <cit.>.
* For all norms, but L_1 in particular, there exist attacks that appear to be more effective than PGD. The effectiveness of EAD on L_1 is probably due to its dual regularisation technique, which combines the sparsity of L_1 with the evenly distributed perturbations of the L_2 norm.
In our experiments, we expected the CLEVER score to be an effective estimator of an adversarial distance lower bound. The visualisations of the results as in Figures <ref> allow us to discuss this expectation. They lead us to conclude that for our experiments, CLEVER is a rather unreliable estimator, even when parameterised to sample within the perfect norm distance derived from the previous attacks. This is true in particular when evaluating the robust model, probably because it was not trained to be strictly smooth using adversarial training or smoothing methods. However, as it was trained to be robust to random corruptions, it may have lulled CLEVER, which uses a random sampling scheme for estimation, into a false sense of security. In fact, the mechanism of a wrong estimation of CLEVER on a non-smooth model is explained in detail in <cit.>. Countering this phenomenon with many more samples is inefficient, but a different sampling scheme may help <cit.>. For a reliable lower bound on adversarial distance, robustness verification methods with guarantees are needed. For the adversarial model, CLEVER worked more reliably, but we still found counterexamples, in particular among the less robust points. The precision of CLEVER may be improved by using the adversarial attack distance of each individual point as CLEVER's ϵ value in order to not waste any samples for its estimation.
We found from results such as in Table <ref> that the mean adversarial distance captures the differences in adversarial robustness of all 3 models well, while the adversarial distance distributions across all points provide more details. On the other hand, for the adversarially trained model, our method has trouble finding an adversarial example on some of the points due to their high robustness. Nevertheless, we emphasise the usefulness of measuring the adversarial distance. For example, it helps evaluate the robustness of the robust model compared to the standard model, with its adversarial distance being twice as high. This is plausible, as research suggests a positive effect of random data augmentations such as those used for the robust model, on adversarial robustness <cit.>. However, a typical benchmark evaluation of L_∞ adversarial accuracy with our PGD attack and an attack budget of 8/255 would evaluate the standard model at 0% and the robust model at 1.4% adversarial accuracy, indicating little difference. Our evaluation gives a different and more informative impression about the adversarial robustness of the models compared to each other. We summarize that in line with the findings in <cit.>, adversarial distance is a nuanced and useful measure of robustness at least for comparing models with about similar adversarial accuracy as in the example above.
§ CONCLUSION
This paper proposes a practical approach to estimating the (mean) adversarial distance of classifiers. We find that our simple algorithm provides a solid basis for this estimation, and propose a combination of several attacks and a certification method to provide an overall assessment of a model's adversarial robustness. While our attack methods work effectively to estimate an upper bound on the adversarial distance for different norm distances, the CLEVER score certification does not provide reliable lower bounds in our experiments. We highlight the value of adversarial distance as a metric to consider for an overall robustness evaluation of a machine learning classifier. Future work should explore different (iterative) adversarial attacks to enable tight upper bounds on adversarial distance for various models and applications. Those results can in turn be used to evaluate the tightness of lower bounds on adversarial distance from verification approaches outside the CLEVER score, or the validity of robustness certification approaches.
|
http://arxiv.org/abs/2409.02526v1 | 20240904083432 | Waveform distortion for temperature compensation and synchronization in circadian rhythms: An approach based on the renormalization group method | [
"Shingo Gibo",
"Teiji Kunihiro",
"Tetsuo Hatsuda",
"Gen Kurosawa"
] | physics.bio-ph | [
"physics.bio-ph",
"math.DS",
"q-bio.MN"
] |
Waveform distortion for temperature compensation and synchronization in circadian rhythms: An approach based on the renormalization group method
   Shingo Gibo, Teiji Kunihiro, Tetsuo Hatsuda, and Gen Kurosawa
 September 4, 2024
==============================================================================
§ ABSTRACT
Numerous biological processes accelerate as temperatures increase, but the period of circadian rhythms remains constant, a phenomenon known as temperature compensation, while synchronizing with the 24h light-dark cycle.
We theoretically explore the possible relevance of waveform distortions in circadian gene-protein dynamics to temperature compensation and synchronization.
Our analysis of the Goodwin model provides a coherent explanation of most of the temperature compensation hypotheses. Using the renormalization group method, we analytically demonstrate that the decreasing phase of circadian protein oscillations should lengthen with increasing temperature, leading to waveform distortions that maintain a stable period. This waveform-period correlation also occurs in other oscillators such as the Lotka-Volterra and van der Pol models. A reanalysis of published data confirms our findings on waveform distortion and its impact on the synchronization range.
Thus, we conclude that circadian rhythm waveforms are fundamental to both temperature compensation and synchronization.
Keywords: waveform distortion, renormalization group method, circadian rhythms, temperature compensation, synchronization
§ AUTHOR SUMMARY
Our daily rhythms are underlain by gene regulatory and biochemical networks, called circadian clocks. Although most biochemical reactions accelerate as temperature increases, the period of circadian rhythms is almost constant even with increasing temperature. This phenomenon is called temperature compensation, and its mechanism is still unclear. By applying a method of theoretical physics, the renormalization group method, to a biological problem for the first time, we revealed that the waveform of gene dynamics should be more distorted from a sinusoidal wave at higher temperatures when the circadian period is stable to changes in temperature. This prediction regarding the importance of the waveform in temperature compensation is verified by analyzing published experimental data. Notably, the correlation between period and waveform distortion holds for other oscillator models, indicating that waveform distortion is important for determining the period in various types of oscillatory systems. Another unsolved problem of circadian clocks is how they synchronize with environmental light-dark cycles. By theoretically analyzing a circadian clock model, we found that the frequency range for synchronization becomes narrower when the waveform is distorted.
§ INTRODUCTION
Humans exhibit sleep-wake cycles with an approximate 24h period, and these cycles persist under constant environmental conditions, a phenomenon termed the circadian rhythm. This temporal regulation exists not only in humans but also in various other organisms such as molds, plants, and insects <cit.>. Recent advances in genetic research on insects, molds, mammals, and plants have revealed that genes and proteins are integral components of the primary mechanism governing autonomous circadian rhythms <cit.>.
Understanding circadian rhythms holds promise for deciphering a multitude of sleep patterns, including sleep disorders such as advanced sleep phase syndrome (characterized by early awakening around 4:00 am), delayed sleep phase syndrome (marked by late awakening), non-24h sleep-wake disorder, and narcolepsy <cit.>. Notably, advanced and delayed sleep phase syndromes are believed to be linked to the circadian rhythm period <cit.>. Ongoing studies explore possible correlations between genetic characteristics revealed by large-scale genetic analysis and various sleep patterns <cit.>. However, the nature of the system is so intricate that it remains a challenge to link sleep patterns to specific genes. In such a situation, it would be meaningful to have recourse to mathematical models and obtain possible hints for the linkage and hopefully suggestions for studies of genetic dynamics.
One unresolved fundamental issue in circadian rhythm research is temperature compensation <cit.>, in which the period remains constant despite temperature-induced changes in reaction rates.
Despite the extensive experimental and theoretical research on
temperature compensation, the mechanism has remained elusive.
Hypotheses have been proposed to explain temperature compensation,
including the balance hypothesis, critical-reaction hypothesis,
temperature-amplitude coupling hypothesis, and waveform hypothesis.
The balance hypothesis proposes that the stability of the circadian period
with temperature arises from a balance between period-lengthening and
period-shortening reactions <cit.>.
The critical-reaction hypothesis assumes that there should be critical
reactions that determine the circadian period. If these reaction
rates are stable against temperature variations,
then the circadian period will similarly remain stable
<cit.>. The temperature-amplitude
coupling hypothesis suggests that temperature-sensitive amplitudes
in gene activity rhythms should generate a stable period by generating
larger amplitudes at higher temperatures <cit.>.
Lastly, the waveform hypothesis proposes that temperature-sensitive
waveforms in gene activity rhythms should be correlated with a stable
period in a manner that their higher harmonic components become
larger and the distortion of the waveform increases at higher
temperatures <cit.>.
Another unresolved issue in circadian rhythm research is synchronization
with 24h environmental light-dark cycles.
Previous theoretical and experimental studies on synchronization revealed
that if the internal period of the oscillation closely matches the
external period, then it is more likely to synchronize with the forced
period <cit.>.
Additionally, experimental studies on several species uncovered genes and
proteins in circadian systems affected by a light pulse <cit.>.
In reality, the circadian rhythm must adjust to the 24h light-dark cycle while
maintaining a temperature-compensated circadian period.
Therefore, multiple questions arise. (i) Given the significant temperature
variations between seasons, how do organisms synchronize their circadian
rhythms with the 24h light-dark cycle across various temperatures
<cit.>? (ii) if the gene activity
rhythm of the circadian rhythms becomes more distorted as temperatures increase
to achieve temperature compensation, how does the ease of synchronization
change with temperature variations?
Theoretical analyses incorporating the findings of light pulse experiments
might provide further insights into these questions.
In the present paper, we investigate possible roles of the waveform distortion
in temperature compensation based on analytical and numerical analyses
of the Goodwin model for circadian rhythms and clarify how the
waveform in gene activity rhythms tends to be more distorted at higher
temperatures (e.g., steeper rise, longer tail) for temperature compensation.
To this end, we employ the renormalization group (RG) method, a powerful
tool for analyzing various non-linear systems described by ordinary and partial
differential equations, to derive global solutions that are valid in a global
time domain <cit.>.
Combining an index for waveform distortion, namely non-sinusoidal
power (NS) introduced by two of the present authors (KG) <cit.>, with the result of
the RG method,
we can obtain both a unified picture of
the above mentioned theoretical hypotheses
(balance hypothesis, critical-reaction
hypothesis, temperature-amplitude coupling hypothesis, and waveform hypothesis)
and quantify previous experimental data on Drosophila <cit.>.
Our analyses demonstrate the fundamental role of waveform distortions in temperature compensation from both theoretical and experimental perspectives, in accordance with the previous findings <cit.>.
Moreover, we reveal for the first time the mechanism by which
the synchronization of circadian rhythms changes with temperature if
the waveform in gene activity rhythms is more distorted at higher temperatures.
We theoretically prove that the frequency range of the external force
that synchronizes circadian rhythms becomes narrower if the waveform
of gene activity rhythms is more distorted. This indicates that it is more
difficult to synchronize with light-dark cycles at higher temperatures. The
present result of synchronization is consistent with the previous experimental
and numerical studies demonstrating that the magnitude of the phase shift
caused by light pulses was smaller at higher temperature <cit.>.
§ RESULTS
§ WAVEFORM DISTORTION IN CIRCADIAN RHYTHMS
§.§ Index for waveform distortion
Let the time dependence of a certain variable in the circadian rhythm system be
expressed in a Fourier series as
x(t)=∑_j=-∞^∞a_jexp(i(2π/τ)jt),
with a_j being the Fourier coefficients of
the oscillatory time series.
Then, we introduce an index for describing
the distortion of x(t) from a sinusoidal shape as
NS=[ ∑_j=1^∞|a_j|^2j^m/∑_j=1^∞|a_j|^2j^q]^1/2 (m>q≥ 0),
where m and q are integers.
Termed the "non-sinusoidal power (NS)",
this index is designed to emphasize higher harmonics (m>q)
as discussed in a previous paper <cit.>.
For instance, we have NS=1 when the time series has a sinusoidal waveform
in which only the coefficients for the fundamental component are non-zero
(a_± 1≠0). Conversely, for non-sinusoidal time series,
the coefficients for higher harmonics are non-vanishing, resulting in NS>1.
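Given numerically estimated Fourier coefficients, NS can be evaluated directly from this definition; a minimal Python sketch (with m=4 and q=2, the combination that appears in the Goodwin-model period formula below) is:

import numpy as np

def non_sinusoidal_power(a, m=4, q=2):
    # a[j] (j >= 1) are the Fourier coefficients a_j of the oscillatory time series; a[0] is ignored.
    j = np.arange(1, len(a))
    power = np.abs(np.asarray(a)[1:])**2
    return np.sqrt(np.sum(power * j**m) / np.sum(power * j**q))

# A purely sinusoidal series gives NS = 1; higher harmonics push NS above 1.
print(non_sinusoidal_power([0.0, 1.0, 0.0, 0.0]))   # 1.0
print(non_sinusoidal_power([0.0, 1.0, 0.3, 0.1]))   # > 1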
The previous theoretical work demonstrated that a more distorted waveform
(larger NS) at higher temperature is necessary for temperature compensation
in the four-variable negative-feedback model <cit.>.
However, it is unclear whether the relevance of waveforms
to temperature compensation
found in previous research has general validity not
restricted to some specific model.
To explore the possible relevance of the waveform characteristics to temperature
compensation and synchronization, we consider
the simplest model for circadian rhythms, known as the Goodwin model (Fig. <ref>A)
<cit.>. This model incorporates negative-feedback regulation
of gene expression, a mechanism established as essential for
transcriptional-translational oscillations.
The three-component Goodwin model reads:
dx_1/dt=f(x_3)-k_1x_1,
dx_2/dt=p_1x_1-k_2x_2,
dx_3/dt=p_2x_2-k_3x_3,
where x_1(t) represents mRNA abundance, and x_2(t)
and x_3(t) denote protein abundance.
The function f(x_3) in the model signifies transcriptional regulation,
and the parameters p_1 and p_2 denote protein synthesis and
phosphorylation rates, respectively,
and k_i (i=1, 2, 3) represent degradation rates (Fig. <ref>A).
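As an illustration, the model can be integrated numerically as in the following Python sketch; the parameter values are examples only (the analysis below samples them randomly), and the repressive transcription function f(x_3)=r/x_3^n used later in the analysis is assumed:

import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameter values only; the analysis below samples k_i, p_i, r and n randomly.
k1, k2, k3 = 0.4, 0.4, 0.4
p1, p2, r, n = 1.0, 1.0, 1.0, 10

def goodwin(t, x):
    x1, x2, x3 = x
    f = r / x3**n                          # repressive transcription, f(x_3) = r / x_3^n
    return [f - k1 * x1,                   # mRNA
            p1 * x1 - k2 * x2,             # protein
            p2 * x2 - k3 * x3]             # phosphorylated protein

sol = solve_ivp(goodwin, (0.0, 300.0), [0.2, 0.5, 1.3], method="LSODA",
                rtol=1e-8, atol=1e-10, dense_output=True)
x3_series = sol.y[2]                       # oscillatory time series analyzed in the text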
By applying signal processing methods, Forger derived the period of this model
as follows <cit.>:
τ=2π/√(k_1k_2+k_2k_3+k_3k_1)[ ∑_j=1^∞|a_j|^2j^4/∑_j=1^∞|a_j|^2j^2]^1/2.
Subsequently, two of the present authors indicated that this formula implies
that temperature compensation of the period in this model occurs
only when the waveform (NS) is distorted as temperature
increases <cit.>.
Suppose that all reactions become faster as temperature increases in the model.
Then, one can numerically demonstrate that the waveform tends to be more
distorted (larger NS) at higher temperatures for temperature compensation
(Fig. <ref>B, magenta line).
We call this mechanism the waveform hypothesis for temperature compensation.
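The period formula can be checked against a simulated limit cycle; a sketch, assuming one full period of uniformly sampled x_3 values (endpoint not repeated) and s_2=k_1k_2+k_2k_3+k_3k_1, is:

import numpy as np

def fourier_coefficients(x_period, j_max=5):
    # Complex Fourier coefficients a_1..a_{j_max} of one uniformly sampled period.
    N = len(x_period)
    k = np.arange(N)
    return np.array([np.mean(x_period * np.exp(-2j * np.pi * j * k / N))
                     for j in range(1, j_max + 1)])

def period_from_formula(x_period, s2, j_max=5):
    # tau = 2*pi/sqrt(s2) * [sum |a_j|^2 j^4 / sum |a_j|^2 j^2]^(1/2), truncated at j_max.
    a = fourier_coefficients(x_period, j_max)
    j = np.arange(1, j_max + 1)
    power = np.abs(a)**2
    return 2 * np.pi / np.sqrt(s2) * np.sqrt((power * j**4).sum() / (power * j**2).sum())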
§.§ Theories of temperature compensation
The waveform hypothesis and the other three hypotheses for temperature
compensation (balance hypothesis I, critical-reaction hypothesis II, and
amplitude hypothesis III) are not mutually exclusive, which can be understood in a
unified way through Eq. (<ref>):
* I.
The balance hypothesis, previously explored theoretically by Ruoff <cit.>,
suggests that the temperature compensation of the period is caused by a balance
between the effects of reactions that shorten the period and those that lengthen
the period. Equation (<ref>) illustrates that the balance between the effect
of shortening the period and that of lengthening the period can be caused by
changing the distortion of the waveform.
* II.
Equation (<ref>) demonstrates that even if some of the governing reactions in circadian rhythms are temperature-insensitive <cit.>, as discussed in the critical-reaction hypothesis, temperature compensation still requires waveform distortion at high temperatures if the reactions other than these governing reactions accelerate at higher temperatures.
* III.
Our numerical analysis of the circadian model (Fig. <ref>B)
shows that when temperature compensation occurs, the waveform is more
distorted at higher temperatures,
and the amplitude of the oscillation is larger. This tendency for the amplitude to
increase at high temperatures
is consistent with the amplitude hypothesis. According to Eq.
(<ref>), a greater distortion of the
waveform at higher temperatures is necessary, but not sufficient, for
temperature compensation.
§.§ Synchronization in circadian rhythms
If the waveform is more distorted at higher temperatures for temperature
compensation, then it would be intriguing to explore whether the temperature-
dependent waveform also affects the synchronization of circadian rhythms with
environmental light-dark cycles at various temperatures.
Theoretical and experimental studies of synchronization
and circadian rhythms illustrated that the oscillation is more likely to
synchronize with forcing cycles if the internal period is sufficiently
close to the period of the forcing cycles <cit.>. In mammals and Neurospora, a light pulse is
known to increase Per1 and frq mRNA expression <cit.>. To incorporate gene activation during the light phase,
we employ
a model in which
Eq. (<ref>) for the change in mRNA expression is modified to
dx_1/dt=f(x_3)-k_1x_1+Icos(Ω t),
where I represents the light intensity and Ω is
the angular frequency of the light-dark cycles.
§.§ Main results
In Sections 3 and 4, we provide detailed discussions on how the waveform
in gene activity rhythms should be distorted at higher temperatures using the
RG method, as well as the synchronization in circadian rhythms.
The main results are pictorially summarized in Fig. <ref>B.
The magenta line indicates that the waveform of the gene activity rhythms
should be more distorted
at higher temperatures for temperature compensation, whereas the cyan line
indicates that synchronization
with the light-dark cycles should become more difficult at higher temperatures
because of the larger waveform distortion.
§ WAVEFORM DISTORTION AND TEMPERATURE COMPENSATION
§.§ Numerical simulation of the waveform-period correlation
Equation (<ref>) and numerical simulations indicate that NS tends
to be larger when the period is relatively stable even with
increased parameter values (see Fig. <ref>B).
To quantitatively reveal the correlation between waveform and period,
we conduct
numerical simulations using a circadian clock model.
In the analysis of the circadian clock model, the transcription function
f(x_3)=r/x_3^n was considered for simplicity.
We first search
for parameter sets in which oscillations occur.
We define
those parameter sets as the reference parameter sets. Because
many biochemical parameters have not yet been measured, we prepared
100 random reference parameter sets
for the oscillations. k_1, k_2, k_3, p_1, p_2, and r
were assigned uniformly distributed random values ranging from 0 to 10,
and n
was assigned a uniformly distributed random
integer ranging from 9 to 15. The period obtained with each reference
parameter set
was denoted as τ_1.
Next, reaction rates often follow the Arrhenius equation, which states that a 10°C
rise in temperature increases
the reaction rate by a factor of 2-3. To incorporate the effect of high
temperature, instead of
using the Arrhenius equation, each parameter in the model's reference parameter
set
was randomly multiplied by a factor of 1.1-1.9, and the period and waveform
were examined when the oscillation behavior persists.
The period obtained by increasing the parameters from each reference parameter
set
was denoted as τ_2, and the ratio, τ_2/τ_1,
was called the relative period.
To quantitatively analyze the correlation between period and waveform
when temperature compensation occurs,
we consider
the case of the relative period ≥ 0.85 because it has been experimentally
confirmed that the circadian rhythm frequency at high temperature divided by
that at low temperature of the wild-type ranges
between 0.85 and 1.15 when the temperature
is increased by 10°C and temperature compensation occurs <cit.>. In the
present numerical analysis, the period
is relatively stable (relative period ≥ 0.85)
in 34 of the 4900 parameter sets,
in qualitative agreement with previous theoretical analyses
that the period often shortens with increasing
reaction rates <cit.>. Because the range of parameter
variation
is 1.1-1.9 and the average value
is 1.5, the reaction rate
is accelerated by a factor of 1.5 on average, and the average relative period
is approximately 1/1.5 ≈ 0.67. Figure <ref> indicates
that when temperature compensation occurs, there is a clear correlation between the period and waveform.
§.§ RG analysis of waveform-period correlations
In the previous section, we demonstrated that the temperature compensation
of the period
in the Goodwin model is always accompanied by an increase of the index
NS and waveform distortion correspondingly occurs
as temperatures increase.
This raises the following question:
Is there a universal law governing the waveform distortion occurring when
temperature increases? To
answer this question, we derive
an approximate solution for the time evolution of the Goodwin model
for the circadian rhythm using
a powerful reduction method,
called "the renormalization-group(RG) method " <cit.>.
The solution obtained using the RG method can be interpreted
as the envelope of the set of solutions given in the perturbation theory,
which has been applied to various models, including (but not limited to) ODE, PDE,
discrete systems, and stochastic equations
<cit.>.
To apply the RG method, we again set the transcriptional regulation
function f(x_3) to be r/x_3^n.
In this function, n is the cooperativity of the transcriptional
regulation, which is a Hopf bifurcation parameter.
The approximate solution of the phosphorylated protein of the circadian clock
reads (see
Supplementary Information A.2)
x_3(t)=( p_1p_2r/s_3)^s_3/s_1s_2+ε A_0sin(ω t)
+ε^2 A_1A_0^2sin(2ω t+α)+o(ε^2)
where we have
ε =n-s_4/s_3,
s_1=k_1+k_2+k_3,
s_2=k_1k_2+k_2k_3+k_3k_1,
s_3=k_1k_2k_3,
s_4=(k_1+k_2)(k_2+k_3)(k_3+k_1),
with the angular velocity and the phase parameter of the second-order term
ω = √(s_2)-εs_1s_3s_4/6(2s_1s_2^2-
(s_1^2+6s_2)s_3)√(s_2)+o(ε^2),
α = arctan( s_1/2√(s_2)),
as well as the amplitudes
A_0=√(4((s_1^2+s_2)^2-2ε s_1s_3)
(s_1^2+4s_2)s_3^3/ε (2s_1s_2^2-(s_1^2+6s_2)s_3)
(s_1^2+s_2)^2s_4s_1s_2)( p_1p_2r/s_3) ^s_3/s_1s_2,
A_1=s_4s_1/12s_3√(s_1^2+4s_2)(
s_3/p_1p_2r) ^s_3/s_1s_2.
The RG method provides an approximate but globally valid solution,
and thus enables us to make a
detailed investigation of
the waveform distortion when temperature compensation occurs
in the Goodwin model.
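For given degradation rates and cooperativity, the displayed expressions can be evaluated directly; the helper below assumes the natural reading of the ω formula, with the denominator 6(2s_1s_2^2-(s_1^2+6s_2)s_3)√(s_2):

import numpy as np

def rg_frequency_and_phase(k1, k2, k3, n):
    # omega and the second-harmonic phase alpha of the RG solution, to the order given above.
    s1 = k1 + k2 + k3
    s2 = k1 * k2 + k2 * k3 + k3 * k1
    s3 = k1 * k2 * k3
    s4 = (k1 + k2) * (k2 + k3) * (k3 + k1)
    eps = n - s4 / s3                      # distance from the Hopf bifurcation
    omega = np.sqrt(s2) - eps * s1 * s3 * s4 / (
        6.0 * (2.0 * s1 * s2**2 - (s1**2 + 6.0 * s2) * s3) * np.sqrt(s2))
    alpha = np.arctan(s1 / (2.0 * np.sqrt(s2)))
    return omega, alpha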
The numerical analysis using the same parameter sets as used in Fig. <ref>A,
in which the relative period in the model remains stable
and within the interval (0.85, 1.0) against the temperature variations,
shows that the phase parameter α in the 2nd-order frequency
tends to decrease with increasing reaction rates
(see Fig. <ref>A).
When the increase in reaction rates is small, the change
in the phase of the second-order frequency scatters around zero
and is negligible.
However, with a significant increase in the reaction rates,
the phase of the second order always tends
to decrease as the reaction rates increase.
The significance of the phase parameter α given in the second-order term
on the waveform of the time series can be understood intuitively as follows:
when the phase α is large, the increasing duration tends to become longer because the time profiles given by sin(ω t) and sin(2ω t+α) overlap less (Fig. <ref>BC, blue line).
Conversely, a smaller α tends to result
in a shorter increasing duration because of an additive effect of the two terms,
which leads to a steeper slope on total, as presented
in Fig. <ref>BC, red line.
Therefore, the numerical results in Fig. <ref>A suggest
that the decreasing duration of the time series elongates as the reaction rates increase when the period remains relatively stable despite the increasing reaction rates.
For a theoretical confirmation of the numerical result that the
phase parameter α given in the second-order term
tends to become smaller as the temperature increases
when the period is temperature-compensated,
we analyze
the sensitivity of the angular frequency ω and the phase α
to the reaction rates by utilizing
the results of the RG method.
With use of Eqs. (<ref>) and (<ref>), we calculate
dω =(∂ω/∂ k_1)dk_1+(∂ω/∂
k_2)dk_2+(∂ω/∂ k_3)dk_3 and dα =
(∂α/∂ k_1)dk_1+(∂α/∂ k_2)dk_2+
(∂α/∂ k_3)dk_3.
In Fig. <ref>A, we present the parameter regions given
by the constraints dω =0 (red surface) and
dα =0 (yellow surface) for
the cooperativity n=12 of the transcription regulation.
We can see that the region for dα<0 (outside yellow surface)
includes that given
by the constraint dω =0 for all k_i (i=1, 2, 3).
This implies that if the period is robust against a change
in the parameters
k_i (i=1, 2, 3), then the phase α in the second-order term
always becomes smaller with increasing parameters.
Next, let us examine how the parameter regions given by the constraints dω =0 and dα =0
change with variations of the cooperativity n.
The numerical calculation
shows that the region corresponding to dα <0
includes that given
by dω =0 for n=13 and 14.
However, in the case of an exceedingly high cooperativity of
transcription regulation n, which is not biologically realistic,
dα can be positive when dω =0. For instance, for n=20,
although dα is negative for most of the parameter space,
there is a region in which dα >0 when dω =0
(see Fig. <ref>B).
These results indicate that if circadian rhythms are stable
under temperature variations,
the slope in the increasing phase of phosphoproteins should become sharper as
temperature increases.
Thus, we conclude that
(i) the waveform of the gene activity rhythm should be more distorted
at higher temperatures, and (ii) the rate of the increase
in phosphoprotein levels
should be greater at higher temperatures if temperature compensation is achieved.
In principle, these features can be tested experimentally.
§.§ Verification of the theoretical analysis of temperature
compensation using published experimental data
The period formula Eq. (<ref>) of the Goodwin model
indicates that the non-sinusoidal index NS of the waveform
of the circadian rhythms becomes larger,
implying greater distortion of the waveform when all reactions are faster at
higher temperatures during temperature compensation.
To test this theoretical prediction of circadian gene activity in actual
organisms, we analyze
the waveform of the activity rhythms of the timeless gene in Drosophila at 18 and
29°C using published experimental data <cit.>.
First, we extract the time series of the average curve
from Fig. 3C in a prior study <cit.>
using WebPlotDigitizer at 1 h intervals.
Second, we add uniformly distributed noise between
-0.4 and 0.4
to the extracted
data to consider data errors. Then, we interpolate
the time series every 0.1 h
using spline interpolation (Fig. <ref>A).
The interpolated data were detrended by multiplying
an exponential
function so that the positions of the local minima of the oscillations
are approximately reproduced.
Then, the detrended time series is fitted with
a sum of trigonometric functions up to the third harmonics using
the generalized harmonic analysis (GHA) method <cit.>.
The width of the window for the analysis is set to one period.
Using the Fourier coefficients of the fitting time series,
we
evaluate the distribution and average value of NS.
The
resultant NS of the activity rhythms
of the timeless gene, as defined by Eq. (<ref>),
at a higher temperature (29 ^∘C) tends to be larger than
that at a lower temperature (18 ^∘C), whereas
the NS values are somewhat varied (Fig. <ref>B),
which
is consistent with
the prediction.
Experimental studies have demonstrated that temperature compensation
can be impaired by genetic mutations. In the Drosophila mutant perL,
the period increases with increasing temperature <cit.>.
Equation (<ref>) implies that if the period increases with temperature,
then the waveform of the circadian rhythm should be non-sinusoidal and more
distorted at higher temperatures.
Thus, it is predicted that the waveform
of the circadian gene activity in perL should become more non-sinusoidal
with higher temperatures. Again, we can quantify the waveform of perL using
experimental data <cit.> (Supplementary Fig. <ref>). The waveform
of circadian gene activity tends to be more non-sinusoidal
at higher temperatures in perL, as observed in the wild-type,
whereas the NS values varied, in line with the prediction.
§ THEORETICAL ANALYSIS OF SYNCHRONIZATION IN THE CIRCADIAN RHYTHM MODEL
The numerical result in Fig. <ref>B shows that the range of synchronization with the light-dark cycles tends
to decrease as the waveform becomes more distorted
in the simple circadian clock model.
To
clarify the condition for synchronization, we again
consider
the Goodwin model but with an external
force incorporated as follows:
dx_1/dt=f(x_3)-k_1x_1+Icos(Ω t)
dx_2/dt=p_1x_1-k_2x_2
dx_3/dt=p_2x_2-k_3x_3
where Icos(Ω t) is a periodic environmental change,
such as a light-dark cycle. By eliminating x_1 and x_2,
Eqs. (<ref>)-(<ref>) is converted to the following single equation:
d^3x_3/dt^3+s_1d^2x_3/dt^2
+s_2dx_3/dt+s_3x_3
=p_1p_2f(x_3)+p_1p_2Icos(Ω t).
If the model is to admit a synchronization
to the external cycle Icos(Ω t) at all,
then x_3(t) should be written as the Fourier series
x_3(t)=∑_j=-∞^∞a_jexp(iΩ jt).
Multiplying Eq. (<ref>) by dx_3/dt and integrating over the interval from t to t+2π/Ω, we obtain the following equation:
Ω^3-ω^2Ω =
1/2p_1p_2I Rsinβ
where
ω=2π/τ=√(s_2∑_j=1^∞|a_j|^2j^2/∑_j=1^∞|a_j|^2j^4)
is the natural angular frequency without the external force and
R=
|a_1|/∑_j=1^∞|a_j|^2j^4
with β being the argument of a_1 such that a_1=|a_1|exp(iβ).
Because -1≤sinβ≤ 1, when x_3(t) synchronizes with the external
cycles, the angular frequency of the external cycles (Ω) should satisfy
the inequality
|Ω^3-ω^2Ω |
≤1/2p_1p_2I R.
We note that
R defined by Eq. (<ref>) becomes smaller when the components
of higher harmonics become larger and the waveform exhibits greater
distortion. Therefore, if the waveform is more distorted
by, say, an increasing temperature, the bounds of Eq. (<ref>) become smaller,
and accordingly, the allowed region of the middle term is narrower.
The left hand side of Eq. (<ref>) is a cubic function of Ω,
which is monotonically increasing near Ω=ω.
If the waveform is more distorted, the right-hand side of Eq. (<ref>) becomes smaller, making the allowed region of Eq. (<ref>) narrower.
Then, the range of Ω that causes synchronization becomes narrower,
as presented in Fig. <ref>.
This indicates that the range of synchronization with light-dark cycles always
decreases as the waveform becomes more distorted in the simple circadian model,
which is consistent with the numerical simulation in Fig. <ref>B.
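The narrowing of the synchronization range can be illustrated by scanning Ω against this bound; in the sketch below, ω and R would be obtained from the Fourier coefficients of the unforced oscillation, and a smaller R (a more distorted waveform) yields a narrower range:

import numpy as np

def synchronization_range(omega, R, p1, p2, I, n_grid=20001):
    # Forcing frequencies Omega (searched near omega) satisfying |Omega^3 - omega^2*Omega| <= p1*p2*I*R/2.
    bound = 0.5 * p1 * p2 * I * R
    Omega = np.linspace(0.5 * omega, 1.5 * omega, n_grid)
    ok = np.abs(Omega**3 - omega**2 * Omega) <= bound
    return Omega[ok].min(), Omega[ok].max()

print(synchronization_range(omega=1.0, R=0.5, p1=1.0, p2=1.0, I=0.1))   # wider range
print(synchronization_range(omega=1.0, R=0.1, p1=1.0, p2=1.0, I=0.1))   # narrower range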
§ DISCUSSION
We theoretically explored the conditions for clarifying the temperature
compensation of the biological clock and its synchronization to light-dark cycles
with a particular focus on waveform distortion. The theoretical analysis
of the Goodwin model, one of the most studied models of biological
clocks, revealed that waveform distortion of gene activity rhythms with increasing
temperature is necessary for temperature compensation. Furthermore, we derived an
approximate but globally valid solution to the waveform of the time profiles using
the RG method as a powerful tool for global analysis. This allowed us to
investigate, based on the analytical solution, whether there is a universal law
for the mechanism by which the waveform changes with temperature
variation.
The results indicated that temperature compensation is more likely to occur if the waveform is distorted such that the decreasing duration of the circadian protein oscillation elongates as temperature increases.
Although theoretical predictions based on a model might not
always be realized in real organisms, we quantified the gene activity rhythms of
published experimental data using Drosophila. This quantification confirmed that
the waveform is distorted at high temperatures, in accordance with our
theoretical predictions.
It is notable that the systematic wave distortion governed by Eq. (<ref>),
which we have found to hold in the Goodwin model, also applies to a wide
class of non-linear oscillators used for biological phenomena different from
biological rhythms, including the Lotka-Volterra model <cit.>,
which is commonly used in ecology, and the van der Pol model, as
presented in the Supplementary Information A.3 and A.4 (see also <cit.>). This suggests that
exploring the possible significance of waveform distortion in other mathematical
models, such as the Fitz-Hugh-Nagumo model in neuroscience, would be intriguing
<cit.>.
To the best of our knowledge, this is the first study to apply the RG method,
a powerful resummation method of the perturbation series first developed in physics,
to circadian rhythm problems. In the RG method, secular terms appearing in the
naïve perturbation series are renormalized into the 'integral constants'
, which thus acquire the nature of the slow modes,
making it a powerful tool for global and asymptotic analysis.
Unlike the naive perturbation theory, the solutions given by the RG method provide
a time evolution close to numerical simulations in
the relevant global domain of time,
offering an approximate solution for the period and waveform.
The analytical results predict that longer tails of gene activity rhythms
at higher temperatures occur for temperature compensation.
This study
also investigated the synchronization
with environmental light-dark cycles at various temperatures <cit.>. The numerical simulations and theoretical analysis predict
that as the distortion
of the gene activity rhythms for achieving a temperature-compensated period
increases, it becomes more difficult to synchronize with
the light-dark cycle.
This prediction aligns with the reported temperature-dependent variation
in response
to light pulses in Drosophila and Neurospora,
displaying smaller phase shifts
at higher temperatures <cit.> <cit.>.
As mentioned in the Introduction, previous theoretical and experimental
studies, such as those using Drosophila <cit.>, suggested
that waveforms in gene activity rhythms do not change under temperature variations,
although they are blurred by error bands. These findings are
apparently inconsistent with our current conclusion. We believe that this
discrepancy stems from two main factors: differing assumptions about the
temperature sensitivity of degradation rates and differing interpretations of
experimental results regarding gene activity rhythms at distinct temperatures.
First, the previous study assumed that all degradation rates are
temperature-insensitive. Thus, the waveform of gene activity rhythms does not need
to change with temperature. By contrast, we assume that some degradation
rates at least should accelerate with temperature, leading to a more distorted
waveform of gene activity rhythms at higher temperatures. Second, the previous
study <cit.> interpreted their experimental results as indicating that gene
activity rhythms at different temperatures can be collapsed onto each other by
rescaling, supporting their prediction that temperature compensation occurs because
of rescaling and the stability of the temperature insensitivity of degradation
rates. Conversely, our quantification of their experimental data
indicated that the waveform tends to be more distorted at higher
temperatures, whereas variation in NS values was noted. Thus, the
present result is consistent
with our theoretical
prediction. We believe that further systematic quantification of the waveforms of
gene activity and/or protein activity rhythms in various circadian organisms will
be essential for clarifying the importance of the waveform in circadian
rhythms in the future.
§ METHODS
§.§ Computation of ODEs
The ordinary differential equations in this work were calculated by a fourth-order Runge-Kutta method with MATLAB (The MathWorks, Natick, MA). The time step size Δ t for the Goodwin model (Eqs. (<ref>)-(<ref>)) was 0.001, and that for the Lotka-Volterra model (Eqs. (<ref>)-(<ref>)) and the van der Pol oscillator (Eq. (<ref>)) was 0.0001.
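For reference, a fixed-step fourth-order Runge-Kutta integrator of this kind can be sketched in Python as follows (the published computations used MATLAB; this is only illustrative):

import numpy as np

def rk4(f, x0, t_end, dt):
    # f(t, x) -> dx/dt (array-like); returns the time grid and the integrated trajectory.
    n_steps = int(round(t_end / dt))
    t = np.arange(n_steps + 1) * dt
    x = np.empty((n_steps + 1, len(x0)))
    x[0] = x0
    for i in range(n_steps):
        k1 = np.asarray(f(t[i], x[i]))
        k2 = np.asarray(f(t[i] + dt / 2, x[i] + dt / 2 * k1))
        k3 = np.asarray(f(t[i] + dt / 2, x[i] + dt / 2 * k2))
        k4 = np.asarray(f(t[i] + dt, x[i] + dt * k3))
        x[i + 1] = x[i] + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return t, x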
§.§ Numerical analysis of synchronization with light-dark cycles
The angular frequency of the light-dark cycle Ω in Eq. (<ref>) was varied in steps of 0.0001. We considered that the model was synchronized with the light-dark cycle when the change in the amplitude was smaller than the rounding error, as in Fig. 1B.
§.§ Estimation of waveform distortion from simulated time-series
We estimated waveform distortion, i.e., non-sinusoidal power (NS, see Eq. (<ref>)) from the simulated time-series by using the generalized harmonic analysis (GHA) method as in Figs. 1B and 2A. The GHA method estimates the amplitudes A_j and B_j, and the frequencies f_j, which minimize the squared residual ∫_0^L[x(t)-∑_j=1^j_max{ A_jcos(2π f_jt)+B_jsin(2π f_jt) } ]^2dt. The detailed procedure of GHA can be found in <cit.>.
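A simplified stand-in for this fit, which fixes the frequencies at integer multiples of a given fundamental f_1 instead of optimizing them as the full GHA does, is:

import numpy as np

def harmonic_fit(t, x, f1, j_max=3):
    # Least-squares amplitudes A_j, B_j of cos/sin at frequencies j*f1 (j = 1..j_max); t, x are numpy arrays.
    cols = [np.ones_like(t)]
    for j in range(1, j_max + 1):
        cols += [np.cos(2 * np.pi * j * f1 * t), np.sin(2 * np.pi * j * f1 * t)]
    coef, *_ = np.linalg.lstsq(np.column_stack(cols), x, rcond=None)
    A, B = coef[1::2], coef[2::2]
    # A_j^2 + B_j^2 is proportional to |a_j|^2, so it can be used directly in the NS ratio.
    return A, B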
§.§ Estimation of phases of
Fourier components from simulated time-series
By using the GHA method, we
computed the Fourier series x(t)=∑_j=1^j_max{ A_jcos(2π f_jt)+B_jsin(2π f_jt)}. This form was converted into x(t)=∑_j=1^j_maxr_jsin(2π f_jt+α_j), where r_j=√(A_j^2+B_j^2) and α_j =arctan (B/A).
§.§ Computation of parameter space for temperature compensation using the RG method
The frequency change dω =(∂ω/∂ k_1)dk_1+(∂ω/∂ k_2)dk_2+(∂ω/∂ k_3)dk_3 and the phase change dα =(∂α/∂ k_1)dk_1+(∂α/∂ k_2)dk_2+(∂α/∂ k_3)dk_3 were calculated from Eqs. (<ref>) and (<ref>) by using Maple. The parameter space for dω =0 and dα =0 in Fig. <ref>AB were numerically generated by Maple.
§.§ Estimation of waveform distortion from published experimental data
The average curves of experimental time-series of timeless luciferase (tim-luc) reporter (shown to recapitulate the dynamics of timeless gene expression) at 18 and 29 ^∘C of Figs. 3C and 4B in the published literature <cit.> were extracted using WebPlotDigitizer at 1 h intervals. Uniformly distributed noise between -0.4 and 0.4 was added to the extracted data, generating 100 time series datasets so that the experimental noise
is roughly reproduced as in Fig. 5A and Supplementary Fig. 3A. Spline interpolation was applied to set the sampling interval to 0.1 h to smooth time series data. The time series data with noise were detrended by multiplying an exponential function so that the positions of local minima of the oscillations are approximately
reproduced. The Fourier coefficients of the detrended time series were quantified using GHA. The NS values were estimated from the coefficients up to the third harmonics as in Fig. 5B and Supplementary Fig. 3B.
§ ACKNOWLEDGEMENTS
We thank H. Nakao, H. Chiba, Y. Kawahara, A. Mochizuki for useful comments on this study. This work was supported by grants from Japan Science and Technology Agency (JPMJCR1913 to G.K.), and from the Japanese Society for the Promotion of Science, and the Ministry of Education, Culture, Sports, Science, and Technology in Japan (JP21K06105 to G.K., 19K03872 to T.K.).
§ SUPPORTING INFORMATION FOR
§ WAVEFORM DISTORTION FOR TEMPERATURE COMPENSATION AND SYNCHRONIZATION IN CIRCADIAN RHYTHMS: AN APPROACH BASED ON THE RENORMALIZATION GROUP METHOD
Shingo Gibo^1*, Teiji Kunihiro^2, Tetsuo Hatsuda^1, and Gen Kurosawa^1*
^1Interdisciplinary Theoretical and Mathematical Sciences Program (iTHEMS), RIKEN
^2Yukawa Institute for Theoretical Physics (YITP), Kyoto University
^*Correspondence: [email protected] (S.G), [email protected] (G.K)
This PDF includes:
Supplementary Text
Supplementary Figures 1 to 3
Supplementary Tables 1 to 2
§ SUPPLEMENTARY TEXT
§.§ A.1 Brief introduction of the renormalization group (RG) method
using a simple model with Hopf bifurcation
In this section, we introduce the RG method <cit.> in a geometrical manner as
formulated in <cit.> with a simpler prescription
without the redundant 'time-splitting' procedure.
To this end, we
use a generic model with a Hopf bifurcation.
A more detailed account of
the method is given in <cit.>.
Let us consider the model equation
Lx(t)=F(x(t); ε)
where x(t) is a state variable of our dynamics,
L=∑_n=1^Na_n(d/dt)^n
is a linear differential operator, F(x(t); ε) is a nonlinear function of x(t),
and ε is an internal parameter of F, which acts as a bifurcation parameter of the system.
We assume that the model has a fixed point x_0 satisfying the equation
F(x_0; 0)=0, which is destabilized for ε >0 through
the Hopf bifurcation. We are interested in the derivation of the reduced equation
and an approximate but valid solution in a global domain of time around the critical point
of the Hopf bifurcation.
Thus,
we apply
the perturbation theory and express the solution around an arbitrary
time t=t_0 belonging to a global domain in the asymptotic regime (see below) in a power series of ε as follows:
x(t;t_0)=x_0+ε u_1(t;t_0)+ε ^2u_2(t;t_0)+ε^3u_3(t;t_0)+o(ε^3).
Substituting Eq. (<ref>) into Eq. (<ref>) and equating the terms with the same powers of ε, we obtain
O(ε^1): L'u_1=0,
O(ε^2): L'u_2=f_1(u_1),
O(ε^3): L'u_3=f_2(u_1,u_2),
where
L'=L-(∂ F/∂ x)|_x=x_0,ε =0,
and f_1(u_1) and f_2(u_1,u_2) are nonlinear functions that depend on F(x_0).
Because Hopf bifurcation occurs at ε =0, two of the eigenvalues of the linear differential operator L' are written as
± iω_0, with ω_0 being a real number, and the others have negative real parts as
Re(λ_k)<0 (k=1,...,N-2).
Then, the first-order solution can be expressed as
u_1(t;t_0)=A(t_0)cos(ω_0 t+θ (t_0))+∑_k=1^N-2c_k(t_0)e^λ_kt,
where A(t_0), θ (t_0), and c_k(t_0) are the
integral constants which are assumed to
depend on the initial time t_0.
Next, we consider the asymptotic regime as t → ∞ so that
the second term describing the transient behavior has virtually become negligible.
Then,
the first-order solution in this asymptotic regime can be expressed only by the first term as
u_1(t →∞ ;t_0)=A(t_0)cos(ω_0 t+θ (t_0)).
Next, we proceed to the second-order equation.
Substituting Eq. (<ref>) into Eq. (<ref>), we have
L'u_2
=b_1Acos(ω_0t+θ)
+b_2A^2cos(2(ω_0t+θ))
+b_2A^2,
where b_k (k=1, 2) are constants depending on f_1(u_1).
It is to be noted that the inhomogeneous part (r.h.s.) contains a term proportional to cos(ω_0 t+θ), which is a zero mode of the linear operator that gives rise to secular terms in the particular solutions of the inhomogeneous equation.
The general solution to Eq. (<ref>) is given as a sum of a particular solution to the inhomogeneous equation
and the general solution to the homogeneous equation.
Now, it is possible and convenient to choose
the coefficients of the latter so that all of the secular terms vanish
at t=t_0 <cit.>, which leads to the second-order solution as
u_2(t;t_0)= (t-t_0)d_1Acos(ω_0 t+θ)
+(t-t_0)d_2Asin(ω_0 t+θ)
+d_3A^2cos(2(ω_0t+θ))
+d_4A^2sin(2(ω_0t+θ))
+d_5A^2,
where d_k (k=1⋯ 5) are constants depending on the right-hand side of Eq. (<ref>).
Similarly, the third-order solution takes the form of
u_3(t;t_0)= (t-t_0)(f_1aA^3+f_1bA)cos(ω_0t+θ)
+(t-t_0)(f_2aA^3+f_2bA)sin(ω_0t+θ)
+(t-t_0)^2f_3Acos(ω_0t+θ)
+(t-t_0)^2f_4Asin(ω_0t+θ)
+f_5A^2cos(2(ω_0t+θ))
+f_6A^2sin(2(ω_0t+θ))
+(t-t_0)f_7A^2cos(2(ω_0t+θ))
+(t-t_0)f_8A^2sin(2(ω_0t+θ))
+f_9A^3cos(3(ω_0t+θ))+f_10A^3sin(3(ω_0t+θ))
+f_11A^2+(t-t_0)f_12A^2,
where f_1a, f_1b, f_2a, f_2b, and f_k (k=3⋯ 12) are constants depending on Eqs. (<ref>), (<ref>), and (<ref>).
Note that the solution is constructed so that the secular terms
vanish at t=t_0.
Thus, collecting all of the terms, the approximate solution to Eq. (<ref>) up to the third order
of ε reads
x(t;t_0)= x_0+ε Acos(ω_0 t+θ)
+ε^2{(t-t_0)d_1Acos(ω_0 t+θ)
+(t-t_0)d_2Asin(ω_0 t+θ)
+d_3A^2cos(2(ω_0t+θ))
+d_4A^2sin(2(ω_0t+θ))
+d_5A^2}
+ε^3{ (t-t_0)(f_1aA^3+f_1bA)cos(ω_0t+θ)
+(t-t_0)(f_2aA^3+f_2bA)sin(ω_0t+θ)
+(t-t_0)^2f_3Acos(ω_0t+θ)
+(t-t_0)^2f_4Asin(ω_0t+θ)
+f_5A^2cos(2(ω_0t+θ))
+f_6A^2sin(2(ω_0t+θ))
+(t-t_0)f_7A^2cos(2(ω_0t+θ))
+(t-t_0)f_8A^2sin(2(ω_0t+θ))
+f_9A^3cos(3(ω_0t+θ))+f_10A^3sin(3(ω_0t+θ))
+f_11A^2+(t-t_0)f_12A^2} +o(ε^3).
Because Eq. (<ref>) contains the secular terms,
this solution
is valid only locally around t=t_0, because it exhibits a divergent behavior as | t-t_0|
goes to infinity.
In fact, this is a rather common behavior occurring in naïve perturbation expansions.
Next, we use a geometrical viewpoint to circumvent the disastrous situation
following <cit.>.
The solution (<ref>) gives a family of curves with t_0 being the parameter specifying each curve
in the t-x plane. Each curve gives a good approximate solution to the original equation in a local
domain around t=t_0. The idea is that the envelope curve of the family of curves
hopefully gives an approximate but valid solution in the global domain including the arbitrary time t_0.
Indeed, this has rigorously been demonstrated to be the case
<cit.>.
Now, the envelope curve can be constructed using the following envelope equation <cit.>:
.
dx(t;t_0)/dt_0| _t_0=t
=.
∂ x/∂ t_0| _t_0=t
+.
dA(t_0)/dt_0∂ x/∂ A| _t_0=t
+.
dθ (t_0)/dt_0∂ x/∂θ| _t_0=t
=0.
Note that we have
taken into account the fact that
the integral constants A and θ depend on the `initial time' t=t_0, and (<ref>) actually gives the dynamical equations for these variables. As will be done shortly, the insertion of the solutions to the dynamic equation into (<ref>) gives
an approximate but globally valid solution to the original equation.
Because the envelope equation (<ref>) takes a similar form
as the RG equation in quantum field theory,
it is also called the RG equation, and the asymptotic/global
analysis based on this equation was named the RG method <cit.>.
Substituting Eq. (<ref>) into Eq. (<ref>), we have
0= ε{dA/dt-ε^2f_1aA^3-ε (d_1+ε f_1b)A
}cos(ω_0 t+θ)
+ε A{
-dθ/dt-ε^2 f_2aA^2-ε (d_2+ε f_2b)
}sin(ω_0 t+θ)
+ε^2A{
2(d_3+ε f_5)dA/dt+2(d_4+ε f_6)Adθ/dt-ε f_7A
}cos(2(ω_0t+θ))
+ε^2A{
2(d_4+ε f_6)dA/dt-2(d_3+ε f_5)Adθ/dt-ε f_8A
}sin(2(ω_0t+θ))
+3ε^3A^2{
f_9dA/dt+f_10Adθ/dt}cos(3(ω_0t+θ))
+3ε^3A^2{
f_10dA/dt-f_9Adθ/dt}sin(3(ω_0t+θ))
+2ε^2(d_5+ε f_11)AdA/dt
-ε^3 f_12A+o(ε^3).
Because dA/dt and dθ/dt are of order ε, the coefficients of cos(2(ω_0t+θ)),
sin(2(ω_0t+θ)),
cos(3(ω_0t+θ)), sin(3(ω_0t+θ)), and AdA/dt are of order ε^3 or
higher.
To make Eq. (<ref>) hold for any t,
we need only ensure that
the coefficients of the independent functions, namely cos(ω_0t+θ) and sin(ω_0t+θ),
vanish, and hence, we have
dA/dt=ε^2f_1aA^3+ε(d_1+ε f_1b)A+o(ε^2),
dθ/dt=-ε^2f_2aA^2-ε (d_2+ε f_2b)+o(ε^2),
which are the dynamic equations governing the 'integral constants' A and θ. We now see that
the integral constants have been lifted to dynamic variables
through the RG/envelope equation.
The amplitude equation (<ref>) can be readily solved analytically.
For instance, when f_1a < 0 and d_1+ε f_1b > 0,
it yields
A(t)=A_0Ā/√(Ā^2+(A_0^2-Ā^2)e^-2α t),
where
α=ε(d_1+ε f_1b) and
A_0=√(-(d_1+ε f_1b)/(ε f_1a)),
with Ā being the initial amplitude.
Equation (<ref>) indicates that the amplitude approaches A_0 monotonically as
t → ∞,
implying that A_0 is nothing but the amplitude of the limit cycle admitted
in the original equation (<ref>).
Furthermore, Eq. (<ref>) indicates that the angular frequency on the limit cycle reads
ω=ω_0+(dθ/dt)|_A=A_0
=
ω_0-ε(d_2f_1a-d_1f_2a)/f_1a,
which is constant.
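For concreteness, the closed-form amplitude relaxation above can be checked against a direct numerical integration of the amplitude equation. The following Python sketch does this for illustrative values of the constants d_1, f_1a, and f_1b; these are not taken from any particular model and only need to satisfy f_1a<0 and d_1+ε f_1b>0:

```python
# Minimal check: integrate dA/dt = eps^2*f1a*A^3 + eps*(d1 + eps*f1b)*A and
# compare with the closed-form solution; d1, f1a, f1b are hypothetical values.
import numpy as np
from scipy.integrate import solve_ivp

eps, d1, f1a, f1b = 0.2, 1.0, -0.5, 0.3
alpha = eps*(d1 + eps*f1b)
A0 = np.sqrt(-(d1 + eps*f1b)/(eps*f1a))          # limit-cycle amplitude
Abar = 0.1                                       # initial amplitude

rhs = lambda t, A: eps**2*f1a*A**3 + eps*(d1 + eps*f1b)*A
t = np.linspace(0.0, 80.0, 2001)
num = solve_ivp(rhs, (t[0], t[-1]), [Abar], t_eval=t, rtol=1e-10).y[0]
closed = A0*Abar/np.sqrt(Abar**2 + (A0**2 - Abar**2)*np.exp(-2*alpha*t))

print("A0 =", A0)
print("max |numerical - closed form| =", np.max(np.abs(num - closed)))
```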
The globally valid solution is given as the envelope of the
family of curves, as previously stated.
Thus, the solution on the limit cycle, which is valid in a global
domain in the asymptotic regime, reads
x(t)= x(t;t_0)|_t_0=t
= x_0+ε A_0cos(ω t+θ_0)
+ε^2{ d_3A_0^2cos(2(ω t+θ_0))
+d_4A_0^2sin(2(ω t+θ_0))+d_5A_0^2}
+ε^3{ f_5A_0^2cos(2(ω t+θ_0))
+f_6A_0^2sin(2(ω t+θ_0))
+f_9A_0^3cos(3(ω t+θ_0))
+f_10A_0^3sin(3(ω t+θ_0))
+f_11A_0^2} +o(ε^3).
The RG method is a powerful method for obtaining a globally valid solution.
This method can be applied to various models including discrete, stochastic, and partial differential equations,
as given in <cit.>.
§.§ A.2 Derivation of a time-evolution solution in a circadian rhythm model using the RG method
In this subsection, we applied the RG method to derive an approximate but globally valid solution of
the
circadian clock model given by Eq. (<ref>)-(<ref>), which is
a system of first-order equations with three variables.
We first convert the system into a single equation with higher-order derivatives as
d^3x_3/dt^3
+s_1d^2x_3/dt^2
+s_2dx_3/dt
+s_3x_3=p_1p_2f(x_3)
where s_1=k_1+k_2+k_3, s_2=k_1k_2+k_2k_3+k_3k_1, s_3=k_1k_2k_3.
To obtain the approximate solution, we set the transcriptional regulator f(x_3) as r/x_3^n.
This model has a fixed point
x_0=( s_3/p_1p_2r) ^-1/(n+1),
which is destabilized through Hopf bifurcation for
n> n_0≡s_4/s_3,
with s_4=(k_1+k_2)(k_2+k_3)(k_3+k_1).
Then, we expand the solution around t=t_0 as a series
of ε
x_3(t;t_0) = x_0+ε u_1(t;t_0)+ε^2 u_2(t;t_0)+ε ^3u_3(t;t_0) +O(ε^3).
Substituting Eq. (<ref>) into Eq. (<ref>) and
equating the coefficients with the same powers of ε, we obtain
O(ε): d^3u_1/dt^3+s_1d^2u_1/dt^2
+s_2du_1/dt+s_1s_2u_1=0,
O(ε^2): d^3u_2/dt^3+s_1d^2u_2/dt^2
+s_2du_2/dt+s_1s_2u_2
=-s_3u_1+B_1u_1^2,
O(ε^3): d^3u_3/dt^3+s_1d^2u_3/dt^2
+s_2du_3/dt+s_1s_2u_3
=-s_3u_2+B_2u_1u_2+B_3u_1^2+B_4u_1^3,
where
B_1=s_4s_2s_1/2s_3( s_3/p_1p_2r) ^s_3/s_1s_2,
B_2=s_4s_2s_1/s_3( s_3/p_1p_2r) ^s_3/s_1s_2,
B_3={
s_4+s_3/2-( s_4s_3/2s_2s_1ln( s_3/p_1p_2r)
)
}( s_3/p_1p_2r) ^s_3/s_1s_2,
B_4=(s_2s_1+s_3)s_4s_1s_2/6s_3^2( 2s_3/p_1p_2r) ^s_3/s_1s_2.
Because Eq. (<ref>) has three eigenvalues, namely λ_1,2=± i√(s_2) and λ_3=-s_1, the general solution of Eq. (<ref>) reads
u_1(t;t_0)=A(t_0)cos(ω_0t+θ (t_0))+c(t_0)e^-s_1t, (ω_0:= √(s_2)),
where A, θ, and c are integral constants that depend on initial time t_0.
Considering the asymptotic regime in which the second term in Eq. (<ref>) is negligibly small,
the first-order solution can be written as
u_1(t;t_0)=A(t_0)cos(ω_0t+θ (t_0)).
Substituting Eq. (<ref>) into Eq. (<ref>), we obtain
d^3u_2/dt^3+s_1d^2u_2/dt^2
+s_2du_2/dt+s_1s_2u_2
=-s_3Acos(ω_0t+θ)+CA^2cos(2ω_0t+2θ)
+CA^2,
where
C=s_4s_1s_2/4s_3( s_3/p_1p_2r) ^s_3/s_1s_2.
Equation (<ref>) is an inhomogeneous equation that contains a zero mode of the linear operator,
and the solution
having a suitable form for applying the RG method is written as
u_2(t;t_0)=
D_1A(t-t_0)cos(ω_0t+θ)
-D_2A(t-t_0)sin(ω_0t+θ)
-D_3A^2cos(2ω_0t+2θ)-D_4A^2sin(2ω_0t+2θ)
+D_5A^2,
where the coefficients are
D_1=s_3/2(s_1^2+s_2),
D_2=s_1s_3/2√(s_2)(s_1^2+s_2),
D_3=s_4s_1^2/12(s_1^2+4s_2)s_3( s_3/p_1p_2r) ^s_3/s_1s_2,
D_4=s_4s_1√(s_2)/6(s_1^2+4s_2)s_3( s_3/p_1p_2r) ^s_3/s_1s_2,
D_5=s_4/4s_3( s_3/p_1p_2r) ^s_3/s_1s_2.
Tentatively, after collecting all the obtained terms, we have the approximate solution in the second order as
x_3(t;t_0)= x_0+ε Acos(ω_0t+θ)
+ε^2{ D_1A(t-t_0)cos(ω_0t+θ)
-D_2A(t-t_0)sin(ω_0t+θ)
-D_3A^2cos(2ω_0t+2θ)-D_4A^2sin(2ω_0t+2θ)
+D_5A^2} +o(ε^2).
Because it contains the secular terms, the solution diverges as |t-t_0|
goes to infinity.
To resum the would-be divergent terms,
we apply the RG equation dx_3(t;t_0)/dt_0|_t_0=t=0,
which leads to the equations governing the amplitude and phase as
dA/dt=ε s_3A/2(s_1^2+s_2),
and
dθ /dt=ε s_1s_3/2√(s_2)(s_1^2+s_2),
nicely describing the slow motions of the amplitude and phase, respectively.
However, the second-order result fails to describe the transient behavior approaching the limit cycle exhibited by the present model.
Therefore, we analyze the third-order equation, which yields
a limit-cycle solution.
Substituting Eqs. (<ref>) and (<ref>) into Eq. (<ref>), we have
d^3u_3/dt^3+s_1d^2u_3/dt^2
+s_2du_3/dt+s_1s_2u_3
=
E_1A^3cos(ω_0t+θ)-E_2A^3sin(ω_0t+θ)
-E_3A(t-t_0)cos(ω_0t+θ)
+E_4A(t-t_0)sin(ω_0t+θ)
+E_5A^2cos(2ω_0t+2θ)
+E_6A^2sin(2ω_0t+2θ)
+E_7A^2(t-t_0)cos(2ω_0t+2θ)
-E_8A^2(t-t_0)sin(2ω_0t+2θ)
-E_9A^3cos(3ω_0t+3θ)
-E_10A^3sin(3ω_0t+3θ)
+E_11A^2+E_12A^2(t-t_0),
where
E_1=(s_1^2s_2-4s_1^2s_3+6s_1s_2^2-18s_2s_3)s_4s_1s_2/12(s_1^2+4s_2)s_3^2( s_3/p_1p_2r) ^2s_3/s_1s_2,
E_2=s_4^2s_1^2√(s_2^3)/12(s_1^2+4s_2)s_3^2( s_3/p_1p_2r) ^2s_3/s_1s_2,
E_3=s_3^2/2(s_1^2+s_2),
E_4=s_1s_3^2/2(s_1^2+s_2)√(s_2),
E_5={7s_1^3s_2-4s_1^2s_3+24s_1s_2^2-12s_2s_3/12(s_1^2+4s_2)
-s_4s_3/4s_1s_2ln( s_3/p_1p_2r)
}( s_3/p_1p_2r) ^s_3/s_1s_2,
E_6=s_4s_1√(s_2)/6(s_1^2+4s_2)( s_3/p_1p_2r) ^s_3/s_1s_2,
E_7=s_4s_1s_2/4(s_1^2+s_2)( s_3/p_1p_2r) ^s_3/s_1s_2,
E_8=s_4s_1^2√(s_2)/4(s_1^2+s_2)( s_3/p_1p_2r) ^s_3/s_1s_2,
E_9=((s_1^2+2s_2)s_1+2s_3)s_4s_1s_2^2/12(s_1^2+4s_2)s_3^2( s_3/p_1p_2r) ^2s_3/s_1s_2,
E_10=s_4^2s_1^2√(s_2^3)/12(s_1^2+4s_2)s_3^2( s_3/p_1p_2r) ^2s_3/s_1s_2,
E_11=1/4{
s_1s_2
-s_4s_3/s_1s_2ln( s_3/p_1p_2r)
}( s_3/p_1p_2r) ^2s_3/s_1s_2,
E_12=s_4s_1s_2/4(s_1^2+s_2)( s_3/p_1p_2r) ^s_3/s_1s_2.
The solution to Eq. (<ref>)
is given by
u_3(t;t_0)= -(F_1aA^3+F_1bA)(t-t_0)cos(ω_0t+θ)
+(F_2aA^3+F_2bA)(t-t_0)sin(ω_0t+θ)
-F_3A(t-t_0)^2cos(ω_0t+θ)
-F_4A(t-t_0)^2sin(ω_0t+θ)
+F_5A^2cos(2ω_0t+2θ)
+F_6A^2sin(2ω_0t+2θ)
-F_7A^2(t-t_0)cos(2ω_0t+2θ)
+F_8A^2(t-t_0)sin(2ω_0t+2θ)
+F_9A^3cos(3ω_0t+3θ)
+F_10A^3sin(3ω_0t+3θ)
+F_11A^2+F_12A^2(t-t_0),
where
F_1a=(2s_1s_2^2-(s_1^2+6s_2)s_3)s_4s_1s_2/8(s_1^2+4s_2)(s_1^2+s_2)s_3^2( s_3/p_1p_2r) ^2s_3/s_1s_2,
F_1b=s_1s_3^2/(s_1^2+s_2)^3,
F_2a=((s_1^2+7s_2)s_1s_2-(4s_1^2+19s_2)s_3)s_4s_1^2√(s_2)/24(s_1^2+s_2)(s_1^2+4s_2)s_3^2( s_3/p_1p_2r) ^2s_3/s_1s_2,
F_2b=((s_1^2+6s_2)s_1^2-3s_2^2)s_3^2/8(s_1^2+s_2)^3√(s_2^3),
F_3=(s_1^2-s_2)s_3^2/8(s_1^2+s_2)s_2,
F_4=s_1s_3^2/4(s_1^2+s_2)√(s_2),
F_5={s_4s_3/12(s_1^2+4s_2)s_2^2ln( s_3/p_1p_2r) .
. -
(3s_1^5+4s_1^3s_2+11s_1^2s_3+64s_1s_2^2-52s_2s_3)s_1/36(s_1^2+s_2)(s_1^2+4s_2)^2}( s_3/p_1p_2r) ^s_3/s_1s_2,
F_6={s_4s_3/6(s_1^2+4s_2)s_1√(s_2^3)ln( s_3/p_1p_2r) .
. -
7s_1^5s_2-s_1^4s_3-8s_1^3s_2^2+38s_1^2s_2s_3+48s_1s_2^3-24s_2^2s_3/36(s_1^2+s_2)(s_1^2+4s_2)^2√(s_2)}( s_3/p_1p_2r) ^s_3/s_1s_2,
F_7=s_4s_1^2/4(s_1^2+s_2)(s_1^2+4s_2)( s_3/p_1p_2r) ^s_3/s_1s_2,
F_8=(s_1^2-2s_2)s_4s_1/12(s_1^2+s_2)(s_1^2+4s_2)√(s_2)( s_3/p_1p_2r) ^s_3/s_1s_2,
F_9=(s_1(s_1^2-s_2)+5s_3)s_4s_1^2s_2/96(s_1^2+4s_2)(s_1^2+9s_2)s_3^2( s_3/p_1p_2r) ^2s_3/s_1s_2,
F_10=(2(2s_1^2+3s_2)s_1s_2+(6s_2-s_1^2)s_3)s_4s_1√(s_2)/96(s_1^2+4s_2)(s_1^2+9s_2)s_3^2( s_3/p_1p_2r) ^2s_3/s_1s_2,
F_11={(s_1^3+s_3)/4(s_1^2+s_2)s_1
-s_4s_3/4s_1^2s_2^2ln( s_3/p_1p_2r)
}( s_3/p_1p_2r) ^s_3/s_1s_2,
F_12=s_4/4(s_1^2+s_2)( s_3/p_1p_2r) ^s_3/s_1s_2.
To obtain a globally valid solution using Eq. (<ref>), we apply
the RG method, which utilizes the RG equation
. dx_3(t;t_0)/dt_0| _t_0=t
= . ∂ x_3(t;t_0)/∂ t_0| _t_0=t
+. dA/dt_0∂ x_3(t;t_0)/∂ A| _t_0=t
+. dθ/dt_0∂ x_3(t;t_0)/∂θ| _t_0=t
= {εdA/dt-ε^2 D_1A+ε ^3(F_1aA^3+F_1bA) }cos(ω_0t+θ)
+{ -ε Adθ/dt+ε ^2D_2A-ε ^3(F_2aA^3+F_2bA) }sin(ω_0t+θ)
=0,
where we have neglected the higher-order terms o(ε^3).
For Eq. (<ref>) to hold for any t, the coefficients of the two independent functions should vanish. Thus, we obtain the dynamic equations for A and θ:
dA/dt=ε D_1A-ε^2(F_1aA^3+F_1bA)
+o(ε^2),
dθ/dt=ε D_2-ε^2(F_2aA^2+F_2b)
+o(ε^2).
The amplitude equation (<ref>) has a new fixed point
A_0=√((D_1-ε F_1b)/(ε F_1a))
=√(4((s_1^2+s_2)^2-2ε s_1s_3)(s_1^2+4s_2)s_3^3/ε (2s_1s_2^2-(s_1^2+6s_2)s_3)(s_1^2+s_2)^2s_4s_1s_2)( p_1p_2r/s_3) ^s_3/s_1s_2,
which is
nothing but the amplitude of the desired limit cycle.
The phase function θ(t) on the limit cycle is expressed as
θ (t)=(ε D_2-ε^2(F_2aA_0^2+F_2b)+o(ε^2))t+θ _0,
where θ _0 is the integral constant and it gives
the initial phase at t=0.
Substituting Eqs. (<ref>), (<ref>), (<ref>) and (<ref>) into (<ref>), θ (t) is reduced to
θ (t)={
-εs_1s_3s_4/6(2s_1s_2^2-(s_1^2+6s_2)s_3)√(s_2)+o(ε)
}
t +θ _0.
Thus, the solution describing the limit cycle, which is valid in a global domain in the asymptotic regime, is given by
x_3(t)= x_3(t;t_0)|_t_0=t
= x_0+ε A_0cos(ω t+θ _0)
-ε^2{ D_3A_0^2cos(2(ω t+θ_0))
+D_4A_0^2sin(2(ω t+θ_0))
} +o(ε ^2).
In particular, if the initial phase is set to be θ_0 =-π /2, we have Eq. (<ref>).
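As a consistency check on the bifurcation structure underlying this derivation, the reduced third-order equation with f(x_3)=r/x_3^n can be integrated numerically on both sides of the Hopf threshold n_0=s_4/s_3. The Python sketch below uses illustrative rate constants k_1=k_2=k_3=1 (so that n_0=8) and a normalized p_1p_2r=1; these values are chosen purely for demonstration:

```python
# Integrate x''' + s1 x'' + s2 x' + s3 x = p1*p2*r/x^n for n below/above n0;
# the rate constants and the forcing strength are illustrative values.
import numpy as np
from scipy.integrate import solve_ivp

k1 = k2 = k3 = 1.0
p1p2r = 1.0
s1, s2, s3 = k1 + k2 + k3, k1*k2 + k2*k3 + k3*k1, k1*k2*k3
s4 = (k1 + k2)*(k2 + k3)*(k3 + k1)
print("Hopf threshold n0 =", s4/s3)              # = 8 for these rates

def rhs(t, z, n):
    x, dx, ddx = z
    return [dx, ddx, -s1*ddx - s2*dx - s3*x + p1p2r/x**n]

t_eval = np.linspace(0.0, 400.0, 8001)
for n in (7.5, 8.5):                             # just below / above n0
    x0 = (s3/p1p2r)**(-1.0/(n + 1))              # fixed point
    sol = solve_ivp(rhs, (0, 400), [1.01*x0, 0.0, 0.0], args=(n,),
                    t_eval=t_eval, method="LSODA", rtol=1e-9, atol=1e-9)
    tail = sol.y[0][t_eval > 300]
    print(f"n = {n}: peak-to-peak of x_3 for t > 300 = {tail.max() - tail.min():.4f}")
# expected: ~0 below the threshold (damped) and a finite value above it (limit cycle)
```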
§.§ A.3 Numerical and
RG analyses of the Lotka-Volterra model
Equation (<ref>) and the numerical simulation indicate that the non-sinusoidal power (NS) tends to become larger when the period
hardly changes, i.e., remains stable, in response to increases in the parameter values specifying the degradation rates, as presented in Fig. <ref>B. This suggests that the waveform becomes more distorted at higher temperatures when the circadian period is temperature-compensated.
It was previously reported that the same conclusion holds for other oscillatory models, including a realistic mammalian circadian clock model, a post-translational model in cyanobacteria, and the van der Pol oscillator <cit.>.
However, it is important to note that the findings using specific mathematical models might not be universally applicable to other models and actual organisms.
Therefore, one should examine whether the waveform also plays a crucial role in the stability of the period in other oscillatory models.
Thus, we conduct a numerical simulation to test the possible period-waveform correlation
in the Lotka-Volterra model as done for the circadian clock model.
Needless to say, the Lotka-Volterra model is one of the most extensively studied mathematical models in biology <cit.>, and it effectively explains population dynamics in prey-predator systems.
The Lotka-Volterra model is given as a system with two variables as
dx/dt=ax-ε xy,
dy/dt=-by+ε 'xy,
where x(t) and y(t) are numbers of prey and predators. Parameter a is the growth rate of prey, b is the death rate of predators, ε is the death rate of prey attributable to predation, and ε ' is the growth rate of predators (Supplementary Fig. <ref>A).
In the present numerical simulation, we first generated
100 reference parameter sets corresponding to the reference temperature.
The values of the model parameters a and b were generated randomly with a uniform distribution between 0 and 1, and similarly, the values of the other model parameters ε and ε '
were also randomly assigned values between 0 and 0.5. The period obtained for each parameter set with the initial condition (x_0,y_0)=(2,2) was denoted as τ_1.
Next, a, b, ε, and ε '
were multiplied by a random factor within the range of 1.1-1.9 to simulate the increase in temperature, yielding 49 oscillatory parameter sets. Each resulting new period was denoted as τ_2, and thus, the ratio of the period =τ_2/τ_1, which is called the relative period,
was obtained.
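A minimal Python sketch of this protocol is given below. The reference parameter set shown is a single illustrative choice, and several details are assumptions made here rather than specifications from the text: each of a, b, ε, and ε' receives its own independent factor, and the period and NS are estimated from an FFT of the prey time series.

```python
# Sketch of the random-factor experiment; here each parameter gets its own
# factor and NS is estimated from the prey waveform (assumptions made here).
import numpy as np
from scipy.integrate import solve_ivp

def lv(t, z, a, b, e1, e2):
    x, y = z
    return [a*x - e1*x*y, -b*y + e2*x*y]

def period_and_ns(a, b, e1, e2):
    t_end = 60*2*np.pi/np.sqrt(a*b)                 # ~60 linearized periods
    sol = solve_ivp(lv, (0, t_end), [2.0, 2.0], args=(a, b, e1, e2),
                    dense_output=True, rtol=1e-9, atol=1e-9)
    t = np.linspace(t_end/2, t_end, 8192)           # second half of the run
    x = sol.sol(t)[0]
    c = np.abs(np.fft.rfft(x - x.mean()))
    f = np.fft.rfftfreq(t.size, d=t[1] - t[0])
    k = int(np.argmax(c[1:])) + 1                   # fundamental frequency bin
    j = np.arange(1, 6)                             # first five harmonics
    ns = np.sqrt(np.sum(c[k*j]**2*j**2)/np.sum(c[k*j]**2))
    return 1.0/f[k], ns

rng = np.random.default_rng(0)
a, b, e1, e2 = 0.8, 0.5, 0.3, 0.25                  # one illustrative reference set
tau1, ns1 = period_and_ns(a, b, e1, e2)
fac = rng.uniform(1.1, 1.9, size=4)                 # independent factor per parameter
tau2, ns2 = period_and_ns(fac[0]*a, fac[1]*b, fac[2]*e1, fac[3]*e2)
print("relative period tau2/tau1 =", tau2/tau1)
print("NS before / after scaling =", ns1, ns2)
# the full experiment repeats this over ~100 random reference sets as described
```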
The numerical simulations reveal a consistent positive correlation between NS and the relative period (Supplementary Fig. <ref>BC).
A notable point is that the value of NS tends to increase along with the period when larger multiplicative factors are used in the simulation.
This result again suggests that our finding that the waveform becomes more distorted when the period remains relatively stable in response to increased parameter values
is a rather universal phenomenon not restricted to the behavior observed in the circadian clock model (Fig. <ref>A).
The period of the Lotka-Volterra model, together with its approximate but globally valid solution, was previously derived analytically by one of the present authors <cit.> on the basis of the RG method;
see also the pioneering work <cit.> based on a different method.
Next, we will demonstrate that the expression explicitly reveals that the period of the Lotka-Volterra model almost linearly increases with the waveform distortion for the average of the prey-predator time series (i.e. NS).
In the mathematical analysis, it proved convenient
to use new variables (ξ(t), η(t)) defined as
x(t)=(b+εξ (t))/ε ' ,
y(t)=a/ε +η (t).
The RG method performed in <cit.>
in the second order of ε leads to
ξ(t)= ( 1-ε^2a-b/4ab^2A^2/12) AsinΘ
-ε^21/b√(ab)A^2/24cosΘ
-ε1/√(ab)A^2/6sin(2Θ)
-ε1/bA^2/3cos(2Θ)
-ε^23a-b/4ab^2A^3/8sin(3Θ)
+ε^21/b√(ab)A^2/8cos(3Θ) +o(ε^2),
η(t)= ε^21/b^2A^3/24sinΘ
-√(ab)/b( 1+ε^2a-b/4ab^2A^2/12) AcosΘ
-ε√(ab)/b^2A^2/6sin(2Θ)
+ε1/bA^2/3cos(2Θ)
+ε^21/b^2A^3/8sin(3Θ)
+ε^2a-3b/4b^2√(ab)A^3/8cos(3Θ)
+o(ε^2),
where A and θ are the integral constants, which
are to be determined by the initial condition,
and Θ=ω̃ t+θ, with
ω̃ being the angular frequency given by
ω̃=
√(ab){ 1-ε^2A^2(a+b)/24ab^2},
from which we have the formula of the period of the system after some manipulation as
τ=2π/ω̃=2π/√(ab){2/5( 1+5/2ε^2A^2(a+b)/24ab^2) +3/5} +o(ε^2).
From the waveforms for ξ(t) and η(t) given by (<ref>) and (<ref>), respectively,
we can obtain the waveform distortion of each variable as follows:
NS^(ξ) =1+ε^2A^2(4a+b)/24ab^2+o(ε^2),
NS^(η) =1+ε^2A^2(a+4b)/24ab^2+o(ε^2).
It is notable that the mean of NS^(ξ) and NS^(η)
takes the form
NS=1/2(NS^(ξ)+NS^(η))
=1+5/2ε^2A^2(a+b)/24ab^2+o(ε^2).
Indeed, comparing (<ref>) and (<ref>), we arrive at
τ=2π/√(ab)(2/5NS+3/5)
+o(ε^2),
which states that the period and the mean waveform distortion, namely NS, are linearly dependent on each other, and they tend to increase (or decrease) in a parallel manner. This is what we aimed to demonstrate for the
Lotka-Volterra model.
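The linear relation can also be checked numerically. The sketch below integrates the Lotka-Volterra system for illustrative parameter values in the perturbative regime (with ε'=ε), measures the period from successive maxima of ξ(t), estimates NS of ξ and η from the first few Fourier harmonics, and compares the measured period with (2π/√(ab))(2NS/5+3/5); the two numbers are expected to agree up to higher-order corrections in ε:

```python
# Numerical check of tau = (2*pi/sqrt(ab))*(2/5*NS + 3/5) for a moderate orbit;
# a, b, eps and the initial amplitude are illustrative, and eps' = eps here.
import numpy as np
from scipy.integrate import solve_ivp

a, b, eps = 1.0, 2.0, 0.1
lv = lambda t, z: [a*z[0] - eps*z[0]*z[1], -b*z[1] + eps*z[0]*z[1]]
z0 = [(b + eps*4.0)/eps, a/eps]                  # xi(0) ~ 4, eta(0) = 0
sol = solve_ivp(lv, (0, 600), z0, dense_output=True, rtol=1e-11, atol=1e-11)

t = np.linspace(100, 500, 400000)                # period from maxima of xi(t)
xi = (eps*sol.sol(t)[0] - b)/eps
pk = np.where((xi[1:-1] > xi[:-2]) & (xi[1:-1] > xi[2:]))[0] + 1
tau = np.mean(np.diff(t[pk]))

def ns(sig, m=64):                               # NS over a window of m periods
    c = np.abs(np.fft.rfft(sig - sig.mean()))
    j = np.arange(1, 6)
    return np.sqrt(np.sum(c[m*j]**2*j**2)/np.sum(c[m*j]**2))

tw = np.linspace(100, 100 + 64*tau, 8192, endpoint=False)
xi_w = (eps*sol.sol(tw)[0] - b)/eps
eta_w = sol.sol(tw)[1] - a/eps
NS = 0.5*(ns(xi_w) + ns(eta_w))
print("measured period tau            =", tau)
print("(2*pi/sqrt(ab))*(2/5*NS + 3/5) =", 2*np.pi/np.sqrt(a*b)*(0.4*NS + 0.6))
```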
§.§ A.4
Numerical and RG analyses of the van der Pol model
We consider the van der Pol model as follows:
d^2x/dt^2+x=ε (1-x^2)dx/dt,
which is known as one of the fundamental non-linear oscillator models.
Previously, two of the authors derived the period formula τ =2π[∑_j=1^∞|a_j|^2j^2/∑_j=1^∞|a_j|^2]^1/2, meaning that the period of the model is also proportional to NS <cit.>. Then, we derive the approximate solution of the model using the RG method and confirm the proportionality between the period and the waveform distortion NS in detail. First, we represent the local solution around t=t_0 as a perturbation series
x(t;t_0)=x_0(t;t_0)+ε x_1(t;t_0)+ε^2 x_2(t;t_0)+o(ε^2).
Then, substituting Eq. (<ref>) into Eq. (<ref>) and equating the terms with the same powers of ε, we obtain
O(ε^0): d^2x_0/dt^2+x_0=0,
O(ε^1): d^2x_1/dt^2+x_1
=(1-x_0^2)dx_0/dt,
O(ε^2): d^2x_2/dt^2+x_2
=(1-x_0^2)dx_1/dt-2x_0x_1dx_0/dt.
The solution for the zeroth-order equation (<ref>) is
x_0(t;t_0)=A(t_0)cos(t+θ(t_0)),
where A and θ are integral constants and they potentially depend on initial time t_0. Then, substituting Eq. (<ref>) into Eq. (<ref>), we have
d^2x_1/dt^2+x_1
=-A( 1-A^2/4) sin(t+θ)+A^3/4sin(3t+3θ).
Solving Eq. (<ref>) around t=t_0, the first-order solution is given by
x_1(t;t_0)
=A/2( 1-A^2/4) (t-t_0)cos(t+θ)
-A^3/32sin(3t+3θ).
Similarly, by substituting zero-th and first-order solutions (<ref>) and (<ref>) into Eq. (<ref>), the second-order equation is
d^2x_2/dt^2+x_2
= F_1(A)cos(t+θ)-F_2(A)(t-t_0)sin(t+θ)+F_3(A)cos(3t+3θ)
-F_4(A)(t-t_0)sin(3t+3θ)+F_5(A)cos(5t+5θ),
where
F_1(A)=A/2( 13/64A^4-A^2+1 ) ,
F_2(A)=A/2( 3/16A^4-A^2+1 ),
F_3(A)=A^3/32( 5/2A^2-7 ),
F_4(A)=3A^3/8( 1/4A^2-1 ),
F_5(A)=5A^5/128.
The solution of the second-order equation (<ref>) is
x_2(t;t_0)
= 1/4(2F_1(A)-F_2(A))(t-t_0)sin(t+θ)
+1/4F_2(A)(t-t_0)^2cos(t+θ)
+1/32(-4F_3(A)+3F_4(A))cos(3t+3θ)
+1/8F_4(A)(t-t_0)sin(3t+3θ)
-1/24F_5(A)cos(5t+5θ).
Therefore, the perturbative solution up to the second-order of ε is
x(t;t_0)
= Acos(t+θ)
+ε{A/2( 1-A^2/4) (t-t_0)cos(t+θ)
-A^3/32sin(3t+3θ)
}
+ε^2{1/4(2F_1(A)-F_2(A))(t-t_0)sin(t+θ)
+1/4F_2(A)(t-t_0)^2cos(t+θ) .
+1/32(-4F_3(A)+3F_4(A))cos(3t+3θ)
+1/8F_4(A)(t-t_0)sin(3t+3θ)
.
-1/24F_5(A)cos(5t+5θ).
} +o(ε^2) .
To obtain the globally valid solution using Eq. (<ref>), we apply
the RG method, which utilizes the RG equation
. dx/dt_0|_t_0=t=
{dA/dt+ε1/2A( 1/4A^2-1 ) }cos(t+θ)
+{ -Adθ/dt -ε^21/4(2F_1(A)-F_2(A)) }sin(t+θ)=0,
where we have neglected the higher-order terms o(ε^2).
For Eq. (<ref>) to hold for any t, the coefficients of the two independent functions should vanish. Thus, we end up with the
dynamical equations for A and θ
dA/dt=1/2ε A( 1-1/4A^2)
dθ/dt=-ε^21/4A(2F_1(A)-F_2(A))
=-ε^21/8( 7/32A^4-A^2+1 )
Equation (<ref>) has two fixed points, namely A=0 and 2. The amplitude A asymptotically approaches A=2, which is the limit cycle.
Therefore, the dynamical behavior of θ (t) on the limit cycle reads
θ(t) = -ε^21/16 t +θ_0,
where θ_0 is the initial phase at t=0. Thus, the globally valid solution on the limit cycle is given by
x(t)=2cos(ω t+θ_0)
-ε1/4sin(3ω t+3θ_0)
-ε^23/32cos(3ω t+3θ_0)
-ε^25/96cos(5ω t+5θ_0)
+o(ε^2),
where the angular frequency ω up to the second order is expressed as
ω = 1-ε^21/16+o(ε^2).
Using Eqs. (<ref>) and (<ref>), the period and NS of the van der Pol model read
τ =2π/ω=2π/(1-ε^2/16+o(ε^2))
= 2π( 1+ε^21/16 +o(ε^2) ) ,
NS=[ ∑_j=1^∞|a_j|^2j^2/∑_j=1^∞|a_j|^2]^1/2
= [ (1+ε^2(9/64)+o(ε^2))/(1+ε^2/64+o(ε^2))]^1/2
= 1+ε^21/16+o(ε^2).
Accordingly,
τ =2π NS.
This expression clearly illustrates that the period τ and the waveform distortion NS in the van der Pol model tend to increase (or decrease) together in a proportional manner, which was also demonstrated using signal processing methods (Supplementary Fig. <ref>) <cit.>.
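This proportionality is straightforward to confirm numerically. The following sketch integrates the van der Pol equation for an illustrative ε, measures the period from successive maxima, and estimates NS from the first few Fourier harmonics of the limit-cycle waveform:

```python
# Check tau ~ 2*pi*(1 + eps^2/16) and NS ~ 1 + eps^2/16 for an illustrative eps.
import numpy as np
from scipy.integrate import solve_ivp

eps = 0.3
vdp = lambda t, z: [z[1], -z[0] + eps*(1 - z[0]**2)*z[1]]
sol = solve_ivp(vdp, (0, 400), [0.5, 0.0], dense_output=True,
                rtol=1e-11, atol=1e-11)

t = np.linspace(150, 400, 200000)                # after the transient has decayed
x = sol.sol(t)[0]
pk = np.where((x[1:-1] > x[:-2]) & (x[1:-1] > x[2:]))[0] + 1
tau = np.mean(np.diff(t[pk]))

tw = np.linspace(200, 200 + 16*tau, 4096, endpoint=False)   # 16 periods
xw = sol.sol(tw)[0]
c = np.abs(np.fft.rfft(xw - xw.mean()))
j = np.arange(1, 6)
ns = np.sqrt(np.sum(c[16*j]**2*j**2)/np.sum(c[16*j]**2))

print("tau     =", tau, " vs 2*pi*(1+eps^2/16) =", 2*np.pi*(1 + eps**2/16))
print("NS      =", ns,  " vs 1+eps^2/16        =", 1 + eps**2/16)
print("2*pi*NS =", 2*np.pi*ns)
```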
§ SUPPLEMENTARY FIGURES
§ SUPPLEMENTARY TABLES
|
http://arxiv.org/abs/2409.02260v1 | 20240903193807 | Penalty Adversarial Network (PAN): A neural network-based method to solve PDE-constrained optimal control problems | [
"Shilin Ma",
"Yukun Yue"
] | math.OC | [
"math.OC"
] |
Penalty Adversarial Network (PAN): A neural network-based method to solve PDE-constrained optimal control problems
Shilin Ma, Yukun Yue
September 9, 2024
============================================
§ ABSTRACT
In this work, we introduce a novel strategy for tackling constrained optimization problems through a modified penalty method. Conventional penalty methods convert constrained problems into unconstrained ones by incorporating constraints into the loss function via a penalty term. However, selecting an optimal penalty parameter remains challenging; an improper choice, whether excessively high or low, can significantly impede the discovery of the true solution. This challenge is particularly evident when training neural networks for constrained optimization, where tuning parameters can become an extensive and laborious task. To overcome these issues, we propose an adversarial approach that redefines the conventional penalty method by simultaneously considering two competing penalty problems—a technique we term the penalty adversarial problem. Within linear settings, our method not only ensures the fulfillment of constraints but also guarantees solvability, leading to more precise solutions compared to traditional approaches. We further reveal that our method effectively performs an automatic adjustment of penalty parameters by leveraging the relationship between the objective and loss functions, thereby obviating the need for manual parameter tuning. Additionally, we extend this adversarial framework to develop a neural network-based solution for optimal control problems governed by linear or nonlinear partial differential equations. We demonstrate the efficacy of this innovative approach through a series of numerical examples.
§ INTRODUCTION
Optimal control problems are fundamental in various scientific and engineering disciplines. These problems involve finding a control state y that determines the desired state u through governing physical constraints, aiming to minimize or maximize a given performance criterion, typically expressed as an objective functional <cit.>. In many practical scenarios, the system dynamics are governed by partial differential equations (PDEs), leading to PDE-constrained optimal control problems <cit.>. These problems have gained significant attention due to their critical applications in fields such as aerospace engineering <cit.>, environmental marine sciences <cit.>, medical treatment planning for radiation therapy <cit.>, heat transfer <cit.>, fluid dynamics <cit.>, liquid crystals <cit.>, and wave propagation <cit.>. Achieving optimal performance while adhering to physical laws and constraints is crucial in these applications.
Mathematically, we can formulate the problem as
min_u ∈U, y ∈Y J(u,y), subject to F(u,y) = 0.
Here, J(u,y) represents the performance criterion to be minimized, often referred to as the objective functional <cit.>. The term F(u,y) contains the constraints that u and y must satisfy, including the differential operators in the form of PDEs and the boundary or initial conditions for the PDEs. We denote U and Y as the appropriate spaces in which u and y reside, respectively.
The highly nonlinear nature and multi-scale structure of PDE-constrained optimal control problems <cit.> necessitate using complex numerical methods. Over the years, various approaches have been developed to create robust and accurate numerical algorithms and tools to solve these problems. Among the prevalent methods, adjoint-based techniques are particularly notable for their effectiveness in gradient computation, which is crucial for iterative optimization algorithms <cit.>. These methods are often combined with traditional numerical techniques, such as the finite difference method or finite element methods, to handle spatial and temporal discretizations, allowing for the management of complex geometries and boundary conditions <cit.>.
Recently, with the rapid development of neural networks, machine learning-based numerical methods have been extensively developed to solve PDEs <cit.>. As an important extension of this work, considerable research efforts have focused on applying these methods to solve PDE-constrained optimal control problems. Several successful examples have emerged in this area <cit.>. Among them, the most prevalent approaches can be classified into three main categories:
* Training surrogate models to obtain solvers for PDEs, then using the trained solver to map inputs to solutions of the PDE, enforcing the PDE constraint while minimizing an objective cost functional <cit.>;
* Using the Lagrangian approach to reformulate the constrained optimization problem and solve systems associated with the Karush–Kuhn–Tucker (KKT) conditions <cit.> via classical neural network-based methods <cit.>;
* Adding the cost functional to the standard loss induced by the PDEs and minimizing the total loss simultaneously <cit.>.
A common challenge these approaches encounter is enforcing the constraints during optimization. The most direct approach is to treat the constraints as penalty terms and add them to the original loss function minimized by the neural network, transforming a constrained problem into an unconstrained one. This transformation makes it easier to apply various iterative optimization tools to solve the problem <cit.>, as implemented in the third approach above. Though not directly employing this method when enforcing PDE constraint, the other two approaches implicitly address the same challenge while solving the PDE by ensuring that initial or boundary conditions are satisfied <cit.>. For simplicity, we will explain our main idea concerning the last approach as an example. At the same time, the same analysis can be applied to the other two approaches when penalty terms are introduced.
Specifically, instead of solving problem (<ref>), we consider a penalty problem which can be formulated as:
min_u∈U,y∈Y𝒫^λ(u,y),
where 𝒫^λ(u,y) is defined as
𝒫^λ(u,y) = J(u,y) + λ/2‖ F(u,y)‖^2.
Here, λ > 0 is a tunable penalty parameter, and ‖·‖ denotes the standard L^2 norm. It is important to note that problem (<ref>) is not equivalent to the constrained problem (<ref>). However, as λ approaches infinity, the solution of this unconstrained problem converges to the solution of the constrained one <cit.> (we will provide more details on this in the linear case in Section <ref>). Yet when the penalty parameter becomes too large, the problem can become ill-posed and difficult to solve <cit.>. If a neural network is used to solve this optimization problem, achieving convergence during training can be challenging. On the other hand, if the penalty parameter is too small, the PDE constraints will not be adequately satisfied, resulting in a solution that deviates significantly from the desired solution of the original constrained problem.
To address this challenge, <cit.> proposes a two-step line-search approach to determine the optimal penalty parameter that minimizes the cost function while ensuring the PDE constraints are satisfied within an acceptable tolerance. In contrast, <cit.> suggests dynamically adjusting the penalty parameters based on a problem-dependent update rule <cit.>. These approaches commonly face issues such as the extensive effort required for parameter tuning and the absence of general guidelines for selecting an appropriate penalty parameter for various problems. This complicates the implementation of existing methods for solving optimal control-related problems.
In this paper, we propose a novel framework based on penalty methods and present it using a neural network structure that does not require an artificial selection process for the varying value of the penalty parameter. Instead, we construct two neural networks to minimize (u,y) with different fixed values of λ and train them simultaneously. One network competes with the other during training to ensure ease of training and convergence while maintaining the constraints within an acceptable tolerance. This concept is inspired by the well-known generative adversarial network (GAN) <cit.>, which has extensive applications, for example, in image translation <cit.>, video generation <cit.>, and speech synthesis <cit.>. The adversarial network structure incorporates a generator and a discriminator, with the discriminator providing feedback to the generator to help it produce more realistic information to deceive the discriminator. This idea has also benefited research on numerical methods based on adversarial network structures for solving PDEs. We refer readers to <cit.> and the references therein.
In particular, we create a solver network (corresponding to the generator in GAN) and a discriminator network, and we choose two real numbers λ_1, λ_2 > 0 with λ_1 being much larger than λ_2, and λ_2 being relatively small. The discriminator network is set to minimize the functional 𝒫^λ_2(u,y) as defined in (<ref>). In practice, the smallness of λ_2 ensures the convergence of the discriminator network as long as the optimal control problem (<ref>) is well-posed. Conversely, the solver network aims to minimize the sum of 𝒫^λ_1(u,y) and an additional cost based on feedback from the discriminator network. The exact functional form of this extra cost will be detailed in Section <ref>.
With feedback from the discriminator network, the solver network can automatically adjust its focus during training without manual tuning. If the solver network emphasizes reducing the objective functional at the expense of not maintaining the PDE constraint, the weight of the PDE constraint will increase accordingly. Conversely, if the solver network focuses too much on satisfying the PDE constraint but fails to reduce the objective cost functional, it will adjust its weights on the objective functionals to correct this imbalance.
To this end, the proposed framework utilizes the strengths of both traditional penalty methods and the adversarial training paradigm, offering a novel solution to the challenges inherent in PDE-constrained optimal control problems. Numerical examples, presented in Section <ref>, demonstrate that our approach ensures robust convergence and effective enforcement of PDE constraints without requiring extensive parameter tuning. This dual-network strategy not only simplifies the training process but also enhances the overall stability and performance of the optimization.
The rest of this paper is structured as follows: In Section <ref>, we describe various PDE-constrained optimal control problems in general forms that will be the focus of this paper. Section <ref> serves as a motivation for our methodology, where a linear problem is discussed to illustrate the effectiveness of the proposed penalty adversarial framework in solving problems with penalty formulations. We demonstrate that in the linear case, the solution to the adversarial problem better adheres to the constraints than simply solving the problem with a small penalty parameter under certain conditions. Following this analysis, Section <ref> provides a detailed construction of a neural network-based method utilizing this concept, culminating in a practical algorithm. Finally, in Section <ref>, we conduct numerical experiments on both linear and nonlinear problems in 1D and 2D to validate the effectiveness of the proposed approach.
§ PROBLEM SETUP
This section formally outlines the various types of optimal control problems, including distributed, boundary, and initial value control problems. We consider an open bounded physical domain Ω⊂ℝ^d, where d is a positive integer denoting the spatial dimension, and a time span [0, T]. The problem is governed by the following system:
ℒ[u(x,t),y_f(x,t)] = 0, ∀ x ∈Ω, t ∈ [0,T],
ℬ[u(x,t),y_b(x,t)] = 0, ∀ x ∈∂Ω, t ∈ [0,T],
ℐ[u(x,0),y_i(x)] = 0, ∀ x ∈Ω.
Here, x and t denote the spatial and temporal variables, respectively. ℒ is an operator involving differentials that represents the PDE to be satisfied by u, ℬ denotes the boundary condition, and ℐ represents the initial condition. Common choices for boundary conditions include Dirichlet, Neumann, or Robin conditions, and our framework imposes no specific restrictions on these choices. The spaces U and Y, in which u and y reside, are selected to ensure the well-posedness of the PDE problem. For instance, if ℒ(u,y) = Δ u - y, corresponding to a standard second-order elliptic equation with no initial condition and a homogeneous Dirichlet boundary condition, appropriate spaces to consider are U = H^1_0(Ω) and Y = L^2(Ω) <cit.>.
If we set y = (y_f, y_b, y_i) and define the constraint F(u,y) = (ℒ(u,y), ℬ(u,y), ℐ(u,y)), we recover the constraint given in (<ref>). Specifically, the entire system (<ref>) or any individual equation within it can be viewed as a specific example of the general form of constraints F(u,y) = 0.
By setting different components of y in (<ref>) to be tunable, we obtain various types of control problems. For example, if we consider y_b and y_i to be fixed and take y_f as a tunable control, we obtain a distributed control problem, initially introduced in <cit.>. The objective function to be minimized in this case is:
J_d(u,y) = 1/2‖ u - û‖^2 + ρ/2‖ y_f‖^2,
where û denotes the desired state that we aim for our solution of the PDE system to approximate by tuning the value of y_f. The second term in this functional is a Tikhonov regularization term <cit.>. Generally, the problem can be ill-posed without such a regularization term, and the Tikhonov regularization parameter ρ value is typically chosen in advance <cit.>.
If we consider y_b to be tunable and y_f and y_i to be fixed, then we obtain a boundary control problem, which minimizes the following objective function:
J_b(u,y) = 1/2‖ u - û‖^2 + ρ/2‖ y_b‖^2,
with the same notation for û and ρ. Similarly, one can define an initial value optimal control problem.
Additionally, to clarify our terminology: from now on, we refer to functionals like J(u,y) as objective functionals, as they represent the goal that we aim to minimize with (u,y) found by our algorithms. On the other hand, we refer to functionals like 𝒫^λ(u,y) as cost functionals or loss functionals.
It is important to note that the optimal control problems listed here only encompass some possible applications of our proposed method. General PDE-constrained optimization problems can be adapted to fit within our framework. Our methodology can be viewed as a variant of the penalty method. As long as a problem can be resolved or approximated using the penalty method, it is feasible to implement our approach. This study will focus on distributed and boundary optimal control problems because they are typically classic and representative examples.
§ DISCRETIZED PROBLEM
This section is devoted to discussing our motivation for setting up our method. In PDE-constrained optimization problems, there is ongoing debate on whether to use the discretize-then-optimize or optimize-then-discretize strategy <cit.>. Since our method is based on a neural network, the autodifferentiation technique <cit.> naturally leads to a discretized system to solve. We adopt the discretize-then-optimize approach, beginning with an analysis of a discretized problem.
We consider the following discretized constrained optimization problem: find u ∈ℝ^n and y ∈ℝ^m to minimize
J(u,y) = 1/2‖ Au - b‖^2 + ρ/2‖ y‖^2,
subject to
Ku = y,
where 0 < m ≤ n, A ∈ℝ^k × n, K ∈ℝ^m × n with k>0 are matrices, b ∈ℝ^k is a given vector, and ρ > 0 is a given Tikhonov regularization parameter. This is a discretized version of problem (<ref>), with the objective functional chosen to match the type of optimal control problem set up in Section <ref>. As a natural choice for discretizing the optimal control problem, we can take A = I, where I is the n × n identity matrix. However, for the generality of our analysis, we only require that the row vectors of A and K span ℝ^n. Under this assumption, we know that for any α > 0, the matrix G_α is invertible, where G_α is defined as
G_α = A^T A + α K^T K.
We will start from here to discuss how to solve problem (<ref>)-(<ref>).
§.§ Explicit Solution
We note that problem (<ref>)-(<ref>) has an explicit solution that can be computed using a Lagrangian formulation. We consider the Lagrangian form:
L(u,y) = J(u,y) + ζ (Ku - y) = 1/2Au - b^2 + ρ/2y^2 + ζ^T (Ku - y),
with ζ∈ℝ^m as an auxiliary Lagrangian parameter vector. By differentiating (<ref>) with respect to u, y, ζ respectively, the first-order optimality conditions <cit.> are given by the following equations:
{[ A^T A u - A^T b + K^T ζ = 0 ,; ρ y - ζ = 0,; Ku - y = 0. ].
Solving this system results in:
{[ û = (A^T A + ρ K^T K)^-1 A^T b,; ŷ = Kû = K (A^T A + ρ K^T K)^-1 A^T b. ].
Thus, we have found (û, ŷ) to be the analytical solution to problem (<ref>)-(<ref>), and this notation will continue to be used throughout this paper. However, computing the inverse matrix can be challenging when dealing with large-scale systems, and its numerical stability may become an issue <cit.>. Therefore, in practice, the penalty method is often preferred for implementation, as it is more suitable for applying iterative methods that do not require direct computation of the inverse <cit.>. As introduced in (<ref>), the problem is formulated as follows: Find u ∈ℝ^n and y ∈ℝ^m to minimize
𝒫^λ(u,y) = J(u,y) + λ/2‖ Ku - y‖^2 = 1/2‖ Au - b‖^2 + ρ/2‖ y‖^2 + λ/2‖ Ku - y‖^2,
where λ > 0 is a penalty parameter. For simplicity of notation, we will denote the remainder function corresponding to the constraint (<ref>) as R(u,y), defined as
R(u,y) = ‖ Ku - y‖^2.
The first-order optimality conditions to minimize (<ref>) are given by:
{[ A^T A u - A^T b + λ K^T K u - λ K^T y = 0 ,; ρ y - λ Ku + λ y = 0. ].
Solving this system results in:
{[ u^λ = (A^T A + ρλ/ρ + λ K^T K)^-1 A^T b ,; y^λ = λ/ρ + λ Ku^λ = λ/ρ + λ K (A^T A + ρλ/ρ + λ K^T K)^-1 A^T b. ].
Comparing (<ref>) with (<ref>), we observe that as λ tends to infinity, u^λ will converge to û because lim_λ→∞ρλ/ρ + λ = ρ, and y^λ will converge to ŷ. Therefore, although problem (<ref>) is distinct from problem (<ref>)-(<ref>), we can consider (<ref>) with a sufficiently large λ as an acceptable approximation to problem (<ref>)-(<ref>).
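The convergence of (u^λ, y^λ) to (û, ŷ) can be observed directly on a small instance by evaluating the closed-form expressions above; in the numpy sketch below, the matrices, the vector b, and the parameter values are arbitrary illustrative choices:

```python
# Closed-form exact solution vs penalty solution for increasing lambda;
# the matrices, right-hand side, and rho are arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(1)
n, m, k, rho = 6, 3, 8, 0.5
A, K = rng.standard_normal((k, n)), rng.standard_normal((m, n))
b = rng.standard_normal(k)

u_hat = np.linalg.solve(A.T@A + rho*K.T@K, A.T@b)
y_hat = K@u_hat

for lam in (1e0, 1e2, 1e4, 1e6):
    mu = rho*lam/(rho + lam)
    u_lam = np.linalg.solve(A.T@A + mu*K.T@K, A.T@b)
    y_lam = lam/(rho + lam)*(K@u_lam)
    err = np.linalg.norm(u_lam - u_hat) + np.linalg.norm(y_lam - y_hat)
    print(f"lambda = {lam:8.0e}:  distance to (u_hat, y_hat) = {err:.2e}")
```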
However, in practice, as λ increases, the difficulty of solving the unconstrained optimization problem associated with (<ref>) also increases. Mathematically, this can be observed by examining the Hessian matrix of 𝒫^λ(u,y) with respect to u and y, denoted as H^λ. Direct computation shows that
H^λ = [ A^TA+λ K^TK -λ K^T; -λ K (ρ +λ)I_m ],
where I_m denotes the m× m identity matrix. As λ tends to infinity, H^λ (after rescaling by 1/λ) tends to a singular matrix, so the problem becomes increasingly ill-conditioned and it is difficult to find the minimizer of 𝒫^λ(u,y).
To illustrate this, we present a graphical example. In Figure <ref>, we consider a one-dimensional problem with m=n=1 and aim to find u, y∈ℝ that minimize
J^*(u,y) = 1/2| u - 2|^2 + 1/2| y|^2, subject to 2u = y.
The corresponding penalty formulation is to find u, y∈ℝ such that they minimize
P^λ,*(u,y) = 1/2| u - 2|^2 + 1/2| y|^2 + λ/2| 2u-y|^2.
Consistent with the notation used above, we denote the exact solution as (û, ŷ) and mark it as a red point in the figure. The solution for the minimization problem with λ_1 = 5, denoted as (, ), is marked as a blue point, while the solution for the minimization problem with λ_2 = 0.5, denoted as (, ), is marked as a green point. The range for u and y is chosen to be [-0.5, 2]. We can observe that (, ) is closer to the exact solution than (, ).
Additionally, Figure <ref>, parts (a) and (b), plot the contour of each level set of P^λ,* for these two values of λ. When λ is relatively larger, the contour is more flattened, indicating that the conditioning of the problem is worse <cit.>. This results in greater difficulty in finding the optimal value, which aligns with our analysis above.
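The worsening of the conditioning can also be quantified directly. The sketch below evaluates the condition number of H^λ for a random instance of the same type (again with illustrative sizes and parameters); its growth with λ is what makes iterative minimization of 𝒫^λ(u,y) increasingly difficult:

```python
# Conditioning of the Hessian H^lambda; as lambda grows the condition number
# blows up, in line with the discussion above (illustrative random instance).
import numpy as np

rng = np.random.default_rng(1)
n, m, k, rho = 6, 3, 8, 0.5
A, K = rng.standard_normal((k, n)), rng.standard_normal((m, n))

def hessian(lam):
    return np.block([[A.T@A + lam*K.T@K, -lam*K.T],
                     [-lam*K, (rho + lam)*np.eye(m)]])

for lam in (1e0, 1e2, 1e4, 1e6):
    print(f"lambda = {lam:8.0e}:  cond(H^lambda) = {np.linalg.cond(hessian(lam)):.2e}")
```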
In a neural network setting, a large λ makes the network difficult to train and may not yield the correct solution. Conversely, if λ is not sufficiently large, the solution to the corresponding unconstrained problem may not satisfy the constraint adequately, and u^λ might not approximate u^* well. Therefore, we seek a practical method that ensures the solution satisfies the constraints while effectively approximating the true solution.
§.§ Penalty Adversarial Problem
To resolve the problem mentioned in the end of last subsection, now, we propose a new unconstrained optimization approach based on the penalty method. Instead of using a single penalty parameter λ, we simultaneously consider two problems with different penalty parameters. Let λ_1 > λ_2 > 0, and let (u^λ_1, y^λ_1) and (u^λ_2, y^λ_2) denote the solutions that minimize the functionals 𝒫^λ_1(u,y) and 𝒫^λ_2(u,y), respectively. We assume that the former problem is hard to solve in practice, while the latter is easier to solve. The corresponding objective functionals for (u^λ_1, y^λ_1) and (u^λ_2, y^λ_2) can be computed using (<ref>) and are denoted as J(u^λ_1, y^λ_1) and J(u^λ_2, y^λ_2). We now consider the following problem:
Find u ∈ℝ^n and y ∈ℝ^m to minimize
𝒜^λ_1,λ_2_ω(u,y) = { J(u,y) + λ_1/2 R(u,y) + ω[J(u, y) - J(u^λ_2, y^λ_2)]^2, if (u,y) ∈Ω_1,
J(u,y) + λ_1/2 R(u,y), if (u,y) ∈Ω_2.
Here, ω > 0 is a tunable parameter, and
Ω_1 = {(u,y): J(u, y) > J(u^λ_2, y^λ_2)}, Ω_2 = {(u,y): J(u, y) ≤ J(u^λ_2, y^λ_2)}.
Clearly, Ω_1 is an open set and Ω_2 is a closed set due to the continuity of J(u,y). We call this minimization problem the penalty adversarial problem (PAP).
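Before analyzing the PAP, its claimed behavior can be illustrated on a small random instance of the discretized problem (<ref>)-(<ref>). In the sketch below, the penalty minimizers are obtained from their closed forms, ω is taken below the admissible bound derived later in this section, and the PAP is minimized with a generic optimizer; all sizes and parameter values are illustrative:

```python
# Small random instance: the PAP minimizer satisfies the constraint better than
# (u^lam2, y^lam2); omega is set to half of the admissible bound for omega.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
n, m, k, rho = 5, 2, 6, 0.5
A, K = rng.standard_normal((k, n)), rng.standard_normal((m, n))
b = rng.standard_normal(k)
lam1, lam2 = 50.0, 0.5

J = lambda u, y: 0.5*np.sum((A@u - b)**2) + 0.5*rho*np.sum(y**2)
R = lambda u, y: np.sum((K@u - y)**2)

def penalty_min(lam):                            # closed-form minimizer of P^lam
    mu = rho*lam/(rho + lam)
    u = np.linalg.solve(A.T@A + mu*K.T@K, A.T@b)
    return u, lam/(rho + lam)*(K@u)

u1, y1 = penalty_min(lam1)
u2, y2 = penalty_min(lam2)
u_hat = np.linalg.solve(A.T@A + rho*K.T@K, A.T@b)
J2, R2 = J(u2, y2), R(u2, y2)
delta = J(u_hat, K@u_hat) - J2                   # positive here
omega = 0.5*(lam1*R2 - 2*delta)/(2*delta**2)     # half of the admissible bound

def pap(z):                                      # the PAP functional
    u, y = z[:n], z[n:]
    return J(u, y) + 0.5*lam1*R(u, y) + omega*max(J(u, y) - J2, 0.0)**2

zA = minimize(pap, np.concatenate([u1, y1]), method="BFGS").x
print("R(u^lam2, y^lam2) =", R2)
print("R(u^A, y^A)       =", R(zA[:n], zA[n:]))  # smaller, as claimed
```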
Compared to the standard penalty problem, 𝒜^λ_1,λ_2_ω(u,y) incorporates an additional term to penalize the failure of achieving a sufficiently small objective functional. By designing this problem, we aim to find a solution that adheres closely to the constraint while maintaining feasibility in its solvability. The problem's feasibility is difficult to assert through a mathematical criterion, as it is highly problem-dependent. However, we will demonstrate this advantage through graphical examples in Section <ref> and numerical examples in Section <ref>. On the other hand, a mathematical measure to determine if a solution better adheres to the constraints is straightforward to obtain, which is the value of the remainder function evaluated at the solution. Therefore, mathematically, we aim to show that by choosing an appropriate value of ω, the minimizer of 𝒜^λ_1,λ_2_ω(u,y), denoted as (u^𝒜, y^𝒜) hereafter, will satisfy the following:
R(u^𝒜, y^𝒜) < R(u^λ_2, y^λ_2).
Here R(u,y) is defined in (<ref>). As long as this holds, it can be seen that the minimizer of 𝒜^λ_1,λ_2_ω(u,y) is more accurate than the minimizer of 𝒫^λ_2(u,y) in terms of satisfying the PDE constraints. Proving this argument will be the main focus of our analysis hereafter.
First, consider the relationship between J(u,y) and R(u,y): for a fixed λ∈ℝ^+, (u^λ, y^λ) minimizes 𝒫^λ(u,y), which is a combination of two parts: one involves the objective functional J(u,y), while the other comprises the remainder function R(u,y). If the value of J(u,y) is fixed, then minimizing 𝒫^λ(u,y) is equivalent to minimizing R(u,y). Therefore, intuitively, the fact that (u^λ, y^λ) minimizes 𝒫^λ(u,y) indicates that (u^λ, y^λ) minimizes the remainder function R(u,y) among all (u,y) ∈ℝ^n ×ℝ^m such that J(u,y) = J(u^λ, y^λ). The following lemma justifies this.
For a fixed λ > 0, let (u^λ, y^λ) ∈ℝ^n ×ℝ^m minimize P^λ(u,y) as given in (<ref>). If (u,y) ∈ℝ^n ×ℝ^m, then we have the following assertions:
* If J(u,y) ≤ J(u^λ, y^λ), then R(u,y) ≥ R(u^λ, y^λ).
* If R(u,y) ≤ R(u^λ, y^λ), then J(u,y) ≥ J(u^λ, y^λ).
* In each of the previous two assertions, equality can only be attained simultaneously. Specifically, if J(u,y) < J(u^λ, y^λ), then R(u,y) > R(u^λ, y^λ). If R(u,y) < R(u^λ, y^λ), then J(u,y) > J(u^λ, y^λ).
We will only prove the first argument, as the proof of the second follows in the same way. By definition, (u^λ, y^λ) minimizes 𝒫^λ(u,y). Thus, for any (u,y) ∈ℝ^n ×ℝ^m, we have 𝒫^λ(u,y) ≥𝒫^λ(u^λ, y^λ). This implies that
J(u,y) + λ/2 R(u,y) ≥ J(u^λ, y^λ) + λ/2 R(u^λ, y^λ).
Since J(u,y) ≤ J(u^λ, y^λ), it follows that
R(u,y) ≥ R(u^λ, y^λ),
as λ > 0. The third argument also follows immediately from (<ref>) when the inequality relation between J(u,y) and J(u^λ, y^λ) is strict.
This result indeed provides a sufficient condition to restrict (u,y) to the domain Ω_1. Namely, using the third argument of Lemma <ref>, as long as R(u,y) < R(u^λ_2, y^λ_2), then (u,y) ∈Ω_1 and the value of 𝒜^λ_1,λ_2_ω(u,y) will differ from 𝒫^λ_1(u,y) at these points. Another important deduction we can make from here is that the minimum of 𝒜^λ_1,λ_2_ω(u,y) in the closed region Ω_2 is always attained at (u^λ_2, y^λ_2). We conclude this in the following result:
If (u,y) ∈Ω_2, then
𝒜^λ_1,λ_2_ω(u,y) ≥ J(u^λ_2, y^λ_2) + λ_1/2 R(u^λ_2, y^λ_2).
In other words,
min_(u,y) ∈Ω_2𝒜^λ_1,λ_2_ω(u,y) = J(u^λ_2, y^λ_2) + λ_1/2 R(u^λ_2, y^λ_2).
(u,y) ∈Ω_2 implies that J(u,y) ≤ J(u^λ_2, y^λ_2). According to the first assertion in Lemma <ref>, we know that
R(u,y) ≥ R(u^λ_2, y^λ_2).
On the other hand, since (u^λ_2, y^λ_2) minimizes 𝒫^λ_2(u,y), we have
J(u,y) + λ_2/2 R(u,y) = 𝒫^λ_2(u,y) ≥𝒫^λ_2(u^λ_2, y^λ_2) = J(u^λ_2, y^λ_2) + λ_2/2 R(u^λ_2, y^λ_2).
Combining these two inequalities, we get
J(u,y) + λ_1/2 R(u,y) = J(u,y) + λ_2/2 R(u,y) + λ_1 - λ_2/2 R(u,y)
≥ J(u^λ_2, y^λ_2) + λ_2/2 R(u^λ_2, y^λ_2) + λ_1 - λ_2/2 R(u^λ_2, y^λ_2)
= J(u^λ_2, y^λ_2) + λ_1/2 R(u^λ_2, y^λ_2).
This proves the claim.
Revealed by this, if 𝒜^λ_1,λ_2_ω(u,y) is always greater than [J(u^λ_2, y^λ_2) + λ_1/2 R(u^λ_2, y^λ_2)] for every (u,y) ∈Ω_1, then (u^λ_2, y^λ_2) can be a solution to the minimization problem related to (<ref>). In this case, our claim that (u^𝒜, y^𝒜), the minimizer of this problem, will always adhere more closely to the constraint than (u^λ_2, y^λ_2) would no longer hold. Therefore, to make our claim valid, it is necessary for the minimizer (u^𝒜, y^𝒜) to fall within Ω_1. Enlightened by this, we present the following result:
Assuming ω > 0 is a fixed positive constant parameter, the following two arguments are equivalent:
* (u,y) ∈Ω_1 and
𝒜^λ_1,λ_2_ω(u,y) < J(u^λ_2, y^λ_2) + λ_1/2 R(u^λ_2, y^λ_2),
* (u,y) satisfies R(u,y) < R(u^λ_2, y^λ_2) and
J(u^λ_2, y^λ_2) < J(u,y) < J(u^λ_2, y^λ_2) + λ_1 [ R(u^λ_2, y^λ_2) - R(u,y) ]/1 + √(1 + 2ωλ_1 [ R(u^λ_2, y^λ_2) - R(u,y) ]).
We first assume the former argument holds and try to deduce the latter one. To start, we define the difference between J(u,y) and J(u^λ_2, y^λ_2) as D^λ_1, λ_2(u,y), namely,
D^λ_1, λ_2(u,y) = J(u,y) - J(u^λ_2, y^λ_2).
As (u,y) ∈Ω_1, (u,y) satisfies
D^λ_1, λ_2(u,y) > 0.
Using the definition of 𝒜^λ_1,λ_2_ω(u,y) given in (<ref>), we see that (<ref>) is equivalent to
ω[ D^λ_1, λ_2(u,y) ]^2 + D^λ_1, λ_2(u,y) + λ_1/2[ R(u,y) - R(u^λ_2, y^λ_2) ] < 0.
Since ω > 0, this inequality will have solutions only when the discriminant of the corresponding quadratic formula is greater than zero, which is equivalent to:
1 - 2ωλ_1 [ R(u,y) - R(u^λ_2, y^λ_2) ] > 0.
In this case, D^λ_1, λ_2(u,y) should satisfy
-1 - √(1 + 2ωλ_1 [ R(u^λ_2, y^λ_2) - R(u,y) ])/2ω < D^λ_1, λ_2(u,y) < -1 + √(1 + 2ωλ_1 [ R(u^λ_2, y^λ_2) - R(u,y) ])/2ω.
Since we require that D^λ_1, λ_2(u,y) > 0, it is necessary to have
-1 + √(1 + 2ωλ_1 [ R(u^λ_2, y^λ_2) - R(u,y) ])>0,
which is equivalent to
R(u,y) < R(u^λ_2, y^λ_2).
With this, J(u,y) can be estimated as
J(u,y) = J(, ) + D^λ_1, λ_2(u,y) < J(, ) + √(1 + 2ωλ_1 [ R(u^λ_2, y^λ_2) - R(u,y) ]) - 1/2ω,
which is equivalent to (<ref>).
This proves the latter argument.
For the other direction, Lemma <ref> ensures that R(u,y) < R(u^λ_2, y^λ_2) implies (u,y) ∈Ω_1. In addition, (<ref>) implies that the value of D^λ_1, λ_2(u,y) will ensure (<ref>), which is equivalent to (<ref>), thereby completing the proof.
Here, we have provided an equivalent condition to characterize the case where the minimum of 𝒜^λ_1,λ_2_ω(u,y) falls in Ω_1. As long as this condition is met, the minimum point (u^𝒜, y^𝒜) will correspond to a smaller value of R(u,y) compared to R(u^λ_2, y^λ_2), which is the desired outcome. The remaining task is to justify this condition. To do so, we need to demonstrate that there exists a point in Ω_1 that satisfies (<ref>), and then we can invoke the continuity of J(u,y) to conclude the existence of a minimum point. This leads to the main result of this section.
If K≠ 0 and there exists σ > 0 such that
λ_1/2 R(u^λ_2, y^λ_2) > σ + J(û, ŷ) - J(u^λ_2, y^λ_2),
where (û, ŷ) given in (<ref>) is the exact solution that minimizes J(u,y) subject to Ku = y, then there exists ω > 0 such that (u^𝒜, y^𝒜), which minimizes 𝒜^λ_1,λ_2_ω(u,y) defined in (<ref>), satisfies
R(u^𝒜, y^𝒜) < R(u^λ_2, y^λ_2).
Recalling the definition of (û, ŷ), we will show that (û, ŷ) satisfies
𝒜^λ_1,λ_2_ω(û, ŷ) < J(u^λ_2, y^λ_2) + λ_1/2 R(u^λ_2, y^λ_2).
Using Lemma <ref>, it is equivalent to show that R(û, ŷ) < R(u^λ_2, y^λ_2) and that (û, ŷ) satisfies (<ref>).
It is clear that R(û, ŷ) = 0. On the other hand,
R(u^λ_2, y^λ_2) = ‖ Ku^λ_2 - y^λ_2‖^2 = ‖ Ku^λ_2 - λ_2/ρ + λ_2 Ku^λ_2‖^2
= ( ρ/ρ + λ_2)^2 ‖ Ku^λ_2‖^2
> 0,
according to our assumption. Thus, the relation R(û, ŷ) < R(u^λ_2, y^λ_2) holds.
Meanwhile, using the fact that R(û, ŷ) = 0 again, (<ref>) in this case reduces to
J(u^λ_2, y^λ_2) < J(û,ŷ) < J(u^λ_2, y^λ_2) + λ_1 R(u^λ_2, y^λ_2)/1 + √(1 + 2ωλ_1 R(u^λ_2, y^λ_2)).
Since σ > 0 and (<ref>) holds, we can find ω > 0 to satisfy this condition. Therefore, we have shown (<ref>). This indicates that (, ) is not the minimizer of (u,y) for small enough positive ω, as (u,y) reaches a smaller value at (û, ŷ) compared to (, ). It remains to show that any minimizer of (u, y) satisfies R(, ) < R(u^λ_2, y^λ_2).
To see this, we consider a subdomain of the closure of Ω_1, namely Ω̄_1 = {(u,y) : J(u,y) ≥ J(u^λ_2, y^λ_2)}, defined as
Ω^λ_1, λ_2 = {(u,y) : J(u^λ_2, y^λ_2) ≤ J(u,y) ≤ J(u^λ_2, y^λ_2) + λ_1/2 R(u^λ_2, y^λ_2)}.
Due to the continuity of J(u,y) with respect to (u,y) and the definition of J(u,y), we see that Ω^λ_1, λ_2 is a closed and bounded domain. Hence, it is compact. When (u,y) ∈Ω̄_1 ∖Ω^λ_1, λ_2,
𝒜^λ_1,λ_2_ω(u,y) ≥ J(u,y) > J(u^λ_2, y^λ_2) + λ_1/2 R(u^λ_2, y^λ_2) = 𝒜^λ_1,λ_2_ω(u^λ_2, y^λ_2),
and so it cannot be a minimizer of 𝒜^λ_1,λ_2_ω(u,y) in Ω̄_1. On the other hand, as a continuous function defined on the compact set Ω^λ_1, λ_2, 𝒜^λ_1,λ_2_ω(u,y) attains its minimum at some point in Ω^λ_1, λ_2. For consistent notation, let us denote this point as (u^𝒜, y^𝒜).
Therefore,
J(u^𝒜, y^𝒜) + λ_1/2 R(u^𝒜, y^𝒜) + ω[J(u^𝒜, y^𝒜) - J(u^λ_2, y^λ_2)]^2
= 𝒜^λ_1,λ_2_ω(u^𝒜, y^𝒜) ≤𝒜^λ_1,λ_2_ω(û, ŷ) < J(u^λ_2, y^λ_2) + λ_1/2 R(u^λ_2, y^λ_2).
Finally, we will show that R(u^𝒜, y^𝒜) < R(u^λ_2, y^λ_2). By Lemma <ref>, it is sufficient to show that J(u^𝒜, y^𝒜) > J(u^λ_2, y^λ_2). Therefore, we need to prove that J(u^𝒜, y^𝒜) ≠ J(u^λ_2, y^λ_2), since (u^𝒜, y^𝒜)∈Ω^λ_1, λ_2. In fact, if J(u^𝒜, y^𝒜) = J(u^λ_2, y^λ_2), then by Lemma <ref>, we know that R(u^𝒜, y^𝒜) ≥ R(u^λ_2, y^λ_2). From (<ref>), we have
J(u^λ_2, y^λ_2) + λ_1/2 R(u^λ_2, y^λ_2)
≤ J(u^𝒜, y^𝒜) + λ_1/2 R(u^𝒜, y^𝒜)
= J(u^𝒜, y^𝒜) + λ_1/2 R(u^𝒜, y^𝒜) + ω[J(u^𝒜, y^𝒜) - J(u^λ_2, y^λ_2)]^2
=𝒜^λ_1,λ_2_ω(u^𝒜, y^𝒜)≤𝒜^λ_1,λ_2_ω(û,ŷ)< 𝒜^λ_1,λ_2_ω(u^λ_2, y^λ_2)= J(u^λ_2, y^λ_2) + λ_1/2 R(u^λ_2, y^λ_2).
This is a contradiction, and so we conclude that J(u^𝒜, y^𝒜) is strictly larger than J(u^λ_2, y^λ_2) and thus R(u^𝒜, y^𝒜) is strictly smaller than R(u^λ_2, y^λ_2). This completes the proof.
To conclude this part, we will make two comments on our strategy of proof for Theorem <ref>.
The proof of Theorem <ref> presented here focuses on the existence of a minimizer, which is sufficient to support our claim that R(u^𝒜, y^𝒜) < R(u^λ_2, y^λ_2). For a deeper study of the properties of (u^𝒜, y^𝒜), one can consider a quantitative analysis of the functional form of 𝒜^λ_1,λ_2_ω(u,y).
In this context, we use (û, ŷ) as a reference point to demonstrate that there exists at least one point satisfying (<ref>). However, this choice is not mandatory. In fact, any point in Ω_1 that can be computed and shown to yield a value of 𝒜^λ_1,λ_2_ω smaller than 𝒜^λ_1,λ_2_ω(u^λ_2, y^λ_2) could replace (û, ŷ) in our proof.
§.§ Choice of ω
The previous part discussed the possibility of designing a penalty adversarial problem whose minimizer adheres to the constraints more closely than (u^λ_2, y^λ_2) does. Once λ_1 and λ_2 are fixed, the choice of ω will determine the formulation of the corresponding problem. In this section, we explore the influence of this choice through theoretical analysis and computational examples.
§.§.§ Upper Bound of ω
We note that (<ref>) provides an explicit condition that can be used to determine if the value of λ_1 is large enough to implement this penalty adversarial strategy independent of ω. Then, after fixing the values for λ_1 and λ_2, based on this proof, we can provide an estimate for the upper bound of ω. This conclusion is formalized in the following proposition.
Assuming (<ref>) holds, then ω > 0 should satisfy the following relation:
ω < (λ_1 R(u^λ_2, y^λ_2) - 2[J(û, ŷ) - J(u^λ_2, y^λ_2)])/(2[J(û, ŷ) - J(u^λ_2, y^λ_2)]^2).
In the proof of Theorem <ref>, we deduced the following relation:
J(û, ŷ) < J(u^λ_2, y^λ_2) + λ_1 R(u^λ_2, y^λ_2)/1 + √(1 + 2ωλ_1 R(u^λ_2, y^λ_2)).
Rearranging this inequality results in (<ref>).
We will provide an intuitive understanding of why there exists an upper bound on the choice of ω in Section <ref>.
§.§.§ Comparison between 𝒜^λ_1,λ_2_ω(u,y) and 𝒫^λ_1(u,y)
We have discussed the advantage of minimizing 𝒜^λ_1,λ_2_ω(u,y) over 𝒫^λ_2(u,y), specifically its ability to adhere to the constraint more effectively. On the other hand, the main disadvantage of trying to minimize 𝒫^λ_1(u,y), as mentioned earlier, is its poor conditioning, which makes it difficult to solve. Therefore, by choosing to minimize 𝒜^λ_1,λ_2_ω(u,y) instead of 𝒫^λ_1(u,y), we aim to make the corresponding minimization problem easier to solve.
Here, we will present a graphical example to illustrate this point. Using the same example as in Section <ref>, we consider a one-dimensional problem with m = n = 1 and aim to find u, y ∈ℝ to minimize
J^*(u,y) = 1/2| u - 2|^2 + 1/2| y|^2, subject to 2u = y.
The corresponding penalty formulation has been stated in (<ref>), and the corresponding penalty adversarial problem is as follows:
find u ∈ℝ and y ∈ℝ to minimize
𝒜^λ_1,λ_2,*_ω(u,y) = { 1/2| u - 2|^2 + 1/2| y|^2 + λ_1/2| 2u - y|^2 + ω[1/2| u - 2|^2 + 1/2| y|^2 - J(u^λ_2, y^λ_2)]^2, if (u,y) ∈Ω_1,
1/2| u - 2|^2 + 1/2| y|^2 + λ_1/2| 2u-y|^2, if (u,y) ∈Ω_2,
where
J(u^λ_2, y^λ_2) = 1/2| u^λ_2 - 2|^2 + 1/2| y^λ_2|^2
is a fixed number that can be calculated using (<ref>) once λ_2 is fixed.
We plot the level sets of P^λ,*(u,y) and 𝒜^λ_1,λ_2,*_ω(u,y) with different parameters in Figure <ref>. To provide more information, we extend the range of each figure to [-2, 3] instead of [-0.5, 2] as in Figure <ref>. As shown in Figure <ref>, the first row displays the contour plots of P^λ,*(u,y) with λ_1 = 5 and λ_2 = 0.5, while the second row displays the contour plots of 𝒜^λ_1,λ_2,*_ω(u,y) with the same values for λ_1 and λ_2, and three different choices of ω: 0.1, 1, and 10. The points (û, ŷ), (u^λ_1, y^λ_1), and (u^λ_2, y^λ_2) are marked as red, blue, and green points, respectively, in each figure. In the second row, a fourth point representing the minimizer of 𝒜^λ_1,λ_2,*_ω(u,y) for the corresponding value of ω is also marked.
We make the following observations:
* The point (u^𝒜, y^𝒜) always lies between (u^λ_1, y^λ_1) and (u^λ_2, y^λ_2) and is closer to the analytical solution (û, ŷ) than (u^λ_2, y^λ_2).
* The smaller the value of ω, the closer (u^𝒜, y^𝒜) is to (u^λ_1, y^λ_1). In Figure <ref> (c), (u^𝒜, y^𝒜) almost overlaps with (u^λ_1, y^λ_1) for ω = 0.1.
* As the value of ω increases, the contour shape changes from elliptical to bowl-shaped, with the direction towards (u^λ_2, y^λ_2) generally expanding.
The third point illustrates why it is easier to minimize 𝒜^λ_1,λ_2,*_ω(u,y) than P^λ_1,*(u,y). At points away from the minimizer, for a suitable choice of ω, the condition number is more favorable since the contour of the corresponding level set is not as flattened as it is for P^λ_1,*(u,y). This behavior enables the effective use of standard iterative methods to find the minimizer.
§.§.§ Relation to Auto-Tuning Parameter Strategy
To address the central challenge discussed in this paper regarding the penalty method—specifically, the difficulty of selecting a suitable parameter, as small parameters lead to inaccurate solutions and large parameters lead to ill-conditioned problems—a common strategy is to propose an approach with varying penalty parameters. This strategy is also commonly used when employing neural networks to solve PDE-related problems; for example, see <cit.>.
To illustrate this, consider using λ^k to denote the penalty parameter used in the k-th iteration, and as an example given in <cit.>, one possible choice is to take λ^k+1=β^k λ^k,
where β^k > 0 is a constant parameter selected for the k-th iteration. The main difficulty with this strategy is that the choices of {β^k}_k are problem-dependent <cit.>, often requiring fine-tuning in practice. Such a process is usually challenging or highly technical. It would be favorable if there were an automatic strategy to tune the value of λ^k. Here, we point out that the penalty adversarial strategy proposed here provides such an automatic mechanism. Specifically, when J(u, y) ≥ J(u^λ_2, y^λ_2), we have
(u,y) = J(u,y) + λ_1/2 R(u,y) + ω[J(u, y) - J(u^λ_2, y^λ_2)]^2,
and its gradient can be computed as
∇(u,y) = (1 + 2ω(J(u,y) - J(u^λ_2, y^λ_2))) ∇ J(u,y) + λ_1/2∇ R(u,y).
On the other hand, for (u,y) with a general λ>0, its gradient can be computed as
∇(u,y) = ∇ J(u,y)+λ/2∇ R(u,y).
If one uses an iterative method with an explicit scheme to find the minimizer, then, at a fixed iterate (u,y), taking a gradient step on the penalty adversarial functional is equivalent to taking a gradient step on P^λ̃(u,y), in the sense that the two gradients share the same direction, with
λ̃ = λ_1/(1 + 2ω(J(u,y) - J(u^λ_2, y^λ_2))) < λ_1.
Thus, the penalty adversarial problem becomes easier to minimize under such conditions, as it is equivalent to using a smaller penalty parameter than λ_1 whenever J(u,y) > J(u^λ_2, y^λ_2). In addition, as J(u,y) approaches J(u^λ_2, y^λ_2), λ̃ approaches λ_1.
To conclude, solving the penalty adversarial problem is a strategy to dynamically tune the penalty parameter according to the closeness of the solution to the real solution. When the solution is far from the correct one, the corresponding problem will be equivalent to minimizing P^λ(u,y) with a small λ, allowing it to converge to the actual solution more quickly. As the solution nears the correct solution, the problem transitions to one with a larger penalty term, thereby adhering more closely to the constraints. Compared to other traditional penalty methods with dynamic penalty terms, this approach is more straightforward to implement since no specific strategy is needed to determine the dynamics of the penalty parameters in advance.
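As a concrete illustration of this automatic mechanism, the short sketch below evaluates the effective parameter λ̃ along a few trial points for the one-dimensional toy problem of the previous section. The evaluation points and the value ω = 1 are arbitrary choices of this illustration, and the closed-form minimizer of P^λ_2 used as the reference is specific to this quadratic example.

import numpy as np

# Toy problem: J(u,y) = 1/2|u-2|^2 + 1/2|y|^2, constraint 2u = y, so R(u,y) = |2u - y|^2.
def J(u, y):
    return 0.5 * (u - 2.0)**2 + 0.5 * y**2

lam1, lam2, omega = 5.0, 0.5, 1.0

# Minimizer of the small-parameter penalty problem P^{lam2}: for this quadratic problem the
# stationarity conditions give u = 1.5*y and 3*u - y = 2, i.e. (u, y) = (6/7, 4/7).
u2, y2 = 6.0 / 7.0, 4.0 / 7.0
J_ref = J(u2, y2)                     # the fixed reference value J(u^{lam2}, y^{lam2})

def lambda_eff(u, y):
    """Effective penalty parameter seen by a gradient step on the adversarial functional."""
    if J(u, y) <= J_ref:              # region Omega_2: only the plain penalty term with lam1 is active
        return lam1
    return lam1 / (1.0 + 2.0 * omega * (J(u, y) - J_ref))

for u, y in [(0.0, 3.0), (0.6, 0.9), (u2, y2)]:
    print(f"J = {J(u, y):.3f},  lambda_eff = {lambda_eff(u, y):.3f}")
# Far from the solution lambda_eff is small (a well-conditioned step); it grows back towards
# lam1 as the objective value approaches the reference value J_ref.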
Finally, this equivalence also provides an intuitive explanation for why an upper bound on the choice of ω must exist. As seen from (<ref>), if ω is too large, then whenever J(u,y) > J(u^λ_2, y^λ_2), λ̃ can become very small, even smaller than λ_2. In this case, minimizing the penalty adversarial functional behaves like minimizing a penalty problem with a penalty parameter smaller than λ_2. Consequently, the assertion that the adversarial minimizer violates the constraint less than (u^λ_2, y^λ_2), i.e. that its value of R is smaller, will no longer hold.
§.§ Alternate Formulations of Penalty Adversarial Problem
As implemented in (<ref>), the additional penalty term associated with ω is chosen as a quadratic function of the difference | J(u,y) - J(u^λ_2, y^λ_2) |. However, this choice is not unique. Other non-negative functions of | J(u,y) - J(u^λ_2, y^λ_2) | can also be used to add this penalty term. A natural choice is to use a monomial with a power of k, and the corresponding function can be defined as:
(u,y) = { J(u,y) + λ_1/2 R(u,y) + ω| J(u, y) - J(u^λ_2, y^λ_2)|^k, if (u,y) ∈Ω_1,
J(u,y) + λ_1/2 R(u,y), if (u,y) ∈Ω_2.
.
The corresponding penalty adversarial problem will be modified to find the minimizer of (u,y) instead of (u,y) = A^λ_1,λ_2_ω,2(u,y). Using the same example as presented in (<ref>), we plot the contour of level sets for the corresponding functional (u,y) with k ranging from 1 to 9, and fixed parameters: λ_1 = 5, λ_2 = 0.5, ω = 5. As shown in Figure <ref>, the corresponding minimizer, denoted as (, ), moves closer to the exact solution as the penalty power order k increases. Additionally, as k increases, the shape of the contour generally becomes more bowl-like: the curvature of the part near the actual solution remains similar, while the curvature on the other side becomes more flattened, and the curve extends to be wider. This property enhances the solvability of the problem. Moreover, we can observe that (,) always lies between the point (,) and (,), and it remains closer to the actual solution than (,). In conclusion, in the previous subsection, we considered the case of k = 2 for simplicity of analysis, but in practice, other values of k could also be valid choices.
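A rough numerical illustration of this behaviour is sketched below for the toy problem of the previous subsection. The starting point, the values of k and the use of scipy's default quasi-Newton solver are arbitrary choices of this sketch, and the reference value J_ref is the λ_2-minimizer objective computed above.

import numpy as np
from scipy.optimize import minimize

def J(u, y):
    return 0.5 * (u - 2.0)**2 + 0.5 * y**2

def R(u, y):
    return (2.0 * u - y)**2

lam1, omega = 5.0, 5.0
J_ref = J(6.0 / 7.0, 4.0 / 7.0)          # J(u^{lam2}, y^{lam2}) for lam2 = 0.5

def A_k(v, k):
    """Penalty adversarial functional with a monomial adversarial term of order k."""
    u, y = v
    extra = omega * abs(J(u, y) - J_ref)**k if J(u, y) > J_ref else 0.0
    return J(u, y) + 0.5 * lam1 * R(u, y) + extra

for k in (1, 2, 4, 8):
    res = minimize(A_k, x0=[1.0, 1.0], args=(k,))
    # moves towards the exact solution (0.4, 0.8), approaching the minimizer of P^{lam1}, as k grows
    print(k, np.round(res.x, 3))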
§ PENALTY ADVERSARIAL NETWORK
This section presents the neural network-based algorithm inspired by the penalty adversarial problem discussed in the previous section. Following the standard approach in physics-informed neural networks <cit.>, the input to the neural network consists of the spatial variable x and the temporal variable t, while the output is the neural network solution to the corresponding optimal control problem constrained by PDEs, as introduced in (<ref>). We denote this neural network solution as ((x,t;θ), (x,t;θ)), with parameters θ learned to minimize the corresponding loss function L[x, t; θ]. By defining the loss function in different ways, various methods can be implemented to solve optimal control problems through neural networks.
As introduced above, a standard approach <cit.> is to add the constraints as penalty terms to the objective function and then minimize it. We denote the loss function as L_P[x, t; θ], which is defined as:
L_P[x, t; θ]
= 1/N_J∑_m=1^N_JJ[(x_m^J, t_m^J; θ), (x_m^J, t_m^J; θ)]_Objective loss + λ_p/N_p∑_m=1^N_pℒ[(x_m^p, t_m^p; θ), (x_m^p, t_m^p; θ)]^2_PDE residual loss
+ λ_b/N_b∑_j=1^N_bℬ[(x_j^b, t_j^b; θ), (x_j^b, t_j^b; θ)]^2_Boundary loss + λ_i/N_i∑_n=1^N_iℐ[(x_n^i, t_n^i; θ), (x_n^i, t_n^i; θ)]^2_Initial loss.
The variables in the loss function are defined as follows: x and t represent the vectors of spatial and temporal variables used as input to the neural network, which are collections of sample points {x_m^J}_m=1^N_J, {x_m^p}_m=1^N_p, {x_j^b}_j=1^N_b, {x_n^i}_n=1^N_i, and {t_m^J}_m=1^N_J, {t_m^p}_m=1^N_p, {t_j^b}_j=1^N_b, {t_n^i}_n=1^N_i, respectively. Here, N_J, N_p, N_b, and N_i denote the number of sample points for the objective functional, PDE residual, boundary conditions, and initial conditions, respectively. The objective loss, evaluated at sample points (x_m^J, t_m^J), is denoted by J[(x_m^J, t_m^J; θ), (x_m^J, t_m^J; θ)]. The PDE residual loss, boundary loss, and initial loss, evaluated at their respective sample points, are represented by ℒ[(x_m^p, t_m^p; θ), (x_m^p, t_m^p; θ)], ℬ[(x_j^b, t_j^b; θ), (x_j^b, t_j^b; θ)], and ℐ[(x_n^i, t_n^i; θ), (x_n^i, t_n^i; θ)]. We note that the loss function based on the penalty formulation is slightly different from the general penalty formulation we discussed above, as we choose not to divide the penalty parameters by 2 to keep the form of the loss functions simple. One can, however, use the previous formulations to construct the loss functions here.
In contrast to this penalty approach, we introduce an adversarial network structure inspired by the penalty adversarial problem. Following the standard terminology used in works on adversarial networks <cit.>, we call the two networks the solver network and the discriminator network, and denote their solutions respectively as ((x,t;θ),(x,t;θ)) and ((x,t;θ),(x,t;θ)). The discriminator network simply minimizes (<ref>) with small penalty parameters. Namely, it focuses on minimizing
L^d[x, t; θ]
= 1/N_J∑_m=1^N_J
J[(x_m^J, t_m^J; θ), (x_m^J, t_m^J; θ)] + λ_p^d/N_p∑_m=1^N_pℒ[(x_m^p, t_m^p; θ), (x_m^p, t_m^p; θ)]^2
+ λ_b^d/N_b∑_j=1^N_bℬ[(x_j^b, t_j^b; θ), (x_j^b, t_j^b; θ)]^2 + λ_i^d/N_i∑_n=1^N_iℐ[(x_n^i, t_n^i; θ), (x_n^i, t_n^i; θ)]^2,
where λ_p^d, λ_b^d, λ_i^d>0 are the penalty weights for the PDE loss, boundary loss and initial loss of the discriminator network, respectively. In practice, we usually choose them to be relatively small to guarantee ease of training. On the other hand, for the solver network, in addition to the standard penalty formulation, we consider an extra term measuring the difference between the objective values of the discriminator network and the solver network. Namely, it minimizes the following loss function:
L^s[x, t; θ]
= 1/N_J∑_m=1^N_J
J[(x_m^J, t_m^J; θ), (x_m^J, t_m^J; θ)] + λ_p^s/N_p∑_m=1^N_pℒ[(x_m^p, t_m^p; θ), (x_m^p, t_m^p; θ)]^2
+ λ_b^s/N_b∑_j=1^N_bℬ[(x_j^b, t_j^b; θ), (x_j^b, t_j^b; θ)]^2 + λ_i^s/N_i∑_n=1^N_iℐ[(x_n^i, t_n^i; θ), (x_n^i, t_n^i; θ)]^2
+ω|1/N_J∑_m=1^N_J J[(x_m^J, t_m^J; θ), (x_m^J, t_m^J; θ)]-1/N_J∑_m=1^N_J J[(x_m^J, t_m^J; θ), (x_m^J, t_m^J; θ)]|^2,
where ω is a tunable hyperparameter which plays the same role as in the penalty adversarial problem.
In practice, we initialize both networks and training data, then train these two networks together. Thanks to the fast development of machine learning toolboxes, the training of this problem is relatively standard and can be implemented directly in the TensorFlow framework <cit.> since it supports automatic differentiation to calculate derivatives of the loss functions with respect to the weights. Backpropagation <cit.> is then applied to update the weights in the network. We summarize the training process in Algorithm <ref>.
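A minimal sketch of one training iteration in eager TensorFlow is given below. Here `solver` and `discriminator` are assumed to be Keras models returning the pair (state, control) on a batch of collocation points, and `solver_loss_fn`, `disc_loss_fn` are assumed to implement L^s and L^d above; none of these names are fixed by a library, they are placeholders for the problem-specific pieces.

import tensorflow as tf

opt_solver = tf.keras.optimizers.Adam(learning_rate=1e-3)
opt_disc = tf.keras.optimizers.Adam(learning_rate=1e-3)

def train_step(batch, solver, discriminator, solver_loss_fn, disc_loss_fn):
    # Discriminator update: plain penalty loss with small weights, easy to decrease.
    with tf.GradientTape() as tape_d:
        loss_d = disc_loss_fn(discriminator, batch)
    grads_d = tape_d.gradient(loss_d, discriminator.trainable_variables)
    opt_disc.apply_gradients(zip(grads_d, discriminator.trainable_variables))

    # Solver update: large-weight penalty loss plus the adversarial term, which compares the
    # solver objective with the (current) discriminator objective.
    with tf.GradientTape() as tape_s:
        loss_s = solver_loss_fn(solver, discriminator, batch)
    grads_s = tape_s.gradient(loss_s, solver.trainable_variables)
    opt_solver.apply_gradients(zip(grads_s, solver.trainable_variables))
    return loss_s, loss_d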
To conclude this section, we note that Algorithm <ref> is not exactly the neural network version of the penalty adversarial problem discussed in Section <ref>. The penalty adversarial problem uses the exact objective value at (,) to compute the additional penalty term, which is a fixed number. However, in the neural network algorithm, we use (,), which is not the exact minimum but a solution to the discriminator network trained simultaneously with the solver network. To recover a neural network version of the penalty adversarial problem, it is possible to train the discriminator as a surrogate network in advance and use it to construct the penalty term. At this stage, we are not able to assert which approach is better. We present this work with the current choice for its simplicity in implementation, as it provides an all-at-once solution.
§ NUMERICAL RESULTS
We will present and discuss the performance of the penalty adversarial network when it is applied to solve optimal control problems constrained by different types of equations, including both linear and nonlinear problems. To begin, we will provide additional details to complement Algorithm <ref> for practical implementation.
§.§ Learning Rate Scheduling
Algorithm <ref> outlines a general workflow for simultaneously training the discriminator and solver networks. In practice, to enhance the training process, we employ a learning rate scheduling strategy <cit.>. Specifically, we set a certain number of epochs as the patience parameter P. If the loss has not improved after P epochs and the learning rate is still larger than a preset minimum learning rate, the learning rate is reduced to half of its original value. This approach helps automatically fine-tune the training process by decreasing the learning rate when performance stagnates, allowing for more precise adjustments. Additionally, during training, the network often gets stuck in local minima in the first few epochs, so we typically discard the initial epochs and begin recording the best weights only after this initial phase.
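A minimal sketch of this plateau rule is given below; the function and variable names are illustrative rather than taken from a particular library.

def maybe_reduce_lr(loss_history, lr, patience=3000, factor=0.5, min_lr=1e-4):
    """Halve the learning rate if the best loss has not improved over the last `patience` epochs."""
    if len(loss_history) <= patience:
        return lr
    best_recent = min(loss_history[-patience:])
    best_before = min(loss_history[:-patience])
    if best_recent >= best_before and lr > min_lr:
        lr = max(lr * factor, min_lr)
    return lr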
§.§ Numerical Examples
Here, we will provide a few examples demonstrating the effectiveness of our proposed strategy when applied to various types of optimal control problems constrained by different equations. In each instance, we will present the effect of the direct penalty formulation with a large penalty parameter and then compare the results with the application of our proposed penalty adversarial approach. We will consider three different problems constrained by the 1D Poisson equation, 2D Poisson equation, and 2D Allen-Cahn equation, respectively.
§.§.§ Example 1: Boundary Control Problem Constrained by 1D Poisson Equation
In this example, we consider a boundary control problem constrained by a 1D Poisson equation, following the problem setup described in (<ref>). We implement Algorithm <ref> and the learning rate adjustment strategy discussed in Section <ref>. The specific optimal control problem we consider is given by:
Minimize J(u) = 1/2∫_0^1 [ u(x) - u_d(x) ]^2 dx + ρ/2[ u(0)^2 + u(1)^2 ],
subject to -d^2 u/dx^2 = A sin(2π x), x ∈ [0,1],
where the desired state u_d(x) = A/4π^2sin(2π x) + bx + a, with parameters a = -10, b = 65, and A = 8π^2. The control for this problem is the boundary values u(0) and u(1). In fact, if the boundary values are fixed, the solution to the Poisson equation is uniquely determined, which in turn determines the value of J(u).
This problem has an analytical solution, given by:
u_analytical(x) = A/(4 π^2) sin(2 π x) + b^* x + a^*,
where a^* = a/(1 + 2 ρ) + 2 ρ b/[(1 + 2 ρ)(1 + 6 ρ)] and b^* = b/(1 + 6 ρ). In this example, we take ρ = 2, resulting in a^* = 2 and b^* = 5.
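A two-line check of these numbers (with the values of a, b and ρ quoted above):

a, b, rho = -10.0, 65.0, 2.0
a_star = a / (1.0 + 2.0 * rho) + 2.0 * rho * b / ((1.0 + 2.0 * rho) * (1.0 + 6.0 * rho))
b_star = b / (1.0 + 6.0 * rho)
print(a_star, b_star)        # 2.0 5.0, as stated in the text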
Firstly, we consider using a standard penalty formulation, as given in (<ref>), with a large penalty parameter to solve the problem. Namely, the corresponding neural network aims to output (,) to minimize the following loss function:
L[x; θ] = ρ/2[ (0;θ)^2 + (1;θ)^2 ] +1/2N∑_m=1^N[ (x_m;θ) - u_d(x) ]^2
+ λ_p/N∑_m=1^N[d^2 /dx^2(x_m;θ) + A sin(2π x_m)]^2.
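A TensorFlow sketch of this loss is shown below; `net` is assumed to be a Keras model mapping x in [0,1] to the scalar network state, with its boundary values acting as the control, and the second derivative is obtained with nested gradient tapes. The code is an illustration rather than an excerpt of the implementation used for the experiments.

import numpy as np
import tensorflow as tf

A, rho, lam_p, N = 8.0 * np.pi**2, 2.0, 5000.0, 32
x = tf.reshape(tf.cast(tf.linspace(0.0, 1.0, N), tf.float32), (-1, 1))
u_d = A / (4.0 * np.pi**2) * tf.sin(2.0 * np.pi * x) + 65.0 * x - 10.0   # desired state

def example1_loss(net):
    with tf.GradientTape() as t2:
        t2.watch(x)
        with tf.GradientTape() as t1:
            t1.watch(x)
            u = net(x)
        u_x = t1.gradient(u, x)
    u_xx = t2.gradient(u_x, x)                        # d^2 u / dx^2 by nested automatic differentiation
    u_bnd = net(tf.constant([[0.0], [1.0]]))          # boundary values act as the control
    loss = 0.5 * rho * tf.reduce_sum(u_bnd**2)                                 # control penalty
    loss += 0.5 * tf.reduce_mean((u - u_d)**2)                                 # misfit with desired state
    loss += lam_p * tf.reduce_mean((u_xx + A * tf.sin(2.0 * np.pi * x))**2)    # PDE residual
    return loss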
The training data {x_m}_m are N = 32 points uniformly chosen from [0,1]. The hyperparameters chosen for this example are as follows: the neural network depth is 4, the width is 40, and the penalty parameter is λ_p=5000. The initial learning rate is 0.001, which can be reduced to a minimum learning rate of 0.0001 by the learning rate scheduling strategy with patience set to 3000. The training is conducted over a maximum of 200000 epochs. The numerical findings are presented in Figure <ref>.
We observe that after 200000 epochs, the loss function continues to decrease. However, the numerical solution computed via the neural network remains significantly distant from the analytical solution. The derivative of the numerical solution exhibits a nearly constant error compared to the analytical derivative. In contrast, the second derivative of the numerical solution, which corresponds to the constraint in this problem, remains close to its analytical value. This outcome aligns with our expectations, as we enforce a large penalty parameter on the PDE constraint. While this enforcement ensures the solution adheres well to the constraint, it also makes it difficult for the network to be trained effectively and to reach the actual solution.
It is important to note that the training process here is standard without applying specific techniques. While refining the training procedure or increasing the number of training epochs can bring the numerical solution closer to the analytical solution, the results demonstrate that a basic neural network may struggle to find the optimal solution when a large penalty parameter is employed. In comparison, we now focus on solving the same problem using our proposed penalty adversarial strategy.
Instead of using a single network, the penalty adversarial network approach creates and trains the solver and the discriminator networks together. Namely, the discriminator network aims to output (,) to minimize the following loss function
L^d[x; θ] = ρ/2[ (0;θ)^2 + (1;θ)^2 ] +1/2N∑_m=1^N[ (x_m;θ) - u_d(x) ]^2
+ λ_p^d/N∑_m=1^N[d^2 /dx^2(x_m;θ) + A sin(2π x_m)]^2,
and the solver network aims to output (,) to minimize the following loss function
L^s[x; θ] = ρ/2[ (0;θ)^2 + (1;θ)^2 ]+1/2N∑_m=1^N[ (x_m;θ) - u_d(x) ]^2
+ λ_p^s/N∑_m=1^N[d^2 /dx^2(x_m;θ) + A sin(2π x_m)]^2
+ω {ρ/2[ (0;θ)^2 + (1;θ)^2 ]-ρ/2[ (0;θ)^2+(1;θ)^2 ]
+1/2N∑_m=1^N[ (x_m;θ) - u_d(x) ]^2-1/2N∑_m=1^N[ (x_m;θ) - u_d(x) ]^2}^2.
The parameters here are chosen as λ_p^d = 1, λ_p^s= 5000 and ω = 1.
Other hyperparameters for the construction of the networks remain the same as in the previous setting. The neural network's depth and width are set to 4 and 40, respectively. The training data consists of N=32 points uniformly drawn from [0,1]. The initial learning rate is set to 0.001, which can be reduced to a minimum learning rate of 0.0001 based on the learning rate scheduling strategy, with patience set to 3000. The training is conducted over a maximum of 200000 epochs.
The numerical findings are presented in Figure <ref>. The first row compares the neural network approximations from both the solver and discriminator networks with the analytical solution for the value of u, explicitly showing the magnitude of the difference between these two numerical solutions and the analytical solution. Both numerical solutions approximate the analytical solution well. However, the solver network's solution is closer to the real solution, with a maximum error of 0.08, while the maximum error for the discriminator network reaches 0.16.
A more notable difference is observed in the second row, which compares the second-order derivatives. The accuracy of the second-order derivative reflects the degree of constraint satisfaction. Here, the solver network's solution maintains the error of d^2u/dx^2 under 0.025, which is quite small compared to the magnitude of the second-order derivative. However, the discriminator network exhibits a much larger error in this term, reaching a maximum error of 1.2.
This example demonstrates the advantage of using an adversarial network compared to a standard penalty formulation with both large and small penalty parameters. As shown in Figure <ref>, using a large penalty parameter makes the neural network harder to train, resulting in a numerical solution that deviates from the real solution after a certain number of epochs. The adversarial approach produces a much more accurate approximation while maintaining a large penalty parameter to guarantee satisfying the underlying PDE. Conversely, using a small penalty parameter results in a formulation whose theoretical solution is away from the real analytical solution u_analytical, as given in (<ref>). This fact means that even if the discriminator neural network is easier to train and converges, the obtained solution will not be accurate enough, particularly not satisfying the PDE constraint well, as shown in Figure <ref>. The adversarial approach successfully achieves a solution closer to the real analytical solution without requiring complicated training techniques beyond introducing the adversarial structure. This example highlights the effectiveness of applying the penalty adversarial approach.
§.§.§ Example 2: Distributed Control Problem Constrained by 2D Poisson Equation
Here we show that our approach also works for multi-dimensional problems. We consider a distributed control problem constrained by a 2D Poisson equation. The same problem is investigated in <cit.> as well.
The specific optimal control problem we consider is given by:
Minimize J(u, f) = 1/2∫_0^1 ∫_0^1 [ u(x, y) - u_d(x, y) ]^2 dx dy + ρ/2∫_0^1 ∫_0^1 f(x, y)^2 dx dy,
subject to -Δ u(x,y) = f(x, y), (x, y) ∈ [0,1] × [0,1],
with u(x,y)=0 on the boundary and the desired state
u_d(x, y) = Asin(π x) sin(π y),
with parameter A = 10. The control for this problem is the distributed control f(x, y) over the domain. This problem also has an analytical solution, given by:
u_analytical(x, y) = A/1 + 4 ρπ^4sin(π x) sin(π y),
and the exact control is:
f_analytical(x, y) = 2 π^2 A/1 + 4 ρπ^4sin(π x) sin(π y).
In this example, we take ρ = 0.01. We present the desired state u_d and the analytical solution u_analytical in Figure <ref>. Under this choice of ρ, there is an apparent difference between the desired state and the analytical solution to the control problem. We deliberately choose such an example instead of a problem with a much smaller regularization parameter (for example, ρ = 0.0001) to ensure that the neural network genuinely finds the solution to the optimal control problem rather than merely solving an approximation problem that approximates the desired state as if learning a given function.
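The following snippet checks on a grid that the stated pair indeed satisfies the Poisson constraint and reproduces the quoted peak value of u; the grid resolution is an arbitrary choice, and only finite-difference error remains in the residual.

import numpy as np

A, rho = 10.0, 0.01
c = A / (1.0 + 4.0 * rho * np.pi**4)
x = y = np.linspace(0.0, 1.0, 101)
h = x[1] - x[0]
X, Y = np.meshgrid(x, y, indexing="ij")
u = c * np.sin(np.pi * X) * np.sin(np.pi * Y)
f = 2.0 * np.pi**2 * c * np.sin(np.pi * X) * np.sin(np.pi * Y)

# five-point Laplacian; the boundary rows/columns are excluded from the check
lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) + np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u) / h**2
print(np.max(np.abs(-lap[1:-1, 1:-1] - f[1:-1, 1:-1])))   # O(h^2): the constraint -Lap(u) = f holds
print(u.max())                                            # ~2.04, consistent with the range quoted later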
Firstly, as a naive approach, we aim to solve this problem using a penalty formulation. We consider using a neural network with a depth of 4 layers and width of 60 neurons to minimize the following loss function:
L[x, y; θ] =
1/2N^2∑_m=1^N∑_n=1^N [ (x_m, y_n; θ) - u_d(x_m, y_n) ]^2 + ρ/2N^2∑_m=1^N∑_n=1^N (x_m, y_n; θ)^2
+ λ_p/N^2∑_m=1^N∑_n=1^N[ ∂^2 /∂ x^2(x_m, y_n; θ) + ∂^2 /∂ y^2(x_m, y_n; θ) + (x_m, y_n; θ) ]^2
+ λ_b/(N_b)∑_k=1^N_b[ (x_k^b, y_k^b; θ) ]^2,
The weights for the equation and boundary loss terms are set to λ_p = λ_b = 2000. The training data used to compute the objective function and equation loss are selected as {x_m}_m=1^N, {y_n}_n=1^N with N = 16, resulting in a total of 256 interior points. The training data used to compute the boundary loss are selected as {(x_k^b,y_k^b)}_k=1^N_b with N_b = 32. Both the interior points and boundary data are uniformly sampled from their corresponding domains. For the training details, the initial learning rate is set to 0.001, which can be reduced to a minimum learning rate of 0.0001 based on the learning rate scheduling strategy, with patience set to 3000 epochs. The training is conducted over a maximum of 450000 epochs. The numerical findings are presented in Figure <ref>.
The results indicate that the numerical solution does not approximate the analytical solution accurately. The exact value for u ranges from 0 to approximately 2.04, while the numerical value for u remains below 0.6, indicating a significant discrepancy. Although the training loss continues to decrease, it is evident that the model does not converge to the correct solution even after 450000 epochs. While we cannot exclude the possibility that the network might eventually converge to the actual solution with more training epochs, this convergence has not been observed at this point after such a large number of epochs. This phenomenon showcases the limitations of a direct approach using penalty formulation with large penalty parameters to implement the PDE constraints.
In contrast, let us apply the penalty adversarial network to solve the same problem. The loss function that the discriminator network aims to minimize is given by:
L^d[x, y; θ] =
1/2N^2 ∑_m=1^N∑_n=1^N[ (x_m, y_n; θ) - u_d(x_m, y_n) ]^2 + ρ/2N^2∑_m=1^N∑_n=1^N (x_m, y_n; θ)^2
+ λ_p^d/N^2∑_m=1^N∑_n=1^N[ ∂^2 /∂ x^2(x_m, y_n; θ) + ∂^2 /∂ y^2(x_m, y_n; θ) + (x_m, y_n; θ) ]^2
+ λ^d_b/(N_b)∑_k=1^N_b[ (x_k^b, y_k^b; θ) ]^2.
On the other hand, the solver network aims to minimize:
L^s[x, y; θ] =
1/2N^2∑_m=1^N∑_n=1^N[ (x_m, y_n; θ) - u_d(x_m, y_n) ]^2 + ρ/2N^2∑_m=1^N∑_n=1^N (x_m, y_n; θ)^2
+ λ_p^s/N^2∑_m=1^N∑_n=1^N[ ∂^2 /∂ x^2(x_m, y_n; θ) + ∂^2 /∂ y^2(x_m, y_n; θ) + (x_m, y_n; θ) ]^2
+ λ^s_b/(N_b)∑_k=1^N_b[ (x_k^b, y_k^b; θ) ]^2
+ω {ρ/2N^2∑_m=1^N∑_n=1^N (x_m, y_n; θ)^2-ρ/2N^2∑_m=1^N∑_n=1^N (x_m, y_n; θ)^2
+ 1/2N^2∑_m=1^N∑_n=1^N[ (x_m, y_n; θ) - u_d(x_m, y_n) ]^2
- 1/2N^2∑_m=1^N∑_n=1^N[ (x_m, y_n; θ) - u_d(x_m, y_n) ]^2 }^2.
The parameters are chosen as λ_p^s=λ_b^s = 2000, λ_p^d=λ_b^d = 10 and ω = 100. All other parameters related to the structure and the training data remain the same.
The numerical results of applying the penalty adversarial network to solve problem (<ref>) are presented in Figure <ref>. We observe that the solver successfully finds the real solution. Subplot (c) in Figure <ref> shows that the difference between the exact solution, shown in Subplot (a), and the predicted result from the solver, shown in Subplot (b), is minimal. However, the discriminator fails to reach the exact solution, exhibiting a significant error in the bottom-right area compared to other regions, as shown in Subplot (f).
Focusing on the ability of the networks to adhere to the constraints, as reflected by the PDE losses, we see that the solver's prediction for u and f generally follows the Poisson equation quite well in most areas, except for some deviation at the top boundary. In comparison, while the discriminator's prediction also generally follows the Poisson equation, it deviates strongly in the bottom-right area, corresponding to the same region where the discriminator makes a significant error in predicting the values of u.
Examining the evolution of the loss function over epochs, we observe an interesting phenomenon. As seen in Subplot (i) of Figure <ref>, the loss for the discriminator decreases slowly and remains almost flat during the first 400000 epochs. However, after 400000 epochs, the loss decreases much more rapidly. At the same time, the loss for the solver increases quickly due to the difference in the objective value between the solver and the discriminator, which causes the additional penalty term introduced by the penalty adversarial approach to increase rapidly.
Though we do not fully understand this behavior at present, we propose one potential explanation: both the solver and the discriminator gradually converge toward the analytical solution at first. However, since the discriminator uses a small penalty parameter, theoretically, the minimum that the discriminator network can converge to deviates a certain distance from the analytical solution u_analytical. This deviation arises from a tradeoff between disobeying the constraints and decreasing the objective function, leading to a situation where the corresponding numerical solution might attempt to decrease the objective function by strongly disobeying the PDE constraints in certain areas. This explanation aligns with the observed phenomenon in the discriminator's result, which significantly deviates from satisfying the Poisson equation in a specific region. Comparatively, although the loss function for the solver increases after 400000 epochs, the minimum that the solver network reaches does not change significantly and remains close to the real analytical solution.
§.§.§ Example 3: Distributed Control Problem Constrained by 2D Allen-Cahn Equation
As the last example presented here, we will showcase that the penalty adversarial network can also be applied to solve nonlinear problems, albeit more complicated. We consider a distributed control problem constrained by a 2D Allen-Cahn equation, which has received considerable research interest from the computational community <cit.>.
The specific optimal control problem has the same formulation for the objective functional as the previous example (<ref>) but is constrained by a different equation, given by:
Minimize J(u, f) = 1/2∫_0^1 ∫_0^1 [ u(x, y) - u_d(x, y) ]^2 dx dy + ρ/2∫_0^1 ∫_0^1 f(x, y)^2 dx dy,
subject to -Δ u(x,y) + 1/ϵ^2[u(x,y)^3-u(x,y)] = f(x, y), (x, y) ∈ [0,1] × [0,1],
with u(x,y) = 0 on the boundary and a chosen desired state. The parameter ϵ > 0 is related to the PDE constraint itself. Using an adjoint formulation <cit.>, we can find the following pair of functions that serve as a solution to problem (<ref>): We set
u_analytical(x, y) = αsin(π x) sin(π y) + βsin(2π x) sin(2π y),
and
f_analytical(x, y)
= 2π^2 [αsin(π x) sin(π y) + 4βsin(2π x) sin(2π y)]
+ 1/ϵ^2[(αsin(π x) sin(π y) + βsin(2π x) sin(2π y))^3 - αsin(π x) sin(π y) - βsin(2π x) sin(2π y)],
while the desired state is
u_d(x, y) = u_analytical(x, y) + ρ Δ^2 u_analytical(x, y) - 3ρ/ϵ^2u_analytical^2(x, y) Δ u_analytical(x, y)
- ρ/ϵ^2[ 6 u_analytical(x, y) |∇ u_analytical(x, y)|^2 + Δ u_analytical(x, y) ]
- ρ/ϵ^2[ Δ u_analytical(x, y) - 1/ϵ^2( u_analytical^3(x, y) - u_analytical(x, y) ) ] (3 u_analytical^2(x, y) - 1).
Here, α, β∈ℝ are parameters. Note that if ρ = 0, then u_d = u_analytical, which coincides with the expectation that, in this case, the minimization problem (<ref>) is simply equivalent to learning a known function. However, when ρ > 0, u_d can have obvious differences from u_analytical. In the example presented here, we choose ϵ = 0.4, α = 0.45, β = 0.55, and ρ = 0.0001. Under these choices, the corresponding desired state u_d and u_analytical are plotted in Figure <ref>. Obvious differences in values at the same points can be observed from this comparison, preventing the network from simply learning a function.
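Since the construction above is somewhat lengthy, a quick numerical sanity check that (u_analytical, f_analytical) satisfy the Allen-Cahn constraint is given below; the grid resolution is an arbitrary choice, and only finite-difference error should remain in the residual.

import numpy as np

eps, alpha, beta = 0.4, 0.45, 0.55
x = y = np.linspace(0.0, 1.0, 201)
h = x[1] - x[0]
X, Y = np.meshgrid(x, y, indexing="ij")

u = alpha * np.sin(np.pi * X) * np.sin(np.pi * Y) + beta * np.sin(2 * np.pi * X) * np.sin(2 * np.pi * Y)
f = (2 * np.pi**2 * (alpha * np.sin(np.pi * X) * np.sin(np.pi * Y)
                     + 4 * beta * np.sin(2 * np.pi * X) * np.sin(2 * np.pi * Y))
     + (u**3 - u) / eps**2)

lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) + np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u) / h**2
residual = -lap + (u**3 - u) / eps**2 - f
print(np.max(np.abs(residual[1:-1, 1:-1])))    # small: only discretization error remains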
For the sake of brevity of our presentation, we will skip the experiment of using a naive approach as a penalty formulation with a large penalty parameter. As expected, it will fail to work. We will only present the results of applying a penalty adversarial network. For this approach, we define the corresponding loss function for the discriminator and solver network, respectively, as
L^d[x, y; θ] =
1/2N^2 ∑_m=1^N∑_n=1^N[ (x_m, y_n; θ) - u_d(x_m, y_n) ]^2 + ρ/2N^2∑_m=1^N∑_n=1^N (x_m, y_n; θ)^2
+ λ_p^d/N^2∑_m=1^N∑_n=1^N[ ∂^2 /∂ x^2(x_m, y_n; θ) + ∂^2 /∂ y^2(x_m, y_n; θ)
-1/ϵ^2((x_m, y_n; θ)^3-(x_m, y_n; θ)) +(x_m, y_n; θ) ]^2
+ λ^d_b/(N_b)∑_k=1^N_b[ (x_k^b, y_k^b; θ) ]^2,
and
L^s[x, y; θ] =
1/2N^2∑_m=1^N∑_n=1^N[ (x_m, y_n; θ) - u_d(x_m, y_n) ]^2 + ρ/2N^2∑_m=1^N∑_n=1^N (x_m, y_n; θ)^2
+ λ_p^d/N^2∑_m=1^N∑_n=1^N[ ∂^2 /∂ x^2(x_m, y_n; θ) + ∂^2 /∂ y^2(x_m, y_n; θ)
-1/ϵ^2((x_m, y_n; θ)^3-(x_m, y_n; θ)) +(x_m, y_n; θ) ]^2
+ λ^s_b/(N_b)∑_k=1^N_b[ (x_k^b, y_k^b; θ) ]^2
+ω {ρ/2N^2∑_m=1^N∑_n=1^N (x_m, y_n; θ)^2-ρ/2N^2∑_m=1^N∑_n=1^N (x_m, y_n; θ)^2
+ 1/2N^2∑_m=1^N∑_n=1^N[ (x_m, y_n; θ) - u_d(x_m, y_n) ]^2
- 1/2N^2∑_m=1^N∑_n=1^N[ (x_m, y_n; θ) - u_d(x_m, y_n) ]^2 }^2.
The hyperparameters for this experiment are configured as follows: The training data for both the objective function and equation loss are sampled as {x_m}_m=1^N and {y_n}_n=1^N, with N = 32. The boundary training data consist of {(x_k^b, y_k^b)}_b=1^N_b, with N_b = 32. Both the interior and boundary points are uniformly sampled from their respective domains. For the training specifics, the initial learning rate is set at 0.001, potentially decreasing to a minimum of 0.0001 using a learning rate scheduling strategy with a patience parameter of 10000 epochs. The learning rate adjustment does not begin until after 300000 epochs to avoid early-stage inaccuracies. The total number of epochs is extended to 1.5 million to capture the full training dynamics, as the solver network, as seen in the experiment results, continues to improve throughout. This extended training period also highlights the common challenges of addressing nonlinear equations in PINN applications <cit.>.
The penalty parameters selected for this example are as follows: λ_p^s = λ_b^s = 1000, λ_p^d = λ_b^d = 0.2, and ω = 20000. It is important to note that we opted for a relatively large value for ω due to the complexity of the associated nonlinear problem. A small ω would not significantly accelerate the training process or help achieve convergence, which is the main advantage over using a large penalty parameter alone. Although we have established that ω should have an upper bound, as demonstrated in Proposition <ref>, this example illustrates that using a relatively large value for ω in some instances is practical. The results of this experiment are shown in Figure <ref>.
We observe that both the solver and discriminator networks converge to solutions close to the exact solution, with the solver network being notably more accurate than the discriminator network. This is evident in Subfigures (c) and (f), which explicitly show the differences between the solver's and discriminator's predictions compared to the exact solution, respectively. The inaccuracy of the discriminator network stems from the fact that its penalty parameter, set at λ_p^d = λ_b^d = 0.2, is too small to ensure a solution that closely approximates the exact solution of the constrained optimization problem. On the other hand, since ω is chosen to be large in this example, the expected advantage of the solver network adhering better to the constraint compared to the discriminator network becomes less pronounced, as observed in Subfigures (g) and (h). However, even so, we can still observe that the PDE loss for the solver's network is slightly smaller than that of the discriminator's network, aligning with theoretical expectations.
Another important observation is that the discriminator network converges quickly, as expected, due to its small penalty parameter. As shown in Figure <ref>, the discriminator network finds a solution with similar errors to the final result after just 300000 epochs. As we can see, Subfigures (b) and (c) are very similar, indicating that the subsequent 1200000 epochs yield minimal improvement for the discriminator network. Additionally, the discriminator's prediction shows larger inaccuracies on the boundary compared to the interior points. This phenomenon may be related to the limited number of training points assigned to each boundary, with only 8 points on each side.
In contrast, the solver network converges much more slowly, as evident in Figure <ref>. The solver network continuously reduces the error between its prediction and the exact solution, ultimately resulting in a more accurate solution than the discriminator network. However, it's important to note that while the solver network requires more time to converge than the discriminator, it is still far more efficient than simply using a traditional penalty formulation with a large penalty parameter, which will still be far from the actual solution after 1.5 million epochs. From Figure <ref>, we observe that after 500000 epochs, the prediction still deviates from the exact solution by a certain margin, but the result after 1000000 epochs is already quite good. The maximum error shown in Subfigure (b) is just slightly larger than in Subfigure (c) while the latter one took an additional 500000 epochs to achieve. This observation suggests a practical tradeoff between efficiency and accuracy that should be considered in real-world applications.
§ ACKNOWLEDGMENT
§.§ Declaration of generative AI and AI-assisted technologies in the writing process
During the preparation of this work, the author(s) used ChatGPT in order to improve the writing of this paper. After using this tool, the author(s) reviewed and edited the content as needed and take(s) full responsibility for the content of the publication.
http://arxiv.org/abs/2409.03712v1 | 20240905171829 | Exploring neutrino interactions in light of present and upcoming galaxy survey | ["Sourav Pal", "Rickmoy Samanta", "Supratik Pal"] | astro-ph.CO | ["astro-ph.CO", "hep-ph"] |
Exploring neutrino interactions in light of present and upcoming galaxy survey

5th September 2024
===================================
§ INTRODUCTION
Neutrinos, ever-present throughout the universe's cosmic history, hold a profound significance in our understanding of fundamental physics. While they are considered massless in the standard model of particle physics, neutrino oscillation experiments <cit.> conducted over the past few decades have revealed that at least two of the neutrino mass eigenstates are massive. Data from the neutrino oscillation experiments <cit.> provide insight into the mass-splittings, and hence on the sum of neutrino masses. Current and future cosmological observations can complement these efforts by providing further information about the sum of neutrino masses and possible neutrino interactions. The most stringent upper bound on the sum of neutrino mass currently stands at ∑ m_ν < 0.12 eV <cit.>. The presence of a non-zero neutrino mass brings forth a wide range of beyond standard model (BSM) scenarios, each allowing for different types of neutrino interactions. Terrestrial experiments have already made remarkable strides in probing these BSM interactions <cit.>. Complementary to that, cosmological observations provide a wealth of information about these interactions, offering a unique lens through which we can explore both the sum of neutrino masses and the underlying nature of their interactions.
In the standard model of cosmology, neutrinos decouple from the primordial plasma at a temperature of ∼ 1 MeV, when the weak interaction rate (∼ G_ F^2 T^5) becomes suppressed in comparison to the Hubble expansion rate H(T). Here G_ F is the Fermi coupling constant, which sets the strength of the weak interaction. After decoupling, neutrinos free-stream through the photon-baryon fluid at almost the speed of light, which eventually drags the photon-baryon fluid towards smaller scales. Moreover, neutrinos introduce a significant anisotropic component in the evolution of the gravitational potential, leading to a noticeable suppression of the peaks of the Cosmic Microwave Background (CMB) spectrum <cit.>. These influences extend to the Baryon Acoustic Oscillation (BAO) features observed in the large-scale structure of the late universe <cit.>. The combined insights from CMB and BAO measurements thus serve to constrain the elusive nature of neutrino free-streaming and the corresponding interactions <cit.>.
Various well-motivated neutrino interactions have been brought forth in the last few decades. These include interactions in both early and late universe. Neutrino interactions in the early universe delay the onset of free-streaming and as a result leaves distinguishable signatures in both the CMB power spectra and BAO. In the early universe, neutrino interactions are preferably described by the four Fermi self-interaction where the interaction strength varies with temperature as T_ν^5. These kinds of neutrino interactions have been studied before in the literature in light of CMB <cit.>. In addition, various BSM interactions within the neutrino sector have also been studied, including scenarios where neutrinos annihilate into massless scalars <cit.> or undergo decay and inverse decay via eV-scale neutrinophilic scalars <cit.>. These latter interactions are more prominent at lower temperatures and primarily impact large-scale modes, whereas neutrino self-interactions on the other hand are most effective at higher temperatures, influencing small-scale modes.
Neutrinos interacting in the late universe, as well as transient interactions, leave a free-streaming window in the redshift range 2000<z<10^5, primarily affecting the large scale modes <cit.>. On the other hand, small scale modes carry information about interactions that are dominant at high temperature (early universe), which is the focus of the present analysis.
Most of the previous studies on interacting neutrino models have primarily focused on the CMB. Recently, however, information from the Large Scale Structure (LSS), particularly in the mildly non-linear regime, has been employed to investigate the strongly interacting (SI) and moderately interacting (MI) modes of T_ν^5 type neutrino self-interactions in <cit.>. However, a major challenge in probing the non-linear regime is the breakdown of standard perturbation theory beyond the linear regime, which calls for simulation-based approaches <cit.>. Possible alternatives like the Effective Field Theory (EFT) of Large Scale Structure (LSS) have shown promise in extracting cosmological information from small scales, at least up to the mildly non-linear regime. In order to investigate non-trivial neutrino interactions that may not be easily tractable in simulations, our study will follow the EFT of LSS approach. Additionally, previous studies mainly focused on constraining the effective interaction strength of four-Fermi-like neutrino self-interactions using the EFT of LSS. Our analysis adopts a more general parameterization of neutrino interactions in the early universe, following <cit.>. We adopt a temperature-dependent parameterization of the neutrino interaction rate as detailed in <cit.> and characterize it by a power law in temperature, Γ_ν∝ T_ν^n_ int, where Γ_ν represents the neutrino interaction rate, T_ν is the background neutrino temperature, and n_ int is a power-law index that generalizes all types of neutrino interactions.
Specifically, we consider interactions with power-law index n_ int = 3, 4 and 5 in our analysis. This also helps us investigate the scenario in a fairly model-independent way. Although some of the scenarios may not be readily mapped onto simple particle physics models, examining the free-streaming window of neutrinos through LSS analysis, in conjunction with recent CMB studies, remains a valuable endeavor.
In this work, we investigate the effects of neutrino interactions on LSS following the EFT of LSS approach and search for possible bounds on the parameters from present and upcoming galaxy surveys in combination with CMB. Although we constrain the standard cosmological parameters (in the background ΛCDM setup) along with the neutrino interaction parameters, our primary intention is to find possible bounds on the interaction redshift and the sum of neutrino masses.
Here the term “interaction redshift” refers to the specific redshift at which neutrinos start to free-stream (denoted by z_ int throughout the paper, which is essentially the decoupling redshift of neutrinos from the specific interaction under consideration.[ Note that, for interactions active at low redshift regime, neutrinos can recouple again with an unknown scalar field (in eV-scale neutrinophilic model),
which is not considered here in this analysis.])
Additionally, contrary to the previous analysis <cit.>, we make use of full shape (FS) galaxy survey data in combination with CMB to constrain the parameters.
More specifically, we employ the multipoles of the galaxy power spectrum from the Baryonic Oscillation Spectroscopic Survey (BOSS) Data Release 12 (DR12), which have been combined into a full shape (FS) likelihood in <cit.>. As demonstrated in the present article, these early universe neutrino interactions impact the galaxy power spectrum multipoles differently depending on the interaction redshift. Our investigations suggest improved bounds on the interaction redshift (z_ int) over those obtained from the (Planck + BAO)-only analysis <cit.>. More specifically, we find z_ int > 7.93 × 10^3 (for n_ int=3), z_ int > 1.28 × 10^5 (for n_ int=4) and z_ int > 1.7 × 10^5 (for n_ int=5).
Interestingly, the (n_ int=5) scenario admits a concrete particle physics model <cit.>, which allows us to obtain constraints on the coupling constant G_ eff< 1.59 × 10^-4 MeV^-2, consistent with the results obtained recently in <cit.>. Moreover, our analysis with FS data in combination with CMB data suggests a relaxed bound on the sum of neutrino mass, yielding ∑ m_ν < 0.16 eV at 95% confidence level (C.L.) for all the models under consideration. We further examine forecast results for future LSS observations, like the Euclid mission in a joint analysis with CMB-S4 and the Planck baseline, providing further improved bounds z_ int > 6.31 × 10^5 (for n_ int=3), z_ int > 1.78 × 10^6 (for n_ int=4) and z_ int > 1.78× 10^6 (for n_ int=5). For the n_ int=5 model, z_ int >1.78 × 10^6 corresponds to an upper bound on the interaction coupling constant G_ eff< 4.3 × 10^-6 MeV^-2. Our forecast analysis also suggests that, apart from z_ int, the Euclid galaxy survey will be able to probe the sum of neutrino mass with σ(∑ m_ν)=0.02 eV at 95% C.L. in a joint analysis with CMB-S4 and Planck.
The paper is organized as follows. In Sec. <ref>, we briefly review the basics of cosmological perturbation theory (CPT) in presence of massive neutrinos within the linear regime in presence of interactions and modeling neutrino interactions in the early universe. Following that, in Sec. <ref> we discuss how these scenarios affect the galaxy power spectra in mildly non-linear regime. The data and methodology used in this paper are presented in Sec. <ref>, while our results and discussions are shown in Sec. <ref>. Further in Sec. <ref>, we present the forecast for future missions and finally we summarize in Sec. <ref>. The detailed (6+2) parameter posterior distributions for all the cases are presented in Appendix <ref> and <ref>.
§ NEUTRINOS IN COSMOLOGICAL PERTURBATIONS
§.§ Neutrino perturbation equations
In the primordial universe, neutrinos, as relativistic entities, generate anisotropic stress within the perturbed Einstein metric. This anisotropy arises from the velocity perturbations in the fluid equations of free-streaming neutrinos and plays a pivotal role in shaping the CMB spectrum. Such anisotropic stress drives the metric perturbation that suppresses the CMB angular power spectra. Additionally, the rapid free-streaming of neutrinos at nearly the speed of light during this epoch introduces a phase shift in BAO. On the other hand, presence of interactions among neutrinos can delay the onset of free-streaming by dampening the anisotropic stress, thereby altering the dynamics. The Boltzmann hierarchy equations, accounting for neutrino interactions, can be expressed as follows <cit.>,
dΨ_0/dτ = -(qk/ϵ) Ψ_1 + (1/6) ḣ (dln f_0/dln q) ,
dΨ_1/dτ = (qk/3ϵ) (Ψ_0 - 2Ψ_2) ,
dΨ_2/dτ = (qk/5ϵ) (2Ψ_1 - 3Ψ_3) - [ (1/15) ḣ + (2/5) η̇ ] (dln f_0/dln q) - a Γ_ν Ψ_2 ,
dΨ_l/dτ = (qk/(2l+1)ϵ) [ l Ψ_l-1 - (l+1) Ψ_l+1 ] - a Γ_ν Ψ_l ,    l ≥ 3 .
Here, f_0 represents the background Fermi-Dirac distribution function, while Ψ_l(k,q,τ) denotes the l^ th order perturbation to the distribution function, corresponding to the l^ th order Legendre polynomial in Fourier space. The comoving energy of the neutrinos is given by ϵ = √(q^2+a^2 m_ν^2), where q is the amplitude of the comoving momentum. The Boltzmann hierarchy equations are expressed in the synchronous gauge, where h and η represent the standard metric perturbations in this gauge.
Additionally, Γ_ν signifies the neutrino interaction rate in the early universe, with the parameterization of this interaction rate detailed in the subsequent section. Due to the conservation of mass and momentum, the evolution equations for Ψ_0 and Ψ_1 are unaffected by the interaction terms while neutrino interactions begin to influence the Boltzmann hierarchy starting from l = 2.
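Schematically, the truncated hierarchy with the damping term can be written as follows for a single (k, q) mode; this is only an illustration of the equations above, not an extract of the actual implementation inside the Boltzmann code, and the closure of the last multipole is omitted.

import numpy as np

def dpsi_dtau(psi, k, q, eps, hdot, etadot, dlnf0_dlnq, a, Gamma_nu, l_max=17):
    """Time derivatives of [Psi_0, ..., Psi_{l_max}] including the damping term -a*Gamma_nu*Psi_l."""
    dpsi = np.zeros_like(psi)
    dpsi[0] = -(q * k / eps) * psi[1] + (hdot / 6.0) * dlnf0_dlnq
    dpsi[1] = (q * k / (3.0 * eps)) * (psi[0] - 2.0 * psi[2])
    dpsi[2] = ((q * k / (5.0 * eps)) * (2.0 * psi[1] - 3.0 * psi[3])
               - (hdot / 15.0 + 2.0 * etadot / 5.0) * dlnf0_dlnq
               - a * Gamma_nu * psi[2])
    for l in range(3, l_max):
        dpsi[l] = ((q * k / ((2 * l + 1) * eps)) * (l * psi[l - 1] - (l + 1) * psi[l + 1])
                   - a * Gamma_nu * psi[l])
    return dpsi   # dpsi[l_max] is left to the truncation scheme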
We have implemented Eqs. (<ref>-<ref>) in the cosmological code <cit.> which is an extension to the Boltzmann solver code <cit.>.
The code outputs are based on the EFT of LSS <cit.> as detailed in Sec. <ref>. Within , the Boltzmann hierarchy equations have been computed in the synchronous gauge following <cit.>, as usually done in the standard code <cit.>. For implementing the interaction scenario, we use the standard relaxation time approximation. We assume that the interaction rate Γ_ν depends only on the neutrino temperature and is independent of the internal momenta and cosmological scales. Although the effects of including the momentum dependence of the interaction rate have been studied before in <cit.>, as long as the neutrino mass is negligible and we consider Γ_ν to be the average rate at which the neutrino free-streaming is damped, this is a good approximation. The approximations used for the neutrino Boltzmann hierarchy are mentioned in the footnote [We have used the default fluid approximation for non-cold relics, i.e. CLASS-FA, following <cit.>. The full Boltzmann hierarchy is employed up to the default value of kτ. Truncation of the full Boltzmann hierarchy has been done at l_ max=17. Additionally, we consider three degenerate neutrinos and solve the hierarchy for one neutrino species. ]
§.§ Modeling early universe neutrino interactions
Neutrino interactions across the evolutionary timeline of our universe can be depicted through a generic, model-independent framework. The interaction rates within this paradigm are articulated in terms of the Hubble parameter, with a dependency on both the interaction redshift and the neutrino temperature index.
Neutrino interactions can be broadly categorized into those occurring in the early universe and those in the late universe. In the early universe, neutrinos are coupled through four-Fermi weak interactions down to a certain redshift, below which they decouple from the primordial plasma. Even after the weak-interaction-dominated phase, however, they can remain coupled through self-interactions motivated by BSM physics. Conversely, in the late universe, interactions may arise from various mechanisms, including neutrino decay, self-interactions mediated by light particles, and neutrinophilic interactions.
In this model-independent framework, these interactions can be parameterized as <cit.>:
Γ_ν(z,z_ int) = H(z_ int) [ (1+z)/(1+z_ int) ]^n_ int,
Here, z_ int represents the redshift at which the Hubble parameter equals the interaction rate, i.e., Γ_ν(z_ int)= H(z_ int). The interaction types are characterized by the power-law index n_ int, with n_ int = [3, 4, 5] in the early universe and n_ int = [-5, -3, -1, 1] in the late universe. Apart from that, there are interactions which are transient in nature characterized by a free-streaming window 2000<z_ int<10^5, see <cit.>. A majority of such interactions are phenomenological in nature. For example, n_ int=-5 corresponds to the neutrino decay scenario <cit.> and n_ int=1 corresponds to the case where neutrinos and anti-neutrinos annihilate to massless bosons <cit.>.
On the other hand, n_ int=5 case represents a well-known particle physics scenario that can be easily mapped to neutrino self-interactions mediated by a heavy scalar mediator. There are a plethora of studies in the literature on neutrino self-interactions, both in the context of CMB <cit.> and LSS <cit.>. In particular, in our parameterization, n_ int=5 corresponds to the moderately interacting (MI) mode in the self-interaction models <cit.>. It can be identified whether the interactions are active at early epoch or late universe through the Hubble parameter. In the radiation domination epoch H(z) ∝ T^2 and in the matter domination epoch H(z) ∝ T^3/2, where T is the background temperature. This suggests that interactions with n_ int =4 and 5 are dominant in the early universe. On the other hand, for n_ int=3, the term Γ_ν/H(z) is almost constant throughout the evolution history. As of now, the literature is insufficient to demonstrate if there is any obvious mapping to any specific particle physics model for n_ int = 3 and 4 cases. However, they are interesting cases to explore in a model-independent framework, given their prospects in the early universe scenario.
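For orientation, the sketch below evaluates Γ_ν/H(z) for the three early-universe indices. The background H(z) here keeps only matter and radiation, and the density parameters are illustrative numbers rather than the values sampled later in the analysis.

import numpy as np

Omega_m, Omega_r, H0 = 0.31, 9.1e-5, 67.7            # illustrative background values; H0 in km/s/Mpc

def H(z):
    return H0 * np.sqrt(Omega_m * (1.0 + z)**3 + Omega_r * (1.0 + z)**4)

def Gamma_nu(z, z_int, n_int):
    return H(z_int) * ((1.0 + z) / (1.0 + z_int))**n_int

z = np.logspace(2, 7, 500)
for n_int, z_int in [(3, 5e4), (4, 1e5), (5, 1e5)]:
    ratio = Gamma_nu(z, z_int, n_int) / H(z)
    z_cross = z[np.argmin(np.abs(ratio - 1.0))]
    print(n_int, f"{z_cross:.2e}")   # Gamma_nu = H at z ~ z_int; free-streaming sets in below this redshift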
In this article, we primarily focus on the early universe neutrino interactions, i.e. n_ int∈ [3, 4, 5]. As pointed out in <cit.>, interactions at low temperature, as well as those that are transient in nature, do not affect the matter power spectrum in the mildly non-linear regime. As mentioned earlier, even in the absence of a particle physics mapping for n_ int = 3, 4, these kinds of interactions do have significant effects on the matter power spectrum in the mildly non-linear regime and hence on the galaxy power spectrum. It is evident from Fig. <ref> that a neutrino interaction with Γ_ν∝ T_ν^3 enhances the matter power spectrum by ∼ 10% at scales k ≈ 0.2 h/ Mpc, depending on the interaction redshift. Interactions with n_ int = 4 and 5 modify the matter power spectrum by up to 14-15 % at scales k ≈ 1 h/ Mpc. Since interacting neutrinos in the early universe significantly affect the matter power spectrum in the mildly non-linear regime, this in turn leaves imprints on the multipoles of the galaxy power spectrum, discussed in detail in Sec. <ref>.
§ EFFECT OF NEUTRINO INTERACTIONS ON FULL SHAPE GALAXY POWER SPECTRA
The effects of neutrinos are most effectively analyzed through the study of the evolution of gravitational potential. Within the standard framework, neutrino anisotropic stress is the primary factor contributing to the gravitational potential, as described by the perturbed Einstein equation,
k^2 (ϕ - ψ) = 16 π G a^2 ρ_ totR_νσ_ν
where ρ_ tot represents the total radiation energy density, and R_ν is the fractional energy density of free-streaming neutrinos. In the standard ΛCDM model, free-streaming neutrinos make up approximately 41 % of the total radiation energy density <cit.>. The difference in the evolution of the potentials in Eq. (<ref>) affects the growth of matter fluctuations in subsequent evolution history of our universe.
Neutrino interactions modify the gravitational potentials by suppressing the anisotropic stress term σ_ν, leading to ϕ-ψ≈ 0, thereby affecting the growth of dark matter fluctuations in different scales. Large scale modes enter the horizon well after neutrino free-streaming and hence remain unaffected by these interactions, evolving as in standard ΛCDM cosmology.
On the contrary, modes with k ∼ 10 h/ Mpc, that enter the horizon while neutrinos are still tightly coupled to the primordial plasma through the corresponding interactions, experience an initial amplitude enhancement due to the amplification of the gravitational potential. However, the absence of anisotropic stress also amplifies the magnitude of oscillations in the gravitational potential for these small scales in comparison to ΛCDM scenario. As a result, ψ decays slowly for these modes than ΛCDM paradigm. This results in a damping of dark matter fluctuations and a suppression of the matter power spectrum at these scales.
Modes of particular interest are those with k∼ 0.1 h/ Mpc, which enter the horizon as neutrinos begin to free-stream. While these modes also experience an initial enhancement, the gravitational potential rapidly decays to the ΛCDM baseline, leading to a subsequent enhancement in the matter power spectrum. The interplay of these effects produces a bump-like feature in the matter power spectrum in the mildly non-linear regime.
As illustrated in Fig. <ref>, the onset of free-streaming, determined by the redshift of decoupling, significantly influences the matter power spectrum by introducing distinctive features that depend on the nature of the interactions involved. In all the plots in Fig. <ref>, the standard cosmological parameters are fixed to the best-fit ΛCDM values and ∑ m_ν is fixed to 0.12 eV.
Also, all the power spectra are plotted at redshift 0.61, as probed by BOSS DR12. Specifically, for interactions that scale as T_ν^3, the matter power spectrum exhibits an enhancement of roughly 10% over the wavenumber range k ∼ [0.1, 10] h/ Mpc for interaction redshifts z_ int = 3500, 7000 and 5 × 10^4.
This bump-like feature shifts toward smaller scales as the redshift of decoupling increases, reflecting the earlier transition to free-streaming. This shift is similarly observed for interactions characterized by n_ int = 4 and n_ int = 5, where the enhancement in the matter power spectrum becomes even more pronounced. In particular, for n_ int = 4, the enhancement reaches nearly 14% for interaction redshifts within 10^4-10^5 ranges, and the associated feature is also displaced towards smaller scales as shown in Fig. <ref> demonstrating a clear correlation between the interaction strength and the scale of the enhancement. The interaction most relevant from the perspective of particle physics, where the interaction rate scales as Γ_ν∝ T_ν^5, leads to a substantial enhancement in the matter power spectrum, approaching 15% for interaction redshifts within 10^4-10^5 ranges in the scales that are critical for galaxy surveys as shown in Fig. <ref>. This demonstrates that the redshift at which neutrinos begin to free-stream has non-trivial effects on the matter power spectrum in both linear and mildly non-linear regimes.
In the standard ΛCDM framework, massive neutrinos begin to free-stream after they become non-relativistic, deep within the matter-dominated epoch. The free-streaming scale in ΛCDM cosmology is given by k_ FS(z_ NR) ≈ 0.018 √(Ω_ m ( m_ν/1 eV)) h/ Mpc <cit.>, where z_ NR is the non-relativistic transition redshift. The suppression of the matter power spectrum by massive neutrinos at scales much greater than the free-streaming scale (k ≫ k_ FS) and their CDM-like behavior on larger scales (k ≪ k_ FS) remain unaffected by early universe interactions, which persist until matter-radiation equality without altering the matter-dominated epoch.
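For reference, the free-streaming wavenumber implied by this relation for masses near the bound quoted in the introduction is small compared with the scales probed in this work (Ω_m ≈ 0.31 is an assumed illustrative value):

import numpy as np

def k_fs(m_nu_eV, Omega_m=0.31):
    """Free-streaming scale k_FS(z_NR) ~ 0.018 sqrt(Omega_m * m_nu / 1 eV), in h/Mpc."""
    return 0.018 * np.sqrt(Omega_m * m_nu_eV)

print(k_fs(0.12 / 3.0))   # ~0.002 h/Mpc per neutrino for three degenerate states with sum m_nu = 0.12 eV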
As already mentioned, we have incorporated neutrino interactions into the <cit.> code. The linear perturbation theory remains valid up to modes with k ≲ 0.1 h/ Mpc, beyond which code applies one-loop corrections to the matter power spectra up to mildly non-linear scales. These one-loop corrections are based on the EFT of LSS in Eulerian space, using EDS (Einstein De-Sitter) convolution kernels approximation <cit.>. Additionally, the galaxy power spectra multipoles are computed with redshift space distortion (RSD) corrections. We can use the EDS approximation for the dark matter sector even in the presence of neutrino interactions, since massive neutrinos are known to free-stream in the matter dominated epoch <cit.>. The one-loop corrected redshift space galaxy power spectra using EFT of LSS become unreliable for modes k ≳ 0.25 h/ Mpc <cit.>, thus our analysis is limited to k_ max≈ 0.2 h/ Mpc. In contrast, the real space power spectrum is reliable up to scales k_ max≈ 0.4 h/ Mpc <cit.>.
In Fig. <ref> we have shown the effects of free-streaming redshift (z_ int) on galaxy monopoles only (since the effects on galaxy quadrupole moments are not so prominent). The data points and error bars in all the plots are derived from BOSS DR12 galaxy full shape power spectra likelihood [https://github.com/oliverphilcox/full_shape_likelihoodshttps://github.com/oliverphilcox/full_shape_likelihoods], as detailed in <cit.>. In the left panels of Fig. <ref>, the data points and error bars of monopoles are extracted from the North Galactic Cap (NGC) data chunk over the redshift bin 0.5<z<0.75 with effective redshift z_ eff=0.61, while the right panel shows similar data for the South Galactic Cap (SGC).
In all the plots of Fig. <ref>, the black dotted line represents ΛCDM case. We observe that the interaction parameters z_ int and n_ int significantly influence the galaxy monopoles compared to the ΛCDM case. For all the figures, the best-fit values considered for the EFT nuisance parameters are taken assuming background ΛCDM cosmology, just to demonstrate the effect of z_ int on the galaxy power spectra.
§ CURRENT DATA AND METHODOLOGY
As previously mentioned, we utilize a modified version of [https://github.com/Michalychforever/CLASS-PThttps://github.com/Michalychforever/CLASS-PT] <cit.> to perform the Bayesian analysis of the model in constraining the model parameters z_ int and ∑ m_ν, as well as the standard cosmological parameters, using the latest version of the Markov Chain Monte Carlo (MCMC) sampler, [https://github.com/brinckmann/montepython_publichttps://github.com/brinckmann/montepython_public]
<cit.>. For our analysis, we consider the combinations of the following currently available datasets:
* CMB: low-ℓ and high-ℓ CMB temperature power spectrum and low-ℓ and high-ℓ CMB E mode polarization and their temperature cross correlation from Planck <cit.>.
* BAO: On top of the Baryon Oscillation Spectroscopic Survey (BOSS) DR12 BAO, we used BAO data from Lyman-α (Lyα) absorption and quasars at an effective redshift z_ eff = 2.33 from the DR16 extended BOSS (eBOSS) survey <cit.>, which we denote as BAO throughout our analysis.
* Galaxy Full Shape Spectra: We utilize the dataset from the twelfth data release of BOSS DR12 <cit.> and its corresponding window-free galaxy power spectrum <cit.> to investigate potential new interactions in the neutrino sector. Since the eBOSS DR16 datasets have not yet been combined into a full-shape likelihood, we consider the latest available full-shape power spectra, i.e., DR12, for our analysis. The BOSS DR12 galaxies are divided into four subsets, corresponding to two redshift slices, 0.2 < z < 0.5 from the LOWZ sample (effective redshift z_ eff = 0.38) and 0.5 < z < 0.75 from the CMASS sample (z_ eff = 0.61), and two sky cuts in the north and south Galactic caps (NGC and SGC). The galaxy power spectrum data is provided for each subset. We denote the combined dataset as Full Shape (FS) throughout the analysis.
To explore a possible delay in the onset of neutrino free-streaming, we analyze the multipoles of the galaxy power spectrum P_ℓ(k, z) (ℓ = 0, 2, 4) <cit.> along with the Q_0(k, z) estimator <cit.>, which is closely related to the real-space power spectrum and derived using a linear combination of the first few power spectrum multipoles. For the reason mentioned earlier, our primary analysis conservatively uses the multipoles within the wavenumber range k_ min = 0.01 h/ Mpc to k_ max = 0.2 h/ Mpc for redshift-space. Since real-space perturbation theory is applicable to smaller scales, we also consider measurements of the Q_0 estimator in the range k_ min = 0.2 h/ Mpc to k_ max = 0.4 h/ Mpc. In both cases, we use a bin width of Δ k = 0.005 h/ Mpc. Additionally, we utilize the reconstructed power spectrum to provide constraints on the Alcock-Paczynski (AP) parameters <cit.>.
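For reference, the sketch below spells out the Q_0 estimator and the two wavenumber ranges used above; the specific linear combination of multipoles follows the standard Q_0 definition of the cited estimator papers and is not restated in the text.

```python
import numpy as np

def q0_estimator(p0, p2, p4):
    """Q0(k) = P0 - P2/2 + 3*P4/8: the linear combination of the first three even
    multipoles that approximates the real-space power spectrum."""
    return p0 - 0.5 * p2 + 0.375 * p4

# Wavenumber grids quoted in the text (h/Mpc), bin width 0.005 h/Mpc
k_multipoles = np.arange(0.01, 0.2001, 0.005)   # P0, P2, P4 in redshift space
k_q0         = np.arange(0.20, 0.4001, 0.005)   # Q0, extending to smaller scales
```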
Our analysis employs the BOSS likelihood <cit.>, which analytically marginalizes over the nuisance parameters that enter linearly into the power spectrum, such as the counterterms (monopole c_0, quadrupole c_2, hexadecapole c_4, and fingers-of-God c̃), the third order galaxy bias b_Γ_3, and the stochastic contributions (P_ shot, a_0, and a_1). The covariance matrix used for this likelihood is computed using MultiDark-Patchy 2048 simulations <cit.>.
With these datasets, we run the MCMC code MontePython with the following free parameters: the 6 standard cosmological parameters of ΛCDM, namely the CDM density ω_ cdm, the baryon density ω_ b, the angular scale of the sound horizon at recombination θ_s, the amplitude A_s, the spectral index
of the primordial spectra n_s and finally, the optical depth to reionization τ_ reio. Additionally, our model parameters include the interaction redshift z_ int and the sum of neutrino mass ∑ m_ν[We have fixed the value of N_ eff to 3.046 in our analysis.].
In Table <ref>, prior ranges of the cosmological and model parameters, used for the present analysis are listed.
§ RESULTS AND ANALYSIS
In this section, we present the results obtained from the above methodology using combinations of various datasets. First, let us present the constraints on the (6+2) parameters using Planck TT, TE, EE + BAO datasets as well as Planck TT, TE, EE + BAO + FS datasets. In our analysis, we focus exclusively on interactions within the neutrino sector, ensuring that the equivalence principle remains valid in the dark matter sector. The FS measurements incorporate both the non-wiggle part of the power spectra P(k) and the geometrical information from the wiggle part of P(k) (i.e. BAO). Including the shape information from P(k), along with the geometrical feature in the BAO data, results in tighter constraints on the cosmological as well as model parameters.
Fig. <ref> illustrates the triangular plots with 1σ and 2σ confidence contours of major parameters for the three different cases (n_ int=3, 4 and 5) (while the posterior distributions for all the parameters for all the three cases are individually displayed in Figs. <ref>, <ref> and <ref> in Appendix <ref>). The corresponding parameter values are listed in Tables <ref>, <ref> and <ref>. In Fig. <ref>, the constraints on different interaction scenarios for the combined Planck TT, TE, EE + BAO dataset as well as for the Planck TT, TE, EE + BAO + FS dataset are shown in blue and red respectively. The Planck + BAO dataset (blue) yields constraints on all standard cosmological parameters for n_ int = 4, 5 cases, which fall within the 1σ bounds of vanilla ΛCDM cosmology, except for the n_ int=3 case, where strong degeneracies between A_s, n_s and z_ int are observed, consistent with the previous analyses <cit.>.
The inclusion of the galaxy full shape (FS) dataset slightly modifies these constraints, though they remain more or less within the 1σ uncertainty of the former dataset. However, there are certain characteristic changes upon inclusion of the FS data, that need to be pointed out.
As pointed out in previous studies <cit.>, in the strongly interacting (SI) mode of the self-interacting model, the changes in the CMB anisotropy and galaxy power spectra can be absorbed into modifications of the primordial scalar power spectrum, specifically a decrease in the amplitude A_s and the spectral index n_s <cit.>. These strong degeneracies are absent in our analysis for n_ int=4 and 5 because of the way the interactions are modeled: we did not include the explicit momentum dependence of the interaction cross section in the Boltzmann hierarchy equations. Essentially, our model corresponds to the moderately interacting (MI) mode of the self-interacting neutrino models. The error from this assumption is negligible for the purposes of our analysis, as noted in <cit.>. For the n_ int=3 model, however, Γ_ν/H(z) is almost constant throughout the evolution; as a result, neutrinos decouple comparatively late, modifying A_s and n_s, which produces the degeneracy seen in Fig. <ref>.
Furthermore, Fig. <ref> shows a comparison of the full posterior probability distribution with 1σ and 2σ uncertainty of all the parameters under consideration, for three different cases of interaction under consideration, using Planck + BAO +FS dataset. The contours in blue, green and red respectively represent the models with n_ int =3, 4 and 5. As mentioned earlier, it is important to note that the n_ int=5 model effectively maps to the moderately interacting (MI) mode studied in <cit.>[Note that for n_ int=5, a recent study <cit.> reports a lack of concordance between CMB and galaxy FS data for both moderately interacting (MI) and strongly interacting (SI) modes.].
It is crucial to investigate the impact on cosmological parameters other than the interaction redshift, z_ int, when incorporating full-shape data. As demonstrated in <cit.>, the geometrical measurements of BAO provide nearly equivalent information to the broadband shape data in the context of the BOSS DR12 dataset. Consequently, we obtain comparable constraints on the cosmological parameters when including the shape information from the FS data. An important point to notice in all of these cases is that, since the FS data are insensitive to the optical depth and the sum of neutrino masses, including this dataset on top of BAO slightly loosens the constraint on τ (within 1σ) and on the sum of neutrino masses (within 2σ). For the n_ int=3 model, the degeneracies between the parameters have already been identified in <cit.>; our analysis with FS data shows identical degeneracies with slight improvements. The key constraint of interest, on the interaction redshift, is given separately in Eq. <ref>.
The constraints on z_ int from Planck+BAO+FS dataset at 95% C.L., are as follows:
n_ int = 3: z_ int >7.93 × 10^3 ,
n_ int = 4: z_ int >1.28 × 10^5 ,
n_ int = 5: z_ int >1.7 × 10^5 .
Further, it is important to note that the inclusion of the FS dataset leads to a relaxation of the bounds on the sum of neutrino masses, consistent with the findings in <cit.>. With the Planck + BAO + FS dataset, we obtain ∑ m_ν< 0.19 eV and ∑ m_ν< 0.16 eV for the n_ int=3 and 4 cases, respectively. For the n_ int=5 case we obtain ∑ m_ν< 0.16 eV at 95% C.L. As noted in <cit.>, including FS data with Planck and BAO slightly loosens the bounds on ∑ m_ν compared to the Planck + BAO analysis of the ΛCDM + ∑ m_ν model. This adjustment slightly alters the inferred onset of neutrino free-streaming across all cases.
For the n_ int=5 model, dimensional analysis gives a neutrino interaction rate Γ_ν≃ G_ eff^2 T_ν^5, with free-streaming setting in when Γ_ν = H(z_ int). This relationship allows us to infer the neutrino interaction strength from the Hubble parameter at the interaction redshift. The analysis combining Planck and BAO data yields a 95% C.L. constraint of z_ int > 8.7 × 10^4, which corresponds to an upper limit on the interaction strength parameter of G_ eff < 3.9 × 10^-4 MeV^-2. Including the BOSS DR12 full-shape galaxy power spectra further tightens this constraint to z_ int > 1.7 × 10^5 and G_ eff < 1.59 × 10^-4 MeV^-2. Given the lack of an obvious mapping to a specific particle physics model in the literature for the n_ int = 3 and 4 cases, our analysis for these two cases focuses on constraining the interaction redshift along with the standard cosmological parameters. As presented in Tables <ref> and <ref>, we find z_ int > 7.93 × 10^3 for n_ int = 3 and z_ int > 1.28 × 10^5 for n_ int = 4 at 95% C.L.
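As a rough numerical cross-check of the quoted translation between z_int and G_eff, the relation Γ_ν(z_int) = H(z_int) can be inverted in natural units; the background quantities below (H_0, Ω_r, T_ν,0) are assumed Planck-like values, and radiation domination is assumed at z_int. This sketch roughly reproduces the quoted bounds.

```python
import numpy as np

# Assumed background values in natural units (hbar = c = k_B = 1)
H0_MeV    = 1.44e-39   # H0 ~ 67.7 km/s/Mpc expressed in MeV
Omega_r   = 9.1e-5     # photons + relativistic neutrinos
T_nu0_MeV = 1.68e-10   # T_nu,0 ~ 1.95 K in MeV

def G_eff_from_zint(z_int):
    """Invert Gamma_nu = G_eff^2 * T_nu^5 = H(z_int), assuming radiation domination."""
    H    = H0_MeV * np.sqrt(Omega_r) * (1.0 + z_int) ** 2   # H(z) in MeV
    T_nu = T_nu0_MeV * (1.0 + z_int)                        # T_nu(z) in MeV
    return np.sqrt(H / T_nu ** 5)                           # G_eff in MeV^-2

for z in (8.7e4, 1.7e5):
    print(f"z_int = {z:.1e}  ->  G_eff ~ {G_eff_from_zint(z):.2e} MeV^-2")
```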
§ FORECASTS ON FUTURE CMB+LSS MISSIONS
Future CMB experiments such as CMB-S4 <cit.>, PICO <cit.> and LiteBIRD <cit.>, along with future LSS experiment like Euclid <cit.> for galaxy redshift surveys, are expected to provide crucial insights into constraining cosmological parameters, including the sum of neutrino mass, and advancing our understanding of both standard and beyond Standard Model neutrino interactions. The combination of both future CMB and LSS missions will help in probing the mildly non-linear regime and the dynamics on very small scales with unprecedented sensitivity. Keeping this in mind, we proceed to perform a forecast analysis of the early universe neutrino interaction scenarios as discussed in the present article, in the context of the upcoming data from CMB-S4 and Euclid.
The Euclid satellite promises to deliver the most precise galaxy survey in redshift space to date, enabling the measurement of cosmological observables and non-standard model parameters with better than 1% accuracy <cit.>. This will significantly enhance our understanding of the dark matter distribution, dark energy, and their interplay with other cosmic species. By performing a spectroscopic survey, Euclid aims to gather data from approximately 10^7 galaxies across the redshift range 0.7-2.0. Following the modeling approach in <cit.>, the error in the spectroscopic measurements of this survey can be described by σ_ z = 0.001(1+z), while angular resolution errors have been neglected. Detailed mission specifications can be found in <cit.>. Euclid will detect galaxies over a sky fraction f_ sky=0.3636, within redshift bins of width Δ z centered around z, as described by
N(z) = 41253 deg^2 f_ sky ∫_z-Δ z/2^z+Δ z/2 (dN(z)/dz)|_1 deg^2 dz.
Additionally, two nuisance parameters, β_ 0^ Euclid and β_ 1^ Euclid, have been introduced in modeling the galaxy bias factor detected by Euclid <cit.>, expressed as,
b(z) = β_ 0^ Euclid (1+z)^{0.5 β_1^ Euclid}.
We adopt Gaussian priors with σ=2.5% on these β parameters.
In our forecast analysis, we also include CMB-S4 as the future CMB mission.
CMB-S4 is the first ground-based stage-IV CMB project, with the primary goal of searching for inflationary B modes. In addition, it will be able to measure the sum of neutrino masses, with target thresholds for 2σ and 3σ detection of 0.03 eV and 0.02 eV, respectively <cit.>. In the CMB maps, the multipole moments receive contributions primarily from the CMB signal s_ℓ m and the experimental noise n_ℓ m, which can be written as
a_ℓ m^P = s_ℓ m^P + n_ℓ m^P
Here P stands for the temperature and the E and B polarization modes, respectively. The noise spectrum for CMB-S4 can be modeled as follows <cit.>:
N_ℓ^PP' ≡ ⟨ n_ℓ m^P* n_ℓ m^P'⟩ = δ_P P' θ_FWHM^2 σ_P^2 exp[ℓ(ℓ+1) θ_FWHM^2/(8 ln 2)],
where θ_ FWHM and σ_P represent the full width at half maximum of the Gaussian beam and root mean square of the instrumental noise.
CMB-S4 is designed to observe at a target frequency of 150 GHz, with a beam width of 3.0 arcmin and temperature and polarization sensitivities of 1.0 and 1.41 μK arcmin, respectively <cit.>.
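A minimal sketch of this noise model with the quoted CMB-S4 specifications is given below; it assumes that the quoted μK-arcmin sensitivities play the role of θ_FWHM σ_P in the equation above, i.e., the map-level white-noise amplitude.

```python
import numpy as np

ARCMIN_TO_RAD = np.pi / (180.0 * 60.0)

def knox_noise(ell, delta_uK_arcmin, theta_fwhm_arcmin=3.0):
    """Beam-deconvolved white-noise spectrum N_ell in (muK)^2.

    delta_uK_arcmin is the map sensitivity (theta_FWHM * sigma_P): 1.0 muK-arcmin
    for temperature and 1.41 muK-arcmin for polarization in the quoted setup."""
    delta = delta_uK_arcmin * ARCMIN_TO_RAD          # muK * rad
    theta = theta_fwhm_arcmin * ARCMIN_TO_RAD        # beam FWHM in rad
    return delta**2 * np.exp(ell * (ell + 1.0) * theta**2 / (8.0 * np.log(2.0)))

ells = np.arange(51, 3001)                 # CMB-S4 multipole range used in the forecast
N_TT = knox_noise(ells, 1.0)
N_EE = knox_noise(ells, 1.41)
```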
Furthermore, we include non-linear corrections to the matter power spectrum in our forecast analysis. There have been a handful of studies searching for possible constraints on the sum of neutrino masses based on the sensitivity of Euclid that incorporate these non-linear corrections <cit.>. Our analysis may be considered complementary to those, with possible neutrino interactions also taken into account in a model-independent way. Unlike the approach in <cit.>, which employs the EFT of LSS for the Euclid specifications, we apply the standard Halofit <cit.> corrections to the matter power spectrum and the Euclid sensitivity model described above. This choice is made primarily to minimize the number of nuisance parameters in the analysis. Within the likelihood we use the non-linear Halofit model and restrict k_ max to a conservative limit of 0.2 h/ Mpc to minimize the error (the error is found to increase as k_ max is raised further <cit.>).
With this, we generate the CMB temperature and polarization spectra data using a mock Planck Gaussian likelihood with f_ sky = 0.57 for 2<ℓ<50 and CMB-S4 with f_ sky = 0.4 for 51<ℓ<3000. For the power spectrum data generation we adopt the conservative approach of <cit.>, with a redshift-dependent non-linear cut-off modeled as k_ NL(z)=k_ NL(0)(1+z)^{2/(2+n_s)}, where k_ NL(0) is the non-linear cut-off scale today and n_s is the scalar spectral index. In our analysis, we take k_ NL(0) = 0.2 h/ Mpc for Euclid.
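The cut-off scaling quoted above can be evaluated directly; the snippet below is a minimal illustration using the fiducial n_s over the Euclid redshift range.

```python
def k_nl(z, k_nl0=0.2, n_s=0.9659):
    """Non-linear cutoff k_NL(z) = k_NL(0) * (1+z)^(2/(2+n_s)), in h/Mpc."""
    return k_nl0 * (1.0 + z) ** (2.0 / (2.0 + n_s))

print({z: round(k_nl(z), 3) for z in (0.7, 1.0, 1.5, 2.0)})  # Euclid redshift bins
```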
Based on the above-mentioned instrumental specifications and possible sources of error, we carry out a Fisher forecast followed by a Bayesian MCMC analysis, using MontePython and the modified CLASS-PT code with mock likelihoods built for the combined sensitivity of CMB-S4 and Euclid, and investigate the constraints on the (6+2) parameters that may be obtained in the future.
The relevant parameters are the standard 6 parameters {ω_b, ω_cdm, 100θ_s, ln(10^10A_s), n_s, τ_reio} along with the interaction redshift z_int and the sum of neutrino masses ∑ m_ν [The fiducial parameter values are taken to be: ω_b=0.022377, ω_cdm=0.1201, 100θ_s=1.0411, ln(10^10A_s)=3.0447, n_s=0.9659, τ_reio=0.0543 and ∑ m_ν=0.06 eV].
Let us now briefly discuss the major results of our forecast analysis as presented
in Fig. <ref> (while the constraints on whole set of parameters are presented in Fig. <ref> in Appendix <ref>) as well as
in Tables <ref>, <ref> and <ref>. It has been shown earlier <cit.> that
CMB-S4 in combination with the Planck baseline has the potential to constrain the onset of neutrino free-streaming to z_ int>2.4 × 10^5 for n_ int =3 and z_ int>2.8 × 10^5 for n_ int =5 at 95% C.L. CMB-S4 will also be able to break the degeneracy of z_ int with A_s, n_s and H_0 for the n_ int=3 case. With the inclusion of Euclid, our analysis goes beyond the previous literature. Since increasing the value of z_ int essentially affects the small scales in the matter power spectrum, we obtain tighter constraints for all three cases with the combination of CMB-S4 and Euclid. For the n_ int=5 case, our analysis implies earlier decoupling, with z_ int>1.78 × 10^6, which constrains G_ eff<4.3 × 10^-6 MeV^-2. Our analysis extends the previous studies by inferring that combining Euclid with the Planck baseline and CMB-S4 will be able to constrain the neutrino interactions up to redshift
n_ int = 3 ⟹ z_ int >6.31 × 10^5 ,
n_ int = 4 ⟹ z_ int >1.78 × 10^6 ,
n_ int = 5 ⟹ z_ int >1.78 × 10^6 .
Additionally, the 95% C.L. bounds for Planck baseline + CMB-S4 + Euclid improve over the previous forecasts with CMB-S4 alone. Our investigation suggests that a joint analysis of CMB-S4 with Euclid would be able to constrain the onset of neutrino free-streaming up to redshift z ∼ 10^6, implying earlier decoupling even in the Γ_ν∝ T_ν^3 model. The constraint is also significantly improved compared to the results of the previous section with current data, since the full-shape data are not sensitive to the sum of neutrino masses, as pointed out in <cit.>.
Further, our analysis with Euclid predicts a high sensitivity to the sum of neutrino masses, with σ(∑ m_ν) = 0.021 eV, 0.022 eV and 0.021 eV for the n_ int = 3, 4 and 5 cases, respectively. Compared to the analysis with FS data in the previous section, the forecast analysis with Euclid and CMB-S4 for all the interacting models will be able to put constraints of ∑ m_ν<0.10 eV (at 95% C.L.), a more than 1σ improvement over the full-shape analysis with the DR12 dataset (∑ m_ν< 0.16 eV at 95% C.L.).
From Tables <ref>, <ref> and <ref>, we see that the 1σ uncertainties of all cosmological parameters in our neutrino interaction models, with the inclusion of Euclid galaxy clustering, remain close to the values predicted by the ΛCDM + ∑ m_ν forecast analysis in <cit.>. It is important to highlight that our analysis examines different combinations of future experiments compared to <cit.>, and therefore a direct one-to-one comparison is not appropriate. Thus, a joint analysis of Planck, CMB-S4 and Euclid will be able to probe neutrino interactions up to redshift z ∼ 10^6 and can also put a tighter bound on the sum of neutrino masses.
§ SUMMARY
Cosmology has entered a precision era, enabling us to probe particle physics interactions throughout the universe with unprecedented accuracy. In conjunction with CMB observations, large-scale structure experiments provide insights into various particle physics models, including neutrino interactions in the early universe. Over the past decade, several studies have suggested that the onset of neutrino free-streaming might have been delayed by yet-to-be-discovered neutrino self-interactions, which affect the evolution of the gravitational potentials and leave detectable imprints on the CMB anisotropies and the matter power spectrum in both the linear and mildly non-linear regimes. In the present analysis, we investigated whether LSS data are sensitive to these changes. To do this, we made use of a fairly generic parameterization of the neutrino interaction rate, focusing on interactions in the early universe, and searched for constraints on the (6+2) model parameters using the combined dataset of Planck TT, TE, EE + BAO, along with the full-shape (FS) galaxy power spectrum data.
Analyses using Planck and BAO dataset have placed constraints on the interaction redshifts for neutrino interactions in the early universe, finding z_ int>6 × 10^3, 7.8 × 10^4, and 8.4 × 10^4 for models with n_ int = 3, 4, and 5, respectively, at 95% C.L., consistent with the earlier studies <cit.>.
Further, since these interactions impact the matter power spectrum in the mildly non-linear regime, which is probed by the galaxy full-shape power spectra, we included the BOSS DR12 full-shape data in our analysis. The FS data were found to tighten the constraints on the interaction redshifts to z_ int>7.93 × 10^3, 1.28 × 10^5, and 1.7 × 10^5 for the n_ int = 3, 4, and 5 models, respectively, at 95% C.L. While the inclusion of FS data slightly reduces the degeneracies of z_ int with the cosmological parameters for the n_ int =3 case, it relaxes the bounds on the sum of neutrino masses. We obtained ∑ m_ν<0.19 eV for the n_ int=3 model including the FS data at 95% C.L., whereas Planck + BAO data provide a tighter constraint of ∑ m_ν<0.17 eV. Similar trends in the constraints on the sum of neutrino masses persist for the n_ int=4 and 5 models.
Furthermore, recent findings suggest that the moderately interacting (MI) mode of neutrino self-interactions mediated by heavy scalars in the early universe (which corresponds to n_ int = 5 in our case) shows a lack of concordance when considering both Planck and LSS data. Our study with galaxy power spectra reveals that an even earlier onset of free-streaming is permitted for the moderately interacting (MI) mode in self-interacting neutrino model.
Having investigated the effects of the present LSS data, we then moved on to examine the sensitivity of future LSS data to the onset of neutrino free-streaming. Using CMB alone, previous studies found that the upcoming CMB-S4 experiment has the potential to constrain the lower bound of the free-streaming redshift z_ int to 3 × 10^5. Our findings suggest that the Euclid galaxy clustering survey (covering the redshift range 0.7 < z < 2.0), when combined with Planck and CMB-S4 data, would be able to constrain the interaction strength up to z_ int∼ 10^6. It will further lower the uncertainty on ∑ m_ν, leading to σ(∑ m_ν) ≈ 0.02 eV with ∑ m_ν< 0.10 eV at 95% C.L. for almost all the cases. Additionally, the joint forecast study with Planck+CMB-S4+Euclid would help break the parameter degeneracy for the n_ int = 3 model, which persists even in the present Planck+BAO+FS dataset analysis.
In a nutshell, present and future galaxy surveys, combined with CMB missions, play a significant role in shedding light on possible neutrino interaction in the early universe, the sum of the neutrino mass as well as major cosmological parameters. This in turn helps us take a step forward to improve our understanding of the universe and its interplay with this essential particle physics entity as well as the theories encompassing them.
The results presented in this article point to several areas that warrant further investigation. Firstly, since our analysis was conducted using the BOSS DR12 FS datasets, it would be intriguing to explore its extension to the BOSS DR16 full-shape galaxy power spectra, as and when that is made publicly available. Additionally, the study of neutrino interactions requires a robust consideration of all cosmological parameters, particularly the sum of neutrino mass, which remains poorly constrained in the context of the EFT of LSS analysis. While current DESI data releases and the future Euclid survey are expected to provide more precise measurements of the sum of neutrino mass, they have not yet been analyzed within the framework of the EFT of LSS. This lies beyond the scope of our current analysis. A comprehensive study of these surveys, in conjunction with CMB anisotropy datasets, will be crucial for unraveling the complexities of neutrino interactions.
§ ACKNOWLEDGEMENTS
We would like to thank Petter Taule and David Camarena for useful discussions. SP1 thanks Debarun Paul, Arko Bhaumik, Rahul Shah, Pathikrith Banerjee, Antara Dey and Purba Mukherjee for discussions at various stages of the project. SP1 also thanks Bithika Halder for constant support and inspiration throughout the project, and CSIR for financial support through Senior Research Fellowship (File no. 09/093(0195)/2020-EMR-I). RS acknowledges support from DST Inspire Faculty fellowship Grant no. IFA19-PH231 at ISI Kolkata and the NFSG Research grant from BITS Pilani Hyderabad. SP2 thanks the Department of Science and Technology, Govt. of India for partial support through Grant No. NMICPS/006/MD/2020-21.
We gratefully acknowledge the use of the publicly available codes https://github.com/lesgourg/class_public, https://github.com/Michalychforever/CLASS-PT and https://github.com/brinckmann/montepython_public for parameter estimation and https://github.com/cmbant/getdist for plotting.
We also acknowledge the use of computational facilities of Technology Innovation Hub at ISI Kolkata, along with High Performance Computing facility Pegasus at IUCAA, Pune, India.
§ POSTERIOR DISTRIBUTION FOR ALL PARAMETERS: FULL SHAPE GALAXY SPECTRA
Here we show the full posterior probability distribution of all the cosmological and model parameters for each cases (i.e. n_ int=3, 4 and 5) with Planck+BAO and Planck+BAO+FS datasets, discussed in detail in Sec. <ref>.
§ POSTERIOR DISTRIBUTION FOR ALL PARAMETERS: FUTURE CMB+LSS MISSIONS
We present here the full posterior probability distribution of all the cosmological parameters for all the models with n_ int=3, 4 and 5 with the forecast analysis as detailed in Sec <ref> for Planck+CMB-S4+Euclid.
|
http://arxiv.org/abs/2409.02385v1 | 20240904022510 | Unified Framework with Consistency across Modalities for Human Activity Recognition | [
"Tuyen Tran",
"Thao Minh Le",
"Hung Tran",
"Truyen Tran"
] | cs.CV | [
"cs.CV"
] |
Unified Compositional Query Machine with Multimodal Consistency for
Video-based Human Activity Recognition
Qian Niu12,
Junyu Liu2,
Ziqian Bi3,
Pohsun Feng4,
Benji Peng5,
Keyu Chen5
2Kyoto University
3Indiana University
4National Taiwan Normal University
5Georgia Institute of Technology
Corresponding Email: [email protected]
September 9, 2024
===========================================================================================================================================================================================================================================================================================================================================================================================
§ ABSTRACT
Recognizing human activities in videos is challenging due to the spatio-temporal
complexity and context-dependence of human interactions. Prior studies
often rely on single input modalities, such as RGB or skeletal data,
limiting their ability to exploit the complementary advantages across
modalities. Recent studies focus on combining these two modalities
using simple feature fusion techniques. However, due to the inherent
disparities in representation between these input modalities, designing
a unified neural network architecture to effectively leverage their
complementary information remains a significant challenge. To address
this, we propose a comprehensive multimodal framework for robust video-based
human activity recognition. Our key contribution is the introduction
of a novel compositional query machine, called COMPUTER
(COMPositional hUman-cenTric quERy
machine), a generic neural architecture that models the interactions
between a human of interest and its surroundings in both space and
time. Thanks to its versatile design, COMPUTER can be leveraged
to distill distinctive representations for various input modalities.
Additionally, we introduce a consistency loss that enforces agreement
in prediction between modalities, exploiting the complementary information
from multimodal inputs for robust human movement recognition. Through
extensive experiments on action localization and group activity recognition
tasks, our approach demonstrates superior performance when compared
with state-of-the-art methods. Our code is available at: https://github.com/tranxuantuyen/COMPUTERhttps://github.com/tranxuantuyen/COMPUTER.
§ INTRODUCTION
Human activity recognition in videos is a crucial area of focus within
the field of Artificial Intelligence (AI), enabling numerous practical
applications in real-world scenarios <cit.>.
However, this is a challenging task due to the spatio-temporal complexity
and context-dependence of human interactions. These factors require
AI systems to robustly interpret and generalize across a wide range
of behaviors and environmental conditions.
Previous studies have explored representing human activities in consideration
of contextual factors <cit.>.
Tang et al.<cit.> analyze human actions
from an object-centric perspective, modeling the relationships between
humans and the surrounding objects. These works all model the dynamics
of visual scenes by focusing on how the relationships between entities
evolve over time. However, they rely solely on computationally expensive
RGB image sequences, making temporal representation from video data
challenging <cit.>. Additionally, using only RGB
data limits the capability to capture subtle body movements.
Human body key points and skeleton data offer advantages in computational
costs and temporal modeling due to their compactness and robustness
against lighting conditions and scene variations <cit.>.
However, skeleton data lacks contextual information, limiting its
capability to represent spatial relationships involved in human-object
interactions.
Given the complementary attributes of RGB and skeleton data, a natural
question arises: how to design a unified model to effectively
combine these modalities for the task of human activity recognition?
However, it is not straightforward to build a joint representation
that leverages both modalities' strengths due to their inherent disparities.
HIT <cit.> was among the earliest attempt to address
this challenge. Their approach involved designing separate components
to process each modality independently, followed by a late fusion
technique. Due to significant structural differences between modalities,
late fusion performed poorly. This is because one modality may negatively
impact the other, ultimately reducing the representational capabilities
of the joint features.
To address the limitations of current methods, we first propose a
unified feature representation framework for multiple modalities in
human activity recognition. Second, we introduce a novel self-supervised
mechanism to ensure consistency in prediction using different modalities
to avoid negative cross-modality impacts within their joint representation.
Overview of the proposed approach is illustrated in Fig. <ref>.
To the best of our knowledge, we are the first to propose a generic
and modality-agnostic architecture, along with a novel mechanism for
multi-modal consistency for human activity recognition. To evaluate
the effectiveness of the proposed approach, we conduct intensive experiments
on two human activity recognition tasks: Spatio-Temporal Action Localization
and Group Activity Recognition.
In summary, our contribution is three-fold: (1) Introduction of a
unified compositional query machine for simultaneously handling multi-modal
inputs for the task of human-centric video understanding; (2) Introduction
of a novel mechanism to encourage consistency in prediction across
modalities in a self-supervised manner; (3) Conducting extensive experiments
and analyses across two tasks in human activity recognition in videos.
§ RELATED WORK
§.§ Multi-modal human action recognition
Prior works on human activity recognition mostly rely solely on RGB
features, discarding valuable information from other modalities. For
instance, skeleton data offers distinct advantages in recognizing
actions that require superior temporal modeling such as running or
driving a car <cit.>. Recognizing the benefits of
multi-modal input, some studies have attempted to incorporate additional
modalities beyond RGB features <cit.>.
PCSC <cit.> proposes to use optical flow to capture
motion, designing an inception-like model with an early fusion mechanism
to combine RGB with flow features. In contrast, <cit.>
extracts RGB and motion features using I3D <cit.>,
and then combines them with a late fusion mechanism. Most recently,
HIT <cit.> utilizes both skeleton data and RGB
features for spatio-temporal action localization using a simple late
fusion technique. While these approaches have shown some benefits
of using multi-modal inputs, neither early fusion or late fusion are
capable of building a joint representation that captures the complementary
advantages across modalities. Different from these works, our approach
uses a novel mechanism to leverage the consistency in prediction across
different modalities for robust human activity recognition. More importantly,
our newly introduced consistency loss allows us to train our proposed
method in an unsupervised manner without the need for additional training
data.
§.§ Contrastive self-supervised learning
Contrastive self-supervised learning has gained popularity for its
ability to avoid the need for large-scale datasets. It requires the
sampling of positive and negative pairs from raw, unlabeled data.
During the learning process, it encourages convergence of the positive
pair representations in latent space while enforcing divergence of
the negative pairs. A prominent example is CLIP <cit.>,
where it constructs positive pairs consisting of an image and a sentence
describing the same object, and negative pairs consist of an image
and a sentence that refer to different objects. While structurally
different, visual and text-based latent representations should contain
mutual information linked to the same concept. This strategy is also
applied to image-image pairs for data augmentation, e.g., SimCLR <cit.>.
The intuition is to bring different augmented views of the same image
closer in latent space, while pushing different augmented views apart.
Proven highly effective for data representation, this technique has
pioneered subsequent works <cit.>
for robust feature representation learning. In this work, we applied
this technique to human activity recognition using multi-modal input,
enforcing convergence of latent representations from different modalities
originating from the same actor, despite their structural disparities.
§ METHOD
§.§ Preliminaries
Formulation: Our goal is to design a model that leverages
multi-modal inputs, e.g., RGB sequences and human body key points,
for human activity recognition in videos. We achieve this by formulating
the problem under a neural query machine. Our query machine
takes as input a human-centric query { q_i} that
probes different aspects regarding the movements of a specific human
actor and its relationships with the surrounding entities within a
video input V. The output is a prediction of an action label ỹ,
based on the collective human-centric attributes in response to the
queries. Formally, our query machine is given as:
ỹ=𝒜({ g^m(q_i,t^m,V)} _m).
For each modality m, q_i,t^m is i-th query at time step
t; g^m(.) is a neural building block that retrieves
relevant information in V in response to q_i,t^m; 𝒜(.)
is a neural network that aggregates the attributes from the input
modalities and maps them to label space.
Our work investigates human activity recognition in videos under two
specific applications: Spatio-Temporal Action Localization and Group
Activity Recognition. Since human activity is usually interpreted
through different layers of interactions, such as self movements and
cross-entity interactions, we hypothesize a compositional function
for each modality-wise query machine g^m(.). This
compositional design places humans at the center of relational modeling
of their interactions with the surroundings (Sec. <ref>).
Spatio-temporal video representation: Following recent studies
<cit.>, we first
extract a spatio-temporal representation for each video input V
using video feature extractors such as Slowfast <cit.>
and MViT models <cit.>. The video
V is usually segmented into T non-overlapping clips, resulting
in video features X∈ℝ^T× FHW× D, where F
is the number of frames in each clip, and H,W,D are the height,
width, and channel dimension of the feature maps, respectively.
Query representation: We use two input modalities as queries:
human-centric visual appearance and body key points.
Human-centric visual appearance: Visual appearance of human
actors themselves plays a crucial role in interpreting their actions.
To capture this, we follow <cit.>
to use an actor localization module to extract the appearance saliency
of human actors. First, for each video segment t in the T non-overlapping
clips from a video input, we utilize a human detector <cit.>
to localize human actors within their center frame. This yields a
set of bounding boxes for all N detected actors. We then use RoI-Align
<cit.> to extract visual appearance features for the N
actors: Q_t^vis={ q_i,t^vis| q_i,t^vis∈ℝ^1× D} _i=1^N.
Body key points: We use the common framework Detectron2 <cit.>
to detect human body key points from RGB frames. Similar to visual
appearance feature extraction, we use the middle frame of a video
clip t for pose detection, resulting in a set Q_t^key
of N person skeletons: Q_t^key={ q_i,t^key| q_i,t^key∈ℝ^1× D} _i=1^N.
§.§ Compositional Human-centric Query Machine
We propose a novel family of model architectures, dubbed COMPositional
hUman-cenTric quERy machine (COMPUTER),
for multi-modal human activity recognition in videos. COMPUTER
leverages a modular design, combining several identical modality-wise
query machines that model the relationships between human actors and
their surroundings in both space and time. This modular design simplifies
the construction of COMPUTER by stacking identical building blocks,
facilitating dynamic model sizes and model's representation capabilities
for efficient action prediction.
Our query machines focus on two main types of interactions: human-human
interactions and human-context interactions. Inspired by Dang
et al. <cit.>, each modality-wise query machine
in COMPUTER adopts a two-stage design, where the output of the
first stage serves as the input for the second stage. Figure <ref>
(on the left) provides a general architecture of COMPUTER. One
of the key advantages of COMPUTER's modular design is its inherent
scalability. The system can be easily extended to incorporate additional
input modalities and handle different types of interactions. In this
work, we demonstrate this capability in the specific context of human
activity recognition with two modalities (visual appearance
and body key points) and two types of interactions (human-human
and human-context interactions). Mathematically, COMPUTER implements
each individual query machine g^m(.) for the m-th
modality in Eq. (<ref>) using a compositional function:
g^m(q_i,t^m,V)=Φ_c(Φ_h(q_i,t^m,X^H),X^C).
Here, X^H and X^C represent human-centric and contextual
features, respectively, derived from the embedding of the video input
V. We define Φ_h(.) and Φ_c(.)
as reusable computational units called HUman-centric query
Blocks (HUBs). These units play a
crucial role in modeling human-human interactions (HH-HUB)
and human-context interactions (HC-HUB). Since the operation of the
HUB is generic and does not depend on a specific modality, we omit
m for brevity. We elaborate the design of the HUB in
the following.
Central to the HUB's operation is the widely used scaled
dot-product attention layer <cit.>:
Attn(q,K,V)=∑_μ=1^M softmax_μ( K_μW_k (qW_q)^⊤ / √(d) ) V_μW_v,
where query q∈ℝ^1× D, keys K∈ℝ^M× D,
values V∈ℝ^M× D. The output of Attn(q,K,V)
is a vector in ℝ^1× D and W_q∈ℝ^D× D,
W_k∈ℝ^D× D, W_v∈ℝ^D× D
are network parameters. Fig. <ref> (on the right) demonstrates
the operation of the HUB. The HUB is comprised of stacked
attention layers that account for the similarity between the dynamics
of a human actor and its surroundings in space and time through three
information channels (past, current, future). It searches for relevant
information of the query q in memories X_past, X_current,
X_future storing past, current and future knowledge in
the form of key-value pairs. While the query is a specific actor representation
at time t, the information in the memories can consist either human-centric
or general contextual information of a video clip at different points
in time. The output of the HUB is a refined representation of
the actor-specific feature in response to the given query. Denoting
q̃_i,t as the output representation of actor i, defined
as:
q̃_i,t =HUB(q_i,t,{ X_past,X_current,X_future}),
where X_past={ K_t-w:t-1,V_t-w:t-1},
X_current={ K_t,V_t}, X_future={ K_t+1:t+w,V_t+1:t+w}
are key-value stores that encapsulate information in past, current
and future times. The window size w indicates clips that are w
steps apart from the present clip t. To enable effective retrieval
of past/future information while reducing the computational costs,
we employ a pre-computed clip selection mechanism that allows
us to skip irrelevant clips. In particular, we assess the relevance
of all clips within the window w to the present clip at time t
using their feature similarity. We then select the top-k most relevant
clips and store them as key-value memories in the past (X_past)
and future times (X_future). The HUB computes each
pair (q,X) using a multi-layer attention in Eq. (<ref>),
followed by a linear aggregation layer which returns a single vector
q̃_i,t for each human actor i.
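A minimal sketch of this pre-computed clip selection step is given below; cosine similarity between pooled per-clip features is an assumption, as the text only specifies "feature similarity".

```python
import numpy as np

def select_clips(clip_feats, t, w, k):
    """Keep the top-k past and top-k future clips, within a window of w clips
    around the present clip t, that are most similar to clip t.

    clip_feats: (T, D) array with one pooled feature vector per clip."""
    f = clip_feats / np.linalg.norm(clip_feats, axis=1, keepdims=True)
    sims = f @ f[t]                                         # cosine similarity to clip t
    past = np.arange(max(0, t - w), t)
    future = np.arange(t + 1, min(len(f), t + w + 1))
    top_past = np.sort(past[np.argsort(sims[past])[::-1][:k]])
    top_future = np.sort(future[np.argsort(sims[future])[::-1][:k]])
    return top_past, top_future
```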
Human-human interactions with HH-HUB: This stage considers
an actor in the relation with other actors involved in the same visual
scene in space and time. The HH-HUB Φ_h(.) takes
as input a query q_i,t, either visual appearance or human body
key points, representing an individual actor i at video clip t
and three key-value stores X_past^h, X_current^h,
X_future^h denoting visual appearance features
of all other human actors detected in past, present and future video
clips, respectively (See Sec. <ref>). While
the pair (q_i,t,X_current^h) at
current clip t captures the spatial relationships between actors
in the current scene, the across-time pairs (q_i,t,X_past^h)
and (q_i,t,X_future^h) provide information
about how the relationships evolve over time. The output of the HH-HUB
is a refined representation q̃_i,t∈ℝ^1× D
for the actor i at clip t:
q̃_i,t=[Attn(q_i,t,X_past^h),Attn(q_i,t,X_current^h),Attn(q_i,t,X_future^h)]W_a,
where [· , ·] indicates feature concatenation, and W_a∈ℝ^3D× D
are learnable parameters.
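A minimal NumPy sketch of the single-query attention of the earlier equation and of the three-channel fusion above is given below; sharing the projection matrices across the past/current/future channels is an assumption, and in practice the matrices are learned end-to-end rather than fixed.

```python
import numpy as np

def attn(q, K, V, Wq, Wk, Wv):
    """Single-query scaled dot-product attention.
    q: (1, D); K, V: (M, D) key/value memory; Wq, Wk, Wv: (D, D) projections."""
    d = q.shape[-1]
    scores = (K @ Wk) @ (q @ Wq).T / np.sqrt(d)        # (M, 1) similarity scores
    weights = np.exp(scores - scores.max())
    weights = weights / weights.sum()                  # softmax over the M memory slots
    return (weights * (V @ Wv)).sum(axis=0, keepdims=True)   # (1, D)

def hub(q, memories, Wq, Wk, Wv, Wa):
    """HUB block: attend to the past / current / future key-value stores with the
    same query, concatenate the three outputs and fuse them with Wa of shape (3D, D)."""
    outs = [attn(q, K, V, Wq, Wk, Wv) for (K, V) in memories]
    return np.concatenate(outs, axis=-1) @ Wa          # (1, D)
```

In this sketch, a modality-wise query machine g^m amounts to applying `hub` twice: once with the human-centric memories (HH-HUB) and once more, on its output, with the contextual memories (HC-HUB).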
Human-context interactions with HC-HUB: Unlike the HH-HUB
module, the HC-HUB Φ_c(.) focuses on modeling human-context
relationships. It takes the output q̃_i,t of the HH-HUB
as an input query and video spatio-temporal representations X_past^c,
X_current^c and X_future^c
of past, present and future video clips (See Sec. <ref>)
as key-value stores. The computation of the output q̂_i,t∈ℝ^1× D
of the HC-HUB is similar to the HH-HUB as in Eq. <ref>.
It now incorporates both human-human and human-context interactions
over space and time.
§.§ Cross-modality Consistency with Contrastive Loss
In multi-modal human activity recognition, models should leverage
complementary aspects across modalities for label prediction. However,
inherent representation disparities between input modalities make
finding a joint representation capturing the saliency across all modalities
challenging. Instead of directly fusing high-level features of these
modalities together, we introduce a consistency loss to encourage
the model to exploit mutual information across input modalities of
the same person, as they both lead to the same activity prediction.
We achieve this by maximizing the mutual information between any pairs
of input modalities. Specifically, we sample a positive pair by taking
the final representations q̂_i,t^vis,q̂_i,t^key
by COMPUTER, which belong to the same person, while treating
k augmented samples randomly paired from different individuals
within a mini-batch as negative samples. Our cross-modality consistency
loss ℒ_𝒞𝒞 is implemented similar to the contrastive
loss in <cit.>:
ℒ_𝒞𝒞=-∑_i∈ B log[ exp(sim(q̂_i,t^vis,q̂_i,t^key)) / ∑_k∈ B𝕀(k≠ i) exp(sim(q̂_i,t^vis,q̂_k,t^key)) ],
where, sim(·,·) is the cosine similarity
function between two input vectors. 𝕀(.) is an
indicator function iff k≠ i within mini batch B. Importantly,
our consistency loss allows us to train the proposed model in an unsupervised
manner without the need for additional training data. We train our
models with this consistency loss together with the usual cross-entropy
loss for label prediction. We detail the training of our two tasks
as below.
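A minimal sketch consistent with the description above is given below; the temperature and the exact construction of negatives within the mini-batch (augmented pairs from different individuals) are assumptions, as only the cosine similarity and the k≠i indicator are spelled out in the text.

```python
import numpy as np

def consistency_loss(z_vis, z_key, tau=0.1):
    """Cross-modality consistency loss over a mini-batch of B actors.

    z_vis, z_key: (B, D) final actor representations from the two modalities;
    row i of both matrices is the positive pair, the remaining pairings in the
    batch act as negatives. tau is an assumed temperature."""
    v = z_vis / np.linalg.norm(z_vis, axis=1, keepdims=True)
    u = z_key / np.linalg.norm(z_key, axis=1, keepdims=True)
    sim = (v @ u.T) / tau                          # (B, B) cosine similarities
    exp_sim = np.exp(sim)
    pos = np.diag(sim)                             # matching (same-person) pairs
    neg_sum = exp_sim.sum(axis=1) - np.diag(exp_sim)   # exclude k == i, as in the indicator
    return -np.mean(pos - np.log(neg_sum))
```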
Spatio-temporal action localization: We use a classifier
consisting of an MLP followed by a logistic function to predict the action labels
of each actor at each time step. Our network is trained end-to-end by jointly
minimizing the binary cross entropy loss and the consistency loss:
ℒ=ℒ_ℬ𝒞ℰ+ℒ_CC.
Group activity recognition: As all actors share the same
action label throughout a video input, we first apply the arithmetic
mean function across actors and along the temporal axis on the actor-specific
representations q̂^vis and q̂^key
to obtain a single output vector. We then use an MLP layer to map
the video feature to label space before applying the soft-max function
to return action label probabilities. We jointly minimize the cross
entropy loss and the consistency loss to train the network.
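A minimal sketch of this group-activity prediction head is given below; averaging the two modality streams before the MLP is an assumption, as the text does not state how the two pooled vectors are combined.

```python
import numpy as np

def group_activity_probs(q_vis, q_key, W_mlp, b_mlp):
    """q_vis, q_key: (N_actors, T, D) refined per-actor features from each modality.
    Average over actors and time, map the pooled vector to label space, apply soft-max."""
    pooled = 0.5 * (q_vis.mean(axis=(0, 1)) + q_key.mean(axis=(0, 1)))   # (D,)
    logits = pooled @ W_mlp + b_mlp                                      # (n_classes,)
    return np.exp(logits - logits.max()) / np.exp(logits - logits.max()).sum()
```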
§ EXPERIMENTS
We evaluate the effectiveness of the proposed framework on two major
applications of human behavior understanding in videos: Spatio-Temporal
Action Localization on AVA v2.2 <cit.> and Group Activity
Recognition on Collective Activity dataset <cit.>.
§.§ Spatio-temporal Action Localization
Quantitative results: We first demonstrate the efficacy of
COMPUTER when using video features from different video representation
backbones. The results are displayed in Fig. <ref>.
In general, COMPUTER consistently improves all the baselines,
with more significant gains on weaker baselines.
We also compare against the most recent SoTA methods
on AVA (See Tab. <ref>). We categorize the prior
works based on the respective datasets that their video feature extractors
are pre-trained on, following <cit.>.
As seen, COMPUTER consistently outperforms all the recent approaches
across all categories. While these works either only focus on human-human
interactions such as <cit.> or human-object/human-context
interactions as in <cit.>,
COMPUTER enjoys the benefits of these two types of interactions within a single
model. While COMPUTER clearly outperforms approaches using single
modalities such as MViTv2 <cit.>, ORViT <cit.>
and MemViT <cit.>, we wish to emphasize its superior
performance when comparing with the most recent approach HIT <cit.>
that leverages identical input modalities. This clearly demonstrates
the effectiveness of our proposed method in both architecture modeling
with HUB units and learning with the cross-modality consistency
loss.
Qualitative results: We showcase examples taken from the
AVA dataset in Fig. <ref>. Combining both modalities
significantly enhances prediction performance compared to using a
single modality. Actions that require efficient temporal modeling
such as running and driving are among the ones that benefit the most
from leveraging human body key points.
Ablation studies: We conduct a comprehensive analysis on
COMPUTER's computational costs (Tab. <ref>)
and the contributions of each input modality (Tab. <ref>).
We also provide additional analysis on the effects of ablating different
designated components from the full design in the Supp. All ablation
studies use the MViTv2-S backbone.
Computational complexity: To demonstrate the benefits
of COMPUTER, we compare it with stronger baselines of similar
representation capacity (a.k.a model size) in Tab. <ref>.
These baselines are implemented by extending the MViTv2 baselines
with additional self-attention layers and fine-tuning them on AVA v2. Results show that
simply increasing the model size offers minimal improvement (see Row 1
vs. 1.a/1.b, and Row 2 vs. 2.a/2.b). In contrast, COMPUTER significantly
enhances baseline performance with minimal additional cost. Specifically,
COMPUTER improves MViTv2-S by 3.0 points (10.9%), with only
a 6.0% increase in GFLOPs and around 3.0% additional inference time.
We observe consistent behavior on MViTv2-B. Importantly, COMPUTER
with the MViTv2-S baseline even outperforms MViTv2-B despite faster inference
time and only 1/3 of the GFLOPs, thanks to the sparsity of our human
input tokens (Row 1.c vs Row 2).
Effectiveness of each modality: We analyze the impact
of each modality on the performance in Tab. <ref>.
RGB sequences slightly outperform body key points thanks to their richer
information. COMPUTER successfully leverages the advantages of
each modality to improve the performance when using them in combination
(Row 3). Additionally, the proposed consistency loss considerably
improves the performance by nearly 1.5 points (5.1%).
Effectiveness of each component in COMPUTER:
To provide more insight into our architecture, we ablate
its components and observe the effect on the overall performance.
In general, ablating any of the designed components of COMPUTER would
result in degradation in performance (See Table <ref>).
Effectiveness of the hierarchy design: In this experiment,
the target human query attends to both the human and context elements
simultaneously, without imposing a hierarchical order. The significant
performance drop by 1.5 points (5.1%) highlights the importance of
our hierarchical design.
Effectiveness of the HC-HUB block: This experiment
removes all HC-HUB blocks from the original design of COMPUTER.
This leads to a considerable decrease in performance by nearly 2.0
points (5.9%).
Effectiveness of the HH-HUB block: Similarly, this
experiment removes all the HH-HUB blocks. With the absence of the
human-human interactions, we observe a similar level of performance
degradation.
Effectiveness of temporal modeling: This experiment
limits all HUB blocks to consider only the present information
channel while ignoring the other channels in past and future times.
Without considering the temporal dynamics of information, the performance
drops by nearly 1.5 points (4.8%).
Effectiveness of pre-computed clip selection: This
experiment justifies the benefit of our pre-computed clip selection.
Instead of selectively choosing the top-k most relevant past/future
clips with respect to the present clip, we take into account all video
clips within the window size w and simply average the
post-attention-layer outputs. This results in a 1.0-point performance
decrease, which highlights the necessity of the information selection
strategy for both performance and computational cost.
§.§ Group Activity Recognition
The Collective Activity dataset <cit.> includes 44
clips of five types of group activities including crossing, queuing,
walking, waiting and talking. For fair comparisons with prior works
<cit.>, we use InceptionNet
<cit.> pre-trained on ImageNet <cit.> for feature
extraction.
Quantitative results: The results of our proposed
model for action group recognition, shown in Tab. <ref>,
demonstrate the effectiveness of our approach. Our method clearly
outperforms existing works by successfully incorporating multiple
modalities, leading to a more comprehensive representation.
§ CONCLUSION
We introduced a unified framework named COMPUTER for multi-modal
human activity recognition. The framework features a generic architecture
effectively retrieving information about human movements and the relationships
between human actors and their surroundings from different input modalities.
We also introduced a novel consistency loss to leverage the complementary
information across modalities for robust prediction of human activity
in an unsupervised manner. Through extensive experiments on two applications,
our framework demonstrated superior performance and efficiency compared to existing
methods.
|
http://arxiv.org/abs/2409.03406v1 | 20240905104235 | Medium-enhanced polaron repulsion in a dilute Bose mixture | [
"Jesper Levinsen",
"Olivier Bleu",
"Meera M. Parish"
] | cond-mat.quant-gas | [
"cond-mat.quant-gas",
"cond-mat.mes-hall"
] |
|
http://arxiv.org/abs/2409.02995v1 | 20240904180003 | The K2-24 planetary system revisited by CHEOPS | [
"V. Nascimbeni",
"L. Borsato",
"P. Leonardi",
"S. G. Sousa",
"T. G. Wilson",
"A. Fortier",
"A. Heitzmann",
"G. Mantovan",
"R. Luque",
"T. Zingales",
"G. Piotto",
"Y. Alibert",
"R. Alonso",
"T. Bárczy",
"D. Barrado Navascues",
"S. C. Barros",
"W. Baumjohann",
"T. Beck",
"W. Benz",
"N. Billot",
"F. Biondi",
"A. Brandeker",
"C. Broeg",
"M. -D. Busch",
"A. Collier Cameron",
"A. C. M. Correia",
"Sz. Csizmadia",
"P. E. Cubillos",
"M. B. Davies",
"M. Deleuil",
"A. Deline",
"L. Delrez",
"O. D. S. Demangeon",
"B. -O. Demory",
"A. Derekas",
"B. Edwards",
"D. Ehrenreich",
"A. Erikson",
"L. Fossati",
"M. Fridlund",
"D. Gandolfi",
"K. Gazeas",
"M. Gillon",
"M. Güdel",
"M. N. Günther",
"Ch. Helling",
"K. G. Isaak",
"F. Kerschbaum",
"L. Kiss",
"J. Korth",
"K. W. F. Lam",
"J. Laskar",
"A. Lecavelier des Etangs",
"M. Lendl",
"D. Magrin",
"P. F. L. Maxted",
"B. Merín",
"C. Mordasini",
"G. Olofsson",
"R. Ottensamer",
"I. Pagano",
"E. Pallé",
"G. Peter",
"D. Pollacco",
"D. Queloz",
"R. Ragazzoni",
"N. Rando",
"H. Rauer",
"I. Ribas",
"N. C. Santos",
"G. Scandariato",
"D. Ségransan",
"A. E. Simon",
"A. M. S. Smith",
"R. Southworth",
"M. Stalport",
"S. Sulis",
"M. Gy. Szabó",
"S. Udry",
"B. Ulmer",
"V. Van Grootel",
"J. Venturini",
"E. Villaver",
"N. A. Walton"
] | astro-ph.EP | [
"astro-ph.EP",
"astro-ph.SR"
] |
V. Nascimbeni et al.
A new dynamical modeling of K2-24
INAF, Osservatorio Astronomico di Padova, Vicolo dell'Osservatorio 5, 35122 Padova, Italy Dipartimento di Fisica e Astronomia, Università degli Studi di Padova, Vicolo dell’Osservatorio 3, 35122 Padova, Italy Instituto de Astrofisica e Ciencias do Espaco, Universidade do Porto, CAUP, Rua das Estrelas, 4150-762 Porto, Portugal Department of Physics, University of Warwick, Gibbet Hill Road, Coventry CV4 7AL, United Kingdom Weltraumforschung und Planetologie, Physikalisches Institut, University of Bern, Gesellschaftsstrasse 6, 3012 Bern, Switzerland Center for Space and Habitability, University of Bern, Gesellschaftsstrasse 6, 3012 Bern, Switzerland Observatoire astronomique de l'Université de Genève, Chemin Pegasi 51, 1290 Versoix, Switzerland Department of Astronomy & Astrophysics, University of Chicago, Chicago, IL 60637, USA Dipartimento di Fisica e Astronomia "Galileo Galilei", Università degli Studi di Padova, Vicolo dell'Osservatorio 3, 35122 Padova, Italy Instituto de Astrofísica de Canarias, Vía Láctea s/n, 38200 La Laguna, Tenerife, Spain Departamento de Astrofísica, Universidad de La Laguna, Astrofísico Francisco Sanchez s/n, 38206 La Laguna, Tenerife, Spain Admatis, 5. Kandó Kálmán Street, 3534 Miskolc, Hungary Depto. de Astrofísica, Centro de Astrobiología (CSIC-INTA), ESAC campus, 28692 Villanueva de la Cañada (Madrid), Spain Departamento de Fisica e Astronomia, Faculdade de Ciencias, Universidade do Porto, Rua do Campo Alegre, 4169-007 Porto, Portugal Space Research Institute, Austrian Academy of Sciences, Schmiedlstrasse 6, A-8042 Graz, Austria Max Planck Institute for Extraterrestrial Physics, Gießenbachstraße, 85748 Garching, Germany Department of Astronomy, Stockholm University, AlbaNova University Center, 10691 Stockholm, Sweden Physikalisches Institut, University of Bern, Gesellschaftsstrasse 6, 3012 Bern, Switzerland Centre for Exoplanet Science, SUPA School of Physics and Astronomy, University of St Andrews, North Haugh, St Andrews KY16 9SS, UK CFisUC, Departamento de Física, Universidade de Coimbra, 3004-516 Coimbra, Portugal Institute of Planetary Research, German Aerospace Center (DLR), Rutherfordstrasse 2, 12489 Berlin, Germany INAF, Osservatorio Astrofisico di Torino, Via Osservatorio, 20, I-10025 Pino Torinese To, Italy Centre for Mathematical Sciences, Lund University, Box 118, 221 00 Lund, Sweden Aix Marseille Univ, CNRS, CNES, LAM, 38 rue Frédéric Joliot-Curie, 13388 Marseille, France Astrobiology Research Unit, Université de Liège, Allée du 6 Août 19C, B-4000 Liège, Belgium Space sciences, Technologies and Astrophysics Research (STAR) Institute, Université de Liège, Allée du 6 Août 19C, 4000 Liège, Belgium Institute of Astronomy, KU Leuven, Celestijnenlaan 200D, 3001 Leuven, Belgium ELTE Gothard Astrophysical Observatory, 9700 Szombathely, Szent Imre h. u. 
112, Hungary SRON Netherlands Institute for Space Research, Niels Bohrweg 4, 2333 CA Leiden, Netherlands Centre Vie dans l’Univers, Faculté des sciences, Université de Genève, Quai Ernest-Ansermet 30, 1211 Genève 4, Switzerland Leiden Observatory, University of Leiden, PO Box 9513, 2300 RA Leiden, The Netherlands Department of Space, Earth and Environment, Chalmers University of Technology, Onsala Space Observatory, 439 92 Onsala, Sweden Dipartimento di Fisica, Università degli Studi di Torino, via Pietro Giuria 1, I-10125, Torino, Italy National and Kapodistrian University of Athens, Department of Physics, University Campus, Zografos GR-157 84, Athens, Greece Department of Astrophysics, University of Vienna, Türkenschanzstrasse 17, 1180 Vienna, Austria European Space Agency (ESA), European Space Research and Technology Centre (ESTEC), Keplerlaan 1, 2201 AZ Noordwijk, The Netherlands Institute for Theoretical Physics and Computational Physics, Graz University of Technology, Petersgasse 16, 8010 Graz, Austria Konkoly Observatory, Research Centre for Astronomy and Earth Sciences, 1121 Budapest, Konkoly Thege Miklós út 15-17, Hungary ELTE Eötvös Loránd University, Institute of Physics, Pázmány Péter sétány 1/A, 1117 Budapest, Hungary Lund Observatory, Division of Astrophysics, Department of Physics, Lund University, Box 118, 22100 Lund, Sweden IMCCE, UMR8028 CNRS, Observatoire de Paris, PSL Univ., Sorbonne Univ., 77 av. Denfert-Rochereau, 75014 Paris, France Institut d'astrophysique de Paris, UMR7095 CNRS, Université Pierre & Marie Curie, 98bis blvd. Arago, 75014 Paris, France Astrophysics Group, Lennard Jones Building, Keele University, Staffordshire, ST5 5BG, United Kingdom European Space Agency, ESA - European Space Astronomy Centre, Camino Bajo del Castillo s/n, 28692 Villanueva de la Cañada, Madrid, Spain INAF, Osservatorio Astrofisico di Catania, Via S. Sofia 78, 95123 Catania, Italy Institute of Optical Sensor Systems, German Aerospace Center (DLR), Rutherfordstrasse 2, 12489 Berlin, Germany ETH Zurich, Department of Physics, Wolfgang-Pauli-Strasse 2, CH-8093 Zurich, Switzerland Cavendish Laboratory, JJ Thomson Avenue, Cambridge CB3 0HE, UK Institut fuer Geologische Wissenschaften, Freie Universitaet Berlin, Maltheserstrasse 74-100,12249 Berlin, Germany Institut de Ciencies de l'Espai (ICE, CSIC), Campus UAB, Can Magrans s/n, 08193 Bellaterra, Spain Institut d'Estudis Espacials de Catalunya (IEEC), 08860 Castelldefels (Barcelona), Spain ESOC, European Space Agency, Robert-Bosch-Str. 5, 64293 Darmstadt, Germany HUN-REN-ELTE Exoplanet Research Group, Szent Imre h. u. 112., Szombathely, H-9700, Hungary Institute of Astronomy, University of Cambridge, Madingley Road, Cambridge, CB3 0HA, United Kingdom Dipartimento di Fisica, Università di Trento, Via Sommarive 14, 38123 Povo (TN), Italy
K2-24 is a planetary system composed of two transiting low-density Neptunians locked in an almost perfect 2:1 resonance and showing large TTVs, i.e., an excellent laboratory to search for signatures of planetary migration. Previous studies performed with K2, Spitzer and RV data tentatively claimed a significant non-zero eccentricity for one or both planets, possibly high enough to challenge the scenario of pure disk migration through resonant capture. With 13 new CHEOPS light curves (seven of planet -b, six of planet -c), we carried out a global photometric and dynamical (RV+TTV) re-analysis by including all the available literature data as well. We got the most accurate set of planetary parameters to date for the K2-24 system, including radii and masses at 1% and 5% precision (now essentially limited by the uncertainty on stellar parameters) and non-zero eccentricities e_b=0.0498_-0.0018^+0.0011, e_c=0.0282_-0.0007^+0.0003 detected at very high significance for both planets. Such relatively large values imply the need for an additional physical mechanism of eccentricity excitation during or after the migration stage. Also, while the accuracy of the previous TTV model had drifted by up to 0.5 days at the current time, we constrained the orbital solution firmly enough to predict the forthcoming transits for the next ∼15 years, thus enabling an efficient follow-up with top-level facilities such as JWST or ESPRESSO.
The K2-24 planetary system revisited by CHEOPS (this article uses data from the CHEOPS program; the individual data sets are listed in Table <ref>).
V. Nascimbeni^1 (send offprint requests to V. Nascimbeni), L. Borsato^1, P. Leonardi^2,1,55, S. G. Sousa^3, T. G. Wilson^4, A. Fortier^5,6, A. Heitzmann^7, G. Mantovan^2,1, R. Luque^8, T. Zingales^2,1, G. Piotto^2,1, Y. Alibert^6,5, R. Alonso^10,11, T. Bárczy^12, D. Barrado Navascues^13, S. C. C. Barros^3,14, W. Baumjohann^15, T. Beck^5, W. Benz^5,6, N. Billot^7, F. Biondi^16,1, A. Brandeker^17, C. Broeg^5,6, M.-D. Busch^18, A. Collier Cameron^19, A. C. M. Correia^20, Sz. Csizmadia^21, P. E. Cubillos^22,15, M. B. Davies^23, M. Deleuil^24, A. Deline^7, L. Delrez^25,26,27, O. D. S. Demangeon^3,14, B.-O. Demory^6,5, A. Derekas^28, B. Edwards^29, D. Ehrenreich^7,30, A. Erikson^21, L. Fossati^15, M. Fridlund^31,32, D. Gandolfi^33, K. Gazeas^34, M. Gillon^25, M. Güdel^35, M. N. Günther^36, Ch. Helling^15,37, K. G. Isaak^36, F. Kerschbaum^35, L. L. Kiss^38,39, J. Korth^40, K. W. F. Lam^21, J. Laskar^41, A. Lecavelier des Etangs^42, M. Lendl^7, D. Magrin^1, P. F. L. Maxted^43, B. Merín^44, C. Mordasini^5,6, G. Olofsson^17, R. Ottensamer^35, I. Pagano^45, E. Pallé^10,11, G. Peter^46, D. Pollacco^4, D. Queloz^47,48, R. Ragazzoni^1,9, N. Rando^36, H. Rauer^21,49, I. Ribas^50,51, N. C. Santos^3,14, G. Scandariato^45, D. Ségransan^7, A. E. Simon^5,6, A. M. S. Smith^21, R. Southworth^52, M. Stalport^26,25, S. Sulis^24, Gy. M. Szabó^28,53, S. Udry^7, B. Ulmer^46, V. Van Grootel^26, J. Venturini^7, E. Villaver^10,11, N. A. Walton^54
Submitted 23 May 2024 / Accepted 4 September 2024
§ INTRODUCTION
The advent of space-based, high-precision photometry, inaugurated by CoRoT <cit.> and then continued by Kepler <cit.>, its second phase K2 <cit.>, and more recently by TESS <cit.> enabled not only the unexpected discovery of entirely new classes of exoplanets, but also the application of analysis techniques hitherto relegated to theory. Among these, one the most fruitful is a dynamical technique known as Transit Time Variations (TTVs; ), in which the gravitational perturbation between planets, and its time-variable effect on the measured orbital periods, is exploited to retrieve their orbital solution. In the case where two or more planets are transiting their host stars, TTVs are very effective at both confirming the planetary nature of candidates, and at measuring their masses without the need of (or in synergy with; ) radial velocity measurements (RVs).
Dynamical simulations show that the expected amplitude of TTVs in ordinary planetary systems is quite small, usually in the order of magnitude of seconds to minutes <cit.>. Close, therefore, or even below the detection limit imposed by photon noise and/or stellar activity <cit.>. A fairly interesting exception is represented by systems where planets are locked in mean-motion resonances (MMRs) or, more in general, close to commensurability[Being close to an integer ratio of orbital periods does not necessarily imply the system to be in a MMR from a dynamical point of view; see also our discussion in Section <ref>.], the orbital period ratio being close to an integer ratio. Low-order MMRs in the j+1:j form (j∈ℕ), such as 2:1 or 3:2 can boost the TTV signal by orders of magnitude, reaching hours or even days in the most favorable configurations <cit.>. A famous case, and the first one to be investigated, is the Kepler-9 system <cit.>, a pair of transiting warm Saturn-sized planets orbiting their host in about 19.2 and 38.9 days, i. e., close to a 2:1 MMR. Resonant configurations are not merely a useful playground to exploit the TTV technique. Rather, they are also extremely interesting by themselves, since they represent a unique laboratory to test planetary formation and migration theories <cit.>. In particular, how resonances can be maintained during a disk-migration phase or form/change at a later stage is currently a very active area of debate ( and references therein).
The typical time scale of the orbital period modulation induced by the TTV (sometimes called the superperiod; P_TTV) needs to be fully mapped to avoid degeneracy in the dynamical retrieval, and can reach months or even years. In the <cit.> approximation, the superperiod can be estimated as a function of the orbital periods of the inner and outer planets:
P_TTV = | (j+1)/P_out - j/P_in |^-1 .
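As a back-of-the-envelope illustration of this formula (using the mean orbital periods of K2-24b and -c adopted later in this paper; the snippet below is only a worked example, not part of the published analysis):

    # Worked example of the superperiod formula for K2-24 (j = 1, i.e. a 2:1 pair).
    P_in, P_out = 20.8891, 42.3391                  # orbital periods of planets b and c [days]
    j = 1
    P_ttv = abs((j + 1) / P_out - j / P_in) ** -1
    print(f"P_TTV ~ {P_ttv:.0f} d ~ {P_ttv / 365.25:.1f} yr")   # ~1600 d, i.e. ~4.3 yr

The resulting superperiod of roughly 1600 days (about 4.3 years) is indeed far longer than a single K2 campaign or TESS sector, as discussed below.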
For most orbital configurations, P_TTV can be significantly longer than the average duration of a K2 campaign (∼ 70-80 days) or a TESS sector (∼ 27 days). Even if TESS, by design, is able to revisit a given target throughout additional sectors (according mostly to its ecliptic latitude), the sampling of the transit times can be very sparse, especially in the long-period regime (P≳ 30 d). This is particularly true for those systems discovered by K2, which lie close to the ecliptic and are therefore rarely monitored by TESS, if at all.
On the other hand, the ESA S-class mission CHEOPS <cit.>, launched in 2019, is very efficient at observing at low ecliptic latitudes and, being a single-target telescope, has the ability to gather even very long-period transits once their ephemeris is reasonably constrained. CHEOPS has been successfully exploited several times to follow-up systems discovered by K2, sometimes with a particular focus on TTV analysis. WASP-47 is one of such systems <cit.>, for which our analysis led to an improvement of the orbital/physical parameters and, in particular, of the density of planet -d. We chose K2-24 as the next system to explore for the science case mentioned above, within the CHEOPS GTO program.
K2-24, announced by <cit.> (hereafter P16), is a planetary system made up of two (sub-)Saturn-sized (R_b≃ 5.6 R_⊕, R_c≃ 8 R_⊕) planets close to a 2:1 period ratio, with orbital periods of P_b≃ 21 and P_c≃ 42 days. Since the baseline of the K2 light curve was not long enough to detect TTVs, the discovery paper had to rely on the HIRES RVs alone to constrain the planetary masses, which turned out to be in the Neptunian range (M_b= 21.0± 5.4 M_⊕, M_c= 27± 7 M_⊕), hence making K2-24b and -c extremely inflated planets with unusually large H/He envelopes predicted by models. A further PFS/HARPS RV follow-up by <cit.> essentially confirmed the mass estimates at M_b= 20± 4 M_⊕ and M_c= 26± 6 M_⊕. The only follow-up transits published so far (two of -b and two of -c), observed by Spitzer in 2015-2016, were presented by <cit.> (hereafter P18), who also merged all the existing photometric and spectroscopic data and carried out the first TTV analysis of this system, revealing even smaller masses (M_b= 19± 2 M_⊕, M_c= 15± 2 M_⊕) and tentatively detecting the presence of an outer 54±14 M_⊕ companion at ∼ 1.1 au.
The TTV modeling by , performed through the analytic approach developed by <cit.> and based on transit data covering only ∼40% of P_TTV, did not yield a precise measurement for both eccentricities e_b and e_c. Rather, it concluded that at least one planet must have an eccentricity significantly larger than zero, adopting e_b=0.06± 0.01 and e_c<0.07 (at 90% confidence) based on dynamical stability constraints and an imposed prior derived from the distribution of ⟨ e ⟩ observed in Kepler multi-planet systems. <cit.> later presented a more detailed analysis of dynamical stability in K2-24 based on the planetary parameters by , concluding that MMR locking protects its long-term evolution, and constraining the eccentricity of the outer planet more tightly, to e_c<0.05.
<cit.> further investigated the dynamical architecture of K2-24 and its implications for its formation and migration history, concluding that a pure disc-induced migration is not able to reproduce the period ratio and the TTV amplitude observed, and would result in much smaller eccentricities, by a factor of ∼ 30. Rather, they proposed a two-stage scenario where the two planets are first captured in resonance at low eccentricities within the disk, then eccentricities are excited by an outer companion (such that hinted by RV observations) during the disk dispersal phase. The same authors also suggested that the actual value of e_b and e_c may be higher than the estimate, according to their simulations.
Only for a handful of transiting planetary systems, we know accurate eccentricities for planets in or close to low-order MMRs. From the latest version of the NASA Exoplanet Archive (; v. 2023-12-28), only seven[An additional eighth system is TIC279401253 <cit.>, a 2:1 pair of giants (P_b≃ 77 d, P_c≃ 155 d) of which the outer one is not transiting and detected through RVs.] pairs of resonant[All the listed pairs lie in or close the 3:2 MMR (K2-146, Kepler-223, K2-19), or the 2:1 MMR (all the others).] planets can be found with both eccentricities constrained at better than 3σ (Fig. <ref>). Sorted by increasing average orbital period, they are: K2-146 b/c <cit.>, Kepler-223 b/c <cit.>, K2-19 b/c <cit.>, KOI-142=Kepler-88 b/c <cit.>, Kepler-9 b/c <cit.>, TOI-2525 b/c <cit.>, and Kepler-30 b/c <cit.>. Among these, only K2-146 and Kepler-223 lie in the Neptunian mass regime, but are quite close-in at P<10 d, where tidal interactions with the host star start to become significant <cit.>. From this point of view, the K2-24 system offers us the rare opportunity to probe the “primordial” eccentricities of a pair of warm (T_eq<800 K) Neptunians, i.e., not affected by tidal effects.
The aim of this paper is to present and analyze thirteen new CHEOPS observations of K2-24-b/c, and to merge them with all the existing literature light curves and radial velocities (RVs) to derive an updated and consistent dynamical solution, able to 1) recover the transit ephemeris for any future follow-up, 2) improve the measurement of planetary masses, radii, and densities, with a particular focus on the implications for their inner structure, and 3) firmly detect eccentricities for both planets. We present the new observations together with the employed archival data in Section <ref>. We describe the photometric modeling of the light curves in Section <ref>, then the global TTV+RV dynamical analysis in Section <ref>. Finally, the results are compared and interpreted in Section <ref>, where prospects for the future characterization of this system are discussed.
§ OBSERVATIONS
We collected all the available photometric and spectroscopic data of K2-24 for our analysis; they are described in the following subsections. The very long orbital periods of K2-24b and -c, together with their long duration, uncertain ephemeris (See Section <ref>) and small transit depths (2-4 mmag) make the ground-based follow-up of this system extremely difficult. Indeed, no ground-based light curves have been published so far. It is also worth mentioning that K2-24 has never been observed by TESS in its first six observing cycles (2018-2024).
All the time stamps of the photometric and spectroscopic data described below were uniformly converted to the BJD-TDB standard and referred to the mid-exposure instant, following the prescription by <cit.>.
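For reference, a conversion of this kind can be sketched with astropy as below; the observing site and target coordinates are placeholders (not the actual values used in this work), and the snippet follows the standard barycentric-correction recipe rather than the specific prescription cited above.

    from astropy.time import Time
    from astropy.coordinates import SkyCoord, EarthLocation
    import astropy.units as u

    target = SkyCoord(ra=242.0 * u.deg, dec=-25.0 * u.deg)          # placeholder coordinates
    site = EarthLocation.of_site("paranal")                         # placeholder observatory
    t = Time(2457300.5, format="jd", scale="utc", location=site)    # mid-exposure time, JD_UTC
    ltt = t.light_travel_time(target, kind="barycentric")           # light-travel-time correction
    bjd_tdb = (t.tdb + ltt).jd                                      # mid-exposure time in BJD_TDB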
§.§ K2 and Spitzer photometry
K2-24 has been observed by K2 once, in Campaign 2, from 2014-08-23 to 2014-11-13. This uninterrupted, ∼75 day-long light curve contains four transits of planet -b and two transits of planet -c (plotted with green points in Fig. <ref>), and led to their discovery published by . This light curve has been corrected for systematic errors due to the spacecraft jitter and drifting by following the approach developed by <cit.>.
Four more transits were secured by Spitzer and presented by : two of planet -b on 2015-10-27 and 2016-06-13, and two of planet -c on 2015-11-12 and 2016-06-10. Both the latter time series actually cover partial transits: the scheduling was based on a simple linear ephemeris since a more sophisticated TTV model was not available yet at that time. All the Spitzer light curves have been corrected for systematics through the pixel-level decorrelation algorithm (PLD; ) as modified by <cit.>.
§.§ HST photometry
We downloaded the publicly available HST WFC3 G141 observations of K2-24b from the MAST archive. These data cover a single transit gathered on 2016-07-04 as part of proposal GO-14455 (PI: E. Petigura, plotted with orange points in Fig. <ref>) to extract a transmission spectrum of the planetary atmosphere. These observations have been previously analyzed by <cit.>; we will get back on their results in Section <ref>.
The visit consists of a total of eight HST orbits; in our analysis we excluded the first one due to the presence of significant time-dependent systematic errors. At the beginning of each orbit, a direct image captured with the F130N filter was used for wavelength calibration. These data were collected with the GRISM256 aperture and the SPARS10 reading sequence. The total exposure time was set to 103.13 s, with 16 up-the-ramp reads per exposure. Both scanning directions were employed.
We calibrated the raw WFC3 data and extracted the photometric information through the Iraclis dedicated pipeline (, , ). We extracted the detrended white-light curve (spectral range: 1.088 to 1.680 μm; plotted with orange points in Fig. <ref>) from the calibrated images, taking into account the tilted configuration of the WFC3/NIR detector and modeling the time-dependent systematics using Eq. 1 of <cit.>. HST WFC3 time series are often affected by linear long-term and exponential short-term (“orbit ramps”) trends, especially when observing bright sources. We note that both ingress and egress are missing from this visit, implying that the transit time T_0 is expected to be relatively poorly constrained.
§.§ CHEOPS photometry
K2-24 was targeted by CHEOPS thirteen times over a span of about two years, within the GTO subprogram #25 (PID ), focused on the study of the mass-radius relation through the TTV analysis of resonant pairs of low-mass exoplanets. A complete log of the observations is reported in Table <ref>; the corresponding light curves, extracted by the CHEOPS DRP v14 pipeline <cit.>, are plotted in Fig. <ref> and labelled with matching IDs. The gaps located at regular time intervals are due to the avoidance angles of CHEOPS and to the SAA crossing events, Earth occultations and Earth stray light contamination during its 98.77-min low-earth orbit.
It is evident that, particularly for planet -c, some transits are partial ones: this is due to the transit predictions by becoming more and more inaccurate as the time passed since the K2/Spitzer observations increased. The O-C (observed minus calculated) discrepancy is much larger than the prediction errors reported in Table 4 of , demonstrating that their dynamical solution had to be revised and improved in order to reliably predict the future transit times.
§.§ Radial velocities
The merged data set we collected is made of 89 RV observations in total from three different instruments:
* 63 RV points from HIRES, published by (the first 32 being already analyzed by );
* 16 RV points from PFS, presented by <cit.>;
* 10 RV points from HARPS (PID: 095.C-0718), also presented by <cit.>. Two additional HARPS points can be found in the ESO archive (PID: 191.C-0873), but were not included in our analysis since they are affected by an RV offset introduced on 2015-06-03.
We emphasize that we are going to fit all these RV data simultaneously for the first time, as did not include the PFS and HARPS data in their modeling.
§ LIGHT CURVE MODELING
We performed a global modeling of our 13 CHEOPS, six K2 and one HST light curves by simultaneously fitting the signals of planets -b and -c on all data sets, with the PyORBIT software [<https://github.com/LucaMalavolta/PyORBIT>], version 10 <cit.>. After some preliminary tests, we decided not to incorporate the Spitzer light curves into our global fit, as their transit depths are clearly inconsistent with the K2/CHEOPS/HST data, and also with each other. This could be due to an imperfect correction of systematic errors, since the transits are partial ones and the detrending process works by extrapolation rather than interpolation. To avoid any bias in our retrieved planetary parameters, we took the transit times T_0 for our dynamical analysis (Section <ref>) from Table 1 of instead.
The K2/CHEOPS/HST transits were modeled with the Batman code <cit.> and parametrized as a function of the impact parameter b, the radius ratio R_p/R_⋆ and the scaled semi-major axis a/R_⋆. The transit model for K2 was super-sampled by a factor of 10 to account for the non-negligible length of the K2 exposure times (30-minute cadence). Each transit time T_0 (13 from CHEOPS, 6 from K2, 1 from HST) was treated as a free, independent parameter, so the orbital period P was fixed at its average value interpolated over our observations, i.e., P_b=20.8891 and P_c=42.3391 days. The limb darkening effect was modeled through a quadratic law, i.e., with two parameters called u_1 and u_2 for each instrument; internally, u_1 and u_2 were re-parametrized as q_1 and q_2 following the prescription by <cit.>, to minimize the correlation between the two parameters.
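To make this parametrization concrete, the sketch below evaluates a single quadratic-limb-darkening transit model with batman; all numerical values are illustrative placeholders rather than our fitted K2-24 parameters, and the (q_1, q_2) → (u_1, u_2) mapping is the commonly used one (u_1 = 2√(q_1) q_2, u_2 = √(q_1)(1 − 2 q_2)).

    import numpy as np
    import batman

    q1, q2 = 0.4, 0.3                                   # illustrative values only
    u1, u2 = 2 * np.sqrt(q1) * q2, np.sqrt(q1) * (1 - 2 * q2)

    params = batman.TransitParams()
    params.t0 = 0.0                                     # mid-transit time [d]
    params.per = 20.8891                                # orbital period [d]
    params.rp = 0.05                                    # R_p/R_star (placeholder)
    params.a = 30.0                                     # a/R_star (placeholder)
    b = 0.3                                             # impact parameter (placeholder)
    params.inc = np.degrees(np.arccos(b / params.a))    # inclination [deg], circular orbit
    params.ecc, params.w = 0.0, 90.0
    params.limb_dark, params.u = "quadratic", [u1, u2]

    t = np.linspace(-0.3, 0.3, 1000)                    # time from mid-transit [d]
    # supersample_factor=10 mimics the treatment of the 30-min K2 cadence
    model = batman.TransitModel(params, t, supersample_factor=10, exp_time=30.0 / 1440.0)
    flux = model.light_curve(params)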
The residual systematic errors present in the CHEOPS light curves f(t) were detrended as a linear combination of terms as a function of external parameters: the first and second-order derivative of the centroid offset in x and y pixel coordinates (df/dx, d^2f/dx^2, df/dy, d^2f/dy^2), background level (df/db), photometric contamination factor (df/dcontam), the first three harmonics of the spacecraft roll angle (in cosϕ and sinϕ) and a quadratic baseline f_0 + df/dt + d^2f/dt^2. The roll angle term, as expected on data from a low-Earth orbit satellite, is always dominating. Since adding 15 free parameters for each CHEOPS data set would have implied 13× 15 = 195 free parameters in our global fit just for the detrending (i. e., prohibitively expensive in terms of computational time), we went through a two-stage approach as described in <cit.>. In a first pass each individual CHEOPS light curve was fitted with both a transit model and the detrending model; then the detrended light curves were fed into the final, global fitting. The latter had therefore 52 free parameters: six LD coefficients (u_1 and u_2 for CHEOPS, K2, HST), six planetary parameters (b, R_p/R_⋆, a/R_⋆ for -b and -c), 20 transit times and 20 jitter parameters (one for each light curve).
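Schematically, a detrending model of this kind (not the actual implementation used here) amounts to a linear least-squares fit of a design matrix built from the external parameters and the roll-angle harmonics:

    import numpy as np

    def design_matrix(t, dx, dy, bg, contam, roll_deg):
        """Illustrative subset of detrending terms: quadratic baseline, centroid,
        background and contamination terms, plus the first three roll-angle harmonics."""
        phi = np.radians(roll_deg)
        cols = [np.ones_like(t), t, t**2, dx, dx**2, dy, dy**2, bg, contam]
        cols += [f(k * phi) for k in (1, 2, 3) for f in (np.cos, np.sin)]
        return np.column_stack(cols)

    def detrend(flux, X):
        coeffs, *_ = np.linalg.lstsq(X, flux, rcond=None)
        trend = X @ coeffs
        return flux / trend, coeffs     # detrended (normalized) flux and best-fit coefficients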
We set uninformative priors on all our fitting parameters. The only exceptions are the six limb darkening parameters u_1, u_2 for CHEOPS, K2 and HST. We carried out two independent analyses: the first one with fully uninformative priors (“LD-free”), and a second one by centering the prior at the theoretical value computed by the code <cit.> and increasing to 0.05 the associated Gaussian error, to accommodate for the well-known underestimation by models (“LD-prior”). The input stellar parameters were computed by the CHEOPS Target Characterization (TS3) working group according to the procedure described by <cit.>, Section 3.2.1, and reported in Table <ref>. We set to use <cit.> to find a reasonable starting point in the parameter space (50 000 generations with a population size of 8× N_par, where N_par is the number of free parameters), then we initialized an MCMC optimization with <cit.>, running for 500 000 steps and setting a thinning factor of 100. After discarding the first 50 000 steps as burn-in phase, convergence was checked by auto-correlation function analysis (ACF).
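Independently of the PyORBIT internals, the "global optimizer first, then ensemble MCMC" pattern described above can be sketched as follows; the log-probability is a stand-in for the actual transit likelihood and priors, and scipy's differential evolution is used only as a generic replacement for the global optimizer cited in the text.

    import numpy as np
    import emcee
    from scipy.optimize import differential_evolution

    def log_prob(theta):
        # Stand-in for log-prior + log-likelihood of the transit model.
        return -0.5 * np.sum(theta ** 2)

    ndim = 5
    bounds = [(-5.0, 5.0)] * ndim

    # Stage 1: global search of the parameter space.
    best = differential_evolution(lambda th: -log_prob(np.asarray(th)), bounds, maxiter=1000).x

    # Stage 2: ensemble MCMC initialized around the stage-1 solution.
    nwalkers = 8 * ndim
    p0 = best + 1e-4 * np.random.randn(nwalkers, ndim)
    sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
    sampler.run_mcmc(p0, 5000)
    chain = sampler.get_chain(discard=500, thin=100, flat=True)   # burn-in removal and thinning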
All the final best-fit parameters of interest from the MCMC distributions are reported in Table <ref> for both the LD-free and LD-prior case; transit times are reported separately in Table <ref>. The corresponding corner plots for the transit shape parameters (i.e., excluding transit times, LD, and jitter parameters) are shown in Fig. <ref> and Fig. <ref>. The best-fit values of u_1 and u_2 are consistent between the LD-free and LD-prior case (Table <ref>), although of course LD-prior has smaller error bars; all the remaining parameters agree within 1 σ. We adopt the LD-prior solution throughout the following analysis. We plot the CHEOPS light curves of K2-24b and -c, folded on the best-fit individual T_0 and binned over 0.3-hour intervals in Fig. <ref>.
In the last column of Table <ref> we also compare our results with the literature, i. e., with ( did not present a new set of independent planetary parameters, since it was based on priors from ). Overall there is a very good agreement. The planetary radii R_b and R_c, in particular, are consistent within 1 σ but our error bars are improved by an order of magnitude, i. e., from a relative error of ∼9% to ∼1%. The uncertainty is now limited by our current knowledge of the stellar radius σ(R_⋆)/R_⋆≃ 1% (Table <ref>).
§ DYNAMICAL MODELING
We carried out a dynamical modeling of the K2-24 system and its strong TTV signals by simultaneously fitting the three RV data sets available (see Section <ref>) and the transit times (T_0s) extracted with (see Table <ref>) through the TRADES code [<https://github.com/lucaborsato/trades>] <cit.>.
We adopted a parameterisation similar to <cit.>, assuming a 3-planet[Candidate planet -d (at ≃ 1.1 au) is far enough to be dynamically decoupled from the inner pair, so in principle it should not impact the TTV signals. It has to be included into our dynamical modeling, though, because of its effect on RV,
as in .] model and fitting for the stellar mass M_⋆, planetary-to-star mass ratio M_p/M_⋆, periods P, mean longitude[The mean longitude is defined as λ=ℳ+Ω+ω, where ℳ is the mean anomaly, Ω the longitude of ascending node, ω the argument of pericenter.] λ of all planets, eccentricity e and argument of periastron passage ω in the form √(e)cosω and √(e)sinω for planet -b and -c (as specified by the indexes b and c). We also fitted a jitter term, in log_2-space, and an offset for each RV data set.
We fixed the following parameters: longitude of ascending node Ω to 180 for each planet, circular orbit of planet -d (eccentricity e_d=0 and argument of periastron ω_d=90), inclination, i, of planet b and c as in Table <ref> and to 90 for planet -d. All the parameters are defined at the reference time
T_ref = 2 456 905 BJD_TDB. We defined parameter priors in the physical space and converted them into fitting space; all the priors used have been reported in Table <ref>.
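For clarity, the fitted (√e cos ω, √e sin ω) pair maps back to the physical eccentricity and argument of periastron in the usual way; the helper below is only an illustration of that conversion.

    import numpy as np

    def to_e_omega(secosw, sesinw):
        """Convert (sqrt(e) cos w, sqrt(e) sin w) back to (e, omega in degrees)."""
        e = secosw ** 2 + sesinw ** 2
        omega = np.degrees(np.arctan2(sesinw, secosw)) % 360.0
        return e, omega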
First, we run with (100 different configurations for 150 000 generations) to find a suitable starting point. Then, we run with 100 walkers for 1 000 000 steps and we apply a conservative thinning factor of 100.
As in <cit.>, we used a combination of the differential evolution proposal <cit.> and the snooker differential evolution proposal <cit.>
as the sampler within . After checking chains convergence through Gelman-Rubin statistics <cit.>,
Geweke criterion <cit.>, Auto-correlation Function (ACF), and visual inspections, we discarded as burn-in the first 50% of the steps.
From the posterior distributions we extracted the maximum-a-posteriori (MAP[By the term “maximum-a-posteriori” (MAP) we mean the set of parameters that maximize the log-probability of the posterior distributions. If all the priors were uninformative/uniform, then the MAP would correspond to the maximum likelihood estimation (MLE).])
as the best-fit parameters, and the uncertainties as the high density interval (HDI) at the 68.27% level[HDI at 68.27% is equivalent to the 16^th–84^th percentiles of a Gaussian distribution.].
The best-fit parameters from and their uncertainties are reported in Table <ref>,
with a comparison with for the parameters in common.
The TTV and RV models from the best-fit orbital solution by are also plotted along with
the observed data points in Fig. <ref> and Fig. <ref>, respectively.
The fit looks perfectly satisfactory with an overall reduced χ^2 (TTV+RV) of 1.33 with 94 degrees of freedom. The corresponding lnℒ and ln(probability) values are -107.242 and -107.623, respectively.
§.§ Prediction of future transits
A useful application of our dynamical model is the prediction of future transit events for any follow-up opportunity,
since the transits of both planets -b and -c cannot be reliably scheduled according to a linear ephemeris. The combined impact of a poorly-constrained linear ephemeris with a set of orbital parameters determined over a relatively short time span has been discussed in <cit.> in the context of the scientific preparation of the Ariel mission <cit.>.
All the transits of planets -b and -c predicted by our best-fit dynamical model up to and including the year 2029 are reported along with the associated uncertainty in the Appendix, Tables <ref>, <ref>, <ref>. The values and their associated uncertainties were calculated by integrating 100 orbital solutions randomly chosen from the fit posterior, then computing the median and the 68.27% HDI interval at each transit epoch.
§ DISCUSSION AND CONCLUSIONS
In our work, we merged all the available space-based photometry of K2-24b and -c (including 13 unpublished CHEOPS light curves; Section <ref>) and derived improved stellar parameters for K2-24 (Table <ref>) to perform a global transit fit (Section <ref>), which yielded a homogeneous set of planetary parameters and transit times (Tables <ref>, <ref>). Then we fitted the latter together with all the available RVs (HIRES, PFS, HARPS) through an RV+TTV dynamical model (Section <ref>) to get a complete orbital solution for K2-24b and -c, and for candidate planet -d as well (Table <ref>).
§.§ Planetary parameters of K2-24b and K2-24c
All the derived parameters for planets -b and -c look statistically consistent, at least within 2 σ, with those published by , but mostly with much smaller error bars due to the increased signal-to-noise ratio of the combined data set, the improved stellar parameters, and to the much larger baseline of the observations. This is in particular true for the planetary radii (R_b/R_⊕=5.64± 0.06, R_c/R_⊕=7.93± 0.12) and masses (M_b/M_⊕=20.6_-0.3^+1.6, M_c/M_⊕=16.4_-0.2^+1.3) for which we reached a relative error of 1% and 4-5%, respectively. We confirm the unusually low density of the outer planet (ρ_c = 0.181_-0.009^+0.017 g cm^-3), implying a very large gaseous envelope, possibly larger than 50% and hence challenging a core-accretion scenario due to the onset of runaway accretion .
Several alternative scenarios have been proposed to explain the existence of such “super-puff” planets <cit.>, including light scattering from high-altitude photo-chemical hazes <cit.> or the presence of planetary rings in specific configurations <cit.>. These hypotheses, however, will require a JWST follow-up to be tested.
A particularly interesting variable to discuss is the orbital eccentricity, due to its important consequences for the planetary migration mechanisms. We measured an extremely significant non-zero eccentricity for both planets (e_b=0.049_-0.002^+0.001, e_c=0.0282_-0.0007^+0.0003), confirming the findings by , who based on the <cit.> theory predicted that e_b and e_c cannot both be zero. It is worth noting that our best-fit values are perfectly compatible with their constraints, even though our analysis is based on uninformative priors only and does not adopt any assumption on the distribution of the eccentricity in the Kepler population. We also mention that, compared with the <cit.> prediction, we found the eccentricity of -c at the very limit they set (e_c< 0.05).
§.§ Dynamical stability
The K2-24 system hosts three planets (Table <ref>), including two Neptune-mass planets (M_b ≈ 20.6 M_⊕, M_c ≈ 16.4 M_⊕) in the vicinity of a 2:1 mean motion resonance (P_c / P_b ≈ 2.029).
Following a suggestion from the referee, we first checked the dynamical stability of our orbital solution by computing the Angular Momentum Deficit <cit.> of the whole posterior distribution, then we explored the stability by evaluating the AMD-Hill criterion proposed in Eq. 26 of <cit.>. We found that the whole posterior is AMD-Hill stable. We also ran an N-body integration with the Mean Exponential Growth factor of Nearby Orbits <cit.> indicator through the rebound package with the integrator <cit.>. We set a step-size equal to 10% of the shortest period of the system and integrated for 10^5 years. We found that not only the MAP solution is stable (MEGNO = 2), but also 1000 samples, randomly selected from the posterior, are stable with MEGNO≃ 2.
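A minimal sketch of such a MEGNO run with rebound is given below; the masses and elements are rounded placeholders taken from the values quoted in this paper, the whfast integrator is assumed here as a stand-in for the one actually cited, and method names follow recent rebound versions.

    import rebound

    sim = rebound.Simulation()
    sim.units = ("day", "AU", "Msun")
    sim.add(m=1.0)                                  # star (placeholder mass)
    sim.add(m=20.6 * 3.0e-6, P=20.8891, e=0.05)     # planet b (Earth masses -> solar masses)
    sim.add(m=16.4 * 3.0e-6, P=42.3391, e=0.03)     # planet c
    sim.add(m=54.0 * 3.0e-6, P=470.0, e=0.0)        # candidate planet d
    sim.move_to_com()

    sim.integrator = "whfast"
    sim.dt = 0.1 * 20.8891                          # 10% of the shortest period
    sim.init_megno()
    sim.integrate(1e5 * 365.25)                     # ~10^5 yr
    print("MEGNO =", sim.calculate_megno())         # ~2 for quasi-periodic (stable) orbits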
§.§ Are K2-24b and -c on a MMR configuration?
To get a wider view on the stability of the system, and also to assess whether the system is truly on a resonant configuration, we performed a dynamical analysis in a similar way as for other planetary systems <cit.>.
The system is integrated on a regular 2D mesh of initial conditions around the best fit, including planet -d (Table <ref>).
Each initial condition is integrated for 10^4 yr, using the symplectic integrator SABAC4 <cit.>, with a step size of 10^-3 yr and general relativity corrections.
Then, we perform a frequency analysis <cit.> of the mean longitude of the inner planet over two consecutive time intervals of 5000 yr, and determine the main frequency, n and n', respectively.
The stability is measured by Δ = |1-n'/n|, which estimates the chaotic diffusion of the orbits.
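In essence, this index can be sketched as follows: estimate the mean motion n of the planet from its mean longitude over each of the two halves of the integration and compare them (a plain linear fit is used below as a crude stand-in for the refined frequency analysis cited above).

    import numpy as np

    def mean_motion(t, lam):
        """Mean motion n from a linear fit to the unwrapped mean longitude."""
        return np.polyfit(t, np.unwrap(lam), 1)[0]

    def stability_index(t, lam):
        """Delta = |1 - n'/n| over two consecutive halves of the integration."""
        half = t.size // 2
        n1 = mean_motion(t[:half], lam[:half])
        n2 = mean_motion(t[half:], lam[half:])
        return abs(1.0 - n2 / n1)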
In Fig. <ref>, the results for planet -b (top panel) and planet -c (bottom panel) are reported in color: orange and red represent strongly chaotic, unstable trajectories; yellow indicates the transition between stable and unstable regimes; green corresponds to moderately chaotic trajectories that are nevertheless stable on Gyr timescales; cyan and blue indicate extremely stable quasi-periodic orbits.
The best-fit solution obtained from our analysis (Table <ref>) is marked with a white circle.
We observe that the best-fit solution from Table <ref> is completely stable, even if we increase the eccentricities up to 0.4. However, for eccentricities up to 0.1, which include the current best fit determination (e_b ≈ 0.05, e_c ≈ 0.03), we observe that the system is outside the 2:1 mean motion resonance, which corresponds to the large stable structure above the V-shape chaotic region in the middle of the figures.
The TTVs analysis constrains the resonant part of the architecture, as can be seen in Fig. <ref>. In that figure, we can see that the posterior of the fit lies outside the formal resonant domain (red area in the figure), unlike, for example, TOI-216, Kepler-1705 or Kepler-1972 <cit.>.
We conclude that the K2-24 three-planet orbital solution presented in Table <ref> is not in resonance, but remains reliable and robust against the uncertainties in the determination of the eccentricities of the two innermost planets.
We also note that the system is on the correct side of the resonance predicted by planetary migration models <cit.>.
This feature is usually attributed to tidal interactions with the parent star <cit.>, but in this case this mechanism does not seem to be very efficient, because the orbital period of the inner Neptune-like planet is much longer than 5 days <cit.>.
Finally, from the best-fit solution we monitored the evolution over 10 000 years of some parameters of the inner pair, including the period ratio P_c/P_b, the difference between the arguments of the pericenter Δω, and the critical resonant angles ϕ_1, ϕ_2 (Fig. <ref>). Interestingly, ϕ_1 and ϕ_2 are circulating (as one would expect from a non-MMR configuration, thus confirming our previous finding), while Δω librates in an anti-aligned (180^∘) configuration. Following a more quantitative approach, we repeated the same analysis on 10 000 random samples from the posterior, to find that on 100% of them Δω is confined between ∼ 140^∘ and ∼ 220^∘ with a mean value perfectly centered on 180^∘, therefore confirming the anti-aligned scenario.
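For reference, under a common convention for a 2:1 pair (inner planet b, outer planet c), the monitored angles can be written as follows, where ϖ = Ω + ω is the longitude of pericenter (the exact sign convention adopted in the figures may differ):

    ϕ_1 = λ_b - 2λ_c + ϖ_b ,    ϕ_2 = λ_b - 2λ_c + ϖ_c ,    Δω = ω_b - ω_c .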
§.§ Candidate planet K2-24d
It is worth noting that we independently confirmed the RV signal of the planet candidate K2-24d, previously only tentatively detected by on HIRES data alone. While its parameters look consistent, our circular fit now yields an 8-σ detection at M_d/M_⊕ = 54_-4^+9. The period ratio with respect to the inner planets is too large (and too far from a MMR) to generate a detectable TTV on the inner planets, hence the constraint on M_d comes entirely from the RVs. At P_d≃ 470 d, the a-priori transit probability <cit.> of -d would be approximately just R_⋆/a ≃ 0.4%, yet the actual chances are much better than that since multiple planetary systems are very likely to be coplanar <cit.>. Unfortunately, the fraction of orbital phase currently mapped by K2, Spitzer and CHEOPS together (all of which are capable of detecting the transit of a ∼50 M_⊕ planet at high confidence) is only <20%, so no conclusion can be drawn about the orbital inclination of -d. We keep considering -d a candidate rather than a confirmed planet, since we did not run any specific validation test for it, this being outside the main scope of this paper.
§.§ Future prospects for follow-up
The K2-24 system looks like a very promising target for a follow-up with several current and future facilities. To this purpose, the predicted transit windows listed in the Appendix (Tables <ref>, <ref>, <ref>) are crucial to reliably schedule the observations. The most obvious science case is a deeper study of its dynamical architecture, including the modeling of new transit timings which could unveil additional companions on orbits in external MMRs with planet -c[A planet internal to -b, i.e., at P≲ 20 d, is easily discarded by K2 photometry, if on transiting configurations. Even if we postulate a non-transiting geometry due to an unusually high mutual inclination, the currently available RVs would put an upper limit to its mass in the rocky planet regime.]. Both planets, and in particular -c, are also compelling targets for transmission spectroscopy, since their low bulk density combined with the brightness of their host star (V≃ 11.3, J≃ 9.6, K≃ 9.2) offers a unique opportunity to probe the atmospheres of a pair of warm sub-Saturns close to a MMR and to link their composition with their formation site and migration history <cit.>. If we compute the Transmission Spectroscopy Metric (TSM; ) based on our newly derived parameters in Tables <ref>, <ref> and <ref>, we get 62± 5 for -b and 177± 16 for -c (TSM scale factor computed for >4 R_⊕ planets). We remind the reader that a value of 90 is usually considered the threshold to select the best targets amenable to atmospheric characterization with JWST <cit.>. As already mentioned (Section <ref>), HST, through WFC3/NIR, has already been exploited to search for atmospheric features on K2-24b, unfortunately with a null result. <cit.> noted, however, that the best-fit free chemistry model was preferred to a flat line at 2.5σ, suggesting the presence of NH_3, but without evidence for H_2O.
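For reference, the TSM values above follow the standard definition, in which

    TSM = (scale factor) × (R_p/R_⊕)^3 (T_eq/K) / [ (M_p/M_⊕) (R_⋆/R_⊙)^2 ] × 10^(-m_J/5) ,

with T_eq the planetary equilibrium temperature, m_J the apparent J-band magnitude of the host, and a scale factor tabulated per radius bin (of order unity for planets larger than 4 R_⊕).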
TESS will observe K2-24, for the first time, in Sector 91 of Cycle 7, currently planned from 2025, April 9 to May 7. According to our modeling, only one event will be captured: a transit of K2-24b at 2025-04-25T21:15:40 UTC, unfortunately close to the mid-sector gap. It is difficult at this stage to predict whether or not TESS will manage to add new data to the TTV analysis.
The availability of a new, well-constrained ephemeris, on the other hand, opens an interesting opportunity for a ground-based follow-up campaign from the southern hemisphere. Both transit depths (approx. 2 000 and 4 000 ppm, respectively) are feasible with most medium-sized telescopes operating with the defocusing technique <cit.>, and even partial transits would provide us with reliable transit times and help mapping the TTV signal, as now the transit shape parameters of both planets (including duration) are constrained at high precision (Table <ref>).
As a closing note, we mention that, in the coming years, both PLATO <cit.> and Ariel <cit.> could be able to follow up K2-24. PLATO, to be launched in 2026, will unfortunately not observe this target during its LOP (long-pointing operation phase), since K2-24 lies too close to the ecliptic to meet the engineering constraints; however, it could be monitored at a later stage for a shorter duration (2-3 months) during the so-called short-duration observing phase (SOP; ).
Ariel, on the other hand, will observe transits of K2-24b and -c in Tier 1 and 3, respectively <cit.>. A detailed study <cit.> demonstrated that the Ariel FGS light curves of K2-24 can be exploited also for accurate TTV analysis, and that ten transits would be enough to constrain the presence of an external resonant companion down to the rocky regime.
We thank the anonymous referee for her/his valuable comments and suggestions.
CHEOPS is an ESA mission in partnership with Switzerland with important contributions to the payload and the ground segment from Austria, Belgium, France, Germany, Hungary, Italy, Portugal, Spain, Sweden, and the United Kingdom. The CHEOPS Consortium would like to gratefully acknowledge the support received by all the agencies, offices, universities, and industries involved. Their flexibility and willingness to explore new approaches were essential to the success of this mission. CHEOPS data analysed in this article will be made available in the CHEOPS mission archive (<https://cheops.unige.ch/archive_browser/>).
VNa, LBo, TZi, GPi, GMa, IPa, RRa, and GSc acknowledge support from CHEOPS ASI-INAF agreement n. 2019-29-HH.0.
S.G.S. acknowledges support from FCT through FCT contract nr. CEECIND/00826/2018 and POPH/FSE (EC).
The Portuguese team thanks the Portuguese Space Agency for the provision of financial support in the framework of the PRODEX Programme of the European Space Agency (ESA) under contract number 4000142255.
TWi acknowledges support from the UKSA and the University of Warwick.
YAl acknowledges support from the Swiss National Science Foundation (SNSF) under grant 200020_192038.
DBa, EPa, and IRi acknowledge financial support from the Agencia Estatal de Investigación of the Ministerio de Ciencia e Innovación MCIN/AEI/10.13039/501100011033 and the ERDF “A way of making Europe” through projects PID2019-107061GB-C61, PID2019-107061GB-C66, PID2021-125627OB-C31, and PID2021-125627OB-C32, from the Centre of Excellence “Severo Ochoa” award to the Instituto de Astrofísica de Canarias (CEX2019-000920-S), from the Centre of Excellence “María de Maeztu” award to the Institut de Ciències de l’Espai (CEX2020-001058-M), and from the Generalitat de Catalunya/CERCA programme. A.C.M.C. acknowledges support from the FCT, Portugal, through the CFisUC projects UIDB/04564/2020 and UIDP/04564/2020, with DOI identifiers 10.54499/UIDB/04564/2020 and 10.54499/UIDP/04564/2020, respectively.
S.C.C.B. acknowledges support from FCT through FCT contracts nr. IF/01312/2014/CP1215/CT0004.
A.C., A.D., B.E., K.G., and J.K. acknowledge their role as ESA-appointed CHEOPS Science Team Members.
ABr was supported by the SNSA.
CBr and ASi acknowledge support from the Swiss Space Office through the ESA PRODEX program.
ACC acknowledges support from STFC consolidated grant number ST/V000861/1, and UKSA grant number ST/X002217/1.
P.E.C. is funded by the Austrian Science Fund (FWF) Erwin Schroedinger Fellowship, program J4595-N.
This project was supported by the CNES.
The Belgian participation to CHEOPS has been supported by the Belgian Federal Science Policy Office (BELSPO) in the framework of the PRODEX Program, and by the University of Liège through an ARC grant for Concerted Research Actions financed by the Wallonia-Brussels Federation.
L.D. thanks the Belgian Federal Science Policy Office (BELSPO) for the provision of financial support in the framework of the PRODEX Programme of the European Space Agency (ESA) under contract number 4000142531.
This work was supported by FCT - Fundação para a Ciência e a Tecnologia through national funds and by FEDER through COMPETE2020 through the research grants UIDB/04434/2020, UIDP/04434/2020, 2022.06962.PTDC.
O.D.S.D. is supported in the form of work contract (DL 57/2016/CP1364/CT0004) funded by national funds through FCT.
B.-O. D. acknowledges support from the Swiss State Secretariat for Education, Research and Innovation (SERI) under contract number MB22.00046.
This project has received funding from the Swiss National Science Foundation for project 200021_200726. It has also been carried out within the framework of the National Centre of Competence in Research PlanetS supported by the Swiss National Science Foundation under grant 51NF40_205606. The authors acknowledge the financial support of the SNSF.
MF and CMP gratefully acknowledge the support of the Swedish National Space Agency (DNR 65/19, 174/18).
DG gratefully acknowledges financial support from the CRT foundation under Grant No. 2018.2323 “Gaseous or rocky? Unveiling the nature of small worlds”.
M.G. is an F.R.S.-FNRS Senior Research Associate.
MNG is the ESA CHEOPS Project Scientist and Mission Representative, and as such also responsible for the Guest Observers (GO) Programme. MNG does not relay proprietary information between the GO and Guaranteed Time Observation (GTO) Programmes, and does not decide on the definition and target selection of the GTO Programme.
CHe acknowledges support from the European Union H2020-MSCA-ITN-2019 under Grant Agreement no. 860470 (CHAMELEON).
KGI is the ESA CHEOPS Project Scientist and is responsible for the ESA CHEOPS Guest Observers Programme. She does not participate in, or contribute to, the definition of the Guaranteed Time Programme of the CHEOPS mission through which observations described in this paper have been taken, nor to any aspect of target selection for the programme.
K.W.F.L. was supported by Deutsche Forschungsgemeinschaft grants RA714/14-1 within the DFG Schwerpunkt SPP 1992, Exploring the Diversity of Extrasolar Planets.
This work was granted access to the HPC resources of MesoPSL financed by the Region Ile de France and the project Equip@Meso (reference ANR-10-EQPX-29-01) of the programme Investissements d'Avenir supervised by the Agence Nationale pour la Recherche.
ML acknowledges support of the Swiss National Science Foundation under grant number PCEFP2_194576.
PM acknowledges support from STFC research grant number ST/R000638/1.
This work was also partially supported by a grant from the Simons Foundation (PI Queloz, grant number 327127).
NCSa acknowledges funding by the European Union (ERC, FIERCE, 101052347). Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Research Council. Neither the European Union nor the granting authority can be held responsible for them.
GyMSz acknowledges the support of the Hungarian National Research, Development and Innovation Office (NKFIH) grant K-125015, a PRODEX Experiment Agreement No. 4000137122, the Lendület LP2018-7/2021 grant of the Hungarian Academy of Sciences and the support of the city of Szombathely.
V.V.G. is an F.R.S-FNRS Research Associate.
JV acknowledges support from the Swiss National Science Foundation (SNSF) under grant PZ00P2_208945.
NAW acknowledges UKSA grant ST/R004838/1.
Ple acknowledges that this publication was produced while attending the PhD program in Space Science and Technology at the University of Trento, Cycle XXXVIII, with the support of a scholarship co-financed by the Ministerial Decree no. 351 of 9th April 2022, based on the NRRP - funded by the European Union - NextGenerationEU - Mission 4 "Education and Research", Component 2 "From Research to Business", Investment 3.3 – CUP E63C22001340001.
E.V. acknowledges support from the ’DISCOBOLO’ project funded by the Spanish Ministerio de Ciencia, Innovación y Universidades under grant PID2021-127289NB-I00.
§ ADDITIONAL PLOTS AND TABLES
A Deployed Online Reinforcement Learning Algorithm In An Oral Health Clinical Trial
Anna L. Trella, Kelly W. Zhang, Hinal Jajal, Inbal Nahum-Shani, Vivek Shetty, Finale Doshi-Velez, Susan A. Murphy
arXiv:2409.02069v1 [cs.AI, cs.HC], 3 September 2024
===========================================================================
§ ABSTRACT
Dental disease is a prevalent chronic condition associated with substantial financial burden, personal suffering, and increased risk of systemic diseases. Despite widespread recommendations for twice-daily tooth brushing, adherence to recommended oral self-care behaviors remains sub-optimal due to factors such as forgetfulness and disengagement. To address this, we developed Oralytics, a mHealth intervention system designed to complement clinician-delivered preventative care for marginalized individuals at risk for dental disease. Oralytics incorporates an online reinforcement learning algorithm to determine optimal times to deliver intervention prompts that encourage oral self-care behaviors. We have deployed Oralytics in a registered clinical trial. The deployment required careful design to manage challenges specific to the clinical trials setting in the U.S. In this paper, we (1) highlight key design decisions of the RL algorithm that address these challenges and (2) conduct a re-sampling analysis to evaluate algorithm design decisions.
A second phase (randomized controlled trial) of Oralytics is planned to start in spring 2025.
§ INTRODUCTION
Dental disease is a prevalent chronic condition in the United States with significant preventable morbidity and economic impact <cit.>. Beyond its associated pain and substantial treatment costs, dental disease is linked to systemic health complications such as diabetes, cardiovascular disease, respiratory illness, stroke, and adverse birth outcomes. To prevent dental disease, the American Dental Association recommends systematic, twice-a-day tooth brushing for two minutes <cit.>.
However, patient adherence to this simple regimen is often compromised by factors such as forgetfulness and lack of motivation <cit.>.
mHealth interventions and tools can be leveraged to prompt individuals to engage in high-quality oral self-care behaviors (OSCB) between clinic visits. This work focuses on Oralytics, a mHealth intervention designed to improve OSCB for individuals at risk for dental disease.
The intervention involves (i) a Bluetooth-enabled toothbrush to collect sensor data on an individual's brushing quality, and (ii) a smartphone application (app) to deliver treatments, one of which is engagement prompts to encourage individuals to remain engaged in improving their OSCB.
See Figure <ref> for screenshots from the Oralytics app.
Oralytics includes multiple intervention components one of which is an online reinforcement learning (RL) algorithm which is used to learn, online, a policy specifying when it is most useful to deliver engagement prompts.
The algorithm should avoid excessive burden and habituation by only sending prompts at times they are likely to be effective.
Before a mHealth intervention is integrated into broader healthcare programs, the intervention is deployed and its effectiveness tested in a clinical trial. However, the clinical trial setting introduces unique challenges for the design and deployment of online RL algorithms as part of the intervention.
Including an online (as opposed to batch) reinforcement learning algorithm as part of the mHealth intervention is attractive when there are societal or other health trends that result in cohort effects (Rothman, Greenland, and Lash, 2008, Modern Epidemiology, 3rd ed.), or when there is otherwise an absence of high-quality batch data. In these cases, offline algorithms that learn from such batch data can result in poorly performing policies when deployed.
§.§ Design & Deployment Challenges in Clinical Trials
First, clinical trials, conducted with US National Institutes of Health (NIH) funding, must adhere to the NIH policy on the dissemination of NIH-funded clinical trials <cit.>.
This policy requires pre-registration of the trial in order to enhance transparency and replicability of trial results (Challenge 1). The design of the health intervention, including any online algorithms that are components of the intervention, must be pre-registered.
Indeed, changing any of the intervention components, including the online algorithm, during the conduct of the trial, makes it difficult for other scientists to know exactly what intervention was implemented and to replicate any results.
Thus to enhance transparency and replicability, the online algorithm should be autonomous. That is, the potential for major ad hoc changes that alter the pre-registered protocol should be minimized.
Second, while the online algorithm learns and updates the policy using incoming data throughout the trial, the algorithm has, in total, a limited amount of data to learn from.
By design, each individual only receives the mHealth intervention for a limited amount of time.
Therefore, the RL algorithm only has data on a limited number of decision times for an individual.
This poses a challenge to the RL algorithm's ability to learn based on a small amount of data collected per individual (Challenge 2).
§.§ Contributions
In this paper, we discuss how we addressed these deployment challenges in the design of an online RL algorithm, a generalization of a Thompson-sampling contextual bandit (Section <ref>), as part of the Oralytics intervention to improve OSCB for individuals at risk for dental disease. The RL algorithm (1) learns online from incoming data and (2) makes decisions for individuals in real time as part of the intervention. Recently, the Oralytics intervention was deployed in a registered clinical trial <cit.>. Key contributions of our paper are:
* We highlight key design decisions made for the Oralytics algorithm that deals with deploying an online RL algorithm as part of an intervention in a clinical trial (Section <ref>).
* We conduct a re-sampling analysis[All code used in this paper can be found in GitHub: https://github.com/StatisticalReinforcementLearningLab/oralytics-post-deployment-analysis/tree/mainhere] using data collected during the trial to (1) re-evaluate design decisions made and (2) investigate algorithm behavior (Section <ref>).
Further details about the clinical trial and algorithm design decisions can be found in <cit.>.
§ RELATED WORK
AI in Clinical Trials
A large body of work exists that incorporates AI algorithms to conduct clinical trials. AI can improve trial execution by automating cohort selection <cit.> and participant eligibility screening <cit.>.
Prediction algorithms can be used to assist in maintaining retention by identifying participants who are at high risk of dropping out of the trial <cit.>.
Recently, generative models have been considered to create digital twins <cit.>
of participants to predict participant outcomes or simulate other behaviors. Online algorithms in adaptive trial design <cit.> can lead to more efficient trials (e.g., time and money saved, fewer participants required) by modifying the experiment design in real-time (e.g., abandoning treatments or redefining sample size).
The above algorithms are part of the clinical trial design (experimental design)
while in our setting, the RL algorithm is a component of the intervention.
Online RL Algorithms in mHealth
Many online RL algorithms have been included in mHealth interventions deployed in a clinical trial. For example, online RL was used to optimize the delivery of prompts to encourage physical activity <cit.>,
manage weight loss <cit.>, improve medical adherence <cit.>,
assist with pain management <cit.>, reduce cannabis use amongst emerging adults <cit.>, and help people quit smoking <cit.>.
There are also deployments of online RL in mHealth settings that are not formally registered clinical trials <cit.>.
Many of these papers focus on algorithm design before deployment.
Some authors <cit.>, compare outcomes between groups of individuals where each group is assigned a different algorithm or policy.
Here we use a different analysis to inform further design decisions. Our analysis focuses on learning across time by a single online RL algorithm.
§ PRELIMINARIES
§.§ Oralytics Clinical Trial
The Oralytics clinical trial (Table <ref>) enrolled participants recruited from UCLA dental clinics in Los Angeles[
The study protocol and consent procedures have been approved by the University of California, Los Angeles Institutional Review Board (IRB#21–001471) and the trial was registered on ClinicalTrials.gov (NCT05624489).]. Participants were recruited incrementally at about 5 participants every 2 weeks.
All participants received an electric toothbrush with WiFi and Bluetooth connectivity and integrated sensors. Additionally, they were instructed to download the Oralytics app on their smartphones.
The RL algorithm dynamically decided whether to deliver an engagement prompt for each participant twice daily, with delivery within an hour preceding self-reported morning and evening brushing times.
The clinical trial began in September 2023 and was completed in July 2024. A total of 79 participants were enrolled over approximately 20 weeks, with each participant contributing data for 70 days.
However, due to an engineering issue, data for 7 out of the 79 participants was incorrectly saved and thus their data is unviable. Therefore, we restrict our analyses (in Section <ref>) to data from the 72 unaffected participants. For further details concerning the trial design, see <cit.> and <cit.>.
§.§ Online Reinforcement Learning
Here we consider a setting involving sequential decision-making for N participants, each with T decision times.
Let subscript i ∈ [1:N] denote the participant and subscript t∈ [1:T] denote the decision time. S_i, t denotes the current state of the participant. At each decision time t, the algorithm selects action A_i, t after observing S_i, t, based on its policy π_θ(s), which is a function, parameterized by θ, that takes a state s as input. After executing action A_i, t, the algorithm receives a reward R_i, t. In contrast to batch RL, where policy parameters are learned using previous batch data and fixed for all t ∈ [1: T], online RL learns the policy parameters with incoming data. At each update time τ, the algorithm updates parameters θ using the entire history of state, action, and reward tuples observed thus far, ℋ_τ. The goal of the algorithm is to maximize the average reward across all participants and decision times, 𝔼[ 1/(N· T)∑_i = 1^N ∑_t = 1^T R_i, t].
§.§ Oralytics RL Algorithm
The Oralytics RL algorithm is a generalization of a Thompson-Sampling contextual bandit algorithm <cit.>. The algorithm makes decisions at each of the T = 140 total decision times (2 every day over 70 days) on each participant.
The algorithm state (Table <ref>) includes current context information about the participant collected via the toothbrush and app (e.g., participant OSCB over the past week and prior day app engagement).
The RL algorithm makes decisions regarding whether or not to deliver an engagement prompt to each participant twice daily, one hour before a participant's self-reported usual morning and evening brushing times. Thus the action space is binary, with A_i, t=1 denoting delivery of the prompt and A_i, t=0, otherwise.
The reward, R_i, t, is constructed based on the proximal health outcome OSCB, Q_i, t, and a tuned approximation to the effects of actions on future states and rewards. This reward design allows a contextual bandit algorithm to approximate an RL algorithm that models the environment as a Markov decision process.
See <cit.> for more details on the reward designed for Oralytics.
As part of the policy, contextual bandit algorithms use a model of the mean reward given state s and action a, parameterized by θ: r_θ(s, a). We refer to this as the reward model. While one could learn and use a reward model per participant i, in Oralytics, we ran a full-pooling algorithm (Section <ref>) that learns and uses a single reward model shared between all participants in the trial instead. In Oralytics, the reward model r_θ(s, a) is a linear regression model as in <cit.> (See Appendix <ref>). The Thompson-Sampling algorithm is Bayesian and thus the algorithm has a prior distribution θ∼𝒩(μ^prior, Σ^prior) assigned to parameter θ. See Appendix <ref> for the prior designed for Oralytics.
The RL algorithm updates the posterior distribution for parameter θ once a week on Sunday morning using all participants' data observed up to that time; denote these weekly update times by τ.
Let n_τ be the number of participants that have started the trial before update time τ, and t(i, τ) be a function that takes in participant i and current update time τ and outputs the last decision time for that participant. Then to update posterior parameters μ^post_τ, Σ^post_τ, we use the history ℋ_τ := {(S_i, t', A_i, t', R_i, t')}_i = 1, t' = 1^n_τ, t(i, τ).
Thus the RL algorithm is a full-pooling algorithm that pools observed data, ℋ_τ from all participants to update posterior parameters μ^post_τ, Σ^post_τ of θ.
Notice that due to incremental recruitment of trial participants, at a particular update time τ, not every participant will be on the same decision time index t and the history will not necessarily involve all N participants' data.
To select actions, the RL algorithm uses the latest reward model to model the advantage, or the difference in expected rewards, of action 1 over action 0 for a given state s.
Since the reward model for Oralytics is linear, the model of the advantage is also linear:
r_θ(s, a=1) - r_θ(s, a=0) = f(s)^⊤β
f(s) denotes the features used in the algorithm’s model for the advantage (See Table <ref>), and β is the subset of parameters of θ corresponding to the advantage.
For convenience, let τ = τ(i, t) be the last update time corresponding to the current reward model used for participant i at decision time t. The RL algorithm micro-randomizes actions using ℙ(f(s)^⊤β > 0 | s = S_i, t, ℋ_τ) and therefore forms action-selection probability π_i, t:
π_i,t := 𝔼_β∼𝒩(μ^β_τ, Σ^β_τ )[ρ(f(s)^⊤β) | s = S_i, t, ℋ_τ]
where μ^β_τ and Σ^β_τ are the sub-vector and sub-matrix of μ^post_τ and Σ^post_τ corresponding to advantage parameter β.
Notice that while classical posterior sampling uses an indicator function for ρ, the Oralytics RL algorithm instead uses a generalized logistic function for ρ to ensure that policies formed by the algorithm concentrate and enhance the replicability of the algorithm <cit.>.
Finally, the RL algorithm samples A_i, t from a Bernoulli distribution with success probability π_i, t:
A_i, t|π_i, t∼Bern(π_i, t)
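For illustration, the sketch below shows how the smoothed posterior-sampling probability above can be estimated by Monte Carlo in Python. The clip range, slope, and center of the generalized logistic, the number of posterior draws, and the function names are our own illustrative placeholders, not the constants or code of the released Oralytics implementation.

```python
import numpy as np

def generalized_logistic(x, p_min=0.2, p_max=0.8, slope=1.0, center=0.0):
    # Smooth surrogate for the indicator 1{x > 0}, clipped to [p_min, p_max].
    # All constants here are illustrative placeholders, not the deployed values.
    return p_min + (p_max - p_min) / (1.0 + np.exp(-slope * (x - center)))

def action_selection_probability(f_s, mu_beta, sigma_beta, n_draws=10_000, rng=None):
    """Monte Carlo estimate of pi_{i,t} = E[rho(f(s)^T beta)] under the
    current posterior beta ~ N(mu_beta, sigma_beta)."""
    rng = np.random.default_rng() if rng is None else rng
    betas = rng.multivariate_normal(mu_beta, sigma_beta, size=n_draws)
    return float(np.mean(generalized_logistic(betas @ f_s)))

def select_action(f_s, mu_beta, sigma_beta, rng=None):
    # A_{i,t} ~ Bernoulli(pi_{i,t})
    rng = np.random.default_rng() if rng is None else rng
    pi = action_selection_probability(f_s, mu_beta, sigma_beta, rng=rng)
    return int(rng.random() < pi), pi
```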
§ DEPLOYING ORALYTICS
§.§ Oralytics Pipeline
Software Components
Multiple software components form the Oralytics software service. These components are (1) the main controller, (2) the Oralytics app, and (3) the RL service. The main controller is the central coordinator of the Oralytics software system that handles the logic for (a) enrolling participants, (b) pulling and formatting sensor data (i.e., brushing and app analytics data), and (c) communicating with the mobile app to schedule prompts for every participant. The Oralytics app is downloaded onto each participant's smartphone at the start of the trial. The app is responsible for (a) obtaining prompt schedules for the participant and scheduling them in the smartphone's internal notification system and (b) providing app analytics data to the main controller. The RL service is the software service supporting the RL algorithm to function properly and interact with the main controller. The RL service executes three main processes: (1) batch data update, (2) action selection, and (3) policy update.
The main controller and RL service were deployed on infrastructure hosted on Amazon Web Services (AWS). Specifically, the RL service was wrapped as an application using Flask. A daily scheduler job first triggered the batch data update procedure and then the action-selection procedure and a weekly scheduler job triggered the policy update procedure. The Oralytics app was developed for both Android and iOS smartphones.
End-to-End Pipeline Description
We now describe interactions between clinical staff with components of the Oralytics software system and between software components (See Figure <ref>). The Oralytics clinical trial staff recruits and registers participants (Step 1). The registration process consists of the participant downloading the Oralytics app and staff verifying that the participant had at least one successful brushing session from the toothbrush. Successfully registered participants are then entered into the participant enrollment database maintained by the main controller. The main controller maintains this database to track participants entering and completing the trial (i.e., at 70 days).
Every morning, a daily scheduler job first triggers the batch data update process and then the action-selection process (Step 2). The RL service begins by fetching the list of participants currently in the trial (Step 3) and the latest sensor data (i.e., brushing and app analytics data) for current participants (Step 4) from the main controller.
Notice that this data contains rewards to be associated with previous decision times as well as current state information.
Rewards are matched with the correct state and action and these state, action, and reward tuples corresponding to previous decision times are added to the RL service's internal batch data table (Step 5). During the action-selection process, the RL service first uses the latest sensor data to form states for all current participants (Step 6). Then, the RL service uses these states and the current policy to create a new schedule of actions for all current participants (Step 7).
These states and actions are saved to the RL internal database to be added to the batch data table during Step 5, the next morning.
All new schedules of actions are pushed to the main controller and processed to be fetched (Step 8). When a participant opens their Oralytics app, the app fetches the new prompt schedule from the main controller and schedules prompts as notification messages in the smartphone's internal notification system (Step 9).
Every Sunday morning, a weekly scheduler job triggers the policy update process (Step 10). During this process, the RL system takes all data points (i.e., state, action, and reward tuples) in the batch data table and updates the policy (Step 11). Recall that the Oralytics RL algorithm is a Thompson sampling algorithm which means policy updates involve updating the posterior distribution of the reward model parameters (Section <ref>). The newly updated posterior distribution for the parameters is used to select treatments for all participants and all decision times for that week until the next update time.
Every morning, the Oralytics pipeline (Steps 6-8) produces a full 70-day schedule of treatment actions for each participant starting at the current decision time (as opposed to a single action for the current decision time). The schedule of actions is a key design decision for the Oralytics system that enhances the transparency and replicability of the trial (Challenge 1).
Specifically, this design decision mitigates networking or engineering issues if: (1) a new schedule of actions fails to be constructed or (2) a participant does not obtain the most recent schedule of actions. We further see the impact of this design decision during the trial in Section <ref>.
§.§ Design Decisions To Enhance Autonomy and Thus Replicability
A primary challenge in our setting is the high standard for replicability; as a result, the algorithm and its components should be autonomous (Challenge 1). However, unintended engineering or networking issues could arise during the trial. These issues could cause the intended RL system to function incorrectly, compromising (1) participant experience and (2) the quality of data for post-trial analyses.
One way Oralytics dealt with this constraint is by implementing fallback methods. Fallback methods are pre-specified backup procedures, for action selection or updating, which are executed when an issue occurs. Fallback methods are part of a larger automated monitoring system <cit.>
that detects and addresses issues impacting or caused by the RL algorithm in real-time. Oralytics employed the following fallback methods:
* if any issues arose with a participant not obtaining the most recent schedule of actions, then the action for the current decision time defaults to the action for that time from the last schedule pushed to the participant's app.
* if any issues arose with constructing the schedule of actions, then the RL service forms a schedule of actions where each action is selected with probability 0.5 (i.e., does not use the policy nor state to select action).
* for updating, if issues arise (e.g., data is malformed or unavailable), then the algorithm stores the data point, but does not add that data point to the batch data used to update parameters.
§.§ Design Decisions Dealing with Limited Decision Times Per Individual
Each participant is in the Oralytics trial for a total of 140 decision times, which results in a small amount of data collected per participant. Nonetheless, the RL algorithm needs to learn and select quality actions based on data from a limited number of decision times per participant (Challenge 2).
A design decision to deal with limited data is full-pooling. Pooling refers to clustering participants and pooling all data within a cluster to update the cluster's shared policy parameters. Full pooling refers to pooling all N participants' data together to learn a single shared policy.
Although participants are likely to be heterogeneous (reward functions are likely different), we chose a full-pooling algorithm like in <cit.> to trade off bias and variance in the high-noise environment of Oralytics. These pooling algorithms can reduce noise and speed up learning.
We finalized the full-pooling decision after conducting experiments comparing no pooling (i.e., one policy per participant that only uses that participant's data to update) and full pooling. We expected the no-pooling algorithm to learn a more personalized policy for each participant later in the trial if there were enough decision times, but the algorithm is unlikely to perform well when there is little data for that participant. Full pooling may learn well for a participant's earlier decision times because it can take advantage of other participants' data, but may not personalize as well as a no-pooling algorithm for later decision times, especially if participants are heterogeneous. In extensive experiments, using simulation environments based on data from prior studies, we found that full-pooling algorithms achieved higher average OSCB than no-pooling algorithms across all variants of the simulation environment (See Table 5 in <cit.>).
§ APPLICATION PAYOFF
We conduct simulation and re-sampling analyses using data collected during the trial to evaluate design decisions made for our deployed algorithm. We focus on the following questions:
* Was it worth it to invest in fallback methods? (Section <ref>)
* Was it worth it to run a full-pooling algorithm? (Section <ref>)
* Despite all these challenges, did the algorithm learn? (Section <ref>)
§.§ Simulation Environment
One way to answer questions 2 and 3 is through a simulation environment built using data collected during the Oralytics trial. The purpose of the simulation environment is to re-simulate the trial by generating participant states and outcomes close to the distribution of the data observed in the real trial. This way, we can (1) consider counterfactual decisions (to answer Q2) and (2) have a mechanism for resampling to assess if evidence of learning by the RL algorithm is due to random chance and thus spurious (to answer Q3).
For each of the N=72 participants with viable data from the trial, we fit a model which is used to simulate OSCB outcomes Q_i, t given current state S_i, t and action A_i, t. We also modeled participant app opening behavior and simulated participants starting the trial using the exact date the participant was recruited in the real trial.
See Appendix <ref>
for full details on the simulation environment.
§.§ Was it worth it to invest in fallback methods?
During the Oralytics trial, various engineering or networking issues (Table <ref>) occurred that impacted the RL service's intended functionality. These issues were automatically caught and the pre-specified fallback method was executed. Figure <ref> shows that all 3 types of fallback methods were executed over the Oralytics trial.
Notice that fallback method (i), made possible by our design decision to produce a schedule of actions instead of just a single action, was executed 4 times during the trial and mitigated issues for more participants than any other method.
While defining and implementing fallback methods may take extra effort by the software engineering team, this is a worthwhile investment. Without fallback methods, the various issues that arose during the trial would have required ad hoc changes, to the RL algorithm reducing autonomy and thus replicability of the intervention.
§.§ Was it worth it to pool?
Due to the small number of decision points (T=140) per participant, the RL algorithm was a full-pooling algorithm (i.e., used a single reward model for all participants and updated using all participants' data). Even though before deployment we anticipated that trial participants would be heterogeneous (i.e., have different outcomes to the intervention), we still believed that full-pooling would learn better over a no-pooling or participant-specific algorithm. Here, we re-evaluate this decision.
Experiment Setup
Using the simulation environment (Section <ref>), with all other design decisions fixed as deployed in the Oralytics trial, we re-ran the trial with an algorithm that performs full pooling and with one that performs no pooling, over 500 Monte Carlo repetitions. We evaluate algorithms based on the following two metrics (a computational sketch follows the list):
* average of participants' average (across time) OSCB:
1/N∑_i = 1^N 1/T∑_t = 1^T Q_i, t
* first quartile (25th-percentile) of participants' average (across time) OSCB:
First Quartile({1/T∑_t = 1^T Q_i, t}_i = 1^N)
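Both metrics can be read directly off the matrix of simulated OSCB values; a minimal sketch in Python, where the array layout is an assumption for illustration:

```python
import numpy as np

def evaluation_metrics(oscb):
    """oscb: N x T array of simulated OSCB values Q_{i,t} in seconds."""
    participant_means = oscb.mean(axis=1)  # average OSCB per participant
    return participant_means.mean(), np.percentile(participant_means, 25)
```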
Results As seen in Table <ref>, the average and first quartile OSCB achieved by a full-pooling algorithm are slightly higher than those achieved by a no-pooling algorithm. These results are congruent with the results for experiments conducted before deployment (Section <ref>). Despite the heterogeneity of trial participants, it was worth it to run a full-pooling algorithm instead of a no-pooling algorithm.
§.§ Did We Learn?
Lastly, we consider if the algorithm was able to learn despite the challenges of the clinical trial setting. We define learning as the RL algorithm successfully learning the advantage of action a = 1 over a = 0 (i.e., sending an engagement prompt over not sending one) in a particular state s. Recall that the Oralytics RL algorithm maintains a model of this advantage (Equation <ref>) to select actions via posterior sampling and updates the posterior distribution of the advantage model parameters throughout the trial.
One way to determine learning is to visualize the standardized predicted advantage in state s throughout the trial (i.e., using learned posterior parameters at different update times τ).
The standardized predicted advantage in state s using the policy updated at time τ is:
predicted_adv(τ, s) := μ^β⊤_τ f(s)/√(f(s)^⊤Σ^β_τ f(s))
μ^β_τ and Σ^β_τ are
the posterior parameters of advantage parameter β from Equation <ref>, and f(s) denotes the features used in the algorithm's model of the advantage (Table <ref>).
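Computing this quantity from the posterior maintained by the algorithm is straightforward; the sketch below is illustrative and assumes the posterior mean and covariance of β at update time τ are available as arrays.

```python
import numpy as np

def predicted_advantage(mu_beta, sigma_beta, f_s):
    # Posterior mean of f(s)^T beta at update time tau, standardized by its
    # posterior standard deviation.
    return float(mu_beta @ f_s / np.sqrt(f_s @ sigma_beta @ f_s))
```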
For example, consider Figure <ref>. Using posterior parameters μ^β_τ, Σ^β_τ learned during the Oralytics trial, we plot the standardized predicted advantage over update times τ in a state where it is
(1) morning, (2) the participant's exponential average OSCB in the past week is about 28 seconds (poor brushing), (3) the participant received prompts 45% of the times in the past week, and (4) the participant did not open the app the prior day.
Since this value is trending more positive, it appears that the algorithm learned that it is effective to send an engagement prompt for participants in this particular state. In the following section, we assess whether this pattern is evidence that the RL algorithm learned or is purely accidental due to the stochasticity in action selection (i.e., posterior sampling).
Experiment Setup
We use the re-sampling-based parametric method developed in <cit.> to assess if the evidence of learning could have occurred by random chance. We use the simulation environment built using the Oralytics trial data (Section <ref>).
For each state of interest s, we run the following simulation.
(i) We rerun the RL algorithm in a variant of the simulation environment in which there is no advantage of action 1 over action 0 in state s (See Appendix <ref>) producing posterior means and variances, μ^β_τ and Σ^β_τ.
Using μ^β_τ and Σ^β_τ, we calculate standardized predicted advantages for each update time τ. (ii) We compare the standardized predicted advantage (Equation <ref>) at each update time from the real trial
with the standardized predicted advantage from the simulated trials in (i).
We consider a total of 16 different states of interest. To create these 16 states, we consider different combinations of possible values for algorithm advantage features f(s) (Table <ref>). Features (1) and (4) are binary so we consider both values {0, 1} for each. Features (2) and (3) are real-valued between [-1, 1], so we consider the first and third quartiles calculated from the Oralytics trial data.[For feature (2), -0.7 corresponds to an exponential average OSCB in the past week of 28 seconds and 0.1 corresponds to 100 seconds; for feature (3), -0.6 corresponds to the participant receiving prompts 20% of the time in the past week and -0.1 corresponds to 45%.]
Results
Key results are in Figure <ref> and additional plots are in Appendix <ref>.
Our results show that the Oralytics RL algorithm did indeed learn that sending a prompt is effective in some states and ineffective in others. This suggests that our state space design was a good choice because some state features helped the algorithm discern these states.
We highlight 3 interesting states in Figure <ref>:
* A state where the algorithm learned it is effective to send a prompt and the re-sampling indicates this evidence is real. The advantage features f(s) correspond to (1) evening, (2) the participant’s exponential average OSCB in the past week is about 28 seconds (poor brushing),
(3) the participant received prompts 20% of the time in the past week, and (4) the participant did not open the app the prior day.
* A state where the algorithm learned it is ineffective to send a prompt and the re-sampling indicates this evidence is real. The advantage features f(s) correspond to (1) morning, (2) the participant’s exponential average OSCB in the past week is about 100 seconds (almost ideal brushing),
(3) the participant received prompts 45% of the time in the past week, and (4) the participant opened the app the prior day.
* The state in Figure <ref> but the re-sampling method indicates the appearance of learning likely occurred by random chance.
For (a) and (b) the re-sampling method suggests that evidence of learning is real because predicted advantages using posterior parameters updated during the actual trial are trending away from the simulated predictive advantages from re-sampled posterior parameters in an environment where there truly is no advantage in state s. For (c), however, the re-sampling method suggests that the appearance of learning likely occurred by random chance because predicted advantages using posterior parameters updated during the actual trial are extremely similar to those from re-sampled posterior parameters in an environment where there truly is no advantage in state s.
§ DISCUSSION
We have deployed Oralytics, an online RL algorithm optimizing prompts to improve oral self-care behaviors.
As illustrated here, much is learned from the end-to-end development, deployment, and data analysis phases.
We share these insights by highlighting design decisions for the algorithm and software service and conducting a simulation and re-sampling analysis to re-evaluate these design decisions using data collected during the trial. Most interestingly, the re-sampling analysis provides evidence that the RL algorithm learned the advantage of one action over the other in certain states.
We hope these key lessons can be shared with other research teams interested in real-world design and deployment of online RL algorithms.
From a health science perspective, pre-specified, primary analyses <cit.> will occur, which is out of scope for this paper. The re-sampling analyses presented in this paper will inform design decisions for phase 2. The re-design of the RL algorithm for phase 2 of the Oralytics clinical trial is currently under development and phase 2 is anticipated to start in spring 2025.
§ ACKNOWLEDGMENTS
This research was funded by NIH grants IUG3DE028723, P50DA054039, P41EB028242, U01CA229437, UH3DE028723, and R01MH123804. SAM holds concurrent appointments at Harvard University and as an Amazon Scholar. This paper describes work performed at Harvard University and is not associated with Amazon.
§ ADDITIONAL ORALYTICS RL ALGORITHM FACTS
§.§ Algorithm State Space
S_i,t∈ℝ^d represents the ith participant's state at decision point t, where d is the number of variables describing the participant's state.
§.§.§ Baseline and Advantage State Features
Let f(S_i,t) ∈ℝ^5 denote the features used in the algorithm's model for both the baseline reward function and the advantage.
These features are:
* Time of Day (Morning/Evening) ∈{0, 1}
* B: Exponential Average of OSCB Over Past 7 Days (Normalized) ∈ [-1, 1]
* A: Exponential Average of Engagement Prompts Sent Over Past 7 Days (Normalized) ∈ [-1, 1]
* Prior Day App Engagement ∈{0, 1}
* Intercept Term =1
Feature 1 is 0 for morning and 1 for evening. Features <ref> and <ref> are B̅_i,t = c_γ∑_j = 1^14γ^j-1 Q_i, t - j and A̅_i,t = c_γ∑_j = 1^14γ^j-1 A_i, t - j respectively, where γ=13/14 and c_γ = (1 - γ)/(1 - γ^14). Recall that Q_i, t is the proximal outcome of OSCB and A_i,t is the treatment indicator. Feature 4 is 1 if the participant has opened the app in focus (i.e., not in the background) the prior day and 0 otherwise. Feature 5 is always 1. For full details on the design of the state space, see Section 2.7 in <cit.>.
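A sketch of how features 2 and 3 can be computed before normalization is given below; the helper name and the most-recent-first ordering of the input are illustrative assumptions rather than the released code.

```python
import numpy as np

GAMMA = 13 / 14
C_GAMMA = (1 - GAMMA) / (1 - GAMMA ** 14)

def exponential_average(past_values):
    """Discounted average over the past 14 decision times (7 days x 2).
    past_values[0] is the value at t-1, past_values[1] at t-2, and so on."""
    past_values = np.asarray(past_values[:14], dtype=float)
    weights = C_GAMMA * GAMMA ** np.arange(len(past_values))
    return float(weights @ past_values)
```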
§.§ Reward Model
The reward model (i.e., model of the mean reward given state s and action a) used in the Oralytics trial is a Bayesian linear regression model with action centering <cit.>:
r_θ(s, a) = f(s)^T α_0 + π f(s)^T α_1 + (a - π) f(s)^T β + ϵ
where θ = [α_0, α_1, β] are model parameters, π is the probability that the RL algorithm selects action a = 1 in state s
and ϵ∼𝒩(0, σ^2). We call the term f(S_i, t)^T β the advantage (i.e., advantage of selecting action 1 over action 0) and f(S_i, t)^T α_0 + π_i,t f(S_i, t)^T α_1 the baseline.
The priors are α_0∼𝒩(μ_α_0, Σ_α_0), α_1∼𝒩(μ_β, Σ_β), β∼𝒩(μ_β, Σ_β). Prior values for μ_α_0, Σ_α_0, μ_β, Σ_β, σ^2 are specified in Section <ref>. For full details on the design of the reward model, see Section 2.6 in <cit.>.
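Since the reward model is a Bayesian linear regression with a Gaussian prior and known noise variance, the weekly posterior update has a closed form. The sketch below assumes the standard conjugate Gaussian update over the pooled history; the data layout and function names are illustrative and not taken from the deployed code.

```python
import numpy as np

def design_row(f_s, action, pi):
    # phi(s, a) = [f(s), pi * f(s), (a - pi) * f(s)] for theta = [alpha_0, alpha_1, beta].
    return np.concatenate([f_s, pi * f_s, (action - pi) * f_s])

def update_posterior(history, mu_prior, sigma_prior, noise_var):
    """history: list of (f_s, action, pi, reward) tuples pooled over all
    participants observed up to the current update time."""
    Phi = np.stack([design_row(f_s, a, pi) for f_s, a, pi, _ in history])
    R = np.array([r for _, _, _, r in history])
    prior_precision = np.linalg.inv(sigma_prior)
    post_cov = np.linalg.inv(prior_precision + Phi.T @ Phi / noise_var)
    post_mean = post_cov @ (prior_precision @ mu_prior + Phi.T @ R / noise_var)
    return post_mean, post_cov
```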
§.§ Prior
Table <ref> shows the prior distribution values used by the RL algorithm in the Oralytics trial. For full details on how the prior was constructed, see Section 2.8 in <cit.>.
§ SIMULATION ENVIRONMENT
We created a simulation environment using the Oralytics trial data in order to replicate the trial under different true environments.
Although the trial ran with 79 participants, due to an engineering issue, data for 7 out of the 79
participants was incorrectly saved and thus their data is unviable. Therefore, the simulation environment is built off of data from the 72 unaffected participants.
Replications of the trial are useful to (1) re-evaluate design decisions that were made and (2) have a mechanism for resampling to assess if evidence of learning by the RL algorithm is due to random chance.
For each of the 72 participants with viable data from the Oralytics clinical trial, we use that participant's data to create a participant-environment model. We then re-simulate the Oralytics trial by generating participant states, the RL algorithm selecting actions for these 72 participants given their states, the participant-environment model generating health outcomes / rewards in response, and the RL algorithm updating using state, action, and reward data generated during simulation. To make the environment more realistic, we also replicate each participant being recruited incrementally and entering the trial by their real start date in the Oralytics trial and simulate update times on the same dates as when the RL algorithm updated in the real trial (i.e., weekly on Sundays).
§.§ Participant-Environment Model
In this section, we describe how we constructed the participant-environment models for each of the N = 72 participants in the Oralytics trial using that participant's data. Each participant-environment model has the following components:
* Outcome Generating Function (i.e., OSCB Q_i, t in seconds given state S_i, t and action A_i, t)
* App Engagement Behavior (i.e., the probability of the participant opening their app on any given day)
Environment State Features The
features used in the state space for each environment are
a superset of the algorithm state features f(S_i, t) (Appendix <ref>). g(S_i,t) ∈ℝ^7 denotes the super-set of features used in the environment model.
The features are:
* Time of Day (Morning/Evening) ∈{0, 1}
* B: Exponential Average of OSCB Over Past 7 Days (Normalized) ∈ [-1, 1]
* A: Exponential Average of Prompts Sent Over Past 7 Days (Normalized) ∈ [-1, 1]
* Prior Day App Engagement ∈{0, 1}
* Day of Week (Weekend / Weekday) ∈{0, 1}
* Days Since Participant Started the Trial (Normalized) ∈ [-1, 1]
* Intercept Term =1
Feature 5 is 0 for weekdays and 1 for weekends. Feature 6 refers to how many days the participant has been in the Oralytics trial (i.e., between 1 and 70) normalized to be between -1 and 1.
Outcome Generating Function
The outcome generating function is a function that generates OSCB Q_i, t in seconds given current state S_i, t and action A_i, t. We use a zero-inflated Poisson to model each participant's outcome generating process because of the zero-inflated nature of OSCB found in previous data sets and data collected in the Oralytics trial. Each participant's outcome generating function is:
Z ∼Bernoulli(1 - sigmoid( g(S_i, t)^⊤ w_i,b - A_i, t·max[ Δ_i,B^⊤ g(S_i, t), 0 ] ) )
S ∼Poisson( exp( g(S_i, t)^⊤ w_i,p + A_i, t·max[ Δ_i,N^⊤ g(S_i, t), 0 ] ) )
Q_i, t = ZS
where g(S_i, t)^⊤ w_i,b,g(S_i, t)^⊤ w_i,p are called baseline (aka when A_i,t=0) models with
w_i,b, w_i,p as participant-specific baseline weight vectors, max[ Δ_i,B^⊤ g(S_i, t), 0 ], max[ Δ_i,N^⊤ g(S_i, t), 0 ] are called advantage models, with Δ_i,B, Δ_i,N as participant-specific advantage (or treatment effect) weight vectors. g(S_i, t) is described in Appendix <ref>, and sigmoid(x) = 1/1 + e^-x.
The outcome generating function can be interpreted in two components: (1) the Bernoulli outcome Z models the participant's intent to brush given state S_i, t and action A_i, t and (2) the Poisson outcome S models the participant's OSCB value in seconds when they intend to brush, given state S_i, t and action A_i, t. Notice that the models for Z and S currently require the advantage/treatment effect of OSCB Q_i, t to be non-negative. Otherwise, sending an engagement prompt would yield a lower OSCB value (i.e., models participant brushing worse) than not sending one, which was deemed nonsensical in this mHealth setting.
Weights w_i,b, w_i,p, Δ_i,B, Δ_i,N for each participant's outcome generating function are fit using that participant's state, action, and OSCB data from the Oralytics trial. We fit the function using MAP with priors w_i,b, w_i,p, Δ_i,B, Δ_i,N∼𝒩(0, I) as a form of regularization because we have sparse data for each participant. Finalized weight values were chosen by running random restarts and selecting the weights with the highest log posterior density. See Appendix <ref> for metrics calculated to verify the quality of each participant's outcome generating function.
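For concreteness, one draw from the fitted zero-inflated Poisson model above can be simulated as sketched below; the function itself and the argument ordering are illustrative, but the two components follow the displayed equations directly.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def simulate_oscb(g_s, action, w_b, w_p, delta_b, delta_n, rng=None):
    """Draw one OSCB value Q_{i,t} (seconds) given environment features g(s)
    and binary action A_{i,t}."""
    rng = np.random.default_rng() if rng is None else rng
    # Intent-to-brush component: the prompt can only reduce the zero-inflation.
    p_brush = 1.0 - sigmoid(g_s @ w_b - action * max(delta_b @ g_s, 0.0))
    z = rng.binomial(1, p_brush)
    # Duration component: the prompt can only increase the Poisson rate.
    s = rng.poisson(np.exp(g_s @ w_p + action * max(delta_n @ g_s, 0.0)))
    return z * s
```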
App Engagement Behavior
We simulate participant app engagement behavior using that participant's app opening data from the Oralytics trial. Recall that app engagement behavior is used in the state for both the environment and the algorithm. More specifically, we define app engagement as the participant opening their app and the app is in focus and not in the background. Using this app opening data, we calculate p^app_i, the proportion of days that the participant opened the app during the Oralytics trial (i.e., number of days the participant opened the app in focus divided by 70, the total number of days a participant is in the trial for). During simulation, at the end of each day, we sample from a Bernoulli distribution with probability p^app_i for every participant i currently in the simulated trial.
§.§ Assessing the Quality of the Outcome Generating Functions
Our goal is to have the simulation environment replicate outcomes (i.e., OSCB) as closely as possible to the data observed in the real Oralytics trial. To verify this, we compute various metrics (defined in the following section) comparing how close the outcome data generated by the simulation environment is to the data observed in the real trial. Table <ref> shows this comparison on various outcome metrics.
Table <ref> shows various error values of simulated OSCB with OSCB observed in the trial.
For both tables, we report the average and standard errors of the metric across the 500 Monte Carlo simulations and compare with the value of the metric for the Oralytics trial data. Figure <ref> shows comparisons of outcome metrics across trial participants.
§.§.§ Notation
𝕀{·} denotes the indicator function. Let Var({X_k}_k = 1^K) represent the empirical variance of X_1,...,X_K.
§.§.§ Metric Definitions and Formulas
Recall that N=72 is the number of participants and T=140 is the total number of decision times that the participant produces data for in the trial. We consider the following metrics and compare the metric on the real data with data generated by the simulation environment.
* Proportion of Decision Times with OCSB = 0:
∑_i=1^N∑_t=1^T𝕀{Q_i,t = 0}/N × T
* Average of Average Non-zero Participant OSCB:
1/N∑_i=1^NQ̅_i^non-zero
where
Q̅_i^non-zero = ∑_t=1^T Q_i,t·𝕀{Q_i,t > 0}/∑_t=1^T𝕀{Q_i,t > 0}
* Average Non-zero OSCB in Trial:
1/∑_i = 1^N ∑_t = 1^T 𝕀{Q_i, t > 0}∑_i = 1^N ∑_t = 1^T Q_i, t·𝕀{Q_i, t > 0}
* Variance of Average Non-zero Participant OSCB:
Var({Q̅_i^non-zero}_i = 1^N)
where
Q̅_i^non-zero = ∑_t=1^T Q_i,t·𝕀{Q_i,t > 0}/∑_t=1^T𝕀{Q_i,t > 0}
* Variance of Non-zero OSCB in Trial:
Var({Q_i, t : Q_i, t > 0}_i = 1, t = 1^N, T)
* Variance of Average Participant OCSB:
Var({Q̅_i}_i = 1^N)
where Q̅_i = 1/T∑_t = 1^T Q_i, t is the average OSCB for participant i
* Average of Variances of Participant OSCB:
1/N∑_i=1^NVar({Q_i, t}_t = 1^T)
We also compute the following error metrics. We use Q̂_i, t to denote the simulated OSCB and Q_i, t to denote the corresponding OSCB value from the Oralytics trial data.
* Mean Squared Error:
1/N× T∑_i=1^N∑_t=1^T(Q̂_i,t - Q_i,t)^2
* Root Mean Squared Error:
√(1/N× T∑_i=1^N∑_t=1^T(Q̂_i,t - Q_i,t)^2)
* Mean Absolute Error:
1/N× T∑_i=1^N∑_t=1^T|Q̂_i,t - Q_i,t|
§.§ Environment Variants for Re-sampling Method
In this section, we discuss how we formed variants of the simulation environment used in the re-sampling method from Section <ref>. We create a variant for every state s of interest corresponding to algorithm advantage features f(s) and environment advantage features g(s). In each variant, outcomes (i.e., OSCB Q_i, t) and therefore rewards, are generated so that there is no advantage of action 1 over action 0 in the particular state s.
To do this, recall that we fit an outcome generating function (Equation <ref>) for each of the N = 72 participants in the trial. Each participant i's outcome generating function has advantage weight vectors Δ_i,B, Δ_i,N that interact with the environment advantage state features g(s). Instead of using Δ_i,B, Δ_i,N fit using that participant's trial data, we use projections proj Δ_i,B, proj Δ_i,N of Δ_i,B, Δ_i,N that have two key properties:
* for the current state of interest s, on average they generate treatment effect values that are 0 in state s with algorithm state features f(s) (on average across all feature values for features in g(s) that are not in f(s))
* for other states s' ≠ s, they generate treatment effect values g(s')^⊤proj Δ_i,B, g(s')^⊤proj Δ_i,N close to the treatment effect values using the original advantage weight vectors g(s')^⊤Δ_i,B, g(s')^⊤Δ_i,N
To find proj Δ_i,B, proj Δ_i,N that achieve both properties, we use the SciPy optimize API[Documentation here: https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize.html] to minimize the following constrained optimization problem:
min_proj Δ1/K∑_k = 1^K (g(s')_k^⊤proj Δ - g(s')_k^⊤Δ)^2
subject to: g̅(s)^⊤proj Δ = 0
{g(s')_k}_k = 1^K denotes a set of states we constructed that represents a grid of values that g(s') could take.
g̅(s) has the same state feature values as g(s) except the “Day of Week" and “Days Since Participant Started the Trial (Normalized)" features are replaced with fixed mean values 2/7 and 0. The objective function is to achieve property 2 and the constraint is to achieve property 1.
We ran the constrained optimization with Δ = Δ_i, B and Δ_i, N to get proj Δ_i,B, proj Δ_i,N, for all participants i. All participants in this variant of the simulation environment produce OSCB Q_i, t given state S_i, t and A_i, t using Equation <ref> with Δ_i, B, Δ_i, N replaced by proj Δ_i,B, proj Δ_i,N.
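A sketch of this constrained projection using the SciPy optimize API is shown below; the grid of states is assumed to be given as an array, and the function and argument names are our own illustrative choices rather than the released implementation.

```python
import numpy as np
from scipy.optimize import minimize

def project_advantage_weights(delta, g_grid, g_bar_s):
    """delta: original advantage weights; g_grid: K x d grid of states g(s');
    g_bar_s: state of interest with non-algorithm features at their means."""
    def objective(proj):
        # Stay close to the original treatment effects on the grid of states.
        return np.mean((g_grid @ proj - g_grid @ delta) ** 2)

    # Zero average treatment effect in the state of interest.
    zero_effect = {"type": "eq", "fun": lambda proj: g_bar_s @ proj}
    result = minimize(objective, x0=np.array(delta, dtype=float),
                      constraints=[zero_effect])
    return result.x
```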
§ ADDITIONAL DID WE LEARN? PLOTS
In Section <ref> we considered a total of 16 different states of interest. Results for all 16 states are in Figure <ref>. Recall each state is a unique combination of the following algorithm advantage feature values:
* Time of Day: {0, 1} (Morning and Evening)
* Exponential Average of OSCB Over Past Week (Normalized): {-0.7, 0.1} (first and third quartile in Oralytics trial data)
* Exponential Average of Prompts Sent Over Past Week (Normalized): {-0.6, -0.1} (first and third quartile in Oralytics trial data)
* Prior Day App Engagement: {0, 1} (Did Not Open App and Opened App)
Notice that since features (2) and (3) are normalized, for feature (2) the quartile value of -0.7 means the participant's exponential average OSCB in the past week is about 28 seconds and similarly 0.1 means its about 100 seconds. For feature (3), the quartile value of -0.6 means the participant received prompts 20% of the time in the past week and similarly -0.1 means it's 45% of the time.
§ DID WE LEARN?
§.§ Equations
Recall this is the prior on the advantage term that we deployed:
μ_β = [0, 0, 0, 53, 0]^⊤
Σ_β = diag(12^2, 33^2, 35^2, 56^2, 17^2)
What knowledge does this prior encode?
* For significant features as deemed by domain experts, we set the prior mean
to the empirical mean parameter value for that feature across the 9 participants in the Oralytics pilot study. Otherwise the prior mean is 0.
* For significant features, we set the prior SD to
the empirical SD for that feature across the 9 participants in the Oralytics pilot study. For non-significant parameters, we set the
prior SD to the empirical SD divided by 2. We reduced the SD of the
non-significant weights because we want to provide more shrinkage to the prior mean of 0.
* Prior day app engagement is the only feature domain experts deemed to be significant in the advantage.
§.§ Interesting Participant-Feature Graphs
Interesting Scores
We calculate two types of interesting scores depending on the type of value that each feature can take. In our setting, features are either binary {0, 1} or real-valued between [-1, 1] (See Appendix <ref>).
If feature z ∈{0, 1}, then the interesting score we compute for participant i is:
score_int_z(i) := ∑_t = 1^T 𝕀[π̂_i, t(S_i, t(z = 1)) > π̂_i, t(S_i, t(z = 0))]/T
otherwise, if feature z ∈ [-1, 1], then the interesting score is:
score_int_z(i) := ∑_t = 1^T 𝕀[π̂_i, t(S_i, t) > π̂_i, t(S_i, t(z = 0))]/T
π̂_i, t: s → [0.2, 0.8] is the estimated advantage function (i.e., action selection probability function) that produces an action-selection probability given state s. S_i, t(z = 1) denotes a state that takes the same value as S_i, t for all features except z, which is set to 1, and similarly for S_i, t(z = 0). Notice that for Equation <ref>, S_i, t is the actual observed state with no modifications, and we are contrasting with z = 0, which represents the average feature value.
We say that the trajectory for participant i is interesting for feature z if |score_int_z(i) - 0.5| ≥δ. The number of interesting participants for feature z is:
#participant_int_z := ∑_i = 1^N 𝕀{|score_int_z(i) - 0.5| ≥δ}
where we set δ = 0.4 for our experiments.
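Both scores can be computed by re-evaluating the action-selection probability function at counterfactual states; a minimal sketch, under the assumption that states are stored as feature arrays and π̂ is available as a callable:

```python
import numpy as np

def interesting_score(pi_hat, states, feature_idx, binary=True):
    """Fraction of decision times where feature z increases the action-selection
    probability relative to its reference value z = 0. For binary z the
    comparison state sets z = 1; for real-valued z the observed state is used."""
    hits = 0
    for s in states:
        s_ref = np.array(s, dtype=float)
        s_ref[feature_idx] = 0.0
        s_cmp = np.array(s, dtype=float)
        if binary:
            s_cmp[feature_idx] = 1.0
        hits += pi_hat(s_cmp) > pi_hat(s_ref)
    return hits / len(states)
```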
The Impact of Balancing Real and Synthetic Data on Accuracy and Fairness in Face Recognition
Andrea Atzori (0000-0002-6910-206X), Pietro Cosseddu (0009-0006-4998-5164), Gianni Fenu (0000-0003-4668-2476), Mirko Marras (0000-0003-1989-6057)
Department of Mathematics and Computer Science, University of Cagliari, Italy
arXiv:2409.02867v1 [cs.CV], 4 September 2024
======================================================================================================================================
§ ABSTRACT
Over the recent years, the advancements in deep face recognition have fueled an increasing demand for large and diverse datasets. Nevertheless, the authentic data acquired to create those datasets is typically sourced from the web, which, in many cases, can lead to significant privacy issues due to the lack of explicit user consent. Furthermore, obtaining a demographically balanced, large dataset is even more difficult because of the natural imbalance in the distribution of images from different demographic groups. In this paper, we investigate the impact of demographically balanced authentic and synthetic data, both individually and in combination, on the accuracy and fairness of face recognition models. Initially, several generative methods were used to balance the demographic representations of the corresponding synthetic datasets. Then a state-of-the-art face encoder was trained and evaluated using (combinations of) synthetic and authentic images. Our findings emphasized two main points: (i) the increased effectiveness of training data generated by diffusion-based models in enhancing accuracy, whether used alone or combined with subsets of authentic data, and (ii) the minimal impact of incorporating balanced data from pre-trained generative methods on fairness (in nearly all tested scenarios using combined datasets, fairness scores remained either unchanged or worsened, even when compared to unbalanced authentic datasets). Source code and data are available at <https://cutt.ly/AeQy1K5G> for reproducibility.
§ INTRODUCTION
Face Recognition (FR) is one of the most popular biometric tasks. Its applications range from access control to portable devices <cit.>. Extremely high levels of accuracy have been achieved thanks to new deep learning architectures <cit.>, margin-based losses <cit.> and the availability of large-scale, annotated face datasets <cit.> collected from the Internet. The collection of data from such sources, however, implies that the users involved cannot directly express consent for the use of their data, thereby raising severe ethical concerns.
The enactment of the General Data Protection Regulation (GDPR) <cit.> by the EU in 2018 heightened criticisms regarding privacy issues in this domain. This enactment led to the removal of several databases commonly used in FR <cit.> to avert legal complications and cast uncertainty on the future of FR research. The GDPR specifically provides all individuals with the "right to be forgotten" and enforces more rigorous data collection standards. Consequently, there has been a growing focus on synthetic data, which has emerged as a promising substitute for genuine datasets in FR training <cit.>. This shift has been facilitated by progress in Deep Generative Models (DGMs), which can create synthetic samples by learning the probability distribution of the real ones.
The majority of DGMs are based on Generative Adversarial Networks (GANs) <cit.>, Diffusion Models (DMs) <cit.> <cit.>, or, occasionally, hybrid implementations of both <cit.>. Presently, FR models using synthetic data typically show a decline in verification accuracy when compared to those trained with authentic data. This performance gap is primarily due to the limited identity discrimination of the training datasets <cit.> or their low intra-class variance <cit.>. DMs have gained attention as a plausible alternative to GANs for image synthesis, albeit at the expense of stability and a significant reduction in training performance. Regrettably, several unresolved questions remain regarding the effective combination of authentic and synthetic data to overcome the limitations of both. In a recent study, various combinations of authentic and synthetic data have been used to train FR models and assess the extent to which the use of authentic data can be minimized by introducing synthetic identities, without encountering the aforementioned performance drawbacks <cit.>. However, the impact of demographically balancing within and among the two sources of data on verification accuracy and fairness has not been considered while training FR models.
This paper aims to investigate the suitability of using combined authentic and synthetic, demographically balanced, training datasets for developing FR models, focusing on both fairness and accuracy. This exploration seeks to determine whether it is possible to simultaneously address performance and fairness concerns while mitigating the privacy-related issues inherent in authentic datasets. By doing so, it may be possible to create accurate and fair FR models with a reduced reliance on authentic data (assuming that synthetic data can be generated without limitation and that a small number of authentic identities can be collected with appropriate user consent). Thus, our contribution is twofold:
* We demographically balanced the employed synthetic datasets with respect to the available demographic groups by generating the missing identities using the same methods originally employed, without additional training. The images generated for this study have been made publicly available.
* We investigated whether FR models trained on demographically balanced combinations of authentic and synthetic data could achieve comparable accuracy and fairness to models trained on demographically balanced (and unbalanced) authentic-only data.
The rest of the paper is structured as follows. Section <ref> discusses recent progress in face recognition methods and synthetic face generation. Section <ref> then describes the data preparation, model creation and training, and model evaluation adopted in our study. Section <ref> examines the differences in verification accuracy and fairness between FR models trained on synthetic and/or authentic data. Finally, Section <ref> summarizes our findings and provides directions for future research. Code and data are available at <https://cutt.ly/AeQy1K5G>.
§ RELATED WORK
Our work bridges recent research on fairness in deep face recognition methods and face generation techniques. In this section, we present an overview of both.
§.§.§ Fairness in Face Recognition.
Derived from machine learning literature <cit.>, the notions of fairness seek to guarantee fair treatment of individuals across various demographic groups using biometric systems that analyze traits like face, fingerprint, or iris <cit.>. Broadly, demographic fairness is encapsulated by three key concepts: parity, equalized odds, and sufficiency <cit.>. Parity denotes the requirement that the outcome of an FR system should remain unaffected by subject's demographic attributes (such as gender or ethnicity). Equalized odds assert that, regardless of demographic characteristics, the rates of false negatives and false positives should be consistent across demographic groups. Sufficiency implies that the available data must provide sufficient information to ensure accurate and fair results in FR without depending on demographic details.
Prior work analyzing fairness in face recognition has shown that, on average, women experienced worse performance than men <cit.>.
Further analyses generally attributed this disparity to the fact that female faces were more similar to each other than male faces, as shown in <cit.>. Notable attention was also paid to factors pertaining to the image (e.g., presence of distortions or noise) or to the face (e.g. presence of make-up or mustache) characteristics <cit.>.
For instance, poor performance on dark-skinned or poorly-lit subjects <cit.> was associated with the fact that the network learns skin-tone-related characteristics already in the top layers. Another demographic dimension whose groups have been shown to be systematically discriminated against is age. Indeed, children's faces were more likely to be badly recognized than those of adults <cit.>.
The imbalanced representation of certain groups was also indicated as a possible reason for unfairness <cit.>.
To counter this, a range of demographically balanced data sets have been created <cit.>. In this study, we analyze the impact of data balancing through the generation of new synthetic identities. Specifically, we are going to analyze how this balancing methodology impacts models trained only on synthetic data and on combined data (authentic and synthetic).
§.§.§ Synthetic Face Generation.
Over the last years, several works proposed the use of synthetic data in FR development <cit.> due to the success of deep generative models in generating high-quality and realistic face images <cit.>. These methods can be categorized as GAN-based <cit.>, digital rendering <cit.>, or diffusion-based <cit.>.
In <cit.>, an architecture based on previous StyleGAN methods <cit.> <cit.> is presented. This architecture uses a disentangled latent space to train control encoders that map human-interpretable inputs to suitable latent vectors, thus allowing explicit control of attributes such as pose, age, and expression. By doing so, it is possible to generate new synthetic faces with chosen variations of the controllable attributes. Later, SynFace <cit.> proposed to generate synthetic data using an attribute-conditional GAN model, i.e., DiscoFaceGAN <cit.>, and to perform identity and domain mixup, while SFace <cit.> analyzed the impact of StyleGAN <cit.> training under class-conditional settings and the extent to which transferring knowledge from a model pretrained on authentic data improves the performance of synthetic-based FR. In contrast, ExFaceGAN <cit.> introduced a framework to disentangle identity information within the latent spaces of unconditional GANs, to produce multiple images for any given synthetic identity.
Among methods of digital rendering, DigiFace-1M <cit.> leveraged facial geometry models, a diverse array of textures, hairstyles, and 3D accessories, along with robust data augmentation techniques during training. However, it comes at a considerable computational cost during the rendering process. DigiFace-1M also proposed combining synthetic and authentic data during FR training to improve the verification accuracy of synthetic-based FR using a small and fixed number of authentic identities.
Recently, IDiff-Face <cit.> and DCFace <cit.> adopted diffusion models to generate synthetic data for FR training, achieving state-of-the-art verification accuracy for synthetic-based FR. Specifically, the former included fuzziness in the identity condition to induce variations in the generated data. Conversely, the latter proposed a two-stage generative framework in which (i) an image of a novel identity is generated with an unconditional diffusion model and an image style is selected from a style bank, and (ii) the two are mixed using a dual conditional diffusion model.
Recently, several challenges and competitions have been organized in conjunction with top venues, aiming at promoting privacy-friendly synthetic-based FR development.
FRCSyn competitions <cit.> were organized at WACV and CVPR 2024, aiming to explore the use of synthetic data in FR training and to attract the development of solutions for synthetic-based FR. The challenge considered two main tasks, training FR only with synthetic data and training FR with both synthetic and authentic data. The achieved results of the top-performing solutions from FRCSyn <cit.> competition are further investigated and reported in <cit.>. Also, the SDFR <cit.> competition was organized in conjunction with FG 2024, to promote the creation of solutions for synthetic-based FR.
§ METHODOLOGY
This section is dedicated to describing the experimental protocol we followed (Fig. <ref>), including the datasets involved in the experiments, both authentic and synthetic, the training methodologies adopted to combine both types of face data, and the metrics used for model evaluation.
§.§ Data Preparation
For our experiments, we used five different datasets to train the models: two authentic and three synthetic. The datasets were aligned using MTCNN <cit.> to extract five facial landmarks, after which all images were resized to 112 × 112 pixels. Images were normalized to have pixel values between -1 and 1.
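As a minimal sketch of the resizing and normalization steps (assuming the MTCNN detection and landmark-based alignment have already produced the face crop; the file name is a placeholder):

```python
# Resize aligned face crops to 112x112 and map pixel values from [0, 255] to [-1, 1].
from PIL import Image
from torchvision import transforms

preprocess = transforms.Compose([
    transforms.Resize((112, 112)),                 # input resolution of the FR backbone
    transforms.ToTensor(),                         # uint8 [0, 255] -> float [0.0, 1.0]
    transforms.Normalize(mean=[0.5, 0.5, 0.5],     # [0, 1] -> [-1, 1] per channel
                         std=[0.5, 0.5, 0.5]),
])

face_tensor = preprocess(Image.open("aligned_face.jpg").convert("RGB"))
```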
§.§.§ Authentic Datasets.
For authentic face data used to train the FR models, we adopted the well-known BUPT-Balancedface <cit.> and CASIA-WebFace <cit.> datasets.
BUPT-Balancedface <cit.> consists of 1.3M images from 28K identities and is annotated with both ethnicity and identity labels. Its ethnicity annotations include four demographic groups: African, Asian, Caucasian, and Indian, with 7K identities and approximately 300K images each.
Conversely, CASIA-WebFace <cit.> consists of 0.5M images of 10K identities. It is worth noting that this dataset was included in the experiments as a reference, despite not being demographically balanced. In <cit.>, a demographic distribution of 63.4% Caucasian, 14.4% Asian, 7.4% African, 7.2% Indian, and 7.4% Others is reported.
§.§.§ Synthetic Datasets.
The synthetic datasets were generated using three methods: one GAN-based and two diffusion-based. These datasets are derived from ExFaceGAN <cit.>, DCFace <cit.>, and IDiff-Face Uniform (25% CPD) <cit.>. Each dataset contains 0.5M images from 10K identities, with 50 images per identity.
The first synthetic dataset was generated via the pretrained GAN-Control <cit.> generator, which was trained on the FFHQ dataset <cit.> and improved with an identity disentanglement approach <cit.>.
The second synthetic dataset was generated via DCFace <cit.>, which is based on a two-stage diffusion model. In the first stage, a high-quality face image of a novel identity is generated using unconditional diffusion models <cit.> trained on FFHQ <cit.>, with the image style randomly selected from a style bank. In the second stage, the generated images and styles from the first stage are combined using a dual conditional diffusion model <cit.> trained on CASIA-WebFace <cit.> to produce an image with a specific identity and style.
Finally, the third synthetic dataset was generated via IDiff-Face <cit.>, a novel approach based on conditional latent diffusion models for synthetic identity generation with realistic identity variations for FR training. IDiff-Face is trained in the latent space of a pretrained autoencoder <cit.> and conditioned on identity contexts (i.e., feature representations extracted using a pretrained FR model, namely ElasticFace <cit.>).
§.§.§ Data Sampling and Balancing.
The authentic dataset employed in the majority of our experiments, BUPT-Balancedface, was already demographically balanced, containing an equal number of identities across the four demographic groups. For our experiments, we required 5K unique, demographically balanced identities, aiming for a total of 1,250 identities per demographic group. To achieve this and reduce the randomness in our experiments, we randomly sampled identities ten times, with each iteration including 5K demographically balanced identities from BUPT-Balancedface. We denote the best-performing iteration as BUPT_sub, which was used in subsequent experiments. The average results across all iterations are referred to as BUPT_avg. Similarly, the average verification accuracy across the ten iterations from CASIA-WebFace is denoted as WF_avg, while the best-performing iteration is referred to as WF_sub.
The synthetic datasets were unbalanced towards the Caucasian group, as determined by labeling all the data using a ResNet18 <cit.> backbone trained on BUPT-BalancedFace <cit.> to predict the ethnicity label of each identity. The inferred ethnicity pseudo-labels are reported in Table <ref>. For our experiments, we required 5K unique, demographically balanced identities, aiming for a total of 1,250 identities per demographic group in each synthetic dataset.
To achieve this, we (i) randomly sampled 1,250 identities (or the available number, if fewer) from the synthetic datasets and (ii) generated new identities for each demographic group until reaching our targets by guiding the generation process with the above-mentioned ResNet18 <cit.> backbone. For each synthetic dataset, the additional identities were generated using the pre-trained models made publicly available by the original authors without further training.
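The control flow of step (ii) is sketched below; generate_identity_images and predict_ethnicity are hypothetical stand-ins for the publicly released generators and for the ResNet18 ethnicity classifier, and their stub bodies only serve to make the sketch self-contained.

```python
# Schematic of step (ii): generate new synthetic identities until every demographic
# group reaches the 1,250-identity target, guided by the ethnicity pseudo-labeler.
import random

GROUPS = ["African", "Asian", "Caucasian", "Indian"]
TARGET_PER_GROUP = 1250

def generate_identity_images(num_images=50):   # hypothetical stand-in for GAN-Control/DCFace/IDiff-Face
    return [object() for _ in range(num_images)]

def predict_ethnicity(images):                 # hypothetical stand-in for the ResNet18 classifier
    return random.choice(GROUPS)

counts = {g: 0 for g in GROUPS}                # in practice, initialized from step (i)
balanced_identities = {g: [] for g in GROUPS}
while any(c < TARGET_PER_GROUP for c in counts.values()):
    images = generate_identity_images()        # one candidate synthetic identity
    group = predict_ethnicity(images)
    if counts[group] < TARGET_PER_GROUP:
        balanced_identities[group].append(images)
        counts[group] += 1
```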
We denote the synthetic subsets sampled in the first step as GC_sub, DC_sub, and IDF_sub, and the ones generated in the second step as GC_gen, DC_gen, and IDF_gen, using GANControl, DCFace, and IDiff-Face, respectively. Finally, the synthetic, demographically balanced datasets, each comprising 5K identities and derived from the union of the two respective datasets for each method, are referred to as GC_bal, DC_bal, and IDF_bal for the sake of clarity.
§.§.§ Training Data Combination.
We trained FR models using combinations of authentic and synthetic data. The authentic subset involved in each combination was always BUPT_sub, which consists of 5K identities and is balanced across demographic groups. This subset was then combined with each of the three synthetic, demographically balanced subsets (GC_bal, DC_bal, IDF_bal), all of which have the same demographic distribution and the same number of identities (5K).
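A minimal sketch of how such a combination can be assembled is given below, assuming bupt_sub and one synthetic balanced subset (here dc_bal) are ImageFolder-style datasets returning (image, identity_label) pairs with labels in [0, 4999]; offsetting the synthetic labels keeps the 5K + 5K identities as 10K distinct classes.

```python
# Merge a 5K-identity authentic subset with a 5K-identity synthetic subset into a
# single training set with 10K distinct identity classes.
from torch.utils.data import ConcatDataset, Dataset

class OffsetLabels(Dataset):
    """Shift identity labels so synthetic classes do not collide with authentic ones."""
    def __init__(self, base, offset):
        self.base, self.offset = base, offset
    def __len__(self):
        return len(self.base)
    def __getitem__(self, idx):
        image, label = self.base[idx]
        return image, label + self.offset

NUM_AUTHENTIC_IDS = 5000
combined = ConcatDataset([bupt_sub, OffsetLabels(dc_bal, offset=NUM_AUTHENTIC_IDS)])
```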
§.§ Model Creation and Training
To train all the FR models we relied on the widely used ResNet50 <cit.> as the backbone and CosFace <cit.> as the loss function. The latter is defined as:
L_CosFace = -1/N ∑_{i=1}^{N} log[ e^{s (cos(θ_{y_i}) - m)} / ( e^{s (cos(θ_{y_i}) - m)} + ∑_{j=1, j ≠ y_i}^{c} e^{s cos(θ_j)} ) ]
where c is the number of classes (identities), N is the batch size, m is the margin penalty applied to the cosine of the angle θ_{y_i} between the feature representation x_i of sample i and its class center y_i, and s is the scale parameter. In all the conducted experiments, the margin m is set to 0.35 and the scale parameter s to 64, following <cit.>. During training, we employed Stochastic Gradient Descent (SGD) as the optimizer with an initial learning rate of 0.1. The learning rate is divided by 10 at epochs 22, 30 and 40. In total, the models are trained for 40 epochs with a batch size of 256.
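A minimal PyTorch sketch of the loss above and of the stated optimization schedule follows; it is illustrative rather than the exact implementation used here, and the 512-dimensional embedding and the torchvision ResNet50 stand-in are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet50

class CosFaceHead(nn.Module):
    """Classification head implementing the CosFace loss above (s = 64, m = 0.35)."""
    def __init__(self, embedding_dim=512, num_classes=10000, s=64.0, m=0.35):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_classes, embedding_dim))
        self.s, self.m = s, m

    def forward(self, embeddings, labels):
        # cos(theta_j) between L2-normalized embeddings and L2-normalized class centers
        cosine = F.linear(F.normalize(embeddings), F.normalize(self.weight))
        one_hot = F.one_hot(labels, num_classes=cosine.size(1)).to(cosine.dtype)
        logits = self.s * (cosine - self.m * one_hot)      # margin only on the target class
        return F.cross_entropy(logits, labels)

backbone = resnet50(num_classes=512)   # stand-in; FR pipelines typically use a modified ResNet50
head = CosFaceHead(embedding_dim=512, num_classes=10000)

# Schedule as described: SGD with lr = 0.1, divided by 10 at epochs 22, 30 and 40.
optimizer = torch.optim.SGD(list(backbone.parameters()) + list(head.parameters()), lr=0.1)
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[22, 30, 40], gamma=0.1)
```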
During the training, we also employed data augmentation techniques, following RandAugment <cit.>. Its augmentation space includes color and geometric transformations such as horizontal flipping, sharpness adjusting, and translation of the x and y axes. RandAugment includes two hyper-parameters, Q and M, to select the number of operations Q and the magnitude M of each transformation. In our experiments, M and Q were set to 16 and 4, as in <cit.> and <cit.>. Further details are provided in the code repository.
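If RandAugment is realized with torchvision's implementation, the stated (Q, M) values would map onto its parameters as in the sketch below; whether the original pipeline uses torchvision or a custom implementation is an assumption here (see the code repository).

```python
# RandAugment with Q = 4 operations per image and magnitude M = 16, assuming the
# paper's (Q, M) correspond to torchvision's (num_ops, magnitude).
from torchvision import transforms

rand_augment = transforms.RandAugment(num_ops=4, magnitude=16)
# Applied to the aligned face crop before the resize/normalization steps sketched earlier.
```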
§.§ Model Evaluation
We evaluated the trained FR models in terms of verification accuracy on several well-known benchmarks, accompanying the following datasets: LFW <cit.>, CFP-FP <cit.>, CFP-FF <cit.>, AgeDB-30 <cit.>, CA-LFW <cit.>, CP-LFW <cit.> and RFW <cit.>. The latter has also been used to assess the fairness of the trained FR models. Results for all benchmarks are reported as verification accuracy in percentage, thus adhering to their official, original evaluation protocol.
In order to assess the fairness of the models, we computed the standard deviation (STD) and the Skewed Error Ratio (SER) on the verification accuracy of the four sub-groups composing the RFW benchmark, with each sub-group composed of 6K mated and 6K non-mated verification pairs. Specifically, error skewness is computed as the ratio of the highest error rate to the lowest error rate among different demographic groups. Formally:
SER = max_a Err(a)/min_b Err(b)
where a and b are different demographic groups. In this context, a higher error skewness indicates that the model has a substantial discrepancy in accuracy between the best and worst performing demographic groups, and is thus less fair. On the other hand, the metric based on the standard deviation is defined as:
STD = √(1/N∑_i=1^N (E_i - E̅)^2)
where E_i is the error rate for demographic group i, N is the total number of demographic groups, and E̅ is the mean error rate across all groups. A higher standard deviation indicates that the model has substantially different verification accuracies across demographic groups and is therefore less fair.
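For concreteness, both metrics follow directly from the per-group error rates, as in the short sketch below (the error rates in the example call are hypothetical).

```python
# SER and STD fairness metrics from per-group verification error rates (in %).
import numpy as np

def fairness_metrics(error_rates):
    err = np.asarray(error_rates, dtype=float)
    ser = err.max() / err.min()   # Skewed Error Ratio: worst group vs. best group
    std = err.std()               # standard deviation across groups (1/N normalization)
    return ser, std

# Hypothetical error rates for the African, Asian, Caucasian and Indian RFW subsets:
ser, std = fairness_metrics([12.1, 10.4, 8.9, 10.9])
```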
§ EXPERIMENTAL RESULTS
Our experiments initially aimed to assess whether an FR model trained on a demographically balanced synthetic dataset could achieve competitive accuracy compared to an FR model trained on an authentic dataset with the same number of identities and demographic representation (Section <ref>). Subsequently, we explored the impact on verification accuracy by training FR models on combined synthetic and authentic data (Section <ref>) and investigated the impact on the fairness of each setting involved in our study (Section <ref>).
§.§ RQ1: Accuracy with Separate Synthetic and Real Data Training
In a first analysis, we assessed whether an FR model trained on a demographically balanced synthetic dataset can achieve competitive accuracy compared to an FR model trained on an authentic dataset with the same number of identities and demographic representation. To this end, Tab. <ref> (without data augmentation) and <ref> (with data augmentation) present the accuracy of the FR models trained on authentic and synthetic datasets, separately, with 5K identities.
In our investigation, models trained exclusively on authentic data without the application of data augmentation (Tab. <ref>, first two groups) consistently exhibited superior verification accuracy when trained on subsets of the CASIA-WebFace dataset. This trend was observed both when considering average performance across iterations (WF_avg) and the best iteration outcomes (WF_sub), with these models showing an approximately 15% improvement in verification accuracy w.r.t. the respective one trained on demographically balanced subsets of BUPT (BUPT_avg and BUPT_sub). In contrast, among the models trained solely on synthetic images (Tab. <ref>, third group), the model trained on the DC_bal subset achieved the highest verification accuracy across all evaluation benchmarks. Specifically, the latter model outperformed the one trained on the IDF_bal subset by an average of 3.35% and the one trained on the GC_bal subset by a substantial 20.16%. Interestingly, we observed a pronounced accuracy degradation of the FR model trained on the GC_bal subset, when evaluated on cross-age benchmarks (AgeDB-30 and CA-LFW columns). For instance, compared to the models trained on DC_bal, GC_bal-trained models exhibited a 30.66% reduction on AgeDB-30 and a 19.39% decrease on CA-LFW.
Comparing models trained separately on the two types of sources (authentic and synthetic), models trained exclusively on synthetic data from DC_bal and IDF_bal generally achieved better verification accuracy compared to models trained on the authentic, demographically-balanced BUPT_sub subset. Specifically, the model trained on the DC_bal subset obtained 9.82% higher average verification accuracy, while training on the IDF_bal subset led to a 6.25% gain, on average. Despite the promising results achieved by training an FR model on the best-performing synthetic dataset (DC_bal), a substantial gap of 5.40% in average verification accuracy remains when compared to the best-performing authentic dataset (CASIA_sub).
The impact of data augmentation on models trained solely on synthetic data (Tab. <ref>, second group) was notably pronounced, especially for GC_bal. The model trained on the latter, augmented subset, led to an average accuracy improvement of 11.75% compared to the corresponding model trained without augmentation. This improvement was particularly pronounced on cross-age benchmarks, with a remarkable 34.42% increase in verification accuracy on AgeDB-30 and a 15.59% increase on CA-LFW. Furthermore, all the models trained on synthetic datasets still reported higher verification accuracy compared to those trained on the balanced, augmented authentic data (BUPT_sub), with the smallest improvement observed while training on GC_bal (0.72%) and the highest improvement measured while training on DC_bal (9.54%). On the other hand, adding data augmentation to the training pipeline of models trained exclusively on authentic data (Tab. <ref>, first group) resulted in only marginal improvements, where the maximum increase in accuracy was limited to 1.40% (BUPT_sub). Comparing results obtained by training an FR model on DC_bal and CASIA_sub while applying data augmentation, it can be noted that the accuracy gap between training on authentic and synthetic data is reduced (4.19%) with respect to the gap obtained by training on the same datasets without data augmentation.
RQ1: Models trained on synthetic data, especially when supplemented with data augmentation, tend to get closer to (CASIA-WebFace) or even outperform (BUPT) those trained on authentic (balanced) data, with the highest gains observed in cross-age tasks. The integration of data augmentation substantially mitigated performance degradation in models trained on the GC_bal subset, especially concerning cross-age benchmarks.
§.§ RQ2: Accuracy with Combined, Balanced Training Data
In a second analysis, we explored the impact on verification accuracy by training FR models using a combination of synthetic and authentic data. To this end, Tab. <ref> (without data augmentation) and <ref> (with data augmentation) report the verification accuracy of FR models trained on datasets (either entirely authentic or combined), each composed of 10K identities.
Models trained exclusively on authentic data without data augmentation (Tab. <ref>, first group) highlighted (again) a substantial gap in verification accuracy between the model trained on CASIA-WebFace and the one trained on BUPT_10K, with a 14.32% difference. FR models trained on a demographically balanced combination of synthetic and authentic data without data augmentation (Tab. <ref>, second group) consistently outperformed the baseline model trained solely on BUPT_10K. Specifically, these models obtained 4.04% (BUPT_sub ∪ GC_bal), 7.36% (BUPT_sub ∪ IDF_bal), and 9.71% (BUPT_sub ∪ DC_bal) higher verification accuracy. Notably, when training an FR model on the combined BUPT_sub ∪ GC_bal dataset without data augmentation, the accuracy degradation identified on cross-age benchmarks in the previous subsection was not observed, suggesting that the inclusion of a balanced authentic data subset (BUPT_sub) effectively mitigates these issues. The best verification accuracy across all benchmarks was achieved by models trained on the combined dataset including DC_bal as the synthetic component (BUPT_sub ∪ DC_bal). This model showed an average accuracy increase of 1.18% over the one trained on BUPT_sub ∪ IDF_bal and 5.44% over the one trained on BUPT_sub ∪ GC_bal. Comparing results obtained by training an FR model on BUPT_sub ∪ DC_bal and CASIA-WebFace, it can be noted that while the accuracy gap between training on authentic and combined (authentic and synthetic) data is reduced, it remains remarkable, with a 4.20% difference.
FR models trained with data augmentation only on authentic data (Tab. <ref>, first group) showed slight decreases in verification accuracy w.r.t. the non-augmented counterpart, with degradations of 2.67% (BUPT_10K) and 0.11% (CASIA-WebFace). Conversely, while the impact of data augmentation on models trained on combined synthetic and authentic data (Tab. <ref>, second group) was generally positive, the improvement was minimal. The models reported an increase in average verification accuracy of 0.88% when trained on BUPT_sub ∪ GC_bal, 0.45% on BUPT_sub ∪ IDF_bal, and 0.52% on BUPT_sub ∪ DC_bal. As previously observed, including data augmentation in the training pipeline positively affects the verification accuracy gap observed when comparing the results of the FR model trained on the best-performing authentic (CASIA-WebFace) and combined (BUPT_sub ∪ DC_bal) datasets, leading to a reduced 3.54% difference.
RQ2: Combining demographically balanced synthetic and authentic data can improve verification accuracy compared to training exclusively on authentic data, particularly in the absence of data augmentation. The inclusion of balanced authentic data effectively mitigates potential cross-age accuracy degradation. Data augmentation provides modest changes.
§.§ RQ3: Fairness with Combined, Balanced Training Data
In the third and final analysis, we investigated the impact on fairness of each setting involved in our study. To this end, Tab. <ref> (without data augmentation) and <ref> (with data augmentation) present the verification accuracy for each demographic group, as well as the standard deviation (STD) and the skewed error ratio (SER) on the RFW dataset's benchmark used to evaluate the fairness of FR models. Higher values of STD and SER indicate a higher level of unfairness.
On the RFW benchmark, models trained exclusively on authentic data without data augmentation (Tab. <ref>, first group) revealed that training on the balanced dataset (BUPT_10K) led to lower verification accuracy compared to CASIA-WebFace, with a notable gap of 16.19%. Although training on BUPT_10K led to a slight improvement in terms of fairness, as indicated by a 6.52% reduction in STD, it also showed a slight negative impact on SER. A similar trend was observed when training FR models on smaller subsets with 5K identities, BUPT_sub and WF_sub (Tab. <ref>, third and fourth groups), where the balanced subset showed marginally better fairness but still under-performed in verification accuracy.
The results achieved by training FR models on synthetic balanced subsets (Tab. <ref>, second and fifth groups), either alone or in combination with BUPT_sub, slightly diverged from previous observations. Among the models trained solely on synthetic data (Tab. <ref>, second group), the model trained on IDF_bal achieved the highest average verification accuracy, outperforming those trained on DC_bal by 1.06% and on GC_bal by 22.23%. Additionally, the model trained on IDF_bal reported the best SER (1.02), while the model trained on GC_bal achieved the lowest STD. Training on combined balanced datasets (Tab. <ref>, fifth group) led to similar patterns. The model trained on BUPT_sub ∪ IDF_bal exhibited the best average accuracy across demographic groups (82.78%) and the lowest SER and STD (1.07 and 2.33, respectively). Models trained on the other combined datasets (BUPT_sub ∪ GC_bal and BUPT_sub ∪ DC_bal) reported a SER of 1.09, but differences were noted in average STD and verification accuracy. Specifically, the model trained on BUPT_sub ∪ DC_bal achieved 8.06% higher accuracy but a worse STD (-26.97%) compared to the model trained on BUPT_sub ∪ GC_bal.
Training with data augmentation (Tab. <ref>) had a generally negative impact on models trained solely on authentic data (Tab. <ref>, first and third groups), worsening both average verification accuracy and fairness metrics on both of the employed authentic datasets. This trend was consistent across the study, with data augmentation resulting in a substantial deterioration of fairness metrics for all models, except for the STD in models trained on DC_bal, IDF_bal, and BUPT_sub ∪ DC_bal. Interestingly, training with data augmentation led to gains in accuracy across all models trained on combined or synthetic datasets (Tab. <ref>, second and fourth groups), with the exception of BUPT_sub ∪ DC_bal.
RQ3: Training on balanced datasets slightly improved fairness metrics but often resulted in reduced accuracy, particularly when using authentic-only data. However, synthetic data, especially when combined with balanced authentic datasets, shows promising outcomes in both accuracy and fairness. Data augmentation typically introduces trade-offs, as it tends to negatively impact fairness, even though it may provide a modest increase in overall verification accuracy.
§ CONCLUSION AND FUTURE WORK
In this paper, we explored the impact of using combined authentic and synthetic datasets on both verification accuracy and fairness of FR models by balancing their demographic representation. Our results revealed that training an FR model with an equal amount of demographically balanced authentic and synthetic data can help reduce the accuracy gap. For example, training on BUPT_sub ∪ DC_bal and BUPT_sub ∪ IDF_bal achieved performances comparable to FR models trained solely on the authentic CASIA-WebFace dataset, with the model trained on BUPT_sub ∪ DC_bal showing a difference of only 3.53%.
Our study also suggests that training an FR model on a mix of synthetic and authentic demographically balanced datasets can result in a fairer model with lower standard deviation and skewed error ratio. For instance, the model trained on BUPT_sub ∪ IDF_bal achieved an STD of 2.33 and a SER of 1.07, the lowest overall in both metrics. However, the analyses also produced some ambiguous results, where FR models trained on unbalanced datasets achieved better fairness outcomes than those trained on balanced ones.
Finally, we found that while data augmentation typically increases average verification accuracy, it also leads to a rise in standard deviation and skewed error, thereby worsening models' fairness.
Building upon the findings and limitations of this work, our future efforts will focus on exploring the performance of different combinations using a broader range of architectures, such as ResNet-34 and ResNet-100, as well as various loss functions, including ArcFace and AdaFace. Additionally, we plan to incorporate more advanced data augmentation techniques, refined sampling strategies, domain generalization methods, and active learning and/or knowledge distillation techniques to further enhance the accuracy and fairness of the FR models through an optimized combination of both authentic and synthetic data.
| http://arxiv.org/abs/2409.03123v1 | 20240904231738 | High Energy Physics from Low Energy Physics | ["Roland C. Farrell"] | quant-ph | ["quant-ph", "hep-lat", "hep-ph", "nucl-th"] |
High Energy Physics from Low Energy Physics

Roland Carlos Farrell

2024

Physics

Silas Beane, Professor, Physics
Martin J. Savage
David B. Kaplan

A dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy
The separation between physics at low and high energies is essential for physics to have any utility;
the details of quantum gravity are not necessary to calculate the trajectory of a cannon ball.
However, physics at low and high energies are not completely independent, and this thesis explores two ways that they are related.
The first is through a UV/IR symmetry that relates scattering processes at low and high energies.
This UV/IR symmetry manifests in geometrical properties of the S-matrix, and of the RG flow of the coupling constants in the corresponding effective field theory.
Low energy nuclear physics nearly realizes this UV/IR symmetry, providing an explanation for the smallness of shape parameters in the effective range expansion of nucleon-nucleon scattering, and inspiring a new way to organize the interactions between neutrons and protons.
The second is through the use of quantum computers to simulate lattice gauge theories.
Quantum simulations rely on the universality of the rules of quantum mechanics, which can be applied equally well to describe a (low energy) transmon qubit at 15 milli-Kelvin as a (high energy) 1 TeV quark.
This thesis presents the first simulations of one dimensional lattice quantum chromodynamics on a quantum computer, culminating in a real-time simulation of beta-decay.
Results from the first simulations of a lattice gauge theory on 100+ qubits of a quantum computer are also presented.
The methods developed in this thesis for quantum simulation are “physics-aware", and are guided by the symmetries and hierarchies in length scales of the systems being studied.
Without these physics-aware methods, 100+ qubit simulations of lattice gauge theories would not have been possible on the noisy quantum computers that are presently available.
It takes a village to raise a child, and two cities to raise a PhD student.
I have had the good fortune of doing both my undergraduate and PhD studies only 60 miles from where I grew up.
I would like to thank my parents for providing a home that always offered a cozy escape from the big city, and a place to heal after an elbow surgery at the beginning of my PhD and a knee surgery at the end.
I also would like to thank them for instilling in me the unwavering belief that there is no limit to what I am capable of.
In addition, I would like to thank my other friends and family from Mount Vernon who have patiently listened to my attempts at explaining quantum field theory, entanglement and quantum computing over the years: Olivia Farrell, Chris Perry, Peter Whidden, Casey Goodwin, Abe Nurkiewicz, Joe Ordoñez, Edie Granger, Eugene Kang, Stella Ordoñez, Sapphire Ordoñez and Ellen Gray.
I have made many great friends in Seattle who have helped make my life outside of research rich and vibrant.
I would like to thank Richard Ellison for running a house with a revolving door of interesting people, teaching me how to cook and always sharing meals with me.
I am grateful to my housemates Nikita Zemlevskiy, Chris Owen, William Marshall, Michael Dom, Hoang Nguyen, Murali Saravanan and Henry Froland for always making it fun to kill time.
Special thanks to Nikita, Murali and Henry for being people that I can rely on, and the help that they offered me as I was recovering from knee surgery.
I am grateful to my fellow bulger hunters Dane Pollett, John Ferre and Zack Aemmer for the great adventures in the Washington wilderness, and to my other climbing and ski partners Tatsumi Nitta, Karla Diaz, Roel Ardiente, Ashlynn, Sasha Krassovsky and Michelle Yang for many great explorations of the mountains on the Hwy-2 and I-90 corridors (as well as the climbing gyms on the I-5 corridor).
Special thanks to Sasha for subsidizing many meals, and always being excited about making code run fast.
I am also grateful to Matthew Hsieh, Zach Oropesa, Francesco Cueto, Chris Liu, Victor Ho and Valentin Monfort for many great barbecues and hot pot dinners.
I would like to thank the members of my PhD cohort for the shared commiserations over TA duties, finding an advisor and research: John Goldak, Yiyun Dong, Wan Jin Yeo, John Ferre, Ramya Bhaskar, Zachary Draper, Teresa Lo, Chris Thomas, Michael Clancy, Ryan Lanzetta and Arnab Manna.
I would also like to thank the members of IQuS for fostering a great research environment that makes me excited to come into the office every day.
I have especially benefited from collaboration with Marc Illa who has shown me the art of making beautiful figures, selecting text vertically and using high-performance and quantum computers.
I would like to thank the professors/mentors I have had throughout my PhD, especially Andreas Karch, David Kaplan, Silas Beane, Martin Savage, Sanjay Reddy, Lukasz Fidkowski and Ann Nelson.
Their passion and joy for theoretical physics was contagious and continues to fuel my desire for discovery.
I am grateful to Catherine Provost for helping me get into the UW PhD program, and to both Catherine and Katie Hennessy for generally making all aspects of being a PhD student easier.
I would like to thank the participants of the 2022 DNP Summer School, the 2022 Talent Summer School and the 2023 Quantum Connections Summer School, with special thanks to Rossie Jiang for her support and companionship.
Meeting so many young physicists from all over the world doing exciting research has really made me internalize that I am part of a thriving global physics community.
I would like to thank the great friends and colleagues I met while in Bern, especially Matteo Traschel, Martina, Steven Waldvogel and Maike.
The mantra that “There are no strangers in my life, only friends" is something that I strive towards.
I would like to thank my advisor Silas Beane for guiding me on the leap between being a homework-solving student to a (quasi-)independent researcher.
I really value that you were always available and excited to discuss research, that all of my ideas were taken seriously, and the emphasis that was put on creative thinking.
I hope to follow your example of always focusing on the problems that are the most interesting, independent of hype, status quo or inertia of learning something new.
I also appreciate the emphasis that was placed on taking advantage of all opportunities to travel and share my research.
I would also like to thank Martin Savage for essentially being a second advisor during the last two years.
Your tenacity and enthusiasm is infectious and has pushed me to do my best work.
I appreciate that you never shoot down other people's ideas, and particularly value the occasional brutally honest feedback you have given me.
On multiple occasions this feedback has led to a complete rewiring of my thinking, and non-analytic jumps in my growth as a researcher.
To my parents
CHAPTER: INTRODUCTION
The new tools and fresh perspectives offered by quantum information are disrupting many areas of physics.
Simulations of quantum many-body systems using quantum computers are close to surpassing the capabilities of classical computers, including exciting applications relevant to condensed matter <cit.>, random circuit sampling <cit.>, fault tolerance <cit.> and nuclear physics <cit.>.
And the rate of progress shows no sign of slowing down, with demonstrations in the last year of large-scale 2D ion traps <cit.>, Rydberg arrays with thousands of qubits <cit.>, the manipulation of 8 trapped ion ququarts <cit.> and the knitting together of multiple 100+ qubit superconducting Quantum Processing Units (QPUs) <cit.>.
The purely theoretical side of quantum information has also proven to be extremely valuable.
Studies of entanglement have revealed new ways of identifying topological order, either through properties of entanglement Hamiltonians <cit.> or through bipartite entanglement <cit.>, have hinted at explanations for “accidental” symmetries in (hyper)nuclear <cit.> and particle <cit.> physics, and have inspired powerful new numerical methods for simulating quantum many-body systems using classical computers <cit.>.
These advancements have the potential to revolutionize many fields of physics; from quantum chemistry and materials <cit.> to hot and dense quantum chromodynamics (QCD) <cit.>.
A primary motivation for the work presented in this thesis is the unexplained puzzles and open questions present in the Standard Model of particle physics <cit.>, and its low energy manifestation in nuclear physics.
The Standard Model is a mathematical framework that describes the fundamental particles and their interactions.
The mathematical underpinning is a quantum field theory that has a tremendous amount of predictive power and has been validated by experiment to staggering precision <cit.>.
Despite having the equations that in principle can be used to predict the properties of matter in almost every setting, there are still many open questions.
These include the origin of “accidental" symmetries or unexplained hierarchies that are present in the masses and interactions of the Standard Model e.g. why is the Higgs boson so light? Why is charge-conjugation and parity (CP) a good symmetry of the strong interaction? Why is the binding energy of the deuteron so small?
There are also many states of matter whose simulation is beyond the capabilities of the most powerful (classical) supercomputers that could ever be built <cit.>.
Lines of inquiry in this direction include:
What are the phases of matter beyond nuclear saturation density <cit.>?
How do nuclei fragment into partons during high energy collisions; and how do the fragments eventually re-hadronize <cit.>?
What are the mechanisms that drive strongly interacting matter to reach thermal equilibrium <cit.>?
How does the QCD vacuum respond in real-time to external probes?
Addressing these questions is crucial for understanding the mass distribution of neutron stars in our universe <cit.>, for sharpening our inferences based on measurements in heavy ion collisions and how the conditions shortly after the big bang have led to the universe we see today.
Many of the above questions have been thought about for a long time, but still remain unresolved, and for good reason –they are very difficult problems!
However, the new tools and way of thinking coming from quantum information are providing new angles to approach these old problems.
At a high level, quantum information is about identifying and utilizing the correlations between quantum states that have no classical analog.
These correlations emerge from the foundational quantum mechanical features of superposition, particle indistinguishability, uncertainty and measurement.
For macroscopic objects, these quantum correlations are washed away, but at atomic and subatomic level they are absolutely essential.
Characteristics of these quantum correlations can often be used to predict physical properties of a quantum state, for example the connection between the entanglement spectrum and topological order in the quantum hall state <cit.>.
Additionally, the advantages of quantum computers over classical ones rests on their ability to efficiently manipulate and process these inherently quantum correlations.
This enables quantum computers to more efficiently solve “quantum" problems such as the simulation of dynamics in strongly correlated quantum many-body systems.
Surprisingly, this also enables quantum computers to efficiently solve classes of problems with no obvious quantum structure, the most prominent being Shor's algorithm for factoring prime numbers <cit.>.[Another surprising usage of quantum information is for provably secure methods of cryptography <cit.>.]
There are many features of the Standard Model and its low energy effective field theories (EFTs) that, although mathematically consistent, beg for a deeper explanation.
As discussed above, this includes the empirical observations of approximate symmetries and unexplained hierarchies.
These peculiarities could truly be accidental, but it is also possible that they emerge from mechanisms that are not currently understood.
One possible explanation is that interactions are organized in terms of how much entanglement they generate.
This would provide a new way to organize the interactions present in chiral and nuclear EFTs.
This conjecture was explored in the seminal work of Beane et al. <cit.>, which looked at the spin entanglement generated by the scattering of (hyper)nucleons.
They obtained the striking result that the interaction chosen by nature leads to very little entanglement generated in scattering near threshold.
That is, out of all the possible values for the coupling constants that parameterize the low energy interaction, nature favors those which generate little spin entanglement.
Not only is entanglement suppressed, but the minimal entanglement solution also gives rise to an enhanced SU(4) symmetry for nucleon-nucleon scattering, and a SU(16) symmetry for hypernuclear scattering.
The approximate SU(4) symmetry in the nucleon-nucleon interaction was first pointed out by Wigner <cit.>, and there is evidence of the SU(16) symmetry in the hypernucleon-hypernucleon interaction from lattice QCD <cit.>.
This preliminary investigation motivated the conjecture that the confinement-deconfinement transition in QCD leads to emergent entanglement suppression in hadronic physics.
Chapter <ref> of this thesis explore this conjecture in the context of scattering between pions and nucleons.
Entanglement suppression has also been explored in systems of light nuclei and nucleons <cit.>, hypernucleons <cit.>, Higgs bosons <cit.> and black holes <cit.>.
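As a toy numerical illustration of this idea (not taken from <cit.>), the low-energy s-wave S-matrix can be written as Ŝ = e^{2iδ_1} P̂_1 + e^{2iδ_0} P̂_0, with P̂_0 (P̂_1) the spin singlet (triplet) projector; acting with it on the unentangled state |↑↓⟩ gives an outgoing spin state whose entanglement vanishes when δ_0 = δ_1 and is maximal when |δ_1 - δ_0| = π/4.

```python
# Toy check: entanglement of the outgoing two-nucleon spin state produced by the
# s-wave S-matrix  S = exp(2i*delta1) P_triplet + exp(2i*delta0) P_singlet.
import numpy as np

def spin_entanglement(delta0, delta1):
    up, dn = np.array([1.0, 0.0]), np.array([0.0, 1.0])
    up_dn, dn_up = np.kron(up, dn), np.kron(dn, up)
    singlet = (up_dn - dn_up) / np.sqrt(2.0)
    P_singlet = np.outer(singlet, singlet)
    P_triplet = np.eye(4) - P_singlet
    S = np.exp(2j * delta1) * P_triplet + np.exp(2j * delta0) * P_singlet
    out = S @ up_dn                            # scatter the unentangled "in" state |up,down>
    psi = out.reshape(2, 2)                    # psi[i, j]: spin of nucleon 1, spin of nucleon 2
    rho1 = psi @ psi.conj().T                  # reduced density matrix of nucleon 1
    evals = np.linalg.eigvalsh(rho1).clip(1e-12, 1.0)
    return float(-(evals * np.log2(evals)).sum())

print(spin_entanglement(0.3, 0.3))             # delta0 = delta1: ~0 ebits (SU(4) point)
print(spin_entanglement(0.0, np.pi / 4))       # |delta1 - delta0| = pi/4: 1 ebit (maximal)
```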
Pions are pseudoscalar particles, with no spin to entangle, and instead entanglement in isospin space is considered.
Pions have isospin I = 1, while nucleons are I=1/2, and their internal states map onto qutrit and qubit Hilbert spaces respectively.
Using the highly accurate determination of the ππ and π N scattering phase shifts, the isospin entanglement generated from scattering is determined across a wide range of center of mass (c.o.m) energies.
One interesting feature is a local minimum of the entanglement near c.o.m energies that excite the Δ resonance.
This is due to the rapid variation in the phase shift causing a corresponding rapid variation in the entanglement of S-matrix.
Additionally, the entanglement is determined analytically from chiral perturbation theory providing a set of low-energy theorems for isospin entanglement near threshold.
Unlike in the (hyper)nucleon-nucleon system, the only minimal entanglement solution consistent with the symmetries of the interactions is the trivial one –no scattering, with no enhanced symmetry.
However, this no scattering condition is almost satisfied for low-energy interactions involving pions.
This is because pions are the (pseudo) Goldstone bosons of chiral symmetry breaking, and are therefore derivatively coupled, with an interaction that vanishes at low energies <cit.>.
Indeed, this expectation is further reinforced by large N_c that predicts non-interacting mesons <cit.>.
The entanglement produced in scattering is further explored in chapter <ref> and motivates the development of a new geometric formulation of scattering.
A scattering process begins with an initial state of two particles well separated in space.
The particles propagate towards each other, interact within some spatial volume, and then propagate away from each other and become well-separated again.
In the initial “in" state, there are no correlations between the two particles and consequently no entanglement.
However, the final “out” state can exhibit entanglement that was generated by the interaction.
Once particles are entangled, they will have quantum correlations that can be detected no matter how far away they are by e.g. Bell measurements <cit.>.
The observable consequences of scattering are encoded in the S-matrix that evolves the “in" state to the “out" state, Ŝ|in⟩ = |out⟩.
Thus, the S-matrix encodes the capacity for the interaction to entangle the two particles.
The S-matrix is typically determined by solving an EFT describing particles interacting locally.
In this EFT-based approach, spacetime constraints like Galilean invariance and causality are encoded in the dependence of the scattering on the external kinematics e.g. the c.o.m. energy E.
The framework provided by EFTs is extremely powerful, and has enabled precision calculations of observables, with quantifiable uncertainties <cit.>.
However, keeping spacetime constraints manifest may obscure features of scattering that are non-local, such as entanglement.
Motivated by this, a new geometric formulation of scattering is developed and explored in chapter <ref>.
Quantum mechanics is unitary, and the S-matrix can be parameterized by energy-dependent phase shifts δ(E) that characterize the strength of the interaction in the various scattering channels i.e. Ŝ = e^2 i δ(E).
In non-relativistic scattering from a finite range potential it can be shown that the s-wave phase shift can be parameterized by the effective range expansion (ERE) as <cit.>,
k cotδ = -1/a + r/2 k^2 + 𝒪(k^4) ,
where a is the scattering length, r is the effective range and k = √(2ME) is the magnitude of the incoming momentum in the c.o.m. frame.
The coefficients parameterizing the 𝒪(k^4) and higher order terms are known as shape parameters.
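A minimal numerical sketch of this parameterization, keeping only the scattering length and effective range, is:

```python
# s-wave phase shift from the truncated ERE, k*cot(delta) = -1/a + (r/2) k^2,
# and the corresponding S-matrix element S = exp(2 i delta), which is a pure phase.
import numpy as np

def phase_shift(k, a, r):
    """k in fm^-1, a and r in fm; shape parameters neglected."""
    return np.arctan2(k, -1.0 / a + 0.5 * r * k**2)

k = np.linspace(1e-3, 1.0, 200)            # momenta in fm^-1
delta = phase_shift(k, a=5.4, r=1.7)       # e.g. the spin-triplet np values quoted later
S = np.exp(2j * delta)
assert np.allclose(np.abs(S), 1.0)         # unitarity
```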
In the geometric formulation these phase shifts form a basis for a space that encompasses all possible unitary S-matrices.
Due to the π periodicity of the phase shift, this space has the topology of a flat torus.
For a given S matrix, the phase shifts as a function of energy form trajectories on the flat torus.
The flat torus and example S-matrix trajectories are shown in Fig. <ref> for two-channel scattering.
The way that spacetime constraints such as causality and Galilean invariance manifest on the flat torus is explored throughout chapter <ref>.
It is shown that causality constrains the allowed tangent vectors to the S-matrix trajectories and Galilean invariance corresponds to the freedom to choose an (inaffine) parameterization of the trajectories.
This geometric way of viewing scattering also reveals a new UV/IR symmetry that relates scattering at low and high energies.
These symmetries are only present for S-matrices that have phase shifts parameterized by scattering lengths, or with effective ranges that are correlated with the scattering lengths.
Any higher order shape parameters in the effective range expansion necessarily break the UV/IR symmetry.
An example of a UV/IR symmetric S-matrix, corresponding to a reflection symmetric S-matrix trajectory, is shown as the solid red trajectory in Fig. <ref>.
This work demonstrated that the new outlook obtained from studying entanglement can be valuable in unexpected ways.
In this case, entanglement motivated the development of the geometric formulation of scattering, which in turn revealed a new symmetry that on the surface is completely unrelated to entanglement.
The implications of this UV/IR symmetry are further explored throughout chapters <ref> and <ref>.
In chapter <ref>, it is shown how this UV/IR symmetry manifests in the renormalization group (RG) running of couplings in the corresponding EFT of contact operators.
The focus is on the EFT that reproduces phase shifts parameterized only by scattering lengths.
This EFT contains only momentum-independent interactions –delta functions in position space, that are singular at high momentum, and consequently needs to be regularized and renormalized.
After renormalization, the coupling constants depends on the cutoff or RG scale that has been introduced to define the theory.
The coupling constants as a function of the RG scale trace out trajectories in “coupling constant space".
These coupling constant trajectories also possess reflection symmetries that are now generated by a UV/IR transformation that interchanges low and high energy RG scales.
This is the fingerprint in the EFT of the reflection symmetries that the UV/IR symmetric S-matrix trajectories possess on the flat torus.[
Note that the UV divergence in this EFT is a linear divergence, which is conventionally thought not to contain “physics” and is completely ignored when regulating with dimensional regularization and MS.]
Further, it is shown that the assumption of a UV/IR symmetry constrains the RG running of the coupling constants, allowing the functional form to be determined without having to compute any loop integrals.
The UV/IR symmetry also implies consistency relations for the RG scale dependence of the coupling constants of interactions that break the UV/IR symmetry.
This is used to determine the RG running of the coupling constant that generates effective range effects.
In chapter <ref>, the implications of the UV/IR symmetry for nuclear physics are considered.
At low energies, the interaction between nucleons can be efficiently described by pionless EFT <cit.>.
This EFT contains nucleons as the degrees of freedom, with all mesons and higher-energy baryons effectively integrated out.
Scattering at the lowest energies is dominated by the s-wave, and the phase shifts in the spin singlet and triplet channels can be parameterized by the ERE of Eq. <ref>.
The “natural" scale for this EFT is set by the lowest energy excitation that is not explicitly included in the theory.
For the case of pionless EFT, this is set by the pion mass, with M_π^-1≈ 1.5 fm.
In nuclear physics, the s-wave ERE parameters are <cit.>,
a_0 = -23.7 fm , a_1 = 5.4 fm
r_0 = 2.7 fm , r_1 = 1.7 fm,
with the shape parameters very small, consistent with zero.
All of these parameters are larger than the naive breakdown scale of M_π^-1 and, in particular, the size of a_1 leads to the deuteron being very weakly bound.
Approximating the phase shift with only a scattering length and effective range reproduces the empirically measured nucleon-nucleon phase shifts up to k≈ 160 MeV.
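As a worked example of how the large triplet scattering length controls the weak binding, the bound-state condition k cotδ = ik at k = iγ gives γ = 1/a_1 + (r_1/2)γ^2; solving with the quoted values yields γ ≈ 45 MeV and B = γ^2/m_N ≈ 2.2 MeV, close to the measured deuteron binding energy of 2.22 MeV.

```python
# Deuteron binding momentum and energy from the quoted spin-triplet ERE parameters.
import numpy as np

hbarc = 197.327                     # MeV fm
m_N = 938.9                         # nucleon mass in MeV; two-nucleon reduced mass = m_N / 2
a1, r1 = 5.4, 1.7                   # triplet scattering length and effective range in fm

# Bound-state condition k*cot(delta) = i*k at k = i*gamma  ->  gamma = 1/a1 + (r1/2) gamma^2
gamma = (1.0 - np.sqrt(1.0 - 2.0 * r1 / a1)) / r1         # binding momentum in fm^-1
B = (gamma * hbarc) ** 2 / m_N                            # B = gamma^2 / (2 mu) = gamma^2 / m_N

print(f"gamma = {gamma * hbarc:.1f} MeV,  B = {B:.2f} MeV")   # ~45.4 MeV and ~2.20 MeV
```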
As mentioned above, and shown in chapter <ref>, an S-matrix with phase shifts parameterized by only scattering lengths and effective ranges is UV/IR symmetric, provided that the effective ranges are correlated with the scattering lengths.
If one assumes a UV/IR symmetry in the nucleon-nucleon interaction, emerging from some mechanism in QCD not currently understood, then this would explain the smallness of the shape parameters as UV/IR symmetry forbids shape corrections.
This UV/IR symmetric interaction forms the basis for a new EFT expansion, where LO treats scattering length and effective range to all orders, in such a way to preserve the UV/IR symmetry.
It is shown how this UV/IR symmetry implies a set of algebraic constraints on the two-body potential generated by the EFT.
These constraints are solved by a non-local potential, of a similar form to that proposed by Yamaguchi in 1954 <cit.>.
Higher order terms in this EFT break the UV/IR symmetry, either by shifting the effective range from being correlated with the scattering length or by introducing shape parameters.
This UV/IR symmetry provides motivation for a new way of organizing the nuclear interactions, and may lead to better convergence in many-body calculations.
The second half of this thesis explores the use of quantum computers to simulate lattice gauge theories.
Many problems relevant to nuclear and particle physics can only be addressed by solving QCD.
QCD is a quantum field theory that describes the interactions between quarks and gluons.
In a quantum field theory, the effective strength of an interaction is heavily influenced by quantum fluctuations.
These quantum fluctuations change with the energy scale of the interaction.
In asymptotically free theories like QCD <cit.>, the interaction strength is weak at high energies, and observables can be determined in perturbation theory.
However, as the relevant energies approaches the scale of confinement, around 1 GeV, the interaction strength becomes strong and perturbation theory breaks down.
This is the scale where the quarks and gluons become bound inside hadrons (neutrons, protons, pions etc.).
Observables at this scale can still be computed from the QCD path integral but require non-perturbative methods.
The only known framework for non-perturbatively defining QCD is through a powerful numerical method called lattice QCD.
In lattice QCD, spacetime is discretized on a lattice and different field configurations in the path integral are importance sampled.
This importance sampling relies on there being a well-defined probability distribution from which the different field configurations can be drawn.
This is satisfied for systems at zero density and in Euclidean space (imaginary time).
There has been tremendous success with this approach, and lattice QCD has been used to postdict, and in some cases predict meson decay rates and scattering parameters, hadronic masses, QCD phases at low density and high temperature and the anomalous magnetic moment of the muon <cit.>.
However, this method breaks down for observables involving real-time response or in systems at finite baryon density <cit.>.
In these cases, the “weight" of the different field configurations becomes complex, and the condition of an underlying probability distribution breaks down.
This is the sign-problem in lattice QCD, and it is believed to be NP-hard <cit.>.
Fortunately, over 40 years ago it was realized by Feynman and others that there is another route toward simulating quantum systems <cit.>.
This is quantum simulation, where the target quantum theory is mapped onto another quantum system that can be well controlled in the laboratory.
Unlike lattice QCD simulations using classical computers, it is believed that quantum simulations of real-time dynamics or of systems at finite density are free of the sign problem <cit.>.
The zeroth order step of quantum simulation is mapping the Hilbert space of the target theory to one that is natively available on the quantum simulator, usually in the form of a register of qubits or higher dimensional qudits.
A typical quantum simulation of a dynamical process proceeds by first preparing an initial, physically interesting state, evolving it with the time evolution operator U(t) = e^-i Ĥ t and then measuring observables in the final state.
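A minimal illustration of this prepare-evolve-measure workflow, using exact matrix exponentiation for a toy two-qubit Hamiltonian rather than the lattice gauge theory Hamiltonians considered later, is the following sketch:

```python
# Prepare |00>, evolve with U(t) = exp(-i H t) for a 2-qubit transverse-field Ising
# Hamiltonian, and "measure" the return probability to |00>.
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

J, h = 1.0, 0.5
H = -J * np.kron(Z, Z) - h * (np.kron(X, I2) + np.kron(I2, X))

psi0 = np.zeros(4, dtype=complex)
psi0[0] = 1.0                                   # initial state |00>
for t in np.linspace(0.0, 2.0, 5):
    psi_t = expm(-1j * H * t) @ psi0            # time evolution
    print(f"t = {t:.1f}   P(|00>) = {abs(psi_t[0])**2:.3f}")
```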
Broadly speaking there are two types of quantum simulators; analog and digital.
An analog platform is capable of producing the unitary evolution e^-i Ĥ'(θ_i) t for some class of Hamiltonians Ĥ'(θ_i) with control over a set of parameters θ_i.
If there exists a choice of θ_i that can reproduce the target Hamiltonian, then initial state preparation can often be done by adiabatically, and the time evolution operator can be reproduced natively[
In adiabatic state preparation the parameters are evolved as a function of an adiabatic parameter θ_i(τ) with τ∈ [0,1) such that the ground state of Ĥ'(θ_i(0)) is easy to prepare, and Ĥ'(θ_i(1)) is the target Hamiltonian.
This is also an efficient protocol for state preparation in digital quantum computing provided that the mass gap does not vanish during the adiabatic evolution <cit.>.].
Analog quantum simulations often feature very high fidelities <cit.>, but have limited range of application as the unitaries can only cover what is possible with the Ĥ'(θ_i).
A digital quantum computer has the advantage that in principle it can implement the evolution under any unitary.
Digital quantum computers come equipped with a set of elementary unitary operations, a universal “gate set", from which any unitary operation can be constructed.
These universal gate sets often consist of an arbitrary single-qubit rotation and an entangling two-qubit operation like a controlled-not (CNOT).
On current, Noisy Intermediate Scale Quantum (NISQ) era <cit.>, digital quantum computers it is the two-qubit gates that are the primary source of noise and errors.
In a quantum simulation, gates are arranged in such a way to form quantum circuits that prepare the desired initial state and implement the time evolution operator.
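For instance, the two-qubit unitary e^{-iθ Ẑ⊗Ẑ}, a typical building block of a Trotterized time evolution operator, decomposes exactly into this gate set as two CNOTs and one single-qubit rotation; a short Qiskit sketch (illustrative, not the circuits used in the later chapters) is:

```python
# exp(-i * theta * Z⊗Z) built from the universal gate set: CNOT, Rz(2*theta), CNOT.
from qiskit import QuantumCircuit

theta = 0.3
qc = QuantumCircuit(2)
qc.cx(0, 1)            # CNOT maps I⊗Z to Z⊗Z under conjugation
qc.rz(2 * theta, 1)    # Rz(2*theta) = exp(-i * theta * Z) on the target qubit
qc.cx(0, 1)
```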
Quantum simulation of lattice gauge theories is still in its infancy, with the state-of-the-art focusing on toy models of QCD, often in fewer than three dimensions, with simpler gauge groups and/or without dynamical fermions <cit.>.
Chapter <ref> establishes the foundation for quantum simulations of 1+1D QCD, explores aspects of the theory with exact diagonalization and presents results from the first simulation of QCD in 1+1D on a quantum computer.
This includes working out the zeroth order step of quantum simulation; mapping the Hilbert space of 1+1D QCD to the Hilbert space of qubits.
An in depth analysis of the spectrum of 1+1D QCD reveals several interesting features.
With N_f=2 flavors of quarks there exists a bound state of two baryons, analogous to the deuteron in QCD.
It is found that this binding is due to the vacuum energy density per flavor of N_f=2 QCD being lower than in N_f=1 QCD.
This result is intriguing as it indicates that the presence of a bound state can be deduced from the N_f-dependence of the vacuum energy.
Additionally, it is found that the structure of entanglement in the vacuum can be used to identify a phase transition.
As a function of the coupling constant, g, the vacuum transitions from being primarily composed of “mesonic" excitations consisting of quark-antiquark pairs, to being primarily composed of “baryonic" excitations consisting of baryon-antibaryon pairs.
This is due to a competition between the mass energy, which counts the occupation of quarks and antiquarks, and the energy in the chromoelectric field.
Mesonic excitations contribute only two units of occupation to the mass energy, but have a string of 3 or 3̄ color flux that contributes chromoelectric energy.
Baryonic excitations, on the other hand, contribute six units of occupation to the mass energy, but are locally color singlets and do not excite the chromoelectric field.
Therefore, increasing g with the quark mass fixed causes the low energy excitations to transition from being mesonic to baryonic.
At the critical coupling where this transition occurs, the bipartite entanglement between quarks and antiquarks spikes.
This is because there are contributions of both mesonic and baryonic states to the vacuum wavefunction, and consequently more pure states contributing to the reduced density matrix.
It is possible that such a rearrangement could occur in the QCD vacuum as the strong coupling constant increases near the confinement-deconfinement transition.
Chapter <ref> also presents results from the first quantum simulations of QCD in 1+1D.
To accomplish this, quantum circuits are developed for preparing the vacuum and implementing time evolution.
The time evolution circuits are executed on IBM's superconducting 7-qubit quantum computers ibm_perth and ibm_jakarta for N_f=1 QCD (corresponding to 6 qubits per spatial site).
The bare vacuum-to-vacuum amplitudes are measured, and found to agree with expectations, with statistical uncertainties below 1%.
This work was featured in a podcast available on YouTube at https://www.youtube.com/watch?v=PS_8oRaqQRc.
Chapter <ref> extends these simulations to a single generation of Standard model fermions; N_f=2 QCD with u and d quarks as well as e^- and ν_e leptons.
This system requires 16 qubits per spatial lattice site, 6 each for the u and d quarks, and 2 each for the e^- and ν_e leptons.
By coupling the quarks to the leptons with an effective 4-Fermi operator, weak decays in real-time are simulated.[Note that, as a chiral gauge theory, an ab initio treatment of the weak interaction on the lattice is currently not available.]
This simulation requires the preparation of an initial baryon state, in our case the Δ^- baryon as well as the lepton vacuum.
Time evolution leads to a non-zero decay rate corresponding to the process Δ^- →Δ^0 + e^- + ν_e, that is detected by measuring the electric charge in the lepton sector.
The circuits that prepare the Δ^- baryon and evolve it forward in time are executed using Quantinuum's H1-1 20-qubit trapped ion quantum computer <cit.>.
The electric charge is measured with statistical uncertainties at the 5% level, and the findings are consistent with classically computed expectations.
This work serves as a proof of concept, demonstrating the ability of quantum computers to simulate weak decays in real-time.
Work is currently underway to extend these simulations to neutrinoless double beta decay on multiple lattice sites.
The work discussed in chapters <ref> and <ref> will be foundational for future quantum simulations of 1+1D QCD and hopefully QCD in higher dimensions.
However, even the N_f=2 demonstration was limited to 16 qubits, which can be simulated classically by multiplying 2^16× 2^16 matrices (2^16 = 65,536).
This is easily handled on a laptop, which can work with matrices of dimension up to ∼ 2^26× 2^26.
However, the exponential growth of Hilbert space causes exact methods to quickly hit a ceiling.
Indeed, even the most powerful supercomputers, with petabytes of memory, can only simulate up to 48 qubits using exact matrix methods <cit.>.
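For orientation, storing the state vector of n qubits requires 2^n complex amplitudes; at 16 bytes per double-precision complex number, 48 qubits already correspond to 2^48× 16 bytes ≈ 4.5 petabytes, which makes this ceiling concrete.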
Surpassing this threshold of exact computation with classical computers was part of the motivation of the work in chapters <ref> (quantum simulations using 100 qubits) and <ref> (quantum simulations using 112 qubits).
To facilitate the quantum simulation of larger systems, these chapters focus on a simpler lattice gauge theory, the lattice Schwinger model, which is quantum electrodynamics in 1+1D.
Like QCD, the Schwinger model is a confining gauge theory, has a chiral condensate and possesses composite, “hadronic”, particles that bind and form “nuclei”.
In chapter <ref> the first step of a quantum simulation of the Schwinger model is addressed; state preparation on a quantum computer.
A new state preparation algorithm, Scalable Circuits ADAPT-VQE (SC-ADAPT-VQE), is introduced, see Fig. <ref> for an illustration.
Physical systems have many properties that simplify state preparation: they are often invariant under spatial translations, locally interacting and possess a mass gap.
SC-ADAPT-VQE uses these features to determine scalable quantum circuits for preparing states with localized correlations.
Scalability allows these state preparation circuits to be optimized on small and modest sized lattices using classical computers, and then robustly extrapolated to prepare the desired state on large systems using quantum computers.
The use of classical computers circumvents the challenge of optimizing parameterized quantum circuits on a noisy quantum computer, a task which is known to cause numerical instabilities like barren plateaus <cit.>.
In chapter <ref>, SC-ADAPT-VQE is applied to the preparation of the Schwinger model vacuum.
By optimizing scalable state preparation circuits on systems of up to 28 qubits using classical computers, the ground state is prepared on 100 qubits of IBM’s quantum computer ibm_cusco.
Measurements of the local chiral condensate and charge-charge correlators are found to be in excellent agreement with results obtained from matrix-product-state simulations.
The work in this chapter was featured in a video available on YouTube: https://www.youtube.com/watch?v=L7Kk_lR1Y2M.
The work in chapter <ref> solved the first step for large scale quantum simulations of the Schwinger model.
Chapter <ref> builds off this and presents results for quantum simulations of hadron dynamics in the lattice Schwinger model.
The first problem that is addressed is the preparation of a hadronic state that will be time evolved.
In this context, a hadron is an electron positron pair, bound together by the confining electromagnetic potential.
We choose to prepare a hadron wavepacket; a superposition of single hadron states that is localized in both position and momentum space.
This state has localized correlations, and can therefore be prepared with SC-ADAPT-VQE.
The SC-ADAPT-VQE circuits determined in chapter <ref> are first used to prepare the vacuum everywhere on the lattice, and then SC-ADAPT-VQE is applied again to determine scalable circuits that excite a hadron wavepacket on top of the vacuum.
The next challenge addressed in chapter <ref> is the implementation of the time evolution operator.
Fermions in the Schwinger model interact with each other through a linear Coulomb potential Ĥ_el.
On a quantum computer this corresponds to an interaction that is all-to-all between every pair of qubits.
This all-to-all interaction is problematic to implement on a quantum device for two reasons.
First, the number of gates required to implement e^-i Ĥ_el t grows quadratically with the simulation volume.
For system sizes of 100+ qubits, corresponding to 50+ lattice sites, the necessary gate count surpasses the capabilities of present-day quantum devices.
Second, the required connectivity for efficient implementation is all-to-all.
Such connectivity is not available on IBM's superconducting architecture, which only has native nearest-neighbor interactions.
Motivated by the non-perturbative mechanism of confinement, we introduce a truncated interaction that removes interactions between sites separated by ≳ the confinement length.
This interaction is systematically improvable by increasing the interaction range, and converges exponentially.
The truncated interaction allows e^-i Ĥ_el t to be implemented with a number of gates that scales linearly with the lattice size instead of quadratically.
Additionally, interactions only need to be engineered between qubits separated by approximately a confinement length instead of across the whole lattice.
This significantly reduces the two-qubit gate count, and makes quantum simulations of time evolution possible.
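The scaling can be made concrete with a simple counting exercise (an illustrative sketch; the interaction range lam below is an assumed stand-in for the confinement length, not the value used in the simulations):

# Count the distinct two-qubit interaction pairs generated by the electric Hamiltonian
# on L spatial sites, with and without truncating the interaction to a range lam (in sites).
def pair_counts(L, lam):
    all_pairs = sum(1 for i in range(L) for j in range(i + 1, L))
    truncated = sum(1 for i in range(L) for j in range(i + 1, L) if j - i <= lam)
    return all_pairs, truncated

for L in (14, 28, 56):
    full, trunc = pair_counts(L, lam=7)
    print(L, full, trunc)   # full grows like L^2/2, truncated like lam*L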
Our simulation of hadron dynamics proceeds by preparing the hadron wavepacket with SC-ADAPT-VQE, and then performing Trotterized time evolution using the Hamiltonian with the truncated electric interaction.
These simulations are performed on 112 qubits (56 lattice sites) for up to 14 Trotter steps.
The simulation of 14 Trotter steps uses 13,858 two-qubit gates with an associated two-qubit circuit depth of 370.
The results of the quantum simulation show a clear signal of a hadron propagating through the vacuum, and qualitative agreement is obtained with matrix product state simulations.
Vital to the success of these simulations was an incredible amount of statistics, 156 million shots in total, that enabled powerful error mitigation techniques.
At the time (January 2024) this was the most complex digital quantum computation to be published on arXiv[Complexity is being measured by either the number of two-qubit gates or by the quantum volume = (# of qubits) × (CNOT depth).].
This work has been featured in an IBM https://www.ibm.com/quantum/blog/hadron-dynamics-simulationsblogpost.
The work in chapters <ref> and <ref> propelled the quantum simulation of lattice gauge theories beyond what is possible with exact methods using classical computers.
However, they are still susceptible to simulation using classical computers with (approximate) tensor-network techniques.
This is because simulations of the vacuum, and of the dynamics of a single hadron only populate relatively low-energy states, which can be efficiently represented with a Matrix Product State ansatz <cit.>.
The work presented in chapter <ref> takes steps toward quantum simulations of fragmentation and hadronization at high energies, which will likely be beyond the capabilities of tensor network based techniques.
An open question relevant to understanding heavy-ion collisions at BNL <cit.> and the LHC <cit.>, and QCD dynamics at high energies more generally, are the mechanisms underlying fragmentation and re-hadronization.
In a high energy collision between two heavy nuclei, the nucleons in the nucleus can be broken apart, freeing the quarks and gluons from hadronic confinement.
The resulting state is believed to form an exotic phase of matter called a quark gluon plasma.
As the quark gluon plasma expands, the average energy of the partons decreases, and the quarks and gluons re-hadronize.
Questions related to the formation of quark gluon plasma, its subsequent expansion and thermalization and hadronization require the real-time simulation of out-of-equilibrium QCD dynamics to address.
Such simulations are impossible using classical computers, but will hopefully be possible using future quantum computers.
The Schwinger model is also a confining lattice gauge theory, and this process of fragmentation and hadronization can be simulated via the high energy collision of hadrons.
In chapter <ref> such simulations are performed using classical computers to prepare for future quantum simulations.
The background for our collisions is a dense medium of static hadrons on half of the lattice that are fixed in place by background charges.
Hadrons collide with this medium at high energies, and the energy deposition, entanglement and charge density are measured as a function of incident velocity.
The collision does not factorize as a product of individual collisions, providing clear evidence of quantum coherence between the constituents of the medium.
In addition, measurements of the charge distribution after the collision indicate that hadrons are produced at sufficiently high collision energies.
A careful study of lattice artifacts reveals that entanglement, as a non-local quantity, is significantly more sensitive to lattice spacing effects than local observables.
With the goal of eventually performing these simulations on a quantum computer, it is shown how SC-ADAPT-VQE can be used to determine circuits that prepare the static hadrons that make up the medium.
The simulation strategy and state preparation circuits developed in this work are steps toward simulating fragmentation and hadronization in the Schwinger model, and eventually QCD, on a quantum computer.
CHAPTER: ENTANGLEMENT MINIMIZATION IN HADRONIC SCATTERING WITH PIONS
This chapter is associated with Ref. <cit.>:
“Entanglement minimization in hadronic scattering with pions" by Silas R. Beane, Roland C. Farrell and Mira Varma.
§ INTRODUCTION
It is of current interest to uncover implications of quantum
entanglement for the low-energy interactions of hadrons and nuclei[For a recent review, see Ref. <cit.>.]. As
these interactions are profitably described by effective quantum field
theory (EFT), which is an expansion of the relevant effective action
in local operators, entanglement may have subtle implications for EFT
which are difficult to identify due to its intrinsic non-locality.
Ideally entanglement properties reveal themselves as regularities in
hadronic data and, possibly, as accidental approximate symmetries. In
addition to the non-local nature of entanglement, a difficulty lies
with parsing the distinction, if any, between entanglement effects and
generic quantum correlations which account for the deviation of QCD
path integral configurations from a classical path. For instance, if
one assumes that QCD with N_c=3 is near the large-N_c
limit <cit.>, then one
might expect that it would be difficult to distinguish between
large-N_c expectations and some fundamental underlying principle
that minimizes entanglement independent of the value of N_c. To
make this more concrete, consider two local or non-local QCD operators
O_1 and O_2. If the vacuum expectation value of the
product of these operators
obeys the factorization rule <cit.>
⟨ O_1 O_2 ⟩ = ⟨ O_1 ⟩⟨ O_2 ⟩ + O(ϵ)
where ϵ is a small number, then the variance of any operator
vanishes in the limit ϵ→ 0.
The variance of an operator is related to the sampling of multiple field configurations in the path integral, and the vanishing of the variance often implies the existence of a master field solution.
A theory whose
operators obey this factorization behaves like a classical
theory,[Ordinarily one identifies the classical theory with
the trivial ħ→ 0 limit. However,
Ref. <cit.> has established a more general criterion
for the classical limit.] and therefore has a small parameter
ϵ which measures quantum effects. Large-N_c QCD is such a
theory, and indeed, at least for a class of QCD operators, one can
identify ϵ = 1/N_c. The factorization property,
Eq. (<ref>), is then easily deduced from Feynman diagrams
involving quarks and gluons and amounts to the dominance of
disconnected contributions in the path integral.
On the other hand, one might imagine that the factorization of
Eq. (<ref>) arises as a property of the path integral,
rather than as a property of the local action (as in varying N_c and
taking it large in QCD). It is not a priori unlikely that, at
least for a class of QCD operators, the path integral minimizes
quantum fluctuations via a mechanism that is not currently understood.
For instance, starting with QCD defined at short distances, the
procedure of sequentially integrating out short distance modes to
obtain low-energy hadronic scattering amplitudes may remove
highly-entangled states that arise from non-perturbative QCD dynamics,
leaving a low-energy EFT that is near a classical trajectory. It is
intuitively sensible that the QCD confinement length acts as a natural
cutoff of entanglement in the low-energy EFT. This notion can be
raised to the conjecture that QCD will minimize the entanglement in
low-energy hadronic interactions. Testing this conjecture relies on
finding hadronic systems where its consequences deviate from those
implied by large-N_c. And the success of the large-N_c approximation
in describing the world renders this task challenging. Evidence in
favor of this conjecture was found in Ref. <cit.> in a
study of baryon-baryon scattering systems (See also
Refs. <cit.> and <cit.>). This work relied
both on theoretical arguments and high-precision lattice QCD
simulations of baryon-baryon scattering systems with strangeness. In
this chapter, the conjecture of minimal
entanglement will be investigated in both ππ and π N scattering.
Finding measures of the entanglement due to interaction is both
non-trivial and non-unique. The most fundamental object in the
scattering process is the unitary S-matrix. In a scattering process
in which the two in-state particles form a product state, the
S-matrix will entangle the in-state particles in a manner that is
dependent on the energy of the scattering event. A useful measure of
this entanglement is the entanglement power (EP) of the
S-matrix <cit.>. In
the case of nucleon-nucleon (NN) scattering, the EP was found for all
momenta below inelastic threshold <cit.>. However, the
most interesting phenomenological result is at threshold, where the
vanishing EP implies the vanishing of the leading-order spin
entangling operator, which in turn implies Wigner SU(4)
symmetry <cit.>. As this
symmetry is a consequence of large-N_c
QCD <cit.>, the
minimization of entanglement and the large-N_c approximation are
found to be indistinguishable in the two-flavor case. By contrast, in
the three-flavor case, minimization of the entanglement power in
baryon-baryon scattering implies an enhanced SU(16) symmetry which
is not necessarily implied by large-N_c and is realized in lattice
QCD simulations <cit.>. Given that
baryon-baryon scattering exhibits entanglement minimization, it is of
interest to determine whether other low-energy QCD scattering systems
exhibit this property. In investigating the EP of scattering systems
involving pions, once again a crucial difficulty is distinguishing
consequences of entanglement minimization and the large-N_c
limit. In the ππ system the implications of entanglement minimization are found to
be indistinguishable from implications of large-N_c. In the π N
system the implications of entanglement minimization are distinct,
however the absence of an enhanced symmetry limits the predictive
power to simple scaling laws with no smoking-gun predictions.
This chapter is organized as follows. In Section <ref>, the EP
of the ππ S-matrix is considered in detail. After
introducting the standard definition and conventions of the ππ
S-matrix, the S-matrix is formulated in a basis convenient for
calculation of the EP. Explicit expressions are derived for the EP of
the first few partial waves in terms of phase shifts and leading-order
expressions in chiral perturbation theory are provided. Using the
highly-accurate Roy-equation solutions for the low-energy phase
shifts, the experimental EP for the first few partial waves are given
up to inelastic threshold. The consequences of minimizing the EP are
considered and compared to large-N_c expectations. In
Section <ref>, the same procedure is repeated for the π N
S-matrix. Finally, Section <ref> is a discussion of the
possible conclusions that can be drawn from the conjecture of minimal
entanglement.
§ THE ππ SYSTEM
There are, of course, several important differences between baryon-baryon and pion-pion scattering.
Firstly, with pions there is no notion of spin entanglement. However,
isospin (or generally flavor) entanglement is present and can be
quantified using the EP and it is not clear that there is any
meaningful distinction between these two kinds of
entanglement. Indeed, it is straightforward to see that the “spin”
entanglement of Ref. <cit.> can be reformulated as
“isospin” entanglement with identical consequences[At the
level of the EFT, this is simply realized via Fierz
identities.]. This is no surprise as entanglement is fundamentally a
property of a non-product state vector whose existence relies either on an internal or a spacetime symmetry. Secondly, the crucial distinction between
baryon-baryon scattering at very low-energies and the scattering of
pions is that pion scattering at low-energies is strongly constrained
by spontaneous chiral symmetry breaking in QCD. In particular, chiral
symmetry implies that low-energy pion scattering on an arbitrary
hadronic target is weak. The weak nature of the interaction is due to
the smallness of the light-quark masses relative to a characteristic
QCD scale. This translates to a chiral suppression of the EP at
low-energies. Chiral symmetry breaking at large-N_c does involve
enhanced symmetry <cit.>; for N flavors, the QCD
chiral symmetries and their pattern of breaking are enhanced to
U(N)⊗ U(N)→ U(N), as signaled by the presence of a new
Goldstone boson, η', whose squared mass scales as 1/N_c.
Intuitively, the anomaly, as an intrinsically quantum phenomenon, is a
strongly entangling effect which would generally vanish as quantum
fluctuations are suppressed. However, this is not assumed, as the focus of this chapter is
two-body scattering which does not access the anomaly.
§.§ S-matrix definition
The S-matrix is defined as
S = 1 + i T
where unity, corresponding to no interaction, has been separated out. This then defines the T-matrix.
The S-matrix element for the scattering process π^i π^j →π^k π^l is then given by
⟨π^k(p_3) π^l(p_4)| S |π^i(p_1) π^j(p_2)⟩ = ⟨π^k(p_3) π^l(p_4) | π^i(p_1) π^j(p_2) ⟩
+ ⟨π^k(p_3) π^l(p_4)| iT |π^i(p_1) π^j(p_2)⟩
where i, j, k, and l are the isospin indices of the pion states.
The projection operators onto states of definite isospin are[For a detailed construction, see Ref. <cit.>.]
P_0^kl,ij = 1/3 δ^klδ^ij ,
P_1^kl,ij = 1/2( δ^kiδ^lj - δ^liδ^kj) ,
P_2^kl,ij = 1/2( δ^kiδ^lj + δ^liδ^kj)
- 1/3 δ^klδ^ij ,
where the subscript indicates the total isospin, I, of the ππ system.
Straightforward field-theoretic manipulations then give
⟨π^k(p_3) π^l(p_4)| S |π^i(p_1) π^j(p_2)⟩
=
(2π)^4 δ^4(p_1+p_2-p_3-p_4) 16 π/σ(s) ∑_ℓ=0^∞ (2ℓ+1) P_ℓ(cosθ) S_ℓ^kl,ij ,
where the P_ℓ are the Legendre polynomials, and
σ(s) ≡√(1- 4 M_π^2/s) ,
with s=4(q^2+M_π^2) and q is the center-of-mass three-momentum of the pions.
The focus here will be on the S-matrices of definite partial wave:
S_ℓ^kl,ij≡ e^2iδ_ℓ^0 P_0^kl,ij+e^2iδ_ℓ^1 P_1^kl,ij+e^2iδ_ℓ^2 P_2^kl,ij ,
which satisfy the unitarity constraint
S_ℓ^kl,ij S_ℓ^*ij,mn = δ^kmδ^ln .
Since the pions obey Bose statistics, the angular momentum, ℓ, is
even for the states with I = 0 or 2 and odd for states with I = 1.
As the initial state in the scattering process is a product state of two pions, each in the 3-dimensional
(I=1) irrep of SU(2) isospin, it is convenient to work in the direct-product matrix basis.
The pion isospin matrices are the three-by-three matrices t̂_α which satisfy
[ t̂_α , t̂_β ] = i ϵ_αβγ t̂_γ .
In the product Hilbert space H_1⊗ H_2, the total isospin of the two-pion system is t̂_1⊗ I_3 + I_3⊗t̂_2, where
I_3 is the three-by-three unit matrix, which implies
t̂_1 ·t̂_2 = 1/2( I(I+1) - 4 ) 1̂ = 1̂
-2, I=0
-1, I=1
1, I=2
where
1̂ = Î_3⊗Î_3 and
t̂_1 ·t̂_2 = ∑_α=1^3 t̂_1^α⊗t̂_2^α.
The 9× 9 dimensionality of the matrix is in correspondence with the dimensionality of the SU(2) isospin product representation
3⊗ 3= 1⊕ 3⊕ 5. There are now three invariants and three observables; one easily finds the S-matrix
in the direct-product matrix basis
Ŝ_ℓ =
e^2iδ_ℓ^0P̂_0+e^2iδ_ℓ^1P̂_1+e^2iδ_ℓ^2P̂_2 ,
where the three 9× 9 projection matrices are
P̂_0 = -1/3(1̂- ( t̂_1 ·t̂_2 )^2 ) ,
P̂_1 = 1̂-1/2( ( t̂_1 ·t̂_2 )+ ( t̂_1 ·t̂_2 )^2 ) ,
P̂_2 = 1/3(1̂+ 3/2( t̂_1 ·t̂_2 )+ 1/2( t̂_1 ·t̂_2 )^2 ) .
It is readily checked that the S-matrix is unitary, and using the representation ( t_γ)_αβ=-iϵ_αβγ, it is straightforward to establish
equivalence with the component form, Eq. (<ref>).
The trace is given by e^i 2 δ_ℓ^0 +3 e^i 2 δ_ℓ^1 +5 e^i 2 δ_ℓ^2 which correctly
counts the isospin multiplicity, and is in correspondence with the nine eigenvalues of Ŝ.
§.§ Entanglement power
Consider the ℓ=1 S-matrix. As this system can scatter only in the I=1 channel, it provides
a useful example of how the S-matrix entangles the initial two-pion state. From Eq. (<ref>) one
finds
Ŝ_1 = 1/2( 1+ e^i 2 δ_1^1) 1̂ + 1/2( 1- e^i 2 δ_1^1) P_12
where the SWAP operator is given by
P_12 =( t̂_1 ·t̂_2 )^2 + t̂_1 ·t̂_2 - 1̂ .
As the SWAP operator interchanges the pions in the initial two-pion product state, leaving another two-pion product state, it
does not entangle. Therefore, the S-matrix has the two obvious non-entangling solutions δ_1^1=0 (no interaction) and
δ_1^1=π/2 (at resonance). One measure of S-matrix entanglement would then be the (absolute value squared of the) product of the coefficients of the non-entangling
solutions:
| ( 1+ e^i 2 δ_1^1) ( 1- e^i 2 δ_1^1) |^2 ∼sin^2(2 δ_1^1) .
A state-independent measure of the entanglement generated by the
action of the S-matrix on the initial product state of two free
particles is the
EP <cit.>. In order
to compute the EP an arbitrary initial product state should be
expressed in a general way that allows averaging over a given
probability distribution folded with the initial state. Recall that in
the NN case, there are two spin states (a qubit) for each nucleon and
therefore the most general initial nucleon state involves two complex
parameters or four real parameters. Normalization gets rid of one
parameter and there is an overall irrelevant phase which finally
leaves two real parameters which parameterize the CP^1
manifold, also known as the 2-sphere S^2, or the Bloch sphere.
Now in the isospin-one case we have three isospin states (a qutrit)
which involves three complex parameters. Again normalization and
removal of the overall phase reduce this to four real parameters which
parameterize the CP^2
manifold <cit.>. Since
the ππ initial state is the product of two isospin-one states,
there will be eight parameters to integrate over to get the EP.
There are now two qutrits in the initial state, which live in the Hilbert spaces H_1,2, each
spanned by the states { | -1_i ⟩ , | 0_i
⟩ , | 1_i ⟩} with i=1,2. It is of interest
to determine the EP of a given S-matrix operator, which is a measure of the entanglement of the scattered state averaged
over the CP^2 manifolds on which the qutrits live. Consider
an arbitrary initial product state of the qutrits
| Ψ ⟩ = U(α_1,β_1,μ_1,ν_1) | ⟩_1 ⊗ U(α_2,β_2,μ_2,ν_2) | ⟩_2
with
U(α_i,β_i,μ_i,ν_i) | ⟩_i =
cosβ_i sinα_i | -1 ⟩_i + e^iμ_i sinβ_i sinα_i | 0 ⟩_i + e^iν_i cosα_i | 1 ⟩_i ,
where 0 ≤μ_i,ν_i < 2π and 0 ≤α_i,β_i≤π/2.
The geometry of CP^2 is described by the Fubini-Study (FS) line element <cit.>
ds_ FS^2 = dα^2+ sin ^2(α ) dβ^2 + ( sin ^2(α ) sin ^2(β ) - sin ^4(α ) sin ^4(β ) ) dμ^2 +
sin ^2(α ) cos ^2(α ) dν^2 -2 sin ^2(α ) cos ^2(α ) sin ^2(β ) dμ dν .
Of special interest here is the differential volume element which in these coordinates is
dV_ FS = √(det g_ FS) dα dβ dμ dν
= cosαcosβsin^3αsinβ dα dβ dμ dν
and the volume of the CP^2 manifold is found to be,
∫ dV_ FS = π^2/2 .
The final state of the scattering process is obtained by acting with the unitary S-matrix of definite angular momentum
on the arbitrary initial product state:
| Ψ̅ ⟩ = Ŝ_ℓ | Ψ ⟩ .
The associated density matrix is
ρ_1,2 = | Ψ̅ ⟩⟨ Ψ̅| ,
and tracing over the states in H_2 gives the reduced density matrix
ρ_1 = Tr_2 ρ_1,2 .
The linear entropy of the S-matrix is then defined as[Note that this is related to the (exponential of the) second Rényi entropy.]
E_Ŝ_ℓ = 1 - Tr_1 ( ρ_1)^2 .
Finally, the linear entropy is integrated over the initial CP^2 manifolds to form the average, and the entanglement power is
ℰ(Ŝ_ℓ) = (2/π^2)^2 (∏_i=1^2∫ dV^i_ FS) P E_Ŝ_ℓ
where P is a probability distribution which here will be taken to be unity.
Evaluating this expression using Eq. (<ref>) yields the s-wave ππ EP:
ℰ(Ŝ_0) = 1/648( 156 - 6 cos[4 δ_0^0 ] - 65 cos[2 (δ_0^0 - δ_0^2)] - 10 cos[4 (δ_0^0 - δ_0^2)] - 60 cos[4 δ_0^2] -
15 cos[2 (δ_0^0 + δ_0^2)] ) ,
and the p-wave ππ EP:
ℰ(Ŝ_1) = 1/4sin^2(2 δ_1^1) .
Notice that this matches the intuitive construction which led to Eq. (<ref>).
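The p-wave result is simple enough to cross-check numerically. The sketch below (an illustrative Monte Carlo implementation in Python/NumPy of the definitions above, not code from the original analysis) builds the 9× 9 S-matrix from the SWAP operator, averages the linear entropy of the out-state over the two CP^2 manifolds with the Fubini-Study weight, and compares the result with 1/4 sin^2(2 δ_1^1):

import numpy as np

rng = np.random.default_rng(0)

# Isospin-1 generators in the representation (t_gamma)_{alpha beta} = -i eps_{alpha beta gamma}
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[j, i, k] = 1.0, -1.0
t = [-1j * eps[:, :, g] for g in range(3)]

I9 = np.eye(9)
t1_dot_t2 = sum(np.kron(tg, tg) for tg in t).real
P12 = t1_dot_t2 @ t1_dot_t2 + t1_dot_t2 - I9               # SWAP operator on the two isospins

def S_pwave(delta):
    # identity on the I=0,2 subspace and e^{2 i delta} on the I=1 subspace
    return 0.5 * (1 + np.exp(2j * delta)) * I9 + 0.5 * (1 - np.exp(2j * delta)) * P12

def qutrit(a, b, m, n):
    # arbitrary normalized isospin-1 state in the |-1>, |0>, |1> basis
    return np.array([np.cos(b) * np.sin(a),
                     np.exp(1j * m) * np.sin(b) * np.sin(a),
                     np.exp(1j * n) * np.cos(a)])

def entanglement_power(S, nsamples=50_000):
    a = rng.uniform(0, np.pi / 2, (nsamples, 2))
    b = rng.uniform(0, np.pi / 2, (nsamples, 2))
    m = rng.uniform(0, 2 * np.pi, (nsamples, 2))
    n = rng.uniform(0, 2 * np.pi, (nsamples, 2))
    w = np.prod(np.cos(a) * np.cos(b) * np.sin(a) ** 3 * np.sin(b), axis=1)  # Fubini-Study weight
    E = np.empty(nsamples)
    for s in range(nsamples):
        psi = np.kron(qutrit(a[s, 0], b[s, 0], m[s, 0], n[s, 0]),
                      qutrit(a[s, 1], b[s, 1], m[s, 1], n[s, 1]))
        out = S @ psi
        rho = np.outer(out, out.conj()).reshape(3, 3, 3, 3)
        rho1 = np.trace(rho, axis1=1, axis2=3)               # trace over the second pion
        E[s] = 1.0 - np.real(np.trace(rho1 @ rho1))          # linear entropy
    return np.sum(w * E) / np.sum(w)

for delta in (0.3, 0.7, np.pi / 2):
    print(delta, entanglement_power(S_pwave(delta)), np.sin(2 * delta) ** 2 / 4)

With 𝒪(10^4-10^5) samples the Monte Carlo estimate reproduces the closed form at the percent level or better.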
The EPs have the non-entangling solutions:
δ_0^0 = δ_0^2 = 0, π/2 ,
δ_1^1 = 0, π/2 .
Therefore, in the s-wave, entanglement minimization implies that both isospins are either non-interacting or at resonance, while in
the p-wave, entanglement minimization implies that the I=1 channel
is either non-interacting or at resonance. As no I=2 resonances are
observed in nature (and their coupling to pions is suppressed in
large-N_c QCD <cit.>), the s-wave EP has a single minimum corresponding to
no interaction. By contrast, the I=1 channel will exhibit minima of
both types. It is worth considering the EP of a simple resonance model.
Consider the unitary S-matrix:
Ŝ_1 = (s - m_1^2 - i m_1 Γ_1)/(s - m_1^2 + i m_1 Γ_1) ,
where m_1 (Γ_1) are the mass (width) of the resonance. The EP is
ℰ(Ŝ_1) = ( m_1 Γ_1 (s-m_1^2)/( (m_1 Γ_1)^2 + (s-m_1^2)^2 ) )^2 ,
which vanishes on resonance at s=m_1^2 and has maxima at s=m_1(m_1±Γ_1). It is clear that the minimum corresponds to
Ŝ∝ P_12. As the ρ-resonance dominates the I=1 channel at energies below 1 GeV, the EP in nature
will be approximately of this form.
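For orientation, taking parameters in the neighborhood of the ρ(770), m_1 ≈ 0.775 GeV and Γ_1 ≈ 0.15 GeV (illustrative inputs rather than a fit), the EP vanishes on resonance at √(s) = m_1 ≈ 0.775 GeV and reaches its maximal value of 1/4 at s = m_1(m_1 ±Γ_1), i.e. √(s) ≈ 0.70 GeV and 0.85 GeV.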
The ππ phase shifts are the most accurately known of all hadronic S-matrices as the Roy equation constraints <cit.>
come very close to a complete determination of the phase shifts <cit.>.
In Fig. (<ref>) the EPs for the first few partial waves are plotted using the Roy equation determinations of the S-matrix.
§.§ Chiral perturbation theory
Near threshold, the phase shift can be expressed in the effective range expansion as
δ_ℓ^I(s) = sin^-1{ 2σ (s) q^2ℓ( a^I_ℓ + 𝒪(q^2) )} ,
where the scattering lengths, a^I_ℓ, relevant to s-wave and p-wave scattering, are given at leading order in chiral perturbation theory by <cit.>
a_0^0 = 7M_π^2/(32π F_π^2) , a_0^2 = -M_π^2/(16π F_π^2) , a_1^1 = 1/(24π F_π^2) ,
where F_π=93 MeV is the pion decay constant. Near threshold the s-wave and p-wave EPs are given by
ℰ(Ŝ_0) = 1/(9 M_π^2)[ 4(a^0_0)^2 - 5 (a^0_0 a^2_0) + 10 (a^2_0)^2 ] q^2 + 𝒪(q^4) , ℰ(Ŝ_1) = 1/M_π^2 (a^1_1)^2 q^6 + 𝒪(q^8) .
As a_0^0 (a_0^2 ) is positive (negative) definite, the EP is trivially minimized with vanishing scattering lengths.
This then implies a bookkeeping where F_π=𝒪(ϵ^-n) where n is a positive number.
Hence, in the limit of vanishing entanglement, the pions are non-interacting, and the dominant interaction is from tree diagrams; i.e. loops are suppressed by inverse powers
of F_π. In the large-N_c limit, one finds ϵ=1/N_c and n=1/2 <cit.>.
Evidently the implications of vanishing entanglement for the ππ S-matrix are indistinguishable from large-N_c expectations[We also studied the effect of explicit
chiral symmetry breaking on the entanglement power by varying the coefficients of operators with
insertions of the quark mass matrix in the effective action. No evidence of a connection between chiral symmetry breaking and the entanglement power was found. This aligns with large-N_c
expectations as the meson masses are independent of N_c. For an example of a relationship between
entanglement and chiral symmetry breaking see <cit.>.].
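For orientation, with M_π ≈ 139.6 MeV and F_π = 93 MeV (illustrative inputs) the leading-order expressions above evaluate to a_0^0 ≈ 0.16, a_0^2 ≈ -0.045 and a_1^1 M_π^2 ≈ 0.030, so both threshold EPs are numerically tiny for q ≪ M_π, in line with the chiral suppression discussed above.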
§ THE π N SYSTEM
As baryons are formed from N_c quarks, the baryon masses
and axial matrix elements grow with N_c. The unitarity of the
S-matrix then places powerful constraints on baryon properties via
large-N_c consistency
conditions <cit.>.
At leading order in the large-N_c expansion this yields predictions
that are equivalent in the two (three) flavor case to SU(4)
(SU(6)) spin-flavor symmetry which place the ground-state baryon
spin states in the 20 (56) dimensional irrep together
with the delta (baryon decuplet). Therefore, the large-N_c limit not
only predicts an enhanced symmetry but also alters the definition of
a baryon in a fundamental way. Moreover, any sensible effective
theory of π N scattering in the large-N_c limit must include the
delta resonance as an explicit degree of freedom. In what follows, the
consequences of entanglement minimization of the low-energy S-matrix
are considered for N_c=3 QCD.
§.§ S-matrix definition
The S-matrix element for the scattering process, π^a(q_1) N(p_1) →π^b(q_2) N(p_2), is given by
⟨π^b(q_2) N(p_2)| S |π^a(q_1) N(p_1)⟩ = ⟨π^b(q_2) N(p_2) | π^a(q_1) N(p_1) ⟩
+ ⟨π^b(q_2) N(p_2)| iT |π^a(q_1) N(p_1)⟩ ,
where a and b label the isospin of the pion.
The T matrix element in the center-of-mass system (cms) for the process may be parameterized as <cit.>
T^ba_π N = ( (E+m)/(2m) ) {δ^ba [ g^+(ω,t) + i σ⃗·(q⃗_2×q⃗_1) h^+(ω,t) ]
+ iϵ^abcτ^c [ g^-(ω,t) + i σ⃗·(q⃗_2×q⃗_1) h^-(ω,t) ] }
where E is the nucleon energy, ω is the pion energy, m is
the nucleon mass and t = (q_1 - q_2)^2 is the square of the momentum
transfer. The σ(τ) matrices act on the spin(isospin) of
the incoming nucleon. This decomposition reduces the scattering
problem to calculating g^±, the isoscalar/isovector
non-spin-flip amplitude and h^±, the isoscalar/isovector
spin-flip amplitude. The amplitude can be further projected onto
partial waves by integrating against P_ℓ, the relevant Legendre
polynomial:
f_ℓ±^±(s) = (E + m)/(16π√(s)) ∫_-1^+1 dz [ g^± P_ℓ(z) + q⃗^2 h^± ( P_ℓ±1(z) - z P_ℓ(z) ) ] .
Here z=cosθ is the cosine of the scattering angle, s is
the cms energy squared and q⃗^2 = q⃗_1^2 = q⃗_2^2. The subscript ± on the partial wave amplitude
indicates the total angular momentum J = ℓ± s. The amplitudes
in the total isospin I=1/2 and I=3/2 can be
recovered via the identification:
f_ℓ±^1/2 = f_ℓ±^+ + 2 f_ℓ±^- , f_ℓ±^3/2 = f_ℓ±^+ - f_ℓ±^- .
Below inelastic threshold the scattering amplitude is related to a unitary S-matrix by
S_ℓ±^I(s) = 1 + 2 i | q⃗ | f_ℓ±^I(s) , S_ℓ±^I(s) S_ℓ±^I(s)^† = 1
and the S-matrix can be parameterized in terms of phase shifts,
S_ℓ±^I(s) = e^2 i δ_ℓ±^I(s) .
For a more detailed derivation of the π N S-matrix
see <cit.>. Scattering in
a given partial wave and total angular momentum channel leads to an
S-matrix which acts on the product Hilbert space of
the nucleon and pion isospin, ℋ_π⊗ℋ_N. The S-matrix can then be written in terms of total
isospin projection operators
Ŝ_ℓ± = e^2 i δ_ℓ±^3/2P̂_3/2 + e^2 i δ_ℓ±^1/2P̂_1/2
where the 6 × 6 projection matrices are
P̂_3/2 = 2/3 (1̂ + t̂_π·t̂_N ) ,
P̂_1/2 = 1/3 (1̂ - 2(t̂_π·t̂_N) ) .
The operators t̂_N and t̂_π are in the
2 and 3 dimensional representations of SU(2) isospin
respectively and t̂_π·t̂_N =
∑_α=1^3 t̂_π^α⊗t̂_N^α.
§.§ Entanglement power
The entanglement power of the π N S-matrix can be computed in a
similar manner as for the ππ EP. The incoming separable state
now maps to a point on the product manifold, CP^2 ×
S^2. The construction of the reduced density matrix follows the
same steps as in section <ref> and the entanglement power
is found to be,
ℰ(Ŝ_ℓ±) = ( 2/π^2 ) ( 1/4π ) (∫ dV_FS dΩ ) 𝒫 E_Ŝ_ℓ±
=
8/243[ 17 + 10 cos( 2( δ_ℓ±^3/2 - δ_ℓ±^1/2 ) ) ] sin^2( δ_ℓ±^3/2 - δ_ℓ±^1/2 )
where 𝒫 has been taken to be 1. Note that the two
particles are now distinguishable and so scattering in each partial
wave is no longer constrained by Bose/Fermi statistics. It follows
that the S-matrix is only non-entangling when it is proportional to
the identity which occurs when,
δ_ℓ±^3 / 2 = δ_ℓ±^1 / 2 .
Notice that the EP allows for interesting local minima when the difference in I=3/2 and I=1/2 phase shifts is π/2.
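This follows directly from differentiating the expression above: with Δ≡δ_ℓ±^3/2 - δ_ℓ±^1/2, one finds d/dΔ{ [17 + 10 cos(2Δ)] sin^2Δ} = sin(2Δ)[ 7 + 20 cos(2Δ) ], which vanishes at Δ = π/2 and changes sign from negative to positive there, so Δ = π/2 is a local minimum of the EP with value 8/243 × 7 = 56/243 rather than a zero.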
The π N phase shifts are determined very accurately by the
Roy-Steiner equations up to a center-of-mass energy of 1.38
GeV <cit.> and the entanglement power for the first
couple partial waves is shown in Fig. (<ref>).
There is a local minimum near the delta resonance position in the p-wave due to the rapid change of the
I=3/2 phase shift.
§.§ Chiral perturbation theory
Near threshold the phase shifts can be determined by the scattering lengths through the effective range expansion,
δ_ℓ±^I = cot^-1{ 1/| q⃗ |^(2ℓ+1) ( 1/a_ℓ±^I + 𝒪(q⃗^2) ) } .
This leads to the threshold form of the entanglement power,
ℰ(Ŝ_ℓ±) = 8/9 (a^1/2_ℓ± - a^3/2_ℓ±)^2 q⃗^(2+4ℓ)
which can only vanish if a^1/2_ℓ± = a^3/2_ℓ±.
The scattering lengths at leading order in heavy-baryon chiral perturbation theory, including the delta, are given by <cit.>,
a^1/2_0+ = 2 M_π m/(8 π (m + M_π) F_π^2) , a^3/2_0+ = - M_π m/(8 π (m + M_π) F_π^2)
a^1/2_1- = -m (9 g_A^2 Δ + 9 g_A^2 M_π - 8 g_πNΔ^2 M_π)/(54 π F_π^2 M_π (Δ+M_π) (m+M_π)) ,
a^3/2_1- = -m (9 g_A^2 Δ + 9 g_A^2 M_π - 8 g_πNΔ^2 M_π)/(216 π F_π^2 M_π (Δ+M_π) (m+M_π))
a^1/2_1+ = m (-3 g_A^2 Δ + 3 g_A^2 M_π + 8 g_πNΔ^2 M_π)/(72 π F_π^2 M_π (Δ-M_π) (m+M_π)) , a^3/2_1+ = m (-3 g_A^2 Δ^2 - 2 g_πNΔ^2 M_π Δ + 3 g_A^2 M_π^2)/(36 π F_π^2 M_π (M_π^2-Δ^2) (m+M_π))
where Δ = m_Δ - m_N is the delta-nucleon mass splitting. The corresponding EPs near threshold are,
ℰ(Ŝ_0+) = m^2 M_π^2/(8 π^2 F_π^4 (m+M_π)^2) q⃗^2
ℰ(Ŝ_1-) = m^2 (9 g_A^2 Δ + 9 g_A^2 M_π - 8 g_πNΔ^2 M_π)^2/(5832 π^2 F_π^4 M_π^2 (Δ+M_π)^2 (m+M_π)^2) q⃗^6
ℰ(Ŝ_1+) = m^2 (-9 g_A^2 Δ^2 + 4 g_πNΔ^2 Δ M_π + (9 g_A^2 + 8 g_πNΔ^2) M_π^2)^2/(5832 π^2 F_π^4 M_π^2 (M_π^2-Δ^2)^2 (m+M_π)^2) q⃗^6 .
Once again the only non-entangling solution consistent with chiral
symmetry is no interaction, with the same scaling of F_π as found in ππ scattering.
Unlike the large-N_c limit, there is no reason to expect an enhancement of the axial couplings,
which in that case gives rise to the contracted spin-flavor symmetries <cit.>.
§ DISCUSSION
In QCD the number of colors, N_c, is a parameter that
appears in the action and in some sense acts as a knob that dials the
amount of quantum correlation in the hadronic S-matrix. The
simplifications, counting rules and enhanced symmetries implied by the
large-N_c approximation have proved highly successful in explaining
regularity in the hadronic spectrum. Recent work in
Ref. <cit.> has conjectured that, independent of the
value of N_c, quantum entanglement is minimized in hadronic
S-matrices. Verifying this conjecture relies on finding consequences
of the conjecture that are distinct from large-N_c predictions, and
indeed this has been found to be the case in baryon-baryon scattering. In
particular, minimization of entanglement near threshold leads to
enhanced symmetry that is verified by lattice QCD simulations. Here
this conjecture has been considered for ππ and π N
scattering. As shown long ago by Weinberg, the scattering of soft
pions off any target is completely determined by chiral
symmetry <cit.> and is weak at low energies. Here
it has been found that the only ππ or π N S-matrix,
consistent with the low energy theorems, that does not entangle
isospin is the identity i.e. no scattering. In the context of chiral
perturbation theory this corresponds to F_π being large when
entanglement is minimized, consistent with large N_c scaling. Unlike
in the large N_c limit, entanglement minimization of the S-matrix
says nothing about the scaling of the baryon masses and axial couplings and therefore
implies no new symmetries in the π N sector. Because of the
weakness of pion processes implied by chiral symmetry, it may be the
case that only systems without external Goldstone bosons (like NN)
give non-trivial constraints from entanglement minimization.
Considering general meson-nucleon scattering, it is clear that
scalar-isoscalar mesons have no spin or isospin to entangle.
Insofar as resonance saturation is effective, entanglement minimization would then predict the contribution to baryon-baryon scattering from
the exchange of non scalar-isoscalar resonances to sum together to give an equal
contribution to all spin-isospin channels <cit.>. This would then naturally lead to the SU(16) symmetry seen in the three flavor baryon
sector <cit.>.
CHAPTER: GEOMETRIC SCATTERING THEORY AND UV/IR SYMMETRIES OF THE S-MATRIX
This chapter is associated with “Geometry and entanglement in the scattering matrix" <cit.> and
“Causality and dimensionality in geometric scattering" <cit.>
by Silas R. Beane and Roland C. Farrell.
§ INTRODUCTION
The scattering matrix (S-matrix) encodes observable
consequences of the quantum mechanical interaction of two particles.
Typically, the S-matrix is determined by solving an effective quantum field theory (EFT) describing particles interacting through a local potential.
In this EFT-based approach, spacetime constraints like Galilean invariance and causality are encoded in the dependence of the scattering on the external kinematics e.g. the center of mass (c.o.m.) momentum p⃗ and energy E.
The framework provided by EFTs is extremely powerful, and has enabled precision calculations of observables, with quantifiable uncertainties.
However, keeping spacetime constraints manifest can obscure features of scattering that are non-local or mix different length scales.
This chapter develops an alternative, geometric, formulation of scattering that is better suited for exploring these non-local aspects of scattering.
The geometric construction is applied to
the non-relativistic s-wave scattering of spin-1/2 fermions.
Identifying these fermions with neutrons and protons describes the low-energy EFT of the Standard Model, and has widespread and important applications in nuclear physics.[See
Ref. <cit.> for a review.]
A new symmetry is found that relates scattering at low and high energies (a UV/IR symmetry).
This symmetry only exists for phase shifts parameterized by scattering lengths and effective ranges, and may explain why shape parameters in nuclear physics are anomalously small <cit.>.
This geometric construction also provides a new perspective on how causality constrains a scattering process, and how entanglement is generated via scattering.
An elastic scattering process is naturally parameterized by (possibly multiple) momentum-dependent phase shift(s) δ_s(p) which ensures that the S-matrix is a unitary operator e.g. for single channel scattering
Ŝ = e^i 2 δ(p).
In the geometric formulation of scattering, the phase shifts are used as a coordinate basis.
Due to the 2π periodicity of each 2δ_s(p), this space has the topology of a torus.
The momentum-dependent S-matrix then emerges as a trajectory, parameterized by the momentum, through the torus.
In the geometric construction, these trajectories are generated from a combination of the intrinsic curvature of the space and “external" forces that permeate the torus.
It will be shown that there exist special S-matrix trajectories that have a symmetry that maps low- and high-energy scattering processes into each other (a UV/IR symmetry).
These symmetries are hidden at the level of the EFT action (see Chapter <ref>), but manifest as reflection symmetries of the S-matrix trajectories.
These UV/IR symmetries leave classes of
observables invariant and will also be referred to as “conformal symmetries” since they leave various combinations of the phase shifts unchanged.
The UV/IR symmetries also allow for the determination of exact solutions for the external forces needed to generate the S-matrix trajectories.
These exact solutions provide a forum for the exploration of
spacetime constraints on the S-matrix in the geometric formulation
of scattering.
The S-matrix evolves an initial state from the boundary of space in
the infinite past, to the boundary of space in the infinite future,
and must do so in a manner consistent with causality and with
awareness of the number of spatial dimensions in which it is acting.
Constraints due to causality on non-relativistic scattering have
implications for the analytic structure of the
S-matrix <cit.> and,
for systems arising from finite-range potentials, for the range of
allowed values of the scattering
parameters <cit.>.
These bounds, known as Wigner bounds, provide powerful constraints on
the exact S-matrix solutions implied by the UV/IR symmetries. In the
geometric theory it is found that these bounds manifest themselves as
constraints on the tangent vectors of S-matrix trajectories.
In addition, as quantum mechanics depends strongly on spatial
dimensionality, the differences between scattering in two and
three dimensions are explored in the geometric formulation.
The resulting S-matrix in two spatial dimensions again
has a solution implied by the UV/IR symmetries. Despite the strikingly
different physics that it gives rise to, the form of the
two-dimensional external potential differs from its three-dimensional
counterpart only by a change of coupling strength and phase.
This chapter is organized as follows. Section <ref> sets up
the S-matrix framework, focusing on the properties of the most
general S-matrix consistent with finite-range forces. The S-matrix
is shown to allow conformal symmetries that are not manifest in the
EFT action, and which provide powerful geometric constraints. In
Section <ref>, the S-matrix of contact forces is shown to
be the solution of a dynamical system which evolves the two-particle
state in a two-dimensional space defined by the two phase shifts and
bounded by unitarity. The conformal symmetries allow an exact
determination of the forces that determine the S-matrix in this
space. Section <ref> explores the manner in which spacetime features of
scattering manifest themselves in the geometric theory of
scattering. Constraints due to causality are considered, and the
dependence on spatial dimensionality is found by varying between three
and two dimensions.
Finally, Section <ref> summarizes and concludes.
§ S-MATRIX THEORY
§.§ S-matrix theory of contact forces
It is a simple matter to write down the S-matrix without
reference to any underlying field theory by directly imposing general
physical principles and symmetries. Consider two species of
equal-mass, spin-1/2 fermions, which we label as neutrons and
protons (i.e. nucleons), that interact at low energy via forces that are
strictly of finite range. The spins of the two-body system can be
either aligned or anti-aligned. Therefore, near threshold the
S-matrix is dominated by the s-wave and can be written as <cit.>
Ŝ(p)
= 1/2( e^i 2 δ_1(p) + e^i 2 δ_0(p))
1̂ +1/2( e^i 2 δ_1(p) - e^i 2 δ_0(p))
P̂_12
where the SWAP operator is
P̂_12 = 1/2(1̂+ σ̂·σ̂)
and, in the direct-product space of the nucleon spins,
1̂≡Î_2⊗Î_2 , σ̂·σ̂≡∑_α=1^3 σ̂^α⊗σ̂^α ,
where I_2 is the 2× 2 unit matrix, and the σ̂^α are the Pauli matrices. The δ_s are s-wave
phase shifts with s=0 corresponding to the spin-singlet (^1S_0)
channel and s=1 corresponding to the
spin-triplet (^3S_1) channel.
The initial state for a scattering process between two particles is unentangled, and can be written as a tensor product.
When the S-matrix is the identity or SWAP operator it maps tensor product states into tensor product states, and therefore does not generate any spin entanglement.
In general, the S-matrix is an
entangling operator as the total-spin basis which diagonalizes the interaction is
different from the single-particle basis which describes the initial state. Therefore, it is
imperative to treat the S-matrix as the fundamental object of study,
rather than the EFT action or the scattering amplitude, when
addressing issues related to quantum entanglement.
The entangling character of the S-matrix can be captured by its
entanglement power (EP) <cit.>
ℰ(Ŝ)
= 1/6 sin^2(2(δ_1-δ_0))
,
which is a state-independent measure of the entanglement generated by
the S-matrix acting on an initial product state. Note that this
object manifestly couples the two spin states in a manner that is
quite distinct from the Lorentz-invariant, spin decoupled interactions
that are encoded in the EFT action. Indeed, when it vanishes there is
an enhanced SU(4) spin-flavor symmetry (Wigner's supermultiplet
symmetry <cit.>) which
explicitly relates the singlet and triplet scattering channels.
The two angular degrees of freedom, the phase shifts δ_0,1,
are characterized by the effective range
expansion (ERE)
p cot δ_s(p) = -1/a_s + r_s p^2 + v_2;s p^4 + 𝒪(p^6)
with p⃗ (with p=|p⃗ |) chosen to be the center-of-mass (c.o.m.) momentum.
Here, a_0,1, r_0,1 and v_2;0,1 are the scattering length, effective range and first shape parameter for the spin singlet (0) and triplet (1) channels.
The phase shift parameterizes the scattering amplitude
T_s(p) = -4 π/M [ p cot δ_s(p) - i p ]^-1,
which is related to the S-matrix element by
S_s(p) = e^2 i δ_s(p) = 1 - i p M/2π T_s(p) .
Near threshold, the S-matrix can be written as
Ŝ = 1/2( S_1+ S_0 ) 1̂ + 1/2( S_1- S_0 ) P̂_12 ,
where the S-matrix elements are
S_s = e^2iδ_s(p) = (1 - i a_s(p) p)/(1 + i a_s(p) p) ,
and a momentum-dependent scattering length is defined as
a_s(p) ≡ a_s/(1 - a_s r_s p^2 + 𝒪(p^4)) .
The momentum dependent scattering lengths are related to the phase shifts as
ϕ ≡ 2δ_0 = -2tan^-1( a_0(p) p ) , θ ≡ 2δ_1 = -2tan^-1( a_1(p) p ) .
Here the phase shifts have been expressed in terms of the angular variables ϕ∈ [0,2π] and θ∈ [0,2π].
The S-matrix of Eq. (<ref>) is specified by the two
angular variables ϕ(p) and θ(p) that are determined by the
Schrödinger equation once the finite-range quantum mechanical
potential is specified. As these variables are periodic,
the two-dimensional “phase space” that these variables define is a
flat torus manifold, illustrated in Fig. (<ref>). The range
of values that ϕ(p) and θ(p) can take are bounded by
unitarity, with boundary values determined by the four renormalization group (RG) fixed points,
which occur at Ŝ = ±1̂ and ±P̂_12 when the s-wave scattering lengths are either vanishing
or infinite (at unitarity) <cit.>. Generally, in
effective range theory, the S-matrix trajectory on the flat torus
will originate at the trivial fixed point at scattering threshold and
trace out a curve that exits the flat-torus and enters a bulk space at
the first inelastic threshold <cit.>. In what follows,
all inelasticities will be pushed to infinite momentum and S-matrix
trajectories will begin and end at an RG fixed point.
§.§ UV/IR symmetries of the S-matrix
§.§.§ Out-state density matrix
In this section S-matrices with a momentum inversion
symmetry that interchanges the IR and the UV will be studied. The S-matrix is
defined as the operator which evolves the incoming (“in”) state
before scattering into the outgoing (“out”) state after scattering,
i.e. Ŝ|in⟩ = |out⟩. The criterion for the presence of a symmetry will be based on the
transformation properties of the density matrix of the “out” state
ρ = |out⟩⟨out| = Ŝ|in⟩⟨in|Ŝ^† .
The first kind of symmetry transformation that will be considered is
ρ↦ρ which leaves all spin-observables invariant. In
the following sections it will be shown that there are non-trivial
instances of this symmetry where the S-matrix itself is not
invariant. The second kind of symmetry transformation that will be considered
is ρ↦ρ̅ where
ρ̅≡Ŝ^*|in⟩⟨in|Ŝ^T .
This is the density matrix that would be produced if all
phase shifts change sign, or, equivalently, if attractive
interactions are replaced with repulsive interactions of equal magnitude, and
vice-versa.
§.§.§ The scattering-length approximation
The S-matrix takes constant values at the fixed points of
the RG, which implies that at both the trivial and unitary fixed
points, the underlying EFT has non-relativistic conformal invariance
(Schödinger symmetry). The scattering trajectories are bridges
which connect various conformal field theories. In addition to this
conformal invariance of the EFT action at the RG fixed points, there is a UV/IR symmetry
which acts directly on the S-matrix and which is present at finite
values of the scattering lengths.
First, consider the scattering length approximation defined by a_s≠0 and r_s=0.
The momentum inversion map
p ↦ 1/( |a_1 a_0| p ) ,
transforms the phase shifts and density
matrices as shown in Table <ref>.
As this takes p=0 to p=∞ it is a transformation which interchanges the IR and UV.
This momentum inversion
is a conformal invariance that leaves the combination of angular
variables ϕ+θ (a_1 a_0 <0) or ϕ-θ (a_1 a_0 >0)
invariant and implies a reflection symmetry of the S-matrix
trajectory, as illustrated for a specific case in
Fig. (<ref>) (purple curve). When a_1 a_0 > 0, the two
phase shifts conspire to leave the density matrix
unchanged despite neither phase shift separately being invariant.
This demonstrates that, in multi-channel scattering, there
exist symmetries which are not manifest at the level of the scattering
amplitude and appear as reflections of S-matrix trajectories. It
is notable that the EP is invariant with respect to the
transformation of Eq. (<ref>) <cit.>.
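This reflection property is easy to verify numerically. The sketch below (an illustrative check in Python/NumPy within the scattering-length approximation; the values of a_0, a_1 and p are arbitrary inputs in consistent units) confirms that the appropriate combination of phase shifts, and the EP, are unchanged under the momentum inversion:

import numpy as np

def phases(p, a0, a1):
    # phi = 2*delta_0 and theta = 2*delta_1 in the scattering-length approximation
    return -2.0 * np.arctan(a0 * p), -2.0 * np.arctan(a1 * p)

def entanglement_power(phi, theta):
    # EP = (1/6) sin^2(2(delta_1 - delta_0)) = (1/6) sin^2(theta - phi)
    return np.sin(theta - phi) ** 2 / 6.0

def check(a0, a1, p=0.37):
    p_inv = 1.0 / (abs(a0 * a1) * p)          # momentum inversion p -> 1/(|a_1 a_0| p)
    phi, theta = phases(p, a0, a1)
    phi_i, theta_i = phases(p_inv, a0, a1)
    combo = phi - theta if a0 * a1 > 0 else phi + theta
    combo_i = phi_i - theta_i if a0 * a1 > 0 else phi_i + theta_i
    print(f"a0*a1 {'>' if a0 * a1 > 0 else '<'} 0:",
          "combination invariant:", np.isclose(combo, combo_i),
          "| EP invariant:", np.isclose(entanglement_power(phi, theta),
                                        entanglement_power(phi_i, theta_i)))

check(a0=2.3, a1=0.8)      # like-sign scattering lengths: phi - theta preserved
check(a0=2.3, a1=-5.1)     # opposite-sign scattering lengths: phi + theta preserved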
§.§.§ Including effective ranges
The scattering length approximation is a part of
a larger class of UV/IR symmetric S-matrix models which include range effects
that are strictly correlated with the scattering lengths. These
UV/IR symmetries have a distinct character as the range effects
necessarily arise from derivative operators in the EFT.
Consider the general momentum inversion
p ↦ 1/( λ |a_1 a_0| p ) ,
where the real parameter λ >0. One can ask: what is the most
general S-matrix for which this inversion symmetry gives rise to
interesting symmetries? This transformation rules out all
shape-parameter effects and correlates the effective ranges with
the scattering lengths in a specific way. Table <ref>
gives the effective-range parameters for all S-matrix models with
a symmetry under the momentum inversion of Eq. (<ref>).
Note that the first four rows correlate the singlet
effective range with the triplet scattering length and vice-versa.
An interesting and well-known feature of the effective-range expansion
in nucleon-nucleon (NN) scattering is the smallness of the shape parameter
corrections (see, for instance, Ref. <cit.>) as compared
to the range of the interaction, which is given roughly by the Compton
wavelength of the pion. Vanishing shape corrections is a key signature
of an S-matrix with momentum-inversion symmetry. Indeed, as the
NN s-wave effective ranges are positive while the
scattering lengths have opposite sign, the model given in the second
row of Table <ref>, with λ fitted to the data,
provides a description of the low-energy s-wave phase shifts that
improves upon the scattering-length approximation. As will be seen
below, models that arise from zero-range forces with exact
momentum-inversion symmetry and a positive effective range strictly
violate causality. However, relaxing the zero-range condition
can lead to interesting results for nuclear physics, as
is considered in Chapter <ref>.
§ GEOMETRIC SCATTERING THEORY
§.§ Metric on the flat-torus
As the space on which the two phase shifts, θ and ϕ, propagate is a two-dimensional flat space,
the line element should take the form ds^2 ∝ dϕ^2 + dθ^2. This metric
can be obtained formally by parameterizing the S-matrix of Eq. (<ref>) as
Ŝ = x(p) + i y(p) 1̂ + z(p) + i w(p) P̂_12 ,
with
x = cos(ϕ) + cos(θ) ,
y = sin(ϕ) + sin(θ) ,
z = -cos(ϕ) + cos(θ) ,
w = -sin(ϕ) + sin(θ) .
Then, as an embedding in ℝ^4, with line
element
ds^2 = dx^2 + dy^2 + dz^2 + dw^2 ,
one finds the flat two-dimensional Euclidean line element
ds^2 = ( dϕ^2 + dθ^2) .
With ϕ and θ periodic, the corresponding metric describes the flat torus 𝕋^2∼ S^1 × S^1↪ℝ^4, where S^1
is the circle. From this line element, one can read off the flat-torus metric tensor g_ab, with a,b=1,2.
For a basis independent construction of this geometry see App. <ref>.
§.§ Geometric action
The action for a general parameterization of a curve on a space with
coordinates 𝒳^1=ϕ and 𝒳^2=θ and metric tensor g_ab
can be expressed as <cit.>
∫ L(𝒳,𝒳̇)dσ = ∫(N^-2g_ab𝒳̇^a 𝒳̇^b - 𝕍(𝒳))Ndσ ,
where σ parameterizes the curve (affine or inaffine), L is the Lagrangian,
𝒳̇≡d𝒳/dσ, and
𝕍(𝒳) is an external geometric potential which is assumed
to be a function of 𝒳 only. The corresponding
Euler-Lagrange equations give the trajectory equations
𝒳̈^a + _gΓ^a_ bc𝒳̇^b 𝒳̇^c = κ(σ) 𝒳̇^a
- N^2 g^ab∂_b𝕍(𝒳) ,
where _gΓ^a_ bc are the Christoffel symbols for the metric g_ab, and
κ(σ) ≡ Ṅ/N = d/dσlndτ/dσ .
Here κ is the inaffinity <cit.>, which vanishes when σ=τ with τ an affine parameter.
An interesting feature of the geometric construction of scattering is that the relative momentum
that describes the motion of the center-of-mass is a non-affine parameter. For a constant geometric potential, the trajectory equations reduce to the geodesic equations
which describe straight-line trajectories on the flat-torus. Any curvature indicates the presence of a non-constant geometric potential. Now, a priori, if a solution for ϕ and θ is specified,
there are two equations of motion for three unknowns, the inaffinity and the two force components in the ϕ and θ directions. However, the presence of UV/IR symmetries can reduce the number of
unknowns to two and thus allows an explicit construction of the geometric potential <cit.>.
§.§ Solvable models
In the scattering length approximation, the UV/IR symmetry of Eq. (<ref>) determines the geometric potential exactly. It is given by <cit.>
𝕍(ϕ,θ) = |a_0 a_1|/(|a_0|+|a_1| )^2 c_1^2tan^2((ϕ+ϵ θ)/2) ,
where ϵ =-1 for a_1 a_0 >0 and ϵ =+1 for a_1 a_0 <0, and c_1 is an
integration constant. The inaffinity associated with a trajectory parameterized by the c.o.m momentum is constructed from
N = (c_1/p)(sinϕ -ϵsinθ) .
The S-matrix trajectory is independent of the parameterization and can be simply described
—with vanishing inaffinity and choice N=c_1=1— by an affine parameter τ
via the simple Lagrangian
L = 1/2( ϕ̇^2 + θ̇^2) - 𝕍(ϕ,θ) ,
where the dots denote differentiation with respect to τ. Of course τ has no interpretation as a momentum or energy in a scattering
process; such a parameter is not affine.
The conformal S-matrix models with the UV/IR symmetry of Eq. (<ref>), and λ-dependent effective ranges, also lead to solvable geometric
potentials. The general solution is cumbersome, however in the special case where the effective ranges
are correlated to the scattering lengths as in the last two rows of Table <ref> —and λ=1/4— the geometric potential is identical to Eq. (<ref>) except for an overall factor of 1/2 and a rescaled argument
(ϕ+ϵ θ)/2 ⟶ (ϕ+ϵ θ)/4 .
It will be seen in section <ref> why this case is special.
§ SPACETIME CONSTRAINTS
§.§ Galilean invariance
The parameter p, which labels the c.o.m. momentum of the scattering process,
is related to the total energy, E, in the system by p =
√(M E). If the incoming particle momenta are p⃗_1 and
p⃗_2 then in the c.o.m. frame, p⃗_1 = - p⃗_2
≡p⃗ and p = | p⃗ |. Other Galilean frames can be reached from the c.o.m. frame
via a combined rotation R and boost by a velocity v⃗:
p⃗ ⟶ R p⃗ + M v⃗
which implies the transformation
p ⟶ | p⃗ | √(1+ x) ,
with x≡(M v⃗ )^2/p⃗^2. Varying x between 0 and 1 corresponds to transforming
between the c.o.m. and laboratory frames. Hence, Galilean invariance allows arbitrary reparameterizations of the S-matrix
of the form p →Ω p with 1≤Ω≤∞ and Ω interpolating between the c.o.m at rest
and boosted to infinite momentum. As this is just a rescaling of p, changing to another inertial frame does not affect the inaffinity, Eq. (<ref>).
§.§ Causality bounds on zero-range scattering
§.§.§ Wigner bounds
In non-relativistic scattering with finite-range forces, causality places bounds on
physical scattering parameters by way of Wigner
bounds <cit.>.
Consider a two-body s-wave wave function
both free and in the presence of an interaction potential of range
R. The difference in phase between the scattered and free spherical wave is
defined to be twice the phase shift. The most negative phase shift is
obtained when the scattered wave does not penetrate the potential
and reflects off the boundary at r = R. This provides a
lower bound on the phase shift δ(p) ≥ - R p, see
Fig. (<ref>). Now consider a plane wave at an
infinitesimally larger momentum, p̅ with δ(p̅) ≥ -
Rp̅. The difference between δ(p̅) and δ(p)
provides a semi-classical bound on the derivative of the phase shift with respect to
momentum
d δ/dp≥ - R .
By time evolving the plane waves, the above becomes a bound on the time delay between the incident and scattered wave, Δ t ≥ -M R/p. It is in this sense that causality constrains non-relativistic scattering.
A more careful derivation, which includes quantum mechanical effects,
induces a second term on the right hand side of Eq. (<ref>)
and leads to the bound <cit.>:
d δ/dp≥ - R + sin(2δ+2 p R)/(2p) .
Evaluated at threshold this becomes a constraint on the effective range parameter:
r ≤ 2( R - R^2/a + R^3/(3 a^2) ) .
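The bound is easy to evaluate numerically. The sketch below assumes the threshold form quoted above and uses scattering lengths of the size of the physical NN s-wave values purely for illustration; it shows how the allowed effective range shrinks to zero as the interaction range R is taken to zero, which is the origin of the r_s ≤ 0 constraint invoked below for zero-range forces.

import numpy as np

def r_max(R, a):
    # Threshold Wigner bound on the effective range, as quoted above:
    # r <= 2(R - R^2/a + R^3/(3 a^2)) for an interaction of range R.
    return 2.0*(R - R**2/a + R**3/(3.0*a**2))

a0, a1 = -23.7, 5.4                       # illustrative values (fm)
for R in (0.0, 0.5, 1.0, 1.4):
    print(R, r_max(R, a0), r_max(R, a1))
# As R -> 0 the bound forces r <= 0: a positive effective range is
# incompatible with strictly zero-range forces.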
In the Wilsonian EFT paradigm, an S-matrix element derived from EFT
is dependent on a momentum cutoff, Λ∼ 1/ R, which is
kept finite and varied to ensure cutoff-independence to a given order
in the perturbative EFT expansion. What occurs above this scale is
irrelevant to the infrared physics that is encoded by the
S-matrix and compared to experiment. An explicit calculation of the
Wigner bound in the EFT of contact operators with cutoff
regularization can be found in Ref. <cit.>. As the
bound depends explicitly on the EFT cutoff, its relevance in
physical scenarios is somewhat ambiguous as the EFT can violate
causality bounds as long as the violations occur above the cutoff,
and the bound itself weakens as higher-order corrections in
the EFT expansion are included <cit.>.
The S-matrix models with momentum-inversion symmetry can originate from
zero-range or finite-range forces. Here it will be assumed that the
underlying theory has strictly zero-range forces. This then implies
strong causality bounds whose geometric interpretation can be studied.
Explicitly, with zero-range forces, causality requires r_s ≤ 0 and
the tangent vectors to S-matrix trajectories satisfy
ϕ̇(p) ≥sinϕ(p)/p , θ̇(p) ≥sinθ(p)/p ,
where dot represents differentiation with respect to momenta. The
allowed tangent vectors clearly depend on the quadrant of the flat
torus in which they lie. In addition, by enforcing continuity of the
tangent vectors at the boundary of each quadrant, it is found that an
S-matrix trajectory can only exit a quadrant through the upper or
right edge. These various geometric constraints are illustrated in
Fig. (<ref>) using the examples of Fig. (<ref>).
It is notable that the large momentum behavior of any S-matrix curve
which ends at the trivial fixed point must be in the top-right
quadrant, which is also the only place where a trajectory can have
loops. Since the Wigner bound segregates by quadrant and indicates a
direction of preferred S-matrix evolution, causality introduces an
asymmetry which breaks the homogeneity and discrete isotropy of a
generic flat torus.
Applying the Wigner bound to the symmetric S-matrix models in
Table <ref> restricts the allowed signs of the
scattering lengths as shown in Table <ref>.
§.§.§ Causal singularities of the S-matrix
In addition to Wigner bounds, causality in non-relativistic scattering is manifest in various constraints on the analytic structure
of the S-matrix in the complex-momentum plane <cit.>. The simplicity of the S-matrix
models with momentum-inversion symmetry reveal these constraints and their relation with the Wigner bound in straightforward fashion.
The s-wave S-matrix elements with momentum inversion symmetry are ratios of polynomials of second degree and can thus be expressed as
S_s ≡ (p+p_s^(1))(p+p_s^(2))/[(p-p_s^(1))(p-p_s^(2))] ,
where
p_s^(1,2) = (1/r_s)(i±√(2r_s/a_s-1)) .
Consider the evolution of the singularities in the complex-p plane as λ is varied <cit.>
for the causal model given in the last row of Table <ref>.
This model, with both scattering lengths negative, leaves ϕ-θ invariant, and
has poles at
p_s^(1,2) = -(1/(2|a_s|λ))(i±√(4λ-1)) .
There are three distinct cases, illustrated in Fig. (<ref>).
λ > 1/4: there are two resonance poles in the lower-half complex plane on
opposite sides of the imaginary axis. Dropping the partial-wave subscript on the scattering length,
p^(1) = -p_R-i p_I , p^(2)= p_R-i p_I ,
with
p_R = √(4λ-1)/(2|a|λ) , p_I = 1/(2|a|λ) ;
λ = 1/4: there is a double pole corresponding to a virtual state on the negative imaginary axis at
p^(1) = p^(2)= -i/(2|a|λ) ;
λ < 1/4: there are two poles corresponding to virtual states on the negative imaginary axis at
p^(1) = -i p_- , p^(2)= -i p_+ ,
with
p_± = (1/(2|a|λ))(1±√(1-4λ)) .
It is clear that the Wigner bound implies that the poles of the S-matrix elements lie in the lower-half of the complex-momentum plane as
one would expect of states that decay with time. Note that the special case of the double pole (and vanishing square root)
is in correspondence with the special case geometric potential which takes the simple form, Eq. (<ref>).
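A short numerical sketch makes the pole structure explicit: the poles are the roots of r p^2/2 - i p - 1/a = 0, and for the causal model above they can be compared directly with the closed-form expression. The sketch assumes the range is correlated with the scattering length in the same channel as r_s = 2λ a_s (with a_s < 0), which reproduces the quoted pole positions; the values of a and λ are illustrative.

import numpy as np

a = -5.0                                       # a_s < 0, arbitrary units
for lam in (0.6, 0.25, 0.1):
    r = 2.0*lam*a
    poles = np.roots([0.5*r, -1.0j, -1.0/a])   # roots of r p^2/2 - i p - 1/a = 0
    closed = np.array([-(1.0j + s*np.sqrt(4*lam - 1 + 0j))/(2*abs(a)*lam) for s in (+1, -1)])
    print(lam, np.sort_complex(poles), np.sort_complex(closed))
    assert all(p.imag < 0 for p in poles)      # always in the lower half plane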
§.§ Spatial dependence of scattering
The S-matrix is clearly aware of the number of spatial
dimensions that it is acting in and one therefore expects that this is
reflected in the geometric theory of scattering via a modified
geometric potential. It is straightforward to carry out the analysis
that was done above in two spatial dimensions. In two dimensions, low energy
scattering arising from short-range forces is enhanced in the IR due to an apparent scaling symmetry of the Schrödinger equation. One consequence of this is that there is only a single fixed point S-matrix, the identity, which is reached at both zero and
infinite scattering length. Spin and particle statistics are also distinct in
two dimensions, however, for our purposes, all that will be required is a
scattering process with two independent low-energy channels. One way this could be achieved is by placing the three-dimensional scattering system in a strongly
anisotropic harmonic potential which effectively confines one of the
spatial dimensions <cit.>. This allows
the two-fermion system to be continuously deformed from three to two
dimensions and provides a means of studying the dependence of
the geometric theory, constructed above, on spatial dimensionality.
A qualitatively equivalent and simpler way of achieving this reduction of
dimensionality is by periodically identifying and compactifying one of
the spatial dimensions <cit.>, say in the
z-direction.
Regardless of how the two dimensional system is obtained, the ERE is <cit.>
cotδ_s(p) = 1/πlog(a^2_s p^2) + σ_2, s p^2 + 𝒪(p^4)
where the a_s and σ_2, s are the two-dimensional scattering lengths and areas, respectively. The full S-matrix can be constructed from the phase shifts as in Eq. (<ref>). Retaining just the first term in the ERE gives rise to the scattering length approximation which, in terms of periodic variables on the flat torus, is
ϕ = 2cot^-1(1/πlog(a^2_0 p^2)) , θ = 2cot^-1(1/πlog(a_1^2 p^2)) ,
where the higher order effective area and shape parameters have
been set to zero[This can be obtained from the
compactification of a spatial dimension if the d=3 effective range
parameters are functions of the compactification
radius <cit.>.]. Note that there is an IR enhancement in two dimensions, as made evident by the logarithmic dependence on the c.o.m. momenta, and that there exists a bound state for either sign of coupling strength <cit.>.
The momentum inversion transformation takes a similar form as in three dimensions,
p→ (a_1 a_0 p)^-1, and the
phase shifts transform as
ϕ(p) ↦ -θ(p) , θ(p) ↦ -ϕ(p) ,
which leaves the density matrix invariant.
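The following sketch checks this d=2 momentum-inversion property numerically in the scattering length approximation, using cot δ_s = (1/π)log(a_s^2 p^2) together with one convenient branch choice for the phase shifts (any fixed branch works for the check); the values of a_0, a_1 and p are arbitrary.

import numpy as np

def phase(a, p):
    # phi = 2*delta with cot(delta) = (1/pi) log(a^2 p^2), delta taken in (0, pi)
    return 2.0*(np.pi/2 - np.arctan(np.log((a*p)**2)/np.pi))

a0, a1, p = 0.8, 2.5, 3.0
phi, theta = phase(a0, p), phase(a1, p)
p_inv = 1.0/(a1*a0*p)
# phi(p) -> -theta(p) and theta(p) -> -phi(p), modulo 2*pi:
print(np.allclose(np.exp(1j*(phase(a0, p_inv) + theta)), 1.0))
print(np.allclose(np.exp(1j*(phase(a1, p_inv) + phi)),   1.0))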
In two spatial dimensions, the momentum-inversion symmetry implies that all effective area and
shape parameters must vanish[In d=2, causality bounds the effective area
parameter <cit.>
σ_2, s ≤ R^2/π{[log( R/(2a_s)) + γ - 1/2]^2 + 1/4} ,
where γ is the Euler-Mascheroni constant. Therefore, momentum-inversion implies that the Wigner bound is saturated with σ_2, s = 0.].
The geometric potential on the flat torus which reproduces the phase shifts of Eq. (<ref>) is found to be
𝕍(ϕ,θ) = -π^2/(4(log(a_0/a_1))^2) c_1^2tan^2((ϕ+θ)/2+π/2) .
Notice that the harmonic dependence is the same as in three spatial
dimensions, Eq. (<ref>), except for an additional phase of
π/2 which causes the geometric potential to diverge when both phase shifts
sum to zero. This can only occur at threshold, and can be attributed
to the infinite force needed to reproduce the singular behavior of the
phase shift derivatives at p=0. Another property of the
geometric potential is the divergence of the prefactor when a_0 = a_1. At the end of
section (<ref>) it was pointed out that, for UV/IR symmetric
trajectories, there are two trajectory equations for two unknowns, the
inaffinity and the geometric potential. However, when the scattering
lengths are equal, and ϕ = θ, the two equations are no
longer linearly independent. In this case the trajectory is a geodesic
—a straight line— on the flat torus, and no geometric potential is needed.
§ CONCLUSION
The S-matrix is a unitary operator that
evolves a state vector from the boundary of spacetime, into the
spacetime bulk to experience interaction, and then back to the
spacetime boundary. In this view of scattering, all spacetime
features like causality and spatial dimensionality are bulk
properties, and, as the S-matrix is purely a function of kinematical
variables like momentum and energy, the bulk properties must be
imprinted in some way on these variables. In a general scattering
process, the S-matrix evolves an initial unentangled product state
into an entangled state which in general experiences non-local correlations.
In order to avoid the assumption of locality, which is intrinsic to
the EFT paradigm, a geometric formulation
of scattering for two species of spin-1/2 fermions interacting
at low-energies via finite-range interactions has been developed.
In this geometric theory
the S-matrix emerges, without direct reference to spacetime, as a trajectory
in an abstract space that is defined by unitarity.
These S-matrix trajectories
are generated by an entangling harmonic force whose form is
—in certain special cases— determined exactly by a UV/IR symmetry.
It should be noted that the generation of the S-matrix from an entangling force is strikingly similar to recent proposals of the emergence of spacetime from entanglement <cit.>.
The next chapter will demonstrate how the UV/IR symmetries of the S-matrix manifest in the EFT as reflection symmetries of the RG flow of the coupling constants.
§ BASIS INDEPENDENT GEOMETRIC FORMULATION
The analysis of Section <ref> relies on
choosing a specific isotropic coordinate system to study the geometry
of the S-matrix. As the S-matrix is an operator in the product
Hilbert space of nucleon spins, it is interesting to consider distance
measures in a basis-independent manner. For this purpose it is
convenient to make use of the Hilbert-Schmidt (HS) distance. The HS
distance measure is a natural extension of the Frobenius inner
product, ⟨Â,B̂⟩ = Tr[Â^†B̂]. It can be defined as <cit.>
D(Â,B̂)^2 ≡
d_n Tr[ (Â-B̂)(Â-B̂)^† ]
with d_n an arbitrary normalization constant that will be set to 1/2.
The HS distance is independent of basis, positive semi-definite and
zero if and only if  = B̂. If the S-matrix is
parameterized by phase shifts, say ϕ and θ, then
the HS distance induces a metric on the space of S-matrices. This
allows for the direct study of the geometry of the S-matrix. The HS
distance between two S-matrices with distinct phase shifts, Ŝ(ϕ, θ) and Ŝ'̂(ϕ',θ'), is
D(Ŝ, Ŝ'̂)^2 =
1/2 Tr[ (Ŝ - Ŝ'̂)(Ŝ - Ŝ'̂)^† ] = 2 (sin^2((ϕ-ϕ')/2) + 3sin^2((θ-θ')/2) ) .
The metric is obtained by looking at the infinitesimal differences, dϕ = ϕ' - ϕ and dθ = θ' - θ and is found to be,
ds^2 = 1/2(3 d θ^2 + dϕ^2) .
The unitary S-matrix is determined by the two degrees of freedom, ϕ and θ,
and therefore, locally, the S-matrix lives on the space defined by this two-dimensional Euclidean metric
that can be rescaled to remove the anisotropic spin weighting factor of the spin-triplet phase shift θ.
The HS distance serves to obtain an operator definition and an alternate understanding of
the EP. Recall that the S-matrix is non-entangling when either ϕ = θ or ϕ = θ±π [When ϕ = θ±π the S-matrix acts as a
swap gate on the incoming nucleon-nucleon state up to an overall
phase. Likewise, when ϕ = θ±π/2, the
S-matrix acts as a root-swap gate on the incoming nucleon-nucleon state
up to an overall phase. ]. Therefore, the non-entangling
S-matrices form a codimension-one subspace within the space of all
possible S-matrices. The EP of a given S-matrix, Ŝ(ϕ,θ), is found to be,
ℰ(Ŝ) = D(Ŝ(ϕ,θ), Ŝ(θ,θ))^2 D(Ŝ(ϕ,θ), Ŝ(θ - π,θ))^2 = N_p sin^2(ϕ - θ) ,
where the freedom in defining the HS norm has been used to set the normalization to N_p.
As both Ŝ(θ,θ) and Ŝ(θ -
π,θ) are non-entangling, the EP can be interpreted as a
measure of the distance from a given S-matrix to the two
non-entangling subspaces. Using the HS distance
highlights the fact that the EP of an operator is a
state-independent measure of entanglement.
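The statements above are simple to verify numerically. The sketch below builds the 4×4 S-matrix from the two phases, assuming the standard parameterization in which the S-matrix acts as e^{iϕ} on the spin-singlet channel and as e^{iθ} on the spin-triplet channel, evaluates the HS distance with d_n = 1/2, and checks both the distance formula and the product form of the EP (for which N_p = 1 with this normalization).

import numpy as np

I4 = np.eye(4, dtype=complex)
SWAP = np.array([[1,0,0,0],[0,0,1,0],[0,1,0,0],[0,0,0,1]], dtype=complex)  # P_12 on two spin-1/2's

def S(phi, theta):
    return 0.5*(np.exp(1j*phi) + np.exp(1j*theta))*I4 + 0.5*(np.exp(1j*theta) - np.exp(1j*phi))*SWAP

def D2(A, B):
    M = A - B
    return 0.5*np.trace(M @ M.conj().T).real       # HS distance squared, d_n = 1/2

phi, theta, phip, thetap = 0.7, 2.1, -1.3, 0.4
lhs = D2(S(phi, theta), S(phip, thetap))
rhs = 2*(np.sin(0.5*(phi - phip))**2 + 3*np.sin(0.5*(theta - thetap))**2)
print(np.isclose(lhs, rhs))                        # HS distance formula

ep = D2(S(phi, theta), S(theta, theta)) * D2(S(phi, theta), S(theta - np.pi, theta))
print(np.isclose(ep, np.sin(phi - theta)**2))      # entanglement power, N_p = 1 here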
CHAPTER: UV/IR SYMMETRIES OF THE S-MATRIX AND RG FLOW
This chapter is associated with Ref. <cit.>:
“UV/IR Symmetries of the S-matrix and RG flow" by Silas R. Beane and Roland C. Farrell.
§ INTRODUCTION
Non-relativistic s-wave scattering with finite-range forces
exhibits two fixed points of the RG: the trivial fixed point
corresponding to no interaction, and the unitary fixed point where the
interaction strength takes the maximal value consistent with
unitarity <cit.> (for a review, see
Ref. <cit.>).
As shown in the previous chapter, the
S-matrix also has interesting properties with respect to UV/IR
transformations that invert the momentum.
In the scattering length approximation, where the effective range and all higher-order shape parameters vanish, the UV/IR transformation maps the trivial RG fixed point into
the unitary RG fixed point and vice versa. As a result, the UV/IR
transformation does not act simply on the scattering amplitude which
vanishes at the trivial fixed point. Instead, it is the S-matrix,
which accounts for the trivial fixed point via the unit operator, that
manifests the UV/IR symmetry.
These novel symmetries provide a new perspective on EFT descriptions of the
nucleon-nucleon (NN) interaction and facilitate the development of new
EFTs which enhance the convergence[For recent work which aims
to improve the convergence of NN EFT see
Refs. <cit.>.]
of the description at very low energies (see Chapter <ref>).
This chapter shows how the UV/IR properties of the
S-matrix are reflected in the (scheme dependent) beta-functions which characterize the
RG scale dependence of the EFT couplings. This will be explored for the same system as in the previous chapter: non-relativistic fermions with an s-wave interaction in both two and three spatial dimensions.
This chapter is organized as follows.
Section <ref> introduces the S-matrix and its UV/IR
symmetries (whose details are treated in appendix <ref>). In section <ref>, the EFT which matches to
the S-matrix in the scattering length approximation is reviewed and the
UV/IR symmetry is shown to be present in the RG flow of the
coefficients of momentum-independent local operators.
Constraints on higher-order EFT operators which follow from considering
UV/IR symmetry breaking in the S-matrix are considered in section <ref>.
Section <ref> summarizes and concludes.
§ S-MATRIX THEORY AND UV/IR SYMMETRY
The fixed points of the RG are determined by the flow
of coupling constants with a change of scale in the EFT which gives rise to Eq. (<ref>).
In non-relativistic scattering, the S-matrix takes special
constant values at these fixed points <cit.>. For a given
channel, the fixed points of the RG occur when the phase shifts,
δ_0 and δ_1, vanish (trivial fixed point) or are at π/2
(unitary fixed point); i.e. when S_s=± 1. Therefore the fixed
points of the full S-matrix occur when the phase shifts both vanish,
δ_1=δ_0=0, both are at unitarity, δ_1=δ_0=π/2, or
when δ_1=0, δ_0=π/2 or δ_1=π/2, δ_0=0. The
S-matrices at these four fixed points are ±1̂ and
±P̂_12. In the ERE, the fixed
points of the RG are reached at a_1= a_0=0, |a_1|=| a_0|=∞,
and at a_1=0 , |a_0|=∞, |a_1|=∞ , a_0=0, with
all effective range and shape parameters taken to be vanishing (the
scattering length approximation). If all inelastic thresholds are
absent, and p is defined on the interval [0,∞), then all four
of the RG fixed points are accessible (in a limiting sense) via
scattering. In the scattering length approximation this can be seen
to be a consequence of the invariance of the S-matrix with respect
to the scaling transformation
p↦ e^β p , a_s↦ e^-β a_s ,
with β an arbitrary parameter. Operationally, keeping a fixed and scaling p with β
positive (negative) accesses the S-matrix of large (small)
scattering lengths. Hence, with a_0 and a_1 finite, the S-matrix
is a trajectory from 1̂ to -1̂. With a_0
finite (at unitarity) and a_1 at unitarity (finite), the trajectory
originates at -P̂_12 (P̂_12) and again ends at
-1̂.
In addition to the scale invariance of Eq. (<ref>), the individual S-matrix
elements, S_s, transform simply with respect to the momentum-inversion transformation
p↦ 1/(a_s^2 p) .
As this transformation maps threshold to asymptotic infinity and vice-versa, it is a UV/IR transformation. (This transformation and its associated Ward identity are considered in detail in appendix <ref>.) As a transformation on phase shifts, one finds that
Eq. (<ref>) implies
δ_s(p) ↦ -δ_s(p) ±π/2 , S_s → -S^*_s ,
where the sign of the shift by π/2 is determined by the sign of
the scattering length. Therefore, considering scattering near
threshold, this momentum inversion transformation interchanges the trivial and unitary fixed points of the RG. It will
be seen below how this transformation is manifest in the
running coupling constants of the EFT.
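In the scattering length approximation the S-matrix element can be written as S_s = (1 - i a_s p)/(1 + i a_s p), and the transformation above then admits a one-line numerical check, as in the sketch below (the scattering lengths are illustrative).

import numpy as np

def S(a, p):
    # scattering length approximation: p cot(delta) = -1/a
    return (1 - 1j*a*p)/(1 + 1j*a*p)

for a in (-23.7, 5.4):                 # values of the order of the NN scattering lengths
    for p in (0.01, 0.3, 2.0):
        assert np.isclose(S(a, 1.0/(a**2*p)), -np.conj(S(a, p)))
print("S(1/(a^2 p)) = -S*(p) holds")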
§ EFFECTIVE FIELD THEORY AND RG FLOW
§.§ EFT action and potential
It is interesting to ask whether the UV/IR properties of the
S-matrix are reflected in the EFT. The S-matrix in the
scattering-length approximation can be derived from an EFT with
highly-singular, momentum-independent, contact interactions. These
give rise to the quantum mechanical potential that enters the
Lippmann-Schwinger equation, which in turn generates the S-matrix.
The S-matrix models with exact UV/IR symmetry are,
by definition, UV-complete. Therefore one might expect the UV/IR
symmetry to be reflected in the flow of the EFT potential between
fixed points of the RG.
The EFT which describes low-energy s-wave scattering of nucleons is
constrained by spin, isospin and Galilean invariance (and, for the
case we consider, parity and time-reversal invariance). The
leading-order (LO) interactions in the Lagrangian density
are <cit.>
L_ LO
=
-1/2 C_S (N^† N)^2
-1/2 C_T (N^†σN)·(N^†σN) ,
where the field N represents both spin states of the proton and neutron fields. These interactions
can be re-expressed as contact interactions in the and
channels with couplings C_0 = ( C_S-3 C_T) and C_1 = (C_S+C_T)
respectively, where the two couplings are fit to reproduce the
and scattering lengths. The quantum-mechanical potential is
scheme dependent and can be read off from the Lagrangian density <cit.>
V(μ)_σ = 1/2( C_1(μ) + C_0(μ) ) 1̂ +
1/2( C_1(μ) - C_0(μ) ) P̂_12 ,
where the S-matrix basis has been chosen. In what follows
the flow of the potential with the RG scale, μ, will be considered
in three and two spatial dimensions (d=3,2).
§.§ RG flow in d=3
Solving the Lippmann-Schwinger equation with the potential
of Eq. (<ref>), or alternatively, summing to all orders
the loop diagrams with insertions of the operators in
Eq. (<ref>), leads to the d=3 NN scattering amplitude.
In dimensional regularization (dim reg) with the power-divergence subtraction (PDS)
scheme <cit.> the amplitude is
i 𝒜 = -i V(μ)/[1 + M V(μ) ( μ + i p)/(4π)] ,
where M is the nucleon mass and V(μ) is the projection of
V(μ)_σ onto a particular scattering channel. The PDS scheme
offers a clean way of accounting for the linear
divergences which appear in loops. The PDS couplings also exhibit the trivial and unitary RG fixed points
and, for μ∼ p, scale so as to justify their non-perturbative treatment (power counting is manifest).
The relation between the μ-dependent coefficients
and the phase shifts in the scattering-length
approximation follows from matching the scattering amplitude to the ERE and is
p cotδ_s = - ( 4 π/(M C_s) + μ) = -1/a_s .
Therefore, the running couplings in the PDS scheme are
C_s(μ) = (4 π/M) 1/(1/a_s-μ) .
There is a fixed point at C_s=0, corresponding to free particles (a_s=0),
and a fixed point at C_s=C_⋆ corresponding to a divergent scattering
length (unitarity). It is convenient to rescale the couplings to
Ĉ_s ≡ C_s/C_⋆. The beta-functions for the rescaled couplings
are then
β̂(Ĉ_s) = μd/dμĈ_s(μ) = -Ĉ_s(μ)(Ĉ_s(μ)-1) ,
which has fixed points at Ĉ_s=0 and 1, as shown in Fig. (<ref>). The coupling is near the
trivial fixed point for μ < 1/|a_s|, and near the non-trivial fixed point for μ
> 1/ |a_s|. The four fixed points in the NN system are at Ĉ_1= Ĉ_0=0, Ĉ_1=Ĉ_0=1,
and at Ĉ_1=0 , Ĉ_0=1, and Ĉ_1=1 , Ĉ_0=0.
In the space of rescaled couplings, these four points,
(0,0), (1,1), (0,1) and (1,0) furnish a representation of the Klein
group <cit.>.
Now recall that the momentum inversion, p →1 /(a_s^2 p), has the
effect of interchanging the trivial and unitary fixed points of
the S-matrix elements in the scattering length approximation.
In the EFT, under an inversion of the PDS RG scale
μ↦ 1/(a_s^2 μ) ,
the couplings transform as
Ĉ_s(μ) ↦ 1 -Ĉ_s(μ) .
This implies that the beta-function evaluated at two scales related by an inversion are
equal, i.e. β̂(Ĉ_s)|_μ̅ = β̂(Ĉ_s)|_1/(a_s^2 μ̅), for any μ̅. This scale-inversion
transformation maps the two RG fixed points to one another and the UV/IR
transformation property of the phase shifts is reflected in the
μ dependence of the coupling. In addition, the beta-function is reflection
symmetric about the fixed point of the inversion transformation, Ĉ_s(μ^∘) = 1/2, which occurs at
μ^∘=|a_s|^-1, as shown in Fig. (<ref>).
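These properties are simple to check numerically. Writing the rescaled coupling as Ĉ_s(μ) = a_s μ/(a_s μ - 1), which follows from the PDS expressions above, the sketch below verifies the inversion property and the equality of the beta-function at scales related by μ → 1/(a_s^2 μ); the scattering length is illustrative.

import numpy as np

Chat = lambda a, mu: a*mu/(a*mu - 1.0)      # rescaled PDS coupling C_s/C_star
beta = lambda c: -c*(c - 1.0)

a = -23.7                                   # e.g. a 1S0-sized scattering length
for mu in (0.001, 0.05, 1.0):
    c1, c2 = Chat(a, mu), Chat(a, 1.0/(a**2*mu))
    assert np.isclose(c2, 1.0 - c1)         # Chat(1/(a^2 mu)) = 1 - Chat(mu)
    assert np.isclose(beta(c1), beta(c2))   # beta equal at scales related by inversion
print("fixed point of the inversion: Chat(1/|a|) =", Chat(a, 1/abs(a)))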
The analysis presented above holds for other renormalization schemes as well, provided that they preserve the UV/IR symmetry; i.e. reproduce the scattering length approximation.
For example, consider regulating with a hard-cutoff in momentum space.
The scattering amplitude is
A = - C_s(Λ)/{1 + M C_s(Λ) [Λ + i p tan^-1(i Λ/p) ]/(2π^2)}
where the cutoff-dependent coupling is defined as
C_s(Λ) = (4π/M) 1/(1/a_s - 2Λ/π) .
For finite Λ>p, the expanded tan^-1 (i Λ / p) term generates cutoff-dependent contributions to the amplitude to all orders in the effective range expansion <cit.>. These higher-order
terms break the UV/IR symmetry and therefore, to preserve the symmetry, higher-dimensional operators must be added to the EFT action to cancel these symmetry-breaking effects. For instance, expanding Eq. <ref> gives
𝒜 = -(4 π/M) 1/(1/a_s + i p) (1 + 2/(πΛ) 1/(1/a_s + i p) p^2 + 𝒪(Λ^-2) )
which evidently requires a momentum dependent counterterm —a shift in the C_2 s operator that appears at NLO in the EFT expansion— that scales like 𝒪(Λ^-1) <cit.>. In addition, insertions of this counterterm in perturbation theory will generate new terms in the amplitude that scale like positive powers of the cutoff, and whose removal will in turn require even higher dimensional counterterms <cit.>.
This procedure of choosing counterterms to reproduce the scattering length approximation is nothing new; the identical procedure must be carried out in order to renormalize the cutoff EFT in a manner that preserves the Schrödinger symmetry Ward identities <cit.>, in the unitary, | a_s |→∞, limit.
A key observation is that the cutoff dependence of C_s (Λ) does not change as more counterterms are added, and the RG flow of this coupling in the cutoff scheme is the same as in PDS, Eq. (<ref>), but with μ↦2/πΛ. Therefore, the RG scale-inversion symmetry of the leading-order beta function
is not an artifact of a particular scheme, but rather a manifestation of
a physical property of the system which is reflected in the RG evolution
of the EFT couplings. Note that after all the higher-dimensional symmetry-restoring operators have been added, the theory is valid for all momenta and varying the arbitrary cutoff sets the scale for the physical process.
Returning to two-channel s-wave scattering, it is convenient to define the components of the re-scaled potential as
u(μ) = 1/2( Ĉ_1(μ) + Ĉ_0(μ) ) , v(μ) = 1/2( Ĉ_1(μ) - Ĉ_0(μ) ) .
In the u-v basis the RG fixed points of the rescaled potential are at (0,0), (1,0), (1/2,1/2) and (1/2,-1/2),
and the components of the potential flow with the RG according to the algebraic curve
v(v-w̅) = u( u-1) , w̅ ≡ (a_1 + a_0)/(a_1 - a_0) .
The curves for all combinations of signs of scattering lengths are plotted in Fig. (<ref>).
At the end of section <ref> it was pointed out that when a_1 a_0 > 0 the momentum inversion symmetry is an exact symmetry of the
density matrix. In the figure this corresponds to the green and cyan curves which have a reflection symmetry about
the line u = 1/2. When a_1 a_0 < 0 the momentum inversion transformation maps the density matrix into one obtained from flipping the signs of the
scattering lengths. In the EFT this is seen through the
reflection about the line u = 1/2 mapping the trajectories with a_0 > 0 and a_1 < 0 (brown curve) and a_0 < 0 and a_1 > 0 (red curve) into each other.
In either case the reflection is generated by the scale-inversion
μ↦ 1/(| a_1 a_0 |μ) .
Therefore, in the EFT, the symmetry properties of the density matrix are encoded in
a geometric, reflection symmetry of the RG flow of the coupling constants. It
is curious that when the system is unbound in both s-wave channels, the RG flow is confined to the rhombus
formed by the four RG fixed points in much the same way that the S-matrix is confined via unitarity to the
flat torus defined by the two s-wave phase shifts <cit.>.
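The algebraic curve and the reflection symmetry can be verified directly from the PDS couplings, as in the following sketch (scattering lengths chosen with a_1 a_0 > 0 for illustration).

import numpy as np

Chat = lambda a, mu: a*mu/(a*mu - 1.0)

a0, a1 = 2.0, 5.4
wbar = (a1 + a0)/(a1 - a0)

def uv(mu):
    C1, C0 = Chat(a1, mu), Chat(a0, mu)
    return 0.5*(C1 + C0), 0.5*(C1 - C0)

for mu in (0.01, 0.3, 2.0, 50.0):
    u, v = uv(mu)
    assert abs(v*(v - wbar) - u*(u - 1.0)) < 1e-10        # the curve v(v - wbar) = u(u - 1)
    up, vp = uv(1.0/(a1*a0*mu))
    assert np.isclose(up, 1.0 - u) and np.isclose(vp, v)  # reflection about u = 1/2
print("RG trajectory lies on the curve and is reflection symmetric")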
§.§ RG flow in d=2
In order to further investigate the relation between UV/IR
symmetries of the S-matrix and RG flow, consider deforming the
scattering system to d=2 via an anisotropic harmonic
trap <cit.> or by compactifying a
dimension <cit.>. The S-matrix element in the
d=2 scattering length approximation becomes
S_s = [log ( a_s^2 p^2 ) + i π]/[log ( a_s^2 p^2 ) - i π] ,
where a_s is the intrinsically positive d=2 scattering length.
Here the momentum inversion transformation, p ↦ 1/(a_s^2 p), maps S_s↦ S_s^*.
The scattering amplitude obtained in the EFT has a logarithmic divergence which requires regularization and
renormalization. Using dim reg with the MS scheme one finds that the couplings run
with the RG scale as <cit.>
C_s(μ) = - 4 π/(M log ( a_s^2 μ^2 )) .
Notice that the coupling has a pole at μ = a_s^-1 where it changes sign. With μ > a_s^-1,
C_s(μ)<0 corresponding to attraction, and the coupling appears asymptotically free; i.e. flows to zero logarithmically.
With μ < a_s^-1, C_s(μ)>0 corresponding to repulsion, and the coupling runs into a Landau pole
in the UV at μ = a_s^-1.
It is convenient to define the dimensionless couplings, Ĉ_s(μ) = - MC_s(μ)/(4 π), whose beta-functions are
β̂(Ĉ_s) = μd/dμĈ_s(μ) = -2 Ĉ^2_s(μ) .
There is a single RG fixed point Ĉ_s = 0, which is reached asymptotically at μ = 0 and at μ = ∞ as seen in
Fig. (<ref>). Under an inversion of the RG scale,
μ↦ (a_s^2 μ)^-1, the running couplings transform as
Ĉ_s(μ) ↦ -Ĉ_s(μ) ,
which implies β̂(Ĉ_s)|_μ̅ = β̂(Ĉ_s)|_1/(a_s^2 μ̅) for any μ̅. The fixed point of the scale-inversion
transformation is at the Landau pole, μ^∘ = a_s^-1.
Considering both s-wave scattering channels simultaneously, the momentum inversion
transformation of the S-matrix generalizes to p ↦ (a_1 a_0 p)^-1 and leaves the
density matrix invariant. Defining u(μ) and v(μ) as in d = 3, the components
of the potential flow with the RG according to the algebraic curve
w̅( v^2-u^2) = v , w̅ ≡ log(a_1/a_0) .
The UV/IR transformation on the potential, via the inversion of
the scale, μ↦ (a_1 a_0 μ)^-1, generates a reflection about the v-axis, u ↦
-u, v ↦ v, in the u-v plane. The fixed point of the scale-inversion
symmetry occurs at μ^∘ = (a_1 a_0)^-1/2 where Ĉ_0 = -Ĉ_1. The RG
trajectory of the
potential is shown in Fig. (<ref>). It is clear that, as in
three spatial dimensions, a symmetry of the density matrix is encoded in the
EFT as a reflection symmetry of the RG flow of the potential.
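The d=2 analogue can be checked in the same way, using Ĉ_s(μ) = 1/log(a_s^2 μ^2) as follows from the coupling above; a minimal sketch:

import numpy as np

Chat = lambda a, mu: 1.0/np.log((a*mu)**2)

a0, a1 = 0.8, 2.5                      # intrinsically positive d=2 scattering lengths
wbar = np.log(a1/a0)

def uv(mu):
    C1, C0 = Chat(a1, mu), Chat(a0, mu)
    return 0.5*(C1 + C0), 0.5*(C1 - C0)

for mu in (0.07, 0.9, 6.0):            # avoid the Landau poles at mu = 1/a_s
    u, v = uv(mu)
    assert abs(wbar*(v**2 - u**2) - v) < 1e-12             # the curve wbar (v^2 - u^2) = v
    up, vp = uv(1.0/(a1*a0*mu))
    assert np.isclose(up, -u) and np.isclose(vp, v)        # reflection u -> -u, v -> v
print("d=2 flow lies on the curve; scale inversion reflects u -> -u")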
§ UV/IR SYMMETRY BREAKING
The correspondence between the UV/IR transformation
properties of the S-matrix and the RG flow of EFT couplings is not
confined to the scattering length approximation. The UV/IR
transformation properties of the momentum-dependent corrections in the
effective range expansion are reflected in the EFT interaction
potential via constraints on the RG flow of the associated EFT
couplings. This section will focus on corrections to the
single-channel example in d=3 considered above: the case of a large
s-wave scattering length with perturbative effective range
corrections <cit.>.
The methods used in this section are similar to those used in
Ref. <cit.> where a distinct UV/IR symmetry was imposed to
constrain the EFT relevant for a non-perturbative treatment of the
effective range. It will be shown that the UV/IR transformation
properties of the scattering-amplitude corrections reproduces key
features of the known RG flow of higher-order couplings in the EFT,
without explicit calculation of the higher-order effects.
Treating the scattering length to all orders with effective range and
shape parameter effects treated perturbatively, the d=3
s-wave[Note that in this section all spin labels are dropped
and the results apply equally to the spin-singlet and spin-triplet
channels.] scattering amplitude is given by the ERE which, up to
NLO,
is <cit.>
𝒜 = -(4 π/M) 1/(1/a + i p) (1+ (r/2) p^2/(1/a + i p) ) ≡𝒜_-1 (p) + 𝒜_0(p)
where r is the effective range. Defining p̂≡ a p, at each order the scattering amplitude transforms simply under the UV/IR transformation, p̂→ 1/p̂,
𝒜_-1(1/p̂) = -i p̂ 𝒜_-1(p̂)^* , 𝒜_0(1/p̂) = -p̂^-2 𝒜_0(p̂)^* .
The scattering amplitude is generated from an EFT of contact interactions via a renormalized, on-shell, tree-level interaction potential
V = C_0(μ̂) + C_2(μ̂) p̂^2/a^2 ≡ V_-1(μ̂) + V_0(μ̂, p̂)
where C_n is the coefficient of the four-fermion interaction with
n derivatives and μ̂ = a μ. Note that here the RG scale
μ can represent the PDS scale or a hard cutoff [For the case of cutoff regularization, we are omitting from the potential the counterterms necessary to exactly match onto Eq. (<ref>).], and therefore
Eq. (<ref>) is not assumed. The potential relevant for
momentum p̂ is obtained by setting μ̂ = p̂ and,
in a particular renormalization scheme, the UV/IR properties of the
potential should mirror those of the scattering amplitude that it
generates, i.e. Eq. (<ref>). Hermiticity implies that V =
V^* and that there will be no imaginary phases in the UV/IR
transformation. The assumption that the interaction, as represented by
the potential, reflects the UV/IR transformation properties of the
amplitude at each order in the EFT expansion, suggests that by
imposing the UV/IR transformation, μ̂→ 1/μ̂ and
p̂→ 1/p̂, and setting μ̂ = p̂, the
potential should transform as
V_-1(1/p̂) = ϵ_ -1 p̂ V_-1(p̂) , V_0(1/p̂, 1/p̂) = ϵ_ 0 p̂^-2 V_0(p̂, p̂)
where (ϵ_ -1,0)^2=1.
This in turn implies that the renormalized couplings transform as
C_0(1/μ̂) = ϵ_ -1 μ̂ C_0(μ̂) , C_2(1/μ̂) = ϵ_ 0 μ̂^2 C_2(μ̂) .
Hence, once C_0 is determined, the UV/IR transformation properties imply
C_2∝ r (C_0)^2, as confirmed by explicit calculation of higher
order loop effects in the EFT using both PDS and cutoff [Using cutoff
regularization, the insertion of the C_n operators in the EFT
calculation generates increasingly singular and non-linear cutoff dependence. However,
these singular contributions are
canceled by existing counterterms <cit.>, as noted in section <ref>.]
regularization <cit.>.
This readily extends to higher orders in the EFT expansion; it is easy
to check that the UV/IR transformations of shape parameter and
higher-order amplitudes constrains C_2n to scale as specific
powers of C_0. Furthermore, from Eq. (<ref>), it is
clear that the couplings should all vanish as a → 0. Assuming
polynomial dependence on a and μ, dimensional analysis then
implies that C_0(μ̂) = a/M f(μ̂), and one class of
solutions to the UV/IR constraint is given by
f(μ̂) = -c(μ̂+1)μ̂^n/(μ̂^(2n+2)+ϵ_ -1)
where c and n are real constants. Setting
ϵ_ -1=-1, n=0, and c = 4π recovers C_0(μ) in the PDS scheme <cit.>.
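A quick numerical check of this family of solutions is given below; the constants a, M, c and the values of μ̂ are illustrative, and the final loop confirms that the ϵ_-1 = -1, n = 0, c = 4π member reproduces the PDS coupling quoted earlier.

import numpy as np

def f(m, n, c, eps):
    return -c*(m + 1.0)*m**n/(m**(2*n + 2) + eps)

# UV/IR constraint: C_0(1/muhat) = eps * muhat * C_0(muhat)
for n in (0, 1, 2):
    for eps in (+1.0, -1.0):
        for m in (0.3, 1.7, 4.0):
            assert np.isclose(f(1.0/m, n, 7.0, eps), eps*m*f(m, n, 7.0, eps))

a, M = 5.4, 4.76                      # illustrative values
for mu in (0.05, 0.8, 3.0):
    lhs = (a/M)*f(a*mu, 0, 4*np.pi, -1.0)
    rhs = (4*np.pi/M)/(1.0/a - mu)    # PDS coupling
    assert np.isclose(lhs, rhs)
print("UV/IR constraint satisfied; PDS coupling recovered for eps=-1, n=0, c=4*pi")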
§ SUMMARY AND CONCLUSION
In both two and three spatial dimensions, the scattering
length approximation to the low-energy, s-wave S-matrix has a UV/IR
symmetry which leaves the density matrix of the “out” state
invariant. While this is not a symmetry of the scattering amplitude or
effective action in the sense of a transformation on the fields, the
UV/IR symmetry does appear as a symmetry of the RG evolution of the
EFT couplings. In a sense, the echo of the S-matrix symmetry in RG
evolution is a consistency check that indeed the S-matrix is
rendered UV complete by the symmetry. As the UV/IR transformation maps
threshold to asymptotic infinity and vice versa, its presence signals a
UV-complete description of the scattering event. That is, scattering
is well defined at all distance scales. Clearly then, the UV completeness
should be reflected in the interaction itself, which should also be
defined over all distance scales. In the EFT of contact operators,
this is reflected by the presence of the RG scale μ which can be
chosen to take any value, and in the UV/IR symmetry of the beta-function.
It was shown that the UV/IR symmetry has utility beyond the scattering
length approximation, where the simple RG evolution between two fixed
points breaks down. Indeed, the manner in which perturbations around
the scattering length approximation break the UV/IR symmetry strongly
constrains the RG flow of higher dimensional couplings in the
corresponding EFT. In addition, there is another class of UV/IR
symmetric S-matrices which include effective ranges that are
correlated to the scattering
lengths <cit.>. In that case, the symmetry
also appears in the quantum mechanical potential which gives rise to
the S-matrix, albeit in a different manner than was shown in this
paper <cit.>.
One of the important conclusions of this chapter is that there are
symmetries of a scattering process which are not manifest symmetries
of the scattering amplitude. This arises because observables measured
on an “out” state depend on the full wave function after
scattering. That is, the contribution from the part of the wave
function which does not scatter (corresponding to the identity
operator in the S-matrix) is crucial in constructing the “out”
state. Furthermore, as the full wave function may decompose into many
scattering channels, each with their own scattering amplitude, there
can be symmetries which are only apparent if all scattering channels
are viewed holistically. It would be interesting if such S-matrix
symmetries can arise in other contexts, perhaps unrelated to
momentum inversion.
One important issue that has not been addressed
is the spacetime nature of the UV/IR symmetry. As the UV/IR
transformation is an inversion of momentum, it necessarily involves a
scale transformation. Given the results of appendix <ref>, it appears promising to investigate whether the
UV/IR symmetry could be understood as an extension of Schrödinger
symmetry <cit.>
to systems with finite scattering length.
§ MOMENTUM INVERSION WARD IDENTITY
This appendix considers a generalization of the momentum inversion transformation of Eq. (<ref>)
and derives the associated Ward identity. Consider the S-matrix of Eq. (<ref>) in the scattering length approximation.
If one allows the momentum, p, to span the entire real line, then with respect to the real Möbius transformation
p ↦ (ϑ p+ 1/a_s)/(±(a_s p-ϑ)) ,
with ϑ an arbitrary real parameter, the S-matrix transforms as
S ↦ (ϑ + i)/(ϑ - i) S^* ,    S ↦ (ϑ - i)/(ϑ + i) S ,
for the positive and negative sign choices in the transformation law, respectively.
That is, the S-matrix transforms to itself or its complex conjugate,
times a constant complex phase. Choosing the positive
sign in the transformation law, and ϑ=0 recovers the UV/IR
transformation of Eq. (<ref>).
The Möbius transformation is a general mapping of the momentum to itself and therefore
generally contains UV/IR transformations. In what follows, the Ward identity
for this symmetry will be derived. Let p̂≡ a_s p, and
consider the infinitesimal version of Eq. (<ref>), with the
minus sign chosen in the transformation law. Since ϑ large
recovers the identity, take ϑ=1/ϵ with ϵ
infinitesimal. Then we see that Eq. (<ref>) becomes
p̂↦ (p̂+ ϵ)/(1-ϵ p̂) .
Now consider the infinitesimal translation
p̂↦p̂ + ϵ ,
the infinitesimal dilatation
p̂↦ e^ϵp̂ = p̂ + ϵ p̂ + 𝒪(ϵ^2) ,
and finally consider two inversions with a translation in between,
p̂↦(p̂^-1-ϵ)^-1 = p̂/(1-ϵ p̂) = p̂ + ϵ p̂^2 + 𝒪(ϵ^2) .
This final step is critical in giving an infinitesimal description of momentum inversion.
These three transformations are generated by the differential operators L_-1, L_0, and L_1, respectively, where
L_k≡ - p̂^k+1∂/∂p̂ .
These satisfy the 𝔰l(2,ℝ) algebra:
[ L_1 , L_-1 ] = 2 L_0 , [ L_± 1 , L_0 ] = ± L_± 1 .
Note that the general Möbius transformation matrix of Eq. (<ref>) has determinant ∓(ϑ^2+1)
and therefore is an element of PSL(2,ℝ) only in the special case where the minus sign is chosen in the transformation law
and ϑ=0 [If a_s = ± 1 then Eq. (<ref>) is an element of the modular group, PSL(2,ℤ).]. The general Möbius transformation is an element of PGL(2,ℝ).
It is easy to check that the (infinitesimal) Möbius transformation of Eq. (<ref>) is constructed by
successive transformations of Eq. (<ref>) and Eq. (<ref>); that is, it is generated by L_1 and L_-1. Indeed the Ward identity is:
(L_1 + L_-1-2 i )S = 0 .
The general solution of this differential equation is Eq. (<ref>), up to an overall complex coefficient.
Note that the (broken) on-shell dilatation Ward identity of the Schrödinger group <cit.> takes
the form <cit.>
L_0 S = -1/2( S^2-1)
and, as expected, is respected at the RG fixed points, S=± 1.
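Both identities can be confirmed symbolically. The sketch below assumes the scattering-length-approximation form S(p̂) = (1 - i p̂)/(1 + i p̂) in the dimensionless variable p̂ = a_s p.

import sympy as sp

phat = sp.symbols('phat')
S = (1 - sp.I*phat)/(1 + sp.I*phat)
L = lambda k, f: -phat**(k + 1)*sp.diff(f, phat)     # L_k = -phat^(k+1) d/dphat

print(sp.simplify(L(1, S) + L(-1, S) - 2*sp.I*S))    # -> 0  (Mobius Ward identity)
print(sp.simplify(L(0, S) + (S**2 - 1)/2))           # -> 0  (dilatation identity)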
CHAPTER: UV/IR SYMMETRIC EFFECTIVE FIELD THEORIES FOR THE NUCLEON-NUCLEON INTERACTION
This chapter is associated with
“Symmetries of the Nucleon-Nucleon S-matrix and Effective Field Theory Expansions" <cit.> by Silas R. Beane and Roland C. Farrell.
§ INTRODUCTION
In this chapter the UV/IR symmetries explored in the previous chapters are used to construct a UV/IR symmetric EFT for the two nucleon system.
It is shown that the assumption of a UV/IR symmetry highly constrains the kinematical dependence of the interaction.
These constraints manifest as a set of algebraic equations, that are solved by a “Yamaguchi"-like potential <cit.>.
This UV/IR symmetric interaction has the scattering length and effective ranges treated to all orders, with higher order shape parameters set to zero.
The UV/IR symmetric interaction constitutes LO in this new EFT, and it is shown how UV/IR symmetry breaking can be included as higher orders in perturbation theory.
§ UV/IR SYMMETRIES OF THE NN S-WAVE S-MATRIX
It may seem that momentum inversion transformations are orthogonal to the idea of EFT since
they interchange the UV and the IR. Note however that at LO in the
EFT, one is considering scattering in a limit in which all
short-distance mass scales are taken to be very large. In this
limit, long-distance forces (like pion exchange) and inelastic
thresholds (like the pion-production threshold) are only probed as
momentum approaches infinity. Therefore it is reasonable, at LO, to consider
transformations of the momenta over the entire momentum half-line, 0<k<∞.
Realistic NN scattering at low energies is not a single-channel system as the initial-state nucleons can be arranged into two distinct spin configurations. The NN S-matrix at very low energies is dominated by the s-wave, see Eq. <ref>.
The low-energy S-matrix can be well reproduced by the ERE truncated at the effective range, and below
the following physical effective range parameters will be used <cit.>: a_0
= -23.714(13) fm, a_1 = 5.425(1) fm, r_0 = 2.73(3) fm, and
r_1 = 1.749(8) fm.
With effective range corrections included, the momentum inversion transformation
p↦ 1/(λ |a_1 a_0| p) ,
with the arbitrary real parameter λ >0, implies
δ_0(p) ↦δ_0(p) , δ_1(p) ↦ -δ_1(p) ,
but only in the special case[The general case is considered in Sec. <ref>.] where the effective ranges are correlated with the scattering lengths
as
r_0 = 2λ a_1 , r_1 = -2λ a_0 .
This UV/IR symmetry has interesting implications for
nuclear physics. The measured singlet NN scattering phase shift rises
steeply from zero due to the unnaturally large scattering length, and
then, as momenta approach inelastic threshold, the phase shift goes
through zero and becomes negative, indicating the fabled short-distance
repulsive core. While this impressionistic description assigns
physics to the potential, which is not an observable and indeed need
not be repulsive at short distances, the UV/IR symmetry directly
imposes the physically observed behavior of the phase shift. Even though the
singlet phase shift changes sign at momenta well beyond the range of
applicability of the pionless theory, ascribing this symmetry to the
LO results —through the required presence of range corrections— results
in a more accurate LO prediction than the usual pionless expansion <cit.>.
§ EFT DESCRIPTION: SINGLE-CHANNEL CASE
§.§ Potential and Lippmann-Schwinger equation
It is convenient to introduce the generic UV scale ℳ and the
generic IR scale ℵ. The EFT will describe physics for k∼ℵ≪ℳ.
Subleading corrections to the scattering amplitude in the EFT are expected to be parametrically
suppressed by powers of k/ℳ. For the NN system at very-low energies, described by the pionless EFT, the UV scale is ℳ∼ M_π.
The s-wave potential stripped from the most general effective Lagrangian of four-nucleon contact operators is <cit.>
V(p',p)=C_0 + C_2 (p^2 + p'^2) + C_4 (p^4 + p'^4) + C_4' p^2 p'^2+… ,
where the C_n are the bare coefficients.
The scattering amplitude is obtained by solving the Lippmann-Schwinger (LS) equation with this potential
T(p',p;E)=V(p',p) + M∫d^3q/(2 π)^3 V(p',q)
1/(EM- q^2+iϵ) T(q,p;E) .
As the potential is separable to any order in the momentum expansion, the scattering amplitude can be
obtained in closed form to any desired order in the
potential <cit.>. Of course the singular nature of the
potential requires regularization and renormalization. The scattering
amplitude can be regulated using, for instance, dimensional regularization and its various schemes, or by
simply imposing a hard UV cutoff, Λ, on the momentum integrals.
§.§ Matching equations
In the language of cutoff regularization, matching the solution of the LS equation with the ERE of
Eq. (<ref>), formally gives the all-orders matching equations
a = f_0(Λ; C_0,C_2,C_4,…) ,
r = f_2(Λ; C_0,C_2,C_4,…) ,
v_n = f_2n(Λ; C_0,C_2,C_4,…) ,
where the f_2n are non-linear functions determined by solving the
LS equation. These equations can be inverted to find the now
cutoff-dependent coefficients C_2n(Λ;a,r,v_2,…). To
obtain an EFT with predictive power, it is necessary to identify the
relative size of the effective range parameters. If they all scale as
powers of ℳ^-1, then the entire potential of
Eq. (<ref>) can be treated in perturbation theory for momenta
k≪ℳ. If there are big parts that instead scale as powers
of ℵ^-1, they will, via the interactions of Eq. (<ref>), constitute the LO potential in the EFT
expansion.
§.§ Scattering length approximation
With a∼ℵ^-1 and the effective range and shape parameters of
natural size, r∼ℳ^-1, v_n∼ℳ^-2n+1,
the amplitude can be expanded for k≪ℳ as
T(k) = -4 π/M( -1/a - i k )^-1[1 + O(k)] .
In order to generate this expansion in the EFT, the potential is written as
V(p',p)=V_LO+V_r(p',p) ,
where V_LO=C_0 is treated exactly in the LS equation and the residual potential, V_r, which includes range
and shape parameter corrections, is treated in perturbation theory.
Keeping the first term in the s-wave potential, the solution of the LS equation is
T_LO(k) =( 1/C_0 - 𝕀(k) )^-1,
where
𝕀(k) ≡(ω/2)^(3-d) M∫d^dq/(2 π)^d 1/(k^2- q^2+iϵ) →^PDS -M/(4π)(ω +i k ) ,
has been evaluated in dimensional regularization with the PDS scheme <cit.> and renormalized at the RG scale ω.
The matching equations in this case are
a = (4π/(M C_0) + ω)^-1 ,
r = v_n = 0 .
Inverting one finds
C_0(ω) = (4 π/M) 1/(1/a-ω) .
The coupling at the unitary fixed point, C_0=C_0⋆, corresponds to a divergent scattering
length (unitarity). A rescaled coupling can be defined as
Ĉ_0 ≡ C_0/C_0⋆. The corresponding beta-function
is then
β̂(Ĉ_0) = ωd/dωĈ_0(ω) = -Ĉ_0(ω)(Ĉ_0(ω)-1) ,
which explicitly has fixed points at Ĉ_0=0 and 1. Note that the beta-function inherits the properties of the UV/IR symmetry as shown in Chapter <ref>.
§.§ Range corrections with zero-range forces
With a, r∼ℵ^-1 and shape parameters of
natural size, v_n∼ℳ^-2n+1,
the amplitude can be expanded for k≪ℳ as
T(k) = -4 π/M( -1/a + 1/2 r k^2 - i k )^-1[1 + O(k^3)] .
In order to generate this expansion from the EFT perspective, one may choose V_LO=C_0+C_2
(p^2 + p'^2). Due to the highly singular UV behavior of this potential, cutoff
regularization will be used to carefully account for the divergences that are generated when
solving the LS equation. The amplitude is found in closed form to be <cit.>
T_LO(k)=[(C_2 I_3 -1)^2/(C_0 + C_2^2 I_5 + k^2 C_2 (2 - C_2 I_3)) - 𝕀(k)]^-1 ,
where now 𝕀(k) is evaluated with cutoff regularization, and
I_n ≡ -M ∫d^3q/(2 π)^3 q^(n-3)θ(πΛ/2-q)=-M Λ^n/(2^(1+n)π^(2-n)n) .
Matching the amplitude to the ERE gives
a = M/4 π[(C_2 I_3 -1)^2/(C_0 + C_2^2 I_5) - I_1]^-1 ,
r = 8 π/M[(C_2 I_3 -1)^2/(C_0 + C_2^2 I_5)]^2
[1/((C_2 I_3 - 1)^2 I_3) - 1/I_3] ,
v_n = -4 π/M C_2^n(C_2 I_3 -2)^n(C_2 I_3 -1)^2/(C_0 + C_2^2 I_5)^(n+1) .
Taking the limit Λ→∞ then recovers the ERE
with all shape parameter corrections vanishing, v_n=0. However, it
is straightforward to verify that in this limit, r ≤ 0, as
required by the Wigner bound <cit.>. If s-wave NN scattering involved
negative effective ranges and resonances rather than bound states,
then this scheme, with the effective potential consisting of a finite
number of strictly delta-function potentials, would
suffice <cit.>. However, as the s-wave NN effective ranges are
both positive, it is clear that Λ must be kept
finite. In that case the higher-order terms in the bare
potential, even if neglected in the choice of V_LO, are generated
quantum mechanically at LO as evidenced by the non-vanishing shape
parameters. Therefore the higher order operators should be kept from the start. That is, one
is back to the general, formal statement of the matching conditions
given in Eq. (<ref>), where a renormalization scheme is
required which ensures that v_n=0, and a choice of V_LO must be
found which achieves this while including all orders in the momentum
expansion.
§.§ Range corrections with a finite-range scheme: LO
The lesson provided by the Wigner bound is that if range corrections are treated exactly, then the LO potential
should, in general, include all orders in the momentum expansion. Therefore, the potential
can be written as in Eq. (<ref>) except now with a momentum dependent LO potential
V(p',p)=V_LO(p',p) + V_r(p',p) .
The residual potential, V_r, which accounts for NLO and higher effects in
perturbation theory, will be considered in detail below.
Now the potential is non-unique and there is no reason for the
separation into V_ and V_r to be unique. Ideally one
finds a LO potential which identically gives the ERE with all
shape parameters vanishing, and indeed that is what will be achieved
by imposing the UV/IR symmetry at the level of the interaction.
As the s-wave EFT potential is non-local (non-diagonal in coordinate space) and separable to any order in the momentum expansion, it appears sensible to assume that the LO potential which generates the ERE with range corrections only is non-local and separable. However, such an assumption is not necessary: all potentials which generate the ERE truncated at the effective
range are, by definition, phase equivalent[Note that the potential
considered in the previous section is phase equivalent only in the limiting sense of
Λ→∞ and r < 0.]. Once one potential is found, others can be
obtained by unitary transformation. Here the simplest possibility
will be considered; that the LO potential is (rank-one) separable.
For a separable potential V(p,p'), the on-shell scattering amplitude
solves the LS equation algebraically to
T(k) = V(k,k)(1-M∫d^3q/(2 π)^3 V(q,q)/(k^2- q^2+iϵ))^-1 .
The momentum inversion symmetry of Eq. (<ref>)
(assuming for simplicity η≡ a r/2 >0) implies
T(k)↦ T(1/(η k)) = -η k^2 T^*(k) .
With the range of the potential taken to be
the momentum scale μ, the potential can be taken to be the real function V_LO(μ;p,p'). Constraints on the S-matrix concern the potential
V_LO(μ;p,p)≡ V_LO(μ;p). Therefore,
T^*(k) = V_LO(μ;k)(1-M∫d^3q/(2 π)^3 V_LO(μ;q)/(k^2- q^2-iϵ))^-1
= [-(1/(η k^2)) V_LO(μ;1/(η k))]/[1-M/(2π^2)∫ dq (-1/(η q^2))V_LO(μ;1/(η q))
-M∫d^3q/(2 π)^3 (-1/(η q^2))V_LO(μ;1/(η q))/(k^2- q^2-iϵ)]
where the first line follows from Eq. (<ref>) and the second line follows from Eq. (<ref>).
Now consider non-singular solutions of this equation of the form
V_LO(μ;1/(η q)) = ϵ η q^2 V_LO(ν;q) ,
where ϵ=± 1, and the range of the potential has been allowed to vary under momentum inversion,
and so ν is an independent scale in correspondence with the range of the transformed potential. Integrating this equation gives
∫_0^∞ dq V_LO(μ;q) = ϵ ∫_0^∞ dq V_LO(ν;q) .
With the choice ϵ=-1 and μ=ν, this integral must vanish identically and there is clearly a formal solution
to Eq. (<ref>). However, an explicit solution does not seem to exist for a real, finite-range potential.
What follows focuses on the case ϵ=1 and μ≠ν. In this case, Eq. (<ref>) is solved by
the ratio of polynomials
V^n_LO(μ;p) = N/(Mμ) (1-(p^2/μ^2)^(n+1))/(1-(p^2/μ^2)^(n+2)) ,
where n∈ℤ: n≥ 0, N is a dimensionless normalization constant and μ ν=1/η. One finds
∫_0^∞ dq V^n_LO(μ;q) = ∫_0^∞ dq V^n_LO(ν;q) = πN/(M (n+2)) cot(π/(2 n+4)) .
With the postulated solution, Eq. (<ref>) takes the form
T^*(k) = V^n_LO(μ;k)(1-M∫d^3q/(2 π)^3 V^n_LO(μ;q)/(k^2- q^2-iϵ))^-1
= V̅^n_LO(ν;k)(1-M∫d^3q/(2 π)^3 V̅^n_LO(ν;q)/(k^2- q^2-iϵ))^-1 ,
where
V̅^n_LO(ν;k) = -V^n_LO(ν;k)( 1 + M/(2π^2)∫_0^∞ dq V^n_LO(ν;q))^-1
= -N/(Mν)(1+ N/(2π (n+2)) cot(π/(2 n+4)))^-1 (1-(k^2/ν^2)^(n+1))/(1-(k^2/ν^2)^(n+2)) .
Now all that is required is to show that V^n_LO(μ;k) and V̅^n_LO(ν;k) are phase-equivalent potentials. Equating
the two sides of Eq. (<ref>) at both k=0 and k = √(μν)
yields a single solution for n and N:
n=0 , N= -4π(1 + ν/μ) .
Solving the LS equation with either phase-equivalent potential gives
a= 1/μ+1/ν , r=2/(μ+ν) , v_n=0 .
Finally, the potential takes the separable and non-local form <cit.>
V_LO(μ,ν;p,p') = -4π/(Mμ)(1 + ν/μ) 1/(√(1+p^2/μ^2)√(1+p^' 2/μ^2)) ,
with phase-equivalent potential V̅_LO(μ,ν;p,p')=V_LO(ν,μ;p,p').
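As a check, the sketch below feeds the separable potential above through the algebraic solution of the LS equation, using the closed-form integral 𝕀_2 quoted in the following subsection, and confirms that p cot δ is exactly linear in k^2 with a = 1/μ + 1/ν and r = 2/(μ+ν); the numerical values of μ, ν and M are illustrative.

import numpy as np

M, mu, nu = 4.76, 1.4, 0.35
C  = -4*np.pi/(M*mu)*(1.0 + nu/mu)             # C_LO0
G2 = lambda k: 1.0/(1.0 + (k/mu)**2)           # G^2(k)

def pcot(k):
    # 1/T = 1/(C G^2(k)) - I_2(k)/G^2(k), with I_2(k) = -(M/4pi) G^2(k) (mu + i k)
    invT = 1.0/(C*G2(k)) + (M/(4*np.pi))*(mu + 1j*k)
    return -(4*np.pi/M)*invT.real

a, r = 1.0/mu + 1.0/nu, 2.0/(mu + nu)
for k in (0.05, 0.4, 1.1, 3.0):
    assert np.isclose(pcot(k), -1.0/a + 0.5*r*k**2)   # no shape-parameter corrections
print("a = 1/mu + 1/nu, r = 2/(mu+nu), all v_n = 0")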
Note that the potential of Eq. (<ref>) satisfies the scaling law of Eq. (<ref>) only if N is held fixed. In some sense, the original potential that appears in the
LS equation may be viewed as a bare potential which is determined by requiring that Eq. (<ref>) leave the LS form invariant. The corresponding transformation of the S-matrix, S → S^*, is seen only after solving the LS
equation, which fixes N to its μ and ν dependent value in Eq. (<ref>). N is therefore a kind of anomalous scaling factor which takes the bare potential to the renormalized form of Eq. (<ref>). Letting the UV/IR transformation act on both the momenta, p, and the scales, μ and
ν, (i.e. μ↔ν), generates a transformation on V_LO (V̅_LO) which is augmented from that of Eq. (<ref>) by an anomalous
scaling factor of ν/μ (μ/ν). One important observation is that, up to an anomalous scaling factor and a sign, V_LO and V̅_LO transform in the same manner as the amplitude that they generate, Eq. (<ref>).
With μ and ν intrinsically positive, the general case, with scattering length and effective range of any sign
is obtained by taking ζμ ν=1/η, with ζ=1 corresponding to a r >0 and ζ=-1 corresponding to a r <0.
The general solution is then
V_LO(μ,ν;p,p') = -4π/(Mμ)(1 + ζν/μ) 1/(√(1+p^2/μ^2)√(1+p^' 2/μ^2)) ,
with phase-equivalent potential V̅_LO(μ,ν;p,p')=ζ V_LO(ν,μ;p,p'), and
a= 1/μ+1/(ζν) , r=2/(μ+ ζν) , v_n=0 .
Having both a and r large as compared to the (inverse) UV scale
ℳ^-1∼ M^-1_π generally requires μ,ν∼ℵ.
Expanding
V_ in powers of the momenta for p,p'≪ℵ and matching
onto the momentum expansion of Eq. (<ref>) leads to the scaling
C^(')_LO m ∼ 4π/(M ℵ^(m+1)) .
The coefficients of the residual potential
are expected to be suppressed, in a manner to be determined below, by the UV scale. As
the potential is not unique, the decomposition into IR enhanced and UV
suppressed contributions is not unique. Treating the expanded LO
potential as a renormalization scheme, then for momenta p,p'∼ℵ, all C^(')_LO m terms in the
potential should be summed into the LO potential to give
Eq. (<ref>) which is treated exactly in the LS
equation, while the residual potential is treated in perturbation theory.
§.§ Range corrections with a finite-range scheme: NLO
Recall that treating a,r∼ℵ^-1 and v_n∼ℳ^-2n+1, for k≪ℳ, the ERE of Eq. (<ref>) can be expanded to
give the NLO amplitude
T_NLO(k) = (4 π/M)( -1/a + 1/2 r k^2 - i k )^-2 v_2 k^4 .
Note that it has been assumed in this expression that r is close to its “physical” value.
More generally, and of greater utility when considering realistic NN scattering, one can decompose r = r_LO + r_NLO, where r_LO∼ℵ^-1 and r_NLO∼ℳ^-1. In this case
T_NLO(k) = 4 π/M ( -1/a + 1/2 r_LO k^2 - i k )^{-2} r_NLO k^2
so that shape-parameter corrections enter at NNLO (as a subleading contribution to range-squared effects).
The goal in what follows is to generate the NLO amplitudes of Eq. (<ref>) and Eq. (<ref>) (in that order) in the EFT.
The LS equation, Eq. (<ref>), for the full scattering amplitude is symbolically expressed as
T = V + V G T ,
where G is the two-particle Green's function. Expanding the scattering amplitude and potential as
T = T_LO + T_NLO + … , V = V_LO + V_r
leads to T_LO as an exact solution of the LS equation, as illustrated in Fig. <ref>, and the NLO and beyond amplitude
T_NLO + … = V_r + V_r G T_LO + T_LO G V_r + T_LO G V_r G T_LO + … ,
as illustrated diagrammatically to NLO via the Feynman diagrams in Fig. <ref>. The form of
the bare EFT potential V_r which matches to the expanded ERE is
straightforward to find using the UV/IR symmetry. It is convenient to express the LO potential in the compact form
V_LO(p,p') = C_LO 0 𝒢(p) 𝒢(p') ,
where C_LO 0 and 𝒢(p^(')) are defined by comparing with Eq. (<ref>).
The LO amplitude is then
T_LO(k) = C_LO 0 𝒢^2(k) Z^-1 ,
with Z≡ 1-C_LO 0𝕀_2 and the convergent integral
𝕀_2 = M∫d^3q/(2 π)^3𝒢^2(q)/k^2- q^2+iϵ = -M/4π𝒢^2(k)(μ+i k) .
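As a concrete cross-check of these closed forms, the ERE parameters can be recovered numerically from the LO amplitude. The following minimal Python sketch (assuming the conventions above, with M = 1, ζ = +1 and an illustrative choice of μ and ν) extracts k cot δ = Re[-(4π/M)/T_LO] and fits it in powers of k^2; the fitted scattering length and effective range reproduce a = 1/μ + 1/ν and r = 2/(μ + ν), with shape parameters consistent with zero.

import numpy as np

M = 1.0
mu, nu = 1.0, 3.7                                  # illustrative scales
C_LO0 = -4.0 * np.pi / (M * mu) * (1.0 + nu / mu)  # LO coupling for zeta = +1
G2 = lambda k: 1.0 / (1.0 + k**2 / mu**2)          # G(k)^2

k = np.linspace(1e-3, 0.4, 400)
I2 = -M / (4.0 * np.pi) * G2(k) * (mu + 1j * k)    # closed form quoted above
T_LO = C_LO0 * G2(k) / (1.0 - C_LO0 * I2)
kcotd = np.real(-4.0 * np.pi / (M * T_LO))         # k cot(delta)

c = np.polyfit(k**2, kcotd, 3)                     # -1/a + (r/2) k^2 + ...
print("a :", -1.0 / c[-1], "vs", 1.0 / mu + 1.0 / nu)
print("r :", 2.0 * c[-2], "vs", 2.0 / (mu + nu))
print("shape terms:", c[0], c[1])                  # consistent with zero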
Now notice that the NLO amplitude of Eq. (<ref>) transforms simply under the momentum inversion k↦ 2/( | a r | k) = μν/k as T_NLO→
T^*_NLO (a r > 0) or T_NLO→ T_NLO (a r <
0). Based on the discussion below Eq. (<ref>) it may be expected that the part of V_r that generates
T_NLO will be invariant under momentum inversion up to a sign and μ↔ν.
Consider the energy-dependent potential,
k^2 V_LO(k,k) 𝒢^2(k) = C_LO 0 k^2 𝒢^4(k) ,
which maps to (minus) itself with μ↔ν for a r positive (negative) and is therefore a candidate for the piece of V_r which generates T_NLO. An energy-independent residual potential can then be defined as
V_r(p',p) = [ c_0 + c_2 (p^2 + p'^2) + … ] V_LO(p',p) ( 𝒢^2(p)+𝒢^2(p') ) ,
where the c^(')_m coefficients are bare parameters, subject to renormalization. On-shell this potential has the desired UV/IR transformation properties. Note that the form of the potential reflects that only odd powers of 𝒢(p^(')) in V_r will match to
a polynomial in k for k∼ℵ. Formally,
the (bare) coefficients of the full potential can be expressed for k≪ℵ as
C^(')_m=C^(')_LO m+C^(')_NLO m, where C^(')_NLO m≡ C_LO 0· c^(')_m.
Evaluating the diagrams of Fig. <ref> with a single insertion of V_r gives
T_NLO = 2 c_0 C_LO 0 𝒢^2(k) { 𝒢^2(k) + C_LO 0 Z^-1 ( 𝒢^2(k) 𝕀_2 + 𝕀_4 )
+ C^2_LO 0 Z^-2 𝕀_2 𝕀_4 }
+ 2 c_2 C_LO 0 𝒢^2(k) { 2 𝒢^2(k) k^2
+ C_LO 0 Z^-1 [ 𝒢^2(k) ( 2k^2 𝕀_2 - 𝕁_2 ) + 2k^2 𝕀_4 - 𝕁_4 ]
+ C^2_LO 0 Z^-2 [ 2 𝕀_2 𝕀_4 k^2 - 𝕀_2 𝕁_4 - 𝕁_2 𝕀_4 ] } ,
where
𝕀_4 = M∫d^3q/(2 π)^3 𝒢^4(q)/(k^2- q^2+iϵ) = -M/4π 𝒢^4(k)( μ/2 - k^2/(2μ) + i k ) ,
𝕁_2 = (ω/2)^{3-d} M∫d^dq/(2 π)^d 𝒢^2(q) →(PDS) -Mμ^2/(4π) (μ-ω) ,
𝕁_4 = M∫d^3q/(2 π)^3𝒢^4(q) = Mμ^3/8π .
Here the linearly divergent integral 𝕁_2 has been evaluated in dimensional regularization with the PDS scheme <cit.>,
and ω is the renormalization scale[The linearly divergent integral 𝕁_2 in the MS scheme can be obtained by setting ω = 0. Similarly, cutoff regularization, as in Eq. (<ref>), is obtained by replacing ω with Λ (for Λ large).].
In terms of renormalized parameters, the amplitude takes the form
T_NLO = C_LO 0 ( 𝒢^2(k) Z^-1 )^2 [ c_0^R (1-ζν/μ) + c_2^R (3-ζν/μ) k^2 ] ,
where the renormalized parameters, c^(')R_m, are defined as
c_2^R = c_2 , c_0^R = c_0(ω) + c_2^R (μ+ζν)/(μ-ζν) [ ζμν + ω(μ-ζν) ] .
Matching to the expanded ERE of Eq. (<ref>) gives c_0^R=0[Note that one can also decompose a=a_LO+a_NLO, in which case this condition follows from a_NLO=0.] and
C_NLO 2 = M/(4π) C^2_LO 0 (3-ζν/μ)^-1 r_NLO ,
with C^(' )_NLO m = C_LO 0· c^(' )R_m.
This relation gives a subleading enhancement of the same form as the usual pionless EFT <cit.> up to the factor in parentheses,
and results in the scaling
c_2^R ∼ 1/ℳℵ , C_NLO 2∼4π/M ℳℵ^2 .
In similar fashion, the NLO amplitude of Eq. (<ref>) can be obtained via the
energy-independent residual potential
V_r(p',p) = [ c_0 + c_2 (p^2 + p'^2) + c_4 (p^4 + p'^4) + c'_4 p^2 p'^2 + … ]
× V_LO(p',p) ( 𝒢^2(p)+𝒢^2(p') ) .
Working in dimensional regularization with MS,
the amplitude takes the form
T_NLO = C_LO 0 ( 𝒢^2(k) Z^-1 )^2 [ c_0^R (1-ζν/μ) + c_2^R (3-ζν/μ) k^2
+ ( 2 c_4^' R + c_4^R (3-ζν/μ) ) k^4 ] ,
with the renormalized parameters
c^(')R_4 = c^(')_4 , c_2^R = c_2 - μ(μ+ζν) [ c_4^R + (3-ζν/μ)^-1 c_4^' R ] ,
c_0^R = c_0 - μ (μ+ζν)/(μ-ζν) [ -c_2 νζ + c_4^R μ^2 (2μ+νζ) + c_4^' R μ^2 (μ+νζ) ] .
Matching to the expanded ERE of Eq. (<ref>) now gives c_0^R=c_2^R=0 and
C_NLO 4+2C_NLO 4'(3-ζν/μ)^-1 = M/4π C^2_LO 0(3-ζν/μ)^-1 v_2 .
The coefficients scale as
c_4^(')R ∼ 1/ℳ^3 ℵ , C^(')_NLO 4∼4π/M ℳ^3ℵ^2 .
This differs from the conventional pionless theory counting which has a nominally leading contribution to the
C^(' )_4 operators from effective range (squared) effects.
§ EFT DESCRIPTION: THE NN S-WAVE PHASE SHIFTS
The phase shifts to NLO in the EFT expansion are
δ_s(k) = δ_LO (s)(k) + δ_NLO (s)(k) ,
with
δ_LO (s)(k) = -(i/2) ln( 1 - i k M/(2π) T_LO (s)(k) ) ,
δ_NLO (s)(k) = -k M/(4π) T_NLO (s)(k) ( 1 - i k M/(2π) T_LO (s)(k) )^-1 .
Recall from section <ref> that in s-wave NN scattering, the UV/IR symmetry of the full S-matrix
requires range corrections that are correlated with the scattering lengths and treated exactly.
The physical effective ranges can therefore be expressed as r_s = r_LO (s) + r_NLO (s) with
r_LO (0) = 2λ a_1 , r_LO (1) = -2λ a_0 .
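For orientation, the LO curves can be generated directly from this correlation. The short Python sketch below uses λ = 0.14 (the central value quoted below) together with approximate physical s-wave scattering lengths (the numerical values of a_0 and a_1 are illustrative inputs, not taken from the fits described below) to evaluate δ_LO (s)(k) from k cot δ = -1/a_s + ½ r_LO (s) k^2.

import numpy as np

a0, a1 = -23.7, 5.4                  # fm: approximate 1S0 and 3S1 scattering lengths
lam = 0.14                           # central value quoted below
a = {0: a0, 1: a1}
r_LO = {0: 2.0 * lam * a1, 1: -2.0 * lam * a0}

def delta_LO(s, k):
    """LO phase shift (degrees) in channel s for momenta k in fm^-1."""
    kcotd = -1.0 / a[s] + 0.5 * r_LO[s] * k**2
    return np.degrees(np.arctan2(k, kcotd))

k = np.linspace(1e-3, 1.5, 50)       # roughly up to 300 MeV in momentum
print(delta_LO(0, k)[:3])            # singlet rises from zero
print(delta_LO(1, k)[:3])            # triplet falls from 180 degrees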
The singlet and triplet phase shifts with the NLO amplitude given by Eq. (<ref>) are plotted in Fig. <ref>.
At LO, there are three parameters given by the two s-wave scattering lengths and λ, which is fixed to
λ=0.14± 0.11, the range of values that exactly encompasses fits of λ to each s-wave channel independently. This spread in λ
corresponds to the shaded gray region of the figures and is a conservative estimate of the LO uncertainty. The NLO curve has been generated
by tuning the r_NLO (s) to give the “physical” effective ranges. A band on the NLO curves can easily be set by folding in the nominally
NNLO effect.
The singlet and triplet phase shifts with the NLO amplitude given by Eq. (<ref>) are plotted in Fig. <ref>. As this treats both effective ranges exactly, it provides an extremely accurate fit to the phase shifts due to the smallness of the shape parameters.
§ CONCLUSION
The s-wave NN S-matrix obtained from the ERE with
scattering length and effective range terms only, has interesting
UV/IR symmetries which are special inversions of the momenta. These
symmetries set the region of applicability of the EFT
descriptions. For instance, while it is common to view the EFT of
large scattering lengths as an expansion about the unitary fixed point
of the RG, it is, strictly speaking, an expansion about the fixed
point of a momentum inversion symmetry. The UV/IR symmetries, while
not symmetries of the EFT action or of the scattering amplitude, are
present in the interaction. For instance, in the EFT of large
scattering lengths, the UV/IR symmetry is manifest in the RG flow of
the contact operator as an inversion symmetry of the RG scale which
interchanges the trivial and unitary RG fixed points and leaves the
beta function invariant <cit.>. When the effective range is also treated at LO in the EFT, the S-matrix has a (distinct) UV/IR symmetry which
effectively determines the LO potential, and constrains the form
of the perturbative NLO corrections.
There are many avenues to pursue with this new EFT. For instance, the softened asymptotic behavior of
the LO potential may resolve the issues of renormalization that arise when range corrections are
added to the integral equations that describe the three-nucleon system at very low energies. In addition, given the improved
convergence of LO in the EFT up to momenta beyond the range of validity of the pionless EFT, the perturbative pion paradigm may be worth revisiting in this scheme.
It may also be the case that the UV/IR
symmetries have interesting consequences for systems of many nucleons near
unitarity.
CHAPTER: QUANTUM SIMULATIONS OF QUANTUM CHROMODYNAMICS IN 1+1 DIMENSIONS
This chapter is associated with Ref. <cit.>:
“Preparations for Quantum Simulations of Quantum Chromodynamics in 1+1 Dimensions: (I) Axial Gauge" by Roland C. Farrell, Ivan A. Chernyshev, Sarah J. M. Powell, Nikita A. Zemlevskiy, Marc Illa and Martin J. Savage.
§ INTRODUCTION
Simulations of the real-time dynamics of out-of-equilibrium, finite-density quantum systems are a major goal of Standard Model (SM) <cit.> physics research and are expected to be computed efficiently <cit.> with ideal quantum computers <cit.>.
For recent reviews, see Refs. <cit.>.
Developing such capabilities would enable precision predictions of particle production and fragmentation in beam-beam collisions at the LHC and RHIC, of the matter-antimatter asymmetry production in the early universe, and of the structure and dynamics of dense matter in supernova and the neutrino flavor dynamics therein.
They would also play a role in better understanding protons and nuclei, particularly their entanglement structures and dynamics, and in exploring exotic strong-interaction phenomena such as color transparency. First steps are being taken toward simulating quantum field theories
(QFTs) using currently available, NISQ-era (Noisy Intermediate Scale Quantum) quantum devices <cit.>, by studying low-dimensional and truncated many-body systems (see for example, Refs. <cit.>).
These studies are permitting first quantum resource estimates to be made for more realistic simulations.
There has already been a number of quantum simulations of latticized 1+1D quantum electrodynamics (QED, the lattice Schwinger model), starting with the pioneering work of Martinez et al. <cit.>.
The Schwinger model shares important features with quantum chromodynamics (QCD), such as charge screening, a non-zero fermion condensate, nontrivial topological charge sectors and a θ-term.
Quantum simulations of the Schwinger model have been performed using quantum computers <cit.>, and
there is significant effort being made to extend this progress to higher dimensional QED <cit.>.
These, of course, build upon far more extensive and detailed classical simulations of this model and analytic solutions of the continuum theory.
There is also a rich portfolio of classical and analytic studies of 1+1D SU(N_c) gauge theories <cit.>, with some seminal papers preparing for quantum simulations <cit.>, with the recent appearance of quantum simulations of a 1-flavor (N_f=1) 1+1D SU(2) lattice gauge theory <cit.>.
An attribute that makes such calculations attractive for early quantum simulations is that the gauge field(s) are uniquely constrained by Gauss's law at each lattice site.
However, this is also a limitation for understanding higher dimensional theories where the gauge field is dynamical. After pioneering theoretical works developing the formalism and also end-to-end simulation protocols nearly a decade ago, it is only recently that first quantum simulations of the dynamics of a few plaquettes of gauge fields have been performed <cit.>.
Due to its essential features,
quantum simulations of the Schwinger model provide benchmarks for QFTs and quantum devices for the foreseeable future.
Moving toward simulations of QCD requires including non-Abelian local gauge symmetry and multiple flavors of dynamical quarks. Low-energy, static and near-static observables in the continuum theory in 1+1D
are well explored analytically and numerically, with remarkable results demonstrated, particularly in the 't Hooft model of large-N_c <cit.>
where the Bethe-Salpeter equation becomes exact. For a detailed discussion of 1+1D U(1) and SU(N_c) gauge theories, see Refs. <cit.>.
Extending such calculations to inelastic scattering to predict, for instance, exclusive processes in high-energy hadronic collisions
is a decadal challenge.
In 3+1D QCD, the last 50 years have seen remarkable progress in using classical high-performance computing to provide robust numerical results using lattice QCD, e.g., Refs. <cit.>,
where the quark and gluon fields are discretized in spacetime. Lattice QCD is providing complementary and synergistic results to those obtained in experimental facilities, moving beyond what is possible with analytic techniques alone.
However, the scope of classical computations, even with beyond-exascale computing platforms <cit.>, is limited by the use of a less fundamental theory (classical) to simulate a more fundamental theory (quantum).
Building upon theoretical progress in identifying
candidate theories for early exploration (e.g., Ref. <cit.>),
quantum simulations of 1+1D non-Abelian gauge theories including matter were recently performed <cit.> for a N_c=2 local gauge symmetry with one flavor of quark, N_f=1.
The Jordan-Wigner (JW) mapping <cit.> was used to define the lattice theory, and
Variational Quantum Eigensolver (VQE) <cit.> quantum circuits were developed and used on IBM's quantum devices <cit.>
to determine the vacuum energy, along with meson and baryon masses.
Further, there have been previous quantum simulations of
1- and 2-plaquette systems in N_c=2,3 Yang-Mills lattice gauge theories <cit.> that did not include quarks.
Simulations of such systems are developing rapidly <cit.> due to algorithmic and hardware advances. In addition, distinct mappings of these theories are being pursued <cit.>.
This chapter focuses on the quantum simulation of 1+1D SU(N_c) lattice gauge theory for arbitrary N_c and N_f.
Calculations are primarily done in A^(a)_x=0 axial (Arnowitt-Fickler) gauge,[For a discussion of Yang-Mills in axial gauge, see, for example, Ref. <cit.>.]
which leads to non-local interactions in order to define the chromo-electric field contributions to the energy density via Gauss's law.
This is in contrast to Weyl gauge, A_t^(a)=0, where contributions remain local.
The resource estimates for asymptotic quantum simulations of the Schwinger model in Weyl gauge have been recently performed <cit.>, and also for Yang-Mills gauge theory based upon the Byrnes-Yamamoto mapping <cit.>.
Here, the focus is on near-term, and hence non-asymptotic, quantum simulations to better assess the resource requirements for quantum simulations of non-Abelian gauge theories with multiple flavors of quarks.
For concreteness, N_f=2 QCD is studied in detail, including the mass decomposition of the low-lying hadrons (the σ- and π-meson, the single baryon and the two-baryon bound state), color edge-states, entanglement structures within the hadrons and quantum circuits for time evolution.
Further, results are presented for the quantum simulation of a N_f=1, single-site system, using IBM's quantum computers <cit.>.
Such quantum simulations will play a critical role in evolving the functionality, protocols and workflows to be used in 3+1D simulations of QCD, including the preparation of scattering states, time evolution and subsequent particle detection.
As a step in this direction, in a companion to the present paper, the results of this work have been applied to the quantum simulation of β-decay of a single baryon in 1+1D QCD <cit.>.
Motivated by the recent successes in co-designing efficient multi-qubit operations in trapped-ion systems <cit.>,
additional multi-qubit or qudit operations are identified,
specific to lattice gauge theories,
that would benefit from being native operations on quantum devices.
§ QCD WITH THREE COLORS AND TWO FLAVORS IN 1+1D
In 3+1D,
the low-lying spectrum of N_f=2 QCD is remarkably rich.
The lightest hadrons are the πs, which are identified as the pseudo-Goldstone bosons associated with the spontaneous breaking of the approximate global SU(2)_L⊗ SU(2)_R chiral symmetry, which becomes exact in the chiral limit where the πs are massless. At slightly higher mass are the broad I=0 spinless
resonance, σ, and the narrow I=0, ω, and I=1, ρ, vector resonances as well as the multi-meson continuum.
The proton and neutron, which are degenerate in the isospin limit and the absence of electromagnetism,
are the lightest baryons, forming
an I=J=1/2 iso-doublet.
The next lightest baryons, which become degenerate with the nucleons in the large-N_c limit (as part of a
large-N_c tower), are the four I=J=3/2 Δ resonances.
The nucleons bind together to form the periodic table of nuclei, the lightest being the deuteron, an I=0, J=1 neutron-proton bound state with a binding energy of ∼ 2.2 MeV, which is to be compared to the mass of the nucleon M_N∼ 940 MeV.
In nature, the low-energy two-nucleon systems have S-wave scattering lengths that are much larger than the range of their interactions, rendering them unnatural. Surprisingly, this unnaturalness persists for a sizable range of light-quark
masses, e.g., Refs. <cit.>.
In addition, this unnaturalness, and the nearby renormalization-group fixed point <cit.>, provides the starting point for a systematic effective field theory expansion about unitarity <cit.>.
Much of this complexity is absent in a theory with only one flavor of quark.
As a first step toward 3+1D QCD simulations of real-time dynamics of nucleons and nuclei, we will focus on preparing to carry out quantum simulations of 1+1D QCD with N_f=2 flavors of quarks. While the isospin structure of the theory is the same as in 3+1D, the lack of spin and orbital angular momentum significantly reduces the richness of the hadronic spectrum and S-matrix.
However, many of the relevant features and processes of 3+1D QCD that are to be addressed by quantum simulation in the future are present in 1+1D QCD.
Therefore, quantum simulations in 1+1D are expected to provide inputs to the development of quantum simulations of QCD.
§.§ Mapping 1+1D QCD onto Qubits
The Hamiltonian describing non-Abelian lattice gauge field theories in arbitrary numbers of spatial dimensions was first given by Kogut and Susskind (KS) in the 1970s <cit.>. For 1+1D QCD with N_f = 2 discretized onto L spatial lattice sites, which are mapped to 2L q, q̄ sites to separately accommodate quarks and antiquarks, the KS lattice Hamiltonian is
H_KS
= ∑_f=u,d[
1/2 a∑_n=0^2L-2 ( ϕ_n^(f)† U_n ϕ_n+1^(f) + h.c. )
+
m_f ∑_n=0^2L-1 (-1)^nϕ_n^(f)†ϕ_n^(f)]
+ a g^2/2∑_n=0^2L-2∑_a=1^8
| E^(a)_n|^2
- μ_B/3∑_f=u,d∑_n=0^2L-1ϕ_n^(f)†ϕ^(f)_n
- μ_I/2∑_n=0^2L-1(ϕ_n^(u)†ϕ^(u)_n - ϕ_n^(d)†ϕ^(d)_n )
.
The masses of the u- and d-quarks are m_u,d,
g is the strong coupling constant at the spatial lattice spacing a,
U_n is the spatial link operator in Weyl gauge
A_t^(a)=0,
ϕ^(u,d)_n are the u- and d-quark field operators which transform in the fundamental representation of SU(3)
and
E^(a)_n is the chromo-electric field associated with the SU(3) generator,
T^a.
For convention, we write, for example, ϕ^(u)_n=(u_n,r, u_n,g, u_n,b)^T to denote the u-quark field(s) at the n^ th site in terms of 3 colors r,g,b.
With an eye toward simulations of dense matter systems, chemical potentials for baryon number, μ_B, and the third component of isospin, μ_I, are included.
For most of the results presented in
this work, the chemical potentials will be set to zero, μ_B=μ_I = 0,
and there will be exact isospin symmetry, m_u=m_d ≡ m.
In Weyl gauge and using the chromo-electric basis of the link operator | R,α,β⟩_n,
the contribution from the energy in the chromo-electric field from each basis state is proportional to the Casimir of the irrep R.[
For an irrep, R, represented by a tensor with p upper indices and q lower indices, T^a_1 ⋯ a_p_b_1 ⋯ b_q,
the Casimir provides
∑_a=1^8| E^(a)_n|^2 | R,α,β⟩_n = 1/3( p^2+q^2+p q + 3 p + 3 q ) | R,α,β⟩_n .
The indices α and β specify the color state in the left (L) and right (R) link Hilbert spaces respectively.
States of a color irrep R are labelled by their total color isospin T, third component of color isospin T^z and color hypercharge Y, i.e., α = (T_L, T^z_L, Y_L) and β = (T_R, T^z_R, Y_R).
]
The fields have been latticized
such that the quarks reside on even-numbered sites, n=0,2,4,6,…, and antiquarks reside on odd-numbered sites, n=1,3,5,….
Open boundary conditions (OBCs) are employed in the spatial direction,
with a vanishing background chromo-electric field.
For simplicity,
the lattice spacing will be set equal to 1.
The KS Hamiltonian in Eq. (<ref>)
is constructed in Weyl gauge.
A unitary transformation can be performed on Eq. (<ref>) to eliminate the gauge links <cit.>, with Gauss's Law
uniquely providing the energy in the chromo-electric field in terms of a non-local sum of products of charges, i.e., the Coulomb energy.
This is equivalent to formulating the system in axial gauge <cit.>, A^(a)_x = 0, from the outset.
The Hamiltonian in Eq. (<ref>), when formulated with A^(a)_x = 0, becomes
H
= ∑_f=u,d[
1/2∑_n=0^2L-2 ( ϕ_n^(f)†ϕ_n+1^(f) + h.c. )
+
m_f ∑_n=0^2L-1 (-1)^nϕ_n^(f)†ϕ_n^(f)]
+ g^2/2∑_n=0^2L-2∑_a=1^8 ( ∑_m ≤ n Q^(a)_m ) ^2
- μ_B/3∑_f=u,d∑_n=0^2L-1ϕ_n^(f)†ϕ^(f)_n
- μ_I/2∑_n=0^2L-1(ϕ_n^(u)†ϕ^(u)_n - ϕ_n^(d)†ϕ^(d)_n )
,
where the color charge operators on a given lattice site are the sum of contributions from the u- and d-quarks,
Q^(a)_m = ϕ^(u) †_m T^a ϕ_m^(u) + ϕ^(d) †_m T^a ϕ_m^(d) .
To define the fields,
boundary conditions with A_0^(a)(x)=0 at spatial infinity and zero background chromo-electric fields are used, with Gauss's law sufficient to determine them at all other points on the lattice,
E^(a)_n = ∑_m≤ n Q^(a)_m .
In this construction, a state is completely specified by the fermionic occupation at each site. This is to be contrasted with the Weyl
gauge construction where both fermionic occupation and the SU(3) multiplet defining the chromo-electric field are required.
There are a number of ways that this system,
with the Hamiltonian given in Eq. (<ref>), could be mapped
onto the register of a quantum computer.
In this work, both a staggered discretization and a JW transformation <cit.> are chosen to map the N_c=3 and N_f=2
quarks to 6 qubits, with ordering d_b, d_g, d_r, u_b, u_g, u_r,
and the antiquarks associated with the same spatial site adjacent with ordering
d̄_b, d̄_g, d̄_r, ū_b, ū_g, ū_r.
This is illustrated in Fig. <ref> and
requires a total of 12 qubits per spatial lattice site (see App. <ref> for more details).
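The layout can be made explicit with a small Python helper (a sketch; the assignment of c = 0,1,2 to the color labels r,g,b is made here only for illustration):

def qubit_index(n, f, c, n_colors=3, n_flavors=2):
    """Qubit hosting color c, flavor f (0 = u, 1 = d) on staggered site n
    (quarks on even n, antiquarks on odd n)."""
    return n_colors * n_flavors * n + n_colors * f + c

# L = 1: staggered sites n = 0 (quarks) and n = 1 (antiquarks), 12 qubits total.
flavors, colors = "ud", "rgb"
for n in range(2):
    for f in range(2):
        for c in range(3):
            name = ("bar-" if n % 2 else "") + flavors[f] + "_" + colors[c]
            print(qubit_index(n, f, c), name)
# Reading the register from the highest qubit down reproduces the ordering
# d_b d_g d_r u_b u_g u_r quoted above for each staggered site.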
The resulting JW-mapped Hamiltonian is the sum of the following five terms:
H = H_kin + H_m + H_el +
H_μ_B + H_μ_I ,
H_kin = -1/2∑_n=0^2L-2∑_f=0^1∑_c=0^2[ σ^+_6n+3f+c ( ⊗_i=1^5σ^z_6n+3f+c+i )σ^-_6(n+1)+3f+c +h.c.] ,
H_m = 1/2∑_n=0^2L-1∑_f=0^1∑_c=0^2 m_f[ (-1)^nσ_6n + 3f + c^z + 1] ,
H_el = g^2/2∑_n=0^2L-2(2L-1-n)( ∑_f=0^1 Q_n,f^(a) Q_n,f^(a) + 2 Q_n,0^(a) Q_n,1^(a))
+ g^2 ∑_n=0^2L-3∑_m=n+1^2L-2(2L-1-m) ∑_f=0^1 ∑_f'=0^1 Q_n,f^(a) Q_m,f'^(a) ,
H_μ_B = -μ_B/6∑_n=0^2L-1∑_f=0^1∑_c=0^2σ_6n + 3f + c^z ,
H_μ_I = -μ_I/4∑_n=0^2L-1∑_f=0^1∑_c=0^2 (-1)^fσ_6n + 3f + c^z ,
where now repeated adjoint color indices, (a), are summed over,
the flavor indices, f=0,1, correspond to u- and d-quark flavors and σ^± = (σ^x ± i σ^y)/2.
Products of charges are given in terms of spin operators as
Q_n,f^(a) Q_n,f^(a) = 1/3(3 - σ^z_6n+3fσ^z_6n+3f+1 - σ^z_6n+3fσ^z_6n+3f+2 - σ^z_6n+3f+1σ^z_6n+3f+2) ,
Q_n,f^(a) Q_m,f'^(a) = 1/4 [2 (σ^+_6n+3fσ^-_6n+3f+1σ^-_6m+3f'σ^+_6m+3f'+1
+ σ^+_6n+3fσ^z_6n+3f+1σ^-_6n+3f+2σ^-_6m+3f'σ^z_6m+3f'+1σ^+_6m+3f'+2
+σ^+_6n+3f+1σ^-_6n+3f+2σ^-_6m+3f'+1σ^+_6m+3f'+2 + h.c. )
+ 1/6∑_c=0^2∑_c'=0^2( 3 δ_c c' - 1 ) σ^z_6n+3f+cσ^z_6m+3f'+c' ] .
A constant has been added to H_m to ensure that all basis states contribute positive mass. The Hamiltonian for SU(N_c) gauge theory with N_f flavors in the fundamental representation is presented in Sec. <ref>.
Note that choosing A^(a)_x = 0 gauge and enforcing Gauss's law has resulted in all-to-all interactions, the double lattice sum in H_el.
For any finite lattice system, there are color non-singlet states in the spectrum, which are unphysical and have infinite energy in the continuum and infinite-volume limits.
For a large but finite system, OBCs can also support finite-energy color non-singlet states which are localized to the end of the lattice (color edge-states).[Low-energy edge-states that have global charge in a confining theory can also be found in the simpler setting of the Schwinger model.
Through exact and approximate tensor methods, we have verified that these states exist on lattices up to length L=13, and they are expected to persist for larger L.]
The existence of such states in the spectrum is independent of the choice of gauge or fermion mapping.
The naive ways to systematically examine basis states and preclude such configurations is found to be impractical due to the non-Abelian nature of the gauge charges
and the resulting entanglement between states required for color neutrality.
A practical way to deal with this problem is to add a term to the Hamiltonian that
raises the energy of color non-singlet states.
This can be accomplished by including the energy density in the chromo-electric field beyond the end of the lattice with a large coefficient h.
This effectively adds the energy density in a finite chromo-electric field over a large spatial extent beyond the end of the lattice.
In the limit h→∞, only states with a vanishing chromo-electric field beyond the end of the lattice remain at finite energy, rendering the system within the lattice to be a color singlet.
This new term in the Hamiltonian is
H_ 1 = h^2/2∑_n=0^2L-1( ∑_f=0^1 Q_n,f^(a) Q_n,f^(a) +
2 Q_n,0^(a) Q_n,1^(a)) + h^2 ∑_n=0^2L-2∑_m=n+1^2L-1∑_f=0^1 ∑_f'=0^1 Q_n,f^(a) Q_m,f'^(a) ,
which makes a vanishing contribution when the sum of charges over the whole lattice is zero; otherwise, it makes a contribution ∼ h^2.
§.§ Spectra for L=1,2 Spatial Sites
The spectra and wavefunctions of systems with a small number of lattice sites can be determined by diagonalization of the Hamiltonian.
In terms of spin operators, the N_f=2 Hamiltonian in Eq. (<ref>) decomposes into sums of tensor products of Pauli matrices. The tensor product factorization can be exploited to perform an exact diagonalization relatively efficiently.
This is accomplished by first constructing a basis
by projecting onto states with specific quantum numbers, and then building the Hamiltonian in that subspace.
There are four mutually commuting symmetry generators that allow states to be labelled by (r,g,b,I_3): redness, greenness, blueness and the third component of isospin.
In the computational (occupation) basis, states are represented by bit strings of 0s and 1s. For example, the L=1 state with no occupation is |000000111111⟩.[Qubits are read from right to left, e.g., |q_11 q_10 … q_1 q_0⟩. Spin up is |0⟩ and spin down is |1⟩.] Projecting onto eigenstates of (r,g,b,I_3) amounts to fixing the total number of 1s in a substring of a state.
The Hamiltonian is formed by evaluating matrix elements of Pauli strings between states in the basis, and only involves 2× 2 matrix multiplication.
The Hamiltonian matrix is found to be sparse, as expected, and the low energy eigenvalues and eigenstates can be found straightforwardly.
As the dimension of the Hamiltonian grows exponentially with the spatial extent of the lattice, this method becomes intractable for large system sizes, as is well known.
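A minimal sketch of this workflow is shown below: a Hamiltonian specified as a list of (coefficient, Pauli string) pairs is assembled as a sparse matrix and its lowest eigenvalues are obtained with a Lanczos solver. The strings used here are placeholders; for the N_f = 2 theory they would be the terms of the Hamiltonian above, restricted to a fixed (r,g,b,I_3) block.

import numpy as np
import scipy.sparse as sparse
from scipy.sparse.linalg import eigsh

PAULI = {"I": sparse.identity(2, format="csr", dtype=complex),
         "X": sparse.csr_matrix(np.array([[0, 1], [1, 0]], dtype=complex)),
         "Y": sparse.csr_matrix(np.array([[0, -1j], [1j, 0]], dtype=complex)),
         "Z": sparse.csr_matrix(np.array([[1, 0], [0, -1]], dtype=complex))}

def pauli_string(s):
    """Tensor product for a string such as 'XZZY' (leftmost = highest qubit)."""
    op = PAULI[s[0]]
    for ch in s[1:]:
        op = sparse.kron(op, PAULI[ch], format="csr")
    return op

def build_hamiltonian(terms, n_qubits):
    dim = 2**n_qubits
    H = sparse.csr_matrix((dim, dim), dtype=complex)
    for coeff, s in terms:
        H = H + coeff * pauli_string(s)
    return H

# toy 4-qubit example: a hopping-like term plus staggered-mass-like terms
terms = [(-0.25, "XZZX"), (-0.25, "YZZY"), (0.5, "ZIII"), (-0.5, "IIIZ")]
H = build_hamiltonian(terms, 4)
print(np.sort(eigsh(H, k=3, which="SA", return_eigenvectors=False)))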
§.§.§ Exact Diagonalizations, Color Edge-States and Mass Decompositions of the Hadrons
For small enough systems, an exact diagonalization of the Hamiltonian matrix in the previously described basis can be performed.
Without chiral symmetry and its spontaneous breaking, the energy spectrum in 1+1D does not contain a massless isovector state (corresponding to the QCD pion) in the limit of vanishing quark masses.
In the absence of
chemical potentials for baryon number, μ_B=0, or isospin, μ_I=0,
the vacuum, |Ω⟩, has B=0
(baryon number zero) and I=0 (zero total isospin).
The I=0
σ-meson is the lightest meson,
while the I=1 π-meson is the next lightest.
The lowest-lying eigenstates in the
B=0 spectra for L=1,2
(obtained from
exact diagonalization of the Hamiltonian)
are given in Table <ref>.
The masses are defined by their energy gap to the vacuum,
and all results in this section are for m_u=m_d=m=1.
By examining the vacuum energy density
E_Ω/L, it is clear that, as expected, this number of lattice sites is insufficient to fully contain hadronic correlation lengths.
While Table <ref> shows the energies of color-singlet states, there are also non-singlet states in the spectra with similar masses,
which become increasingly localized near the end of the lattice, as discussed in the previous section.
It is informative to examine the spectrum of the L=1 system as both g and h are slowly increased and, in particular, take note of the relevant symmetries. For g=h=0,
with contributions from only the hopping and mass terms,
the system exhibits a global SU(12) symmetry
where the spectrum is that of free quasi-particles; see App. <ref>.
The enhanced global symmetry at this special point restricts the structure of the spectrum to the 1 and 12 of SU(12) as well as the antisymmetric combinations of fundamental irreps, 66, 220, ….
For g>0, these SU(12) irreps split into irreps of color SU(3)_c and flavor SU(2)_f.
The 12 corresponds to single quark (q) or antiquark (q̄) excitations
(with fractional baryon number), and splits into 3_c⊗ 2_f for quarks and 3̄_c⊗ 2_f for antiquarks. In the absence of OBCs, these states would remain degenerate, but the boundary condition of vanishing background
chromo-electric field is not invariant under
q↔q̄ and the quarks get pushed to higher mass. As there is no chromo-electric energy associated with exciting an
antiquark at the end of the lattice in this mapping, the 3̄_c⊗ 2_f states remain low in the spectrum until h≫0.
The 66 corresponds to two-particle excitations, and contains all combinations of qq̄, qq and
q̄q̄ excitations.
The mixed color symmetry (i.e., neither symmetric nor antisymmetric) of qq̄ excitations allows for states with
1_c⊗ 1_f
⊕ 1_c⊗ 3_f
⊕ 8_c⊗ 1_f
⊕ 8_c⊗ 3_f,
while the qq excitations with definite color symmetry allow for
6_c⊗ 1_f
⊕ 3̄_c⊗ 3_f
and
q̄q̄ excitations allow for
6̄_c⊗ 1_f
⊕ 3_c⊗ 3_f,
saturating the 66 states in the multiplet.
When g>0, these different configurations split in energy, and when
h≫0, only color-singlet states are left in the low-lying spectrum. Figure <ref> shows the evolution of the spectrum as
g and h increase.
The increase in mass of non-singlet color states with h is proportional to the Casimir of the SU(3)_c representation, which is evident in Fig. <ref> where, for example, the increases in the masses of the 3_c and 3̄_c states between h^2 = 0 and h^2=0.64 are the same.
The antiquark states are particularly interesting as they correspond to edge states that are not “penalized" in energy by the chromo-electric field when h=0.
These states have an approximate
SU(6) symmetry where the 6 antiquarks transform in the fundamental.
This is evident in the spectrum shown in
Fig. <ref>
by the presence of a 3̄_c ⊗ 2_f
and nearly degenerate
6̄_c⊗ 1_f
and
3_c⊗ 3_f
which are identified as states of a
15
(an antisymmetric irrep of SU(6))
that do not increase in mass as g increases.
This edge-state SU(6) symmetry is not exact
due to interactions from the hopping term that couple the edge q̄s to the rest of the lattice.
These colored edge states are artifacts of OBCs and will persist in the low-lying spectrum for larger lattices.
Figures <ref> and <ref>
reveal the near-degeneracy of the σ- and π-mesons throughout the range of couplings g and h, suggesting another approximate symmetry, which
can be understood in the small and large g limits.
For small g^2, the effect of H_el = g^2/2(Q_0,u^(a) + Q_0,d^(a))^2 on the the SU(12)-symmetric spectrum can be obtained through perturbation theory.
To first order in g^2, the shift in the energy of any state is equal to the expectation value of H_el.
The σ- and π-meson states are both quark-antiquark states in the 66 irrep of SU(12), and therefore, both have a 3_c color charge on the quark site and receive the same mass shift.[This also explains why
there are three other states nearly degenerate with the mesons, as seen in Fig. <ref>.
Each of these states carries a 3_c or 3̄_c color charge on the quark site and consequently has the same energy at first order in perturbation theory.
]
For large g^2, the only finite-energy excitations of the trivial vacuum (all sites unoccupied) are bare baryons and antibaryons,
and the spectrum is one of non-interacting color-singlet baryons.
Each quark (antiquark) site hosts 4 distinct baryons (antibaryons) in correspondence with the multiplicity of the I=3/2 irrep.
As a result, the σ, π, I=2,3 mesons, deuteron and antideuteron are all degenerate.
The σ- and π-meson mass splitting is shown in Fig. <ref> and has a clear maxima for g ∼ 2.4.
Intriguingly, this corresponds to the maximum of the linear entropy between quark and antiquarks (as discussed in Sec. <ref>),
and suggests a connection between symmetry, via degeneracies in the spectrum, and entanglement.
This shares similarities with the correspondence between Wigner's SU(4) spin-flavor
symmetry <cit.>,
which becomes manifest in low-energy nuclear forces in the large-N_c limit of QCD <cit.>,
and entanglement suppression in
nucleon-nucleon scattering found in Ref. <cit.> (see also Refs. <cit.>).
Color singlet baryons are also present in this system, formed by contracting the color indices of three quarks with a Levi-Civita tensor (and antibaryons are formed from three antiquarks).
A baryon is composed of three I=1/2 quarks in the (symmetric) I=3/2 configuration and in a (antisymmetric) color singlet.
It will be referred to as the Δ, highlighting its similarity to the Δ-resonance in 3+1D QCD.
Interestingly, there is an isoscalar ΔΔ bound state, which will be referred to as the deuteron.
The existence of a deuteron makes this system valuable from the standpoint of quantum simulations of the formation of nuclei in a model of reduced complexity.
The mass of the Δ, M_Δ, and the binding energy of the deuteron, B_ΔΔ = 2 M_Δ - M_ΔΔ, are shown in Table <ref> for a range of strong couplings.
Understanding and quantifying the structure of the lowest-lying hadrons is a priority for nuclear physics research <cit.>.
Great progress has been made, experimentally, analytically and computationally,
in dissecting the mass and angular momentum of the proton (see, for example, Refs. <cit.>).
This provides, in part, the foundation for anticipated precision studies at the future electron-ion collider (EIC) <cit.> at Brookhaven National Laboratory.
Decompositions of the vacuum energy and the masses of the σ, π and Δ are shown in Fig. <ref> where, for example, the chromo-electric contribution to the
σ is ⟨ H_el⟩ = ⟨σ| H_el|σ⟩ - ⟨Ω| H_el|Ω⟩.
These calculations demonstrate the potential of future quantum simulations in being able to quantify decompositions of properties of the nucleon,
including in dense matter.
For the baryon states, it is H_el that is responsible for the system coalescing into localized color singlets in order to minimize the energy in the chromo-electric field (between spatial sites).
The deuteron binding energy is shown in the left panel of Fig. <ref> as a function of g.
While the deuteron is unbound at g=0 for obvious reasons, it is also unbound at large g because the spectrum is that of non-interacting color-singlet (anti)baryons.
Therefore, the non-trivial aspects of deuteron binding for these systems is for intermediate
values of g. The decomposition of B_ΔΔ is shown in the right panel of Fig. <ref>, where, for example, the chromo-electric contribution is
⟨ H_el⟩ = 2 ( ⟨Δ| H_el|Δ⟩ - ⟨Ω| H_el|Ω⟩ ) - (⟨ΔΔ| H_el|ΔΔ⟩ - ⟨Ω| H_el|Ω⟩ ) .
The largest contribution to the binding energy is ⟨ H_kin⟩, which is the term responsible for creating q q̄ pairs.
This suggests that meson-exchange may play a significant role in the attraction between baryons,
as is the case in 3+1D QCD, but larger systems will need to be studied before
definitive conclusions can be drawn.
One consequence of the lightest baryon
being I=3/2 is that, for L=1,
the I_3=+3/2 state completely occupies the up-quark sites.
Thus the system factorizes into an inert up-quark sector and a dynamic down-quark sector, and the absolute energy of the lowest-lying baryon state can be written as
E_Δ =
M_Δ + E_Ω^2 f =
3m + E_Ω^1 f,
where
E_Ω^1,2 f is the
vacuum energy of the
N_f=1,2 flavor systems.
Analogously, the deuteron absolute energy is
E_ΔΔ=6m, and therefore the
deuteron binding energy can be written as
B_ΔΔ= 2(3m+E_Ω^1 f-E_Ω^2 f) - (6m-E_Ω^2 f)
= 2E_Ω^1 f-E_Ω^2 f.
This is quite a remarkable result because, in this system, the deuteron binding energy depends only on the difference between
the N_f=1 and N_f=2 vacuum energies, being bound when 2 E_Ω^1 f - E_Ω^2 f > 0.
As has been discussed previously, it is the
q q̄ contribution from this difference that dominates the binding.
§.§.§ The Low-Lying Spectrum Using D-Wave's Quantum Annealers
The low-lying spectrum of this system can also be determined through annealing by using
D-Wave's quantum annealer (QA) Advantage <cit.>,
a device with 5627 superconducting flux qubits, with a 15-way qubit connectivity via Josephson junctions rf-SQUID couplers <cit.>.
Not only did this enable the determination of the energies of low-lying states, but it also assessed the ability of this quantum device to isolate nearly degenerate states.
The time-dependent Hamiltonian
of the device, which our systems are to be mapped,
are of the form of an Ising model, with the freedom to specify the single- and two-qubit coefficients. Alternatively, the Ising model can be rewritten in a quadratic unconstrained binary
optimization (QUBO) form, f_Q(x)=∑_ij Q_ijx_i x_j,
where x_i are binary variables
and Q_ij is a QUBO matrix, which contains the coefficients of single-qubit (i=j) and two-qubit (i≠ j) terms.
The QUBO matrix is the input that is submitted to
Advantage, with the output being a bit-string that minimizes f_Q.
Due to the qubit connectivity of Advantage,
multiple physical qubits are chained together to recover the required connectivity, limiting the system size that can be annealed.
The QA Advantage was used to
determine the lowest three states in the B=0 sector of the L=1 system, with m=g=1 and h=2, following techniques presented in Ref. <cit.>.
In that work, the objective function to be minimized is defined as F=⟨Ψ|H̃|Ψ⟩ -η⟨Ψ| Ψ⟩ <cit.>, where η is a parameter that is included to avoid the null solution, and its optimal value
can be iteratively tuned to be as close to the ground-state energy as possible.
The wavefunction is expanded in a finite dimensional orthonormal basis ψ_α, |Ψ⟩ =∑^n_s_α a_α |ψ_α⟩, which in this case reduces the dimensionality of H to 88, defining H̃, thus making it feasible to study with Advantage.
The procedure to write the objective function in a QUBO form can be found in Ref. <cit.> (and briefly described in App. <ref>), where the coefficients a_α are
digitized using K binary variables <cit.>, and the adaptive QA eigenvalue solver is implemented by using the zooming method <cit.>. To reduce the uncertainty in the resulting energy and wavefunction, due to the noisy nature of this QA, the iterative procedure
described in Ref. <cit.> was used, where the (low-precision) solution obtained from the machine after several zooming steps constituted the starting point of a new anneal. This led to a reduction of the uncertainty by an order of magnitude (while effectively only doubling the resources used).
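The structure of the resulting QUBO can be sketched as follows (a simplified digitization, not the exact scheme of the cited references): each coefficient a_α is written as a center value plus K binary bits times a scale δ, so that F = a^T(H̃ - η 1)a becomes quadratic in the bits; δ and the centers would then be updated at each zooming step.

import numpy as np

def qubo_matrix(H_tilde, eta, centers, delta, K):
    """QUBO matrix for F = a^T (H_tilde - eta*1) a with a = centers + W q."""
    A = H_tilde - eta * np.eye(len(H_tilde))
    n = len(H_tilde)
    w = delta * 2.0 ** np.arange(K)          # bit weights for each coefficient
    Q = np.zeros((n * K, n * K))
    for al in range(n):
        for be in range(n):
            for i in range(K):
                for j in range(K):
                    Q[al * K + i, be * K + j] += A[al, be] * w[i] * w[j]
    lin = 2.0 * A @ centers                  # linear terms from the centers
    for al in range(n):
        for i in range(K):
            Q[al * K + i, al * K + i] += lin[al] * w[i]   # uses q^2 = q
    return Q

# example: 3-dimensional toy H_tilde, K = 2 bits per coefficient
H_tilde = np.diag([0.0, 1.0, 2.0]); H_tilde[0, 1] = H_tilde[1, 0] = -0.3
Q = qubo_matrix(H_tilde, eta=-0.1, centers=np.zeros(3), delta=0.5, K=2)
print(Q.shape)   # the matrix submitted to the annealer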
Results obtained using Advantage are shown in Fig. <ref>, where the three panels show the convergence of the energy of the
vacuum state (left), the mass of the σ-meson (center) and the mass of the π-meson (right) as a function of zoom steps, as well as comparisons to the exact wavefunctions. The bands in the plot correspond to 68% confidence intervals determined from 20 independent runs of the annealing workflow, where each corresponds to 10^3 anneals with an annealing time of t_A=20 μs, and the points correspond to the lowest energy found by the QA. The parameter K in the digitization of a_α is set to K=2. The parameter η is first set close enough to the corresponding energy (e.g., η=0 for the ground-state), and for the subsequent iterative steps it is set to the lowest energy found in the previous step. The first two excited states are nearly degenerate, and after projecting out the ground state, Advantage finds both states in the first step of the iterative procedure (as shown by the yellow lines in the π wavefunction of Fig. <ref>).
However, after one iterative step, the QA converges to one of the two excited states.
It first finds the second excited state (the π-meson), and once this state is known with sufficient precision, it can be projected out to study the other excited state.
The converged values for the energies and masses of these states are shown in Table <ref>,
along with the exact results. The uncertainties in these values should be understood as uncertainties on an upper bound of the energy (as they result from a variational calculation). For more details see App. <ref>.
§.§.§ Quark-Antiquark Entanglement in the Spectra via Exact Diagonalization
With h ≫ g, the eigenstates of the Hamiltonian are color singlets and irreps of isospin.
As these are global quantum numbers (summed over the lattice) the eigenstates are generically entangled among the color and isospin components at each lattice site. With the hope of gaining insight into 3+1D QCD, aspects of the entanglement structure of the L=1 wavefunctions are explored via exact methods.
An interesting measure of entanglement for these systems is the linear entropy between quarks and antiquarks, defined as
S_L = 1 - Tr[ρ_q^2]
,
where ρ_q = Tr_q̄[ρ] and ρ is a density matrix of the system. Shown in Fig. <ref> is the linear entropy between quarks and antiquarks in
|Ω⟩, |σ⟩, |π_I_3=1⟩ and |Δ_I_3=3/2⟩ as a function of g.
The deuteron is not shown as there is only one basis state contributing for L=1.
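Operationally, with the quark and antiquark registers grouped into the two tensor factors of the state vector, S_L is a few lines of Python (a sketch; which factor varies slowest depends on the qubit-ordering convention):

import numpy as np

def linear_entropy(psi, dim_q, dim_qbar):
    """S_L = 1 - Tr[rho_q^2] with rho_q = Tr_qbar[|psi><psi|].
    psi has length dim_q * dim_qbar with the quark factor varying slowest."""
    psi = psi.reshape(dim_q, dim_qbar)
    rho_q = psi @ psi.conj().T               # partial trace over antiquarks
    return 1.0 - float(np.real(np.trace(rho_q @ rho_q)))

# L = 1: six quark and six antiquark qubits
psi = np.random.randn(2**12) + 1j * np.random.randn(2**12)
psi /= np.linalg.norm(psi)
print(linear_entropy(psi, 2**6, 2**6))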
The scaling of the linear entropy in the vacuum and baryon with g can be understood as follows.
As g increases, color singlets on each site have the least energy density.
The vacuum becomes dominated by the unoccupied state and the Δ becomes dominated by the “bare" Δ with all three quarks located on one site in a color singlet.
As the entropy generically scales with the number of available states,
the vacuum and baryon have decreasing entropy for increasing g.
The situation for the π and σ is somewhat more interesting.
For small g, their wavefunctions are dominated by q q̄ excitations on top of the trivial vacuum, which minimizes the contributions from the mass term.
However, color singlets are preferred as g increases,
and the mesons become primarily composed of baryon-antibaryon (B B̄) excitations.
There are more q q̄ states than
B B̄ states with a given I_3,
and therefore there is more entropy at small g than large g.
The peak at intermediate g occurs at the crossover between these two regimes where the meson has a sizable contribution from both q q̄ and B B̄ excitations.
To illustrate this,
the expectation value of total
quark occupation (number of quarks plus the number of antiquarks) is shown in Fig. <ref>.
For small g, the occupation is near 2 since the state is mostly composed of q q̄,
while for large g it approaches 6 as the state mostly consists of B B̄.
This is a transition from the excitations being
“color-flux tubes" between quark and antiquark of the same color to bound states of color-singlet baryons and antibaryons.
§.§ Digital Quantum Circuits
The Hamiltonian for 1+1D QCD with arbitrary N_c and N_f, when written in terms of spin operators, can be naturally mapped onto a quantum device with qubit registers. In this section the time evolution for systems with N_c = 3 and N_f=2 is developed.
§.§.§ Time Evolution
To perform time evolution on a quantum computer, the operator U(t) = exp(-i H t) is reproduced by a sequence of gates applied to the qubit register.
Generally, a Hamiltonian cannot be directly mapped to such a sequence efficiently, but each of the elements in a Trotter decomposition can, with systematically reducible errors.
Typically, the Hamiltonian is divided into Pauli strings whose unitary evolution can be implemented with quantum circuits that are readily constructed.
For a Trotter step of size t, the circuit that implements the time evolution from the mass term, U_m(t) = exp(- i H_m t), is shown in Fig. <ref>.
The staggered mass leads to quarks being rotated by a positive angle and antiquarks being rotated by a negative angle.
Only single qubit rotations about the z-axis are required for its implementation, with
R_Z(θ) = exp(-i θ Z/2).
The circuit that implements
the evolution from the baryon chemical potential, μ_B,
U_μ_B(t) = exp(- i H_μ_B t),
is similar to U_m(t) with
m →μ_B/3, and with both quarks and antiquarks rotated by the same angle.
Similarly, the circuit that implements the evolution from
the isospin chemical potential, μ_I,
U_μ_I(t) = exp(- i H_μ_I t),
is similar to U_m(t) with m →μ_I/2 and up (down) quarks rotated by a negative (positive) angle.
The kinetic piece of the Hamiltonian, Eq. (<ref>), is composed of hopping terms of the form
H_kin ∼ σ^+ ZZZZZ σ^- + h.c. .
The σ^+ and σ^- operators enable quarks and antiquarks to move between sites with the same color and flavor
(create q^α_i q̄_α^i pairs)
and the string of Z operators
incorporates the signs from Pauli statistics.
The circuits for Trotterizing these terms are
based on circuits in Ref. <cit.>. We introduce an ancilla to
accumulate the parity of the JW string of Zs.
This provides a mechanism for the
different hopping terms to re-use
previously computed
(partial-)parity.[An ancilla was used similarly in Ref. <cit.>.]
The circuit for the first two hopping terms is shown in Fig. <ref>.
The first circuit operations initialize
the ancilla to store the parity of the string of Zs between the first and last qubit of the string. Next, the system is evolved
by the exponential of the hopping term. After the exponential of each hopping term, the ancilla is modified for the parity of the subsequent hopping term
(the CNOTs highlighted in blue).
Note that the hopping of quarks, or antiquarks, of different flavors and colors commute, and the Trotter decomposition is exact (without Trotterization errors) over a single spatial site.
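For orientation, a single hopping term can also be Trotterized with the generic CNOT-ladder construction sketched below in Qiskit (written for a term between qubits a and b; this is a textbook-style alternative, not the ancilla-based circuit of Fig. <ref>, which re-uses the parity between terms and is more economical).

from qiskit import QuantumCircuit

def pauli_rotation(qc, qubits, bases, angle):
    """Append exp(-i angle/2 * P) for the Pauli string P given by `bases`."""
    for q, b in zip(qubits, bases):
        if b == "X":
            qc.h(q)
        elif b == "Y":
            qc.sdg(q); qc.h(q)
    for q1, q2 in zip(qubits[:-1], qubits[1:]):
        qc.cx(q1, q2)
    qc.rz(angle, qubits[-1])
    for q1, q2 in reversed(list(zip(qubits[:-1], qubits[1:]))):
        qc.cx(q1, q2)
    for q, b in zip(qubits, bases):
        if b == "X":
            qc.h(q)
        elif b == "Y":
            qc.h(q); qc.s(q)

def hopping_step(qc, a, b, dt):
    """One Trotter step of -1/2 (sigma^+_a Z...Z sigma^-_b + h.c.)
    = -1/4 (X_a Z...Z X_b + Y_a Z...Z Y_b)."""
    qubits = list(range(a, b + 1))
    for xy in ("X", "Y"):
        bases = [xy] + ["Z"] * (b - a - 1) + [xy]
        pauli_rotation(qc, qubits, bases, angle=-0.5 * dt)

qc = QuantumCircuit(7)
hopping_step(qc, 0, 6, dt=0.1)   # e.g. a u_r quark hopping between adjacent staggered sites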
Implementation of the time-evolution
induced by the energy density in the
chromo-electric field, H_el,
given in Eq. (<ref>),
is the most challenging due to its
inherent non-locality in axial gauge.
There are two distinct types of contributions: One is from same-site interactions and the other from interactions between different sites.
For the same-site interactions, the operator is the product of charges
Q_n,f^(a) Q_n,f^(a), which contains only ZZ operators, and is digitized with the standard two CNOT circuit.[Using the native ZX gate on IBM's devices allows this to be done with a single two-qubit entangling gate <cit.>.]
The Q_n,f^(a) Q_m,f'^(a) operators contain 4-qubit interactions of the form
(σ^+ σ^- σ^- σ^+ + h.c.)
and
6-qubit interactions of the form
(σ^+ Z σ^- σ^- Z σ^+ + h.c.),
in addition to ZZ contributions.
The manipulations required to implement the 6-qubit operators parallel those required for the 4-qubit operators, and here only the latter is discussed in detail.
These operators can be decomposed into eight mutually commuting terms,
σ^+ σ^- σ^- σ^+ + h.c. = 1/8( XXXX + YYXX + YXYX - YXXY - XYYX + XYXY
+ XXYY + YYYY) .
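This decomposition is straightforward to verify numerically, for example:

import numpy as np
from functools import reduce

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
sig_p, sig_m = (X + 1j * Y) / 2, (X - 1j * Y) / 2
kron = lambda ops: reduce(np.kron, ops)

lhs = kron([sig_p, sig_m, sig_m, sig_p])
lhs = lhs + lhs.conj().T
rhs = (kron([X, X, X, X]) + kron([Y, Y, X, X]) + kron([Y, X, Y, X])
       - kron([Y, X, X, Y]) - kron([X, Y, Y, X]) + kron([X, Y, X, Y])
       + kron([X, X, Y, Y]) + kron([Y, Y, Y, Y])) / 8
print(np.allclose(lhs, rhs))   # True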
The strategy for identifying the corresponding time evolution circuit is to first apply a unitary that diagonalizes every term, apply the diagonal rotations, and finally, act with the inverse unitary to return to the computational basis.
By only applying diagonal rotations,
many of the CNOTs can be arranged to cancel.
Each of the eight Pauli strings
in Eq. (<ref>)
takes a state in the computational basis to the corresponding bit-flipped state (up to a phase).
This suggests that the desired eigenbasis
pairs together states with their bit-flipped counterpart, which is an inherent property of the GHZ basis <cit.>.
In fact, any permutation of the GHZ state-preparation circuit diagonalizes the interaction.
The two that will be used,
denoted by G and G̃,
are shown in Fig. <ref>.
In the diagonal bases, the Pauli strings
in Eq. (<ref>) become
G^† (σ^+ σ^- σ^- σ^+ + h.c.) G = 1/8 ( IIZI - ZIZZ - ZZZZ + ZIZI + IZZI - IIZZ
- IZZZ + ZZZI ) ,
G̃^† (σ^+ σ^- σ^- σ^+ + h.c.) G̃ = 1/8 ( IIIZ - IZZZ - IIZZ + ZIIZ + IZIZ - ZZZZ
- ZIZZ + ZZIZ) .
Another simplification comes from the fact that
ZZ in the computational basis becomes
a single Z in a GHZ basis if the GHZ state-preparation circuit has a CNOT connecting the two Zs.
For the case at hand, this implies
G^† (IZZI + IZIZ + ZIIZ) G = IZII + IIIZ + ZIII ,
G̃^† (ZIZI + IZZI + ZIIZ) G̃ = IIZI + IZII + ZIII .
As a consequence, all nine ZZ terms in Q_n,f^(a) Q_m,f'^(a)
become single Zs in a GHZ basis, thus requiring no additional CNOT gates to implement.
Central elements of the circuits
required to implement time evolution of the chromo-electric energy density
are shown in Fig. <ref>,
which extends the circuit presented in Fig. 4 of Ref. <cit.> to non-Abelian gauge theories.
More details on these circuits can be found in App. <ref>.
§.§.§ Trotterization, Color Symmetry and Color Twirling
After fixing the gauge, the Hamiltonian is no longer manifestly invariant under local SU(3) gauge transformations.
However, as is well known, observables of the theory are correctly computed from such a gauge-fixed Hamiltonian, which possesses a remnant global SU(3) symmetry.
This section addresses the extent to which this symmetry is preserved by Trotterization of the time-evolution operator.
The focus will be on
the N_f=1 theory as including additional flavors
does not introduce new complications.
Trotterization of the mass and kinetic parts of the Hamiltonian,
while having non-zero commutators between some terms, preserves the global SU(3) symmetry.
The time evolution of Q_n^(a) Q_n^(a)
can be implemented in a unitary operator without Trotter errors, and, therefore, does not break SU(3).
On the other hand,
the time evolution induced by
Q_n^(a) Q_m^(a)
is implemented by the operator being divided into
four terms:
(Q^(1)_n Q^(1)_m + Q^(2)_n Q^(2)_m), (Q^(4)_n Q^(4)_m + Q^(5)_n Q^(5)_m), (Q^(6)_n Q^(6)_m + Q^(7)_n Q^(7)_m) and (Q^(3)_n Q^(3)_m + Q^(8)_n Q^(8)_m). In order for global SU(3) to be unbroken,
the sum over the entire lattice
of each of the 8 gauge charges must be unchanged under time evolution.
Therefore,
the object of interest is the commutator
𝒞 = [ ∑_n=0^2L-1Q^(a)_n , Q^(b̃)_m· Q^(b̃)_l ] ,
where b̃ is summed over the elements of one of the pairs in {(1,2), (4,5), (6,7), (3,8)}.
It is found that this commutator only vanishes if a=3 or a=8, or if b̃ is summed over all 8 values (as is the case for the exact time evolution operator).
Therefore, Trotter time evolution does not preserve the global off-diagonal SU(3) charges and, for example, color singlets can evolve into non-color singlets.
Equivalently, the Trotterized time evolution operator is not in the trivial representation of SU(3).
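This statement is simple to check explicitly at the level of the color algebra. In the sketch below (Gell-Mann matrices, with two charges represented on a 3⊗3 color space as an illustration rather than the full qubit construction), the commutator of the total charge with each Trotterized pair vanishes only for a = 3, 8, while it vanishes for every a once the pair label is summed over all eight values.

import numpy as np

lam = np.zeros((8, 3, 3), dtype=complex)
lam[0][0, 1] = lam[0][1, 0] = 1
lam[1][0, 1] = -1j; lam[1][1, 0] = 1j
lam[2][0, 0] = 1;   lam[2][1, 1] = -1
lam[3][0, 2] = lam[3][2, 0] = 1
lam[4][0, 2] = -1j; lam[4][2, 0] = 1j
lam[5][1, 2] = lam[5][2, 1] = 1
lam[6][1, 2] = -1j; lam[6][2, 1] = 1j
lam[7][0, 0] = lam[7][1, 1] = 1 / np.sqrt(3); lam[7][2, 2] = -2 / np.sqrt(3)
T = lam / 2
eye = np.eye(3)

Q_tot = [np.kron(T[a], eye) + np.kron(eye, T[a]) for a in range(8)]
pairs = [(0, 1), (3, 4), (5, 6), (2, 7)]       # the pairs (1,2),(4,5),(6,7),(3,8)
full = sum(np.kron(T[b], T[b]) for b in range(8))

for a in range(8):
    norms = [np.linalg.norm(Q_tot[a] @ H - H @ Q_tot[a])
             for H in (sum(np.kron(T[b], T[b]) for b in p) for p in pairs)]
    norms.append(np.linalg.norm(Q_tot[a] @ full - full @ Q_tot[a]))
    print("a =", a + 1, [round(x, 6) for x in norms])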
To understand this point in more detail,
consider the transformation of
(T^a)^i_j (T^a)^k_l for any given a.
Because of the symmetry of this product of operators, each transforming as an 8, the product must decompose into 1⊕ 8⊕ 27,
where the elements of each of the irreps can be found from
(T^a)^i_j (T^a)^k_l =
(Ô_27^a)^ik_jl
-2/5[ δ^i_j (Ô^a_8)^k_l + δ^k_l (Ô^a_8)^i_j ]
+ 3/5[δ^i_l (Ô^a_8)^k_j + δ^k_j (Ô^a_8)^i_l ]
+
1/8( δ^i_l δ^k_j - 1/3δ^i_j δ^k_l ) Ô^a_1
,
where
(Ô^a_27)^ik_jl
= 1/2[ (T^a)^i_j (T^a)^k_l + (T^a)^i_l (T^a)^k_j ]
-
1/10[
δ^i_j (Ô^a_8)^k_l
+ δ^i_l (Ô^a_8)^k_j
+ δ^k_j (Ô^a_8)^i_l
+ δ^k_l (Ô^a_8)^i_j ]
-1/24( δ^i_j δ^k_l + δ^i_l δ^k_j ) Ô^a_1 ,
(Ô^a_8)^i_j
= (T^a)^i_β(T^a)_j^β - 1/3δ^i_j Ô^a_1
, Ô^a_1 = (T^a)^α_β(T^a)_α^β = 1/2 .
When summed over a=1,…,8, the contributions from the 8 and 27 vanish, leaving the familiar contribution from the 1.
When only partials sums are available, as is the situation with individual contributions to the Trotterized evolution,
each of the contributions is the exponential of
1⊕ 8⊕ 27, with only the singlet contributions leaving the lattice a color singlet.
The leading term in the expansion of the product of the four pairs of Trotterized evolution operators sum to leave only the singlet contribution.
In contrast, higher-order terms do not cancel and
non-singlet contributions are present.
This is a generic problem
that will be encountered when satisfying Gauss's law
leads to non-local charge-charge interactions.
This is not a problem for U(1), and
surprisingly, is not a problem for
SU(2) because (Q^(1)_n Q^(1)_m, Q^(2)_n Q^(2)_m, Q^(3)_n Q^(3)_m ) are in the Cartan sub-algebra of SU(4) and therefore mutually commuting. However, it is a problem for N_c>2.
One way around the breaking of global SU(N_c)
is through the co-design
of unitaries that directly (natively) implement
exp( i α Q^(a)_n Q^(a)_m); see Sec. <ref>.
Without such a native unitary,
the breaking of SU(N_c) appears as any other Trotter error, and can be systematically reduced in the same way. A potential caveat to this is if the time evolution operator took the system into a different phase, but our studies of L=1 show no evidence of this.
It is interesting to note that the terms generated by the Trotter commutators form a closed algebra.
In principle, a finite number of terms could be
included to define an effective Hamiltonian whose Trotterization exactly maps onto the desired evolution operator (without the extra terms).
It is straightforward to work out the terms generated order-by-order in the Baker-Campbell-Hausdorff formula.
Aside from re-normalizing the existing charges, there are 9 new operator structures produced.
For example, the leading-order commutators generate the three operators, O_i, in Eq. (<ref>),
O_i =
(σ^+ I σ^- σ^- Z σ^+ - σ^+ Z σ^- σ^- I σ^+) - h.c. ,
(I σ^- σ^+ Z σ^+ σ^- - Z σ^- σ^+ I σ^+ σ^-) - h.c. ,
(σ^+ σ^- Z σ^- σ^+ I - σ^+ σ^- I σ^- σ^+ Z) - h.c. .
In general, additional operators are constrained only by (anti)hermiticity,
symmetry with respect to n ↔ m and preservation of (r,g,b), and should generically be included in the same spirit as terms in the Symanzik-action <cit.> for lattice QCD.
With Trotterization of the gauge field introducing violations of gauge symmetry, and the presence of bit- and phase-flip errors within the device register, it is worth briefly considering a potential mitigation strategy. A single
bit-flip error will change isospin by |Δ I_3|=1/2 and color charge by one unit of red or green or blue.
After each Trotter step on a real quantum device, such errors will be encountered and a mitigation or correction scheme is required.
Without the explicit gauge-field degrees of freedom and local charge conservation checks enabled by Gauss's law, such errors can only be detected globally, and hence, cannot be actively corrected during the evolution.[When local gauge fields are present,
previous works have found that including a quadratic “penalty-term" in the Hamiltonian is effective in mitigating violation of Gauss's law <cit.>. See also Refs. <cit.>.]
Motivated by this, consider introducing a twirling phase factor into the evolution, exp(-i θ^a Q^(a)), where Q^(a) is the total charge on the lattice.
If applied after each Trotter step, with a randomly selected set of eight angles, θ^a,
the phases of color-nonsinglet states become random for each member of an ensemble, mitigating errors in some observables.
Similar twirling phase factors could be included for the other charges that are conserved or approximately conserved.
§.§.§ Quantum Resource Requirements for Time Evolution
It is straightforward to extend the circuits presented in the previous section to arbitrary N_c and N_f. The quantum
resources required for time evolution can be quantified
for small, modest and asymptotically large systems. As discussed previously, a quantum register with N_q=2 L N_c N_f qubits[The inclusion of an ancilla for the kinetic term increases the qubit requirement to N_q = 2L N_c N_f + 1.] is required to encode one-dimensional SU(N_c) gauge theory
with N_f flavors on L spatial lattice sites using the JW transformation. For SU(3) gauge theory, this leads to, for example, N_q = 6L with only u-quarks and N_q = 18L with u,d,s-quarks.
The five distinct contributions to the resource requirements,
corresponding to application of the unitary operators providing
a single Trotter step associated with the quark mass, U_m, the baryon chemical potential, U_μ_B, the isospin chemical potential, U_μ_I, the kinetic term, U_kin, and the chromo-electric field, U_el, are
given in terms of the number of
single-qubit rotations, denoted by “R_Z”, the number of Hadamard gates, denoted by “Hadamard”, and the number of CNOT gates, denoted by “CNOT”.
It is found that[
For N_c = 2 only three of the ZZ terms can be combined into Q_n,f^(a) Q_m,f'^(a) and the number of CNOTs for one Trotter step of U_el is
U_el : (2 L-1) N_f [9 (2 L-1) N_f-7] | CNOT .
Additionally, for N_c N_f < 4, the Trotterization of U_ kin is more efficient without an ancilla and the number of CNOTs required is
U_ kin : 2 (2 L-1) N_c (N_c + 1) | CNOT .
The construction of the circuit that implements the time evolution of the hopping term for N_c=3
and N_f=1
is shown in Fig. <ref>.
]
U_m : 2 N_c N_f L | R_Z ,
U_μ_B : 2 N_c N_f L | R_Z ,
U_μ_I : 2 N_c N_f L | R_Z ,
U_kin : 2 N_c N_f(2L-1) | R_Z ,
2 N_c N_f (2L-1) | Hadamard ,
2 N_c N_f (8L-3) -4 | CNOT ,
U_el : 1/2(2L-1)N_c N_f [3-4N_c+N_f(2L-1)(5N_c-4) ] | R_Z ,
1/2(2L-1)(N_c-1) N_c N_f [N_f(2L-1)-1 ] | Hadamard ,
1/6 (2 L -1) (N_c-1) N_c N_f [(2 L-1) (2 N_c+17) N_f-2 N_c-11] | CNOT .
It is interesting to note the scaling of each of the contributions. The mass, chemical potential and kinetic terms scale as O(L^1), while the non-local gauge-field contribution is O(L^2).
As anticipated from the outset, using Gauss's law to constrain the energy in the gauge field via the quark occupation has given rise to circuit depths that scale quadratically with the lattice extent, naively violating one of the criteria for quantum simulations at scale <cit.>.
This volume-scaling is absent for formulations that explicitly include the
gauge-field locally,
but with the trade-off of requiring a volume-scaling increase in the number of
qubits or qudits or bosonic modes.[
The local basis on each link is spanned by the possible color irreps
and the states of the left and right Hilbert spaces (see footnote <ref>).
The possible irreps are built from the charges of the preceding fermion sites,
and therefore the dimension of the link basis grows polynomially in L.
This can be encoded in 𝒪(log L) qubits per link and
𝒪(L log L) qubits in total.
The hopping and chromo-electric terms in the Hamiltonian are local,
and therefore one Trotter step will require 𝒪(L) gate operations up to logarithmic corrections.]
We expect that the architecture of quantum devices used for simulation
and the resource requirements for the local construction will determine
the selection of local versus non-local implementations.
For QCD with N_f=2, the total requirements are
R_Z : (2L-1)( 132 L -63 )+18 ,
Hadamard : (2L-1)( 24L - 6 ) ,
CNOT : (2L-1)( 184L - 78 ) + 8 ,
and further, the CNOT requirements for a single Trotter step
of SU(2) and SU(3) for N_f = 1,2,3 are shown in Table <ref>.
These resource requirements suggest that systems with up to L=5 could be simulated, with appropriate error mitigation protocols, using this non-local framework in the near future. Simulations beyond L=5 appear to present a challenge in the near term.
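As a cross-check of these counts, the closed-form expressions above can be evaluated directly; the following minimal Python sketch (function name is ours) reproduces the per-Trotter-step CNOT counts for the ancilla-based kinetic term, e.g., 114 CNOTs for N_c=3, N_f=2, L=1, consistent with the total given above.

def cnots_per_trotter_step(L, Nc, Nf):
    # CNOTs for one Trotter step using the ancilla-based kinetic term (valid for Nc*Nf >= 4);
    # the mass and chemical-potential terms contribute only single-qubit rotations
    kin = 2 * Nc * Nf * (8 * L - 3) - 4
    # the bracketed expression is integer-valued, so integer division is exact
    el = (2 * L - 1) * (Nc - 1) * Nc * Nf * ((2 * L - 1) * (2 * Nc + 17) * Nf - 2 * Nc - 11) // 6
    return kin + el

# e.g., cnots_per_trotter_step(1, 3, 2) -> 114, matching (2L-1)(184L-78)+8 for N_f=2 QCD at L=1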
The resource requirements in Table <ref> do not include those for a gauge-link beyond the end of the lattice. As discussed previously, such additions to the time evolution could be used to move color-nonsinglet contributions to high frequency, allowing the possibility that they are filtered from observables.
Such terms contribute further to the quadratic volume scaling of resources.
Including chemical potentials in the time evolution does not increase the number of required entangling gates per Trotter step. Their impact upon resource requirements may arise in preparing the initial state of the system.
§.§.§ Elements for Future Co-Design Efforts
Recent work has shown the capability of creating many-body entangling gates natively <cit.> which have similar fidelity to two qubit gates.
This has multiple benefits. First, it allows for (effectively) deeper circuits to be run within coherence times.
Second, it can eliminate some of the Trotter errors due to non-commuting terms.
The possibility of using native gates for these calculations is particularly interesting from the standpoint of eliminating or mitigating the Trotter errors that violate the global SU(3) symmetry, as discussed in Sec. <ref>.
Specifically,
it would be advantageous to have a “black box" unitary operation of the form,
e^-i α Q_n^(a) Q_m^(a) = exp{-i α/2 [σ^+_nσ^-_n+1σ^-_mσ^+_m+1 + σ^-_nσ^+_n+1σ^+_mσ^-_m+1 + σ^+_n+1σ^-_n+2σ^-_m+1σ^+_m+2
+ σ^-_n+1σ^+_n+2σ^+_m+1σ^-_m+2 + σ^+_nσ^z_n+1σ^-_n+2σ^-_mσ^z_m+1σ^+_m+2
+ σ^-_nσ^z_n+1σ^+_n+2σ^+_mσ^z_m+1σ^-_m+2 + 1/6(σ^z_n σ^z_m + σ^z_n+1σ^z_m+1 + σ^z_n+2σ^z_m+2)
- 1/12(σ^z_n σ^z_m+1 + σ^z_n σ^z_m+2 + σ^z_n+1σ^z_m + σ^z_n+1σ^z_m+2 + σ^z_n+2σ^z_m + σ^z_n+2 σ^z_m+1)
] } ,
for arbitrary α and pairs of sites, n and m (sum on a is implied).
A more detailed discussion of co-designing
interactions for quantum simulations of these theories is clearly warranted.
§.§ Results from Quantum Simulators
The circuits laid out in Sec. <ref> are too deep to be executed on currently available quantum devices,
but can be readily implemented with quantum simulators such as cirq and qiskit.
This allows for an estimate of the number of Trotter steps required to achieve a desired precision in the determination of any given observable as a function of time.
Figure <ref> shows results for the
trivial vacuum-to-vacuum and trivial vacuum-to-d_r d_r probabilities as a function of time for L=1. See App. <ref> for the full circuit which
implements a single Trotter step, and App. <ref> for the decomposition of the energy starting in the trivial vacuum.
The number of Trotter steps,
N_ Trott, required to evolve out to a given t within a specified (systematic) error,
ϵ_ Trott, was also investigated.
ϵ_ Trott is defined as the
maximum fractional error between the
Trotterized and exact time evolution in two quantities, the vacuum-to-vacuum persistence probability
and the vacuum-to-d_rd_r transition probability. For demonstrative purposes, an analysis at leading order in the Trotter expansion is sufficient.
Naive expectations based upon global properties of the Hamiltonian defining the evolution operators indicate that an upper bound for ϵ_ Trott scales as
|| e^-i H t - [ U_1 (t/N_ Trott ) ]^N_ Trott|| ≤ 1/2∑_i ∑_j>i||[ H_i , H_j ] ||t^2/N_ Trott ,
where the Hamiltonian has been divided into sets of mutually commuting terms, H = ∑_i H_i. This upper bound indicates that the required number of Trotter steps to maintain a fixed error scales as N_ Trott∼ t^2 <cit.>.
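For explicit (small) decompositions, this commutator bound can be evaluated numerically; a minimal sketch (our own helper, assuming the H_i are supplied as dense matrices) is

import numpy as np

def first_order_trotter_steps(h_terms, t, eps):
    # smallest N with (t^2 / 2N) * sum_{i<j} ||[H_i, H_j]|| <= eps (spectral norm)
    comm_sum = sum(np.linalg.norm(h_terms[i] @ h_terms[j] - h_terms[j] @ h_terms[i], ord=2)
                   for i in range(len(h_terms)) for j in range(i + 1, len(h_terms)))
    return int(np.ceil(0.5 * comm_sum * t**2 / eps))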
To explore the resource requirements for simulation based upon explicit calculations between exclusive states, as opposed to upper bounds for inclusive processes, given in Eq. (<ref>),
a series of calculations was performed requiring ϵ_ Trott≤0.1 for a range of times, t.
Figure <ref> shows the required
N_ Trott as a function of t for m=g=L=1.
The plateaus
observed in Fig. <ref> arise from
resolving upper bounds from oscillating functions,
and introduce a limitation in fitting to extract scaling behavior. This is less of a limitation
for the larger vacuum-to-vacuum probabilities which are fit well by a quadratic polynomial, starting from t=1, with coefficients,
N_ Trott = 0.0393(5) t^2 + 4.13(10) t - 22(5) .
The uncertainty represents a 95% confidence interval in the fit parameters and corresponds to the shaded orange region in
Fig. <ref>. The weak quadratic scaling with t implies that, even out to t ∼ 100, the number of Trotter
steps scales approximately linearly, and a constant error in the observables can be achieved with a fixed Trotter step size.
We have been unable to distinguish between fits with and without logarithmic terms.
These results can be contrasted with those obtained for the Schwinger model in Weyl gauge. The authors of Ref. <cit.> estimate a resource
requirement, as quantified by the number of T-gates, that scales as ∼ (L t)^3/2log L t, increasing to
∼ L^5/2 t^3/2log L t log L if the maximal value of the gauge fields is accommodated within the Hilbert space.
The results obtained in this section suggest that resource requirements in axial gauge, as quantified by the number of CNOTs,
effectively scale as ∼ L^2 t up to intermediate times and as ∼ L^2 t^2 asymptotically. In a scattering process with localized
wave-packets, it is appropriate to take L∼ t
(for the speed of light taken to be c=1),
as the relevant non-trivial time evolution is bounded by the light cone.
This suggests that the required resources scale asymptotically as ∼ t^4, independent of the chosen gauge to define the simulation.
This could have been
anticipated at the outset by assuming that the minimum change in complexity for a process has physical meaning <cit.>.
§ SIMULATING 1+1D QCD WITH NF=1 AND L=1
With the advances in quantum devices, algorithms and mitigation strategies, quantum simulations of 1+1D QCD can now begin, and this section presents results for N_f=1 and L=1. Both state preparation and time evolution will be discussed.
§.§ State Preparation with VQE
Restricting the states of the lattice to be color singlets reduces the complexity of state preparation significantly.
Transformations in the quark sector are mirrored in the antiquark sector.
A circuit that
prepares the most general state with r=g=b=0 is shown in Fig. <ref>.
The (multiply-)controlled θ gates are short-hand for (multiply-)controlled R_Y(θ) gates, with half-filled circles denoting that a rotation is applied for both control values, 0 and 1, with different angles.
The subscripts on θ_ij signify that there are different angles for each controlled rotation. For example,
θ_i has two components, θ_0 and θ_1, corresponding to a rotation controlled on 0 and 1, respectively.
The CNOTs at the end of the circuit
enforce that there are equal numbers of quarks and antiquarks with the same color,
i.e., that r=g=b=0.
This circuit can be further simplified by constraining the angles to only parameterize color singlet states. The color singlet subspace is spanned by[
The apparent asymmetry between q_r,q_g,q_b is due to the charge operators generating hops over different numbers of quarks or antiquarks.
For example, Q^(1) hops q_r to q_g without passing over any intermediate quarks, but Q^(4) hops q_r to q_b passing over q_g.
Also note that when m=0 the ℤ_2 spin-flip symmetry reduces the space of states to be two-dimensional.]
|Ω_0⟩ , 1/√(3) (|q_r q_r⟩ - |q_g q_g⟩ + |q_b q_b⟩ ) ,
|q_r q_r q_g q_g q_b q_b⟩ , 1/√(3) (|q_r q_r q_gq_g⟩ - |q_r q_r q_bq_b⟩ + |q_g q_g q_b q_b⟩ ) ,
where |Ω_0⟩ = |000111⟩ is the trivial vacuum.
This leads to the following relations between angles,
θ_10 = θ_01 , θ_00 = -2 sin^-1[ tan(θ_0/2) cos(θ_01/2) ] ,
θ_01 = -2 sin^-1[ cos(θ_11/2) tan(θ_1/2) ] , θ_0 = -2 sin^-1[ tan(θ/2) cos(θ_1/2) ] .
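A short numerical routine (our notation) implementing these constraints, taking θ, θ_1 and θ_11 as the independent variational parameters, is

import numpy as np

def constrained_angles(theta, theta_1, theta_11):
    # dependent angles for the color-singlet-constrained VQE circuit (radians)
    theta_0  = -2 * np.arcsin(np.tan(theta / 2)   * np.cos(theta_1 / 2))
    theta_01 = -2 * np.arcsin(np.cos(theta_11 / 2) * np.tan(theta_1 / 2))
    theta_10 = theta_01
    theta_00 = -2 * np.arcsin(np.tan(theta_0 / 2) * np.cos(theta_01 / 2))
    return theta_0, theta_01, theta_10, theta_00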
The circuit in Fig. <ref> can utilize the strategy outlined in Ref. <cit.> to
separate into a “variational" part and a “static" part.
If the VQE circuit can be written as
U_var(θ) U_s,
where U_s is independent
of the variational parameters,
then U_s can be absorbed by a redefinition of
the Hamiltonian.
Specifically, matrix elements of the Hamiltonian can be written as
⟨Ω_0| U_var^†(θ) H̃ U_var(θ) |Ω_0⟩ ,
where H̃= U_s^† H U_s.
Table <ref> shows the
transformations of various Pauli strings under conjugation by a CNOT controlled on the smaller index qubit.
Note that the ℤ_2 nature of this transformation is manifest.
In essence, entanglement is
traded for a larger number of correlated measurements.
Applying the techniques in Ref. <cit.>, the VQE circuit of Fig. <ref> can be put into the form of Fig. <ref>,
which requires 5 CNOTs along with all-to-all connectivity between the three qs.
§.§ Time Evolution Using IBM's 7-Qubit Quantum Computers
A single leading-order Trotter step of N_f=1 QCD with L=1 requires 28 CNOTs.[By evolving with U_el before U_kin in the Trotterized time evolution, two of the CNOTs
become adjacent in the circuit and can be canceled.]
A circuit that implements one Trotter step of the mass term is shown in Fig. <ref>.
As discussed around Eq. (<ref>), it is more efficient to not use an ancilla qubit in the Trotterization of the kinetic part of the Hamiltonian.
A circuit that implements one Trotter step of a single hopping term is shown in Fig. <ref> <cit.>.
Similarly, for this system,
the only contribution to H_el is Q^(a)_n Q^(a)_n, which contains three ZZ terms that are Trotterized using the standard two CNOT implementation.
The complete set of circuits required for Trotterized time evolution are given in App. <ref>.
To map the system onto a quantum device, it is necessary to understand the required connectivity for efficient simulation.
Together, the hopping and chromo-electric terms require connectivity between nearest neighbors as well as between q_r and q_b and
qs and qs of the same color.
The required device topology is planar and two embedding options are
shown in Fig. <ref>.
The “kite” topology follows from the above circuits,
while the “wagon wheel” topology makes use of the identities CX(q_a,q_b) · CX(q_b,q_c) = CX(q_a,q_c) · CX(q_b,q_c) = CX(q_b,q_a) · CX(q_a,q_c) where CX(q_a,q_b) denotes a CNOT controlled on qubit q_a.
Both topologies can be employed on devices with all-to-all connectivity, such as trapped-ion systems, but
neither topology exists natively on available superconducting-qubit devices.
We performed leading-order Trotter evolution to study the trivial vacuum persistence and transition probability using IBM's quantum computers ibmq_jakarta and ibm_perth, each a r5.11H quantum processor with 7 qubits and “H"-connectivity.
The circuits developed for this system require a higher degree of connectivity than available with these devices, and so SWAP-gates were necessary for implementation.
The IBM transpiler was used to first compile the circuit for the H-connectivity and then again to compile the Pauli twirling (discussed next).
An efficient use of SWAP-gates allows for a single Trotter step to be executed with 34 CNOTs.
A number of error-mitigation techniques were employed to minimize associated systematic uncertainties in our calculations: randomized compiling of the CNOTs (Pauli twirling) <cit.> combined with decoherence renormalization <cit.>, measurement error mitigation, post-selecting on physical states and dynamical decoupling <cit.>.[A recent detailed study of the stability of some of IBM's quantum devices using a system of physical interest can be found in Ref. <cit.>.]
The circuits were randomly complied with each CNOT Pauli-twirled as a mechanism to transform coherent errors in the CNOT gates into statistical noise in the ensemble.
This has been shown to be effective in improving the quality of results in other simulations, for example, Refs. <cit.>.
Pauli twirling involves multiplying the right side of each CNOT by a randomly chosen
element of the two-qubit Pauli group, G_2,
and the left side by G'_2
such that G'_2 CX G_2 = CX (up to a phase).
For an ideal CNOT gate, this would have no effect on the circuit.
A table of required CNOT identities is given,
for example,
in an appendix in Ref. <cit.>.
Randomized Pauli-twirling is combined with performing measurements of a “non-physics", mitigation circuit, which is the time evolution circuit evaluated at t=0, and is the identity in the absence of noise.
Assuming that the randomized-compiling of the Pauli-twirled CNOTs transforms coherent noise into depolarizing noise,
the fractional deviation of the noiseless and computed results
from the asymptotic limit of complete decoherence
are expected to be approximately equal for both the physics and mitigation ensembles. Assuming linearity, it follows that
( P_pred^(phys)-1/8 ) = ( P_meas^(phys)-1/8 ) × ( 1-1/8 ) / ( P_meas^(mit)-1/8 ) ,
where P_meas^(phys) and P_meas^(mit) are post-processed probabilities and
P_pred^(phys) is an estimate of the probability once the effects of depolarizing noise have been removed.
The “1/8" represents the fully decohered probability after post-selecting on physical states (described next) and the “1" is the probability of measuring the initial state from the mitigation circuit in the absence of noise.
The computational basis of 6 qubits contains 2^6 states but time evolution only connects those with the same r, g and b. Starting from the trivial vacuum, this
implies that only the 8 states with r=g=b=0 are accessible through time evolution.
The results off the quantum computer were post-processed to only select events that populated 1 of the 8 physically
allowed states, discarding outcomes that were unphysical. Typically, this resulted in a retention rate of ∼ 30%. The
workflow interspersed physics and mitigation circuits to provide a correlated calibration of the quantum devices. This enabled the detection (and removal) of
out-of-specs device performance during post-processing. We explored using the same twirling sequences for both physics and
mitigation circuits and found that it had no significant impact.
The impact of dynamical decoupling of idle qubits using qiskit's built in functionality was also investigated and found to have little effect.
The results of each run were corrected for measurement error using IBM's available function, TensoredMeasFitter, and associated downstream operations.
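For reference, the post-selection onto the r=g=b=0 sector described above can be implemented as a simple filter on the measured bitstrings; the sketch below assumes the bit ordering (q_r, q_g, q_b, qbar_r, qbar_g, qbar_b), with any device-specific (e.g., little-endian) reordering left to the user.

def postselect_physical(counts):
    # keep the 8 strings with bit(q_c) + bit(qbar_c) = 1 for every color c, then renormalize
    kept = {}
    for bits, n in counts.items():
        b = [int(x) for x in bits]
        if all(b[c] + b[c + 3] == 1 for c in range(3)):
            kept[bits] = n
    norm = sum(kept.values())
    return {k: v / norm for k, v in kept.items()}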
The results obtained for the trivial vacuum-to-vacuum and trivial vacuum-to-q_r q_r probabilities from one step of leading-order Trotter time evolution are shown in Fig. <ref>.
For each time, 447 Pauli-twirled physics circuits
and 447 differently twirled circuits with zeroed angles (mitigation) were analyzed using 10^3 shots on both ibmq_jakarta and ibm_perth (to estimate device systematics).
After post-selecting on physical states, correlated Bootstrap Resampling was used to form the final result.[As the mitigation and physics circuits were executed as adjacent jobs on the devices, the same Bootstrap sample was used to select results from both ensembles to account for temporal correlations.]
Tables <ref> and <ref> display the results of the calculations performed using ibmq_jakarta and ibm_perth quantum computers.
The same mitigation data was used for both the trivial vacuum-to-vacuum and trivial vacuum-to-q_rq_r calculations, and is provided in columns 2 and 4 of Table <ref>.
See App. <ref> for an extended discussion of leading-order Trotter.
Note that the negative probabilities seen in Fig. <ref> indicate that additional non-linear terms are needed in Eq. (<ref>).
It is interesting to consider the distributions of events obtained from the Pauli-twirled circuits, as shown in Fig. <ref>.
The distributions are not Gaussian and, in a number of instances, exhibit heavy tails particularly near the boundaries.[For a study of heavy-tailed distributions in Euclidean-space lattice QCD calculations, see Refs. <cit.>.]
The spread of the distributions, associated with non-ideal CNOT gates, is seen to reach a maximum of ∼ 0.4, but with a full-width at half-max that is ∼ 0.2. These distributions are already broad with a 34 CNOT circuit, and we probed the limit
of these devices by time-evolving with two first-order Trotter steps,[Under a particular ordering of terms, two steps of first- and second-order Trotter time evolution are equivalent.]
which requires 91 CNOTs after accounting for SWAPs.
Using the aforementioned techniques, this was found to be beyond the capabilities of ibmq_jakarta, ibmq_lagos and ibm_perth.
§ ARBITRARY NC AND NF
In this section, the structure of the Hamiltonian for N_f flavors of quarks in the fundamental representation of SU(N_c) is developed.
The mapping to spins has the same structure as for
N_f=2 QCD, but now there are N_c× N_f qs and N_c× N_f q̄s per spatial lattice site.
While the mass and kinetic terms generalize straightforwardly, the energy in the chromo-electric field is more tricky.
After enforcing Gauss's law, it is
H_el = g^2/2∑_n=0^2L-2 ( ∑_m ≤ n Q^(a)_m ) ^2
,
Q^(a)_m = ϕ^†_m T^a ϕ_m
,
where T^a are now the generators of SU(N_c).
The Hamiltonian,
including chemical potentials for baryon number (chemical potentials for other flavor combinations can be included as needed), is found to be
H = H_kin + H_m + H_el + H_μ_B ,
H_kin = 1/2∑_n=0^2L-2∑_f=0^N_f-1∑_c=0^N_c-1[ σ_i(n,f,c)^+ ( ⊗_j=1^N_cN_f-1(-σ_i(n,f,c)+j^z ) ) σ_i(n,f,c)+N_cN_f^- +h.c.] ,
H_m = 1/2∑_n=0^2L-1∑_f=0^N_f-1∑_c=0^N_c-1 m_f [ (-1)^n σ^z_i(n,f,c) + 1 ] ,
H_el = g^2/2∑_n=0^2L-2(2L-1-n)( ∑_f=0^N_f-1 Q_n,f^(a) Q_n,f^(a) +
2 ∑_f=0^N_f-2∑_f'=f+1^N_f-1Q_n,f^(a) Q_n,f'^(a))
+ g^2 ∑_n=0^2L-3∑_m=n+1^2L-2(2L-1-m) ∑_f=0^N_f-1∑_f'=0^N_f-1 Q_n,f^(a) Q_m,f'^(a) ,
H_μ_B = -μ_B/2 N_c∑_n=0^2L-1∑_f=0^N_f-1∑_c=0^N_c-1σ^z_i(n,f,c) ,
where i(n,f,c) = N_c N_f n + N_c f + c,
and the products of the charges are
4 Q_n,f^(a) Q_n,f^(a) = N_c^2-1/2 - (1+1/N_c )∑_c=0^N_c-2∑_c' = c+1^N_c-1σ^z_i(n,f,c)σ^z_i(n,f,c') ,
8 Q_n,f^(a) Q_m,f'^(a) = 4 ∑_c=0^N_c-2∑_c'=c+1^N_c-1[ σ^+_i(n,f,c) Z_(n,f,c,c') σ^-_i(n,f,c')σ^-_i(m,f',c) Z_(m,f',c,c') σ^+_i(m,f',c') + h.c.]
+ ∑_c=0^N_c-1∑_c'=0^N_c-1 (δ_cc' - 1/N_c)σ^z_i(n,f,c)σ^z_i(m,f',c') ,
Z_(n,f,c,c')≡ ⊗_k=1^c'-c-1σ^z_i(n,f,c)+k .
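For bookkeeping, the qubit labeling and register size used above are reproduced by the following trivial helpers (names ours),

def qubit_index(n, f, c, Nc, Nf):
    # i(n, f, c) = Nc*Nf*n + Nc*f + c for staggered site n, flavor f, color c
    return Nc * Nf * n + Nc * f + c

def register_size(L, Nc, Nf):
    # 2 L Nc Nf qubits for L spatial sites (quark and antiquark staggered sites)
    return 2 * L * Nc * Nf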
The resource requirements for implementing Trotterized time evolution
using generalizations of the circuits in Sec. <ref> are given in Eq. (<ref>).
It is interesting to consider the large-N_c limit of the Hamiltonian,
where quark loops are parametrically suppressed and
the system can be described semi-classically <cit.>.
Unitarity requires rescaling the strong coupling,
g^2 → g^2/N_c and leading terms in the Hamiltonian scale as 𝒪(N_c).
The leading order contribution to the product of charges is
4 Q_n,f^(a) Q_n,f^(a) = ∑_c=0^N_c-2∑_c' = c+1^N_c-1 (1 - σ^z_i(n,f,c)σ^z_i(n,f,c') ) ,
8 Q_n,f^(a) Q_m,f'^(a) = 4 ∑_c=0^N_c-2∑_c'=c+1^N_c-1[ σ^+_i(n,f,c)
Z_(n,f,c,c') σ^-_i(n,f,c')σ^-_i(m,f',c)
Z_(m,f',c,c') σ^+_i(m,f',c') + h.c.] .
Assuming that the number of qq pairs that contribute to the meson wavefunctions do not scale with N_c,
as expected in the large-N_c limit,
H_el∝ N_c
and mesons are non-interacting, a well known consequence of the large-N_c limit <cit.>.
Baryons on the other hand are expected to have strong interactions at leading order in N_c <cit.>. This is a semi-classical limit and we expect that there exists a basis
where states factorize into localized tensor products, and the time evolution operator is non-entangling.
The latter result has been observed in the large-N_c limit of hadronic scattering <cit.>.
§ SUMMARY AND DISCUSSION
Important for future quantum simulations of processes that can be meaningfully compared to experiment, the real-time dynamics of strongly-interacting systems are predicted to be efficiently computable with quantum computers of sufficient capability.
Building upon foundational work in quantum chemistry and in low-dimensional U(1) and SU(2) gauge theories, this work has developed the tools necessary for the quantum simulation of
1+1D QCD (in axial gauge) using open boundary conditions, with arbitrary numbers of quark flavors and colors and including chemical potentials for baryon number and isospin.
Focusing largely on QCD with N_f=2, which shares many of the complexities of QCD in 3+1D, we have performed a detailed analysis of the required quantum resources for simulation of real-time dynamics, including efficient quantum circuits and associated gate counts, and the scaling of the number of Trotter steps for a fixed-precision time evolution.
The structure and dynamics of small systems, with L=1,2 for N_c=3 and N_f=1,2 have been detailed using classical computation, quantum simulators, D-Wave's Advantage and IBM's 7-qubit devices ibmq_jakarta and ibm_perth. Using recently developed error mitigation strategies, relatively small uncertainties were obtained for a single Trotter step with 34 CNOT gates after transpilation onto the QPU connectivity.
Through a detailed study of the low-lying spectrum, both the relevant symmetries and the color-singlets in the mesonic and baryonic sectors, including a bound two-baryon nucleus, have been identified.
Open boundary conditions also permit low-lying color edge-states that penetrate into the lattice volume by a distance set by the confinement scale.
By examining quark entanglement in the hadrons, a transition from the mesons being primarily composed of quark-antiquarks to baryon-antibaryons was found.
We have presented the relative contributions of each of the terms in the Hamiltonian to the energy of the vacuum, mesons and baryons.
This chapter has provided an estimate for the number of CNOT-gates required to implement one Trotter step in N_f=2, 1+1D axial-gauge QCD. For L = 10 spatial sites, ∼ 3 × 10^4 CNOTs
are required, while ∼ 4 × 10^6 CNOTs are required for L = 100.
Realistically, quantum simulations with L=10 are a beginning toward providing results with a complete quantification of uncertainties, including lattice-spacing and finite-volume
artifacts, and L=100 will likely yield high-precision results. It was found that, in the axial-gauge formulation, resources for time evolution effectively scale as L^2 t for intermediate times and L^2 t^2 for
asymptotic times. With L∼ t, this asymptotic scaling is the same as in the Schwinger model, suggesting no differences in scaling between Weyl and axial gauges.
§ MAPPING TO QUBITS
This appendix outlines how the qubit Hamiltonian in Eq. (<ref>) is obtained from the lattice Hamiltonian in Eq. (<ref>).
For this system,
the constraint of Gauss's law is sufficient to uniquely determine the chromo-electric field carried by the links between lattice sites in terms of a background chromo-electric field and the distribution of color charges. The difference between adjacent chromo-electric fields at a site with charge
Q^(a)
is
E^(a)_n+1 - E^(a)_n = Q^(a)_n ,
for a=1 to 8, resulting in a
chromo-electric field
E^(a)_n = F^(a) + ∑_i≤ n Q^(a)_i .
In general, there can be a non-zero background chromo-electric field, F^(a),
which in this paper has been set to zero.
Inserting the chromo-electric field in terms of the charges into Eq. (<ref>) yields Eq. (<ref>).
The color and flavor degrees of freedom of each q and q are then distributed over
6 (=N_c N_f) sites as illustrated in Fig. (<ref>).
There are now creation and annihilation operators for each quark, and the Hamiltonian is
H = ∑_n=0^2L-1∑_f=0^1 ∑_c=0^2 [ ( m_f (-1)^n - μ_B/3 - μ_I/2(-1)^f ) ψ^†_6n+3f+cψ_6n+3f+c ]
+ 1/2∑_n=0^2L-2∑_f=0^1 ∑_c=0^2 (ψ^†_6n+3f+cψ_6(n+1)+3f+c + h.c. ) + g^2/2∑_n=0^2L-2 ( ∑_m≤ n∑_f=0^1 Q^(a)_m,f ) ^2 ,
where the color charge is evaluated over three (r,g,b) occupation sites with the same flavor,
Q_m,f^(a) = ∑_c=0^2∑_c'=0^2 ψ^†_6m+3f+c T^a_cc' ψ_6m+3f+c' ,
and the T^a are the eight generators of SU(3).
The fermionic operators in Fock space are mapped onto spin operators via the JW transformation,
ψ_n = ⊗_l<n( -σ^z_l ) σ^-_n , ψ_n^† = ⊗_l<n( -σ^z_l ) σ^+_n .
In terms of spins, the eight SU(3) charge operators become[Calculations of quadratics of the gauge charges are simplified by the Fierz identity,
( T^(a) )^α_β (T^(a) )^γ_δ = 1/2 (δ^α_δδ^γ_β - 1/N_cδ^α_βδ^γ_δ) .
]
Q_m,f^(1) = 1/2σ^+_6m+3fσ^-_6m+3f+1 + h.c. ,
Q_m,f^(2) = -i/2σ^+_6m+3fσ^-_6m+3f+1 + h.c. ,
Q_m,f^(3) = 1/4(σ^z_6m+3f - σ^z_6m+3f+1) ,
Q_m,f^(4) = -1/2σ^+_6m+3fσ^z_6m+3f+1σ^-_6m+3f+2 + h.c. ,
Q_m,f^(5) = i/2σ^+_6m+3fσ^z_6m+3f+1σ^-_6m+3f+2 + h.c. ,
Q_m,f^(6) = 1/2σ^+_6m+3f+1σ^-_6m+3f+2 + h.c. ,
Q_m,f^(7) = -i/2σ^+_6m+3f+1σ^-_6m+3f+2 + h.c. ,
Q_m,f^(8) = 1/4 √(3)(σ^z_6m+3f + σ^z_6m+3f+1 - 2σ^z_6m+3f+2) .
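As a numerical sanity check of these spin representations, the SU(2) sub-algebra relation [Q^(1), Q^(2)] = i Q^(3) can be verified directly with dense matrices; the sketch below (helper names ours) restricts to the three color qubits of a single occupation site.

import numpy as np
from functools import reduce

I2 = np.eye(2, dtype=complex)
Z  = np.diag([1.0, -1.0]).astype(complex)
Sp = np.array([[0, 1], [0, 0]], dtype=complex)   # sigma^+
Sm = Sp.conj().T                                 # sigma^-

def embed(ops, n):
    # place single-qubit operators {qubit: op} in an n-qubit space (qubit 0 leftmost)
    return reduce(np.kron, [ops.get(q, I2) for q in range(n)])

n = 3   # the three color qubits of the site with 6m+3f = 0
Q1 = 0.5   * (embed({0: Sp, 1: Sm}, n) + embed({0: Sm, 1: Sp}, n))
Q2 = -0.5j * (embed({0: Sp, 1: Sm}, n) - embed({0: Sm, 1: Sp}, n))
Q3 = 0.25  * (embed({0: Z}, n) - embed({1: Z}, n))

assert np.allclose(Q1 @ Q2 - Q2 @ Q1, 1j * Q3)   # [Q^(1), Q^(2)] = i Q^(3)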
Substituting Eqs. (<ref>) and (<ref>) into Eq. (<ref>) gives the Hamiltonian in Eq. (<ref>). For reference, the expanded Hamiltonian for L=1 is
H = H_kin + H_m + H_el +
H_μ_B + H_μ_I ,
H_kin = -1/2 (σ^+_6 σ^z_5 σ^z_4 σ^z_3 σ^z_2 σ^z_1 σ^-_0 + σ^-_6 σ^z_5 σ^z_4 σ^z_3 σ^z_2 σ^z_1 σ^+_0 + σ^+_7 σ^z_6 σ^z_5 σ^z_4 σ^z_3 σ^z_2 σ^-_1 + σ^-_7 σ^z_6 σ^z_5 σ^z_4 σ^z_3 σ^z_2 σ^+_1
+ σ^+_8 σ^z_7 σ^z_6 σ^z_5 σ^z_4 σ^z_3 σ^-_2 + σ^-_8 σ^z_7 σ^z_6 σ^z_5 σ^z_4 σ^z_3 σ^+_2 + σ^+_9 σ^z_8 σ^z_7 σ^z_6 σ^z_5 σ^z_4 σ^-_3 + σ^-_9 σ^z_8 σ^z_7 σ^z_6 σ^z_5 σ^z_4 σ^+_3
+ σ^+_10σ^z_9 σ^z_8 σ^z_7 σ^z_6 σ^z_5 σ^-_4 + σ^-_10σ^z_9 σ^z_8 σ^z_7 σ^z_6 σ^z_5 σ^+_4 + σ^+_11σ^z_10σ^z_9 σ^z_8 σ^z_7 σ^z_6 σ^-_5 + σ^-_11σ^z_10σ^z_9 σ^z_8 σ^z_7 σ^z_6 σ^+_5 ) ,
H_m = 1/2 [ m_u (σ^z_0 + σ^z_1 + σ^z_2 -σ^z_6 - σ^z_7 - σ^z_8 + 6 )+ m_d (σ^z_3 + σ^z_4 + σ^z_5 -σ^z_9 - σ^z_10 - σ^z_11 + 6 ) ] ,
H_el = g^2/2 [ 1/3(6 - σ^z_1 σ^z_0 - σ^z_2 σ^z_0 - σ^z_2 σ^z_1 - σ^z_4 σ^z_3 - σ^z_5 σ^z_3 - σ^z_5 σ^z_4) + σ^+_4σ^-_3σ^-_1σ^+_0 + σ^-_4σ^+_3σ^+_1σ^-_0
+ σ^+_5σ^z_4σ^-_3σ^-_2σ^z_1σ^+_0
+ σ^-_5σ^z_4σ^+_3σ^+_2σ^z_1σ^-_0 + σ^+_5σ^-_4σ^-_2σ^+_1 + σ^-_5σ^+_4σ^+_2σ^-_1
+ 1/12 (2 σ^z_3 σ^z_0 + 2σ^z_4 σ^z_1 + 2σ^z_5 σ^z_2 - σ^z_5 σ^z_0 - σ^z_5 σ^z_1 - σ^z_4 σ^z_2 - σ^z_4 σ^z_0 - σ^z_3 σ^z_1 - σ^z_3 σ^z_2 ) ] ,
H_μ_B = -μ_B/6 ( σ^z_0 + σ^z_1 + σ^z_2 + σ^z_3 + σ^z_4 + σ^z_5
+ σ^z_6 + σ^z_7 + σ^z_8 + σ^z_9 + σ^z_10 + σ^z_11 ) ,
H_μ_I = -μ_I/4 ( σ^z_0 + σ^z_1 + σ^z_2 - σ^z_3 - σ^z_4 - σ^z_5
+ σ^z_6 + σ^z_7 + σ^z_8 - σ^z_9 - σ^z_10 - σ^z_11 ) .
§ SYMMETRIES OF THE FREE-QUARK HAMILTONIAN
Here the symmetries of the free-quark Hamiltonian are identified to better understand the degeneracies observed in the spectrum of 1+1D QCD with N_f=2 and L=1 as displayed in Figs. <ref> and <ref>.
Specifically, the Hamiltonian with g=h=μ_B=μ_I=0, leaving only the hopping and mass terms (m = m_u = m_d), is
H = ∑_f=0^1 ∑_c=0^2 [ m ∑_n=0^2L-1 (-1)^n ψ^†_6n+3f+cψ_6n+3f+c + 1/2∑_n=0^2L-2 (ψ^†_6n+3f+cψ_6(n+1)+3f+c + h.c. ) ] .
The mapping of degrees of freedom is taken to be as shown in Fig. <ref>, but it will be convenient to work with Fock-space quark operators instead of spin operators.
In what follows the focus will be on L=1, and larger systems follow similarly.
The creation operators can be assembled into a 12-component vector,
Ψ^†_i = (ψ_0^†, ψ_1^†, … ,ψ_10^†, ψ_11^†),
in terms of which the Hamiltonian becomes
H = Ψ^†_i M_ijΨ_j ,
where M is a 12 × 12 block matrix of the form,
M =
[
[ m 1/2; 1/2 -m ]]
,
with each block a 6 × 6 diagonal matrix.
Diagonalizing M, gives rise to
M̃ =
[
[ λ 0; 0 -λ ]]
, λ = 1/2√(1+4m^2) ,
with associated eigenvectors,
ψ̃_i = 1/√(2) (√(1+m/λ) ψ_i + √(1-m/λ) ψ_6+i ) , ψ̃_6+i = 1/√(2) (-√(1-m/λ) ψ_i + √(1+m/λ) ψ_6+i )
,
where ψ̃_i (ψ̃_6+i) corresponds to the positive (negative) eigenvalue
and the index i takes values 0 to 5.
These eigenvectors create superpositions of quarks and antiquarks with the same color and flavor, which are the OBC analogs of momentum plane-waves.
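These expressions are straightforward to verify numerically, e.g., for m=0.9 (a minimal numpy check, up to the overall sign ambiguity of the eigenvectors),

import numpy as np

m = 0.9
M_block = np.array([[m, 0.5], [0.5, -m]])
lam = 0.5 * np.sqrt(1 + 4 * m**2)
evals, evecs = np.linalg.eigh(M_block)
assert np.allclose(np.sort(evals), [-lam, lam])
# positive-energy mode: coefficients (sqrt(1+m/lam), sqrt(1-m/lam))/sqrt(2)
v_plus = np.array([np.sqrt(1 + m / lam), np.sqrt(1 - m / lam)]) / np.sqrt(2)
assert np.allclose(np.abs(evecs[:, np.argmax(evals)]), v_plus)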
In this basis, the Hamiltonian becomes
H = ∑_i=0^5λ ( ψ̃^†_i ψ̃_i - ψ̃^†_6+iψ̃_6+i )
,
which has a vacuum state,
|Ω_0 ⟩ = ∏_i=0^i=5ψ̃^†_6+i|ω_0⟩ ,
where |ω_0⟩ is the unoccupied state,
and
|Ω_0 ⟩ corresponds to
| 000000111111 ⟩ (in binary)
in this transformed basis.
Excited states are formed by acting with either ψ̃^†_i or ψ̃_6+i on
|Ω_0 ⟩ which raises the energy of the system by λ.
A further transformation is required for the SU(12) symmetry to be manifest.
In terms of the 12-component vector, Ψ̃^† = (ψ̃^†_0, …, ψ̃^†_5, ψ̃_6, …, ψ̃_11), the Hamiltonian in Eq. (<ref>) becomes,
H =
∑_i=0^5λ ( ψ̃^†_i ψ̃_i - ψ̃^†_6+iψ̃_6+i )
= λ(
Ψ̃^†Ψ̃ - 6
)
,
where the canonical anticommutation relations have been used to obtain the final equality.
This is invariant under a SU(12) symmetry, where Ψ̃ transforms in the fundamental representation.
The free-quark spectrum (g=h=0) is therefore described by states with degeneracies corresponding to the 1 and 12 of SU(12) as well as
the antisymmetric combinations of fundamental irreps, 66, 220, … as illustrated in Figs. <ref> and <ref>.
The vacuum state corresponds to the singlet of SU(12). The lowest-lying 12 corresponds to single quark or antiquark excitations, which are color 3_cs for quarks and 3_cs for antiquarks and will each appear as isodoublets, i.e., 12→ 3_c⊗ 2_f ⊕ 3_c⊗ 2_f.
The 66 arises from double excitations of quarks and antiquarks. The possible color-isospin configurations are, based upon totally-antisymmetric wavefunctions for qq, qq and qq,
66 =
1_c⊗ 1_f
⊕ 1_c⊗ 3_f
⊕ 8_c⊗ 1_f
⊕ 8_c⊗ 3_f
⊕ 6_c⊗ 1_f
⊕ 6_c⊗ 1_f
⊕ 3_c⊗ 3_f
⊕ 3_c⊗ 3_f.
The OBCs split the naive symmetry between quarks and antiquarks and, for g 0, the lowest-lying color edge-states are from the antiquark sector with degeneracies 6 from a single excitation and 6,9 from double excitations.
Larger lattices possess an analogous global
SU(12) symmetry, coupled between spatial sites by the hopping term, and the spectrum is again one of non-interacting quasi-particles.
§ DETAILS OF THE D-WAVE IMPLEMENTATIONS
In this appendix, additional details are provided on the procedure used in Sec. <ref> to extract the lowest three eigenstates and corresponding energies using D-Wave's Advantage, (a more complete description can be found in Ref. <cit.>). The objective function F to be minimized can be written in terms of binary variables and put into QUBO form. Defining F=⟨Ψ|H̃|Ψ⟩ -η⟨Ψ| Ψ⟩ <cit.>, and expanding the wavefunction with a finite dimensional orthonormal basis ψ_α, |Ψ⟩ =∑^n_s_α a_α |ψ_α⟩, it is found
F=⟨Ψ|H̃|Ψ⟩ -η⟨Ψ| Ψ⟩ = ∑_αβ^n_s a_α a_β[⟨ψ_α|H̃|ψ_β⟩ -η⟨ψ_α| ψ_β⟩] =∑_αβ^n_s a_α a_β (H̃_αβ -ηδ_αβ)=∑_αβ^n_s a_α a_β h_αβ ,
where h_αβ are the matrix elements of the Hamiltonian that can be computed classically. The coefficients a_α are then expanded in a fixed-point representation using K bits <cit.>,
a^(z+1)_α=a^(z)_α+∑_i=1^K2^i-K-z(-1)^δ_iKq^α_i ,
where z is the zoom parameter. The starting point is a_α^(z=0)=0, and for each consecutive value of z, the range of values that a_α^(z+1) is allowed to explore is reduced by a factor of 2, centered around the previous solution a_α^(z). Now F takes the following form,
F=∑_α,β^n_s∑_i,j^K Q_α,i;β,j q^α_i q^β_j , Q_α,i;β,j=2^i+j-2K-2z (-1)^δ_iK+δ_jK h_αβ + 2 δ_αβδ_ij 2^i-K-z (-1)^δ_iK∑_γ^n_s a^(z)_γ h_γβ .
The iterative procedure used to improve the precision of the results is based on the value a^(z)_α obtained after 14 zoom steps (starting from a_α^(z_0=0)=0), and then launching a new annealing workflow with z_1 ≠ 0 (e.g., z_1=4), with a^(z=z_0+14)_α as the starting point. After another 14 zoom steps, the final value a^(z=z_1+14)_α can be used as the new starting point for a^(z=z_2)_α, with z_2 > z_1. This process can be repeated until no further improvement is seen in the convergence of the energy and wavefunction.
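For concreteness, the QUBO matrix for a single zoom step can be assembled classically as follows (a sketch; h is the matrix h_αβ, a_prev holds the coefficients a^(z)_α, and the flattened index is α K + (i-1)),

import numpy as np

def qubo_matrix(h, a_prev, K, z):
    # Q_{alpha,i;beta,j} from the expression above, with i,j = 1..K
    ns = h.shape[0]
    Q = np.zeros((ns, K, ns, K))
    for a in range(ns):
        for i in range(1, K + 1):
            for b in range(ns):
                for j in range(1, K + 1):
                    Q[a, i - 1, b, j - 1] = (2.0 ** (i + j - 2 * K - 2 * z)
                                             * (-1) ** ((i == K) + (j == K)) * h[a, b])
                    if a == b and i == j:
                        Q[a, i - 1, b, j - 1] += (2 * 2.0 ** (i - K - z) * (-1) ** (i == K)
                                                  * (h[:, b] @ a_prev))
    return Q.reshape(ns * K, ns * K)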
In Table <ref>, the difference between the exact energy of the vacuum and masses of the σ- and π-mesons and the ones computed with the QA, for each iteration of this procedure after 14 zoom steps, are given, together with the overlap of the wavefunctions 1-|⟨Ψ^ exact| Ψ^ Adv.⟩|^2. See also Fig. <ref>.
Focusing on the lowest line of the last panel of Fig. <ref>, which shows the convergence as a function of zoom steps for the pion mass, it can be seen that it displays some oscillatory behavior compared to the rest, which are smooth. This is expected, since the wavefunctions used to project out the lower eigenstates from the Hamiltonian are known with a finite precision (obtained from previous runs). For example, the vacuum state is extracted at the 10^-6 precision level. Then, when looking at the excited states with increased precision (like for the pion, around 10^-7), the variational principle might not hold, and the computed energy level might be below the “true” one (and not above). To support this argument, the same calculation has been pursued, but using the exact wavefunctions when projecting the Hamiltonian to study the excited states (instead of the ones computed using Advantage), and no oscillatory behavior is observed, as displayed in Fig. <ref>.
§ QUANTUM CIRCUITS REQUIRED FOR TIME EVOLUTION BY THE GAUGE-FIELD INTERACTION
This appendix provides more detail about the construction of the quantum circuits which implement the Trotterized time evolution of the chromo-electric terms of the Hamiltonian.
It closely follows the presentation in the appendix of Ref. <cit.>.
The four-qubit interaction in H_el has the form
σ^+ σ^- σ^- σ^+ + h.c. = 1/8(XXXX + XXYY + XYXY - XYYX + YXYX - YXXY +YYXX + YYYY) .
Since the 8 Pauli strings are mutually commuting, they can be simultaneously diagonalized by a unitary transformation. The strategy for
identifying the quantum circuit(s) to implement this term will be to first change to a basis where every term is diagonal, then apply the diagonal unitaries and finally
return back to the computational basis.
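The Pauli-string decomposition above is simple to confirm numerically before constructing the circuits; a minimal check with dense matrices is

import numpy as np
from functools import reduce

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)
Sp = np.array([[0, 1], [0, 0]], dtype=complex)   # sigma^+
Sm = Sp.conj().T                                 # sigma^-
kron = lambda ops: reduce(np.kron, ops)

lhs = kron([Sp, Sm, Sm, Sp])
lhs = lhs + lhs.conj().T
paulis = {"X": X, "Y": Y, "Z": Z}
terms = [("XXXX", 1), ("XXYY", 1), ("XYXY", 1), ("XYYX", -1),
         ("YXYX", 1), ("YXXY", -1), ("YYXX", 1), ("YYYY", 1)]
rhs = sum(s * kron([paulis[c] for c in w]) for w, s in terms) / 8
assert np.allclose(lhs, rhs)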
The GHZ state-preparation circuits,
shown in Fig. <ref>,
diagonalize all 8 of the Pauli strings, for example,
G^† ( XXXX + YYXX + YXYX - YXXY - XYYX + XYXY + XXYY + YYYY) G
= IIZI - ZIZZ - ZZZZ + ZIZI + IZZI - IIZZ - IZZZ + ZZZI .
This can be verified by using the identities that are shown in Fig. <ref> to simplify the circuits formed by conjugating each Pauli string by G.
As an example, the diagonalization of XXYY is displayed in Fig. <ref>.
The first equality uses Y = i Z X and the second equality uses the X
circuit identity to move all Xs past the CNOTs. The third equality moves the Zs past
the controls of the CNOTs and uses the Z circuit identity. The other Pauli strings are diagonalized in a similar manner.
It is also straightforward to show that, for example,
G^†(IZZI + IZIZ + ZIIZ)G = IZII + IIIZ + ZIII .
In general, a ZZ in the computational basis becomes a single Z in the GHZ basis if the state-preparation circuit has a CNOT that connects the
original two Zs. The two GHZ state-preparation circuits, G and G̃, were chosen so that all 9 of the ZZ terms in Eq. (<ref>) are mapped to single qubit rotations.
Once in the GHZ basis, the diagonal unitaries are performed, e.g., exp(-i IZZZ).
They are arranged to minimize the number of CNOTs required, and the optimal circuit layouts are shown in Fig. <ref>.
§ COMPLETE CIRCUITS FOR NF=1,2 QCD WITH L=1
This appendix provides the complete set of circuits required to
implement one Trotter step for
N_f=1 and N_f=2 QCD with L=1.
The composite circuit for N_f=1 is shown in
Fig. <ref> where, by ordering U_el before U_kin, the CNOTs highlighted in blue cancel. The composite circuit for N_f=2 is shown in
Fig. <ref>,
where the ordering in the Trotterization
is U_m
followed by U_kin
and then by U_el.
§ ENERGY DECOMPOSITION ASSOCIATED WITH TIME EVOLUTION FROM THE TRIVIAL VACUUM
This appendix shows,
in Fig. <ref>, the time evolution of the decomposition of the expectation value of the Hamiltonian starting with the trivial vacuum at t=0 for N_f=2 QCD with m=g=L=1.
Notice that the sum of all three terms equals zero for all times as required by energy conservation and that the period of oscillations is the same as the period of the persistence amplitude shown in Fig. <ref>.
§ DETAILS ON ONE FIRST-ORDER TROTTER STEP OF NF=1 QCD WITH L=1
This appendix discusses the theoretical expectations for one step of first-order Trotter time evolution for N_f=1 QCD with L=1.
The time evolution operator
is decomposed
into U_1(t) = U_kin(t) U_el(t) U_m(t) where the subscript “1” is to denote first-order Trotter. Both the trivial vacuum-to-vacuum and trivial vacuum-to-q_rq_r probabilities involve measurements in the computational basis where U_m(t) and U_el(t) are diagonal and have no effect.
Thus, the time-evolution operator is effectively U_1(t) = U_kin(t), which is exact (no Trotter errors) over a single spatial site. The trivial vacuum-to-vacuum, trivial vacuum-to-q_r q_r and trivial vacuum-to-B B probabilities are found to be,
|⟨Ω_0 | e^-i H_kin t|Ω_0 ⟩| ^2 = cos^6(t/2) ,
|⟨ q_r q_r | e^-i H_kin t|Ω_0 ⟩| ^2 = cos^4(t/2)sin^2(t/2) ,
|⟨ B B| e^-i H_kin t|Ω_0 ⟩| ^2 = sin^6(t/2) .
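These probabilities follow a simple binomial pattern because the three color channels evolve independently under H_kin alone; a short numerical check (assuming the remaining weight lies in the doubly-excited states) is

import numpy as np

t = np.linspace(0, 2 * np.pi, 200)
p_vac = np.cos(t / 2) ** 6
p_qq  = np.cos(t / 2) ** 4 * np.sin(t / 2) ** 2    # one color channel
p_BB  = np.sin(t / 2) ** 6
p_two = np.cos(t / 2) ** 2 * np.sin(t / 2) ** 4    # one doubly-excited channel
assert np.allclose(p_vac + 3 * p_qq + 3 * p_two + p_BB, 1.0)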
For large periods of the evolution, the wavefunction is dominated by BB as shown in Fig. <ref>. Exact time evolution, on the other hand, has a small probability of BB
which suggests that detecting
B B could lead to an additional way to mitigate Trotter errors.
It is interesting that the kinetic term alone favors transitioning the trivial vacuum into color singlets on each site. This same behavior holds
for N_f=2 where the dominant transition is to ΔΔΔΔ.
CHAPTER: QUANTUM SIMULATIONS OF WEAK DECAY IN 1+1 DIMENSIONAL QUANTUM CHROMODYNAMICS
This chapter is associated with Ref. <cit.>:
“Preparations for Quantum Simulations of Quantum Chromodynamics in 1+1 Dimensions: (II) Single-Baryon β-Decay in Real Time" by Roland C. Farrell, Ivan A. Chernyshev, Sarah J. M. Powell, Nikita A. Zemlevskiy, Marc Illa and Martin J. Savage.
§ INTRODUCTION
A quantitative exploration of hadronic decays and nuclear reaction dynamics resolved
at very short time scales using quantum simulations will provide a new window into
strong-interaction processes that lies beyond the capabilities of experiment.
In chemistry, the development of
femtosecond laser-pulse imaging in the 1980s <cit.>, allowed for reaction pathways to be studied in real time (for an overview, see Ref. <cit.>).
Although a similar experimental procedure is not available for strong processes, it is expected that quantum simulations will provide analogous insight into hadronic dynamics.
Perhaps the simplest non-trivial class of such reactions to begin exploring is the
β-decay of low-lying hadrons and nuclei.
Single β-decay rates of nuclei have played a central role in defining the
Standard Model (SM) of strong and electroweak processes <cit.>. They
initially provided evidence that the weak (charged-current)
quark eigenstates differ from the strong eigenstates, and, more recently, are
providing stringent tests of the unitarity of the Cabibbo-Kobayashi-Maskawa (CKM) matrix <cit.>.
For recent reviews of β-decay, see, e.g., Refs. <cit.>.
The four-Fermi operators responsible for β-decay <cit.> in the SM
emerge from operator product expansions (OPEs)
of the non-local operators coming from the exchange of a charged-gauge boson (W^-) between quarks and leptons.
Of relevance to this work is the four-Fermi operator, which gives rise to the flavor changing quark process d→ u e^-ν.
In the absence of higher-order electroweak processes, including electromagnetism,
matrix elements of these operators factorize between the hadronic and leptonic sectors. This leaves, for example, a non-perturbative evaluation of n→ p e^-ν for neutron decay, which is constrained significantly by the approximate global flavor symmetries of QCD.
Only recently have the observed systematics of β-decay rates of nuclei been understood without the need for phenomenological re-scalings of the axial coupling constant,
g_A.
As has long been anticipated, the correct decay rates are recovered when two-nucleon and higher-body interactions are included within the
effective field theories (EFTs) (or meson-exchange currents) <cit.>.
This was preceded by successes of EFTs in describing electroweak processes of few-nucleon systems through the inclusion of higher-body electroweak operators (not constrained by strong interactions alone),
e.g., Refs. <cit.>.
The EFT framework describing nuclear β-decays involves contributions from “potential-pion" and “radiation-pion" exchanges <cit.>
(an artifact of a system of relativistic and non-relativistic particles <cit.>)
and real-time simulations of these processes are expected to be able to isolate these distinct contributions.
Recently,
the first Euclidean-space lattice QCD calculations of Gamow-Teller matrix elements in light nuclei
(at unphysical light quark masses and without fully-quantified uncertainties)
have been performed <cit.>,
finding results that are consistent with nature.
While β-decay is a well-studied and foundational area of sub-atomic physics,
the double-β-decay of nuclei continues to present a theoretical challenge in the
search for physics beyond the SM.
For a recent review of the “status and prospects" of ββ-decay, see Ref. <cit.>.
Although 2νββ-decay is allowed in the SM, and is
a second order β-decay process,
0νββ-decay requires the violation of lepton number.
Strong interactions clearly play an essential role
in the experimental detection of the ββ-decay of nuclei, but
such contributions are non-perturbative and complex, and, for example,
the EFT descriptions involve contributions from two- and higher-body correlated operators <cit.>.
The ability to study the real-time dynamics of such decay process in nuclei would likely
provide valuable insight into the underlying strong-interaction mechanisms, and potentially offer first principles constraints beyond those from Euclidean-space lattice QCD.[
For discussions of the potential of lattice QCD to impact ββ-decay, see, e.g., Refs. <cit.>.]
This chapter is an extension of the previous chapter
to include flavor-changing
weak interactions via a four-Fermi operator that generates the β-decay
of hadrons and nuclei. The terms in the lattice Hamiltonian that generate a Majorana mass for the neutrinos are also given, although not included in the simulations.
Applying the JW mapping, it is found that a single generation of the SM (quarks and leptons) maps onto 16 qubits per spatial lattice site.
Using Quantinuum's H1-1 20-qubit trapped ion quantum computer, the initial state of a baryon is both prepared and evolved with one and two Trotter steps on a single lattice site.
Despite only employing a minimal amount of error mitigation, results at the
∼ 5%-level are obtained, consistent with the expectations.
Finally, we briefly comment on the potential of such hierarchical dynamics for error-correction purposes in quantum simulations.
§ THE BETA-DECAY HAMILTONIAN FOR QUANTUM SIMULATIONS IN 1+1 DIMENSIONS
In nature, the β-decays of neutrons and nuclei involve energy and momentum transfers related to the energy scales of nuclear forces and of isospin breaking.
As these are much below the electroweak scale,
β-decay rates are well reproduced by matrix elements of
four-Fermi effective interactions with V-A structure <cit.>, of the form
H_β =
G_F/√(2) V_ud ψ_uγ^μ (1-γ_5)ψ_d ψ_eγ_μ (1-γ_5)ψ_ν_e + h.c. ,
where V_ud is the element of the CKM matrix for d→ u transitions,
and G_F is Fermi's coupling constant that is
measured to be G_F=1.1663787 (6) × 10^-5 GeV^-2 <cit.>.
This is the leading order (LO) SM result, obtained by matching amplitudes at
tree-level,
where G_F/√(2) = g_2^2/(8 M_W^2)
with M_W the mass of the W^± gauge boson
and g_2 the SU(2)_L coupling constant.
Toward simulating the SM in 3+1D, we consider
1+1D QCD containing u-quarks, d-quarks, electrons and electron neutrinos.
For simplicity,
we model β-decay through a vector-like four-Fermi operator,
H_β^1+1 =
G/√(2)ψ_uγ^μψ_d ψ_eγ_μ𝒞ψ_ν + h.c. ,
where 𝒞 = γ_1 is the charge-conjugation operator
whose purpose will become clear.
Appendices <ref> and <ref> provide details on
calculating the single-baryon β-decay rates in the
infinite volume and continuum limits in the SM and in the 1+1D model considered here.
The strong and weak interactions can be mapped
onto the finite-dimensional Hilbert space provided by a quantum computer
by using the Kogut-Susskind (KS) Hamiltonian formulation of
lattice gauge theory <cit.>.
The KS discretization of the fields is such that
L spatial lattice sites
are split into 2L fermion sites
that separately accommodate
fermions (even sites) and anti-fermions (odd sites).
For the β-decay of baryons, the strong and the weak KS Hamiltonian (in axial gauge)
has the form <cit.>
H = H_ quarks + H_ leptons + H_ glue + H_β ,
where
H_quarks
= ∑_f=u,d[
1/2 a∑_n=0^2L-2 ( ϕ_n^(f)†ϕ_n+1^(f) + h.c. )
+
m_f ∑_n=0^2L-1 (-1)^nϕ_n^(f)†ϕ_n^(f)] ,
H_leptons
= ∑_f=e,ν[
1/2 a∑_n=0^2L-2 ( χ_n^(f)†χ_n+1^(f) + h.c. )
+
m_f ∑_n=0^2L-1 (-1)^nχ_n^(f)†χ_n^(f)] ,
H_ glue
= a g^2/2∑_n=0^2L-2∑_a=1^8
( ∑_m≤ n Q^(a)_m )^2 ,
H_β
= G/a √(2)∑_l=0^L-1 [
(ϕ_2l^(u)†ϕ_2l^(d) + ϕ_2l+1^(u)†ϕ_2l+1^(d) ) (χ_2l^(e)†χ_2l+1^(ν) - χ_2l+1^(e)†χ_2l^(ν) )
+
( ϕ_2l^(u)†ϕ_2l+1^(d) + ϕ_2l+1^(u)†ϕ_2l^(d) )
(χ_2l^(e)†χ_2l^(ν) - χ_2l+1^(e)†χ_2l+1^(ν) )+
h.c. ] .
The masses of the u-, d-quarks, electron and neutrino (Dirac) are m_u,d,e,ν,
and the strong and weak coupling constants are g and G. The SU(3) charges are
Q_m^(a), and
ϕ^(u,d)_n are the u- and d-quark field operators (which both transform in the fundamental representation of SU(3), and hence the sum over color indices has been suppressed). The electron and neutrino field operators are
χ^(e,ν)_n, and for the remainder of this paper the lattice spacing, a, will be set to unity.
We emphasize that the absence of gluon fields is due to the choice of axial gauge, whereas the lack of weak gauge fields is due to the
consideration of a low energy effective theory in which the heavy weak gauge bosons have been integrated out.
This results in, for example, the absence of parallel transporters in the fermion kinetic terms.
The JW mapping of the Hamiltonian in Eq. (<ref>) to qubits,
arranged as shown in Fig. <ref>,
is given by
H_quarks→ 1/2∑_l=0^L-1∑_f=u,d∑_c=0^2 m_f ( Z_l,f,c - Z_l,f̄,c + 2 )
-1/2∑_l=0^L-1∑_f=u,d∑_c=0^2 [ σ^+_l,f,c Z^7 σ^-_l,f,c + (1-δ_l,L-1) σ^+_l,f,c Z^7 σ^-_l+1,f,c + h.c. ] ,
H_leptons→ 1/2∑_l=0^L-1∑_f=e,ν m_f ( Z_l,f - Z_l,f̄ + 2 )
- 1/2∑_l=0^L-1∑_f=e,ν [ σ^+_l,f Z^7 σ^-_l,f + (1-δ_l,L-1) σ^+_l,f Z^7 σ^-_l+1,f + h.c. ] ,
H_ glue→ g^2/2∑_n=0^2L-2(2L-1-n)( ∑_f=u,d Q_n,f^(a) Q_n,f^(a) + 2 Q_n,u^(a) Q_n,d^(a))
+ g^2 ∑_n=0^2L-3∑_m=n+1^2L-2(2L-1-m) ∑_f=u,d∑_f'=u,d Q_n,f^(a) Q_m,f'^(a) ,
H_β→ G/√(2)∑_l = 0^L-1∑_c=0^2 ( σ^-_l, Z^6 σ^+_l,eσ^-_l,d,c Z^2 σ^+_l,u,c - σ^+_l, Z^8 σ_l,ν^- σ_l,d,c^- Z^2 σ^+_l,u,c
- σ^-_l, Z^2-cσ^-_l,,cσ^+_l,,c Z^c σ^+_l,e + σ^+_l,Z^3-cσ^-_l,,cσ^+_l,,c Z^1+cσ^-_l,ν
- σ^-_l,,c Z^3+cσ^+_l,eσ^-_l,ν Z^5-cσ^+_l,u,c - σ^+_l,σ^-_l,σ^-_l,,cZ^10σ^+_l,u,c
- σ^+_l,,c Z^c σ^+_l,eσ^-_l,ν Z^2-cσ^-_l,d,c - σ^+_l,σ^-_l,σ^+_l,,c Z^4 σ^-_l,d,c + h.c. )
,
where the sums of products of color charges are given by Eq. (<ref>).
§.§ Efficiently Mapping the L=1 Hamiltonian to Qubits
To accommodate the capabilities of current devices,
the quantum simulations performed in this work involve only a single spatial site, L=1,
where the structure of the Hamiltonian can be simplified.
In particular, without interactions between leptons, it is convenient to work with field operators that create and annihilate eigenstates of the free lepton Hamiltonian, H_ leptons.
These are denoted by “tilde operators" <cit.>,
which create the open-boundary-condition (OBC) analogs of plane waves.
In the tilde basis with the JW mapping, the lepton Hamiltonian is diagonal and becomes
H̃_ leptons = λ_ν(χ̃^(ν) †_0 χ̃^(ν)_0-χ̃^(ν) †_1 χ̃^(ν)_1) + λ_e(χ̃^(e) †_0 χ̃^(e)_0-χ̃^(e) †_1 χ̃^(e)_1) → λ_ν/2(Z_ν - Z_ν̄) + λ_e/2(Z_e - Z_ē)
,
where λ_ν,e = 1/2√(1+4m_ν,e^2).
The β-decay operator in Eq. (<ref>) becomes
H̃_β = G/√(2){ ( ϕ_0^(u)† ϕ_0^(d) + ϕ_1^(u)† ϕ_1^(d) ) [ 1/2(s_+^e s_-^ν - s_-^e s_+^ν)(χ̃_0^(e)† χ̃_0^(ν) + χ̃_1^(e)† χ̃_1^(ν))
+ 1/2( s_+^e s_+^ν + s_-^e s_-^ν)(χ̃_0^(e)† χ̃_1^(ν) - χ̃_1^(e)† χ̃_0^(ν))]
+ ( ϕ_0^(u)† ϕ_1^(d) + ϕ_1^(u)† ϕ_0^(d) )[
1/2(s_+^e s_+^ν - s_-^e s_-^ν)(χ̃_0^(e)† χ̃_0^(ν) - χ̃_1^(e)† χ̃_1^(ν))
- 1/2(s_+^e s_-^ν + s_-^e s_+^ν)(χ̃_0^(e)† χ̃_1^(ν) + χ̃_1^(e)† χ̃_0^(ν)) ] + h.c. } ,
where s^ν,e_± = √(1± m_ν,e/λ_ν,e).
In our simulations, the initial state of the quark-lepton
system is prepared in a strong eigenstate with baryon number B=+1 in the quark sector
and the vacuum, |Ω⟩_ lepton, in the lepton sector.
One of the benefits of working in the tilde basis is that the vacuum satisfies χ̃^(e,ν)_0|Ω⟩_ lepton = χ̃^(e,ν) †_1 |Ω⟩_ lepton = 0, and the terms in the first and third lines of Eq. (<ref>) do not contribute to β-decay. For the processes we are interested in, this results in an effective β-decay operator of the form
H̃_β = G/√(2){ ( ϕ_0^(u)† ϕ_0^(d) + ϕ_1^(u)† ϕ_1^(d) ) [ 1/2( s_+^e s_+^ν + s_-^e s_-^ν)(χ̃_0^(e)† χ̃_1^(ν) - χ̃_1^(e)† χ̃_0^(ν))]
- ( ϕ_0^(u)† ϕ_1^(d) + ϕ_1^(u)† ϕ_0^(d) )[ 1/2(s_+^e s_-^ν + s_-^e s_+^ν)(χ̃_0^(e)† χ̃_1^(ν) + χ̃_1^(e)† χ̃_0^(ν)) ] + h.c. } .
The insertion of
the charge-conjugation matrix,
𝒞, in the continuum operator, Eq. (<ref>),
is
necessary to obtain a β-decay operator that does not annihilate the lepton vacuum.
To minimize the length of the string of Zs in the JW mapping, the lattice layout in Fig. <ref> is used.
In this layout, the hopping piece of H_ quarks has only 5 Zs between the quark and antiquark raising and lowering operators and the β-decay operator is
H̃_β→G/√(2)∑_c=r,g,b [ 1/2( s_+^e s_+^ν + s_-^e s_-^ν) ( σ^-_νσ^+_e - σ^+_e Z^2 σ^-_ν ) (σ^-_d,cZ^2 σ^+_u,c + σ^-_d,c Z^2 σ^+_u,c )
- 1/2(s_+^e s_-^ν + s_-^e s_+^ν) (σ^-_νσ^+_e + σ^+_e Z^2 σ^-_ν ) ( σ^-_d,c Z^8 σ^+_u,c + σ^+_u,c Z^2 σ^-_d,c ) + h.c. ] .
In total, the L=1 system requires 16 (12 quark and 4 lepton)
qubits. See App. <ref> for the complete L=1 Hamiltonian in terms of qubits.
§.§ A Majorana Mass for the Neutrino
Although not relevant to the simulations performed in Sec. <ref>, it is of current interest to consider the inclusion of a Majorana mass term for the neutrinos.
A Majorana mass requires and induces the violation of lepton number by
|Δ L| = 2, and is not present in the minimal SM, defined by dim-4 operators.
However, the Weinberg operator <cit.> enters at dim-5 and generates an effective Majorana mass for the neutrinos,
L^ Weinberg = 1/ 2Λ( L^c ϵϕ)
(ϕ^T ϵ L )
+ h.c. ,
L = ( ν , e )^T_L
, ϕ = ( ϕ^+ , ϕ^0 )^T
, ⟨ϕ⟩ = ( 0 , v/√(2))^T
, ϵ = iσ_2
,
→
-v^2/4Λν^c_L ν_L
+ h.c. + ....
where ϕ is the Higgs doublet,
L^c denotes the charge-conjugated left-handed lepton doublet,
v is the Higgs vacuum expectation value and Λ is a high energy scale characterizing physics beyond the SM.
The ellipsis denote interaction terms involving components of the Higgs doublet fields and the leptons.
This is the leading contribution beyond the minimal SM,
but does not preclude contributions from other sources.
On a 1+1D lattice there is only a single
|Δ L | = 2 local operator
with the structure of a mass term
which, using the JW mapping along with the qubit layout in Fig. <ref>, takes the form
H_ Majorana =
1/2 m_M
∑_n= even^2L-2(
χ_n^(ν)χ_n+1^(ν)
+ h.c.)
→1/2 m_M ∑_l = 0^L-1(
σ^+_l,ν
Z^7
σ^+_l, + h.c.)
.
While the operator has support on a single spatial lattice site, it does not contribute to
0νββ-decay on a lattice with only a single spatial site.
This is because the processes that it could potentially induce, such as
Δ^-Δ^-→Δ^0Δ^0 e^- e^-,
are Pauli-blocked by the single electron site.
At least two spatial sites are required for any such process producing two electrons in the final state.
§ QUANTUM SIMULATIONS OF THE BETA-DECAY OF ONE BARYON ON ONE LATTICE SITE
In this section, quantum simulations of the β-decay of a single baryon are performed
in N_f=2 flavor QCD with L=1 spatial lattice site.
The required quantum circuits to perform one and two Trotter steps of time evolution were developed and run on the Quantinuum H1-1 20 qubit trapped ion quantum computer and its simulator H1-1E <cit.>.
§.§ Preparing to Simulate Beta-Decay
It is well known that, because of confinement, the energy eigenstates (asymptotic states) of QCD
are color-singlet hadrons, which are composite objects of quarks and gluons.
On the other hand,
the operators responsible for β-decay, given in Eq. (<ref>),
generate transitions between d- and u-quarks.
As a result, observable effects of H̃_β, in part,
are found in transitions between
hadronic states whose matrix elements depend on the distribution of the quarks within.
Toward quantum simulations of the β-decay of neutrons and nuclei more generally,
the present work focuses on the decay of a single baryon.
Generically, three elements are required for real-time quantum simulations of
the β-decay of baryons:
* Prepare the initial hadronic state that will subsequently undergo β-decay.
In this work, this is one of the single-baryon states (appropriately selected in the spectrum)
that is an eigenstate of the strong Hamiltonian alone,
i.e., the weak coupling constant is set equal to G=0.
* Perform (Trotterized) time-evolution using the full (G≠0) Hamiltonian.
* Measure one or more of the lepton qubits.
If leptons are detected, then β-decay has occurred.
In 1+1D, Fermi statistics
preclude the existence of a light isospin I=1/2 nucleon,
and the lightest baryons are in an I=3/2 multiplet
(Δ^++, Δ^+, Δ^0, Δ^-)
(using the standard electric charge assignments of the up and down quarks).
We have chosen to simulate the decay
Δ^- →Δ^0 + e + ν, which, at the quark level, involves
baryon-interpolating operators with the quantum numbers of
ddd→ udd.
In order for β-decay to be kinematically allowed,
the input-parameters of the theory must be such that
M_Δ^- > M_Δ^0 + M_ν + M_e.
This is accomplished through tuning the parameters of the Hamiltonian.
The degeneracy in the iso-multiplet is lifted
by using different values for the up and down quark masses.
It is found that the choice of parameters, m_u=0.9, m_d=2.1, g=2 and m_e,ν = 0
results in the desired hierarchy of baryon and lepton masses.
The relevant part of the spectrum, obtained from an exact diagonalization of the Hamiltonian,
is shown in Table <ref>. Although kinematically allowed, multiple instances of β-decay cannot occur for L=1 as there can be at most one of each (anti)lepton.
Note that even though m_e,ν = 0,
the electron and neutrino are gapped due to the finite spatial volume.
To prepare the Δ^- initial state,
we exploit the observation made in the previous chapter,
that the stretched-isospin eigenstates of the Δ-baryons,
with third component of isospin I_3 = ± 3/2,
factorize between the u and d flavor sectors for L=1.
Therefore, the previously developed
Variational Quantum Eigensolver (VQE) <cit.>
circuit <cit.> used to prepare the one-flavor vacuum can be used to initialize the two-flavor Δ^- wave function.
This is done by initializing the vacuum in the lepton sector,
preparing the state |d_r d_g d_b⟩ in the d-sector,
and applying the VQE circuit to produce the u-sector vacuum.
In the tilde basis, the lepton vacuum is the unoccupied state (trivial vacuum),
and the complete state-preparation circuit is shown in Fig. <ref>,
where θ is shorthand for RY(θ).
The rotation angles are related by
θ_0 = -2 sin^-1[ tan(θ/2) cos(θ_1/2) ] ,
θ_00 = -2 sin^-1[ tan(θ_0/2) cos(θ_01/2) ] ,
θ_01 = -2 sin^-1[ cos(θ_11/2) tan(θ_1/2) ]
and, for m_u = 0.9 and g=2,[
The u and ū parts of the lattice are separated by a fully packed d sector, which implies that the components of the wavefunction with an odd number of anti-up quarks have relative minus signs compared to the one-flavor vacuum wavefunction.
]
θ = 0.2256 , θ_1 = 0.4794 , θ_11 = 0.3265 .
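These relations are straightforward to evaluate numerically; the short Python check below (illustrative only, numpy rather than the simulation workflow itself) reproduces the remaining RY angles of the state-preparation circuit from the three quoted parameters.
import numpy as np
theta, theta1, theta11 = 0.2256, 0.4794, 0.3265
theta01 = -2 * np.arcsin(np.cos(theta11 / 2) * np.tan(theta1 / 2))
theta0  = -2 * np.arcsin(np.tan(theta / 2) * np.cos(theta1 / 2))
theta00 = -2 * np.arcsin(np.tan(theta0 / 2) * np.cos(theta01 / 2))
print(theta0, theta00, theta01)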
In total, state preparation requires the application of 9 CNOT gates.
Once the Δ^- baryon state has been initialized
on the register of qubits, it is then evolved in time with the full Hamiltonian.
The quantum circuits that implement the Trotterized time-evolution
induced by H_ quarks and H_ glue were previously developed in
Ref. <cit.>,
where it was found that, by using an ancilla, each Trotter step
can be implemented using 114 CNOTs.
The lepton Hamiltonian, H̃_ leptons, contains only single-qubit Z operators, which are Trotterized with single-qubit rotations.
The circuits required to implement a Trotter step from H̃_β are similar to those developed in Ref. <cit.>,
and their construction is outlined in App. <ref>.
For the present choice of parameters,
the main contribution to the initial (Δ^-) wave function
is | d_b d_g d_r ⟩,
i.e., the quark configuration associated with the “bare" baryon in the d-sector and the trivial vacuum in the u-sector.
This implies that the dominant contribution to the β-decay
is from the ϕ_0^(u)†ϕ_0^(d)χ̃_0^(e)†χ̃_1^(ν)
term[Note that the ϕ_0^(u)†ϕ_0^(d)χ̃_1^(e)†χ̃_0^(ν) term is suppressed since the lepton vacuum in the tilde basis satisfies χ̃_1^(e,ν)†|Ω⟩_ lep = χ̃_0^(e,ν)|Ω⟩_ lep = 0.] in Eq. (<ref>),
which acts only on valence quarks, and the β-decay operator can be approximated by
H̃_β^ val
=
G/√(2) (σ^-_νσ^+_e ∑_c=r,g,bσ^-_d,cZ^2 σ^+_u,c + h.c. ) ,
for these parameter values. See App. <ref> for details on the validity of this approximation.
All of the results presented in this section implement this interaction,
the Trotterization of which requires 50 CNOTs.
Notice that, if the Trotterization of H̃_β^ val is placed at the end of the first Trotter step, then
U(t) = exp(-i H̃_β^ val t) ×exp [ -i (H̃_ leptons + H_ quarks + H_ glue)t ] and the initial exponential (corresponding to strong-interaction evolution)
can be omitted as it acts on an eigenstate (the Δ^-).
This reduces the CNOTs required for one and two Trotter steps to 50 and 214, respectively.
For an estimate of the number of CNOTs required to time evolve with the β-decay Hamiltonian on larger lattices see App. <ref>.
The probability of β-decay,
as computed both through exact diagonalization of the Hamiltonian
and through Trotterized time-evolution using the qiskit classical simulator <cit.>,
is shown in Fig. <ref>.
The periodic structure is a finite volume effect, and the probability of
β-decay is expected to tend to an exponential in time as L increases,
see App. <ref>.
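The qualitative difference between exact and Trotterized evolution can be illustrated in a few lines of Python. The sketch below uses a generic few-level toy Hamiltonian (not the 16-qubit model simulated here), split into a diagonal "strong" piece and a weak perturbation, and compares the exact decay probability with a first-order Trotter approximation; all names and parameter values are illustrative.
import numpy as np
from scipy.linalg import expm
rng = np.random.default_rng(0)
dim = 6
H0 = np.diag(rng.uniform(0.0, 2.0, dim))                        # diagonal "strong" Hamiltonian
V = 0.05 * rng.standard_normal((dim, dim)); V = (V + V.T) / 2   # weak perturbation
psi0 = np.zeros(dim); psi0[2] = 1.0                             # initial eigenstate of H0
def decay_probability(t, n_trotter=None):
    if n_trotter is None:
        U = expm(-1j * (H0 + V) * t)                            # exact evolution
    else:
        dt = t / n_trotter
        step = expm(-1j * V * dt) @ expm(-1j * H0 * dt)         # first-order Trotter step
        U = np.linalg.matrix_power(step, n_trotter)
    return 1.0 - abs(psi0 @ U @ psi0) ** 2                      # 1 - persistence probability
for t in (0.5, 1.0, 2.0, 4.0):
    print(t, decay_probability(t), decay_probability(t, n_trotter=2))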
Entanglement in quantum simulations of lattice gauge theories
is a growing area of focus,
see, e.g., Refs. <cit.>, and
it is interesting to examine the evolution of entanglement during the β-decay process.
Before the decay, the quarks and antiquarks are together in a pure state as the leptons are in the vacuum, and subsequent time evolution of the state introduces components into the wavefunction that have non-zero population of the lepton states.
One measure of entanglement is the linear entropy,
S_L = 1 - Tr[ρ_q^2] ,
between the quarks and antiquarks plus leptons.
It is constructed by tracing the full density matrix, ρ,
over the antiquark and lepton sector to form the reduced density matrix
ρ_q = Tr_q̄, leptons [ρ].
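For reference, the linear entropy of a pure state can be evaluated directly from a statevector by a partial trace; a minimal numpy sketch (with a Bell pair standing in for the quark versus antiquark-plus-lepton partition) is as follows.
import numpy as np
def linear_entropy(psi, keep, n_qubits):
    """S_L = 1 - Tr[rho^2] of the reduced density matrix on the `keep` qubits."""
    traced = [q for q in range(n_qubits) if q not in keep]
    psi = np.asarray(psi).reshape([2] * n_qubits)
    psi = np.moveaxis(psi, keep, list(range(len(keep))))
    m = psi.reshape(2 ** len(keep), 2 ** len(traced))
    rho = m @ m.conj().T
    return 1.0 - np.real(np.trace(rho @ rho))
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)            # maximally entangled two-qubit state
print(linear_entropy(bell, keep=[0], n_qubits=2))     # 0.5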
Figure <ref> shows the linear entropy computed through exact diagonalization
of the Hamiltonian with the parameters discussed previously in the text.
By comparing with the persistence probability in Fig. <ref>,
it is seen that the entanglement entropy evolves at twice the frequency
of the β-decay probability.
This is because β-decay primarily transitions the baryon between the ground state of the Δ^- and Δ^0.
It is expected that these states will have a comparable amount of entanglement,
and so the entanglement is approximately the same when the decay probabilities are 0 and 1.
While this makes this particular example somewhat uninteresting, it does demonstrate that when multiple final states are accessible, the time-dependence of the entanglement structure might be revealing.
§.§ Simulations Using Quantinuum's H1-1 20 Qubit Trapped Ion Quantum Computer
Both the initial state preparation and one and two steps of Trotterized time evolution were executed
using Quantinuum's H1-1 20 qubit trapped ion quantum computer <cit.> and its simulator H1-1E[The classical simulator H1-1E includes depolarizing gate noise, leakage errors, crosstalk noise and dephasing noise due to transport and qubit idling <cit.>.] (for details on the specifications of H1-1, see App. <ref>).
After transpilation onto the native gate set of H1-1, a single Trotter step requires 59 ZZ gates, while two Trotter steps require 212 ZZ gates.[The number of ZZ gates could be further reduced by 5 by not resetting the ancilla.]
By post-selecting results on “physical" states with baryon number B=1 and lepton number L=0
to mitigate single-qubit errors (e.g., Ref. <cit.>),
approximately 90% (50%) of the total events from the one (two) Trotter step circuit remained. Additionally, for the two Trotter step circuit, results were selected where the ancilla qubit was in the |0⟩ state (around 95%).[For this type of error, the mid-circuit measurement and re-initialization option available for H1-1 could have been used to identify the case where the bit-flip occurred after the ancilla was used and the error had no effect on the final results.]
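The post-selection itself is a simple filtering of the measured bitstrings. A schematic Python helper is given below; the functions that assign baryon and lepton number to a bitstring depend on the qubit layout and JW conventions of Fig. <ref> and are supplied by the user, and the toy predicate in the example is only illustrative.
def post_select(counts, charges, targets):
    """Keep only bitstrings whose conserved charges match `targets`.
    `charges` maps a label to a function bitstring -> value (layout dependent)."""
    kept = {b: c for b, c in counts.items()
            if all(f(b) == targets[name] for name, f in charges.items())}
    norm = sum(kept.values())
    survival = norm / sum(counts.values())
    return {b: c / norm for b, c in kept.items()}, survival
# toy example: 4 qubits, post-select on total occupation number N = 2
counts = {"0101": 180, "0110": 14, "1001": 6}
filtered, frac = post_select(counts, {"N": lambda b: b.count("1")}, {"N": 2})
print(filtered, frac)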
The results of the simulations
are shown in Fig. <ref> and given in Table <ref>.
By comparing the results from H1-1 and H1-1E (using 200 shots) it is seen that the simulator is able to faithfully reproduce the behavior of the quantum computer.
The emulator was also run with 400 shots and clearly shows convergence to the expected value, verifying that the agreement between data and theory was not an artifact due to low statistics (and large error bars).
Compared with the results presented in Ref. <cit.> that were performed using IBM's ibmq_jakarta and ibm_perth,
error mitigation techniques were not
applied to the present simulations due to the overhead in resource requirements.
Specifically, Pauli twirling, dynamical decoupling, decoherence renormalization and measurement error mitigation
were not performed. This is practical because the two-qubit gate, state preparation and measurement (SPAM) errors are an order of magnitude smaller on Quantinuum's trapped ion system
compared to those of IBM's superconducting qubit systems (and a similar error rate on the single-qubit gates) <cit.>.
§ SPECULATION ABOUT QUANTUM SIMULATIONS WITH A HIERARCHY OF LENGTH SCALES
It is interesting to consider how a hierarchy of length scales,
as present in the SM, may be helpful in error correction.
In the system we have examined, the low energy strong sector is composed of mesons, baryons and nuclei, with both color singlet and non-singlet excitations (existing at higher energies).
As observed in Ref. <cit.>, OBCs allow for
relatively low-energy colored “edge" states to exist near the boundary of the lattice.
The energy of a color non-singlet grows linearly with its distance
from the boundary, leading to a force on colored objects.
This will cause colored errors in the bulk to migrate to the edge of the lattice where they could be detected and possibly removed.
This is one benefit of using axial gauge, where Gauss's law is automatically enforced,
and a colored “error" in the bulk generates a color flux tube that extends to the boundary.
Localized two-bit-flip errors can create color-singlet
excitations that do not experience a force toward the boundary, but which are
vulnerable to weak decay.
For sufficiently large lattices, color singlet excitations will decay weakly down to stable states
enabled by the near continuum of lepton states.
In many ways, this resembles the
quantum imaginary-time evolution (QITE) <cit.>
algorithm, which is a special case of coupling to open systems,
where quantum systems are driven into their ground state by embedding them in a larger system that acts as a heat reservoir.
One can speculate that, in the future, quantum simulations of QCD
will benefit from also including electroweak interactions as a mechanism to cool the strongly-interacting sector from particular classes of errors.
This particular line of investigation is currently at a “schematic” level, and significantly more work is required to quantify its utility.
Given the quantum resource requirements, it is likely that the Schwinger model will
provide a suitable system to explore such scenarios.
§ SUMMARY AND CONCLUSIONS
Quantum simulation of SM physics is in its infancy and, for practical reasons, has previously been
limited to either QCD or QED in one or two spatial dimensions.
In this chapter, we have started the integration of the electroweak sector into quantum simulations of QCD by examining the time-evolution of the β-decay of one baryon.
In addition to the general framework that allows for
simulations of arbitrary numbers of lattice sites in one dimension,
we present results for L=1 spatial lattice site, which requires 16 qubits.
Explicitly, this work considered quantum simulations of
Δ^-→Δ^0 e ν
in two flavor 1+1D QCD for L=1 spatial lattice site.
Simulations were performed using Quantinuum's H1-1 20-qubit trapped ion quantum computer
and classical simulator H1-1E,
requiring 17 (16 system and 1 ancilla) qubits.
Results were presented for both one and two Trotter steps, including the state preparation of the initial baryon, requiring 59 and 212 two-qubit gates respectively.
Even with 212 two-qubit gates, H1-1 provided results that
are consistent with the expected results, within uncertainties, without error-mitigation beyond physical-state post selection.
While not representative of β-decay in the continuum,
these results demonstrate the potential of quantum simulations to determine
the real-time evolution of decay and reaction processes in nuclear and
high-energy physics.
High temporal-resolution studies of the evolution of the quarks and gluons
during hadronic decays and nuclear reactions
are expected to provide new insights into the mechanisms responsible for these processes,
and lead to new strategies for further reducing systematic errors in their prediction.
§ THE COMPLETE SPIN HAMILTONIAN FOR L=1
After the JW mapping of the Hamiltonian to qubits, and using the tilde-basis for the leptons,
the four contributing terms are
H = H_ quarks + H̃_ leptons + H_ glue + H̃_β ,
H_ quarks= 1/2 [ m_u (Z_0 + Z_1 + Z_2 -Z_6 - Z_7 - Z_8 + 6 )+ m_d (Z_3 + Z_4 + Z_5 -Z_9 - Z_10 - Z_11 + 6 ) ]
- 1/2 (σ^+_6 Z_5 Z_4 Z_3 Z_2 Z_1 σ^-_0 + σ^-_6 Z_5 Z_4 Z_3 Z_2 Z_1 σ^+_0 + σ^+_7 Z_6 Z_5 Z_4 Z_3 Z_2 σ^-_1 + σ^-_7 Z_6 Z_5 Z_4 Z_3 Z_2 σ^+_1
+ σ^+_8 Z_7 Z_6 Z_5 Z_4 Z_3 σ^-_2 + σ^-_8 Z_7 Z_6 Z_5 Z_4 Z_3 σ^+_2 + σ^+_9 Z_8 Z_7 Z_6 Z_5 Z_4 σ^-_3 + σ^-_9 Z_8 Z_7 Z_6 Z_5 Z_4 σ^+_3
+ σ^+_10 Z_9 Z_8 Z_7 Z_6 Z_5 σ^-_4 + σ^-_10 Z_9 Z_8 Z_7 Z_6 Z_5 σ^+_4 + σ^+_11 Z_10 Z_9 Z_8 Z_7 Z_6 σ^-_5 + σ^-_11 Z_10 Z_9 Z_8 Z_7 Z_6 σ^+_5 ) ,
H̃_ leptons = 1/4√(1 +4 m_e^2)(Z_13 - Z_15) + 1/4√(1 +4 m_ν^2)(Z_12 - Z_14)
H_ glue = g^2/2 [ 1/3(3 - Z_1 Z_0 - Z_2 Z_0 - Z_2 Z_1) + σ^+_4σ^-_3σ^-_1σ^+_0 + σ^-_4σ^+_3σ^+_1σ^-_0
+ σ^+_5Z_4σ^-_3σ^-_2Z_1σ^+_0 + σ^-_5Z_4σ^+_3σ^+_2Z_1σ^-_0 + σ^+_5σ^-_4σ^-_2σ^+_1 + σ^-_5σ^+_4σ^+_2σ^-_1
+ 1/12 (2 Z_3 Z_0 + 2Z_4 Z_1 + 2Z_5 Z_2 - Z_5 Z_0 - Z_5 Z_1 - Z_4 Z_2 - Z_4 Z_0 - Z_3 Z_1 - Z_3 Z_2 ) ] ,
H̃_β = G/√(2){1/2( s_+^e s_+^ν + s_-^e s_-^ν) [(σ^-_14σ^+_13 - σ^+_15 Z_14 Z_13σ^-_12) (σ^-_3 Z_2 Z_1 σ^+_0 + σ^-_4 Z_3 Z_2 σ^+_1 + σ^-_5 Z_4 Z_3 σ^+_2
+ σ^-_9 Z_8 Z_7 σ^+_6+ σ^-_10 Z_9 Z_8 σ^+_7 + σ^-_11 Z_10 Z_9 σ^+_8) + (σ^+_14σ^-_13 - σ^-_15 Z_14 Z_13σ^+_12) (σ^+_3 Z_2 Z_1 σ^-_0
+ σ^+_4 Z_3 Z_2 σ^-_1 + σ^+_5 Z_4 Z_3 σ^-_2 + σ^+_9 Z_8 Z_7 σ^-_6 + σ^+_10 Z_9 Z_8 σ^-_7 + σ^+_11 Z_10 Z_9 σ^-_8) ]
- 1/2(s_+^e s_-^ν + s_-^e s_+^ν) [ (σ^-_14σ^+_13 + σ^+_15 Z_14 Z_13σ^-_12) ( σ^-_9 Z_8 Z_7 Z_6 Z_5 Z_4 Z_3 Z_2 Z_1 σ^+_0
+ σ^-_10 Z_9 Z_8 Z_7 Z_6 Z_5 Z_4 Z_3 Z_2 σ^+_1+ σ^-_11 Z_10 Z_9 Z_8 Z_7 Z_6 Z_5 Z_4 Z_3 σ^+_2 + σ^+_6 Z_5 Z_4 σ^-_3
+ σ^+_7 Z_6 Z_5 σ^-_4 + σ^+_8 Z_7 Z_6 σ^-_5 )
+ (σ^+_14σ^-_13 + σ^-_15 Z_14 Z_13σ^+_12) ( σ^+_9 Z_8 Z_7 Z_6 Z_5 Z_4 Z_3 Z_2 Z_1 σ^-_0 + σ^+_10 Z_9 Z_8 Z_7 Z_6 Z_5 Z_4 Z_3 Z_2 σ^-_1
+ σ^+_11 Z_10 Z_9 Z_8 Z_7 Z_6 Z_5 Z_4 Z_3 σ^-_2 + σ^-_6 Z_5 Z_4 σ^+_3 + σ^-_7 Z_6 Z_5 σ^+_4 + σ^-_8 Z_7 Z_6 σ^+_5 ) ] } .
In the mapping, the qubits are indexed right-to-left and,
for example, qubit zero (one) corresponds to a red (green) up-quark.
The terms highlighted in blue provide the leading contribution to the β-decay process
for the parameters used in the text and make up the operator used for the simulations performed in Sec. <ref>.
§ BETA-DECAY IN THE STANDARD MODEL
To put our simulations in 1+1D into context,
it is helpful to outline relevant aspects of single-hadron β-decays
in the SM in 3+1D.
Far below the electroweak symmetry-breaking scale,
charged-current interactions can be included as an infinite
set of effective operators in a systematic EFT description, ordered by their contributions in powers of low-energy scales divided by appropriate powers of M_W.
For instance, β-decay rates between hadrons scale as
∼Λ (G_F Λ^2 )^2 (Λ / M_W )^n,
where Λ denotes the low-energy scales,
G_F/√(2) = g_2^2/8 M_W^2 is Fermi's constant and
LO (in Λ / M_W)
corresponds to n=0.
By matching operators at LO in SM interactions, the β-decay of the neutron is induced by an effective Hamiltonian density of the form <cit.>
H_β =
G_F/√(2) V_ud ψ̄_uγ^μ (1-γ_5)ψ_d ψ̄_eγ_μ (1-γ_5)ψ_ν_e + h.c. ,
where V_ud is the element of the CKM matrix for d→ u transitions.
As H_β factors into contributions from lepton and quark operators, the matrix element factorizes into a plane-wave lepton contribution and a non-perturbative hadronic component requiring matrix elements of the quark operator between hadronic states.
With the mass hierarchies and symmetries in nature,
there are two dominant form factors, so that,
⟨ p(p_p) | ψ̄_uγ^μ (1-γ_5)ψ_d | n(p_n)⟩ =
U̅_p [ g_V(q^2) γ^μ - g_A(q^2) γ^μγ_5 ] U_n ,
where q is the four-momentum transfer of the process, g_V(0) = 1 in the isospin limit and g_A(0)=1.2754(13) <cit.> as measured in experiment.
The matrix element for n→ p e^- ν̄_e
calculated from the Hamiltonian in Eq. (<ref>)
is
|ℳ|^2 = 16 G_F^2 | V_ud|^2 M_n M_p (g_V^2 + 3 g_A^2)( E_ν E_e + (g_V^2-g_A^2)/(g_V^2 + 3 g_A^2) p⃗_e · p⃗_ν̄ ) ,
which leads to a neutron width of
(at LO in (M_n-M_p)/M_n and m_e/M_n)
Γ_n =
G_F^2 |V_ud|^2 (M_n-M_p)^5/60π^3 ( g_V^2 + 3 g_A^2 ) f^'(y) ,
where f^'(y) is a phase-space factor,
f^' (y) = √(1-y^2)(1 - 9/2y^2 - 4 y^4)
- 15/2y^4 log[ y/√(1-y^2)+1] ,
and y=m_e/(M_n-M_p).
Radiative effects, recoil effects and other higher-order contributions have been neglected.
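As a rough numerical cross-check of Eqs. (<ref>) and (<ref>), the width can be evaluated with approximate PDG-like inputs (the values below are illustrative and, consistent with the text, radiative and recoil corrections are neglected); the resulting lifetime should land within roughly 10% of the measured ∼880 s, the remainder being attributable to the neglected corrections.
import numpy as np
GF, Vud = 1.1664e-5, 0.9737               # GeV^-2, CKM element
gV, gA = 1.0, 1.2754
Mn, Mp, me = 0.93957, 0.93827, 0.000511   # GeV
delta = Mn - Mp
y = me / delta
fprime = (np.sqrt(1 - y**2) * (1 - 4.5 * y**2 - 4 * y**4)
          - 7.5 * y**4 * np.log(y / (np.sqrt(1 - y**2) + 1)))
Gamma = GF**2 * Vud**2 * delta**5 / (60 * np.pi**3) * (gV**2 + 3 * gA**2) * fprime
print("Gamma_n [GeV] =", Gamma, "  tau_n [s] =", 6.582e-25 / Gamma)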
§ BETA-DECAY IN 1+1 DIMENSIONS: THE L TO INFINITY AND CONTINUUM LIMITS
In 1+1D, the fermion field has dimensions
[ψ] = 1/2,
and a four-Fermi operator has dimension
[θ̂] = 2.
Therefore, while in 3+1D
[G_F ] = -2,
in 1+1D, the coupling has dimension [G ] = 0.
For our purposes, to describe the β-decay of a Δ^--baryon in 1+1D,
we have chosen to work with an effective Hamiltonian of the form
H_β^1+1 = G/√(2) ψ̄_uγ^μψ_d ψ̄_eγ_μψ_ν + h.c. = G/√(2) ψ̄_uγ^μψ_d ψ̄_eγ_μ𝒞ψ_ν + h.c. ,
where we have chosen the basis
γ_0 = ( [ 1 0; 0 -1 ] ) , γ_1 = ( [ 0 1; -1 0 ] ) = 𝒞 , γ_0 γ_μ^†γ_0 = γ_μ , γ_0 𝒞^†γ_0 = 𝒞 , {γ_μ , γ_ν} = 2 g_μν .
For simplicity, the CKM matrix element is set equal to unity
as only one generation of particles is considered.
In the limit of exact isospin symmetry, which we assume to be approximately valid in this appendix,
the four Δ baryons form an isospin quartet
and can be embedded in a tensor T^abc (as is the case for the Δ resonances in nature)
as
T^111=Δ^++,
T^112=T^121=T^211=Δ^+/√(3),
T^122=T^221=T^212=Δ^0/√(3),
T^222=Δ^-.
Matrix elements of the isospin generators
are reproduced by an effective operator of the form
ψ̄_q γ^μτ^αψ_q → 3 T̅_abcγ^μ(τ^α)^c_d T^abd ,
which provides a Clebsch-Gordan coefficient for isospin raising operators,
ψ̄_q γ^μτ^+ ψ_q →√(3) Δ̅^++γ^μΔ^+
+ 2 Δ̅^+γ^μΔ^0
+ √(3) Δ̅^0γ^μΔ^- .
The matrix element for β-decay factorizes at LO in the electroweak interactions.
The hadronic component of the matrix element is given by
⟨Δ^0(p_0) | ψ̄_uγ^αψ_d | Δ^-(p_-)⟩ = √(3) g_V(q^2) U̅_Δ^0γ^α U_Δ^- = H^α ,
H^α H^β † = 3 |g_V(q^2) |^2 Tr[ γ^α( p̸_- + M_Δ^-) γ^β( p̸_0 + M_Δ^0) ]
= 6 |g_V(q^2) |^2
[p_-^α p_0^β + p_0^α p_-^β - g^αβ (p_-· p_0)
+ M_Δ^- M_Δ^0 g^αβ] = H^αβ ,
and the leptonic component of the matrix element is given by, assuming that the electron and neutrino are massless,
⟨ e^- ν̄_e | ψ̄_e γ^α C ψ_ν | 0⟩ = U̅_eγ^α C V_ν = L^α ,
L^α L^β † = Tr[ γ^α C p̸_ν C γ^β p̸_e ] = Tr[ γ^α p̸̄_ν γ^β p̸_e ]
=
2 [ p̄_ν^α p_e^β + p̄_ν^β p_e^α - g^αβ (p̄_ν· p_e) ] = L^αβ ,
where p = (p^0, +p^1) and
p̄ = (p^0, -p^1).
Therefore, the squared matrix element of the process is
| M|^2 = G^2/2
H^αβ L_αβ =
12 G^2 g_V^2 M_Δ^-( M_Δ^- - 2 E_ν)
( E_e E_ν - p_e· p_ν) ,
from which
the delta decay width can be determined by standard methods,
Γ_Δ^- =
1/2M_Δ^-∫d p_e/4π E_ed p_ν/4π E_νd p_0/4π E_0 (2π)^2 δ^2(p_- - p_0 - p_e - p_ν)
| M|^2
=
3 G^2 g_V^2/2π∫ dE_e dE_ν δ(Q - E_e - E_ν)
+ O(Q^n/M_Δ^n)
=
3 G^2 g_V^2 Q/2π + O(Q^n/M_Δ^n)
,
where Q= M_Δ^- - M_Δ^0 and we have retained only the leading terms in
an expansion in Q/M_Δ and evaluated the vector form factor at g_V(q^2=0) ≡ g_V.
The electron and neutrino masses have been set to zero, and the inclusion of non-zero masses will lead to a phase-space factor, f_1,
reducing the width shown in Eq. (<ref>),
and which becomes f_1=1 in the massless limit.
§ BETA-DECAY IN 1+1 DIMENSIONS: FINITE L AND NON-ZERO SPATIAL LATTICE SPACING
The previous appendix computed the β-decay rate
in 1+1D in infinite volume and in the continuum.
However, lattice calculations of such processes will necessarily be performed with a non-zero lattice spacing and a finite number of lattice points.
For calculations done on a Euclidean-space lattice, significant work has been done to develop the machinery used to extract physically meaningful results.
This formalism was initially pioneered by Lüscher <cit.> for hadron masses and two-particle scattering, and has been extended to more complex systems relevant to electroweak processes (Lellouch-Lüscher) <cit.> and
to nuclear physics <cit.>.
Lüscher's method was originally derived from an analysis of Hamiltonian dynamics in Euclidean space and later from a field theoretic point of view directly from correlation functions.
The challenge is working around the Maiani-Testa theorem <cit.> and reliably determining Minkowski-space matrix elements from Euclidean-space observables.
This formalism has been used successfully for a number of important quantities, and continues to be the workhorse for Euclidean-space computations.
As quantum simulations provide observables directly in Minkowski space,
understanding the finite-volume and non-zero lattice spacing artifacts requires a similar but different analysis than in Euclidean space.[Estimates of such effects in model 1+1 dimensional simulations can be found in Ref. <cit.>.]
While the method used in Euclidean space of determining S-matrix elements for scattering processes from energy eigenvalues can still be applied, Minkowski space simulations will also allow for a direct evaluation of scattering processes, removing some of the modeling that remains in Euclidean-space calculations.[
For example, the energies of states in different volumes are different,
and so the elements of the scattering matrix are constrained over
a range of energies and not at one single energy,
and a priori unknown interpolations are modeled.
]
Neglecting electroweak interactions beyond β-decay means that the final state leptons are non-interacting (plane-waves when using periodic boundary conditions),
and therefore the modifications to the density of states due to interactions, as encapsulated within the Lüscher formalism, are absent.
With Hamiltonian evolution of a system described within a finite-dimensional Hilbert space, the persistence amplitude of the initial state coupled to final states via the weak Hamiltonian will be determined by the sum over oscillatory amplitudes.
For a small number of final states, the amplitude will return to unity after some finite period of time.
As the density of final states near the energy of the initial state becomes large, there will be cancellations among the oscillatory amplitudes, and the persistence probability will begin to approximate the “classic" exponential decay over some time interval.
This time interval will extend to infinity as the density of states tends to a continuous spectrum.
It is important to understand how to reliably extract an estimate of the decay rate, with a quantification of systematic errors, from the amplitudes measured in a quantum simulation.
This is the subject of future work, but here a simple model will be used to demonstrate some of the relevant issues.
Consider the weak decay of a strong eigenstate in one sector to a strong eigenstate in a different sector (a sector is defined by its strong quantum numbers).
For this demonstration,
we calculate the persistence probability of the initial state, averaged over random weak and strong Hamiltonians and initial states, as the number of states below a given energy increases (i.e. increasing density of states).
Concretely, the energy eigenvalues of the initial strong sector range from 0 to 1.1, and 10 are selected randomly within this interval.
The initial state is chosen to be the one with the fifth lowest energy.
The eigenvalues in the final strong sector range between 0 and 2.03, and Y_f = 20 to 400 are selected.
The weak Hamiltonian that induces transitions between the 10 initial states to the Y_f final states is a dense matrix with each element selected randomly.
The weak coupling constant is scaled so that
G^2ρ_f is independent of the number of states, where ρ_f is the density of states.
This allows for a well-defined persistence probability as Y_f →∞.
For this example, the elements of the weak Hamiltonian were chosen between ± w_f,
where w_f = 1/(2 √(Y_f)).
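A compact numpy realization of this statistical model is sketched below (the parameter choices mirror those quoted above, the initial state is fixed to the fifth-lowest initial-sector level, and the averaging is only over the random Hamiltonians for brevity).
import numpy as np
def averaged_persistence(times, Yf, n_avg=50, seed=1):
    rng = np.random.default_rng(seed)
    probs = np.zeros(len(times))
    for _ in range(n_avg):
        Ei = np.sort(rng.uniform(0.0, 1.10, 10))      # initial-sector energies
        Ef = np.sort(rng.uniform(0.0, 2.03, Yf))      # final-sector energies
        wf = 1.0 / (2.0 * np.sqrt(Yf))                # keeps G^2 * rho_f fixed
        W = rng.uniform(-wf, wf, (10, Yf))            # dense random weak couplings
        H = np.block([[np.diag(Ei), W], [W.T, np.diag(Ef)]])
        psi0 = np.zeros(10 + Yf); psi0[4] = 1.0       # fifth-lowest initial state
        evals, evecs = np.linalg.eigh(H)
        weights = np.abs(evecs.T @ psi0) ** 2
        for k, t in enumerate(times):
            probs[k] += np.abs(np.sum(weights * np.exp(-1j * evals * t))) ** 2
    return probs / n_avg
times = np.linspace(0.0, 40.0, 9)
print(averaged_persistence(times, Yf=100, n_avg=10))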
Figure <ref> shows the emergence of the expected exponential decay as the number of available final states tends toward a continuous spectrum.
In a quantum simulation of a lattice theory, the density of states increases with L, and the late-time deviation from exponential decay will exhibit oscillatory behavior, as opposed to the plateaus found in this statistically averaged model.
The very early time behavior of the probability is interesting to note, and exhibits a well-known behavior, e.g., Refs. <cit.>.
It is, as expected, not falling exponentially, which sets in over time scales set by the energy spectrum of final states.
Only small lattices are practical for near-term simulation, and lattice artifacts will be important to quantify. Relative to the continuum, a finite lattice spacing modifies the energy-momentum relation and introduces a momentum cut-off on the spectra.
However, if the initial particle has a mass that is much less than the cut-off, these effects should be minimal as the energy of each final state particle is bounded above by the mass of the initial particle.
As has been shown in this appendix, working on a small lattice with its associated sparse number of final states, will lead to significant systematic errors when extracting the decay rates directly from the persistence probabilities.
Further work will be necessary to determine how to reliably estimate these errors.
§ BETA-DECAY CIRCUITS
The quantum circuits that implement the Trotterized time-evolution of the
β-decay Hamiltonian are similar to those
presented in the previous chapter,
and here the differences between the two will be highlighted.
The β-decay Hamiltonian in both the standard and tilde layouts, Eqs. (<ref>) and (<ref>), contains terms of the form
H_β∼ (σ^- σ^+ σ^- σ^+ + h.c.) + (σ^- σ^+ σ^+ σ^- + h.c.)
= 1/8(XXXX+YYXX-YXYX+YXXY+XYYX-XYXY+XXYY+YYYY)
+ 1/8(XXXX + YYXX + YXYX - YXXY - XYYX + XYXY + XXYY + YYYY)
,
which can be diagonalized by the GHZ state-preparation circuits, G and Ĝ, shown in Fig. <ref>.
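The Pauli decomposition above can be verified numerically in a few lines; a numpy check of the first line is shown below (the second follows in the same way).
import numpy as np
from functools import reduce
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
sp, sm = (X + 1j * Y) / 2, (X - 1j * Y) / 2
kron = lambda ops: reduce(np.kron, ops)
lhs = kron([sm, sp, sm, sp]); lhs = lhs + lhs.conj().T
rhs = 0.125 * (kron([X, X, X, X]) + kron([Y, Y, X, X]) - kron([Y, X, Y, X]) + kron([Y, X, X, Y])
               + kron([X, Y, Y, X]) - kron([X, Y, X, Y]) + kron([X, X, Y, Y]) + kron([Y, Y, Y, Y]))
print(np.allclose(lhs, rhs))   # True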
In the GHZ basis, it is found that
G^†(XXXX+YYXX-YXYX+YXXY+XYYX-XYXY+XXYY+YYYY)G
= IIIZ - ZIIZ + ZZIZ - ZZZZ -IZIZ + IZZZ - IIZZ + ZIZZ
,
and
Ĝ^†(XXXX + YYXX + YXYX - YXXY - XYYX + XYXY + XXYY + YYYY)Ĝ
= IIZI - ZIZI -ZZZZ+ZZZI+IZZZ-IZZI-IIZZ+ZIZZ
.
Once diagonalized the circuit is a product of diagonal rotations, see Fig. <ref> for an example of the quantum circuit that provides the time evolution associated with
σ^-_νσ^+_e σ^-_d,r Z_u,b Z_u,gσ^+_u,r.
By diagonalizing with both G and Ĝ and arranging terms in the Trotterization so that operators that act on the same quarks are next to each other,
many of the CNOTs can be made to cancel.
Also, an ancilla can be used to efficiently store the parity of the string of Zs between the σ^±.
§ RESOURCE ESTIMATES FOR SIMULATING BETA-DECAY DYNAMICS
For multiple lattice sites, it is inefficient to work with leptons in the tilde basis.
This is due to the mismatch between the local four-Fermi interaction
and the non-local tilde basis eigenstates.
As a result, the number of terms in the β-decay component of the Hamiltonian will scale as 𝒪(L^2) in the tilde basis,
as opposed to 𝒪(L) in the local occupation basis.
This appendix explores a layout different from the one in Fig. <ref>, which is optimized for the simulation of β-decay on larger lattices.
To minimize the length of JW Z strings, all leptons are placed at the end of the lattice, see Fig. <ref>.
After applying the JW mapping, the β-decay operator becomes
H_β→G/√(2)∑_l = 0^L-1∑_c=0^2 ( σ^-_l,σ^+_l,eσ^-_l,d,c Z^2 σ^+_l,u,c - σ^+_l,Z^2σ^-_l,νσ^-_l,d,c Z^2 σ^+_l,u,c + σ^-_l,σ^+_l,eσ^-_l,,c Z^2 σ^+_l,,c
- σ^+_l, Z^2 σ^-_l,νσ^-_l,,c Z^2 σ^+_l,,c + σ^+_l,eσ^-_l,νσ^-_l,,c Z^8 σ^+_l,u,c - σ^+_l,σ^-_l,σ^-_l,,cZ^8 σ^+_l,u,c
+ σ^+_l,eσ^-_l,νσ^+_l,,c Z^2 σ^-_l,d,c - σ^+_l,σ^-_l,σ^+_l,,c Z^2 σ^-_l,d,c + h.c. ) .
Using the techniques outlined in App. <ref>
to construct the relevant quantum circuits,
the resources required per Trotter
step of
H_β are estimated to be
R_Z : 192L ,
Hadamard : 48L ,
CNOT : 436 L .
For small lattices, L≲ 5, it is expected that use of the tilde basis will be more efficient and these estimates should be taken as an upper bound.
Combining this with the resources required to time evolve with the rest of the Hamiltonian, see Ref. <cit.>, the total resource requirements per Trotter step are estimated to be
R_Z : 264L^2 -54L +77 ,
Hadamard : 48L^2 + 20L +2 ,
CNOT : 368L^2 + 120L+74 .
It is important to note that the addition of H_β does not contribute to the quadratic scaling of resources as it is a local operator.
Recently, the capability to produce multi-qubit gates natively with similar fidelities to two-qubit gates has also been demonstrated <cit.>.
This could lead to dramatic reductions in the resources required and, for example, the number of multi-qubit terms in the Hamiltonian scales as
Multi-qubit terms : 96 L^2 -68L+22 .
The required number of CNOTs and, for comparison, the number of multi-qubit terms in the Hamiltonian, for a selection of different lattice sizes are given in Table <ref>.
Note that these estimates do not include the resources required to prepare the initial state.
§ TECHNICAL DETAILS ON THE QUANTINUUM H1-1 QUANTUM COMPUTER
For completeness, this appendix contains a brief description of Quantinuum's H1-1 20
trapped ion quantum computer (more details can be found in <cit.>).
The H1-1 system uses the System Model H1 design,
where unitary operations act on a single line of ^171Yb^+ ions induced by lasers.
The qubits are defined as the two hyperfine clock states in the ^2S_1/2 ground state of ^171Yb^+.
Since the physical position of the ions can be modified,
it is possible to apply two-qubit gates to any pair of qubits,
endowing the device with all-to-all connectivity.
Moreover, there are five different physical regions where these gates can be applied in parallel. Although we did not use this feature, it is also possible to perform a mid-circuit measurement of a qubit, i.e., initialize it and reuse it (if necessary).
The native gate set for H1-1 is the following,
U_1q(θ,ϕ)=e^-iθ/2[cos(ϕ) X+sin(ϕ) Y] , R_Z(λ)=e^-iλ/2Z , ZZ=e^-iπ/4ZZ ,
where θ in U_1q(θ,ϕ)
can only take the values {π/2,π},
and arbitrary values of θ can be obtained by combining several single-qubit gates, Ũ_1q(θ,ϕ)=U_1q(π/2,ϕ+π/2) . R_Z(θ) . U_1q(π/2,ϕ-π/2).
Translations between the gates used in the circuits shown in the main text and appendices to the native ones are performed automatically by pytket <cit.>.
The infidelity of the single- and two-qubit gates, as well as the error of the SPAM operations, are shown in Table <ref>.
§ TIME EVOLUTION UNDER THE FULL BETA-DECAY OPERATOR
The simulations performed in Sec. <ref> kept only the terms in the β-decay Hamiltonian which act on valence quarks, see Eq. (<ref>).
This appendix examines how well this valence quark β-decay operator approximates the full operator, Eq. (<ref>), for the parameters used in the main text.
Shown in Fig. <ref> is the decay probability when evolved with both the approximate and full operator as calculated through exact diagonalization of the Hamiltonian.
The full β-decay operator has multiple terms that can interfere leading to a more jagged decay probability.
The simulations ran on H1-1 only went out to t=2.5 where the error of the approximate operator is ∼ 20%.
CHAPTER: QUANTUM SIMULATIONS OF THE SCHWINGER MODEL VACUUM ON 100 QUBITS
This chapter is associated with Ref. <cit.>:
“Scalable Circuits for Preparing Ground State on Digital Quantum Computers: The Schwinger Model Vacuum on 100 Qubits" by Roland C. Farrell, Anthony N. Ciavarella, Marc Illa and Martin J. Savage.
§ INTRODUCTION
Quantum simulations of physical systems described by the Standard Model <cit.>,
and descendant effective field theories (EFT),
are anticipated to provide qualitatively new predictions about matter under extreme conditions; from the dynamics of matter in the early universe,
to properties of the exotic phases of quantum chromodynamics (QCD) produced at the LHC and RHIC (for overviews and reviews, see Refs. <cit.>).
One of the major challenges facing quantum simulations of physical systems is
the preparation of initial states on quantum computers
that can be used to determine important quantities that are inaccessible to
classical high-performance computing (HPC),
i.e., the problem of state preparation.
While simulating the dynamics of any given initial state
is known to be efficient for an ideal quantum computer <cit.>,
residing in the BQP complexity class,
preparing an arbitrary state generally requires
quantum resources that asymptotically scale
super-polynomially with increasing system size <cit.>,
residing in the QMA complexity class.[Note that adiabatic state preparation resides within BQP when there is a path through parameter space in which the system remains gapped <cit.>.
However, even in gapped systems the gate count required for adiabatic preparation can be daunting; e.g., see Ref. <cit.> where adiabatic preparation of the Schwinger model vacuum on 16 qubits was estimated to require 2.7×10^5 two-qubit gates.]
However, states of physical systems are not the general case, and are often constrained by both local and global symmetries.[Systems of importance to nuclear physics and high-energy physics are constrained by
a number of local, exact global and approximately global symmetries,
some of which are emergent from the mechanisms of confinement
and spontaneous symmetry breaking.]
In some instances, these symmetries
allow observables to be computed by perturbing around states that can be efficiently
initialized <cit.>.
In the foreseeable future, quantum simulations will be far from asymptotic in both system size and evolution time, and the resources required for both time evolution and state preparation
will be estimated by direct construction and extrapolations thereof.
Furthermore, successful quantum simulations will require specialized quantum circuits and workflows that are optimized for specific quantum hardware.
The development of algorithms for preparing
non-trivial initial states on quantum computers, including the ground states of quantum field theories (QFTs), is an active area of research.
Even with many advances,
algorithms remain limited in capability, and generally do not scale favorably to modest or large-scale simulations of quantum many-body systems.
Consequently, quantum simulations of small model systems are currently
being performed across an array of science domains,
generally studying dynamics starting from tensor-product initial states.
While being the simplest gauge theory based on a continuous group,
the Schwinger model <cit.> (quantum electrodynamics in 1+1D)
possesses many features of interest to both the
quantum chromodynamics and quantum information science (QIS)
communities.
These include
the presence of a mass gap, charge screening, a chiral condensate, few-body bound states (“hadrons” and “nuclei”), and a topological θ-term.
It has emerged as a popular test bed for developing quantum simulation techniques
for lattice gauge theories,
and has been explored using a variety of platforms, including trapped ions <cit.>, superconducting qubits <cit.>, photonic systems <cit.>, Rydberg atoms <cit.>, ultracold atoms <cit.> and classical electric circuits <cit.>, together with classical simulations <cit.>, calculations <cit.>
and tensor-networks <cit.> (for reviews on this last topic, see, e.g., Refs. <cit.>).
There has also been pioneering work on quantum simulations of low-dimensional
non-Abelian gauge theories, both with <cit.>
and without <cit.> matter.
While these are important benchmarks, more sophisticated simulations requiring the preparation of eigenstates or scattering states have so far been too demanding for
NISQ-era quantum computers, and until now have been limited to 20 qubits <cit.>.
Many systems of physical interest, including QCD,
have translational symmetry and possess an energy (mass) gap Λ between the unique ground state and first excited state.
The gap defines a characteristic length scale of the system ξ = 1/Λ, and parameterizes the decay of the longest distance
correlations in the ground state wavefunction, falling as
∼ e^- r/ξ/r^α for regions separated by r≫ξ, for some α.
A natural way to encode a lattice QFT onto a register of a digital quantum computer is by identifying subsets of qubits (or qudits) with spatial points of the lattice
that align with the connectivity of the quantum computer.
A realization of the ground state on the register of a quantum computer
should reflect the localized correlations
between these subsets of
qubits
separated by r≫ξ <cit.>.
In the absence of topological order, one way to establish the ground state is to initialize the quantum register in a
state without correlations between qubits, e.g., a tensor product state,
and
then systematically introduce correlations
through the action of quantum circuits.
A crucial point is that the localized correlations imply that the state preparation circuits need to have structure only for qubits spatially separated by r ≲ξ <cit.>.
This is sufficient to obtain exponentially converged accuracy in the prepared state.
Due to translational invariance, the ground state for an arbitrarily large lattice can be prepared by repeating these circuits across the entire register.
To study the dynamics of physically relevant systems in a quantitative way,
with a complete quantification of uncertainties,
simulations of large volumes of spacetime are typically required.
Motivated by the discussion in the previous paragraph, we introduce Scalable Circuits ADAPT-VQE (SC-ADAPT-VQE), a new method for quantum state preparation that uses the hierarchies of length scales present in physical systems; see Fig. <ref> for an illustration.
In SC-ADAPT-VQE,
quantum circuits that (efficiently) prepare a given state to a specified level of precision
are determined on modest-sized lattices that are large enough to contain the longest correlation lengths.
As long as ξ is not too large, these circuits can be determined using classical computers.
This avoids the challenging task of optimizing circuits on a quantum computer with both statistical uncertainty and device noise <cit.>.
Once determined, (discrete) translation invariance is used to scale these circuits up to the full lattice.
Since the quality of the prepared state becomes
independent of the spatial lattice length L,
with 𝒪(e^-L/ξ) corrections,
this is a potential path toward
quantum simulations of lattice QFTs
that are beyond the capabilities of HPC.
In this chapter, SC-ADAPT-VQE is applied to the Schwinger model and is used to prepare the vacuum on up to 100 qubits on IBM's Eagle quantum processors.
Underlying the development is the ADAPT-VQE algorithm <cit.> for quantum state preparation, which is modified to generate scalable circuits.
After the necessary Trotterized circuits have been built, SC-ADAPT-VQE is performed using the qiskit classical simulator on
system sizes up to L=14 (28 qubits).
It is found that both the energy density and the chiral condensate
converge exponentially with circuit depth to the exact results.
Importantly, both the quality of the prepared state and the structure of the associated circuits are found to converge with system size.
This allows the state preparation circuits, determined on small lattices using classical computing, to be extrapolated to much larger lattices, with a quality that becomes independent of L.
The scaled circuits are used to prepare the L≤ 500 vacua using qiskit's Matrix Product State (MPS) circuit simulator,
and to prepare the L ≤ 50 (100 qubits) vacua on the registers of
IBM's superconducting-qubit quantum computers ibm_brisbane and ibm_cusco.
An improved and unbiased error mitigation technique, Operator Decoherence Renormalization (ODR), is developed and applied to the quantum simulations to estimate error-free observables.
The results obtained from both the MPS circuit simulator and IBM's quantum computers
are found to be in excellent agreement with Density Matrix Renormalization Group (DMRG) calculations.
§ THE LATTICE SCHWINGER MODEL
The Schwinger model <cit.> has a long history of study
in the continuum and using numerical lattice techniques.
In the continuum, it is described by the Lagrange density
L = ψ̄( i D̸ - m_ψ) ψ - 1/4 F^μν F_μν .
Electrically-charged fermions are described by the field operator ψ with mass m_ψ,
the electromagnetic gauge field by A_μ with field tensor F_μν,
and the covariant derivative is defined as D_μ = ∂_μ - i e A_μ.
It is the Hamiltonian lattice formulation,
first developed and studied by Banks, Kogut and Susskind <cit.>,
that is relevant for quantum simulations.
One feature of gauge theories in 1+1D, which distinguishes
them from theories in higher dimensions,
is that the gauge field is completely constrained
by the distribution of fermion charges through Gauss's law.
In axial gauge, the spatial gauge field is absent, and the effects of the time-component of the gauge field
are included by non-local (Coulomb) interactions <cit.>.
With open boundary conditions (OBCs),
using the staggered fermion discretization <cit.> of the electron field,
and applying the Jordan Wigner (JW) transformation to map fermion field operators to spins,
the Schwinger model Hamiltonian is (for a derivation, see, e.g., Ref. <cit.>)
Ĥ = Ĥ_m + Ĥ_kin + Ĥ_el =
m/ 2∑_j=0^2L-1 [ (-1)^j Ẑ_j + Î] + 1/2∑_j=0^2L-2 ( σ̂^+_j σ̂^-_j+1 + h.c.) + g^2/ 2∑_j=0^2L-2 (∑_k≤ jQ̂_k )^2
,
Q̂_k = -1/2[ Ẑ_k + (-1)^kÎ] .
Here, L is the number of spatial lattice sites,
corresponding to 2L staggered (fermion) sites, m and g are the (bare) electron mass and charge, respectively, and the staggered lattice spacing a has been set to one.[For faster convergence to the continuum, an 𝒪(a) improvement to the mass term can be performed to restore a discrete remnant of chiral symmetry in the m→0 limit <cit.>.]
“Physical” quantities are
derived from the corresponding dimensionless quantities by restoring factors of the
spatial lattice spacing.
Even (odd) sites correspond to electrons (positrons), as reflected in the staggered mass term and charge operator.[The convention is that even fermion-sites correspond to electrons,
such that Q̂|↓⟩ = 0 and
Q̂|↑⟩ = -|↑⟩,
while
the odd fermion-sites correspond to positrons, such that
Q̂|↑⟩ = 0 and
Q̂|↓⟩ = +|↓⟩.
]
A background electric field can be included straightforwardly,
equivalent to a θ-term, but
will be set to zero in this work.
Due to confinement, the low-energy excitations are hadrons and the mass gap is given by Λ = m_hadron. For our purposes, m_hadron is defined to be the energy difference in the Q=0 sector between the first excited state (single hadron at rest) and the vacuum.
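For orientation, the Hamiltonian in Eq. (<ref>) is simple to construct explicitly for small L. The following numpy sketch (dense matrices, so only practical for a handful of spatial sites) builds Ĥ, finds the vacuum, and evaluates the energy density and chiral condensate studied below; it is an illustration rather than the production workflow.
import numpy as np
from functools import reduce
def op_at(single, site, n):
    """Embed a single-qubit operator at `site` of an n-qubit register."""
    mats = [np.eye(2)] * n
    mats[site] = single
    return reduce(np.kron, mats)
def schwinger_hamiltonian(L, m, g):
    n = 2 * L                                   # staggered (fermion) sites
    Z = np.diag([1.0, -1.0])
    sp = np.array([[0.0, 1.0], [0.0, 0.0]])     # sigma^+
    dim = 2 ** n
    H = np.zeros((dim, dim))
    for j in range(n):                          # mass term
        H += 0.5 * m * ((-1) ** j * op_at(Z, j, n) + np.eye(dim))
    for j in range(n - 1):                      # hopping term
        hop = op_at(sp, j, n) @ op_at(sp.T, j + 1, n)
        H += 0.5 * (hop + hop.T)
    Q = [-0.5 * (op_at(Z, k, n) + (-1) ** k * np.eye(dim)) for k in range(n)]
    for j in range(n - 1):                      # electric (Coulomb) term
        Ej = sum(Q[: j + 1])
        H += 0.5 * g ** 2 * Ej @ Ej
    return H
L, m, g = 2, 0.5, 0.3
H = schwinger_hamiltonian(L, m, g)
evals, evecs = np.linalg.eigh(H)
psi0 = evecs[:, 0]
Z = np.diag([1.0, -1.0])
chi = sum(psi0 @ ((-1) ** j * op_at(Z, j, 2 * L) + np.eye(2 ** (2 * L))) @ psi0
          for j in range(2 * L)) / (2 * L)
print("energy density:", evals[0] / L, "  chiral condensate:", chi)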
§.§ Infinite Volume Extrapolations of Local Observables
Central to the development of state preparation circuits is the scaling of expectation values of local observables in the ground state,
with both the correlation length ξ = 1/m_ hadron, and the volume L.
Due to the exponential suppression of correlations in the ground state
between regions separated by r>ξ, it is expected that, locally, the wavefunction has converged to its infinite volume form, with corrections of 𝒪(e^-L/ξ).
As a result, expectation values of local observables will
be exponentially converged to their infinite volume values.
However, near the boundaries of the lattice, the wavefunction is perturbed over a depth proportional to ξ, causing local observables to deviate from their infinite volume values.
Equivalently, boundary effects cause deviations in volume averages of local observables that are 𝒪(ξ/L).
This scaling of observables is responsible for the SC-ADAPT-VQE prepared vacuum converging exponentially in circuit depth, and enables the circuits to be systematically extrapolated to larger system sizes.
Two quantities associated with the ground-state wavefunction (vacuum)
that we focus on are the chiral condensate χ, and the energy density
ε.
The chiral condensate[In the continuum,
the chiral condensate is defined as
χ_cont=⟨ψψ⟩, which on the lattice becomes χ_lat = 1/L∑_j⟨ψ_jψ_j ⟩,
where j labels the spatial site.
To have a positive quantity, we have added a constant to the definition of χ,
χ≡χ_lat + 1.
This counts the average number of electrons and positrons on a spatial site.]
is an order parameter of chiral symmetry breaking,
and in the JW mapping is
χ = 1/2L∑_j=0^2L-1⟨ (-1)^j Ẑ_j + Î⟩ ≡ 1/2L∑_j=0^2L-1χ_j
.
The energy density is defined as
ε = ⟨Ĥ⟩ / L,
and
in axial gauge is not a local observable because the contribution from the
electric-field term in the Hamiltonian, Ĥ_el, involves all-to-all couplings.
However, this is an artifact of using axial gauge and enforcing Gauss's law.
In Weyl gauge, with explicit (local) gauge degrees of freedom,
the Hamiltonian is manifestly local, and therefore the energy density is a local observable.
These quantities are computed for m=0.5,g=0.3 using exact diagonalization for L≤ 14
(Table <ref>)
and DMRG for L≫ 14 (Table <ref>).
As anticipated,
a linear extrapolation in 1/L is found to be consistent with
these results, as seen in Fig. <ref>.
Additional details,
along with results for m=0.1 with g=0.3 and g=0.8,
can be found in App. <ref>.
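The extrapolation procedure itself amounts to a linear fit in 1/L; a short illustration with synthetic values (standing in for the entries of Tables <ref> and <ref>) is given below.
import numpy as np
Ls = np.array([8, 10, 12, 14, 20, 30], dtype=float)
y_inf, c = 0.60, -0.35                                   # made-up parameters
y = y_inf + c / Ls + 1e-4 * np.random.default_rng(2).standard_normal(len(Ls))
slope, intercept = np.polyfit(1.0 / Ls, y, 1)            # linear fit in 1/L
print("extrapolated L -> infinity value:", intercept)    # recovers ~y_inf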
§ SC-ADAPT-VQE FOR THE LATTICE SCHWINGER MODEL
Underlying SC-ADAPT-VQE is ADAPT-VQE <cit.>,
a quantum algorithm for state preparation that has been applied to spin models <cit.>, systems in quantum chemistry <cit.> and nuclear structure <cit.>.
It builds upon the Variational Quantum Eigensolver (VQE) <cit.>,
in which parameterized quantum circuits are optimized
to minimize the expectation value of a Hamiltonian.
The parameterized circuits are constructed
step-wise (or equivalently in layers),
where the incrementally-improved ansatz states converge to the ground state with successive iterations.
At each step,
the unitary transformation that maximally decreases the energy of the ansatz state is identified from a pre-defined set (“pool") of unitaries.
The quantum circuit corresponding to this unitary is then
appended to the state preparation circuit.
The (initial) state from which the algorithm starts
will often be chosen to be a tensor product
or an entangled state that can be efficiently prepared on a quantum computer,
such as a GHZ-state.
If the target state is the ground state of a confining gauge theory, e.g., the Schwinger model,
the strong-coupling (trivial) vacuum,
|Ω_0⟩ =
|↑↓↑↓…↑↓⟩ ,
can be a good choice for such an initial state as it has the correct long-distance
structure in the gauge fields.
The ADAPT-VQE algorithm can be summarized as follows (a schematic code sketch of the loop is given after the list):
1. Define a pool of operators {Ô} that are
constrained to respect some or all of the symmetries of the system.
2. Initialize the register of the quantum computer to a strategically selected state,
|ψ_ ansatz⟩, with the desired quantum numbers
and symmetries of the target wavefunction.
3. Measure the expectation value of the commutator of the Hamiltonian with each operator in the pool,
⟨ψ_ ansatz| [Ĥ, Ô_i] |ψ_ ansatz⟩.
These are estimators of the associated decrease in energy from
transforming the ansatz wavefunction by
|ψ_ ansatz⟩→ e^i θ_i Ô_i|ψ_ ansatz⟩,
for an arbitrary parameter θ_i.
4.
Identify the operator, Ô_n, with the largest magnitude commutator with the Hamiltonian.
If the absolute value of this commutator is below some pre-determined threshold, terminate the algorithm.
If it is above the threshold,
update the ansatz with the parameterized evolution of the operator |ψ_ ansatz⟩→ e^i θ_n Ô_n|ψ_ ansatz⟩.
5. Use VQE to find the values of the variational parameters that minimize the energy,
⟨ψ_ ansatz (θ_1, θ_2,..., θ_n)|Ĥ|ψ_ ansatz (θ_1, θ_2,..., θ_n)⟩.
The previously optimized values for θ_1,2,...,n-1
and θ_n=0, are used as initial conditions.
If the optimal value of the newest parameter, θ_n, is below some pre-determined threshold, terminate the algorithm.
6. Return to step 3.
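A minimal classical sketch of this loop, for small dense Hamiltonians and a pool supplied as explicit matrices (numpy/scipy only, intended to illustrate the logic rather than the production workflow), is the following.
import numpy as np
from scipy.linalg import expm
from scipy.optimize import minimize
def adapt_vqe(H, pool, psi_init, max_steps=10, grad_tol=1e-3):
    """Grow an ansatz prod_k exp(i theta_k O_k)|psi_init> by greedily adding the
    pool operator with the largest energy gradient <psi| i[H, O] |psi>."""
    ops, thetas = [], []
    def ansatz(params):
        psi = psi_init
        for th, O in zip(params, ops):
            psi = expm(1j * th * O) @ psi
        return psi
    def energy(params):
        psi = ansatz(params)
        return np.real(np.vdot(psi, H @ psi))
    for _ in range(max_steps):
        psi = ansatz(thetas)
        grads = [abs(np.vdot(psi, 1j * (H @ O - O @ H) @ psi)) for O in pool]
        best = int(np.argmax(grads))
        if grads[best] < grad_tol:
            break
        ops.append(pool[best])
        res = minimize(energy, x0=np.append(thetas, 0.0), method="BFGS")
        thetas = list(res.x)
    return ansatz(thetas), energy(thetas), thetas
Combined with explicit matrices for the Hamiltonian and for the operator pool, this reproduces the classical step of SC-ADAPT-VQE on small lattices.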
For a given pool of operators, it is a priori
unknown if this algorithm will furnish a wavefunction that satisfies the
pre-determined threshold
for the observable(s) of interest,
but it is expected that the pool can be expanded on the fly to
achieve the desired threshold.
The systems that have been explored with this algorithm show,
for a fixed pool,
exponential convergence with increasing numbers of iterations <cit.>.
Generally,
different terms contributing to operators
in the pool
do not commute with each other.
Constructing quantum circuits that exactly implement the exponential of a sum of non-commuting terms is challenging, and in practice approximations such as first-order Trotterization are used.
This introduces (higher-order) systematic deviations from the target unitary operator in each case, and
defines the pool of unitary operators,
{Û_i } = {exp(i θ_i Ô_i) }→{∏_t Û_i^(t)} .
These Trotterized unitary operators correspond to the quantum circuits
that are implemented in state preparation.
In optimization of the quality of the state prepared on a given quantum computer,
particularly a NISQ-era device,
there are tradeoffs between the gate-depth of a particular circuit implementation,
the coherence time,
the errors associated with gate operations,
and the associated Trotter errors.
This is explored in App. <ref>.
Typically, ADAPT-VQE is a hybrid classical-quantum algorithm that evaluates matrix elements of the Hamiltonian
in trial wavefunctions on a quantum computer, with parameters that are optimized classically.
One disadvantage of this is that the evaluation of expectation values of the Hamiltonian requires a large number of measurements (shots) on quantum computers.
A novel part of SC-ADAPT-VQE is the use of a classical simulator to determine the ADAPT-VQE state preparation circuits.
As shown in Sec. <ref>, these circuits can be scaled and used to prepare the vacuum on arbitrarily large lattices.
§.§ A Scalable Operator Pool for the Lattice Schwinger Model
A successful application of SC-ADAPT-VQE to the preparation of the lattice Schwinger model vacuum requires choosing an efficient and scalable pool of operators.
These operators are
used to systematically improve the ansatz vacuum wavefunction,
and are (only) constrained to be
charge neutral,
symmetric under charge-conjugation and parity (CP) and, as a consequence of the CPT theorem <cit.>, invariant under time reversal.[In the total charge Q=0 sector, there is a CP symmetry corresponding to the composition of a reflection through the mid-point of the lattice, exchanging spatial sites n ↔ L-1-n, and an interchange of an electron and a positron on each spatial site.
In terms of spins on staggered sites this is realized as σ̂^i_n ↔σ̂_2L-1-n^i followed by σ̂_n^i ↔X̂_nσ̂_n^iX̂_n, where σ̂^i with i=1,2,3 are the Pauli matrices.
For example, under a CP transformation, the following L=4 state becomes
|↑↓ ↑↑ ↓↓ ↓↑⟩ = |. . . e^- e^+ . e^+ e^-⟩ → |↓↑ ↑↑ ↓↓ ↑↓⟩ = |e^+ e^- . e^- e^+ . . .⟩ .
]
Ideally one wants to find the smallest pool of operators
that is expressive enough to converge rapidly toward the vacuum.
For a lattice with OBCs,
the system has translational symmetry in the volume that is broken by the boundaries (surface).
In the vacuum,
the effects of the boundaries are expected to be localized,
with penetration depths set by the mass gap.
Therefore,
the pool of operators should contain
translationally invariant “volume" operators,
and “surface" operators that have support only near the boundaries.
In addition, a hierarchy is anticipated in which one-body operators
are more important
than two-body operators,
two-body more important than three-body, and so on.[An n-body operator involves n fermionic creation and n fermionic annihilation operators.]
Note that because wavefunctions are evolved with exp(i θ_i Ô_i),
arbitrarily high-body correlations are built from n-body operators
(analogous to connected vs disconnected Feynman diagrams).
For the Schwinger model, we observe that one-body operators are sufficient.
With the above discussion as guidance,
it is convenient to define two classes of one-body operators,
one containing volume operators,
and the other containing surface operators:
Θ̂_m^V = 1/2∑_n=0^2L-1 (-1)^nẐ_n ,
Θ̂_h^V(d) = 1/4∑_n=0^2L-1-d (X̂_n Ẑ^d-1X̂_n+d
+ Ŷ_n Ẑ^d-1Ŷ_n+d ) ,
Θ̂_m^S(d) = (-1)^d 1/2 ( Ẑ_d - Ẑ_2L-1-d )
,
Θ̂_h^S(d) = 1/4 (X̂_1Ẑ^d-1X̂_d+1 + Ŷ_1Ẑ^d-1Ŷ_d+1 + X̂_2L-2-dẐ^d-1X̂_2L-2 + Ŷ_2L-2-dẐ^d-1Ŷ_2L-2 )
.
Unlabelled Ẑs
act on the qubits between the
leftmost and rightmost operators
(e.g., X̂_0 Ẑ^2 X̂_3 = X̂_0 Ẑ_1 Ẑ_2 X̂_3).
The first two operators in Eq. (<ref>) are translationally invariant,
Θ̂_m^V is the mass term in the Hamiltonian,
and
Θ̂_h^V(d) is a generalized hopping term that spans an odd-number of fermion sites, d, connecting electrons and positrons at spatial sites separated by Δ L = (d-1)/2.
Only d-odd operators are retained, as the d-even operators break CP.
The second two operators in Eq. (<ref>)
correspond to surface terms, of the form of a mass-density
and of a hopping-density at and near the boundaries.
For Θ̂_h^V(d), d∈{1,3,… 2L-3}, and for Θ̂_h^S(d), d∈{1,3,… 2L-5},
preventing hopping between boundaries (which is found to improve convergence).
Time reversal symmetry implies that the vacuum wavefunction
can be made real up to an overall phase.
The SC-ADAPT-VQE ansatz is built from unitaries of the form e^i θ_i Ô_i, and furnishing a real wavefunction requires that the
operators in the pool
are imaginary and anti-symmetric.
The operators in Eq. (<ref>) are real and are therefore disqualified from being members of the pool.
Instead, consider a pool comprised
of their commutators,[The commutators of Θ̂ operators not included in the pool are linear combinations of those that are.]
{Ô} = {Ô_mh^V(d) , Ô_mh^S(0,d) , Ô_mh^S(1,d)
} ,
Ô_mh^V(d) ≡ i [Θ̂_m^V, Θ̂_h^V(d) ]
=
1/2∑_n=0^2L-1-d(-1)^n (
X̂_nẐ^d-1Ŷ_n+d -
Ŷ_nẐ^d-1X̂_n+d )
,
Ô_mh^S(0,d) ≡ i [Θ̂_m^S(0), Θ̂_h^V(d) ]
=
1/4 (X̂_0Ẑ^d-1Ŷ_d - Ŷ_0Ẑ^d-1X̂_d
- Ŷ_2L-1-dẐ^d-1X̂_2L-1 + X̂_2L-1-dẐ^d-1Ŷ_2L-1 )
,
Ô_mh^S(1,d) ≡ i [Θ̂_m^S(1), Θ̂_h^S(d) ]
=
1/4 (Ŷ_1Ẑ^d-1X̂_d+1 - X̂_1Ẑ^d-1Ŷ_d+1
+ Ŷ_2L-2-dẐ^d-1X̂_2L-2 - X̂_2L-2-dẐ^d-1Ŷ_2L-2 )
.
While the contributions to extensive quantities from the
volume operators,
Ô^V,
typically scale as O(L), the surface operators, Ô^S,
make O(1) contributions as they are constrained to regions near the boundaries.[For the range of m and g we have considered, it was only necessary to consider Θ̂_m^S(d) with d=0,1 in the pool.
Taking the continuum limit, where the correlation length diverges, will likely require keeping terms with d>1.]
When acting on the strong-coupling vacuum,
the exponential of an operator in the pool creates and annihilates e^+ e^- pairs separated
by distance d.
As the operators that are being considered are one-body,
the variational algorithm is essentially building a coupled cluster singles (CCS) state (see, e.g., Refs. <cit.>).
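For completeness, the volume operators in Eq. (<ref>) can be assembled as explicit matrices on small lattices; a numpy sketch, suitable as the pool for a classical ADAPT-VQE loop like the one sketched earlier, is shown below.
import numpy as np
from functools import reduce
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)
def pauli_string(placed, n):
    """Tensor product over n qubits with single-qubit matrices at the sites in `placed`."""
    mats = [np.eye(2, dtype=complex)] * n
    for site, op in placed.items():
        mats[site] = op
    return reduce(np.kron, mats)
def O_mh_V(d, L):
    """Volume pool operator O_mh^V(d) of Eq. (<ref>) on 2L staggered sites."""
    n = 2 * L
    O = np.zeros((2 ** n, 2 ** n), dtype=complex)
    for i in range(n - d):
        zs = {k: Z for k in range(i + 1, i + d)}
        O += 0.5 * (-1) ** i * (pauli_string({**zs, i: X, i + d: Y}, n)
                                - pauli_string({**zs, i: Y, i + d: X}, n))
    return O
pool = [O_mh_V(d, L=2) for d in (1, 3)]    # the first two volume operators for L=2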
§ SCALABLE QUANTUM CIRCUITS FROM CLASSICAL COMPUTING
Integral to the application of SC-ADAPT-VQE
is performing ADAPT-VQE on a series of systems that are large enough to enable a robust scaling of the parameterized circuits.
These scalable circuits can either be determined with classical computing, or by use of a smaller partition of a larger quantum computer.
In this section,
SC-ADAPT-VQE is implemented using the qiskit noiseless classical simulator <cit.>.
§.§ Trotterized Quantum Circuits for the Scalable Operator Pool
As discussed above, implementing the unitary operators in the pool,
i.e., Eq. (<ref>),
on classical simulators or quantum computers
requires mapping them to sequences of quantum gates.
For the individual terms in
Eq. (<ref>),
we have chosen to do this using Trotterization.
The optimal gate decomposition
is less important for implementation using a classical simulator,
but is crucial for successful simulations on a quantum computer.
With the goal of
using IBM's superconducting-qubit quantum computers <cit.>,
our circuit designs aim to minimize two qubit gate count and circuit depth and require only nearest-neighbor connectivity.
As can be seen in Eq. (<ref>),
each term in a given operator in the pool is of the form
(X̂Ẑ^d-1Ŷ - ŶẐ^d-1X̂)
for some odd value of d.
The construction of circuits implementing the corresponding unitary operators follows the strategy outlined in Ref. <cit.>.
First, consider the Trotterization of terms with d=1,
i.e., constructing a circuit corresponding to
e^i θ/2 (X̂Ŷ±ŶX̂)≡ R_±(θ).
There is a known 2-CNOT realization of this unitary operator <cit.>,
shown in Fig. <ref>a.
For terms with d>1, this circuit can be extended in an “X" pattern as shown in Fig. <ref>b and <ref>c for d=3 and d=5, respectively.[
These circuits have been verified by comparison with
Trotterized exponentials of fermionic operators.]
Terms with larger d are constructed by extension of the legs of the “X".
Compared with the traditional CNOT staircase-based circuits,
there is a reduction by two CNOTs,
and a reduction by a factor × 2 in CNOT-depth.[The staircase circuit can be modified into an X-shaped one, reducing the depth, but with the same number of CNOTs <cit.>.]
However, the primary advantage of these circuits is that they allow for an efficient arrangement of terms leading to cancellations among neighboring R_+(±π/2) gates.
As depicted in Fig. <ref>, this is made possible by arranging the circuit elements so that sequential terms are offset by d-1 qubits,
i.e., start on qubit {0,d-1,2(d-1),…}.
This allows the outermost gates to cancel (using the identity in the upper left of Fig. <ref>).
Also, for d≥ 5, the next layer should start (d-1)/2 qubits below the previous one,
as the circuit depth can be reduced
by interleaving the legs of the “X".
Further optimizations are possible by noting that distinct orderings of terms, while equivalent up to higher order Trotter errors, can have different convergence properties; see App. <ref>.
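For reference, the same unitaries can also be synthesized and benchmarked directly with qiskit primitives. The short sketch below uses a generic synthesis via PauliEvolutionGate (not the hand-optimized circuits of Fig. <ref>; the basis-gate choice is an assumption) to transpile one d=3 pool term to a CNOT basis and count the two-qubit gates, which makes the advantage of the bespoke construction easy to quantify.
from qiskit import QuantumCircuit, transpile
from qiskit.circuit.library import PauliEvolutionGate
from qiskit.quantum_info import SparsePauliOp
def pool_term_circuit(d, theta):
    # qiskit Pauli strings are read right-to-left (qubit 0 is the rightmost character)
    xzy = "Y" + "Z" * (d - 1) + "X"      # X on qubit 0, Y on qubit d
    yzx = "X" + "Z" * (d - 1) + "Y"      # Y on qubit 0, X on qubit d
    op = SparsePauliOp([xzy, yzx], coeffs=[0.5, -0.5])
    qc = QuantumCircuit(d + 1)
    qc.append(PauliEvolutionGate(op, time=-theta), list(range(d + 1)))   # exp(+i theta op)
    return transpile(qc, basis_gates=["cx", "rz", "sx", "x"], optimization_level=3)
qc = pool_term_circuit(d=3, theta=0.1)
print(qc.count_ops().get("cx", 0), "CNOTs for a single d=3 term")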
§.§ Building Scalable State Preparation Quantum Circuits using SC-ADAPT-VQE with Classical Computing
In this section, SC-ADAPT-VQE is used to prepare approximations to the vacuum of the lattice Schwinger model on up to L=14 spatial sites (28 qubits) using classical simulations of the quantum circuits developed in the previous section (second step in Fig. <ref>).
In addition to the energy density and chiral condensate introduced in Sec. <ref>, the infidelity density,
1/L ( 1 - |⟨ψ_ ansatz|ψ_ exact⟩|^2 ) ,
is also studied, where |ψ_ exact⟩ is the exact vacuum wavefunction on a lattice with L spatial sites.
An infidelity density that is constant in L corresponds to constant deviations in local observables evaluated in the prepared state.
To investigate the interplay between L and
ξ=1/m_hadron,
three sets of parameters are considered:
m=0.1, g=0.3 (ξ_L=14 = 2.6),
m=0.1, g=0.8 (ξ_L=14 = 1.3)
and m=0.5, g=0.3 (ξ_L=14 = 0.9).
The ξ are determined with exact diagonalization, and are found to weakly depend on L.
Note that increasing either m or g decreases the correlation length.
To make systematically improvable predictions
of observables from the QFT that emerges from a given lattice model,
extrapolations to the continuum (lattice spacing to zero) and infinite-volume (L →∞) limits must be performed.
This requires that the relevant correlation length(s) are all much greater than the lattice spacing, ξ≫ 1 in lattice units, but are well contained in the lattice volume, L≫ξ.
We primarily focus on extrapolation to large lattices, and therefore only require L≫ξ.
As a result, the parameter set m=0.5, g=0.3 is used as the primary example throughout this work.
The values of ε, χ and the infidelity density obtained at
the 7^ th step of SC-ADAPT-VQE with m=0.5, g=0.3 are given in
Table <ref>,
while their deviations from the exact values
are shown in Fig. <ref>,
as a function of increasing number of SC-ADAPT-VQE steps.
The corresponding numerical values obtained from the other parameter sets are presented in App. <ref>.[The 6^ th and 7^ th steps were chosen for study in detail as the operator ordering has stabilized for L≤ 14. This allows the operator structure to be displayed in a single table, and enables the systematic extrapolation of parameters.
The available classical computing resources limited the maximum number of steps of SC-ADAPT-VQE to 10.]
As seen by their approximately linear behavior in the log-plots in Fig. <ref>,
the error in each of these quantities decreases exponentially with algorithm step,
indicating convergence to the target wavefunction.
This exponential trend is demonstrated out to 10 steps, reaching a convergence comparable to the systematic errors introduced in the L-extrapolations below.
This provides evidence that this choice of initial state and operator pool does not suffer from “barren plateaus" or local minima.
For a given step in the algorithm,
the error is seen to become independent of system size.
This indicates that
extrapolations of the circuits to arbitrarily large systems will have errors that are independent of L.
As discussed above, it is expected that SC-ADAPT-VQE will converge more rapidly for systems with smaller correlation lengths.
This is indeed seen in Fig. <ref>,
where the correlation length decreases from left to right, while the convergence improves.
Also included in Table <ref> is the number of CNOTs per qubit in the SC-ADAPT-VQE circuit.
It is seen to scale as a constant plus a subleading 𝒪(1/L) term,
leading to an asymptotic value of 48 CNOTs per qubit.
This scaling is due to there being (2L-d) terms in each volume operator.
The structure of the SC-ADAPT-VQE state preparation circuit and the corresponding variational parameters for m=0.5 and g=0.3
are given in Table <ref>.
Notice that initially localized operators are added to the wavefunction (small d),
followed by increasingly longer-range ones, as well as surface operators. Systems with longer correlation lengths require larger d operators (e.g., compare Table <ref> and Table <ref>),
in line with previous discussions on the exponential decay of correlations for d > ξ.
It is also seen that the surface operators become less important
(appear later in the ansatz structure) for larger lattices.
For example, as shown in Table <ref>, the 5^ th step of SC-ADAPT-VQE transitions from being a surface to a volume operator at L=10 (causing the jump in convergence at the fifth step in the right column of Fig. <ref>).
This is expected as they contribute 𝒪(1/L)
to the energy density, whereas volume operators contribute 𝒪(1).
Importantly, Table <ref> shows that the order of operators,
and the corresponding variational parameters are converging with increasing system size (third step in Fig. <ref>).
This is due to exponentially decaying correlations for d ≫ξ,
and it is expected that the variational parameters will also converge exponentially,
once L is sufficiently large to contain ξ, and we assume the following form:
θ_i = θ_i^L=∞ + c_1 e^-c_2 L .
Table <ref> shows that this convergence sets in
for L>7,[The ordering of operators changes at L=10 but the operator content is unchanged, so it is still possible to use L=8,9 in the extrapolation.]
and the variational parameters extrapolated to L=∞ are given in the last row of Table <ref>.
These are used in the next section to initialize the vacuum on
lattices up to L=500.
An example of
extrapolating the variational parameters
is shown in Fig. <ref> for
the parameter θ_1,
associated with
Ô_mh^V(1).
The exact results obtained for L≤ 14 are well reproduced and
extrapolated with the exponential functional form in Eq. (<ref>).[
One could imagine generating the θ^L=∞_i for a variety of m and g, and then machine learning the variational parameters for all m and g.
This could be particularly useful for m and g that approach the continuum limit, where the correlation length can no longer be contained within lattice volumes accessible to classical simulators.]
A more complete discussion of the parameter extrapolations, along with
examples for m=0.1 and g=0.3 and for m=0.1 and g=0.8, can be found in App. <ref>.
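As an illustration of this extrapolation, the following sketch fits the exponential form above with scipy.optimize.curve_fit; the input arrays are synthetic placeholders standing in for the L = 8-14 angles (in practice these are the SC-ADAPT-VQE results), and the fitted constant is the estimate of θ^L=∞.

```python
# Sketch: infinite-volume extrapolation of a variational parameter, theta(L) = theta_inf + c1*exp(-c2*L).
import numpy as np
from scipy.optimize import curve_fit

def exp_form(L, theta_inf, c1, c2):
    return theta_inf + c1 * np.exp(-c2 * L)

L_vals = np.array([8, 9, 10, 11, 12, 13, 14], dtype=float)

# Synthetic placeholder data (illustration only); replace with the optimized SC-ADAPT-VQE angles.
rng = np.random.default_rng(0)
theta_vals = exp_form(L_vals, 0.30, -0.05, 0.3) + 1e-4 * rng.normal(size=L_vals.size)

popt, pcov = curve_fit(exp_form, L_vals, theta_vals, p0=(theta_vals[-1], -0.01, 0.3))
theta_infinity, c1, c2 = popt
print("theta(L=inf) =", theta_infinity)
```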
§ PREPARING THE VACUUM OF THE SCHWINGER MODEL ON LARGE LATTICES
The vacuum preparation circuits, determined for L≤14 with SC-ADAPT-VQE using an exact (statevector) classical simulator, are scaled to prepare the vacuum on much larger lattices.
These scaled circuits are used to prepare the vacuum on lattices of up to L=500 (1000 qubits) using a classical MPS circuit simulator
and up to L=50 (100 qubits) using IBM's Eagle-processor quantum computers (fourth step in Fig. <ref>).
We emphasize that this scaling requires no further optimization of the circuits.
The chiral condensate and energy density determined from the classical simulator
are found to be consistent with DMRG calculations.
On the quantum computers, the chiral condensate and charge-charge correlators
are measured
to probe the quality of one- and two-qubit observables.
The results are in agreement with those from the classical MPS simulator, within statistical uncertainties.
§.§ Classical Simulation
Very large quantum circuits that do not generate long-range entanglement can be efficiently simulated using the
qiskit matrix_product_state classical simulator.
Here it is used to simulate the preparation of the Schwinger model vacuum
on L≫ 14 lattices,
applying the scalable circuits
determined in the previous section
from 7 steps of SC-ADAPT-VQE on L≤ 14 lattices.
The values obtained for the chiral condensate and energy density up to L=500 are compared with DMRG results, and are presented in Table <ref>.
The deviations in the energy density (∼ 1× 10^-4) and chiral condensate (∼ 1× 10^-3) are in good agreement with what was found for smaller L;
see Table <ref>.
This demonstrates that the systematic errors in the vacuum wavefunctions prepared with the scaled quantum circuits are (approximately) independent of L over this range of lattice volumes.[The 6^ th operator in the extrapolation is a surface operator, whose contribution to the energy density scales as 1/L.
Therefore, if SC-ADAPT-VQE could be performed on, for example, L=500, this operator would likely not be in the ansatz.
Evidently the “error" introduced by extrapolating the ansatz with a surface operator is small since the deviation of observables for large L is the same as for L≤14.]
The scaled circuits corresponding to m=0.1, g=0.3 and m=0.1, g=0.8 have also been used to successfully prepare the vacuum.
However, due to the larger correlation lengths,
MPS calculations with L≳ 100 required excessive classical resources, and were not performed. See App. <ref> for more details.
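For orientation, the following is a minimal sketch of how such a simulation can be invoked; it assumes the qiskit-aer package, and the empty circuit is a placeholder for the scaled SC-ADAPT-VQE state-preparation circuit acting on 2L qubits.

```python
# Sketch: running a state-preparation circuit on qiskit's matrix_product_state simulator.
from qiskit import QuantumCircuit, transpile
from qiskit_aer import AerSimulator

sim = AerSimulator(method="matrix_product_state")

qc = QuantumCircuit(20)   # placeholder: replace with the scaled SC-ADAPT-VQE circuit
qc.measure_all()

job = sim.run(transpile(qc, sim), shots=10_000)
counts = job.result().get_counts()
```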
It is worth summarizing what has been accomplished
in this work with classical simulations:
* In Sec. <ref>, the vacuum energy density and chiral condensate were determined exactly for
L≤ 14 (28 staggered lattice sites) using exact diagonalization, and for L≤ 10^3 using DMRG.
The results for L≥ 9 were (consistently) extrapolated to L→∞, with 1/L scaling.
* In Sec. <ref>, SC-ADAPT-VQE, based on the scalable operator pool determined in Sec. <ref>, was performed on L≤ 14 lattices.
Intensive quantities were found to converge exponentially with circuit depth, and the errors in these quantities, as well as the structure of the state preparation circuits, were found to become independent of L.
This enabled the variational parameters defining the state preparation circuits to be consistently extrapolated to arbitrarily large L.
* In this section,
the quantum circuits corresponding to 7 steps of SC-ADAPT-VQE were scaled and applied to large lattices using the qiskit MPS circuit simulator.
The deviations of the energy density and chiral condensate computed from these wavefunctions were found to be independent of L, i.e., consistent with L≤ 14.
These main points indicate that the quantum circuits determined classically with SC-ADAPT-VQE can be used to prepare the vacuum
of the Schwinger model
on quantum computers at scale with a precision that is independent of system size.
§.§ Quantum Simulations on 100 Qubits using IBM's Quantum Computers
The quantum circuits determined via classical simulation on L≤ 14 lattices
are now scaled to larger L to
prepare the vacuum of the Schwinger model
on up to 100 qubits of
IBM's 127 superconducting-qubit
Eagle quantum computers with heavy-hexagonal communication fabric.
Hamiltonian parameters m=0.5, g=0.3 with L=14,20,30,40,50,
and
state preparation circuits scaled from 2 steps of SC-ADAPT-VQE
(compared to 7 steps in the previous section), are used.
Fewer steps equates to shallower circuits, and a preliminary study of the performance of the computer with more steps can be found in App. <ref>.
The variational parameters extrapolated to the chosen range of L for 2 steps of SC-ADAPT-VQE are given in
Table <ref> in
App. <ref>.
The large number of qubits and two-qubit gates involved in these
simulations make error mitigation essential to obtain reliable estimates of observables.
Specifically, this work uses readout-error mitigation (REM), dynamical decoupling (DD), Pauli twirling (PT), and decoherence renormalization.
The qiskit Runtime Sampler primitive is used to obtain readout-corrected quasi-distributions via the matrix-free measurement mitigation (M3) from Ref. <cit.>.
Also included in the primitive is DD, which is used to suppress crosstalk and idling errors <cit.>.
Crucial to the error mitigation is decoherence renormalization <cit.>,
modified in this work for simulations on a large number of qubits, which we call Operator Decoherence Renormalization (ODR).
Underpinning decoherence renormalization is PT <cit.>, which turns coherent two-qubit gate errors into incoherent errors, which can be inverted to recover error-free expectation values.
Unlike previous applications of decoherence renormalization,
which assume a constant decoherence across the device,
ODR estimates the decoherence separately for each operator.
This is done by running a mitigation circuit, which has the same
operator structure as the one used to extract the observables, but with the noise-free result being known a priori.
We choose the state preparation circuits with the variational parameters
set to zero for mitigation,
and
in the absence of noise this prepares the strong coupling vacuum, |Ω_0 ⟩ in Eq. (<ref>).
Naively, it could be expected that post-selecting results on states with
total charge Q = 0 would eliminate the leading bit-flip errors <cit.>.
However, when post-selection is combined with ODR, which accommodates single-qubit decoherence, undesirable correlations between qubits are introduced.
We find that performing both mitigation techniques
(post-selection and ODR)
degrades the quality of two-qubit observables; since post-selection is found to be the less effective of the two, it is not used in this work.
More details about ODR and post-selection can be found in App. <ref>.
The local chiral condensate, χ_j in Eq. (<ref>),
obtained from ibm_cusco for L=50 is shown in Fig. <ref>,
where the subscript “j" denotes the qubit index.[
For all of the results presented in this work,
correlated bootstrap re-sampling was used to estimate statistical (shot) uncertainties.
The circuits used for L≤ 40 were executed on ibm_brisbane with
40 Pauli-twirled instances for both the mitigation and the physics circuits,
each with 8×10^3 shots.
For L=50, the M3 method was not applied due to the large overhead in classical computing,
and production was executed on ibm_cusco with 150 Pauli-twirled instances.
Additional details can be found in App. <ref>.]
Three different sets of results (in different stages of error mitigation) are shown: with only DD applied (squares), with DD and PT applied (diamonds), and after ODR (circles). Looking at the results with only DD (squares), it is seen that the noise is not uniform across the device, signaling a significant contribution of coherent noise. After PT (diamonds), this coherent noise is averaged out, and is transformed into incoherent (depolarizing) noise, seen by the almost-constant shift of the results compared with the MPS simulation. Finally, ODR removes this shift by mitigating the effects of depolarizing noise. More details on the interplay between these methods can be found in App. <ref>.
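For reference, the following is a generic post-processing sketch (not the production analysis code) of how single-qubit expectation values, such as those entering χ_j, can be estimated from the bitstring counts returned by a Sampler-style primitive; the counts-dictionary format and qiskit's little-endian bit ordering are the only assumptions.

```python
# Sketch: per-qubit <Z_j> from a dictionary of measured bitstring counts.
def z_expectations(counts, n_qubits):
    """counts: {bitstring: frequency}; returns [<Z_0>, ..., <Z_{n-1}>]."""
    total = sum(counts.values())
    z = [0.0] * n_qubits
    for bits, c in counts.items():
        for j in range(n_qubits):
            bit = int(bits[-1 - j])       # little-endian: qubit j is the (j+1)-th character from the right
            z[j] += c * (1 - 2 * bit)     # |0> -> +1, |1> -> -1
    return [v / total for v in z]
```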
With the statistics and twirlings gathered,
the 1σ uncertainties in each point are ∼ 15% of their mean,
and each χ_j is within 3 σ of the MPS simulator result (the individual values of χ_j can be CP averaged to reduce the uncertainty, as shown in Fig. <ref> in App. <ref>).
It is expected that these uncertainties will reduce with increased statistics
and twirlings.
Notice that the expected values of
χ_j deviate from the volume average
for only a few qubits near the boundaries.
This is because the boundaries perturb the wavefunction only over a few correlation lengths, leaving the rest of the volume unaffected.
The chiral condensates for L=14,20,30,40 and 50 are given in Table <ref>.
This is an average over the whole lattice, Eq. (<ref>), and therefore
the uncertainty decreases with increasing L
due to increased sampling.
Despite having smaller uncertainties,
the results remain within 3σ of the MPS simulator result.
Also given in Table <ref> is the number of two-qubit CNOT gates.
The number of CNOTs is seen to grow linearly with L, without affecting the quality of the result, and 788 CNOTs over 100 qubits is well within the capabilities of the quantum computer.
This is in line with other quantum simulations that have been performed with large numbers of qubits and CNOTs using IBM's quantum computers <cit.>.
This highlights the fact that it is not the total number of CNOT gates in the
quantum circuit that is limiting the scale of simulations, but rather it is the number of CNOT gates per qubit.
This, of course, assumes that the CNOT gates in a single layer of the circuit can be enacted in parallel.
Due to this, increasing L actually improves the statistical precision of volume-averaged quantities, with uncertainties decreasing as ∼ 1/√(L).
In a similar vein, since scalable circuits repeat structures of size ξ many times over the whole lattice,
the number of Pauli-twirls being sampled is effectively multiplied by L/ξ.
To further probe the quality of the prepared wavefunctions, correlations between electric charges on the spatial sites are considered.
The charge on a spatial site is defined
to be the sum of charges on the two associated staggered sites,
Q̂_k = Q̂_2k + Q̂_2k+1,
where k is an integer corresponding to the spatial site.
Of particular interest are connected correlation functions between spatial charges,[For periodic boundary conditions, ⟨Q̂_k⟩ = 0, but for OBCs ⟨Q̂_k⟩ decays exponentially away from the boundaries; see App. <ref>.] defined as
⟨Q̂_jQ̂_k⟩_c = ⟨Q̂_jQ̂_k⟩ - ⟨Q̂_j⟩⟨Q̂_k⟩ .
These correlations decay exponentially for | j-k|≳ξ due to confinement and charge screening.
Unlike the chiral condensate, which is a sum of single qubit observables, ⟨Q̂_jQ̂_k⟩_c is sensitive to correlations between qubits, i.e., requires measurement of ⟨Ẑ_j Ẑ_k ⟩.
The results from ibm_cusco for L=50 are shown in Fig. <ref>.
The correlations are symmetric under j↔ k,
and only the lower-triangle of the correlation matrix is shown.
Each measured value is within 3σ of the MPS simulator result, consistent with statistical fluctuations.
Also shown in Fig. <ref> are the spatial charge-charge correlations as a function of distance, averaged over the lattice volume,
⟨Q̂Q̂⟩_c (d) = 1/L-4-d∑_k=2^L-3-d⟨Q̂_k Q̂_k+d⟩_c .
To reduce the effects of the boundaries, this sum omits the first and last two spatial lattice sites.
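The two definitions above can be evaluated with a small post-processing sketch: given per-shot spatial-site charges reconstructed from the measured bitstrings (an assumed input format of shape (number of shots, L)), it returns the connected correlation matrix and its volume average at separation d, omitting two sites at each boundary as described.

```python
# Sketch: connected charge-charge correlators from per-shot spatial-site charges Q[shot, k].
import numpy as np

def connected_correlator(Q):
    """Return <Q_j Q_k>_c as an (L, L) matrix from samples Q of shape (n_shots, L)."""
    mean_Q = Q.mean(axis=0)                              # <Q_k>
    QQ = np.einsum("sj,sk->jk", Q, Q) / Q.shape[0]       # <Q_j Q_k>
    return QQ - np.outer(mean_Q, mean_Q)

def volume_averaged(QQc, d):
    """<QQ>_c(d) averaged over k = 2, ..., L-3-d (two sites omitted at each boundary); valid for d <= L-5."""
    L = QQc.shape[0]
    ks = np.arange(2, L - 2 - d)
    return QQc[ks, ks + d].mean()
```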
As anticipated, this correlation function decays exponentially, with a characteristic length scale proportional to ξ = 1/m_hadron.
For d>2, the correlations are consistent with zero within 2σ (note that the log scale distorts the error bars),
and
increased numbers of shots and twirlings
are needed to distinguish additional points from zero.
The local chiral condensate and charge-charge correlations corresponding to the other values of L are given in App. <ref>.
§ SUMMARY AND OUTLOOK
In this chapter, the vacuum of the lattice Schwinger model was prepared
on up to 100 qubits of IBM's
127-qubit Eagle-processor quantum computers,
ibm_brisbane and ibm_cusco.
This was accomplished with SC-ADAPT-VQE, an algorithm for identifying systematically improvable
state preparation quantum circuits that can be robustly scaled to operate on any number of qubits.
The utility of scalable circuits relies on physically relevant systems often having a (discrete) translational symmetry, and a finite correlation length set by the mass gap.
Together, these imply that the state preparation circuits have unique structure over approximately a correlation length <cit.>,
which is replicated across the lattice.
The lattice Schwinger model with OBCs was chosen to explore these ideas as its vacuum has (approximate) translational invariance and, due to confinement, has a mass gap.
By performing SC-ADAPT-VQE on a classical simulator, state preparation circuits for lattices of L≤ 14 (28 qubits) were built from an operator pool containing both translationally invariant terms and ones localized to the boundaries.
Exponential convergence in the quality of the prepared state with both system size and circuit depth enabled the extrapolation of circuits that can be scaled to arbitrarily large lattices.
This methodology was successfully demonstrated by preparing the Schwinger model vacuum on up to 100 superconducting qubits of IBM's quantum computers.
Both the charge-charge correlators and the chiral condensate were measured, and were found to agree with results from an MPS simulator, within statistical uncertainty.
Vital to the success of these quantum simulations involving a large number of qubits was the development of an improved error mitigation technique, which we have called Operator Decoherence Renormalization (ODR).
Due to its generality,
we expect that the scalable circuit framework embodied by SC-ADAPT-VQE can be applied to other gapped theories with translationally-invariant ground states.
Of particular importance is QCD,
for which the initialization of ground states for quantum simulations continues to be a daunting prospect.
It is likely that many of the ideas used to construct efficient state preparation circuits for the Schwinger model can be applied to the initialization of the ground state of QCD.
Of course, the operator pool that informs the state preparation circuits will be
more diverse since the gauge field is no longer completely constrained by Gauss's law.
Local quark-field operators, extended quark operators with associated gauge links, and closed loops of gauge links will need to be included in the pool.
It is also expected that the variational parameters defining the
ground-state preparation circuits will converge exponentially,
once the simulation volume can completely contain the pion(s).
The utility of SC-ADAPT-VQE is that it provides a straightforward prescription for determining low-depth quantum circuits that prepare the ground state on systems of arbitrary size with only classical computing overhead.
This not only allows for the quantum simulation of ground state properties, but will be important for future simulations of dynamics, where preparing the initial state is a crucial first step.
In the following chapter hadron dynamics will be simulated by first using SC-ADAPT-VQE to prepare hadron wavepackets on top of the vacuum, and then evolving them forward in time.
§ VOLUME EXTRAPOLATION OF THE ENERGY DENSITY AND CHIRAL CONDENSATE
Here the vacuum energy density and chiral condensate are extrapolated to L=∞.
The results of exact diagonalization and DMRG calculations are considered independently, providing consistent results within uncertainties. For the DMRG calculations, 60 sweeps were performed with a maximum allowed bond dimension of 150 and a truncation of Schmidt coefficients below 10^-10. This showed a convergence of 10^-10 in the energy of the vacuum state.
Discussions in Sec. <ref> motivated an inverse-power, 1/L, dependence
of the exact vacuum energies as the infinite-volume limit is approached.
This scaling was argued when L is much larger than the longest correlation length, and with OBCs.
Therefore, for masses and couplings that give rise to the lowest-lying hadron
being completely contained within the lattice volume,
we anticipate functional forms
ε(L) = ε(∞) + e_1/L + O(1/L^2) , χ(L) = χ(∞) + d_1/L + O(1/ L^2)
,
for ε and χ.
This is due to the finite penetration depth of boundary effects, and the exponential convergence of both the volume and the surface contributions to their infinite-volume values.
As a result, the surface terms make
O(1/L) contributions to intensive quantities, e.g., densities.
To illustrate this, the expectation value of the charge on each spatial site, Q̂_k, for m=0.5,g=0.3 and L=14 is shown in Fig. <ref>.
This converges exponentially with the distance to the boundary to ⟨Q̂_k ⟩ = 0, the expected infinite volume value.
The results of fits to the exact and DMRG results for the energy density and chiral condensate for m=0.5, g=0.3 are shown in Fig. <ref>,
and for m=0.1, g=0.8 and m=0.1, g=0.3 are shown in Fig. <ref>.
Using polynomials
that are linear and quadratic in 1/L,
fits are performed for L≥ 9 and extrapolated to L=∞.
The differences between extrapolations obtained from the two fit forms are used to
estimate the systematic fitting error,
corresponding to the black and grey points (and error bars).
The difference between linear and quadratic fits is negligible for the exact results, except
for the chiral condensate
in the case of m=0.1 and g=0.3, which sees a small quadratic dependence.
When the fit interval is reduced to L≥ 10, this dependence once again becomes negligible.
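The procedure used here reduces to the following sketch: fit the finite-L values to polynomials that are linear and quadratic in 1/L, take the L→∞ value from the linear fit, and assign the difference between the two extrapolations as the fitting systematic; the input arrays are placeholders for the exact or DMRG values.

```python
# Sketch: L -> infinity extrapolation with linear and quadratic polynomials in 1/L.
import numpy as np

def extrapolate(L_vals, obs_vals):
    x = 1.0 / np.asarray(L_vals, dtype=float)
    lin = np.polyfit(x, obs_vals, 1)       # obs = a1/L + a0
    quad = np.polyfit(x, obs_vals, 2)      # obs = b2/L^2 + b1/L + b0
    central = lin[-1]                      # constant term = value at 1/L -> 0
    syst = abs(lin[-1] - quad[-1])         # linear-vs-quadratic spread as a systematic
    return central, syst
```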
§ OPTIMIZING TROTTERIZED CIRCUITS FOR STATE PREPARATION
As discussed in the main text, even after the operator pool has been chosen for SC-ADAPT-VQE, there remains freedom in how the pool of unitary operators, Eq. (<ref>),
is implemented as quantum circuits.
For example,
instead of leading-order Trotterization,
a higher-order Trotterization could be used to suppress Trotter errors.
Alternatively,
different orderings of the terms in the leading-order Trotterization can be considered.
This freedom can be used to optimize the convergence of SC-ADAPT-VQE with circuit depth.
Also, different Trotter orderings can break the CP symmetry.
The circuit orderings in Fig. <ref> were chosen to minimize the circuit depth, and for d=1,3,5 this ordering preserves CP, while for d=7,9 it breaks CP.
Consider the different arrangements of the terms in the Trotterization of Ô^V_mh
(1), given in Eq. (<ref>), as shown in Fig. <ref>a.
The depth-2 ordering (left) was used
to obtain the results presented
in the main text as it leads to the shallowest circuits.
All the orderings shown
in Fig. <ref>a
are equivalent up to 𝒪[(θ_1)^2]
(where θ_1 is
the coefficient of the operator in the corresponding unitary operator),
but the deeper circuits allow for the generation of longer-range correlations.
Note that the deeper circuits can break the CP symmetry; e.g. for L=10 depths 2 and 4 preserve CP while depths 3, 4, 5 and 7 break CP.
It is found that this added circuit depth improves the convergence of SC-ADAPT-VQE, as shown in Fig. <ref>b.
This demonstrates that to minimize circuit depth, for a fixed error threshold, it is preferable to choose a deeper Trotterization of Ô^V_mh(1),
instead of
going to a greater number of SC-ADAPT-VQE steps.
For example, it is more efficient to perform 2 steps of SC-ADAPT-VQE with a depth-3 Trotterization of Ô^V_mh(1), than to perform 3 steps of SC-ADAPT-VQE with a depth-2 Trotterization of Ô^V_mh(1).
Also shown in Fig. <ref>b are results obtained
from performing SC-ADAPT-VQE with exact unitary operators (no Trotterization).
This is found to always perform better than the Trotterized unitaries,
except for a single step.
Intriguingly, for a single step, the error is less with a deep first-order Trotterization than with the exact unitary.
This suggests that the optimizer is finding a solution in which the Trotter errors are tuned to improve the overlap with the vacuum.
Note that the deeper Trotterizations of Ô^V_mh(1) move the recurrence of Ô^V_mh(1) (e.g., at step 4 for m=0.5, g=0.3) to deeper in the SC-ADAPT-VQE ansatz.
§ VOLUME EXTRAPOLATIONS OF THE SC-ADAPT-VQE VARIATIONAL PARAMETERS: AN “EFFECTIVE-THETA INFINITY”
To initialize large quantum registers, the variational parameters defining the state-preparation quantum circuits need to be extrapolated with high precision.
In volumes large enough to contain the longest correlation length,
the variational parameters are expected to be exponentially close to their infinite-volume values.
Therefore, we assume that the form of the volume dependence for practical purposes is that given in Eq. (<ref>),
θ_i(L) = θ_i^∞ + c_1 e^-c_2 L ,
and check the self-consistency of this form.[For the current paper, due to the small number of parameters, the selection of the points to be fitted was determined by visual inspection (if the points followed an exponential decay or not).]
While there could be a polynomial coefficient of the exponential,
we find that this is not required.
Fitting exponential functions can be challenging;
however, with results over a sufficient range of L, algebraic techniques, such as effective masses, have proven useful in lattice QCD calculations to eliminate “uninteresting” parameters, while at the same time mitigating correlated fluctuations in measurements <cit.>.
With the goal of initializing large lattices, it is the
θ_i^∞ that are of particular interest.
Assuming the volume dependence given in Eq. (<ref>),
it is useful to form four relations
y_L = θ_i(L) - θ_i^∞ = c_1 e^-c_2 L ,
y_L+1 = θ_i(L+1) - θ_i^∞ = c_1 e^-c_2 e^-c_2 L ,
y_L+2 = θ_i(L+2) - θ_i^∞ = c_1 e^-2 c_2 e^-c_2 L ,
y_L+3 = θ_i(L+3) - θ_i^∞ = c_1 e^-3 c_2 e^-c_2 L .
These relations can be combined to isolate θ_i^∞, providing an L-dependent “effective-θ_i^∞”, denoted as θ_i, eff^∞:
y_L+1 y_L+2 = y_L y_L+3 ,
θ_i, eff^∞(L) = θ_i(L) θ_i(L+3) - θ_i(L+1)θ_i(L+2)/θ_i(L) + θ_i(L+3) - θ_i(L+1) - θ_i(L+2)
.
For a sufficiently large set of results,
θ_i, eff^∞(L)
will plateau for large L if the functional form in
Eq. (<ref>) correctly describes the results.
This plateau can be fit by a constant,
over some range of large L,
to provide an estimate of θ_i^∞.
This method is similar to using varpro (variable projection) in a multi-parameter
χ^2-minimization.
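A short sketch of this construction is given below, assuming theta is a mapping from lattice size to the optimized parameter θ_i(L); plotting the resulting θ_i,eff^∞(L) against L and fitting the plateau with a constant then provides the estimate of θ_i^∞.

```python
# Sketch: the "effective theta-infinity" built from four consecutive volumes.
def theta_eff_infinity(theta, L):
    """theta: dict mapping lattice size to theta_i(L); returns theta_i,eff^inf(L)."""
    num = theta[L] * theta[L + 3] - theta[L + 1] * theta[L + 2]
    den = theta[L] + theta[L + 3] - theta[L + 1] - theta[L + 2]
    return num / den

def plateau(theta):
    """Evaluate theta_i,eff^inf(L) for every L with L, L+1, L+2, L+3 available."""
    return {L: theta_eff_infinity(theta, L)
            for L in sorted(theta)
            if all(L + k in theta for k in (1, 2, 3))}
```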
As an example, the results for θ_1^∞
from a 3-parameter fit of θ_1 to Eq. (<ref>) are compared with a determination using
θ_1, eff^∞(L) from
Eq. (<ref>).
Results obtained with these two methods for m=0.1, g=0.3 and for m=0.1, g=0.8 are shown in Fig. <ref>.
The result obtained
from fitting a constant to θ_1, eff^∞(L)
is consistent with the asymptotic result from the 3-parameter fit, but with somewhat larger uncertainty.
The current deficiency of this comparison is the small number of points in the plateau region, and results for larger L are required for a more complete comparison.
Analysis of the other variational parameters shows a similar behavior.
The consistency between the two extraction methods is encouraging, and suggests that the selected exponential form may indeed well describe the results.
The fitting method is likely insensitive to polynomial corrections (coefficients), and requires further exploration to fully quantify uncertainties in these asymptotic values of the variational parameters.
However, as the MPS simulations with these extrapolated angles reproduce the results calculated with DMRG, it appears that, for the systems and parameters we have selected in our analysis, systematic errors introduced by selecting this functional form are small.
§ OPERATOR DECOHERENCE RENORMALIZATION (ODR)
To mitigate the effects of noise, the decoherence renormalization technique <cit.>
is modified for use with larger systems.
In its original form,
decoherence renormalization assumes that each qubit decoheres at the same rate under a depolarizing noise channel.
When working with a small number of qubits, this is a reasonable approximation, but for larger systems, it is necessary to consider the
rate of decoherence of each qubit individually.
After Pauli twirling, the qubit errors are well described by a Pauli error channel <cit.>, which maps the N qubit density matrix to
ρ → ∑_i=1^4^Nη_i P̂_i ρP̂_i ,
where P̂_i is a tensor product of Pauli operators (Î, X̂, Ŷ or Ẑ) acting on N qubits, and the set of η_i characterizes the error channel. It is important to understand the effect of this error channel on observables. Generic observables can be written as a sum over tensor products of Pauli operators, so it suffices to consider an observable, Ô, that is a tensor product of Pauli operators. Under a Pauli error channel, the measured (noisy) expectation value, ⟨Ô⟩_meas, is given by
⟨Ô⟩_meas = ∑_i=1^4^Nη_i( P̂_i ÔP̂_i ρ) .
Note that P̂_i ÔP̂_i = ±Ô, depending on whether or not Ô and P̂_i commute or anti-commute. Using this fact, the measured (noisy) expectation value ⟨Ô⟩_meas, can be seen to be directly proportional to the predicted (noiseless) expectation value, ⟨Ô⟩_pred = (Ôρ), i.e.,
⟨Ô⟩_meas = ( 1-η_O ) ⟨Ô⟩_pred .
The ODR factor η_O is, in general,
distinct for each operator, and can be estimated by running a mitigation circuit that has the same structure as the physics circuit, but where ⟨Ô⟩_pred is already known.
In this work, the mitigation circuit was taken to be the state preparation circuit with variational parameters set to zero, which is the identity in the absence of noise.
This mitigation circuit will have the same noise channel as the physics circuit provided that the noise is dominated by errors in the two-qubit gates and is independent of the single qubit rotation angles in the circuit. Without noise, the mitigation circuit prepares the strong coupling vacuum, where ⟨Ô⟩_pred is known,
and therefore η_O can be computed.
Once η_O is determined, Eq. (<ref>) is used to estimate the value of the noiseless observable from the
results of the physics circuits.
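The inversion just described reduces to a one-line rescaling, sketched below with illustrative argument names; it assumes the noiseless mitigation-circuit value of the operator is non-zero, which holds for the observables considered here in the strong-coupling vacuum.

```python
# Sketch: Operator Decoherence Renormalization, <O>_meas = (1 - eta_O) <O>_pred.
def odr_rescale(o_mit_measured, o_mit_predicted, o_phys_measured):
    """Estimate the noiseless physics-circuit expectation value of O."""
    eta_O = 1.0 - o_mit_measured / o_mit_predicted   # from the mitigation circuit (known <O>_pred)
    return o_phys_measured / (1.0 - eta_O)           # invert the same relation for the physics circuit
```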
An added benefit of ODR is that it reduces the need for other error mitigation techniques.
For example, readout errors are
partially mitigated since the measured observables are affected by both gate and measurement errors.
This is convenient as current measurement mitigation techniques require a large classical computing overhead.
It also reduces the need for post-selection, which in our work could have been performed on states with total charge Q = 0.
This post-selection removes single-qubit errors, but introduces further correlations between qubits. These correlations effectively increase the size of the
single-qubit errors (making observables sensitive to errors anywhere on the register).
This reduces the efficacy of the Pauli error model, making post-selection incompatible with ODR.[This is not true for observables involving the entire qubit register,
e.g., the vacuum-vacuum persistence probability.
This is because applying the Q=0 constraint when measuring global observables will not introduce any new correlations.]
Another desirable feature of ODR is that it allows simulations to retain the results of
a much larger fraction of the ensemble.
This is because
the probability of a single-qubit error increases with system size,
and therefore much of the ensemble is lost with naive post selection.
Further, such errors have little effect on local observables that are summed across the entire qubit register.
§ ADDITIONAL RESULTS FROM CLASSICAL SIMULATIONS
The results corresponding to Fig. <ref> for m=0.1,g=0.3 are given in Table <ref> and Table <ref>, and for
m=0.1,g=0.8 are given in Table <ref> and Table <ref>.
The 6^ th step of the algorithm is chosen for m=0.1, g=0.8 because the operator structure through L=14 has converged, allowing a consistent extrapolation of the circuits to large L.
This can be seen by comparing the operator structure in Table <ref> (6 steps) and Table <ref> (7 steps). An interesting observation is that the sum of parameters for a particular operator in the ansatz remains approximately unchanged when an additional insertion of the operator is added.
For example, compare the sum of parameters for Ô^V_mh(1) between L=8 and 9 in Table <ref>.
Using the same method as for m=0.5, g=0.3 in Sec. <ref>, scalable circuits for m=0.1, g=0.3 and m=0.1 and g=0.8 were also determined.
The results of running these circuits on qiskit's MPS simulator for m=0.1, g=0.3 and m=0.1 and g=0.8 are given in Table <ref>.
Due to the longer correlation lengths for these parameters, it was not possible to go to L=500 with the available computing resources.
In these MPS simulations, qiskit's default settings were used, where the bond dimension increases until machine precision is achieved.
The details of the qiskit MPS simulator can be found on the qiskit website <cit.>.
Again, the energy density and chiral condensate are found to have precision comparable to
that found on smaller systems.
This shows that, despite the longer correlation lengths for m=0.1, g=0.3 and m=0.1,g=0.8, it is still possible to accurately extrapolate the state preparation circuits to large lattices.
Note that stabilization of operator ordering for the different m and g (see Tables <ref>, <ref> and <ref>) does not follow the hierarchy in correlation lengths.
This is because larger ξ increases both the contribution of the volume ∼ e^-d/ξ and surface ∼ξ/L terms to the energy density.
To emphasize the advantage of performing SC-ADAPT-VQE using a classical simulator, we give an estimate of the number of shots required to perform SC-ADAPT-VQE on a quantum computer.
For m=0.5,g=0.3, L=14 performing 10 steps of SC-ADAPT-VQE required ∼ 6000 calls to the optimizer, in addition to about 500 evaluations of ⟨ [Ĥ, Ô_i ] ⟩ for pool operators Ô_i.
Each one of these calls required roughly 10^-3 precision in the measured observable, corresponding to about 10^6 shots on a noiseless device.
Therefore, SC-ADAPT-VQE for L=14 would require ∼ 10^10 shots on a noiseless device.
Factoring in the effects of device noise would increase this estimate by at least a factor of 10, and probably close to a trillion shots would be required to perform SC-ADAPT-VQE on a quantum computer. This is infeasible on current hardware.
§ ADDITIONAL DETAILS AND RESULTS FROM SIMULATIONS USING IBM'S QUANTUM COMPUTERS
In this appendix, we provide additional details about how our results are obtained from IBM's quantum computers, together with the additional figures not shown in Sec. <ref>.
All measurements are performed on ibm_brisbane (L≤ 40) and ibm_cusco (L=50) by sending the state preparation circuits,
with measurements in the computational (z) basis,
via the qiskit Runtime Sampler primitive.
The values of the variational parameters obtained from fitting to the exponential form in Eq. (<ref>) for 2 steps of SC-ADAPT-VQE are given in Table <ref>. The different qubits used for each lattice size can be seen in the insets in Figs. <ref>
and <ref>.
χ_j, obtained from ibm_brisbane for L=14,20,30 and 40, is shown in Fig. <ref>, and the charge-charge correlation functions are shown
in Fig. <ref>.
In Fig. <ref>, the CP symmetry relating χ_j = χ_2L-1-j is used to effectively double the number of shots, resulting in statistical error bars that are smaller by a factor of √(2).
In an effort to explore the limitations of the quantum computer,
the 3-step SC-ADAPT-VQE state preparation circuits for L=30 and L=50 were implemented on ibm_brisbane and ibm_cusco, respectively.
The structure of the ansatz wave function and corresponding variational parameters can be found in Table <ref>.
The local chiral condensate and charge-charge correlators obtained from 80 (L=30) and 40 (L=50) twirled instances, each with 8× 10^3 shots, are shown in Figs. <ref> and <ref>.
Despite the factor of three increase in the number of
CNOTs relative to 2 layers (1254 versus 468 for L=30, and 2134 versus 788 for L=50), the results are consistent with those obtained from the qiskit MPS circuit simulator.
Note that qubits 0 and 2 have decohered for both volumes, and in principle could be removed
from volume averaged quantities, such as the chiral condensate.
By sending the circuits with the Sampler primitive, several error mitigation techniques are applied during runtime, as mentioned in Sec. <ref>. Specifically, the readout mitigation technique used (for L≤ 40) is M3 <cit.>. This method is based on correcting only the subspace of bit-strings observed in the noisy raw counts from the machine (which usually include the ideal ones plus those with short Hamming distance, introduced by the noise in the measurement), and using Krylov subspace methods to avoid having to compute (and store) the full assignment matrix.
Unlike the other works that have utilized ≥ 100 superconducting qubits <cit.>, which used zero-noise extrapolation (ZNE) <cit.> in conjunction with probabilistic error cancellation (PEC) <cit.> to remove incoherent errors,
we use Operator Decoherence Renormalization (ODR), as explained in App. <ref>.
Both methods require first transforming coherent errors into incoherent errors, which is done via Pauli twirling. However, the overhead in sampling using ZNE and PEC, compared with ODR, is substantial. For ZNE, one has to add two-qubit gates to increase the noise level, and then perform an extrapolation to estimate the noiseless result. In the minimal case, this leads to running only another circuit, like in ODR, but with a circuit depth that is three times as large as the original circuit (e.g., replacing each CNOT with 3 CNOTs).
However, this leads to a large uncertainty in the functional form of the extrapolation, and ideally the circuit is run with multiple noise levels to have multiple points from which to extrapolate. For PEC, the overhead is even larger, as it involves learning the noise model of the chip, by running multiple random circuits with different depths (see Ref. <cit.>). For ODR, as explained in App. <ref>, only the same “physics" circuits are run, but with all rotations set to zero, meaning the sampling overhead is only doubled.
To generate the different twirled circuits, the set of two-qubit Pauli gates G_2 and G'_2 that leave the (noisy) two-qubit gate invariant (up to a global phase) must be identified. For the quantum processors used in this work, the native two-qubit gate is the echoed cross-resonance (ECR) gate, which is equivalent to the CNOT gate via single qubit rotations. Explicitly,
ECR = 1/√(2)(X̂⊗Î-Ŷ⊗X̂) ,
which is equivalent to a CNOT up to the single-qubit rotations R_z(-π/2), R_y(π) and R_x(π/2).
Using the functions from the package qiskit_research <cit.>, together with the two-Pauli gate set shown in Table <ref>, a total of 40 (150) twirled circuits for both mitigation and physics were generated for L≤ 40 (L=50), each with 8× 10^3 shots.
From Fig. <ref>, the effects of each error mitigation method can be seen.
The first set of results shown are semi-raw, obtained directly from the quantum computer. They are not raw since DD is integrated into the circuits that are run on the machine (REM is also included for L≤ 40).
To check the effect that DD has, several runs were performed without it, and a degradation of the signal was visible when qubits were idle for long periods (the effects of not using DD were more evident when the deeper 3-step circuit was run).
Regarding REM, while the final fully-mitigated results for L=50 (no REM applied) and L ≤ 40 (REM applied) systems are similar in quality, a larger statistical sample for L=50 was required to achieve an equivalent level of precision (2.4×10^6 vs 6.4× 10^5 shots).
The second set shows the effects of applying PT (the results shown without Pauli twirling correspond to a single twirled instance).
It is seen that all the coherent noise on the different qubits has been transformed into uniform incoherent noise.
The last set shown is after ODR has been used to remove the incoherent noise.
CHAPTER: QUANTUM SIMULATIONS OF HADRON DYNAMICS IN THE SCHWINGER MODEL ON 112 QUBITS
This chapter is associated with Ref. <cit.>:
“Quantum simulations of hadron dynamics in the Schwinger model using 112 qubits" by Roland C. Farrell, Anthony N. Ciavarella, Marc Illa and Martin J. Savage.
§ INTRODUCTION
The highest-energy collisions of particles, such as those that take place in colliders and cosmic-ray events, reveal and provide insights into the underlying laws of nature.
They tighten constraints on the content, symmetries and parameters of the Standard Model (SM) <cit.>, and provide opportunities to discover what may lie beyond.
In searching for new physics and emergent phenomena in exotic states of matter, contributions from known physics must be reliably predicted with a complete quantification of uncertainties.
The associated complexities, particularly from the strong interactions described by quantum chromodynamics (QCD), provide challenges for phenomenological modeling and classical simulation.
Many forefront research questions in nuclear and particle physics require simulations of systems of fundamental particles that lie far beyond the capabilities of classical computing.
In principle, the collisions of fundamental and composite particles (hadrons)
could be simulated, from the initial state through to the final state(s), with sufficiently capable quantum computers (for recent reviews, see e.g., Refs. <cit.>).
Well before that point, new insights and improvements in predictions for such processes may come from NISQ-era devices <cit.>.
In this chapter, the real-time dynamics of composite particles, “hadrons”, in the lattice Schwinger model are simulated using IBM's superconducting-qubit quantum computers.
This work serves as a proof-of-concept, and builds toward future simulations that will probe highly-inelastic scattering of hadrons and out-of-equilibrium behavior of strongly interacting matter.
Our quantum simulations proceed with the following steps:[As this work was being completed, similar developments in the Thirring model were reported in Ref. <cit.>.]
* Prepare the interacting ground state (vacuum);
* Establish a localized hadron wavepacket on this vacuum;
* Evolve the system forward in time, allowing the hadrons to propagate;
* Measure observables in the final state that detect hadron propagation.
Crucial to the success of our quantum simulations is the development of comprehensive suites of scalable techniques that minimize circuit depth and two-qubit entangling gate counts.
The methods presented here are informed by the symmetries and phenomenological features of the Schwinger model.
They are physics-aware techniques with potential applicability to a broad class of lattice theories.
A significant challenge to performing quantum simulations of the Schwinger model is that, in axial gauge (A_x=0) <cit.>, the electric interaction between fermions is all-to-all.[Working in Weyl gauge (A_t=0) eliminates the need for all-to-all connectivity, but requires additional qubits to encode the gauge field on the links of the lattice.]
This leads to an 𝒪(L^2) scaling in the number of quantum gates required for time evolution, where L is the lattice volume.
It also requires quantum computers to have all-to-all connectivity between qubits for efficient simulation, a native feature in current
trapped-ion devices, but which has a large overhead on superconducting devices.
Fortunately, electric charges are screened in the Schwinger model, causing correlations between distant fermions to decay exponentially with separation; see Fig. <ref>a).
In Sec. <ref>, this screening is used to truncate interactions between fermions beyond a distance, λ, set by the correlation length and the desired level of precision of the simulation.
This improves the scaling of the number of gates required for time evolution to 𝒪(λ L), with 𝒪(λ)-nearest neighbor qubit connectivity.
The construction of low-depth quantum circuits for state preparation is another challenge addressed in this work.
In the previous chapter, we introduced the SC-ADAPT-VQE algorithm, and applied it to the preparation of the Schwinger model vacuum on 100 qubits of ibm_cusco.
SC-ADAPT-VQE uses symmetries and hierarchies in length scales to determine low-depth quantum circuits for state preparation.
Using a hybrid workflow, quantum circuits are determined and optimized on a series of small and modest-sized systems using classical computers, and then systematically scaled to large systems to be executed on a quantum computer.
In Sec. <ref>, SC-ADAPT-VQE is extended to the preparation of localized states, and used to establish a hadron wavepacket on top of the interacting vacuum; see Fig. <ref>b).
The wavepacket preparation circuits are optimized on a series of a small lattices by maximizing the overlap with an adiabatically prepared wavepacket.
The locality of the target state ensures that these circuits can be systematically extrapolated to prepare hadron wavepackets on large lattices.
Quantum
circuits for state preparation and time evolution are developed in Section <ref>.
The circuit design minimizes the two-qubit gate count for implementation on devices with nearest-neighbor connectivity, such as those available from IBM.
A building block for these circuits is a new gate decomposition for R_ZZ rotations acting between all pairs of a set of qubits.
This nearest-neighbor decomposition uses the same number of two-qubit gates as decompositions for devices with all-to-all connectivity, at the cost of an increased circuit depth.
Results from classical simulations performed on small lattices are presented in Sec. <ref>.
These simulations quantify the systematic errors originating from the approximations introduced in previous sections: preparation of the hadron wavepacket with SC-ADAPT-VQE, use of a truncated Hamiltonian for time evolution, and
Trotterization of the time evolution operator.
In Sec. <ref>, the techniques and ideas described in the previous paragraphs
are applied to quantum simulations of hadron dynamics on L=56 (112 qubit) lattices using IBM's quantum computer ibm_torino.
The initial state is prepared using SC-ADAPT-VQE, and time evolution is implemented with up to 14 Trotter steps, requiring 13,858 CNOTs (CNOT depth 370).
After applying a suite of error mitigation techniques, measurements of the local chiral condensate show clear signatures of hadron propagation.
The results obtained from ibm_torino are compared to classical simulations using the cuQuantum Matrix Product State (MPS) simulator.
In these latter calculations, the bond dimension in the tensor network simulations grows with the simulation time, requiring increased classical computing overhead.
Appendix <ref> provides details about
the convergence of the MPS simulations,
and App. <ref> provides details of the error mitigation strategy used in our simulations on 112 qubits of IBM's quantum computers.
This work points to quantum simulations of more complex processes, such as inelastic collisions, fragmentation and hadronization, as being strong candidates for a near-term quantum advantage.
§ SYSTEMATIC TRUNCATION OF THE ELECTRIC INTERACTIONS
The Schwinger model Hamiltonian in axial gauge and mapped to spin operators is given in Eq. (<ref>).
Due to the removal of the gauge degrees of freedom, the electric interactions are pair-wise between all of the fermions.
This is problematic for implementing time evolution e^-i t Ĥ on a quantum computer as it implies an 𝒪(L^2) scaling in the number of gates.
In addition, this interaction requires connectivity between every pair of qubits for efficient implementation.
Fortunately, charges are screened in confining theories like the Schwinger model, and correlation functions decay exponentially between charges separated by more than approximately a correlation length, ξ.
The correlation length is a scale that emerges from the solution of the theory, and is naturally related to the hadron mass, ξ∼ 1/m_hadron.
This motivates the construction of an effective Hamiltonian where interactions between distant charges are removed.
Such an effective interaction is systematically improvable with exponentially suppressed errors, and only requires 𝒪(ξ L) gates acting between qubits with maximum separation ∼ξ.
To form the effective interactions, it is beneficial to first specialize to the Q=0 sector with zero background electric field.
There are many equivalent ways to express the interaction due to the freedom of integrating Gauss's law from the left or right side of the lattice
when constraining the electric field.
However, the desire to preserve CP symmetry in the truncated theory motivates starting from a manifestly CP-symmetric interaction,
Ĥ_el^(Q=0) = g^2/2{∑_j=0^L-2 ( ∑_k=0^j Q̂_k )^2 + ∑_j=L+1^2L-1 ( ∑_k=j^2L-1Q̂_k )^2 + 1/2 [ (∑_j=0^L-1Q̂_j )^2 + (∑_j=L^2L-1Q̂_j )^2 ] } .
This has decoupled the interactions between charges on different halves of the lattice.
The most straightforward way to form the effective interactions would be to remove Q̂_j Q̂_j+d terms with d≳ξ.
However, this is ineffective because it is only the connected correlations that decay exponentially; on a staggered lattice, ⟨Q̂_j ⟩≠ 0 and ⟨Q̂_j Q̂_j+d⟩ = ⟨Q̂_j ⟩⟨Q̂_j+d⟩ + O(e^-d/ξ).
In order to remove the effects of disconnected correlations, consider charges and dipole moments defined on spatial sites,
Q̂_n = Q̂_2n + Q̂_2n+1 , δ̂_n = Q̂_2n - Q̂_2n+1 .
Unlike charges on staggered sites, the expectation value of a charge on a spatial site is zero, up to exponentially suppressed boundary effects, see App. B of Ref. <cit.>.
Of relevance to constructing the effective Hamiltonian is that correlations between spatial charges, and between spatial charges and dipole moments, decay exponentially,
⟨Q̂_n Q̂_n+d⟩∼ e^- d/ξ̅ , ⟨Q̂_n δ̂_n+d⟩∼ e^-d/ξ̅ ,
for d≳ξ̅,[Dipole-dipole interactions between spatial sites vanish since the Coulomb potential is linear in one dimension.] where ξ̅ = ξ/2 is the correlation length in units of spatial sites.
Rewriting Ĥ_el^(Q=0) in terms of spatial charges and dipole moments, and truncating interactions beyond λ spatial sites, it is found that
Ĥ_el^(Q=0) (λ̅) = g^2/2{∑_n=0^L/2-1[ ( L - 5/4 - 2n ) Q̂^2_n + 1/2Q̂_n δ̂_n + 1/4δ̂^2_n .
+ . ( 3/4 + 2n ) Q̂^2_L/2+n - 1/2Q̂_L/2+nδ̂_L/2+n + 1/4δ̂^2_L/2+n]
+ 2∑_n=0^L/2-2 ∑_m=n+1^min(L/2-1,n+λ̅)[ ( L - 1 - 2m ) Q̂_nQ̂_m + 1/2Q̂_nδ̂_m .
+ . ( 1 + 2n ) Q̂_L/2+nQ̂_L/2+m - 1/2Q̂_L/2+mδ̂_L/2+n] } .
This expression holds for even L, and the analogous expression for odd L can be found in App. <ref>.
For m=0.5,g=0.3, ξ∼ 0.5, and λ=1 will be used for demonstration purposes in the remainder of this work.
Expressed in terms of spin operators, the λ=1 interaction is,
Ĥ_el^(Q=0)(1)
= g^2/2{∑_n=0^L/2-1[ ( L/2 - 3/4 - n )Ẑ_2nẐ_2n+1+ (n+1/4 )Ẑ_L+2nẐ_L+2n+1 ]
+ 1/2∑_n=1^L/2-2 (2 Ẑ_2n + Ẑ_2n+1-Ẑ_L+2n-2Ẑ_L+2n+1 )
+ 1/2 (2Ẑ_0+Ẑ_1+Ẑ_L-2-Ẑ_L+1-Ẑ_2L-2-2Ẑ_2L-1 )
+ ∑_n=0^L/2-2[ (L/2-5/4-n )(Ẑ_2n+Ẑ_2n+1)Ẑ_2n+2 + ( L/2 - 7/4 - n )(Ẑ_2n+Ẑ_2n+1)Ẑ_2n+3
+ (n+1/4 )(Ẑ_L+2n+2+Ẑ_L+2n+3)Ẑ_L+2n + (n+3/4 )(Ẑ_L+2n+2+Ẑ_L+2n+3)Ẑ_L+2n+1] } .
Factors of the identity have been dropped as they do not impact time evolution, and this expression only holds for even L≥ 4.
The effects of these truncations on qubit connectivity, number of two-qubit ẐẐ terms, and the low-lying spectrum are illustrated in Fig. <ref>.
The number of two-qubit operations required for time evolution now scales linearly with volume O(λ L), and there are only operations between qubits separated by at most (2λ+1) staggered sites.
This interaction will be used to time evolve a wavepacket of single hadrons, and it is important that the impact of these truncations is small on the low-lying hadron states.
This is illustrated in panel c) of Fig. <ref>, where the low-lying spectrum is shown to rapidly converge with increasing λ.
There is some transient behavior presumably due to tunneling beyond the truncation range.
It is important to stress that the exponentially-converging truncations that are made possible by confinement are not obvious at the level of the spin Hamiltonian in Eq. (<ref>) due, in part, to Q̂_n Q̂_m having conspiring single Ẑ and double ẐẐ terms.
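As a cross-check of the quoted scaling, the following bookkeeping sketch counts the spatial-site pairs (n, m) retained in Ĥ_el^(Q=0)(λ̅) for even L, following the summation ranges in the truncated interaction above; the untruncated interaction grows as 𝒪(L²), while the truncated one grows as 𝒪(λ̅ L).

```python
# Sketch: number of retained spatial-site pairs in the truncated electric interaction.
def n_pair_terms(L, lam_bar):
    """L: even number of spatial sites; lam_bar: truncation range in spatial sites."""
    count = 0
    for n in range(L // 2 - 1):                              # n = 0, ..., L/2 - 2
        for m in range(n + 1, min(L // 2 - 1, n + lam_bar) + 1):
            count += 1                                       # one (n, m) pair per half of the lattice
    return 2 * count                                         # left and right halves contribute symmetrically

for L in (8, 16, 32, 64):
    print(L, n_pair_terms(L, L // 2), n_pair_terms(L, 1))    # untruncated vs lam_bar = 1
```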
§ SC-ADAPT-VQE FOR STATE PREPARATION
In the previous chapter, the Scalable Circuits-ADAPT-VQE (SC-ADAPT-VQE) algorithm and workflow was introduced, and used to prepare the vacuum of the Schwinger model on 100 qubits of IBM's quantum computers.
Here, SC-ADAPT-VQE will be detailed in general, and subsequent sections will apply it to prepare both the vacuum and a hadron wavepacket.
The goal of SC-ADAPT-VQE is to determine low-depth circuits for preparing a target wavefunction that are systematically scalable to any lattice size.
This scalability enables a hybrid workflow where circuits determined using classical computers are scaled and executed on a quantum computer.
This eliminates the difficult task of optimizing parameterized quantum circuits on a quantum computer that has both statistical noise from a finite number of shots and device errors <cit.>.
The initial steps of SC-ADAPT-VQE parallel those of ADAPT-VQE <cit.>, and can be summarized as:
1. Define a pool of operators {Ô} that respect the symmetries of the prepared state.
Scalability and phenomenological considerations are used to inform which operators are included in the pool.
2. Initialize a state |ψ_ ansatz⟩ with the quantum numbers of the target state |ψ_ target⟩.
3. Determine a quantity that measures the quality of the ansatz state.
For demonstration, consider the infidelity between the ansatz and target states, I = 1 - |⟨ψ_ target|ψ_ ansatz⟩| ^2.
4. For each operator in the pool Ô_i, determine the gradient of the infidelity between the target and evolved ansatz states, ∂ I/∂θ_i |_θ_i=0 = ∂/∂θ_i ( 1 - |⟨ψ_ target | e^i θ_i Ô_i |ψ_ ansatz⟩|^2 ) |_θ_i=0.
This is one way of ranking the relative impact of Ô_i on the infidelity.
5. Identify the operator Ô_n with the largest magnitude gradient.
Update the ansatz with the parameterized evolution of the operator |ψ_ ansatz⟩→ e^i θ_n Ô_n|ψ_ ansatz⟩.
6. Optimize the variational parameters to minimize the infidelity.
The previously optimized values for θ_1,...,n-1 and θ_n=0, are used as initial conditions.
7. Return to step 4 until the desired tolerance is achieved.
ADAPT-VQE returns an ordered sequence of unitary operators {Û_i } = {exp(i θ_i Ô_i) } that prepares the target state up to a desired tolerance.
For use on a quantum computer, the sequence of unitaries can be converted to a sequence of gates through, for example, Trotterization.
If this introduces Trotter errors, the unitaries in steps 4 and 5 should be replaced by their Trotterized versions, exp(i θ_i Ô_i) →∏_j Û_j^(i).
In SC-ADAPT-VQE, the previous steps are supplemented with the following,
8. Repeat ADAPT-VQE for a series of lattice volumes {L_1, L_2, …, L_N} using a classical computer (or a small partition of a quantum computer).
9. Extrapolate the sequence of unitary operators {{Û_i}_L_1, {Û_i}_L_2, …, {Û_i}_L_N} to the desired L.
This sequence is expected to converge for states with localized correlations.
L can be arbitrarily large and beyond what is accessible
using a classical computer.
The sequence of extrapolated unitaries {Û_i}_L can then be used to prepare the target state on a quantum computer.
This provides an explicit implementation of systematically-localizable <cit.> and fixed-point <cit.> quantum operators and circuits.
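Steps 4-7 amount to a short classical optimization loop. The following is a minimal statevector sketch of that loop using dense matrices, suitable only for the small volumes on which the circuits are classically optimized; the function names, the generic operator pool, and the use of scipy's BFGS optimizer are illustrative assumptions rather than the specific tooling used in this work.

import numpy as np
from scipy.linalg import expm
from scipy.optimize import minimize

def ansatz_state(psi0, ops, thetas):
    # Apply prod_k exp(i theta_k O_k) to psi0, earliest chosen operator first.
    psi = psi0.copy()
    for O, th in zip(ops, thetas):
        psi = expm(1j * th * O) @ psi
    return psi

def infidelity(psi_target, psi):
    return 1.0 - abs(np.vdot(psi_target, psi)) ** 2

def adapt_vqe(psi0, psi_target, pool, n_steps):
    ops, thetas = [], []
    for _ in range(n_steps):
        psi = ansatz_state(psi0, ops, thetas)
        ov = np.vdot(psi_target, psi)
        # Step 4: gradient of the infidelity at theta_i = 0 for each pool operator.
        grads = [2.0 * np.imag(np.conj(ov) * np.vdot(psi_target, O @ psi)) for O in pool]
        # Step 5: grow the ansatz with the largest-magnitude-gradient operator.
        ops.append(pool[int(np.argmax(np.abs(grads)))])
        thetas = thetas + [0.0]
        # Step 6: re-optimize all parameters, warm-started from the previous optimum.
        res = minimize(lambda th: infidelity(psi_target, ansatz_state(psi0, ops, th)),
                       np.array(thetas), method="BFGS")
        thetas = list(res.x)
    return ops, thetas

The returned operator ordering and angles are the objects that are then extrapolated in volume in steps 8 and 9.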
§.§ Hadron Wavepacket Preparation
SC-ADAPT-VQE can be used to prepare a state that has large overlap with an adiabatically prepared hadron wavepacket.
An alternative method for preparing wavepackets is discussed in App. <ref>.
In a lattice theory of interacting scalar fields, a complete procedure for preparing single particle wavepackets has been proposed by Jordan, Lee and Preskill <cit.>.[Other proposals for creating initial states and wavepackets can be found in Refs. <cit.>, including recent work on creating hadronic sources in the bosonized form of the Schwinger model using circuit-QED <cit.>.]
In their method, wavepackets are first prepared in free scalar field theory, and then the λϕ^4 interaction is adiabatically “turned on”.
This method runs into difficulty in the Schwinger model because the single particle states (hadrons) of the interacting theory are non-perturbatively different from the single particle states of the non-interacting theory (electrons).
To overcome this, consider starting in the interacting theory with m=0.5 and g=0.3, and adiabatically turning on the kinetic term.
The initial Hamiltonian is diagonal in the computational z-basis, and the ground state is the same as the infinite coupling (anti-ferromagnetic) vacuum |Ω_0 ⟩.
The infinite-coupling vacuum provides a suitable starting configuration upon which to build the wavepacket as it correctly encodes the long-distance correlations that characterize this confining theory.[The strong-coupling limit has been extensively studied, particularly in the context of lattice QCD. See, for example, Ref. <cit.> and references therein.]
On this vacuum, a hadron can be excited by creating an e^-e^+ pair on adjacent staggered sites.
By preparing a superposition of such hadrons at different locations, an arbitrary wavepacket can be prepared.
Here, the focus will be on preparing a localized hadron wavepacket that is centered in the middle of the lattice to preserve CP and minimize boundary effects.
A suitable initial state is,
|ψ_ WP⟩_init = X̂_L-1X̂_L|Ω_0⟩ .
To transition to a hadron wavepacket in the full theory, this state is taken through two steps of adiabatic evolution with a time-dependent Hamiltonian (illustrated in Fig. <ref>),
Ĥ_ad(t) =
Ĥ_m + Ĥ_el + (t/T_1) [ Ĥ_kin - 1/2 (σ^+_L-2 σ^-_L-1 + σ^+_L σ^-_L+1 + h.c. ) ]   for 0 < t ≤ T_1 ,
Ĥ_m + Ĥ_el + Ĥ_kin - ( 1 - (t-T_1)/T_2 ) 1/2 (σ^+_L-2 σ^-_L-1 + σ^+_L σ^-_L+1 + h.c. )   for T_1 < t ≤ T_1+T_2 .
For t∈ (0,T_1 ], the kinetic term is adiabatically turned on everywhere except for the links connecting the initial wavepacket to the rest of the lattice.
This mitigates spatial spreading of the initial wavepacket (see times t_a,b,c,d in Fig. <ref>).
Next, for t∈ (T_1,T_2 ], the remaining two links are adiabatically turned on.
These remaining links are spatially localized (act over a pair of staggered sites), and therefore primarily couple to high-momentum (energy) states.
This implies that the energy gap relevant for the adiabatic evolution is large, and the second evolution can be performed much faster than the first evolution.
There is a small amount of wavepacket spreading (times t_e,f), which is undone by evolving backwards in time for a duration T_B = T_2/2 with the full Hamiltonian Ĥ from Eq. (<ref>) (time t_g).
Explicitly, the hadron wavepacket is given by,
|ψ_WP⟩ = e^i T_B Ĥ  T e^-i ∫_0^T_1+T_2 dt Ĥ_ad(t) |ψ_WP⟩_init ,
where T denotes time-ordering.
For practical implementation, the evolution of the time-dependent Hamiltonian can be accomplished with Trotterization,
T e^-i ∫_0^T_1 + T_2dt Ĥ_ ad(t) ≈ T e^-i ∑_n=0^N_T-1δ t Ĥ_ ad [(n+0.5) δ t ] ≈ T∏_n=0^N_T-1e^-i (δ t ) Ĥ_ ad [(n+0.5) δ t ] ,
where N_T is the number of Trotter steps and δ t = T_1+T_2/N_T is the step size.
For the simulation parameters chosen in this work, we find that T_1 = 200, T_2=10 and δ t=0.2 are sufficient for adiabatic evolution.
The final state is localized (within a few sites), and primarily consists of single-hadron states (see overlaps in Fig. <ref>).
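For the small volumes where this preparation can be checked classically, the Trotterized adiabatic evolution and the final backwards evolution can be written directly with dense matrix exponentials. In the sketch below, H_ad is assumed to be a user-supplied function returning the matrix of Ĥ_ad(t), and H_full the matrix of the full Hamiltonian Ĥ; this is a brute-force numerical check, not the circuit implementation.

import numpy as np
from scipy.linalg import expm

def adiabatic_wavepacket(psi_init, H_ad, H_full, T1=200.0, T2=10.0, dt=0.2):
    # Trotterized evolution with the time-dependent H_ad, evaluated at step midpoints,
    # followed by backwards evolution with the full Hamiltonian for T_B = T2/2.
    n_steps = int(round((T1 + T2) / dt))
    psi = psi_init.copy()
    for n in range(n_steps):
        psi = expm(-1j * dt * H_ad((n + 0.5) * dt)) @ psi
    psi = expm(1j * (T2 / 2.0) * H_full) @ psi
    return psi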
In principle, this adiabatic procedure could be used to prepare a hadronic wavepacket on a quantum computer.
In practice, the required circuits are too deep to run on current devices.
To address this, SC-ADAPT-VQE is used to find low-depth circuits that prepare an approximation to the adiabatically determined wavepacket.
These low-depth circuits act on the vacuum, whose preparation was outlined in the previous section.
Scalability of the state preparation circuits is expected because the constructed wavepacket is localized away from the boundaries, and is built on top of a vacuum state that has converged exponentially in L to its infinite-volume form <cit.>.
As both the initial state (vacuum) and target state (single hadron) are CP even and charge zero, the operators in the pool must conserve charge and CP.
An operator pool that is found to produce a wavefunction that converges exponentially fast in circuit depth is,
{Ô}_WP = {Ô_mh(n,d), Ô_h(n,d), Ô_m(n) } ,
Ô_mh(n,d) = 1/2 [ X̂_L-n Ẑ^d-1 Ŷ_L-n+d - Ŷ_L-n Ẑ^d-1 X̂_L-n+d + (-1)^d+1 (1-δ_L-n,γ) ( X̂_γ Ẑ^d-1 Ŷ_γ+d - Ŷ_γ Ẑ^d-1 X̂_γ+d ) ] ,
Ô_h(n,d) = 1/2 [ X̂_L-n Ẑ^d-1 X̂_L-n+d + Ŷ_L-n Ẑ^d-1 Ŷ_L-n+d + (-1)^d+1 (1-δ_L-n,γ) ( X̂_γ Ẑ^d-1 X̂_γ+d + Ŷ_γ Ẑ^d-1 Ŷ_γ+d ) ] ,
Ô_m(n) = Ẑ_L-n - Ẑ_L-1+n ,
where γ=L-1+n-d, n∈{1,…,L }, and the (1-δ_L-n,γ ) coefficients prevent double counting operators that are already CP-symmetric.
The pool operators are inspired by the Hamiltonian, with Ô_m(n) being a mass-like operator, Ô_h(n,d) a generalized hopping operator spanning d staggered sites, and Ô_mh(n,d) being proportional to their commutator.
Note that unlike the operator pool used to prepare the vacuum, {Ô}_WP is not constrained by time reversal or translational symmetry, and the individual terms in each operator commute.
Thus there are no Trotter errors when the corresponding unitaries are converted to circuits.
The initial state for SC-ADAPT-VQE is chosen to be |ψ_ansatz⟩ = |ψ_vac⟩, as this correctly reproduces the vacuum outside of the support of the hadron wavepacket.
In this section, all calculations are performed with exact diagonalization, and the initial state is the exact vacuum.
In Secs. <ref> and <ref>, the initial state will be the SC-ADAPT-VQE prepared vacuum.
Using the exact vacuum instead of the SC-ADAPT-VQE vacuum prevents operators from being chosen that improve the vacuum but do not build out the local profile of the wavepacket.
The quality of the prepared state is determined by the infidelity of the ansatz state with the adiabatically prepared state from Eq. (<ref>),
I = 1 - |⟨ψ_WP|ψ_ansatz⟩|^2 .
Results obtained from performing the steps in SC-ADAPT-VQE (outlined in the introduction of Sec. <ref>) for L=7-14 are shown in Fig. <ref> and Table <ref>.[The vacuum maximizes the infidelity (has I=1) with the adiabatically determined state as there is no overlap between the vacuum and the single-hadron states that make up the wavepacket.
This presents a problem in step 4 of SC-ADAPT-VQE since ∂/∂θ_i I is zero for all operators in the pool.
To overcome this, for the first iteration of SC-ADAPT-VQE, the parameterized evolution of the ansatz with each operator is determined separately.
The operator that minimizes the infidelity is chosen for the first operator in the SC-ADAPT-VQE ansatz.]
Up to the tolerance of the optimizer, the variational parameters have converged in L, and therefore the L=14 parameters and operator ordering can be used
to prepare a hadron wavepacket for any L>14.
Initially, short-range operators localized around the center of the wavepacket are
selected by SC-ADAPT-VQE.[It is interesting to note the similarities between this wavepacket construction, and the construction of hadronic sources and sinks in Euclidean-space lattice QCD calculation.
Here, the initial interpolating operator for the hadronic wavepacket is being “dressed” by an increasing number of operators with exponentially improving precision.
In Euclidean-space lattice QCD, a matrix of correlation functions between a set of sources and sinks is diagonalized to provide a set of correlators with extended plateaus toward shorter times, corresponding to the lowest-lying levels in the spectrum that have overlap with the operator set.
This “variational method”, e.g., Refs. <cit.>, provides upper bounds to the energies of the states in the spectrum.
The sources and sinks for hadrons are operators constructed in terms of quark and gluon fields, and correlation functions are formed by contracting field operators of the sinks with those of the sources (or with themselves when both quark and anti-quark operators are present).
This becomes computationally challenging with increasingly complex operator structures, as required, for instance, to study nuclei, see for example Refs. <cit.>.]
This is as expected for a wavepacket composed of single hadron states with short correlation lengths, that is approximately a delta function in position space.[The variational parameters change sign between even- and odd-values of L if d is odd (even) in Ô_mh (Ô_h).
Also, note that Ô_m is not chosen until after step 10 in the SC-ADAPT-VQE ansatz.]
The convergence of the infidelity is found to be exponential in the
step of the algorithm (circuit depth), and independent of L.
This is in agreement with previous discussions on localized states being built on top of an exponentially converged vacuum.
Note that the convergence in L is smoother for the SC-ADAPT-VQE wavepacket than for the vacuum as the boundary effects are much smaller (see Fig. 5 in Ref. <cit.>).
Two steps of SC-ADAPT-VQE reaches an infidelity of 0.05, and will be used in the remainder of the work to prepare the wavepacket.
§ QUANTUM CIRCUITS
In this section, the quantum circuits that prepare hadron wavepackets and implement time evolution are developed.
These circuits are constructed to minimize CNOT count and circuit depth in order to reduce the effects of device errors.
In addition, with the goal of running on IBM's quantum computers, the circuits are optimized for nearest-neighbor connectivity.
These circuits are verified using the qiskit classical simulator, and the systematic errors arising from the approximations used in this work are quantified.
§.§ Quantum Circuits for Vacuum and Hadron Wavepacket Preparation
In order to prepare the SC-ADAPT-VQE vacuum on a quantum computer, the circuits presented in the previous chapter can be used.
The circuit building technique follows the strategy of Ref. <cit.>, where an “X”-shaped construction is used to minimize circuit depth and CNOT gate count.
Preparing the SC-ADAPT-VQE hadron wavepacket requires converting the exponential of the pool operators in Eq. (<ref>) to sequences of gates.
The individual terms in each operator in the wavepacket pool commute, and therefore first-order Trotterization is exact.
The corresponding circuits extend those used for preparing the vacuum, and are shown in Figs. <ref> and <ref> for the
2-step SC-ADAPT-VQE wavepacket used in subsequent sections
(see App. <ref> for the 10-step SC-ADAPT-VQE circuits).
These circuits are arranged to maximize cancellations between CNOTs, and minimize the circuit depth.
§.§ Quantum Circuits for Time Evolution
To perform time evolution, a second-order Trotterization of the time-evolution operator with the λ=1 truncated electric interaction will be used,
Û^(Trot)_2(t) = e^-i t/2Ĥ_kin-1 e^-i t/2Ĥ_kin-0 e^-i t Ĥ_m e^-i t Ĥ_el^(Q=0)(1) e^-i t/2Ĥ_kin-0 e^-i t/2Ĥ_kin-1 ,
where Ĥ_kin-0 (Ĥ_kin-1) are the hopping terms between even (odd) staggered sites.
This ordering was chosen to maximize the cancellations between neighboring CNOTs.
A second-order Trotterization is used as it provides a good balance between minimizing both circuit depth and Trotter errors.
In addition, the property of second-order Trotterization Û^(Trot)_2(t) Û^(Trot)_2(-t) = 1̂ enables a powerful error-mitigation technique <cit.>, see Sec. <ref>.
The Trotterization of Ĥ_m only involves single qubit Ẑ rotations, which has a straightforward circuit implementation.
The Trotterization of the kinetic terms uses the right circuit in Fig. <ref> arranged in a brickwall pattern to minimize circuit depth, and requires 4(2L-1) CNOTs per second-order Trotter step.
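To illustrate how a hopping layer translates into gates, the sketch below builds the even- and odd-bond sublayers exp(-i t Ĥ_kin-0) and exp(-i t Ĥ_kin-1) in qiskit, assuming the convention Ĥ_kin-0(1) = 1/4 ∑_{even (odd) n} (X̂_n X̂_n+1 + Ŷ_n Ŷ_n+1); it uses generic RXX/RYY rotations and does not reproduce the CNOT-optimized brickwall circuits described above.

from qiskit import QuantumCircuit

def kinetic_sublayers(n_qubits, t):
    # exp(-i t H_kin-0) followed by exp(-i t H_kin-1), one RXX and one RYY per bond.
    # RXX(theta) = exp(-i theta XX / 2), so theta = t/2 reproduces exp(-i (t/4)(XX+YY)) on each bond.
    qc = QuantumCircuit(n_qubits)
    for parity in (0, 1):
        for n in range(parity, n_qubits - 1, 2):
            qc.rxx(t / 2.0, n, n + 1)
            qc.ryy(t / 2.0, n, n + 1)
    return qc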
The Trotterization of Ĥ_el^(Q=0)(1) in Eq. (<ref>) requires nearest-neighbor, next-to-nearest-neighbor and next-to-next-to-nearest-neighbor entangling R_ZZ= e^-i θ/2ẐẐ operations acting between qubits on adjacent spatial sites.
Organizing into blocks of adjacent spatial sites, the problem is to find a nearest-neighbor CNOT decomposition for R_ZZs between all pairs of N_q=4 qubits.
Generalizing to any N_q≥3, a strategy for constructing these circuits, depicted in Fig. <ref>, is
* Group all the rotations that share the top qubit.
* For each block of grouped rotations,
use the bridge decomposition to convert the long-range CNOTs into nearest neighbor ones. Simplify the CNOTs within each block.
* Simplify the CNOTs from neighboring blocks.
These circuits have a total number of CNOTs N and circuit depth D given by,
N = 2 (N_q choose 2) = N_q(N_q-1) , D = N_q(N_q-2)+3 .
Compared to the circuits before the nearest-neighbor decomposition (e.g., using the circuits in step 1.), this does not introduce any additional CNOTs, but has a depth that scales as O(N_q^2) compared to O(N_q).
The N_q = 4 circuit used for the λ=1 interaction contributes 12(L-2) CNOTs per second-order Trotter step.
Circuits implementing a full second-order Trotter steps are shown in Fig. <ref>.
Taking into account the CNOT cancellations between the electric and kinetic terms, as well as between adjacent Trotter steps, the total number of CNOTs required is
# of CNOTs for N_T 2^nd order Trotter steps with λ=1 : 19L-28+(17L-26)(N_T-1) .
For L=56, this is 926 CNOTs per additional second-order Trotter step, comparable to the 890 CNOTs required for the 2-step SC-ADAPT-VQE vacuum and hadron wavepacket preparation.
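A small helper that tabulates this count may be useful when budgeting circuit depth; the formula is taken verbatim from the expression above.

def trotter_cnots(L, n_steps):
    # Total CNOTs for n_steps second-order Trotter steps with the lambda = 1 interaction.
    return 19 * L - 28 + (17 * L - 26) * (n_steps - 1)

For L = 56, each additional step indeed adds 17*56 - 26 = 926 CNOTs.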
§ QUANTIFYING THE SYSTEMATIC ERRORS OF THE APPROXIMATIONS
The systematic errors that are introduced by the approximations we have employed can be analyzed and quantified by performing end-to-end classical simulations using qiskit.
The approximations are:
* The vacuum is prepared using the 2-step SC-ADAPT-VQE circuits.
This furnishes an infidelity density of I_L = I/L = 0.01 with the exact vacuum.[The infidelity density I_L is a relevant measure for the vacuum as the state is being established across the whole lattice, whereas the infidelity is a relevant figure of merit for the (localized) hadron wavepacket.]
* A hadron wavepacket is prepared using the 2-step SC-ADAPT-VQE circuits.
This furnishes an infidelity of I = 0.05 with an adiabatically prepared wavepacket.
* A Hamiltonian with the electric interactions truncated beyond λ = 1 spatial sites is used to evolve the prepared wavepacket forward in time.
* The time-evolution operator is implemented in quantum circuits using a second-order Trotterization.
This section will focus on a system size of L=12, where the classical simulations can be performed exactly.
The circuit structure and variational parameters for the
2-step SC-ADAPT-VQE vacuum and wavepacket preparation are given in Table <ref>.
Note that the (2-step) wavepacket parameters differ slightly from those in Table <ref>, which are for the 10-step SC-ADAPT-VQE ansatz.
To identify the propagation of hadrons, we choose to measure the local chiral condensate,
χ̂_j = (-1)^j Ẑ_j + Î ,
with eigenvalues of 0 (staggered site j is empty) and 2 (staggered site j is occupied by a fermion).
It is useful to define the expectation value of the local chiral condensate relative to its vacuum expectation value,
X_j(t)
= ⟨ψ_WP| χ̂_j(t) |ψ_WP⟩ - ⟨ψ_vac| χ̂_j(t) |ψ_vac⟩ .
Here, χ̂_j(t) is the time evolved observable; with exact exponentiation of the full Hamiltonian this would be χ̂_j(t) = e^i t Ĥχ̂_j e^-i t Ĥ.
When using a truncated interaction and/or Trotterization, the time-evolution operator changes.
The states |ψ_vac⟩ and |ψ_WP⟩ represent the prepared vacuum and wavepacket, either exact or using the SC-ADAPT-VQE approximation.
The subtraction of the vacuum expectation value is also time dependent because, for example, the SC-ADAPT-VQE prepared vacuum is not an eigenstate of the truncated Hamiltonian.
This time-dependent subtraction removes systematic errors that are present in both the wavepacket and vacuum time evolution.
It also proves to be an effective way to mitigate some effects of device errors, see Sec. <ref>.
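In terms of the measured single-qubit expectation values, the identity parts of χ̂_j cancel in the subtraction, so X_j(t) reduces to a signed difference of ⟨Ẑ_j⟩ between the two evolutions; a minimal helper, with z_wp and z_vac hypothetical arrays of measured ⟨Ẑ_j(t)⟩ over the 2L staggered sites:

import numpy as np

def vacuum_subtracted_condensate(z_wp, z_vac):
    # X_j(t) = <chi_j>_WP - <chi_j>_vac = (-1)^j ( <Z_j>_WP - <Z_j>_vac ).
    j = np.arange(len(z_wp))
    return (-1.0) ** j * (np.asarray(z_wp) - np.asarray(z_vac))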
Results obtained for the time evolved chiral condensate are shown in Fig. <ref> with four different levels of approximation.
Small errors are introduced with each approximation, but the results are found to recover expectations
within the uncertainties of the approximations.
Panel (iv) in Fig. <ref> shows the time-evolution operator approximated with 2⌈t/2⌉ second-order Trotter steps, giving a maximum step size of δ t = 1.
These step sizes introduce minimal (Trotter) errors, and will be used
for the time evolution using a digital quantum computer presented in the next section.
The propagation of hadrons outward from an initially localized wavepacket is clearly identified in deviations of the local chiral condensate from its vacuum expectation value.
The oscillations of the condensate at the center of the wavepacket are consistent with expectations, and are discussed further in App. <ref>.
Due to the symmetry of the initial state, the hadron has equal amplitude to propagate in either direction, with a profile that is bounded by the speed of light (1 staggered site per unit time).
The (composite) hadrons that make up the wavepacket are
(bosonic) vector particles, and some features of the hadron dynamics can be qualitatively understood in the simpler setting of non-interacting 1+1D scalar field theory.
In particular, the light-cone structure of propagating hadrons, the damped oscillations at the origin of the wavepacket and the effects of OBCs in both theories are similar.
This is treated in detail in App. <ref>, where the (textbook) example of a localized classical source coupled to a scalar field in 1+1D is treated in the continuum and on the lattice, and in App. <ref>, where OBCs are compared to periodic boundary conditions (PBCs).
§ REAL-TIME SIMULATIONS USING IBM'S DIGITAL QUANTUM COMPUTERS
The end-to-end simulations performed in the previous section using qiskit and classical computers are scaled up to L=56 (112 qubits)
and executed on IBM's 133-qubit ibm_torino Heron processor.
The scalability of the SC-ADAPT-VQE vacuum preparation circuits was demonstrated in Ref. <cit.>, where it was shown that the variational parameters are reproduced well by an exponential in L.
This enables the extrapolation of the state preparation circuits, determined for L≤ 14, to arbitrarily large L.
In principle, a similar exponential convergence of parameters for the hadronic wavepacket preparation circuits is expected.
However, as shown in Sec. <ref>, the operator ordering and variational parameters of the SC-ADAPT-VQE wavepacket preparation have converged up to the tolerance of the optimizer by L=14.
Therefore, the circuit structure and parameters determined for L=14 can be used to initialize the L=56 hadron wavepacket.
The operator ordering and parameters used to prepare the
2-step SC-ADAPT-VQE vacuum and 2-step SC-ADAPT-VQE hadron wavepacket
for L=56 are given in Table <ref>.
Error mitigation is essential for successful simulations utilizing large quantum volumes <cit.>.
Here, our error mitigation methods are outlined, and a more detailed discussion can be found in App. <ref>.
Through cloud-access, the circuits are sent to ibm_torino using the qiskit sampler primitive, which includes both dynamical decoupling <cit.> and M3 measurement mitigation <cit.>.
To mitigate coherent two-qubit gate errors, Pauli twirling <cit.> is used on the native two-qubit gates, control-Z for ibm_torino.
After twirling, we assume that the coherent two-qubit gate errors are transformed into statistically independent and unbiased incoherent errors, which can be modeled by a Pauli noise channel.
Observables are then estimated using Operator Decoherence Renormalization (ODR) <cit.>, which extends decoherence renormalization <cit.> to large systems.[Instead of setting the single-qubit rotations to zero in the mitigation circuits <cit.>,
they could be replaced by Clifford gates <cit.>.
]
To implement ODR, two kinds of circuits are run on the device: a “physics” circuit, and a “mitigation” circuit.
For a simulation of wavepacket dynamics, the physics circuit implements the time evolution of either the wavepacket or the vacuum (to compute X_j(t) in Eq. (<ref>)).
The mitigation circuit(s), with a priori known error-free (predicted) results,
and the physics circuits have similar structures and similar error profiles.
From the mitigation circuits, deviations of measured observables ⟨Ô⟩_meas from their predicted values ⟨Ô⟩_pred are used to compute the depolarizing noise parameters,
η_O = 1 - ⟨Ô⟩_meas/⟨Ô⟩_pred .
These η_O are used to estimate the expectation values from the physics circuits (using the same relation).
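Concretely, ODR is a per-observable rescaling of the physics measurement by the ratio of predicted to measured values obtained from the mitigation circuit; a minimal sketch (assuming the mitigation measurement has not fully decohered to zero):

def odr_rescale(o_meas_phys, o_meas_mit, o_pred_mit):
    # Infer the depolarization from the mitigation circuit, eta = 1 - meas/pred,
    # and undo it on the physics circuit: meas_phys / (1 - eta) = meas_phys * pred_mit / meas_mit.
    eta = 1.0 - o_meas_mit / o_pred_mit
    return o_meas_phys / (1.0 - eta)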
For wavepacket (vacuum) time evolution, we choose a mitigation circuit that creates the wavepacket (vacuum), time evolves with half of the Trotter steps until t/2 and then evolves for -t/2 with the remaining Trotter steps <cit.>.
This forwards-backwards time evolution corresponds to the identity operator in the absence of device errors, and restricts our simulations to an even number of Trotter steps.
To determine the η_O, the prediction of a desired observable from the mitigation circuit must be known.
In our case, this requires classically computing ⟨χ̂_j ⟩ in both the SC-ADAPT-VQE vacuum and wavepacket.
This can be accomplished even for large systems using the qiskit or cuQuantum MPS simulator, as was demonstrated in Ref. <cit.> for the SC-ADAPT-VQE vacuum up to L=500.
Interestingly, our numerical calculations highlight that it is the time evolution, and not the state preparation, that is difficult for classical MPS techniques.
We implement time evolution for t={1,2,…,14} with 2⌈t/2⌉ second-order Trotter steps (a maximum step size of δ t = 1).
As shown in the previous section, this step size does not introduce significant Trotter errors.
The number of CNOTs and corresponding CNOT depth for each simulation time are given in Table <ref>, and range from 2,746 CNOTs (depth 70) for 2 Trotter steps to 13,858 CNOTs (depth 370) for 14 Trotter steps.
The results for X_j(t) obtained from ibm_torino and the MPS simulator are shown in Fig. <ref>, with a breakdown of each t given in Fig. <ref> (the separate evolutions of the wavepacket and vacuum are shown in Fig. <ref>).
For each time, four circuits are run: time evolution of the wavepacket, time evolution of the vacuum, forward-backward time evolution of the wavepacket and forward-backward time evolution of the vacuum.
For t=1-8, 480 twirled instances of each circuit are run, and for t=9-14, 160 twirled instances are run.
Each twirled instance has 8,000 shots, using a total of ∼ 1.5 × 10^8 shots for the complete production.
We have estimated the uncertainties in the results from the quantum computer using bootstrap-mean resampling.[Due to the noisy nature of the device, the utility of the Hodges-Lehmann (HL) estimator was studied, and consistent results were obtained.
The HL estimator has been considered in lattice QCD studies to mitigate the impact of outliers in nuclear correlation functions <cit.>.]
The expected results are determined by using the cuQuantum MPS simulator with maximum bond dimension 200.
The run time and convergence of the MPS simulations are discussed in App. <ref>.
The individual time evolutions of the wavepacket and vacuum, used to compute X_j(t), are shown in Fig. <ref> of App. <ref>.
A systematic error in the chiral condensate away from the center of the lattice is seen to increase with simulation time.
Fortunately, it is similar for the wavepacket and vacuum evolution, and largely cancels in the subtraction to form X_j(t), as shown in Fig. <ref>.
The origin of this systematic error is currently unknown to us, and either stems from a deficiency in our error-mitigation techniques, or from insufficient convergence in the MPS simulations.
Without the approximations in the state preparation and time evolution, the chiral condensate would not evolve in regions that are locally the vacuum.
This qualitatively holds for smaller systems with L≤ 14 that can be simulated exactly.
The results from the quantum computer agree with these expectations, showing little evolution of the chiral condensate in the vacuum (right column of Fig. <ref>).
The MPS simulations, on the other hand, show significant evolution of the vacuum chiral condensate.
For the range of maximum bond dimensions we have been able to explore, it appears that the chiral condensate has converged at the level of 10^-2 for late times.
However, these results are not exact, and at this point we cannot rule out systematic errors being present in the MPS simulations.
From preliminary investigations, it appears that the vacuum evolution is due to λ=1 being too small for exponential convergence.
This is not surprising since the relevant ratio for exponential convergence is ∝λ/ξ, with possibly a prefactor proportional to, for example, 2π.
However, the maximum bond dimension required for convergence becomes significantly larger with increasing λ, and it is unclear if this conclusion is consistent.
A future detailed study of the effects of increasing the precision of the state preparation, increasing λ, and increasing the number of Trotter steps will be needed to determine if this discrepancy is due to errors in the MPS simulation or from imperfect error mitigation.
The results shown in Figs. <ref> and <ref> demonstrate that, by implementing a series of exponentially convergent approximations (beyond Trotterization), wavepackets of hadrons can be prepared and evolved forward in time with available quantum computers.
Propagating hadrons are clearly identified as a disturbance in the chiral condensate, with random fluctuations due to device errors outside of the hadron's light-cone.
It should be emphasized that obtaining X_n(t) = 0 outside of the light-cone using IBM's device is a non-trivial result, as it requires cancellations between the wavepacket and vacuum evolutions.
The simulations performed using ibm_torino show qualitative agreement with classical MPS results, but degrade with increasing number of Trotter steps (circuit depth).
The simulations highlight that device errors dominate over the systematic errors due to approximate state preparation and time evolution.
The results qualitatively recover expectations, but often differ by many standard deviations from classical expectations, indicating that we do not have a complete quantification of uncertainties.
This is not surprising given the simplicity and limitations of the assumed error model.
Despite the device errors, it is clear that current hardware is capable of creating and possibly colliding (composite) hadrons over a meaningful time interval.
Such simulations could provide first glimpses of inelastic hadron scattering and fragmentation in the Schwinger model that are beyond present capabilities of classical computing.
§ SUMMARY AND OUTLOOK
Quantum computing offers the potential of reliably simulating the collisions of high-energy hadrons and nuclei directly from quantum chromodynamics, the quantum field theory describing the strong interactions.
First steps are being taken to develop scalable techniques and algorithms for QCD simulations by working with the Schwinger model defined in 1+1D.
Towards these goals, this work develops
protocols for quantum simulations of hadron dynamics that are demonstrated on a L=56 (112 qubit) lattice using IBM's superconducting-qubit digital quantum computer, ibm_torino.
These simulations start with establishing a wavepacket of hadrons in the center of the lattice on top of the vacuum.
The necessary quantum circuits for the creation of this wavepacket are determined using the SC-ADAPT-VQE algorithm that was recently introduced by the authors in Ref. <cit.>.
In SC-ADAPT-VQE, low-depth circuits for state preparation are determined on a series of small lattices using classical computers, and then systematically scaled up to prepare states on a quantum computer.
For the present purposes, the SC-ADAPT-VQE circuits are variationally optimized to have maximal overlap with an adiabatically prepared hadron wavepacket.
The vacuum and hadronic wavepacket that are initialized on the quantum computer are then time evolved using a second-order Trotterization of the time evolution operator.
Naively, the electric interaction between fermions is all-to-all, giving rise to a prohibitive O(L^2) scaling in the number of two-qubit gates needed for time evolution.
Motivated by confinement, an approximation that truncates the electric interaction between distant charges is introduced.
This interaction converges exponentially with increasing interaction distance, and improves the scaling of the number of two-qubit gates required for time evolution to O(λ L), where λ is proportional to the confinement length scale.
These new methods for state preparation are verified on small systems using a classical simulator, and then applied to time evolve hadron wavepackets on a L=56 (112 qubit) lattice using ibm_torino.
Our digital quantum simulations utilize some of the largest quantum volumes to date <cit.>, with up to 13,858 two-qubit entangling gates applied (CNOT depth of 370).
A large number of shots with which to implement the error mitigation techniques is found to be essential to the success of our simulations. Our results show clear signatures of hadron propagation through modifications of the local chiral condensate.
Real-time dynamics typically explore highly-entangled regions of Hilbert space and, as a result, classical methods scale unfavorably with simulation time t, lattice volume L, and energy.
To explore this in more detail, our quantum simulations have been compared to classical MPS circuit simulations using qiskit and cuQuantum.
We have found that our initial state preparation circuits can be simulated relatively easily with these simulators.
However, the bond dimension needed for proper convergence grows rapidly as more steps of Trotterized time evolution are added to the quantum circuit.
All of this points to a potential near-term quantum advantage for the simulation of hadronic dynamics.
In particular, it is likely that the simulation of high-energy hadronic collisions will exceed the capabilities of classical computing for simulation times and volumes that are not excessively large.
Exactly where such a quantum advantage can be realized remains to be established.
On this path, future work will use the hadron wavepacket preparation and time evolution circuits that we have presented here to simulate hadron scattering.
Evolving out to later times will require time-evolution methods that improve upon Trotterization.
A promising direction is to use SC-ADAPT-VQE to find low-depth circuits for simulating over the early times.
The light-cone restricts early-time dynamics to only a modest number of qubits, and scalable low-depth circuits can likely be found with classical computing.
Another direction worth pursuing is to approach the continuum by taking m and g smaller, increasing the correlation length.
These longer correlation lengths will require deeper state-preparation circuits and larger truncations of the electric interaction to reach a target simulation quality.
Further into the future, improved methods for hadron detection will also be needed.
Finally, it will be necessary to extend these techniques to non-Abelian gauge theories and higher dimensions to perform more realistic simulations of QCD.
§ THE CLASSICAL DYNAMICS OF A SOURCED NON-INTERACTING SCALAR FIELD
The spectrum of the Schwinger model consists of composite hadrons due to confinement.
Unlike the underlying electron and positron degrees of freedom, which are fermions, the hadrons are bosonic scalar and vector particles.
Important features of the hadronic dynamics simulated in this work can be understood in the simpler setting of a non-interacting scalar field evolving from a localized source.
The framework for the latter is well known, and can be found in quantum field theory textbooks, for example, Ref. <cit.>.
The spatial and temporal extents of the hadron wavepacket (in the Schwinger model) that we work with are approximately determined by the correlation length, ξ, and we model this by a Gaussian source for the scalar field (describing the Schwinger model vector hadron). The Klein-Gordon equation in the presence of a classical source,
( ∂^μ∂_μ + m^2 )ϕ(x,t) = j(x,t)
,
is solved in 1+1D in infinite volume and with vanishing lattice spacing by
ϕ_j(x,t) = ϕ_j=0(x,t) + i ∫ dy dt^' G_R(x-y, t-t^') j(y,t^') ,
where G_R(a,b) is the retarded Green's function and ϕ_j=0(x,t) is the field in the absence of the source.
The effective source we consider is
j(x,t) = J_0 √(α/π) e^-α x^2 δ(t) , ∫ dx dt j(x,t) = J_0 .
After writing the propagator in momentum space, and using the spatial symmetry of the source, the field in the presence of the source is given by
ϕ_j(x,t) = ϕ_j=0(x,t) + 2 J_0 ∫_0^∞ dp/(2π) e^-p^2/(4α)/ω_p cos(p x) sin(ω_p t) ,
where ω_p = √(p^2+m^2).
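The momentum integral above is straightforward to evaluate numerically; a minimal sketch (infinite volume and continuum, with the source-free piece ϕ_j=0 set to zero) that can be used to generate profiles like those discussed below:

import numpy as np

def sourced_field(x, t, m=0.1, alpha=1.0, J0=1.0, p_max=40.0, n_p=4000):
    # 2 J0 * int_0^infty dp/(2 pi) exp(-p^2/(4 alpha)) / omega_p * cos(p x) * sin(omega_p t)
    p = np.linspace(1e-8, p_max, n_p)
    omega = np.sqrt(p ** 2 + m ** 2)
    integrand = np.exp(-p ** 2 / (4.0 * alpha)) / omega * np.cos(p * x) * np.sin(omega * t)
    return 2.0 * J0 * np.trapz(integrand, p) / (2.0 * np.pi)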
In a finite volume with a discrete set of uniformly spaced lattice points, it is straightforward to derive the appropriate analogous relation.
Spatial integrals are replaced by a discrete sum over the finite number lattice sites, and momentum integrals are replaced by sums over momentum modes within the first Brillouin zone (the exact set of modes are determined by the selected boundary conditions imposed on the field).
Figure <ref> shows the downstream field in spacetime from the source given in Eq. (<ref>), with parameters m=0.1 and α=J_0=1.
The light cone at x=t is clear, with the field decaying exponentially beyond these lines.
Importantly, the field near the origin is seen to “ring down”, continuing to emit particles until the initially localized energy density is dispersed via particle production.
The total energy injected into the field by the source is
⟨Ĥ⟩ = √(α/8 π) J_0^2 ,
where Ĥ is the free Hamiltonian without the source, and the energy of the vacuum has been set to zero.
The probability of creating a particle in the |p⟩ momentum state, Prob(|p⟩), and the expectation value of the total number of particles produced in such an event, N_ϕ, are
Prob(|p⟩) = J_0^2/(2 ω_p) e^-p^2/(2α) ,
N_ϕ = J_0^2/(4π) e^m^2/(4α) K_0(m^2/(4α)) ,
with K_0 being the modified Bessel function of the second kind of order zero.
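These closed-form expressions are simple to evaluate; for example, using scipy's modified Bessel function (the parenthesization of the exponents follows the expressions above):

import numpy as np
from scipy.special import kv

def source_energy(J0, alpha):
    # <H> = sqrt(alpha / (8 pi)) * J0^2
    return np.sqrt(alpha / (8.0 * np.pi)) * J0 ** 2

def mean_particle_number(J0, alpha, m):
    # N_phi = J0^2/(4 pi) * exp(z) * K_0(z), with z = m^2/(4 alpha)
    z = m ** 2 / (4.0 * alpha)
    return J0 ** 2 / (4.0 * np.pi) * np.exp(z) * kv(0, z)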
§ ASPECTS OF OPEN BOUNDARY CONDITIONS
Ideally, quantum simulations of lattice field theories would utilize periodic boundary conditions (PBCs) in order to maintain the translation invariance of free space (in the continuum limit).
However, without connectivity between the initial and final lattice sites, as is the case in some quantum computers, simulations can be performed with open boundary conditions (OBCs).
In this appendix, we demonstrate some key features of OBCs in the context of scalar field theory, and make connections to the Schwinger model.
The Hamiltonian describing non-interacting lattice scalar field theory with continuous fields at each lattice site and with OBCs is given by
Ĥ_ lsft =
1/2∑_j=0^L-1Π̂_j^2
+ 1/2∑_j=0^L-1 m_0^2 ϕ̂_j^2
- 1/2∑_j=1^L-2 ϕ̂_j (ϕ̂_j+1 + ϕ̂_j-1 - 2ϕ̂_j)
=
1/2Π̂^2
+ 1/2Φ^T [
m_0^2 Î + G] Φ ,
where
G =
(
[ 2 -1 0 0 ⋯ 0; -1 2 -1 0 ⋯ 0; 0 -1 2 -1 ⋯ 0; ⋮ ⋮; 0 0 0 0 ⋯ -1; 0 0 0 0 ⋯ 2; ])
, Φ^T = (ϕ_0, ϕ_1, ⋯ ,ϕ_L-1) ,
and where Π̂ is the conjugate-momentum operator.
The only difference between this expression and that for PBCs is the absence of terms in the extreme anti-diagonal entries in G, which renders the matrix non-circulant, reflecting the lack of discrete translational invariance.
An orthogonal transformation can be applied to the fields to diagonalize the Hamiltonian matrix,
Φ = V Ψ , Ĥ = 1/2 Π̂^2 + 1/2 Ψ^T Ω^2 Ψ ,
where Ω is an L× L diagonal matrix with eigenvalues ω_i.
Therefore, the L towers of single-particle energy eigenvalues of these systems are
E_i = (n_i + 1/2) ω_i ,
where n_i are the number of bosons with energy ω_i, with a vacuum energy that is the sum of zero-point energies,
E_ vac = 1/2∑_i ω_i .
§.§ OBCs and PBCs for L=4
It is instructive to consider the similarities and differences
between OBCs and PBCs for non-interacting scalar field theory on L=4 lattice sites.
It is well known from the structure of the Hamiltonian in Eq. (<ref>) that this system (and others like it) can be diagonalized by the eigenvectors of G, which are hence independent of the mass and conjugate momentum (as these are both local operators).
For OBCs, the ω_i are
ω_i =
{√(m_0^2 + 1/2(3-√(5))) ,
√(m_0^2 + 1/2(5-√(5))) ,
√(m_0^2 + 1/2(3+√(5))) ,
√(m_0^2 + 1/2(5+√(5)))}
=
{√(m_0^2 + 1/2(3-√(5))) ,
√(m_0^2 + 2 + 1/2(1-√(5))) ,
√(m_0^2 + 2 - 1/2(1-√(5))) ,
√(m_0^2 + 4 - 1/2(3-√(5)))}
=
{√(m_0^2 + 0.3819) ,
√(m_0^2 + 1.3819) ,
√(m_0^2 + 2.6180) ,
√(m_0^2 + 3.6180)} ,
which are to be compared with those from PBCs,
ω_i =
{ m_0 ,
√(m_0^2 + 4sin^2 π/4) ,
√(m_0^2 + 4sin^2 π/4) ,
√(m_0^2 + 4sin^2 π/2)}
=
{ m_0 ,
√(m_0^2 + 2) ,
√(m_0^2 + 2) ,
√(m_0^2 + 4)} .
The kinetic contributions to the energies in Eq. (<ref>) correspond to “momentum modes” with k=n π/5 with n={1,2,3,4}, and generalizes to k=n π/(L+1) with n={1,…, L}.[A more direct comparison between
Eq. (<ref>) and Eq. (<ref>) can be made using relations such as 4 sin^2 π/10 = (3-√(5))/2.]
The energies of the OBC states are split around the energies of the PBC states, with the lowest raised and the highest lowered.
This splits the degeneracies of the left- and right-moving momentum eigenstates associated with PBCs.
These features extend to larger values of L, with the splittings reducing with increasing L.
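These OBC frequencies can be reproduced by diagonalizing the tridiagonal matrix G directly; a short numpy check (the eigenvalues of G are 4 sin^2[nπ/(2(L+1))], which for L=4 gives 0.3819, 1.3819, 2.6180, 3.6180, matching the values quoted above):

import numpy as np

def obc_frequencies(L, m0):
    # Single-particle energies omega_i for lattice scalar field theory with OBCs.
    G = 2 * np.eye(L) - np.eye(L, k=1) - np.eye(L, k=-1)
    g_eigs = np.sort(np.linalg.eigvalsh(G))
    return np.sqrt(m0 ** 2 + g_eigs)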
The eigenstates can all be made real by global phase rotations, and identification of these states with the associated states with PBCs can be made by forming linear combinations of the degenerate PBC states.
Figure <ref> shows the eigenstates for PBCs and OBCs.
Even for L=4, the difference between the eigenstates is not large, and diminishes with increasing L.
§.§ Matching the Schwinger Model to Non-Interacting Scalar Field Theory for L=8 and L=14 with OBCs
In large enough spatial volumes,
it is expected that the low-lying continuum states of the Schwinger model will be approximately recovered by an effective field theory (EFT) of scalar and vector particles <cit.>.
To explore this more with OBC simulations, the mass of the scalar particle needs to be determined from the spectrum of the Schwinger model.
As the energies of the states of the scalar field depend in a non-trivial way on the mass of the scalar particle, this is accomplished numerically.
In the Schwinger model, fermions are discretized on a lattice with 2L staggered sites, corresponding to L spatial sites.
To match to the spectrum of lattice scalar field theory, a conversion must be performed to switch from units of staggered lattice spacing, a_st, to units of spatial lattice spacing, a_sp.
A dimensionless energy, Δ_st, in the Schwinger model is related to a physical energy by Δ_st = a_st E /(ħ c), where ħ c=197.32 MeV fm, and E is an energy in units of MeV.
The corresponding quantity in terms of the spatial lattice spacing is Δ_sp = a_sp E /(ħ c) = 2 a_st E /(ħ c) = 2 Δ_st.
Exact diagonalization of the Schwinger model Hamiltonian with parameters m=0.5,g=0.3,L=8 gives a gap to the first excited state (vector hadron mass) of a_st E_1 = a_st m_hadron = 1.15334.
In the L=8 non-interacting scalar field theory, this corresponds to an excitation of a_sp ω_1= 2 m_hadron = 2.30668.
Fitting the bare scalar field mass m_0 to this value gives m_0^( fit) = 2.28039 in spatial lattice units, which can be then used to predict higher-lying states in the Schwinger model spectrum.
Converting back to the staggered lattice spacing gives the values of a_stω_i to be compared with the exact results from the Schwinger model, a_stE_i, shown in Table <ref>.
Each of the energies a_st ω_i can be identified with an energy in the Schwinger model, within ∼ 2%, indicating that the low-lying spectrum is largely from the motion of a single hadron on the lattice.
We assume that the two states that do not correspond to states in the scalar theory result from internal excitations of the single particle state in the Schwinger model.
This analysis can be repeated for L=14 where it is found that a_st E_1 = a_st m_hadron = 1.1452 and m_0^( fit) = 2.28096 (spatial lattice units).
These quantities are very similar to the L=8 ones, as expected since m_hadron≪ L and finite-size effects are small.
Table <ref> shows the energy levels in the Schwinger model compared with those predicted from non-interacting scalar field theory fit to the lowest level.
Good agreement is again found, supporting the identification of the excited states in the Schwinger model with OBC momentum modes.
§.§ Sources with OBCs
The analysis in App. <ref> related to source dynamics in non-interacting scalar field theory is performed in infinite volume and in the continuum limit.
To better understand the impact of finite-volume and OBCs, it is helpful to consider the retarded-Green's function on such lattices.
The Green's function in Eq. (<ref>) in 3+1D is given by
D_R( x, y,t,0) =
θ(t) ∫ d^3 k/(2π)^3 1/2ω_k (
e^-i ( ω_k t - k· ( x- y) ) -
e^+i ( ω_k t - k· ( x- y) ))
= - i θ(t) ∫ d^3 k/(2π)^3 1/ω_k sinω_k t
e^i k· ( x- y) .
In a 3+1D finite volume with PBCs, this becomes
D_R( x, y,t,0) →
- i θ(t) 1/L^3∑_ k 1/ω_ k sinω_ k t
e^i k· ( x- y)
=
- i θ(t) ∑_ n 1/ω_ n sinω_ n t ψ_ n^†( y)
ψ_ n( x) ,
where ψ_ n( x) is an appropriately normalized lattice eigenstate subject to PBCs, defined by a triplet of integers n,
ψ_ n( x) = 1/L^3/2
e^i k· x , ∑_ xψ_ n^†( x) ψ_ m( x) = δ^(3)_ n, m , ∑_ nψ_ n^†( y) ψ_ n( x) = δ^(3)_ x, y ,
with k=2 π n/L.
To transition to OBCs, the OBC eigenstates ψ_ m( x) are used.
For simulations in 1+1D with OBCs, the relevant retarded Green's function is
D_R(x,y,t,0) =
- i θ(t) ∑_n
sinω_n t/ω_n ψ_n^†(y)
ψ_n(x) ,
with appropriately orthonormalized wavefunctions, such as those shown in Fig. <ref>.
Consider a source with a Gaussian profile, as was considered earlier,
on a lattice of length L,
j_L(y) = η ∑_n=0^L-1 δ (y-n) e^-α (y - L-1/2)^2 .
where η is the appropriate normalization factor determined by requiring,
∫_-∞^+∞ dy j_L(y) = J_0
= η∑_n=0^L-1 e^-α (L-1-2n/2 )^2 ≈ η√(π/α) [
1 + 2∑_p=1^∞ (-)^p e^-π^2 p^2/α]
≡η√(π/α) S(α) .
The approximate equality holds for a well-localized source with large L and small α (in which case the bounds of the sum can extended to ±∞ with exponentially-suppressed errors, and the Poisson resummation formula can be used).
The function S(α) rapidly approaches the continuum result of unity, for decreasing α.
Therefore, the sources can be written as
j_L(y) = J_0 √(α/π) 1/S(α) ∑_n=0^L-1 δ (y-n) e^-α (y - L-1/2)^2 ,
which is the discrete version of Eq. (<ref>).
The expression for the downstream field from the source is given by Eq. (<ref>), and can be written as
ϕ_j(x,t) = ϕ_j=0(x,t)
+
J_0 √(α/π) 1/S(α) ∑_n=1^L (
∑_y=0^L-1ψ_n^†(y)
e^-α (y - L-1/2)^2)
sinω_n t/ω_n ψ_n(x) .
The expression in Eq. (<ref>) is the corresponding result to Eq. (<ref>) but in a finite volume with OBCs.
Numerically, the field evolution from the source obtained from the two expressions is the same until boundary effects become important.
§ TRUNCATED ELECTRIC INTERACTIONS FOR ODD L
The Hamiltonian corresponding to Eq. (<ref>) for odd L is,
Ĥ_el^(Q=0)(λ̅) = g^2/2{∑_n=0^L-3/2[ ( L - 5/4 - 2n ) Q̂^2_n + 1/2Q̂_n δ̂_n + 1/4δ̂^2_n + ( 7/4 + 2n ) Q̂^2_L+1/2+n.
- 1/2Q̂_L+1/2+nδ̂_L+1/2+n + 1/4δ̂^2_L+1/2+n ] + 1/4(Q̂^2_L-1/2+δ̂^2_L-1/2)
+ 2∑_n=0^L-5/2 ∑_m=n+1^min(L-3/2,n+λ̅)[ ( L-1 - 2m ) Q̂_nQ̂_m + 1/2Q̂_nδ̂_m + ( 2 + 2n ) Q̂_L+1/2+nQ̂_L+1/2+m
- 1/2Q̂_L+1/2+mδ̂_L+1/2+n ]
+ . 1/2∑_n=1^min(L-1/2,λ̅)[ Q̂_L-1/2-nQ̂_L-1/2 + Q̂_L-1/2-nδ̂_L-1/2 + Q̂_L-1/2+nQ̂_L-1/2 - Q̂_L-1/2+nδ̂_L-1/2] } .
§ FURTHER DETAILS ON CIRCUIT CONSTRUCTION
The circuit implementation of the operators from the wavepacket pool in Eq. (<ref>) for d≤ 5 is shown in Fig. <ref>.
§ ANOTHER WAY TO CREATE HADRON WAVEPACKETS
In the main text, circuits are constructed that optimize the overlap with an adiabatically prepared hadron wavepacket.
Here, an alternative method for preparing hadron wavepackets is presented based on minimizing the energy in the single-hadron sector.
Desirable features of a hadronic wavepacket are that it is localized (i.e., outside of the wavepacket profile, the system is locally in the vacuum), and that it is composed of single hadrons.
When establishing a wavepacket on top of the interacting vacuum, as is done in the main text, localizability can be implemented at the level of the operator pool.
For example, by only including operators in the pool that have support over a predefined spatial interval, l, it is guaranteed that the system remains in the vacuum outside of l.
To ensure that the wavepacket is composed of single hadron states, consider adding a vacuum chemical potential, μ, to the Hamiltonian,
Ĥ_1-hadron = Ĥ + μ|ψ_vac⟩⟨ψ_vac| .
For μ > m_hadron, the ground state of Ĥ_1-hadron
in the Q=0 sector is the lowest-energy state of a single hadron.
The strategy for building a wavepacket is to minimize ⟨ψ_ansatz|Ĥ_1-hadron|ψ_ansatz⟩, where |ψ_ansatz⟩ is adaptively built using a localized operator pool.
The resulting state will be the lowest energy configuration orthogonal to the vacuum that is localized within the interval l.
The prepared state will primarily be a superposition of single hadrons, with multi-hadron contributions decreasing as l increases.
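For small lattices this construction can be carried out with exact diagonalization; a minimal sketch, where H is assumed to be the (Q=0 sector) Hamiltonian matrix and psi_vac its ground-state vector, both available as dense numpy objects:

import numpy as np

def lowest_single_hadron_state(H, psi_vac, mu):
    # Ground state of H_1-hadron = H + mu |vac><vac|; for sufficiently large mu
    # (mu > m_hadron above the vacuum) this is the lowest-lying single-hadron state.
    H1 = H + mu * np.outer(psi_vac, psi_vac.conj())
    evals, evecs = np.linalg.eigh(H1)
    return evals[0], evecs[:, 0]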
As an example, consider using this procedure to construct a single-hadron wavepacket with an operator pool localized to l=2 sites on either side of the midpoint of the lattice.
Starting from the operator pool in Eq. (<ref>), the l=2 pool consists of Ô_m(1), Ô_m(2), Ô_mh(1,1), Ô_h(1,1), Ô_mh(2,d) and Ô_h(2,d) with d={1,2,3} .
Choosing μ = 2.5 m_hadron pushes the energy of the vacuum above two-particle threshold (which is slightly below 2 m_hadron due to the presence of a two-hadron bound state), and is found to be effective for our purposes.
To update the SC-ADAPT-VQE ansatz, the gradient can be computed with
∂/∂θ_i ⟨ψ_ansatz| e^-i θ_i Ô_i Ĥ_1-hadron e^i θ_i Ô_i |ψ_ansatz⟩ |_θ_i=0 = - Im [ ⟨ψ_ansatz| ( [ Ĥ, Ô_i ] + 2 μ |ψ_vac⟩⟨ψ_vac| Ô_i ) |ψ_ansatz⟩ ] .
Note that it can be necessary to bias the initial parameters to prevent the optimizer from choosing θ_i=0, because the initial state is a local maximum of the energy (and second derivatives are then required).
Due to the limited size of the operator pool, the SC-ADAPT-VQE algorithm converges relatively well after 4 steps, with the optimal operators and associated variational parameters shown in Table <ref>.
The resulting state has an L-independent energy expectation value of ⟨ψ_ansatz|Ĥ|ψ_ansatz⟩ = 1.18 m_hadron, and overlap onto the vacuum state of |⟨ψ_vac|ψ_ansatz⟩|^2 = 8.5× 10^-5.
These results show that the prepared wavepacket is primarily composed of single-hadron states, and both ⟨ψ_ansatz|Ĥ|ψ_ansatz⟩ and |⟨ψ_vac|ψ_ansatz⟩|^2 can be further reduced by increasing l, i.e., de-localizing the prepared wavepacket.
§ DETAILS ON THE 112-QUBIT MPS SIMULATIONS
The 112-qubit quantum simulations in Sec. <ref> are compared to the expected, error-free, results determined using the qiskit and cuQuantum MPS circuit simulators.
MPS techniques are approximations that can be improved by increasing the bond dimension in the MPS ansatz.
A higher bond dimension increases the maximum amount of entanglement in the ansatz state, at the cost of longer run-time on a classical computer.
As a result, simulations that explore highly-entangled states are promising candidates for a near-term quantum advantage.
Our numerical investigations have found a large contrast between the bond dimension needed for state preparation and time evolution.
The initial hadron wavepacket coincides with the vacuum state outside of the few sites where the wavepacket has support.
This state has a low amount of entanglement as the ground states of gapped 1D systems have area-law entanglement <cit.>.
Therefore, a relatively small bond dimension can be used in the MPS simulations to faithfully reproduce the preparation of the vacuum and initial hadron wavepacket.
Time evolution, on the other hand, involves a superposition of many single-hadron states, which disturb the vacuum as they propagate.
This produces a significant amount of entanglement, and subsequently requires a larger bond dimension.
The bond dimension needed for convergence of the chiral condensate for different simulation times is shown in Fig. <ref>.
It is seen that a relatively small bond dimension is sufficient for convergence, even out to late times.
This should be compared to the convergence of ⟨ψ_WP|χ̂_j |ψ_WP⟩
in the left panel of Fig. <ref>, where the quantity
Δ_i = ∑_j | ⟨ψ^MPS_i_ WP| χ̂_j(t) |ψ^MPS_i_ WP⟩ - ⟨ψ^MPS_i+10_ WP| χ̂_j(t) |ψ^MPS_i+10_ WP⟩ | ,
is computed for different bond dimensions.
This quantity determines how much the local chiral condensate of the evolved wavepacket changes as the maximum bond dimension is increased from i to i+10.
This reveals that MPS calculation of the chiral condensate of
(a) the initial state can be done very efficiently (results with a maximum bond dimension of 10 have already converged below a 10^-5 precision), and
(b) the evolved wavepacket converges slowly, especially at late times.
Indeed, the quick convergence of X_n(t) in Fig. <ref> is due to the cancellations of errors between the MPS simulation of the wavepacket and vacuum evolution.
These MPS simulations take increasingly more compute run-time as the bond dimension increases.
This is illustrated in the right panel of Fig. <ref>, where the run-time for a selection of times and various bond dimensions are shown. In this panel, we compare the performance of the CPU-based qiskit MPS simulator, run on a single 40-core CPU-node on Hyak, and the GPU-based cuQuantum MPS simulator, run on a single NVIDIA RTX A5000 through the OSG Pool.
§ FURTHER DETAILS ABOUT THE ERROR MITIGATION AND ANALYSIS
For each time t={1,2,…,14}, four kinds of circuits are run on the quantum computer: time evolution of the vacuum, time evolution of the wavepacket, and the corresponding forward-backward evolution for ODR error mitigation, see Fig. <ref>.
Each circuit for t=1-8 is run with 480 twirls and each circuit for t=9-14 is run with 160 twirls; each twirl with 8,000 shots, as displayed in Table <ref>.
The longest continuous one-dimensional chain on ibm_torino that we utilize is 112 qubits, corresponding to a L=56 lattice (see layout in Fig. <ref>).
We use two lattice-to-qubit mappings to minimize the effects of poorly performing qubits.
Half of the twirls assign staggered site 0 to the top-right device qubit, and the other half assign staggered site 0 to the bottom left device qubit.
Averaging over multiple layouts mitigates some of the effects of qubit-specific noise.
Indeed, in our simulations there are twirled instances where qubits perform poorly, either due to decoherence or to readout errors.
Such errors can be identified and removed from analysis by filtering out measurements where ⟨Ẑ_j ⟩_meas/⟨Ẑ_j ⟩_pred < ϵ in the mitigation runs, with ϵ some predetermined threshold.[This type of event post-selection, requiring device performance to exceed a specified level in interleaved calibration circuits, has been employed previously, for example, Ref. <cit.>.]
If this ratio is negative, then the qubit has flipped, and if it is 0 then the qubit has completely decohered, i.e., it has become a maximally mixed state.
We choose ϵ = 0.01, and do not see much difference varying up to ϵ = 0.05.
Our scheduling of jobs interleaves physics and mitigation circuits with the same twirl.
Poorly performing qubits, identified from measuring the mitigation circuit, are cut from both the ensemble of mitigation and associated physics measurements.[
Note that 160 of the 480 twirls for t=1-7 do not interleave physics and mitigation. Instead, they are sent in batches of 40 circuits with uncorrelated twirls between mitigation and physics circuits.
In this case a qubit measurement of physics circuit n in the batch is cut if the corresponding qubit measurement in mitigation circuit n is cut.
Surprisingly, no improvement is found when correlating the twirls and interleaving mitigation and physics circuits.]
The results of measurements related by CP symmetry are combined.
For Ẑ_j, this means combining ⟨Ẑ_j ⟩ and -⟨Ẑ_2L-1-j⟩ (for runs with 480 twirls, this can lead to up to 960 independent measurements for ⟨Ẑ_j ⟩).
The central values and corresponding uncertainties are determined from bootstrap re-sampling over twirls.
Due to the filtering procedure, ⟨Ẑ_j⟩ for each qubit can have a different number of contributing twirls, N^(meas)_j.
For each sample in the bootstrap ensemble, N^(meas)_j random integers with replacement { x }∈{1,2,…, N^(meas)_j} are generated, with the prediction for the error-free physics expectation value for that sample given by
⟨Ẑ_j⟩_pred|_phys = ( ∑_i ∈{ x }⟨Ẑ_j⟩^(i)_meas|_phys ) × ( ∑_i ∈{ x }⟨Ẑ_j⟩_pred/⟨Ẑ_j⟩^(i)_meas|_mit ) ,
where the superscript (i) labels the twirl.
This is performed for the wavepacket and for the vacuum evolution, with the vacuum subtracted chiral condensate given by
X_j = (-1)^j ( ⟨Ẑ_j⟩_pred|_phys^(WP) - ⟨Ẑ_j⟩_pred|_phys^(Vac) ) .
This process is repeated N_Boot times, with N_Boot large enough for the mean and standard deviation of the bootstrap ensemble { X_j } to have converged.
This mean and standard deviation are used to produce the points with error bars in Figs. <ref> and <ref>.[The two sums in Eq. (<ref>) compute the mean of the bootstrap sample.
If instead the median is used, larger error bars are found.
This is likely due to there being correlations in the tails of both the ensembles of physics and mitigation measurements that are captured by the mean, but suppressed
by the median.]
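A sketch of this resampling for a single qubit j is given below; meas_phys and meas_mit are hypothetical arrays of per-twirl ⟨Ẑ_j⟩ measurements that survive the filtering (assumed here, for simplicity, to come from the same twirls), pred_mit is the error-free prediction for the mitigation circuit, and the two sums of the bootstrap formula are implemented as means, as described in the footnote above.

import numpy as np

def bootstrap_odr(meas_phys, meas_mit, pred_mit, n_boot=1000, seed=0):
    # Bootstrap-mean estimate of the mitigated <Z_j> for one qubit.
    meas_phys = np.asarray(meas_phys)
    meas_mit = np.asarray(meas_mit)
    rng = np.random.default_rng(seed)
    n = len(meas_phys)
    samples = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)  # resample twirls with replacement
        samples[b] = np.mean(meas_phys[idx]) * np.mean(pred_mit / meas_mit[idx])
    return samples.mean(), samples.std()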
We have found that larger angles in the circuits lead to larger systematic errors, independent of circuit depth.
This is likely due to cross-talk errors between gates acting on neighboring qubits when large rotations are applied.
These kinds of errors are not corrected by ODR.
Thus, there is a trade-off between increased number of Trotter steps with smaller angles, and the associated increased circuit depth.
A full determination of this trade off remains to be explored.
The different stages of error mitigation are displayed in Fig. <ref>.
Two times, t=3 (CNOT depth 120) and t=9 (CNOT depth 270), are chosen for the purpose of demonstration.
Note that in these plots, the decohered value of the chiral condensate is ⟨χ̂_j⟩ =1 (with 𝒳_j =0).
The first row of Fig. <ref> shows the “raw” results obtained from the device (with dynamical decoupling and readout error mitigation) after averaging over all Pauli twirls.
The device errors for the wavepacket and vacuum evolution outside of the wavepacket region are very similar, and cancel to a large degree in forming the subtraction in X_j(t).
It is striking that, for t=9, there is no discernible sign of the presence of a wavepacket in the raw results.
The second row of Fig. <ref> shows the effect of applying ODR.
This helps recover the chiral condensate, being more effective for t=3 than t=9, but can also lead to large error bars when the qubit is close to being completely decohered (⟨Ẑ_j⟩_meas|_mit close to zero).
The third row of Fig. <ref> shows the effects of filtering out runs where ⟨Ẑ_j ⟩_meas/⟨Ẑ_j ⟩_pred|_mit < 0.01.
This removes most of the runs contributing to the large error bars, and is more significant for t=9 than t=3.
It also leads to different numbers of twirls surviving the filtering for different qubits.
Sometimes only a small number survive, compromising the assumption of a depolarizing channel for ODR (and also compromising the error estimates from bootstrap re-sampling).
The fourth row of Fig. <ref> shows the effects of using the CP symmetry to combine the measurements of ⟨Ẑ_j ⟩ and -⟨Ẑ_2L-1-j⟩.
This reduces the effects of poorly performing qubits, and gives the final results presented in Figs. <ref>, <ref>, and <ref>.
CHAPTER: STEPS TOWARD QUANTUM SIMULATIONS OF HADRONIZATION AND ENERGY-LOSS IN DENSE MATTER
This chapter is associated with Ref. <cit.>:
“Steps Toward Quantum Simulations of Hadronization and Energy-Loss in Dense Matter" by Roland C. Farrell, Marc Illa and Martin J. Savage.
§ INTRODUCTION
An improved understanding of the transport of energy, momentum, flavor, and other quantum numbers
in non-equilibrium strongly-interacting dense matter is needed to
refine predictive capabilities for
the extreme matter created in the early universe and in astrophysical environments.
Motivation and inspiration comes from ongoing and planned experiments of heavy-ions collisions <cit.> and from astronomical observations of multi-messenger signals <cit.>.
The state-of-the-art predictions for the structure and dynamics of extreme matter
integrate the experimental results from these programs with analytic and computational techniques,
and phenomenological modeling <cit.>.
Knowledge of energy-loss and stopping ranges of electrically charged particles, from electrons to nuclei, in ordinary materials is essential to many scientific, technological, societal, and therapeutic endeavors, including in protecting humans and scientific equipment from cosmic rays and background radiation.
It is also critical to designing ultra-sensitive experiments to search for new physics that interacts weakly with ordinary matter, such as searches for Dark Matter <cit.> and 0νββ-decay of nuclei <cit.>.
Long-term experimental programs have measured the energy loss for particles penetrating a wide selection of materials over a broad spectrum of energies.
Theoretical descriptions of the underlying mechanisms are well-established for electrically-charged particles and γ-rays over a significant energy regime, including, for example, the effects of elastic and inelastic collisions, pair-production and bremsstrahlung radiation <cit.>.
Interactions with nuclei at higher energies remain the focus of work at current and future high-energy colliders and fixed-target experiments <cit.>, e.g., experiments at the Thomas Jefferson National Accelerator Facility (TJNAF), Brookhaven National Laboratory (BNL), or at the Large Hadron Collider (LHC).
Together, experiment, theory, and computation have established an extensive catalogue enabling predictions for the behavior of charged particles moving through a variety of materials (a brief summary can be found in the Particle Data Group <cit.>).
In contrast, energy loss mechanisms in dense nuclear matter in (and out of) equilibrium are much less well understood.
There has been significant theoretical progress on this topic
(e.g., see Ref. <cit.>),
largely driven by input from heavy-ion collision experiments at BNL <cit.> and the LHC <cit.>.
As an example, because of their mass and compactness, the transport properties, yields, and distributions of
heavy quarks and
quarkonia systems provide insights into the nature of the matter created in heavy-ion collisions (for recent reviews on different theoretical approaches, see Refs. <cit.>).
However, much remains unknown, particularly in processes where quantum coherence and entanglement play a significant role, and conventional methods such as Monte-Carlo sampling over event probability distributions, break down.
Previous approaches have often neglected particular elements of such processes, e.g., the effects of coherent scattering in parton shower simulations <cit.>.
Generally, a more
complete understanding of the dynamics of transport, fragmentation, color screening, and hadronization in non-equilibrium quantum chromodynamics (QCD) matter remains a forefront challenge.
In this chapter, classical simulations of the Schwinger model are performed to determine the energy-loss and other observables associated with particles moving through dense matter.
An extensive study of heavy-hadrons moving through the
lattice vacuum is performed in order to
isolate lattice artifacts that arise from the breaking of Lorentz symmetry,
and will not survive in the continuum.
These lattice artifacts are magnified in certain entanglement measures,
and lead to energy loss and light-hadron production
even for a heavy-hadron moving through the vacuum at constant velocity.
Once parameters are found where these artifacts are minimized, the propagation of heavy-hadrons through a medium of static heavy-hadrons is considered.
Energy loss due to the production of hadrons (hadronization) and internal excitations are identified, and the crucial role of quantum coherence is emphasized.
These classical simulations are limited to system sizes of L=12 spatial sites (24 staggered sites), and we present scalable quantum circuits for state preparation and estimate the resources required for large-scale quantum simulations of these phenomena.
§ THE SIMULATION STRATEGY
To investigate how charges move through dense strongly-correlated matter,
heavy “external” charges are introduced into the Schwinger model,
with positions specified by classical trajectories.
Heavy charges with fixed positions define regions of dense matter
within the lattice,
and additional heavy charges moving across these regions probe energy-loss,
fragmentation, hadronization, and entanglement arising from propagation through a dense medium.
These heavy charges emulate the heavy fields
in heavy-quark effective theory (HQET) <cit.> that are used to define
a systematic expansion about the heavy-quark limit.
In this limit, analogous to a B-meson,
a heavy-hadron in the Schwinger model is composed of a single heavy charge
that is electrically neutralized by a “cloud” of light charges.
Important to the current treatment
is that the position, velocity, and acceleration of the moving heavy charge
are well-defined throughout its motion.
To access the desired physics, we
choose a classical trajectory where the heavy charge accelerates to a constant velocity,
moves through the dense medium, and then decelerates to rest.
§.§ The Lattice Schwinger Model Hamiltonian with Heavy Charges
The Hamiltonian used in this chapter is the same as in Eq. <ref>, except that now we allow for heavy background charges.
This leads to
Ĥ_el → g^2/ 2∑_j=0^2L-2 (∑_k≤ jq̂_k +Q_k )^2
The electric charge operator q̂_k acts on the k^ th staggered site, and Q_k is a heavy background charge.
The heavy charges have been included as discontinuities in Gauss's law, which
(on the lattice) is,
E_k - E_k-1 = q_k + Q_k ,
where E_k is the electric field on the link between staggered sites k and k+1.
Due to confinement and without background charges, the low-energy excitations are charge-neutral bound states of electrons and positrons.
The hadron mass m_hadron and confinement length
ξ∼ m_hadron^-1 depend non-perturbatively on m and g.
The range of {m,g} values used in this work gives rise to 1.3 ≤ m_hadron^-1≤ 1.8,
which is well contained inside of the volumes accessible to classical simulation,
ξ≪ L.
All lengths are measured in units of the staggered lattice spacing that has been set to
a_staggered = a_spatial/2=1.
It is important to keep in mind that
this Hamiltonian measures the energy in the light degrees of freedom, as there is no mass
or kinetic term for the heavy charges.
An explicit expression for Ĥ_el with the charge operator q̂_k expanded in terms of Ẑ_k can be found in App. <ref>.
§.§ Heavy-Q^+ Trajectories
In our treatment, moving heavy-Q^+s follow
a trajectory with
a smooth acceleration up to a uniform velocity,
followed by a smooth deceleration to rest.
The trajectory is parameterized by the following equations of motion,
x(t) =
v_max^2/4 𝔞_max logcosh[ β (t-t_0) ]/cosh[ β (t-t_0-T) ] + x_f + x_0/2 ,
v(t) = v_max/2 (
tanh[ β (t-t_0) ] - tanh[ β (t-t_0-T) ]
)
,
𝔞(t) = 𝔞_max (
sech^2 [ β (t-t_0) ] - sech^2 [ β (t-t_0-T) ]
)
,
where
β = 2𝔞_ max/v_max ,
t_0 = ⌊arccosh[√(𝔞_max/𝔞(0))]/β⌉ ,
T = x_f-x_0/v_max .
The trajectories are defined by the maximum velocity (v_max) and acceleration (𝔞_max), as well as the initial position
(x_0) and final position (x_f) of the heavy charge.
The variable t_0 is fixed by the initial acceleration 𝔞(0), which for our numerical studies is set to 𝔞(0) = 10^-4, and
⌊···⌉ denotes the round function to the nearest integer.
The continuous position x(t) is distributed among the two nearest odd numbered (positron) staggered sites to match the positive heavy charge.
Defining x_q1 to be the smaller numbered site and x_q2=x_q1+2 to be the larger numbered
site, the charge is distributed as,
Q_q1 = Q/2(x_q2-x(t))
,
Q_q2 = Q/2(x(t)-x_q1)
,
where Q is the value of the heavy charge.
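For concreteness, a small Python sketch of the trajectory of the equations above, and of the smearing of the heavy charge over the two nearest odd-numbered sites, is given below; the function names are ours, and the default initial acceleration follows the choice 𝔞(0)=10^-4 quoted above:
```python
import numpy as np

def trajectory(t, x0, xf, v_max, a_max, a0=1.0e-4):
    """Position, velocity and acceleration of the heavy charge (Eqs. above)."""
    beta = 2.0 * a_max / v_max
    t0 = np.round(np.arccosh(np.sqrt(a_max / a0)) / beta)
    T = (xf - x0) / v_max
    x = (v_max**2 / (4.0 * a_max)
         * np.log(np.cosh(beta * (t - t0)) / np.cosh(beta * (t - t0 - T)))
         + 0.5 * (xf + x0))
    v = 0.5 * v_max * (np.tanh(beta * (t - t0)) - np.tanh(beta * (t - t0 - T)))
    a = a_max * (np.cosh(beta * (t - t0))**-2 - np.cosh(beta * (t - t0 - T))**-2)
    return x, v, a

def smeared_charge(x, Q=1.0):
    """Distribute the continuous position x over the two nearest odd (positron) sites."""
    x_q1 = 2 * int(np.floor((x - 1) / 2)) + 1      # nearest odd-numbered site below x
    x_q2 = x_q1 + 2
    return {x_q1: 0.5 * Q * (x_q2 - x), x_q2: 0.5 * Q * (x - x_q1)}
```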
To minimize boundary effects, the initial and final positions of a heavy-Q^+ are placed as far as practical from the edges of the lattice, but in such a way to have an extended period of constant velocity.
An example trajectory is shown in Fig. <ref>,
where a heavy-Q^+ moves
from x_0=3 to x_f=11 with the constraints that v_ max=0.2 and 𝔞_ max=0.04.
Throughout this work, we will set 𝔞_ max=v_ max/5,
which we have verified does not lead to an appreciable energy loss due to the
radiation from an accelerating charge.
§.§ Maximum Lattice Velocity
Features of the dynamical simulations that will be
presented in the following sections
can be understood from the lattice dispersion relations.
The spectrum of the Schwinger model has been extensively studied in the literature,
with a focus on the first excitations in the charge q_tot=0 sector (scalar and vector mesons) <cit.>.
It is convenient to first consider the theory with g=0,
corresponding to non-interacting electrons and positrons.
The lattice dispersion relation for the electrons (in the charge q_tot=-1 sector)
resulting from the Hamiltonian given in Eq. (<ref>),
subject to OBCs, is
E^2 = m^2 + sin^2( K/2 )
,
K = (n+1/2) π/L+1/2 ,
where n={0,1,…, L-1}.
These energies are the gap above the vacuum, and
there are more energy levels in the g=0 spectrum beyond single-particle states
corresponding to multi-particle excitations.
The form of this (electron) dispersion relation arises, in part, because the spatial lattice spacing, which sets the distance between adjacent lattice sites of the same type, is twice the staggered lattice spacing: a_staggered=1 and a_spatial=2.
The electron dispersion is relevant for simulations with a moving heavy-Q^+, whose charge is neutralized by
electrons.
The group velocity of electrons is
v = dE/dK = sin K/4√(m^2 + sin^2 (K/2 )) ,
where the limit L→∞ has been taken in order
to define the derivative.
Unlike in the continuum, there is a maximum group velocity,
v_⋆ = 1/2 (√(m^2+1)-m )
,
which reduces to the speed of light, c=1/2
(in spatial lattice units),
in the m→ 0 (continuum) limit.
This maximum velocity will persist in the interacting theory, with a value that is shifted away from v_⋆.
Therefore, on the lattice, there is a critical velocity above which the heavy charge exceeds the maximum group velocity of the light degrees of freedom.
This critical velocity is lower than the speed of light, which is 1 staggered site per unit time.
This indicates that particle production can occur
on the lattice even when the heavy charge is moving at constant velocity
because some or all of the light degrees of freedom are unable to “keep up” with the charge for sufficiently high velocity.
Conceptually, the moving heavy charge will separate from the light degrees of freedom that were initially screening it, exposing the vacuum to an electric field,
which will create hadrons in its wake.
When determining the energy loss of a
charged particle
in medium at a non-zero lattice spacing,
the energy loss into the vacuum will be subtracted.
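A short numerical sketch of these kinematic limits (our function names; the dispersion and v_⋆ are the g=0 expressions quoted above) reads:
```python
import numpy as np

def electron_dispersion(m, L):
    """Free (g=0) single-electron energies on an OBC lattice (Eq. above)."""
    K = (np.arange(L) + 0.5) * np.pi / (L + 0.5)
    return K, np.sqrt(m**2 + np.sin(0.5 * K)**2)

def max_group_velocity(m):
    """v_* = (sqrt(m^2 + 1) - m)/2, the maximum group velocity of the light fermions."""
    return 0.5 * (np.sqrt(m**2 + 1.0) - m)

# For m = 0.1, v_* ≈ 0.45: a heavy charge moving faster than this must
# out-run (part of) its screening cloud, exposing the vacuum to an electric field.
print(max_group_velocity(0.1))
```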
§ CLASSICAL SIMULATIONS
Classical simulations of a selection of heavy-Q^+ trajectories with different “dense mediums” are performed.
These simulations determine the state at time t, |ψ(t) ⟩, from
the ground state in a particular charge sector,
|ψ(0) ⟩,
via a Trotterized time evolution associated with the time-dependent Hamiltonian,
|ψ(t) ⟩ = 𝒯∏_j=1^t/Δ t e^- i Δ t Ĥ(j Δ t)|ψ(0) ⟩ ,
where 𝒯 denotes the time-ordered product.
It was found that a (minimal) time step of Δ t=0.25 was sufficient for the convergence of the observables considered.[For small values of v, the time step Δ t can be increased. Explicitly, for v≤ 0.05, Δ t=2.0; for 0.05 < v≤ 0.1, Δ t=1.5; for 0.1 < v≤ 0.2, Δ t=1.0; for 0.2 < v≤ 0.4, Δ t=0.5; for 0.4 < v≤ 0.99, Δ t=0.25.]
The time-dependent energy, charge distribution and various entanglement measures
are determined from |ψ(t) ⟩.
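Schematically, this evolution can be written as the following Python sketch, where hamiltonian_at(t) is assumed to return the (dense) Hamiltonian matrix with the background charges evaluated on the classical trajectory at time t; each step is applied here as an exact matrix exponential, which for larger systems would be replaced by a product formula over the Hamiltonian terms:
```python
import numpy as np
from scipy.linalg import expm

def trotter_evolve(psi0, hamiltonian_at, t_final, dt=0.25):
    """Time-ordered product of Eq. above: |psi(t)> = prod_j exp(-i dt H(j dt)) |psi(0)>."""
    psi = np.asarray(psi0, dtype=complex)
    energies = []
    for j in range(1, int(round(t_final / dt)) + 1):
        H = hamiltonian_at(j * dt)            # dense matrix with Q_k(t) on the trajectory
        psi = expm(-1j * dt * H) @ psi
        energies.append(np.vdot(psi, H @ psi).real)   # total energy at this step
    return psi, np.array(energies)
```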
§.§ A Heavy-Q^+ Moving Through the Lattice Vacuum
Lorentz invariance is broken down to discrete translational invariance
in simulations using a spatial lattice.
The lattice dispersion relation
allows processes to occur that are forbidden in the continuum by energy-momentum conservation,
such as pair production below threshold.
This leads to increasing energy in the light degrees of freedom
as a heavy-Q^+ moves with constant velocity across the lattice vacuum,
which we connect to the more standard framework of energy-loss by the moving heavy-Q^+.
The workflow that we employ to simulate the dynamics of
a neutralized heavy-Q^+ moving across the lattice vacuum is the following:
* Determine the vacuum state, |ψ_ vac⟩,
and low-lying excited states without background charges.
This defines the vacuum energy (E_ vac), the mass of the hadronic excitations
in the light sector, the chiral condensate, and other vacuum observables.
* Determine the ground state with a neutralized
heavy-Q^+ at rest at site x_0, |ψ_ vac⟩_Q^+_{x_0}.
The energy gap above E_ vac defines the mass of the heavy hadron
(analogous to the B-meson), or more precisely the lattice evaluation of Λ in HQET <cit.>.
* Time evolve the state |ψ (0)⟩=|ψ_ vac⟩_Q^+_{x_0,v} using Eq. (<ref>), with the heavy-Q^+ trajectory, x(t), defined above in Eqs. (<ref>) and (<ref>).
At each time step, the relevant observables are computed,
including the total energy, given in Eq. (<ref>).
It is convenient to define observables as functions of the position of the moving heavy-charge, instead of time. For example, the total energy is
E_Q^+(x) = ⟨ ψ[t(x)] | Ĥ[t(x)] | ψ[t(x)] ⟩_Q^+_{x_0,v} ,
where t(x) can be determined from inverting the heavy-Q^+ trajectory x(t) from Eq. (<ref>).
All the quantities displayed
in the rest of the paper will depend on the position of the heavy-Q^+.
Figure <ref> shows the change in the total energy,
E_Q^+(x)
with parameters m=0.1 and g=0.8, using the same classical trajectory as in Fig. <ref>,
but with L=12 and x_f = 19.
Also shown is the instantaneous energy-loss Δ E/Δ x
defined by a
finite-difference approximation to the energy loss
at position x at time t during a Trotter step of size Δ t,
dE/dx(x) → Δ E/Δ x = E_Q^+(t+Δ t) - E_Q^+(t-Δ t)/x(t+Δ t) - x(t-Δ t) .
The saw-tooth structure of the energy is due to the staggering of charges.
Energy decreases as the heavy-Q^+ moves toward an even-numbered (electron) site and away from an odd-numbered (positron) site, due to the Coulomb interaction.
Similarly, the energy increases as the heavy-Q^+ moves away from an electron site and toward a positron site.
This structure likely results from our implementation of motion across the lattice,
and could be mitigated by smoothing the charge evolution over more than two lattice sites,
and by decreasing the lattice spacing.
The inset of the right panel shows the sum of the contributions symmetrized around the midpoint of the lattice (x=11), Δ E/Δ x(11±x̃)=1/2[Δ E/Δ x (11+x̃)+Δ E/Δ x (11-x̃)], with 0≤x̃≤ 7.
The strong cancellations between the positive and negative contributions indicates that the net
Δ E / Δ x is small when averaged across lattice sites,
much smaller than the magnitude of typical instantaneous values,
but importantly not equal to zero.
This demonstrates that there is a net energy
gain per unit length in the light degrees of freedom
as the heavy charge moves across the lattice
vacuum, corresponding
to the production of light hadrons, and a net energy loss of the heavy charge.
Lorentz breaking operators in the Hamiltonian
will, in general, contribute terms that are suppressed by powers of the lattice spacing,
O(a^n), with the lowest contribution at
O(a^2) for the KS Hamiltonian <cit.>.
Matrix elements of the Lorentz-breaking operators with a moving
heavy-Q^+ are expected to give rise to contributions to observables that scale as
O(a^2 v^2)
at low velocities
after
parity considerations and renormalization of the Lorentz-preserving contributions.
As the velocity of the heavy charge approaches the speed of light, v→ 1,
higher order terms will become increasingly important.
Figure <ref> shows a
lattice-averaged rate of energy loss,
Δ E/Δ x, as a function of the velocity of the heavy-Q^+.
Δ E/Δ x is determined by a linear fit to E_Q^+(x)
in the region of constant velocity,
defined by v ≥ (v_ max-0.01).
Using our trajectories, there is an upper bound on v_max
for a given simulation volume, leading to incomplete curves for the smaller L in Fig. <ref>.
The results are consistent with
the expected quadratic dependence on v at low velocities.
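The extraction of the lattice-averaged slope can be summarized by the following small Python helper (our naming), which simply restricts the fit to the constant-velocity window quoted above:
```python
import numpy as np

def lattice_averaged_dEdx(x, E, v, v_max):
    """Linear fit of E_Q+(x) restricted to the constant-velocity region v >= v_max - 0.01."""
    mask = v >= (v_max - 0.01)
    slope, _intercept = np.polyfit(x[mask], E[mask], 1)
    return slope
```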
The lattice-spacing dependence can also be probed by decreasing the coupling g while keeping m/g fixed (effectively decreasing a).
The results in Fig. <ref> verify that the energy loss decreases as the lattice spacing decreases.
This can be made more manifest by forming a dimensionless quantity between two physical quantities, that vanishes in the continuum.
The right panels of Fig. <ref>
show Δ E/Δ x rescaled by the square of the heavy hadron mass, Λ^2.
Keeping the physical heavy hadron mass fixed gives Λ∼ a,
whereas Δ E/Δ x is expected to scale at least as ∼ a^3.
The rescaled energy loss is indeed seen to decrease for smaller lattice spacing up to a v_max∼ 0.7,
where this analysis appears to break down.
As the heavy charge moves through the vacuum, the light charges re-arrange to dynamically screen the Q^+.
The charge density in the Lorentz-invariant continuum remains localized around the heavy charge, with a symmetric profile that is increasingly Lorentz-contracted with increasing velocity.
On the lattice, this picture changes due to Lorentz symmetry being broken, and particles having a modified dispersion relation.
Importantly, the light charges have a maximum velocity v_⋆
(see Eq. (<ref>) for g=0) that is less than the speed of light.
These lattice effects are illustrated in Fig. <ref>, which shows the charge density when the heavy-charge is at x=15 for a range of velocities.[This is not an ideal
observable because the charge density in the wake of the heavy charge will fluctuate with time.
This is why the charge density behind the heavy charge is unusually small for v_max=0.4 for that specific time (compared to v_max=0.3 or v_max=0.5).]
At t=0 in our simulations, there is a symmetric distribution of charges screening the Q^+
(up to boundary effects).
For v_max≲ 0.3,
this screening cloud largely
travels with the heavy charge, reproducing continuum expectations.
However, as v_max becomes comparable to v_⋆, the light charges cannot keep up with the heavy charge.
The light charges are dragged behind the heavy charge, and the profile becomes more asymmetric with increasing velocity.
This asymmetric charge distribution exposes the vacuum to a strong electric field, causing particle (hadron) production in the wake of the moving charge.
This is seen in the fluctuations in the light degrees of freedom on the opposite (left) side of the lattice, where light hadrons have a non-zero probability of being produced during the motion.
The role of quantum correlations in strongly interacting systems is an area of active research, with pioneering work connecting entanglement to the confinement and chiral phase transitions in QCD <cit.>, with parallel works on low-energy nuclear systems <cit.>, high-energy processes <cit.>, and quantum field theories <cit.>.
In the continuum, and similar to the charge density,
it is expected that disturbances in the entanglement above the vacuum will be localized around the position of the heavy charge.
However, on the lattice, the production of hadrons in the wake of the Q^+ alter the localized entanglement signatures.
The single-site entanglement entropy S_n=- Tr(ρ_n log_2 ρ_n)
is related to the purity of the reduced density matrix, ρ_n,
on site n,
and is shown in Fig. <ref>
when the heavy-Q^+ is at x=15 for a selection of v_max (same situation as in Fig. <ref>).
For small
v_max, the continuum expectation of entanglement entropy localized around the heavy-Q^+ is recovered.
For larger v_max,
considerable entanglement entropy is generated in the wake of the moving charge, consistent with a lattice artifact that scales as 𝒪(a^2 v^2) at low velocities.
To investigate correlations between sites, the bottom panels of Fig. <ref>
show the mutual information I_nm.
The mutual information between sites n and m is defined as
I_nm=S_n+S_m-S_nm, where S_nm = - Tr(ρ_nmlog_2 ρ_nm)
is the entanglement entropy of the two-site reduced density matrix.
This quantity shows that the correlations produced by the moving charge are
short range, with a scale naturally set by confinement.
For this particular system, these entanglement measures are providing a qualitative picture that is consistent with the charge densities in Fig. <ref>.[The difference in sign between I_nm and S_n in Fig. <ref> (especially in the last column) is due to S_nm being much larger with the moving charge than in the vacuum.]
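For completeness, a minimal Python sketch of how S_n and I_nm are obtained from a state vector on 2L qubits is given below; the helper names are ours, and a small eigenvalue cutoff avoids log(0):
```python
import numpy as np

def reduced_density_matrix(psi, keep, n_qubits):
    """Reduced density matrix of the qubits listed in `keep` from a state vector."""
    tensor = psi.reshape([2] * n_qubits)
    traced = [q for q in range(n_qubits) if q not in keep]
    tensor = np.transpose(tensor, list(keep) + traced).reshape(2 ** len(keep), -1)
    return tensor @ tensor.conj().T

def entropy(rho, cutoff=1.0e-12):
    """Von Neumann entropy in base 2."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > cutoff]
    return float(-np.sum(w * np.log2(w)))

def mutual_information(psi, n, m, n_qubits):
    """I_nm = S_n + S_m - S_nm for two single sites n and m."""
    S_n = entropy(reduced_density_matrix(psi, [n], n_qubits))
    S_m = entropy(reduced_density_matrix(psi, [m], n_qubits))
    S_nm = entropy(reduced_density_matrix(psi, [n, m], n_qubits))
    return S_n + S_m - S_nm
```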
Quantum correlations beyond two sites can be characterized by
the n-tangle <cit.> of a pure state |ψ⟩, defined by
τ_n(|ψ⟩)^i_1,... i_n = | ⟨ψ |ψ̃⟩ |^2
,
|ψ̃⟩ = Ŷ_i_1Ŷ_i_2⋯Ŷ_i_n |ψ⟩^*
.
The n-tangle for odd-n is only defined for n=3, and
vanishes in this system by charge conservation.
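A direct (if exponentially scaling) evaluation of the n-tangle from a state vector can be sketched as follows; the function name is ours:
```python
import numpy as np

PAULI_Y = np.array([[0.0, -1.0j], [1.0j, 0.0]])

def n_tangle(psi, sites, n_qubits):
    """tau_n = |<psi| Y_{i1}...Y_{in} |psi^*>|^2 for the qubits listed in `sites`."""
    psi_tilde = np.conj(psi).reshape([2] * n_qubits)
    for q in sites:
        psi_tilde = np.tensordot(PAULI_Y, psi_tilde, axes=([1], [q]))  # apply Y on qubit q
        psi_tilde = np.moveaxis(psi_tilde, 0, q)                       # restore axis order
    return float(np.abs(np.vdot(psi, psi_tilde.reshape(-1))) ** 2)
```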
Confinement suggests that the n-tangles should
fall off exponentially when the number of contributing lattice sites exceeds the confinement length.
This can be seen in the first column of Fig. <ref>,
where the non-zero n-tangles of the vacuum state | ψ_ vac⟩ are shown.
The second column of Fig. <ref> shows the n-tangles with a heavy-Q^+ at its initial position x_0=3.
They deviate from those in the vacuum in a local region around the heavy charge, as expected for a system with a finite correlation length.
If there were no lattice discretization effects, deviations from the vacuum n-tangles would remain localized around the moving charge.
The lattice effects are illustrated in columns three through five, which show the n-tangles when the heavy-charge is at x=15 for a selection of velocities.
Relative to the initial state (column two), a moving charge modifies the n-tangles across the whole lattice.
These effects are magnified for larger velocities or coarser lattice spacings, compare g=0.8 (top row) and g=0.6 (bottom row), as expected for a lattice artifact.
However, these lattice artifacts behave noticeably different than the local observables examined in the previous paragraphs.
The n-tangles are more fragile than other observables; with g=0.8, the 2-tangle is diminished by more than 10× for a velocity of v=0.4.[Note that the finite time step in our time evolution,
Δ t in Eq. (<ref>),
induces errors on the order of ∼ 10^-3 in the n-tangles.
While decreasing the value of Δ t modifies the values of the n-tangles, it does not change the qualitative features observed.]
This is much more severe than, for example, the deviation of the charge density shown in Fig. <ref>.
In addition, while the deviations in the charge density are restricted to the region behind the moving charge, the n-tangle is significantly destroyed across the entire lattice.
These differences are likely because the n-tangle is not a local observable
due to the complex conjugation in Eq. (<ref>).
The suppression of the n-tangles is a striking result, compared to the single-site
and two-site entropy.
A possible explanation for this difference is that the n-tangles are not capturing all
of the entanglement in the system
(e.g., when evaluated in the GHZ and W states, S_n and I_mn are non-zero while certain n-tangles are zero <cit.>).
The results from these entanglement measures point to
observables of the system evolving toward those of a classically mixed ensemble,
as predicted for pure state evolution consistent with the
Eigenstate Thermalization Hypothesis
(ETH) <cit.>
(a similar connection was found in Ref. <cit.>).
In order to further understand how the entanglement structure evolves,
other measures, such as the negativity <cit.> or non-stabilizerness entanglement
(magic) <cit.>, could be studied.
We remind the reader that the velocity dependence (aside from Lorentz contraction) of the observables calculated in this section is a lattice artifact that will vanish in the continuum.
§.§ A Heavy-Q^+ Moving Through a Dense Medium
A main objective of this work is to develop machinery for quantum simulations of dynamics in
strongly-interacting dense matter.
The previous subsection quantified the energy loss and other lattice artifacts that are already present for a heavy charge moving across the vacuum.
This provides a benchmark to compare with the results of in-medium simulations.
Matter is introduced into our simulations by including one or more static heavy-Qs,
whose positions are fixed
in time.
For well separated static charges, the ground state consists of a grid of heavy hadrons at rest.
For tightly packed static charges, the screening clouds merge together, analogous to the electron sea in a metal.
The parameters used in this work give rise to screening
with high fermion occupation numbers localized over a couple of staggered sites.
Because of this, Pauli blocking plays a significant role in the dynamics.
Combined with the kinematic restrictions of one dimension,
evolution within the medium leads
to interesting phenomena,
such as significant distortions to the screening profiles that are
more pronounced on the leading edge of the collision.
In the continuum, collisions between hadrons
are inelastic above a given
threshold invariant mass, depending on the hadronic spectrum.
In our simulations, with the hadron velocity fixed to v_max throughout
the collision,
hadron production is possible for all kinematics.
The simulations in this section are all performed
with L=12 and a relatively low heavy-Q^+ velocity of v_max=0.2 to minimize lattice artifacts.
The rest of the parameters defining the classical trajectory of the heavy-Q^+ are the same as in the vacuum simulations of the previous section.
§.§.§ A Heavy-Q^+ Incident upon One Static-Q^+
The simplest system to begin studies of energy loss in matter is that of
a neutralized heavy-Q^+ moving past another neutralized heavy-Q^+ that is fixed in place.
Initially, we prepare the ground state of the system in the presence of two heavy-Q^+s: the moving charge at x_0=3 and the static charge at x=11.
The total charge of the light
sector in the
ground state is q_tot=-2.
This initial wavefunction is time-evolved
using Eq. (<ref>), and the energy loss and charge density are compared to the results from the vacuum simulations in the previous section.
Figure <ref> shows the energy loss as a function of the
position of the moving heavy-Q^+ in the presence of the static heavy-Q^+ (red points).
To remove some of the lattice artifacts, it is useful to define the vacuum-subtracted quantity Δ_Q^+_{3,v}Q^±_{x',0},
Δ_Q^+_{3,v}Q^±_{x',0} = . Δ E/Δ x|_Q^+_{3,v} Q^±_{x',0}
- . Δ E/Δ x|_Q^+_{3,v} ,
where x' is the staggered site of the static charge.
As expected, the energy loss receives its largest contributions when the screening clouds
of the two heavy charges overlap.
The change in energy is more rapid
on the leading edge of the collision than the trailing edge.
This suggests that the initial collision is similar to a violent quench whereas the trailing interactions are occurring
closer to equilibrium.
Further insight into the mechanisms involved in this process can be gathered from Fig. <ref>, which shows the evolution of the charge density.
Particularly striking is the charge density around the static charge after the collision (top-right panel).
Compared to the
results in vacuum, Fig. <ref>, the charge density in-medium is more de-localized.
This is indicative of excitations of the static hadron, and of light hadron production in the collision.
Note that the difference in the charge distributions of the static and dynamic charge at t=0 (left panel) are due to boundary effects.
§.§.§ A Heavy-Q^+ Incident upon One Static-Q^-
By placing a static Q^- in the volume instead of a static Q^+,
collisions of a heavy-meson with a heavy-anti-meson can also be studied.
One difference from the simulations involving two equal charges is that the effects of Pauli blocking are less significant; the electrons surrounding the moving heavy-Q^+ are not Pauli blocked by the positrons surrounding the static Q^-.
In addition, there is now a particle-antiparticle annihilation channel open during the collision.
The energies during these simulations are shown in Fig. <ref> (blue points).
As expected, they are seen to be lower for the oppositely charged heavy-Qs than for the same
charged heavy-Qs.
The right panels show that the energy changes much more rapidly than for the two colliding Q^+s, due to the lack of Pauli blocking.
Also, the net change in energy in this process is noticeably less than
when a heavy-Q^+ passes a static heavy-Q^+.
The charge density is shown in the lower panels of Fig. <ref>.
For this relatively low velocity,
the screening vanishes when the charges are on top of each other (middle column), because the total net
heavy charge is zero.
It is interesting to look
at the charge distribution surrounding the static-Q^- after the moving charge has passed.
When the positive charge is at x=15 (fourth column) the charge distribution surrounding the Q^- has a dipole moment pointing to the right.
However, when the positive charge is at x=19 (fifth column), the dipole moment is
pointing to the left.
This suggests that the heavy-Q^- hadron is left
in an excited state characterized by a time-dependent dipole moment.
Such excitations have recently been identified in dynamical simulations of nuclei moving through dense neutron matter <cit.>.
§.§.§ A Heavy-Q^+ Incident upon Two Static-Q^+s
A medium of multiple neutralized static Q^+s allows for an exploration of in-medium quantum coherence, beyond those involved in heavy-hadron collision.
Limited by the sizes of lattice volumes available for classical simulation, we consider two static Q^+s located in the middle of the lattice and separated by one or two spatial sites.
Quantum correlations between tightly packed static charges are expected to have a large effect on the energy loss, which will not simply be an incoherent sum of the energy lost to each static charge separately.
Figure <ref> shows the energy loss as a function of position of the moving heavy-Q^+ in the presence of two static-Q^+
(for separations of one (upper) and two (lower) spatial sites).
To isolate the effects of in-medium quantum coherence,
it is useful to define the following quantities,
Δ_A = . Δ E/Δ x|_Q^+_{3,v} Q^+_{x',0}
- . Δ E/Δ x|_Q^+_{3,v}
, Δ_B = . Δ E/Δ x|_Q^+_{3,v} Q^+_{x”,0}
- . Δ E/Δ x|_Q^+_{3,v} ,
Δ_C = . Δ E/Δ x|_Q^+_{3,v} Q^+_{x',0} Q^+_{x”,0}
- . Δ E/Δ x|_Q^+_{3,v}
,
where x', x” indicate the lattice sites of the
static-Q^+s,
and these quantities are shown in the right panels of Fig. <ref>.
The combination Δ_C - Δ_A - Δ_B is a measure of in-medium quantum coherence, and is seen to be more significant for static-Q^+s separated by one spatial site compared to two.
Even at low velocities, the energy loss function is sensitive to the increased fermion occupancy and quantum coherence present in dense systems.
As in the case of a single static heavy hadron,
there is a clear asymmetry between the interactions of the leading light degrees of freedom and the trailing ones.
§ QUANTUM SIMULATIONS
The initial state for the simulations performed in this work is the ground state
in the presence of background charges, |ψ_vac⟩_Q_{x}.
Without these background charges, the total charge of the vacuum is q_tot = 0.
In the presence of the charges, the ground state of the system re-arranges in such a way that
q_tot + Q_tot = 0 for sufficiently large lattices.[
For a finite lattice size,
there is a regime of large m/g for which q_tot = 0
when Q_ tot≠ 0, and the ground state of the system is charged.
For the relatively small m/g=0.125 used in this work, the ground state has
q_tot + Q_tot = 0.]
In this phase, the charges are completely screened over the scale of a confinement length ξ∼ m_hadron^-1.
Outside of this screening length, the system is locally in the vacuum without static charges, |ψ_vac⟩.
These observations inform an efficient and scalable method for preparing ground states
in the presence of background charges on a quantum computer.
A key ingredient in the method for state preparation is the Scalable-Circuit-ADAPT-VQE (SC-ADAPT-VQE) algorithm that is detailed in Sec. <ref>.
§.§ Preparing Ground States with Background Charges
Consider preparing the ground state
in the presence of a single positive background charge in the middle of the lattice Q_L-1=+1.
As argued above, the ground state
has q_tot=-1, with the charge density localized around staggered site L-1.
The method that we will use to prepare |ψ_vac⟩_Q^+_L-1 will have two steps:
1. Prepare a state |ψ_init⟩ which has the qualitative features of |ψ_vac⟩_Q^+_L-1 correct.
This state will be |ψ_vac⟩ far from the background charge, and possess a local integrated charge of q_tot=-1 around the background charge.
|ψ_init⟩ is quantitatively correct everywhere except for a few correlation lengths around staggered site L-1.
2. Modify the wavefunction around staggered site L-1. This builds the correct profile of the screening charges, and can be done with circuits that act locally around site L-1.
To construct |ψ_init⟩, first initialize the strong coupling ground state
in the presence of the background charge,
|Ω_0⟩_Q^+_L-1 = 1/√(2) (X̂_L-2|Ω_0⟩ + X̂_L|Ω_0⟩ ) ,
where |Ω_0⟩ = | 01⟩^⊗ L is the strong-coupling vacuum without a background charge.
The X̂ operators lead to electron occupation on the staggered sites next to the
background charge.[
The state with an electron occupied on site L-2 and site L are degenerate with the kinetic term turned off.
By time reversal symmetry, the state can be taken to be a real superposition, and it is found that the equal superposition with a (+) has the lowest energy when the kinetic term is turned on.]
|Ω_0⟩_Q^+_L-1 has the desired property of charge (-1) localized around the position of the background charge.
Next, |ψ_vac⟩ is prepared far away from the background charge.
One way to accomplish this is to act with a unitary Û^aVQE, that prepares the vacuum when acting on the strong coupling vacuum, Û^aVQE |Ω_0 ⟩ = |ψ_vac⟩.
The problem of determining such a unitary with an efficient circuit implementation was recently addressed by the authors <cit.>,
and is an application of the SC-ADAPT-VQE algorithm outlined in Sec. <ref>.
The use of SC-ADAPT-VQE
to determine Û^aVQE is reviewed in App. <ref>.
Acting this vacuum preparation unitary on |Ω_0⟩_Q^+_L-1,
|ψ_init⟩ = Û^aVQE|Ω_0⟩_Q^+_L-1 ,
furnishes
an initial state with the desired properties of having charge (-1) localized around the
background charge and being |ψ_vac⟩ away from the position of the
background charge.
For step 2, |ψ_init⟩ is used as the initial state for another application of SC-ADAPT-VQE.
The goal of this round of SC-ADAPT-VQE is to determine localized circuits that build the correct wavefunction in the region around the background charge.
The target state |ψ_vac⟩_Q^+_L-1 is the ground state of the Hamiltonian with a background charge Q_L-1=+1 and, since both the initial state and target state have q_tot=-1, the operators in the pool should conserve charge.
In addition, as the Hamiltonian is real, the operators are also constrained by time-reversal invariance (operators with an odd number of Ŷ in the Pauli string decomposition).
These constraints imply that there are no single-qubit operators,
and a similar pool to
that
used for preparing a hadron wavepacket in our previous work <cit.> is found to be effective,
{Ô}_Q^+_L-1 = {Ô_mh(n,d) } ,
Ô_mh(n,d) ≡ i/4 [ σ̂^+_L-1-nẐ^d-1σ̂^-_L-1-n+d + h.c. , Ẑ_L-1-n ]
= 1/2 (X̂_L-1-nẐ^d-1Ŷ_L-1-n+d - Ŷ_L-1-nẐ^d-1X̂_L-1-n+d ) ,
where n measures the staggered distance from the background charge, with n ∈{-L+1,-L+2,…,L-1}, and d ∈{1,2,…,N-n-1}.
This pool satisfies the desired symmetry constraints,
as e^i θÔ_mh is real and conserves charge.
The two terms in the RHS of the second line of Eq. (<ref>) commute, and their exponentials can be converted to circuits without Trotter errors.
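As a small numerical check of this statement, the two Pauli strings can be built explicitly and verified to commute; the helper names and the tiny L=3 example are ours:
```python
import numpy as np
from functools import reduce

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.diag([1, -1]).astype(complex)

def pauli_string(ops, n_qubits):
    """Tensor product of single-qubit operators, given as a {site: matrix} dictionary."""
    return reduce(np.kron, [ops.get(q, I2) for q in range(n_qubits)])

def O_mh_terms(n, d, L):
    """The two Pauli strings of Eq. above, acting on sites L-1-n and L-1-n+d of 2L qubits."""
    a, b = L - 1 - n, L - 1 - n + d
    zs = {q: Z for q in range(a + 1, b)}         # the Z^{d-1} string in between
    xzy = pauli_string({a: X, **zs, b: Y}, 2 * L)
    yzx = pauli_string({a: Y, **zs, b: X}, 2 * L)
    return xzy, yzx

xzy, yzx = O_mh_terms(1, 2, 3)                   # small L = 3 example
assert np.allclose(xzy @ yzx, yzx @ xzy)         # the two strings commute
O = 0.5 * (xzy - yzx)                            # O_mh(1,2): exp(i theta O) splits exactly
```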
The convergence of the SC-ADAPT-VQE prepared ground state |ψ_ansatz⟩
to the true ground state can be quantified with the deviation in the energy of the
ansatz state E_ans compared to the true ground state energy E_gs,
δ E = E_gs - E_ans/E_gs ,
as well as the infidelity density of the ansatz wavefunction with respect to the exact
ground state,[
The average infidelity density is not an optimal metric to use as it asymptotes to
the infidelity of the vacuum prepared with Û^aVQE for large system sizes,
i.e., the deviation from the vacuum infidelity density will scale as 1/L.
A better measure of infidelity would be the overlap of partially-reduced density matrices over a region of the lattice localized about the background charge.
Even with this, requiring a precision exceeding that of the vacuum state is not helpful.
]
I_L =1/L (1 - |⟨ψ_ansatz|ψ_vac⟩_Q^+_L-1|^2 ) .
The deviation in the energy and infidelity obtained from performing SC-ADAPT-VQE for m=0.1, g=0.8, and L={8,10,12} are given in Table <ref>.
The number of steps used to prepare the ground state (and its convergence) can be found in App. <ref>.
The sequence of operators and the corresponding variational parameters are given in Table <ref>.
It is surprising that the first set of operators chosen has d=4, indicating that correlations of separation 4 are more important than those of separation 2, which come later in the ansatz.
One explanation is that the initial state already includes some of the short-range correlations (the vacuum prepared with Û^aVQE has d=1 and d=3 correlations).
The initial state, |ψ_init⟩ in Eq. (<ref>), is labelled as step 0 in Table <ref>, and already has good overlap with the desired state.
After 4 steps, a deviation of the energy density of δ E ≈ 0.012 is reached which is sufficiently converged for our purposes.
The operators that are chosen, e.g., Ô_mh(3,4) and Ô_mh(1,4) in steps 1 and 2, and Ô_mh(3,2) and Ô_mh(-1,2) in steps 3 and 4, are related by a reflection about the position of the background charge.
The optimal variational parameters are equal with
opposite signs, and the wavefunction that is being
established has a version of the CP symmetry, but which is broken by boundary effects beyond 4 steps of SC-ADAPT-VQE.
The operator sequence is stable with increasing L, and the variational parameters are converging rapidly.
This indicates that the extrapolation and scaling of these state preparation circuits should be robust.
Due to CP symmetry, the circuits for establishing the vacuum with a negative background charge at site L are identical to those for a positive charge, but with the variational parameters negated.
This technique can be generalized to prepare the ground state in the presence of multiple background charges located within the simulation volume.
As long as background charges are well separated from each other and the boundaries, then the circuits (presented in the following section) simply need to be repeated around the location of each additional background charge.
§.§ Quantum Circuits and Resource Requirements
In the previous section, a sequence of
unitary operations
that prepares the ground state in the presence of a single background charge was presented.
In order to perform simulations on a quantum computer, these
unitary operations
must be converted to a sequence of gates.
Our circuit design is tailored toward devices with linear nearest-neighbor connectivity, such as is native on IBM's quantum computers <cit.>,
and aims to minimize the circuit depth and two-qubit gate count.
The
unitary operators forming
the operator pool in Eq. (<ref>) are of the form e^i θ ( ŶẐ^d-1X̂ - X̂Ẑ^d-1Ŷ ),[The convention is that the operator on the far left acts on the lower numbered qubit, e.g. ŶX̂ = Ŷ_n X̂_n+1.]
and we will use the circuit design introduced in our recent work <cit.> that extends the techniques in Ref. <cit.>.
These circuits have an “X” shape, and are arranged in such a way to cancel the maximum number of CNOT gates.
An example of the circuit that prepares the ground state in the presence of a background
charge Q_L-1=+1 for L=10 is shown in Fig. <ref>.
This circuit has been decomposed into three parts.
First, the strong coupling vacuum in the presence of the heavy charge |Ω_0⟩_Q^+_L-1 is prepared.
Next, the circuits that prepare the two step SC-ADAPT-VQE vacuum without a background charge are applied.
These circuits are collectively denoted as Û^aVQE, and were treated in detail in Ref. <cit.>.
Lastly, the circuits that implement the four step SC-ADAPT-VQE unitaries e^i θ_i Ô_i in Sec. <ref> are applied.
These circuits are localized and only modify the wavefunction around the position of the
background charge.
As discussed in the previous section, preparing the ground state in the presence of multiple
background charges is a straightforward extension of these circuits, provided the charges are well separated from each other and the boundaries.
The first modification is to the preparation of |Ω_0⟩_Q: there is a Hadamard-CNOT sequence centered around each heavy charge.
The second change is that the e^i θÔ are repeated around the center of each additional background charge.
In total, the resources required for this state preparation are
# of CNOTs = 16L-12+25N_Q , CNOT depth = 35 ,
where N_Q is the number of background charges.
This circuit depth is well within the capabilities of current devices.
Note that the number of SC-ADAPT-VQE steps to maintain a constant quality of the prepared state will scale linearly with the confinement length ξ.
The circuit depth for each step of SC-ADAPT-VQE also scales with ξ: as O(ξ^2) for devices with nearest-neighbor connectivity and O(ξ) for devices with all-to-all connectivity.
As a result, for devices with nearest-neighbor connectivity, the circuit depth is expected to scale as O(ξ^3) for state preparation.
Once the initial state is prepared, time evolution can be implemented with the time-dependent Hamiltonian defined by the classical trajectory of the heavy charge, Q(t) in Eq. (<ref>).
As shown in our previous work, time evolution in systems without background charges can be reproduced up to exponentially small errors using a truncated electric interaction <cit.>.
In Sec. <ref>, it was argued that a similar procedure should also be possible for systems with heavy charges, provided the charge operators are suitably averaged over the extent of the heavy hadrons.
To get an estimate of the scaling, we assume a truncation of interactions between spatial charges separated by more than λ≈ξ/2 spatial sites.
The resources required for one second-order Trotter step of time evolution can be estimated using the circuits in Ref. <cit.>, with a CNOT gate count of,
# of CNOTs = 4(2L-1)+(2L-4λ)(λ+1)(2λ+1) -(L-2λ+2) .
Taking λ∼ξ, this gives a scaling of 𝒪(L ξ^2) for the number of two-qubit gates and a corresponding circuit depth of 𝒪(ξ^2).
A proper determination of the minimum λ required to reach a predetermined error threshold is left for future work.
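The two counting formulas above can be packaged into simple helpers for quick estimates; the example values of L, N_Q and λ are illustrative only:
```python
def state_prep_cnots(L, n_Q):
    """CNOT count for preparing the ground state with n_Q background charges (Eq. above)."""
    return 16 * L - 12 + 25 * n_Q

def trotter_step_cnots(L, lam):
    """CNOT count for one second-order Trotter step with the electric interaction truncated at lambda."""
    return 4 * (2 * L - 1) + (2 * L - 4 * lam) * (lam + 1) * (2 * lam + 1) - (L - 2 * lam + 2)

# Illustrative values only: L = 12 spatial sites, one background charge, lambda = 3.
print(state_prep_cnots(12, 1))    # 205 CNOTs at a CNOT depth of 35
print(trotter_step_cnots(12, 3))  # CNOTs per Trotter step
```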
To approach the continuum limit, ξ is held fixed in physical units, while the lattice spacing is decreased, i.e., ξ∼ a^-1.
The number of Trotter steps must also grow as ξ∼ a^-1
as can be seen with the following argument.
Trotterization of the kinetic term with a brickwork ordering only allows for correlations to spread two staggered sites per Trotter step.
Keeping the lattice volume traversed by the moving charge fixed in physical units implies that the number of staggered sites traversed scales as 𝒪(a^-1).[It is possible that there is a Trotter ordering, different from brickwork,
that improves this scaling.]
Therefore, the number of Trotter steps
also scales as 𝒪(a^-1) and time evolution is estimated to have a circuit depth that scales as 𝒪(a^-3).
This is the same scaling as for the initial state preparation, giving a total circuit depth for simulating dynamics in dense matter in the Schwinger model to be O(a^-3).
This depth would improve to O(a^-2) on devices with all-to-all connectivity.
Of course, actual simulations that approach the continuum will need to be performed to validate these scaling arguments.
§ SUMMARY AND OUTLOOK
The mechanisms responsible for energy-loss and transport in dense matter are key to understanding the evolution of
matter under extreme conditions: from high-energy collisions of large nuclei, to high-energy cosmic rays penetrating ordinary matter, to the dynamics of core-collapse supernovae.
There has been a long history of successfully using
classical techniques, such as Monte-Carlo simulation, to determine the
electromagnetic responses when charged particles move through matter.
In contrast,
the dynamics of high-energy quarks and gluons in dense matter is much less understood, in part due to the non-perturbative phenomena of confinement and hadronization.
With an eye toward understanding such processes in QCD,
we have performed real-time simulations of energy-loss and hadronization in
the simpler setting of the Schwinger model at finite density.
In particular, we have performed classical simulations of
heavy-hadrons moving through regions of dense matter characterized by static heavy-hadrons.
These simulations have provided insight into internal excitations of hadrons, and the crucial role of quantum coherence between the particles that make up the dense medium.
The effects of quantum coherence between the constituents of matter are visible
in the energy-loss as a function of incident velocity
in the highest density systems we have prepared.
By subtracting the individual contributions, the remaining energy loss is attributed
to quantum correlations in the matter wavefunction with increasing density.
Further, we have provided scalable quantum circuits for preparing ground states with a finite density of heavy hadrons.
In combination with the time evolution circuits presented in our previous work <cit.>, we estimated the circuit depths required for large-scale quantum simulations of energy loss in the
Schwinger model.
The outlook looks promising, and simulations of the dynamics of dense matter in the Schwinger model will be possible in the near-term.
While it is no surprise,
present-day simulations are significantly affected by relatively large lattice spacings,
restricted by the number of qubits (or qudits) that can be assembled into a quantum register
to form a spatial lattice volume
that is large enough to contain more than a few confinement length scales.
The hadronic wavefunctions
have support only over a few lattice sites, rendering an obvious discretization of their wavefunctions, with large lattice spacing artifacts.
One consequence is that when hadrons pass each other, there are relatively large differences between the incident and outgoing fields
at the lattice spacing scale.
In addition, the dispersion relation is such that the velocity of momentum modes has a maximum that is less than the speed of light.
This occurs around the scale of the inverse lattice spacing
(depending on mass and electric charge), and causes high-velocity hadrons
to partially disintegrate as they move, even in the vacuum,
leaving behind a wake of low-energy hadronic excitations.
This corresponds to fragmentation at fixed velocity, and is entirely a lattice spacing artifact.
These effects are mitigated by forming differences between propagation in matter and vacuum, but nonetheless present an unwelcome background from which to extract the physical fragmentation and hadronization.
These differences have a well-defined continuum limit, reflecting the target physics observables,
and quantum simulations using multiple lattice spacings, tuned to known physics observables,
are required
in order to make robust predictions with a complete quantification of uncertainties.
An important result to highlight is that while lattice
discretization effects are seen (and understood) in the energy loss of a single heavy-hadron moving through the vacuum, new effects are seen in the modification of the entanglement structure. This is pointing in the direction that quantum correlations are more sensitive to lattice artifacts than classical correlations.
On top of the discussion in the previous paragraph, the impact of lattice-spacing artifacts in high-energy processes should not be underestimated.
One concern is that when colliding high-energy wavepackets together, the non-zero lattice spacing will induce scattering and fragmentation through the modified dispersion relation and beyond.
Care must be taken in such quantum simulations to ensure that the observed inelasticities are coming from physics, and not from the underlying lattice upon which the simulation is being performed.
Alternative formulations to Kogut-Susskind where discretization errors are suppressed,
such as improved-KS <cit.> or improved-Wilson <cit.>
Hamiltonians, are starting to be pursued. More development is required, leveraging knowledge from classical Euclidean lattice QCD
calculations.
A limitation of working in one spatial dimension is that peripheral collisions are not possible, and all collisions between the constituent electrons and positrons (partons)
that make up the hadrons are “head-on”.
In addition, there are no soft momentum-transfer processes, like bremsstrahlung radiation, due to the absence of dynamical gauge fields and the finite spatial extent of the lattice.
More realistic simulations of QCD will require advancing from a U(1) to a SU(3) lattice gauge theory, first in 1+1D and then in higher dimensions.
Development of these more realistic simulations are underway, and will enable a study of the explicit role of
non-Abelian color charges in the dynamics of dense matter.
§ SPIN HAMILTONIAN WITH EXTERNAL CHARGES
The electric part of the Hamiltonian in Eq. (<ref>) can be expanded as,
2/g^2Ĥ_el = ∑_j=0^2L-2 (∑_k≤ jq̂_k +Q_k )^2 = ∑_j=0^2L-2 (∑_k≤ jq̂_k )^2 + 2 ∑_j=0^2L-2( ∑_k≤ jq̂_k ) ( ∑_l≤ j Q_l ) + ∑_j=0^2L-2 ( ∑_k≤ j Q_k )^2 ,
The first term is unaffected by the presence of external charges and is given by,
∑_j=0^2L-2 (∑_k≤ jq̂_k )^2 = L^2/2+1/4∑_j=0^2L-2(2L-j-1/2[1+(-1)^j+1])Ẑ_j+∑_j=0^2L-3∑_k=j+1^2L-22L-1-k/2Ẑ_jẐ_k .
The terms that couple to the external charge are
2 ∑_j=0^2L-2( ∑_k≤ jq̂_k )
( ∑_l≤ j Q_l )
+ ∑_j=0^2L-2 ( ∑_k≤ j Q_k )^2
= ∑_j=0^2L-2[ ( ∑_k≤ j Q_k )^2 - ( ∑_l≤ j (-1)^l )( ∑_k≤ j Q_k )] - ∑_j=0^2L-2(∑_l=j^2L-2∑_m≤ lQ_m) Ẑ_j ,
and contains terms proportional to the identity
and single Ẑ_j.
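These expansions can be transcribed directly into code that returns the identity, single-Ẑ and ẐẐ coefficients of Ĥ_el for a given set of background charges; the reading of the bracketed staggered factor and the array conventions below are ours:
```python
import numpy as np

def h_el_coefficients(L, Q, g):
    """Coefficients of H_el = (g^2/2) [ c0 + sum_j h_j Z_j + sum_{j<k} J_jk Z_j Z_k ]."""
    N = 2 * L - 1                                    # the sums above run over j = 0 .. 2L-2
    Q = np.asarray(Q, dtype=float)                   # background charges on those sites
    Qc = np.cumsum(Q)                                # sum_{k <= j} Q_k
    stag = np.cumsum([(-1) ** l for l in range(N)])  # sum_{l <= j} (-1)^l
    c0 = L**2 / 2.0 + np.sum(Qc**2 - stag * Qc)
    h = np.array([0.25 * (2 * L - j - 0.5 * (1 + (-1) ** (j + 1))) - np.sum(Qc[j:])
                  for j in range(N)])
    J = np.zeros((N, N))
    for j in range(N - 1):
        for k in range(j + 1, N):
            J[j, k] = 0.5 * (2 * L - 1 - k)
    return 0.5 * g**2 * c0, 0.5 * g**2 * h, 0.5 * g**2 * J
```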
§ SC-ADAPT-VQE FOR PREPARATION OF THE VACUUM WITHOUT STATIC CHARGES
This appendix provides an overview of
the use of SC-ADAPT-VQE to prepare the vacuum without background charges.
The operator pool is given in Eq. <ref> and, for m=0.1, g=0.8, it was found that two steps of SC-ADAPT-VQE were sufficient to achieve an infidelity density of I_L ≈ 0.004 and a deviation in the energy density of δ E ≈ 0.006.
These quantities are defined in Eq. (<ref>) and Eq. (<ref>), respectively.
The two-step vacuum preparation is used as a proof of principle for this work, and defines the vacuum state preparation unitary Û^aVQE used in Sec. <ref> to prepare the vacuum with background charges.
The operator sequencing and corresponding variational parameters for m=0.1,g=0.8, and several different L, are given in Table <ref>.
CHAPTER: BRIEF REFLECTIONS ON QUANTUM SIMULATION
It is a very exciting time to be involved in the field of quantum simulation and quantum information more generally.
There is a lot that is unexplored, and a motivated student can reach the frontier and begin doing research without a lot of overhead.
I first started really thinking about quantum simulation over Thanksgiving break 2021, initially struggling to reproduce results from 4-qubit classical simulations of the Schwinger model in Ref. <cit.>.
Persistence paid off, and by January 2022 I was beginning to work toward quantum simulations of 1+1D QCD.
Due to wonderful collaborators, we were able to make swift progress, and on July 4 we posted the first paper on quantum simulations of 1+1D QCD to arXiv <cit.>.
Three days later, researchers from Canada posted the second paper on quantum simulations of 1+1D QCD to arXiv <cit.>.
The takeaway is that progress in this field happens quickly and that, with good mentors and collaborators, even a novice can contribute to the research community if they work hard.
I have been astounded by the rapid growth of the capabilities of quantum computers over the last two years, and am very optimistic for the future of digital quantum simulation.
In Summer 2022 we were running circuits on 6 qubits with 34 two-qubit gates.
By Fall 2022 we were running circuits with 17 qubits and 212 two-qubit gates.
In Summer 2023 we reached 100 qubits and were able to extract sensible results from circuits with 2,134 two-qubit gates.
And in Winter 2023 we increased this to 112 qubits and 13,858 two-qubit gates.
There are no signs that the capabilities of quantum hardware have reached a ceiling, and I am looking forward to see what the quantum simulation community will achieve in the next 3-5 years.
I believe that we are on the verge of performing quantum simulations that are beyond the capabilities of even the best approximate algorithms run on supercomputers, and that will provide qualitative insight into dynamical processes relevant to nuclear and particle physics.
The amazing thing is that it is still not clear which quantum computing architecture will offer the greatest utility moving forward.
Table <ref> has been included for posterity, and is a current snapshot of the state-of-the-art.
|
http://arxiv.org/abs/2409.02587v1 | 20240904101008 | Explaining 95 (or so) GeV Anomalies in the 2-Higgs Doublet Model Type-I | [
"Akshat Khanna",
"Stefano Moretti",
"Agnivo Sarkar"
] | hep-ph | [
"hep-ph"
] |
[email protected]
[email protected]; [email protected]
[email protected]
HRI-RECAPP-2024-05
§ ABSTRACT
We show how the 2-Higgs Doublet Model (2HDM) Type-I can explain some excesses recently seen at the Large Hadron Collider (LHC) in γγ and τ^+τ^- final states in turn matching Large Electron Positron (LEP) data in bb̅ signatures, all anomalies residing over the 90-100 GeV or so region. The explanation to such anomalous data is found in the aforementioned scenario when in inverted mass hierarchy, in two configurations: i) when the lightest CP-even Higgs state is alone capable of reproducing the excesses; ii) when a combination of such a state and the CP-odd Higgs boson is able to do so. To test further this scenario, we present some Benchmark Points (BPs) of it amenable to phenomenological investigation.
Explaining 95 (or so) GeV Anomalies
in the 2-Higgs Doublet Model Type-I
Agnivo Sarkar
September 9, 2024
========================================================================
§ INTRODUCTION
A long-standing anomaly existing in LEP collider data <cit.> is the one hinting at the possibility of e^+e^-→ Zh events being produced therein, with a Higgs boson state h with a mass of approximately 98 GeV decaying into bb̅ pairs <cit.>. More recently, the CMS collaboration at the LHC has found an excess near 95 GeV in di-photon events in two separate analyses <cit.>.
In fact, they also reported an excess in τ^+τ^- pairs, again, around a mass of about 98 GeV. Finally, ATLAS also observed an excess at around 95 GeV in di-photon events, thereby aligning with CMS although, especially when including `look elsewhere' effects, their findings are far less significant than the CMS ones. Altogether, in view of the limited mass resolution of the di-jet invariant mass at LEP, this older anomaly may well be consistent with the excesses seen by CMS (and, partially) ATLAS in the γγ and, even more so, τ^+τ^- final states (as the mass resolution herein is also rather poor).
As a consequence of this credible mass overlap, many studies <cit.> have tested the possibility of simultaneously fitting these excesses within Beyond the Standard Model (BSM) frameworks featuring a non-SM Higgs state lighter than 125 GeV, i.e., than the one observed at the LHC in 2012 <cit.>.
A possible route to follow in explaining such events through a companion Higgs state (to the 125 GeV one) is to resort to a 2HDM <cit.>, as done, e.g., in Refs. <cit.>, wherein a Type-III (which allows direct couplings of both Higgs doublets to all SM fermions) with specific fermion textures was invoked successfully as a BSM explanation to the bb̅, γγ and τ^+τ^- excesses seen at LEP and the LHC. Herein,
both a fully CP-even and a mixed CP-even/odd solution was found, upon refining the 2HDM Type-III Yukawa structure to comply with both theoretical consistency requirements and experimental measurements of the discovered Higgs mass and couplings (of the 125 GeV Higgs state).
In this study, we show that solutions of the same kind (i.e., both a fully CP-even and a mixed CP-even/odd one) can also be found in a 2HDM Type-I, wherein only one Higgs doublet gives mass to all SM fermions, again, satisfying the aforementioned theoretical requirements and experimental constraints.
The paper is organised as follows. In the next section, we review the theoretical framework of the 2HDM Type-I. Then we discuss the theoretical and experimental constraints applied to such a BSM scenario in our study, after which we move on to show how the latter can naturally explain the discussed anomalous data in various configurations of its parameters space. Our conclusions then follow.
§ THE 2HDM TYPE-I
Among the various BSM scenarios, the 2HDM can be considered as a simple extension of the SM. The scalar sector of this model comprises two complex scalar fields ϕ_1 and ϕ_2 which transform as doublets under the Electro-Weak (EW) gauge group SU(2)_L× U(1)_Y with hypercharge Y = 1. For a detailed overview of this model the interested reader is referred to <cit.>. The most general gauge-invariant CP-conserving scalar potential can be written as
V(ϕ_1, ϕ_2) = m_11^2 ϕ_1^†ϕ_1 + m_22^2 ϕ_2^†ϕ_2 - m^2_12[ϕ^†_1ϕ_2 + ϕ^†_2ϕ_1] + λ_1/2(ϕ_1^†ϕ_1)^2 + λ_2/2(ϕ_2^†ϕ_2)^2
+ λ_3(ϕ_1^†ϕ_1)(ϕ_2^†ϕ_2) + λ_4(ϕ_1^†ϕ_2)(ϕ_2^†ϕ_1) + [ λ_5/2(ϕ_1^†ϕ_2)^2 + h.c. ]
+ {[ λ_6(ϕ_1^†ϕ_1) + λ_7(ϕ_2^†ϕ_2) ](ϕ^†_1ϕ_2) + h.c. }.
Given the hermiticity of the scalar potential, all the potential parameters of Eq.(<ref>) must be real. In order to prevent tree-level Flavour Changing Neutral Currents (FCNCs), one can postulate an additional discrete 𝒵_2 symmetry under which the scalar fields transform as ϕ_1→ϕ_1, ϕ_2→ -ϕ_2. One can see that the terms proportional to m^2_12, λ_6 and λ_7 in Eq.(<ref>) violate this 𝒵_2 symmetry explicitly. Therefore the potential can be expressed in the following form:
V(ϕ_1, ϕ_2) = m_11^2 ϕ_1^†ϕ_1 + m_22^2 ϕ_2^†ϕ_2 + λ_1/2(ϕ_1^†ϕ_1)^2 + λ_2/2(ϕ_2^†ϕ_2)^2
+ λ_3(ϕ_1^†ϕ_1)(ϕ_2^†ϕ_2) + λ_4(ϕ_1^†ϕ_2)(ϕ_2^†ϕ_1) + {λ_5/2(ϕ_1^†ϕ_2)^2 + h.c. }.
Both Higgs fields, ϕ_1 and ϕ_2, acquire a non-zero vacuum expectation value
(vev) (i.e., ⟨ϕ_1⟩ = v_1 and ⟨ϕ_2⟩ = v_2) and spontaneously break the EW gauge symmetry down to U(1)_ EM. After the symmetry breaking, the W and Z bosons become massive and the scalar sector contains five physical Higgs bosons: two CP-even states {H, h}, one CP-odd state A and a pair of charged states H^±, with masses m_H, m_h, m_A and m_H^±, respectively. The two vevs v_1 and v_2 are related to the EW scale via v = √(v^2_1 + v^2_2) = 246 GeV, and their ratio is parameterised as tanβ = v_2/v_1. In addition, the mixing angle between the CP-even states {H, h} is parametrised as α. For the present study we consider the inverted hierarchy between the CP-even mass eigenstates. This particular choice alters the usual interpretation of the mass spectrum and the couplings [For example, in the inverted mass hierarchy the alignment limit corresponds to cos(β - α) → 1. However, we will not conform to this limit in the present study.]. Hereafter we take H to be the SM-like Higgs boson and h to be the lighter scalar. In Eq. (<ref>) we present the relations between the potential parameters λ_i and the physical parameters of the model:
λ_1 = c_α^2m_H^2 + s_α^2m_h^2/v^2c_β^2,
λ_2 = c_α^2m_h^2 + s_α^2m_H^2/v^2s_β^2,
λ_3 = (m_H^2-m_h^2)s_αc_α - (λ_4+λ_5)v^2c_βs_β/v^2c_βs_β,
λ_4 = m_A^2 - 2m_H^±^2/v^2,
λ_5 = -m_A^2/v^2.
The scalar potential given in Eq.(<ref>) has six independent parameters, and the m_ii^2 terms can be traded for the λ_i using the extremisation conditions on the potential. We express the parameters of the potential in Eq.(<ref>) in terms of the physical scalar masses and the angles {β, α} and use those as the input parameters for further analysis.
The right-handed up/down quarks and lepton fields are also charged under the aforementioned 𝒵_2 symmetry and transform as u^i_R→ -u^i_R, d^i_R→ -d^i_R and ℓ^i_R→ -ℓ^i_R, respectively. From these charge assignments one realises that all charged fermions couple exclusively to the ϕ_2 field, leading to the traditional 2HDM Type-I scenario. In Eq.(<ref>) we write down the Yukawa part of the Lagrangian in the mass eigenstate basis.
- ℒ_Yukawa = +∑_f = u,d,ℓ[ m_fff̅ + (m_f/vκ^f_hf̅fh + m_f/vκ^f_Hf̅fH - im_f/vκ^f_Af̅γ_5fA)]
+ √(2)/vu̅(m_uVκ^u_H^+P_L + Vm_dκ^d_H^+P_R)dH^+ + √(2)m_ℓκ^ℓ_H^+/vν̅_Lℓ_RH^+ + h.c.
Here m_f is the fermion mass, V is the CKM matrix and P_L/R = (1 ±γ_5)/2 are the chirality projection operators. The explicit forms of the scaling functions κ_i are detailed in Table <ref>.
§ CONSTRAINTS
In this section, we describe different theoretical and experimental constraints which are required to restrict the parameter space of the Type-I 2HDM.
§.§ Theoretical Constraints
* Vacuum Stability:
The vacuum stability conditions ensure that the potential is bounded from below in all possible field directions. To achieve this, the λ_i parameters must satisfy certain relationships such that the quartic terms in the potential dominate at large field values. Here we list the conditions on the λ_i needed to meet the stability criteria, which prevent the potential from becoming infinitely negative <cit.>:
λ_1 > 0, λ_2 > 0, λ_3 + √(λ_1 λ_2) > 0, λ_3 + λ_4 - |λ_5| + √(λ_1 λ_2) > 0.
* Unitarity: The unitarity constraints are necessary to ensure that the theory remains predictive at high energies. At tree level, unitarity imposes specific conditions on the energy growth of all possible 2 → 2 scattering processes. Ref. <cit.> derives the unitarity conditions for the 2HDM explicitly. According to the unitarity constraint, the following relations should be obeyed:
|u_i| ≤ 8π ,
where
u_1 = 1/2(λ_1 + λ_2 ±√((λ_1 - λ_2)^2 + 4|λ_5|^2)),
u_2 = 1/2(λ_1 + λ_2 ±√((λ_1 - λ_2)^2 + 4λ_4^2)),
u_3 = 1/2(3(λ_1 + λ_2) ±√(9(λ_1 - λ_2)^2 + 4(2λ_3+λ_4)^2)),
u_4 = λ_3 + 2λ_4 ± 3|λ_5|,
u_5 = λ_3 ± |λ_5|,
u_6 = λ_3 ±λ_4.
* Perturbativity: The perturbativity condition on the parameters of the scalar potential imposes an upper limit on all the quartic couplings, demanding that |λ_i| ≤ 4π for all i. A schematic numerical check of the three sets of theoretical conditions above is sketched below.
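For orientation, the following sketch (Python with NumPy) illustrates how the tree-level relations of Eq.(<ref>) and the stability, unitarity and perturbativity conditions can be checked numerically for a given input point (m_h, m_H, m_A, m_H^±, tanβ, α) with v = 246 GeV. It is only a schematic cross-check, not a substitute for the SPheno/SARAH and HiggsTools tool chain used below, and the example point at the end is purely illustrative rather than one of our benchmarks.

import numpy as np

def quartics(mh, mH, mA, mHpm, tanb, alpha, v=246.0):
    """Tree-level quartic couplings lambda_1..5 from the physical inputs
    (inverted hierarchy: H is the 125 GeV state, h the lighter scalar)."""
    beta = np.arctan(tanb)
    sa, ca = np.sin(alpha), np.cos(alpha)
    sb, cb = np.sin(beta), np.cos(beta)
    l4 = (mA**2 - 2.0 * mHpm**2) / v**2
    l5 = -mA**2 / v**2
    l1 = (ca**2 * mH**2 + sa**2 * mh**2) / (v**2 * cb**2)
    l2 = (ca**2 * mh**2 + sa**2 * mH**2) / (v**2 * sb**2)
    l3 = (mH**2 - mh**2) * sa * ca / (v**2 * cb * sb) - l4 - l5
    return l1, l2, l3, l4, l5

def passes_theory(l1, l2, l3, l4, l5):
    """Vacuum stability, perturbativity (|lambda_i| <= 4 pi) and tree-level
    unitarity (|u_i| <= 8 pi), following the conditions listed above."""
    stable = (l1 > 0 and l2 > 0
              and l3 + np.sqrt(l1 * l2) > 0
              and l3 + l4 - abs(l5) + np.sqrt(l1 * l2) > 0)
    perturbative = all(abs(l) <= 4.0 * np.pi for l in (l1, l2, l3, l4, l5))
    u = []
    for sign in (+1.0, -1.0):
        u.append(0.5 * (l1 + l2 + sign * np.sqrt((l1 - l2)**2 + 4 * l5**2)))
        u.append(0.5 * (l1 + l2 + sign * np.sqrt((l1 - l2)**2 + 4 * l4**2)))
        u.append(0.5 * (3 * (l1 + l2) + sign * np.sqrt(9 * (l1 - l2)**2 + 4 * (2 * l3 + l4)**2)))
        u.append(l3 + 2 * l4 + sign * 3 * abs(l5))
        u.append(l3 + sign * abs(l5))
        u.append(l3 + sign * l4)
    unitary = all(abs(x) <= 8.0 * np.pi for x in u)
    return stable and perturbative and unitary

# Illustrative (not fitted) point: h near 95 GeV, H at 125 GeV.
ls = quartics(mh=95.0, mH=125.0, mA=95.0, mHpm=165.0, tanb=10.0, alpha=1.4)
print(ls, passes_theory(*ls))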
§.§ Experimental Constraints
* EW Precision Tests: We evaluated the EW precision constraints by computing the S, T and U parameters using the SPheno package <cit.>, with the model files written in SARAH <cit.>. These so-called `oblique parameters' provide stringent constraints on new physics, thereby demanding that any extension to the SM Higgs sector should conform to high precision data from LEP (primarily). The numerical values of these observables are <cit.>
S = -0.02 ± 0.10, T = 0.03 ± 0.12, U = 0.01 ± 0.11.
* BSM Higgs Boson Exclusion: We assessed the exclusion limits from direct searches for the BSM scalars at the LHC, LEP and the Tevatron. These exclusion limits were evaluated at the 95 % Confidence Level (C.L.) using the HiggsBounds-6 <cit.> module via the HiggsTools <cit.> package. In our analysis we have also demanded that our lighter Higgs complies with the results of <cit.>, where the scalars are produced in association with a massive vector boson or a top (anti)quark pair and further decay via leptonic modes.
* SM-Like Higgs Boson Discovery: We examined the compatibility of our 125 GeV Higgs boson with the discovered SM-like Higgs boson using a goodness-of-fit test. Specifically, we calculated the χ-square value with HiggsSignals-3 <cit.> via HiggsTools, comparing the predicted signal strengths of our Higgs boson to those observed experimentally. We retained the parameter space points that satisfy the condition χ_125^2 < 189.42, corresponding to a 95 % C.L. with 159 degrees of freedom.
* Flavour Physics: We incorporated constraints from B-physics observables, which are sensitive to potential new physics contributions in loop-mediated FCNC processes. Specifically, we tested the most stringent bound, that on the Branching Ratio (ℬℛ) of the B→ X_s γ decay, using Next-to-Leading Order (NLO) calculations in QCD as discussed in <cit.>.
ℬℛ(B → X_s γ) = Γ (B → X_s γ)/Γ_SLℬℛ_SL
where, ℬℛ_ SL is the semi-leptonic branching ratio and Γ_SL is the semi-leptonic decay width.
We took our input parameters from the most recent Particle Data Group (PDG) compilation <cit.>, as follows:
α_s(M_Z) = 0.1179 ± 0.0010 , m_t = 172.76 ± 0.3,
m_b/m_c = 4.58 ± 0.01, α = 1/137.036,
BR_ SL = 0.1049 ± 0.0046, |V_ts^*V_tb/V_cb|^2 = 0.95 ± 0.02,
m_b( MS) = 4.18 ± 0.03, m_c = 1.27 ± 0.02,
m_Z = 91.1876 ± 0.0021, m_W = 80.377 ± 0.012.
The following restriction has been imposed, which represents the 3 σ experimental limit:
2.87 × 10^-4 < ℬℛ(B → X_s γ) < 3.77 × 10^-4.
Other B-physics observables, like ℬℛ(B^+ →τ^+ ν_τ), ℬℛ(D_s →τν_τ), ℬℛ(B_s →μ^+ μ^-) and ℬℛ(B^0 →μ^+ μ^-) have been taken care of by using the FlavorKit tool <cit.> provided by SPheno package <cit.>. Our calculated b → s γ results were also found to be consistent with the FlavorKit tool.
§ EXPLAINING THE ANOMALIES
The primary objective of this paper is to investigate whether the 2HDM Type-I can explain the excesses observed in the LHC and LEP data over the 92-98 GeV or so mass range. To do so, we need to define the signal strengths corresponding to these three excesses. The signal strength is formulated as the ratio of the observed number of events to the expected number of events for a hypothetical SM Higgs boson of mass 95 GeV. Assuming the Narrow Width Approximation (NWA), the signal strengths for the τ^+ τ^-, γγ and b b channels can be parameterised as cross section (σ) times branching ratio (ℬℛ),
μ_τ^+ τ^-(ϕ) = σ_ 2HDM(gg→ϕ)/σ_ SM(gg→ h_ SM)×ℬℛ_ 2HDM(ϕ→τ^+ τ^-)/ℬℛ_ SM(h_ SM→τ^+ τ^-),
μ_γγ(ϕ) = σ_ 2HDM(gg→ϕ)/σ_ SM(gg→ h_ SM)×ℬℛ_ 2HDM(ϕ→γγ)/ℬℛ_ SM(h_ SM→γγ),
μ_b b(ϕ) = σ_ 2HDM(e^+e^-→ Z ϕ)/σ_ SM(e^+e^-→ Z h_ SM)×ℬℛ_ 2HDM(ϕ→ b b)/ℬℛ_ SM(h_ SM→ b b).
Here, h_ SM corresponds to a SM like Higgs Boson with a mass of 95 GeV while ϕ is a 2HDM Type-I Higgs state with the same mass. The experimental measurements for these three signal strengths around 95 GeV are expressed as
μ_γγ^ exp = μ_γγ^ ATLAS+CMS = 0.24^+0.09_-0.08, <cit.>
μ_τ^+ τ^-^ exp = 1.2 ± 0.5, <cit.>
μ_b b^ exp = 0.117 ± 0.057. <cit.>
Although the ditau excess is most prominent around 100 GeV and the b b̅ excess near 98 GeV, a search around 95 GeV could provide a unified explanation for all three anomalies. This is because the mass resolution in the ditau final state is rather coarse, and the LEP excess, associated with the b b̅ final state, is also broad. Therefore, a common origin for these excesses may plausibly reside around 95 GeV.
In our analysis, we have combined the di-photon measurements from the ATLAS and CMS experiments, denoted as μ_γγ^ ATLAS and μ_γγ^ CMS, respectively. The ATLAS measurement yields a central value of 0.18 ± 0.1 <cit.> while the CMS measurement yields a central value of 0.33^+0.19_-0.12 <cit.>. The combined measurement, denoted by μ_γγ^ ATLAS+CMS, is determined by taking the average of these two central values, assuming the two measurements are uncorrelated. The corresponding combined uncertainty is calculated by adding the individual uncertainties in quadrature. To determine whether the observed excesses can be explained by our model, we perform a χ^2 analysis using the central values μ^ exp and the 1 σ uncertainties Δμ^ exp associated with the excesses as defined in Eq. (<ref>). The contribution to the χ^2 for each channel is calculated using the equation
χ^2_γγ, τ^+ τ^-, b b = (μ_γγ, τ^+ τ^-, b b(ϕ) - μ_γγ, τ^+ τ^-, b b^ exp)^2/(Δμ_γγ, τ^+ τ^-, b b^ exp)^2.
Hence, the resulting χ^2 which we will use to determine if the excesses are explained by the 2HDM Type-I, or otherwise, is the following:
χ^2_γγ, τ^+ τ^-, b b = χ^2_γγ + χ^2_ τ^+ τ^- + χ^2_b b.
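As an illustration, the schematic snippet below (Python) evaluates this χ^2 for a given set of predicted signal strengths. The central values and symmetrised 1 σ uncertainties are those quoted above, while the predicted values shown are placeholders rather than output of our scan.

# Experimental central values and (symmetrised) 1-sigma uncertainties quoted above.
MEASUREMENTS = {
    "gamgam": (0.24, 0.085),   # ATLAS+CMS combination, +0.09/-0.08 symmetrised
    "tautau": (1.2, 0.5),
    "bb":     (0.117, 0.057),
}

def chi2_95(mu_model):
    """mu_model: dict of predicted signal strengths for the ~95 GeV state(s)."""
    return sum((mu_model[ch] - c)**2 / s**2 for ch, (c, s) in MEASUREMENTS.items())

# Hypothetical prediction (placeholder numbers, not a benchmark from the scan):
print(chi2_95({"gamgam": 0.20, "tautau": 0.9, "bb": 0.10}))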
We test this BSM scenario in two cases: first, we consider both the CP-even and CP-odd Higgs states (i.e., ϕ=h+A, except for bb̅ where ϕ=h) simultaneously in explaining the anomalies and, second, we only exploit the CP-even Higgs state (i.e., ϕ=h) in order to explain them. Hence, we align our H state (recall that we have m_h<m_H) with the SM Higgs boson, so that m_H=125 GeV, and start a Monte Carlo (MC) sampling of the various input parameters.
§.§ The Overlapping Solution
In the case of the overlapping solution, the signal strengths corresponding to the τ^+ τ^- and γγ channels receive substantial contributions from both the CP-even and CP-odd states simultaneously[Here we have considered a CP-conserving potential. As a result, the h and A states do not interfere with each other.]. In contrast, for the b b̅ mode only the CP-even state contributes, as the trilinear AZZ coupling vanishes at tree level. As a result, the signal strengths can be expressed in the following manner:
μ_τ^+ τ^-(h+A) = μ_τ^+ τ^-(h) + μ_τ^+ τ^-(A), μ_γγ(h+A) = μ_γγ(h) + μ_γγ(A), μ_bb̅ (h).
We generated MC samples in the scan ranges described in Table <ref>. After testing them against the various theoretical and experimental constraints, the allowed parameter space is illustrated in Figure <ref>. The region of the parameter space that passes the theoretical constraints is depicted by the blue points, while the region that passes the experimental constraints is depicted by the red points. The plot illustrates that a nearly degenerate solution with both the CP-even and CP-odd Higgs states in the 92-98 GeV mass range is viable for a charged Higgs mass in the range 160 GeV < m_H^± < 195 GeV. We will use the overlapping region of the two coloured point distributions to test the aforementioned anomalies.
Figure <ref> illustrates the total chi-square distribution for points that are compatible with all three anomalies; the best fit point, corresponding to χ^2_ min, is marked by a star. The experimentally observed signal strengths with their 1 σ bands are also superimposed in the plot to test them against this model's best fit point. The chi-square fit reveals two distinct branches, differentiated by the sign of sin(β - α). The main branch, characterized by positive values, is densely populated, while the second branch, which has negative values, is sparsely populated due to the elimination of many points by the various constraints. Figure <ref> also illustrates the sign of the electroweak coupling parameter. The best fit point, indicated by a red star, has a large positive sin (β - α) value, while the vector boson coupling with the SM-like Higgs boson, indicated by the color bar, is weakened in that case.
Given that the di-photon excess is most pronounced around 95 GeV, we also plot the allowed points within the 94-96 GeV range over the CMS and ATLAS results for the signal strength in the γγ channel, as shown in Figure <ref>. The expected and observed CMS limits are shown by the black dashed and solid lines. The green and yellow bands indicate the 1 σ and 2 σ uncertainties, and the plot is overlaid with the ATLAS observed 95 % confidence level limits on the signal strength, shown as the red dashed and solid lines. The combined (CMS+ATLAS) signal strength at 95.4 GeV with its error bar is also shown using a red dot. The points explaining the anomalies at the 1 σ level for 3 degrees of freedom, corresponding to the γγ, ττ and b b channels as in equation <ref>, which requires χ^2 < 3.53, are plotted in dark red, while the points explaining them at the 3 σ level, requiring χ^2 < 7.8147, are shown in peru color (less likely points are given in sky blue). The best fit point in the 94-96 GeV range is also indicated using midnight blue color. It can be clearly seen that the parameter points are suited to explain the observed excesses. The details of the best fit points marked in the two figures are given in Table <ref>.
§.§ The Single Solution
In this case, only the h state is responsible for explaining all three anomalies. We sampled points over the scan ranges described in Table <ref>, and the points that pass the various constraints are depicted in Figure <ref>, wherein the blue shaded region indicates the points that pass the theoretical constraints while the red shaded region indicates the region of parameter space that survives after imposing the different experimental bounds. The plots represent the allowed parameter space for the CP-odd and charged Higgs masses, given that we fix our CP-even mass to lie in the range 92-97 GeV. Though the charged Higgs mass is bounded at around 160 GeV, the CP-odd pseudoscalar covers almost the entire scan range. Note that the allowed CP-odd scalar mass can also lie in the 90-100 GeV mass window, hinting at the overlapping solution that we discussed in the previous section. We move ahead with testing the overlapping points against the observed anomalies.
The total χ^2 fit for the points passing all the constraints is displayed in Figure <ref>, wherein the best fit point (i.e., again, the χ^2_ min one) is indicated with a star. We have again overlaid the plot with the experimentally observed data with the 1 σ band. The plot clearly shows that the best fit point lies within the experimental 1 σ boundary. The two bands here are again differentiated by the sign of sin (β - α), with the densely populated region positive and the sparsely populated one negative. This is also seen in Figure <ref>, where the best fit point is depicted by a red star, which has a large sin(β - α) value, while the vector boson coupling with the SM-like Higgs boson, indicated by the color bar, is weakened.
The mass region lying between 94-96 GeV is also displayed on top of the CMS and ATLAS results for the signal strength in the γγ channel in Figure <ref> (with the same color coding as that in Figure <ref>). The figure clearly depicts the best fit point in that particular region to be close to the mean experimentally observed value. Finally the best fit BPs in these cases are shown in Table <ref>.
§ CONCLUSIONS
In summary, we have shown that the somewhat anomalous data produced at LEP and the LHC in bb̅ as well as τ^+τ^- and γγ final states, respectively, all clustering in a 10 GeV or so mass window around 95 GeV, are consistent with the possibility of the 2HDM Type-I in inverted mass hierarchy explaining these. Specifically, two configurations are possible: one where both the (degenerate) h and A states cooperate to explain the aforementioned anomalies and another where only the h state does so. This is an intriguing result, as such Higgs states can well be probed in collateral signatures specific to the 2HDM Type-I in inverted mass hierarchy, as emphasised in various previous literature
<cit.>. To aid the testing of this theoretical hypothesis, we have produced two pairs of BPs amenable to further phenomenological analysis, each pair corresponding to one of the above two solutions and including the parameter space point giving the best fit to all anomalies.
§ ACKNOWLEDGMENTS
The work of S.M. is supported in part through the NExT Institute and the STFC Consolidated Grant No. ST/L000296/1. A.S. acknowledges the support from the SERB-National Postdoctoral Fellowship (Ref. No: PDF/2023/002572). A.K. acknowledges the support from a Director's Fellowship at IIT Gandhinagar. All authors thank Tanmoy Mondal and Prasenjit Sanyal for their help in discovering a mistake in their calculations.
|
http://arxiv.org/abs/2409.02281v1 | 20240903202830 | K-Origins: Better Colour Quantification for Neural Networks | [
"Lewis Mason",
"Mark Martinez"
] | cs.CV | [
"cs.CV",
"cs.LG"
] |
K-Origins: Better Colour Quantification for Neural Networks
Lewis Mason and Mark Martinez
September 9, 2024
=====================================================================
§ ABSTRACT
K-Origins is a neural network layer designed to improve image-based network performances when learning colour, or intensities, is beneficial. Over 250 encoder-decoder convolutional networks are trained and tested on 16-bit synthetic data, demonstrating that K-Origins improves semantic segmentation accuracy in two scenarios: object detection with low signal-to-noise ratios, and segmenting multiple objects that are identical in shape but vary in colour. K-Origins generates output features from the input features, X, by the equation Y_k = X-J· w_k for each trainable parameter w_k, where J is a matrix of ones. Additionally, networks with varying receptive fields were trained to determine optimal network depths based on the dimensions of target classes, suggesting that receptive field lengths should exceed object sizes. By ensuring a sufficient receptive field length and incorporating K-Origins, we can achieve better semantic network performance. Examples of these improvements are illustrated in Figure <ref>.
§ INTRODUCTION
Semantic segmentation classifies 2D or 3D images on a pixel-by-pixel basis. It is especially valuable for processing large datasets that are impractical to classify manually. In biomedical and materials science, semantic segmentation is particularly useful for two tasks: distinguishing objects from the background and differentiating tracer particles from non-tracer particles.
The first class of problems, object segmentation, involves distinguishing one, or more, target classes from the background. Examples include <cit.> where white blood cell nuclei are segmented, <cit.> where abdominal organs and regions of interest are segmented, and <cit.> where X-ray tomography images of materials such as liquid-solid composites and ore-particles are segmented. This segmentation problem is prevalent in engineering, biomedical research, and materials sciences.
On the other hand, tracer segmentation involves distinguishing objects that are nearly identical in shape, but vary by colour or intensity. An example of this problem is segmenting X-ray images where contrast enhancing agents have been used (<cit.> and <cit.>). Another example is cancer cell segmentation in pathology slides, like in <cit.>, where cancerous cells can vary from normal cells by colour. The tracer segmentation problem is also relevant for datasets that produces a large number of false positives during segmentation.
Convolutional neural networks (CNNs) have been shown to be very good at semantic segmentation, and one of the most popular architecture styles for this task is the encoder-decoder network. This architecture makes predictions by combining low-level and high-level image features, effectively integrating information from various fields of view to achieve optimal results. The encoder-decoder network was mainly popularized by U-Net (<cit.>), which has been cited over 89,000 times. Because of its widespread use, U-Net serves as a basic blueprint for the architectures used in this paper.
The receptive field (RF) is a key characteristic of CNNs. It represents the network's 2D or 3D field of view, indicating how much of the input image is used at each feature layer. To differentiate the RF, an area or volume, from its side length, we refer to the side length as the receptive field length (RFL). The RFL can be calculated for one side using the recursive equation from <cit.>:
r_l-1 = s_l · r_l + (k_l - s_l)
To determine the RFL before a layer in the network (r_l-1) given the RFL after that layer (r_l), use the layer's stride (s_l) and kernel size (k_l) in Equation <ref>. For semantic segmentation, start at the deepest set of features with an RFL of one (r_l=end = 1 pixel) and work backwards to determine the RFL at each feature layer.
The RFL at the beginning of semantic networks can then be thought of as the side length of the area or volume used for a single pixel's prediction. For example, a 2D semantic network with an RFL of 11 uses an 11x11 pixel area to generate the features used for classifying the central pixel. If the RFL is symmetrical in all directions, it only needs to be calculated for one dimension.
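For illustration, Equation <ref> can be applied programmatically; the sketch below (Python) walks backwards through an assumed list of (kernel size, stride) pairs. The example layer sequence is for demonstration only and is not claimed to be the exact encoder of any architecture used later.

def input_rfl(layers):
    """Apply r_{l-1} = s_l * r_l + (k_l - s_l), starting from r = 1 at the deepest features.
    `layers` is ordered from the input towards the deepest layer as (kernel, stride) pairs."""
    r = 1
    for kernel, stride in reversed(layers):
        r = stride * r + (kernel - stride)
    return r

# Example: two 3x3 convolutions, a 2x2 max-pool (stride 2), then two more 3x3 convolutions.
print(input_rfl([(3, 1), (3, 1), (2, 2), (3, 1), (3, 1)]))  # prints 14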
RFs have been extensively studied in various articles, with some focusing on determining optimal sizes and the effective RFLs in networks: <cit.>. However, many studies use complex datasets, making it difficult to generalize the findings.
Increasing the complexity of neural networks often improves training accuracy but results in longer training times and higher hardware costs. With millions of trainable parameters, it also becomes difficult to understand what the network is learning. It would be beneficial if networks could be made smaller and more efficient without hurting their performance.
This paper aims to reduce neural network complexity by building architectures from the ground up with synthetic data, ensuring that the correct properties, such as colour and shape, are learned effectively. It deviates from the standard research structure, as it addresses no obvious deficiencies in CNN research. Instead, the work is motivated by testing neural networks on simple datasets to identify shortcomings which can then be resolved. We also look at how the RFL affects results using this simple dataset. Overall, our goal is to decrease network complexity without negatively affecting the results.
In Section <ref>, we discuss the data generation process for all trials. The motivating case for this study is presented in Section <ref>, demonstrating that a CNN can struggle with simple object detection. In Section <ref> we introduce some additional background material that is relevant for quantifying results. In Section <ref>, we introduce K-Origins, a layer designed to help neural networks quantify colours and intensity magnitudes. Section <ref> demonstrates that the motivating case can either be solved by using K-Origins or by increasing the depth and complexity of the network. Finally, in Section <ref>, we test the limits of segmentation across a range of colour distributions for two types of problems: object detection and tracer segmentation.
§ METHODS
§.§ Synthetic data
Greyscale synthetic data is generated with a 16-bit colour channel for various test cases. This data contains a background with randomly placed squares, and the number of squares varies to ensure that the background remains visible. For each trial, 400 synthetic images with the dimensions 200x200 are created for training, and an additional set is used for testing. Square side lengths and class intensity distributions vary between trials. By using squares, which are simpler shapes and are easier to interpret, we can better assess the impact of K-Origins.
For greyscale data, a pixel's colour is represented by a single integer value. For 16-bit data, as used in this paper, the values range from 0 (pure black) to 65535 (pure white), with various shades of grey in between. In this work, a class's colour is represented by its intensity mean (μ_i) and the standard deviation of added Gaussian noise (σ_i). Figure <ref> shows the integer-intensity mapping and provides examples of the synthetic data used in this paper. Data intensity distributions are illustrated using normalized histograms (data = data/max(data)).
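A minimal sketch of such a data generator is given below (Python/NumPy). The exact placement and overlap rules of the generator used for the experiments may differ; the intensity parameters shown are those of the two-class example discussed in the next section.

import numpy as np

def make_image(size=200, n_squares=25, side_range=(20, 30),
               bg=(20000, 1000), fg=(25000, 1000), rng=None):
    """Generate one 16-bit greyscale image with randomly placed squares and
    its per-pixel label map (0 = background, 1 = square).
    bg/fg are (mean intensity, Gaussian noise standard deviation) per class."""
    rng = rng or np.random.default_rng()
    img = np.full((size, size), float(bg[0]))
    sigma = np.full((size, size), float(bg[1]))
    lbl = np.zeros((size, size), dtype=np.uint8)
    for _ in range(n_squares):
        side = rng.integers(side_range[0], side_range[1] + 1)
        r, c = rng.integers(0, size - side, 2)
        img[r:r+side, c:c+side] = fg[0]
        sigma[r:r+side, c:c+side] = fg[1]
        lbl[r:r+side, c:c+side] = 1
    noise = rng.normal(0.0, 1.0, (size, size))
    img = np.clip(img + sigma * noise, 0, 65535).astype(np.uint16)
    return img, lbl

image, label = make_image()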
§.§ Motivating Case: Network Failure
The motivation for K-Origins and this work is shown in Figure <ref>, where a small encoder-decoder network fails to classify noiseless squares from the background. The network lacks an understanding of colour magnitude; if it could recognize the lighter gray squares against the darker gray background, the task would be simple. However, the network does not directly leverage the 16-bit values—the greyness—of the squares in its predictions. For example, a straightforward solution to this problem is to compare a pixel's integer value to 25000 (the squares' colour) and classify it as a square if it matches, or as background if it does not. Despite having over 70,000 trainable parameters, the network fails to learn this behavior.
Moreover, the network can only correctly classify squares within 4 to 5 pixels from the object border. This suggests the network detects gradients rather than colour magnitudes and does so over a specific length. Convolutions are known for detecting gradient-related behavior so this is almost expected, but it would be highly beneficial if the non-linearity of neural networks could be used to leverage colour magnitudes more directly.
In Figure <ref>, the network's RFL is calculated by setting the bottom-right feature (the deepest point) to r_l=end = 1 pixel and recursively determining the RFL at previous layers. Using Equation <ref>, we calculate that the motivating network has an RFL of 8 pixels, which is twice the distance that gets correctly classified from the object border, plus or minus one pixel. Being twice the correct prediction distance should be expected because the deepest features in that network can "see" about 4-5 pixels on either side of the pixel it wishes to classify. We hypothesize that this network classifies pixels by detecting a square edge in any direction; if no edge is detected within the RF, the pixel is classified as background.
Figure <ref> shows that the network struggles to classify pixels far from the object border and that it also fails to understand intensity magnitudes. We will address both of these issues separately and will use greyscale data because the single channel results extend to additional colour channels (RGB).
§.§ Metrics
In this section we introduce important equations and data properties that will be used throughout the rest of the paper.
To quantify the distance between intensity distributions, we use the Hellinger distance for two classes represented by Gaussian probability density functions (PDFs). The HD for two Gaussian distributions is given by:
HD(𝒩(μ_1,σ_1),𝒩(μ_2,σ_2)) = √(1 - √(2σ_1 σ_2/σ_1^2 + σ_2^2)exp( -1/4(μ_1 - μ_2)^2/σ_1^2 + σ_2^2))
where 𝒩 is a normal distribution with means μ_1 or μ_2 and standard deviations σ_1 or σ_2 respectively (<cit.>). This equation produces a value between 0 and 1, where an HD of 0 indicates identical distributions, and an HD of 1 indicates completely distinguishable distributions.
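A direct implementation of Equation <ref> is straightforward; the short helper below (Python/NumPy) is included purely for illustration.

import numpy as np

def hellinger(mu1, s1, mu2, s2):
    """Hellinger distance between two 1D Gaussians; returns a value in [0, 1]."""
    bc = np.sqrt(2.0 * s1 * s2 / (s1**2 + s2**2)) * \
         np.exp(-0.25 * (mu1 - mu2)**2 / (s1**2 + s2**2))
    return np.sqrt(1.0 - bc)

print(hellinger(20000, 2000, 25000, 2000))  # the noisy two-class case used later (~0.73-0.74)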
Next, we introduce a modified accuracy metric to address class imbalance. In this paper, the background (class zero) is so large that it inflates the accuracy score. To counter this, we exclude the background class from all accuracy calculations. The resulting modified accuracy is given by:
MAcc = 1/C-1∑_i≠background^CTP_i/TP_i + FP_i + FN_i
where MAcc is the custom accuracy with the background bias removed, C is the total number of classes including the background, TP_i represents true positives, FP_i false positives, and FN_i false negatives for class i. A target class is any class that is not background (i ≠ background). This turns out to be the Jaccard index (<cit.>) for multiple classes, and throughout this paper all mentions of accuracy refer to MAcc.
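For concreteness, a simple implementation of Equation <ref> over integer label maps might look as follows (Python/NumPy); the handling of classes absent from both prediction and ground truth is an assumption on our part.

import numpy as np

def macc(pred, truth, n_classes, background=0):
    """Mean Jaccard index over the target classes, ignoring the background class."""
    scores = []
    for i in range(n_classes):
        if i == background:
            continue
        tp = np.sum((pred == i) & (truth == i))
        fp = np.sum((pred == i) & (truth != i))
        fn = np.sum((pred != i) & (truth == i))
        denom = tp + fp + fn
        scores.append(tp / denom if denom > 0 else 1.0)  # assumed convention for empty classes
    return float(np.mean(scores))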
§.§ K-Origins Layer: The Colour Solution
We first develop a layer to help networks quantify colour magnitudes. To the best of the authors' knowledge at the time of writing, this approach is unique. Given features X, a K-Origins layer with K trainable weights produces output features for each trainable weight w_k ∈ [w_1,w_2,...,w_K], as follows:
Y_k=X-J· w_k
where Y_k is the output given from a single weight w_k, and J is a matrix of ones matching the dimensions of X. This layer produces K copies of the input image, each with a different scalar subtracted from it, resulting in K images with different origins. All values less than the weight w_k, or origin, become negative in Y_k and all data greater than w_k stays positive.
For 2D and 3D image data there is normally one origin (zero), making all data positive relative to it. If we immediately use K-Origins on this input data, then future layers such as convolutions can use the sign changes to determine the relative intensity locations for each pixel. Similar behaviour can be achieved for deeper features in a network. For the first K-Origins layer the weights w_k must match the order of magnitude of the data (or features). For unsigned 16-bit data we have w_k ∈ [0, 65535] for the first layer, requiring learning rates of 1-100 for significant parameter changes during training.
Figure <ref> shows a small network that takes an input image and concatenates it with the output of a K-Origins layer with one weight, w_1. Concatenating the output of K-Origins with the input provides stable reference features for the rest of the network, which is essential for convergence as Y_k constantly changes. The network then applies a softmax-activated 1x1 convolution with a learning rate of 1E-3 for pixel-wise predictions. Because this network has an RFL of one pixel, it can only use information from a single pixel for its predictions, extracting no spatial information.
The weight w_1 was initialized at 50000 with a learning rate of 100 and ended at w_1,final = 20200 after 33 epochs. This final value lies between the intensity values of the two classes, μ_0 =20000 and μ_1 = 25000. This small network with only 5 trainable parameters achieved 100% accuracy segmenting the case from Figure <ref>, whereas the encoder-decoder network with 71,042 trainable parameters achieved only a 67% accuracy. This small network was also tested with more weights on a 7-class case and achieved 100% accuracy. However, accuracy decreased when the class intensity distributions had an HD less than unity, suggesting that a combination of K-Origins and shape recognition would perform better.
Because supervised learning problems have ground truths, K-Origins weights can be initialized based on known class distributions with learning rates of zero, or near zero. For example, in the first problem of Figure <ref>b, initializing the K-Origins weight as w_1=22500, right between both classes, achieves 100% accuracy in just one epoch. This technique is used later in this article by "clamping" distributions, where a weight is placed above and below the known distribution of a target class to clamp those intensities.
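A compact TensorFlow sketch of the layer and of this small colour network is shown below. It is illustrative only: details such as the bias settings of the 1x1 convolution and the mechanism used to give the K-Origins weights their much larger learning rate are omitted here and may differ from the released implementation.

import tensorflow as tf

class KOrigins(tf.keras.layers.Layer):
    """Y_k = X - w_k for each of K trainable origins, stacked along the channel axis."""
    def __init__(self, k, init_origins=None, **kwargs):
        super().__init__(**kwargs)
        self.k = k
        self.init_origins = init_origins

    def build(self, input_shape):
        if self.init_origins is not None:
            init = tf.constant_initializer(self.init_origins)
        else:
            init = tf.random_normal_initializer(mean=20000.0, stddev=5000.0)
        self.w = self.add_weight(name="origins", shape=(self.k,),
                                 initializer=init, trainable=True)

    def call(self, x):
        # One shifted copy of the features per origin, concatenated on the channel axis.
        return tf.concat([x - self.w[i] for i in range(self.k)], axis=-1)

# The small "colour network": the input concatenated with K-Origins(1), followed by a
# softmax-activated 1x1 convolution for pixel-wise two-class predictions.
inp = tf.keras.Input(shape=(None, None, 1))
feat = tf.keras.layers.Concatenate()([inp, KOrigins(1, init_origins=[50000.0])(inp)])
out = tf.keras.layers.Conv2D(2, 1, activation="softmax")(feat)
model = tf.keras.Model(inp, out)

In practice, the much larger learning rate quoted for the origins can be realised with, for example, a separate optimiser or a gradient multiplier for that layer; this detail is omitted from the sketch.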
Convolutional neural network layers are generally defined as:
Y = f(X∗c + b)
where X∗c is the convolution of features X with kernel c, f(z) is the activation function, and b is the bias. Often for semantic segmentation networks the rectified linear unit (ReLU) activation function is used (f(z)=ReLU(z)), such as in <cit.>. ReLU maps negative values to zero and leaves positive values unchanged, making it hard for a network to learn the behavior of K-Origins without directly implementing it. While a convolutional network could theoretically learn similar behavior, it would be challenging.
Next we look at setting various neural network depths and compare accuracies with and without K-Origins for a range of RFL's.
§.§ RFL's: Length Scale Solution
In Figure <ref>, the network fails for larger objects. In this section, we investigate the required RFL for various object sizes. We use a set of small encoder-decoder networks, shown in Figure <ref>, with additional details in Table <ref>. The six architectures used are RFL8, RFL18, RFL38, KRFL8, KRFL18, and KRFL38, where "KRFLX" refers to an identical architecture to "RFLX" with K-Origins. All networks in Table <ref> use "same" padding, where applicable, to prevent cropping. We hypothesize that the RFL should be larger than the dominant length scale, or the minimum length required to differentiate two objects.
We first train the six networks on noiseless data (μ_0 = 20000, μ_1 = 25000, σ_0=σ_1 = 0) containing squares with a side length of 25 pixels, similar to the scenarios in Figures <ref> and <ref>. We also train on noisy data (μ_0 = 20000, μ_1 = 25000, σ_0=σ_1 = 2000) to simulate the failure case in Figure <ref>. This shows us the effect of increasing the RFL for a fixed object size.
Training runs for 10 epochs with a batch size of 3. Learning rates are set to 1E-3 for convolution layers and 100 for K-Origins layers. The highest-level K-Origins weights are initialized by placing a weight two standard deviations above and below the intensity mean for each class (w_i1,i2 = μ_i ± 2σ_i). This effectively clamps each class's intensity distribution with two K-Origins parameters. For the noiseless case this corresponds to w_i={20000,20000,25000,25000}, and for the noisy case, w_i={16000,24000,21000,29000}. All other K-Origins layers have three weights initialized from Gaussian random variables with μ = 20000 and σ=5000.
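In code, this clamping initialisation amounts to the small helper below (Python), which reproduces the two weight sets listed above; it is shown only to make the initialisation explicit.

def clamp_origins(class_stats):
    """One origin two standard deviations below and one above each class mean."""
    w = []
    for mu, sigma in class_stats:
        w += [mu - 2 * sigma, mu + 2 * sigma]
    return w

print(clamp_origins([(20000, 0), (25000, 0)]))        # [20000, 20000, 25000, 25000]
print(clamp_origins([(20000, 2000), (25000, 2000)]))  # [16000, 24000, 21000, 29000]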
The results from these trials are shown in Figure <ref>. Networks without K-Origins increase in accuracy as the RFL approaches the object length, achieving high accuracies when the RFL exceeds the object length. In contrast, networks with K-Origins achieve near-perfect validation accuracy regardless of their RFL, demonstrating a more efficient solution. Achieving a near-perfect accuracy without K-Origins requires about 1.4 million trainable parameters, while using K-Origins achieves the same accuracy with only 187,000 trainable parameters. Additionally, an even smaller network with K-Origins could be possible, as this test did not determine a network size lower bound.
Next, we perform a sweep of square side lengths to RFL ratios, L/RFL, for the six networks using the same training parameters as before. This is done with and without noise. For each RFL, we examine L/RFL≈{0.3,0.6,0.95,1.3,2,3}. These fractions are approximated since side lengths may be rounded. The summary of these tests is shown in Figure <ref> and in almost every case, using K-Origins increases accuracy. We also observe that accuracy decreases when L/RFL is small. This is because a small L/RFL results in very small squares, making segmentation difficult in noisy conditions regardless of the architecture used. All numerical results are found in Appendix <ref>.
In almost every case, networks with K-Origins outperform those without it. For this problem, KRFL8, KRFL18, and KRFL38 achieved nearly 100% accuracy in about 3 epochs, compared to the 10 epochs for their RFLX counterparts. While it might be argued that this is due to the high learning rate of the K-Origins layers, training with a learning rate of zero for K-Origins produces similar results with the same initialization. The use of K-Origins may enable smaller and more efficient networks without sacrificing performance.
Networks without K-Origins (RFLX) succeed when the RFL is larger than the object size, which aligns with the preference for very deep networks in most research. These networks also seem to perform better on noisy data than on noiseless data.
So far we have demonstrated a solution to the motivating problem (Figure <ref>) using both network length scales (ensuring a sufficient RFL) and intensity quantification (K-Origins) for a noisy and a noiseless case. In the noiseless case we set Δμ = 5000 and σ = 0, giving a unity HD. In the noisy case Δμ is the same, but σ = 2000, resulting in an HD of 0.73. The next logical step is to sweep across various HDs by adjusting Δμ and Δσ to determine the effectiveness of K-Origins for different intensity distributions.
§.§ Mixed Solution for a Range of Intensity Distributions
In this section we explore how changing the HD affects segmentation by varying Δμ and Δσ. Setting the first class to μ=20000 and σ = 1000, we produce the HD heatmap shown in Figure <ref>. This heatmap will be used to determine HDs for the upcoming trials, allowing us to compare the accuracy of each trial with the HD.
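Such a heatmap can be generated directly from the HD helper sketched in Section <ref>; the grid ranges below are assumed for illustration and need not match those of Figure <ref> exactly.

import numpy as np

# Assumes the hellinger() helper defined earlier.
mu1, s1 = 20000.0, 1000.0
d_mu = np.arange(0, 5001, 500)
d_sigma = np.arange(0, 2001, 250)
hd_map = np.array([[hellinger(mu1, s1, mu1 + dm, s1 + ds) for dm in d_mu]
                   for ds in d_sigma])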
This comparison is done for two network architectures: RFL14 and KRFL14, shown in Figure <ref> with additional parameters listed in Table <ref>. The key difference is the inclusion of K-Origins in KRFL14.
We consider two scenarios: a single target class on a noisy background and two target classes on a noisy background. These problems have two and three output classes, respectively, with the background considered a class. Noise parameter sweeps are performed for two object sizes: squares with side lengths randomized between 6 to 12 pixels (L<RFL) and 20 to 30 pixels (L>RFL). This results in a total of four cases: object detection with L<RFL and L>RFL, and the tracer problem with L<RFL and L>RFL.
Networks are trained for 10 epochs with a batch size of 3. Learning rates are set to 1E-4 for convolution layers and 100 for K-Origins layers. K-Origins initialization follows the same method as in Section <ref> and this time KRFL14 has fewer parameters than RFL14. There are 50 randomly placed squares for L<RFL and 25 for L>RFL. All numerical results are found in Appendix <ref>.
§.§.§ One Target Class: Object Detection
In this section, we segment a single target class (squares) from a background with an intensity mean of μ_0 = 20000 and noise with a standard deviation of σ_0 = 1000. We vary the target class mean and standard deviation, where Δμ = μ_1-μ_0 and Δσ = σ_1 - σ_0. For each mean and standard deviation, we train both RFL14 and KRFL14 and save the validation accuracies in a heatmap. The x-axis represents the change in mean (Δμ), and the y-axis represents the change in standard deviation (Δσ).
Figure <ref> shows results for L < RFL, and Figure <ref> shows results for L > RFL. In both figures, part (a) presents the heatmap with training results for each network. These accuracies can be compared to the HD found in Figure <ref>. Part (b) shows a simple example with an HD of 0.694, and two extreme cases with HDs of 0.176. The first example case has a different distribution than those discussed earlier in this work.
The validation accuracy heatmaps show that the network with K-Origins consistently outperforms the one without it. The network without K-Origins struggles most when Δσ = 0, indicating a pure mean shift. The results also suggest that having L > RFL is beneficial, but this is specific to the dataset used in this paper. Just as small, noisy squares are hard to detect, larger noisy squares become easier to detect with such controlled data.
KRFL14 also makes relatively good predictions when the HD is 0.176, which is an extremely challenging segmentation task for both machines and humans. This demonstrates the effectiveness of intensity quantification for tasks such as object detection.
After adding K-Origins, the accuracy heatmap is almost directly correlated to the class HDs. This is evident by comparing Figure <ref> to the KRFL14 accuracy plots in Figure <ref> and Figure <ref>. As the HD decreases, so does the accuracy, and vice versa. This correlation is not observed in the traditional network without K-Origins.
§.§.§ Two Target Classes: Tracer Segmentation
In this section, we segment two identically shaped target classes (both squares) from a background and differentiate these classes from each other, addressing the "Tracer Problem." This involves varying the intensity distributions of the two target classes to make them more or less similar.
The background has μ_0 = 16500 and noise with a standard deviation σ_0 = 900 to minimize interference with the target classes. The first target class has μ_1 = 20000 and σ_1 = 1000 , while the second target class varies based on Δμ = μ_2-μ_1 and Δσ = σ_2 - σ_1. Results for L<RFL are shown in Figure <ref>, and results for L>RFL are shown in Figure <ref>. In part (a), we present the heatmaps showing validation accuracy results for RFL14 and KRFL14, which can be compared to a trials HD using Figure <ref>. Part (b) provides a straightforward example followed by the two most challenging cases tested.
KRFL14 consistently outperforms RFL14 in this task, especially when the standard deviation remains constant while Δμ varies. The accuracy increase for L>RFL occurs for the same reason as mentioned before. This shows extremely promising results, with useful segmentation even at an HD of 0.176. As in Section <ref>, the accuracy plots for the network using K-Origins correlate well with the HD plot.
§ DISCUSSION
The most significant improvements from K-Origins occur when the primary difference between class intensity distributions is a mean shift. In reality, this scenario is common because classes often differ by colour or intensity means.
The RFL-related experiments suggest that a network's depth should be set such that the RFL is larger than all target object sizes (L<RFL). Beyond this point, the primary author hypothesizes that making the architecture deeper is less beneficial and less efficient than making it wider. K-Origins is an example of making it wider, as is increasing the number of filters in layers. This could possibly be presented as a guideline for how deep a neural network should be, and could make the design process more deterministic. There is, however, the possibility that these results stem from using such controlled data. This could be studied in the future.
A potential use case for data without such well-defined intensity distributions, such as general image data, is to use N equally spaced weights along the entire intensity or colour spectrum in each channel, dividing it into N+1 different regions for the network to leverage. This approach would make it easier, for example, to determine if a picture of a dog has white snow or green grass in the background. This would likely also help determine the exact colour of the dog. For this reason, K-Origins is likely useful for other classification problems, not just semantic segmentation. The downside, however, is that as N increases, the memory requirements grow significantly due to the number of image copies being created. There are likely ways to make this more efficient.
Additionally, it is unclear if K-Origins layers after the first impact classification results significantly. These weights likely need to be much smaller than those used in this paper and this could be explored in future studies. There is also the possibility of extending the application of K-Origins to unsupervised problems, perhaps by using a modified version of the simple colour network in Figure <ref>.
The experiments in this paper did not involve any hyperparameter tuning, which would likely improve results significantly, nor were the networks necessarily trained to steady state. Our goal was to demonstrate that even with minimal tuning, the ability to understand colour magnitudes is beneficial for predictions.
§ CONCLUSION
The experimental results from this study suggest that encoder-decoder networks struggle with classifications that require an understanding of colour or intensity magnitudes, as opposed to gradients alone. The custom layer K-Origins, which can be added to any network, was tested by incorporating it in modified U-Net architectures. By adding K-Origins and ensuring a sufficient RFL, there were significant accuracy improvements for the object detection and tracer segmentation problems. This approach allows for the development of smaller and more efficient networks.
These improvements are likely relevant to many fields, as object detection and tracer segmentation problems are common. Additionally, as new network architectures are being studied it would be valuable to test the impact of K-Origins on these emerging architectures, given its compatibility with any network.
§ CODE
The Python (TensorFlow) code used in this paper can be found in the GitHub repository associated with the primary author's thesis: <https://github.com/lewismmason/Thesis-Public>.
§ FIGURE 1
The networks used for Figure <ref> are RFL32 and KRFL32, deeper versions of RFL14 and KRFL14. The additional level is added the same way RFL8 is extended to RFL18. This depth satisfies RFL requirements, and still demonstrates that adding K-Origins is beneficial.
§ EXPERIMENTAL DATA
|